Are We Building AI Literacy or AI Dependence? - Alyssa Muhvic
#24

Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming.

Today I am joined by Alyssa Muhvic, a high school history teacher in Indiana who brings a rare combination to the table. Alyssa has spent the past several years navigating the shift from traditional classroom instruction to a world where AI is already shaping how students write, research, ask questions, and sometimes avoid the harder work of learning.

At the same time, Alyssa has been helping her district think through these issues at a systems level. In this episode, we talk about what it looks like to teach history when students are turning to AI for answers, why equity and privacy remain central concerns, how educators can model responsible use rather than simply prohibit it, and why the work ahead may be less about mastering every new tool than about teaching students how to think critically in a world saturated with them.

Let's dive in with Alyssa's earliest memory of her own use of these tools.

Alyssa: I think if I recall correctly, I was a senior when our school first went one-to-one in high school. That was the first time. We had Chromebooks using Haiku as our platform, and I don't really remember much about Haiku at all. I just know that's what it was named.

Because it was the first year, some teachers used them and some didn't. It was mostly just having a laptop for accessibility. When I went to college, we obviously all had laptops with the learning platform of Blackboard, which then switched to Canvas my junior year.

So those platforms have been used. I'd say Canvas is the platform I had the most experience with as a student. Blackboard was fine, but it was more like hyperlinks to PDFs, whereas Canvas was the first time there was an interactive ed tech tool and platform. Everything kind of became modified for Canvas, in terms of Edpuzzles, video blocking, and online quizzes; all of those things ran through the Canvas platform.

Priten: As you saw those transitions—both the one-to-one device and the use of a more robust LMS system—do you remember what it felt like as a student? Was it exciting? Was it just kind of an annoying thing you had to figure out how to use? Anything you can remember about what that experience was like?

Alyssa: I remember being frustrated going from Blackboard to Canvas mid-college career. We were learning a whole new thing, saving all our files, and all of that. I remember not being happy about that.

However, I don't ever remember having necessarily specific feelings toward it. I should say that a special quality of me is that I am dyslexic. I had academic supports throughout college. Even though a lot of things were digital or online, I very much needed paper copies of everything to be able to hear and see at the same time.

I felt like I wasn't as dependent or immersed in some of those LMS modules because I always had the book that they scanned in and made into a digital audio. I very much, even to this day, work on pen and paper in my own learning. So I don't really remember having strong feelings toward it either way, just because my style of learning was different.

Priten: Did having your peers use technology more and you use pen and paper cause any feelings, or did you just know that it worked best for you and not really care?

Alyssa: I think it just worked best for me, so I didn't really care. I spent a lot of my college career in the library or in collaborative learning spaces. If I was working in a group, I have fine digital literacy—I can do it. It just wasn't my preference.

I've always thrived in collaborative spaces where there's a lot more verbal processing and talking things out, with somebody taking minutes and notes, and then I can go home and do my own thing. I never felt judged or that it was awkward, or that it was moving at a different pace. It was just that my thought process had to go through different steps in order to get to the conclusions my group was reaching through quick digital aspects.

Priten: Now moving on to your first few years teaching versus post-ChatGPT—I'd love to hear what that transition was like. You were early in your teaching career when it came out. When it did come out, was it shocking? Exciting? Did you dread it, or were you excited to see how you might implement it? Let's think about it together.

Alyssa: Yeah, I think when it first came out I was heavily against it, mostly because of the data and privacy issue. Especially working in education, where everything is public record and you have to abide by so many different legal regulations about what information can and can't be shared.

Also, my own personal information—especially being a young female educator in the field—I have very strict boundaries between my personal and professional life. Using AI with personal information, I just did not trust it and felt like there wasn't enough testing on it.

I will say though, it wasn't like I was against using ChatGPT. I just didn't know enough about it and didn't have enough information to figure out what it was. However, I was always under the impression that we need to figure this out so we can be ahead of it.

I think that's one of the biggest things when talking about education as a whole. Education is supposed to be navigating the way and leading it, developing global citizenship and future educators and careers, and giving the skills needed. A lot of that, especially today, involves digital literacy and how to use these platforms.

There was this urgency because even though education's supposed to lead, we're often the last to adapt. There hasn't been a major reform since Brown v. Board of Education in the 1950s. We're not even talking about curriculum development. It's been over a hundred years since we've really looked at how we're trying to lead it.

I really wanted to figure out how are we gonna use this in our classrooms? It's not gonna go away regardless of how you personally feel about it. How do we make sure that our kids don't lose the upper hand academically as all these other schools are advancing with it?

There was just this need. I really wanted immediate professional development. I really wanted studies on it. I was also talking with a lot of veteran teachers about the fact that Google was created, Chrome was created. That was a major change. Now it's daily use in our teaching. It's not something that we get to say yes or no to. It's something that we're going to have to evolve with.

So that was kind of my initial thought process on AI.

Priten: And has that shifted now, about three years later? Do you feel like we're any closer to having a better idea of what data privacy looks like for our students? What role it plays within the education system? Are we moving along with the times? Are we catching up at all?

Alyssa: Yes and no.

When you look at the educational world versus corporate America, corporate America is doing a lot more with AI than the liberal arts careers that education falls under. I also think that's easier for the corporate world because so much of it lands in the realm of STEM, where this technology is rooted.

When I think about how educators are using AI, that's different than training students how to use AI. An educator's number one tool for creating is probably something like Canva, with AI tools to create content and worksheets, PowerPoints, and interactives. But for a majority of our students, they're not going into education. They're going into the corporate world.

I don't know anything about Adobe and AI. Are we closer to using AI as educators in our profession? Yes. Are we closer to using AI in a way that prepares kids? I think we're still figuring that out because that world's completely different. How liberal arts and STEM use AI is completely different.

Priten: Yeah, that's a good point about thinking through that it's not just like one big ball of AI that we all need to figure out how to use, but that there are so many different variations and tools within it. The ones that are relevant for educators and for the learning context are different than using a tool made for students to learn math or foreign language that's AI-powered. That's still very different than preparing them to use the tool they'll use in their workplace.

Tell me a little bit about how you view that tradeoff. There's other stuff that you think is important to teach. You're a history major, you teach history, and that's very different than teaching Adobe and AI. So what is your vision of how we solve that problem where we make sure students are still learning those important concepts but also preparing them for their future?

When you think about the content areas you teach and that you're an expert in—predominantly history—and you think about the role of teaching students digital literacy and AI literacy in particular for career readiness, how do you balance those two? Because it sounds like you can find ways to incorporate AI into your workflow and you talked about using it for primary sources, and I want to hear more about that in a second. But balancing that with preparing them to use AI themselves for whatever career they choose, do you see that as something you need to deal with, or do you think that's a larger problem for the school system?

Alyssa: I think both. It is my responsibility to do that within my classroom. However, I think I'm in the smaller proportion of AI acceptance. When you look at teachers, the percentage with over 20 years of experience is about as large as the percentage with five years or fewer, and they have very different perspectives on AI.

I think it's on the system at large to ensure that educators are equipped, trained, and prepared to do that in their classroom. There's a larger need for uniform policy expectations and standards around digital literacy as a whole. Many states are changing diplomas, laws, regulations, and state standards of curriculum, but there's not a lot of standards around digital literacy and specifically the use of AI.

When we're thinking about the education system as a whole—and I'm not just talking about public education, but also colleges and universities—they need to make their expectations of incoming freshmen clearer so that we can align our work with what they're doing. There is a wide need for policy development, professional development, and education on it.

Then there's also my responsibility as a classroom teacher. I come in contact with 150 kids a year, but those are still my kids that I'm responsible for. The way I see it is I maybe don't know how to use STEM, Project Lead the Way, and all of the fancy things coming out in my history class. But I can model. I can open the door to conversations. I can talk about the ethics. I can explain how I utilize it in an equitable and fair way versus just copying and pasting a prompt. I can talk through how I'm creating things.

I think that's where we have a powerful tool. A lot of times kids are scared of using AI because the only way they know how to use ChatGPT is to copy and paste the question into it. They also need more avenues and exposure points for how it can be incorporated in many different ways and how to use it responsibly. If we don't have those modeling and thought process questions, we're not going to get the ball rolling.

Priten: And what does that look like in terms of how you make time for it in your classroom? There's so much we expect teachers to be on top of. This just feels like yet another thing we're expecting teachers to figure out and navigate with their students. But you're right that it's the need of the time. This will make a difference between what careers students are able to do, what socioeconomic status they end up in, and their readiness to embrace this technology will be a major differentiator.

So what does this look like in your classroom? I know you talked a little bit about how you use it directly with students. What does this look like for you?

Alyssa: Sometimes it's about just explaining to them what AI tool they're being exposed to and how I came about it. For instance, when I was gone for three days at a conference, I also had a student teacher at that time. I was worried that my kids needed support and I wasn't going to be accessible for 72 hours. So what was I doing to fill in that gap?

I went to Magic School and created my own chatbox from the perspective of John Lewis because we were reading his March books in our civil rights unit. I uploaded all of my previous notes, presentations, teaching guides, personal historical record notes for when I'm recapping, and what our assignments were. I typed up summaries about each of the books and put those in there with a biography about John Lewis. So that personalized chatbox had access only to my content. I put that chatbox into our Canvas module.

While I'm gone, or honestly even if you need help studying anything, ask this chatbox questions and it will generate all of the information coming from me. When I sent out my message to kids, obviously I'm still not there, but I explained the entire process. I said, "Hey guys, I've used Magic School AI. This is how I used it. These are the records it has access to. So when you're asking a question, this is what it's pulling from."

I went over expectations and guidelines of how to use it and then explained that on the teacher backend, it will show me how long you've interacted with it, what specific questions you've asked, and what the responses are. So I'll go through to make sure that it's still giving verified information. Some students really like that and some students are like, no, I'm still not touching AI. But at least it's there.

In terms of making things more engaging on my end, I'd consider myself a creative person, but also creative in history and thinking about how to gamify things. When you can't necessarily change the fact that at the end of the day we still have to read the Constitution, it's like two-fold. Yeah, I wanna make it fun, but there are things we just have to do.

That's where ChatGPT has been really helpful as a teacher. With the Constitution, I used it as a thought partner: I wanted to turn this into the riddles of a scavenger hunt. We uploaded the Constitution into ChatGPT, put our Indiana standards into that box to make sure we weren't losing the bottom line of what we have to do, and asked it to make 12 different riddles. Kids would have to answer a riddle, find it in the Constitution, highlight and annotate it, and bring that piece of annotation up to me. The first team through all 12 riddles wins.

I could easily do that as a teacher if I wanted to, but it would take me a lot more time. With ChatGPT, I hit enter and it's done. So it's not necessarily changing the work. I think a lot of times there's this perception that it's making educators lazy, but really what it did was give me more time to focus on kids and content rather than having to create the written aspect of the activity.

I'm very open with kids about the fact that this comes from ChatGPT. This is the use of it.

Priten: You gave me a lot of follow-up questions, but let's start with that last thing you just said. You told the students you used ChatGPT. Tell me a little bit about that decision, because that's a question we get from teachers often: do we tell the students that we use ChatGPT? I agree with your gut decision there. I'm curious what your reasoning is behind that so that other folks can also learn from that.

Alyssa: Yeah, from my personal lens, one of my favorite things that shifted my whole perspective was a chart—I'm not sure if you've ever been exposed to it—that runs from 100 percent individual human thought to ChatGPT, from a study with a ten-step scale showing where things fall when you're trying to figure out academic honesty and the use of AI versus human thought.

It was a challenge to me: what happens if you replace AI in that chart with parent help? My parents helped me write this essay. My parent revised my essay and gave me feedback. That was a big challenge of shifting the perspective and mindset of where do you draw that line? And if we switch AI with a parental or tutor perspective, would we view it the same?

That's when I really realized there's a huge equity issue around AI. Because if you take the robot out of it and put a human into it, then I start thinking about how many students go home and don't have access to a parent after school to ask those questions. How many students don't have the financial resources to provide a tutor for their student?

We're using ChatGPT and these ed tech tools as a replacement, sometimes, for human support. The world is changing. This is the largest gap in financial accessibility for families that we've seen in a really long time.

We have to think about how our students are viewing these tools to compensate for different circumstances they're facing. That's when I realized I need to do a better job of articulating how to use it, why to use it. There are going to be students where this is their only source to ask questions and their only platform to seek help. If we don't teach them how to use it responsibly, they're gonna get themselves in trouble for plagiarism or illicit use or not knowing how to cite it and source it to make sure that it's reliable information.

One of my core things that I always start every year with is: it's not my job to teach you what to think, but how to think. If that's what I stand on as a teacher—how to think—then how to utilize digital resources in a responsible manner also has to do with how to think.

That's where my thought process was in why I started having conversations and being honest and talking about it. As I got more informed and educated on the topic of AI, I realized that we're in one of the wealthiest counties—I think the wealthiest county of the state. Yet we still have quite a high population of free and reduced lunch. Kids with paid access to ChatGPT get output that's more reliable by over 20 percent than our kids using just the free platform of ChatGPT.

So now we're not only thinking about accessibility, we're also thinking about reliability in those technologies. It kind of became clear in my mind. I teach primary sources, secondary sources, source credibility—how we vet for point of view, author, all of those things. Those skills are the exact same skills that have to go into ChatGPT.

If we don't offer them opportunities to find the flaws in the thinking of how those search engines are outputting, then we're not doing a service to our kids. We still are teaching sourcing. We still are teaching accessibility and research, but we have to view ChatGPT as a search engine that we need to figure out how to navigate and use.

Priten: Wow, there was so much there that I think is really useful, even for me to rethink like how we frame this for students, but also for educators in terms of getting them to think similarly about the equity issues in particular that come up. I think oftentimes there's this idea that, oh, this is gonna solve all of our equity concerns. But I think the nuance is like, what type of plan do you have access to? Is it a school-provided plan with a lot of guardrails and teacher monitoring, or do you have the $200 a month plan that no one else knows how you're using it? There's a whole range of both power and accountability that comes with that entire spectrum of different AI tools that students may or may not have access to.

I'd love to hear about your experience being on your school's committee. You mentioned that you were on the AI task force or AI policy committee. You clearly have thought a lot about this. I'd love to hear what that looks like: how is your perspective being taken into consideration? Who else is on the committee with you? I want to know a little bit more about the decision-making process there.

Alyssa: So our committee is district-wide, with elementary, middle school, and high school teachers on it from multiple different subjects. Some of them are ML leaders, some of them are instructional coaches. We have district level assistant superintendent, director of technology, and director of instruction and learning. So we have lots of different voices.

I know that our committee was founded and funded through a grant—I think it's the Eli Lilly one, don't quote me on that—but the goal was to try to get Eric Kurtz to come out. There wasn't enough in the budget for him to come out, but the goal was to get a diverse group of various positions in the district together to pilot some of our AI programs.

Our school decided to go with Magic School, so all of our students and teachers have access to that. We were kind of in the lead of testing out what those tools look like, reporting back how we were using it, what we were finding useful, and which ones we weren't finding useful.

We did a book study on AI in which we looked at engagement, practice, and implementation policy. We all talked through different chapters, and especially when it came to policy terms, we did a lot of looking at what is the role of a district policy versus a building policy versus academic policy. That's all still very much in the works.

Year one was spent a lot on just exposing every single person on the committee. Everyone went to a different conference, whether it was in Ohio, Michigan, or Indiana. We all shared back and reported on it. We created tools for the districts to look at. But it was really about exposure: how we're using it, what is our feedback and thoughts about it, and where do we want to go?

Over summer, I unfortunately couldn't make it to our summer committee meeting, but I know that was where they sent out all of our current policy and asked, based on the work in this first year, what changes do we need to make? So it's kind of this ever-flowing, living thing of just talking about it, collaborating about it, and thinking about how this fits into our mission as a school.

For us, our mission is to be empowered, engaged, and inspired. When we're thinking about inspiring, we're thinking about how are we innovating the next step and challenging the current norms within education to make sure that our students are leaving here with their best possible shot at a future.

In terms of that innovation and empowering and inspiring in that aspect, we understand as a district that there is a huge need to embrace this. If we are going to embrace it, we want to be leaders in that field, but we want to be also very careful and apprehensive of what we are putting out.

Priten: I'd love to hear more about what your concerns are. I know you mentioned equity concerns and data privacy, but what else is causing some apprehension? Why not just go full force?

Alyssa: Because I was a student one time, and if I had the ability to copy and paste my homework into a search engine that would give me the responses, and all I have to do is change the format and a couple of words, and the AI detector isn't even gonna catch it—why wouldn't there be a sense of concern?

Yeah, I'm not stupid. I would've done it too if I was a high schooler, especially with how busy these kids are—working jobs, including church, extracurriculars, wanting to be involved as a student, AP classes, the biggest push for dual credit courses. There's so many things going on. I totally understand.

I'm gonna be honest: I think the largest issue with academic dishonesty in terms of AI has to do with our AP, high level learner kids, not our low-level learners. I think that's a huge misconception in the educational field. I deal with more panic attacks, tears, and pressure from our AP level and honor students because they are putting so much pressure on themselves internally. They don't want to admit that they're not understanding. They don't want to ever get anything less than an A, because an A is tied a lot of times to their self-worth and self-esteem.

Those are the kids that are utilizing this in a dishonest way because they are trying so hard to be perfect. Whereas the kids who fall kind of in our C average student range—they're gonna struggle regardless of whether they have AI or not. They're not thinking, how do I get this done? They're just like, well, if I don't get it, I'm not going to do it. We're trying to teach academic stamina, determination, accountability, and time management in that aspect.

When I think about our concern, it's not that everybody's cheating. It's that this has become a tool to save time, expand ourselves even more, and it's the easiest way to cheat without getting busted because there aren't as many tools that are there. That's where academic dishonesty comes in.

One of my biggest concerns is that I always tell kids I'm not actively spending my time trying to find every single instance of illegal AI use and cheating. In fact, I don't care enough about it to be going through with a fine-tooth comb. That's not where I wanna spend my time and energy.

My number one concern is I bust kids using AI almost every week because they're not even reading through it. They are just copying and pasting and submitting it. I'll have situations where my concern is not that you used AI, but that you're submitting work as your own without ever thinking that there's a need to read through it. You are not having this opportunity to actually learn and engage in the curriculum. You are not learning life skills of advocacy—to ask for an extension because you're stressed and you need more time, or how to ask for help. You're seeking this alternative thing.

There are so many more things I'm concerned about than the AI use itself. My concern is why we're using it and how we're blindly turning it in, and why we don't think we still have to check our sources. I know Google has done a great job of reforming Gemini as their AI search engine to have those links attached. But I worry if you're using ChatGPT, especially if you're using it in a private window because you don't want the school to try to track it—which they still can't—but kids think there's a backdoor to everything. It's not giving you those sources and you're not actually reading through it.

My fear becomes: how many kids are getting a grade for work and content that they actually haven't learned? How are we able to measure that learning gap that might be being formed due to that? How do we ensure that they're not learning fake history? Not to say it's actively putting out incorrect results, but last time I checked, regular ChatGPT is only about 68 percent accurate. If you think about that, yeah, it's very concerning. I know how many times kids use ChatGPT daily.

If only 7 out of 10 questions you're asking it are going to be reliable information, how much opinion and history and information is being absorbed and internalized that actually is not real at all?

So those are the concerns that are happening.

Priten: I'd love to just end with this: when you have conversations with students about this, how receptive are they? I know earlier you mentioned that some students were hesitant to use your John Lewis chatbot. I'm curious what the general sentiment is from students both about the productive uses of AI, but also when you're having conversations about academic honesty. How are those conversations going?

Alyssa: Yeah, for context on this question, I have a firm belief that when kids know how to do better, they will do better. When a kid is making a choice, even with digital literacy or digital fluency and its application, they're doing the best that they know how. If we want them to do better, everything has to be used as an opportunity to teach them.

So when I look at how kids are using it today, honestly the number one thing I see is them using their Snapchat AI bot. They will quickly slide that over and ask it so many questions. If you wanna know where kids are willing to put any and all of their information without a second thought, that is the number one embracing of AI there is—that Snapchat AI friend bot.

A lot of times when I see that, we have conversations like: "What are you asking it?" "Well, I didn't understand this question." "Okay, did you ask me for clarification first?" "Well, no, you were busy helping someone else." "Okay, that's fair. However, before you use the internet, you have to use your resources first." "Well, why? It gave me the answer." And then I'll say, "All right, it gave you this answer. Can you define what this word is referencing? Can you define what this event is? Did we cover this content in class? Will it be applicable to a test?"

When asking those questions and having them think through it, that's when their apprehension kind of starts kicking in. They're like, wait a second. I had a kid—and there were probably five kids who used ChatGPT—but because of however I had formatted the question, for some reason ChatGPT was shooting out an answer about the Rwanda genocide.

So I had about five kids turn in this paragraph about the Rwanda genocide. This was a question about the Iranian hostage crisis, how the presidents responded, and whether Reagan really should get credit for it. Nothing to do with the Rwanda genocide. Okay?

In situations like that, kids kind of have this mortifying moment of like, oh, maybe this isn't as reliable. But it's not until they have an experience where they're being called out about it that they're like, wait, what?

I always start those conversations with like, "Hey, Jimmy, tell me about the Rwanda genocide." Usually they stare at me and they're like, I don't know. I think we learned about that in seventh grade. I'm like, "Oh, so you don't really know much about that, right? Because your whole assignment, you wrote about the Rwanda genocide."

And then we can have conversations about like, yes, this is academic dishonesty. There are going to be consequences for you turning this in. But now there's also different conversations about like, oh, so I can't use ChatGPT like that.

I just think that if we were more proactive with AI, I could build assignments around moments like that. When I was teaching it, I didn't know why that question triggered the Rwanda genocide in the search engine. But I could take that at the beginning of the year, ask everybody to open up ChatGPT, put that question in, and say: raise your hand if you got actual information about the Iranian hostage crisis versus how many of you got the Rwanda genocide. And then talk about that.

Have those things, because I think kids are willing to embrace it at any level until they get in trouble or caught, or their bubble about it is broken. Unless we start giving them situations to burst their AI bubble, I don't think they really have concerns or apprehensions about it.

Priten: Wow, I love the idea of bursting the AI bubble. That seems like what all of our missions ought to be. How do you keep up with all of this?

I think one of the concerns we get from teachers is they share some of your big picture concerns. They share concerns about equity. They share concerns about how do we get students to understand both the availability of AI tools but also what academic dishonesty looks like. But they also wanna make sure their students are prepared for the future and whatever careers they want to pursue.

But there's just so much information. It's constantly changing. I can barely keep up with what it looked like three years ago, let alone what it's going to look like next year. Do you find that you have external support systems, or is there something you're doing on your own? Where is that coming from? Because it's fascinating to me to hear how much you've thought about it and also how much you've worked on it.

Alyssa: Okay, so I don't have very many external supports, but that's because, as I said, our district is very much in that early beginning stage. I think the fact that we even have an AI policy means that we're beyond the beginning stages.

But this is something I put together myself. It started when I went to Otech, because there's so much information out there at one time. As I've had experiences with these tools, I've built this hub; I needed something to organize it all. So I kind of work backwards in my use of AI. I create my content the way I would create it and continue to create it that way.

But then as I am creating my content or changing it or adapting it, I'll be like, okay, this needs more visuals. Then I can just come here and click on my visual. I use Napkin all the time. I don't know if you've ever used it, but it's my favorite AI tool, not even gonna lie.

I think that's where networking comes in. Every time I see a presenter, I follow them on Twitter just to keep up with what's going on, articles and things like that. That's how I manage it, because you can't keep up with everything. You can't be an expert in everything. There's not enough time in the day to do your job and do that too.

I was looking at a professional development catalog and at somebody else's hub to see how they were working and whether there's anything I'm missing. I have tabs all over the place about AI use. There are so many things that I open and keep up on my laptop, thinking, I wanna come back to this. And I just never do.

I think at some point, educators have to give themselves grace in understanding that you're not meant to be an expert in everything. I'm never gonna read every single World War II book that is out there, but I know which ones I'm going to lean on when I teach it.

I think that's what educators need. They need to know which tools work for them and how to lean on them in a way that protects student privacy but also enhances our educational outcomes and learning goals. I think teachers really need exposure to a plethora of them so they can say, wait, this one works really well for me, because what works well for another teacher is not going to work well for me, and what works well for me is not going to work well for somebody I'm mentoring.

I think schools just need to do a better job of creating points of exposure and points of exploration. For instance, there was somebody who had an ed tech March Madness template in one of the communities that I'm a part of. I took that and made it our own for our school district utilizing the specific partnerships we have within the district.

We did it in the month of March as a March Madness three-week challenge. Every single week there was a different winner that either got a free AI tool membership of their choice or administrative coverage for a second prep for the week.

Because when we're thinking about how to best prepare educators for the sheer amount of information and tools about to be thrown at them—if you think about how many have already been thrown at them in the last three years, I can't even imagine what it looks like in the next three.

When we're thinking about that, we are asking educators to become—they're always active learners, but we are asking them to become a different level of active learner that a lot of them haven't had to be for a long time. Also, if you're looking specifically at Indiana, you're asking them to take on more as an active learner while they are also trying to revamp our core diploma. We're asking them at a higher level to go back to school for a master's to be able to teach within the pipeline of dual credit and core credit.

We're also changing our summer school targeting because now you have to pass ILEARN as a third grader. There are so many things that are actively changing in education as a whole. To add in this digital literacy active learner piece is a crazy unforeseen feat.

I would say it's really on districts and administrators to do two things for educators at this time in order for them to be active learners: protect their time and protect their income. In the end, people really only care about money and time. If schools cannot give more money—and school funding is publicly regulated and limited—then we really have to protect educators' time.

If we can provide them more time in the day to be active learners, so they're not having to take time out of their personal lives, I think that's when you're going to have a magic moment in education where we all have protected spaces to learn so that we can then implement what we've learned. I think that's a long time in the making, but I definitely think it's possible.

Priten: I really, really appreciate your time today.

Alyssa: Of course, ditto. Good luck with everything. Don't hesitate to reach out if you need anything else.

Priten: Thank you.

Thank you to Alyssa. Alyssa does not treat AI as a separate issue. Rather, she treats it as something that is already reshaping the conditions of learning, often in a way schools have not fully caught up to. What she makes clear is that the real challenge is whether students are learning how to use these tools with discipline, honesty, and critical judgment.

Keep listening as we continue exploring the ethics and realities of education technology. And for more on how educators can lead during this transition, check out my book Ethical Ed Tech at ethicaledtech.org.

Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also, share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org.

Until next time, keep making space for the questions that matter.