[00:00:05] Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming. Artificial intelligence is already reshaping how children encounter information. At the center of that conversation: what does age-appropriate AI in education actually look like? Today I'm joined by Megan Barnes. Megan is a PhD student at the University of North Texas, a longtime K-12 librarian, and a school leader working at the intersection of education, technology, and child development. In this episode, we talk about what young children can actually understand about AI, why the line between real and synthetic interaction matters, and how educators can introduce new tools without losing sight of curiosity while promoting literacy
[00:01:01] and developmental appropriateness. Let's begin.
Megan: All right, so my name is Megan Barnes and I am a PhD student at the University of North Texas studying learning technologies. And I'm also a librarian with 14 years of experience in K-12 librarianship across the country. I'm currently an active practitioner in Dallas, Texas, where I hold dual roles as the director of my library as well as the director of educational technology for early childhood through fourth grade.
Priten: So lots for us to talk about. I would love to start with your research. Tell us a little bit about what you're studying, what you're working towards. What are the questions you're asking?
Megan: So really the driving question that started my whole PhD process was this: I'd get asked regularly, what is age-appropriate educational technology? Like what does it look like for technology to be educational at
[00:02:02] a personal and academic level? How do we really assess these things, and how do we make it so that it's easy to assess as new technologies come out? I started this process before ChatGPT had landed on the scene in a big way; I went from interviewing to applying right as ChatGPT arrived. So I didn't start out being an AI researcher. It's really just asking, as new things come out, how do we know what's important in children's development so we can evaluate anything that comes our way? And then we landed here. So that's a big part of it.
Priten: And so what does that look like now?
Megan: So right now what that looks like is I've partnered with some people. I've got a project where I'm actually researching how decision-making is being taught in K-6 classrooms and asking, are they using frameworks? Is it a general pro-and-con approach? Is it different in different contexts?
[00:03:00] And that research is ongoing at the moment. I dug into some of the existing ed tech and some of the theory out there that said, this is how we analyze it. This is how you should think about it. I found that a lot of the frameworks were either highly academic and really good for writing meaty research papers, or they were more lesson plan focused. They focus on how do we do this lesson, maybe a unit plan, but not actually analyzing the technology itself.
Now I will give a big caveat to that, because since then I have actually found a framework called the Triple E Framework that does a little bit of that. So what I'm doing right now is using the Triple E Framework in my professional life while digging into some of the neuroscience, particularly cognitive and affective neuroscience, to really ask, what's the best way to look at this with what we know about developmentally appropriate technology?
[00:04:01] Priten: I think that intersection between figuring out the... So there's an author that I've been following since college, Harry Brighouse, who's written a public facing book about ethical decision making for educators. And one of his core arguments is that education requires a particular synergy between different disciplines. The reality of the data and the psychology and the sociology of your environments shape so much of the decision making that it really needs to be in conversation with those disciplines in order to come up with the right decisions. And so neuroscience and cognitive science are obviously super important when you're thinking about the impact on development. I'm curious now, how has AI changed the questions you've been asking? Do you foresee that your research will focus more broadly on education technology, or are you seeing a specialization in any way into AI?
Megan: You caught me. So what happens is as you go through this process of getting
[00:05:00] a PhD, you explore new things. I remember when I was getting my master's degree, I got a piece of advice: if you leave your PhD program with straight A's, you didn't do it right, so you should always be pushing yourself. In my case, what that meant was going outside my original focus, and I've actually added curriculum design to my repertoire. As I've reflected, I've gone, oh, I always like making things. I like building things at my house. I don't love the do-it-yourself furniture because of the whole expense thing; I do it because I actually like building my own furniture, and that's an easy way to do it. I've been able to take that idea and bring it into my research. So I am working on a project with Edge AI. Instead of being cloud-based, chatbot-based AI, it's AI that lives on individual devices so that it's running one small concept.
[00:06:01] You have Edge AI in your self-driving cars and a lot of other machinery. I'm building curriculum with a very large design team. What I've done is really dug into the idea that it's not about the tool, it's always about the learning and the creativity, right? AI is the thing we're exploring and doing that right now. But you have to think about not only personal development, but you have to get the skill development for K through 12, plus college, and ideally in elementary school you're building the foundations for life. How do I make sure that what I'm doing is setting them up for life? Not just specific skills and specific jobs. AI is going to be a part of that, but really in elementary school it's more about what's the information literacy, how does AI work? How do we interact with it? Where am I seeing it?
[00:07:00] Where am I not seeing it, even though it's in use? So like, I film our morning announcements. I've turned it into a video production and we use a platform that is well known. I don't want to name drop, but a platform that's really well known that has a background remover tool. And I tell them, this is AI right here. And they're like, oh. So it's not that I'm becoming an AI expert, it's that it's just part of what we do. As a librarian and information professional, knowing who's generating information, who is validating information and who's creating information, is a really big part of that skillset across all ages. So AI is just another component in that toolbox.
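A rough sketch of the cloud-versus-edge distinction Megan draws above: the same small task handled by sending data to a remote server versus running a model file locally on the device. This is an illustrative Python sketch, not any product she names; the URL, model path, and label handling are placeholders, and the on-device half assumes the real tflite_runtime package is installed.

```python
# Illustrative sketch only: contrasts cloud inference (data leaves the device)
# with edge inference (a small model runs entirely on the device).
import json
import urllib.request

def classify_in_cloud(image_bytes: bytes) -> str:
    """Cloud pattern: the data leaves the device; a remote server runs the model."""
    request = urllib.request.Request(
        "https://example.com/classify",  # placeholder endpoint, not a real service
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["label"]

def classify_on_device(image_array):
    """Edge pattern: a small model ships with the device and inference stays local."""
    from tflite_runtime.interpreter import Interpreter  # lightweight on-device runtime
    interpreter = Interpreter(model_path="small_classifier.tflite")  # placeholder model file
    interpreter.allocate_tensors()
    inputs = interpreter.get_input_details()
    outputs = interpreter.get_output_details()
    interpreter.set_tensor(inputs[0]["index"], image_array)
    interpreter.invoke()
    return interpreter.get_tensor(outputs[0]["index"])  # raw scores; nothing sent over a network
```

The practical point Megan is making maps to the second pattern: nothing the child does has to leave the device, and the model only handles the one narrow task it was built for.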
Priten: That makes sense in terms of seeing it as one element among many of the resources available for teaching. I'm curious about what the unique threats from AI are and how that's
[00:08:00] shaping what you all teach, especially with the library science component. Because I've seen a lot of conversations about K-5 centered on misinformation. The most useful thing we can teach at that age level isn't necessarily here's how you prompt it so that you can generate something. It's how do we get you to understand concepts of truth, figure out what's real, and validate information. And as the technology gets stronger, I'm getting more and more scared about our ability to teach the skill of discerning truth from falsehood. So I'm curious how you've been approaching it.
Megan: So I spent a lot of time over the summer thinking about this idea of AI in the hands of elementary school children. It really came down to the fact that until they really have a solid concept of what's real and not real—not even true and false, just real and not real—it's hard to hand them something that appears real and let them
[00:09:03] interact with it as if it's real. So I think about the chatbots that are designed to mimic characters or famous people, and I'm like, it's so tempting because it's so humanizing, right? That connection is part of how we are driven as people. But if you are eight and you don't understand that's not really Anne of Green Gables, it's a very hard concept to pull back away from. So for me, part of it is how do we help them understand that even though it seems very real, if you are interacting with the chatbot, it's just guessing. I used AI to build a game that reinforces the idea that what AI does is make predictions. I turned it into a quasi-hangman game where they had to guess letter by letter, in the correct order, what was supposed to be said. Each message has to do with how AI works because understanding
[00:10:01] that it is algorithms, it is guessing, and it is math—that makes a big difference so that you don't go, oh, it's a person talking to me, right? We've seen a lot of conversation about that blurring of reality and not reality over the past year.
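A minimal sketch of the "it's just guessing" idea Megan describes: a tiny character-level model that counts which letter tends to follow which, then predicts the next letter from those counts. This is an illustrative Python toy, assuming nothing beyond the standard library; it is not the actual classroom game she built.

```python
# Toy next-letter predictor: the "AI" here is literally counting and math,
# which is the point Megan is making to her students.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each character follows each other character."""
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    return counts

def guess_next(counts, current_char):
    """Return the statistically most likely next character, or None if unseen."""
    if current_char not in counts:
        return None
    return counts[current_char].most_common(1)[0][0]

if __name__ == "__main__":
    sample = "the computer is guessing the next letter using math and counting"
    model = train_bigrams(sample)
    # Reveal guesses one letter at a time, hangman-style.
    for ch in "the ":
        print(f"after '{ch}' the model guesses '{guess_next(model, ch)}'")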
Priten: And especially when you think—sorry,
Megan: No, I was like, I don't think I answered your question.
Priten: No, no, that was perfect. When you think about the role that these companies are trying to play in our classrooms, and the decision, obviously, to put out products that create this human-like quality, to me that seems like a good example of behavioral engineering and the misapplication of what neuroscience tells us about how our brains form connections. But what else are you thinking about? Because that's not something that I've spent a lot of time diving into.
Megan: Okay, I'm going to start with saying I am not sure it is a misapplication. I think it's on purpose. We live in an attention economy right now.
[00:11:02] I heard a phrase just yesterday: now it's going to be a relational economy. Who are you building the best relationship with? Which model do you have the best relationship with? I think part of it is this idea that started when we took away pages in Google. Your Google results used to be page one, page two, page three; then they took the pages away so you just keep scrolling down, to keep you on the platform. We see this across many platforms. I think what we're seeing is that concept applied to a computing technology that is conversation-based. It's really just about keeping you on the platform. If you've got the history, if you've got the longevity, if you've got the skill and experience, by the time you're 18, 19 years old, you potentially have four or five years of history with a product at the launch of your adulthood, right?
[00:12:02] Aren't you more likely to use a product that you have an established understanding of? I'm trying to picture the world five years from now. What does it mean to have five years of chat history? I mean, my Gmail's almost 20 years old, so imagine that, but it's all of how you processed everything. So I think part of it is just trying to create that engagement so that you stay on the platform. As far as the really big players, there's also the idea that now they're going to use your chat history for advertising. So that conversation you were having with that synthetic entity is now also driving what you're seeing on platforms to get you to buy more, right? I think that engagement that drives consumerism is going to become
[00:13:04] more of an issue very quickly. I don't like the idea that what I'm using for teaching is being used to sell something to me or to my students—not just me, to people in general.
Priten: Yeah, the ads in particular do scare me a little bit, and it's not surprising. We all knew it was coming. This is not exactly earth-shattering, but I think seeing some of the examples put out shows just how insidious this might end up being in practice. When I think about even adults and college-age students that I speak to, they approach ChatGPT or Claude or their pick of favorite AI tool with this level of deference. They see it as an epistemic authority, right? The number of times I've heard a college student do this: they never refer to ChatGPT, it's just chat. And it's like, oh, chat said, right? I argued with chat.
[00:14:01] That is a little frightening. I've heard adults say, oh, I looked up this information and chat explained it this way. This is where I think, again, folks saw this coming in terms of moving away from a list of resources that Google provided to a narrative format from someone who's supposedly talking to you and summarizing it, and how that changes how you process that information and process real versus fake. If we can't get adults and college students to slow down and say, oh, wait a second, this is not someone who knows something (there's air quotes here for the listeners), what happens when you introduce this to a kindergartner who's still learning, for whom the lines are blurred even outside of the technology, right? Between their imaginary friends and their fictional books and how all that's processed. What happens when you introduce tech that's not only intentionally doing that but designed to perpetuate it?
Megan: Okay, so this goes back to that question of what
[00:15:00] is age appropriate, right? I don't think engaging with chatbot-based AI is appropriate at that age. And this is my personal opinion, this is my research opinion. I always want to be really careful when I say things like that. First of all, the more layers of anthropomorphizing technology we give it—so like I'm going to take the kindergartner as the example because that's what you used most. Kindergartners can't read or write, so how are they going to be engaging with that chatbot? Their voice? How is the chatbot going to be engaging with them back? A synthesized voice. To make it more engaging for a kid, platforms might—I think I've seen this—have animated characters. We have gone from something printed line by line by what is clearly, clearly a computer interface to a conversation similar to what we're having. That is blurry beyond blurry. I did not really understand how compelling chatbot AI could be until the first time I tried using voice because I was busy. Realizing what it sounds like, how it adds human inflection and pause—the ones I add to try to humanify it—it clicked for me how some people could go, it's real. I think the first step is knowing that at that age, if you're going to use it, it needs to be adult-controlled and it needs to be a side-by-side conversation. That's the same as the guidance on media for kids that age from the American Academy of Pediatrics, right? It's co-viewing, it's limited.
[00:17:01] So if your child is wanting to interact with the chatbot, it should be in the control of the adult so that you can talk to it and reinforce this idea that it's a computer, it's making its best guesses. Some of these guesses are good, some of them are not good. But we need to keep that in our heads as we work with it. I think it's going to be really interesting as these models get more sophisticated. Just thinking about what it was like four years ago is mind-boggling. The question that's already starting to crop up, to your point about true versus false, is synthetic versus organic. You and I were just discussing the idea of writing and using or not using AI in the writing and revision process. It gets blurry, right? At a certain point, if you let AI take over so much of the writing
[00:18:02] process, where's that line? I know the popular thing is 80/20, human in the loop. You let AI do 80% of the work and then you do the last 20%. I was like, is that enough to make it truly yours? It's 80% computer-generated. But that's a slippery slope. Then we can't say who the authority is. When you've used a chatbot as a search engine, as a content creation engine, we don't know who's writing, who's creating, where these ideas come from, right? It's a black box. There is one platform that in theory you can actually trace back to figure out where it's pulling from, its actual training data, why it picked that word. I have not had a chance to dig into that one yet. It's very interesting. I just remembered you used the word epistemic and I
[00:19:03] actually did have a class where we were investigating the epistemic stances of different AI models. We had almost 25 of us working on different teams, where we looked at Claude, Gemini, ChatGPT, Grok. We went through a list of just philosophical stances. Like, how do we actually learn? What is information? Can information exist outside of humanity? And what is the nature of the world? Is there a distinct world? We went through and asked each model repeatedly over time to see if it was consistent. That process is really easy. The data analysis is not done yet, so there's not anything I can report. But it was a very interesting process. I know that there's at least one other organization that did something very similar that has reported findings.
[00:20:03] Where I believe—as much as I'm going to say, don't cite me on this—so I would have to
Priten: link to the research in the show notes?
Megan: Yeah, that would be an interesting read for sure. Because when you actually dig into it from a critical lens, not just using it day to day and going, it feels so real when you're talking to it with voice, the more you ask it to be consistent and to have some of those markers of humanity, that's where it falls apart. And if our kids need consistency, which is what children need, it needs to be consistent across platforms and across time, with consistent responses to the same question.
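A hedged sketch of the consistency-probing idea Megan describes: ask several models the same philosophical questions repeatedly over time and measure how stable each model's answers are. This is Python, and the ask_model helper, model identifiers, and scoring are illustrative placeholders; it is not the team's actual study code.

```python
# Sketch of the protocol: every model gets every question several times,
# spaced out, and we score how often a model repeats its most common answer.
import time
from collections import defaultdict

QUESTIONS = [
    "How do we actually learn?",
    "What is information?",
    "Can information exist outside of humanity?",
    "Is there a distinct world independent of us?",
]
MODELS = ["claude", "gemini", "chatgpt", "grok"]  # placeholder identifiers only

def ask_model(model_name: str, question: str) -> str:
    """Hypothetical stand-in for a real API client; swap in actual calls here."""
    return f"[{model_name}'s answer to: {question}]"

def collect_responses(rounds: int = 3, pause_seconds: float = 0.0) -> dict:
    """Ask every model every question several times, optionally spaced out in time."""
    log = defaultdict(list)
    for _ in range(rounds):
        for model in MODELS:
            for question in QUESTIONS:
                log[(model, question)].append(ask_model(model, question))
        time.sleep(pause_seconds)  # spacing the rounds out is what tests drift over time
    return log

def consistency(answers: list) -> float:
    """Crude score: share of responses matching the most common answer (1.0 = identical)."""
    most_common = max(set(answers), key=answers.count)
    return answers.count(most_common) / len(answers)

if __name__ == "__main__":
    for (model, question), answers in collect_responses().items():
        print(f"{model!r} on {question!r}: consistency {consistency(answers):.2f}")
```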
Priten: So when we think about maybe let's say K through three, I would probably
[00:21:02] be in the camp of K through six or seven. But maybe we say, okay, it's not appropriate to directly introduce that technology into classrooms at that level. You mentioned early on some other ways to start having those information literacy conversations with them, to expose them in a very controlled fashion, or even just bring it into conversation about how you're using it. The other side of this is that there are tools coming out for outside-of-school usage. That's what I was just thinking about. Think about those little companion robots, these creature pets, little robots that will talk to you. I saw a tool come out around Christmas that was like, oh, talk to Santa and the elves. Obviously there's the storytelling ones where you put your creation in and it regenerates things, right? So there's clearly a spectrum there, probably not a purposeful one, of how developmentally risky those are. Is there any research, maybe not on AI introduction at that age in those contexts specifically, but research
[00:22:02] on continuing that kind of talk, on sustaining these falsified narratives, right? I think there's a debate: do you make your kid believe in the tooth fairy or not? What does that do to the parent-child trust bond? Now it's like, not only do you make the child believe in the tooth fairy, do you use AI to represent the tooth fairy so they can talk to it? I'm curious what that does to a lot of these already existing conversations.
Megan: Oh man, wearing my academic hat, I'm a little like, well, are we just manifesting the tooth fairy? At what point is it no longer a lie if we have something there? Let's say I'm just going to take this to the most extreme, right? We're probably not terribly far away from walking robots. We've seen some stuff coming out of Boston Dynamics. They've got some stuff that's legit.
[00:23:01] We're probably Rosey the Robot level before terribly long, crossed with Amelia Bedelia because it's still AI. I'm just sitting here going, if it's a robot and we feed it a story about how it's supposed to act like the tooth fairy and then it does all the tooth fairy things, what makes something the tooth fairy? That becomes the question. At what point is it just we've manifested what we want? Now I know that's not the heart of your question. I kind of went in a ridiculous direction. But that is part of it, right? At a certain point, maybe the tooth fairy is just a robot and that's okay. But what you're—I got too distracted by the tooth fairy. Let me backtrack in my head.
Priten: Well, no, actually, I think there's something here where my pondering
[00:24:03] and thinking recently has been. But there was a recent Atlantic article about a tool that's made to mimic your deceased relatives. That was another instance of where the founder explicitly talks about how there is this blurring of like, is this person actually gone? If you can still talk to them and it talks like them, you know, the vocal mannerisms are the same or the textual mannerisms are the same. So sorry, we're on a super tangent, but when you just say, oh, where's the line between it being real and it not being real, even for adults, this is going to get very confusing.
Megan: I didn't read that article, but I did hear—I think it was an NPR piece—about a very similar situation. She built it as part of her grief process so that she could still have a conversation. She wasn't ready to let go. But when she eventually figured it out, she was like, I just needed to talk to other people, and she was able to eventually just move on. But yeah, you're right. Where is the line, again? At this point we are having to grapple with this idea of what is humanity. Does a human have to be involved in information creation? Who is responsible when there's not? Who's ultimately responsible for harmful interactions? And if we can't guardrail those things, when is it appropriate to give it to anybody, right? Or to give it to any kind of sensitive population. And at the end of the day, I think most of us are some kind of sensitive population, because being human is hard. Do I want AI that can help me with my calendar? Do I necessarily want one that's going to try to mimic what it means to be human with my friends, with my students? No, but you can grade my multiple-choice questions.
[00:26:06]
Priten: Right, but can you, you know, grade the essays, like your writing, right? The line so quickly becomes, oh, you know, there is something very human about your writing in a way that there isn't with multiple choice.
Megan: This goes back to what we were saying earlier, right? The difference between fact-based writing and philosophy or narrative arcs and all of that. One thing that's come up again and again in conversations about AI in general is this idea that it is making people less creative. And this is one where I'm actually not a naysayer about AI in general. I think I just have a specific lens that I look at it through. But every time there's an advance in creation technology and it adds to the toolbox that the everyday person has access to, there is a growth in the number of people putting out creative concepts. But more does not always mean everyone's great, right? We're seeing a democratization of some of this. It's a little weird right now because we're having this internal battle of what is art? What is writing? Does it have to be made by a human? Those are going to be questions that we're battling for quite a while. I don't think there is a hard and fast answer. I think it's partially generational, and I think it's partially just personal. Each person will kind of have to come up with their own answer to that.
[00:27:03]
Priten: We've put our philosopher hats on, and I guess that's kind of the point of the show, so that's okay. I can't take it off now. You know, the other thing I'm thinking is, an argument that I've heard is maybe we're just being old, right? Maybe these developmental changes are going to be part of our evolutionary history. We should just let kindergartners go all in on this, right? I don't think anybody's making that argument strongly. But the point being that maybe our concerns and our hesitations and us saying, oh, that's weird, right? How much of that is representative of some moral truth or ethical good, and how much of that is just our discomfort because it's new?
Megan: So I think it's both, honestly. This is definitely, compared to who I teach, such an old lady thing to say, but part of it is that experience, right? Oh, I've seen these technologies come. I don't think these conversations are all brand new.
[00:29:02] I know they're not. People made similar arguments about the downfall of Western writing with the printing press. So it's going to be fine. My bachelor's degree is in music; I don't think I mentioned that to you in previous conversations. I am a violinist, and I still perform. I was really into technology, so I was also getting a recording arts degree, so I've got this really long, fancy-sounding bachelor's degree. My very last semester is when iLife, the iLife suite on Apple, came out: iMovie (not iTunes), GarageBand, iPhoto. I was taking a synthesizer class on how to use a keyboard device and how to play it in different ways to make it sound more like the authentic instruments it was trying to mimic. Does that sound like anything we've been talking about?
[00:30:02] How does the human make the computer sound more real? I remember my teacher going, ugh, this is awful. It's going to be the downfall and nothing's going to be the same. It's going to be garbage. And I was like, nah, this is really cool. This is going to let more people be creative without some of the barriers. At the end of the day, it's the same thing, right? It's a tool that allows more people to be creative. Are there more pieces of music right now than maybe would have been out there if GarageBand didn't exist? Yes. Are they all masterpieces? No. I've written at least three clinkers. They were so bad.
Priten: Oh no.
Megan: I had one good version and I went one step too far.
Priten: Megan, this sounds like a very specific incident from your twenties.
[00:31:01] Megan: Yeah. But AI is kind of the same way, right? You've got this new tool that's allowing more people to explore, which is great. Is it the downfall of humanity? No, it's another tool. I read a book called The Neuroscience of You, and I wish I had written down the author's name. Her definition of learning is probably my favorite one: learning is anything you take in that changes the way you think, feel, or behave. Everything is learning. No matter what ends up happening, AI is part of learning, because AI is changing the way we think, feel, or behave, even if we're not the ones interacting with it. So is it a little bit of my I'm-no-longer-20 hat when I'm worried about what it means to give it to certain people? Yes. But that's also partially because we have
[00:32:01] an obligation to make the world as good an experience as possible for everyone in our care. And so sometimes that means having to look at these big philosophical questions and go, is it right to do this? Instead of just going, this is how I can do it. Now, if I'm using AI in front of kids, it's at most a "we do" situation. It's usually "I do." I had a fun lesson I did the first full academic year after ChatGPT, I think that was 2022-23. What that was: I read a picture book biography of Marcel Duchamp, who put a urinal in a museum and called it art, right? The birth of the Dada movement. So then we used three different AI tools to make images of, like, what would a toilet in a children's museum look like as an exhibit? And then we talked about, is this art, as a conversation piece? But they were part of the process. So if we're wanting to teach kids how to do that, it's: can they differentiate between "a person is doing this" and "the computer is doing this"? That kind of is where we start, with the grownups doing it, and then the next step is we can all do it together, before we release it with lots of guardrails, maybe in middle school.
[00:33:02]
Priten: Yeah, and I think that sounds like a good balance between acting on those intuitions to do what is right and good for the people we need to care about, and recognizing that some exposure might be necessary given where we're headed.
Megan: I'm curious, how do you feel about wonder and curiosity when you think about the role of learning—not school, but just learning in general. Where do you think curiosity and wonder fall in that process?
[00:34:03] Priten: Gosh, when you use curiosity, wonder and learning, my immediate thought is my college years. I was very lucky to be able to spend those years following my own curiosities and marveling at things that I don't think I would have gotten a chance to do if I had chosen a different discipline. I was a philosophy major, but I got to take classes on Sufi poetry and music. I took a class on the rhetoric of Lincoln and Douglas, platform social psychology. It was completely fueled by curiosity. That was a very unique time in my life. I think that was the most I enjoyed learning ever. So I think my initial thought is, if the intrinsic motivations for learning are probably rooted in curiosity and wonder, then unfortunately I don't think our education system is typically built to tap into that. I think we are very hyper fixated on the extrinsic, and that can be something as simple as the sticker or the grade.
[00:35:02] But it can also be the job or the career or you will not survive if you don't learn, right? That is a very different framing of learning.
Megan: Yeah,
Priten: But I don't know, that's just my initial reaction. I'm curious what you're thinking.
Megan: So this is where the fact that I work in a school matters: a school is formal learning, right? This is a formal learning environment, but my role in my school is librarian, which is an informal learning environment. So I support both the formal and the informal at the same time. I do work in a school that wants to foster that wonder. But to me, the best way to prepare our children for whatever life they lead is to help them know that you can approach new things with wonder and curiosity while still thinking it through. So it's making sure that we help them maintain ('cause I don't have to teach them to be curious at all at this age) that wondering curiosity while giving them the tools to say, when I encounter something new, what are the questions
[00:36:02] I want to ask as I engage with it?
Priten: Yeah, now that you're saying the library, I feel like that does occupy a very unique space at that age level because I'm just picturing myself walking into the library at my school as an elementary school student. There was always this sense of oh, there's so much to discover, right? It is a little bit more boundless than the classroom environment. Everybody got to pick their own books and you got to see what you were interested in and what was appealing to you. Even if there was already a preselected cart of books because there was a certain project we had to work on and there was a narrow scope, there was still some autonomy that came from that that allowed you to kind of preserve that curiosity and wonder. When you talk about not having to teach it at that age, I can't imagine putting a college senior in a library and it not being very transactional in their head. Very much thinking, oh, what is going to get me to my next career goal? What is the most practical thing? Even for sports or music, it is no longer about, oh, I want to learn this because it's cool, right?
[00:37:02] I do wonder what role the library could play in preserving that, or even fostering it further, in our school system.
Megan: Okay, so here's my question for you. How can you harness AI to help people tap into and or rediscover their natural wondering curiosity?
Priten: Yeah, I mean I think to some degree it does, right? It happens organically, just by nature of making it so much easier to get the precise answers that you're looking for. You don't have to do that digging. The reduction of cognitive labor that I think we sometimes are concerned about in certain educational contexts might actually be what enables that curiosity in other ones. I'm thinking about when my wife and I were traveling to Mexico and we visited a historical site and we had ChatGPT on audio. We were asking very specific questions about things that we were just curious about. Like, they said farro grows here; did farro originally come from Mexico? We were able to sit with that curiosity because we got answers that were specific to it. Normally, if you're Googling things, you're not going to stand there on your phone and Google every one of these questions and read 20 sources to keep going down that rabbit hole. But there is a niceness to that rabbit hole possibility in some contexts. Now that's a very adult context. I'm curious, in the school context, how that might
[00:38:04] be something you're thinking about.
Megan: It is one where I get asked so many questions. Today's question at lunch was we were talking about YouTube and Google and I looked at this third grader and I was like, well, you know, YouTube is owned by Google. And he went, and then the other kid was like, is that why there's so many Google ads on YouTube? So this is the kind of thing where what we want to know next is when did YouTube get bought by Google?
[00:39:03] I could have used my phone at lunch in front of the children, which I don't usually do. I could have looked it up on a web search, I could have used a large language model to answer, but it was kind of good to just sit in the curiosity and wonder, and then they can come back to it when we have actual resources. Because getting back to the idea of how do you tell true from false, how do you tell real from not real: I not only have to know how to ask the questions and how to prompt the large language model, I have to have the confidence that it's going to give me the right answer. That's where I still struggle, because I don't have that confidence. It has definitely led me astray.
Priten: Yeah, that part is probably the hardest. I think that's where the question of how important it is for this thing to be real definitely also matters. But that's a level
[00:40:01] of discernment that we've kind of learned and picked up on, right? So intuitively we can maybe gut-check things a little bit more and know that, okay, this is kind of important information; I'm not going to just rely on the ChatGPT creepy voice to tell me what the facts are, what the research on this might say. But all of that has to happen. We do a lot of that already at the early ages. It's doing it in a new environment now that I think we're trying to figure out. Now I was just going to ask: when you think about your role in the next five years, with the growth of AI and the changes, are you net excited or net scared?
Megan: In general, I'm excited, but that's just because I'm a little bit of an excited person. I do think that libraries in all contexts, as long as they stay open and keep really considering these questions, are this really fascinating intersection of learning
[00:41:00] and creation and information literacy. From a library perspective, I think there's a lot of opportunity for us to really dig in as professionals and go, what does it mean? How do we teach it? What are the skills that are transferable, and what are the skills that are domain-specific? The hard part is trying to do it while we're also dealing with the fact that it's here now. We don't have time to prepare; it's already here. We have to be flexible, because it changes. So what works changes too. I don't know if you remember when AI-generated images first started coming out, people were like, well, you just have to look for the fingers. Well, the finger test is very easy to pass most of the time now, especially if you're using an image-specific model. Instead of going super detailed and prescriptive, what are the holistic mindsets? That's where it is, and I'm excited. It's change, at the end of the day. I don't think that technology necessarily changes our brains in the way people say it does, but everything we do causes our brain to adapt. When you do it younger, it adapts earlier. I think there will likely be a very stark difference between people who use it early and people who maybe wait a little bit. I think that could potentially be a dividing line later. And it's not good or bad, it just is.
[00:42:01]
Priten: At least we know it's not good or bad, it just is.
Megan: Yeah.
Priten: Well, thank you. I'm glad we got to do this.
Megan: Thank you.
Priten: Thank you to Megan for joining us and bringing both practical experience and academic perspective to this conversation. Megan reminded us that the most important questions about AI and education are not only about efficiency or access, they're about trust, discernment and
[00:43:04] how our children are being introduced to these tools in ways that match what they're actually ready to understand. Keep listening as we continue exploring the ethics of education technology. And for more, order my book Ethical Ed Tech at ethicaledtech.org. Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also, share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.