Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming.
How do you teach responsibility if students do not care? Or maybe more precisely, what do teachers do when students understand the difference between learning and short-cutting, but choose the shortcut?
Today I'm joined by Lorin Koch, an educator whose perspective spans several parts of the education world at once. He came to teaching after first working in journalism, then built a career in high school classrooms. He now teaches across in-person, online, and college settings. We're also lucky to have him on our team at Pedagogy Futures.
Today we talk about what AI looks like in actual classrooms, how students blur the line between assistance and avoidance, why self-paced and online environments make accountability harder, and what happens when technology makes it easier to complete work without really doing it.
Let's begin.
Lorin: I graduated from college with a double major in religion and mass communications. I worked in journalism for a year, and determined that was not where I wanted to spend my whole career. I had a family friend who was an education professor and recommended that I look into education.
I went back to school for a master's in teaching and have been teaching high school ever since, mostly English but in some other subject areas as well. I got a doctorate in education, graduating from my online program in 2022. I've been trying to work for colleges and have gotten some online college teaching jobs over the last couple of summers. I've moved to the town where that college is located, and the likelihood of having some additional teaching through the college—both to pre-service teachers and current teachers in the master's programs—is very high.
So I have a lot of classroom teaching experience. I've also taught online high school for a couple of years as well. I have a variety of types of teaching experience and also have two kids who are going to be in seventh and ninth grade next year. So I see this from many different perspectives.
Priten: I'll definitely want to hear more about how you're thinking about this as a parent as well.
I'd love to start with your experience as a student before we dive into your teaching and parenting perspectives. What is your earliest memory of a teacher using an education technology tool in a formal context? Especially if you or the teacher had a strong reaction to it?
Lorin: I remember people wheeling the VHS cart with the TV into the room to play videos. I even had film strips when I was in junior high where it's one image at a time with audio played. I remember a classmate who got queasy during a sex ed lesson in junior high and tried to leave the room, feeling sick and kind of slamming into the blackboard on the way out. That was quite an interesting visceral reaction.
I also remember in elementary school when we had Apple IIs. We would go down to the media lab and the library and use these computers on Oregon Trail and stuff like that. So that was quite amazing technology at the time.
Just one other really quick story. When I was in ninth grade taking keyboarding, that was in the era when Windows had just come out—I guess Windows 3.1, before Windows 95. I had keyboarding half on typewriters and half on computers. All the students knew the computers way better than the teachers did. We were getting on network stuff and manipulating people's accounts, but that was the era.
Priten: When you think back to those experiences, there's some joy in your voice. Is that an accurate descriptor of how technology was experienced by you and your peers?
Lorin: Definitely. I can't speak for everybody, obviously, but for me, I really had a lot of fun with computers. I would spend hours on them just trying to mess around and see what I could figure out in the programming realm or just how to use it in ways different from what the teachers suggested. But I wasn't doing anything malicious, really—just kind of mischievous.
Priten: When you think about decisions you make in your classroom now, do you think your students feel a similar way about your decisions as you did about your teachers' decisions?
Lorin: I imagine to a certain extent. I think it's harder now because in those days it was really the wild West of computer technology. Now there are a lot more tools that have structured things much more, so people don't have to figure things out for themselves as much. I think probably a lot of kids do enjoy using tech, but not necessarily understanding the tech. It's more about the social components and consuming entertainment content.
Priten: You've obviously taught online classes and online high school, which is very dependent on technology. I'd love to hear about your in-person classrooms first: how you've incorporated technology and how you think about it. Maybe we can talk about pre-AI stuff first, and then talk about how that's changed post-AI.
Lorin: I've used Google Classroom for years, along with the Google Suite. A lot of creating documents of different types and then submitting them through that. It's really a useful tool as an English teacher to be able to have students turn in their essays, get feedback, and leave suggestions. I have student presentations that they put on there and then share and present from the front of class. I've also taught computer applications classes from time to time, which are really about every component of that—trying to learn how to use applications most efficiently and productively. I prefer teaching applications much more than the programming side. I haven't really done much with that.
Priten: When you think about the transition to AI technology, does AI currently play a role in your classrooms?
Lorin: It does. When AI first started coming out, I'd call myself an early adopter because I had been following some AI research on Twitter before ChatGPT was publicly released. When it came out, I was on the forefront of that, but I was scared of introducing it to students because I didn't want to give them tools to completely cheat on everything.
But there were some students who already knew about it, so we used it to generate content for a yearbook page and images. I tried bringing some image generation into class. I've tried to structure it so that students realize they need to do the thinking and writing themselves, but it can be a great aid in brainstorming, organization, and editing. I was able to do some presentations for other teachers about AI because a lot of people had heard about it and didn't really know what it was. I shared different ideas for using it in class productively rather than just being scared of it.
Priten: I'd love to have you walk me through what it looks like to help your students navigate the difference between cheating and not doing the thinking. What resources are you providing them? What do those conversations look like? What pushback are you getting from students when you're having those conversations?
Lorin: Yeah, it is difficult because a lot of them are constantly looking for a shortcut. I'm so thankful that I did all my degrees before this existed because I know the shortcut would be so tempting, especially in online school where you're just on the computer doing your assignment—watching a video, seeing a presentation, reading on screen, and then typing your answers into blanks like that.
The online school that I teach for has an online services agreement that students have to sign saying they will not use AI to do their work for them. There's even a video that walks through different ways AI often gets things wrong. So even if you turn it in and save so much time, you could get an F and wonder what happened. There's also a Zoom interaction between the teacher and the students where they have to answer questions to explain how they're going to do this without using AI.
Of course they can say as much as they want about how they would never do that, and then they get into it and, sure enough, guess what? They used it. So it's definitely an educational process. I think the distinction between using it productively and using it to do your work for you is important. It's important to educate and clarify, maybe show examples of when it would be a good use and when it would be a cop-out use.
Priten: When I put myself in the shoes of a current high school or college student with the amount of pressure to get things done and balance everything, it would have been tempting to use AI tools in ways that probably would have co-opted my learning. Are students receptive to that idea? Because it feels like a huge part of the way forward is going to be buy-in from students—they're going to have to agree that they need to put the hard work in.
Lorin: I think that's right. I think they do understand conceptually what it means to have it do your work for you and that it's not beneficial to them. They understand that on a conceptual level, but when it comes time to actually do the work—maybe you've gotten behind and you're trying to get caught up—it just seems like a way out of a negative situation. Even if they understand that ethically presenting work that's not theirs is not the best choice, they may still choose to do that.
Priten: How often is it all or nothing? Are students having trouble figuring out the difference between "this is AI-assisted" versus "AI-generated" versus "my own work"? Or are you seeing students using it as a quick out, so it's not really even AI-assisted—it's purely AI-generated?
Lorin: I think I've seen more of that, where they're just trying to get the checkbox checked off. They don't really care, and it's not that they're trying to hide it—it's just that they don't care enough about completing it in a more substantive way. They just want to get it off their list.
Priten: That's interesting to me. So much of the narrative right now is about helping students figure out responsible ways of using AI. But if they already know what they're doing is in a clearly irresponsible way, I wonder how effective conversations about responsible usage would actually be.
Lorin: Yeah, that's a great point. Coming in and saying this is not a responsible use of AI, and they're like, yeah, I know. I made a bad decision.
Priten: That's super interesting. I'm curious now to hear a little more about your online classes, because assessments have become very difficult at all levels of education, especially online. These classes have the additional challenge of not even having the opportunity to assess anything in person.
I'd love to hear how you're managing both the asynchronous and synchronous online class assessments.
Lorin: Someone just gave me a clarification about the difference between asynchronous and self-paced. The high school classes I teach are really self-paced. Students have a year to get through the semester—most don't take that long, but some do—and there are no specific deadlines. It's just their own timeframe. Whereas asynchronous is where there are deadlines, but when you do it is up to you.
The high school classes I teach do have proctored exams, meaning students have to have an adult who is not a family member watching them. I've had students turn in basically completely ChatGPT-generated exams that were supposedly proctored. So I know that kids are quick at figuring that out. In fact, I can show you an example.
This was shocking to me and yet not shocking at the same time, but this was an English final. The answer to question number 23 on the test included all of this text: "Here are humanized and shortened answers for the questions you've shared so far," and then it has all of the answers one through 22 in there. So the student accidentally copied way more than he meant to. Reading the test, I was like, oh, this is ChatGPT for sure. And then when I got there I was like, yeah, that confirms it.
I had a conversation with that student and he admitted to it. I said, my first impulse was to fail you from the class. I'm choosing not to do that. I'm going to give you an F on the test. But what are you going to do about this?
And he was contrite—to the point of saying, yeah, I did that. It was a bad choice.
I also had students in the college classes turn in work that included "ChatGPT said:" with the text underneath. I don't know how to keep that from happening in an online class beyond getting their agreement that they're not going to do it. It's only as good as their word.
Priten: Obviously right now we're at a transition point for a lot of these things because everybody is jointly figuring out exactly what we're going to do about this.
Even when I'm sitting down thinking about new releases from the major AI companies, there's a new one from OpenAI that makes agent operators much more accessible, where you can have it navigate websites for you and do things. I fed it one of the products we've built—an asynchronous tool that teachers use—and tried to get it to complete the coursework. It was fairly successful at going in lesson by lesson, taking the multiple choice questions, and taking the end quizzes. That's scary. That worries me in terms of the students who are going to be the hardest to reach and getting them to understand the importance of the actual work.
I feel like the high-achieving students will always be able to piece together the consequences: "If I don't really do this, maybe I won't do well on the test or I won't do well in college or my career." There's some long-term consequence calculation by those students.
But the students who are struggling the most are probably the ones who can only see what needs to happen within the next 24 hours or in that moment because they're thinking about so much else. I think they're going to struggle.
The narrative around it for so many years has been that it's supposed to solve this divide—to help narrow the achievement gap, provide resources, and make things more accessible and scalable. Every day it feels like that becomes less and less true, especially when you think about online high schools. A huge part of their existence is about accessibility. When you think about the future of that type of education for these students, how are you processing and conceptualizing it?
Lorin: It's hard because if a student passes this class with a D, it's an indicator to any future school or scholarship program that the student didn't actually succeed at a high level. The ones that really concern me are the students who are just good enough to get by, so they have a B or B-plus or something like that. That's a bad signal either way: either they get a grade they don't deserve, or the teacher assumes they used AI and knocks them down when they didn't, just because of the phrasing or something like that.
The mismatches between intent, production, and outcome are the biggest concerns to me.
Priten: One of the other things I noticed is how much of this is about your gut telling you that something is AI-generated, right? There's obviously a massive debate about the use of AI detectors given how high the rate of false positives and negatives is, but the teacher's gut still seems like a much better detector than anything else. But I don't know how long that's going to last.
I also think about students who've grown up with this. If you're a ninth grader in 2023 and you've been using AI even in your personal life, the way you use language is going to be influenced by that, right? I'm sure I've picked up phrases and words and grammatical structures from just how much I've read AI text. There's no way my brain hasn't internalized some of that. And I had time to develop my writing style before AI.
I'm curious about what happens when we're looking for these AI markers of text and you get a student who's learned how to write while reading AI. What does that mean for our gut? That scares me a little.
Lorin: Yeah. Everybody who has their editing done by Grammarly or whatever ends up putting in words like "delve" and "tapestry." It's not just ChatGPT; it's all of these tools.
Priten: Right. But I mean, if you read that enough, it becomes a normal part of your vocabulary and you're using it when you write. "You write what you read" is a fairly accurate maxim.
Lorin: I'll have my students do online presentations now. All my final assessments are presentation-style. You can do it alone with me if you're not comfortable doing it in front of the class, but you have to present something at some point where you're explaining your work, not just submitting it.
Even then, I get students who have fully AI-generated their PowerPoint text. Oftentimes it's way more words than the student would normally have written for anything. And there are times when they're up there and I'm asking follow-up questions and they're able to actually answer the follow-up. And then there are times where they just reread what was on one of their slides.
There are still some intuitive things that I can pick up as an instructor that help me understand whether or not the student has used AI. But again, I don't know what I do about that student who generated the text using AI—it's very clearly AI-generated—but did learn enough to defend what the AI said. That's interesting in terms of how we just structure grading.
Priten: So when you think about your assignments in your classes—even the formative exercises—do you use discussion boards still?
Lorin: Yes, I still think there's benefit in having to publicly state things, even if it's not at the same time. Even if the students aren't there at the same time or somebody comes to it later and looks at it, just having to publicly say something that has your name posted with it is a good signpost that you're putting effort into hopefully making something that sounds relatively coherent.
For students taking the class at the same time, discussion boards are a really good way to make sure they're communicating and probably everyone says more using a discussion board than they would in a real-time class in person. I tended to sit in the back and not really say a whole lot, even though I was listening. But if you're forced to do a discussion board, it makes you put your thoughts into words. So I still think there's value there.
In fact, I've thought about incorporating ChatGPT conversations as a tech-aided discussion board because when you're doing a discussion board, you're supposed to post on Wednesday and then reply to your classmates by Sunday or whatever. By the time you get back to that, are you still remembering everything you said in the first place? But if you have that conversation with ChatGPT, you get real-time feedback and you can ask it to push back or add perspectives you hadn't thought of. It can even be valuable that way.
Priten: The idea of not fully replacing the discussion board but replacing some component with an AI conversation would be a very different experience for students, especially in a self-paced class. I'm also curious how you're thinking about this as a parent, because you have two kids at formative ages, with major transitions happening. What does technology look like in your home? What's available to them? What are they allowed to do or not do?
Lorin: Yeah, my kids each have a device. My older daughter has a cell phone, and my younger daughter has access to one of my old phones for a shorter period of the day. They use docs and share docs between the two of them. I don't think they typically use AI—not ChatGPT or anything like that. But now even Google search has AI features that you can just go in and communicate with.
They've seen me using it for years because I've shown them stuff that I've been doing for Pedagogy Futures, and so they've seen a lot of it. They find it interesting and funny when I come up with some crazy stuff. I don't think they would know to use it on an assignment. I think both of them are a little bit more conscientious than that.
I could definitely see, especially my younger daughter, asking: can we use this to get me unstuck or I don't know where to start with this? But yeah, they know what AI is and they know how ChatGPT works and they know how the image generators work and all that. So it's kind of an interesting glimpse into students' lives. I don't really have experience with students in my household who've used it to cheat that I know of, so I can't get that perspective.
Priten: Do you think that's their natural inclination? Have you had conversations with them about it, or are their natural inclinations enough that they don't need that conversation?
Lorin: I like to think I would know if they were using it in that way. When I see their assignments, it's pretty clear that it's their own writing.
Priten: Are you worried about peer pressure? One of the things I've heard from college students is a sense that you're doing a disservice to yourself by not using the tools, even if you're not particularly inclined to. Because of the competitive nature—especially if you're in the sciences and it's graded on a curve—the peer who uses it, does the work in a fraction of the time, and then spends the rest of their time doing something else obviously has an advantage.
I'm curious: at that age, I'm sure they're not the only ones who are still very conscientious and haven't yet fully explored or even thought about misusing it. I wonder whether, as they progress through schooling, that's still going to be the case.
Lorin: Yeah, I'm curious too. My older child is very much a self-motivated person, so she's not the one studying with friends or anything like that. She just kind of takes care of business. The younger one is much more social, and I could see it being more of a thing in her life to see what other people are doing and be affected by that. But she also has a very strong sense of right and wrong. That can be tempered sometimes by seeing what other people are doing and justifying whatever it is.
So I think she would talk about it though because she talks about everything. She's an out-loud processor, so I think she would talk about it—like such and such person was doing such and such thing—which would be a good opportunity for a conversation. But it will be interesting to see as they get older how much that comes into their experience.
Priten: When you think about technology in general in their lives, are you seeing it as a net positive? What are your concerns? There's so much coming out about children and device usage and technology in general. It seems like the last few years have been hyper-focused on the negative. How are you seeing your daughters' experience, and how does that shape your understanding?
Lorin: I do think of it as a net positive in general, just because of the opportunities to make tasks so much more efficient and productive. Both of my kids really like computers and tend to want to use them. We've set some strict limits on desktop computer and device time, and also on social media, which we haven't really allowed them to get on.
I do see that a lot of times when they get on my computer, what they want to do is just watch videos, and I think that's a good outlet, because you do need to unwind sometimes and just veg out and watch videos. If I could enforce time limits for myself like I can for them, that would be great.
But they do like using it productively too. As much as they enjoy doing it just for fun, they like to be able to know how to use technology. My older daughter has always been really quick to understand technology. She'll get in there and do something quick and be done with it.
I think about the 10,000 hours that's always been talked about for expert level, and I'm probably not giving them enough time to do that. So maybe I feel a little bit neglectful in that way because maybe they could become an expert at some technology if they had more time with it. But then I'm like, but at what cost? How much of your humanity are you giving up by spending so much time staring at a screen?
Priten: That seems like the question of the hour—across how we deal with education and just children in general. There are clear benefits to using technology, there are productivity gains, there are learning gains, there are career opportunities that open up from it. And that's more and more the case. But yeah, what does that mean for their social-emotional development? What does that mean for their ability to think on their own, to speak to another human? All those things. It is a pretty fine balance.
I just want to hear a little bit about concerns from pre-service teachers that you're working with. There's so much negativity around AI, especially in the humanities, because of how much assessments have been pushed back on or become irrelevant. Are you hearing concerns like that from your pre-service teachers about what it means for them when they enter classrooms?
Lorin: I don't think that I've gotten to a point in either of the classes that I'm working with where that would come up as a topic of conversation, but I'm sure it will. One of them is a tech class, so I'm sure it will come up. They have talked about their experiences with tech and getting involved. But of course, even the students who are sophomores now spent very little of their time in high school with AI existing.
I am interested in that though. I had one student post that she thinks the use of AI is unethical, and I asked for clarification on that. She didn't really follow up, but I was curious to know what she meant by that—whether it was exploiting people's work or if it's just trying to pass off your work as your own, or maybe even the resource use of AI. I'm really not sure what her angle was on that.
But it's enough to know that some students are really in tune with it and some just aren't that comfortable with technology. So I definitely am interested in that topic, finding out what current college students think about it.
Priten: I guess the seniors this year still got at least a year of college without the popularization of the technology, and everybody else got years of high school without it. I'm curious to see how that evolves as more students spend more of their formative years with the technology around: in their Google searches, on their Snapchats, Facebooks, Instagrams, WhatsApps, and Apple devices. It's just everywhere.
If we were to end on your biggest fear and your biggest hope—or what excites you the most about how the technology is evolving—what would that be? Let's start with the fear so we can end on a good note.
Lorin: Thanks. One of my massive fears is students plugging something into Google, getting an AI answer that's totally wrong, and taking it as fact because it was a search result. We're used to those first couple of results, even if they're promotional or whatever, not being incorrect information. But ChatGPT doesn't know what's correct and what's incorrect. So three people could do the same search and come back totally convinced of completely different facts. Moving forward, they all have a different basic understanding of what reality is, shaped by unintentional AI. It's not malicious; it's just what it is.
I don't think I would be as concerned as some people in the news about AI taking over the world. It seems like it has its limitations. But I am really concerned about people getting material that they think is factual and it being completely off.
I think my biggest hope is something someone posted probably a couple of years ago, fairly early on. They said something like: "I don't want AI to take care of my creative functions. I want AI to take care of my mundane logistical functions so I have more time to be creative." I think that's the big hope for me—to look at it and say, can it help me take care of the paperwork, the stuff that's annoying to have to do, so that I don't have to worry about that as much and I can focus on things that are more meaningful and more human-uplifting?
Priten: Yeah, I think the person mentioned laundry and washing dishes as two of the things they didn't want to do, and poetry and drawing as the things they were hoping AI was not the first to intervene in. That feels accurate. And there are parallels for work-related stuff too. Schedule management is one thing that technology has made so much easier. The amount of back and forth it normally takes to manage a schedule is insane. Sometimes you needed an entire person whose whole job was managing somebody's schedule. Now there are all these tools available, and AI tools like Motion are coming out that try to put your meetings in a particular chunk of time and help you. So offloading that to AI technology sounds great.
This has been great for me to hear, because you work with three very different populations. I appreciate you taking the time to do this.
Lorin makes clear that AI has not created the responsibility problem in education, but it has exposed it more sharply. His reflections on motivation, integrity, and student disengagement push us to ask what schools can do when students understand the difference between learning and short-cutting but do not always choose the harder path.
Keep listening this season as we continue to explore the hard questions about technology, teaching, and what we want education to become. Pre-order my book for more on how to approach ethical concerns in education at ethicaledtech.org.
Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also, share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org.
Until next time, keep making space for the questions that matter.