How Can We Center Pedagogy During the AI Tech Wave? - Lance Eaton
#5

[00:00:05] Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming. If you've been teaching for a while, you've seen a lot of education technology come and go: online learning, open educational resources, learning management systems, and now AI. Today's guest has not only witnessed these shifts, he's been shaping how educators respond to them. Lance Eaton is a Senior Associate Director of AI and Teaching and Learning at Northeastern University, and he's spent 15 years in instructional design, working everywhere from community colleges to Ivy League institutions. I had the pleasure of meeting Lance at College Bound. Lance thinks about technology differently. For him, it's a spectrum from the pencil to virtual reality, and the question isn't whether to use it, but how to integrate it thoughtfully in service of learning.

[00:01:06] Let's get started. Lance Eaton: I'm Lance Eaton. I started as a full-time adjunct in the late 2000s. I taught way more courses than any individual should in a semester, over several years, and found myself in instructional design. Over the last 15 years in that field, I've been moving into this hybrid space where instructional design is still part of my core, but I've also leaped into doing faculty development. Much of my work is thinking about and sitting with faculty and educators to figure out the different ways we can demonstrate learning and how we do that with different technologies. When I talk about technologies, I mean everything from a pencil up through AI and virtual reality. I've moved around at different institutions in the Northeast, from community colleges to Ivy Leagues, and most recently landed at Northeastern University as the Senior Associate Director of AI and Teaching and Learning. A lot of that has been in the last two or three years—a mixture of writing, workshops, and collaborations, and just doing a lot of public work and work in my classes that I've been sharing out to help in this rich conversation. We're all trying to figure out: what do we do?

[00:02:00] Priten Shah: What do we do is definitely the question of the day. I'd love to hear about the different transition points that you've noticed when it comes to education technology, especially when you think about the last 15 years. We've had lots of different turning points, with the pandemic being one of the obvious ones, but of course also technology turning points. So, big picture, what is the general trend you're seeing across time?

Lance Eaton: In my experience of the last 15 years, I was coming in when online learning was hitting its stride and feeling more legitimized than it had in the previous decade. I took my first online class in 2009, and I was coming into instructional design around 2011, when it was feeling more validated. Still dismissible, but a lot more people were interested in it. We were starting to see more significant appreciation of it. So I think that was a big one. That was also happening as we got a wider range of web 2.0 tools—social media, blogging, and video—that we could actually access without horrible quality or endless download times. In the mid 2010s came the rise of open educational resources and a recognition of the power they had in teaching and learning. We started to see richer, more complex conversations around the role of the cell phone in classrooms. And we started to see a lot more synchronous online learning, where we could be in a Zoom room or a Skype room or your Adobe room and start to do classes in that space—which was great, because the pandemic happened and we all moved into those spaces.

[00:04:15] And I think one of the things I saw from that is: at that point, you could no longer hold out. There's now an almost universal expectation that you need to learn to use the LMS—whereas prior to the pandemic, there were still lots of pockets that held out and never did anything with their LMS courses. And then a richer mixture of looking at OER, a richer mixture of thinking about what other digital tools are out there. So it did push a lot of people in directions they might not have been ready for, but had to figure out on the fly. And then there's AI. I think it's really interesting, because the things that have popped up—like augmented reality, virtual reality, the metaverse—by and large haven't taken off. It's really only little pockets. It's so interesting. It felt like there was something there in the late 2010s.

[00:05:01] Priten Shah: Do you think that most of that is accessibility of the tech, or do you think there's something else at play?

Lance Eaton: I think it's the accessibility of the tech. With generative AI, all you need is a text box to interact with it. That's a very low barrier. Everybody has been using text boxes. They use them on their phones. You see a box, you type into it. So I think that's it. With other technologies, there are still two pathways: create it yourself, or buy a solution, whether that's from a publisher or some entity. And those usually don't hit the mark in the way that faculty want. So you could have a simulation game like Civilization if you want to play around with history.

[00:06:00] Priten Shah: Right.

Lance Eaton: And I love it as somebody who majored in history and has taught history, but it still doesn't quite do the things that I want. It also has its own layer of complexity to get used to. So there ends up being lots of possibility, but it won't be realized until there's an easier entry point—both for the educator, to figure it out and customize it in a way that makes sense to them, and for the students on the other side, to be able to access it and have it be familiar enough for whatever the class size is.

[00:07:00] Priten Shah: Right. I'm curious. The AR conversation or even VR conversation is slightly tangential to the larger issues at play right now. But I'm also curious about the pedagogical value that you see in these tools if they were to be made more accessible. I hear about virtual field trips, and that's the go-to example of school implementation. Simulations are obviously another one. Do you think this is an engagement tool predominantly, or do you think there's separate pedagogical value that's to be gained from using that versus a different tool?

Lance Eaton: I think the simulation space is where it's so exciting, and we're seeing some of that being used with AI. There's a value in the immersion, especially in courses that are asynchronous or that meet once a week. Anything you can do to enhance the senses matters. This is what I think a lot about, particularly as we went through the pandemic: embodied learning. What's happening in the body, and how that impacts how much you learn. And I think about this in a very simple way: I've never been in a classroom where I'm comfortable, because the chairs are horrible. For my body type, I'm just always uncomfortable. And that's a distraction from learning. So the reverse of that is—and this is where I think about immersive learning and virtual reality—if we're enhancing the sensory experience, hopefully for the better, there's something there to dig into pedagogically.

[00:08:02] To me, I keep going back to now, embodied learning.

Priten Shah: Do you think that idea of embodied learning helps us do anything differently with AI that we couldn't do before?

Lance Eaton: I think there are ways we may start to get there as we have avatars and avatars for engagement and going through simulations. There's one way I'm thinking about it, particularly how I've been using it in my own work. I haven't really tried this out in classrooms, but one of the things I really like to do now is take ChatGPT, turn on the voice on my phone. I'll have my headphones in and I'll be going for a walk and I will use it as an iteration space. And to me, there's the linking of movement and thinking: we know walking and movement is good for the brain, it is good for thinking. And so to be able to do that and have those thoughts recorded—the AI is asking me questions, I'm giving responses. We're going back and forth. So when it's time to take whatever the purpose is of that interaction, whether it's a talk, something I'm writing, or something I'm planning, it's all digitized and I feel it differently. I experience it differently than just sitting at a computer. I get to be more in my body while I'm also trying to think about things, and I get to do this while going for a walk in the park, which now doesn't make me crazy for randomly talking—because that's what we all do with our phones. I can have those conversations without people looking at me sideways.

[00:09:01] Priten Shah: Right. So does it just prod you with questions? Is that what you're doing?

Lance Eaton: I'll explain whatever it is that I'm working on. I'll tell it to take a particular approach or goal or way of pushing at my responses or digging deeper. It just becomes a conversation that takes a while to get into that rhythm. But once you're into it, once you've done it a handful of times, it becomes a really useful way of thinking aloud.

Priten Shah: That's something I would want to try. My go-to trick has been, every time I need to write something, to have it generate an outline using questions—I find it much easier to answer questions than to start from scratch. This feels like a further extension of that, and I like the idea of taking a walk while using it. Yeah, that's definitely a cool idea.

[00:10:06] When you think about the kinds of suggestions that you've made to faculty, especially when you're thinking about the instructional design component, what ends up being your go-tos for a faculty member who is thinking about the role of AI in their work for the first time?

Lance Eaton: It's like everything else with instructional design: start small. This is hard with AI because, unlike other technology that we could see on the horizon and start to prepare for, it just showed up everywhere. And the thing that's hard to advise is that within instructional design and faculty development, you're always asking: what's the goal? That's backwards design—what do you hope they will be able to demonstrate by the end of the course? You do that, and you come up with the assessments, and then you come up with the learning activities. Here, the tool is simply everywhere. We know that students aren't all using it. Some are using it appropriately, some are using it in ways that bypass learning. So it's: how do you weave this in? Because while it is shiny, it's also something that's just there. And to not be thinking about it in a classroom today is like not thinking about the internet or the whiteboard in a physical classroom. It is part of that space. So how do you use it so that it serves your goals?

[00:11:08] That's the first piece: just the acknowledgement of where does it fit? Then within that, I lean towards let's have the conversation with the students. What is their understanding of it? How do you talk about it regarding the course, the discipline it sits in, and their learning goals? And collectively create some kind of policy or set of agreements to hold one another accountable. Or recognize this is what we're trying to do because we're still all learning with this tool.

So that's one way for the completely new person just trying to figure out where to step first. In terms of an activity, one of my go-tos is: if you do reflection in your course in any way, have students try it with AI. One of the things I find really powerful is that reflection is an amazing tool for learning. It's validated in so much research. But it's a practice. If you're earlier in your reflection practice, you're going to have a harder time digging in; whereas if you give me a question to reflect on, I could go for days. That's just a muscle I've worked on a lot. So what I engage with faculty around is getting them to think about creating a reflection bot—or creating a prompt that helps the student do a reflection. So when the student puts in their initial thoughts, the AI comes back with questions to deepen their learning.

[00:13:17] So the student might be like, "I didn't like that reading." The AI can say, "Well, what didn't you like?" And keep poking and prodding. Because what ends up happening right now is you ask for a reflection, the student submits it on Friday, the instructor may get to it by Sunday or Monday, and the student may see those results on Monday. But when you're looking at it and you see that student say, "I didn't like something," you're thinking, "I want to know why." If something can prod that in the moment, there's a really interesting win there. It helps them understand what reflection looks like: pushing questions and nudges as you're bringing up what's in your head. So for many folks, this feels like a really good spot to start, because it does address something many of us feel challenged with in education: I want more from the students—not because I just want them to do more, but because I want to learn more about what's behind what they just said.

[00:14:09] Priten Shah: When you think about student responses to these reflection bots, do you notice, or have you heard about, a lack of student buy-in? Or does it seem like students are generally excited to use something like this? To be fair, I'm imagining a middle schooler interacting with this bot, and in my head the answers are often short and glib when they know it's AI on the other side versus a human. Higher ed is hopefully a different space with higher maturity levels, but I can't take that for granted. Have you heard that it's generally successful with students? Are they generally engaged with it, or does that take a little bit of convincing?

Lance Eaton: In the higher ed level, I've seen positive responses. And there's also context of whether everybody is using the same tool and whether everybody has access to the tool. I think so long as it's being framed—"this is what you're doing and this is what you're submitting." And that's the other piece I would recommend: submission becomes the dialogue. It's not do all this and then move it into a document that's properly formatted. No, it's the dialogue and being able to see that growth or that response back and forth. With the middle schooler who might be glib or anybody who might be glib with one-word responses, I think there's a way of having the AI recognize that and try to be more responsive or find ways to connect with what the student is interested in.

[00:16:01] Priten Shah: Right. So when you think about the kinds of tools that are at faculty's disposal, and also the vast landscape of the number of different general use tools that are popping up, and then all of the education-specific tools that are popping up, one concern we often hear is: how do I keep up with all of it? But also how do I choose? I'm curious when you're having these conversations and when you're making policy, how are you making the decisions and what are you recommending other folks make decisions about?

Lance Eaton: Yeah, I compare it to the app explosion in the late 2000s and early 2010s. Pick two or three and stick with those until you have reason otherwise. And sometimes there's hurt with that. Sometimes it means you have to move all of your notes from one tool to another, and that's unfortunate. But I think it's really hard to pick winners in this. That being said, I would look to the tools that might already be there. So if you know your institution is a Google institution or Microsoft institution, it is worth thinking about: if I already have all of the stuff in these environments, if the AI tool that they have is equivalent or reasonably equivalent to what's out there, then I should probably just use the one here. I know ChatGPT was the first out of the gate. I think it's often trying to maintain that status by sometimes irresponsibly putting stuff out or quickly putting stuff out.

[00:17:17] Priten Shah: Right.

Lance Eaton: I'm really interested in the stuff that Google is now doing, and I think they have some really powerful stuff that people are missing. I think the reality is recognizing you can't know everything. I think about this a lot and try to figure out: what are the few tools that I feel like I'm getting enough from? What's the one general tool? What would be two or three more specific tools? And can I keep some stream of information to let me know if there are limitations or newer things on the horizon? But there's no chance to follow all of it.

[00:18:01] Priten Shah: When you're mentioning Google, Microsoft, OpenAI—all with their own incentives for what they are building the technology for—there are folks who raise concerns about allowing that much commercial influence within our schools. AI is just the newest way that's happening. This is in some ways not a new problem, but a new path for that problem. Do you share those concerns? How do you view the role of commercial entities within educational spaces?

Lance Eaton: I think we're stuck in this dynamic: we can use those premier tools that are readily available and people are absolutely familiar with, or we can choose tools that are less capable but feel less intrusive on the commercial side. But then there's a whole new learning curve for every new faculty member, every new student, every new staff member. And there are a lot of costs to that. Judging just from my own choices in the past at different organizations, if I'm not provided a tool that feels pretty ubiquitous, I'm still going to end up using the ubiquitous one. A good example: in the mid 2010s, I was at a Microsoft institution, and Microsoft's sharing abilities for their documents and slides—I mean, they're still challenging, but they were even worse then. And I was like, no. If I'm doing stuff with faculty and I want them to have access to it, I'm going to Google Docs. I think that's the thing: to move away from that ideal isn't possible. And I understand there's a lot to be concerned about in terms of how much third-party entities are there, not just in technology but throughout the university, because the institution itself can't provide all of that. But we're also going to trade off a lot of things in terms of time and frustration, having to learn more things on top of all the other things we have to learn: the LMS, the student support system, the different communication systems an institution might have, your Slacks, your Teams. I'm not a fan of it, but I don't know that I've seen a working alternative that has the resources to do it for everybody.

[00:20:01] Priten Shah: When you think about the purpose of using these tools, a lot of what you've said thus far is tying it directly into pedagogical strategy. That makes sense. I'm wondering if there's any framing of it that's about career preparedness for you. Oftentimes from disciplines that are not traditionally very tech-forward, it seems like some arguments in those spaces have been that we need to have the students at least practice using these tools because these are the tools they'll use later in life. Do you view the role of the technology that way in, for example, a history classroom? Or do you view it differently?

Lance Eaton: I think it's both. I think it's pedagogical, and it's a recognition that the way I'm seeing and thinking about AI is: we're 30 years since the formative start of the internet.

Priten Shah: Right.

Lance Eaton: Every industry, every business degree—every program operates differently now because of the internet. You don't do history the same way now. In order to do history, you have to understand the internet and the digital technologies that allow for it. You have to understand archives and digital archives. To me, it's the same line with AI. It doesn't matter if the AI we have right now is mediocre. It's still going to have this layered impact across lots of different areas. And so there is good reason within any discipline to be exploring. There are the pedagogical pieces of where this aligns with my course. But then there are the professional pieces of: I am engaging somebody in this discipline, and this discipline is going to be impacted. We don't have students in history classes going to the library to use card catalogs. They're going to digital resources. To me, that's the connection. Both of those pieces are important around AI in teaching and learning.

[00:22:01] Priten Shah: One thing we hear is this tangential use of the technology as a way to expose students to the technology. Like, let's chat with a historical character—which may not have well-documented pedagogical results. It's a popular starting place because it makes things feel more engaging. The argument we hear is: well, it gets them chatting with AI. And that's one way to get them to think about when AI is making a mistake, which is an AI literacy skill, but it's not necessarily the best way to teach historical events. Do you think that's a responsibility of every instructor?

Lance Eaton: I do, but I think it can still be grounded in the discipline. So I'll give you a good example of what I used to do when I taught history. I'd engage with the students and say, "Okay, what kind of learners are you? What's your learning style?" And we know the myths about learning styles. Almost 90% of students would say, "I'm a visual learner." I'd say, "Great. Let's watch this documentary. And now I want you to take the insights from this documentary and tell me what's accurate here." And we would do that, and then we'd have this larger conversation about the construction of history and documentary, because it's another form of capturing history. So I think that's the move I would want to see: let's talk to a historical character with AI—but if you're feeding it historical documents, all of a historical figure's known writings, and some books about this person, that's a good way to start. And then say, "Okay, but where is this wrong?" That is part of what it means to practice history—to pull things apart and understand the underlying things that create our narrative.

[00:24:03] Priten Shah: Right.

Lance Eaton: I think ultimately, the AI literacy is there—big, broad goals. But it's always going to be about what does that mean in this context? Because the emphasis will change. What it looks like will change from discipline to discipline.

Priten Shah: That's helpful framing. I'll end with asking for your big picture feelings. What are you worried about when we think about higher ed in particular over the next five years? And as the technology continues to develop, what keeps you up at night?

Lance Eaton: I think a worry that I have is this push to delegitimize higher education. And at the same time, here comes this technology that feeds, in some ways, into that narrative or desire to delegitimize. I appreciate the deliberateness with which many in academia try to figure things out. And also: how long will it take for us to really work through this systematically?

[00:25:03] Priten Shah: Right.

Lance Eaton: Because the rate at which AI has hit is as hard-hitting as the pandemic, but in this dragged-out way, and we don't have the collective response to figure it out. There's an absence of accepting it and figuring out what that means for teaching and learning. And I understand resistance. I understand being mad and frustrated at the AI, at the companies, and at all the challenges we live with under late-stage capitalism. I get worried about gaslighting our students by telling them this tool doesn't mean anything when many of them find really important, useful ways to use it—ways that aren't bypassing learning, but are just helping them figure out a world that's incredibly complex. So that's what I sit with—that tension of recognizing it's here, and therefore we have to find new ways of thinking about teaching and learning, as we did with the internet, as we have with different educational technologies going back to writing.

[00:26:07] Priten Shah: Yeah, there is this sluggishness to education in general. And again, like you said, oftentimes that means when we do get out the other side, it's after thoughtful consideration. The scary part is that the rate of change feels faster than we're keeping up with. Now it feels like we're still trying to get caught up from November 2022 in some areas of the country, and the technology is moving every month.

Lance Eaton: Yeah, I have those feelings. The hopeful piece for me—the thing I use to counterbalance—is something I've done in different conversations: I've looked at the range of technologies in higher education that have become ubiquitous since the 1970s. There are somewhere upwards of 80: email, projectors, LMSs, automatic grading. What gives me hope is that I can see dozens of technologies have become ubiquitous in higher education, and we've figured our way through all of them. Whether we like them or not is another question, but they're part and parcel of the higher education system. And it means we have dozens and dozens of playbooks that can help us figure it out. It isn't going to be perfect, because AI is something different. But there's a lot we can learn to help build the new playbook for navigating AI. Whenever I go down that rabbit hole, that's the thing that pulls me out: we've been here before with many different technologies.

[00:27:07] The classroom that I teach in is fundamentally different from the classroom that I went to school in, in many ways regarding technology. That's the piece that makes me hopeful. Even though there's deliberation, for all the reasons there are, we know we've solved a lot and figured out a lot of these pieces already.

Priten Shah: Do you think that there is this feeling of heaviness that comes from some of these conversations when you're talking to some faculty members about an almost existential crisis for their discipline, the value of their discipline for a student who's not going to pursue that discipline, and of course their assessment methods? Does that feel parallel to other instances, like other technological shifts?

Lance Eaton: I think the strongest parallel is the pandemic because everybody felt that. But I think there's a reasonable parallel in thoughtful conversations I've had with faculty over the years as they've transitioned to online learning. I've heard faculty say, "What does it mean to not be in a classroom?" So there is this big shift. And I remember this happened time and again: when you work with a faculty member to develop an online course, two things would happen. One is they'd realize—because they've got to make everything explicit—how much they had left out in their regular classes. And they'd be like, "Oh, I've taken a bunch of this stuff and now that's part of my face-to-face class." And then also how it's made them a better teacher. All of this grappling, even if this is the best AI we get, still pushes us to think about how to do this better. How to assess better. How to figure out what's a closer assessment than just the five-paragraph essay. What is a more accurate demonstration of learning? And that'll be hard. But there's rich reward there.

[00:30:02] Priten Shah: I think having us reevaluate the fundamentals of our practice is definitely one of the biggest advantages that AI has already brought, because we're having conversations like this. And that's a result of the technology.

Any other last hopes that you have for the technology, even in terms of what you hope the technology can make possible for us?

Lance Eaton: My hope would be that we can start to understand where this might give us new answers. One of the things—and I don't know that we'll solve it in higher ed or even K-12—is that we structure education for convenience. It's most convenient or efficient to have people gather together in these large chunks of time to ideally interact and learn and be part of a community. And sometimes, or in some spaces, it's just an information dump. It's that banking model. I am curious what happens when we do have this AI companion, tutor, bot, whatever. Is there a way that we can learn what the other models are, and how do we support that? Because people's lives are so intense, and this is why many people take asynchronous courses. But I am curious to find out: are there other ways of doing learning better that we can capitalize on? Because right now we're living with a legacy structure of education. It works for some people. And we also know it doesn't for lots of other people. So what happens if we find out we can figure out people's chronotype for education? Like, when is it ideal for you to be learning, and in what dosage? Is it 30 minutes a day in the morning? That's probably good for me. If I'm going to have to learn something after 8 PM, I am done. Good luck. I'm up at 4 AM. So for me, that's what I'm curious about—what this can help us figure out about that challenge.

[00:31:03] Priten Shah: Yeah, and that would be impossible in a world without AI. The reality is, even if we think a teacher teaching you for 30 minutes at 4 AM would be ideal, the second-best thing might be AI that actually does it versus this hope that we figure out how to get humans to provide that level of customization and match the students' needs. Thank you. I appreciate talking to you today. There's oftentimes a lot of pessimism and then there's blind optimism. And I think folks need to hear more from people who are kind of managing this in the middle. Let's not just embrace the technology for technology's sake. Let's think through it thoughtfully. I think hearing someone talk through all of that provides some framework for you to think through it.

Lance Eaton: Thank you.

Priten Shah: Thank you. Lance reminds us that each wave of technological change brings uncertainty, opportunity, and the temptation to either resist entirely or adopt uncritically. His call for small, thoughtful steps in AI implementation grounded in pedagogy rather than external pressure offers a roadmap for educators who want to move forward without losing sight of what matters. Keep listening this season as we explore how to make those choices with intention and integrity. And pre-order my book "Ethical Ed Tech: How Educators Can Lead On AI and Digital Safety in K-12."

For more support on thoughtful AI integration, visit ethicaledtech.org.

[00:33:07] Priten: Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also, share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.