[00:00:05] Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming. Technology in education looks different depending on where you're standing. Today we're crossing the Atlantic. Tamsin Smith is a senior learning designer and team lead at the University of Southampton in England, and she's also halfway through a PhD investigating how generative AI can support inclusive education. Her journey spans from learning BASIC programming at age seven, to teaching 11 to 19 year olds, to now designing learning experiences in higher education. She focuses on accessibility: How do we ensure that education technology serves all students, including those with disabilities or learning differences? And what happens to foundational skills when students become too reliant on generative AI?
[00:01:01] Let's get started. Tamsin Smith: My name's Tamsin Smith and I'm a senior learning designer and team lead at the University of Southampton in England. My background has always been in pedagogy. Prior to working in higher education, I worked as a teacher, mostly with 16 to 19 year olds, but before that also with 11 to 16 year olds. I spent about a decade as a classroom teacher before moving into higher education. Alongside my role as a senior learning designer and team lead, I'm also halfway through a PhD where I'm looking at how generative AI can be used to support inclusive education. Priten Shah: Very cool. Awesome. So I'm sure we'll have lots to talk about today. Let's start with you as a student. What's the very first tech tool you remember using? Tamsin Smith: Oh gosh, now it's going to reveal my age, because it was probably using CD-ROMs to do research. We didn't have library searches in the way you would have now.
[00:02:02] So being able to use that was great. I was very fortunate—I actually learned some programming in BASIC when I was probably about seven years old. I studied computing when I was about 13 or 14, which was quite unusual. A lot of schools were just starting to introduce IT and word processing, but I was actually doing computing. But it's a long time ago now. Priten Shah: That's amazing. You barely hear stories of students encountering that level of computer science literacy at that age now. What are the biggest shifts that you've seen in education technology? Tamsin Smith: When I was at university as a college-age student, internet access came in my second year. I spent an entire summer in the basement learning how to navigate the internet. That was really unusual. Learning that we could communicate with people in different places so easily—whereas previously at school I'd done communication with people in other schools via letter. That was exciting, being able to use email.
[00:03:07] But the biggest change I've noticed has been the introduction of learning management systems or virtual learning environments. Being able to easily share resources with students and use discussion forums, use interactive learning tools—that's been a massive shift. It wasn't something I experienced when I was a student, but it's what I work with all the time now and what I've worked with throughout my time as a teacher. I introduced Moodle in the college I was working in. That's probably one of the biggest shifts I've seen. But there have been so many shifts since then: the introduction of mobile technology, all students having a cell phone in the classroom, people having easy access to tablets and laptops. Priten Shah: When folks are talking about what AI means for education, how we're going to integrate it, how we're going to safeguard education from it, there's some folks who say this is just similar to the transitions we've always gone through—whether it be the internet, whether it be the computer. Do you agree this is just another technological step in our evolution as humans, or does something feel different about this?
[00:04:04] Tamsin Smith: Oh, I'm really torn on that one. When calculators first appeared in the classroom, everyone worried that pupils wouldn't learn how to do the basics of maths—that they would rely on their calculators so heavily that they wouldn't do the computation themselves to work out the mathematics. But we've realized that's not the case. You have to have a basic understanding before you use a calculator. And now, with the evolution of gen AI tools, people still need to be able to do something manually. They need to be able to do things the old-fashioned way. I think students need to know how to formulate a good essay without relying on gen AI.
[00:05:04] Because how can you know whether the output is any good if you couldn't create something yourself? There's a lot of tension there and people are really concerned. But we still need to spend time learning how to do things, because if you can't do it yourself, you can't assess whether the output from your computer is any good. Priten Shah: As I talk to different teachers and educators, we're all largely agreed that that process is very important before you get to use the tools. One thing I'm noticing that's different is students seem even more resistant than in the past to building those skills, because the larger idea is that AI will do it for us. Now it's the compounded effect of computers, internet, mobile devices, and AI—it's not just that AI exists in a vacuum. It exists in your pocket, in your headphones. What are some ways that you think we can navigate that conversation with students?
[00:06:01] Tamsin Smith: We've had all of these technological changes alongside a huge number of social changes. Over the last five years, COVID-19 has been something everyone's been talking about, along with all the huge changes it made to people's education. The students who are starting college right now will have missed out on some fundamental time in education, when they'd have been learning how to form friendships, how to navigate school, how to write assignments. They've got gaps in their knowledge. Some privileged students will have had fewer gaps. They will have had access to tutors, access to the internet at home and reliable wifi, their own device. Perhaps they will have been less impacted. But I've seen with students at a whole range of ages that this has had a big impact on how they're studying, and perhaps people's attitudes towards Gen AI would be different if we hadn't had the impact of COVID-19.
[00:07:05] So those things came around the same time and we're still seeing the repercussions of education moving wholly online and students not necessarily having a good online education experience. It was emergency remote teaching. It wasn't something staff were trained for. It wasn't something we'd planned for. We've had two cataclysmic shifts at the same time. But students are unwilling. So many students are saying, will this be on the test? Do I have to know this? How can I get the highest grade? Grade grubbing is a real problem, where students say, well, I'm paying for this education, I want to get a good grade—or, if they're younger, I need to get a high grade to please my parents. There are these other factors where they're not seeing learning as a process—they're seeing it as an output. Priten Shah: I've been trying to talk to educators at all levels. This is a problem you face differently in K to 12 versus in higher education. But largely we're failing at answering "why learn this?" That's really where we all need to refocus. In the past we said, oh, because it's going to be on the test. We gave them that answer at times. And so it's only natural that that's what they're repeating. We said, oh, you'll need it for your jobs. Now they're like, oh, I don't need it for my job because I'll have AI at my disposal during my job. And sometimes we just said, because I said so. That was also in our most frustrating moments. I know all of us have felt that at some point.
[00:08:00] So lots of educators have spent time trying to explain that to students. We also don't have that kind of time in our classrooms to spend hours explaining to a college freshman why their education is going to be important. It sounds like you're agreeing that we need to get the students at least partially on board for us to be able to do any of this. And I think the example of COVID is pertinent because it shook everything up. You're right that we never fully settled from it. It exposed a lot of questions and fracture points, and we kind of all tried to just move along from it and said, oh, things are back to normal.
[00:09:11] But it brought up really important questions. It created long-lasting impacts. We didn't fully get to say, okay, what does it mean if technology becomes a larger part of our educational systems? What would it look like if university had stayed online forever? With COVID there was one clear turning point, and then we had three years to deal with that single change. It was still really hard. Most institutions barely got through it. This one is different. There's a new turning point every day. There isn't one moment in time where everything changed at once. That's part of the fear—it took us that long to adapt to one turning point. How do we work with our teachers and students to continuously adapt as the landscape of education and larger society is changing dramatically? The way folks did their jobs three years ago and the way they do them today are very different. Sometimes it's subtle enough that we don't fully feel it right away, but when you look back at your emails from three years ago, you immediately see the conversations were very different. Tamsin Smith: We have lots of concerns about how our students might be changing. There's a reluctance to do things in person. People say, oh, but it's easier online. You know, there are so many advantages. You can talk to someone hundreds of miles away, and it's really easy. But we want students to come to campus and form social relationships and have conversations with people outside of their immediate friendship group. It's just so much easier to socialize online.
[00:10:00] Students are perhaps not getting the breadth of experience that we want them to have. If they're in a classroom, you can easily hear the private conversations or small group conversations going on. If you're teaching online and you go in and out of a breakout room, they notice you're there as soon as you arrive. The conversation stops, changes. You get the bigger picture of which group is in trouble—they don't know the problem they're supposed to be solving, they don't understand it. You can see that in a classroom. You can't see those things as easily online. We've had all these shifts. Staff don't know how to deal with them. They worry that students know a lot more than they do in terms of using technology. But when it comes to using technology for academic purposes, many of our students are not as skillful as we think they are. They may be excellent at using WhatsApp or other apps on their smartphone, great at TikTok. But if you ask them to use a device for academic study, chances are they're not that skillful.
[00:11:00] We need to support our students' digital capabilities to make sure they can use the tools. We can't assume they know how to. Priten Shah: That's true for teachers and students. That's part of the challenge here—we're catching up two different populations who work in the same system, who come with very different backgrounds and different problems. No one is fully there in terms of being able to navigate these questions. One of the things I noticed in the survey that you filled out was you pointed out that teachers don't often read the terms and conditions. What does that mean for your role? How does that come up? Tamsin Smith: You have educators at different ends of the curve. You've got your early adopters and your laggards. The early adopters see a new tool and want to use it. They're like, oh, ChatGPT—let me try this. Oh, I'm going to try Gemini. Like magpies, they're looking for something new and shiny and just want to have a go with it. In the role I'm in, I have to be there and say, okay, hold on. Put the brakes on. Let's take a step back before you start using this with the students. What data are you putting in there? Where is that data going? What is happening with that data?
[00:12:02] If somebody's using a tool to create a resource and they get some output, and they want to use it with their students, they can make an informed judgment about whether it's pedagogically appropriate. But if they're telling the students they have to log in and use a tool online to create some kind of assignment or learning resource, the students need to know what's happening with their data. Staff have to be informed, and they have to make those choices: Is the tool fully accessible if you've got a learner who uses a screen reader, or who has hearing difficulties, or anything like that? We have to be mindful of all of those things. We have to look really closely at things like the VPAT (the product's accessibility conformance report) and the other parts of the terms and conditions to think, is this actually a tool that's suitable for my classroom? I'm not sure that all teachers do that. When I was an enthusiastic teacher in my early twenties, I'd see something new and think, hey, this is exciting, my students are going to like doing this. I didn't go and read the terms and conditions. I'd sign up, I'd start using it, and I'd get feedback from the students about whether they liked it or not.
[00:13:17] It never crossed my mind that there might be ethical issues about the data and what was going in there. That's something we really need to pay careful attention to. I have an 8-year-old daughter and I'm quite surprised at how much use she's making of generative AI tools at school. I'm assuming it's all done on the teacher's account—she doesn't have her own email address—but I don't know how she's accessing it, and I don't know whether her teacher has looked into that or thought about it. Priten Shah: Let's start with data privacy concerns when we're talking about any tech tool, but especially when we're talking about AI tools. How do you help teachers navigate that? Tamsin Smith: For the educators that I'm working with, we always talk in terms of supported and unsupported tools—which ones are supported at the institution, where there are people who know how that tool works and who have looked into all of those terms and conditions and data issues. The institution I work at is a Microsoft institution, so we support our staff with using Copilot. They know that if they use Copilot at my institution, then the data is secure within our system. It's not going outside of it.
[00:14:01] So if they are choosing to upload a university document, we don't have to worry about it being made public or used in other people's searches. It's issues around what data people are uploading and where it's going. If somebody decided they were going to use it to assess a piece of a student's work and they chose to upload that essay—which is not a practice I would recommend—at least we know that's within our instance, within our Copilot. Whereas if they chose to put that into ChatGPT or Gemini, I couldn't give them the same guarantee, because they're not systems we are working with. I think it's knowing: is there something that's supported at your institution? Are people saying, yes, this is the one we're telling you is okay to use, and within our institutional restrictions, you will be okay with this tool? Priten Shah: What do you say to teachers, especially at larger institutions where they haven't caught up yet? We still work with individual teachers who say, my school hasn't even brought up AI. We haven't had a single professional development session on it. Our school doesn't have a budget for tools or navigating these conversations. So I'm left with students who are using these tools on their own. I know they're going to go home and use these tools to do my assignments. I'd like to figure out what role they play in my classroom because the students want to use the technology.
[00:15:04] When the institution isn't providing that support, I find those cases to be the hardest for individual teachers. There are professionals like you who can spend time looking through those data privacy agreements, helping institutions as a whole navigate which platforms are worth the risks that come with using the technology. And then there are folks who don't have resources like that, and they have to make those decisions on their own. Tamsin Smith: It's the same as other areas of digital capability. I don't care whether a student uses Word or Google Docs for their word processing. At the end of the day, they'll create a document. I think it's thinking about what skills do we want people to have? Do the educators, do the students know how to write prompts effectively? Do they know what goes into crafting an effective prompt? And you can do that in a classroom with no computers. You can talk to students, use a framework. There are hundreds of frameworks out there that you can use for prompt crafting. Getting people to think about what they want to achieve and what they're going to have to include in a prompt to get that is really helpful.
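To make the framework idea concrete, here is a minimal sketch of one possible prompt-crafting structure in Python. The role/task/context/format fields are illustrative assumptions rather than any specific framework named in the conversation, and the planning step they represent can happen on paper in a classroom with no computers:

```python
# A minimal sketch of one way to structure a prompt before it goes into any
# gen AI tool. The four fields are illustrative only; hundreds of prompt
# frameworks exist and none is specifically endorsed here.

def craft_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from pieces the student has already thought through."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Desired format: {output_format}"
    )

# The thinking happens before the computer is involved: the student decides
# each field first, then sends one well-formed prompt instead of many vague ones.
prompt = craft_prompt(
    role="a university librarian",
    task="suggest three starting sources on the history of the printing press",
    context="for a first-year undergraduate essay",
    output_format="a short annotated list",
)
print(prompt)
```

Fewer, sharper prompts also line up with the environmental point that follows: one well-planned request can replace a long trial-and-error exchange.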
[00:16:09] Getting students and colleagues to think about carefully crafting those prompts is really helpful. It has multiple benefits. They're thinking really precisely about what output they want, so they're going to get a better result. But another concern I hear a lot from the people I work with is the environmental cost. If we remind people that the fewer prompts you use, the less energy gets wasted, that's a win—they'll really think about what needs to go into a prompt. It comes back to other digital capabilities we teach students. If we're teaching them library skills, research skills, we're telling them, well, you need to think carefully about what it is you're searching for. You can't just say you want books on history—that's going to give you thousands. What is it you want to know? It's the same process. It's all part of those academic research skills. Priten Shah: And then what about the data privacy aspect? How do we get students and teachers to make healthier decisions when it comes to data privacy?
[00:17:08] Because I think we hear a couple of different things. You hear from them, oh, this tool is made for schools—that's the tagline—it's made for schools and so it's safe enough to use within our system, within our classroom. And then sometimes we hear a blanket, oh, it's AI, so it uses our data, so we shouldn't use this. We see both extremes: on one end, the bar is very low for us to think a tool is safe enough for our schools; on the other, nothing is ever going to meet our criteria because it requires data. Inherently, tech tools will collect some data in order to function. These tools would not be as useful as they are if they took no data from us. Hopefully they don't have to take too much. Tamsin Smith: What data are you giving away when you sign up for a tool that's free or freemium? How much information do they gather from you at the start? Is it simply that you have an email address and log on? Or do they want to know your name, where you are based, what your job role is? How much of that personal data about you as a user is it gathering? You can generally find that out fairly easily.
[00:18:05] But what else? Are there any cookies? What's it gathering from you that you're not thinking about? And then the other side is what data you are choosing to put in there. I would always say to an educator: if they wouldn't be prepared to share it on their Facebook page or put it on their work profile, then they probably shouldn't be putting it into a Gen AI tool. I wouldn't be sharing photos of my students, so I wouldn't put them into a Gen AI tool. A large group photo or something where it's the back of students doing an activity is fine, but not the students' faces. What personal information might be given away about other people? If it's not something that I would normally consider sharing, if I wouldn't put it on the LMS, if I wouldn't put it somewhere publicly, then I wouldn't put it into ChatGPT. Would they share a list of all of their students' grades? No, they wouldn't. So let's not put that in when we don't know what might happen with it right now.
[00:19:03] Priten Shah: Right. I think those kinds of heuristics are really useful for teachers, because it all feels new. Everybody feels like they're starting from scratch when they're making these decisions. But I like the idea of pulling on the social media heuristic to guide them, at least in the short term. Okay, here's a quick and easy gut check you can do. That's very valuable and I hope folks are listening to that. I want to make sure we get time to talk about your research. Tell me a little bit about what that looks like. I know you mentioned you're thinking about AI and its possibilities for inclusive education. Tamsin Smith: Many institutions in the UK are not as advanced as those in North America in terms of implementing Universal Design for Learning. But it's something I'm particularly passionate about. I think it helps solve a lot of the problems we experience in classrooms, where we have put up barriers that prevent students from learning in the way that they would like.
[00:20:03] A lot of focus around Gen AI tools has been telling students, hey, you can use this tool and it will make things better for you. But actually, if a student already faces some challenges in their education—perhaps because they have a disability—then telling them there's a tool that can help them is great, but it's still making them do extra work. As the educator in the classroom, we should be doing the work. We should be opening the doors to make sure as many students as possible can achieve. It's looking at how we can minimize those barriers for time-pressed teachers and make things easier for their students. That's what I hear from most educators now—they really want things to be better, but they don't have any time. Workloads have increased. I think workloads got bigger during the pandemic and then they never came back down. That's the story I hear from my colleagues here. So we look for ways that simplify how educators can present resources to students and help them come up with ideas.
[00:21:02] My team has a number of LEGO Serious Play kits, which are absolutely fantastic, but we've also got a lot of educators who've never used them before. They say, well, what could I do? And we say, well, it's really great if you've got some hands-on activities that get the students discussing and thinking. It's more fun for the students than talking with a Gen AI tool. But asking the tool questions and getting ideas can help the educator with idea generation: I've got these resources, this is the learning outcome I want—how might I use these resources? So we're making better use of the resources we've got. If we've got a video that might not already have subtitles, we can use it to help generate those captions. They'll still probably need some human editing to make them as accurate as they need to be. But then we can upload those again and quickly create a transcript and a summary. So it's coming up with all of those alternative formats really easily.
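One concrete version of the captioning workflow described above, sketched here with the open-source Whisper speech-to-text library purely as an example—no specific tool is named in the conversation, and the draft captions still need the human editing pass Tamsin mentions:

```python
# A minimal captioning sketch using the open-source Whisper library
# (pip install openai-whisper; requires ffmpeg). Whisper is an illustrative
# choice only, and its output is a draft that a human should then edit.
import whisper

def format_timestamp(seconds: float) -> str:
    """Convert seconds to the HH:MM:SS,mmm format used in SRT subtitle files."""
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def video_to_srt(video_path: str, srt_path: str) -> None:
    """Transcribe a video and write draft captions as an SRT file."""
    model = whisper.load_model("base")      # small model; larger ones are more accurate
    result = model.transcribe(video_path)   # returns full text plus timed segments
    with open(srt_path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(result["segments"], start=1):
            f.write(f"{i}\n")
            f.write(f"{format_timestamp(seg['start'])} --> {format_timestamp(seg['end'])}\n")
            f.write(seg["text"].strip() + "\n\n")
    # result["text"] is the full plain transcript, which can feed a summary
    # step or be posted alongside the video as an alternative format.

# Hypothetical file names: draft the captions, then hand-edit the SRT.
video_to_srt("explainer.mp4", "explainer.srt")
```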
[00:22:01] We need to be mindful of what that video is that we're uploading. But it may be that I've recorded something explaining a point for students, and therefore I'm happy with uploading it. It's all those possibilities of how can we create alternative formats, make content more accessible. But also being mindful that if we are asking for image generation, is it coming up with an image that's appropriate? If I ask for a picture of a soldier, will it always be a man? If I ask for a picture of a nurse, will it always be a woman? What stereotypes are we getting? But my recent experience is that the tools are improving so quickly. With those prompts, you will get a variety of different images back and they won't all be the stereotypical image in your head. There are so many possibilities. Priten Shah: You talk about busy teachers, so the narrative around AI is largely that it will help them—lessen their workload, free up more time. Are you seeing that over there?
[00:23:04] Because I'm not seeing much of that here yet. It largely seems to be adding to the teacher's workload more so than taking away from it. Is that a universal experience? Tamsin Smith: I think it is that eternal optimism where a new technology comes in and you've got that steep learning curve. You say, oh, well, it's going to be time upfront, but you'll gain your time back later. But actually in education there's never a later. There's always something more that gets added to someone's workload. It's feeding the capitalist system really—we save the time, and then we can use it for something else that's making money for someone. I think there are some ways where people are saving time. We have quite a lot of manual processes at my institution, where somebody might be uploading some data somewhere and then it has to go somewhere else and somebody has to do some processing. Where people are looking at streamlining those processes and how some simple outputs can come from that data, they are seeing some time savings from using AI tools.
[00:24:10] But right now I think people haven't got enough experience. They don't know what the possibilities are. They're worried about the risks. Although there's been a lot of progress in the tools, people's behavior and attitudes are quite slow to change. I've encountered a lot of educators who tried ChatGPT when it first came out. When I speak to them, they say, I'm not using Gen AI because it will hallucinate. It's told me things that are clearly incorrect. It doesn't know the latest information. So this tool is rubbish. And I say, well actually, have you tried this? I've done some research recently. It has generated some excellent, really interesting references for papers that I've gone away and read. They were exactly what I wanted. It's not perfect every time, but it's certainly better than it was a year ago. We have to remind staff that it's changing very quickly. They do need to keep going back and checking, because what they thought was right yesterday won't necessarily be true tomorrow.
[00:25:02] Priten Shah: I think without institutional support, the burden that's falling on individual educators is so high. Because as somebody who spends most of my day thinking about AI and education, I can barely keep up. There's constantly a new tool, there's constantly a new model, there's constantly a new lawsuit against one. That scares me a little bit, because I don't think we have the full resources to make sure that everybody can navigate those changes rapidly enough. We face the same thing. We had teachers who tried only the free model from when it initially came out, and then we have others who are trying the $200 a month plan that gives you the deep research tools. It's an astronomical difference in quality of output. Buy-in is very different. If there is one thing that teachers can take away from your experience—in terms of how to make sure they're navigating all the rapid changes happening in education in a way that's fair to our students—what would it be?
[00:26:14] Tamsin Smith: Don't assume that students are using gen AI tools. I know from research we've done at my institution—we've asked students about their usage—and yes, they're using it, but they're actually very ethical. They understand that they're studying at university to get an education. Yes, they might use it to help with their research. They may be using it to help cut the word count down if they've written too much. But most of our students are honest at heart. There are all these concerns about academic integrity. But whatever technology we were using in the past, students who were determined to cheat would have cheated. They would have paid essay mills, they would have found someone else to write it for them, they would have plagiarized something. If someone's determined, it's just a different way of cheating. But most of our students really do want to do the work.
[00:27:06] If they are using Gen AI to generate their essays or assignments, we need to look at why they're doing it. Is it because we're actually expecting too much of our students? Are we giving them just too much busy work without giving them time to think about what it is they need to be learning? It's talking with our students about their usage and understanding what they're doing. We shouldn't just assume that they're going straight to cheating. Priten Shah: That's very valuable advice. Thank you. I really appreciate your time. I feel like there's a lot of overlap between the kind of work we do, just in obviously different contexts. But it was reassuring in some ways to hear that the experiences are global. Tamsin Smith: We've got a global cohort on my course—25 of us—and in a webinar we did yesterday, one of the folks working in Canada was talking about the same problems I've got, and so were colleagues in the Middle East. They're the same problems everywhere. Whether people are working with 13 year olds or 23 year olds, it's the same issues. Priten Shah: That's amazing. The last thing I'll leave you with is, I teach part-time in higher ed. I'm working with students on how AI can be productively used in their lives. We did a lesson on biases last week and we tried the image generators. They still failed. I was quite disappointed, because I'd heard the image generators were getting better. We had every single student try to generate a picture of a doctor and a nurse, and it was still largely—like 90%—male doctors and female nurses.
[00:28:02] The other one I like to do is have them ask AI to explain the same concept to their mother versus their father and see what analogies and metaphors it uses. That's an aha moment for a lot of them, because it's a little bit more insidious. It's not as direct. When you're like, oh, help me explain AI to my mom, it'll give me, oh, when you're doing laundry, or when you're cooking a meal. And when you're like, help me explain AI to my dad, it immediately goes, oh, when you're in the garage getting these tools. That's a fun one to get them to start making that connection that they need to read a little bit deeper to really see where that bias shows up. And even those examples are not as blatant as the more regressive kinds of bias and stereotyping, but they're still fairly obvious once you look. Tamsin Smith: I'm sure you've seen the global Barbies. We've got a number of African colleagues on my course, and that was one of the first things they talked about when we discussed all the stereotypes you get. They were like, yeah, look at this Barbie that's allegedly from the country I'm from. And we're like, ah.
[00:29:01] Priten Shah: I think a lot of the cultural stuff is really interesting to see. You don't really realize unless it's a stereotype that you can fully see, or you have enough global awareness—and both are sometimes lacking. I appreciate your time. Tamsin Smith: Awesome. That's great. Thank you so much. It's been really interesting talking to you this evening. Priten Shah: Same here. Have a good night. Tamsin Smith: Thank you. Bye. Priten Shah: Tamsin emphasizes that ethical ed tech has to account for more than just efficiency or engagement. It has to work for everyone. Her research on using AI to support inclusive education, combined with our shared concerns about students losing foundational skills, captures the tension we all feel. How do we harness technology's potential while protecting what we can't afford to lose? Keep listening this season as we continue exploring what responsible innovation in education really means. And if you're interested in learning more about responsible tech integration, pre-order my upcoming book, Ethical Ed Tech, at ethicaledtech.org.
Priten: Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us.
[00:30:03] Also, share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.