Should Students Be Trusted With Phones During Exams? - Dini Arini
#9

[00:00:05] Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming.

Priten: Today I'm speaking with Dini Arini, a PhD student from Indonesia studying language literacy and technology at Washington State University. Dini's journey with technology began with anxiety about being left behind. That experience shapes her research today examining how AI policies can ensure these tools bridge educational gaps rather than widen them. We talk about her unusually optimistic approach to classroom technology, why she thinks teachers need to know their students better than any AI detector can, and what keeps her up at night about where this is all heading. Let's get into it.

Dini: My name is Dini. I am originally from Indonesia. I've been studying in the United States for three years.

[00:01:02] I'm currently a PhD candidate in the language literacy and technology program at Washington State University. My research interests are in the implementation of AI in language teaching. However, that shifted into AI policy because I found it's really needed, specifically in language teaching classrooms for higher education. Back home in Indonesia, we don't really have it. I'll probably bring those things back home and see how it's implemented.

Priten: I'd love to hear a little bit more about what got you interested in technology and language education, but also specifically in AI. Was that an interest of yours growing up? Is that something new? I'd just love to learn more about how you got there.

Dini: Yeah. So that brings me back probably 25 years ago when I was eight or nine years old. English is a foreign language in Indonesia, so we don't really study English a lot.

[00:02:08] English was introduced to me when I was in 6th grade. Almost all my friends got English from courses outside of school. Meanwhile, I didn't have that privilege, so every time I got into English classes, or whenever my English teacher came to my classroom, I felt anxious and nervous. I felt uneasy because this was something new that I had no idea about. Although English is a skill, something you can learn and acquire, at that time I felt alone. I was left behind because I couldn't afford courses outside of my classroom. Now I feel technology can bridge those gaps.

[00:03:03] Imagine if AI was invented when I was in that period. I would be able to learn outside of my classrooms as supplementary materials. I'd be able to practice my English skills without my teacher. I'd be able to get all the resources I needed without paying a lot. So I felt this technology could help students with the same conditions as mine when I was a kid. That brought my interest toward technology, specifically AI. Back in 2020, before COVID hit, my university back home in Indonesia had a collaboration with Novo Learning. It's an AI-based app that helps students study and learn English.

[00:04:01] It has very comprehensive materials. It has all the features, all the skills, even the components of English that can help students even when they are in remote areas. I was one of the English tutors at the time, and I asked the CEO of the app if I could use it for my research during my dissertation, when I was pursuing my doctoral studies. He said yes. So that was the first time I focused specifically on AI in language learning applications. Then when I was here, in December 2022, ChatGPT was released and it boomed. AI is everywhere. I didn't imagine AI would be like this, because back in 2020 my proposal was about the AI-based application. But then because of ChatGPT—oh my God—it's just everywhere. People keep talking about it. When I started using ChatGPT myself, it seemed overwhelming at the beginning. But then we realized there are things we should do and things we shouldn't do with generative AI. Now there's DeepSeek, Copilot, and everything. I felt this is growing beyond our ability to keep up. We need something to set the boundary. This is the line: when you are allowed to use this as a student and when you are not, especially for English learners, EFL or ESL in this case. That's why I'm putting my interest more into the ethics.

[00:06:01] How is the ethical use of AI for students, specifically English language learners in higher education, because that's my context.

Priten: I want to hear about your experience with your research. I want to hear about your experience teaching. But maybe we can start with your first role as a student. You mentioned you didn't have access to this technology when you wished you would have. In the last few years, what role has the technology played in your own learning journey during your PhD program?

Dini: The whole thing is about technology because my major itself—the reason I chose this program—is because it's about language, literacy, and technology. So it evolves with the technology itself, how technology can be used to enhance language learning or language teaching. In my program, we are encouraged to always use technology, like even using AI. Our professors would say you can use AI and just let me know how you use it.

[00:07:08] Priten: Great.

Dini: That was before any ethical considerations existed. I use AI, for example, to understand the readings. I'll clarify what I understand and then ask generative AI what it thinks, just to confirm what I understood. Because it's not as easy for me as a non-native speaker to understand or comprehend readings. So for me, it's double or triple work. I need to read one article, for example, five or six times and still not understand the content. AI really helped me a lot with that. I also took some technology courses on how to use AI, how to put technology in the curriculum, how to implement technology and embed it in your teaching.

[00:08:04] I also got a graduate certificate in English language teaching using technology. My program really provided us with all the technology we need.

Priten: Yeah, and it makes sense that in your context, you and your professors have wholeheartedly embraced it. I'm curious now about your role as a professor. Do you have similar policies for your students? I'm assuming it might vary based on what you're teaching and who you're teaching. So what does it look like in your classroom?

Dini: Back home I always tried to use technology, even the smallest part of it—just try using your phone, scan the barcode, use Quizizz or Padlet. Before AI was invented, I always tried to embed technology in my teaching. I think I'm one of the few professors in my department who allowed students to use gadgets in their finals.

[00:09:01] Priten: Tell me more about that. That's super interesting. I rarely hear that.

Dini: I embed the listening, the recording, the reading in Google Forms. So they don't have to open any book, use pencil and pen or paper. They just use their phones or laptops and scan the barcode or click the link I provided. They can do it in the classroom or whenever they want.

Priten: Are you not concerned about integrity? Were you not worried about students changing the tab and finding answers? How did you enforce honor policies?

Dini: Yeah, so I kind of know their ability—I can tell when a result is way beyond their capability in our daily meetings. And at the bottom of my Google Form I included a statement: "I am doing this by myself. If I am caught not doing so, I will accept the consequences." I made them read those sentences and check a box—if they read it, that means they know. Then if I found something wrong, I could just approach them.

Priten: It is genuinely refreshing, because educators, as you know, are trying to figure out how to deal with assessments. You hear a lot of: let's leave technology out of the room, let's go back to pen and paper, let's figure out how we can make sure students aren't bringing their devices, let's not grade homework assignments. Everything is about moving away from technology so it's more secure and safe. Of course, that's the majority, not everybody. But it's good to hear that you're not just allowing students to use devices, but also enforcing your honor policy by actually asking your students to practice integrity. I think that's remarkable. It's great that it sounds like it worked, because that's refreshing and gives me some hope. So, shifting a little to your research, I'd love to hear about how you initially were thinking about the practical usage of technology in language education, specifically AI.

[00:11:10] Now you're moving more toward the policy standpoint. You gave me a little context about that switch. But can you spend a little more time explaining why you shifted away from the how? It sounds like there were so many good reasons you had for thinking about how technology might be useful.

Dini: Yeah, so I've been changing my proposal more than five times. At the very beginning, I really wanted to explore how AI itself is used in classrooms. But then I saw that so many researchers already did that. Then I changed my mind to how to implement AI in the classroom experimentally. But that would only be useful for one context. Then I looked at something else because I'm also involved in the XR development lab at my university.

[00:12:02] We were conducting workshops on AI—introducing how to use AI in classrooms for professors and students. I felt there were so many questions regarding ethical use: how it's used and in what way it's accepted. I asked myself, do we have this? Apparently, we only have one sentence: all professors are allowed to choose. That's it. The autonomy is given to professors by the university, and then the classroom depends on how the professor sees AI. If we're talking about Rogers's diffusion of innovations theory: if you're a laggard, AI will never be used; if you're an early adopter or innovator, of course AI will be used. That's why I'm shifting my focus to how AI can be used in an ethical way.

[00:13:05] I'm in the process of data collection now. I'm interviewing policy makers, faculty members, TAs, and students, trying to see how three dimensions of ethical use in higher education are actually implemented. The first dimension is pedagogical—how AI influences teaching and learning. The second is governance—how AI is governed, and when students commit integrity violations, how the university deals with it, what kind of punishment exists, and what kind of intervention can be effective. The last is the practical dimension—whether the training and support the university provides for faculty members actually helps them embrace this technology. This technology is already here. We cannot hide from it. Think about AI like the calculator years ago: calculators were also debated, but if we see AI as a tool that helps us and saves time, and we keep our critical thinking and engage with it, that would be good. That's what I'm trying to see as a whole. Although I'm only a small part of it, I'm trying to find out how AI is used in the classroom while seeing the bigger picture of what concerns exist. That's why I'm shifting a little into policy itself—how AI can be used ethically by faculty members and by students.

[00:15:07] Priten: I'd love to hear your perspective on the ethical use of AI in language learning. Especially when I talk to K-12 teachers, English teachers are struggling to justify to their students why they still need to write at home. Foreign language teachers are struggling to explain why it's still important to learn a foreign language if they can put in headphones that will do the translating for them. Students are struggling to see the value of the learning. Teachers are working very hard to explain that. But I'm curious how you would approach that question, especially because you're pro-technology. So when we talk to teachers who are more pessimistic about the role of technology, it's a very different conversation. But to hear both the optimism about technology and figuring out how to navigate its role in making sure students are using it ethically—I'm curious what your perspective is.

Dini: I think when we're dealing with technology specifically for English language teachers, we need to work harder in recognizing our own students.

[00:16:04] AI-generated essays and writing are very templated. If you don't know your student, you'll be fooled by it. But if you really know your student—if you really know how they write in the classroom or what their English skills are in reality—you would know whether their work is AI or not. That's my perspective. If you really know your students, you don't need AI detectors. As a teacher, you are the one who filters everything. You're the one who recognizes your own student's ability.

Priten: You know, maybe. Even if teachers can tell now, we do hear from a lot of teachers that their gut instinct tells them whether a student has or hasn't used AI. I'm worried about a year or two or three years down the road when AI technology can take every single example of the student's writing and really mimic the student's own writing.

[00:17:03] What do we do in those cases? I wonder if the gut will always be enough.

Dini: Then we need to be more creative as teachers, because there are some things we can always be ahead on. For example, if you look at MagicSchool AI, there are features you can use. It has a feature for AI-resistant assignments, with tips for what teachers can do to prevent students from using AI. For example, you can put keywords in your prompt; when students copy and paste without reading the whole essay, you can tell. We have to be more creative. Second, we can use other assessments. It doesn't have to be all writing. You can have presentations or activities where students are really there.

[00:18:00] Or you can go back to pencil and paper tests, just do it in the classroom. During the learning process, they can use AI to help them brainstorm, draft, outline. Although we cannot really put aside the technology, we can be creative as teachers to make assessment work based on our objectives. And because one class might work with this technology, it doesn't mean it will work in another context. That's what makes us teachers, what makes us creative. We know this technology will evolve or develop, but as humans, as teachers with real brains—not machines—we can surpass all of those things.

Priten: Yeah. And it sounds like you're in favor of adapting the assessment based on what's being tested.

[00:19:01] So maybe you'll use Google Forms for a class where writing isn't the predominant thing being tested, but if you're testing writing, you'll stick to pen and paper. Which I think matches a lot of what we're seeing on the ground from teachers. I want to make sure I give you some time to talk through any other issues that are keeping you up at night. It's very refreshing to hear someone who's so optimistic. But what are you pessimistic about? What worries you about the technology?

Dini: I'm just a little bit anxious about the affordances of generative AI itself. Now you can upload everything and ask generative AI to respond based on what you upload. But it definitely depends on how the prompt is written. It still needs a human to tell it what to do. Also, there's the dependency itself. If you're too dependent on generative AI, I somehow worry. I read an article about very new research from MIT.

[00:20:04] They tested the brains of people using AI for small tasks. It seemed frontal-lobe activity was lower than in people who aren't really dependent on generative AI. So it definitely influences us; it's impacting us negatively. It dulls our critical thinking and strains our brains. I'm just wondering what will happen in the future if we keep doing this.

Priten: Right. I think thinking about what over-reliance does—which is what that MIT study in particular was trying to highlight—is interesting. But one of my fears is that the technology seems easy and accessible. If you use it in the right way, it has all the benefits you're talking about. It can make learning more accessible, it can make it more widespread, especially on a global scale.

[00:21:04] But if we start using it not just to learn or challenge ourselves, but to offload our cognitive labor, that might be very different. And I think for students, it's confusing sometimes to think about when you're offloading your learning or hard work in a productive way versus in a harmful way. I think that's definitely a question we'll all have to spend time thinking about. It sounds like you and I are definitely doing so, so we can go back to our students and say, okay, here's how AI will make your life easier in ways that still allow you to maintain your own capabilities, versus here's how you shouldn't use AI because it's going to cause you to lose some of your own capabilities instead of strengthening them. That's definitely a challenge we're going to face. But hopefully we can all navigate it and not rely on it to the point of losing our ability to think. It was very interesting to hear your perspective, especially given your own educational context and your work and your teaching. You brought a lot to the table in terms of a varied set of experiences that are relevant to the questions at hand.

[00:22:02] So thank you so much for taking the time to talk to me today. I appreciate Dini sharing her perspective. What struck me most was her refusal to see this as a binary choice—technology or integrity, innovation or caution. Her approach challenges a lot of conventional thinking. At the same time, she's clear-eyed about the risks of over-reliance and what it means for our capacity to think critically. For more complex case studies that push past the binary, check out my book Ethical EdTech at ethicaledtech.org.

Priten: Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also, share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.