Priten: Welcome to Margin of Thought, where we make space for the questions that matter.
I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming.
Priten: Today I'm speaking with Dini Arni, a PhD student from Indonesia studying language, literacy, and technology at Washington State University.
Dini's journey with technology began with anxiety about being left behind.
That experience shapes her research today examining how AI policies can ensure these tools bridge educational gaps rather than widen them.
We talk about her unusually optimistic approach to classroom technology, why she thinks teachers need to know their students better than any AI detector can, and what keeps her up at night about where this is all heading.
Let's get into it.
Dini: My name is Dini.
I am originally from Indonesia. Um, I've been studying in the United States for three years.
I'm currently a PhD candidate, uh, in the language, literacy, and technology program at Washington State University.
And then my research interest actually involved the implementation of AI in language teaching.
However, that kind of shifted into AI policy, because I found that it's really needed, specifically in, um, language teaching classrooms in higher education.
Back home in Indonesia, we don't really have it.
I'll probably bring those things back home and see how they can be implemented.
Priten: I'd love to hear a little bit more about what got you interested in technology and language education, but also specifically in AI.
Was that an interest of yours, growing up?
Is that something new?
I'd just love to, learn more about how you got there.
Dini: Yeah.
So that will bring me back to probably 25 years ago, when I was eight, nine years old.
So, um, English is a foreign language in Indonesia, so we don't really study English a lot.
English was first introduced to me when I was in sixth grade, and almost all my friends had gotten English from courses, like they took courses outside of school.
Meanwhile, I didn't have that kind of privilege, so every time I got into English classes, or whenever my English teacher came to my classroom, I felt anxious, nervous.
I felt uneasy, because it was something new that I had no idea about.
Although this is a skill, right?
English is a skill, something that you can learn, something that you can, uh, acquire.
But at that time I felt like I was so alone.
I was left behind because I couldn't afford to have courses outside of my classroom.
Now I feel like technology can bridge those things, right?
Imagine if AI had been invented when I was in that period; I would have been able to learn by myself outside of my classroom, like as supplementary, um, materials.
I would have been able to practice my English skills without my teacher.
I'd have been able to, like, get all the resources I needed without having to pay a lot.
So I felt this kind of technology actually can help students in the same condition as mine when I was a kid.
So that actually brought my interest toward technology, specifically AI, because back in 2020, before COVID hit, my university back home in Indonesia had a collaboration with Novo Learning.
It's an AI-based app that helps students study and learn English.
It has very comprehensive materials.
It has all the features, all the skills, even the components of English that can help students even when they are in remote areas.
I was one of the English tutors at the time, and, uh, I asked the CEO of the app, can I use your AI-based app to do the research for my dissertation, like when I'm pursuing my doctorate? And then he said yes.
So that was the first time I focused specifically on AI, specifically an AI-based language learning application.
Then, when I was here, in December 2022, ChatGPT was released, and then it boomed. Like, wow, AI is everywhere.
Like, I didn't imagine AI would be like this, because back in 2020, my proposal was about the app, right?
Like the AI-based application.
But then, because of ChatGPT...
Oh my God.
It's just like everywhere.
People keep talking about it.
And then, when I started using ChatGPT itself, it seemed overwhelming at the very beginning.
But then we know that there are things that we should do and things that we shouldn't do with generative AI, and now there's DeepSeek, Copilot, and everything.
And then I felt like this is growing beyond our ability, so we need something to put a border around it.
Like, this is the line of when you are allowed to use this as a student and when you are not, especially, um, for English learners, EFL or ESL in this case.
So that's why I am putting my interest more into the ethics, like what the ethical use of AI is for, um, students, specifically English language learners, in higher education, because that's my context.
Priten: I wanna hear about your experience, obviously, with your research.
I wanna hear a little bit about your experience teaching.
But maybe we can start with your first role, which is as a student.
You mentioned that you didn't have access to this technology, um, when you wish you had.
In the last few years, what role has the technology played in your own learning journey, you know, during your PhD program?
Dini: The whole thing is about technology, because the reason why I chose this program is that it's about language, literacy, and technology.
So it revolves around the technology itself, like how technology can be used to enhance language learning or language teaching. In my program, we are encouraged to always use technology.
Like, even when we are using AI, our professors would say, yeah, you can just use AI, just let me know how you use it.
Priten: Great.
Dini: That was before like any ethical thingy is actually exists.
I use AI, for example, when I'm trying to understand the readings, clarifying what I understood, and then just asking these generative AIs, what do you think? Just to confirm what I understood.
Because it's not as easy for me as for a native speaker to understand or comprehend, uh, readings.
So for me, it's double or triple the work.
I might need to read one article, for example, five or six times without understanding what the content of it is, right?
So this AI really helped me a lot in doing that.
I also took some technology courses, um, on how to use AI, how to put technology in the curriculum, how to implement technology and embed it in your teaching.
Uh, I also got the graduate certificate in English language teaching using technology. So my program really provides us with all the technology that we need.
Priten: Yeah.
Um, and it makes sense that in your context, you and your professors have wholeheartedly embraced it.
I'm curious now about your role as a professor.
Do you have similar policies for your students? I'm assuming it might vary based on what you're teaching and who you're teaching.
So what does it look like in your classroom?
Dini: So, back home I always tried to use technology, um, even the smallest part of it, like just try to use your phone and scan the barcode, and use Quizizz or Padlet.
So even before AI was invented, I always tried to embed technology in my teaching.
I think I'm even one of the only professors in my department that allowed students to use gadgets in their finals.
Priten: Tell me more about that.
That's super interesting. I rarely hear that.
Dini: I embed the listening, the recording, the reading in a Google Form.
So they don't have to open any book.
They don't have to use pencil and pen or paper.
They just use their, um, phones or a laptop, and then they can just scan the barcode or click the link that I provided.
They just do it in the classroom, or whenever they want.
Priten: Are you not concerned about, um, integrity?
Like, were you not worried about students changing the tab and finding answers?
How did you enforce honor policies?
Dini: Yeah, so I kind of know their ability.
Like, whether the result is way beyond their own capability in their daily meetings. I mean, at the bottom of my, uh, Google Form, I made a sentence: "I am doing this by myself. If I am caught not doing so, then I will get the consequences."
And I made them read that sentence and check it; if they read it, it means that they know.
And then, if I found something wrong, I could just, um, approach them.
Priten: It is genuinely refreshing, because, as you know, we educators are trying to figure out how to deal with assessments.
You hear a lot about: let's leave technology out of the room.
Let's go back to pen and paper.
Um, let's figure out how we can make sure that students aren't bringing their devices, that they're hanging them up somewhere.
Let's not do homework assignments that are graded.
Everything is about how we move away from technology so that it's more secure and safe.
And of course that's the majority, not everybody.
But it's nice to hear, you know, that you're not just allowing students to use the devices, but also how you enforced your honor policy by actually asking your students to practice integrity.
And that's remarkable to me.
It's great that it sounds like it worked; that's refreshing, and it gives me some hope.
So, shifting a little bit to your research: you said you initially were thinking about the practical usage of technology in language education, specifically AI, and now you're moving more towards the policy standpoint.
You gave me a little bit of context about that switch.
But can you maybe spend a little more time explaining to me why you shifted away from the "how"? It sounds like there were so many good reasons you had for thinking about how the technology might be useful.
Dini: Yeah, so I think I've been changing my proposal more than five times.
I mean, at the very beginning, I really wanted to explore the "how" of the use of AI itself in the classroom.
But then I saw that so many researchers had already done that.
And then I changed my mind to how to implement this AI in the classroom experimentally. That would only be useful for one context, but then I started looking at something else, because currently I'm also involved in the XR development lab at my university.
And we were conducting some workshops on AI, like introducing how to use AI in the classroom, for professors and for students.
And then I felt like there were so many questions regarding ethical use, like how, and in what kind of way, it is accepted.
And then I asked myself, do we have this here?
And apparently we only have one sentence.
It says that all professors are allowed to choose.
That's it.
Yeah.
It really depends on the professor.
Uh, the autonomy is given to the professors by the university, and then the classroom will depend on how the professor sees the AI itself.
For the laggards, for example, if we are talking about Rogers's theory of the diffusion of innovations, that means AI will never be used.
If we're talking about the early adopters or innovators, of course AI will be used.
Right.
So that's why I'm shifting into how this AI can be used in an ethical way.
Right now, I am in the process of data collection.
I'm interviewing, uh, policymakers.
I'm interviewing faculty members, TAs, students.
Um, I'm trying to see how the three dimensions of ethical use in higher education are actually implemented.
So the first dimension is actually pedagogical: how AI influences teaching and learning.
And then the governance itself: how AI is actually governed, and, um, whenever students, for example, commit an integrity violation or something, how the university deals with it, what kind of punishment we have, and what kind of corrective measures can be effective, for example.
And then the last one is the practical dimension.
So, how effective the training and support provided by the university for faculty members actually are in embracing this technology, because this technology is already there.
We cannot hide from it.
Let's think about AI like the calculator years ago; calculators were also banned.
But if we see AI as a tool that helps us, that saves time, and we try to keep our critical thinking, we try to keep ourselves in it, that would be a really good thing, right?
So that's what I'm trying to see as a whole, although I am only a little part of it.
I was trying to find out how AI is used in the classroom, but then I'm seeing the bigger picture of what concerns lie within it.
And that's why I'm shifting a little bit into the policy itself, like how this AI can be used ethically by either faculty members or the students themselves.
Priten: I'd love to hear your perspective on the ethical use of AI in language learning, especially because of what I hear when I talk to K-12 teachers.
English teachers are struggling to justify to their students why they still need to write at home.
Foreign language teachers are struggling to explain to students why, um, it's still important to learn a foreign language if they can put in headphones that will do the translating for them.
Students are struggling to see the value of the learning.
Teachers are working very hard to explain that to them.
But I'm curious how you would approach that question, especially because you're pro-technology, right?
Mm-hmm.
So, um, when we talk to teachers who are a bit more pessimistic about the role of technology, it's a very different conversation.
But it's interesting to hear both the optimism about technology and how you navigate its role in making sure students are using it ethically.
I'm curious what your perspective is on that.
Dini: I think when we're dealing with technology, specifically as English language teachers, we need to work harder in terms of recognizing our own students.
AI-generated essays or writings are very templated; if you don't know your student, you'll be fooled by them.
But if you really know your student, like if you really know how they write in the classroom or what their English skills are in reality, you would know whether their work is AI or not.
So that's my perspective: if you really know your students, you don't need the AI detectors.
As a teacher, you are the one that filters everything.
You're the one that recognizes your own students' abilities.
Priten: You know, maybe even if teachers can tell now... I mean, we do hear from a lot of teachers that their gut instinct tells them whether a student has or has not used AI.
I'm worried about a year, or two years, or three years down the road, when the AI technology can take every single example of a student's writing and really mimic the student's own writing.
What do we do in those cases?
I wonder if the gut will always be enough.
Dini: Then we need to be more creative as teachers, because there are some things where we can always be ahead.
For example, if you look at MagicSchool AI, there are some features that can be used.
It has a feature called AI-resistant assignments.
It has tips on what teachers should do, like to prevent students from using AI.
For example, you can put something like keywords in your prompt, so when they copy-paste it without reading the whole essay, you can tell.
We have to be more creative.
Second, we can make other assessments.
It doesn't have to be all writing, right?
You can have presentations, or things where students are really there.
Or you can go back to pencil-and-paper tests, like just do it in the classroom.
During the learning process, they can use AI, like to help them brainstorm, to help them draft, to help them outline.
So although we cannot really put the technology aside, we can be creative as teachers to make an assessment work based on our objectives.
And if this technology works in one class, in this context, for example, it doesn't mean that it will work in another context.
Right?
Right.
So that's what makes us teachers, that's what makes us creative.
We know this technology will evolve and develop, but as humans, as teachers, with a real brain, not a machine,
surely we can surpass all of those things.
Priten: Yeah.
And it sounds like you're in favor of adapting the assessment based on what's being tested.
So maybe you'll use the Google Form for a class in which writing is not the predominant thing being tested, but if you're testing writing, you'll stick to pen and paper.
Which I think matches a lot of, you know, what we're seeing on the ground from teachers.
I wanna make sure I give you some time to just, like, talk through any other issues that are keeping you up at night.
It's very refreshing to hear someone who's so optimistic.
but what are you pessimistic about?
What worries you about the technology?
Dini: I'm just a little bit anxious about the affordances of generative AI itself.
Now you can upload everything, and you can ask the generative AI to respond based on what you upload, but then it definitely depends on how the prompt is, right?
Like, it still needs a human to tell it what to do.
Also the dependency itself, if you are being too dependent on generative AI.
So I read this article about a very new study from MIT.
They tested the brains of people who had gotten used to using AI for small stuff.
It seems like the frontal lobe is smaller, yeah, compared to a person who is not really dependent on generative AI.
So it definitely is influencing us. It's impacting us negatively.
Right, right.
It makes our critical thinking kind of strained. It makes our brains get strained.
I'm just wondering what will happen in the future if we keep doing this.
Priten: Right.
I think thinking about what over-reliance does, which is what that MIT study in particular was trying to highlight, is interesting.
But one of my fears is that the technology seems easy and, um, accessible. If you use it in the right way, it has all the benefits that you are talking about, right?
It can make learning more accessible, it can make it, um, more widespread, especially on a global scale.
But if we start using it not just to learn or challenge ourselves, but to offload our cognitive labor, that might be very different.
Um, and I think for students it's sometimes confusing to figure out: when am I offloading my learning, or the hard work, in a productive way versus in a harmful way, right?
And I think that's definitely a question that we'll all have to spend some time thinking about.
It sounds like you and I are definitely, um, doing so, so that we can go back to our students and say, okay, here's how AI will make your life easier in ways that still allow you to maintain your own capabilities,
versus here's how you shouldn't use AI, because it's gonna cause you to lose some of your own capabilities, not strengthen them.
So that's definitely a challenge that we're gonna face.
But hopefully we can all navigate it and not rely on it to the point of losing our ability to think.
Very fascinating.
It was very interesting to hear your perspective, um, especially given your own educational context, your work, your teaching.
You brought a lot to the table in terms of a varied set of experiences that are relevant to the questions at hand.
So, thank you so much for taking the time to talk to me today.
Priten: I appreciate Dini sharing her perspective. What struck me most was her refusal to see this as a binary choice: technology or integrity, innovation or caution.
Her approach challenges a lot of conventional thinking.
At the same time, she's clear-eyed about the risks of over-reliance and what it means for our capacity to think critically.
For more complex case studies that push past the binary, pre-order my book, Ethical Ed, at priten.org.
Priten: Thanks for listening to Margin of Thought.
If this episode gave you something to think about, subscribe, rate, and review us.
Also, share it with someone who might be asking similar questions.
You can find the show notes, transcripts, and my newsletter at priten.org.
Until next time, keep making space for the questions that matter.