25 mins #10 Mar 05, 26
How Can AI Support Writing Instruction? - Kim Cowperthwaite

In this episode, Priten speaks with Kim Cowperthwaite, an English Language Arts teacher at Freeport Middle School in Maine who has been teaching for over 20 years. Growing up in a tech-forward household in the 1970s and later working in the newspaper industry as it faced digital disruption, Kim brings a unique perspective on technological change. She was among the first teachers in the nation to work in Maine's pioneering one-to-one laptop program, starting in 2004. The conversation explores her unconventional approach to AI in the classroom—treating it like "a book or a pencil"—why she believes building community and relationships matters more than policing technology use, and how she helps students recognize when AI has written their work without making it punitive.

Key Takeaways:
- Know your students better than any detector. Teachers who build relationships with their students can identify AI-generated work by recognizing changes in sentence length, structure, and voice—no detection tools required.
- Make AI conversations transparent, not secretive. Rather than creating a surveillance culture, openly discuss how AI works, when it's appropriate, and how you can tell when it's been used—students respond better to honesty than to policing.
- Technology should amplify human expression, not replace it. Start with handwritten journals and personal ideas first, then bring in technology as a tool to enhance what students have already created on their own.
- Teaching self-control is lifelong. Help students recognize their own impulse patterns with technology—the habit of drifting to games during a thinking pause—because they'll need to manage this their whole lives.
- Focus on the goal, then find the tool. Instead of teaching specific AI technologies that come and go, teach students to identify what they want to achieve first, then select appropriate tools—this approach works for both students and teachers in professional development.
23 mins #9 Mar 03, 26
Should Students Be Trusted With Phones During Exams? - Dini Arini

In this episode, Priten speaks with Dini Arini, a PhD candidate in language, literacy, and technology at Washington State University who has been teaching for over 15 years. Growing up in Indonesia without access to the English courses her classmates had, Dini experienced firsthand the anxiety of being left behind—an experience that now fuels her optimism about AI's potential to democratize education. The conversation explores her unconventional approach to classroom technology, including allowing students to use phones during exams, why she believes teachers who truly know their students don't need AI detectors, and how her research into AI ethics policy is uncovering the gap between institutional guidelines and classroom reality. Dini also shares what genuinely worries her: emerging research suggesting that over-reliance on AI may be physically changing our brains.

Key Takeaways:
- Know your students better than any detector. Teachers who truly understand their students' abilities and writing styles can identify AI-generated work without relying on detection tools—you become the filter.
- Technology can bridge access gaps. For students without resources for tutoring or courses, AI tools can serve as supplementary learning support that was previously unavailable.
- Trust can work as enforcement. Having students acknowledge an honor statement and knowing their baseline abilities can be as effective as surveillance—students often rise to the expectation of integrity.
- Adapt assessments to what you're testing. Use technology-enabled tests when appropriate, but return to pen-and-paper or presentations when the skill being assessed requires it.
- Stay creative ahead of AI. As AI improves, teachers must develop AI-resistant assignments and varied assessment methods rather than abandoning technology entirely.
26 mins #8 Feb 26, 26
What If the Answer to Technology Overload Isn't Better Tech But Real Relationships? - Nate Otey

In this episode, Priten speaks with Nate Otey, a ninth grade humanities, statistics, and calculus teacher at Boston Trinity Academy, a school that has deliberately chosen a low-tech approach. Nate shares how his school has banned phones for students up to 10th grade, with parents and students largely on board. The conversation explores what happens when a school community prioritizes relationality over connectivity, why friction in human relationships might be essential rather than something to eliminate, and how faith-based education can provide a framework for understanding why face-to-face connection matters. Nate reflects on the practical challenges of enforcing device policies, how teachers can use AI ethically while modeling integrity for students, and the coming wave of emotionally convincing AI that may challenge our understanding of human relationships.

Key Takeaways:
- Students often want the boundaries. Research shows many students know phones are bad for them and appreciate when schools take them away—they just can't opt out alone due to social pressure.
- Use the "would I tell my students?" heuristic. Teachers can ethically use AI for lesson prep and practice exercises, but should avoid using it for grading or tasks where students would feel cheated if they knew.
- Relationships require friction. Technology is designed to eliminate friction, but meaningful human connection is inherently awkward and difficult—that's what makes it valuable.
- Consistent enforcement matters more than strict rules. Students accept boundaries when they're applied fairly and uniformly; arbitrary enforcement breeds resentment.
- The next wave isn't intellectual—it's emotional. AI that convincingly imitates consciousness will soon challenge how we help students distinguish between real relationships and persuasive simulations.

About Nate Otey:
Nate served as a Fellow in the Harvard Department of Philosophy for over five years, during which time he helped to found ThinkerAnalytix as Lead Instructor and later as COO, among other roles. Nate authored or co-authored many of the core ThinkerAnalytix curriculum and course offerings, including courses for HarvardX, HGSE, and LSAC. Nate currently teaches AP Statistics, AP Calculus AB, and 9th grade Humanities at Boston Trinity Academy in Hyde Park.
27 mins #7 Feb 24, 26
How Can AI Support Inclusive Education? - Tamsyn Smith

In this episode, Priten speaks with Tamsyn Smith, Senior Learning Designer and Team Lead at the University of Southampton, who is halfway through a PhD investigating how generative AI can support inclusive education. Tamsyn shares her journey from childhood programming to classroom teaching to higher ed learning design, and reflects on how COVID-19 and AI arrived as dual "cataclysmic shifts" that educators are still navigating. The conversation explores data privacy pitfalls, the myth of digitally-native students, and why Universal Design for Learning matters more than ever—ultimately landing on a hopeful note: most students are ethical, and the real question isn't whether they're cheating, but whether we're giving them meaningful reasons to learn.

Key Takeaways:
- Students still need foundational skills. Just as calculators didn't eliminate the need to understand math, AI doesn't eliminate the need to write well—you can't evaluate output you couldn't create yourself.
- Don't assume students are cheating. Research shows most students use AI ethically; if they're over-relying on it, ask whether assignments are meaningful or just busy work.
- Read the terms and conditions. Before asking students to use any tool, educators must understand what data it collects and where that data goes.
- Use a simple privacy heuristic. If you wouldn't post it on social media, don't put it into a generative AI tool.
- Technology should open doors, not add burdens. Universal Design for Learning means educators do the work to minimize barriers—not hand students another tool and call it support.

About Tamsyn Smith:
Tamsyn Smith is a Senior Learning Designer Team Lead at the University of Southampton, where she has worked for over 13 years supporting staff and students with educational technology and digital capabilities. She works closely with academic staff, leads a team delivering training on emerging technologies, including generative AI, and has particular expertise in inclusive education practices. Tamsyn holds SCMALT membership and is a CMALT assessor, and her work has been recognised through Vice Chancellor's Awards and an AdvanceHE Collaborative Award for Teaching Excellence.

Tamsyn is currently pursuing a PhD in E-Research and Technology Enhanced Learning at Lancaster University, where her research explores how educators can use generative AI to support inclusion through Universal Design for Learning (UDL) implementation. Her work draws on Cultural Historical Activity Theory (CHAT) to examine the complex relationships between technology, pedagogy, and inclusive practice in higher education contexts.
31 mins #6 Feb 19, 26
How Might AI Support Early Education Interventions in India? - Ratna Gill

In this episode, Priten speaks with Ratna Gill, who supports the partnerships team at Rocket Learning, a nonprofit tackling early childhood education in India through WhatsApp. Ratna shares her journey from child safety work to early childhood education and explains how Rocket Learning delivers bite-sized educational content to caregivers and Anganwadi workers serving 600 million children who lack access to early stimulation. The conversation explores their AI-powered personalized tutor, the importance of cultural contextualization, and what ethical ed tech looks like when working with resource-constrained communities—ultimately landing on a hopeful note: technology can expand access to education without replacing the irreplaceable human connections that make learning joyful.

Key Takeaways:
- Meet communities where they already are. Rocket uses WhatsApp because families are already there—no new apps, no tech burden.
- Technology should supplement, not replace, human interaction. APU is capped at 15-20 minutes daily to preserve parent-child engagement.
- Context matters more than content. Effective ed tech adapts cultural references, not just language, for each region.
- Test slowly, learn deeply. Field testing revealed that background noise breaks speech-to-text—rushing would have shipped a broken product.
- Parents are the most transformative tool. AI can model joyful pedagogy, but it can't replace human connection.

About Ratna Gill:
Ratna Gill serves as Lead, Special Projects at Rocket Learning. Previously the Head of Government Partnerships at Mumbai-based child safety nonprofit Aangan, she worked with state governments across the country to train school administrators and police officers to create safer communities for children in hotspots for child trafficking. Ratna graduated from Harvard Kennedy School in 2022, where she studied the impacts of parental labor migration on education and development outcomes for adolescents. She has a B.A. in Economics from Harvard College.