Priten Soundar-Shah
Host

ED of PedagogyFutures / Founder of Academy 4 Social Civics / CTO at ThinkerAnalytix

Appears in 14 Episodes

#14

How Do We Teach the Journey When AI Offers the Destination? - Varun Gupta

In this episode, Priten speaks with Varun Gupta, an Accounting and Economics professor at Wharton County Junior College in the Houston area who has been teaching since 2007. Varun is refreshingly candid about his own complicated relationship with AI—he uses it extensively for lesson planning, assignment creation, and communication, but worries deeply about what happens when students skip the grind entirely.

Key Takeaways:
The helicopter problem is real. Using AI to get answers without effort is like taking a helicopter to the top of Mount Everest. You get there, but you missed the point. The grind, the failure, the figuring-it-out—that's where the learning lives.
Cognitive offloading is already happening to teachers, too. Varun no longer does mental math. He uses GPS to reach an airport he's been to hundreds of times. AI is next. The concern isn't hypothetical—it's already underway for him personally.
Post-COVID is the bigger shift, not post-ChatGPT. Students who came through COVID developed habits of not showing up, not following through, and not asking questions. That behavioral shift is more visible than any change attributable to AI alone.
The stress is gone—and that's the tell. Before ChatGPT, students peppered him with term paper questions all semester. Now? Silence. They're not less anxious because they're more prepared. They're less anxious because they've already decided how they'll produce the paper.
There's inherent hypocrisy in the dynamic—and it's worth naming. Using AI to create assignments while discouraging students from using it to complete them isn't perfectly clean. Varun acknowledges it. The distinction is in where the journey matters: for the teacher creating the prompt, or for the student doing the thinking.
The human value is in the face-to-face. In asynchronous online courses, the line between professor and bot is thin. Where Varun sees his irreplaceable value is in the in-person relationship—lived experience, empathy, career conversations, and the daily modeling of what professional effort actually looks like.

About Varun Gupta:
Varun Gupta, aka the “Knotty” Economist, is a dynamic and engaging economics professor with 19 years of experience making complex concepts both accessible and exciting. He has spent his entire career at Wharton County Junior College (i.e., the “other” Wharton). Known for his fun and energetic presentation style and ever-present elaborate necktie, he has delivered insightful talks at conferences, college professional development events, and civic groups—both live and virtual. A passionate educator, Varun specializes in applying fundamental economic principles to real-world decision-making and classroom engagement. Whether tackling macro, micro, or the economics of everyday life, he brings a unique mix of expertise and humor that keeps audiences learning and laughing. When he’s not using economic concepts to explain the world, he spends time catering to his four-year-old goldendoodle, Cinnamon.
#13

Can We Preserve Core Classroom Values While Integrating Ed Tech? - Brian Tash

In this episode, Priten speaks with Brian Tash, an elementary school teacher with nearly 30 years of experience who has witnessed the complete arc of education technology—from Scantrons to Google Classroom to AI. Brian shares how he balances technology integration with preserving fundamental skills like reading stamina and handwriting. The conversation covers his transparent approach to using AI for faster student feedback, why he's concerned about declining empathy and attention spans post-COVID, how he teaches prompt engineering to third and fourth graders, and his hope that educators will become more mindful about why they're using technology rather than just adopting everything new. He argues that personal connection, problem-solving, and collaboration are what students need most—and those can't come from a screen.

Key Takeaways:
Follow the 80-20 rule with AI. AI gets you 80% of the way—the other 20% is you adding your own elements. This applies to teachers giving feedback and students creating work.
Transparency builds trust. When students understand why you're using AI for feedback, they embrace it. Brian's study found 90% of students were in favor once they understood the reasoning.
Technology can't replace human connection. Students need to learn how to talk to each other, problem-solve collaboratively, and develop empathy—skills that don't come from screens.
Stamina is the real crisis. Post-COVID students struggle to push through hard things. The growth mindset isn't there. Writing a paragraph makes their hands hurt.
Teach prompting, not just usage. Focus on prompt engineering—how to get what you want from AI. Experiment with students: change the words, add details, see what happens.
Standards-based grading may help. With clear standards, teachers can focus instruction, use AI to target specific skills, and have more time for the human elements once mastery is achieved.
#12

Why Do We Teach Foreign Languages When AI Is Multilingual? - Noelia Pozo

In this episode, Priten speaks with Noelia Pozo, a high school Spanish and French teacher with nearly two decades of experience who now heads the Foreign Language and Classical Department at her school. Noelia shares how she transformed her classroom by using AI openly alongside students rather than policing it. The conversation covers how she handles AI-generated work through relationship-building rather than detection tools, why she collects phones in a "Telephone Hotel," how exploring AI bias with students sparked deeper learning than lectures, and her frustration with colleagues who refuse to adapt while hypocritically using AI themselves. She argues that the question isn't whether to engage with these tools, but how to do so while preserving human connection, critical thinking, and genuine learning.

Key Takeaways:
Show students language is already in their lives. From "in lieu of" to Chipotle menus—they're already speaking foreign languages without realizing it. Recognition breeds respect.
AI can't replace human connection. You can't build trust through a machine. Professional relationships require authentic communication, not a technological relay.
Create honesty, not surveillance. Use AI openly alongside students and ask only for transparency. When trust flows both ways, students voluntarily admit mistakes—and learn from them.
Teach students to verify AI output. AI isn't infallible. Once you put something in your paper, you own it—right or wrong.
Explore AI bias together. "Nobody looks like me" in AI images sparked deeper conversations about bias and better prompting than any lecture could.
Adapt or be replaced. Teachers won't lose jobs to AI—but they may lose them to teachers who use AI well.
#11

Do Kids Need Phones? - Shon Holland

In this episode, Priten speaks with Shon Holland, a middle school science teacher at Sells Middle School in Dublin, Ohio. After a first career in hazardous waste management and environmental health and safety, Shon made the leap to education about 20 years ago. His experience with both seventh and eighth graders gives him frontline insight into how adolescents interact with technology. The conversation explores his balanced approach to tools like GoGuardian—using technology to monitor without creating surveillance culture—why he believes giving students responsibility actually lightens a teacher's load, and his blunt assessment that smartphones simply aren't healthy for middle schoolers.

Key Takeaways:
Misuse is inevitable—guidance is the goal. Middle schoolers can misuse anything from rulers to AI. Instead of trying to eliminate misuse, focus on teaching students how to make tools work for them and guiding them when they stumble.
Relationships trump detection tools. Teachers who know their students can spot AI-generated work by recognizing when writing doesn't match a student's voice or level—no software required. Treat violations as learning moments, not punishments.
Give responsibility to gain freedom. When you trust students with responsibility and show them consequences aren't personal, they give you space to actually teach. The more ownership they have, the less you need to police.
Parents need to parent. The research on smartphones and adolescent brains is irrefutable. Kids don't need iPhones—they need dumb phones, landlines, and parents willing to set boundaries even when their children push back.
Know the time and place. Technology and AI are fantastic tools that can differentiate instruction, translate languages, and unlock learning. But sometimes you just need human brain power. The skill is knowing when to use tech and when to walk away.
#10

How Can AI Support Writing Instruction? - Kim Cowperthwaite

In this episode, Priten speaks with Kim Cowperthwaite, an English Language Arts teacher at Freeport Middle School in Maine who has been teaching for over 20 years. Growing up in a tech-forward household in the 1970s and later working in the newspaper industry as it faced digital disruption, Kim brings a unique perspective on technological change. She was among the first teachers in the nation to work in Maine's pioneering one-to-one laptop program starting in 2004. The conversation explores her unconventional approach to AI in the classroom—treating it like "a book or a pencil"—why she believes building community and relationships matters more than policing technology use, and how she helps students recognize when AI has written their work without making it punitive.

Key Takeaways:
Know your students better than any detector. Teachers who build relationships with their students can identify AI-generated work by recognizing changes in sentence length, structure, and voice—no detection tools required.
Make AI conversations transparent, not secretive. Rather than creating a surveillance culture, openly discuss how AI works, when it's appropriate, and how you can tell when it's been used—students respond better to honesty than to policing.
Technology should amplify human expression, not replace it. Start with handwritten journals and personal ideas first, then bring in technology as a tool to enhance what students have already created on their own.
Teaching self-control is lifelong. Help students recognize their own impulse patterns with technology—the habit of drifting to games during a thinking pause—because they'll need to manage this their whole lives.
Focus on the goal, then find the tool. Instead of teaching specific AI technologies that come and go, teach students to identify what they want to achieve first, then select appropriate tools—this approach works for both students and teachers in professional development.
#9

Should Students Be Trusted With Phones During Exams? - Dini Arini

In this episode, Priten speaks with Dini Arini, a PhD candidate in language, literacy, and technology at Washington State University who has been teaching for over 15 years. Growing up in Indonesia without access to the English courses her classmates had, Dini experienced firsthand the anxiety of being left behind—an experience that now fuels her optimism about AI's potential to democratize education. The conversation explores her unconventional approach to classroom technology, including allowing students to use phones during exams, why she believes teachers who truly know their students don't need AI detectors, and how her research into AI ethics policy is uncovering the gap between institutional guidelines and classroom reality. Dini also shares what genuinely worries her: emerging research suggesting that over-reliance on AI may be physically changing our brains.

Key Takeaways:
Know your students better than any detector. Teachers who truly understand their students' abilities and writing styles can identify AI-generated work without relying on detection tools—you become the filter.
Technology can bridge access gaps. For students without resources for tutoring or courses, AI tools can serve as supplementary learning support that was previously unavailable.
Trust can work as enforcement. Having students acknowledge an honor statement and knowing their baseline abilities can be as effective as surveillance—students often rise to the expectation of integrity.
Adapt assessments to what you're testing. Use technology-enabled tests when appropriate, but return to pen-and-paper or presentations when the skill being assessed requires it.
Stay creative ahead of AI. As AI improves, teachers must develop AI-resistant assignments and varied assessment methods rather than abandoning technology entirely.
#8

What If the Answer to Technology Overload Isn't Better Tech But Real Relationships? - Nate Otey

In this episode, Priten speaks with Nate Otey, a ninth grade humanities, statistics, and calculus teacher at Boston Trinity Academy, a school that has deliberately chosen a low-tech approach. Nate shares how his school has banned phones for students up to 10th grade, with parents and students largely on board. The conversation explores what happens when a school community prioritizes relationality over connectivity, why friction in human relationships might be essential rather than something to eliminate, and how faith-based education can provide a framework for understanding why face-to-face connection matters. Nate reflects on the practical challenges of enforcing device policies, how teachers can use AI ethically while modeling integrity for students, and the coming wave of emotionally convincing AI that may challenge our understanding of human relationships.

Key Takeaways:
Students often want the boundaries. Research shows many students know phones are bad for them and appreciate when schools take them away—they just can't opt out alone due to social pressure.
Use the "would I tell my students?" heuristic. Teachers can ethically use AI for lesson prep and practice exercises, but should avoid using it for grading or tasks where students would feel cheated if they knew.
Relationships require friction. Technology is designed to eliminate friction, but meaningful human connection is inherently awkward and difficult—that's what makes it valuable.
Consistent enforcement matters more than strict rules. Students accept boundaries when they're applied fairly and uniformly; arbitrary enforcement breeds resentment.
The next wave isn't intellectual—it's emotional. AI that perfectly imitates consciousness will soon challenge how we help students distinguish between real relationships and convincing simulations.

About Nate Otey:
Nate served as a Fellow in the Harvard Department of Philosophy for over five years, during which time he helped to found ThinkerAnalytix as Lead Instructor and later as COO, among other roles. Nate authored or co-authored much of the core ThinkerAnalytix curriculum and course offerings, including courses for HarvardX, HGSE, and LSAC. Nate currently teaches AP Statistics, AP Calculus AB, and 9th grade Humanities at Boston Trinity Academy in Hyde Park.
#7

How Can AI Support Inclusive Education? - Tamsyn Smith

In this episode, Priten speaks with Tamsyn Smith, Senior Learning Designer and Team Lead at the University of Southampton, who is halfway through a PhD investigating how generative AI can support inclusive education. Tamsyn shares her journey from childhood programming to classroom teaching to higher ed learning design, and reflects on how COVID-19 and AI arrived as dual "cataclysmic shifts" that educators are still navigating. The conversation explores data privacy pitfalls, the myth of digitally-native students, and why Universal Design for Learning matters more than ever—ultimately landing on a hopeful note: most students are ethical, and the real question isn't whether they're cheating, but whether we're giving them meaningful reasons to learn.

Key Takeaways:
Students still need foundational skills. Just as calculators didn't eliminate the need to understand math, AI doesn't eliminate the need to write well—you can't evaluate output you couldn't create yourself.
Don't assume students are cheating. Research shows most students use AI ethically; if they're over-relying on it, ask whether assignments are meaningful or just busy work.
Read the terms and conditions. Before asking students to use any tool, educators must understand what data it collects and where that data goes.
Use a simple privacy heuristic. If you wouldn't post it on social media, don't put it into a generative AI tool.
Technology should open doors, not add burdens. Universal Design for Learning means educators do the work to minimize barriers—not hand students another tool and call it support.

About Tamsyn Smith:
Tamsyn Smith is a Senior Learning Designer and Team Lead at the University of Southampton, where she has worked for over 13 years supporting staff and students with educational technology and digital capabilities. She works closely with academic staff and leads a team delivering training on emerging technologies, including generative AI, and she has particular expertise in inclusive education practices. Tamsyn holds SCMALT membership and is a CMALT assessor, and her work has been recognised through Vice Chancellor's Awards and an AdvanceHE Collaborative Award for Teaching Excellence. Tamsyn is currently pursuing a PhD in E-Research and Technology Enhanced Learning at Lancaster University, where her research explores how educators can use generative AI to support inclusion through Universal Design for Learning (UDL) implementation. Her work draws on Cultural Historical Activity Theory (CHAT) to examine the complex relationships between technology, pedagogy, and inclusive practice in higher education contexts.
#6

How Might AI Support Early Education Interventions in India? - Ratna Gill

In this episode, Priten speaks with Ratna Gill, who supports the partnerships team at Rocket Learning, a nonprofit tackling early childhood education in India through WhatsApp. Ratna shares her journey from child safety work to early childhood education and explains how Rocket Learning delivers bite-sized educational content to caregivers and Anganwadi workers serving 600 million children who lack access to early stimulation. The conversation explores their AI-powered personalized tutor, the importance of cultural contextualization, and what ethical ed tech looks like when working with resource-constrained communities—ultimately landing on a hopeful note: technology can expand access to education without replacing the irreplaceable human connections that make learning joyful.

Key Takeaways:
Meet communities where they already are. Rocket uses WhatsApp because families are already there—no new apps, no tech burden.
Technology should supplement, not replace, human interaction. APU is capped at 15-20 minutes daily to preserve parent-child engagement.
Context matters more than content. Effective ed tech adapts cultural references, not just language, for each region.
Test slowly, learn deeply. Field testing revealed that background noise breaks speech-to-text—rushing would have shipped a broken product.
Parents are the most transformative tool. AI can model joyful pedagogy, but it can't replace human connection.

About Ratna Gill:
Ratna Gill serves as Lead, Special Projects at Rocket Learning. Previously the Head of Government Partnerships at Mumbai-based child safety nonprofit Aangan, she worked with state governments across the country to train school administrators and police officers to create safer communities for children in hotspots for child trafficking. Ratna graduated from Harvard Kennedy School in 2022, where she studied the impacts of parental labor migration on education and development outcomes for adolescents. She has a B.A. in Economics from Harvard College.
#5

How Can We Center Pedagogy During the AI Tech Wave? - Lance Eaton

In this episode, Priten speaks with Lance Eaton, Senior Associate Director of AI in Teaching and Learning at Northeastern University, about navigating the integration of AI and educational technology in higher education. Lance shares his 15-year journey through instructional design—from community colleges to Ivy League institutions—and offers practical wisdom on how educators can thoughtfully adopt AI without losing sight of pedagogy. The conversation explores everything from reflection bots and embodied learning to the tension between commercial tech platforms and educational values, ultimately landing on a hopeful note: we've navigated dozens of technological shifts before, and we can figure this one out too.

Key Takeaways:
Start small and ground AI in learning goals. Like any instructional design challenge, begin with what you want students to demonstrate—then find where AI fits naturally.
Use AI to deepen reflection, not replace it. A "reflection bot" that asks follow-up questions can help students dig deeper than a one-time submission ever could.
Pick two or three tools and stick with them. The app explosion taught us this lesson—chasing every new AI tool leads to burnout, not better teaching.
AI literacy is discipline-specific. Every field will be impacted differently; the goal isn't generic AI skills but understanding what AI means for your particular context.
We've been here before. Higher ed has absorbed 80+ technologies since the 1970s. The playbooks exist—we just need to adapt them for this moment.

About Lance Eaton:
Dr. Lance Eaton is the Senior Associate Director of AI in Teaching and Learning at Northeastern University. His work engages with the possibility of digital tools for expanding teaching and learning communities while considering the profound issues and questions that educational technologies open up for students, faculty, and higher ed as a whole. He has engaged with scores of higher education institutions about navigating the complexities and possibilities that generative AI represents for us at this moment. His musings, reflections, and ramblings on AI and Education can be found on his blog: AI + Education = Simplified | Lance Eaton, Ph.D. | Substack
#4

What Are Some Ethical Tech Integration Strategies for K-12? - Justin Cerenzia

In this episode, Priten speaks with Justin Cerenzia, Executive Director of the Center for Teaching and Learning at Episcopal Academy, about navigating the complex ethical decisions administrators face when integrating AI and educational technology in K-12 schools. Justin shares his journey from early AI adoption with GPT-3.5 to implementing thoughtful frameworks for tech integration, discussing everything from AI tutors and cell phone policies to the tension between preparing students for the workforce versus fostering deep learning. The conversation explores how schools can balance innovation with pedagogy, the importance of making student thinking visible, and why ethical decision-making requires moving beyond simple policies to embrace experimentation, nuance, and a design mindset that puts learning outcomes first.

Key Takeaways:
There's no shared AI experience. Different platforms and access levels mean students and teachers use fundamentally different tools—making unified policies nearly impossible.
AI detection is a losing battle. Focus instead on making student thinking visible through conversations and walled-garden tools like Flint.
"Do no harm" cuts both ways. Schools must prevent misuse while also ensuring students aren't left behind on AI literacy.
Understand learning science before deploying AI. The key question: are students cognitively offloading the task, or genuinely learning?
The future is a design problem, not a prediction problem. Decide what you want from AI and build toward it—don't just react to updates.

About Justin Cerenzia:
Justin Cerenzia is the Buckley Executive Director of the Center for Teaching & Learning at The Episcopal Academy, where he leads work at the intersection of cognitive science, teacher inquiry, and AI-informed practice. His work centers on translating research into practical, human-centered tools that improve teaching and learning at scale.
#3

What Does Values-Driven Education Technology Policy Look Like? - Joe Carver

In this episode, Priten talks with Joe Carver, Associate Head of School at The Meadows School. Joe shares his unconventional journey from debate coach to technology director to school leadership. He discusses his philosophy of values-driven technology integration—one that involves all stakeholders, resists both hasty adoption and knee-jerk resistance, and centers the teacher-student relationship. He explores how schools can thoughtfully embrace AI and educational technology by using core values as a North Star, building cultures of innovation through targeted adoption, and preparing educators to stay conversant with emerging tools. Joe emphasizes the importance of reverse-engineering what students miss in digital-first communities and advocates for data-informed, iterative decision-making that protects what matters most while navigating what's coming.

Key Takeaways:
Schools shouldn't rush to adopt every new technology. Taking time for thoughtful due diligence and involving all stakeholders (teachers, division directors, student support services) leads to better outcomes than being the first to implement.
Technology decisions should trace back to institutional core values. If a tool can't be connected to values like inquiry or community, it's a hard no.
Implement a three-tier approach: no access for the youngest students, guided access for middle grades, and unfettered access for upper school.
Educators must remain conversant in emerging technologies even if they choose not to adopt them. You can't effectively guide students away from tools you don't understand.
Today's students are building digital communities without the face-to-face foundation previous generations had. Schools must explicitly teach digital norms and social skills that used to develop naturally through in-person interaction.

About Joe Carver:
Joseph Carver is the Associate Head of School at The Meadows School, a PreK-12 independent school in Summerlin, Nevada, and the Head of School-Elect for the Academy of the Sacred Heart in Bloomfield Hills, Michigan. Previously, Joseph served as the Chief Innovation Officer at The Meadows and the Director of Technology at Carrollton School of the Sacred Heart. A sought-after speaker and panelist on technology, he has presented at national conferences, including ATLIS, NDCA, FCIS, and FETC, on topics ranging from social media in schools to ongoing education for non-instructional staff. Joseph has worked alongside the Center for Transformative Teaching and Learning. His focus is on a data-driven, mind and brain education approach to decision-making in all aspects of school life. Additionally, he is a certified Situational Leadership facilitator and a graduate of both the ATLIS Leadership Institute and the Center for Humane Technology’s Foundations of Humane Technology program. Joseph is also the founder and host of “At the Meadow”, a popular podcast focused on innovation in independent schools. Joe’s experience as instructional faculty, coach, and administrator at Carrollton School of the Sacred Heart has profoundly impacted his “mission-informed” approach to technology integration in schools. Joe currently oversees Technology and Innovation, Advancement, Communications, Athletics, and Admissions at The Meadows School. Joe was unanimously selected as a recipient of the 2025 ATLIS Pillar Award, recognizing his many contributions and long-standing service to and leadership within the independent school educational technology community.
#2

Can We Teach Critical Thinking and Not Mindless Clicking? - Aidan Kestigian

Host Priten Soundar-Shah speaks with Aidan Kestigian, COO of ThinkerAnalytix, about why nearly half of college graduates lack basic reasoning skills and how explicit instruction in critical thinking can address this gap. They discuss the ethical commitments that should guide EdTech development, including prioritizing pedagogy over gamification, maintaining transparency with students, and building genuine relationships with educators.

Key Takeaways:
Critical thinking requires explicit instruction—it's not automatically developed through traditional coursework.
Ethical EdTech means putting pedagogical goals first, not engagement metrics or "stickiness."
Reasoning is inherently difficult and requires sustained practice; shortcuts undermine real learning.
Direct accountability between EdTech developers and educators leads to better products and outcomes.
Mission-driven organizations can prioritize both growth and integrity when the mission guides decision-making.

Relevant Links:
thinkeranalytix.org
thinkarguments.org
ethicaledtech.org

About Aidan Kestigian:
Aidan Kestigian, Ph.D., is Chief Operating Officer for ThinkerAnalytix, an education non-profit organization, and a visiting Associate of the Department of Philosophy at Harvard University. Aidan received her Ph.D. in Logic, Computation, and Methodology from Carnegie Mellon University in 2018 and taught logic and ethics to college students for a decade before and during her time at TA. She is the author of Democratic Decisions in a Critical Thinking Crisis (2025).