Making space for the questions that matter.
All Episodes

#17

Who Builds the Tools Teachers Are Asked to Use? - Yanni Chen

In this episode, Priten and Yanni Chen explore what it actually looks like to build AI tools that support learning rather than shortcut it. Yanni, a master's student at the Harvard Graduate School of Education and product developer at Deep Brain Academy, shares her experience creating an AI math tutor with a genuine commitment to scaffolding, cultural inclusivity, and keeping teachers central to the learning process.

Key Takeaways:

- Scaffolding matters more than speed. AI tools often give direct answers because that's what they're engineered for. But real learning requires guiding students through the thinking process—something teachers do that AI cannot replicate. Educators should look for tools that provide step-by-step guidance rather than instant solutions.
- Teacher skepticism is healthy—and often fades with use. Most teachers approach AI with skepticism, which is appropriate. But just as PowerPoint and video were once new classroom tools, AI becomes less intimidating through hands-on experience. The recommendation: start with personal, low-stakes use before thinking about classroom implementation.
- Gen Alpha's AI fluency makes teacher presence more important, not less. Students are already fluent AI users. This doesn't diminish the teacher's role—it elevates it. Teachers need to help students navigate bias, develop critical thinking, and understand when AI is appropriate and when it isn't.
- We lack clear guidelines—so educators must set their own. In the absence of federal or state AI policies, individual educators need to establish clear ethical boundaries around data security, safety, and appropriate use. The technology is moving faster than regulation can keep up with it.
- Creative technologies extend beyond chatbots. From 3D printing and laser cutting that let students build physical objects to AR/VR simulations for medical training, there's a whole landscape of educational technology that emphasizes hands-on learning and creative exploration—not just AI conversation.

About Yanni Chen:

Yanni Chen is an Ed.M. candidate at the Harvard Graduate School of Education, where she studies Learning Design, Innovation, and Technology. She earned her B.S. from Boston University, majoring in Public Relations and minoring in Applied Human Development. Her work sits at the intersection of education, product management, AI, XR, and edtech. She focuses on student experience and the design of educational products that foster engagement, growth, and meaningful learning outcomes. Drawing from both her academic training and her work in edtech, Yanni brings the perspective of both a student and a product manager to conversations about teaching, learning, and educational innovation.
#16

Is Surveillance Culture Ruining Trust in Schools? – Jessica Maddry

In this episode, Priten and Jessica Maddry examine how surveillance culture and rigid policy enforcement are eroding trust and genuine learning in schools. From cell phone bans that criminalize normal behavior to reading programs that strip away the joy of stories, they explore how the gap between written policies and their ethical implementation has created environments of control rather than connection. The conversation spans zero-tolerance enforcement, AI detection tools, and the critical importance of human relationships in education.

Key Takeaways:

- Policies should serve ethics, not replace them. Following rules isn't the same as doing the right thing. When a student has their phone off in their pocket but gets suspended because it isn't in their backpack, the punishment no longer serves the policy's original intent of reducing distraction.
- Surveillance culture damages the learning environment. Constant monitoring and zero-tolerance enforcement create an atmosphere where students feel unsafe and disengaged. When students associate school with punishment rather than growth, absenteeism and mental health crises follow.
- Deep literacy is becoming a privilege again. Many students no longer read books from start to finish, instead consuming only passages for standardized tests. This loss of story-based learning strips away both the joy of reading and critical thinking skills.
- AI detection is an unwinnable arms race. The cycle of AI detectors, humanizers, and humanizer-detectors demonstrates a fundamental misunderstanding of how to address academic integrity—tools cannot replace the trust and relationships needed for genuine learning.
- Human connection is irreplaceable in education. Whether it's a professor scrapping class to process a difficult moment with students, or a teacher stepping aside to comfort a struggling child, the most impactful educational experiences come from authentic human relationships—something no technology can replicate.

About Jessica Maddry:

Jessica Maddry is an educator, strategist, and cofounder of BrightMinds AI, where she works with schools and districts to integrate AI ethically, intentionally, and with educators at the center. Her work focuses on helping systems move beyond hype toward human-centered, purpose-driven design, supporting policy, implementation, and systems change so that technology strengthens learning, equity, and student well-being rather than undermining them.
#15

What Does Representative Governance Mean for Our Future? - Nathán Goldberg

In this episode, Priten speaks with Nathán Goldberg, a philosopher-statistician whose career weaves together two unlikely threads: professional soccer and democratic activism. As Vice President of the US Soccer Federation and founder of both Harvard Forward and Bluebonnet Data, Nathán has spent years thinking about who gets to sit in the rooms where decisions are made—and why it matters.

Key Takeaways:

- Voting isn't enough—perspective is. The people impacted by decisions need to be in the rooms where those decisions get made.
- Outsiders can win. Harvard Forward gathered 4,500 signatures on parchment paper and won board seats, and a decade of resistance to divestment collapsed within a year.
- Institutions resist until they can't. Harvard ignored them, then attacked them. It didn't work.
- The model scales. The same playbook worked at Yale and Penn State. One elected climate scientist shifted Penn State's investment policy.
- Soccer has the same problem. Four million youth players, yet no recent youth players in governance.

About Nathán Goldberg:

Born and raised in México, Nathán Goldberg Crenier is a new(ish) American who is passionate about using the power of democracy and sports to make the world a better place. He has been recognized on the Forbes 30 Under 30 list for his work in progressive politics and nonprofit management, in the New York Times for his work as an electoral organizer and climate advocate, and on the Sports Business Journal New Voices Under 30 list for his work as a soccer executive. He is also a proud recipient of the 2025 Paul & Daisy Soros Fellowship for New Americans as he pursues his JD at Harvard Law School, having graduated with a joint degree in philosophy and statistics from Harvard College, where he played for and captained the D1 varsity men's soccer team.
#14

How Do We Teach the Journey When AI Offers the Destination? - Varun Gupta

In this episode, Priten speaks with Varun Gupta, an Accounting and Economics professor at Wharton County Junior College in the Houston area who has been teaching since 2007. Varun is refreshingly candid about his own complicated relationship with AI—he uses it extensively for lesson planning, assignment creation, and communication, but worries deeply about what happens when students skip the grind entirely.

Key Takeaways:

- The helicopter problem is real. Using AI to get answers without effort is like taking a helicopter to the top of Mount Everest. You get there, but you missed the point. The grind, the failure, the figuring-it-out—that's where the learning lives.
- Cognitive offloading is already happening to teachers, too. Varun no longer does mental math. He uses GPS to navigate to an airport he's been to hundreds of times. AI is next. The concern isn't hypothetical—it's already underway for him personally.
- Post-COVID is the bigger shift, not post-ChatGPT. Students who came through COVID developed habits of not showing up, not following through, and not asking questions. That behavioral shift is more visible than any change attributable to AI alone.
- The stress is gone—and that's the tell. Before ChatGPT, students peppered him with term paper questions all semester. Now? Silence. They're not less anxious because they're more prepared. They're less anxious because they've already decided how they'll produce the paper.
- There's inherent hypocrisy in the dynamic—and it's worth naming. Using AI to create assignments while discouraging students from using it to complete them isn't perfectly clean. Varun acknowledges it. The distinction is in where the journey matters: for the teacher creating the prompt, or for the student doing the thinking.
- The human value is in the face-to-face. In asynchronous online courses, the line between professor and bot is thin. Where Varun sees his irreplaceable value is in the in-person relationship—lived experience, empathy, career conversations, and the daily modeling of what professional effort actually looks like.

About Varun Gupta:

Varun Gupta, a.k.a. the "Knotty" Economist, is a dynamic and engaging economics professor with 19 years of experience making complex concepts both accessible and exciting. He has spent his entire career at Wharton County Jr. College (i.e., the "other" Wharton). Known for his fun and energetic presentation style and ever-present elaborate necktie, he has delivered insightful talks at conferences, college professional development events, and civic groups—both live and virtual. A passionate educator, Varun specializes in applying fundamental economic principles to real-world decision-making and classroom engagement. Whether tackling macro, micro, or the economics of everyday life, he brings a unique mix of expertise and humor that keeps audiences learning and laughing. When he's not using economic concepts to explain the world, he spends time catering to his 4-year-old goldendoodle, Cinnamon.
#13

Can We Preserve Core Classroom Values While Integrating Ed Tech? — Brian Tash

In this episode, Priten speaks with Brian Tash, an elementary school teacher with nearly 30 years of experience who has witnessed the complete arc of education technology—from Scantrons to Google Classroom to AI. Brian shares how he balances technology integration with preserving fundamental skills like reading stamina and handwriting. The conversation covers his transparent approach to using AI for faster student feedback, why he's concerned about declining empathy and attention spans post-COVID, how he teaches prompt engineering to third and fourth graders, and his hope that educators will become more mindful about why they're using technology rather than just adopting everything new. He argues that personal connection, problem-solving, and collaboration are what students need most—and those can't come from a screen.

Key Takeaways:

- Follow the 80-20 rule with AI. AI gets you 80% of the way—the other 20% is you adding your own elements. This applies to teachers giving feedback and students creating work.
- Transparency builds trust. When students understand why you're using AI for feedback, they embrace it. Brian's study found 90% of students were in favor once they understood the reasoning.
- Technology can't replace human connection. Students need to learn how to talk to each other, problem-solve collaboratively, and develop empathy—skills that don't come from screens.
- Stamina is the real crisis. Post-COVID students struggle to push through hard things. The growth mindset isn't there. Writing a paragraph makes their hands hurt.
- Teach prompting, not just usage. Focus on prompt engineering—how to get what you want from AI. Experiment with students: change the words, add details, see what happens.
- Standards-based grading may help. With clear standards, teachers can focus instruction, use AI to target specific skills, and have more time for the human elements once mastery is achieved.