In this episode, Priten speaks with Justin Cerenzia, Executive Director of the Center for Teaching and Learning at Episcopal Academy, about the complex ethical decisions administrators face when integrating AI and educational technology in K-12 schools. Justin shares his journey from early AI adoption with GPT-3.5 to implementing thoughtful frameworks for tech integration, covering everything from AI tutors and cell phone policies to the tension between preparing students for the workforce and fostering deep learning. The conversation explores how schools can balance innovation with pedagogy, why making student thinking visible matters, and why ethical decision-making requires moving beyond simple policies to embrace experimentation, nuance, and a design mindset that puts learning outcomes first.
Key Takeaways:
- There's no shared AI experience. Different platforms and access levels mean students and teachers use fundamentally different tools—making unified policies nearly impossible.
- AI detection is a losing battle. Focus instead on making student thinking visible through conversations and walled-garden tools like Flint.
- "Do no harm" cuts both ways. Schools must prevent misuse while also ensuring students aren't left behind on AI literacy.
- Understand learning science before deploying AI. The key question: are students cognitively offloading the task to the tool, or genuinely learning from it?
- The future is a design problem, not a prediction problem. Decide what you want from AI and build toward it—don't just react to updates.
About Justin:
Justin Cerenzia is the Buckley Executive Director of the Center for Teaching & Learning at The Episcopal Academy, where he leads work at the intersection of cognitive science, teacher inquiry, and AI-informed practice. His work centers on translating research into practical, human-centered tools that improve teaching and learning at scale.