Making space for the questions that matter.
All Episodes
#22

Can You Still Teach Critical Thinking? - Paul Blaschko

In this episode, Priten speaks with Paul Blaschko, an assistant teaching professor of philosophy at the University of Notre Dame. Paul's work sits at the intersection of liberal education, critical thinking instruction, and course design. The central question driving their conversation: in an era of AI that can generate plausible-sounding arguments and explanations, can we still teach students to think critically—or must we fundamentally reimagine what critical thinking means?

Key Takeaways:

EdTech should solve existing problems, not create new ones. Paul approaches technology as a tool only when he's already facing a pedagogical challenge. This shifts the question from "what can this tool do?" to "what does my classroom need?"

YouTube explainers preceded ChatGPT in reshaping how students research and learn. Long before AI, students were outsourcing understanding to video tutorials rather than wrestling with dense texts, revealing a deeper shift in how students approach knowledge.

Critical thinking instruction requires direct practice with real arguments, not shortcuts around difficulty. There's no substitute for students actually constructing and defending their own positions through dialogue and written work, even when AI can do it faster.

Scaling critical thinking instruction demands new infrastructure, not just new pedagogy. Paul and his team are testing whether platforms like Think Arguments can help instructors manage the feedback and iteration needed to teach reasoning at scale across institutions.

AI may not replace the professor's role so much as expand it into explicit curation and judgment. In a world where explanations are abundant, the teacher's value shifts toward deciding which frameworks matter and helping students evaluate competing arguments.

Paul Blaschko is an assistant teaching professor at the University of Notre Dame. He teaches God and the Good Life, a course dedicated to asking the big questions about meaning, morality, and faith. He also serves as the Director of the Sheedy Family Program in Economy, Enterprise, and Society, a program devoted to exploring how the humanities can help us find meaning in work. With Meghan Sullivan, he has co-authored The Good Life Method (Penguin Press, 2022), a book about how philosophy can help us live better lives. He is currently working on a book on the philosophy of work (under contract with Princeton University Press), and is the co-founder of a Notre Dame-based tech start-up that aims to solve problems with dialogue on the internet.
#21

What Is Age-Appropriate AI in Education? - Megan Barnes

In this episode, Priten speaks with Megan Barnes, a PhD student in learning technologies at the University of North Texas and a K-12 librarian with 14 years of experience, about what age-appropriate AI in education actually means. Megan holds dual roles as library director and director of educational technology for early childhood through fourth grade in Dallas, and her research draws on cognitive and affective neuroscience to evaluate how emerging tools interact with child development. The conversation moves through the real-versus-synthetic distinction that young children struggle with, the attention economy driving AI product design, information literacy as a foundation for AI literacy, and why curiosity may be the most important thing educators need to protect.

Key Takeaways:

Before children can use chatbots, they need a solid concept of real versus not real. Most kindergartners interact with AI through voice and animated characters, adding layers of anthropomorphization that make it nearly impossible for them to distinguish a computer from a person. Megan argues that chatbot-based AI is not developmentally appropriate at this age, and any exposure should be adult-controlled and side-by-side, consistent with American Academy of Pediatrics guidance on co-viewing media.

The attention economy is becoming a relational economy—and children are the target. The same design logic that removed page numbers from Google search results is now being applied to conversational AI. If a child builds five years of chat history with a platform before adulthood, that relationship becomes a powerful lock-in mechanism. Megan also raises the concern that chat histories are now being used to drive advertising, meaning the tools students use for learning are simultaneously selling to them.

AI literacy in elementary school means information literacy, not prompt engineering. Rather than teaching young students how to use AI tools directly, Megan focuses on helping them understand who generates information, who validates it, and where AI is already present in their daily lives. During morning announcements, she points out the background remover tool and tells students, "This is AI right here." The goal is building foundational skills for evaluating any new technology, not training on a specific product.

Every generation of creative technology triggers the same panic—and the pattern holds. Megan draws on her background as a violinist and recording arts student. When Apple's GarageBand launched during her final semester, her synthesizer professor declared it the downfall of music. Instead, it democratized creativity. More people creating doesn't mean everything produced is good, but the tool itself is not the threat. AI follows the same arc.

Curiosity doesn't need to be taught—it needs to be protected. Young children arrive with natural wonder intact. Megan distinguishes between formal classroom learning and the informal learning space of the library, where autonomy and exploration still drive engagement. The job of early education is not to instill curiosity but to give children frameworks for approaching new things with wonder while still thinking critically, so that instinct survives into adulthood.

Megan E. Barnes is a librarian with over 14 years of experience, as well as a Ph.D. student in Learning Technologies at the University of North Texas. Her research focuses on ethical considerations in educational technology adoption and curriculum design. She is currently a research assistant developing curriculum for edge AI and is an ed-tech leader and library director at an independent school. She believes that librarians are information professionals uniquely suited to exploring the intersection of information, technology, and pedagogy.
#20

Is AI Literacy the New Professional Credential? - Anna Zendell

In this episode, Priten speaks with Anna Zendell, a social worker turned educator who oversees healthcare management, human services, and wellness programs at Bay Path University, about what it takes to rebuild a curriculum around AI when the stakes are patient outcomes. Zendell is currently piloting an AI-enhanced program from the ground up, designing courses where a closed AI system mentors students through interactive activities while faculty retain grading authority and instructional presence. The conversation covers why traditional learning outcomes don't translate cleanly into AI-driven instruction, how adult learners in healthcare face unique pressure to acquire AI literacy for careers that already demand it, and the trust gaps between students, faculty, and administrators that complicate adoption.

Key Takeaways:

Curriculum doesn't absorb AI -- it has to be rebuilt for it. Zendell found that standard learning outcomes written with Bloom's Taxonomy are too broad for an AI system to use as mentoring scaffolds. Her team breaks each outcome into granular component steps, essentially teaching the AI how to guide a student the way an experienced instructor would.

AI is the first classroom technology to split faculty, students, and administration into opposing camps. Some faculty add zero-tolerance rubric rows while others experiment eagerly. Students range from uneasy to already reliant. Zendell describes a three-way perception gap she hasn't seen with any previous technology, including the transition to online learning.

Healthcare employers aren't waiting for higher ed to figure this out. Zendell regularly scans job postings for healthcare leadership roles and finds AI literacy and AI tool proficiency appearing with increasing frequency, particularly in informatics, clinical data analytics, and healthcare finance. Her students are asking for these skills and feeling the urgency themselves.

A student tester changed the entire design process. Zendell recruited an informatics student with an interest in healthcare AI to take each module as a learner before it goes live. That feedback loop -- where the student flags where prompts mislead or where the AI drifts into unproductive territory -- became central to how the team iterates on course design.

The real danger isn't AI itself -- it's losing the habit of questioning it. Zendell's deepest concern is dependency: that convenience erodes the capacity to critically evaluate AI output. In healthcare especially, where students might default to ChatGPT instead of dedicated clinical interfaces, the gap between accessible and appropriate matters.

Anna Zendell is the program director for the MS in Healthcare Administration program at Bay Path University. For over a decade, she has directed degree programs in healthcare administration, health sciences, and public administration. She teaches regularly at the graduate and undergraduate levels. A major emphasis of her work is ensuring equitable and accessible higher education for students of all abilities by leveraging the power of online learning and the unique attributes that adult learners bring to their studies.

Prior to her academic administration and teaching work, Anna oversaw operations and evaluations for grant-funded research projects focusing on issues such as walkable communities, community health education, and dementia interventions. She developed enduring interdisciplinary partnerships with organizations, local governments, and community members. She provided professional development and continuing education for healthcare professionals. Key focus areas in Anna’s work include fostering meaningful inclusion in workplaces and communities and addressing health disparities, particularly around chronic illness and health promotion.

Anna earned her doctorate and master’s degrees in social work at the University at Albany with a focus on management and community systems.
#19

What's the Line Between Research Integrity and Using AI as a Tool? - Kari Weaver

In this episode, Priten speaks with Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries (OCUL), about why existing tools like citation and methodology sections can't capture how AI is actually being used in research and learning -- and what a structured disclosure standard might look like instead. Weaver, who also teaches graduate students at the University of Toronto and created the AID Framework for AI disclosure, walks through the practical and philosophical challenges of building trust infrastructure for an ecosystem that doesn't have bright lines yet. The conversation covers disciplinary divides in how AI use is understood, the global effort to establish a disclosure standard, and why the authorship question remains genuinely unresolved.

Key Takeaways:

Citation can't bridge the gap between AI-generated ideas and their sources. Traditional citation connects ideas to a discrete, traceable origin. AI severs that connection by synthesizing across sources in ways that can't be pinpointed. Weaver notes this is structurally similar to what Western scholarship has long done to traditional and lived knowledge -- and now researchers are experiencing that same disconnection applied to their own work.

A global AI disclosure standard is actively being built. Weaver is co-leading a large-scale effort with the European Network of Research Integrity Offices, the International Science Council, and the Committee on Publication Ethics to develop a consistent disclosure framework through the World Conferences on Research Integrity. The goal is to stop researchers from having to tailor disclosures to each journal's idiosyncratic requirements.

AI use in research often falls outside methodology entirely. A researcher translating articles from an unfamiliar language using AI is a real and beneficial use case, but it doesn't fit neatly into a methods section. These peripheral uses still shape how researchers interact with and think about their material, which is exactly why disclosure needs to be broader than methodological reporting.

Separating the disclosure from the assignment makes students more likely to do it. At the undergraduate level, voluntary disclosure is hard to get. Weaver recommends having students submit a disclosure rubric alongside their assignment in a separate dropbox. This treats disclosure as a professional skill worth practicing on its own, and it gives instructors a reference point if questions arise about how an assignment was produced.

Authorship will likely settle at the disciplinary level, not the universal one. Weaver is candid that she doesn't have an answer to the authorship question. In qualitative research, she sees coding as irreplaceable human work. In STEM fields, AI-assisted analysis may be more readily accepted. She expects discourse communities will develop their own standards -- but that shouldn't delay building consistent disclosure practices across all of them.

Kari D. Weaver (she/her) holds a B.A. from Indiana University, an M.L.I.S. from the University of Rhode Island, and an Ed.D. in Curriculum and Instruction from the University of South Carolina, where her dissertation examined the impact of professional development interventions on academic librarian teaching self-efficacy. She is the Program Manager, Artificial Intelligence and Machine Learning with the Ontario Council of University Libraries, on secondment from her permanent role as the Learning, Teaching, and Instructional Design Librarian at the University of Waterloo. Additionally, Dr. Weaver is a continuing sessional faculty member in the Department of Leadership, Higher, and Adult Education at the Ontario Institute for Studies in Education (OISE) at the University of Toronto. Her wide-ranging research background includes the study of accessibility in online learning, information literacy, academic integrity, and misinformation. She is widely recognized as an expert in AI citation, attribution, and disclosure practices for her development of the Artificial Intelligence Disclosure (AID) Framework and is currently the co-lead of the 2026 World Conferences on Research Integrity Focus Track: Toward a Global Reporting Standard for AI Disclosure in Research.
#18

What Does Medicine Look Like When AI Is in the Room? - Jack Kincaid

In this episode, Priten speaks with Jack Kincaid, a third-year medical student at Harvard Medical School, about navigating clinical training in an era of powerful AI tools. Jack shares his perspective on Open Evidence (a medical LLM), Harvard's AI Sandbox, and the tension between leveraging new technology and developing as a physician.

Key Takeaways:

AI tools can accelerate diagnostic reasoning—but training still requires struggle. Platforms like Open Evidence can reliably synthesize evidence and suggest diagnoses, but reflexively reaching for them risks stunting the critical thinking that clinical practice demands. The goal should be building heuristics strong enough to stay present with patients, not offloading cognition.

Transparency about surveillance matters. From Canvas quiz monitoring in college to clinical logging systems, students often don't know what's being tracked. Jack's experience as a TA revealed the extent of visibility administrators have—and raised questions about whether strategic ambiguity helps maintain standards or just breeds anxiety.

Institutions are starting to take AI governance seriously. Harvard Medical School's AI Sandbox gives trainees access to multiple LLMs in a secure environment that protects curriculum materials and personal data (though it's not HIPAA compliant). This kind of infrastructure signals that leadership is thinking carefully about responsible use.

Career concerns about AI replacement are real. For students considering imaging-heavy specialties like radiology or radiation oncology, the specter of AI "scope creep" is a recurring topic in conversations with attendings and senior trainees. It's not paranoia—it's a practical factor in career planning.

Discovery often happens peer-to-peer. Jack first learned about Open Evidence by glancing at a classmate's screen during a simulation exercise. The most impactful tools aren't always introduced through formal curricula—they spread through observation and word of mouth.

John “Jack” Kincaid is a trainee in the Harvard/MIT MD-PhD Program at Harvard Medical School interested in the intersection of diet and disease. Jack received B.A. (Nutritional Biochemistry and Metabolism) and M.S. (Nutrition) degrees from Case Western Reserve University in 2021, where he helped investigate the impact of obesity and obesogenic diet on cancer development in the laboratory of Nathan Berger at Case Comprehensive Cancer Center. Concomitantly, Jack worked with a variety of food access and health literacy groups including CWRU Food Recovery Network and Cooking Matters STL. After leaving CWRU, Jack relocated to the UK to train as a postgraduate in the group of Sir Stephen O’Rahilly at the University of Cambridge Institute of Metabolic Science, studying the neuroendocrine regulation of human appetitive behavior and body weight. As a physician scientist, Jack hopes to leverage basic science and clinical medicine to help address the growing burden of diet-associated illnesses as well as develop safe, effective treatments for metabolic disease.