Priten: Welcome to Margin of Thought, where we make space for the questions that matter.
I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming.
We spend a lot of time asking whether students should use AI, but what happens when that student is a doctor in training and the stakes are someone's health?
My guest today is Jack Kincaid, a third-year medical student at Harvard Medical School, currently in the thick of clinical clerkships at Mass General Hospital.
He came up through a generation where Canvas quietly tracked everything, survived the overnight pivot to fully digital learning during
COVID, and is now navigating a clinical world where large language models can suggest a diagnosis before a senior resident can.
We're going to talk about Harvard's AI Sandbox, what it means to develop as a physician when powerful tools are always one tab away, and why he's genuinely worried about what AI might do to the careers of the doctors who come after him.
Let's begin.
Jack: Um, I am a third year medical student at Harvard Medical School doing my clerkships right now.
So, I have moved from classroom learning at Harvard Medical School to, uh, more clinical application as a part of various services and teams at Massachusetts General Hospital.
Priten: Before we dive into your other current roles, can you think back to the very first time you as a student ever interacted with, quote unquote, an ed tech tool, whatever that means to you?
Jack: I would say maybe the use of Canvas as an educational platform.
That started in college, and was mainly used just as an assignment and grade book database.
I would say it was marginally used.
In fact, a number of my classes at university at that time were not documented on Canvas.
And then COVID hit, which necessitated a pretty large transition to digital platforms.
And so then all classes became integrated with Canvas at that time, at my university.
Priten: At that time, do you remember having a particular reaction to it?
Were you feeling neutral about it?
Were you excited about it?
Was it just some annoying new thing you got to learn?
Jack: I think anytime that a new technological tool is introduced, I have a lot of hesitance with respect to whether or not the little issues have been ironed out.
In the case of Canvas, I remember particularly when COVID started, all of my quizzes and tests started being administered through Canvas.
And so there were a lot of questions that I felt that myself and my peers had about whether or not, for example, answers would be documented properly, or what the functionality was for monitoring our computer activity while taking a quiz.
Like what if I accidentally clicked off the tab?
Like, would I be reprimanded for that?
Or is my teacher able to see every small action? Not necessarily during test-taking, but is my teacher able to see that I'm taking 30 hours on a homework assignment versus five minutes and just speeding through?
So there were all these things where I wasn't really able to ascertain the capability of the platform, and therefore had a lot of unease about it, I guess.
Priten: Yeah, that's an interesting perspective from the student side, of the black box effect a little bit: you know that there's some sort of monitoring going on, but you're not exactly sure what is and isn't being tracked.
Did you ever have an instructor who provided you enough information to make you feel at ease?
Were they ever transparent about what they could and couldn't see?
Jack: No, no, they never were transparent.
But what was very eye-opening was in my senior year of college, when I became a TA for a graduate-level nutrition course where I was then an administrator of the Canvas page.
And really, you can see everything.
There are logs where you can basically see every individual action on that site as one of the course administrators.
There were certain instances as a TA where I could certainly see students violating course guidelines on Canvas, which they clearly did not realize I was able to see. But yeah, that was really eye-opening.
Priten: You know, having seen both of those sides, do you think that we should be more transparent with students about what is being monitored and controlled?
Or do you think there's some advantage to there being a little bit of secrecy about what exactly, can and cannot be seen?
Jack: Yeah, I mean, I'm a personal fan of transparency in all things.
I think it's really appropriate.
I will say, from a student perspective, when there is a lack of transparency (and I've viewed this at all levels of my training: in college, as a postgraduate at Cambridge, and now in medical school), I feel like some ambiguity, and an awareness of it on the part of the students, does almost keep us in line, I would say.
For example, HMS has a mandatory attendance policy, and when students are aware that the sign-in will be a QR code, that QR code will then be passed around and the actual attendance will drop.
But the technological record of said attendance will be like a hundred percent, because everybody has a way to sign in.
Priten: What other uses of technology have you seen during your medical education?
You know, medical education is interesting because it's high stakes, and so I'd love to hear what role it has played, or has started playing differently, in the last few years.
Jack: Yeah, I was reflecting on this while filling out the interview survey that you sent around.
I feel like something that I've just been introduced to in the past few months, that has made a really huge impact on my practice of medicine but that I've been simultaneously hesitant about, is the platform OpenEvidence.
It's a large language model that's being used by clinicians, and it is able to spit out, from my experience, very reliable recommendations with respect to diagnosis and management of patients, of course within the scope of not violating HIPAA.
You can give it scenarios and, by distilling evidence that's available online, it will provide really reliable suggestions, as well as, very helpfully, really reliable and useful citations that support those suggestions.
And I found that really helpful, and it's incredibly powerful as a tool.
I think at the trainee level, my main hesitation is in how this will impact my critical thinking development as training goes on.
It could be very easy for me to reflexively enter scenarios that I encounter in optional or supplementary assessments directly into OpenEvidence and have it very neatly pump out an answer, right?
So this is not a new paradigm by any means, given the availability of LLMs.
Priten: What are those worries, especially for trainees, of introducing a platform like that? Because of course, if you're in the clinical setting and you're in between patients and you're quickly using it to catch up on the latest research, there might be utility there.
I'm curious to hear more about how you view it during the training process.
Jack: I think particularly when I'm entering new services as a medical student. I start clerkships every one to three months.
That is a completely new team, new organ system, new set of patients, and so it's really helpful for subject matter that I'm completely unfamiliar with, or for particularly complex patients with potentially rare diagnoses that I don't understand.
Many patients, particularly those that we see at Massachusetts General Hospital, can be medically quite complex.
You reach a point in your diagnostic workup that is beyond what you see in school as a trainee. I feel like I learn the first one to five initial steps that, using reasoning, I can piece together.
But at some point, for example, I'm on neurology right now, and there are many patients where you send a million labs and perform a million different imaging and other diagnostic procedures, and all of them can come back negative.
And perhaps this is some sort of seronegative autoimmune encephalitis, for example, that has a robust presentation but is a very quiet disease that evades a lot of diagnostic testing.
At some point you kind of run out of options and knowledge.
Like as a trainee, how do I know?
I don't know the 150, probably thousands, of autoimmune markers that I could send off.
So I think in that scenario it's really, really helpful.
But on the converse, particularly as an early-stage trainee in this space, I think it's really important to strengthen as much as possible my fluid intelligence and flexibility and critical thinking.
And I do think that relying on tools like these can be a really dangerous thing, because the clinical encounter is so limited and it's such a person-to-person thing. You really should hopefully be using as little brainpower as possible and as many heuristics as you can initially, so that you can, in my opinion, dedicate as much time as possible to building a bond and a connection with patients in those 15 to 30 minutes that you have in the room.
Priten: And tell me about what kinds of conversations are happening with you all.
Are these considerations being brought up directly by supervisors and faculty?
Are you having these conversations peer to peer, about what role technology ought to play in your training?
Jack: I wouldn't say so.
My time in the hospital now is pretty hectic.
From the moment I walk in at 6:30 in the morning to the time that I leave, anywhere from 5:00 to 7:00 PM at night, I am focused on the patients on the floor.
My interaction with the platform is purely for its use, and it's pretty seldom.
Priten: Aside from research usage, when you think about earlier in your education, were tools introduced during your coursework that helped in particular, or that you felt were more of a distraction than helpful?
Jack: Yeah, I'm a bit anti-ChatGPT.
Again, all of these are very scenario dependent, even between LLMs, right?
I think there are individual use cases where you can maximize a platform's potential based on its profile and capability.
In the case of ChatGPT, I think it's so pervasive that it's hard not to feel incentivized as a trainee to use it.
Even in the medical school context, when I was in our classroom learning, you would have probably three to five hours of work outside of the classroom each night.
And it would be very hard, I think, to hear from peers who had elected to use an LLM to help them work through assignments and do it in a very small fraction of the time.
There's something to say about the influence of the perspectives around you.
Priten: In terms of official policies, or guidance again from instructors and faculty, was any of that updated in time?
Jack: Thoroughly, yeah.
We have, I think, very concrete recommendations.
I can speak most about HMS.
I think these platforms were less widely adopted during my time as an undergrad.
And then as a postgrad at Cambridge, now that I think about it, I really did not hear much about them or think about them.
But now, yeah, for every course we are consistently recommended not to use ChatGPT or any LLM to complete coursework.
Of course, there are strong instructions never to input patient information or case information, even in the case of a simulated patient, into these LLMs.
HMS itself has created an AI Sandbox, which contains I think five to seven LLMs, and it is a completely private and secure way for trainees at HMS to use LLMs without the data being transmitted elsewhere.
And so from that perspective, it's reassuring, at least as a student, to know that my institution is thinking so heavily about these tools, and particularly, where there is concern for patient privacy, that they are taking it so seriously.
I really do enjoy that.
Yeah.
Priten: Was that made available just as an option, were you to want to use an LLM? Or were you told, oh, these are some instances in which this might be helpful to use? Because that's pretty remarkable that they've set up that infrastructure.
Jack: Yeah, AI Sandbox.
I can pull it up.
It's a pilot program: "This is a secure tool that will enable users to choose any of the latest and most fully featured large language models. It features a level of data security that allows for copyrighted curriculum material and personal student details to be safely entered without such information being made available to the AI companies. However, it is not HIPAA compliant, so no patient information should be entered."
Yes, it is just a way to keep everything internal.
Priten: You mentioned that you also play a mentorship role for college students.
Tell me a little bit about those conversations.
Have you had the opportunity to talk to them about their AI usage?
Have any of them come to you with concerns, or gotten in trouble?
Jack: I guess not among students under my direct purview, but within my residential community I have been made aware of situations of AI use resulting in plagiarism cases with the Academic Integrity Board at Harvard.
They are pretty rare, I will say, which is refreshing.
I think students are very hesitant and aware of the risks and potential consequences of plagiarism in general, but also within the context of AI use.
On a more positive note, there are a lot of really helpful use cases, as I see you're using with this Zoom call: automated note-taking services, and just ways of distilling large amounts of data.
I work with a ton of pre-medical students, and while we haven't necessarily implemented the technology yet, I think this year we're really interested in using these tools to record our meetings and better record all relevant information that could be impactful for, in this case, students' applications to medical school, making sure that we're capturing and doing justice to all the hard work they've done to assemble those applications.
Priten: Is there anything about the tools, in the education context or even in the medicine context, that's exciting you, that you're hoping will be productive for your training or future career?
Jack: Yeah.
I mean, I think after enumerating them, my main concern is just developing as an individual, right?
Kind of sacrificing that development of critical thinking through using AI in the training context.
What's really exciting is that ability to amalgamate large amounts of information.
I think within the clinical context that's so powerful.
For example, it is a very common thing for patients to almost word-vomit.
That sounds super negative.
It's completely natural.
I definitely do it as well when I go see a doctor; I just share all the information I possibly can.
Some is probably helpful, some is probably not helpful.
And so to have tools that can parse through that information and collect it, hopefully implemented in a safe way, would be really useful.
I'm sure you've heard of instances where AI has been used as a diagnostic tool to diagnose rare genetic conditions.
We're all human.
So it's very hard to have complete universal knowledge of every condition in the book, particularly with rare syndromes. They are a weird constellation of symptoms that don't necessarily fit together into one system, for example.
And so having AI be able to appraise whether or not, I don't know, if I had cataracts and my right leg was orange and my left pinky had a wart on it, it turns out, oh my God, I have gene XYB mutation.
That's insane, and really impactful for patient lives and outcomes.
And also, from a financial perspective, it would mitigate having to use those shotgun diagnostic approaches that I alluded to earlier, which are costing tens of thousands of dollars per patient.
So yeah, I think that would be massively helpful.
Priten: Would you want more formal training in using those tools for things like that?
Or do you think most of this will be intuitive, things that you and your peers would grasp more organically?
Jack: Well, gosh, I think, based on the way the question's phrased, it's hard not to just desire the second option, where it's intuitive and easy to use.
I think it comes down to the cost-benefit analysis. Sadly, I feel like I've become quite utilitarian in that way, because my time is so limited.
But if these LLMs do necessitate training, and at the same time are incredibly impactful clinical decision-making tools, then yeah, I would absolutely be fine with undergoing that training.
Priten: Anything you're really afraid of or scared about when you think about the next five years, in the context of medical training in particular?
Jack: Yeah. I mean, as someone that's deciding on what medical specialty to pursue, what residency to apply to, and what space to make a career in long term: I'm partially interested in radiation oncology, and that's an imaging-heavy field, similar to radiology.
And therefore I think that there is concern of AI replacement and, like, quote-unquote scope creep.
I don't know if I used that correctly, actually.
I think I just threw out a buzzword that maybe would sound cool.
But yeah, it's definitely a concern and a consideration every time I talk to a higher-level trainee or attending in that space.
That is always a question that I ask when trying to evaluate whether or not the space is right for me to apply to. Yeah, having my career replaced by a robot would absolutely suck.
Priten: Yeah, reasonably so.
Jack: That is a major concern.
I think that's a concern shared by almost every one of my colleagues at Harvard med.
Priten: Is there anything else that you would love to share that you think is related?
Jack: No, honestly, talking about OpenEvidence was my main goal, and I think that's a really fascinating new area.
I don't know if I found out about it later than others.
But, to be frank, the way that I found out about it was in a simulation.
Every week the pediatrics clerkship does these team sims where we work with a dummy, not a dumb person, like a literal dummy that simulates a patient.
We're in a team of six people, all of them my classmates.
And we're actually encouraged to use our devices, just as you're allowed to do in actual clinical practice.
And I looked over, and one of my friends was using OpenEvidence, and it had spit out a really helpful recommendation for the management of a patient with suspected croup, which is a pediatric viral infection.
And I was like, oh, what is that?
And I wanna use it.
And then I started using it in the sims, and those sims became way, way easier.
I'm excited to see where that tool will go.
Priten: Very cool.
Yeah, I had not heard about it.
, I just forwarded it to my wife and my mom, who is also a doctor.
I'm curious to hear if they've already played around with it, but I'm just quickly browsing through this now.
And sometimes these specialized tools are just a wrapper on a mainstream LLM; they just add a little stethoscope and claim it's for medicine.
Jack: Yes.
Priten: It looks like they've done a lot more than that, and clearly have partnerships with JAMA.
Jack: I think it is. Every time there's a little statement by the New England Journal or JAMA when you enter a prompt into the platform, it's really wonderful.
I think it also might be a platform that's exclusive to healthcare workers and trainees.
I had to verify, I believe.
Priten: It does look like that's the case.
I'll have to use one of their accounts to play around with it a little bit more, because this looks really cool.
Awesome.
Well, thank you so much.
Take care.
Jack: Take care.
Priten: Bye-Bye.
I really appreciate Jack for speaking so candidly about something that most medical trainees are navigating without a roadmap.
What struck me most was his insistence that the clinical encounter is fundamentally relational.
That the goal of all of this pattern recognition and diagnostic reasoning is to free up enough cognitive space to actually be present with the patient in a 15-minute window.
That framing doesn't make AI the enemy of medicine, but it does make thoughtfulness about how and when we reach for these tools an ethical imperative, not just a pedagogical one.
Keep listening as we continue exploring the ethics of education technology.
And don't forget to pre-order my upcoming book, Ethical Ed Tech, at priten.org.
Thanks for listening to Margin of Thought.
If this episode gave you something to think about, subscribe, rate, and review us.
Also, share it with someone who might be asking similar questions.
You can find the show notes, transcripts, and my newsletter at priten.org.
Until next time, keep making space for the questions that matter.