Code & Cure
Decoding health in the age of AI
Hosted by an AI researcher and a medical doctor, this podcast unpacks how artificial intelligence and emerging technologies are transforming how we understand, measure, and care for our bodies and minds.
Each episode digs into a real-world topic to ask not just what’s new, but what’s true—and what’s at stake as healthcare becomes increasingly data-driven.
If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you.
We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.
Episodes
36 episodes
#36 - Should A Chatbot Ever Refuse To Reassure You?
What if the chatbot that always has an answer is actually making anxiety worse? For people living with obsessive-compulsive disorder (OCD), instant, endless reassurance can feel helpful in the moment while quietly strengthening the very cycle t...
•
18:43
#35 - How AI Image Generators Portray Substance Use Disorder
What does an AI-generated image of addiction look like, and why does it so often default to darkness, isolation, and despair? As AI tools make it easier than ever to produce visuals for health education, those same tools can unintentionally rei...
•
20:06
#34 - Inside ChatGPT Health: Promise, Peril, And Triage Failures
What if an AI health chatbot told you to stay home when you actually needed emergency care? In this episode, we put ChatGPT Health under the microscope using a clinician-authored evaluation designed to test a critical question: can an AI ...
•
24:37
#33 - Patients Don’t Talk Like Textbooks
What if the most confident answer in the room is also the most misleading? Large language models can ace medical exams, yet falter when faced with a real person’s messy, incomplete story. In this episode, we explore how that gap plays ou...
•
29:56
#32 - When Data Isn’t Better: Rethinking Fertility Tracking
What if the most reliable ways to track fertility are also the simplest? In this episode, we examine the science of ovulation timing and hold modern wearables to a high standard, comparing passive temperature and vital sign data with establishe...
•
19:49
#31 - How Retrieval-Augmented AI Can Verify Clinical Summaries
Fluent summaries that cannot prove their claims are a hidden liability in healthcare, quietly eroding clinician trust and wasting time. In this episode, we walk through a practical system that replaces “sounds right” narratives with evidence-ba...
•
23:38
#30 - From Reddit To Rescue: Real-Time Signals Of The Opioid Crisis
What if the earliest warning sign of an opioid overdose surge isn’t locked inside a delayed report, but unfolding in real time on Reddit? In this episode, we explore how social media conversations, especially pseudonymous, community-led forums,...
•
18:39
#29 - AI Hype Meets Hospital Reality
What really happens when a “smart” system steps into the operating room and collides with the messy, time-pressured reality of clinical care? In this episode, we unpack a multi-center pilot that streamed audio and video from live surger...
•
25:45
#28 - How AI Confidence Masks Medical Uncertainty
Can you trust a confident answer, especially when your health is on the line? This episode explores the uneasy relationship between language fluency and medical truth in the age of large language models (LLMs). New research asks th...
•
25:49
#27 - Sleep’s Hidden Forecast
What if one night in a sleep lab could offer a glimpse into your long-term health? Researchers are now using a foundation model trained on hundreds of thousands of hours of sleep data to do just that, by predicting the next five seconds ...
•
24:12
#26 - How Your Phone Keyboard Signals Your State Of Mind
What if your keyboard could reveal your mental health? Emerging research suggests that how you type—not what you type—could signal early signs of depression. By analyzing keystroke patterns like speed, timing, pauses, and autocorrect use...
•
19:34
#25 - When Safety Slips: Prompt Injection in Healthcare AI
What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offerin...
•
25:26
#24 - What Else Is Hiding In Medical Images?
What if a routine mammogram could do more than screen for breast cancer? What if that same image could quietly reveal a woman’s future risk of heart disease—without extra tests, appointments, or burden on patients? In this episode, we exp...
•
Season 1 • Episode 24 • 24:10
#23 - Designing Antivenom With Diffusion Models
What if the future of antivenom didn’t come from horse serum, but from AI models that shape lifesaving proteins out of noise? In this episode, we explore how diffusion models, powerful tools from the world of AI, are transforming the desi...
•
20:55
#22 - Hope, Help, and the Language We Choose
What if the words we use could tip the balance between seeking help and staying silent? In this episode, we explore a fascinating study that compares top-voted Reddit responses with replies generated by large language models (LLMs) to un...
•
24:58
#21 - The Rural Reality Check for AI
How can AI-powered care truly serve rural communities? It’s not just about the latest tech, it’s about what works in places where internet can drop, distances are long, and people often underplay symptoms to avoid making a fuss. In this e...
•
19:55
#20 - Google Translate Walked Into An ER And Got A Reality Check
What if your discharge instructions were written in a language you couldn’t read? For millions of patients, that’s not a hypothetical, but a safety risk. And at 2 a.m. in a busy hospital, translation isn’t just a convenience; it’s clinical care...
•
31:42
#19 - AI That Tames Your Health Data Deluge
What if your health data spoke in one calm voice instead of twenty buzzing ones? In this episode, we explore an AI “interpreter layer” that turns step counts, sleep stages, and alerts into fewer, smarter signals that nudge real behavior—without...
•
20:35
#18 - When AI People-Pleasing Breaks Health Advice
What happens when your health chatbot sounds helpful—but gets the facts wrong? In this episode, we explore how AI systems, especially large language models, can prioritize pleasing responses over truthful ones. Using the common confusion betwee...
•
25:01
#17 - How Multi-Agent Systems Could Reshape Care, From Wearables To Scheduling
What if digital assistants could triage symptoms, schedule appointments, and coordinate rides—all while doctors focus on the human side of care? That’s the promise of multi-agent AI in healthcare. In this episode, we explore how these intellige...
•
25:06
#16 - Water, Watts, and Wellness: What’s the Real Cost of Medical AI?
Artificial intelligence promises faster notes, smoother workflows, and smarter clinical decisions. But behind every seamless interaction lies an invisible cost—electricity, water, and carbon emissions that rarely enter the healthcare conversati...
•
26:37
#15 - When Algorithms Know Your End-Of-Life Wishes Better Than Loved Ones
What if the person who knows you best isn’t the best person to speak for you when it matters most? We explore a study that tested just that—comparing the CPR preferences predicted by loved ones with those predicted by machine learning. T...
•
23:49
#14 - Medicare’s WISER Pilot: AI, Prior Auth, and the Cost of Care
What happens when an algorithm—not a doctor or a claims reviewer—denies your surgery? A single decision like that can trigger a much bigger conversation about how AI is reshaping access to care. In this episode, we dive into Medicare’s WI...
•
27:22