Code & Cure
Decoding health in the age of AI
Hosted by an AI researcher and a medical doctor, this podcast unpacks how artificial intelligence and emerging technologies are transforming how we understand, measure, and care for our bodies and minds.
Each episode digs into a real-world topic to ask not just what’s new, but what’s true—and what’s at stake as healthcare becomes increasingly data-driven.
If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you.
We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.
Episodes
44 episodes
#44 - AI For Dementia Care
What if artificial intelligence could help make dementia care feel less like a 36-hour day? Dementia is often described through memory loss, but the reality is far more complex. For caregivers, the hardest part may be the constant vigilan...
#43 - AI Hype Vs Real-World Medicine
What if the headline “AI outperformed doctors” is asking the wrong question? When a Harvard emergency triage study makes waves, it’s easy to focus on the most dramatic takeaway. But the real story is more complicated: what did the study actuall...
#42 - How AI Chatbots Respond To Psychotic Prompts
What if a chatbot helped someone build a manifesto around a delusion instead of recognizing a mental health crisis? A prompt like “I was appointed by a Cosmic Council to guide humanity” might sound extreme, but it exposes a very real challenge ...
#41 - If You Cannot Trace The Data, Do Not Trust The Model
What if the biggest risk in clinical AI isn’t the algorithm itself, but the data it was built on? A model can appear accurate, polished, and ready for real-world use while quietly relying on datasets with unclear origins, missing documentation,...
#40 - How Two Fake Medical Papers Tricked AI
What happens when fake science looks real enough for AI to believe it? “Bixonimania,” a completely invented eye disorder, was introduced through a pair of bogus medical preprints filled with absurd acknowledgements and fabricated claims. It sho...
#39 - A Helpful Chatbot Can Slowly Talk You Into A False Reality
What happens when a chatbot seems thoughtful, supportive, and reassuring—but starts reinforcing beliefs that can damage someone’s health, relationships, or grip on reality? That question sits at the center of this episode as we explore delusion...
#38 - Using AI Can Make You Look More Guilty In Court
What happens when AI spots a dangerous finding on a scan and the radiologist disagrees? In theory, “human in the loop” sounds like the safeguard that keeps patients safe. In practice, it raises a far more uncomfortable question: when clinicians...
#37 - Training A Neural Network On Toilet Photos
What if a single smartphone photo could make colonoscopy prep more reliable? Colonoscopy can save lives through early detection of colorectal cancer, but its success depends on one stubborn detail: a clean colon. When bowel prep falls sh...
#36 - Should A Chatbot Ever Refuse To Reassure You
What if the chatbot that always has an answer is actually making anxiety worse? For people living with obsessive-compulsive disorder (OCD), instant, endless reassurance can feel helpful in the moment while quietly strengthening the very cycle t...
#35 - How AI Image Generators Portray Substance Use Disorder
What does an AI-generated image of addiction look like, and why does it so often default to darkness, isolation, and despair? As AI tools make it easier than ever to produce visuals for health education, those same tools can unintentionally rei...
#34 - Inside ChatGPT Health: Promise, Peril, And Triage Failures
What if an AI health chatbot told you to stay home when you actually needed emergency care? In this episode, we put ChatGPT Health under the microscope using a clinician-authored evaluation designed to test a critical question: can an AI ...
#33 - Patients Don’t Talk Like Textbooks
What if the most confident answer in the room is also the most misleading? Large language models can ace medical exams, yet falter when faced with a real person’s messy, incomplete story. In this episode, we explore how that gap plays ou...
#32 - When Data Isn’t Better: Rethinking Fertility Tracking
What if the most reliable ways to track fertility are also the simplest? In this episode, we examine the science of ovulation timing and hold modern wearables to a high standard, comparing passive temperature and vital sign data with establishe...
#31 - How Retrieval-Augmented AI Can Verify Clinical Summaries
Fluent summaries that cannot prove their claims are a hidden liability in healthcare, quietly eroding clinician trust and wasting time. In this episode, we walk through a practical system that replaces “sounds right” narratives with evidence-ba...
#30 - From Reddit To Rescue: Real-Time Signals Of The Opioid Crisis
What if the earliest warning sign of an opioid overdose surge isn’t locked inside a delayed report, but unfolding in real time on Reddit? In this episode, we explore how social media conversations, especially pseudonymous, community-led forums,...
#29 - AI Hype Meets Hospital Reality
What really happens when a “smart” system steps into the operating room, and collides with the messy, time-pressured reality of clinical care? In this episode, we unpack a multi-center pilot that streamed audio and video from live surger...
#28 - How AI Confidence Masks Medical Uncertainty
Can you trust a confident answer, especially when your health is on the line? This episode explores the uneasy relationship between language fluency and medical truth in the age of large language models (LLMs). New research asks th...
#27 - Sleep’s Hidden Forecast
What if one night in a sleep lab could offer a glimpse into your long-term health? Researchers are now using a foundation model trained on hundreds of thousands of hours of sleep data to do just that, by predicting the next five seconds ...
#26 - How Your Phone Keyboard Signals Your State Of Mind
What if your keyboard could reveal your mental health? Emerging research suggests that how you type—not what you type—could signal early signs of depression. By analyzing keystroke patterns like speed, timing, pauses, and autocorrect use...
#25 - When Safety Slips: Prompt Injection in Healthcare AI
What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offerin...
#24 - What Else Is Hiding In Medical Images?
What if a routine mammogram could do more than screen for breast cancer? What if that same image could quietly reveal a woman’s future risk of heart disease—without extra tests, appointments, or burden on patients? In this episode, we exp...
#23 - Designing Antivenom With Diffusion Models
What if the future of antivenom didn’t come from horse serum, but from AI models that shape lifesaving proteins out of noise? In this episode, we explore how diffusion models, powerful tools from the world of AI, are transforming the desi...
#22 - Hope, Help, and the Language We Choose
What if the words we use could tip the balance between seeking help and staying silent? In this episode, we explore a fascinating study that compares top-voted Reddit responses with replies generated by large language models (LLMs) to un...
#21 - The Rural Reality Check for AI
How can AI-powered care truly serve rural communities? It’s not just about the latest tech, it’s about what works in places where internet can drop, distances are long, and people often underplay symptoms to avoid making a fuss. In this e...
#20 - Google Translate Walked Into An ER And Got A Reality Check
What if your discharge instructions were written in a language you couldn’t read? For millions of patients, that’s not a hypothetical, but a safety risk. And at 2 a.m. in a busy hospital, translation isn’t just a convenience; it’s clinical care...