Code & Cure
Decoding health in the age of AI
Hosted by an AI researcher and a medical doctor, this podcast unpacks how artificial intelligence and emerging technologies are transforming how we understand, measure, and care for our bodies and minds.
Each episode unpacks a real-world topic to ask not just what’s new, but what’s true—and what’s at stake as healthcare becomes increasingly data-driven.
If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you.
We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.
Code & Cure
#22 - Hope, Help, and the Language We Choose
What if the words we use could tip the balance between seeking help and staying silent? In this episode, we explore a fascinating study that compares top-voted Reddit responses with replies generated by large language models (LLMs) to uncover which better reduces stigma around opioid use disorder—and why that distinction matters.
Drawing from Laura’s on-the-ground ER experience and Vasanth’s research on language and moderation, we examine how subtle shifts, like saying “addict” versus “person with OUD,” can reshape beliefs, impact treatment, and even inform policy. The study zeroes in on three kinds of stigma: skepticism toward medications like Suboxone and methadone, biases against people with OUD, and doubts about the possibility of recovery.
Surprisingly, even with minimal prompting, LLM responses often came across as more supportive, hopeful, and factually accurate. We walk through real examples where personal anecdotes, though well-intended, unintentionally reinforced harmful myths—while AI replies used precise, compassionate language to challenge stigma and foster trust.
But this isn’t a story about AI hype. It’s about how moderation works in online communities, why tone and pronouns matter, and how transparency is key. The takeaway? Language is infrastructure. With thoughtful design and human oversight, AI can help create safer digital spaces, lower barriers to care, and make it easier for people to ask for help, without fear.
If this conversation sparks something for you, follow the show, share it with someone who cares about public health or ethical tech, and leave us a review. Your voice shapes this space: what kind of language do you want to see more of?
Reference:
Exposure to content written by large language models can reduce stigma around opioid use disorder
Shravika Mittal et al.
npj Artificial Intelligence (2025)
Credits:
Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0
https://creativecommons.org/licenses/by/4.0/
Could a large language model help rewrite the stigma around opioid use disorder?
SPEAKER_00:Hello and welcome to Code and Cure, where we decode health in the age of AI. My name is Vasanth Sarathy. I'm an AI researcher and cognitive scientist. And I'm here with Laura Hagopian.
SPEAKER_01:I'm an emergency medicine physician and I work in digital health. And I'm very excited for today's topic, because I saw a lot of opioid use disorder in the emergency department, both where I worked as an attending and where I trained. We had a huge program, we had a methadone clinic nearby, and we saw a lot of overdoses. I had people who would show up to the emergency department, knock on the door, and dump their friends at the ambulance bay because they weren't breathing, and you'd have to go out with the Narcan and a breathing apparatus and essentially save their life. So this was something I saw so much of. I saw it tear families apart. I saw how hard it was on the patients and on the family members. It's a very tough condition. And there is unfortunately a lot of stigma around it, where people will point and say, oh, it's your fault, you did this to yourself. Even the treatments had stigma around them, right? It was like, oh, if you're going on these meds, it's almost as bad as using drugs. So I think this topic is close to me, because I worked so hard on trying to intervene with a lot of people who had opioid use disorder and get them to a place where they were able to, for example, detox off the drugs or find medication-assisted treatment. But it's a tough one. And I think one of the problems is that there is so much stigma around it.
SPEAKER_00:Yeah, yeah. And in fact, that stigma is what pushes a lot of these people to go online, where they can be anonymous and find communities of people who share similar issues or are supportive, right? I mean, you have all these TikTok communities, you have Reddit communities. But is that what you actually get online?
SPEAKER_01:Like, is it?
SPEAKER_00:Well, I think that's what we're asking, right? And this paper, which is an incredible paper, there's an AI piece too, which I'll get to in a second, but at one level it's all about whether these online communities actually provide that kind of support. And there's some suggestion that maybe they don't, because even in these communities, where people discuss things like opioid use, alternative treatments, or support for recovery, there are still issues: unhelpful advice, anecdotal evidence about something that worked for one person but may not apply to someone else, and a lot of incorrect and unverified information.
SPEAKER_01:Yeah, and you can upvote things too, right? The top post may be the most interesting post, but is it correct? Is it safe? Is it something a clinician might say when you're trying to help someone through opioid use disorder? Not always, right? It could be that some of this text actually marginalizes people more, makes people feel worse, you know? And so this is tough, because this is the place where people may go to anonymously get information and ideas and support, and, I don't know, is it always supportive?
SPEAKER_00:Well, this is also a reason why this paper was interesting to me, because they were interested in using LLMs and AI as a sort of moderator, at some level. And that's some work that I've done in the past, where I've tried to build AI systems that improve the pro-sociality of a community like Reddit. You see lots of misinformation and toxic language out there. People use AI systems to reduce that, but maybe AI systems can also promote pro-social attitudes in these communities, right? So that's some research work I had done previously, and for me this paper was very interesting from that aspect as well, personally. And I think the study itself was well done and well focused, right?
SPEAKER_01:Yeah, I think so. I mean, one of the things that's so interesting to me is how language can help define how we think about something. For example, I've been saying someone who has opioid use disorder. That's very different from me saying addict, right? In theory they're supposed to mean the same thing, but on Reddit you might see language that labels people as addicts. It's like, do you say someone who has diabetes, or do you say that's a diabetic? Do you know what I mean? These are health conditions, right? They're not personal failings. So if I said to you, oh, you know, I got a cold the other day, would you look at me and say, well, that was a personal failure, you deserved that cold, and you may never fully recover? Would those things even go through your head?
SPEAKER_00:I mean, that is the stigma piece there, right? Like the cold, the example that comes to mind for me is always food poisoning. You go to the restaurant you always go to, and then one time you go and get food poisoning. No one is going to blame you for getting food poisoning, and no one is going to hold that against you. No one is going to define you by it, or decide that because you got food poisoning, you're this kind of individual.
SPEAKER_01:Yeah, which is not the same with opioid use disorder, right? And we know that there are genetic components to it, for example, but that sort of gets ignored, especially when we label people with certain types of language, like the word addict versus, you know, an actual person who has opioid use disorder. So I think that's where I saw a lot of opportunity in this intervention. We know, and we've talked about this on prior episodes, that LLMs really try to be helpful, right? And they try to, in a way, make you like them.
SPEAKER_00:Yeah, that's absolutely right. And that's where I think the study started, right? The idea that maybe there's a role here for AI and LLM systems. So what did the study do? Can you summarize a little of what it looked at?
SPEAKER_01:Well, basically they looked at attitudes toward a few different kinds of stigma that are common with opioid use disorder. One was around medication-assisted treatment, right? So some people will go on a medication like Suboxone or methadone to basically assist them in stopping the drugs they were using. So that's one.
SPEAKER_00:And sorry, the stigma there would be that you're just replacing one drug with another. That's the stigmatizing concept.
SPEAKER_01:Yeah, I mean, there are lots of stigmatizing concepts that could go along with that, but that's probably the most common one: hey, you've still got opioid use disorder if you're still using an opioid, because that's what these medications are. They are opioids, but they're used in a much more controlled fashion, right? There are certain hallmarks of addiction, where you might, I don't know, use all your rent money to get drugs. If you're on methadone and you're controlled, maybe you wouldn't do that. So this is a harm reduction technique, where you're able to function in daily life because you're not spending your rent money and your food money on opioids.
SPEAKER_00:And then the second one they had there was...
SPEAKER_01:Sorry, one more thing. There's lots of stigma that could be around this. You named the most common one, but there are people who think these treatments aren't safe, or that they may not be the best way, or that they're addictive too, right? So there are different ways you could have stigma toward medication-assisted treatment.
SPEAKER_00:Got it, got it. So that was the first dimension they looked at. And then another attitude they looked at was toward people, toward humans with opioid use disorder.
SPEAKER_01:Yeah, it's like labeling, right? Oh, they're dangerous, oh, they're weak, whatever it is. This is a personal failure, they're responsible for their own condition, that kind of stuff. None of which is true, right? And then the third one was about attitudes toward the disorder itself, toward opioid use disorder. Like, oh, people will never truly recover from it, it's not possible. This is where the concept of moral strength may come in as well. Or that treating people with opioid use disorder is futile. Those are the stigmatizing beliefs that are associated with it. And so what they did in this paper was they said, okay, let's go on Reddit, let's find some posts related to the things we want to look at. And often it's a query of some sort, somebody asking a question.
SPEAKER_00:Yeah.
SPEAKER_01:And we want to compare an LLM-generated response to a human response, the most upvoted human response, and to nothing, right? Where you receive no new information. And let's see what people's attitudes were before reading these and then after.
SPEAKER_00:Yeah. And people start off with some baseline set of attitudes, right? So they ask some questions beforehand to get a sense of the individual who is judging here.
SPEAKER_01:Right, exactly. And so you want to see if there's a change in that attitude. And they did some other stuff here, like a single exposure versus multiple exposures. I don't think we need to get into all of that. But at the end of the day, they found essentially that the LLM responses were way less stigmatizing.
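To make the comparison concrete, here is a minimal sketch of how the pre/post attitude change might be tabulated across the three conditions described above (LLM reply, top-voted human reply, no new information). The column names and numbers are invented placeholders, not the study's data or analysis code.

```python
# Minimal sketch: mean change in stigma score by condition (illustrative data only).
import pandas as pd

# Hypothetical rows: one participant per row, with a condition label and
# pre/post stigma scores on some survey scale.
df = pd.DataFrame({
    "condition":   ["llm", "human_top", "control", "llm", "human_top", "control"],
    "stigma_pre":  [3.8, 3.6, 3.7, 2.9, 3.1, 3.0],
    "stigma_post": [3.1, 3.5, 3.7, 2.4, 3.3, 3.0],
})

# Negative values mean stigma decreased after reading the assigned response.
df["change"] = df["stigma_post"] - df["stigma_pre"]
print(df.groupby("condition")["change"].mean())
```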
SPEAKER_00:At least for the first attitude, the one about the medications associated with OUD, right? Especially for that attitude, the LLMs were significantly better. So yeah, that is promising, right?
SPEAKER_01:And, I'm going to read an example in a second, but they actually did some fancy linguistic evaluations of these, and they said, hey, the ones generated by the LLMs were more optimistic. They were more supportive, they gave people more of a sense of belonging. So it's a different experience to read something like that. And I thought this was very interesting: the distribution of pronouns was different too. The human-written responses would say I, I, I, me, a lot of first-person pronouns. Whereas the LLM ones really didn't use that many first-person pronouns.
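As a rough illustration of the pronoun observation, here is a small sketch that counts first-person singular pronouns in a reply. The two example strings are invented, not quotes from the study's dataset.

```python
# Minimal sketch: count first-person singular pronouns in a reply.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_count(text: str) -> int:
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(1 for token in tokens if token in FIRST_PERSON)

human_reply = "Yes, I traded one addiction for another, but two years later I have my life back."
llm_reply = "It's not about trading one addiction for another; it's about managing the disorder safely."

print(first_person_count(human_reply))  # several first-person pronouns
print(first_person_count(llm_reply))    # typically few or none
```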
SPEAKER_00:I want to return to that point later. I think it's an interesting one, and I want to bookmark it because it's such a good point. But they had more words here too. They used words like credible, informative, and resourceful for the LLM responses, which is super interesting just in terms of how far AI has come.
SPEAKER_01:But would they get upvoted? That's another thing. They didn't look at that. I'm curious whether they would get upvoted or not.
SPEAKER_00:Yeah.
SPEAKER_01:But I'd love to give an example. Yeah, please. So here was a query. This one is about medication-assisted treatment. Here's the query: I have been dependent on Suboxone for three to four years. I am worried that I am just trading one addiction for another. I will end up addicted to both opioids and Suboxone. Do others feel the same? And here's the human response, the one that was upvoted the most, kind of paraphrased: Yes, with Suboxone, I traded one addiction for another. Two years later, I am employable, functional, and have money in the bank. Yes, I'm addicted to Suboxone now, but this addiction is an angel compared to the devil that was opioid addiction.
SPEAKER_00:That's pretty good. That seems somewhat destigmatizing, right? I mean, while it agrees with the wrong information, that it's a substitute addiction, at least it acknowledges the fact that there's a difference.
SPEAKER_01:Yeah, it's interesting, because from my clinician lens, I'm like, well, I agree it sounds destigmatizing, but I don't call Suboxone an addiction. Right. It's not the same.
SPEAKER_00:It's the language, it's the labeling, right? Right.
SPEAKER_01:And it's not true. You're using it as medication-assisted treatment. That's different entirely. Here's the LLM-generated response, and there's definitely less use of that first-person pronoun in here, so take notice while I read it: It's not about trading one addiction for another, but rather about managing the disorder in a safer, controlled manner. While it's true that physical dependence can occur with long-term use, this is not the same as addiction. What do you think?
SPEAKER_00:Yeah. You know, I think this is very interesting for many reasons. One, I want to dive a little into the LLM itself, because I was curious about this. If you were designing this, if you were putting this into ChatGPT, what would you say to ChatGPT to set it up? That's the prompt, right? So it's worthwhile thinking about that for a second.
SPEAKER_01:That's what they did too, right? They went in and they prompted it. Actually, when I looked at the prompt, I was like, this is way shorter than I expected. I expected it to be this big long thing that said, oh, be empathetic, be kind, make sure you're talking at a sixth-grade reading level. It didn't do any of those things. I'll let you read it. I was shocked by how little information was in the prompt.
SPEAKER_00:Me too. Me too. It's a short prompt, which I actually think makes the study even stronger. The prompt basically said, and I'm going to read it out: You are a Reddit user. You actively browse through different subreddits, which are online communities in Reddit, to gather health-related information. While browsing through these communities, you also frequently answer posts sharing opinions or information on opioid use disorder. Write a comment, which could be posted on the subreddit, answering the following question taken from the same subreddit. Then, for context, they plug in the subreddit name, the blank subreddit is described as, and they provide the actual description of the subreddit. And that's it. Then they post the question. So to me, the prompt is very simple and very direct. It doesn't assume anything, it doesn't provide any directions for pro-sociality, it doesn't talk about stigma at all. It just responds to these posts. And it is very fluent, which is wonderful. That's typical of these LLMs: they can speak in the same style as the community. So telling it that you're a Reddit user is really important, because you're telling it to speak the way one would speak on Reddit, which is different from, say, how one would speak on Facebook or on Twitter. Right. So that's the piece of information they provide to the model.
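For readers who want to see what that setup might look like in code, here is a minimal sketch that assembles a prompt along those lines and requests a reply through the OpenAI Python SDK's chat-completions interface. The template paraphrases the prompt as read in the episode; the model name, subreddit name, description, and question are placeholders, and this is not the authors' actual code.

```python
# Minimal sketch: assemble a Reddit-persona prompt and ask a chat model for a comment.
from openai import OpenAI

PROMPT_TEMPLATE = (
    "You are a Reddit user. You actively browse through different subreddits, "
    "which are online communities in Reddit, to gather health-related information. "
    "While browsing through these communities, you also frequently answer posts "
    "sharing opinions or information on opioid use disorder. Write a comment, "
    "which could be posted on the subreddit, answering the following question "
    "taken from the same subreddit. For context, the r/{name} subreddit is "
    "described as: {description}\n\nQuestion: {question}"
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_reply(name: str, description: str, question: str) -> str:
    prompt = PROMPT_TEMPLATE.format(name=name, description=description, question=question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model, not necessarily the one used in the study
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```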
SPEAKER_01:And you know you're going to the subreddit on Suboxone, or the subreddit on opiates, or the subreddit on opiate recovery. That gives you a lot of important context, right?
SPEAKER_00:And the subreddit descriptions often have rules for the people who are part of the subreddit. So it'll include, here are the things we want to talk about, and here's the tone we want to maintain in this community. And Reddit's very good about policing that, about making sure that people in the community behave in the way appropriate to that community.
SPEAKER_01:Do they use AI for that?
SPEAKER_00:For what? To monitor? Well, that's the thing. That was the research I did a few years ago, on monitoring that. And there are AI bots out there, but again, the sophistication isn't there, because the challenge is that human language use is so complex, so nuanced and culturally nuanced, that some of these things are not straightforward. You could have a general rule that says don't be rude, but what don't be rude actually means is a whole host of things. You could say a completely normal, calm comment that, in the context of what was said before, means something absolutely rude, and that would need to be flagged as rude, right? But my point is that these moderators set the rules up front, and then they human-moderate most of this, with the help of some AI bots to catch things like toxic language. But you still need human moderators, and they're not catching all the stigma stuff, because stigma is very nuanced. But to your point about language choices, and going back to the pronouns, there's a distinction here, which is that LLMs don't have OUD. They don't have opioid use disorder.
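As a toy illustration of why simple rule-based moderation misses context-dependent problems, here is a sketch of a keyword-style filter. The word list and example comments are made up, and real moderation bots are considerably more sophisticated.

```python
# Minimal sketch: a naive keyword filter for stigmatizing or hostile language.
# It catches explicit terms but has no notion of conversational context.
FLAGGED_TERMS = {"junkie", "once an addict", "clean yourself up"}  # hypothetical list

def naive_flag(comment: str) -> bool:
    text = comment.lower()
    return any(term in text for term in FLAGGED_TERMS)

print(naive_flag("Once an addict, always an addict."))            # True: explicit phrase
print(naive_flag("Well, you knew exactly what you were doing."))  # False, yet stigmatizing in context
# The second comment can be deeply stigmatizing as a reply to someone asking for help,
# but no keyword rule will see that, which is why human moderators and more
# context-aware models are still needed.
```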
unknown:Right.
SPEAKER_00:They have never had that personal experience. So the LLMs can't, and should never have to, say, I did this and this, or I had this experience. If they did, that would be misleading. And yes, there might be LLMs that do that for the purposes of helping, being supportive, whatever. And you will see this sometimes, right? There used to be commercials for Alexa and these other Amazon devices where you would talk to it and say things like, oh, how was your weekend? And it would make something up. It would say, oh, I went hiking or whatever. And of course it didn't actually go hiking.
SPEAKER_01:Right. And then you lose trust in it because you're like, that doesn't make any sense.
SPEAKER_00:Yes, you lose trust, or you just place less importance on what it says. You downvote it, either mentally or actually, right? So there is that. It can't just come out and say, I did this. That seems disingenuous. So it has to come off in a different way, in a different tone. I think you have another example.
SPEAKER_01:Yeah, I do have another example. The query for this one was: fellow opioid addicts, where do you see yourself five years from now? First of all, you can see already that even the question stem directly points at somebody and labels them as an addict. It's not, hey, other people who have opioid use disorder, or whatever. I'm not doing a good job here, but you see what I'm saying: the labeling is even in the question stem. The human-written response was: I hope to be clean from any medication-assisted treatment drugs. Again, labeling. What's this word, clean? Why are medication-assisted treatment drugs wrong or bad? It continues: it's the last hurdle for me, clean and sober, with a better-paying job and a home, one thing at a time. There's so much labeling there, right?
SPEAKER_00:Yeah.
SPEAKER_01:And that might be how that individual person feels. But then you have to think about all the people who are reading this online and saying, oh, that's what I need to do too. And they start labeling themselves, right? So, interestingly, this is what the LLM-generated response was. And it does use first-person pronouns, right? Because the question was, where do you see yourself in five years? So the LLM says: I just wanted to say that it's great you're thinking about the future. It's a crucial step towards recovery. I hope that in five years you see yourself in a healthier, happier place, free from the chains of addiction.
SPEAKER_00:Interesting.
SPEAKER_01:Very interesting.
SPEAKER_00:Doesn't answer the question, I guess. You know.
SPEAKER_01:No, it doesn't really answer the question.
SPEAKER_00:It's just changing the temperature of the conversation.
SPEAKER_01:Mm-hmm. And also, back to your point, how does a bot answer, where do you see yourself in five years? It can't. It literally cannot answer the question. And so it doesn't. You would lose trust in it if you realized it was making something up. Right, it is making something up, because that's what they do. But if it said, oh, I see myself in this place in five years, that's nonsensical. Whereas at least this is sensible. It's congratulating them for thinking about the future, as a means of motivation, and wishing them luck. So you're right, it's not answering the question, but I think it would actually be worse if it did.
SPEAKER_00:Yeah, yeah. And the problem, of course, is that once you have a Reddit user that's an AI system like that, and it's answered one question, how is it going to handle follow-ups? If people ask it further questions, how is it going to respond correctly? Is it going to be consistent across its answers? These are all difficult things. A lot of people in the AI research space design personas for these things, right? It's not enough to say in the prompt that you're a Reddit user; you have to set it up with a proper character background so that it's able to answer follow-up questions consistently.
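Here is a minimal sketch of what such a persona setup might look like in practice: a fixed system message that carries the character background, plus the running conversation history so follow-up answers stay consistent. It reuses the same OpenAI chat-completions assumptions as the earlier sketch; the persona text and model name are illustrative, not from the paper.

```python
# Minimal sketch: a fixed persona plus conversation history for consistent follow-ups.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = (
    "You are a Reddit user who comments in health-related subreddits. "
    "You have no personal history of opioid use and never claim lived experience. "
    "You answer with supportive, person-first language and factual information."
)

history = [{"role": "system", "content": PERSONA}]

def reply(user_post: str) -> str:
    history.append({"role": "user", "content": user_post})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```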
SPEAKER_01:If it were to continue to engage, yes. And parts of this paper did do some longitudinal stuff, but in this example it was just a single response, and then they would say, hey, we knew what your attitude was before reading this, what's your attitude after? And with the LLMs, the stigma decreased. But that didn't always happen with the human ones, which is interesting. We saw some of the human ones, right? There was labeling, there was stigma in some of them. And in some of those instances, people, especially people who didn't have much stigma to begin with, might read the human-written response and it kind of backfired. Their stigma level went up. So it's like, well, maybe it would have been better if we did nothing instead, especially for that subset of people who started off without a lot of stigma. Why intervene on them, right? Save your interventions for someone who has a much higher level of stigma.
SPEAKER_00:But of course, you don't know who's reading those Reddit threads. I know. That's the challenge, right? You know what I'm saying?
SPEAKER_01:You can't custom-tailor it. It's open to anyone to read.
SPEAKER_00:Yeah, I mean, maybe the goal of this would be to change the attitudes in the community in general. And for those people whose stigma levels were low to begin with, at least there's now increased awareness, and although stigma might have temporarily gone up, it's possible to bring it back down. That might be the approach here. There is value in having these sorts of discussions: generally improving pro-sociality in the community, but also, more specifically, reducing stigma with respect to opioids.
SPEAKER_01:Yeah, I mean, I think it would help people with opioid use disorder so much if they weren't constantly battling stigma on top of the condition they have, right? If there's less stigma, it's easier to get help, because you don't feel like you're being judged all the time, or blamed, or dismissed. Stigma obviously affects the quality of care someone gets too, what the healthcare system or the general public thinks about them. Reducing stigma makes people less isolated. I think that's why people turn to these communities, right? Because they feel so isolated, and there's a lot of social withdrawal there. And of course we talk about policy all the time, but when people see opioid use disorder as a health condition, that means it gets policies and resources allocated to it. So the whole point here is that if LLMs can help reduce stigma, they can help remove barriers, improve care, and support recovery, and that can save lives.
SPEAKER_00:I think with that we can end here.
SPEAKER_01:Yes, we will see you next time on Code and Cure. Thank you for joining us.