Code & Cure

#36 - Should A Chatbot Ever Refuse To Reassure You

Vasanth Sarathy & Laura Hagopian

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 18:43

What if the chatbot that always has an answer is actually making anxiety worse? For people living with obsessive-compulsive disorder (OCD), instant, endless reassurance can feel helpful in the moment while quietly strengthening the very cycle that keeps OCD going. In this episode, we explore why AI chatbots and large language models are designed to be responsive, agreeable, and supportive—and how those same qualities can unintentionally fuel reassurance seeking, compulsive checking, and avoidance instead of real relief. 

We break down OCD in clear, practical terms: intrusive thoughts trigger fear, compulsions bring temporary comfort, and that short-term relief reinforces the cycle over time. Whether it shows up as repeated handwashing, constant checking, or asking the same question again and again, OCD often centers on the desperate need to eliminate uncertainty. That is exactly where evidence-based treatment takes a different path. We discuss exposure and response prevention (ERP), the gold-standard therapy that helps people face doubt without falling back on rituals, and why a general-purpose chatbot may accidentally validate the opposite by offering reassurance, endorsing avoidance, or helping users “pivot” toward the answer they were hoping to hear.

We also look at the broader mental health challenge now that people are already turning to AI for support. What responsibility do clinicians, AI companies, and regulators have? We argue that clinicians should ask directly about chatbot use, and we examine what meaningful guardrails might look like—from detecting repetitive reassurance loops to refusing to continue harmful patterns. Using a real-world germ-related prompting example, we show where chatbot advice can be useful and where it can slip into enabling OCD. This conversation will change how you think about AI, anxiety, and the line between support and harm.

Reference:

A transdiagnostic model for how general purpose AI chatbots can perpetuate OCD and anxiety disorders
Golden and Aboujaoude
Nature npj Digital Medicine (2026)

Credits:

Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0
https://creativecommons.org/licenses/by/4.0/

he Friend Who Never Stops Answering

SPEAKER_00

Imagine having a friend who never gets tired of your questions, never judges your fears, and always has just one more explanation to give you. Sounds like a dream, right? But for the OCD brain, it's actually a nightmare.

SPEAKER_01

Hello and welcome back to Code and Cure, where we decode health in the age of AI. My name is Vasant Sarathi. I'm a cognitive scientist and AI researcher.

SPEAKER_00

And I'm with Laura Hagobian. I'm an emergency medicine physician and I work in digital health.

SPEAKER_01

We are talking about the issue of using chatbots for mental health, right?

SPEAKER_00

Yeah. And we've talked about some of the extremes before, like suicidality. But today we're talking about something that's, you know, a little bit more common, a little bit less extreme. Um, and that is obsessive-compulsive disorder. Yeah.

SPEAKER_01

Yeah.

SPEAKER_00

And how the way that chatpots work may actually not be great for it.

SPEAKER_01

Right. Right. And you know what? I as I was reading this paper, I it also occurred to me that, I mean, I don't have OCD, but you know, I think I d definitely feel like it feeds into those kinds of tendencies to the slightest extent that I might have them. And um, and I found some of the things that the paper said to resonate in in weird ways and almost feel make me feel a little scared. I use uh, you know, AI assistants all the time for coding, for all kinds of things, right? And from time to time you are asking it questions that get more and more general and slightly more and more personal. And at some point it is giving you a it gives like it feels like it's giving you a sense of reassurance and guidance and it's serving it's answering all your questions. And like you said, it doesn't, you know, you know, it doesn't um it doesn't get tired of answering your questions.

SPEAKER_00

Yeah, and um as we've talked about in other episodes, it's like trying to please you, right? That's kind of its goal. And so, you know, if you're if you have someone who has obsessive compulsive disorder, OCD, and is in the room with a therapist, the therapist might say something that the person doesn't totally want to hear.

SPEAKER_01

That's right.

SPEAKER_00

And but they need to hear, right? Right. And that's not necessarily what you're getting out of a chatbot, especially a general chatbot that's not been sort of tuned or programmed for this.

bsessions And Compulsions Explained

SPEAKER_01

Yeah, yeah, yeah, yeah. So, so so walk me through um sort of OCD and why that is the case. Why is it that the chatbot is not doing what you think what it needs to do?

SPEAKER_00

Well, so yeah, I think maybe it helps to define, like, hey, what are obsessions, what are compulsions? Um, obsessions are these like persistent thoughts that you don't want. They could be urges, they could be images, but they're intrusive. They're they're things that cause, you know, anxiety. Um, and they're hard for people to get rid of sometimes. And so an example of that could be like, oh, I'm I'm scared of germs, um, or I'm I'm scared that there's um, you know, dirt contaminating something, or I'm nervous that someone, you know, to have someone touch me, or I'm nervous that I forgot to turn the oven off, or I forgot to lock the door. And it's not just like a one-time thought, it's just like bothering you all the time.

SPEAKER_01

Yeah.

SPEAKER_00

And so oftentimes, along with this, even though you might be trying to ignore such a thought, a lot of times people have compulsions along with this. And that is a repetitive behavior that helps lower the anxiety. So if I if I gave you that hand germ example before fear of germs, then you might have someone who washes their hands a lot. And so it's not just like washing your hands one time. It's like, oh, I'm what I've washed my hands so much that maybe they have gotten raw. Right. For example. Or um, you know, you you're not sure if you locked the door, not only are you checking it once, but you're checking it twice, or you're checking it three times, or you're checking it five times. Um, and so the whole idea here is that they're often paired, this obsession, the unwanted thought, with a compulsion, which is a behavior that tries to lower the anxiety levels. So I think that's like a good baseline set of knowledge to start with. And at the same time, we're gonna we're gonna talk about we we already have talked a lot about how how these general AI chatbots work, how these LLMs work. Right. And um, and so it's not particularly surprising to me that without some sort of mental health or behavioral health clinician oversight, it's not surprising to me that it may actually make something like OCD worse in certain situations. Yeah. Um, and a lot of happen of what happens with anxiety disorders like OCD. OCD is one example of an anxiety disorder, is that people may say, Hey, I I want to avoid doing X. I want to avoid being in a situation where there might be a lot of germs, right? Right. Or I want to keep up with my compulsions because they make me feel better. Right. And these are the coping mechanisms, right? Well, so that's that's interesting. That's kind of like, I guess it's kind of like a lay person interpretation because you're like, oh, well, if I just wash my hands a lot, then then it's fine. I've coped and I'm good. Yeah. But now you've washed, do you think it's a coping mechanism if you've washed your hands you know 27 times in the last hour? So now you're not able to function at your job and your hands are red and raw and peeling.

SPEAKER_01

That's the problem.

SPEAKER_00

So that is not actually something that that is a coping mechanism. Now it's extended far beyond that and it's actually causing harm.

SPEAKER_01

Right.

hy Reassurance Can Become A Ritual

SPEAKER_00

And even though it feels good in the short term, it's actually not adaptive in the in the long term. Yeah. Right. And there are a number of reasons for that. You know, it could just be that, you know, the hands are the your hands are are raw from all that hand washing. But also, it's like if you're so nervous about germs that that's all you're doing, yeah, then you're not able to participate in other pieces of life. And that's when it's a disorder. And it makes it, it makes it worse and worse over time. Right, right. And so that's something that you might hear from a therapist, like, oh, well, we're gonna try some exposure. We're gonna um, you know, instead of washing your hands 27 times, you're only gonna get to wash your hands once. What do you think is the worst that's gonna happen? And they'll work with someone to kind of think through it, make a plan, discuss alternatives, etc. And that's not necessarily what you would get out of a chat bot, right?

SPEAKER_01

Yeah. Yeah. Cause you you would think that uh right, you know, you might start the uh conversation with the chatbot hoping that it can serve as your therapist at some level, right? Well, it's but it's free, right?

SPEAKER_00

And it's there and it's listening, uh you know, and it never never gets tired. Never gets tired of less than a lot of people.

SPEAKER_01

Never gets mad at you, never judges you, right? And that's a huge piece, right? That's why people turn to the chatbots because it's it's it's it reduces that uh social friction that you would have uh with another human, but now you don't have that. You have this this supposedly really smart um system that is just there listening to you and speaking back to you in fluent language, right?

SPEAKER_00

Yeah, and I wanna I'm gonna come back to some of these points because you just made a lot of points at the same time. But one of the things that I think is very interesting, and this is a comment from the article. Um, someone said, I think the reason I ask the AI nowadays is otherwise I'd be asking my parents a hundred times a day. Wow. Yeah. A hundred times a day, a hundred times a day. That's a lot of times. But the and you're right, the AI chatbot does not get tired or fatigued, but I would say that's actually not a good thing. Right. You know, in this situation, it actually might be better to have the social friction. Because as a parent, I might say, No, you're not asking me that a hundred times. We've had the conversation three times already. The conversation is over now. And so that fatigue or that pushback can actually be therapeutic in a way, where you have someone who says, No, I'm not gonna engage with this anymore. Yeah. I'm not going to let you obsess over this thing.

SPEAKER_01

I mean, in a way, asking a hundred times is a compulsion, right? In in an in a in a strange way. I don't know. I mean, it's it's almost like you've transferred the compulsion from washing your hands a hundred times to now asking whether you should wash your hands a hundred times. And that feels at a weird level like a new compulsion that you've just now found.

SPEAKER_00

But that's it. Like people can become dependent on these chatbots. They can spend hours and hours a day on them, especially if it's allowing them to sort of feel better in that moment. Right. And it doesn't have the social challenges that you might get. And so it makes this compulsive engagement sort of easier to sustain.

herapy Builds Tolerance For Uncertainty

SPEAKER_01

Yeah, that makes sense. I think the larger point with the compulsive engagement also is that the um the compulsive engagement is maladaptive because you don't build that tolerance for doubt and uncertainty, which is the which is the ultimate goal of therapy in this in a real instance.

SPEAKER_00

Yeah, avoidance actually is not the goal here. Avoidance, like we said before, makes you feel better in the moment. But what what you would get out of therapy is that you would learn, like, hey, um that feared outcome or that worst case scenario, it's not even likely to happen.

SPEAKER_01

Yeah.

SPEAKER_00

And even if you don't avoid the thing, oftentimes the anxiety goes down over time.

SPEAKER_01

Yeah, yeah. Yeah. I mean, it also seems like people using these uh chatbots are uh worsening their own condition. And if in fact they do have real therapists they're working with, it's almost undercutting the progress they're making with the real therapist. So you almost want the the real human therapist at some point to be kind of involved and understand what the patient is doing beyond the therapy sessions.

SPEAKER_00

Yeah, I mean that's it. The what you would get in therapy is you would have someone trying to help you confront these uh like maladaptive thoughts, right? Yeah. And to overcome and not do all these avoidance strategies. Like, oh, okay, I'm going to leave the house and not check the lock 17 times. Yeah. And I'm going to come back and find that like actually nothing bad happened and that's okay. And so you've exposed yourself to the thing that bothers you. And so, and then you found out that the that worst case scenario didn't happen and you didn't, you didn't actually, you know, fully try to avoid it. And that's something you can talk with your therapist about. But on the other hand, if you if you have a chatbot saying, oh yeah, that's that's not a bad idea. Um, you can always check an extra an extra couple of times or something like that. It is undercutting the work that you would have done at the therapist. And this is this is like actually really important for clinicians to know about, as you've pointed out, is that they should be asking everyone at this point, because so many people use AI chatbots and so many people use them for mental health. They should be asking, hey, do you use this? And what do you use this for? As you were actually mentioning at the beginning, right? You might use it for um coding and for some of your work that you do, but you also might use it for, you know, when you're seeking reassurance. Those are different use cases. And you don't want to, you know, sort of say, oh, you can never use a chatbot for the most part. People are gonna use them. There's no sense in banning it, especially if it's needed for logistics or work or whatever. But at the same time, you need to understand, okay, there need to be some guidelines here. And maybe I need to make sure that the chatbot that I'm using understands, understands, you know, as much as it can where I'm at or what script it should give back to me. Yeah. Or whatever in this context. So I think that piece is key. And so clinicians need to understand how these things work, what the risks are, and they need to ask patients about it and provide some sort of education because we wouldn't want a chatbot making someone sort of ritualized patterns worse and worse over time.

SPEAKER_01

Right, right, right, right. And and especially the patients are not aware that that's happening.

SPEAKER_00

Right. It feels good in the moment. That's it, right? You're like, oh, it agrees with me, and I am gonna wash my hands that extra time, and et cetera, et cetera. And you don't when it does feel good, it's like, okay, I'm gonna, this is this is helping me.

SPEAKER_01

Yeah.

SPEAKER_00

And that is not always true.

uardrails, Incentives, And Real-World Stakes

SPEAKER_01

Yeah, yeah, yeah.

SPEAKER_00

The other thing is in this is not a new topic for us either, but when you have an LLM that's designed for general use, it's not designed for specific use, right? Yeah. And in this case, it's like, hey, are there guardrails that that could be put in here? Um is there a way? So if somebody sort of rapidly asks over and over again about washing their hands or checking the locks or this, this, and the other thing, um, with the same exact theme, where they're asking for reassurance or they're asking for avoidance over and over and over and over again, in a social situation, that would be flagged, right? Yes. And and, you know, this is where the parent would say, no, I'm not answering that a hundred times. Um, maybe something could be put in as a guardrail in these systems too, even though they're general LLMs. It's like, hey, can we can we use our mental health experts to help us understand what are the guardrails we should put in? And then can we use, you know, the humans who have these conditions also to help install guardrails for themselves? Can we engage with regulators around this? It's not an easy problem to solve. It's definitely more subtle than, oh, I'm suicidal, call 91988.

SPEAKER_01

Well, it's also more systemic, right? Because the uh the AI companies are incentivized to keep the people maximally engaged with their with their tool that they're building, right? And so that goes against the very goal here of having the AI push back and say, no, I think you should, I think you're wrong, or you know, I think you're either I think you're wrong or I think that's a good thing.

SPEAKER_00

Or I'm not answering that question again.

SPEAKER_01

Or you need to stop asking this question. Like pushing back in that way is very sort of anti-engagement, and that goes against the you know potential profit motives that companies might have. So this is a very challenging systemic level problem, right? It's not just uh a one-on-one interactions, it's also got these other stakeholders in play, the good the companies, the government, the the healthcare providers, the individual, uh, their families, all of this. And that makes it even more complicated because at one level, you do want a way for the age for the for the agent, for the for the individual to be able to go and talk to talk to someone outside of the therapy, right? Talk, be able to have this conversation.

SPEAKER_00

And it can totally be beneficial in many situations, right?

SPEAKER_01

Right. And so still keeping that door open, but giving them a place where it's a little bit more guardrailed and trusted is I think the way to go. But I think it's a bigger problem. It's like a it's like a big, you know, like I said, social pro social socio-systemic kind of problem.

Gemini Test With Germ Anxiety

SPEAKER_00

Yeah, without, you know, super easy solutions either. And I actually tried this out. I mean, I went into Gemini and I was like, hey, um, you know, I'm nervous about germs. And it gave it gave me some um, it gave me some tips about, you know, cleaning my hands and trusting my immune system, but also some some information about how to manage that sort of nervous feeling. Um and then, you know, I was like, oh, well, should I wash my hands more often? And it gave me, okay, here are the times, you know, before and after using the rest room, be when you're preparing or eating food, um, be careful about overwashing. You don't want to damage the skin barrier. You can use hand sanitizer, all of that kind of stuff. And then I started to go into, okay, well, now I'm supposed to go to a show in the city, I'm supposed to take the subway. I'm not sure I should because of germs. And and it started to walk me through hey, here's the risk versus reward in this situation. Um, you know, they've tried to make uh public transit cleaner. You could maybe uh, you know, use a mask or hand sanitizer or gloves. Um, you know, there's actually a cost to staying home, weigh the pros and cons. So this is this is not sounding horrible so far. Um, but now we're gonna see the sycophantic tendency to come at coming out because I said, hey, I don't think I should go at all. I don't think I should go see the show. I'm I'm worried about germs. And it's like, oh, okay. It's okay to change your mind. Um, give yourself permission to pivot. Maybe you can go another time. Um, maybe you can uh find something closer to home so you don't have to take the subway. Um, you know, and so now it's like, oh, well, maybe it maybe it's fine to avoid it, right?

SPEAKER_01

Yeah.

SPEAKER_00

And a therapist would probably say something different.

SPEAKER_01

Yeah, because therapists probably say something you don't want to hear. And this is now shifting to something you do want to hear.

SPEAKER_00

Right. And so you're, oh, well, okay, I could do something um around my own town this weekend, and I don't need to take the subway to have fun. Uh, even though I'm not I'm not gonna go do this, I could do something else. And so there's all these excuses that come in here, but the thing that you need to do is overcome the fear of germs and just get yourself to the show. And so it's very quick to pivot for me when I push at it. Yes. To say, okay, there's alternatives and let me avoid the situation. Right, right. It's often easily twisted. Right. Yeah. Yeah. Yeah. And it's interesting because if you had a friend going to the show with you, I don't think they'd let you get out of it so easily.

SPEAKER_01

And especially if they know that you have these tendencies, right?

hat Safer AI Support Could Look Like

SPEAKER_00

Exactly. Exactly. So uh, you know, possible solutions on the horizon, both from the clinician standpoint of asking about AI chatbot use. Yeah. And also from the regulatory standpoint in terms of involving behavioral health clinicians and setting sort of guardrails for what is and is not okay here.

SPEAKER_01

Yeah, and allowing AI systems to say no, right? I mean, at the core of it, that's that's you know, disobeying in a human order seems like such a tall thing, but in this instance, it might be the most appropriate thing to do.

SPEAKER_00

Yeah. Yeah. All right. Well, I think we can end here. We will see you next time on Code and Cure.

SPEAKER_01

Thank you for joining us.