Code & Cure

#21 - The Rural Reality Check for AI

Vasanth Sarathy & Laura Hagopian

How can AI-powered care truly serve rural communities? It's not just about the latest tech; it's about what works in places where internet can drop, distances are long, and people often underplay symptoms to avoid making a fuss.

In this episode, we explore what it takes for AI in healthcare to earn trust and deliver real value beyond city limits. From wearables that miss the mark on weak broadband to triage tools that misjudge urgency, we reveal how well-meaning innovations can falter in rural settings. Through four key use cases—predictive monitoring, triage, conversational support, and caregiver assistance—we examine the subtle ways systems fail: false positives, alarm fatigue, and models trained on data that doesn’t reflect rural realities.

But it’s not just a tech problem—it’s a people story. We highlight the importance of offline-first designs, region-specific audits, and data that mirrors local language and norms. When AI tools are built with communities in mind, they don’t just alert—they support. Nurses can follow up. Caregivers can act. Patients can trust the system.

With the right approach, AI won’t replace relationships—it’ll reinforce them. And when local teams, family members, and clinicians are all on the same page, care doesn’t just reach further. It gets better.

Subscribe for more grounded conversations on health, AI, and care that works. And if this episode resonated, share it with someone building tech for real people—and leave a review to help others find the show.

Reference: 

From Bandwidth to Bedside — Bringing AI-Enabled Care to Rural America
Angelo E. Volandes et al.
New England Journal of Medicine (2025)

Credits: 

Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0
https://creativecommons.org/licenses/by/4.0/

SPEAKER_00:

The fate of AI in rural medicine will be shaped not in sterile server farms, but at kitchen tables, over backyard fences, and on front porches. The very places where social support has always done its quiet, steadfast work.

SPEAKER_01:

Hello and welcome to Code and Cure, where we discuss decoding health in the age of AI. My name is Vasanth Sarathy. I'm an AI researcher and cognitive scientist, and I'm here with Laura Hagopian.

SPEAKER_00:

I'm an emergency medicine physician, and I work in digital health. And I just read a quote from the end of this amazing article about bringing AI-enabled care to rural America.

SPEAKER_01:

Yeah, I also read the article, and I thought it was very interesting. Over the past several weeks on this podcast, we've been talking about all these really cool things AI can do to help physicians and hospitals: ambiently listening to physician conversations and creating discharge summaries that are understandable to patients. Documentation, yeah. All kinds of documentation work, conversational agents that interact with patients but also with doctors directly, and then all these other AI tools behind the scenes that help with things like drug repurposing and other pharmaceutical applications.

SPEAKER_00:

Yeah, predictive algorithms, triage tools, all of that kind of stuff.

SPEAKER_01:

Yeah, and at the center of all of that is this AI system we're talking about that seems so cool. And, sorry, one more thing we also talked about: how AI can help manage all this data we're collecting from wearables and so on. But on the flip side, what I think we missed, and what this article focuses on, is that the humans collecting and reviewing all this data live all over the country. I have my Apple Watch; it's collecting my data and giving me my information. In an ideal world, it's sending that information to my doctor, and my caregivers are keeping track of it. All of that is wonderful. But I'm not the only person here; people live all over the country. And that's what I love about this article: it talks about rural America.

SPEAKER_00:

Yeah, exactly. We're not all living in suburban or urban environments; people are living in rural environments too. And we talk a lot about data sets and what data these LLMs are trained on. So the question is: are they trained differently on rural data?

SPEAKER_01:

Yeah, and at first glance it seems like, okay, why does that matter? People have phones everywhere, smartwatches everywhere across the country; they're being sold, there are Apple Stores everywhere, and people can drive to places that aren't that far. So what's the actual difference? As it turns out, there's quite a bit of difference. And speaking specifically to the training data point you make, which I think is particularly interesting: we've talked about training data in terms of English versus low-resource languages in other parts of the world. We always think of "rural" as some rural country with no internet, where people speak an unfamiliar or unknown language, and as a result there isn't enough training data for the LLMs to learn to understand and speak to those people. That's what we think of. We don't think of English-speaking people here in our own country. But they're here.

SPEAKER_00:

I mean, rural America is here.

SPEAKER_01:

Yes, and they speak differently too. People in urban areas speak differently from people in rural areas. There are different cultural norms in play that lead people to say things directly versus indirectly, maybe more politely in some places.

SPEAKER_00:

And in rural areas, certainly, there are a lot of phrases where people will just deal with the pain. They don't want to be a bother, or they'll make do. So they won't say, "I'm in 10-out-of-10 pain." They'll say, "Oh, it hurts, but I'll make do." Right.

SPEAKER_01:

And that's not a language constraint, it's still English, but it's clearly a resource constraint for the language models, because they might not have enough training data. Again, it's not like people everywhere are constantly on their machines typing on Reddit, where a lot of the training data comes from, or using all the social media that's out there in the same way in all parts of the country.

SPEAKER_00:

Or if it's trained on all the data throughout the country and doesn't differentiate between rural, suburban, and urban, that could be a problem too.

SPEAKER_01:

Yes, that's true. It's all just lumped together; the model doesn't really think about that aspect. And those things end up being a big deal. That's kind of what I learned from reading this article.

SPEAKER_00:

Yeah, and the other thing you kind of skimmed over when we were talking about this is that people may not actually have access to broadband internet. So you're using your Apple Watch, and maybe you get an alert that your heartbeat is irregular, or the noise level is very high, whatever it is. At least some of those alerts have to go through cell service. And if the cell service is down or you don't have broadband, then the alerts aren't delivered, right?

SPEAKER_01:

Yeah, exactly.

SPEAKER_00:

And in this article, they talk about how older adults in rural zip codes are 1.6 times more likely to lack in-home internet service than their urban peers. I think we take it for granted that everyone can just log on, listen to our podcast anytime, go on their computer, type on Reddit, but not everyone has access to that. So if you're trying to do remote monitoring, for example, and you're relying on that in-the-moment service being available, maybe it's not available. And then how do you trust a system that's designed to alert you when the alert doesn't come through?

SPEAKER_01:

Yeah, then you don't trust it. Sometimes the alerts come through, sometimes they don't. And there's another layer: the alerts might come through, but because the system was trained on bad or incorrect data, they might be false positives. That's crying wolf, and people start to ignore the alerts and lose trust and confidence in these systems. That defeats the whole purpose. Maybe we need to take a step back and ask: do we need all these AI things all the time, everywhere?

SPEAKER_00:

But some people might benefit from those things, if they're used correctly. Imagine someone with a heart condition who needs to monitor their weight more frequently. They step on the scale and they've gained weight. Well, what do you do about that? Maybe they're two hours from their cardiologist. This is a great place for digital health to come in: there's an alert set up that's noticed a weight gain. But what does it tell the patient? Does it say, hey, you've gained weight? Does it say, you should call your doctor? Does it alert a nurse to call the patient and ask, how are you doing? How's your breathing? We've noticed you've gained a couple of pounds, and we're worried it might be fluid. How is the human involved there? Because one thing that's very clear to me from reading this article is the importance of having humans not just in the loop, but actually involved in the system. And it doesn't necessarily have to be the provider, the doctor. It could be a neighbor, or a provider like a nurse or a social worker. Who's going to be there in person to act on whatever data is important? Who's going to help figure that out?

SPEAKER_01:

Yeah. And I also found this interesting because the places that need all this technology the most are potentially these rural areas: they lack enough nurses, healthcare coverage is scattered, and, like you just said, things can be two or three hours away. That's exactly where technology could be helpful, but it's what the technology is least prepared for right now, which is ironic in some ways. You want the technology to serve a human purpose, and right now it's serving a different one than the one it most needs to serve. So we need better training data. We need these systems to understand the populations they're supposed to help, and can help the most.

SPEAKER_00:

It's really interesting, because there aren't enough clinicians in rural areas; the shortage is worse than in suburban and urban areas, and there's not enough there either. But if you have some sort of monitoring tools, maybe your rural nurses can actually go and see more patients, right? This technology could help enable that.

SPEAKER_01:

Yes. It's a support tool in that sense, a caregiver support tool, and that's definitely one useful characteristic. But again, we have to fix all the issues: the broadband, which is a core infrastructure issue, providing more access to the internet, or finding clever ways to work with scarce resources. We need people working on those problems, is what I'm getting at.

SPEAKER_00:

Oh, for sure. We absolutely need people working on those problems, because that's the core of how you actually intervene here. If you're saying, hey, we're going to do some predictive monitoring, let's figure out if someone is doing worse and we need to intervene, but you don't have broadband connectivity, that alarm may never trigger, right?

SPEAKER_01:

Yeah.

SPEAKER_00:

Or, like we've talked about, maybe the system isn't trained on the right data, and it's alarming all the time. What do you do with that? You just ignore it. I ignore alarms if I get too many saying the same thing over and over again. So we need the training on the rural data, we need the broadband, and then we need the human to go and do the intervention.

SPEAKER_01:

Right. And that human connection is still necessary, especially given that we're not quite there yet with a lot of the AI language interaction and culture-specific capabilities. But I also really liked that this article talked about the different use cases of AI and the particular challenges you'd face with each in a rural environment. We can just go down that list, because I think it was really great.

SPEAKER_00:

Yeah, I think I just covered the first one, predictive monitoring.

SPEAKER_01:

Yeah, you did. That's really about early detection. You can pair alerts with caregivers and do those kinds of things, but it's risky if the systems don't work well: you get alarm fatigue, and as a result, delayed responses. So there's a fine balance there, which again goes back to using the right training data to get the right predictions. The second use case was triage tools, and here again, you have to train on rural communication styles.

SPEAKER_00:

Yeah, the idea here is you might get a ton of patient messages, and you wonder which are the most important. Maybe I have 50 messages and time to go through five of them. Which are the five urgent ones, and can an AI system help figure that out? Of course, this is where the language and cultural concerns come in. "Oh, my stomach's really hurting, but I don't want to be a bother. I just figured I'd check in." Does that set off an alarm bell in an urban area versus a rural area? The "I don't want to be a bother" thing: does it actually mean someone is or isn't in pain?

SPEAKER_01:

Yeah.

SPEAKER_00:

And this is the kind of signal you could miss. You could miss when something is urgent because the culture is different in a rural location. If the model isn't trained on those patterns, and you don't understand how stoic some of these patients can be, then you're missing out.

SPEAKER_01:

Yeah. And maybe in that case you shouldn't be using AI as a triage tool, or you train it on the rural data, if you have it. Understanding when an AI tool is ready for deployment and when it isn't is really key. We see a lot of hype right now about how every AI tool that comes out is going to change the world and be great. But these kinds of issues matter, and deployment in a real environment totally matters if you're actually going to build trust and have people use it.

SPEAKER_00:

You have to have people in that specific community involved in the development, which is hard to scale. But you're also right: you can't have these tools when they're not trained on the right data.

SPEAKER_01:

That's right. AI can also be used for conversational agents, chat agents, where the idea is to help reduce loneliness and generally support mental health. That's already kind of happening right now with ChatGPT and so on; people talk to it and share a lot of concerns. But again, it's trained on the same data, not on local norms, so you have to be very careful with this. And it's not just local norms but also age-specific norms that come into play, which are themselves influenced by local cultures.

SPEAKER_00:

This is an interesting one to me, because the article talks a lot about social support and how people in rural environments rely on their neighbors. But maybe they don't always have that, and it would be nice for them to have someone to talk to, as long as they don't get too attached to the chatbot.

SPEAKER_01:

Right, exactly. And finally, and we've talked about this a little already, there are caregiver support tools: AI used to assist families with decision making and with reminders. It's a helper for the caregiver.

SPEAKER_00:

Yeah, like: did you remind mom to take her meds? Or maybe you get mom's weight and see whether her heart failure medication needs to be tweaked, so you can call her cardiologist. Again, it has to be tailored to this subpopulation and the norms inside of it.

SPEAKER_01:

Yeah, and again, you have the issue where you begin to over-rely on these automation tools, and your trust in everything else becomes conditioned on them. If you start seeing spurious alarms and reminders, you're going to start ignoring all of them, even though some are more important than others. It goes back to the same issues we just talked about, a set of problems unique to rural use cases. And yet that's exactly where technology can really help out. I'm really hoping people develop technologies and do this work in those communities, because they're the ones who need it the most.

SPEAKER_00:

Yeah. Another thing they brought up in this article is that if you're going to use these tools, the insights you give back to the patient have to be actionable. It's not so helpful to just say, oh, your heart rate is irregular. Well, what do I do with that? Do I call my provider? Do I try to make an appointment? Do I drive two hours away? Do I need to call an ambulance? What do I do with that information? And who does that information go to? Does it automatically ping someone else, or does it just stay with me, so I have to decide what to do? It's very interesting. One of the things I think is key about this article: we talk about a human being in the loop all the time for all of these AI tools.

SPEAKER_01:

As a way to check, to make sure the AI is doing the right thing.

SPEAKER_00:

Yeah, but I think there's more to it than that here. The system needs to be really reinforced by humans, not just humans monitoring the AI side of things. This is a tool that humans are using, whether they're the patients, the family members, or the care providers, something to supplement and amplify what's already being done. So maybe your nurse could see 10 patients in a day doing home visits; now maybe they can check in with a few more over the phone, because they've gotten alerts about weight gain or irregular heart rate or number of steps taken, whatever it is. That's one of the keys here: trust is really important for these communities, and trust is built with humans. Maybe trust can be built with AI too, but sometimes we overtrust these systems, and then they mess up and we don't trust them at all anymore. And this is where the human element, and our reliance on relationships, becomes so important.

SPEAKER_01:

Yeah. And just to conclude on that point, the human-in-the-loop idea is often introduced in the context of AI completely automating a certain workflow or job function. What we're saying here is that that's not the point. The point is not to automate anybody; it's to support people who really need additional help at scale. There aren't enough nurses, so we need the nurses who are there to be able to see more patients and be more effective in their own communities. That's a different use case. It's not about AI automating anything; it's about AI being used, like you said, as a tool for that nurse, or whoever, to help more people.

SPEAKER_00:

And the relationship can't go away, right? The relationship with the other human is the most important thing.

SPEAKER_01:

Yeah.

SPEAKER_00:

And that's what's gonna get people the care that they need.

SPEAKER_01:

It's about making their lives easier and more efficient, and seeing the right patients at the right time, when they need it. That's really where the AI tools can come in and help out.

SPEAKER_00:

Yeah. Well, I guess we need to solve this problem of unreliable broadband. Yes. That's a big issue. If we want rural health to go digital, we need broadband, and we need training data for these communities. I think that's what we can walk away with today.

SPEAKER_01:

Yeah.

SPEAKER_00:

All right. Well, thanks for joining us on Code and Cure. We'll see you next time.

unknown:

Bye bye.