Code & Cure

#17 - How Multi-Agent Systems Could Reshape Care, From Wearables To Scheduling

Vasanth Sarathy & Laura Hagopian

What if digital assistants could triage symptoms, schedule appointments, and coordinate rides—all while doctors focus on the human side of care? That’s the promise of multi-agent AI in healthcare. In this episode, we explore how these intelligent teams of agents are transforming both clinical and operational workflows.

We begin by breaking down what an AI “agent” really is: not just a chatbot, but a goal-oriented system that can use tools, call APIs, and take real-world actions. You'll hear how agent teams are structured—with supervisors, shared workspaces, and collaborative checks—to ensure safety, usefulness, and accountability before any recommendation reaches a patient.

We also unpack the difference between clinical agents (like wearables that surface risks and suggest tests) and operational ones (verifying insurance or scheduling visits). Scoped access and role-based permissions keep data secure while enhancing efficiency.

Real-world examples bring it all to life. A spike in heart rate triggers a wearable agent to alert a clinician. Another agent gathers context. A third finds the nearest available lab slot. Even when agents disagree—say, over CT vs. ultrasound—a supervisory agent helps weigh the evidence, with the final call left to the human clinician.

We talk candidly about challenges: empathy from conversational agents can help with adherence but risks overreliance or emotional confusion. Guardrails like transparency, audit trails, and clear handoffs to humans are essential for trust and safety.

The big picture? AI agents aren’t replacing healthcare professionals—they’re extending their reach, improving responsiveness, and easing system burdens when thoughtfully deployed. When done right, it’s not man vs. machine—it’s care, better coordinated.

If you enjoyed this conversation, follow the show, share it with a colleague, and leave a quick review to help others discover it.

References: 

Coordinated AI agents for advancing healthcare
Michael Moritz et al. 
Nature Biomedical Engineering (2025)

What are AI agents, and what can they do for healthcare?
Carlos Pardo Martin et al.
McKinsey Healthcare Blog (2025)

Credits: 

Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0
https://creativecommons.org/licenses/by/4.0/

SPEAKER_02:

Have you ever watched Inside Out?

SPEAKER_01:

Yeah? Why?

SPEAKER_02:

Because that's what these multi-agent systems make me think of.

SPEAKER_01:

Hello and welcome to Code & Cure. My name is Vasanth Sarathy, I'm an AI researcher, and I'm here with Laura Hagopian.

SPEAKER_02:

I'm an emergency medicine physician, and we're here to talk about multi-agent systems today.

SPEAKER_01:

Yeah. So can we just go back to that Inside Out comment you made before? What does that have to do with this again?

SPEAKER_02:

Well, when I was reading these papers about multi-agent systems, and you'll probably do a better job of explaining this than me, what I was thinking of was the movie Inside Out, or Inside Out 2, where you have these different emotions at the helm, controlling the brain. Sometimes it's Sadness, sometimes it's Disgust, sometimes it's Joy, sometimes it's Anxiety, sometimes it's Anger, right? Each of them takes the helm, directing and making decisions, and together they make a whole that decides what the main character actually does. But you can tell me if that's a good analogy or not.

SPEAKER_01:

Yeah, now that I think about it, it does make sense. You have these different characters that are all looking at the same situation but with different lenses. They're coming to different conclusions, and maybe they debate each other, or some dominate over others in certain contexts. In some sense, that's what's happening with these multi-agent systems.

SPEAKER_02:

Yeah, and at the end of the day, you still have to come to one decision, right? The main character, Riley, does one thing. Maybe it was more directed by Sadness or Joy or Envy, whatever it is. But all of those things can weigh in from their different angles, and that's what multi-agent systems seem like to me, anyway.

SPEAKER_01:

Yeah. So now, of course, what does that have to do with healthcare, and your health and wellness?

SPEAKER_02:

That's what we're gonna talk about today. But first, I'd love for you to define, better than my pop culture analogy, what an agent actually is and how it's different from other AI systems.

SPEAKER_01:

Yeah, I can do that. So we've been talking in this podcast about AI systems being used in various applications, and some of the ways we've used the term AI overlap with machine learning, which is fine. We have systems that can identify diabetic retinopathy, or systems that can write medical notes, right? These are all overlapping ideas. One thing to take away is that the term AI is used very heavily everywhere; it's very broad and covers a lot of things. Here we're using the term agent, and an agent has a very specific meaning, though it carries a bunch of sub-meanings with it. An LLM, like your ChatGPT, has at its very core a machine learning model that has learned to complete sentences: you give it a bunch of text, and it adds more text that is consistent with what was written before. That was then scaled up substantially by folks at OpenAI and other places: trained on trillions of tokens of text from the internet, it produced better and better quality outputs. That's what an LLM is at its core. And that's why it overlaps with machine learning: it learned how to complete sentences and extend sequences, because that's the type of model it is.
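
(A toy illustration of the completion loop described here: a lookup table stands in for a real trained model, which would predict a probability distribution over next tokens rather than a single fixed word.)

```python
# Toy sketch of the core LLM loop: repeatedly append the "most likely
# next word" to a sequence. The bigram table is a stand-in for a real
# trained model; nothing here is an actual LLM.
BIGRAMS = {
    "the": "patient", "patient": "has", "has": "a",
    "a": "fever", "fever": ".",
}

def complete(prompt: str, max_tokens: int = 5) -> str:
    tokens = prompt.lower().split()
    for _ in range(max_tokens):
        nxt = BIGRAMS.get(tokens[-1])   # "predict" the next token
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(complete("the patient"))  # -> "the patient has a fever ."
```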

SPEAKER_02:

Yep. Okay.

SPEAKER_01:

And we call it AI because the text it produces seems kind of amazing. I know.

SPEAKER_02:

I was gonna say: you call it AI, I call it magic.

SPEAKER_01:

Yeah, I mean, in a sense, what everybody was impressed by with these systems was that they demonstrated what seemed like reasoning, or they demonstrated incredible knowledge of everything, because they were fed everything. That's where the AI piece came in a little bit more, and we started to use these tools in a more general capacity for various purposes. Now, along the way, somebody came to the realization: wait a minute, what if we hook up these LLMs to other things? For example, suppose I asked, is it a good time to travel to Paris today? The LLM, which was trained on data from two years ago, is static; it hasn't been updated. It's not going to be able to answer that question, because it doesn't know what the weather is right now. What it would need to do is submit a request to a weather API, some kind of weather service, to figure out what the weather actually is in Paris, and then come to a conclusion about whether it's good for travel or not. The LLM can't answer the question on its own, but it can say, hey, let me check the weather. It checks the weather, comes back, and then produces the answer. So the answer it produces wasn't just from its own knowledge; it required external access to some other API out there. That's already something interesting, right? The LLM is doing something else: going out there, looking something up, coming back with a result, and incorporating that into its own answer. That's the first step. It has some agency, some ability to do something outside of itself in order to acquire more information.
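
(A minimal Python sketch of this tool-use pattern. The get_weather stub, the tool registry, and the keyword check standing in for the model's decision are all hypothetical; real agent frameworks and weather APIs differ in the details.)

```python
# Minimal sketch of the tool-use loop: the "model" decides it needs
# fresh data, calls a registered tool, and folds the result into its
# answer. Everything here is a stub for illustration.
def get_weather(city: str) -> str:
    """Stand-in for a call to an external weather service."""
    return f"Sunny, 22 C in {city}"

TOOLS = {"get_weather": get_weather}

def answer(question: str) -> str:
    # A real LLM would decide whether a tool call is needed; here we
    # fake that decision with a simple keyword check.
    if "weather" in question.lower() or "travel" in question.lower():
        observation = TOOLS["get_weather"]("Paris")
        return f"Checked the weather: {observation}. Looks fine for travel."
    return "I can answer that from my training data alone."

print(answer("Is it a good time to travel to Paris today?"))
```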

SPEAKER_02:

Okay, so robots are taking over the world, you basically just told me. Though so far it hasn't actually done anything yet.

SPEAKER_01:

All it's done is answer a question: I requested some information and got that information. Now you can imagine it could answer questions about anything in this fashion, right? It could even access an external calculator to get more accurate results for a math problem, instead of answering based on the word completions it's currently doing. These are called tools: it accesses these external tools and produces better answers for you. It might read a document, look something up, do a calculation, and so on. But in addition, maybe it can do one more step: it can take actions in the world. Requesting a piece of information isn't the same as taking an action. For instance, suppose you had an AI system designed for booking your next restaurant reservation. If you asked it, hey, can I get a reservation at restaurant XYZ this weekend, it would have to make a lot of different steps of reasoning and determinations. It would have to figure out what time you're available, so it would look up your calendar and find an open block. It would have to look up the restaurant and see if it's open. But after it acquires all the information and decides that yes, it can make a reservation because there are blocks available, it has to actually make the reservation. It has to actually do the thing. And that has consequences. In the case of a restaurant reservation, fine, it's a restaurant reservation. But it might have further downstream consequences, right? To make the reservation work, it might also have to ensure you can get there, so maybe it books an Uber, and to do that it has to do something else. And the question is, how many of those things is it allowed to do before you say, wait a minute, I didn't sign up for all this? I just wanted to know if I could get a restaurant reservation. So the agency there is in the fact that it can take actions now, not just take in information. All of a sudden you have an AI agent that can get all the information it needs to help you do your job, right?
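
(A sketch of the information-vs-action distinction: read-only steps run freely, but the consequential step passes through a confirmation gate. All function names are illustrative, not a real booking API.)

```python
# Read-only steps are cheap; actions with side effects are gated.
def check_calendar(evening: str) -> bool:
    """Read-only step: pretend to look up the user's calendar."""
    return True

def book_table(restaurant: str) -> str:
    """Consequential step: this one actually changes the world."""
    return f"Booked a table at {restaurant}"

def plan_dinner(restaurant: str, confirm) -> str:
    if not check_calendar("tonight"):
        return "You're busy tonight; nothing booked."
    # Guardrail: pause for explicit approval before any side effect.
    if confirm(f"Book {restaurant} and arrange a ride?"):
        return book_table(restaurant) + ", and requested a ride."
    return "Stopped before booking; nothing was changed."

# Auto-approve for the demo; a real system would ask the user.
print(plan_dinner("XYZ", confirm=lambda prompt: True))
```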

SPEAKER_02:

So is that the definition, that it's able to take an action?

SPEAKER_01:

Yeah, that's a great way to think about it. Yeah.

SPEAKER_02:

But then, your agent, just like the Inside Out characters, is responsible for doing a thing, not all the things. Yes. I'm making this up, but it's responsible for being Sadness, or Joy, or Envy. It's responsible for one of those things, but it's not ultimately making all the decisions itself. Or is that up to how you design the architecture of these systems?

SPEAKER_01:

Right. And architecture is just a fancy way of saying: how do you assemble these agents together to do a thing? Maybe you have one monolithic agent that does it all. Maybe you have specialized agents that do specialized things and report back to a supervisor agent, which then decides what to do next. Those are different ways of arranging agents to perform a certain task. Or maybe you have a group of agents that all share a workspace and work on it together, making adjustments. Think of it like a blackboard; this is actually what it's called, a blackboard architecture. There's a blackboard up there, and a bunch of different agents write into it and draw things on it, and as the blackboard evolves, the agents perform different functions to move the work along. That's a different, less hierarchical architecture than the supervisor-and-sub-agents one I described before. And there are all sorts of modalities and protocols for how they should work together. Maybe before a team of agents reports its answer, it has to have debated the question first. So it's not just one agent acting on its own; it's three or four of them that say, let's all think about this first, make a plan, argue about the pros and cons, and then take the action. You can start to feel like this is very human-like in many regards.
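
(A rough sketch of the blackboard arrangement, under the assumption that each specialist is a simple function; in a real system each would be an LLM-backed agent. Names and logic are invented for illustration.)

```python
# Specialist agents read a shared workspace, add their piece, and a
# supervisor decides when the plan is complete. Agent logic is stubbed.
blackboard = {"symptoms": "pelvic and abdominal pain"}

def imaging_agent(board):
    board["imaging_proposal"] = "ultrasound"

def triage_agent(board):
    board["urgency"] = "routine"

def supervisor(board):
    if "imaging_proposal" in board and "urgency" in board:
        board["plan"] = f"{board['urgency']} {board['imaging_proposal']}"

for agent in (imaging_agent, triage_agent, supervisor):
    agent(blackboard)          # each agent works on the shared board

print(blackboard["plan"])      # -> "routine ultrasound"
```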

SPEAKER_02:

Well, yeah, that's exactly where I was gonna go with this. I'm thinking about grand rounds in the hospital, right? You have a bunch of people get together to talk about a case and decide what to do about it. I might be part of it; you might have other specialty providers, like an oncologist; you might have a nurse, a case manager. You have people coming at it from different angles, with different motivations and perspectives. They all weigh in, and then you decide, okay, here's what we think the most appropriate path or paths forward could be, and then we move forward. So in many ways this does resonate, because you want that debate, you want those different perspectives brought in, so you can figure out the best path forward.

SPEAKER_01:

Yeah. And I think that brings us to the papers we're talking about today, and the role of these architectures, multi-agent systems, which just means many agents working together in coordination.

SPEAKER_02:

Yeah.

SPEAKER_01:

How can that be used in healthcare? Right away, in the restaurant reservation setting, we talked about how you have one agent that's responsible for making the reservation, but maybe another agent that's responsible for checking your calendar, right?

SPEAKER_02:

Yeah, exactly. In the healthcare setting, they're broadly categorized into agents that are, one, clinical: are they related to patient care and diagnostics, are they going to monitor a disease, and so on. And two, operational, which is kind of like what you were describing in the restaurant situation: administrative or logistical. Can we schedule an appointment? How will resources be allocated? Let's think through the billing, et cetera. And you can imagine that the different agents within those categories would have different goals, and different need-to-knows: what information do you actually need to feed into that system? An agent handling billing doesn't need to know all of the disease-monitoring information. So in some ways it's nice from a privacy perspective that each agent would only get the information it actually needs to carry out its task.
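
(A sketch of that need-to-know idea: each agent role sees only the record fields its task requires. Field names and roles are made up; a production system would sit behind a proper access-control layer.)

```python
# Role-based scoping: filter a patient record down to the fields an
# agent's task actually needs. All fields and roles are illustrative.
PATIENT_RECORD = {
    "name": "Pat", "insurance_id": "INS-123",
    "heart_rate": 110, "glucose": None, "diagnosis": "under review",
}

ALLOWED_FIELDS = {
    "billing_agent": {"name", "insurance_id"},
    "monitoring_agent": {"heart_rate", "glucose"},
}

def scoped_view(record: dict, role: str) -> dict:
    allowed = ALLOWED_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

print(scoped_view(PATIENT_RECORD, "billing_agent"))
# -> {'name': 'Pat', 'insurance_id': 'INS-123'}  (no clinical data)
```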

SPEAKER_01:

Yeah, yeah, exactly.

SPEAKER_02:

So, I don't know. I mean, we went through an example from the restaurant angle, but from the clinical angle, I can think of many use cases, and they ran through some in these articles as well. One that stuck out to me was wearable monitoring. You're not showing your wearable data to your provider every day, and they can't possibly synthesize that amount of data all the time. But in one of these examples, the agent noticed that someone's heart rate was a bit high. All the other data coming in looked okay: their oxygen level was okay, their activity level was okay. They weren't wearing their glucose monitor, but their heart rate had been a bit high. So the agent can ping them and say, hey, I noticed your heart rate is a little high. How are you feeling today? It can take that data, which a healthcare provider wouldn't necessarily even have access to, and ask: does this mean something? Are you at the onset of an illness? Are you about to get a fever? And it can communicate with the patient around that topic. That's one example of the more clinical kind: noticing an abnormality, flagging it, and seeing whether it could mean something.
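
(A toy version of the wearable example: flag a sustained high heart rate when the other streams look normal, then ping the patient. The thresholds and readings are invented for illustration.)

```python
# Simple monitoring rule: elevated heart rate + otherwise-normal
# signals triggers a check-in message. Numbers are made up.
readings = {"heart_rate": [98, 104, 110], "spo2": 98, "active": False}

def wearable_agent(data) -> str | None:
    hr_elevated = all(hr > 95 for hr in data["heart_rate"])
    other_ok = data["spo2"] >= 95 and not data["active"]
    if hr_elevated and other_ok:
        return ("Hi! Your resting heart rate has been a little high. "
                "How are you feeling today?")
    return None

msg = wearable_agent(readings)
if msg:
    print(msg)  # a real system might also alert a clinician here
```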

SPEAKER_01:

Right, right.

SPEAKER_02:

And on the flip side, if that person says, hey, actually I don't feel great, I think I need an appointment, that's when the operational ones come in: a coordinator that can say, hey, I can make you that appointment, I can get you that Uber ride, and I can make sure you'll be close to the lab, because your provider thought you might need labs drawn. So the operational side of it is taken care of as well.

SPEAKER_01:

That's a great example. Yeah, and there's more, right? The interaction can keep continuing once they go to their appointment, if the system is connected to other things like insurance, medications, follow-on visits, radiology, labs, and so on.

SPEAKER_02:

Yeah, and now that you've said radiology, that triggered another thought from one of the examples in these papers, which is that sometimes the agents actually choose different things from each other. In one example, one agent said, you have pelvic and abdominal pain, I think you should get an ultrasound. And another agent said, actually, I think the pain is more in the upper belly, we should get a CT scan. It was an interesting way to see that you ultimately need to make one decision, which test are you getting, and two agents might be coming at it from two different angles. They might have two different understandings of the patient.
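
(A sketch of how a supervisory step might weigh competing proposals while leaving the final call to a human. The scoring field and its values are entirely made up.)

```python
# Two specialist agents propose different tests; a supervisory step
# ranks them and defers the final decision to the clinician.
proposals = [
    {"agent": "agent_A", "test": "ultrasound", "fits_findings": 0.6},
    {"agent": "agent_B", "test": "CT", "fits_findings": 0.8},
]

def supervisory_agent(props):
    ranked = sorted(props, key=lambda p: p["fits_findings"], reverse=True)
    return {
        "recommendation": ranked[0]["test"],
        "alternatives": [p["test"] for p in ranked[1:]],
        "note": "Final decision deferred to the clinician.",
    }

print(supervisory_agent(proposals))
```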

SPEAKER_01:

Yeah. I think the one distinction from humans is that you'd have one person who synthesizes all this information and then helps you make a decision, taking into account a lot of other factors. Here you have these focused agents that are only looking at it through their own lens and providing a proposal. So you might need another agent that says, okay, I took you into account, radiology agent, I took this other agent into account, and based on that, here is my recommendation going forward. Or is that a human doctor's job? I think the human role here is very interesting, because it might not be that the human is doing the specialist work anymore. The human might be the generalist who contextualizes and understands what the specialists are providing.

SPEAKER_02:

For sure. We talk about human-in-the-loop a lot, right? And this is an example where you obviously need a human in the loop who understands how these systems work, and maybe even helps build out that architecture, potentially similar to what we'd have in a clinical setting or discussion, and then ultimately helps make the decision on the path forward.

SPEAKER_01:

Right. And to what degree that can also be automated is another question. But we shouldn't forget that the physicians involved have tons of training, are physically present with the patient, and can take into account contextual factors that none of these models can. That's one piece of it. The other piece is that training physicians about the AI architecture seems like a task that's separate from their core role as care providers.

SPEAKER_02:

I know, but the more we talk about integrating AI into the healthcare system, the more I'm like, gosh, we need to understand how this works so that we can use it appropriately as a tool. It's not an all-seeing magical thing; it really is a tool. These can be very helpful tools, but we have to understand what their limits are and what their biases could be, so that we can decide on the path forward. And we have to be involved in that piece of it.

SPEAKER_01:

They're also very flexible tools, which makes it even more challenging. So we have to actively and intentionally think about what the interaction is going to be: between agents, between the human and an agent, and across the whole human-agent network. What is that interaction actually going to look like? Intentionally thinking about that and crafting that architecture is absolutely essential, and so is updating it from time to time and checking whether it still works. These agents can have whatever responsibility you decide to give them, and they're going to do the thing you ask them to do. To the extent that that's ill-defined at the beginning, it needs to be updated and fixed over the course of use, much like a human trainee needs to understand what their job actually involves.

SPEAKER_02:

Right. And so the healthcare workers have to be involved at the inception: hey, what should this agent be doing? What's an appropriate use case for an agent, especially for the clinical ones?

SPEAKER_01:

Yeah. And this paper had another interesting angle. We've always talked about how AI is used for improving operational efficiencies in the healthcare system, reducing the human burden of various administrative tasks, or improving accuracy where humans are not as good. But these systems can sometimes have a completely separate role, which I thought was very interesting: they can interact with patients directly and form connections with them that are different from the connections patients have with the doctor. Patients might share information with an AI agent that they wouldn't share with the doctor. I found that very interesting, but it's fraught with a number of ethical issues, because you can develop what are called unidirectional emotional bonds: the patient, who is a human, develops bonds with the AI system. This has been shown repeatedly in research; it happens very easily and very quickly. You can develop these relationships very fast, and the AI isn't having any of these feelings in return.

SPEAKER_02:

Right, of course.

SPEAKER_01:

Exactly; that's why it's unidirectional. And that can be very dangerous, because it can raise a lot of expectations and, as a result, disappoint the human later, and it can affect their mental health quite a bit.

SPEAKER_02:

It's interesting, because an AI agent texting back and forth with a human could keep listening forever, right? It could keep having the conversation, and you could instruct the agent to infuse empathy into the conversation. Compared to a provider, who has limited time in their day and limited ability to go back and forth with you a million times, an agent could do that all day long. Its responses could be much longer and more empathetic than a provider's could be, literally because of the amount of time in the day. So it's a cool use case in some ways: can we make our responses more empathetic, or longer, or whatever it is? But at the same time, if it's not a human doing it and you get this unidirectional bond, that's not a good thing.

SPEAKER_01:

Yeah, and it could serve another role, right? It could act as an assistant to the human who's having these conversations, giving the human another perspective on the patient: look, you're saying these things to the patient, but I'm not sure they're understanding, because they're not doing the following thing. Maybe you need to rephrase how you say this, because this is an important point you need to get across. Helping them prioritize what to say.

SPEAKER_02:

What to say, how to say it, right? Lowering the reading level of something, for example, is a great use case. And maybe this is a topic for another podcast, but I've definitely read some articles about AI for the loneliness epidemic. I'm totally digressing here, but I think it's an interesting one: can you have an AI bot friend? Clearly you can have a relationship, but if it's unidirectional, what does it actually mean? So maybe we'll cover that another time.

SPEAKER_01:

Yeah, absolutely. And I just want to make the point that all the typical issues one has with AI systems also exist here, and in some ways they could be worse, in some ways better. For instance, you have questions of bias that could leak into the AI system's judgment; you have that here too, though you could maybe alleviate some of it by having these AI agents debate one another, canceling some of those effects out. You have questions of transparency and interpretability of the results; proponents of a multi-agent system would argue that you actually have more transparency, because you're logging and tracking every single interaction between all the agents, and they're all in English, so in theory you have a full record of everything that was used to make a certain decision. But again, how the AI decides what it decides is still not so transparent. You have questions of privacy: what data can be shared, not shared, and so on. And you have practical questions of deployment: installing a system like this requires a fair amount of infrastructure for a hospital system. So all the issues one has with AI systems continue to exist here too. But I do think there's something very cool about a multi-agent architecture, because it has a decentralized quality to it: you can process a lot of information simultaneously, and it can therefore scale better than having humans do it all separately and burdening people with it.
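
(A sketch of the audit-trail point: every inter-agent message is appended to a log, so the chain behind a decision can be replayed later. The format and fields are illustrative only.)

```python
# Log every inter-agent message with a timestamp so a decision's
# full provenance can be reconstructed afterward.
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def send(sender: str, receiver: str, content: str) -> None:
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "from": sender, "to": receiver, "content": content,
    })

send("wearable_agent", "supervisor", "Heart rate elevated for 3 days")
send("supervisor", "scheduler_agent", "Offer the patient a lab slot")

for entry in AUDIT_LOG:          # full record of who said what, when
    print(entry["from"], "->", entry["to"], ":", entry["content"])
```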

SPEAKER_02:

Yeah, I think this is a very cool concept, and plus one to the decentralization. You have information flowing into the different agents, and it can be different amounts; it doesn't have to be the full set of information. Then they have to come together and synthesize it into a whole, into a plan. I think this could be the next in the line of tools that are really helpful in AI and healthcare. And I always think of the operational ones: it's a pain to try to schedule your appointment, wait on hold, and then, oh, it needs to get changed, now you have to wait on hold again. So it could alleviate some of those pressures and problems for patients in the healthcare system. But as with all of these things, we do have to understand the limitations, and we have to make sure there's a human in the loop who understands what's going on and is able to supervise and really make that final decision.

SPEAKER_01:

Yeah. Yeah.

SPEAKER_02:

Great. Well, with that, we'll see you next time on Code and Cure.

SPEAKER_01:

Thank you for joining us.