Smart from the Start - John Halamka
Smart from the Start Intro/Outro:
Welcome to Smart from the Start! Presented by Care.ai, the smart care facility platform company and leader in AI and ambient intelligence for healthcare. Join Steve Lieber, former CEO of HIMSS, as he interviews the brightest minds in the health provider space on truly transformative technologies that are modernizing healthcare.
Steve Lieber:
Hello, and welcome to Smart from the Start. I'm your host, Steve Lieber, and it is my pleasure to bring you a series of conversations with some of the smart minds of health information technology, talking about the smart directions healthcare companies and providers are pursuing. Today, I'm joined by someone who needs little introduction, Dr. John Halamka. Dr. Halamka is an emergency medicine physician, medical informatics expert, and president of Mayo Clinic Platform, which is focused on transforming healthcare by leveraging artificial intelligence, connected healthcare devices, and a network of partners. Dr. Halamka has been developing and implementing healthcare information strategy and policy for more than 25 years. Previously, he was executive director of the Health Technology Exploration Center for Beth Israel Lahey Health, chief information officer at Beth Israel Deaconess Medical Center, and International Healthcare Innovation Professor at Harvard Medical School. He is also a member of the National Academy of Medicine. John, welcome. It's good to see you again.
John Halamka:
And so wonderful to see you again. The joy, Steve, is that neither of us ever ages.
Steve Lieber:
You are way too kind. Thank you. John and I go a long way back, to the early part of the millennium and what I'm going to call the most recent launch of EHRs, recognizing that there's a history prior to the early 2000s. We went through a period of trying to get people to recognize the value of EHRs, government got involved, and we played a number of shared roles there. What I want to have you spend a moment talking about, John, is EHR capabilities and adoption 20 years later. What do you look back on and see as what worked? What didn't? What did we learn?
John Halamka:
Well, Steve, you and I were involved at the very beginning, and what we were told was: digitize medicine, because every other industry has digitized. And if you recall, we went to the FDA and said, well, what do you want? And they said, surveillance for devices, very important, so gather device information at every patient encounter. CMS, what do you want? We want quality metrics, how about 40 of them? Just have every doctor at every visit enter 40 different numerators and denominators. How hard could that be? It's quality! And then we went to CDC, and they said syndromic surveillance, because there could be a pandemic. By the time we were done, every doctor had to enter 140 pieces of information at every visit, and unfortunately got nothing back from the EHR. So we imposed administrative burden, we lost the hearts and minds of our clinicians, and we didn't give them anything in return. Now we are starting to see the beginning of the payback, where you get different kinds of decision support that help you with diagnosis and treatment, help you with burden reduction, and help review your process and compare it to other processes in healthcare nationwide and internationally, and say, oh, hey, I heard that this doctor tried this and it worked pretty well. So I think we're at almost a perfect storm for innovation. We're going to start to see the return of joy to the practice in the next year or two, I promise.
Steve Lieber:
Yeah, we have heard the clinical reaction over the years to what we brought forward, and there's probably been more disdain than affection for what has happened, so I know clinicians will be glad to hear that we're headed in the right direction. So the tool itself, the platform, the EHR: are they set up and ready for the next generation of decision support tools? Because we are going to head into artificial intelligence in a minute in this conversation. Where are they in terms of being ready to move forward? Are they a help or a hindrance? What's your view on the status of the technology for our next move forward?
John Halamka:
Let me give you three brief answers to that question. The first is that the EHR vendors see the need to reduce burden, so you're actually seeing the major EHR vendors work with some big tech companies to enable various kinds of AI inside their products, especially around clinical documentation, inbox management, and that kind of thing. So the EHR vendors are taking a step themselves, which is good. Second, the Office of the National Coordinator, with the interoperability rule, the information blocking rule, and the requirements for opening up the EHR, has created a standard application programming interface for an ecosystem of innovators, which means there are many companies starting to innovate on top of EHRs where the EHRs offer the data exchange they need. And third, sometimes they don't, so what you're starting to see are standards like SMART on FHIR, the ability to create an app that layers on top of the EHR; even if the EHR doesn't support the data flow you need, you can still create an in-EHR workflow that doesn't require a separate website or login. So yeah, in three ways we are progressing.
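[Editor's note: To make the app-on-top-of-the-EHR pattern concrete, here is a minimal sketch of a SMART on FHIR client reading data through the standard API John describes. The endpoint URL, access token, and patient ID are hypothetical placeholders; a real app would obtain the token through the SMART OAuth2 launch flow.]

```python
# Minimal sketch: an app layered on an EHR via the standard FHIR REST API.
# FHIR_BASE and ACCESS_TOKEN are hypothetical placeholders, not a real endpoint.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"    # hypothetical EHR FHIR endpoint
ACCESS_TOKEN = "<token from SMART launch>"    # granted by the SMART OAuth2 flow

def get_vital_signs(patient_id: str) -> list:
    """Fetch vital-sign Observation resources for one patient."""
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "vital-signs"},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
    )
    response.raise_for_status()
    bundle = response.json()  # a FHIR Bundle of search results
    return [entry["resource"] for entry in bundle.get("entry", [])]
```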
Steve Lieber:
Excellent. So let's move forward. And I've run across that you are co-founder of the Coalition for Health AI. So tell us about this coalition. Who is it? What are the goals? Work to date? Kind of give us a little insight about what this coalition is all about.
John Halamka:
So two years ago, a number of folks looking down the path of innovation said, you know, we had better have guidelines and guardrails for this emerging use of AI technology in healthcare. Is it fair? Is it appropriate? Is it valid? Is it equitable? Is it safe? And there were no consistent guidelines and very little regulation. So we started with just six folks; we put together some academic health centers with some tech companies to have an ongoing dialogue. Today, it's over 300 organizations, and that includes Microsoft, Google, Apple, and Amazon. It includes most of the academic medical centers in the country, it includes the FDA, the White House, the Office of the National Coordinator, everyone working together for free, with no legal agreements, to create best practices for the life cycle of AI from ideation to implementation. How do you label an algorithm to say, oh, you know, it works wonderfully on people from Chicago, but it doesn't work for people in the Washington, DC area? I made that up, of course, but based on the nature of training data, that could be true. And is it ethical to use it in Chicago? Absolutely. Is it ethical to use it in Washington? Not so much. So that's the sort of thing the Coalition really does: the implementation guidance, which helps both industry and regulators in creating algorithms that do no harm.
Steve Lieber:
Seems like an approach you and I followed once before: a multi-stakeholder approach, volunteer participation, companies, providers, government. Everybody talking, everybody getting along? Is it working?
John Halamka:
It is, because there's a recognition that only public-private partnership will get us to good regulation. It's really amazing, everyone saying the same thing, which is: we want to innovate and we want to be agile, but at the same time, we need to understand when to release a product and when not to. And so isn't it interesting that government and industry are two sides of the same coin? They both need each other to get this done right.
Steve Lieber:
Absolutely. So, great work. Sounds like, obviously, some key components still need to be decided, but at the same time, products are already coming out, things are already happening, people are purchasing and bolting on. What's your comment about where we are in that regard, in terms of the risks and the opportunities of adopting a technology before you have that standardization?
John Halamka:
Well, a couple of thoughts. One, I think you're going to start to see regulation and sub-regulatory guidance from the White House and from the Office of the National Coordinator about what some of these guardrails should be, even at this early stage, while we as a society are still trying to figure out how to regulate it. So you're starting to see some things in proposed rules. Remember those notices of proposed rulemaking, one of our favorites? You see from ONC HTI-1 a proposed rule that says EHR vendors, as part of certifying their products, must create a data card and a model card. That is, if there's a model, what went into the creation of the model, and how does it perform? So that's a proposed rule, but it's coming, and you're starting to see CMS even say, oh, liability will accrue to a clinician who uses an AI without appropriate oversight and review. All proposed, but that's sort of directional. So in the meantime, pick low-risk use cases. Would you agree? I'm going to make this up, of course: Blue Cross has just denied your claim, and you need to write a claims appeal letter. Having AI write a claims appeal letter seems like a pretty good use of AI. Now, of course, people tell me the joke: but then Blue Cross will have an AI rejecting the appeal, and before you know it, we'll do a billion transactions a second and the lights will dim. But you can pick these administrative use cases, burden-reduction use cases. Or, if a human is always in the loop, having an AI generate a suggestion that the human interprets as just a data point, or as something to assist them in crafting a note, is probably reasonably low risk and reasonable.
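[Editor's note: For a sense of what such a model card might contain, here is an illustrative sketch in Python. The fields and values are assumptions chosen for illustration; HTI-1 proposes the requirement but this is not its schema.]

```python
# Illustrative sketch of "model card" metadata of the kind John describes.
# Field names and values are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str                        # provenance of the training set
    excluded_populations: list = field(default_factory=list)
    performance: dict = field(default_factory=dict)  # metric -> value, by subgroup

card = ModelCard(
    name="cardiac-risk-v1",                   # hypothetical model
    intended_use="Screening support; clinician review required",
    training_data="Single-site retrospective cohort, 2010-2020",
    excluded_populations=["pediatric patients"],
    performance={"AUROC (overall)": 0.87, "AUROC (BMI > 35)": 0.71},
)
```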
Steve Lieber:
Have you seen any situations of, I'm going to call it piloting or testing, in a clinical setting? Because one of the things I've run across is that the algorithm obviously is very important, but at times you can run the same thing through more than once and come up with a variance, a different answer. Have you run across situations like that, where people are using AI-related technology in a clinical setting, and how are they dealing with this early-stage variance that might occur?
John Halamka:
Again, a couple of interesting ways to look at that. Suppose I have clinical decision support, and it actually works pretty well on most people, but sometimes it just doesn't. Here's a real example for you: Mayo developed an algorithm that predicts your cardiac mortality, and it works really well if your body mass index is below 31 and really not well if your body mass index is over 35. That is, if you're athletic, it's a pretty good predictor; if you're not athletic, it tends to overdiagnose sickness. So you might see this situation of, wow, I used this on the soccer team and it was perfect, and then I used it on folks I found hanging out at a fast food restaurant, and it was not. That's a reality of what you see in the clinical use of these things, which is why it's so important to create what we call a nutrition label, showing the algorithm's data set and its performance on different subgroups, which could be race, ethnicity, age, gender, body size, geographic location, social determinants of health, that kind of thing. That's one issue. But then, and I know we're going to discuss predictive versus generative AI, predictive AI, the idea that I can predict whether you have a disease or not, is pretty consistent, with the exception of the subgroup problem I mentioned. The problem with generative AI, these new emerging large language models, is that it's not deterministic; you can get a different answer every time you use it. Imagine if every single time you ran a Google search on the same term, you got a random selection of pages, believable or not. That's a real problem. So again, we're using it at Mayo, and what we're doing is very low-risk use cases, like: I will help a clinician generate text, and then the clinician edits the text before saving it. And what you find in clinical practice is that it's about 80% useful. I mean, 80% is really good, and it takes a lot of burden out of the clinician's day, but you can't use it unattended, because otherwise 20% of the language will be inappropriate.
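[Editor's note: As a rough illustration of the subgroup check behind such a nutrition label, here is a short sketch that scores one model's accuracy per subgroup. The data, groupings, and numbers are invented to echo the BMI example above; a real label would stratify across race, ethnicity, age, gender, and so on.]

```python
# Sketch: stratify a model's accuracy by subgroup, the core computation
# behind a "nutrition label". Records and groupings are invented.
from collections import defaultdict

def accuracy_by_subgroup(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for subgroup, predicted, actual in records:
        totals[subgroup] += 1
        hits[subgroup] += int(predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical results echoing the cardiac example above:
records = [("BMI<31", 1, 1), ("BMI<31", 0, 0), ("BMI>35", 1, 0), ("BMI>35", 1, 1)]
print(accuracy_by_subgroup(records))  # {'BMI<31': 1.0, 'BMI>35': 0.5}
```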
Steve Lieber:
Reinforcing your point: it's okay for now as long as there is a human in the chain of the process, and it's not machines operating on their own without that human interaction. So, you mentioned predictive and generative AI, and when we were talking before we came on the air, you threw another one at me. Let's go through a little definitional stage here on the different types of AI that you see.
John Halamka:
Sure. When I think of predictive AI, I think of looking at millions of patients like you and saying, based on your signs, your symptoms, your history, are you likely to have a disease or not? And we predict that, meaning either you have it now or you will have it in the future. But it's slightly different to say, now that I have a diagnosis, what's the treatment? The reason that's a little more complicated is, imagine I'm looking at the care journeys of a million people like you with a given diagnosis: what is the path to wellness? Think of it kind of like Waze for healthcare. A million other drivers took this route; how did they get to the finish line with minimal delay? So when I think of prescriptive AI, a different kind of term, it's: now that we've predicted the disease, what do I prescribe as your path to wellness, which could be a complicated journey with a lot of turns in it. That's prescriptive AI. And then generative AI: how do I explain what we did to you? In effect, large language models creating the text that explains a bunch of facts, summarizing your history or another patient's history.
Steve Lieber:
Excellent, thank you. So again, going back in time and thinking about what we were faced with 20 years ago: we predicted that there were going to be products that wouldn't survive, dead ends on the evolutionary product tree. We're going to have the same in this area. Right now, you go to anybody's trade show, and everybody's got AI on their signage and on their booths, true or not. So what are you thinking in terms of how we help the healthcare buyer, and I'm not talking about patients here, it's really about the technology, decide which direction to go? We came up with certification for EHRs back in the day to help down that path, so you knew that what you were buying met certain standards. What do you see today in terms of what we have and where we ought to go?
John Halamka:
So here is a vision for you, and this is going to sound familiar. The federal government outlines a set of criteria, a set of metrics. Multiple assurance labs, some in academia, some in industry, use the government standards and metrics to evaluate a product and then create a completely objective report of its performance. They don't say whether it's good or bad; they simply say, here is an objective view of its performance against the government-mandated metrics. There's then a national registry, kind of like, oh, I don't know, a health product list, if you will, that has every algorithm, every model, and its objectively measured performance by an accredited national lab. And then there will be organizations, separate from the labs, that say, well, we've examined the objective tests, and it seems reasonable for purpose for a given condition, a kind of certification, if you will. And then purchasers will buy products that appear on this national list and have certification, because they're likely to solve the business problem or the clinical problem facing them.
Steve Lieber:
You're right, great vision, and it does sound familiar. I think it's worked before. Is this just kind of pulled out of the air, or are some folks talking about this, and can we anticipate something along these lines?
John Halamka:
That is what Washington is talking about now. And by Washington, I mean it's bipartisan. It's Chuck Schumer, who is really taking the congressional lead on AI, but also the FDA, ONC, and the Office of Science and Technology Policy in the White House, all aligned with this notion of assurance laboratories and transparency. The only way we're going to get to credible use of AI in healthcare is with transparency and a notion of reliability, that these things are actually fit for purpose.
Steve Lieber:
Now, before, when we did this sort of thing, it really was bipartisan, nonpartisan. It really was public sector, private sector, and government all involved, but it wasn't a particularly politically charged issue. We were having to convince people that electronic health records were important and that we needed to digitize. AI seems to be provoking a somewhat different political reaction. Are we steering clear of that, though, in terms of keeping this a scientific approach rather than a political one?
John Halamka:
So I would argue that AI has a sense of urgency that EHR regulation did not have, right? Just last week, you had all the major tech companies committing to safety and quality and transparency, not from a regulatory standpoint quite yet, but at least as a public commitment: these are the things we're going to do. And that was all handled by the White House, with the CEOs or chief legal officers of every major tech company physically present to make the commitment. Imagine if you or I, 20 years ago, had had every CEO of every EHR vendor coming to the Oval Office and saying, I demand that we have oversight. So I think the work today is accelerating with a sense of urgency we never had.
Steve Lieber:
Oh, you're absolutely right. The sense of urgency clearly is much different today. We were having to convince people of the urgency 20 years ago, and we don't today. So at Mayo, you're obviously down this path; you're using and adopting. Are you mostly building yourself, or are you buying? What's the journey at Mayo in terms of AI-related tools?
John Halamka:
The answer, of course, is yes, which is to say Mayo believes that build, buy, and partner are all important strategies. And when do you build? Well, you build when there's no one else in the marketplace offering it. You don't default to build all the time; that wouldn't be reasonable. So where there are things that are truly so cutting edge that they have not been done in the past, we will build, or build with a partner that can help us because they have tools that make the job easier. If there's a mature product in the market, you buy. Let me give you some quick examples. In the field of cardiology, because there are predictive algorithms that can diagnose disease from your Apple Watch or from your AliveCor device in your home, but there weren't commercial offerings or companies, we built the models and then spun out a company. Sometimes you just need to do that when it's early; the same thing in digital pathology, and the same in a variety of other emerging specialty areas. But imagine that you want to reduce the burden on your clinicians through ambient listening: a doctor and a patient have a conversation, and the chart appears automagically, which I realize is not a real word. Well, there are a number of good commercial products today that do that, so we have adopted commercial products for documentation management and inbox management. My favorite, to be honest, is partnering, because that way you get the benefit of the clinical expertise inside an academic medical center with the agility of industry to create novel products.
Steve Lieber:
You mentioned ambient listening, and that's one variation of AI I kind of like to call out: ambient intelligence is different from artificial intelligence. So do you see that area, which definitely has human engagement and involvement, as being perhaps a little farther along because of the nature of what ambient listening and monitoring do?
John Halamka:
Yeah, well, remember, there are three ways to sell a product. One is that it's so cool everybody wants it. The second is that regulation or compliance forces you to buy it. And the third is burden reduction, that is, your quality of practice life gets so improved that you want it. What I'm saying is that in the case of ambient listening, it is one of the few tools that radically reduces clinician burden, and so it has tended to move along pretty fast. Here's a quick example of ambient listening for folks who haven't used it. My mom, a year ago, was diagnosed with a brain abscess, and it was fully cured, but I dictated her case, and I said, my mother lives alone, my father died 13 years ago. And the ambient listening AI said, the patient is an 81-year-old widow. Now, I didn't use the term widow, right? It was the ambient listening and the AI that turned lives alone, father died 13 years ago, into the term widow. And that sentence sounds like the perfect chief complaint or, you know, the first sentence of .... The patient is an 81-year-old widow. So, wow, I didn't have to type that.
Steve Lieber:
Another area we've been looking at recently is ambient monitoring. Have you seen any of that at Mayo? Are you headed down that path, in terms of using AI for monitoring as well?
John Halamka:
Oh, absolutely, so let me give you some examples there. Start with the notion that your wellness depends upon three things: phenotype, genotype, and exposome. Phenotype is who you are, genotype is your probability of disease, and exposome is what's going on around you. Ambient monitoring can take so many forms, from what is your blood pressure and your pulse and your respiratory rate and your temperature, to what is the particulate count of pollen in your room, what is the temperature you live in, these kinds of things. Mayo has an entire team, the remote patient monitoring team, gathering continuous, high-velocity data as part of our advanced care at home, the notion of delivering serious and complex care in the home, but also just general follow-up care. The challenge with it is separating signal from noise. As you point out, is a doctor going to go look at 10,000 data points a second? No. But an AI that integrates them and says, oh, it looks like this patient with asthma is on and off their meds, is in a high pollen count situation, and air quality is really bad today? Ah, that's a good way to intervene on those who need it.
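[Editor's note: Here is a toy sketch of the signal-from-noise step John describes: fusing a few remote-monitoring streams into a single "needs attention" flag. The thresholds and field names are invented for illustration, not Mayo's actual logic.]

```python
# Toy sketch: fuse remote-monitoring signals into one alert flag so that
# clinicians see signal, not noise. Thresholds and fields are invented.
def needs_intervention(reading: dict) -> bool:
    """reading: latest fused data for one remote asthma patient."""
    missed_meds = reading.get("missed_med_doses_7d", 0) >= 2
    high_pollen = reading.get("pollen_count", 0) > 9.0   # toy pollen scale
    bad_air = reading.get("aqi", 0) > 150                # "unhealthy" AQI band
    # Alert only when risk factors stack, rather than on any single spike.
    return missed_meds and (high_pollen or bad_air)

patient = {"missed_med_doses_7d": 3, "pollen_count": 11.2, "aqi": 160}
print(needs_intervention(patient))  # True -> surface this patient to the care team
```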
Steve Lieber:
And it signals to the clinician: pay attention to this one; you don't need to pay as close attention to these others, because everything's within normal range. Excellent.
John Halamka:
Exactly.
Steve Lieber:
Excellent. So, John, to wrap up: our listeners are CIOs, CMOs, and other digital health leaders. A word of advice as they head in this direction: what should people be thinking about?
John Halamka:
This is going to be a very peculiar thing for me to say, because it isn't about technology. You have experienced, and I have experienced, the mania and the hype that has come out of generative AI since November of 2022. I would argue that this isn't really hype. Of course, there's some, but it's actually a fundamentally different set of tools, with truly emergent capabilities, that is going to give us a lot of interesting technological evolution. The problem we're going to have is that change is hard. And I would encourage all our leaders and all our CIOs: of course, study the tech, that's important. But think about the change management challenge, the fear that your constituency will have about change itself, change in a job position, change in self-esteem. What if my job is, I am a copywriter, and now that task can actually be done better by an AI than I can do it myself? So how are you going to lead your organization through this period of great change, calming the fears and preventing harm? That would be my advice for the next couple of months of 2023.
Steve Lieber:
You know, thinking back, we learned from some spectacular implementation failures 20 years ago, when people approached these things as technology projects instead of change management projects, clinical projects, any number of ways to define something other than a tech project. I think that was a lesson well learned, and you certainly highlighted it right there. Well, as always, I feel like I need to apply for continuing education credits as a result of this conversation. You, as always, are incredibly enlightening and a lot of fun to visit with, so thank you for your time today. And to our listeners, thank you for joining us. I hope this series helps you make healthcare smarter and move at the speed of tech. Be well.
John Halamka:
Thank you.
Smart from the Start Intro/Outro:
Thanks for listening to Smart from the Start. For best practices in AI and ambient intelligence, and ways your organization can help lead the era of smart hospitals, visit us at SmartHospital.AI. And for information on the leading smart care facility platform, visit Care.ai.