Learn about data- and AI-driven healthcare. Dr. John Halamka, president of the Mayo Clinic Platform, discusses AI, ethics, and innovation on CXOTalk episode 829.
Data and AI Improve Patient Outcomes at the Mayo Clinic
In episode 829 of CXOTalk, we explore the intersection of health, data platforms, and artificial intelligence with Dr. John Halamka, President of the Mayo Clinic Platform. With a career spanning over four decades in academia and healthcare, Dr. Halamka brings unparalleled insights into the challenges and opportunities of leveraging AI and data to accelerate healthcare innovation.
From discussing the creation of a global federated data platform to addressing the ethical, privacy, and practical considerations of AI in healthcare, this episode explains how technology is shaping the future of patient care. Join us as we uncover the strategies, standards, and collaborations essential for integrating AI into healthcare systems, ensuring equitable access, and ultimately enhancing patient outcomes worldwide.
Episode Highlights
Building a Global Data Platform for Healthcare AI Development
To accelerate the creation of fair and reliable healthcare AI, organizations are forming global collaborations for data sharing and algorithm development.
- Sharing healthcare data presents unique challenges around privacy, ethics, data quality, and regulatory compliance across different countries and health systems.
- A federated data model allows data to remain at its source under the control of the generating organization, protecting privacy while enabling global collaboration on AI models.
Ensuring Reliability and Practical Usefulness of AI
Healthcare providers and insurers are demanding AI solutions that positively affect patient care, reduce costs, and improve efficiency within existing clinical workflows.
- AI development starts by aligning algorithmic capabilities with real-world needs within healthcare, like reducing negative margins, improving staff retention, and delivering care remotely.
- To gain widespread adoption, AI models must be transparent in terms of the data used, their applicability to various patient populations, and the potential for unintended bias.
The Coalition for Health AI (CHAI)
CHAI is a multi-stakeholder collaboration bringing together key players to establish standards, guidelines, and testing processes for healthcare AI.
- CHAI's goal is to create a trusted nationwide registry of AI algorithms with transparent metrics on performance, fairness, and limitations, enabling informed decision-making by healthcare providers.
- The development of trustworthy AI in healthcare requires both technical standards and consideration of patient preference, ethical implications, and equitable access to AI-powered interventions.
Meeting Patients Where They Are
Delivering the benefits of AI-powered healthcare requires a patient-centric approach, considering disparities in income, access to technology, and health literacy.
- Different countries and populations require tailored approaches to AI development and deployment based on their unique infrastructure challenges and social contexts.
- Organizations looking to impact health on a global scale must collaborate with healthcare systems and providers in low- and middle-income countries to ensure AI solutions reach the most vulnerable populations.
Past AI and the Promise of Generative AI
Healthcare organizations have a history of using AI and rule-based systems, but the latest advances in generative AI bring both promise and challenges.
- Previous experience in building healthcare knowledge graphs may enable validation and quality control mechanisms for the output of generative AI models.
- Assessing the accuracy and utility of generative AI for healthcare decision-making is a crucial challenge due to the technology's ability to produce different results with each use.
Key Takeaways
The Critical Role of Federated Data Platforms in Healthcare Innovation
Federated data platforms enable the aggregation and curation of diverse, high-quality patient data while maintaining privacy and data sovereignty. This approach facilitates the development and validation of AI models across different geographies, ensuring they are fair, valid, effective, and safe. The Mayo Clinic Platform's initiative underscores the importance of global collaboration and standardization in data handling to accelerate healthcare innovation.
The Necessity of Transparency and Standards in AI Deployment
The establishment of clear standards and processes for AI in healthcare is crucial for ensuring the safety, efficacy, and fairness of AI applications. The Coalition for Health AI (CHAI) exemplifies a concerted effort towards creating nationwide guidelines and a public registry for AI products. This initiative aims to enhance transparency, build trust among clinicians and patients, and facilitate the adoption of AI technologies by providing detailed information on the development, performance, and appropriate use cases of AI tools.
Addressing Cultural and Operational Challenges in Healthcare Transformation
The integration of AI and digital health solutions into healthcare systems faces significant cultural and operational barriers. Success requires addressing the core business challenges of healthcare providers, such as improving margins, enhancing staff retention, and optimizing workflow efficiency. Dr. John Halamka's insights highlight the importance of aligning AI deployment with the strategic business needs of healthcare organizations and ensuring that AI acts as an augmentation to human decision-making rather than a replacement. This approach not only fosters acceptance among healthcare professionals but also ensures that AI applications contribute meaningfully to improving patient care and operational efficiency.
Episode Participants
John Halamka, M.D., is the president of Mayo Clinic Platform. In this role, Dr. Halamka oversees the future direction of the Mayo Clinic Platform, which will help establish Mayo Clinic as a global leader in digital healthcare.
Prior to joining Mayo Clinic, Dr. Halamka served as the executive director of the Health Technology Exploration Center for Beth Israel Lahey Health in Massachusetts. During his tenure at Beth Israel Lahey Health, Dr. Halamka oversaw digital health relationships with industry, academia, and government worldwide. Previously, he was chief information officer at Beth Israel Deaconess Medical Center for more than 20 years. In that role, Dr. Halamka was responsible for all clinical, financial, administrative, and academic IT.
As a Harvard Medical School professor, he served the George W. Bush administration, the Obama administration, and governments around the world in planning their healthcare information technology (IT) strategies. He was also the International Healthcare Innovation Professor at Harvard Medical School. He remains chairman of New England Healthcare Exchange Network Inc. and is a practicing emergency medicine physician. Dr. Halamka has written a dozen books about technology-related issues, hundreds of articles, and thousands of posts on the Geekdoctor blog.
Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.
Transcript
Michael Krigsman: Today, on episode 829 of CXOTalk, we're discussing the intersection of health, data platforms, and AI with Dr. John Halamka. He is truly, and I'm not exaggerating, a towering figure in this field. John, you're now president of the Mayo Clinic Platform. Please tell us about the focus of your work.
John Halamka: I have been in academia and healthcare for 40 years. And what do you learn over those 40 years? The pace of innovation, the dissemination of new ideas, is slow, right? I think we've learned from Clayton Christensen and others it can take 20 years.
So, in 2019, Gianrico Farrugia, the CEO of Mayo Clinic, asked the question, what if you wanted it to be 18 hours, not 18 years? What would you need to do? What would be the technology, the policy, the process? I was hired January 1 of 2020 to be that kind of front door for Mayo Clinic as it seeks partnerships and collaborations with innovators around the world.
And so, in some ways, it's three components, which is how do you curate data and do so in a privacy protecting and ethical fashion? And how do you do it not just for one institution like Mayo, but globally? And we can talk about that. How do you engage with solution creators, small and large, and how do you take the innovations and place them into healthcare systems or others in our ecosystem? Could be pharma payers, et cetera, and do so with little friction.
So, in effect, that's my role. Simply curate the world's data, encourage innovators to build more, faster, and place it in the hands of those who need it.
Michael Krigsman: So you're aggregating this data, lots of data. You've built a platform. And as we talk, we'll get into the specifics about the platform and the kinds of data. But for what purpose? Why are you curating this data?
John Halamka: All of us have figured out over the last two years that predictive and generative AI will likely impact our future workflow. And the question, of course, is: if you are going to create models that will do good for patients of the future, you had better have well-curated data from patients of the past, so those models perform as we expect them to. You should also have highly diverse data sets globally so you can validate and fine-tune these AI models, because an AI model developed in Minnesota may not necessarily work so well in rural Georgia.
And so, as you say, it's not data for data's sake. It's creating a global federation, a distributed data network, for the creation of AI that we hope is fair, appropriate, valid, effective, and safe; for its testing; and for its ultimate use and monitoring.
Michael Krigsman: You are raising a whole bunch of issues. Can you start to drill in? What kinds of data are you collecting? You mentioned a federated model; tell us about that. You mentioned or implied safety, security, and privacy; tell us about that. There's a whole bunch of very crucial issues that get wrapped up together here.
John Halamka: Again, having done this for 40 years, I'll ask you a rhetorical question. If somebody said, you know, we're going to build the world's largest healthcare database, we're going to put it in the basement of the White House, it's going to be fabulous. Chances are a centralized, aggregated anything will never happen. There are issues of technology, policy, and trust, but also issues of privacy and the logistics of it all.
So, what have I learned over the last 40 years? It's really important to decide what data you have that's good enough for purpose. That is: who collected it? Why did they collect it? Is it standardized? Is it believable? And then, once you understand the data and metadata and the purposes it will be used for, curating it, normalizing it, and maintaining it within the organization that created it, not sending it out of that organization, is really important for preserving privacy and trust.
And that means, as you say, going global. Well, many countries have data sovereignty laws. They're not going to send their data across the border. So you say, oh, could I work with Canada? Sure, as long as the data stays in Canada. Brazil? As long as it's curated by those who created it and there's consent. In Israel, the data has to stay, under data sovereignty rules, in the control of Israeli authorities in Israeli data centers.
So, my role has been to define standards, processes, and tools to de-identify structured data, unstructured text, telemetry, images, genomics, and digital pathology, and put them in an industry-standard format, OMOP, under the control of those who generated the data. But with the nature of cloud computing, FHIR APIs, and peer-to-peer governance, we enable organizations, or even countries, to work with each other so that models can be developed and models can be tested. But it's really important that data never leaves the organization, that data is never sold, and that if a patient doesn't want to participate, their data is redacted.
Michael Krigsman: How do you ensure that the data that is being federated and ultimately aggregated is reliable, accurate and adheres to the privacy standards that you have set up?
John Halamka: First you ask: who in the world has the data science and the experience to collect, curate, and normalize data? Well, this may be an imperfect metric, but if you look at the U.S. News & World Report top ten healthcare organizations in the US, or the top ten organizations internationally, that seems like a reasonable place to start. So we did.
And so our federation currently comprises top-five organizations nationally and internationally. Because they have worked with data for a long time, they are willing to run tools that we provide to help us understand the quality of the data. How many 150-year-old patients do you have? Has anyone had three kidneys removed? These are the kinds of things you look for just to assess the data. Is the data sparse? Maybe you're missing a lot of social determinants of health. So, you assess the data hygiene. The tools not only do de-identification through natural language processing and machine learning techniques; once de-identification is done, we seek certification of the data that resulted.
Brad Malin, one of the world leaders in de-identification and re-identification, does the certification of each of the resulting data sets. In the case of Mayo, it was graded as 99.6% de-identified and highly curated and normalized. Then it's placed in a sealed cloud container for which, in this case, Mayo has the keys to its data. But Sheba has the keys to its data; Albert Einstein, its data; UHN in Canada, its data. None of us have keys to each other's data. But, for example, if I want to validate an algorithm, I can send that algorithm in a containerized fashion to my colleagues in Brazil and say: run this against your data and let me know how it works. That way, we're never exfiltrating the data, but we're also protecting intellectual property.
Michael Krigsman: How does this actually work? Because the data never leaves, or, correct me if I'm wrong, never leaves its location. And yet, the ultimate value of what you're doing depends upon aggregated data that everyone can operate on, in order to ultimately yield better clinical results.
John Halamka: Let me explain our two use cases.
One would be federated query. That is, the data never leaves, but I can ask questions of each of the data sets and aggregate the results that come back. How many patients do you have that have diabetes and high blood pressure, are on atorvastatin, and have muscle pain? And you'd get: institution one has this many, institution two has that many, institution three has this many. So that's federated query.
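As an aside, here is a minimal, runnable sketch of the federated-query pattern Dr. Halamka describes: each site evaluates the same criteria against its own store and returns only an aggregate count, so no patient-level data crosses the boundary. All names and data below are hypothetical illustrations, not the actual Mayo Clinic Platform API.

```python
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    # Stand-in for the site's local, de-identified OMOP store.
    patients: list = field(default_factory=list)

    def run_local_count(self, criteria: dict) -> int:
        # Evaluated entirely inside the site's own enclave;
        # only the aggregate count is returned.
        def matches(p: dict) -> bool:
            return (all(c in p["conditions"] for c in criteria["conditions"])
                    and all(m in p["medications"] for m in criteria["medications"])
                    and all(s in p["symptoms"] for s in criteria["symptoms"]))
        return sum(1 for p in self.patients if matches(p))

def federated_count(sites: list, criteria: dict) -> dict:
    # The coordinator sends the same criteria to every site and
    # aggregates per-site counts; row-level data never leaves a site.
    return {site.name: site.run_local_count(criteria) for site in sites}

# Criteria mirroring the example query in the conversation.
criteria = {"conditions": ["diabetes", "hypertension"],
            "medications": ["atorvastatin"],
            "symptoms": ["muscle pain"]}

site_one = Site("Institution One", [
    {"conditions": ["diabetes", "hypertension"],
     "medications": ["atorvastatin"], "symptoms": ["muscle pain"]},
    {"conditions": ["diabetes"], "medications": [], "symptoms": []},
])
print(federated_count([site_one], criteria))  # {'Institution One': 1}
```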
And then the other way, as I say: very often the issue is that we develop an algorithm in one geography and we need to tune it for another geography. In that case, you're literally working in a sub-tenancy of the cloud in a given locality. At Mayo, our data happens to sit in Google Cloud. But in the case of Mercy, which works with us, they're on Azure. In the case of Albert Einstein, they're on AWS. It's cloud neutral; it doesn't matter. Each is creating sub-tenancies for people to come and work in the context of their cloud.
Michael Krigsman: So, the issue then is the ability to query that data across the collection.
John Halamka: Correct. The second you start mixing data across geographies, you run into incredible regulatory challenges, but also real privacy and security concerns. We always use the term "attack surface," and if you've got the data distributed across the world in different clouds operated by different organizations, your attack surface is smaller.
Michael Krigsman: How do you ensure then the quality of the data going across all of these different institutions? Because if I am querying the system, the platform, I want to be sure, obviously, that the results that are returned are going to be valid.
John Halamka: We happen to use international standards like OMOP, and, by working with the International Standards Organization, we've extended OMOP to include certain other kinds of data elements, like digital pathology and genomics, that aren't quite in core OMOP.
So what happens is each institution is charged with taking its data, which could be in an EHR that is unique to their location, and mapping it to a common set of vocabularies and schema structures in OMOP. So now, although you have all these various federated partners, the underlying data store they're using is identical across all these geographies. And we have tools that then run across that data store, as I mentioned, and assess quality by looking for data sparsity.
An example for you: one entity we worked with had an extraordinary data set, but we found that more than half of the social determinants of health were missing. Our biostatistics people said: you really shouldn't run AI that is based on social determinants of health, because it could be that the people who are the lowest income and least educated are the ones not providing the social determinants of health. So these tools, running on those local enclaves, help you understand the variation: where you can develop algorithms and where you cannot.
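As a minimal sketch of the kinds of data-hygiene checks described here, the sparsity measurement and the "150-year-old patient" plausibility test, consider the following. The field names and thresholds are hypothetical illustrations, not Mayo Clinic Platform tooling.

```python
def sparsity_report(records: list, fields: list, max_missing: float = 0.5) -> dict:
    # Measure how often each field is missing; flag fields too sparse
    # to safely train or run models on.
    report = {}
    for f in fields:
        missing = sum(1 for r in records if r.get(f) in (None, ""))
        rate = missing / len(records) if records else 1.0
        report[f] = {"missing_rate": round(rate, 2), "usable": rate <= max_missing}
    return report

def implausible_ages(records: list, max_age: int = 120) -> list:
    # The "how many 150-year-old patients do you have?" check.
    return [r for r in records if r.get("age", 0) > max_age]

records = [
    {"age": 74, "income_bracket": None, "education": None},
    {"age": 51, "income_bracket": "low", "education": ""},
    {"age": 150, "income_bracket": "mid", "education": "secondary"},
]
print(sparsity_report(records, ["income_bracket", "education"]))
print(implausible_ages(records))  # flags the 150-year-old record
```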
Michael Krigsman: Check out cxotalk.com, subscribe to our newsletter, and subscribe to our YouTube channel.
We have a very interesting question from Twitter, from Arsalan Khan, who is a regular listener; he always asks excellent questions. He says that what you're describing, using AI in medicine, requires a rethink of many things, from training doctors and disrupting processes to how insurance works. Where do you begin to manage this? And how does the culture of the medical and insurance industries affect what you're doing? So he's asking about the cultural dimensions behind all of this.
John Halamka: When you talk to CEOs of hospital systems, do they say, "God, I woke up this morning, and what I need is an algorithm"? Never, right? What they say is: my margins are negative, my staff is hard to recruit and retain, and everyone's burned out.
So, you say: oh, well, if those are your three critical business problems, what if there was technology that could help with them? Oh, that would be amazing. Okay, well, if your margins are negative, are there new activities you'd like to begin? Remote patient monitoring and interpretation, across cardiology or neurology or other specialties, for patients in their homes? Delivering care in the home for serious, complex, high-acuity illness, chemotherapy in the home? Oh, yes, these are all wonderful new things we'd love to do, because there's going to be reimbursement. And if AI can empower those, wonderful. That helps my margin pressure.
Well, doctors and nurses are hard to retain, and specialists are hard to recruit. What if, in effect, a specialist's knowledge could be encapsulated in an algorithm, and you could have every one of your primary caregivers enabled with a cardiologist in their pocket, so to speak? Oh, that would be amazing. Well, what if I could get every doctor and every nurse home for dinner every night? Oh, that would be incredible: reduce the burden and help with career longevity.
So, as you start to look at some of the 240-plus algorithms Mayo has developed, each of them is justified in some fashion by one of those three business imperatives.
Now, you did ask about reimbursement. Here, again, is a dream I have. This is not quite the reality, but let's start with the dream. Mayo has 10 million patients, birth to death. I have every event that has occurred in their lifetimes. I know their signs, symptoms, phenotype, genotype, and exposome.
What if insurance companies and provider organizations agreed to work together as two sides of the same coin? We could take hundreds of millions of patient journeys and figure out what worked, and then the standard of care isn't just a set of rules that, say, a payer clerk reads and says approved or disapproved, pre-auth or no pre-auth; it's actually based on continuously improving AI models of what's going to benefit the patient the most. And when I've talked to payers, they say: we're not interested in denying care; we just want to pay for the effective care. Okay, well, that's where payers and providers can work together, and these are some of the things we're beginning to pursue when it comes to algorithms.
Michael Krigsman: What is the role of the Mayo Clinic platform? Because so far, you've mostly described the platform as a way of gathering data for queries to be used very broadly. What about algorithms?
John Halamka: I'm, of course, not here to advertise or sell you anything, but Mayo looked at where the gaps are, especially for, say, startup companies that want to create a novel product or service.
What you find is that startup companies may have great engineering and great energy, but don't necessarily have access to clinical expertise, workflow understanding, or de-identified data. On the one hand, we have an accelerator program where we've brought in about 40 companies so far. Those 40 companies are given clinical guidance, given access to de-identified data, and given an opportunity to test things in workflow.
The end result is a product co-developed with Mayo Clinic Platform, and that goes out to the marketplace. Some of these startup companies have gone on to be sold or have had significant funding events, and that was a result of the collaboration we had together. On the other hand, some large companies say: well, we may have clinical experts, we may even have data, but we don't have the capacity to deploy these things in a clinical setting and understand their impact.
And so Mayo, at HIMSS, announced something called Solution Studio. For any company we work with, we can take a model, if you've already developed one, fine, run it against all of our data, understand its biases, then deploy it in workflow and gather feedback. How is it used? Did it make a difference? Were the suggestions reasonable? So, think of it as the wheel of AI: from ideation, to building a model, to testing a model, to deploying a model, to monitoring a model. Our role is to enable all of our stakeholders to have all those capabilities.
Michael Krigsman: Where is medicine today with respect to the deployment of these kinds of technologies, actually, in practice, to help clinicians solve patient problems?
John Halamka: I think we need a William Gibson answer to that: the future is already here; it's just unevenly distributed. And in your question is a perfect observation. As you walk the floor at HIMSS, you say: oh my, there are hundreds of these algorithms. How many are deployed in practice every day? How many are making a difference? How many, like a drug, have had a post-market surveillance study or a randomized controlled trial? At Mayo, for example, of those 240-plus algorithms we've developed, about 30 are used in clinical practice every single day.
A number of them have gone through FDA as software as a medical device. A number have gone through randomized clinical trials. But I think your audience should know that today, in 2024, there's a gap between the generation of AI models and the use of AI models in workflow. It's getting better, and I would argue that at HIMSS 2024 I started to see last year's hype being turned into a bit more deployed reality.
Michael Krigsman: Can you elaborate on that? I'm assuming what you mean is lots of stuff is being developed, but it's just not yet being put into practice.
John Halamka: Casey Ross, who is a reporter at STAT, a Boston Globe affiliate, wrote a brilliant article last year which said healthcare AI has a credibility problem. And here's what he meant.
He said, well, here is an algorithm. Well, what data was used to develop it? We're not going to tell you. Does it work on tall people, short people, rural people, urban people? We're not going to tell you. So, why would a clinician want to adopt something where there was no transparency as to how the product was developed and how it works? Is the product reliable? Is it consistent? Can I make sure that it does no harm to my patient?
So, the reason the Coalition for Health AI, which we'll talk about, was established, was to create a set of nationwide standards for testing and evaluation. So that just as a mattress has a little label on it that says, oh, it's new, it's got polyester, it's got cotton, don't remove this label.
Every algorithm needs to come with a data card and a model card, so a clinician can say: oh, I see, it was developed on patients like mine; it's reasonably credible, consistent, and reliable; I will use it, and it will have good benefit for my patients and for me. And that's the gap: without all those things foundationally, you're not going to see adoption of the new technologies, because doctors won't trust them.
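As an illustration of the data card / model card idea, here is a minimal sketch expressed as a simple structure. The fields are hypothetical, not a CHAI specification.

```python
# Hypothetical model card: enough for a clinician to judge whether the
# model was developed on patients like theirs.
model_card = {
    "name": "example-cardiology-risk-model",      # hypothetical model
    "training_data": "de-identified OMOP records, multiple sites",
    "validated_populations": {"age": "18-90", "bmi": "<= 31"},
    "known_limitations": ["not validated for pediatric patients"],
    "performance": {"auc": 0.90},                 # illustrative metric
}
```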
Michael Krigsman: So, you're saying that when it comes to data, lack of transparency and provenance of that data is a major issue here. And when it comes to the algorithms, lack of explainability, which is another way of saying transparency is a major issue, am I understanding this correctly?
John Halamka: How do these things perform in the field? And let me give you a real example.
So, Mayo Clinic developed an incredible tool to predict cardiac mortality. That tool works really well if your body mass index is less than 31, and not so well if your body mass index is above 35. And because we analyze the performance of all these things, we say: doctor, never use this on a patient with a body mass index above 31, and you can be assured it's going to be useful, will do no harm, and will be credible, as long as you follow that guideline. That's the kind of guidance people need. Where do I use it? Why do I use it? What's it likely going to do? And that's really what I meant by transparency: not only in the data used, but in the performance of the model that resulted.
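A minimal sketch of enforcing that kind of applicability guidance in software: refuse to score patients outside the range where the model was validated. The model, scoring logic, and threshold here are hypothetical stand-ins, not Mayo's actual tool.

```python
class CardiacRiskModel:
    MAX_VALIDATED_BMI = 31  # per the guidance described above

    def predict(self, patient: dict) -> float:
        # Refuse to run outside the validated population.
        if patient["bmi"] > self.MAX_VALIDATED_BMI:
            raise ValueError(
                f"Not validated for BMI > {self.MAX_VALIDATED_BMI}; "
                "consult the model card before use."
            )
        # Placeholder scoring logic standing in for the real model.
        return min(1.0, patient["bmi"] / 100 + patient["age"] / 200)

model = CardiacRiskModel()
print(model.predict({"bmi": 27, "age": 60}))  # within the validated range
# model.predict({"bmi": 36, "age": 60})       # raises: outside validated range
```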
Michael Krigsman: You mentioned the Coalition for Health AI, with the abbreviation CHAI. I'll mention that I brew up chai tea and grind my spices every day. I love it. So tell us about the Coalition for Health AI, CHAI, and where it fits into addressing this set of issues.
John Halamka: Many of us who've been in the industry for decades know that government alone, creating regulation without input, may create either laws or regulations that have unintended consequences. Wouldn't it be better to have public private partnerships so that everyone's at the table and plan these things thoughtfully?
Well, to do that takes a coalition. It started two years ago. A number of us said: we believe there need to be nationwide guidelines, guardrails, and testing and evaluation. So why don't we start bringing people together? As I was mentioning to you, in September of 2021 we brought together six organizations. Then by October, there were 90 organizations.
Well, as of today, there are 1,600 organizations, including all the big tech organizations, medical centers large and small, the FDA, ONC, NIST, and the White House, all working together on these questions. The board of directors, which we tried to make very representative, has Micky Tripathi from the ONC as a director; Troy Tazbaz, the person overseeing AI regulation for the country, as a director; and Eric Horvitz, who chairs the President's Council of Advisors on Science and Technology, as a director.
Then provider organizations, entrepreneurial organizations, and patient advocacy organizations. You try to be highly representative at the board level, supported by advisory groups in each domain. These affinity groups come together and suggest where we go, and we do this for the benefit of society and the hegemony of none. So, here's what we expect to come out in 2024: government and industry, working together, will have mechanisms for testing data and algorithms that will then be posted publicly in a nationwide registry.
So you'll say, oh, I would like to go buy this thing for cardiology. Oh, here's where it will work. Here's how it will perform. And here is an official laboratory that tested it so you can believe in the results.
Michael Krigsman: So you have all of these interest groups. At the end of the day, what is the product that CHAI will produce?
John Halamka: I want an algorithm to be fair, appropriate, valid, effective and safe. What does that mean? How do you measure fairness? How do you measure efficacy? Well, that's a question society has to answer.
So, what you'll see if you go to coalitionforhealthai.org is the testing and evaluation framework. It's all downloadable at no charge, and it defines each of the measures and metrics and the processes for quality management. How do you analyze your data? How do you analyze and test an algorithmic result? So on the one hand, you define all that; call it the "what." Then CHAI, likely, and it's early, will have a hand in helping pick or assure the testing laboratories for this country: to make sure that they're credible, that they perform to what we'll call ISO-like standards, and to audit them to make sure they're doing a good job.
So CHAI won't do the testing itself, but will assign the testing to a nationwide network of laboratories that will report on these products, and then CHAI will host the nationwide registry which makes all the testing results available to all.
Michael Krigsman: So essentially then it becomes a centralized trust mechanism, correct?
John Halamka: I mean, that's a really good point. If we've decided as a society how to measure goodness, and then we have a database for society that shows, for each product and service, its measure of goodness, that starts to build credibility.
Michael Krigsman: You've been talking, John, about fairness and ethics. Where does that whole set of issues come into play with this?
John Halamka: Imagine, Michael, that I had a wonderful algorithm. I'm going to pose this as two questions, because you're going to see why they're different. One algorithm would predict the likelihood you would develop a cancer five years from now, and, by the way, that cancer is curable. The question is: would you want to know, if we predicted the cancer? The algorithm maybe has an AUC of 0.9 or something; it wouldn't be perfect, but it would be pretty good, and you could make a life decision.
Do you take a cure? Do you do something proactive surgically? You say, oh, well, that sounds okay, as long as the patient decides and it's curable. Oh, that sounds like a kind of good thing for society because you could avoid disease.
How about this? I have another algorithm that will predict your likelihood of Alzheimer's ten years from now, and it will let you know if you're going to develop an incurable disease. How do you feel about that? Do you want to know? Because if I tell you, it's going to create immense anxiety, and there's no action you can take.
And so, this is why I mention a couple of cases like this: these are real cases. We have to decide not only how the algorithm performs, but what human factors are involved in using it. What's the patient preference? Every human, I would argue, will have a different preference as to what they want to know and when they want to know it.
Michael Krigsman: What about equitable access to care, and bias in the algorithms? Those are two other dimensions of this topic.
John Halamka: I've worked with the Bill and Melinda Gates Foundation for the last 15 years in Sub-Saharan Africa and northern India. Every society has somewhat different challenges with how one delivers care or brings technology to a patient.
What northern India decided to do was really a good idea. Bandwidth in northern India is ubiquitous and cheap, and every family has an Android phone. So we said: what if a community health worker in each village were given a set of devices, like a blood pressure cuff, a pulse oximeter, and a small one-lead EKG device, and could work with every family in that village and exercise AI algorithms on their behalf, to help the families derive the benefit of some of these things? Would that work? The Gates Foundation actually funded a lot of the community health workers to do this, and the model was great.
Because bandwidth is ubiquitous, that enabler, the community health worker, brought the best of digital health to the village. In Sub-Saharan Africa, bandwidth is expensive and hard to find, so the model has to be different. To give you an example of what we did there: could I, on a phone, instantiate a computer vision algorithm so that when a community health worker went to the home and ran a rapid diagnostic test, maybe a malaria test or something else, they could read it right there? And in fact, they're able to run the algorithm in the home, using computer vision on the test itself.
Again, Gates Foundation funded. I think it's really important we all agree we're going to meet the patient at the level of their technological sophistication, their income level, and the infrastructure of their country.

In the United States, just one quick story for you. An engineer came to visit me and wanted to come to the Medicaid clinics in Boston to see how they're delivering digital health care. So I brought this engineer to a low-income clinic. The engineer went up to an 80-year-old homeless gentleman and said: what's your favorite wearable? And he said: socks. I tell you that story because, as all of us CXOs start to think about this, don't assume everybody is going to have an iPhone 27, an Apple Watch, and infinite bandwidth. Meet the patient at the level of their sophistication, and recognize you have to deliver digital solutions to those who have no digital technology.
Michael Krigsman: I realize that this question now starts to drift from the center of the discussion on platforms. What you're describing sounds like a great set of ideals to meet the patient where they are. But the fact is, we know that, as you were just describing, resources and access to healthcare are not evenly distributed. And the economic incentives and business incentives almost militate against accomplishing the goal you've just been describing.
John Halamka: I have a set of KPIs at Mayo. Gianrico Farrugia says, "John, I want you, as you build this platform, to, of course, help Mayo be the best practice it can be. But I want you to capture specialty knowledge, and I want you to assist in global transformation. And by the end of your presidency, I want you to have touched 4 billion people."
Now, how are you going to touch 4 billion people? Well, that will take partnership and collaboration at massive scale, at country scale, and the delivery of algorithms. And it will require that sometimes philanthropy is used to engage countries, especially low- and middle-income countries, to get either data capture or the use of algorithms.
So, as Mayo has gone across the world, for example, we've recently worked with Kenya, and we'll be deploying a variety of algorithms in Kenya. And philanthropy will fund the deployment of the algorithms in Kenya.
So, as you said, it's a complicated question, but I have been charged with touching 4 billion people, so my incentives to do so are aligned.
Michael Krigsman: Chris Peterson asks: healthcare was one area that adopted rule-based expert systems and earlier forms of AI in the past. Does that give the industry a leg up on others that do not have this history?
John Halamka: My postdoc was in AI at MIT in the 1990s. And what did I do? I wrote rule-based systems. And so let me give you an example of a rule-based system.
So, Michael, is a giraffe a mammal? Well, we'd better define what a mammal is. Oh: has live young, feeds its young, certain characteristics. Does a giraffe have live young? It does. Does a giraffe feed its young? Yes, it does. A giraffe is a mammal. And I give the audience this example because it literally took me a month to create a Lisp-based system, based on a set of rules, that would answer questions like that.

So, good: we have experience creating what I call knowledge graphs, and those knowledge graphs are going to be pretty useful in medicine as we use more generative AI. And I know this is probably not the answer you expected, but here's the problem with generative AI: how do you assess its quality, its accuracy, its validity? Very hard.
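A minimal sketch of that giraffe rule, in Python rather than the Lisp of the original system, just to show the shape of rule-based classification:

```python
# Rule: an entity is a mammal if it satisfies every listed condition.
RULES = {"mammal": ["has_live_young", "feeds_its_young"]}

def is_a(entity_facts: set, category: str) -> bool:
    return all(fact in entity_facts for fact in RULES[category])

giraffe = {"has_live_young", "feeds_its_young", "long_neck"}
print(is_a(giraffe, "mammal"))  # True
```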
We actually might end up going back to the rules-based systems to look at the validity of the output of generative AI, and that's going to help us assess its utility. So, sure, we've got a lot of examples in healthcare of having done this for years and understanding the complexity. But still, I think we do have to be careful of saying healthcare is in a perfect place to adopt these emerging technologies because there's risk, there's a lack of rapid adoption of change in our industry, and there's regulatory constraint.
Michael Krigsman: You mentioned generative AI. Can you talk about the differences between predictive AI and generative AI in this platform product that you're building?
John Halamka: One of the algorithms we've developed is able to take your single-lead EKG, from your Apple Watch or an AliveCor device, whatever it is you have in your home, and assess your ejection fraction. That is: does your heart have a good pump or not?
Well, that's a predictive, mathematical algorithm, and we can assess whether it is good or bad simply by doing a gold-standard test, like an echocardiogram, and saying: oh, the algorithm worked or didn't work. So predictive AI is mathematical, and you can measure its performance against a ground truth.
Generative AI, as we know, creates novel content. In the large language model context, it's predicting the next word in a sentence, and then the next word, and the next word, and the next word.
So, here's the problem: how do you assess that against ground truth? If every prompt yields a different result, and at 1:41 in the afternoon you could get a brilliant result and at 1:42 a nonsensical, fictional one, it's really hard to assess its accuracy. That's why I tend to separate the mathematical prediction, which is verifiable, from the generation of new content, which is just so difficult to assess for accuracy and quality.
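A minimal sketch of what makes the predictive case verifiable: compare the algorithm's output to a gold-standard result, case by case. The numbers below are made-up illustrations, not study data.

```python
def agreement(predictions: list, gold_standard: list) -> float:
    # Fraction of cases where the algorithm matches the ground truth
    # (e.g., an echocardiogram for the ejection-fraction example).
    matches = sum(p == g for p, g in zip(predictions, gold_standard))
    return matches / len(gold_standard)

# 1 = low ejection fraction flagged, 0 = normal.
algo = [1, 0, 1, 1, 0, 0, 1, 0]
echo = [1, 0, 1, 0, 0, 0, 1, 0]
print(f"Agreement with echo: {agreement(algo, echo):.0%}")  # ~88%
```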
Michael Krigsman: I guess if you're diagnosing patient illness, you really do want that consistency.
John Halamka: Well, it's worse than that. As you've probably seen if you've worked with prompt engineering, depending on how you ask the question, the response can be brilliant or devastating.
And here's just one quick example. We took a New England Journal of Medicine case: the patient is a 59-year-old with crushing substernal chest pain, hypotension, and left-leg radiation. We say: hey, generative AI, what's the diagnosis? It says: oh, the patient is having a heart attack; anticoagulate immediately. Then we ask the question differently: what diagnosis should I absolutely not miss? Oh, don't miss a dissecting aortic aneurysm. And, of course, the answer in this case was dissecting aortic aneurysm. Anticoagulation would have killed the patient instantly.
So, wow. Depending on the nature of the question, you're going to give doctors completely good or completely bad advice. You've got to be really careful.
Michael Krigsman: Arsalan Khan comes back and he says, what happens when the specialist doesn't agree with what the AI recommends? Who is responsible when AI or the specialists are wrong about the prognosis or the diagnosis? And should patients be aware that AI is being used in their care?
John Halamka: Let me give you an answer from CMS. I'm sure, Michael, you have read the CMS draft nondiscrimination rule; page 175 answers that question directly. It's a draft rule, right? So it's not completely finalized. But it says AI should be an augmentation to human decision-making. It should be like a smart medical student providing you a review of the research and suggesting some things to look at or test.
But a doctor should never delegate decision making to an AI. In fact, if a doctor delegates decision making to AI, the doctor is responsible for any bad result.
So, the way we use AI at Mayo Clinic is we say: you've got your experience, you've got the literature, you've got the community standard of care, and there's AI providing, again, just a set of data points that you put into your entire decision-making process. And absolutely, we would say: patient, just as we order lab tests, we're going to order this AI test, see what it says, and then, together, we'll decide how to interpret it, or ignore it.
Michael Krigsman: So, ultimately, the physician takes responsibility, and AI is one, we could say, reference source to feed into the physician's judgment.
John Halamka: Exactly. And there are other ways we're using AI.
So, take the field of radiation oncology. Radiation oncology, for those of you who are familiar with the practice, is physics and math: it's figuring out how to target a tumor without affecting the nerves, arteries, and veins around the tumor. It turns out AI is really good at physics and math. So, we give CT or MRI images of the tumor to what's called an auto-contouring algorithm, and it produces initial contours that are then handed to the expert radiation oncologist for review, fine-tuning, and adjustment.
On average, in our use of this, it used to take 16 hours for a human to develop a sophisticated head and neck tumor treatment plan. Today, it takes one hour. Does that mean we're going to need fewer radiation oncologists? No, it means we're going to be able to treat more patients in more geographies than ever before. So, truly, it's seen as a reduce-the-burden kind of algorithm, not one replacing a human.
Michael Krigsman: This is from Lisbeth Shaw, who says: patients grant permission to the Mayo Clinic to use information for research and altruistic purposes. How will these patients react when they find out that their data is being monetized?
John Halamka: We never sell the data. In our notice of patient privacy rights, what we say is that we would like to use data collected during the care process, anonymized and aggregated, to produce new knowledge and to care for patients in the future. And at any time, you can opt out. Of our 10 million patients, about 3,000 have opted out.
But we're very careful to never sell data, never exfiltrate data, and keep it always under Mayo's control. What we're producing is captured specialty knowledge in algorithms. Those algorithms will be used internal to Mayo Clinic, and they may be used external to Mayo Clinic, but always in the interest of patient health.
Michael Krigsman: So, Mayo has this set of policies, and obviously, other institutions, organizations are going to have their own, over which you have no control, even though they may be part of the federated system that you've built.
John Halamka: Mayo is a very values-driven organization. It's been around for 150 years, and the values are real; every person at Mayo asks, what would the patient want? And I will be very candid with you: as we evaluate partners throughout the world, we ask about values alignment, and we're quite selective in making sure that organizations, philosophically, are going to work in the same spirit of capturing knowledge, not monetizing data. And so far, so good with the partners we've engaged, each of which, of course, has its own kinds of consent in its local institution for the secondary use of data for deriving new cures.
Michael Krigsman: Should we adjust the Hippocratic oath to implore medical professionals to be aware of the pros and cons of using AI? And should other industries have similar oaths?
John Halamka: What I'm going to share with you is something that I have each of my employees sign when they begin work. It's called the Digital Hippocratic Oath. It's an entirely redesigned Hippocratic oath, built around the notion that we are going to work together to protect the sanctity and privacy of the data, to validate algorithms before their use, and to strive to do no digital harm. And you can post this; it's a quite comprehensive oath that each of my employees signs, and you could post it on LinkedIn.
Michael Krigsman: This issue of trust becomes central to what you're doing and really the use of data and AI for these purposes.
John Halamka: Well, right. And a question you all need to ask yourselves: what would you want? What would your family want? I think your answer would be: I want to be treated with empathy and compassion, and I want respect for my own care preferences. And, you know, I can understand the value. A doctor like me, I went to medical school in 1980; things have changed since then. What if I have the benefit of hundreds of millions of patient experiences and the world's literature, giving me some augmentation that's going to make me a better doctor, but only if I respect the needs and the care preferences of the patient?
Michael Krigsman: Earlier you mentioned Obamacare, and you said we could talk about llama care. And you run the Unity Farm Sanctuary. Tell us about that. It seems disconnected from a lot of your other work, or maybe it's not.
John Halamka: In the patient-always-comes-first paradigm of Mayo, Unity Farm Sanctuary is a wonderful community of 360 large animals. We've applied the same principles: multi-specialty care, and the creature always coming first.
Here's how it started. In December of 2011, my wife was diagnosed with stage three breast cancer. It was metastatic, and she wasn't quite sure how things would go. And while she had radiation, chemotherapy, and surgery, she said: I want to leave something behind. She's doing fine today; all cured.
So, in 2012, we sold our small home in Wellesley, Massachusetts, and purchased 15 acres in a rural town called Sherborn, Massachusetts. And everyone said: you're going to move to a socially distanced, biologically isolated community; why would anyone want that? Of course, in 2020, everybody thought we were completely prescient.
But in 2012, we started with small animal rescue: poultry, some alpaca. We were able to acquire the land next door, and then the land next door to that. We're now 100 acres and 22 barns, with 360 large animals, 1,000 volunteers, programs for the elderly, programs for the disabled, and a preschool, so that students are able to live their early childhood in a farm and forest setting.
And this was done as a community-benefit 501(c)(3). It is truly a community resource for the New England area; every abused, abandoned, and neglected large animal comes to us. Let me give you an example. Here it is, the afternoon already, and this morning we've been asked to bring in a pig, a goat, a calf, and a dog. All four of them were going to be euthanized today, and all of them now are brought to our facility, where they will live their natural lives. Or, if we can't accommodate every one of them, we will find a placement somewhere in the United States in a similar sanctuary.
Michael Krigsman: And for folks who want to get in touch with you at the Unity Farm Sanctuary, how can they do that?
John Halamka: Oh, very simple. Just go to unityfarmsanctuary.org. You'll find all our contact information, and we're on every bit of social media you can possibly imagine.
Michael Krigsman: How do you accomplish all the incredible things that you do? It's unbelievable.
John Halamka: I think there are a couple of answers to that.
We start with the fact that I'm an emergency physician, and so much of what I've been trained to do over these last decades (I'm 62) is triage: what must be done now, right? And so my wife will tell you, if there's an animal in need, the animal always comes first. Triage.
Then, figure out what you can delegate. Sometimes in my roles, I will create a vision, create a plan, turn it over to a team, monitor that team, and support that team, but won't do the work directly, whereas some work I have to do directly.
And this is going to sound completely ridiculous, Michael, but have you tried to get your tractor serviced anytime during the COVID or immediate post-COVID period? Who knew that I would go through school for seven degrees and end up doing hydraulics on tractors, or replacing plumbing, electrical, carpentry, et cetera? But those are things I have to do out of necessity, because there's no one else to do them.
So: triage, delegate, learn continuously. And how about this? Tolerate discomfort. You're going to be tired; you're going to be cold and wet. It's okay. I don't know what each day will bring, but I try to live it to its fullest.
Michael Krigsman: Do you still play the Japanese flute?
John Halamka: I have had less time to play the shakuhachi than in previous years, because these days, with 360 children, I'm doing a fair amount of animal care and fence mending, and just keeping so many moving parts, in the world of AI we've been talking about, running so that society will benefit.
Michael Krigsman: On that note, I think given all of your work, society certainly does benefit, and I want to say thank you so much to Dr. John Halamka for being our guest today. John, thank you so much for being here.
John Halamka: Well, absolutely. And I know I was in one of your earlier episodes in 2014. Let's do this more often because the world is moving so fast. Happy to share my experience.
Michael Krigsman: Well, thank you. Look forward to the next time.
And a huge thank you to everybody who listened, and especially you folks who asked such excellent questions, as you always do. We have incredible shows coming up. You should check out cxotalk.com, subscribe to our newsletter, and subscribe to our YouTube channel. And we will see you again next time. Have a great day, everybody. Take care.
Published Date: Mar 15, 2024
Author: Michael Krigsman
Episode ID: 829