AI in Healthcare: Choices, Data, and Outcomes

CXOTalk episode 874 explores how artificial intelligence is reshaping healthcare. Featuring experts Dr. David Reich (The Mount Sinai Hospital), Teresa Carlson (General Catalyst Institute), and Dr. David Bray (Stimson Center), we explore AI-driven decisions, data management, ethical governance, and practical strategies for healthcare leaders.

54:22

Mar 28, 2025

Artificial Intelligence is rapidly transforming healthcare, reshaping patient care, clinical decision-making, and operational efficiency. But integrating AI successfully requires navigating complex choices around data management, privacy, investment, and ethical governance.

In CXOTalk episode 874, three distinguished leaders share practical insights from their unique perspectives:

  • Dr. David Reich, MD, Chief Clinical Officer of Mount Sinai Health System and President of The Mount Sinai Hospital and Mount Sinai Queens, provides a clinical and operational view on implementing AI within one of the nation's leading hospital systems, highlighting practical successes and ongoing challenges.
  • Teresa Carlson, President of General Catalyst Institute, offers an investor and innovator’s perspective, discussing the emerging trends in healthcare AI, investment strategies, and technology ecosystems.
  • Dr. David Bray, a renowned technology governance and ethics expert, explores critical considerations around AI ethics, data privacy, responsible governance, and policy implications for healthcare organizations.

Topics include:

  • The current state and significant impact areas for AI in healthcare today
  • Practical strategies for managing healthcare data integration, privacy, and security
  • How AI empowers clinical decision-making and patient engagement
  • Evaluating the real-world impact of AI on healthcare outcomes and economic sustainability
  • Strategic recommendations for senior executives on adopting AI responsibly and effectively

This interactive, 60-minute discussion includes live audience Q&A throughout, offering business and technology leaders practical guidance to harness AI's full potential in healthcare.

Episode Highlights

Define a Clear ROI for AI Initiatives Based on Business Needs

  • Validate that proposed AI solutions address genuine problems within your organization, avoiding tools that solve non-existent issues. Ensure startups or vendors demonstrate how their offering integrates with existing workflows and provides measurable value beyond mere novelty.
  • Calculate return on investment comprehensively, considering factors like improved efficiency, cost avoidance (e.g., penalties), enhanced outcomes, and personnel time saved. Recognize that "free" pilots require internal resource allocation for IT integration and workflow adjustments.

Integrate AI Seamlessly into Existing Clinical and IT Workflows

  • Prioritize AI solutions designed to augment, not disrupt, the established processes of your clinicians and operational staff. Involve end-users early in the selection and implementation process to ensure practicality and adoption.
  • Acknowledge the significant effort required for IT integration; factor this into project timelines and resource planning. Solutions must fit technologically and operationally to prevent creating more labor or complexity.

Foster Collaboration Across Ecosystems for Innovation and Policy

  • Engage proactively with industry consortia, startups, established healthcare systems, and policymakers to shape practical standards and regulations. Build partnerships to share knowledge, develop best practices, and avoid duplicative efforts or restrictive rules.
  • Actively educate policymakers on the real-world implications of data access, interoperability, and AI deployment regulations. Provide concrete examples of how specific policies either enable or hinder progress toward desired outcomes, such as value-based care.

Prioritize Data Interoperability and Patient Access for Better Outcomes

  • Advocate for and implement systems that promote secure, open, and accessible data flow between providers, patients, and relevant technologies. Overcome tendencies to hoard data by focusing on the improved insights and outcomes generated through shared information.
  • Empower patients by giving them transparent access to and control over their health data, fostering a shared responsibility model. Explore technologies like LLMs to summarize complex records, making data more usable for clinicians and patients without relying solely on rigid standardization.

Shift Focus Towards Value-Based and Outcome-Driven Models

  • Explore and implement AI tools that support a transition from fee-for-service to models rewarding positive patient outcomes and overall health improvement. Utilize data analytics to identify high-risk populations or opportunities for preventive care, aligning financial incentives with improved health outcomes.
  • Develop metrics beyond immediate financial return to measure the success of AI initiatives, including improvements in patient health, clinician time savings, and penalty avoidance. Informed by AI insights, apply behavioral economics principles and 'nudge' techniques to encourage the adoption of evidence-based practices.

Key Takeaways

Validate AI Solutions Against Real Problems and ROI

Ensure AI tools address documented organizational needs, avoiding solutions that search for a problem. Define return on investment comprehensively, including efficiency gains, clinical outcome improvements, penalty avoidance, and the internal resources required for IT and workflow integration.

Prioritize Secure Data Access and Modern AI for Interoperability

Champion secure, accessible data for clinicians and patients to foster informed decisions and shared responsibility. Explore newer AI approaches to overcome data fragmentation where older standardization efforts have stalled, such as large language models for summarizing unstructured information.

Shape Practical AI Policy Through Active Engagement

Educate policymakers on the concrete impacts of proposed regulations on innovation and care delivery within the healthcare sector. Advocate for consistent, industry-aware guidelines and support initiatives such as sandboxes, allowing controlled testing and value demonstration before broad implementation.

Episode Participants

Teresa Carlson is the founding President of General Catalyst Institute, leveraging her decades of leadership experience as a visionary industry disruptor. She serves as an international advisor and board member at the venture capital firm General Catalyst. Teresa is also a veteran executive of large tech companies, including Amazon, where she founded and led Worldwide Public Sector and Industries, growing it into a multibillion-dollar business. 

David L. Reich, MD, is Chief Clinical Officer of Mount Sinai Health System and President of The Mount Sinai Hospital and Mount Sinai Queens. He provides a clinical and operational view on implementing AI within one of the nation's leading hospital systems.

Dr. David A. Bray is a Distinguished Fellow and Chair of the Accelerator with the Alfred Lee Loomis Innovation Council at the nonpartisan Henry L. Stimson Center. He is also a nonresident Distinguished Fellow with the Business Executives for National Security, a CEO and transformation leader for different “under the radar” tech and data ventures seeking to get started in novel situations, and a Principal at LeadDoAdapt Ventures.

Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator known for his deep expertise in digital transformation, innovation, and leadership. He has presented at industry events worldwide and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.

Transcript

Michael Krigsman: We're exploring AI in healthcare today on CXOTalk number 874, with three experts shaping this future. Dr. David Bray is a Distinguished Fellow and chair of the Accelerator at the Stimson Center. Teresa Carlson is president of the General Catalyst Institute. And Dr. David Reich is the chief clinical officer of Mount Sinai Health System and president of The Mount Sinai Hospital in New York.

We're going to explore what's real, what's hype, and how AI impacts decisions, data, and patient outcomes. Thoughts on solutions and how AI can support the overall healthcare system?

David Reich: The answer is to analyze from the perspective of the providers what is a true return on investment. I have many people in the startup world who are approaching and presenting me with a problem that they can solve, but I don't have that problem. They're trying to convince me I do. And there are many problems I have that they don't work with.

The other thing is that people in the early stages want to work with the health system, and they say, "Well, we'll work with you for free."

'Free' isn't free because my friends in information technology get upset with me. As a clinician, I come to them and I say, "Look at this bright, shiny object," and they try to explain to me the amount of effort that would go into integrating it into the IT workflow, let alone the clinical workflow. I'll stop there for a moment.

David Bray: I want to amplify what David was saying. It's bringing back a lot of flashbacks. In a past life, I was with the Bioterrorism Preparedness and Response Program; this was back in the early 2000s to 2005. We were trying to plug into hospital systems, state systems, and things like that to basically monitor.

As David so well put it, the irony of electronic health records is that they've actually created more human labor to complete them.

I think also, as David was saying, any good solution in this space has to fit into the workflow that's already being done by clinicians.

Now, there's a bit of a catch-22 because technology also changes the art of the possible. You really have to involve clinicians.

I'll give a nod to what Teresa's talking about. We're now in a place where technology is changing what's possible so quickly—it doesn't mean it's being adopted, but it's changing what's possible rapidly. Policymakers almost have to iteratively learn also from what's even possible in the technology, both to remove barriers and also to put in place incentives if we're going to move forward and think about third and fourth-order effects almost in real time. This makes this a really hard space to find solutions.

It's doable, but it's a very hard space.

Teresa Carlson: David's so right. Don't bring a solution to an area where there's no problem. I think that should be the number one use case for the companies: What problem are you actually solving?

There's got to be an ROI because you can't add more cost.

Our job should be to make our clinicians' lives easier. Our physicians should be able to service more patients faster and more effectively, providing them the data they need to get their job done more rapidly.

For us, I would tell you, it's very much about ensuring we're doing value-based care and outcomes-based care. I'll just share one example.

We have a couple of companies that we were with yesterday that we've invested in. They are really thinking about patients in underserved populations. I'm from Kentucky, hence the accent.

I'm from a very rural part of the United States. One of our founders is focused on rural-based care, ensuring that we're not only training and teaching but using technology to force-multiply the capabilities there. A lot of times, you don't have the specialist or the access to education or advocacy for these individuals.

They have individual and unique needs in their healthcare, and they are usually the last to get it.

She's thinking about a problem. There's a lot of America that is rural, not urban.

The second example is a company we have called Cityblock. It's taking care to the most underserved; there's a Medicare/Medicaid overlap with what they do. But they literally do not get paid until the patient gets better.

It's a value- and outcomes-based model. Both of these companies, Homeward and Cityblock, are doing this.

That's very unusual. When we talked to legislators on Capitol Hill about that yesterday, they were like, "I didn't even know this existed."

One of the physicians from Cityblock, Toyin, who started this company (she's a physician herself), said, "Look, I could see a lot of patients during the day. I could give them a prescription and hope they leave my office, go fill it, and take it. But I can't ensure they're doing that, yet I get paid for that visit regardless."

Their model is much more about ensuring there are outcomes from what they're prescribing and doing. That's what I call a shared responsibility model between the physician and the patient.

We're really looking for creative ways to ensure we're getting healthier outcomes, not just charging the system more.

Michael Krigsman: David Reich, your thoughts on the relationship between innovators and hospitals? You're president of a major hospital; I'm assuming you deal with these practical issues constantly.

David Reich: One of the key things that we're working on, Michael, is a seamless pipeline for digesting proposals from the multiple groups that are coming to us with fantastic ideas, General Catalyst being one of the top groups we pay attention to.

We regard anything that comes through an organization like General Catalyst as being highly vetted; it comes to us curated in a way that merits close attention.

As we do this, we try to break them down into different buckets: For example: Is it something that would improve patient experience? Something that would improve workflow? Something focused on a patient outcome?

This allows us to look at the ROI related to the particular idea. ROI isn't always strictly financial because in our world, we have penalties for failing to meet certain criteria.

Sometimes penalty avoidance is an ROI. Sometimes better outcomes and a shorter length of stay is an ROI because we can backfill with other patients.

It all comes down to whether the idea integrates into a category that makes sense for us and is something where we can rationalize an ROI. Sometimes we partner. Since we're an organization that likes to think of ourselves as entrepreneurial, we potentially work with startups and other groups, putting in what is often the sweat equity to help develop and make a product commercially viable.

Michael Krigsman: Check out cxotalk.com, subscribe to our newsletter, join our community.

David Bray, I'm hearing a core issue here is alignment between groups that have overlapping agendas and goals, but also distinct ones. Is that a correct way of looking at it? And how do we bridge that gap?

David Bray: Yes. I think this would be the definition of what we would call a wicked problem, where you have not only different folks aiming for different goals that sometimes overlap, sometimes don't; the very goals and how you're going to pursue them are also changing.

That's why everyone says, "There's a magic wand that'll magically fix healthcare and solve it by the next hour."

I think we're all trying to say that what you can do is get the stakeholders together. Because things are moving so quickly and shifting, I suggest doing three things. First and foremost, think about how this improves the practice.

David Reich was very good at pointing this out: think about ROI not just in financial terms, but also as giving back time to clinicians, achieving healthier patient outcomes (like earlier release), and penalty avoidance.

So, practice first and foremost. On top of that, think about the new program and have some curation.

It's not just that every shiny idea gets tried out, because you can't absorb that. The organization would be stressed, and it might disrupt the organization more than help.

Finally, the policy layer. That's what Teresa was talking about. Our laws usually don't have expiration dates; sometimes they do, but usually not.

There are things still on the books from the 1940s, 1950s, and 1960s that may have made sense then that don't necessarily fit now.

I think what I would submit is one area in particular that I'm personally passionate about, coming from a background with the People-Centered Internet Coalition and what we're doing at the Stimson Center now—data, in my opinion, is a form of human voice.

I think we have to think about data differently. Instead of data being opaque to both the patient and the clinician—about where it came from, how it's informing decisions, and so on—we have to think about how it gives more visibility and stakeholder agency to clinicians as well as the patient. This is going to help influence outcomes.

If you're in a rural state where things might not be digitized or accessible yet to help inform a clinician about your care, that's an unfair situation we have to address.

But at the same time, we don't want patients and clinicians to lose the ability to choose when and where that data is used.

Teresa Carlson: If we want to take advantage of these most transformative technology companies that we hope will help David Reich in his practice and our patients, the one thing I will share is that we also can't have a patchwork of a regulatory environment that doesn't enable this.

A good example is healthcare. If you think about it, there could be a federal AI model and then 50 state models. Then, if these companies work outside the US, they have another model to adhere to.

That is not sustainable; it's an existential threat to these transformative technology companies. We want physicians, nurses, and clinicians to have the best technology for what they're trying to achieve.

What we adhere to is the belief that healthcare is a good example where you can allow the industry to control how they want AI enabled. I'll give one quick example.

The FDA has been doing its job a long time. Let's let them work toward the right policy for their environment in healthcare. They're already doing it.

We have companies working with the FDA. Aidoc was with us yesterday. They work directly with physicians, building capabilities that let a physician see whether there are medical errors and help address them.

We know that medical errors are one of the largest costs. Clinicians are moving fast; they need technology to support their efforts. If we can let those industries operate, we can take this AI capability industry by industry with the right policies. I think it will allow us to move faster and let the experts who know that industry work on the right policies.

David Reich: There's an organization called the Coalition for Health AI (CHAI). Mount Sinai is involved, along with all the major health systems.

One of the aspects that we're all concerned about is that we could be regulated out of existence before we even fully exist.

It isn't just at the federal and state levels. We also have the Joint Commission and other certifications. We have local health boards as well.

What we've often done is try to come together as a group of healthcare organizations. We bring all the three-letter federal groups together (at least as a start) with the concept that we might develop assurance labs, such that as the AI tools are developed, we have a way of assuring their quality and ensuring they fit into the culture, as well as all the different matrix elements of working in complex health systems.

I think one of the ways we get around it right now involves many of the tools we've implemented, especially internally developed ones for decision support. We try to think of them as bringing the right team to the right patient at the right time.

We don't make the diagnosis, but we try to identify the patients most likely to have that diagnosis and bring the clinicians to them.

We'd like to move beyond that at some point, but that's going to take a little help. We need to have a real partnership between government, industry, and healthcare providers to be successful.

Michael Krigsman: We've had John Halamka as a guest several times on CXOTalk; he's one of the founders of CHAI. Looks like an interesting organization. We have some questions coming in from LinkedIn and Twitter, why don't we jump there? This is a question from Arsalan Khan, and I'll toss it out to the three of you—whoever wants to grab this. Has anyone mapped the entire set of healthcare workflows? What about having AI recommend which workflows are archaic?

David Bray: This is a needed space. There are different groups talking about how to translate business process modeling in the healthcare domain into some way that can then be analyzed rigorously across different systems.

There are different vendors that have tools for business process modeling in the healthcare space, but they're each proprietary in some respects.

If we can get to that and have a universal way of expressing business process modeling in the healthcare space, we can do another thing: We can go to the journals. This is where I'll give a nod to my colleague, David Reich. Sometimes the journals have peer-reviewed articles recommending different ways of achieving outcomes that might be different. But because it's currently in text, as opposed to comparable business process models, you can't compare them to determine which leads to better patient efficacy.

This is an area where, with AI, if we can arrive at a non-proprietary, interoperable way to express business process modeling across hospital and IT systems—and also translate peer-reviewed 'gold standard' care from text journals into this format—we could start to do very interesting things and achieve that goal.

However, first we have to figure out the mechanism that encourages people to think more broadly than their own specific system, but think holistically across systems of systems.

David Reich: I'd like to address the word 'archaic'.

One of the jokes in medicine describes how you learn: If you've done one case, you say, "In my experience." If you've done two cases, you say, "In my series." If you've done three cases, you say, "In case after case after case."

One of the problems we have in medicine is this culture where experience leads to 'best practices' that are not always evidence-based.

As we develop the evidence base, getting physicians to change, getting nurses to change, getting transporters to change—everyone who's involved in care in a healthcare setting requires some sort of persuasion.

The common term now is the Nudge Unit. The Nudge Unit is intended to identify and gently (or perhaps not so gently) correct behaviors heading in the wrong direction.

Often, those are wasteful things, either clinically or financially. A perfect example was when the University of Pennsylvania managed to switch prescribing behaviors to less expensive drugs very successfully.

We see similar things at Mount Sinai, where we've had some success.

Going back to 'archaic,' the most important thing I take into account is that we have to think about behavioral change. Sometimes that will be based upon behavioral economics.

Behavioral economics are perhaps the answer I would give regarding starting to look at ways of overcoming archaic processes.

Michael Krigsman: We have another question that has come in on a related topic, the topic of interoperability. This is on Twitter from Chris Peterson, who says, "We're 30-plus years into healthcare interoperability standards, but we're still not there in many respects." How does this lack of standards affect the advancement of AI in healthcare?

Teresa Carlson: We do hear a lot from our companies that this is a challenge, because they are so creative.

These entrepreneurs—I can't stress this enough—are trying to solve a problem, not just look at it, jump over it, or step in it. They are trying to solve a problem.

I would tell you that they do say interoperability is a top challenge.

What we're also seeing, Michael, are creative ways they are trying to work around it, looking for new ideas, even starting from scratch on some things (which I know nobody wants to do).

They say, "Okay, we have to solve it, not just keep talking about the problem, but look for creative solutions."

I will tell you, AI—because the internet, these data sets, these large language models are getting so good—is almost beginning to create new systems of record without us even knowing they're being created.

You're seeing these ideas. They're beginning to access existing systems in a way that creates new ideas for looking at that data and finding solutions.

While it would be ideal—we talk about freeing the data a lot—we need to free that data. Data needs to be free.

Our physician, David Reich, needs to be able to take advantage of that data in every aspect of his practice. He should always have access to the information he needs at his fingertips to treat a patient.

On the research side, like Dr. David Bray, we want researchers and the entrepreneurs creating new technologies to have those capabilities that they can bring back to that clinical work.

I would say, whatever we do in this country and around the world, we must ensure we have open and accessible data.

Not just for the clinician, but for me, the patient—I want 100% access. I want to make decisions about my own data.

You're seeing consumers get a lot smarter. We need systems that are open and interoperable for patients themselves to make the right decisions about their care.

Michael Krigsman: David Bray, let me ask you a question. As David Reich and Teresa were just saying, interoperability is very important. Teresa described the need for the free flow of data to support innovation. On the other side, the economic incentive for many healthcare information systems is to keep that data, because 'he or she who holds the data holds the money.' What about that?

David Bray: I think we need to unlearn the meme from about a decade ago that data was 'the new oil'. Oil: use it up, it's gone. Data: you use it, it's still there.

In fact, if you involve people associated with the data (whether a patient or a clinician), they'll find things wrong in it. They'll fix it. They'll make it better. They'll create more data. They'll create better quality data.

In some respects, we need a different metaphor than 'data is the new oil'.

I also think we need simplified consent mechanisms and transparency so a patient can see, "Where is my data going?" Because right now, you fill out forms (usually paper-based), and you have no idea where the data is going.

Similarly, a clinician, as Teresa says, needs access to that data to do what they need to do. There are technically policies against information blocking.

This is one of the tensions in the US system. Given that we have a free market (which is great), we are also trying to figure out how to overcome individual self-interest regarding hoarding data.

The last thing I would say on the interoperability challenge, Michael, is that I often use the phrase 'standards are like toothbrushes'. Everybody says they want to use one. They just don't want to use anybody else's.

You have to be careful because not all standards are created equal. Some standards are vendors trying to build a moat. They are standards propped up by them. Other standards groups are about trying to level the playing field for established companies, startups, and others.

Usually, I look for whether they charge to use the standard.

The last thing I would say is that technology is advancing what's possible in medicine. We've gone from about 12,000 CPT codes for claims 20 years ago... Now we're up to 72,000 CPT codes for claims.

It's becoming difficult for a human to know which claim code to use—whether an experienced coder or a patient trying to figure out their bill.

This might be something where we need to think about a new strategy, similar to what Teresa was saying. What would it look like if you had your own personal chatbot to help you understand your bill or pre-authorization? How could you navigate what is a very complicated, standards-based process? And how do you make it more accessible to everybody, not just technological experts?

Michael Krigsman: We have a question from LinkedIn, from Ravi Karkara, who is the co-founder of the AI for Food Global initiative. David Reich, he says this, "What AI skills will be needed for medical and nursing schools to teach in order to create this AI medical revolution?"

David Reich: We don't teach many things in medical and nursing school other than perhaps the traditional things emerging from anatomy, physiology, pathology.

We need to teach people about systems-based practice. And it isn't just AI. The whole point is technology is here.

I would argue that healthcare is a technological industry that has failed to recognize it for decades.

I couldn't agree more with the individual who asked the question. As we reform curricula, it is about teaching people how best to integrate with technology. This comes with the proviso that you also have to teach that AI and other technologies are not a threat.

It's a way to augment how you deliver care.

I have a friend who's chair of radiology here at Mount Sinai, and he says AI won't replace radiologists, but radiologists who use AI will replace radiologists who do not. I'll leave it at that.

Michael Krigsman: I just want to remind everybody you can ask questions on Twitter using the hashtag #CXOTalk. If you're watching on LinkedIn, pop your question into the chat. When else will you have the chance to ask such an esteemed group whatever you want? I hope you take advantage of it.

This is from Arsalan Khan again, who asks: Healthcare data—do we need more or less? What can the government do at the federal policy level to help with data and interoperability?

Teresa Carlson: I've been working in the public sector world for over 28 years and am used to a lot of data. I've seen it continue to grow.

I don't know that there's a limit on data anymore. Artificial intelligence is a key solution for us to enable understanding of that data, research it, and get immediacy from it.

Data... I would be personally okay with as much as we need. We're going to keep building on it. I don't know that there's a limit.

When working with folks like the intelligence community and defense, you see over the years the amount of data. In healthcare, it's also significant.

To your point, what are the policies? I would tell you, we need to educate a lot more of our policymakers.

They want to understand. However, I don't know that we've done a good job explaining to them the harm caused if data isn't free-flowing and accessible, or why that inhibits our ability to make America healthy again.

If that is the goal of this administration, what does lack of access do?

I see our job at the institute, working with partners like these, is to go in with examples.

To David's point earlier, there are policies in place intended to ensure data is open and free-flowing. For some reason, we hear from many companies, large and small, that it's not working.

Not just from them; we hear it from physicians and clinicians in hospitals too. It's not working the way it's supposed to.

We need to figure out why that is. We need to ensure the policies already in place are being activated. Somebody needs to watch out for the patient to make sure that data is open and available to them, the clinicians, the hospital systems, and the vendor partners that have to have it to make America healthy.

Healthy. America is already great; let's make it healthy.

David Bray: Michael, if I can build on what Teresa said—because I want to give a nod.

General Catalyst (The General Catalyst Institute) put out a report. What was interesting is they discussed what Teresa mentioned earlier: shifting some examples from a fee-for-service model (where you get paid, but the patient doesn't necessarily follow through with advice to get healthier) to value-based and outcome-based care.

I would submit this is where data is crucial. If you're going to do value-based and outcome-based care, you first have to understand the long tail of that patient—for instance, are they pre-diabetic? Therefore, if you get involved now, you can avoid some more costly outcomes if they become diabetic.

That's one of those things, again, related to what David Reich was saying about fitting into the existing workflow for clinicians. Because if you give them a patient record spanning 40 or 50 years, they don't have time to wade through it to ensure they catch everything.

How can you give them confidence if they're working with an AI-augmented tool? It's not replacing them, but augmenting them. Confidence that the AI has brought the relevant things to help inform their decisions.

When you teach future generations of doctors and nurses, how do you frame it not just about the individual clinician but as part of a larger system? How do you ensure the handoff among everyone providing that value-based, outcome-based care keeps patient centrality in focus?

That's a major revolution to try and fit into the next five to ten years within a system already under strain.

That's what makes this space interesting. That's why we need these conversations now: Let's make sure we don't lose sight of empowering the patient and ensuring clinicians have the tools at hand to help.

Teresa Carlson: When we talk about maximizing fiscal responsibility in US healthcare, the other point I wanted to make is this: If we want to cut the red tape, we don't want to make it more expensive to get to the data and access it for those outcomes.

Because let's remember, AI and technology over time should drive costs down. People talk about costs increasing, but it should drive costs down because you can achieve outcomes faster and more efficiently.

Not just less expensive; you get results faster. That allows our clinicians to make decisions much faster.

I wanted to say that because we talk a lot... This administration is very focused on cost-cutting. We can't make it more expensive. That's why technology should be the enabler: to move faster, get resources there, and not make accessing data more expensive.

David Reich: Let me give a couple of practical examples that illustrate the points we're making.

About four years ago (maybe even longer), we started working on a predictive algorithm for delirium in hospitals, which is a big problem. The structured data were not very helpful; they did not result in a predictive model that was all that impressive.

But as soon as we added natural language processing (NLP) to the unstructured data in physician and nursing notes, the model became predictive—almost scary predictive in terms of its accuracy.
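The pattern Reich describes, concatenating structured features with text-derived features from clinical notes, is a common one. Here is a minimal sketch using scikit-learn with entirely synthetic data; this is an illustration of the general technique, not Mount Sinai's actual model, and TF-IDF stands in for the richer NLP they applied to real physician and nursing notes.

```python
# Illustrative sketch: structured fields plus TF-IDF features from free-text
# notes, feeding one classifier. All data below is synthetic.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

records = pd.DataFrame({
    "age":        [81, 45, 77, 52, 88, 39],
    "meds_count": [9, 2, 7, 3, 11, 1],
    "note": [
        "patient disoriented overnight, pulling at lines",
        "alert and oriented, ambulating well",
        "intermittent confusion noted by nursing",
        "no acute events, stable",
        "agitated, family reports new forgetfulness",
        "recovering as expected",
    ],
})
delirium = [1, 0, 1, 0, 1, 0]  # synthetic labels

features = ColumnTransformer([
    ("structured", StandardScaler(), ["age", "meds_count"]),
    ("notes", TfidfVectorizer(), "note"),  # unstructured text -> sparse features
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(records, delirium)
print(model.predict(records))
```

The point of the design is that the text features and the structured features land in one feature matrix, so the classifier can exploit signal that the structured fields alone never carried.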

As someone who was working with SNOMED and IHTSDO years ago, trying to establish standards, and then watching the moats (as David Bray described) being built by the different electronic health record (EHR) providers... At first, I thought, "Oh, this is the end of standardization. We'll never have interoperability."

But now I'm starting to see that the tools, especially large language models (LLMs) capable of creating summarizations of massive data sets, are very important.

I walked into the OR this morning. The patient had undergone five previous cardiac operations. I'm a cardiac anesthesiologist. Just imagine trying to summarize that data. It's challenging.

If a large language model had been available to me and had provided a five-paragraph summary of the most important unstructured data in all those records, it would have made my job much simpler. I would not have spent 45 minutes last night poring through electronic records.
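The record-summarization workflow Reich describes is often implemented as a map-reduce pattern over chunks of the chart, since decades of records rarely fit in one model context window. A hedged sketch of that pattern follows; `call_llm` is a placeholder stub so the flow is runnable, not a real model API, and the notes are synthetic.

```python
# Sketch of map-reduce summarization for a record set larger than one
# context window: summarize each chunk, then summarize the summaries.
def call_llm(prompt: str, text: str) -> str:
    # Placeholder: a real implementation would call an LLM provider's API.
    # This stub just returns the first sentence so the pattern runs end to end.
    return text.split(". ")[0].strip() + "."

def summarize_records(notes: list[str], chunk_size: int = 3) -> str:
    # Map step: condense each chunk of notes independently.
    partials = []
    for i in range(0, len(notes), chunk_size):
        chunk = " ".join(notes[i : i + chunk_size])
        partials.append(call_llm("Summarize these clinical notes:", chunk))
    # Reduce step: merge the partial summaries into one overview.
    return call_llm("Merge these summaries into one overview:", " ".join(partials))

notes = [f"Operation {n}: cardiac procedure notes. Recovery uneventful."
         for n in range(1, 6)]
print(summarize_records(notes))
```

In practice the prompts, chunking strategy, and a verification pass against the source records all matter for clinical use; this shows only the control flow.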

Those are practical examples of how we've perhaps lost the battle on having perfectly structured data. But with interoperability and newer tools, I think we can overcome those 'sins of the past'.

Michael Krigsman: What I find fascinating is that we even need a discussion to remind ourselves to put the patient first.

David Bray: There are so many different stakeholders in this system. That is because we separate our public and private sectors, which is a strength, but it does mean you have to think about how to empower clinicians (plural) working across different IT systems provided by different vendors.

Consider also the fact that the patient can choose where they go, in some cases potentially crossing state lines. Because nobody in the Constitution specified who oversees healthcare, it defaults to being a state matter unless the federal government uses its preemption powers.

In some respects, I liked what David Reich was saying. I also have battle scars from the standards battles about 20 years ago. There were people trying to create a standard identifier for everything in the known universe—I would submit that is never doable.

Now, using unstructured data and new approaches, we don't have to have everything specifically specified. We can help clinicians, whether through text summaries or computer vision analyzing thousands of images to highlight the ones needing attention. If anything, we may be in a world drowning in data but lacking insights.

If you can give the insights back to clinicians and the patient, that's how we're going to get through this.

Michael Krigsman: We have an interesting question from LinkedIn from Greg Walters on this set of issues.

Before I take it, I want to mention to everyone watching: you are an amazing audience asking incredible questions. We want you to join the CXOTalk community. Subscribe to our newsletter at cxotalk.com so we can tell you about upcoming shows, because we have incredible ones coming up.

Greg Walters says he believes AI can be the glue in healthcare—not only accessing old data but creating and tracking new data, therefore being the connective tissue. He wants to know: What is the level of investment dollars and commitment regarding this mission of AI that the healthcare industry is currently putting towards these technologies? What is the engagement level as well?

David Reich: We look at ourselves using standard surveys; I believe Gartner did one for us. For example, we look at how much we spend on cybersecurity versus the banking industry, and surprisingly, it's much less.

Especially with the challenges in healthcare, we're an industry with very low margins. In fact, with the downward pressures we're going to see (related to the administration's desire to reduce healthcare spending), we wonder if we'll be able to afford to maintain the level of spending other industries have on information technology writ large.

The answer has to be that we always, when looking at the ROI on a tool, have to see a way it reduces some aspect of the expense. For example, finding tools better able to engage patients as partners in their own care.

Truly personalizing medicine isn't just looking at genomic markers, biomarkers, and environmental factors and deciding what's best for that patient. You also have to analyze something we've never looked at: the receptiveness of the individual or family to being a partner in that care.

As we move from the 'case after case after case' I referred to previously toward population health and personalized medicine, we have to start bringing it down not just to the level of the provider and patient, but to the patient and their desire and interest in interacting with the provider.

Making America healthy is a partnership.

Teresa Carlson: Michael, if you look at venture capital, VC is investing about $11 billion this year in AI technology for healthcare. It might be small relative to the $4.5 trillion health market, yet it's getting there. We believe in it.

I will tell you, our CEO, Hemant Taneja, has been investing in healthcare for so long. He's a believer. He's created companies.

We believe so much in this model of AI transformation in healthcare. We have a whole group dedicated to this.

The other thing I'll share is what we are seeing on the healthcare side. For the six CEOs and founders we had here yesterday, their business is growing at a rapid rate, all focused on AI. We're seeing the healthcare industry—payers and providers—lean in because they see the opportunity.

I expect AI efforts in healthcare to grow exponentially.

I'll give you one example we've been discussing with this administration, and which we put in our white paper released a week ago: We advocate for sandboxes. These allow healthcare institutions, the US federal government, and states to see the value. They try things out; they realize they need to understand it.

We're big believers in experimentation. Because, back to Dr. David Reich's point, we don't want people selling him things he doesn't need. He should see the value in a technology before he adopts it.

He should have the opportunity to try it out in a sandbox. Because he might say... To his credit, he's a smart guy. He works very hard with patients every day. He might say, "I'm not sure, but I'd like to try."

We need ways to allow clinicians and healthcare providers to try out tools and show the value quickly. They can then acquire and scale them.

In today's world of cloud and AI, it's not a license-based world. You should be able to try things out and scale them.

Michael Krigsman: David Reich, how do you measure the clinical value and impact of tools like those Teresa described, including the outcomes and impact on patients?

David Reich: It comes back to the many ways ROI can be calculated. A simple one for a very complex tertiary or quaternary care hospital is this: Can I assure a better outcome, lower mortality, or a lower infection rate? Can I empty a bed more quickly and fill it with another patient generating another DRG (diagnosis-related group) payment? That's the simplest level.

At the population health level is value-based care. This is something we have not been successful at—much less so on the East Coast, a bit better on the West Coast. If I had a panel of patients with a narrow network who had to use our health system for their care, and I better managed their diabetes, hypertension, et cetera—could I reduce their spending?

Can I truly bend the healthcare cost curve? When you have a wide-open rodeo here in the United States, where if I'm not happy with Mount Sinai Health System, I can go to the health system a mile down Second Avenue or Third Avenue to a different one... We don't have the ability to model and measure at that level.

To answer your question, Teresa and Michael, it's such a mess because we don't control the spending, and we don't have a good way of measuring outcomes in an environment where people can get care wherever they choose.

Part of what we value as Americans is freedom of choice.

If you go to Europe, they've taken a different approach with national health insurance and created a captive market. They can (it's not a word we like to hear) ration care by deciding what's in the 'national basket'—what drugs and services will be provided. They can apply standards that are not really acceptable in US culture.

We have to think about these cultural barriers to bending the healthcare cost curve.

Michael Krigsman: Dr. Fatih Mehmet Gul, CEO of The View Hospital in Doha, says he appreciates the valuable medical examples shared for AI applications during this discussion.

However, he notes, we should not overlook the significant potential of AI in non-medical areas within healthcare. As we continue to face challenges around resource allocation and affordability, AI can play a transformative role in driving efficiency and improving access to care.

David Bray: We know, for example, with this incoming administration, there's a focus on eating healthier. Also, trying to change lifestyle habits: go outside, enjoy the sun.

Those are all things a medical doctor or a nurse doesn't need to prescribe, and even when they do advise them, it's hard for them to follow up.

You could imagine, however, an AI app focused on helping inform the patient so they could make the behavioral changes David Reich mentioned. So much of health, especially if you want to bend the cost curve he discussed, involves things traditionally called 'public health' in the United States. These are behavioral changes.

Again, the strength of the United States is its high decentralization, but that's also the challenge. You have freedom of choice.

The question is: How do you bring the nexus of choice and agency to the individual, letting them pick what they want to do, while also finding a way to bring together all the different parts outside the immediate medical setting that can improve health outcomes?

That's where there's a long tail, both before seeing a doctor or clinician and afterwards too. There are huge opportunities for innovation in this space, recognizing again that different countries have different choices.

The US strength is decentralization, but any solution has to take that into account.

Michael Krigsman: This is from Philip C. on LinkedIn, who describes himself as a builder, investor, and advisor. He says, "Regarding sandboxes: Is there a realistic pathway to bring together user data isolated in healthcare systems and user data scattered across their consumer apps? It seems like that's a natural next step for greater contextualization of people's healthcare needs via Gen AI and other AI."

Going back to what we were talking about earlier, David Reich, do you want to jump into this one? How do we bring the data together?

David Reich: I have a friend who taught me the expression "no one lies to their search engine." When we stop laughing about it, there's a certain truth to it, because we want to know certain things.

Looking at consumer information and marrying it with medical information certainly is going to provide additional insights. It's something I would look forward to.

The other thing we haven't addressed much in this talk is the concept of privacy. What is privacy? Did we give it all up once we started tracking ourselves with our phones? Of course, we gave up some aspects of it.

There's a balance in society between respect for privacy and the need for things like national security. Where are these balances going to occur?

As genomic information proliferates (Mount Sinai has 300,000 genomes in a 1-million-genome collection project), remember there's no such thing as a de-identified genome. We have to bring together what can be de-identified and perhaps work on that.

We have to create safe harbors in the national interest, where we can bring together data that must be protected so individuals' genetic information is not released.

To personalize medicine and bring data together effectively, you need the information outside the medical records and the information inside. It has to be shared in a way we are comfortable with as a society.

Teresa Carlson: I'll say one thing: Hippocratic AI.

I'm going to answer the gentleman from Doha and the previous questioner by saying I agree: There are many health applications out there that help in many ways, not just with the day-to-day clinical care we're discussing.

One solution I love is one of our investments called Hippocratic AI. Their mission is to close the growing shortage of healthcare workers by helping them scale. They don't do prescribing. They do follow-up; they make calls.

There's an agentic AI that calls the patient and asks questions about their health: "Have you checked your blood pressure? Have you taken your medicine? Did you get your prescription filled? Have you eaten healthy today?"

Back on nutrition: imagine scaling a nutritionist to a million people at a time.

Merging those two data sets, they're a great example of opening up the world's information alongside clinical details—without prescribing or doing a doctor's work. This helps the doctor and nurse be better and have more time. They get information back about what the patient is doing. When I see them face-to-face or via telemedicine, what core things do I need to focus on now?

We're seeing many great applications now that are helping our physicians and nurses scale their work effort. On the backend, it's more administrative work. It's helping educate and advocate—things I'm sure Dr. David Reich would love to do but doesn't always have time for.

This is where AI is going to help us in our world of patient care.

Michael Krigsman: David Bray, quickly, a question from Twitter: How do you ensure AI is used ethically in healthcare? There are competing agendas, including extracting value from patients for business. How do we manage these competing agendas, and where do ethics come into play? Quickly, please.

David Bray: Two things we need to see more of (I've seen this work in the UK; we can do it in the US):

In the UK, they call them data trusts; here in the United States, we call them data cooperatives. I volunteer with an effort called Birth to 3 that tries to make sure every infant in the United States gets the necessary physical, mental, and emotional care they deserve. Obviously, data about infants is very sensitive data.

What we did was use existing contract law to create a data cooperative. It includes members of that community, including representatives, who have a choice as to how and where the data is used. We explicitly stated that at no point will this data be monetized. You can choose which systems are used.

We're increasingly pushing for algorithms, if they are going to train, to come train in situ with the data, as opposed to porting the data elsewhere. This also solves the interoperability problem because it's not about data interoperability elsewhere; it's about bringing the algorithms to the data.
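The "bring the algorithm to the data" approach Bray describes is the core idea behind federated learning. Below is a minimal sketch of one such scheme, federated averaging, on a one-feature linear model; the sites, data, and model are illustrative only, not drawn from the Birth to 3 effort.

```python
# Each site trains locally and shares only model parameters, never records.
# One federated-averaging loop on synthetic data (true relationship y ~= 2x).
def local_step(weights: float, data: list[tuple[float, float]],
               lr: float = 0.01) -> float:
    # One pass of gradient descent on this site's private data (y ~= w * x).
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

# Each hospital's data stays in situ; only the updated weight leaves the site.
site_a = [(1.0, 2.1), (2.0, 3.9)]
site_b = [(1.5, 3.0), (3.0, 6.2)]

w_global = 0.0
for _ in range(50):  # federated rounds
    updates = [local_step(w_global, site) for site in (site_a, site_b)]
    w_global = sum(updates) / len(updates)  # server averages; sees no raw data

print(round(w_global, 2))  # converges near 2.0, the underlying slope
```

The design choice is the one Bray names: interoperability stops being about moving data between systems and becomes about moving a small, auditable model artifact to where the data already lives.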

Now, there are challenges with that, but it can be done.

The last thing I would say is we may need more boards that include members of the patient community being served by a clinical setting. They could be involved every three or six months, with explanations like, "Let's explain what we're using the AI for, and you can ask questions." That keeps stakeholders involved with both the data and the AI.

Michael Krigsman: David Reich, you mentioned using Gen AI to summarize complex patient histories. A question from Twitter: "Who should be held liable if the AI recommendations are hallucinations? How would you even know they are hallucinations?"

David Reich: We have a real obligation as we develop the tools to ensure they are accurate.

I often refer people to a study by radiologists showing that when mistakes were intentionally entered into a predictive algorithm, it led people astray. The residents were wrong 80% of the time. Even the experienced radiologists were wrong 50% of the time when the AI led them astray.

It is a prerequisite that the tools we develop be overseen, because when we build tools, we build them on data that is biased due to inequities within our healthcare system. That's part of the quality assurance process.

We make many errors in medicine, and we're always struggling to make fewer and improve things. The same thing applies to any technological tools that we have at our disposal.

Michael Krigsman: As we finish up, I'm going to ask each of you a separate policy-related advice question, and I'll ask you to answer quickly.

Teresa, let me start with you. Given all we've been discussing, what advice do you have for policymakers regarding technology like AI in healthcare?

Teresa Carlson: My top advice to them is they have to learn.

I'm seeing them open to this; they have to learn, and their policies must move faster. AI technology is moving much faster than they're instituting policies. They have to learn and understand. They have to be open to new ideas. They have to be on top of this.

Last, as we discussed earlier, they need to think about AI at the industry level. Let the experts do their job. Don't do broad policies that hurt the industry. Do something that allows the industry to understand.

Don't over-regulate. Let them proceed; they're already doing it. Let them own it and do it well.

Michael Krigsman: David Bray, what advice do you have for technology developers when it comes to AI and healthcare? Very quickly, please.

David Bray: Learn from what happened with the introduction of ultrasound for sonograms in the healthcare space.

Initially, when ultrasounds and sonograms for pregnant women were introduced, there was resistance. There were questions about efficacy; it had to be proven. How would it be worked into the workflow?

Now, nobody would question it. In fact, if anything, it might be considered wrong not to do an ultrasound for a pregnant woman.

It's worth understanding that there is a journey. Be patient with that journey and figure out (again, as David Reich said) how to plug into that workflow and show quality.

Michael Krigsman: David Reich, what advice do you have for healthcare leaders when it comes to AI and healthcare?

David Reich: Surround yourself with a group of forward-thinking people and perhaps even some from the old guard. You need a brain trust.

It has to come together and provide the best advice, because it's becoming like the Wild West out there. We have to have a way of digesting and facilitating better healthcare within our organizations: reducing waste, cutting costs, and ensuring quality.

Technology can be a double-edged sword. We have to be sure we do the best we possibly can. The way to do that is to surround yourself with the best minds.

Michael Krigsman: Makes sense. Surround yourself with the best people. And with that, a huge thank you to Dr. David Bray, Distinguished Chair of the Accelerator at the Stimson Center; Teresa Carlson, President of the General Catalyst Institute; and Dr. David Reich, Chief Clinical Officer of Mount Sinai Health System and President of the Mount Sinai Hospital in New York.

And a huge thank you to all of you who watched and asked such great questions. Check out cxotalk.com, subscribe to our newsletter, join our community, and come back. Come back next week. We have two members of the House of Lords next week, talking about AI and the ethical aspects and regulation. Join us.

Thanks so much, everybody. I hope you have a great day, and we'll talk to you soon.

Published Date: Mar 28, 2025

Author: Michael Krigsman

Episode ID: 874