We’re in the middle of a healthcare revolution – and artificial intelligence is going to make the difference. AI can help us diagnose diseases faster, deliver care more effectively, improve outcomes, and increase accountability.
To learn more, we speak with Kimberly Powell, vice president of healthcare at NVIDIA, and Daniel Kraft, a Stanford- and Harvard-trained physician-scientist, inventor, entrepreneur, and innovator who serves as Chair of the XPRIZE Pandemic Alliance Task Force.
The conversation covers topics including the impact of AI and natural language processing (NLP) for healthcare organizations, electronic medical records, healthcare AI startups, and much more:
- About AI and machine learning in healthcare
- AI in drug discovery and development
- Why AI is important in healthcare
- Obstacles to adoption of AI in healthcare
- How to democratize healthcare data and healthcare AI
- Managing security and privacy of healthcare data
- Data science and bioinformatics
- Educational resources for healthcare AI startups
- Clinical decision support and trusted AI systems
- How AI will change healthcare and medical practice
Michael Krigsman: AI in healthcare. We have two extraordinary guests. First, I'd like to introduce Kimberly Powell from NVIDIA.
Kimberly Powell: The work that we do at NVIDIA is really to create the computing platforms for the scientists of our time. If you think about Rosalind Franklin and her breakthrough in discovering the structure of DNA, we create the computers that we hope will enable the next generation of things like digital biology and computational chemistry (through the use of AI computing and simulation), and also enable the development of artificial intelligence that can help synthesize the enormity of digital healthcare data to assist in clinical decision-making. Be it AI-enabled medical instruments or clinical language understanding, eventually this is going to help feed a very large and complex recommender system that will really assist our healthcare professionals. It's really all about creating the computational platform to accelerate AI in healthcare.
Michael Krigsman: Our second guest and my esteemed guest co-host is Dr. Daniel Kraft. Daniel, tell us about your work.
Dr. Daniel Kraft: I'm a traditionally trained physician-scientist and have become (sometimes people call me) a futurist. I think it's a bit accidental, but I've always loved all the different elements of health, medicine, and technology, how they come together, and to think about how we leverage this exponential age, whether it's AI, robotics, 3D printing, nanotech, genomics, to chatbots and drones, that are all kind of converging (sometimes super-converging) to enable us to rethink how we do health and medicine across the care continuum from prevention and public health to better diagnostics to better therapy.
I think AI is becoming such an integrated component. I like to call it IA (intelligence augmentation) as a way that does not necessarily turn medicine on its head but becomes a real key tool to enabling us to go from our new sets of exponential data to action that can really improve outcomes and democratize healthcare around the world.
Michael Krigsman: Give us a little bit of context about the importance of AI, machine learning, and similar techniques in healthcare.
Kimberly Powell: For starters, as patients and consumers of healthcare, we don't know that most of our medical instruments today are running AI in the background. AI does everything from creating the images themselves to enhancing them, making care safer for patients by shortening long procedures and reducing injections.
You can see FDA-approved algorithms so powerful that they allow automatic triage of brain injury, where every second counts, or the breakthroughs we've just seen in DNA sequencing that let us do genomic surveillance. All of these instruments are using AI to an extreme, but it really isn't noticed by the consumer.
I would say the other area that is of great importance, and is having an absolute revolution, is digital biology. We mentioned genomics. It's one level of biology, but proteins just had a significant breakthrough, all the way through to what we can see in imaging and radiomics, all to help us understand disease in greater detail and, we hope, to really accelerate the discovery of therapies.
It's really happening more than people think. It's actually out there and deployed. There's an exponential curve of FDA-approved algorithms, so we're at the beginning of this hockey stick.
Dr. Daniel Kraft: You mentioned AI in drug development. Now you can model a protein or a change in a spike protein in COVID. Potentially, instead of the trial-and-error approach of picking a drug and seeing if it works, you can computationally design the next change in the base pairs that will make a protein therapeutic that fits.
I think we're entering a hopefully golden age where, instead of taking ten years to bring a drug from discovery to market, we can use AI to design the therapeutic, whether it's an mRNA, a traditional drug, or a biologic. We're using AI, machine learning, and now the Internet of Medical Things to enhance how we do virtualized clinical trials, sometimes with humans and now in this new era of digital twins, where AI and modeling predict (in vitro or in silico) what might work and what the side effects might be, narrowing and quickening the path to effective therapy. That could of course be an actual drug, but we're now in the age of digital therapeutics, or software as a medical device, where you can really start to hone and leverage our digitome, our microbiome, our genome, our sociome, and integrate them in a way that makes sense and doesn't overwhelm the patient, clinician, or healthcare system on the implementation side.
Kimberly Powell: Yeah. Daniel, you had a phrase that I love. We have a similar phrase with a slightly different concept: super-convergence; the super-convergence of the computer science industry, the digitization of biology, and now the accelerated computing era.
Just like in the deep learning revolution back in 2012, you had to have enough data, you had to have the AI algorithms, and then you had to have enough compute to start teaching these algorithms to learn from the data. We're at a super-convergence in the digital biology era, and it's more than exciting.
We use a term at NVIDIA called the super-exponential, which you also touched on: the rate of progress. By fusing something like artificial intelligence and simulation on accelerated computing, we're seeing million-fold gains in our understanding of biological systems, like the breakthrough that recently happened at our national labs, where they were able to simulate a very, very large COVID-19 virus in utmost detail.
They call it a digital microscope: they can see how this protein interacts with the cell. It really needed the entirety of a very large supercomputer. Again, it illuminated features and activity in the biological space that we've never seen before.
That rate of progress, millions of times more, is something the world (especially the healthcare industry) is just starting to get its head around. Super-convergence and super-exponential are two great terms.
Dr. Daniel Kraft: I think many folks, even those folks watching who are very into tech and health and medicine often don't appreciate exponential trends.
I founded and run a program called Exponential Medicine. It's all about understanding the pace of change. Fifteen doublings take you past 32,000, but the 30th step is over a billion, and the 31st step is over two billion. The capabilities are incredible.
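The arithmetic behind those steps is plain doubling, easy to check:

```python
# Each "exponential step" doubles the previous value.
steps = [2 ** n for n in range(32)]
print(f"{steps[15]:,}")  # 32,768 -- past 32,000 after fifteen steps
print(f"{steps[30]:,}")  # 1,073,741,824 -- over a billion at step 30
print(f"{steps[31]:,}")  # 2,147,483,648 -- over two billion at step 31
```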
I think a lot of folks might associate NVIDIA (as I did) with video gaming, you know, the supercharged computer. I'm a pilot and a physician. I love flying. Just looking at the old version of Microsoft Flight Simulator next to the new one, which leverages NVIDIA (and I upgraded my setup to speed it up), it's incredible what you can do just on a video gaming platform.
Now, take that to simulating life and biology, or leveraging it for virtual reality training where a surgeon goes into a virtual environment (just like a flight simulator) to do a procedure or interact with a molecule, collaboratively, around the world, at 5G speeds or faster. It really seems like magic.
I think the last ten years were pretty incredible. The next ten years will make the last ten look slow. This super-convergence needs to be appreciated. In any component of technology or healthcare you're working on, you need to think not just about what's been pretty incredible in 2021, but about where things will be (skate to where the puck will be) in 2025 and 2030. AI is going to get increasingly, sometimes scarily, powerful.
Michael Krigsman: Are there particular areas of the greatest promise for using these techniques?
Kimberly Powell: I'll speak about medical imaging, which got a really great, early head start. That was again through a rate of change we didn't really understand back in 2010, when iPhones were in everybody's hands. We could capture the entire world digitally and temporally and upload that data to the Internet, where computer vision was transformed overnight by deep learning. Medical imaging was able to ride on those coattails.
Oftentimes, what you're doing with images is looking for anomalies. You're trying to quantify what you see. Medical imaging is far and away taking the greatest advantage of artificial intelligence.
Where we're seeing an incredible opportunity (and the world is seeing it) is in a new AI algorithmic approach discovered about three years ago, usually used in natural language processing, called transformers. It's a new architecture that gives you the capability to learn in an unsupervised way. You do not have to have labeled data.
What you saw several years ago, all of a sudden, was that when you spoke to your phone, it actually worked. That was thanks to these algorithms, which could go and study the entirety of the Internet and the written word, so they could understand language.
This transformer technology and algorithmic approach is actually very applicable throughout the healthcare domain, and we're just starting to see how powerful it is. Because healthcare has the challenge of access to data, and to labeled data in particular, these approaches can unlock vast amounts of dark healthcare data we've never been able to look at before.
These transformers need to be fed huge amounts of data. We're seeing they can now learn the language of biology. We saw that with AlphaFold.
They can learn the language of chemistry. We're pioneering this with AstraZeneca right now.
They can learn the language of images and of segmenting organs. They can learn the language of doctors' notes, which we know are very idiosyncratic: they're institutionally unique and never the same.
Because we can now learn from unlabeled data, and large amounts of it, which we have in healthcare but were always throttled by the fact that it wasn't labeled and we didn't have these transformer techniques a couple of years ago, this is what's going to put us on another hockey-stick curve in how AI changes what we do across healthcare and life sciences. This is literally across the entire industry.
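The trick that makes this "unsupervised" learning work is self-supervision: the text supplies its own labels by hiding tokens and asking the model to predict them. A minimal sketch of how unlabeled clinical sentences become training pairs (the function and sample notes are illustrative, not any particular framework's API):

```python
import random

def make_masked_examples(sentences, mask_token="[MASK]"):
    """Turn unlabeled sentences into (input, target) training pairs by
    hiding one token per sentence. The text supplies its own labels,
    which is what lets transformers learn from unlabeled data."""
    examples = []
    for sentence in sentences:
        tokens = sentence.split()
        i = random.randrange(len(tokens))      # pick a token to hide
        target = tokens[i]
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        examples.append((" ".join(masked), target))
    return examples

notes = [
    "patient denies chest pain on exertion",
    "mri shows no acute intracranial abnormality",
]
for masked, target in make_masked_examples(notes):
    print(masked, "->", target)
```

A real transformer learns to fill the mask from billions of such pairs; no human ever has to annotate the notes.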
Michael Krigsman: We have an interesting question from Arsalan Khan, who is a regular listener. Arsalan, thanks for tuning in. He asks, "Is the grass always greener on the other side of this super-convergence? What are the negatives of AI in medicine?" It's an interesting question.
Kimberly Powell: We have to be incredibly diligent on the safety of these algorithms. It's no one company. It's an absolute full ecosystem approach.
You're seeing the FDA rapidly transform so they can understand how to bring these software-as-a-medical-device algorithms to market in the safest way possible. We need to be able to keep these algorithms under surveillance as they live out in the wild.
Just like humans, we learn something every day. These algorithms need to be able to continuously learn.
The entire industry and ecosystem have to evolve and be acutely aware that if we don't continue to learn and don't put the right processes in place, these algorithms could be biased. They could miss things. This is where technology can be helpful, but it's a full ecosystem approach, and technologists have to understand the FDA processes so we can help the industry move forward.
That's like any artificial intelligence. We have to be very, very careful about both its robustness and its ability to continuously learn, because the world is ever-changing.
Daniel, your perspective?
Dr. Daniel Kraft: Yeah, and that learning often depends on what data we feed the machine, and in some cases that data can be biased. We base so much of our cardiovascular healthcare decisions on the Framingham Heart Study, a pretty limited, mostly Caucasian cohort from a single Massachusetts town.
Now we need to think about health equity, access, and data sets, from genomes to sociomes to metabolomics, that might differ among patient populations, as well as the bias that we, as clinicians and others, feed into the algorithms (in unintentional and sometimes intentional ways), for example a particular course of care in oncology that's only been tested in a certain socioeconomic or racial group. There are lots of ways you can bias the outcomes through the data you feed in, labeled or unlabeled. That's one area.
I would say, with any fast-moving technology, there are pluses and minuses, just like 3D printing. You can use it to 3D print a medical device or a gun. AI can be used for good or for bad in terms of over-surveillance or potentially amplifying bad actors.
I think, as Kimberly mentioned, we need to think proactively, put the right guardrails, and study how we can move these forward. It's tremendously powerful, but there are always (like with anything) some positives and negatives, moving forward. Sometimes the regulators and the policymakers are a bit behind the exponential curve on what might be coming next.
Kimberly Powell: Yeah, and to dovetail off of that, there are two things that we think a lot about when we invest a lot of energy and effort.
One is the ability to learn from global data without having to share data; in healthcare, that's an absolute must. So we're pioneering what's called federated learning, where you bring the algorithm and the compute to the data, so the characteristics of each local data source are injected into the algorithm without sharing any patient information. This is going to be the bedrock of future AI development, bringing global equity and reducing bias.
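As a toy illustration of the federated idea (a linear model and plain federated averaging; not NVIDIA's actual implementation), each site computes an update on data that stays local, and only model weights travel back to be averaged:

```python
import numpy as np

def local_update(weights, site_data, lr=0.1):
    """One gradient step on data that never leaves the site.
    Toy model: linear regression, minimizing ||X @ w - y||^2."""
    X, y = site_data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, sites):
    """Each site trains locally; only weights, never patient records,
    travel back to be averaged (the FedAvg scheme)."""
    updates = [local_update(global_weights.copy(), data) for data in sites]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
sites = []
for _ in range(3):                     # three hospitals, each with private data
    X = rng.normal(size=(50, 2))
    sites.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, sites)
print(np.round(w, 3))                  # converges toward [1.0, -2.0]
```

The global model ends up reflecting every site's data, yet no raw records were ever pooled.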
The other thing you touched on is that there could be bad actors. The way NVIDIA has thought about this from day one is to democratize the technology. The more we democratize it to the far-reaching corners of the Earth, the more we can stamp out those bad actors. No one company, no one powerhouse, has all the capabilities in the world.
That's why we're so passionate about democratizing this. Our mission in the healthcare team at NVIDIA is to democratize it even to the clinicians so that, one, they get educated on it, and they're empowered to build and deploy their own AI applications to better serve their patients.
Democratization is one of the most important things we can do. The movement in the open-source software community allows each researcher to build on the one before, and it puts the tools into more hands. We're educating students at all levels, we're educating practitioners who need to be brought into the fold, and the computing platform is ubiquitous.
As you said, the GPU that's used for gaming, you can build AI applications on it just the same. I think those are really important considerations.
Dr. Daniel Kraft: Part of that, I think, opens up the question of how do we educate, let's say, medical students and nurses today to enter this AI-powered future. There's not much in the way of even digital health education or digital medicine in medical school.
How do you validate or use an AI algorithm? There's a lot of resistance or fear about AI taking the jobs of radiologists, pathologists, dermatologists – let's say the ones that do a lot of pattern recognition. There's a famous quote – I'm not sure who originated it – that AI is not going to replace your doctor, but the doctor using AI will replace the doctor who doesn't.
We need to think more about the collaborative form of using AI (augmented intelligence, or however you want to phrase it) in health and medicine. In some cases, you need to be able to look beneath the black box. If there's an ability, like Google DeepMind's, to look at your retina and predict the progression of diabetic retinopathy or the risk of heart attack or stroke, how is that derived? Sometimes that's easy to explain, sometimes very difficult.
How do we give trust and enter AI into the workflow of a clinician, so when they're choosing the right statin for a patient, and the AI algorithm suggests that based on the pharmacogenomics, they can really know to trust that, validate it, and keep improving those algorithms? Lots of challenges on integrating this into the workflow, into the culture of health and medicine, which is sometimes very resistant to change.
Michael Krigsman: When you talk about democratizing healthcare using AI to democratize the delivery of healthcare, are you talking about having transparency into algorithms? Are you talking about making it easier for non-doctors to diagnose? Aren't there risks associated with that? Are you talking about spreading healthcare knowledge to parts of the world where maybe it's not as readily available? What exactly do you mean by democratizing?
Kimberly Powell: Yeah, it's all of the above, and we definitely take a full ecosystem approach to it.
The RSNA is the Radiological Society of North America. It holds its largest conference at the end of every year. We showed up four or five years ago, and there was a lot of resistance to us and what we were trying to do.
What we did the next year is we set up what we called our Deep Learning Institute for Radiologists. We had thousands of radiologists pass through this training. The next year, they asked for it again. The next year, they asked for it again.
One, education is absolutely the starting point. Where we've evolved to now is we actually have radiologist tools, tools that they use every single day where they can actually get involved, label some data, train their own algorithms. That is another form of democratization.
It allows them to take their destiny into their own hands and say, "Wouldn't it be great if I had an algorithm that could do this, because it takes me ten minutes every time I read this type of study?" They're trying to make themselves more efficient and spend their time on the craft they've mastered over the years, focusing on the patient.
Another form of democratization is in the deployment of these algorithms. Because of artificial intelligence, the medical device and instrument industry is going through a revolution. They're able to shrink these instruments and make them cheaper, and even make them intelligent in their own right.
There's a wonderful company called Caption Health. They have the first FDA-approved image-guided ultrasound. It tells the practitioner performing the scan, "Up to the right, over here, to the left," guiding them to capture images that help assess heart conditions; it's even approved for looking at what's going on in the lungs of COVID-19 patients.
What this enables is, number one: ultrasound is highly operator-dependent. There are a lot of dependencies; image quality is subject to the user who captures it. The more you can guide them, the better the image quality will be.
Secondly, you can push it from having only a specialist capture these kinds of images out to nurses. They actually did this during COVID-19, when we were overloaded, there were patients in the hallway, and you didn't want to have to move them. You could use these kinds of instruments. That should democratize it to the world.
Most people don't know this because we live here in the United States, but only one-third of the population has access to diagnostic imaging today. If we can make it cheaper and intelligent, that improves the access tremendously. That's what we need to do.
We need 100% access to this technology. It's the way we do early detection, diagnosis, and treatment. Without it, it's not an equitable world.
Dr. Daniel Kraft: I love the example of the ultrasound. Another company that's pioneered this, I think it's called Butterfly. They have what used to be a $200,000 cart that I would push around as a medical student now has shrunk to a device that plugs into your iPad or smartphone and is also empowered with AI so that almost anybody – it could be a nurse practitioner, a community health worker, the patient themselves – could do the exam and it will calculate the ejection fraction or guide you through different sorts of exams that again can democratize where and how we do imaging to bring that to rural Africa or to rural California at much lower costs. That's just the very beginning.
What I think is interesting is that concept of crowdsourcing. Just like Tesla: when they're doing partial self-driving and there's a curve the car needs to slow down on, they can, in the next update, inform all the other Teslas in their hive mind. The power of AI, machine learning, big data, and interconnectivity, plus the fact that we're going to get satellite-based Internet to most of the planet pretty soon, is empowering big data, AI, and diagnostics in everything from telemedicine to AI decision support. That's going to really change the game and hopefully bring care to places that didn't have it, or enhance it where we do.
Kimberly Powell: Yeah. Imagine spreading it out, just like phones spread. Tesla is so far ahead in the automotive industry because of the number of instruments, if you will, in the field collecting data. The sooner we can get these instruments out there and multiply their numbers massively, the sooner we have a flywheel for AI, just as you said. Really, really exciting times.
Michael Krigsman: We've got several questions backing up on Twitter. Let's turn to those. You can see I try to prioritize those questions that come in from our great audience.
The first topic is coming from Chris Petersen. He wants to know about security. Daniel, maybe I'll direct this one specifically to you first. Chris Petersen wants to know, "What role should companies like NVIDIA or other providers of the underlying technology play in data security, privacy, and the challenges around those gigantic data sets?" Daniel, why don't you take that first, and then we can ask Kimberly?
Dr. Daniel Kraft: It requires a collaborative mindset to maybe help set some of those guidelines, whether it's interoperability or privacy standards. I think we also want to encourage folks to be able to share their data in private, anonymous ways (when appropriate).
The idea of being a data donor can help feed the AI engine, just like Google Maps or Waze. We share our driving data and our speed and location. It builds a better map for our particular driving. That can come to health and medicine as well.
I believe the individual should own their own health data, but also have the ability to share it and hopefully learn from it. From Apple, Google, Facebook, Microsoft, and NVIDIA, it needs common guidelines to make it a safe process going forward and to envision what might be coming next.
One quick example: I've been chairing the XPRIZE Pandemic Alliance Task Force, and NVIDIA is one of the members of the alliance; we're blending that into a Health and Pandemic Alliance. Part of the theme is to democratize healthcare and leverage data and AI to bring care and insights to places that didn't have them. That requires trust in the system: the ability for folks to be data donors, and to use federated learning or other processes that keep data appropriately protected but also sharable, to make the impact it promises.
Kimberly Powell: From a technology standpoint, this is front and center in our thinking. It is absolutely the case that AI needs to move further out to the edge for all the reasons we just described (getting it closer to the patient). But that means you have to be able to secure the data, even inside those machines and inside the walls of the hospital.
NVIDIA (I think the world knows) joined forces with Mellanox, the world-class networking company. With these extreme architectures, we can put in what's called a Data Processing Unit to create security at the node level: embedded in an instrument, in each individual server inside the walls of the hospital, and protected all the way through to cloud computing. Absolutely, we need to move away from relying only on the outskirts of a data center firewall and get down to the node level, where compute happens and patient data resides.
We are well along the trajectory of injecting all of that security from a technology perspective into all of the computing platforms that we build. It's absolutely one of the considerations that every computing company has to take, and we have to.
Again, this is no one person's problem. This is everybody's problem. Whether you're a data owner, a data host, or someone who computes on the data, you have to take responsibility for its security.
Michael Krigsman: Chris Petersen comes back, and he wants to know, "Is the AI future in medicine solely about algorithms that learn, or is there still a role for the kind of expert systems that have played in this space for decades?" If I can translate that: basically, what's new here?
Kimberly Powell: Data science, you could say, is somewhat of a superset. The methods that are used in data science are still very, very valid.
We have deep learning neural networks just the same as we have all of the machine learning algorithms that bioinformaticians have used extensively for decades. What we're doing now is accelerating those.
It used to be that you would ask a bioinformatician a question, they would go execute the algorithm, and it might take six hours or two days to return the result. We're trying to create an interactive experience.
Imagine being able to visualize and interrogate very, very large data sets in the traditional machine learning way but now completely interactive. It will absolutely transform the way they work.
I would also say that without data science and machine learning (and everything we've had to do on the data processing end), we can't even feed the neural networks. It's the first part of the process: gathering the data, cohorting the data, and massaging the data so it's fit for purpose to be put into a neural network. Without them, we won't go anywhere; they're absolutely complementary in so many ways.
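A minimal sketch of that first step, gathering, cohorting, and massaging raw records into something a model can consume (column names, values, and exclusion rules here are made up for illustration):

```python
import csv
from io import StringIO

RAW = """patient_id,age,glucose,diagnosis
p1,54,110,diabetes
p2,,95,healthy
p3,61,NA,diabetes
p4,47,101,healthy
"""

def load_cohort(text, min_age=0):
    """Cohort and massage raw records so they're fit for a model:
    drop rows with missing values and coerce numeric fields."""
    rows = []
    for row in csv.DictReader(StringIO(text)):
        if not row["age"] or row["glucose"] in ("", "NA"):
            continue  # exclude incomplete records
        rec = {"id": row["patient_id"],
               "age": int(row["age"]),
               "glucose": float(row["glucose"]),
               "label": 1 if row["diagnosis"] == "diabetes" else 0}
        if rec["age"] >= min_age:
            rows.append(rec)
    return rows

cohort = load_cohort(RAW)
print([r["id"] for r in cohort])  # ['p1', 'p4'] -- incomplete rows excluded
```

Only after this kind of cleaning and labeling does the data become a valid input to a neural network.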
What are the recommender systems of today dealing with? Trillions of transaction records, which they try to distill all the way down to a customized recommendation. That brings to bear every aspect of computer science, from machine learning to deep learning networks and so on.
What's been new here has been the deep learning revolution and it continues to have incredible new promise. But without data science and traditional machine learning, it won't go as far as it could go.
Dr. Daniel Kraft: One other thing about where this can go: I was just at a TED conference and heard from the head of OpenAI about what's happening with GPT-3, which could potentially write your medical notes for you or script certain elements. I think the challenge might be not just the technology but how you integrate it into care.
There's a whole rising issue of clinician burnout from dealing with electronic medical records and sometimes too much data. AI can play a role in feeding that data in appropriately, whether it's taking thousands of data elements and understanding who is likely to have sepsis, a fall, or a line infection in the inpatient setting, all the way to monitoring our homes and patients' behaviors, physiology, voice, et cetera, so we can proactively identify problems early and, leveraging all these new elements, give the right recommendation and prompt the right action in a way that catches disease before it's expensive or deadly.
Michael Krigsman: We have another question from Twitter. This is from Viacheslav Phaedonov. He wants to know what kinds of information (books, blogs) can you recommend for medical entrepreneurs who want to learn more about AI and integrating AI into their businesses, particularly from a healthcare perspective. Any thoughts on that?
Kimberly Powell: We do our absolute best, at least at NVIDIA. I'll start there and then mention some others I enjoy. We try to put everything into language that anyone can understand. I believe I could transform anyone from another domain to come and work on healthcare, because you can talk about the contribution that applying a technology or a skillset can make.
We put all of our ecosystem stories out into blogs, whether it's the University of Florida, where we just pioneered the largest clinical language model (what Daniel was just talking about) to do all of this amazing natural language processing inside clinical care, and even into training notebooks and ways you can put your hands on the code and try it yourself. I would say NVIDIA is a fantastic source, and you can follow us on our social channels.
Others include The Medical Futurist, a wonderful read that really captures all the dynamics of what's going on across the field.
There are several conferences that are also really great to attend. GTC is NVIDIA's GPU Technology Conference, which has complete AI healthcare tracks and trainings inside it. We attend, as do others, the RSNA, which has AI pavilions, all the exhibitors, and actual training there as well. You can really tap into a lot of these conferences and digital channels to learn not only how it works but even to do some of the work yourself.
Dr. Daniel Kraft: Clinicians out there, a lot of doctors are retraining in AI. A colleague of mine, Dr. Anthony Chang, who is a pediatric cardiologist, has multiple platforms on AI, has a new book out called Intelligence-Based Medicine, which sort of covers the gamut of how intelligence-based medicine is evolving.
A colleague of mine, Rajeev Ronanki, SVP and Chief Digital Officer for Anthem – there's Anthem AI, one of the big payors, the third-largest payor, thinking about using machine learning and AI – has a new book called You and AI. And so, there are lots of resources there.
We have a lot of videos from my Exponential Medicine conference that covered the potential and the future of AI in healthcare as well.
Michael Krigsman: We have a very interesting set of questions from Twitter from Arsalan Khan again. These really have to do with trust. He says, "Is the idea behind AI in medicine to be just an aid to medical professionals or to replace medical professionals? If it's just an aid, then who is responsible when there's malpractice? Is it the AI, the medical professional, both, or neither?"
Then he goes on to say, "Do patients really need to trust AI for their healthcare needs?" I think that this gets back also to the algorithm point that Kimberly raised earlier.
Dr. Daniel Kraft: There's an old adage that 50% of doctors are below average. [Laughter] We know that there are huge issues with medical errors.
Now, in this exponential age of all the new data sets and new papers and new learnings, there's no way to keep up. And so, if we're going to up-level ourselves, I think the potential for AI is not, again, to replace the clinician but to up-level us and enable us to make use of extensive new forms of information, new convergences, new crowdsourcing. The future of care will be, "I've got a patient with problem A, and I have options of drug B or drug C, but there's no double-blind placebo-controlled trial, or there might be one that's quite limited from ten years ago."
The future will be, I'm going to have the just-in-time real-time clinical data from Geisinger, VA, NHS, all synthesized, helping guide my care for that patient in front of me, virtually or in person. I think that's some of the potential to, number one, make the best diagnostics and therapeutic decisions.
Also, our brains haven't had upgrades. The NVIDIA platforms upgrade every month or week. We need to synergize with these to do the best for our patients at the public health level to stave off the next pandemic or to help find cures for different forms of mental health, to cancers. We need to think collaboratively, and that means we need to think about education, user interface, design, understanding how AI can synthesize into practice, to drug development, to public health, and beyond.
Kimberly Powell: From a trust perspective, as I mentioned early on in the conversation, consider the number of steps it takes just to acquire a medical image. Let's say you come in with a knee injury and you're going to get an MRI. There are dozens of steps that have to happen before you even get to the interpretation, the diagnosis of that MRI. There are AI algorithms helping all along the way, and so we should trust the parts of the technology that are making the technology better.
As patients, when we went from film x-ray to digital x-ray, I don't think we stopped to ask, "Should we trust the digital x-ray?" We could see it was a much more powerful tool that, number one, gave our clinicians all the information they could possibly have at the best quality, and maybe even faster than they had it before.
There is so much efficiency to gain here in healthcare, which Daniel touched on, starting with removing medical error. Doctors are human, after all. And so, the more data we can present to them, the better their decision-making will be.
As Daniel said, instead of having to amass 30 years of experience, clinicians who have dedicated their lives to this can now, perhaps in their first 5 or 10 years, be at that 30-year level because of the augmentation of artificial intelligence.
I think we should really think of this as a tool, not a replacement. Just as we enjoy it for the efficiencies in our own homes, whether it's making our grocery list or setting a timer automatically with our voice, why shouldn't this industry bring to bear all of that technology to really help with efficiency and better outcomes for patients?
That's my perspective on trust: these are tools that help along the way. I think, from a patient perspective, as I said before, we're actually not going to know exactly which AIs were involved in a patient journey because they're just now part of the instruments and the workflow efficiency, and I think that's the way it should be.
Dr. Daniel Kraft: We're still early in this AI evolution. The rubber is just really starting to meet the road in health and medicine, particularly in radiology as the starting point. But it's going to become incredibly transformative.
If you're a technologist, entrepreneur, clinician, or patient out there and you see a pain point that you want to solve—whether it's personalizing your medications, helping prevent a fall with a camera and AI, or taking vital signs off your mobile device or from your voice—there are ways to now collaborate and solve for that X problem using AI and the convergence of all these other super-exponentials. It's a really exciting time and I encourage everyone to try and stay up to date because it has a huge potential across health and biomedicine.
Michael Krigsman: On Twitter, Jim St.Clair raises another aspect of the transparency issue. He says, "Part of this is understanding how doctors are working with the AI. For example, did the doctor just prescribe because the AI said so, which indicates maybe too large a reliance upon the AI as opposed to on their own experience?"
Kimberly Powell: Not being a clinician, and not being able to put myself in those shoes, I think we have to continue to trust our medical system and the individuals who have trained and gone into this field for the good of patients. They will similarly have to understand how this technology works and how it affects their way of working. Again, it comes down to education: understanding how these algorithms work and how they should be used.
In fact, if you look at the FDA, they have tiers of classification that ask:
- Is this a non-diagnostic algorithm?
- Does it have some level of patient interference?
- How critical is the decision that the algorithm makes?
- With any kind of software (whether it's AI or not), could it injure the patient?
We need to have that spectrum across all decision-making in healthcare.
Michael Krigsman: As we finish up, do you have final thoughts on the democratization of healthcare and the benefits that AI and machine learning are going to bring to us ordinary people who just want better healthcare?
Kimberly Powell: We are building on the shoulders of giants: the consumer Internet companies that have used computer vision to make our doorbells intelligent, given us smart speakers, and recommended things to us. Being able to bring that power into healthcare is, to me, the most exciting part.
Imagine being admitted to the hospital and having the ability to have a conversation with a virtual nurse, instead of a physical person (who is already oversubscribed) having to come in and answer questions about what time your surgery is. If we can bring all of these technologies to bear, I think, as patients, we would appreciate it.
Why should a hospital feel more antiquated than your own home? That's number one: we want to democratize that and bring all of this key, core (and now everyday) technology into the whole patient care system.
On the other hand, the other place that we're so deeply passionate about is building the tools with which the community (the ecosystem) can develop their own artificial intelligence algorithms. We set off on an endeavor with the academic community, specifically King's College London, which is a world leader in medical imaging AI, and we have built an open-source framework specifically for imaging in healthcare.
It's called MONAI. It's only about a year old, and we're seeing thousands of downloads a day. This package that we put together is what I alluded to before.
It is actually something that a radiologist can use. They can label their own data. They can train their own algorithms.
At the same time, it's also something that someone deep in academia, getting their master's or Ph.D., can use to develop very sophisticated, state-of-the-art algorithms.
It's also being used by medical imaging instrument companies in their R&D efforts to accelerate the iteration of innovative technology that they can deploy.
This notion of open-source tools, making them accessible to a deep researcher in academia, a radiologist in the clinical space, and the healthcare industry itself, is really about accelerating and democratizing AI across the ecosystem. Healthcare is just too complex an industry for a tech company to know all the answers, and even an organization that has been in healthcare for 100 years has to think differently about how AI is going to manifest itself.
Nobody but the clinicians can be in the shoes of clinicians, and so that's the democratization that I think is just so wonderful. We continue to invest large amounts and work across the ecosystem (start-up companies, academia, clinicians, industry) to really accelerate this revolution, which is so exciting.
Michael Krigsman: It sounds like that's the driving force for the work that you're doing.
Kimberly Powell: Yeah, exactly.
Michael Krigsman: Well, I'd like to thank Kimberly Powell from NVIDIA so much for being here. Kimberly, thank you very much. I really appreciate it.
Kimberly Powell: Yeah. Thank you. It was wonderful. It's a great show. Appreciate the opportunity to share my passion about the future of AI in healthcare.
Michael Krigsman: Daniel Kraft has just dropped off, but I would like to thank him as well for being my guest co-host today. Everybody, thank you for watching, especially the folks who asked such great questions.
Now, before you go, please subscribe to our YouTube channel, hit the subscribe button at the top of our website so that we can send you our newsletter, and check out CXOTalk.com because we have awesome shows coming up. Thanks so much, everybody, and I hope you have a great day.
Published Date: Aug 27, 2021
Author: Michael Krigsman
Episode ID: 718