Exponential Technology and Public Policy

What is exponential technology and how is it changing the world? Georgetown University’s Dr. Elizabeth “Libbie” Prescott and Harvard’s Dr. David A. Bray tell Michael Krigsman of CXOTalk about rapid changes in exponential technology, its implications on public service and public policy – and the legal or ethical implications.


Aug 25, 2017


Prescott works at the intersection of science, technology, and security as the Deputy Director and Education Portfolio Lead for MD5 National Security Technology Accelerator (MD5 NSTA) and Adjunct Associate faculty at Georgetown University in the Center for Security Studies in the Walsh School of Foreign Service. She previously served at the U.S. Department of State as a Special Assistant to the Deputy Secretary of State for Management and Resources Heather Higginbottom; Counselor and Strategic Advisor to the Science and Technology Adviser to the Secretary of State E. William Colglazier; and Science and Technology Adviser to the Assistant Secretary of State for the Bureau of East Asian and Pacific Affairs Kurt Campbell.

Bray was named one of the top "24 Americans Who Are Changing the World" under 40 by Business Insider in 2016. He was also named a Young Global Leader by the World Economic Forum for 2016-2021. He accepted the role of Co-Chair for an IEEE Committee focused on Artificial Intelligence, automated systems, and innovative policies globally for 2016-2017, and has been serving as a Visiting Executive In-Residence at Harvard University since 2015, focusing on leadership strategies for our networked world. He has also been named a Marshall Memorial Fellow for 2017-2018 and will travel to Europe to discuss Trans-Atlantic issues of common concern, including exponential technologies and the global future ahead.


Michael Krigsman: Digital-enabled biology. What in the world does that mean, and what are the implications for healthcare? For policy? For the government? How does it all relate to innovation and what's the government doing about this anyway, because public policy plays a role? And today, on Episode #252 of CxOTalk, we are speaking with two amazing people and that's what we're talking about.

I’m Michael Krigsman, an industry analyst and the host of CxOTalk. And before we begin, I want to say “Thank you!” to Livestream for supporting CxOTalk. They provide our video streaming infrastructure and they’re great. And if you go to Livestream.com/CxOTalk, they will give you a discount on their plans and I hope you do that because they’re really great.

So, without further ado, let’s begin by introducing our first guest. Libbie Prescott has an amazing background in the government and now, she is a professor at Georgetown University. Hey, Libbie Prescott, how are you doing?

Libbie Prescott: I’m great! Thanks! It’s great to be here!

Michael Krigsman: So, Libbie, tell us about your background and tell us what you do!

Libbie Prescott: So, I often introduce myself as a "recovering scientist." I am one of those who did my Ph.D. in Molecular Biology before I realized that I was less comfortable in a lab, or less interested in being in a lab, and more interested in engaging globally on issues at the interface of technology and how it is integrated into society. I've now been in government for a decade, across three different agencies, working not only on bio, but also on data and data transparency issues within government; and on a lot of how we embrace, engage, and bring on board new technologies, to use them for the national security mission and governance in general.

Michael Krigsman: And, you said you’re a “recovering scientist,” and so, what was – or what “is,” I should say – your scientific field?

Libbie Prescott: I did my Ph.D. in Molecular Biology on gene transcription, so that step between when you take DNA and actually turn it into a transcript that is then read into a protein, so at the very basic level of research. And, I really was always captivated by biology, but I wanted something more engaging with groups and people than just doing independent research at two in the morning in a lab, which, I think, is something many scientists learn along their path.

Michael Krigsman: And so, you joined the government and, very briefly, what did you do for…

Libbie Prescott: Originally, I actually started on the Hill doing healthcare policy, working with Senator Kennedy many moons ago, and that was an amazing opportunity to see healthcare at a political level; less about policy, more about politics in that context. And then, I moved over to do international science policy, working on Asia and on how a lot of countries were using science in their policymaking and applying it to the goals of the nation. And then, I moved over to the State Department, working in the Bureau of East Asian and Pacific Affairs to look at Asian regional science cooperation…

Michael Krigsman: Got it.

Libbie Prescott: …and then, ended up working for the Deputy Secretary trying to modernize, and help modernize the State Department leading with technology. And then, over to DoD for a couple years where I was looking really at how DoD does their technology and how we can get some of these more emerging technologies into how we do our core business, which is defending the nation.

Michael Krigsman: Okay! So, long history in the government. And, our second esteemed guest, and the person who introduced us and introduced me to Libbie Prescott, is David Bray, who, in one sense, doesn’t need any introduction because he’s been on CxOTalk many times. But, maybe, it’s a good time to do a re-introduction. And so, David Bray, welcome back to CxOTalk!

David Bray: Thanks for having me, Michael! And, I have to say, this is a particular treat. Libbie and I have known each other for several years, and to be on this show with Libbie, who, as you saw and heard, is an impressive change agent with a diverse career across public service, is a special treat. To me, it shows how broad public service is, and how many different pieces are moving at all different times, with different people pushing the envelope.

As for myself, thanks again for having me. I guess, in terms of a re-introduction, I'll be starting a new role, and that will be – it's been announced – with Vint Cerf, for what's called the People-Centered Internet Coalition. It's been around for about three or four years. I first got involved when I was an Eisenhower Fellow, so we actually had some conversations earlier with CxOTalk, I guess two years ago, about this topic. But Vint and I were talking, and they said they needed a new Executive Director, and so, by the start of October, I will leave the government, as Libbie did as well. My role as Executive Director will basically be to spotlight, support, and assist existing projects that measurably and demonstrably improve people's lives using a community-based approach to the internet. So, obviously, that's very broad. I look forward, once I start, to helping narrow it down to some focused goals and priorities. But, the chance to work with Vint is a once-in-a-lifetime opportunity.

The other thing, and hopefully, I’m not looking too tired on this show today, is that about two months ago, I became the father of a newborn baby boy that we adopted. And so, that’s also what triggered the change in life: really thinking about what’s going to be …that he is going to be using when he’s 18-20, about two decades from now. And, I look forward to conversations today on that.

Michael Krigsman: Okay! So, you are joining a new organization and you have a new baby. And so, how’s sleep going these days? [Laughter]

David Bray: [Laughter] Umm, what is sleep? I hear it’s a rumor! I think I recall it maybe about three months ago, but that’s about it!

Michael Krigsman: Alright. So, let’s dive in. Data-enabled healthcare; Libbie, do you want to explain what we mean by that, and this concept of exponential technologies? Where does it all fit together?

Libbie Prescott: Absolutely! So, part of what I've really enjoyed about tracking emerging technologies over the years is there's always something new emerging, and particularly right now, a lot of technologies are converging. And with that, you get these really unique interfaces, but also a lot of challenges in how to think about combining the communities, but also the technologies; and then ultimately, the governance of those technologies, and applying them in any context, but particularly in government, is always really difficult. So, when I talk about biotechnology, or biology, or healthcare in general, I think we're seeing a merging of data and health. In the past, a lot of the delivery of our healthcare system, which I think is more appropriately described as a "disease-care system," hasn't optimized for the health of an individual or even of a population; really, we're just treating things in […].

What data is allowing us to do is to understand how the human, as an organism, is actually operating, in closer to real-time. And, with that, we will be able to do a lot of unique things: not only, hopefully, preventing the negative outcomes, which would be disease-oriented, but also optimizing not just our health, but our behaviors and our performance. And, that’s where we really start thinking about what it means to be performing at full functionality. What does augmenting a human mean? To some extent, we are comfortable with certain types of permanent changes that we already make to ourselves as humans. But, as we get more data and understand more about what it really is to be operating, in real-time, at a biochemical level, in different contexts, I think we’re going to have an entirely different way of thinking about even what it means to be human.

Michael Krigsman: Pretty interesting set of questions! David, you also have a background in biology and specifically in bioterrorism. And obviously, that intersects very, very strongly with this notion of government and caring for disease rather than caring for health.

David Bray: Yes. And, I do want to caveat and say that I was against bioterrorism. So, I’m definitely not a for bioterrorism person.

Michael Krigsman: [Laughter] I guess that…

David Bray: [Laughter] With that caveat put out there, yes, my background as an undergrad was computer science and biology, and I was fascinated, in some sense, because computers are things we humans have designed almost from the top down, whereas biology, obviously, is evolution, natural selection. So, trying to understand that sort of bottom-up, convergent approach to what makes us humans, mammals, organisms, relative to machines […] a very interesting nexus.

With bioterrorism preparedness and response, when I was at the Centers for Disease Control back in 2000, one of the things you realize is that when the Constitution was written, nobody actually said who gets to oversee healthcare. Of course, that wasn't something that was on the top of their minds. And so, it falls out to be a state right as opposed to a federal right, because of the provision in the Constitution that says if a power is not explicitly given to the federal government, it is reserved to the states. And, that's why you see, at least for public health, that each of the different states has its own public health system. And, that's what we actually engage with.

And so, unlike in movies where suddenly, miraculously, CDC shows up at the scene immediately, in real life, it actually is going to be probably first detected at the local level; something odd is going on in terms of an outbreak, or something like that, by first responders at the local level that will be involved with their state system. And then, their state public health system may then reach out to the federal level, and say, “We need your assistance.”

But, what does this mean for the future of health? Well, one of the things we were doing back in 2000, 2001, and 2002 was placing an emphasis on what was called "syndromic surveillance." That was the idea that you don't need to know the identity of the specific person; in fact, at the federal level, we're not supposed to, because HIPAA protects that; we're just looking for the general trends. Are we seeing flu season starting earlier, or later? Are we seeing broad trends of increases in gastrointestinal illness? And, these raise interesting questions, because if we can spot something in the broad population, there may be ways we can address it earlier and get to what we call "Left of Boom." So, earlier, before something goes wrong.
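As a rough sketch, the aggregate trend-spotting idea behind syndromic surveillance can be illustrated in a few lines of Python. Everything here, the function name, the two-standard-deviation cutoff, and the toy counts, is an illustrative assumption, not any agency's actual system:

```python
from statistics import mean, stdev

def flag_anomalies(history, current_counts, z_threshold=2.0):
    """Flag weeks whose aggregate case counts exceed the historical
    baseline by more than z_threshold standard deviations.

    history: weekly totals from past seasons (no identities, just
             counts -- the aggregate view the speaker describes).
    current_counts: dict of week number -> count for this season.
    """
    baseline = mean(history)
    spread = stdev(history)
    flagged = []
    for week, count in sorted(current_counts.items()):
        if count > baseline + z_threshold * spread:
            flagged.append(week)  # candidate early-warning signal
    return flagged
```

A real system would also adjust for seasonality and reporting delays; the point is only that the input is population-level counts, never individual identities.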

And, what Libbie has been talking about is that we are now at the point of the convergence of the Internet of Things, the Internet of Everything, and machine learning, where what was being done at the aggregate level, without knowing the individuals themselves, can now actually be personalized and tailored to the individual. We could actually say, based on what we see through genetic history, and also by making sense of your microbiome, the bacteria that are in your body, we could tailor specific Left of Boom interventions that will make it less likely for you to have a specific type of cancer, or a specific type of heart problem, later in life.

And that raises huge questions, however, because do we really want to know all the things that might possibly go wrong in our lives? That, to me, raises ethical questions as well: how do you make sure that people are not discriminated against if it turns out that they are going to be at a higher risk when they’re fifty or sixty? Right now, they’re just twenty. How can you make sure they actually get the care they need and are not discriminated against based on what their genetics show, or what the bacteria in their body show?

Michael Krigsman: So, Libbie, maybe you can touch on the technology dimensions of this and then move into what David was talking about: kind of a combination of policy and privacy, all bundled together. Maybe you can untangle this for us?

Libbie Prescott: I will try! There’s a lot to untangle. What I think is really often overlooked, as we go to this data-driven health environment, is that the entire way we do research is going to be affected by it. Because, at present, when we learn about health, we take individuals and put them, ideally, in a placebo-controlled clinical trial of some sort, where there’s a comparison. Then, we usually go down to the average. And, as a scientist, you’re always interested in the population size of the research, because that conveys some sort of breadth and depth of the dataset.

And really, you know, we've increasingly been moving toward a world in which a bigger N, the number of individuals in your dataset, is seen as a better way of getting insights into the health of humans. What we really have the potential to move towards is the opposite: going down to the N = 1 world, where, when someone is born, or first begins to be measured, you no longer have to compare them over their lifecycle against the average, but against their own prior self. And with that, it becomes a much more powerful tool for understanding the real changes someone is experiencing over their lifetime, or over disease formation, because we will change over our lifetimes; what you really want to know is when things become problematic.
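The N = 1 idea, comparing a new reading against a person's own history instead of a population average, can be sketched roughly like this. The function name, the window size, and the tolerance cutoff are hypothetical choices for illustration, not clinical parameters:

```python
def personal_baseline_alert(measurements, new_value, window=30, tolerance=0.2):
    """N = 1 monitoring sketch: compare a new reading against this
    individual's own recent history, not a population average.

    measurements: this person's prior readings (most recent last).
    tolerance: fractional deviation from the personal baseline that
               triggers an alert (an assumed, illustrative cutoff).
    Returns (alert, personal_baseline).
    """
    recent = measurements[-window:]              # personal history only
    baseline = sum(recent) / len(recent)         # "compare against themselves"
    deviation = abs(new_value - baseline) / baseline
    return deviation > tolerance, baseline
```

The design point is that the reference value is computed from the same individual's past data, so a reading that is normal for the population but abnormal for this person still gets flagged.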

But, we’re also going to realize how different we each are as individuals, which we’ve just been averaging away in the past. And, I think that affects not only how we use that information in a healthcare context, but also in a health and fitness context, because I do think a lot of these interventions are going to shift to the consumer rather than going through the existing disease-care system, as I described it. I think we need different institutions for caring for, or optimizing, health than what our current healthcare system looks like, because there's no reason to bring someone who is otherwise healthy into a hospital just to help them optimize for health.

But also, you know, we are going to have to think about how we do our research differently, which very quickly gets to: what does it mean if I, as an individual, have my information through a device that I've purchased, through a system and potentially a proprietary dataset that a company won't even share with me? What if I want to contribute it to public health? What if I want to say, "I want others to learn from my experience," whether it be genetic, or biochemical, or whatever measures I have taken, potentially on my own? We need people to be able to measure, to be comfortable with what is done with those measurements, and then also to be able to learn from them through these aggregations; but in a way that people are comfortable with, where they feel their privacy is protected, and where we can draw conclusions about how to move forward, either for that individual, or as a society, on health and treatment options.

Michael Krigsman: David, you’re actively nodding your head there.

David Bray: Yes, so if I could just give an enthusiastic “yes” to what Libbie said. Think about it, and this gets a little bit to the people-centered internet dimension that I’ll be wearing a hat on in a future life. Imagine the data that you produce were analogous to an apple tree; an apple tree that you spent time cultivating and were collecting from, in this case, […] that you mentioned. Maybe you had different devices, maybe different software, but it was your data; it was your apples from the tree you were growing. We wouldn’t expect, in real life, that anyone could show up and take those apples and walk off with them without talking to you, or build a fence around the tree and say, “No, you can’t move these apples. These are no longer your apples […]”. But, that is seemingly a trend in the digital space, in which people are not really having the opportunity to weigh in on what they want done with the data they produce.

Maybe, we […] that, and we read a thirty-page Terms and Conditions; maybe we read it all the way through or not; and we're getting that app for free in exchange for our data; and maybe, if you want the service, you either accept it or not. But, it's not really giving a locus of choice back to the individual. And, I think that is going to be so essential, because there are going to be some people who say, "I don't want to share my data because I'm worried about what will be done with it," or, "I want to have confidence that it will be kept anonymous and private," and that's a valid concern to raise. But, there may also be people who say, "Because of community-based issues, and because maybe I'm dealing with some disease that is chronic in nature, I would like to anonymously contribute my data to the research pool so that, proactively, we can begin to find more tailored approaches to treatment as a result of sharing it." And, I think that's only going to happen when people have trust in what's being done with their data. And so, I think that is a very key thing to ask.

The other thing I think Libbie was touching on is that more and more of this is going to come from the consumer. I mean, we’ve already seen in general IT trends that the consumer space is, in some respects, now influencing what happens in enterprise technology. We shouldn’t be surprised if the same thing starts happening in the bio and healthcare spaces: consumer trends will start influencing the insurance and enterprise sides as well. And, what do I mean by that? Well, there are companies already using machine learning to identify who could be in a clinical trial automatically. Trying to do this manually can take between three and nine months. A machine can actually look at your data, if you choose to share it, and say, “You would be perfect for this therapeutic drug treatment; would you like to do it or not?” And, that’s being done in near-real time, as opposed to with delays of six to nine months.

Similarly, you’ve [actually] seen instances where people are trying to see if they can use your smartphone to take either an image, or some other type of recording, that could at least give an initial diagnosis: “Do I need to take my child in to see the doctor, or not? This rash, is it innocuous, or do I actually need to have a conversation about it?” Or, if maybe I’m a diabetic, or I have high blood pressure, can I take a photo of my eye, and could that inform how my treatment’s doing, in a non-invasive way?

Now, [it] raises huge questions about how you’re going to certify that these things have integrity and are trustworthy, which gets into the Food and Drug Administration, which I can’t possibly dive into or comment on. But this shows the Art of the Possible, and things we need to solve in the very short-term, as opposed to waiting five years from now.

Michael Krigsman: So, Libbie, is the challenge here a set of technology obstacles, or policy obstacles relating to privacy and some of the other issues that David was just describing? And, where do the technology and the policy intersect, overlap, and diverge and have conflict?

Libbie Prescott: In many areas of emerging technology, policy is often an impediment to scale; to implementing in an environment beyond just a pilot, or onesie-twosie, or something in a lab. But nowhere is that more true than in health. Since 2000, I’ve done some consulting work with the National Health Service in the United Kingdom about rolling out capabilities within their healthcare system to use what we already know, and have known for quite a while, about the speed at which an individual processes medication, in order to prescribe medication more effectively.

So, every individual, based on certain genetic changes, will usually fall into a fast, medium, or slow category for processing lots of known medications. If we had even that degree of granularity, we could more accurately prescribe for individuals: either a larger amount than average, or a smaller amount than average, or just the average, based on where they fall.
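That fast/medium/slow adjustment can be sketched as a simple lookup. The dose multipliers below are made-up, illustrative values; real pharmacogenomic dosing guidance comes from clinical bodies, not from a sketch like this:

```python
# Illustrative, assumed multipliers only -- not clinical guidance.
# Fast metabolizers clear a drug quickly (may need more); slow
# metabolizers clear it slowly (may need less).
DOSE_FACTORS = {"fast": 1.25, "medium": 1.0, "slow": 0.75}

def adjusted_dose(standard_dose_mg, metabolizer_type):
    """Scale the average, population-based dose by the patient's
    metabolizer category, as described above."""
    try:
        factor = DOSE_FACTORS[metabolizer_type]
    except KeyError:
        raise ValueError(f"unknown metabolizer type: {metabolizer_type}")
    return standard_dose_mg * factor
```

The structure, a per-category adjustment applied to the population average, is the whole point: one extra piece of genetic granularity turns a single average prescription into three tailored ones.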

But, there was no way to actually do the individual assessment of where someone fell on that curve, because of a lack of comfort on the privacy side about who gets access to the genetic information that says which category you fall into. And, it largely just didn’t end up getting to the clinic; it’s still definitely under-utilized within the clinical setting, and potentially largely unutilized.

Various consumer-driven genetic tests have actually done quite a bit to advance that. A platform like 23andMe, and particularly its pre-2009 interface, actually gave consumers that input, telling them whether they were fast, medium, or slow metabolizers, which is the term for how you process drugs. If we could find a way to create the policy comfort for individuals, and for the medical practitioners who don’t really know how to handle data with this severity of implications, that would greatly facilitate moving it into clinical practice.

I think here, it’s even more imperative, because it’s not a question of whether consumers will get products with data-driven health technology; they’re going to be using them. It’s a question of how far down the line we get before we discover there’s a really big policy gap that we should have started building for a decade ago, such that either we have to create a very suboptimal workaround, or we just can’t do the things we should be able to do with the information. And, I do believe there are a lot of technologies we can think about using to enable the consumer, or the owner, or the generator, or the source of the data, whatever term you want to apply, to be comfortable that the data is theirs, and that they can share it and, hopefully, un-share it if they lose comfort, whether with a platform or with a pool of information.

Right now, we really don't have those technological capabilities at scale. Individuals can anonymously donate their data, but a really big hurdle is that if something really interesting comes out of that data, the researchers often cannot go back and get additional information from the participant, the anonymized individual, because of ethical and some legal restrictions. So, we really need to figure out how to have fluidity of information going back and forth: if someone is uncomfortable doing any of it, they have a legitimate way of opting out; but those of us who might want to share do have a way of sharing, without having to go to herculean efforts or hand over all of our rights because it’s a simple “yes” or “no.”

Michael Krigsman: And, we [actually] have a couple of comments and questions from Twitter. And, I want to combine these two because I think that they’re related. So, first, Arsalan Khan makes the comment, “What about data harmonization across systems and the difficulty of moving data, and of vendors being unwilling to open their data?” And then, Sal Rasa is asking about, “What about getting the voices of patients’ families and caregivers factored into that data stream?” So, questions about the composition of the data and the movement and ability to share that data, and the willingness of system providers, the vendors, to even make their systems open enough to enable the kind of sharing that Libbie was talking about. So, David, any thoughts on that?

David Bray: [Laughter] All right! Great questions, and I think this gets to why one still needs a role for public service, or for government agencies: to make sure that vendors don’t use proprietary standards as a way of fencing their customers off from being able to work with other companies and have choice. Now, that said, I can recall back in 2000-2001, Health Level 7 (HL7) was being talked about. LOINC was another type of standard. It gets very challenging to try to define a very detailed ontology for almost everything you want to describe in the healthcare space – […] public health. I mean, public health is just about everything in our universe, and I’ve not seen a good ontology for describing everything in our universe at a detailed level.

However, I think there needs to be a general movement to try to solve the data integration issue, which could include a community-based approach to make sure it's not just what the individual presents in a conversation with the doctors. You're right: if something is a stress symptom, an emotional issue, a mental issue, you do want other family members' perspectives brought into the person's care, because it may look like a pulmonary issue or a cardiovascular issue when, in fact, it's stress-related and just presenting itself that way.

So, how do we go from here? Because this sounds very daunting, and it is very daunting. I’m going to give three quick things. The first is we do need to have that data integration, that data conversation, as Libbie mentioned: how can we make sure you have choice in what data you want to share, with whom, and for how long? And, for that, I’ll give a brief shout-out to Philip at the University of Texas, who is doing very interesting things with sidechains and student records. In a later conversation, we can talk about how that might similarly be applied to give people choice about what they share in healthcare.

The second thing is we also need some integrity boards that look at what’s being done: to make sure there are no monopolistic practices, and that there is integrity, both in the devices people are going to use, so vendors aren't just claiming they can detect something when, in fact, they're only accurate half the time, and also in the data and how it's being used.

And then, finally, the third thing we need to do, as Libbie mentioned, is rethink how we do healthcare and health prevention/disease prevention, not just treatment. And so, that's probably going to require a conversation across the private sector, non-profits, academia, as well as government, to re-envision what's next, and then figure out how we get from the as-is to the […] with as little friction as possible.

Michael Krigsman: We had on this show David Edelman, who, right now, is the Chief Marketing Officer at Aetna. And, it’s been reported that Aetna is negotiating with Apple to supply its members with Apple Watches, in order to take measurements and distribute healthcare responsibility back to patients, the insured. And so, Libbie, what about that? That sits, again, right on this intersection of policy and technology.

Libbie Prescott: Yeah, I think it’s a great example because it also raises a couple of other issues about what it means to volunteer, right? In order to consent to something, there has to be a credible option not to agree. And, in that example, one has to ask: what are the ways in which an employee can credibly say “no?” This gets even more important when we start thinking, in a federal government or national security context, about our military personnel. It’s an incredibly rich and dedicated community of individuals who have volunteered, often, to die for their country, and who are, in many cases, standing up in the hundreds of thousands, if not millions, to volunteer their genetic information. The Veterans’ Administration has a program to allow them, either while they’re in service or after they’ve separated, to say, “I want to continue to serve my country, and I want to volunteer to be a part of learning more about the human organism.”

But, when they're on active duty, they may or may not be able to say "no." And even if there's a box they have to tick to say "yes," the concept of consent within an employee/employer relationship is something we need to be very mindful may not be fully informed, or really voluntary. So, we do need to be mindful of how we build these systems, so they don't end up undermining the trust that individuals have, not only in the data itself, but in the organizations that then use that data.

I think there are lots of examples of different companies incentivizing the use of wearables, like the Apple Watch, or other types of data measurement; or even something as simple as tracking when you go to yoga or the gym, and letting that earn you discounts on different types of health-oriented services. All of those, I think, are innovative ways of trying to align incentives, but we do need to make sure we’re thinking about whether they actually give the full protections that society is going to need for those who may, or may not, feel comfortable participating in those systems.

Michael Krigsman: David Bray, what about the ethics of all of this? Because, I think that’s where, you know, the questions Libbie was just raising cut right to the heart of some of the ethical challenges and the competing goals and competing interests. So, how do we manage that?

David Bray: Right. Well, I think Libbie is spot-on that it’s all about choice, and we need to make sure there is the opportunity to make an informed choice. You’d expect that organizations should probably offer a choice architecture. So, it’s not just a Boolean “yes” or “no,” but it says if you choose to give no data, that’s okay. If you choose to give some data for a limited period of time, here are maybe the incentives we’re offering, but we’re making sure you understand the trade-off that is being made. Maybe you choose to give more data for a longer period of time as well. But try to get more granular and make sure that, again, the nexus and the locus is on the individual and their family having choices as opposed to, “Yes, here, you’re going to be buried underneath forty pages of legal paperwork; you’re probably not going to read it; but, if you don’t sign, you’re not going to get the care you want.” That’s not an informed choice. That’s not empowering the individual.

The other thing that we also have to think about is, you know, before we're born into this world, none of us know if we're going to be a healthy individual or not. We don't know if we're going to be in a nice setting in which we have access to good healthcare or not. And so, I think we have to think about that context, which is: what do you do for the people that, when they are born, have circumstances beyond their control; they have something that was either a genetic anomaly, or they were born into a less-affluent area that has less access to healthcare? What would be fair? And, that's actually what Rawls talks about as a philosopher. He says that "In any situation, you're going to have people that are worse off than others. And, trying to get complete fairness, that may never happen. But, what situation do you want such that those who are worse off are the best-off out of all possible scenarios?" And, I think something that we can strive for is making sure we don't have a very significant drop-off for those that, you know, through no fault of their own, have a genetic anomaly at birth or later on in life, or were born into circumstances where they did not have access to good healthcare. Let's make sure we think about them and recognize that, yes, it is a free market. It is a system in which you can make choices, but also think about…

You know, I mean, as I just adopted a newborn baby boy, I don’t know his entire genetic history. I don’t know his family history. So, obviously, I need to think about, in that context, what would be a situation which both allows people to have choice, allows the free-market dynamics to work themselves out, but also addresses the fact that none of us know before we’re born how healthy we’re going to be in life.

Libbie Prescott: And, I’d also add to that, I think, that example, in particular, also highlights the fact that any individual’s ownership of their own, particularly, their genetic information is not theirs alone. And so, when someone talks about privacy, and their desire for privacy, particularly around genetic information, there are many other generations of their family that are affected by that. And so, when someone chooses to give over their information, it’s not entirely their choice to make if others don’t want it. And that makes it much more complex when we think about creating privacy within, particularly, the genetic space. I think, often, people think of it as being theirs and theirs alone, and I think we just need to think broader and acknowledge that it’s much more complex.

Michael Krigsman: Yeah. That’s a very, very interesting example: genetic information. We have a really good question from Twitter. Gus Bekdash is asking, “So, technology and AI: At what point does the tech move exponentially faster than the rate at which society can safely absorb that technology?” And, maybe, that’s the issue we’re talking about. But, I think it’s a very compelling question.

David Bray: We may be there already, to be honest.

Libbie Prescott: Yeah, I’d say we’ve been there for decades. I mean, I think policy is always behind technology, particularly emerging technologies, because, you know, you can’t make policies about everything, and you can’t regulate things that have not yet emerged. And so, you have to have guidelines for how to diagnose when you’re at a point where there is a need to put in place either protections, or potentially, efforts to more rapidly advance fields. And, I think AI is in that place where, depending on how you look at it, there’s a lot of amazing things that we can do with data-driven decision making and sort of outsourcing to algorithms and other…

The things that maybe humans aren't very good at, based on our skill sets and our cognitive biases… But, we're never going to have a point… And, I would argue, we have never had a point at which policy has been up to date with technology. And then the question is: are the technologies that are emerging now fundamentally different in ways that we need to think about our regulatory tools totally differently? And, really go back to what it means to create a regulatory, or a governance, context around emerging technology, because potentially, they are just very different from the technologies that have emerged in the past.

David Bray: And, to build on what Libbie is saying, I think what also makes things interesting is the internet itself is global in nature. Now, we still have an obligation less than […], you know. Only about 45% of the planet is connected. So, we still have to ask, as we talk about this, what do we do both for the nation, but also for the world, to bring it online, because it would be unfair for us to make massive advances in delivering healthcare/health prevention online, but only for those that were connected.

But then, two, in the past, regulations were defined by geography, whether it be what the state decides, or what the nation decides. That's going to be very difficult to do in this new era because you can't define yourself by geography. And so, the question is, how do we even do the process of making sure there is both informed consent and there are protections, when people are citizens of the world and these devices are going to cross borders? And, that… This is what I call "terra incognita." It's cyber terra incognita. It's unprecedented. And, that's why, I think, when I mentioned those three things we need to do, we need to, as quickly as possible, have conversations across borders, across organizations, as to: if we were to reconceptualize how we would do healthcare, prevention of diseases, health, longevity… how do we do that going forward in almost like a micro-services, modular fashion? Just like with IT, what we've seen with cloud services, where we can interchange things in and out, these are going to be services that are going to be delivered in devices, they're going to be delivered […]. But, that's a totally new framework, and I don't think top-down is going to succeed, because this digital era we're in is unprecedented.

Michael Krigsman: Libbie, we have… You know, let’s see. We have seven minutes left, and it’s, I guess, your opportunity to solve this problem for us!

Libbie Prescott: [Laughter] I think it’s going to take a lot more engagement than just one person to solve those. I will say, you know, I am… I think the scientific community tends to be perennially optimistic and I would like to put myself in that category. But, as someone who has been on the government side and in the national security community, you know, there are concerns we need to think about. But what I would encourage people to do is make sure not to lose the benefits just because things are moving very quickly and potentially going into uncharted territory.

I do think, and it’s something when I’m teaching, I often have to compel my students to really think about… When we think about ethics and we think about using technologies, we do have to think about the potential costs or the risks to an individual or to society of using a technology. But, we really also need to think about: what are the costs of choosing not to use it? And, this is particularly relevant when we talk about national security and protecting our warfighters. If there are interventions that we can do, but we choose not to do them, you know, it raises entirely different ethical questions. We don’t necessarily default to action, then, although as humans, and definitely as Americans, we often do default to action. But we need to at least acknowledge that not using a technology, and not applying it in a certain context, also can have an ethical cost to individuals.

And, nowhere is that more true than in the healthcare space. So, it’s a conversation that’s, I think, just beginning. I think there just need to be a lot of different voices involved, and not only the caregivers and those that are ill. But also, you know, technology and governance are often done by a few for the many. And, there does need to be those that can stand and facilitate and really, you know, agree to put the whole above the individual, which is… As a government… […] a public servant, that is what you are actually agreeing to do. You know, you have to put the collective above your own. We need people to do that, but we also need those voices from everyone else to have a thoughtful contribution to the conversation.

But, recognize that at the end, a decision will have to be made and there will have to be a path chosen that may or may not be something that was your first choice, but hopefully, however it does play out, people will then say, "Okay, well how can I best engage in this to make it the best outcome that I can?"

Michael Krigsman: David, we have a… On Facebook, there's a listener, Lydia Segal, who happens to be one of my oldest friends and is an ethics professor here in Boston at one of the universities. And she's saying, "Absolutely fascinating!" And so, David, this ethical question; if you were to prioritize the ethics, the technology, the policy, what is the… Where's the obstacle, and where is the enabler of moving forward with these technologies in healthcare? But, it's probably also similar in other fields as well.

David Bray: Yes. So, I think you can look across anything that the United States tries to do that spans sectors. We are great when we can get alignment between what the public sector is looking for, what the private sector is looking for, what the individuals are looking for, what non-profits and academia are looking for. The obstacle is there's not currently a narrative, a set of shared goals, that brings those sectors into alignment. You know, the private sector is thinking about healthcare as to, well, how do I bring in money? It's a free-market system. The public sector is thinking about what to provide for their constituents. The individual is thinking about their own individual health, which they should be… And so, you have different things that, right now, have not been brought together in a narrative that creates the umbrella where they can come together.

As to how to address this, two brief things: empathy and community-based approaches. I am increasingly concerned about the internet. We had the supposition that the internet would lead to more transparency in our lives, which leads to better truth and better understanding. And what we discovered, unfortunately, is that more information on the internet actually leads to selection of different pieces of information that fit our cognitive biases, whatever they might be, and actually fragmentation as opposed to bringing us together. And so, we need empathy from the private sector. We need empathy from the public sector, as to what are the day-to-day stresses of individuals? Empathy from the public themselves, recognizing these are really hard problems… You know, if it was easy to figure out, it would have been done already.

And then, finally, community-based approaches: How can we actually think about this in terms of the center being the individual or the family, and, as Libbie actually pointed out, not just the individual's choice, but what their other family members choose as well, because that will impact them? And then, what role can the private sector play to help empower them; the public sector, non-profits, academia? I feel like, right now, when it comes to healthcare, we’re lacking that narrative that brings us together. We’re lacking that empathy, and we’re lacking that focus on community-based approaches.

Michael Krigsman: Libbie, in the last… I love that. I love the idea of empathy-based thinking relating to technology, to policy, to privacy, to the technology itself. Libbie, in the last minute, where is all of this going? As a biologist, and as a scientist, where do you see this going in the next few years? So, please, share with us a glimpse of the future. And, you have about a minute to do this.

Libbie Prescott: I think we’re going to see a lot of new capabilities brought online at small scale. So, we have already had a lot of amazing neuro-based prosthetics, a lot of ability to measure in real time, at a biochemical level, for an individual. A lot of these capabilities exist at small scale, and the real question is to what degree we can take them to something that allows either consumers or medical practitioners to be able to use them more effectively.

I do think we’ll get there. I think it probably won’t necessarily be a smooth transition, because I think what’s probably going to happen is that consumer products are going to emerge in otherwise unregulated spaces. And, we’re going to have to then think about them after the fact, and I think 23andMe is a great example of that, where it really got ahead, delivering a consumer-based product in a space that has, historically, been very regulated, and everyone in healthcare had just sort of assumed they were regulated and therefore acted accordingly. And, I think we’re not going to see that going forward. I think we’re going to see individuals who are either appropriately defining themselves as outside of regulation and moving forward, or potentially, finding loopholes and delivering things to society, potentially at scale, and then we’re going to have to grapple with what we do with that.

I think many of them will have the potential to provide incredible benefit to a lot of people, and I think the question, then, is how society reacts. Do we react to these shocks as they come along? And I hope we don’t just say, “Well, this is too risky, we can’t do it.” I hope it then gets to more of this community-centered dialogue that David was talking about, where we can really say, “Okay, well let’s make sure that we’re not… Let’s make sure we’re giving the benefits where we can safely give them, and effectively give them, but also protecting those that otherwise wouldn’t have a voice.”

Michael Krigsman: And David, it looks like we have about thirty seconds left. Actually, we’re over time, so you’re going to get the last word. Where’s the future going? Define the future in a tweet, please.

David Bray: “Define the future in a tweet?” The net is fragmenting us. We must have empathy to bring us back together.

Michael Krigsman: I love it! “The net is fragmenting us. We must have empathy to bring us back together.”

Wow! What a great discussion! I want to thank our guests today. David Bray is the incoming Executive Director, is that the right term, David?

David Bray: Yes!

Michael Krigsman: Of People-Centered Internet. And, Libbie Prescott is a "recovering scientist" who has worked in the government and is now a professor at Georgetown.

Thank you both for taking the time!

Libbie Prescott: Thank you! This was a lot of fun!

Michael Krigsman: And, everybody, next week, there’s no show because it’s Labor Day, but come back in two weeks and we have more amazing CxOTalks. Thanks, everybody, have a great day and goodbye!

Author: Michael Krigsman

Episode ID: 467