AI Explainer: US Presidential Executive Order on Responsible AI

Explore the presidential Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence on CXOTalk episode 815. Learn about responsible AI.

In this important CXOTalk episode (number 815), we explore a topic at the forefront of technology advancement and governance: the Presidential Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Signed by President Biden on October 30, 2023, this directive stands as a landmark in the evolving landscape of AI, setting a precedent for future development, usage, and regulation.

Our guest is Dr. David Bray, previously CIO of the Federal Communications Commission for four years, IT Chief for the Bioterrorism Preparedness and Response Program, Senior National Intelligence Service Executive, and an expert on AI governance and ethics.

In this episode, we examine the presidential order to:

  • Explain why it's a crucial step for the future of AI.
  • Unravel its significant impact on the business world, examining the immediate and long-term implications for industries navigating this new regulatory landscape.
  • Consider what the order did not cover and explain what is missing.

As AI continues to redefine the boundaries of innovation and efficiency, understanding the framework set by this Executive Order is imperative for business leaders. This episode provides insightful perspectives, clarifying how businesses can adapt, comply, and excel in this new era of AI.

Watch this discussion to bridge the gap between technological evolution and business strategy, with valuable insights for executives and decision-makers in the rapidly changing world of AI.

Dr. David A. Bray is both a Distinguished Fellow and co-chair of the Alfred Lee Loomis Innovation Council at the non-partisan Henry L. Stimson Center. He is also a non-resident Distinguished Fellow with the Business Executives for National Security, and a CEO and transformation leader for different “under the radar” tech and data ventures seeking to get started in novel situations. He is Principal at LeadDoAdapt Ventures and has served in a variety of leadership roles in turbulent environments, including bioterrorism preparedness and response from 2000-2005. Dr. Bray previously was the Executive Director for a bipartisan National Commission on R&D, provided non-partisan leadership as a Senior Executive and CIO at the FCC for four years, worked with the U.S. Navy and Marines on improving organizational adaptability, and aided U.S. Special Operations Command’s J5 Directorate on the challenges of countering disinformation online. He has received both the Joint Civilian Service Commendation Award and the National Intelligence Exceptional Achievement Medal. David accepted a leadership role in December 2019 to direct the successful bipartisan Commission on the Geopolitical Impacts of New Technologies and Data that included Senator Mark Warner, Senator Rob Portman, Rep. Suzan DelBene, and Rep. Michael McCaul. From 2017 to the start of 2020, David also served as Executive Director for the People-Centered Internet coalition chaired by Internet co-originator Vint Cerf and was named a Senior Fellow with the Institute for Human-Machine Cognition starting in 2018. Business Insider named him one of the top “24 Americans Who Are Changing the World” under 40, and he was named a Young Global Leader by the World Economic Forum. He has served in President, CEO, Chief Strategy Officer, and Strategic Advisor roles for twelve different startups.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Today on Episode 815 of CXOTalk, we're discussing the executive order from the White House on artificial intelligence and responsible AI. Our guest is David Bray. He's a distinguished fellow at the Stimson Center, a Washington-based think tank. Give us a sense of why you're qualified to talk about this. 

Dr. David A. Bray: I was crazy enough to be recruited by the government when I was 15 because I was good with computers. They had me working initially building computer simulations at a physics facility. Later, I started working on some classified DoD satellites involving some interesting computer technologies. At the time, those were expert systems, which was a version of AI (decision support systems).

Fast forward, I later worked in bioterrorism preparedness and response. I was part of the response to 9/11 and the anthrax events. There, too, we were using AI. It wasn't the version of generative AI we have now, but it was a flavor of AI. 

I played a role working with the intelligence community where we reviewed all the research and development programs of the U.S. intelligence community. Obviously, AI was a part of that back in 2012, 2013. 

During my four years at the Federal Communications Commission, we didn't do a lot of AI because the work was mainly upgrading legacy systems, but we did successfully lead the way in moving to the cloud, which at the time was not something many government agencies had considered. So that was an operational role. 

Then with Vint Cerf and the People-Centered Internet Coalition (back in 2017 to 2020), I looked at how we could make the Internet more people-centered, but also thinking about AI.

I also played a role in 2020 and 2021, right before Stimson, working not just with the U.S. but with our Canadian, UK, Australian, and New Zealand allies, as well as other partner countries like Japan, Germany, and India.

Understanding executive orders and their impact

Michael Krigsman: When we talk about an executive order, very briefly, what is that (just to give us some context)? 

Dr. David A. Bray: Presidents have the ability to issue executive orders, which means they are taking the authorities they already have as president through the Constitution and any other laws passed by Congress, and issuing directions to the departments and agencies to move forward. 

They can't necessarily create anything if they don't have the authority to do that. But if they already have the authority as the president through the Constitution and through laws that were passed, they're now giving guidance as essentially the Commander in Chief of the Executive Branch as to how departments and agencies should move forward.

The executive order then ideally can last past the Administration unless it is updated or rescinded by future administrations. 

Overview of the executive order on responsible and trustworthy AI

Michael Krigsman: In this case, we're talking about this executive order on responsible AI. Give us just a high-level overview of what it actually is.

Dr. David A. Bray: It's been building. It's a long time coming. It was about a year in the making, and there were different draft versions going around. 

But even before then, you saw at the very end of the President Obama Administration, there were actions on AI that were done through the Office of Science and Technology Policy, which is part of the Executive Office of the President. Then underneath the Trump Administration, you also saw some actions that were coming out. Again, mainly through the Office of Science and Technology Policy.

This is now at the Executive Office of the President level, so the President himself is putting forward the idea of how we move forward consistently, both within government and with the business sector. And so, briefly, what it calls for is, first, thinking about new standards for AI safety and security. That's really the heart of what this executive order is looking at.

It is kind of unprecedented in terms of length. It's more than 110 pages long (depending on the font and formatting). That's very long for an executive order, so I'm going to briefly highlight it. We can always dive deeper on the specific issues.

As it calls out new standards for AI safety and security, it's requiring that developers of powerful AI systems share their test results before they release their product to the market.

It's also thinking about tools, tests, and standards because, right now, we do not have a good sense of what are the standards for demonstrating that AI systems are secure. Michael, you and I can talk a little bit about how that might be challenging with the design choices in generative AI at the moment.

Also then it's calling out a need to ensure that AI does not enable people to do bad things with biology. Again, from my bioterrorism background days, it's something I'm intimately aware of. 

Then, ultimately, it recognizes that Americans need to be protected from AI-enabled fraud and deception. Generative AI, unfortunately, is going to create a whole lot of fraud and disinformation. I know we've had conversations on past CXOTalk episodes about that.

Then thinking about cybersecurity because, again, how do you understand the vulnerabilities of an AI system and how it might be either poisoned through data or other cybersecurity exploits?

Finally, it says there's going to be additional follow-up that you can expect to see as a national security memo, charging what's called the National Security Staff with what also needs to be done on AI and security (thinking more geopolitically).

There are some additional things underneath that, but I'll pause here because I realize that was a lot to just give upfront in terms of what the executive order calls out.

Michael Krigsman: David, this executive order is so far-reaching. If you look at the language, it's very specific in a lot of different areas. It's very specific in terms of calling out what government agencies, for example, must do. There are wide-ranging implications. Why now? Why did they release it at this particular time?

Dr. David A. Bray: We just had the one-year mark of ChatGPT being released, and so I think that ignited the public's focus and attention and zeitgeist in ways that were not expected. And so, in some respects, there was a huge interest and question.

They had previously done work on what was called the AI Bill of Rights. And so, this is now sort of codifying what was a voluntary bill of rights.

That Bill of Rights was at no point enshrined in actual law. It was voluntary. 

This is now the President using the authority they have as a presidential administration (short of Congress passing anything) to put a marker in the sand as to what needs to be done. That's why, to your question, why do we see government agencies being called out? Well, because the President can direct government agencies.

As for the business sector, the President cannot tell the business sector obviously what to do. That's part of what makes the United States an open society.

However, the President can ask for government agencies to put in place either standards or ways of interacting with the business sector, and that's how it's going to have impacts on the business sector through what the government agencies are putting in place, whether it's with standards, whether it's with regulations, or things like that (if they have the current authority to do so). In some cases, we may have to wait for Congress to pass that authority so then the government agencies can do specific things.

Generative AI in cybersecurity and detecting / preventing fraud

Michael Krigsman: We have a question on Twitter from our good friend Anthony Scriffignano, who is one of the most prominent data scientists in the world. Anthony says, "You mentioned fraud. Can you talk a little about how this executive order addresses novel fraud, meaning fraud that has never been seen before? This kind of fraud cannot be simply modeled or learned through generative AI."

Dr. David A. Bray: What generative AI is really good at doing is creating things that look like data it's been fed. I don't want to be an alarmist. That's not what I seek to do. But I have seen that (for less than ten minutes of compute time with some additional plugins and using what's called WormGPT, which is supposedly the dark side cousin of ChatGPT) you can create about a million realistic-looking records of healthcare claims that are at $200 or less, which is below the fraud threshold.

This is novel in the sense that it's going to look realistic. It's going to be below the cost that you would normally spend to try to authenticate whether it's real or not. And if you did a million of those (let's say healthcare claims in the middle of December for upper respiratory infection) and they were submitted, most people would just think that's a blip in terms of what's happening naturally.
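To make that "blip" concrete, here is a minimal sketch, not from the episode, of the kind of volume-anomaly check an insurer might run; the data, field names, and thresholds are all hypothetical:

```python
from statistics import mean, stdev

def flag_claim_surge(daily_counts: list[int], today_count: int,
                     z_threshold: float = 3.0) -> bool:
    """Flag today's volume of small claims if it sits far outside the baseline.

    daily_counts: hypothetical historical daily counts of sub-$200 claims.
    """
    baseline_mean = mean(daily_counts)
    baseline_std = stdev(daily_counts)
    if baseline_std == 0:
        return today_count > baseline_mean
    z_score = (today_count - baseline_mean) / baseline_std
    return z_score > z_threshold

# A normal December baseline, then a generated flood of small claims.
history = [980, 1015, 1002, 995, 1030, 1008, 990]
print(flag_claim_surge(history, 1012))       # False: looks like a natural blip
print(flag_claim_surge(history, 1_000_000))  # True: the surge stands out
```

The catch, and this is the heart of Anthony's question, is that an attacker who spreads generated records across many payers and many weeks makes each individual stream look normal, which is why simple volume checks alone cannot catch novel fraud.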

The executive order calls out that the Department of Commerce is going to be putting a lot of thought into this. It calls for content authentication. It also mentions watermarking, and I'm not sure watermarking by itself will solve it. I think that's probably implicit in what Anthony is saying.

And it recognizes that the sheer ability this technology gives us to produce realistic, human-looking content (whether photos, text, or the like) will also, unfortunately, empower those who use it for bad purposes, including fraud and deception.
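As a rough illustration of what "content authentication" can mean in practice (a sketch under assumed key handling, not anything the order specifies), a publisher can sign content at creation so that later tampering is detectable:

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the content creator

def sign_content(content: bytes) -> str:
    """Produce a provenance tag the publisher attaches to the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches the publisher's original tag."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"Official statement released October 30, 2023."
tag = sign_content(original)
print(verify_content(original, tag))                # True: untampered
print(verify_content(b"Doctored statement.", tag))  # False: content was altered
```

Note the limit, which echoes Bray's doubt about watermarking: a signature proves a keyholder published this exact content, but it does nothing to stop a bad actor from generating unsigned, realistic-looking fakes.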

Separate from this executive order, about 3.5, 4 months ago – and I know we're friends with Lord Tim, Michael – I was invited to talk to the UK government, which is thinking about what if, by 2030, it'll be near impossible to separate what's authentic versus inauthentic in the public space. 

And so, again, this is getting ahead of that coming wave. There already is a lot of disinformation and fraud online. This is only going to unfortunately multiply it, and we need to get ahead of it now. 

Michael Krigsman: Please subscribe to the CXOTalk newsletter to stay up to date. And subscribe to our YouTube channel. We have incredible shows coming up. 

Implications of the executive order on AI policy, AI in the public sector, and business strategy

Let's dive into the content of this executive order, and then we'll discuss the implications for us as individual citizens, for businesses, and so forth. I think the best place to start is, can you give us an overview of what this is and where do the tentacles reach? 

Dr. David A. Bray: Recognizing I already broke out the highlights around standards for AI safety and security (and that's really what this is focusing on), the order then reaches into the parts of government that have touchpoints with the business sector, ensuring that their products and services are safe and secure. 

There's another section that goes from there that talks about the privacy of those of us in the United States. Privacy is really hard to discern at the moment because, in some respects, the way businesses have chosen to launch generative AI, you have very little information about the data that was used to train the machine, and very little information about the parameters and the actual tweaking of the machine's algorithm. 

That's something that's being called out that's kind of vague, but at least it's showing that the government is going to take this seriously. They are strengthening privacy-preserving research, evaluating how the government collects and uses commercially available information to make sure privacy is protected, and also trying to figure out guidelines for effectiveness. 

I don't think you're going to see that right away because it's a hard problem. And we may run up against the fact that the current design choices we have with generative AI make it really hard to demonstrate you've protected privacy, but it shows that the government is very intent on solving that.

Another touchpoint it has is equity and civil rights. This again builds on the aspirational AI Bill of Rights that came out earlier. It calls out that clear guidance will be issued by government agencies to landlords, to federal benefits programs, and to people who contract with the government: you have to demonstrate that your AI is not doing algorithmic discrimination. 

They've also called out the FBI and Department of Justice to think about fairness in the criminal justice system, to make sure AI is not resulting in unfair treatment within the justice system.

Then they're thinking about it (as I talk about it) in terms of consumers, patients, and students. This is a little bit more vague, but it's saying that there needs to be responsible use. In fact, the healthcare part still needs a lot more definition. That's something that's absent here. 

Then also, how can AI transform education? It also needs to be spelled out a little bit more. I think that's going to be something that's going to be a very interesting space to watch for the next year and a half.

Then thinking about workers because we know there is the possibility that people might be displaced from their jobs. How can we mitigate the harms and maximize the benefits? Then also, getting ahead of the curve for what the disruption is going to have. 

Again, recognizing it's a big executive order, I'm going to give you three more parts of it.

  • Thinking about how we can do research, and so this is calling for a National AI Research Resource. That's something that businesses can tap into.
  • Promoting a competitive, fair, and open AI ecosystem. 
  • Then also, how do we hire for talent?

There's also a callout that involves the Department of State and thinking internationally about what are the relationships we have with different countries. That should also support international commerce and how we collaborate.

Finally, saying that there needs to be guidance for the AI use within the government. And so, you already see the Office of Management and Budget preparing that implementation guidance and asking for feedback and comment. Then also ensuring that if government agencies use AI, they have spelled out how they're using it. And they can hire the talent to make it happen.

That's why you see more than 110 pages because that's a lot in one executive order.

Implementing responsible and trustworthy AI government policy

Michael Krigsman: Is there a set of unifying goals or principles or glue that kind of binds the whole thing together?

Dr. David A. Bray: I would submit that we're now in a different world. In the past, between the time government issued policy and something happened, it took about three to five years before you started to see impact. Then it would have impact over the next decade.

I think now with the rate of technology change (and as I tell people), the good news is we're democratizing technology. The challenging news is we're democratizing technology. So, that means faster decision cycles, faster impact cycles.

I think you're now seeing it's between three to six months between the government making a policy decision and it starting to have impacts on the marketplace, on businesses, on the nation. Then it probably has a long tail of between 18 to 36 months before you're going to have to do something else, which means this is a very impactful policy with a time horizon of six months to three years.

We've got to learn by doing because that's just so fast. What I think I would really like to see as a next step to build on this is, quite frankly, I think it would be wonderful if we could call out three demonstration projects that would involve the public, that would involve businesses, that would involve the private sector, nonprofits, and universities around AI. 

Just pick three and say, "We're going to take everything that's in this 110+ page document, but we're going to bring people around it and learn by doing and be focused around three issues." 

  • Maybe one would involve how AI could help improve delivery of healthcare, or how it could improve delivery of services to citizens because that's key.
  • Another one might be thinking about how it could improve access to education or making education more accessible – something like that.
  • A third one, maybe it's more in the national security domain.

We know, unfortunately, we live in an open world in which some of our service women and men may have vulnerabilities where their data is online to other countries. And so, can we use AI to both detect that and protect them so they don't feel exposed to whatever outside nations might do to take advantage of the fact that we are an open society and we have information online? 

Regardless, pick three demonstration projects, and then that can instantiate and solidify all of the aspirations present in this executive order.

Michael Krigsman: We have a really interesting question from Chris Petersen on Twitter, who is asking about artificial general intelligence and where the intersection is between AGI and this order, or even something that looks a little bit like AGI. You know, a more powerful AI than we broadly have today.

Dr. David A. Bray: Artificial general intelligence is not here yet. We have generative AI, but that's different. 

It's worth noting that what we have right now with generative AI, our systems are really good if the present and the future look exactly like the data they've trained on. In other words, if the past informs the present and future, we're great. But if the present and future are different, that's when you see generative AI get kind of weird and wonky. 

We saw this with COVID. It wasn't called generative AI at the time, but we already heard from Anthony and others that there were AI systems that kind of went off the rails when COVID happened because the world fundamentally behaved differently after that.

And so, the hope with artificial general intelligence, if we take the most basic definition, is a system that is able to not just learn about the past and apply it to the present but also be curious and explore mental models or digital models (as the case might be) about what the future might be, and have that applicability so that it doesn't go off the rails if the future is different than the past.

The good news is a lot of this calls out similar actions you should take whether you're talking about the current generative AI that we have or if and when AGI shows up. This is goodness regardless. 

What I think is still needed here is, one, conversations about data. It's interesting that data is not really discussed much here. I think, especially in open societies like the United States (but also other nations like Canada, UK, Australia, New Zealand, and others), we've got to think about how people can have a voice in how their data is employed. 

I think that's needed whether you have AGI or AI. That's not present. And what does that look like? How do you bring people together? 

Most of us don't have time to navigate it all. So, it may be a collective action problem.

The second thing is if AGI was to surface, how do we have the sufficient constraints to make sure it doesn't go off the rails and start doing things that we don't want it to do? 

Now, I'm not one of those people that's an AI doomsdayer. I don't think that's the case.

I often tell people, "Replace the word AI with very fast organizations," because that's what AI really is. In some respects, the same approach (whether you call them laws, policies, et cetera) that we want to apply to very fast organizations we should also apply to AI. Despite the doomsday arguments that somehow AI is going to take over the world, I don't see that, and I think several AI experts would agree. With AGI, it's just more a question of how we make sure we have the appropriate constraints. 

Then finally – and again this gets to what I was hinting about earlier – it may very well be the design choices we've made with the current version of generative AI may not give us the sufficient ability to test and have confidence in that. 

For example, if I asked you, "Do you want to cross this bridge, Michael?" you'd probably ask me, "Well, how did you design the bridge?" If we can't explain that, then that's kind of problematic.

But let's say you respond, "Okay, you can't tell me how you designed the bridge. Can you tell me what materials you used?" What data, in this case, or what physical materials did you use to build the bridge? If you can't explain that either, the question is, do you really want to cross that bridge?

I think we may need to think about other models, and I've seen some. We can talk about that if you want. I've seen other models that give me hope that we will have better approaches to future flavors of AI, whether it's AI or AGI, that will allow us to know that it's constrained enough that it's not going to do things that we don't want it to do.

Michael Krigsman: One of the points that you are making is the uncertain nature of the evolution of artificial intelligence and the fact that we have an executive order that is static in time yet has far-reaching ramifications into the future (even though we don't know what's going to happen with that future).

Dr. David A. Bray: Correct. 

Implications of the executive order on federal government agencies

Michael Krigsman: On that subject, we have a very interesting question, again from Twitter. This is from Arsalan Khan, who says, "When all of government tries to do AI standards, it becomes very difficult. The execution is difficult because the standards folks might not be fully aware of how each agency works." He's asking about the implications, because this executive order is very explicit about different departments doing certain things together. 

Dr. David A. Bray: I often use the phrase that standards are like toothbrushes. Everybody needs one. They just don't want to use anybody else's. 

You've seen the XKCD comic: "There are 14 standards. We must create one standard to bring everything together. There are now 15 standards."

That's again why I make the call for let's pick three demonstration projects because that will bring the different departments and agencies together in a unified way as opposed to a disjointed way.

The other thing that I would say is we should look to what's already there. And I'm going to give a shout-out to two entities. 

One is Gisele Waters, who has worked with IEEE. (I know, Michael, you and I have done things with IEEE.) She has advanced a standard for the procurement of AI, how you think about procuring AI, which I think is something that businesses or governments could use because that's really where you can specify how you're using the data and how you're demonstrating to me that there's safety in this. 

That's an existing standard that she has been advocating through IEEE. So, again, shout out to Gisele because I think that's something people can look to.

The second shout-out: there's work on IEEE standards for a spatial Web. 

The reason why this gets interesting is imagine if another way to approach AI is to say that things in the real world have X, Y, Z coordinates and time coordinates. You can say, "Only consider this data if it's involving this space." 

And so, if it's inside your house, maybe only you get to have the choice on what's done with the data. But if it's outside the house (let's say it's an airport), then the airport authority and the government get to have a choice.

You can say, "Cars should stay on roads. Drones should not fly in buildings." You can use physical boundaries to have some confidence that the AI will perform safely. 

What's interesting is you can then flip it on its head and apply that to law and medicine where you may not have physical boundaries, but you have legal boundaries or you have policy boundaries, and it works. That's an IEEE standard that's also out there.
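A minimal sketch of the boundary idea Bray describes; the zones, coordinates, and policy names are illustrative assumptions, not anything taken from the IEEE spatial Web standard itself:

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """An axis-aligned region with an attached data-use policy."""
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    policy: str  # who gets a choice over data captured in this zone

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

# Hypothetical zones: inside the house, the resident decides; at the
# airport, the airport authority and government decide.
ZONES = [
    Zone("home", 0, 10, 0, 10, policy="resident-controlled"),
    Zone("airport", 100, 500, 100, 500, policy="authority-controlled"),
]

def data_policy(x: float, y: float) -> str:
    """Return the policy governing data captured at this location."""
    for zone in ZONES:
        if zone.contains(x, y):
            return zone.policy
    return "default-public"

print(data_policy(5, 5))      # resident-controlled
print(data_policy(250, 300))  # authority-controlled
```

The same containment check works when the boundaries are legal or policy boundaries rather than physical ones, which is the flip Bray describes for law and medicine.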

I would love us to not create new standards because then we're just going to propagate that problem of too many toothbrushes. But if we look to things like IEEE and the existing standards (and those are the two I would start with: procurement and spatial Web), that will give a framework that everybody can build from, regardless of whether it's our government, the UK government, or businesses. 

AI strategy for federal government agencies

Michael Krigsman: What about the implementation aspects of this? We've really just scratched the surface in terms of the content because it is so big, this executive order. What about the actual implementation and execution of it? How does that take place?

Dr. David A. Bray: It's worth remembering that it was around 2008, 2009 that a previous administration called out, "Let's go to the cloud, government." Even when I was at the FCC from 2013 to 2017, and we made that big leap where we moved all of our systems either to public cloud or private hosting, I think we were among the 5% or 6% of government agencies that had made that leap at the time. 

I think the new stat is somewhere between 15% and 20%. That's more than 15 years since the original call. On top of it, it's gotten a lot harder to do implementation.

The good news is, we have 24/7 news. We have more transparency. But the bad news is we also have more spin and disinformation. 

I've had these conversations not just with government folks but private sector folks. A lot of people are afraid of things not working out well and how the perception of failure, even if it's not failure, might be spun, or the perception of a bad choice even if it was a good choice might be spun. 

Oddly, I would submit, in this AI era, we have ended up with an environment of very risk-averse implementation. Nobody wants to be the first one because they're worried they might get shot at.

That's where I think the case needs to be made that the presidential administration has to provide top cover to these initiatives. If you leave it to the individual departments and agencies, there's a risk that they'll get shot at or things will be spun, and things will become political. So pick three; and, again, I'd pick three things that, in some respects, are nonpartisan or bipartisan in nature. 

One example I'll give is an effort called Birth to Three, which ensures that every individual in the United States (between the ages of when they're born and three years old) should get the physical, mental, and emotional services they need.

Now that involves forms. One, you have to go and fill them out. You have to have the time to go fill them out. And you have to know which form to do.

Most caregivers for infants don't have the time to do that. So, what if instead we used a combination of an AI system and SMS texting, basic texting? You don't even need a smartphone to say, "I would like to get the following physical service or emotional health service for my child." 

It says, "Okay. What are you looking for? Give me some details." And before you know it, not only have you applied for the service and been vetted, but it's been a conversation, and you can get back to them much faster. It's not waiting 6 to 12 months for a response. 

That I think is the future of government where it's not filling out forms or knowing the right forms. It's more of a conversation. 
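A toy sketch of the conversational intake Bray imagines; the questions, fields, and program details are invented for illustration, and a real system would use an AI model to interpret free-text SMS replies rather than a fixed script:

```python
# Toy, rule-based stand-in for AI-driven form-filling over SMS.
QUESTIONS = {
    "service": "What service are you looking for (physical, mental, emotional)?",
    "child_age_months": "How old is your child, in months?",
    "zip_code": "What is your ZIP code?",
}

def intake_conversation(replies: dict[str, str]) -> dict[str, str]:
    """Walk through the questions and assemble a completed application."""
    application = {}
    for field, question in QUESTIONS.items():
        print(f"BOT:  {question}")
        answer = replies[field]  # stands in for the caregiver's SMS reply
        print(f"USER: {answer}")
        application[field] = answer
    return application

app = intake_conversation(
    {"service": "physical", "child_age_months": "14", "zip_code": "02139"}
)
print("Submitted application:", app)
```

The point of the sketch is the shape of the interaction: the caregiver answers questions in a conversation instead of having to locate and fill out the right form.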

But what if we said, "This is going to be—" I don't want to put words in the Administration's mouth; it's up to them what they want to do. But it would be at the level of something championed by the Administration, and it would obviously involve many different departments and agencies.

The nice thing is that implementation now has top cover from what might normally be people that are risk-averse or being scared of being taken out of context if they didn't have that. And so, I think that's why, again, I keep on going back to, in some respects, it's almost like imagining if instead of one NASA during the space race, we had said, "Let's have 30 or 40 different NASAs, and you all go take the different risks." 

That would never have worked. We've got to have this be more of let's have a focus. Let's give top cover for those who are brave and bold enough and benevolent enough to be willing to stick their necks out and try and do the implementation of AI because there is no textbook here.

Michael Krigsman: You think that this executive order provides enough cohesion that it will help bring the parties together?

Dr. David A. Bray: It's worth going back to the 1890s and 1900s, when there was massive disinformation and, similarly, polarization. Congress was even more polarized in the 1890s and 1900s than it is now. 

There was rapid technological innovation. It may very well be that open societies, as a response to stress, respond in such a way. 

That's exactly why I think you see that sort of hesitancy. Nobody wants to have anything they do taken out of context. As one who has lived through this and survived and come out the other end, you have to almost get over it. 

You have to just say, "Look. I am going to try and do the right thing." Almost like Marcus Aurelius. "I cannot control how other people will spin it."

This executive order, it's interesting. It does not deconflict the callout for chief AI officers from chief information officers, chief technology officers, chief information security officers, and chief data officers. And so, that's a potential risk here. 

That said, all is not lost because, again, it calls for the Office of Management and Budget (which oversees how funds are spent per whatever Congress passes) to give implementation guidance. And so, they may be able to resolve that. OMB also could try to solve how they are going to give top cover so that people are brave, bold, and benevolent enough to lean forward, versus saying they're doing something but really hesitating because nobody wants to risk their reputation or career on something that might go boom, not because anything was done wrong but just because this is hard stuff to do.

International perspectives on AI ethical standards and governance best practices

Michael Krigsman: Anthony Scriffignano comes back, and he says, "Kudos for addressing the speed of AI democratization versus the pace of regulatory evolution. Is there a related concern that places with less regulatory focus could create enclaves either for faster innovation or malfeasance?" I'll just ask you to address this point in the context of the executive order. 

Dr. David A. Bray: The executive order calls out the need for us to partner with several nations of a similar mindset as us. It calls out UK, Canada, Australia, and New Zealand, but also colleagues in Europe and around the world. I think that's a recognition that's implicit to what Anthony just asked. 

I just gave you all the reasons why, in our checks-and-balances system, our open society (with separations and divisions between the legislative, executive, and judicial branches), we may be more deliberative and slow compared to more autocratic systems. If an autocracy wants to go somewhere, it's going to go there. And if someone doesn't like it, they're either fired, imprisoned, or killed, which is awful, but that's the way it is.

The implementation of AI might benefit more unilateral autocracies, or autocracies in thought, as opposed to the more deliberative nature of our pluralist system of government. And so, this executive order calls out, one, the need to partner with other countries and allies because, again, we don't have a monopoly on the best insights. I mean, there are other countries that are there. 

Two, that's where you see the call out for the National Security Council to come up with a memo because I think it's probably thinking of that dimension, which is, how do we not fall behind because of our deliberativeness while, at the same time, not race ahead unilaterally without thinking about what makes our society good is there is deliberation, there are differences of opinion. That's what we want to have happen, but we also don't want it to slow us down so much that we essentially lose the race even before we start.

Will federal government oversight and regulation of AI slow innovation?

Michael Krigsman: We have a question from Arsalan Khan who says, "If the government is asking to review and regulate security features before AI is released to the public by the private sector, and we know the government is slow and doesn't always have the expertise, isn't this just going to slow down AI progress overall?"

Dr. David A. Bray: I would not disagree with you, Arsalan. That said, I'll give you some hope, which is, what if instead of this being done by government, this is done through grants to universities, grants to nonprofits, grants to other institutions outside of government that can move faster? 

It's interesting. Universities right now have the highest trust level. I think they have more trust than either the private sector or government in terms of perceptions in the U.S. society. But also, nonprofits could.

Those are the places that could do this, and they are funded through grants. The nice thing is, you could maybe initially fund, say, five to ten different demonstration projects throughout the country because I think you're going to find that, one, you can't do a one-size-fits-all approach to AI. 

It's got to be domain-specific. You're going to need one for healthcare, and what you do for healthcare is going to be different than what you do for cars and what you do for AI in education, for example, and so you're going to need to do that.

Two, it could be performance-based. And so, the government agency says, "Look. We're going to fund you for the following outcomes. We're not going to tell you how to do it, so that gives you the freedom to explore and move fast. But we're looking for the following outcomes," whether it's improvement of delivery of services at a faster time or greater reach or things like that. It could be outcome-based policy and spending.

Now, that said, this does point to we need more operational implementation-oriented nonprofits and universities because we have a lot of universities and nonprofits—and this is not to call anyone out specifically—that are great at writing papers and admiring the issue. But now, in today's era, we need to have more operationally focused nonprofits that are held to deliver results because they can move faster, and they can explore the space better than the government by itself.

Michael Krigsman: Is this actually going to work? 

Dr. David A. Bray: I'm an eternal optimist. I think anything you get, you can use as a way to motivate goodness in the world. There may be people who will detract from it or say there are gaps in it, and there possibly are gaps in it. 

Partly why I love the Stimson Center – and I'm going to give a call out to Stimson – is they are an operational NGO think tank, and so they're a think and do tank. They're small. We don't have a lot of people, but it gives me hope that there can be places where people come together with a mission focus that don't have the same either concern that whatever they do will be taken out of context or the same constraints at government agencies by themselves. 

I'll call back to the space race example again. When we were pioneering to the moon, rockets blew up, but we didn't stop. 

But it also wasn't done exclusively by the government. The government partnered with the private sector. There were contractors that built all the parts for the rockets. The government didn't build that. And there were also nonprofits involved with analyzing as to where they were going and things like that.

I think it is doable if we work in bringing together all of the different parts of the U.S., not just one part. 

I will also give one example: Project Corona (which was not done by NASA; this was done by the national security community in 1959, 1960) launched rockets that would take photos of the Soviet Union and then parachute a film canister that would be picked up by a plane or helicopter. The first 13 rockets blew up. It wasn't until rocket attempt number 21 that they succeeded. 

Obviously, that helped with the Cold War. But later, in the 1990s, it was declassified, bought by a company called Keyhole, which was later bought by Google, and became the basis of Google Earth.

I often say government is innovative. We just don't monetize it. We leave it to somebody else.

But imagine if we tried Project Corona now in an AI dimension or the equivalent thereof. How tolerant are we as a public? How tolerant are we with our news media and Congress to have 13 rockets blow up before we finally succeed in getting it into orbit?

The same thing is true with AI. I think we're going to have to figure out how we can do rapid solicitation of solutions from the private sector that can help here; rapid involvement of the public as well, standing up for the public in terms of data stakeholder involvement and how the AI is employed; and then finding those operational, implementation-focused universities and nonprofits and saying, "Look. We need you to help here because time is of the essence." 

Again, in the past maybe we had 15 years between something being written and action. We really have between two to three years to get this right.

That's where I think, when you ask, "How will this impact decades from now?" if we don't rise to the occasion, if we're held back either through division or polarization or fear, this may be looked back on as a missed opportunity for open societies.

Links between AI capabilities and enterprise architecture

Michael Krigsman: On this topic of interoperability, Arsalan Khan comes back. I'll ask you to answer this, David, very quickly because we're running out of time. He says, "Where is the link between enterprise architecture and AI in government since a lot of money has already been spent on enterprise architecture?"

Dr. David A. Bray: That's not called out in this executive order, but I will say I am working with an international nonprofit health group that has a lot of the equivalent of enterprise architecture and the data standards around health and medical records. 

One of the things that we are actively looking at is how can you use those data standards to go to a generative AI system and effectively say, "Given what you know of enterprise architecture, given what you know of these data and technology standards, help me build a reference implementation model. Help me build an interconnect between system A and system B."

Probably the first time we do it, it will only get about 50% or 60% right. But then you have the humans inform and help train the machine. The next time we do it, it's 70% or 80% right.

I think the future of enterprise architecture is training the AIs to (with a human) build the interconnect and interoperability faster. 

Michael Krigsman: Human plus AI. 

Dr. David A. Bray: I wouldn't do it AI alone. But if you can carve out and make it so that instead of spending six months to implement a system you can now do it in less than six weeks, that may be how we solve the problem of AI is going to need to move faster, and AI plus human can do enterprise architecture and data implementations faster, too.
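A rough sketch of the human-plus-AI loop Bray describes; the system and field names are invented, and `draft_mapping` is a trivial heuristic standing in for a call to a generative AI model:

```python
# Human-in-the-loop generation of an interconnect between two systems.
SYSTEM_A_FIELDS = ["patient_id", "dob", "diagnosis_code"]
SYSTEM_B_FIELDS = ["patientIdentifier", "dateOfBirth", "dxCode"]

def draft_mapping(source: list[str], target: list[str]) -> dict[str, str]:
    """Stand-in for the AI's first pass: guess field correspondences."""
    normalized = {t.lower().replace("_", ""): t for t in target}
    guesses = {}
    for field in source:
        key = field.lower().replace("_", "")
        # Naive guess: match on a shared prefix after normalization.
        for norm, original in normalized.items():
            if norm.startswith(key[:3]):
                guesses[field] = original
                break
    return guesses

# First pass: only partially right, as Bray suggests.
mapping = draft_mapping(SYSTEM_A_FIELDS, SYSTEM_B_FIELDS)
print("AI draft:", mapping)          # finds patient_id, misses the rest

# Human review: corrections get fed back to improve the next pass.
corrections = {"dob": "dateOfBirth", "diagnosis_code": "dxCode"}
mapping.update(corrections)
print("After human review:", mapping)
```

In practice, the human corrections would also go back into the model's context or training data, which is what moves the next draft from roughly half right toward 70% or 80%.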

AI ethical guidelines and governance best practices for business leaders

Michael Krigsman: David, what are the implications for business and technology leaders? In other words, should business people be thinking about this, doing anything about this today, or is it just still too soon?

Dr. David A. Bray: Definitely should be doing something about this because, if you don't disrupt yourself, someone else will disrupt you. That's not just coming from government. If a business says, "We're going to wait and see whatever we're going to do in terms of AI," I guarantee you someone else is going to say, "No, we're going to jump in."

I have had conversations with private sector CIOs, CTOs, colleagues. I help advise some of them. Again, they're right now saying, "We do want to wait and see," partly because of that fear factor and partly because a lot of AI, unfortunately, does have a lot of hype. And so, they want to have that sort of ground truth.

That said, I worry that it's going to give rise to boards creating this confusion between how is the chief AI officer relating to the CIO, the CTO, the CISO, and the CDO. And so, what I would recommend right now, businesses should say, "Look. We're going to do two to three initial toe in the water, explore this space and how it might change how we do business." 

Maybe it's just as simple as instead of us having to always write a press release, we can now involve AI in that process or things like that. Thinking about how we do frequently asked questions within our own company, something that's just a toe in the water, and say we're going to do three or four small, not too expensive, demonstration projects.

Similarly, though, if you're involved with government as a federal contractor, if you're involved with government because your industry is regulated by it, or if you're involved with government because you provide services, lean in because there will be opportunities to help inform. And if you don't inform in this very short period, losing that wisdom you can bring from the private sector might be to our detriment. 

I would say, don't go all in. But as a business leader, you have to recognize this will disrupt you. And it's a question of whether or not you're determining your own fate or you're letting someone else do that.

Michael Krigsman: Why will this disrupt you if you're a business leader?

Dr. David A. Bray: It will disrupt you because this changes the notion of how you get things done, how you deliver results to your business, who you involve.

I am hopeful that it's not exclusively AI versus humans. It's humans plus AI. It's augmented intelligence as another way for saying collective intelligence. 

If you want your employees to be smarter, if you want your frontline delivery people to be smarter, you want to pair them with AI. You don't want to replace them with it, but you want to pair them with this because they'll know more, be able to respond faster to both opportunities as well as risks.

Michael Krigsman: But very specifically, the executive order. 

Dr. David A. Bray: Yes. This executive order impacts you because this is how the business landscape will be shaped through the actions of government, including standards, regulations, and policies. It will also shape the way you do business with other nations if you're an international entity. 

You have to recognize that the business landscape is being changed by the technology and what government does will reshape your sector and will reshape the industry. 

Michael Krigsman: Coming down the pike is going to be a whole lot of regulation and government oversight of AI.

Dr. David A. Bray: I think that's a political choice, which I can't weigh in on. Even if it's not "regulation," there is the guidance, the involvement, and the way that government is choosing to either use or not use AI, or the way that the Department of Justice and FBI are pursuing whether things are fair or not fair, or whether other determinations are being made. You don't necessarily see those as regulation, but they are shaping forces from the actions of these different governments and where they're putting their money. Where they put their money, where they choose to put their focus, where they choose to put their attention, that will shape things with or without regulation.

The Chief AI Officer role in business

Michael Krigsman: Arsalan comes back with our final question. He says, "Should we have a chief AI officer that is an AI to help agencies understand where they can use AI and how they should be responding to this executive order?"

Dr. David A. Bray: I don't want to dismiss human chief AI officers, but I think that makes a little bit more sense because the C-suite is already crowded enough as it is. And in some respects, maybe it's not even a chief AI officer. 

It really is just an AI assistant, a digital assistant, an additional perspective in your C-suite and on your board to say, "Have you thought about this?" because, again, not everything that generative AI produces is real. Not all of it is valid, and it will change its mind. And so, you have to be ready for hallucinations.

But what we've seen generative AI be good for, especially in medical settings, is those exotic edge cases. A physician is really good at diagnosing flu or RSV or whatever. But if something is really exotic and you've never seen it before, the AI might say, "Have you thought about this?"

If you're in a business context, it might say, "Look. You think it's this, but have you thought about this?" And so, almost having that AI be the edge case thinker but don't let it go off on its own. Involve it in whatever you're doing in your business.

Michael Krigsman: With that, we are out of time. A huge thank you to Dr. David Bray. Thank you, David, for taking the time to be with us and teach us about this executive order.

Dr. David A. Bray: Thank you, Michael. It's always a joy to be here with you on CXOTalk. 

Michael Krigsman: Well, I really appreciate it. And a huge thank you to our audience. You guys are an amazing audience; all the very intelligent, bright questions that you ask. 

Now before you go, please subscribe to the CXOTalk newsletter to stay up to date. And subscribe to our YouTube channel. We have incredible shows coming up, so check out CXOTalk.com. And we will see you again next time. Have a great day.

Published Date: Dec 01, 2023

Author: Michael Krigsman

Episode ID: 815