What should business people know about artificial intelligence and other emerging technology trends in 2019? Quantitative futurist and author, Amy Webb, shares her research with Michael Krigsman and CxOTalk.

Amy Webb Biography

Amy Webb is a quantitative futurist and professor of strategic foresight at NYU and founder of the Future Today Institute, a leading foresight and strategy firm. Now in its second decade, the Future Today Institute helps leaders and their organizations prepare for deep uncertainties and complex futures.

Founded in 2006, the Institute advises Fortune 500 and Global 1000 companies, government agencies, large nonprofits, universities and startups around the world. Amy was named to the Thinkers50 Radar list of the 30 management thinkers most likely to shape the future of how organizations are managed and led, and won the prestigious 2017 Thinkers50 RADAR Award. Amy's special area of research is artificial intelligence, and she has advised three-star generals and admirals, White House leadership and CEOs of some of the world's largest companies.

Amy is the author of three books, including The Big Nine: How The Tech Titans and Their Thinking Machines Could Warp Humanity (PublicAffairs/Hachette, March 5, 2019), which is a call to arms about the broken nature of artificial intelligence and the powerful corporations that are turning the human-machine relationship on its head. Her previous book, The Signals Are Talking: Why Today's Fringe Is Tomorrow's Mainstream (PublicAffairs/Hachette, December 2016), explains Amy's forecasting methodology and how any organization can identify risk and opportunity before disruption hits. Signals is a Washington Post bestseller, was selected as one of Fast Company's Best Books of 2016, won a 2017 Gold Axiom Medal for the best book about business and technology, and was one of Amazon's Best Books of 2016. Signals has been released in multiple international editions and has been translated into a number of languages. Her bestselling memoir Data, A Love Story (Dutton/Penguin, 2013) is about finding love via algorithms. Her TED talk about Data has been viewed more than 6 million times and has been translated into 32 languages. Data is being adapted as a feature film, which is currently in production.

Transcript

Michael Krigsman: Today on CxOTalk, we're speaking with somebody who unpacks the complexity of technology, the complexity of artificial intelligence: Amy Webb. She is a professor at NYU in the Business School, she is the head and founder of the Future Today Institute, and she is a very, very interesting woman. Amy Webb, thank you so much for being here on CxOTalk.

Amy Webb: Of course. Thank you for having me.

About The Big Nine

Michael Krigsman: Amy, your book is just amazing. It's one of the best books that I've seen. It's getting a lot of attention, and it's very well deserved. Please, very briefly, tell us about your background. I think that's a good place to start.

Amy Webb: Sure. I have maybe a strange job title. I'm a Quantitative Futurist. My job is to use data to model emerging technology trends and then to develop risk and opportunity scenarios that tend to be longer-term.

For the most part, my organization, which has been around for 15 years, advises the senior leadership at very large, Fortune 100 companies. We also work with branches of the federal government and military. The purpose of this work is to help everybody see around corners, not make predictions but, rather, make connections. That's what I do. All of the research that I do and all of our methodology and our tools, it's all open source and made available to everybody for free.

Michael Krigsman: The key thing is that these are not just guesses, but all of your predictions are backed by really intensive research.

Amy Webb: Right, so the methodology that we use is sort of a hybrid between process thinking and big sky thinking, but it's a very, very rigorous model. It requires a lot of analysis using numbers. Then there are parts of it that require work in teams where we have different perspectives rolling out the downstream implications of decisions that are being made.

Michael Krigsman: Now, Amy, you just released a fascinating book called The Big Nine. What is The Big Nine? Let's start there.

Amy Webb: Sure. In the course of the normal work that I do as a futurist, one who predominantly focuses on emerging technologies, several years ago I noticed, when researching artificial intelligence which, to be fair, is a pretty big field, that I kept coming back to the same companies over and over and over again. In fact, it was these companies; there were nine of them. These are the companies building the custom frameworks and the custom silicon. It's their algorithms. It's their patents.

They have the lion's share of patents in this space. They're able to attract the top talent. They have the best partnerships with the best universities. Essentially, it's these nine companies who are building the rules and the systems and the business models for the future of artificial intelligence. As a result of that, they have a pretty significant influence on the future of work and everyday life. There are nine companies: three in China and six here in the United States.

Michael Krigsman: Do you want to list off who those companies are?

Amy Webb: Sure. The ones in China may not be familiar to everybody. They are Baidu, Alibaba, and Tencent.

  • The best way to think about Baidu is its U.S. cousin is Google. Baidu is a gigantic search engine that has a lot of other subsidiaries and business verticals. It also has, much like Google, an autonomous driving unit.
  • Alibaba is sort of akin to Amazon. Alibaba is enormous. It's an online retailer, but it has, again, many other facets just like Amazon does into many other areas of life.
  • Tencent is part social network, part FinTech, payments processing system and is also fairly big in the future of healthcare.

That's China.

In the United States, there's what I call the G-mafia: Google, Amazon, Microsoft, IBM, Apple, and Facebook. Together, it's these nine companies that, again, have quite a bit of influence on our futures.

Michael Krigsman: I want to remind everybody; we're talking with Amy Webb, who just wrote this very interesting book called The Big Nine. Amy, artificial intelligence is an area that you seem very concerned about. Why? Why is that?

Amy Webb: The best way to think about AI is not as a singular technology or some kind of cool technology that's out on the horizon. It's simply the next era of computing. We've had computers in some form now since the mid-1800s, believe it or not, as crazy as that sounds. The first era of computing was simple tabulation, automated, using a machine. The second era of computing was programmable systems. Here we are in the third era.

All you really need to know about AI is that this is a complicated system that uses data to make decisions for outcomes that somebody has determined in advance, which means that all of us are surrounded by artificial intelligence all day long. We just don't think about it in that way. Take something simple, like when you're in your car and you're driving backward. If you're in a newer car, you'll hear beeping sounds to make sure that you don't run over a scooter or bicycle, hit a tree, or something. There might be a little dashboard display offering you a video of what's behind you as you back up, and it uses computer vision to help out that process.

All of that is something called artificial narrow intelligence, and there are literally millions of examples of artificial narrow intelligence that are in use in everyday life now. That doesn't seem bad. [Laughter]
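The backup-sensor example captures the whole pattern of narrow AI: data comes in, and a rule selects among outcomes a human designer fixed in advance. As a purely hypothetical illustration (no real vehicle system works exactly this way, and the thresholds below are invented), the decision logic might be sketched like this:

```python
def backup_alert(distance_m: float) -> str:
    """Map a rear-sensor distance reading to a pre-determined alert level.

    The 'intelligence' here is narrow: a designer decided the possible
    outcomes (beep rates) in advance; the system only selects among them.
    """
    if distance_m < 0.3:
        return "solid tone"   # imminent contact
    elif distance_m < 1.0:
        return "fast beep"
    elif distance_m < 2.0:
        return "slow beep"
    return "silent"

# Sensor readings stream in as the car backs up; the rule fires on each one.
for reading in [2.5, 1.5, 0.8, 0.2]:
    print(reading, "->", backup_alert(reading))
```

The point of the sketch is that the system never decides what a "good" outcome is; somebody determined the alert levels in advance, which is exactly the definition of artificial narrow intelligence given above.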

The problem that I'm starting to see arise is that artificial intelligence is on two developmental tracks that are fairly different. In China, those BAT companies may be independent but, as Chinese companies, they have to follow the leadership of the Chinese government. China has very different viewpoints on data and privacy, on freedom of speech and expression and, also, on how business ought to be done worldwide and even what the geopolitical map should look like.

In the United States, our six companies, the G-mafia, are publicly traded companies with fiduciary responsibility to shareholders. As a result, they're often under the gun and significant pressure to push AI into the marketplace using commercial products as soon as possible.

Essentially, what we have is consumerism and capitalism driving AI in half of the world. In the other half of the world, the development of artificial intelligence is being done to further the ideas of the Chinese Communist Party. The challenge is that all of us, everyday people, are stuck in the middle. There's very little transparency about how decisions are being made, and there is not a lot of long-term planning at the uppermost levels of leadership to think about what all of this might mean 10 years from now, 20 years from now, 50 years from now.

Michael Krigsman: We are starting to get some questions from Twitter. We'll get to those in a minute because I have some questions of my own. As the moderator, I will assert moderator's privilege and ask a couple of my own questions first.

AI Different From Other Technologies

Amy, this is all great, what you're saying, but how is this any different from other technologies? What's unique about this? The lack of transparency sounds like you have just described technology and government operations as they have been going for a long time.

Amy Webb: Sure. I think that the key difference is that decisions that drastically affect everyday life are being made by algorithms which were designed by a relatively small group of people working at just a few companies and that process is not in any way meaningfully transparent. What that means in practice is that if you're somebody who has graduated with a computer science degree and you're out looking for a job, as many people are looking for jobs across many different fields, the hiring process is becoming automated. Rather than a human reading your resume, instead, a system is looking through all resumes using pattern recognition and looking for anomalies or looking for areas that meet certain criteria.

We've already started to see evidence of weird biases. If you're somebody with a computer science degree and you took a bunch of extracurricular courses on things like theology or comparative lit or music, to the algorithms that have been trained, those look like anomalies and those resumes have been discarded.

Now, the weird thing is that if I was a hiring manager, somebody who was adept, skilled, and showed great promise in AI who also had a very broad world view and lots of additional extracurricular inputs, to me that's going to make a much better, more well-rounded employee. But because we've relegated some of that decision making to an algorithm rather than a person, those resumes are being discarded. By the way, that's not just happening in CS.
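The screening failure described here is easy to reproduce in miniature. The following is a hypothetical sketch, not any vendor's actual hiring system: if a filter scores resumes purely by keyword overlap with the profile of past hires, coursework the model has never seen dilutes the score, and the well-rounded candidate is the one who gets discarded.

```python
# Hypothetical resume screen: score candidates by overlap with past hires.
# Keywords outside the training profile count against the candidate.
PAST_HIRE_KEYWORDS = {"python", "algorithms", "machine learning",
                      "data structures", "statistics"}

def screen(resume_keywords: set[str], threshold: float = 0.6) -> bool:
    """Return True if the resume 'passes' the automated filter.

    Overlap with past hires raises the score; unfamiliar terms
    (theology, comparative lit, music) look like anomalies and
    dilute it -- the bias described above.
    """
    overlap = len(resume_keywords & PAST_HIRE_KEYWORDS)
    score = overlap / len(resume_keywords)
    return score >= threshold

narrow_candidate = {"python", "algorithms", "statistics"}
broad_candidate  = {"python", "algorithms", "statistics",
                    "theology", "comparative lit", "music"}

print(screen(narrow_candidate))  # passes
print(screen(broad_candidate))   # discarded, despite identical CS skills
```

Both candidates have identical technical credentials; the broad one fails only because the extracurriculars fall outside the pattern the filter was built on.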

I would argue that this is a little different. Government has tried to step in before to regulate our financial systems, parts of our transportation systems. This is very different because AI is not the same type of technology. This is a series of systems built by a relatively small number of people into which many new startups are plugging in and those systems are predicated on making decisions using various pieces of our data. That's where I think things are different this time around.

Lack of Transparency on AI Data and Algorithms

Michael Krigsman: Fundamentally, is your concern the consolidation of intellectual power as well as capital that is feeding this technology that is surrounding us, combined with the lack of transparency? Is that the core issue here?

Amy Webb: No, the core issue is that we aren't doing any long-term planning. These are publicly traded companies and, in our free market economy, I believe that these companies should have the opportunity to succeed. The challenge is that, here in the United States, we don't have a single capital. I do not believe that Washington, D.C. is the only capital of our country.

We have a nexus of three powers: Washington, D.C.; but also, the financial center, which is New York; and the technology epicenter, which is Silicon Valley along with the Pacific Northwest, so basically the west coast. All three capitals of the United States are codependent. They may not always see it that way, but the decisions that they make affect each other, and they can't survive without each other.

The problem is that I see very little evidence of strategic, long-term planning in any of these places. As a result of that, we don't see the kind of risk modeling that you might see in other areas because speed is prioritized over safety again and again when it comes to AI. By the way, that's not the first time. Artificial intelligence in the modern era has been around since the 1950s, and the same exuberance that we're seeing today was present in the 1980s. When a lot of the amazing commercial products failed to materialize, funding was taken away, so it's something to keep in mind.

The issue is that the government has a transactional relationship, at best, with the Valley. These companies are the government's clients. Investors expect returns and are putting undue pressure on these companies to commercialize their products as soon as possible. The market is fickle, in part because of weirdness right now in Washington, D.C. This is not good going forward.

I actually do not believe that these nine companies are evil. I don't believe that the G-mafia are trying to intentionally harm any of us. I think it's just a circumstance of our current situation. Because we have no culture of long-term planning (it's not part of our federal government, and it's not part of most of these companies), we don't have long-term, rigorous, strategic planning, and so everybody is sort of doing what they think is best for themselves.

We see exactly the opposite happening in China where there is a long history of long-term planning. It's part of the governance culture, and there is an incredibly smart person right now at the helm. Xi Jinping is very, very bright. He understands technology, and he is in the process of exporting Chinese AI and the various systems that have been built with it like surveillance and monitoring systems out to other vulnerable countries with authoritarian leaders. That's what has me concerned.

AI Ethics

Michael Krigsman: All right. We have a couple of questions from Twitter that actually touch on both these questions around the lack of planning as well as China. Let's go there. Arsalan Khan points out that just today Google announced that it is disbanding its board to check for AI ethics. What do you think about that and what should companies be doing to do the right thing, essentially?

Amy Webb: Sure. That actually happened a couple of days ago and it had been in progress. This was not an internal board. Some big companies have internal C-suite ethicists with departments that are integrated throughout the rest of the organization. This was not what Google did.

Google assembled a fairly small board, an advisory board, that had no real teeth. This was intended to be a group of people to share knowledge within the rest of the organization. It was a strangely assembled group of people. If I were to put together a group of people to provide serious feedback on ethics but, also, on, again, the longer-term implications of a transformational technology like artificial intelligence, this would not have been the group of people that I put together. As far as I know, there was a little bit of backlash because of some of the politics of those people who were appointed. Basically, a week after it started, it's now gone.

Here's what I will say. Most companies publish some kind of statement of values. Amazon, Google, Microsoft, IBM, you can sort of look around on their websites and their corporate communications materials, and you'll get some sense of what the corporate values are. That is very different from stating, in a granular way, what a company's positioning is on how decisions are made about artificial intelligence. We just don't see a lot of companies taking that piece of this very seriously, not the G-mafia and certainly not outside.

In my book, seeing that this was a problem, I developed a list of 15 questions that could become the basis for an ethics and values statement within an organization, as a way to get started. We just don't see enough action in any real meaningful way. Again, why? Let's be skeptical.

It seems like a no-brainer, so why wouldn't Amazon and Google; why wouldn't they just come up with, like, "This is our position on ethics and AI. These are the people. This is our accountability group and we're going to go all in."

Why isn't that happening? Because it means that we have to make some short-term financial sacrifices in the process. Again, the Street doesn't like that.

Michael Krigsman: Are you a pessimist or an optimist? [Laughter]

Amy Webb: No, no, I'm neither. I'm a pragmatist. My job is numbers, so I work with data all day long. I would say that I'm very passionate about this particular subject because I'm having a hard time building out a model that doesn't end in either a bad future or a catastrophic future. But I am not a natural born dystopian thinker. I'm a pragmatist who understands that we all have agency in the future and the future is not already pre-set; but that if we want great outcomes, we have to put in the hard work in advance.

I often tell people that, just like great marriages, the future takes extraordinary hard work. If you think about people in your life that you know who have been married, who have been legitimately happily married for decades, typically the hallmarks of that marriage include things like compromise, transparency, being authentic, keeping each other accountable, and all of that stuff that's hard to do but you know you have to do in order to make that relationship work.

It's the same when it comes to planning out our futures. You have to be willing to put in really hard work. You have to make short-term sacrifices for the greater good. You have to be willing to compromise, spend time on things that are tedious, all that stuff. But if we're willing to do that together, not just in the United States, but across our geographic country lines, then, yeah, we could have an amazing future and there is potential.

Some of the stories that you're hearing about how AI will do all of these amazing things, they're absolutely plausible, but they're not just going to show up fully formed. We've got to put in the work now to make sure that those things happen.

Practical Steps? / Will China Help?

Michael Krigsman: Okay. On that subject of the optimist, the optimistic view of humans cooperating, we have a perfectly timed question from Frank Diana, who has been a guest on this show before. He's the chief futurist at TCS, which is one of the largest computer services firms in the world. It might even be the largest one; I'm not sure. Frank says this, "In your book," he's reading your book, "You map out prescriptive steps to take to ensure that AI enables human flourishing. China is critical to this outcome. Do you see a path to China cooperating?"

Amy Webb: Yes. I don't know that that path is going to be achievable with this current Administration, so I'll give a quick PSA. I'm politically independent.

With that being said, I think we've had too many instances on the global stage of making promises, pointing fingers, and pulling out of agreements. At this point, if the Trump Administration were to suggest that there's going to be some new, big, international coalition, I think everybody would roll their eyes and collectively laugh at that, so it may not happen with this Administration. However, I think that there are a couple of possibilities going forward.

One of the tactical suggestions that I'm making is the formation of an international coalition that is primarily incentivized using economics. The organization I've named GAIA: Global Alliance on Intelligence Augmentation. It would include member states from all around the world just like a UN or an IAEA, for example, but it would also include the big tech companies.

Rather than it functioning as a regulator, instead, this would be a global body charged with developing the tools, assets, and guardrails for the farther future.

The problem with China is that China's economy is growing and its population is growing. What we're all probably going to see soon is China experiencing upward social mobility at a scale we've never seen before in modern human history. The only way that China comes to the table and collaborates in a meaningful way is if China's economy still wins. The only way that the United States comes to the table, I think, is if we can avoid the kinds of regulation that would stifle innovation and growth, which means that we need a different model.

We have sort of a blueprint if we look at, for example, an IAEA or a UN, but we need a different implementation of this. GAIA would be charged with creating those guardrails, shoring up the bias in the datasets that everybody knows exists, and doing some of the bigger risk modeling in a very cohesive way. If certain pieces of the ecosystem advance, what could that mean and how can we avoid problems at the beginning? What could the global standards look like so that everybody can win in the long-term?

I know that's a big ask and it seems like, especially in this business environment, that there's no possible way that we would all come to the table with China, but I think that it's worth a shot. That's one way of doing it.

The other way of doing it is, we just assume that China is never going to cooperate, that they see us as an impediment to their global ambitions, and we let China do its own thing. We bring together all of the other nations around the world and, if we have enough of them with enough people and enough economic power then, collectively, we have some leverage. At that point, the best case scenario is, we have the best and the brightest working on GAIA and they're doing what they were going to do anyway, in addition to planning for cyber issues with China down the road.

Michael Krigsman: Very clever naming it after the Gaia Hypothesis from James Lovelock. [Laughter]

In practical terms then, what should we be doing? When I say "we," maybe break it down into corporations, the government, even individual citizens.

Amy Webb: Sure. I'd love to maybe address the business crowd because I think they get left out of this conversation a lot. AI is already part and parcel of every single business, even a small business. In some way, your organization is currently using artificial intelligence of some kind.

As the business grows and as your core functions grow, you're going to have to start making decisions. Some of those decisions may be whether to use Google's cloud, Amazon's cloud, or Microsoft's cloud. Increasingly, a lot of the AI functions that are super useful to businesses, like predictive analytics and understanding customer sentiment, are going to be located in those clouds. That means you have to start making smarter decisions, not just on a cost basis, about which one of these companies you're going to hitch your wagon to. While they may offer open source AI systems, they're not free, and open source doesn't necessarily mean interoperable. That's my way of saying you can't just take all your stuff out and port it over to somebody else's network down the road without a tremendous amount of hassle, cost, and everything else.

I would argue that you have to get much, much smarter fast about what AI is, what it isn't, what it actually does, and be able to distinguish between all of that and a lot of the marketing promises companies are making because, when it comes to AI, there is a lot of misplaced optimism and fear. Too many times, I'm seeing the executive leadership in organizations all around the world either making terrible decisions on AI or assuming that it's not here; that there's some event horizon off in the distance and they're just going to wait. That's a terrific way to get left behind or, worse, to have to make a decision under duress.

I wrote the book for people who are in organizations and sort of need to get smart fast on AI. The book is also for people who are in those three capitals: the finance folks, the government folks, and the tech folks.

In our government, and in every government, there has been talk about developing country-level AI regulations, which is a terrible idea. First of all, any regulation that's going to have teeth has to be specific. Any super specific regulation with regards to something as fast-moving as AI is going to wind up outdated or impossible to implement.

This is where sort of the levers of a democracy are showing some stress in this age of technology. I think that we're going to have to develop a different way of thinking around this at our government level. We're going to have to develop policy, the kinds of policy we've never seen before.

I have conversations a lot with various parts of the government and military who are very reluctant to do anything different. They don't like change. But if we don't start making some changes and figure out new ways to collaborate where there are economic incentives involved to get the big nine, or at least our six of them, to the table in a voluntary way where they want to help out, then we're going to have all kinds of problems.

AI Planning for Business

Michael Krigsman: When you talk about businesspeople needing to become more educated and make choices around which cloud they use, for example, that seems like an extremely tall order. Essentially, what you're asking them to do is, in addition to whatever functional requirements they have, overlay this almost altruistic view of where they think the future of AI and the trajectory of these companies is going. That's a lot to ask of your average businessperson who is, A) not knowledgeable about these trajectories and, B) simply trying to get the job done, where they just want to install a finance system or whatever it might be.

Amy Webb: Right. Any successful business does financial planning of some kind. I can't think of any example. It would be impossible to run a business if you don't have some sense of what your outlays are, your capital outlays, your staffing costs, what your revenue projections are for the year. That's an example of planning. Most companies do it on a quarterly or an annual basis.

All I'm saying is, add into your planning mix a slightly longer outlook. Again, this is partially why I made my entire methodology available for free and made it open source because we don't do that in the United States, not just in business, but in general. I'm not saying, like, "Figure out the future of AI and then go toward it." What I'm saying is, "Don't allow others to make those critical technology decisions for you. Certainly not the vendors."

The vendors are never going to know you and your organization as well as you do. To outsource this critical thinking and strategic planning to just the IT department or just a skunkworks within the organization is also a mistake because you have to think about technology and things like process automation, which a lot of companies are now looking at because it's a huge cost-savings and there's a lot of promise there. You have to do that through the lenses of HR, compliance, sales, marketing, and all the other business units. That means there has to be executive leadership and vision from the top on AI.

The easiest thing everybody can do, like the one thing everybody could do today, you could either read my book and do this or go online and do it: be able to explain succinctly to some other person what artificial intelligence is, why it matters, and how it impacts your industry or your company. That whole process of explaining should take less than five minutes. If you can do that, you're going to be so far ahead of everybody else. If all of us could do that, we would be in a much better position going into the future than we are right now because, where we are right now, we're constantly referencing sci-fi images of artificial intelligence from pop culture, like Skynet from The Terminator, or the drones and the hosts from Westworld, or any number of other books, movies, and TV shows.

AI Hype in the Enterprise Software Industry

Michael Krigsman: Very briefly, because we're going to soon run out of time: I work with quite a lot of enterprise software companies, large ones and small ones. The hype around AI is ridiculous. Any advice for software companies, the major ones or even the smaller ones, around how they talk about what they do? Let's put it that way.

Amy Webb: Yeah. I'm really, really glad you asked that question because I haven't had a chance to address it at all with anybody yet. Yeah, the hype is real; it's real bad. [Laughter] You're right.

Again, having met over the years with so many people who are working in the trenches and with the heads of organizations, and having advised some organizations, here is my observation. The people who are working in the trenches, who are actually doing all of the building, the coding, the testing, the supervised learning, and the research, will tell you that it's a slog. Everybody is overpromising and underdelivering. All of the crazy, cool AI stuff is many, many years away. If you talk to researchers, some of the more prominent researchers, they may tell you a different story: breakthroughs are on the horizon, all kinds of great stuff is happening, we're going to see unimaginable successes fairly soon. What's going on between the two groups?

The latter group, some of these big-name researchers, need to attract funding, partially because a lot of countries around the world, including our own, have stripped that foundational research funding from the budget. Outside of the military, we have no budget for fundamental research in AI, which means these researchers have to get it from somewhere else. Where are they getting it from? The six companies, the G-mafia, in the United States. [Laughter]

They have to generate excitement but, in order to do that, they have to make sure that they've got the funding. Where does that come from? The investors, who then want to see returns in the form of commercial products.

When it comes to the people working in the trenches, oftentimes they're not aware of the bigger picture or where the little piece they're working on fits into it. For anybody who remembers high school biology, chemistry, or physics, where you did any kind of experiment, you remember what it was like, over and over and over again: building, testing, scrapping, and learning in order to make something work. For that reason, I think everybody's timeframes are out of whack. That's where, again, it's good to take a much more comprehensive view of what everything looks like and to not worry about exactly what year the robots will come to take the jobs. You know what I mean?

Michael Krigsman: Yeah.

Amy Webb: Instead, focus on how fast we really are moving and what all of that implies.

Michael Krigsman: Okay. You'll have to come back another time, and we'll dive deeper into the relationship of the software vendors and how this works. This happens to be where I spend a lot of my time, but we have to move on because we are going to run out of time.

Government Policy and Regulation of AI

We have another question from Twitter. Let's turn our attention back to the government. You mentioned earlier that country-level regulation or policy is not a good thing.

We had, on this show, Lord Tim Clement-Jones who was the head of the U.K. House of Lords Select Committee on AI. I say "was" because that committee did its job and it's no more, as far as I know. Let's take that. Oh, and I'll say, he was a good guy and he seemed to have great recommendations. His heart is absolutely in the right place with this.

Then we have Frank Diana, again, who on Twitter is asking, "Amy, in your conversations with government, do you see any signs of movement at the policy level and any indication that there's an appreciation for the massive shifts on the horizon?"

Amy Webb: Government is big, and here is what I will say. There are people working within the State Department, the White House, and many other branches of our government and military who I think are extraordinarily bright, who understand the sense of urgency, who also happen to have a deep and broad knowledge of artificial intelligence, who themselves come out of engineering and computer science fields, who do care, and who are working on policy. That being said, there are lots of people who are also part of this process, political appointees or people who wound up in these positions for various other reasons, who have no knowledge or understanding.

Under other circumstances, that may not be as critical an issue. However, we have no executive leadership on artificial intelligence. We have no national strategy. We have no national point of view. We have no coordination between the agencies who are working on this.

We have no singular person or department who is in charge of long-term strategic planning. Therefore, these incredibly hard-working civil servants who probably could be making a bunch more money out in the Valley or somewhere else, to some extent are spinning their wheels. That, to me, is terrifying because of what we see happening in China.

I will also say that the same point I made about our national leadership on AI applies to biotechnologies like CRISPR and other fields. We have no funding. We have no national strategy or leadership. This is probably the worst possible time for us, as a country, to be in this position because it also impacts our allies overseas, which is why, any time anybody from the government calls, I always show up; I feel a sense of civic and moral duty to help out. But it is becoming more and more difficult for me.

Anyhow, there is a lot of movement. There are a lot of smart people. We are not moving nearly as quickly as we should, and the progress that has been made thus far, I think, is being celebrated too quickly.

Michael Krigsman: You sound like the committed public policy makers and government servants that I know.

Another question for you as we finish up. I think this is a good one to end on from Twitter. This is from the @CXOTalk account. "Why should we care about the consequences and impact of the way we use AI? Why is it so urgent that we deal with this now?" I'll just add, by extension, Amy, why don't you just leave well enough alone? The free market works absolutely perfectly. We have diversity of opinion. Why are you bothering with this?

Amy Webb: Why bother? Because the free market is failing us at the moment. Again, I will just address the six companies that are based in the United States. We are seeing an extraordinary amount of consolidation, and there are, again, just a few people, relatively speaking, with limited worldviews who, statistically speaking, probably do not have the same backgrounds as you and me, and who are under pressure to commercialize their work. They are making existential decisions for all of us.

By existential, I mean they are in a position to fundamentally alter human existence. I'm not talking about, again, killer robots that are coming to get us all. Instead, I'm talking about much more insidious decisions, decisions that are intended to optimize our lives by nudging us, for example, on our digital devices or by optimizing workstreams within organizations without doing the long-term planning.

The Need for AI Transparency

My concern is that, without any transparency, without the collaboration we might see in other cases, without an influx of funding from the federal government, and without coordination between governments, organizations, and companies elsewhere in the world, we are going to wind up seeing new types of discrimination in which all of us are affected. We're going to see ourselves disenfranchised in ways that could have been prevented. And we're going to, at some point, discover that all of this cool, whizbang, smart technology is, in a way, making our lives a lot more miserable. That doesn't even include the ramifications if China manages to pull off what I think it is attempting to do to reshape the global world order. That impacts, or could impact, your ability to travel in the future. It could impact your business's ability to do business not just in China, but in other countries that China has partnered with around the world. We are talking about significant change, existential change, that will not happen tomorrow or in the next few years but will happen over a period of five to seven decades.

The best way to think of us collectively right now is the proverbial frog in the pot of water. We are the frog. We are in a pot of water and the stove is on. It's going to heat up slowly over a very, very long period of time. Slowly, over that period, it will start to feel hot until, ultimately, we meet our demise.

I know that's a sad way to end the show, but we have to stop having fantastical conversations about AI. We have to stop fetishizing the future. It is time that everybody understands what this technology actually is, how it works, what is at stake, how it impacts your business, and how this could forever change your life, your kids' lives, your grandchildren's lives because it doesn't have to be negative.

We have an opportunity to do great things and to live in a terrific world, one that's far better than we're living in today. That's what I think we could do. I think that's on our horizon, but only if we make better decisions and exercise creative and better leadership right now.

Michael Krigsman: Wow! I bow down to Amy Webb. Amy, thank you so much for taking your time to be with us today.

Amy Webb: Thank you. These questions were amazing. Thank you so much for having me on.

Michael Krigsman: We have been talking with she who shall be known as the Amazing Amy Webb. She wrote this book called The Big Nine. It's one of the most outstanding books that I've seen about AI, the AI future, and what should be done.

Everybody, thank you for watching. Right now, subscribe on YouTube, subscribe to our newsletter. Next week, we're talking with the president and chief marketing officer of AT&T Business. Maybe I should ask him about some of these questions as well.

Amy Webb, thanks again. Everybody, thank you for watching and we'll see you next time. Bye-bye.

Michael Krigsman: Today on CXOTalk, we're speaking with somebody who unpacks the complexity of technology, the complexity of artificial intelligence, Amy Webb. She is a professor at NYU in the Business School, and she is the head, the founder of the Future Today Institute, and a very, very interesting woman. Amy Webb, thank you so much for being here on CXOTalk.

Amy Webb: Of course. Thank you for having me.

About The Big Nine

Michael Krigsman: Amy, your book is just amazing. It's one of the best books that I've seen. It's getting a lot of attention, and it's very well deserved. Please, very briefly, tell us about your background. I think that's a good place to start.

Amy Webb: Sure. I have maybe a strange job title. I'm a Quantitative Futurist. My job is to use data to model emerging technology trends and then to develop risk and opportunity scenarios that tend to be longer-term.

For the most part, my organization, which has been around for 15 years, advises the senior leadership at very large, Fortune 100 companies. We also work with branches of the federal government and military. The purpose of this work is to help everybody see around corners, not make predictions but, rather, make connections. That's what I do. All of the research that I do and all of our methodology and our tools, it's all open source and made available to everybody for free.

Michael Krigsman: The key thing is that these are not just guesses, but all of your predictions are backed by really intensive research.

Amy Webb: Right, so the methodology that we use is sort of a hybrid between process thinking and big sky thinking, but it's a very, very rigorous model. It requires a lot of analysis using numbers. Then there are parts of it that require work in teams where we have different perspectives rolling out the downstream implications of decisions that are being made.

Michael Krigsman: Now, Amy, you just released a fascinating book called The Big Nine. What is The Big Nine? Let's start there.

Amy Webb: Sure. In the course of the normal work that I do as a futurist, one who predominantly focuses on emerging technologies, several years ago I noticed that, when researching artificial intelligence, which, to be fair, is a pretty big field, I kept coming back to the same companies over and over again. In fact, there were nine of them. These are the companies building the custom frameworks and the custom silicon; it's their algorithms, it's their patents.

They have the lion's share of patents in this space. They're able to attract the top talent. They have the best partnerships with the best universities. Essentially, it's these nine companies who are building the rules, the systems, and the business models for the future of artificial intelligence. As a result, they have a pretty significant influence on the future of work and everyday life. There are nine companies: three in China and six here in the United States.

Michael Krigsman: Do you want to list off who those companies are?

Amy Webb: Sure. The ones in China may not be familiar to everybody. They are Baidu, Alibaba, and Tencent.

  • The best way to think about Baidu is its U.S. cousin is Google. Baidu is a gigantic search engine that has a lot of other subsidiaries and business verticals. It also has, much like Google, an autonomous driving unit.
  • Alibaba is sort of akin to Amazon. Alibaba is enormous. It's an online retailer, but it has, again, many other facets just like Amazon does into many other areas of life.
  • Tencent is part social network, part FinTech payments-processing system, and it is also fairly big in the future of healthcare.

That's China.

In the United States are what I call the G-mafia. This is Google, Amazon, Microsoft, IBM, Apple, and Facebook. Together, it's these nine companies that again have quite a bit of influence on our futures.

Michael Krigsman: I want to remind everybody; we're talking with Amy Webb, who just wrote this very interesting book called The Big Nine. Amy, artificial intelligence is an area that you seem very concerned about. Why? Why is that?

Amy Webb: The best way to think about AI is not as a singular technology or some kind of cool technology that's out on the horizon. It's simply the next era of computing. We've had computers in some form now since the mid-1800s, believe it or not, as crazy as that sounds. The first era of computing was simple tabulation, automated, using a machine. The second era of computing was programmable systems. Here we are in the third era.

All you really need to know about AI is that it's a complicated system that uses data to make decisions for outcomes that somebody has determined in advance, which means that all of us are surrounded by artificial intelligence all day long. We just don't think about it in that way. Take something simple, like when you're in your car and you're backing up. If you're in a newer car, you'll hear beeping sounds to make sure that you don't run over a scooter or a bicycle, hit a tree, or something. There might be a little dashboard display showing you video of what's behind you, and it uses computer vision to help with that process.

All of that is something called artificial narrow intelligence, and there are literally millions of examples of artificial narrow intelligence in use in everyday life now. That doesn't seem bad. [Laughter]

The problem that I'm starting to see arise is that artificial intelligence is on two developmental tracks that are fairly different. In China, those BAT companies may be independent but, as Chinese companies, they have to follow the leadership of the Chinese government. China has very different viewpoints on data and privacy, on freedom of speech and expression and, also, on how business ought to be done worldwide and even what the geopolitical map should look like.

In the United States, our six companies, the G-mafia, are publicly traded companies with fiduciary responsibility to shareholders. As a result, they're often under the gun and significant pressure to push AI into the marketplace using commercial products as soon as possible.

Essentially, what we have is consumerism and capitalism driving AI in half of the world. In the other half of the world, artificial intelligence is being developed to further the ideas of the Chinese Communist Party. The challenge is that all of us, everyday people, are stuck in the middle. There's very little transparency about how decisions are being made, and there is not a lot of long-term planning at the uppermost levels of leadership to think about what all of this might mean 10 years from now, 20 years from now, 50 years from now.

Michael Krigsman: We are starting to get some questions from Twitter. We'll get to those in a minute because I have some questions of my own. As the moderator, I will assert moderator's privilege and ask a couple of my own questions first.

AI Different From Other Technologies

Amy, this is all great, what you're saying, but how is this any different from other technologies? What's unique about this? The lack of transparency sounds like you have just described technology and government operations as they have been going for a long time.

Amy Webb: Sure. I think that the key difference is that decisions that drastically affect everyday life are being made by algorithms which were designed by a relatively small group of people working at just a few companies and that process is not in any way meaningfully transparent. What that means in practice is that if you're somebody who has graduated with a computer science degree and you're out looking for a job, as many people are looking for jobs across many different fields, the hiring process is becoming automated. Rather than a human reading your resume, instead, a system is looking through all resumes using pattern recognition and looking for anomalies or looking for areas that meet certain criteria.

We've already started to see evidence of weird biases. If you're somebody with a computer science degree and you took a bunch of extracurricular courses on things like theology, comparative lit, or music, to the algorithms that have been trained, those look like anomalies, and those resumes get discarded.

Now, the weird thing is that, if I were a hiring manager, somebody who is adept, skilled, and shows great promise in AI, and who also has a very broad worldview and lots of additional extracurricular inputs, to me, is going to make a much better, more well-rounded employee. But because we've relegated some of that decision-making to an algorithm rather than a person, those resumes are being discarded. By the way, that's not just happening in CS.

I would argue that this is a little different. Government has tried to step in before to regulate our financial systems, parts of our transportation systems. This is very different because AI is not the same type of technology. This is a series of systems built by a relatively small number of people into which many new startups are plugging in and those systems are predicated on making decisions using various pieces of our data. That's where I think things are different this time around.

Lack of Transparency on AI Data and Algorithms

Michael Krigsman: Fundamentally, is your concern the consolidation of intellectual power as well as capital that is feeding this technology that is surrounding us, combined with the lack of transparency? Is that the core issue here?

Amy Webb: No, the core issue is that we aren't doing any long-term planning. These are publicly traded companies and, in our free market economy, I believe that these companies should have the opportunity to succeed. The challenge is that, here in the United States, we don't have a single capital. I do not believe that Washington, D.C. is the only capital of our country.

We have a nexus of three powers: Washington, D.C.; the financial center, which is New York; and the technology epicenter, which is Silicon Valley along with the Pacific Northwest, so basically the west coast. All three capitals of the United States are codependent. They may not always see it that way, but the decisions they make affect each other, and they can't survive without each other.

The problem is that I see very little evidence of strategic, long-term planning in any of these places. As a result of that, we don't see the kind of risk modeling that you might see in other areas because speed is prioritized over safety again and again when it comes to AI. By the way, that's not the first time. Artificial intelligence in the modern era has been around since the 1950s, and the same exuberance that we're seeing today was present in the 1980s. When a lot of the amazing commercial products failed to materialize, funding was taken away, so it's something to keep in mind.

The issue is that the government has a transactional relationship, at best, with the Valley. These companies are the government's clients. Investors expect returns and are putting undue pressure on these companies to commercialize their products as soon as possible. The market is fickle, in part because of weirdness right now in Washington, D.C. This is not good going forward.

I actually do not believe that these nine companies are evil. I don't believe that the G-mafia are trying to intentionally harm any of us. I think it's just a circumstance of our current situation: because we have no culture of long-term planning, not in our federal government and not in most of these companies, we don't do long-term, rigorous, strategic planning, and so everybody is sort of doing what they think is best for themselves.

We see exactly the opposite happening in China where there is a long history of long-term planning. It's part of the governance culture, and there is an incredibly smart person right now at the helm. Xi Jinping is very, very bright. He understands technology, and he is in the process of exporting Chinese AI and the various systems that have been built with it like surveillance and monitoring systems out to other vulnerable countries with authoritarian leaders. That's what has me concerned.

AI Ethics

Michael Krigsman: All right. We have a couple of questions from Twitter that actually touch on both of these issues, the lack of planning as well as China. Let's go there. Arsalan Khan points out that just today Google announced that it is disbanding its AI ethics board. What do you think about that, and what should companies be doing to do the right thing, essentially?

Amy Webb: Sure. That actually happened a couple of days ago and it had been in progress. This was not an internal board. Some big companies have internal C-suite ethicists with departments that are integrated throughout the rest of the organization. This was not what Google did.

Google assembled a fairly small board, an advisory board, that had no real teeth. This was intended to be a group of people to share knowledge within the rest of the organization. It was a strangely assembled group of people. If I were to put together a group of people to provide serious feedback on ethics but, also, on, again, the longer-term implications of a transformational technology like artificial intelligence, this would not have been the group of people that I put together. As far as I know, there was a little bit of backlash because of some of the politics of those people who were appointed. Basically, a week after it started, it's now gone.

Here's what I will say. Most companies publish some kind of statement of values. Amazon, Google, Microsoft, IBM, you can sort of look around on their websites and their corporate communications materials, and you'll get some sense of what the corporate values are. That is very different from stating, in a granular way, what a company's positioning is on how decisions are made about artificial intelligence. We just don't see a lot of companies taking that piece of this very seriously, not the G-mafia and certainly not outside.

In my book, seeing that this was a problem, I developed a list of 15 questions that would become the basis for that ethics and values statement within an organization, as a way to get started. We just don't see enough action in any real, meaningful way. Again, why? Let's be skeptical.

It seems like a no-brainer, so why wouldn't Amazon and Google just come up with, like, "This is our position on ethics and AI. These are the people. This is our accountability group, and we're going to go all in."

Why isn't that happening? Because it means that we have to make some short-term financial sacrifices in the process. Again, the Street doesn't like that.

Michael Krigsman: Are you a pessimist or an optimist? [Laughter]

Amy Webb: No, no, I'm neither. I'm a pragmatist. My job is numbers, so I work with data all day long. I would say that I'm very passionate about this particular subject because I'm having a hard time building out a model that doesn't end in either a bad future or a catastrophic future. But I am not a natural born dystopian thinker. I'm a pragmatist who understands that we all have agency in the future and the future is not already pre-set; but that if we want great outcomes, we have to put in the hard work in advance.

I often tell people that, just like great marriages, the future takes extraordinary hard work. If you think about people in your life that you know who have been married, who have been legitimately happily married for decades, typically the hallmarks of that marriage include things like compromise, transparency, being authentic, keeping each other accountable, and all of that stuff that's hard to do but you know you have to do in order to make that relationship work.

It's the same when it comes to planning out our futures. You have to be willing to put in really hard work. You have to make short-term sacrifices for the greater good. You have to be willing to compromise, spend time on things that are tedious, all that stuff. But if we're willing to do that together, not just in the United States, but across our geographic country lines, then, yeah, we could have an amazing future and there is potential.

Some of the stories that you're hearing about how AI will do all of these amazing things, they're absolutely plausible, but they're not just going to show up fully formed. We've got to put in the work now to make sure that those things happen.

Practical Steps? / Will China Help?

Michael Krigsman: Okay. On that subject of the optimist, the optimistic view of humans cooperating, we have a perfectly timed question from Frank Diana, who has been a guest on this show before. He's the chief futurist at TCS, which is one of the largest computer services firms in the world. It might even be the largest one; I'm not sure. Frank says this, "In your book," he's reading your book, "You map out prescriptive steps to take to ensure that AI enables human flourishing. China is critical to this outcome. Do you see a path to China cooperating?"

Amy Webb: Yes. I don't know that that path is going to be achievable with this current Administration, so I'll give a quick PSA. I'm politically independent.

With that being said, I think we've had too many instances on the global stage of making promises, pointing fingers, and pulling out of agreements. At this point, if the Trump Administration were to suggest that there's going to be some new, big, international coalition, I think everybody would roll their eyes and collectively, probably, laugh at that, so it may not happen with this Administration. However, I think that there are a couple of possibilities going forward.

One of the tactical suggestions that I'm making is the formation of an international coalition that is primarily incentivized using economics. The organization I've named GAIA: Global Alliance on Intelligence Augmentation. It would include member states from all around the world just like a UN or an IAEA, for example, but it would also include the big tech companies.

Rather than it functioning as a regulator, instead, this would be a global body charged with developing the tools, assets, and guardrails for the farther future.

The problem with China is that China's economy is growing and its population is growing. What we're all probably going to see soon is China experiencing upward social mobility at a scale we've never seen before in modern human history. The only way that China comes to the table and collaborates in a meaningful way is if China's economy still wins. The only way that the United States comes to the table, I think, is if we can avoid the kinds of regulation that would stifle innovation and growth, which means that we need a different model.

We have sort of a blueprint if we look at, for example, the IAEA or the UN, but we need a different implementation of it. GAIA would be charged with creating those guardrails, shoring up the bias in the datasets that everybody knows exists, and doing some of the bigger risk modeling in a very cohesive way. If certain pieces of the ecosystem advance, what could that mean, and how can we avoid problems at the beginning? What could the global standards look like so that everybody can win in the long-term?

I know that's a big ask and it seems like, especially in this business environment, that there's no possible way that we would all come to the table with China, but I think that it's worth a shot. That's one way of doing it.

The other way of doing it is, we just assume that China is never going to cooperate, that they see us as an impediment to their global ambitions, and we let China do its own thing. We bring together all of the other nations around the world and, if we have enough of them with enough people and enough economic power then, collectively, we have some leverage. At that point, the best case scenario is, we have the best and the brightest working on GAIA and they're doing what they were going to do anyway, in addition to planning for cyber issues with China down the road.

Michael Krigsman: Very clever naming it after the Gaia Hypothesis from James Lovelock. [Laughter]

In practical terms then, what should we be doing? When I say "we," maybe break it down into corporations, the government, even individual citizens.

Amy Webb: Sure. I'd love to maybe address the business crowd because I think they get left out of this conversation a lot. AI is already part and parcel of every single business, even a small business. In some way, your organization is currently using artificial intelligence of some kind.

As the business grows and as your core functions grow, you're going to have to start making decisions. Some of those decisions may be whether to use Google's cloud, Amazon's cloud, or Microsoft's cloud. Increasingly, a lot of the AI functions that are super useful to businesses, like predictive analytics and understanding customer sentiment, are going to be located in those clouds, which means that you have to start making smarter decisions, not just on a cost basis, about which one of these companies you're going to hitch your wagon to. While they may offer open source AI systems, they're not free, and open source doesn't necessarily mean interoperable. That's my way of saying it's not like you can just take all your stuff out and port it over to somebody else's network down the road without a tremendous amount of hassle, cost, and everything else.

I would argue that you have to get much, much smarter fast about what AI is, what it isn't, and what it actually does, and be able to distinguish between all of that and a lot of the marketing promises companies are making because, when it comes to AI, there is a lot of misplaced optimism and fear. Too many times, I'm seeing the executive leadership in organizations all around the world either making terrible decisions on AI or assuming that it's not here, that there's some event horizon off in the distance and they're just going to wait. That's a terrific way to get left behind or, worse, to have to make a decision under duress.

I wrote the book for people in organizations who need to get smart fast on AI. The book is also for people who are in those three capitals: the finance folks, the government folks, and the tech folks.

In our government, and in every government, there has been talk about developing country-level AI regulations, which is a terrible idea. First of all, any regulation that's going to have teeth has to be specific. Any super specific regulation with regards to something as fast-moving as AI is going to wind up outdated or impossible to implement.

This is where sort of the levers of a democracy are showing some stress in this age of technology. I think that we're going to have to develop a different way of thinking around this at our government level. We're going to have to develop policy, the kinds of policy we've never seen before.

I have conversations a lot with various parts of the government and military who are very reluctant to do anything different. They don't like change. But if we don't start making some changes and figure out new ways to collaborate where there are economic incentives involved to get the big nine, or at least our six of them, to the table in a voluntary way where they want to help out, then we're going to have all kinds of problems.

AI Planning for Business

Michael Krigsman: When you talk about businesspeople needing to become more educated and make choices around which cloud they use, for example, that seems like an extremely tall order because, essentially, what you're asking them to do, in addition to meeting whatever functional requirements they have, is overlay this almost altruistic assessment of where they think the future of AI and the trajectories of these companies are going. Your average businessperson is A) not knowledgeable about these trajectories and B) simply trying to get the job done, whether that's installing a finance system or whatever it might be.

Amy Webb: Right. Any successful business does financial planning of some kind; I can't think of an exception. It would be impossible to run a business if you don't have some sense of what your capital outlays are, your staffing costs, and what your revenue projections are for the year. That's an example of planning. Most companies do it on a quarterly or an annual basis.

All I'm saying is, add into your planning mix a slightly longer outlook. Again, this is partially why I made my entire methodology available for free and made it open source because we don't do that in the United States, not just in business, but in general. I'm not saying, like, "Figure out the future of AI and then go toward it." What I'm saying is, "Don't allow others to make those critical technology decisions for you. Certainly not the vendors."

The vendors are never going to know you and your organization as well as you do. Outsourcing this critical thinking and strategic planning to just the IT department, or just a skunkworks within the organization, is also a mistake. You have to think about technology and things like process automation, which a lot of companies are now looking at because it offers huge cost savings and a lot of promise, through the lenses of HR, compliance, sales, marketing, and all the other business units. That means there has to be executive leadership and vision from the top on AI.

The easiest thing everybody can do, the one thing everybody could do today, whether you read my book or go online: be able to explain succinctly to another person what artificial intelligence is, why it matters, and how it impacts your industry or your company. That whole process of explaining should take less than five minutes. If you can do that, you're going to be so far ahead of everybody else. If all of us could do that, we would be in a much better position going into the future than we are right now because, right now, we are constantly referencing sci-fi images of artificial intelligence from pop culture, like Skynet from The Terminator, the drones and hosts from Westworld, or any number of other books, movies, and TV shows.

AI Hype in the Enterprise Software Industry

Michael Krigsman: Very briefly, because we're going to run out of time soon: I work with quite a lot of enterprise software companies, large ones and small ones. The hype around AI is ridiculous. Any advice for software companies, major ones or even smaller ones, around how they talk about what they do? Let's put it that way.

Amy Webb: Yeah. I'm really, really glad you asked that question because I haven't had a chance to address it at all with anybody yet. Yeah, the hype is real; it's real bad. [Laughter] You're right.

Again, having met over the years with so many people working in the trenches, heads of organizations, and organizations I've advised, here is my observation. The people working in the trenches, who are actually doing all of the building, the coding, the testing, the supervised learning, and the research, will tell you that it's a slog, that everybody is overpromising and underdelivering, and that all of the crazy, cool AI stuff is many, many years away. If you talk to researchers, some of the more prominent researchers, they may tell you a different story: breakthroughs are on the horizon, all kinds of great stuff is happening, and we're going to see unimaginable successes fairly soon. What's going on between the two groups?

The latter group, some of these big named researchers, they need to attract funding, partially because a lot of countries around the world, including our own, have stripped all of that foundational research funding from the budget. Outside of the military, we have no budget for fundamental research in AI, which means these researchers have to get it from somewhere else. Where are they getting it from? The six companies, the G-mafia in the United States. [Laughter]

They have to generate excitement because, in order to keep working, they have to make sure they've got the funding. Where does that come from? Investors, who then want to see returns and commercial products.

When it comes to the people working in the trenches, oftentimes they're not aware of the bigger picture or where the little piece they're working on fits in. Anybody who remembers doing experiments in high school biology, chemistry, or physics remembers what it was like to build, test, scrap, and learn over and over and over again in order to make something work. For that reason, I think everybody's timeframes are out of whack. That's where, again, it's good to take a much more comprehensive view of what everything looks like and not to worry about the exact year the robots will come to take the jobs. You know what I mean?

Michael Krigsman: Yeah.

Amy Webb: Instead, focus on how fast we really are moving and what all of that implies.

Michael Krigsman: Okay. You'll have to come back another time, and we'll dive deeper into the relationship of the software vendors and how this works. This happens to be where I spend a lot of my time, but we have to move on because we are going to run out of time.

Government Policy and Regulation of AI

We have another question from Twitter. Let's turn our attention back to the government. You mentioned earlier that country-level regulation or policy is not a good thing.

We had, on this show, Lord Tim Clement-Jones who was the head of the U.K. House of Lords Select Committee on AI. I say "was" because that committee did its job and it's no more, as far as I know. Let's take that. Oh, and I'll say, he was a good guy and he seemed to have great recommendations. His heart is absolutely in the right place with this.

Then we have Frank Diana, again, who on Twitter is asking, "Amy, in your conversations with government, do you see any signs of movement at the policy level and any indication that there's an appreciation for the massive shifts on the horizon?"

Amy Webb: Government is big, and here is what I will say. There are people working within the State Department, the White House, and many other branches of our government and military who I think are extraordinarily bright, who understand the sense of urgency, who also happen to have a deep and broad knowledge of artificial intelligence, who themselves come out of engineering and computer science fields, who do care, and who are working on policy. That being said, there are also lots of people who are part of this process, political appointees or people who wound up in these positions for various other reasons, who have no knowledge or understanding.

Under other circumstances, that may not be as critical an issue. However, we have no executive leadership on artificial intelligence. We have no national strategy. We have no national point of view. We have no coordination between the agencies who are working on this.

We have no singular person or department who is in charge of long-term strategic planning. Therefore, these incredibly hard-working civil servants who probably could be making a bunch more money out in the Valley or somewhere else, to some extent are spinning their wheels. That, to me, is terrifying because of what we see happening in China.

I will also say that what I just said about our national leadership on AI applies equally to biotechnologies like CRISPR and other fields. We have no funding. We have no national strategy or leadership. This is probably the worst possible time for us, as a country, to be in this position because it also impacts our allies overseas, which is why, any time anybody from the government calls, I always show up. I feel a sense of civic and moral duty to help out, but it is becoming more and more difficult for me.

Anyhow, there is a lot of movement. There are a lot of smart people. We are not moving nearly as quickly as we should, and the progress that has been made thus far, I think, is being celebrated too quickly.

Michael Krigsman: You sound like the committed public policy makers and government servants that I know.

Another question for you as we finish up. I think this is a good one to end on from Twitter. This is from the @CXOTalk account. "Why should we care about the consequences and impact of the way we use AI? Why is it so urgent that we deal with this now?" I'll just add, by extension, Amy, why don't you just leave well enough alone? The free market works absolutely perfectly. We have diversity of opinion. Why are you bothering with this?

Amy Webb: Why bother? Because the free market is failing us at the moment. Again, I will just address the six companies that are based in the United States. We are seeing an extraordinary amount of consolidation, and there are just a few people, relatively speaking, with limited worldviews who, statistically speaking, probably do not have the same backgrounds as you and me, and who are under pressure to commercialize their work. They are making existential decisions for all of us.

By existential, I mean they are in a position to fundamentally alter human existence. I'm not talking about, again, killer robots that are coming to get us all. Instead, I'm talking about much more insidious decisions, decisions that are intended to optimize our lives by nudging us, for example, on our digital devices or by optimizing workstreams within organizations without doing the long-term planning.

The Need for AI Transparency

My concern is that, without any transparency, without the collaboration we might see in other cases, without an influx of funding from the federal government, and without coordination between governments, organizations, and companies elsewhere in the world, we are going to wind up seeing new types of discrimination that affect all of us. We're going to see ourselves disenfranchised in ways that could have been prevented. And we're going to, at some point, discover that all of this cool, whizbang, smart technology is, in a way, making our lives a lot more miserable. That doesn't even include the ramifications if China manages to pull off what I think it is attempting to do to reshape the global order, because that could impact your ability to travel in the future. It could impact your business's ability to operate not just in China, but in other countries that China has partnered with around the world. We are talking about significant, existential change that will not happen tomorrow or in the next few years but will unfold over a period of five to seven decades.

The best way to think of us collectively right now is the proverbial frog in the pot of water. We are the frog. We are in a pot of water and the stove is on. It's going to heat up slowly over a very, very long period of time. Slowly, over that period, it will start to feel hot until, ultimately, we meet our demise.

I know that's a sad way to end the show, but we have to stop having fantastical conversations about AI. We have to stop fetishizing the future. It is time that everybody understands what this technology actually is, how it works, what is at stake, how it impacts your business, and how this could forever change your life, your kids' lives, your grandchildren's lives because it doesn't have to be negative.

We have an opportunity to do great things and to live in a terrific world, one that's far better than we're living in today. That's what I think we could do. I think that's on our horizon, but only if we make better decisions and exercise creative and better leadership right now.

Michael Krigsman: Wow! I bow down to Amy Webb. Amy, thank you so much for taking your time to be with us today.

Amy Webb: Thank you. These questions were amazing. Thank you so much for having me on.

Michael Krigsman: We have been talking with she who shall be known as the Amazing Amy Webb. She wrote this book called The Big Nine. It's one of the most outstanding books that I've seen about AI, the AI future, and what should be done.

Everybody, thank you for watching. Right now, subscribe on YouTube, subscribe to our newsletter. Next week, we're talking with the president and chief marketing officer of AT&T Business. Maybe I should ask him about some of these questions as well.

Amy Webb, thanks again. Everybody, thank you for watching and we'll see you next time. Bye-bye.