The Global Impact of Artificial Intelligence
Artificial intelligence is primed to pervade everyday life, from autonomous vehicles to intelligent ads that anticipate your desires. How will these shifts vary globally, and what do they mean for the future of work, life, and commerce? Two big thinkers share their views: Darrell West, editor in chief of TechTank at the Brookings Institution, and Stephanie Wander, who designs prizes for XPRIZE. Longtime CXOTALK guest, David Bray, joins the conversation.
Dr. David A. Bray began work in public service at age 15, later serving in the private sector before returning as IT Chief for the CDC's Bioterrorism Preparedness and Response Program during 9/11; volunteering to deploy to Afghanistan to "think differently" on military and humanitarian issues; and serving as a Senior Executive advocating for increased information interoperability, cybersecurity, and civil liberty protections. He completed a PhD at Emory University's business school and two post-docs at MIT and Harvard. He serves as a Visiting Executive In-Residence at Harvard University, a member of the Council on Foreign Relations, and a Visiting Associate at the University of Oxford. He has received both the Arthur S. Flemming Award and the Roger W. Jones Award for Executive Leadership. In 2016, Business Insider named him one of the top "24 Americans Who Are Changing the World".
Darrell M. West is vice president and director of Governance Studies and holds the Douglas Dillon Chair. He is founding director of the Center for Technology Innovation at Brookings and Editor-in-Chief of TechTank. His current research focuses on educational technology, health information technology, and mobile technology. Prior to coming to Brookings, West was the John Hazen White Professor of Political Science and Public Policy and Director of the Taubman Center for Public Policy at Brown University.
Stephanie Wander comes to XPRIZE with over 12 years of diverse experience in film production and digital media. Recently, Ms. Wander earned her MBA from the UCLA Anderson School of Management, where she focused on Sustainability, Corporate Social Responsibility, and Non-Profit Management. Ms. Wander uses this background to help design prizes that address the world’s grand challenges.
Transcript
Michael Krigsman: Live from New York City! No, we’re not in New York. I’m in Boston! [Laughter]
Welcome to CxOTalk! I'm Michael Krigsman, industry analyst, and your host. CxOTalk brings together the most interesting, experienced people for in-depth conversations about issues such as the one we're discussing today: artificial intelligence. We have a wonderful show today, and we'll be speaking with three great guests.
Stephanie Wander is with the XPRIZE. David Bray is an Eisenhower Fellow, an executive-in-residence at Harvard, and is CIO of the FCC. And, Darrell West is from the Brookings Institution.
[...] I want to remind everybody that there is a tweet chat happening right now, using the hashtag #cxotalk. And, I want to give a special thank you to Livestream for supporting us with our video infrastructure and distribution. Livestream is really great, and we love those guys.
So, let’s begin. Stephanie Wander, how are you? Thanks for being here. Tell us about yourself and about XPRIZE.
Stephanie Wander: Thank you so much for having me! XPRIZE is a nonprofit foundation. We are dedicated to changing the world by offering large-scale incentive competitions. So, we offer millions of dollars to teams of innovators to solve the world’s biggest problems. At XPRIZE, I’ve been really privileged to design several prize competitions, including the one we recently launched for the IBM/Watson XPRIZE. And, I’ve been recently changing roles a little bit; I’m going to start working to launch our XPRIZE Institute, which will be a strategic pillar for our organization helping to set forth our vision towards overall abundance.
Michael Krigsman: Okay. David Bray, you have been a guest here on CxOTalk quite a number of times. Welcome back!
David Bray: Thanks for having me, Michael! So, you already mentioned my role as Chief Information Officer at the Federal Communications Commission. It’s a nonpartisan role, working across the eighteen different bureaus and offices of the FCC. Our scope is everything wired and wireless in the United States.
My other hat is that of an Eisenhower Fellow to Taiwan and Australia, which meant that in 2015, I got to meet with both public sector and private sector leaders in each country and ask what their plans were for the Internet of Things. That conversation continues now. Obviously, with the Internet of Things, the only way you're going to make sense of all that data is with machine learning and AI.
Michael Krigsman: Fantastic! And, Darrell West. This is your first time, and welcome to CxOTalk!
Darrell West: Thank you, Michael! It’s nice to be with you.
So, I direct the Brookings Center for Technology Innovation, so we're interested in all things digital. We're especially interested in the legal, policy, and regulatory aspects of technology: how all the various innovations that we're seeing affect people, and the impact on society and the economy. Some of our work is based in the United States, but some of it is global in nature.
Michael Krigsman: So, Darrell, let's kick off the conversation. Would you share your view of digital disruption with us, and the changes that we're seeing around us?
Darrell West: Well, we are living in such an extraordinary time period, just because the pace of change is amazing, you know? Think about what we're seeing now: the rise of robotics is starting to transform the workplace; many factories are being automated; artificial intelligence is developing in many different areas; virtual reality is starting to come on the scene. Just last week, I was watching the Super Bowl, and there actually were ads for virtual reality. So, we're seeing that start to hit the consumer market.
So, it really is a great time period, but it also raises questions, good and bad, for society. Therefore, we need to think about what impact these various emerging technologies are going to have on all the rest of us.
Michael Krigsman: So, what are some of those impacts on society that we’re likely to experience with artificial intelligence, autonomous systems, autonomous vehicles, and so forth? Does anyone want to take a crack at that?
Stephanie Wander: Sure, I'm happy to. At XPRIZE, we really look at AI in two ways. One is we believe these disruptive technologies will have an incredible impact on our ability to change the world for the better, to actually help create more equity, and to really enable us to personalize solutions so that everybody has access to the best possible solutions for them, whether it's in health or education.
We certainly are looking at risks like automation and how it might impact the workforce. We're also really interested in surprising events that may follow. For example, we were looking at tissue engineering and realized that as autonomous driving takes hold, we're going to have a shortage of organs available for transplant, since fewer traffic fatalities mean fewer donors. So at XPRIZE we're thinking about how we get ahead of those problems and plan for that future in the best possible way.
David Bray: So, I'll build on what Stephanie said. My conversations in Taiwan and Australia were really about how we do the business of a nation, and how we do that business at the local level. We know that with automation, machine learning, and AI, there are going to be huge advantages. We may be able to make sense of data to make communities healthier and safer in ways that humans would just not have the time to, wading through all the data.
At the same time, we know that a lot of what the private sector is aiming for is to automate jobs that right now have a human in the loop. You'll actually have greater productivity if the human's not in the loop, which raises questions: What are the jobs of the future? Will more jobs be created than destroyed by AI and autonomous systems? And really, what type of education is needed to make sure we have a workforce that can even be competitive in that future scenario?
Michael Krigsman: Stephanie mentioned this combination of providing greater equity in the world, but at the same time, [there are] risks. It seems to me that this question of balance among the possibilities, opportunities, and risks is at the crux of the issue and the questions around AI, as well as the ethical questions.
Darrell West: I think that is the case. Trying to find the proper balance on technology innovation is the key challenge that awaits us, because when you think about a lot of these technologies, they are going to liberate people, make us more efficient, and free us to do a lot of other things. There are a lot of good things there. I personally love technology and the freedom that comes with it.
But then at the same time, there are questions about a possible loss of privacy. We're going to have billions of sensors deployed in the workplace, in people's homes, in transportation and energy systems, and in health care. How are we going to navigate this new world? People are used to dealing with computers directly: you have a computer, and you work on your tablet and your laptop. Increasingly, computing is going to move to machine-to-machine communication, so humans are going to be taken out of the equation. We have to make sure that when these machines are making decisions, they are respecting basic ethical considerations, making decisions in a non-discriminatory manner, and doing the types of things that we want, as opposed to things that might create problems for us.
Stephanie Wander: If we step back a bit, I think we're not even just talking about artificial intelligence. We're talking about a really rapid pace of change in society: how do we capture data, understand what's actually happening, and understand what kind of impact we're even having? And the time we have to analyze before acting is going to decrease over time. So, it's really going to be interesting to see how, as a society, we manage all of the opportunities and challenges that arise.
David Bray: And if I can echo what both Darrell and Stephanie said: there are huge opportunities and huge benefits, and it really is about the rapid pace of change. If you think about it, when the car first came out, nobody really thought we were going to have these challenges of interstate crime, where you can use a vehicle to participate in a crime outside your own locality, and the local police may not know who you are. That doesn't mean we shouldn't have rolled out cars. We definitely should move forward, and we should try to embrace these things, because technology itself is amoral. It's how we choose to use the technology that determines whether it's a good or bad outcome.
And so what I want to see is: what are the conversations we need to have with the rollout of AI and machine learning, so that we can be informed of the choices, both as individuals and as societies? Because really, what we're facing is a rapid change in technology and AI, and how it's impacting individuals: people are becoming super-empowered, but humans themselves aren't really changing. So, ultimately, what do we do in a world in which people are super-empowered through technology? What does it mean for family lives, work lives, and society?
Michael Krigsman: Is this something that's screaming out for regulation? Does the market regulate it? How do we balance the risks associated with AI and the fact that there may be disproportionate advantages and disadvantages to certain groups inside society?
Darrell West: I think we have to be careful about being heavy-handed in the regulatory process. When you look at the past with emerging technologies, we've often done that. But with digital technology over the last few decades, we have allowed private sector companies to experiment, to innovate, and to bring new products to the marketplace. Basically, the government's role has been building infrastructure and thinking about the broader legal and regulatory environment, while trying not to impose too many restrictions, because we want to see what these innovations can do. Now, as that has played out, we've seen some problems. So, I think it's appropriate for agencies to step in and deal with particular issues.
David Bray: I personally am concerned that the pace of technology change is such that top-down regulation is not going to be able to keep up with it. So, I'll agree with Darrell that you want to see what's possible, but we may even need to rethink how we begin to address this, just because of the sheer scope of change. If you go through the normal process of review and coming up with some idea and some response, that might take two or three years, and we know that's lifetimes. Stephanie can maybe speak to the next XPRIZE cycles; you probably can't think beyond six or twelve months, just because what the technology can do is changing every six months.
Michael Krigsman: Yeah...
Stephanie Wander: That's spot on. I would say we think in roughly three to five-year time horizons for the most part. I wanted to speak to the other side of this, which is: what do we do about bringing everybody with us on this journey? At XPRIZE, we feel really strongly that the smartest people are out there in the crowd. We're about to see a billion people come online in the next decade, and for us, it's really about ensuring that they have access to education, and that we are empowering everybody with tools to be solvers. I think a lot of this will come down to who actually has access to the wealth that's generated in the coming decades: whether we see everyone getting the benefit, or whether we see a top-down kind of model.
Michael Krigsman: You know, the thing is that for everybody to get the benefit, it requires social change. So, we've got this technology, AI, and technology in and of itself is neutral. But it has the power to drive so many changes, as David was saying: economic change, social change, cultural change. And so, how do we manage through this transition period, which may be a lengthy one as well?
Darrell West: I mean, the key thing at this stage is really digital access. We still have about 20% of Americans who are not online, and therefore not able to share in the benefits of this amazing technology revolution that is taking place. And even among the 80% who do have access to the internet, some have speeds too slow to take advantage of the latest developments. So, I think one key challenge for all of us is to increase access in a way that allows everybody to take advantage of what is taking place. And then, when you look internationally, as Stephanie mentioned, there are billions of people who have no access to the technology. Many of those people are located in India, China, and various parts of Africa. Therefore, we're working on ways to improve digital access in those parts of the world as well.
David Bray: I would almost equate it to how the industrial revolution took about a hundred years. I think we are going to compress that, and I'm just going to put out an estimate: it's going to be a hundred years' worth of change, compared to the industrial revolution, in less than 20 years. And if you think about where we started in the 1800s, 95% of people never went beyond a five-mile radius of where they were born. Then, at the end of the industrial revolution, we travel over oceans and across large continents. So you're right, Michael, that this is going to change how we live, how we work, and how we experience the world.
Supposedly, according to historians, the way we coped with the industrial revolution was through alcohol. Now, I'm not saying we're going to cope with the AI revolution through alcohol, but humans are going to need some safety valve. Maybe it's going to be virtual reality, or maybe augmented reality, or some other way to help us cope, where we actually have a little bit of fun with the technology but also recognize that the way we fundamentally live is changing; we are no longer just going to live within the five-mile radius of where we were born.
That same sort of thing is going to happen with work: you're not necessarily going to have just one job, or even do a job by yourself. You'll be working with the collective intelligence of both machines and humans, in ways that we can't even comprehend today.
Michael Krigsman: How do we avoid the problems that we currently have, even in this country, resulting from globalization? For example, people who were working in a factory where the jobs have been displaced are bearing the brunt, while the broader economic benefits of globalization accrue to the country as a whole. So there's a disproportionately negative effect on that particular group of people, and it's expressing itself in politics today.
David Bray: I will avoid the political side, because I'm nonpartisan, but I will say you've hit the nail on the head, and this is with all respect to anyone who's an economist. Economics was developed at a time when you couldn't know all the economic activity going on in the world in real time, and so it is an approximation of what we think human behavior is. But it's really not a science. In fact, there's a wonderful article from the early 2000s in the American Economic Journal that takes ten classic predictions from game theory and looks at how people actually behave, and it turns out they match actual behavior with only about 30% accuracy.
And so, it may very well be that we have made policy guesses in the past that were not based on actual empirical evidence as to whether they would create jobs or sustain people's livelihoods. We're now facing the fact that there may be a low tide of globalization, in which workers in a country with a strong currency cannot compete as easily as those in countries with weaker currencies. Now, I'm not saying we should devalue our currency, but it does raise the question of whether we made decisions in the past that were anecdote-based as opposed to empirically based.
Maybe now with the Internet of Things and machine learning, we can look at what people’s economic decisions are around the world in near real-time. We can see what would actually trigger and stimulate more jobs at the local level in rural areas that maybe are losing jobs at the moment, and actually have it be evidence-based job creation as opposed to anecdote-based job creation.
Michael Krigsman: Darrell, what’s going on around the world? You spent a lot of your time in other countries and so, how are other countries thinking about this very difficult set of issues?
Darrell West: Every country in the world is trying to figure out exactly what the policy should be and what type of encouragement they should give to build a pro-innovation economy. They want the advantages that they see taking place in the United States and in Europe. Recently, I've been in Singapore, which is a hotbed of technology innovation; Singapore actually is a global leader in many aspects of new technologies. I've been to China; they're trying to figure out how to take advantage of these trends. They see technology as a big driver of the next stage of economic development, and they want to make sure that their companies are at the leading edge of this.
But, you know, it's complicated for every country, because they look at the United States, and especially Silicon Valley, and they say, "Oh, we'd like to have a Silicon Valley in our country as well." But it's been virtually impossible for other countries to replicate that model, because the United States has this particular blend of educational institutions, the ability to raise capital through venture firms and otherwise, and a regulatory process that has been pretty light-touch, which has allowed these firms to innovate. So, other countries are trying to find their own particular niches so that they can build a twenty-first-century economy.
Michael Krigsman: Darrell, I know you have focused very much on autonomous systems like autonomous vehicles. How does that break out, as opposed to the broader AI sector?
Darrell West: We have done work on autonomous vehicles. We put out a paper looking at the development of autonomous vehicles in China, Europe, Japan, Korea, and the United States. Virtually every region and every major car manufacturer around the world is interested in this new technology and is spending millions and billions of dollars trying to promote it. So, everybody is interested in this.
This is a revolution that is probably going to take place much more rapidly than many people realize. Most of the major car companies are aiming to roll out actual autonomous vehicles by 2020, so that's not very far away. We're already starting to see it in the taxi area and in the car-sharing business. Another sector likely to be disrupted is truck driving and delivery systems. There is a lot of experimentation taking place there. In the United States, China, and other places, there's a lot of enthusiasm about this, because they see this technology developing very rapidly, and they're poised to deploy it commercially.
Stephanie Wander: And just to add to that: I've encouraged people to really pay attention to autonomous flight as well. There are actually quite a few companies building electric, highly efficient aerial vehicles for human transport. So pay attention to Uber; pay attention to a company called EHANG; there's just a lot of interesting work that we'll probably see a lot sooner than we think. There are going to be some major regulatory issues for them to get through, but from a technology standpoint, we're very close to having personal flight in our lifetime.
Michael Krigsman: We have a number of different, we could say, application areas of AI, and autonomous systems, and machine learning. Are the ethical and policy issues distinctly different for each of these? How do we address that aspect?
David Bray: So, I personally would advocate context, context, context. I do think context matters. That said, I think we need to approach it first and foremost with an almost human-centered approach, asking, as Darrell said, "Is this giving more freedom, more autonomy?" Technology is great in that it gives people freedom, so we need to think about continuing to provide people more freedom.
At the same time, with freedoms provided to the individual, what are the possibilities in what they could do to other individuals? So I almost think we need to take the Golden Rule, "Do unto others as you would have them do unto you," and update it for the technology era: have the AI and the machine learning be such that they allow you to do unto others as they would permit you to do unto them. So, we need to be able to express, in a way that's not checking a thousand boxes or constantly changing our privacy settings, what we permit to be done to both our person and our digital self. And then the machine and the AI respect that.
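To make that "Golden Rule 2.0" concrete, here is a minimal sketch of what a machine-readable consent policy might look like; the ConsentPolicy class, its data categories, and its purposes are hypothetical illustrations, not any real standard or API.

```python
# A minimal sketch (hypothetical, not a real consent standard) of the
# "Golden Rule 2.0" idea: a person expresses what may be done to their
# data, and an automated agent checks that policy before acting.
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    # Maps a data category to the set of purposes the person permits.
    permissions: dict = field(default_factory=dict)

    def permits(self, category: str, purpose: str) -> bool:
        return purpose in self.permissions.get(category, set())

# One person's expressed preferences.
alice = ConsentPolicy(permissions={
    "location": {"traffic_safety"},  # share location for road safety only
    "health": set(),                 # share health data for nothing
})

def agent_act(policy: ConsentPolicy, category: str, purpose: str) -> None:
    """An AI agent consults the person's policy before using their data."""
    if policy.permits(category, purpose):
        print(f"Using {category} data for {purpose}")
    else:
        print(f"Refusing: {category} data not permitted for {purpose}")

agent_act(alice, "location", "traffic_safety")  # allowed
agent_act(alice, "location", "advertising")     # refused
agent_act(alice, "health", "research")          # refused
```

The point is not this specific mechanism but the design idea: consent is expressed once, in one place, and machines enforce it, rather than users checking a thousand boxes per service.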
Darrell West: And to add to what David was saying: when we look at autonomous vehicles, the legal and regulatory challenges are enormous, just because the transformation and the impact on people's lives are going to be enormous. For example, when we were doing our research on autonomous vehicles, I was surprised to discover that fully autonomous vehicles collect over a hundred thousand data points. People have no idea how many sensors there are and what kind of information is being measured. Autonomous vehicles have sensors that measure what's going on in the engine, the speed, and how you're dealing with various things that you encounter. People are going to be surfing online and texting while they're riding in fully autonomous vehicles, often over the automobile's WiFi system, so basically everything you're doing, which you think is a personal act, is going to be captured.
So, the question is how we deal with that, and who has access to that information. Insurers, of course, love this, because right now, when they're offering safe-driver discounts, they're basically having to take your word that you're respecting the speed limit and driving safely. In the future, they may be able to get access to your car's actual data to find out: Are you going over the speed limit? Are you driving drunk? There are sensors that can measure your alcohol level as you sit in the car seat. Who's going to have access to this information? Who owns it, and for what purposes are they going to use it?
Michael Krigsman: We have an interesting comment from Twitter, and I’m hoping I’m going to pronounce his name correctly. Ergun Ekichi, I probably pronounced it wrong. Anyway, he makes the comment that with the increased adoption of AI, technology is changing the way enterprises engage and understand customers, and I think that there’s a real possibility for that exact same thing to happen inside the public domain, with the relationship between governments, policies, and citizens. So, any thoughts on this?
Darrell West: Well, there's always the potential for technology to bring citizens closer to government. If you have complaints about garbage collection in your neighborhood, you often feel that government is remote, unresponsive, and doesn't really address your problems. But through some of the smart-city applications that are coming online, it's possible to make that complaint, have the city agency deal with it, and track in real time what they are doing and how they are responding to your particular problem. So, there's a potential for really good things coming out of this. There's a lot of citizen cynicism today; people feel that government's not very responsive to what people want. Technology may end up being part of the solution for that.
David Bray: I will build on that real quick and just say, I think with machine learning and AI, there's an opportunity to bring citizens and public service closer together, and maybe even re-envision how we actually do public service. When I talk to audiences, I like to ask them to raise their hand: "Do you have in your pocket the ability to call anyone in the United States, anyplace, anytime?" Most of them raise their hands if they've got a smartphone or a cell phone. But then I say, "Did you have that twenty years ago?" and most people did not.
And so, the same thing with machine learning and AI: we may be able to stop doing some things that require a government professional and instead think about things that can be done by the public directly. I mean, if you saw pollution in your area, or dangerous road construction causing traffic in your area, would you be willing, if the data was kept private and anonymous, to share that data to inform public service to fix the problem? Probably you would, if you were assured it would be kept private. And so, things that in the past required government workers to spot and fix, the public could maybe take on, if they're concerned at a local level about making their communities healthier and safer.
And similarly, some things were centralized just because of the time it took for information to travel: going from Topeka, Kansas to DC took four days on horseback maybe one hundred and fifty years ago; now it's milliseconds. So, there may also be public-private partnerships, and that's why I like to say "government" is an increasingly outdated term. What we really should be using is the phrase "public service," which includes members of the public, public-private partnerships, and then government workers.
Michael Krigsman: So, one of the themes that has come out so far during this conversation is this notion of equitable access to resources, and also the notion of partnership. David, you just mentioned public-private partnerships. And Arsalan Khan asks a very interesting question on Twitter that I think hinges on the notion of bias in the data. With AI, if your data is biased, you're going to have biased outcomes, and equitable access and results depend on impartiality. So, how do you think about this issue of bias?
Darrell West: This has been a risky area for some of these emerging technologies. There already have been issues where technology, instead of playing to our best instincts, allows people to play to their worst instincts. For example, on ride-sharing services, on Airbnb, and so on, there has been some evidence that if hosts or drivers see a picture of an African American who wants to rent their home or get in their vehicle, they are a little less likely to accept that minority guest or rider. That's an example of where the technology itself is neutral, but the way people use it isn't necessarily so neutral. So, we have to be careful, as these artificial intelligence systems develop, as data analytics take place, and as we see a big increase in machine-to-machine communications, that the technology respects the values we care about: that it doesn't allow us to discriminate, to act unfairly, or to play to our own worst instincts.
Stephanie Wander: Yeah, just to build on that: what you sort of [...] Michael, is really the double-edged sword of artificial intelligence. We actually have technology that can help make decisions for us, or personalize things. Of course, humans have implicit bias, so our data will also be biased, and the decisions machines make for us may carry some of that bias. I think the question of our age is going to be: how do we outsource decisions in ways that improve our lives, while still managing that bias and ensuring we get broad exposure? We have much more of an opt-in culture toward the world's knowledge, and there's a potential scenario where we capture a lot of information and, as a society, kind of lose it: it's captured, but we no longer choose to look at it or question it. It really is a tough, tough question, and I think we'll spend a lot of time talking about it in the future.
Michael Krigsman: In a way it's different, but we're dealing today with the issues of privacy and data collection. It's not exactly the same, but in terms of the pervasive collection of data: what do third parties, such as [other] companies, do with that data? It seems like drilling down into this is one of the most basic ethical issues associated with AI, at least in the public sector for sure.
David Bray: So, beyond the computer science mantra of garbage in, garbage out, what we really need to think about is: can people at least be aware of the data being fed to teach and inform the machine? And can we maybe even have a machine that watches what conclusions other AIs reach and points out if it observes biases? For example, California right now is actually trying to use machine learning and AI to help set bail decisions. The challenge is that it was initially fed historical bail decisions, and when people examined the bail decisions it started to make, they realized there was a bias in those past decisions. It was taking into account someone's gender, or their race, or their height, or their weight, which really should not matter when you're trying to set a bail decision.
So, I agree. In some respects, it's like going back to James Madison in 1788, in the Federalist Papers, where he said, "What is government, but the greatest reflection of humanity? If all men were angels, no government would be necessary." Let's just replace the word "government" with AI and say, "What is artificial intelligence, but the greatest reflection of humanity? If all men and women were angels, we wouldn't need AI." So, it's going to reflect us, but it may also be able to ... I'd actually love to see, and unfortunately I don't know if I can convince Stephanie at XPRIZE to do this, a challenge for a machine that would actually help point out our own biases as individuals, so at least we're aware of them, and then we can try to figure out how to address them.
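One way such a bias-spotting machine could start is by comparing a model's decision rates across groups defined by attributes that, as Bray notes, should not matter. This is a minimal sketch with hypothetical data and an arbitrary tolerance; a real audit would add statistical significance tests and domain review.

```python
# A minimal sketch of auditing a model's decisions for group disparities.
from collections import defaultdict

def decision_rate_by_group(decisions, group_labels):
    """Fraction of favorable decisions (e.g., bail granted) per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, group_labels):
        totals[group] += 1
        favorable[group] += decision  # 1 = favorable, 0 = unfavorable
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical model outputs and a protected attribute for each case.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "B", "A", "A", "B", "B", "A", "B"]

rates = decision_rate_by_group(decisions, group)
print(rates)  # with this toy data: {'A': 1.0, 'B': 0.0}
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # assumed tolerance, for illustration only
    print(f"Warning: decision-rate gap of {gap:.0%} across groups")
```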
Michael Krigsman: We have a very interesting question from Twitter. Chris Curran is a partner at PWC and has been doing digital transformation and CIO-related research for the many years that I’ve known him. He asks, “How do you determine if a large machine learning training dataset is biased?” I mean, that’s a really tough question!
Darrell West: Yeah, I think that's a great question, and I think the answer comes down to open data. These technology systems are generating extraordinary amounts of information, and this information can allow decision-makers to make decisions in real time. We're used to research projects that take days, weeks, or months to collect data, analyze it, and report back. In the digital world, we can actually get those data in real time, analyze them almost immediately, and act on the latest information. But, as Chris suggests, it's a tricky issue when you're analyzing material in real time: how do you make sure that the information is there, that it's accurate, that it's being used for good purposes, and that it doesn't enable discrimination by people who hold various points of view? That is the real challenge.
But if we make the data open and accessible to researchers, that acts as a check on what's going on in those systems, because researchers can look at the data. We've already seen several examples where this is taking place, and researchers have identified some problems. So I think, ultimately, that's a way to build in some safeguards and make sure that these systems are serving us, as opposed to more nefarious purposes.
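As a sketch of the kind of check an outside researcher might run on an open training dataset, the code below compares label base rates across groups and looks for proxy features correlated with a protected attribute; the column names and data are hypothetical.

```python
# A minimal sketch of screening a training dataset for bias before any
# model is trained. Hypothetical columns; real audits need more rigor.
import pandas as pd

def label_rate_by_group(df, label_col, group_col):
    """Positive-label rate per group; large gaps suggest biased labels."""
    return df.groupby(group_col)[label_col].mean()

def proxy_correlations(df, protected_col, feature_cols):
    """Features strongly correlated with a protected attribute can act
    as proxies for it even if the attribute itself is dropped."""
    encoded = df[protected_col].astype("category").cat.codes
    return {c: df[c].corr(encoded) for c in feature_cols}

df = pd.DataFrame({
    "approved":   [1, 0, 1, 1, 0, 0, 1, 0],  # historical outcomes
    "group":      ["A", "B", "A", "A", "B", "B", "A", "B"],
    "zip_income": [80, 40, 75, 90, 35, 42, 85, 38],
})

print(label_rate_by_group(df, "approved", "group"))     # 1.0 vs 0.0: suspect
print(proxy_correlations(df, "group", ["zip_income"]))  # strong proxy signal
```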
Michael Krigsman: What are some of the safeguards that need to be built in to help the public sector keep up with that rate of change that we are talking about, and help to support the growth of technology such as AI; but at the same time, ensure that the public interest is met along with it?
David Bray: So I think we need safe spaces to experiment. The challenge we're facing in public service is obviously tight funding constraints; we now also have a hiring freeze. So, I think the question is: where are the places where we need to make sure the trains keep running on time, where things are 99.9% up? Those are things that maybe you can't experiment with yet. But at the same time, if you don't find anything you can experiment with, to try to use AI more and machine learning more, to push the envelope, then we're going to quickly find that we fall behind and are out of date.
So, we need these safe spaces. Just as we have the Defense Advanced Research Projects Agency to help keep the Department of Defense abreast of technology change and how to bring it into their mission, we may need a civilian equivalent of an advanced research projects agency, where agencies and departments could bring their thorny problems: places where what they're doing right now could be better, could be an exponential leap in serving the public. At the same time, they may not necessarily know the answer; they're going to have to take an experimental approach. What I would love to see is this group reach out to the private sector, reach out to individual members of the public, and say, "Here are our real thorny problems." Maybe it's that we need better data and a better science of how to create jobs. Do people have ideas for how we can do that? Then invest in those ideas and see what works.
And what I would really love is for this not to be specific to any one department or agency, because in some respects I think we need to defragment that; the problems we now face span multiple agencies and departments. Instead, take maybe at most three to five big issues that cut across all of government at the local, state, and federal levels, bring those problems to bear, and then use the power of We the People to pitch ideas for how we're going to bring AI and machine learning to help solve those issues.
Darrell West: And to add to what David just said, the other thing we should be thinking about is improving the level of transparency in how artificial intelligence operates, so that all of us have a sense of the bases on which these systems are making decisions. Right now, AI is a big black box. There are algorithms and millions of lines of software code. We don't necessarily know what dimensions are being put into those things, so some people have suggested a little more transparency about how those algorithms operate, what the bases for these artificial intelligence systems are, and how machines are trying to learn from the big data being generated. All of those things would make a big difference in making people more comfortable with some of these emerging technologies.
Michael Krigsman: But Darrell, clearly what you're saying is right. Yet the moment you start talking about transparency, there are people, especially in the commercial sector, who will stand up and say, "Wait a second. These are our proprietary trade secrets." So, you get right back into the crux of the problem of balancing public interest against private need.
Darrell West: Absolutely. And that is a very crucial question. Certainly, companies should be allowed to have some proprietary information; there's a long history throughout the world of trying to protect that. But with emerging technologies, we also need to understand that the social and economic impact on the rest of us is so extraordinary that we, as a society, do have some vested interest. We don't need to know all of the proprietary information that is there, but just give us some sense of how these systems are operating, what the fundamental decision points are, and how various ethical dilemmas are being handled. I think there is a social good that comes out of that type of information.
David Bray: And I would add to Darrell and say that one of the good things about public service is that it's not in competition, whereas in the private sector, you keep things secret because maybe that's your intellectual property, your trade secret, your competitive edge. With public service, I think we can ask for more transparency and openness than we may be able to expect from the private sector, because it really is there to serve the people. And what we may be looking at in the next ten, twenty, thirty years is the degree to which a nation is transparent about what the machines are making sense of, what data they're ingesting, and what decisions they're making on that data. Maybe you can't reveal the complete intricacies, because there is privacy associated with the data; maybe there are one-way hashes of the data so you can't figure out specifically which people it's talking about. But you can at least express what decisions are being made, what outcomes are being decided, what data is being ingested, and so on.
And maybe we also need to think about, for public service, something akin to a credit report, where you can actually see what data is being used about you across the different departments and agencies. Can I verify that all the data is correct? And then, [number] two, have I given consent, through an informed choice, for that data to be used for that purpose [and] for that outcome?
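The one-way hashing Bray mentions above can be illustrated with a short sketch: decisions are published while identities are replaced by salted, irreversible hashes. All names and fields here are hypothetical.

```python
# A minimal sketch of publishing decisions without revealing identities.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept secret by the publishing agency

def pseudonymize(identifier: str) -> str:
    """Salted one-way hash; the salt blocks guess-and-hash reversal."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

decisions = [
    {"person": "Jane Doe", "decision": "benefit approved"},
    {"person": "John Roe", "decision": "benefit denied"},
]

# What gets released: the decisions are visible, the identities are not,
# yet the same person maps to the same pseudonym across records.
released = [
    {"person": pseudonymize(d["person"]), "decision": d["decision"]}
    for d in decisions
]
print(released)
```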
Michael Krigsman: Are there examples of countries around the world who have made greater progress than the U.S. in terms of grappling with these issues?
Darrell West: Certainly, there's a lot of variation in how different countries are handling these types of issues. For example, the European Union has been very tough on privacy considerations: they've gone further in terms of wanting to look inside the black box of artificial intelligence, developing very strong privacy rules, and respecting the idea that people own their own data and have a right to control how that information is used.
The United States tends to be a little more libertarian and hands-off in thinking about those issues. I mean, we talk a lot about privacy, but a lot of the privacy rules still are voluntary in nature and developed by companies as opposed to being imposed through government regulations.
In Asia, there's quite a bit of variation in how important privacy is in particular countries, and how much ownership people have over their own information. So, I think every country is struggling with these issues. Countries are finding that balance and drawing that line in different ways, depending on their own histories, backgrounds, and values.
Michael Krigsman: We have just about five minutes left; a little less than five minutes left. I thought it would be interesting to ask each one of you, and Stephanie Wander had to do the drop-off, so …
David Bray: Wander off…
Michael Krigsman: Stephanie Wander had to wander off. [Laughter]
David Bray: Sorry.
Michael Krigsman: So we’re talking with David Bray, who is an Eisenhower Fellow, an executive-in-residence at Harvard, and the CIO of the Federal Communications Commission, and we’re talking with Darrell West, who is with the Brookings Institution.
Just in the last few minutes, could each of you offer your thoughts or prescriptions, as a summary: how do we balance these various competing interests in order to allow AI to move forward, but in a way that supports the common good rather than undermining it?
David Bray: So, yes, I'll jump in first. One thing that we didn't get a chance to talk about was the news last week that an AI/machine-learning algorithm beat five of the world's top poker players after twenty weeks of training the machine: playing multiple rounds, upwards of fifteen to twenty per day, with those top five players, and learning every night to refine its strategy. Poker is interesting because it involves bluffing, so we now have an AI that has demonstrated it can out-bluff five of the world's top poker players. That raises an interesting question: is it ethical for a machine to bluff? Is it ethical for it to deceive? If you go to negotiate the price of a car, and maybe you're not a good negotiator, would you want an app that negotiates on your behalf with the dealer instead of doing it yourself?
That, I think, raises huge issues. The future is now, and it's coming at an accelerating, very fast rate. So, my three recommendations would be: first, where are the safe spaces to experiment at the local level, as well as the national and global levels? Because you can't even begin to make sense of policy for something like this until you've experimented with it and tried to use it.
Two, as Darrell and Stephanie said, try to be as open as possible about the data that's being used, as well as what the algorithm is doing.
And then three, I really do think [you should] take the Golden Rule, "Do unto others as you would have done unto you," and update it for the 21st century: "Do unto others as they would permit to be done unto them." And I think that's really what we need to think about going forward.
Michael Krigsman: Fantastic. And Darrell West, your closing thoughts.
Darrell West: Yeah, just quickly, because we're running out of time. I see extraordinary advances coming in artificial intelligence, being deployed in transportation, energy savings, resource management, health care, education, and many other areas. And I think great benefits are going to come out of this. The key is to keep the balance right and to make sure that societal interests get represented. So, a little more transparency, I think, would be helpful; making sure that antidiscrimination rules and norms are put in place; and then just making sure that these systems conform to the basic values that exist in any particular society. I think that would help us get the advantages of technology without suffering the downside.
Michael Krigsman: Okay! What a fast conversation that has been! You've been watching Episode #218 of CxOTalk, and we've been talking about AI: the ethical issues, the governance and policy issues, and especially what's happening around the world with some of these things. And we've been talking with David Bray, Darrell West, and Stephanie Wander, who had to drop off a few minutes ago. Thanks, everybody, for watching. Next week, we have another really awesome show, so I hope you'll join us. Bye-bye!
Published Date: Feb 10, 2017
Author: Michael Krigsman
Episode ID: 416