As artificial intelligence grows in importance, how should we address the impact of AI ethics on the future of work?
As artificial intelligence grows in importance, how should we address the impact of AI ethics on the future of work? This question has profound implications for organizations in the private sector and the government. Our guests on this episode are uniquely qualified to discuss this topic from multiple perspectives in business, health care, and public policy.
David A. Bray was named one of the top "24 Americans Who Are Changing the World" under 40 by Business Insider in 2016. He was also named a Young Global Leader by the World Economic Forum for 2016-2021, and he served as Co-Chair of an IEEE Committee focused on Artificial Intelligence, automated systems, and innovative policies globally for 2016-2017. He has been a Visiting Executive In-Residence at Harvard University since 2015. He was also named a Marshall Memorial Fellow for 2017-2018 and will travel to Europe to discuss Trans-Atlantic issues of common concern, including exponential technologies and the global future ahead. Since 2017, he has served as Executive Director of the People-Centered Internet coalition, co-founded by Vint Cerf, which provides support and expertise for community-focused projects that measurably improve people's lives using the internet. He also provides strategy to, and advises, start-ups applying human-centric principles to technology-enabled decision-making in complex environments.
Eric Rasmussen is the CEO of Infinitum Humanitarian Systems (IHS), a multinational consulting group built on a profit-for-purpose model. He is an internal medicine physician with both undergraduate and medical degrees from Stanford University and a European Master's degree in disaster medicine from the UN World Health Organization's affiliate CEMEC (Centre Européen pour la Médecine des Catastrophes) in Italy. He was elected a Fellow of the American College of Physicians in 1997 and a Fellow of the Explorers Club in 2014. Rasmussen is also a Research Professor in Environmental Security and Global Medicine at San Diego State University and an instructor in disaster medicine at both the International Disaster Academy in Bonn, Germany and the Institute for Disaster Preparedness at Tsinghua University in Beijing, China.
Michael Krigsman: The topic of artificial intelligence, the ethics of artificial intelligence, is a very complex set of issues, and it's a set of issues that is going to become of great, great importance during the upcoming year. I'm Michael Krigsman. I'm an industry analyst and welcome to Episode #318 of CXOTalk.
We have two extraordinary guests. Before I introduce them, please, right this second, I need you to subscribe on YouTube.
I'll ask each of our guests to introduce themselves. Let's start with Dr. Eric Rasmussen. Eric, how are you? This is your first time on CXOTalk. Welcome.
Dr. Eric Rasmussen: This is my first time on CXOTalk, and I appreciate the invitation. Thank you very much.
I'm a physician. Originally, I spent 25 years as a Navy officer, and my undergraduate and medical degrees are from Stanford, where I did a fair amount of work with SUMMIT, the Stanford University Multimedia Medical Information Technologies Group, which was an early expert-system/AI effort to understand what might be possible.
I've continued to work in technology. I did nine years at DARPA. I then went on to lead the organization created when Google backed a TED Prize awarded to Larry Brilliant, which turned out to be mostly about software-driven detection and better understanding of disease outbreaks. When I left that role and began to chair its board, I started doing a fair amount of work in humanitarian sciences around the world for various governments. I'm on the faculty at Singularity University, where I have an opportunity to look at what's coming and how I can apply it in the work that I do around the world.
Michael Krigsman: Okay. That is some background. Extraordinary. Our second guest is a good friend of CXOTalk and somebody who has been on the show a number of times, David Bray, who is the executive director of People Centered Internet. Hey, David, how are you? Welcome back.
Dr. David Bray: I'm doing great, Michael. Thanks for having me again. I'm really glad to join Eric. I think we first interacted back in the early 2000s, when he was doing his work with DARPA and then later with InSTEDD, while I was with the Centers for Disease Control. Since then, we've reconnected through Singularity University where, as he mentioned, he's on faculty; I'm on faculty as well. He's also on the board of the People-Centered Internet coalition.
What is AI Ethics?
Michael Krigsman: Okay. Fantastic. Let's start with when we talk about AI ethics, what are we actually referring to and why should we care? I'll let either one of you jump in and take a stab at this one.
Dr. Eric Rasmussen: I'll take that for the moment because I'm more of an interested user than a specialist; there's a real distinction there. There is a need to understand how these capabilities are going to arise and how they will shape the world we're entering, whether in education, autonomous vehicles, online presence, or data harvesting. A number of things might go wrong if we don't look for a commonsense structure that can cross cultural boundaries and political boundaries. There are many things to think about in how we might want to implement the tools that are coming down the road very quickly, so it's worth a discussion.
Dr. David Bray: To amplify what Eric said, we humans have been using tools to extend our abilities, physically or cognitively, for centuries, if not thousands of years. That's what we do. We are tool users. However, it is how we choose to use a tool that decides whether it's good or bad in its impacts on communities and societies.
When we're thinking about the ethics of machine learning, of artificial intelligence, of giving either semi-autonomy or autonomy to machines, it's a question of, one, can we trust that how the machine was designed lends itself to being used as a force for good rather than the alternative; and, two, if the machine is given semi-autonomy or, eventually, autonomy, can we trust that it will make choices consistent with the ethical values we hold in different societies? As Eric said, we've had more than 3,000 years of human philosophy, and we still haven't found a common thread that every human culture can agree upon. Can we find a way to do that in the next few years for AI that's going to be codified, increasingly, into systems around the world?
Dr. Eric Rasmussen: If I could pile on just for a moment, that's certainly true around concepts of fairness and concepts of trust. These are becoming fundamental to the way we think about what's developing. It's fun, actually, to think that we might have an opportunity to revisit these 2,000-year-old questions in ways that set standards that might actually minimize bias. Right now, almost everything we do carries some implicit bias, and it's often very subtle. We need a better understanding of how to spot it and what we want to do about it.
Michael Krigsman: I'd like to come back in a minute to this issue of bias, Eric, that you just raised. David raised a point earlier. He said that in 3,000 years, we haven't overcome these ethical issues, but we need to do so in the next few years. That seems like a losing proposition to me.
Dr. David Bray: Well, I would actually say, in some respects, it's not until the crisis hits that eventually humanity figures out how to make it work. We are great under pressure, usually. It's the make or break moment.
I think it was in what Eric just said: maybe now this gives us a chance to revisit it. It's no longer just an academic exercise. It is now essential to the future of work, the future of communities, and the future of society that we work to find answers.
Some have suggested, and actually I support this idea, that at the very minimum one should subscribe to something like Kant's Categorical Imperative: whatever you do, do it in a way that you're okay with it being done universally by others. Various faiths have put it as, "Do unto others as you would have them do unto you." Maybe something like that can be a building block that asks, "What are the things we would be comfortable both having done to us and doing ourselves, with machines and AI, in a way that is universally done by everybody?"
Dr. Eric Rasmussen: It's interesting to recognize that Marshall McLuhan talked about the medium being the message and now we have a variety of media undreamt of previously giving voice to the voiceless. In many ways, it has been hugely beneficial. But on the other hand, the algorithms that are driving AI are being written by humans, and the humans might be having a bad day. There might be something that is quite subtle that doesn't manifest itself until well into an adoption of that AI process, and those tools may get embedded in society in ways that are quite unconstructive.
The Challenge of Implicit Bias
Michael Krigsman: Eric, you brought up this issue of implicit bias. I think this is a really important one. Let me ask you. What is implicit bias when it comes to AI and AI data sets, number one? Then, let's move into the connection that David had raised earlier between all of this and the future of work.
Dr. Eric Rasmussen: Sure. That's a complex thread. Let's tug on it for a little bit. We have very good studies that indicate that when there is a gender balance in participation in meetings, some things change. When there is not gender balance or when there is a subjugation of a point of view, what comes out of the meeting has a bias toward that particular power structure. There's no surprise there.
When you have tools being developed that will be asked to make decisions affecting individual humans, or the data around individual humans and how that data is used, or the way a machine behaves in relation to a human, then how those things are written must be evaluated with the recognition that virtually everybody has some bias toward something in whatever they're doing. If that something happens to be writing code, writing the algorithms that will eventually control machines, that bias has to be understood. Whether we act on it or not, it has to be understood.
Dr. David Bray: To amplify what Eric is saying, let's assume for a moment that you somehow had an algorithm that itself is purely mathematics; in fact, that may be possible. But the data you choose to expose the algorithm to, as the machine teaches itself, as it learns, reflects the fact that we humans and our societies are, right now, imperfect. If you try autocomplete for certain professions on any popular search engine, say doctor or firefighter, it begins to show that there are biases in certain languages in terms of ascribing a he versus a she to that profession. That data is then being fed to an algorithm.
The algorithm itself is just mathematics, but what the algorithm learns from is biased. You bake in that bias, unfortunately, and carry it into society. To make it even more real, we already know there are facial recognition algorithms that cannot recognize certain minority groups because, unfortunately, those groups were underrepresented in the data the machine was trained on, and so it can't recognize them, which is not right and is not fair.
Dr. Eric Rasmussen: It does seem that there is an argument about how that training takes place and what tools we use mathematically, and whether we need a very, very large set of data to give the machine an understanding of what it's talking about. That may not be the only way. There are other methods, mathematical and computational, that might allow a bit more common sense than is currently present with massive data sets looking for patterns.
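[Editor's note: the dynamic described above, a model inheriting skew from unrepresentative training data, can be illustrated with a deliberately tiny toy sketch. The groups, counts, and the naive "model" below are all invented for illustration; real training pipelines are far more complex, but the failure mode is the same.]

```python
from collections import Counter

# Hypothetical training labels: group "A" is heavily overrepresented.
train_labels = ["A"] * 950 + ["B"] * 50

# A naive "model" that learns only the dominant pattern in its data
# and predicts the majority group for every input it sees.
majority = Counter(train_labels).most_common(1)[0][0]
predict = lambda sample: majority

# Evaluate on a balanced test set: 100 samples from each group.
test = [("a-sample", "A")] * 100 + [("b-sample", "B")] * 100
acc = {g: sum(predict(s) == y for s, y in test if y == g) / 100
       for g in ("A", "B")}
print(acc)  # {'A': 1.0, 'B': 0.0}: perfect on the majority, useless on the minority
```

Note that a test set skewed the same way as the training data would report 95% overall accuracy and hide the failure entirely; only disaggregating the metric by group, as here, exposes it.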
Relationship between Data Sets and AI Ethics
Michael Krigsman: Are data sets the key, I was going to say, towards the ethical use of AI or are there other aspects to this as well, such as how we use the results of AI computations?
Dr. David Bray: Let's first tease out ethics and morality. Ethics is what a society, a field, or a profession views as the correct and good thing to do. For example, and I'm sure Eric has a view, in a lot of places in the medical field it is not considered ethical for a doctor to support a patient's wish to die, even if the patient has end-stage cancer. Ethically, that is the case, but the physician may morally feel, "Yes, with end-stage cancer, that might be the right thing." And so we separate morals, which are your own internal locus of what you think is right and wrong, from ethics, which is what society thinks is right and wrong.
I don't think that data sets, by themselves, will be our saving grace. I think that's putting faith in a false fantasy. What we really need is more groups, whether within an organization or involving that organization's stakeholders and customers, to look at both the data that's coming in and how it's being used. Is it representative? Is it fair? And do the conclusions the machine is reaching make sense?
If it comes back and says that certain groups should receive more favorable tax credits or more favorable interest rates, and it's splitting along racial lines or gender lines, we'd probably say that's not right. At least in the United States, we'd say that's not right. But that's not something the dataset will be able to decide. It's something we humans will need to decide when we say, ethically, that it's not right.
The other thing, though, that I think we need to consider is, how do we come to terms with the fact that just because the data shows a pattern, acting on it might not be right? Let me put this in real context. I personally am against profiling, even if the data shows that a certain group is more at risk of doing something, because in the United States we believe in the idea that you should have due process and be considered innocent until proven guilty. And so, just because the data may show a greater tendency, that doesn't mean we should ascribe it to a person until their behavior has actually presented itself.
To make that even more real, sometimes you may find patterns in the data only because you've been looking intensely in certain areas. Eric knows this from public health as well. Just because you find flu, or some prevalence of an infectious disease, in one part of the United States doesn't mean it's not also present in another part. It might just be that you're not collecting data there, or your surveillance isn't sensitive enough to pick it up.
We have to recognize that data itself can be flawed. The conclusions we can reach from that data set can be flawed. And, even if it shows an accurate pattern, we as humans may not find it ethical to accept that as something we apply universally.
Dr. Eric Rasmussen: I completely agree. One of the things we've noticed in epidemiology is the dog that did not bark in the night. Right? When you don't have information coming in from an area you think you should be hearing from, that absence may well mean, and it's anything but trivial to note, that things there are really, really broken.
I lead a disaster response team for the Roddenberry Foundation. We wind up looking at where people need water in ways that are not conventional because the reasons people need water are complicated after a disaster. It may be challenging to use conventional data sets or conventional surveillance to spot what is actually obvious to the population affected and really difficult to pull out from elsewhere.
Michael Krigsman: I want to remind everybody that you are watching CXOTalk and right now there is a tweet chat taking place using the hashtag #CXOTalk. You can ask our guests questions and we'll respond right now, so please join in. We have two amazing guests.
Okay. Now, we could continue with this for a long time because these are very thorny issues. What does all of this, AI ethics, data sets, and the rest, have to do with the future of work, at all?
Dr. David Bray: Well, I think it gets to both the challenge that there's a massive amount of data being produced. Some say that by 2022, just four years from now, there'll be more data on the planet than all human eyes see in the course of a year, or twice all the conversations we've ever had as a species. It's essentially 96 billion terabytes of data or more.
We know that the current model, where you go and type in keywords to find what you're looking for, will not scale just four years from now. The machines are going to have to be aware of what you're doing and the context in which you're doing it. They may even have to nudge you and say, "I see you're working on this particular type of problem. Would this data be useful?"
You either use the data or not. You begin to teach the machine, and the machine learns how to help you deal with this massive amount of data that we're all drowning in.
The challenge, though, is that this means you're being watched in the workplace. It's essentially, in some respects, surveillance to some degree, which is already happening with some of the apps on people's phones that make things convenient and easy to access. But false conclusions might be reached. The system might conclude that Eric is more productive than I am based on a certain set of presentations. Is that a real conclusion, or is it just an artifact of the data? And how does it affect our promotion through the ranks, our salary, and how we're valued? I think there are a lot of questions about, as machines are paired with humans, how you make sure they're used to amplify all human abilities and are not unfairly advantaging one group over another, or disadvantaging a select group.
Are Discussions on Ethical AI Important?
Michael Krigsman: Eric, what David just said strikes me as no different than idealistic thinking that has occurred at any point in history. And so, this whole topic, isn't it kind of irrelevant in a way? It's just like, okay, this is business as usual with the human condition. Why are we wasting our time with any of it?
Dr. Eric Rasmussen: Oh, you are such a challenging man. [Laughter]
Michael Krigsman: [Laughter]
Dr. Eric Rasmussen: No, it is not irrelevant. This is the kind of thing that we wrestle with and, for the first time, we have some tools that might give us some objectivity in the kind of things that we have always wanted to know more about. David mentioned the fact that we're fundamentally under surveillance to one degree or another by the very nature of the motions we take through the day whether it's closed-circuit TV cameras, what we click on on Amazon, or what we look at as our daily newsfeed. That is being tracked.
Now, there are disadvantages, but the advantage is that when you are recognized as a unique individual through the pattern you develop through life, a data pattern, an associative pattern, you have the ability to be understood better than perhaps you ever have before. Is that worthwhile? Possibly, in many cases. One of those cases, for example, is education. If we had the ability to tailor education to what can be most beneficial to a particular student, that could really help many students leap forward, [laughs] not to make the Chinese reference, and not leave behind people whose strengths had otherwise gone unrecognized. That's being done in China, which is why I mentioned the leap forward.
There are companies in China that are harvesting a lot of weak signals, weak features in how data is being analyzed for patterns. Strong features are obvious, right? These are the kind of things that every teacher would be looking for all the time. But there are weak features in associative pattern extraction that can lead to much better tailoring in education.
The same thing is true in many other sectors. Then we can get into the future of work and why it's both a positive, neutral, and possibly a negative as we look at what's developing in AI across multiple sectors. But I'll stop for a second.
Michael Krigsman: Okay. We have a few questions from Twitter. I know I asked for questions, and we'll take your questions, though really I'm tempted to keep asking my own. But because I'm a good host, and not an evil host, [laughs] let's take some questions from Twitter.
Tensions Between AI Ethics and Business Profitability
A good one from Arsalan Khan who says, "Who should be responsible for ethics and then profitability and economic growth when it comes to AI?"
Dr. David Bray: Ooh! I would say, don't pin your hopes on any one person or group. We are the cavalry. If you feel like this is something you're interested in, you need to figure out your role in contributing and how you can interact with everyone. I'd make the slight pitch for the People Centered Internet as one, but there are many other places too.
I don't think we can say there is any one group or one sector that has responsibility. I think that's part of the frustration we see right now is people trying to figure out, one, what is the future of work. Is it a future of abundance? Is there going to be a massive series of job losses and displacements? There are all these different scenarios.
The reality is, no one really knows for sure, but, and Eric coming from DARPA will recognize this, as I subscribe to it very much as well, the best way to guess what the future is going to be is to create it. For the last two to three decades, we focused on technologies that empowered individuals. What we really need to do for the next five years and beyond, to make this real rather than just an academic conversation, Michael, to translate ideals into action, is build technologies that uplift individuals and communities, which we've not really done a lot of. In fact, if anything, technologies have somewhat polarized communities.
When I think about the future of work, we've talked about, one, how you can make sure it's not disadvantaging any one group. The other question is, do you have a right to know if an organization is using what you're doing to train the machine with the end goal of displacing you? We know this has already happened in some cases with certain salespeople, whose interactions have been used to train the machine.
Did they have the right to know? Does the organization have a responsibility that if you are going to be displaced from that job that they may help you retrain for another job? What's the social contract? These are all questions that will only be addressed if we start having the conversations now. Yes, they may sound idealistic. However, we are the cavalry. No one else is coming. And so, for Arsalan and other people on social media, start having the conversations and seeing what you can do to help contribute positively to this.
Dr. Eric Rasmussen: I really like that. To keep the illusions going -- allusions -- as Henry V said to Kate, "We are the makers of manners." Right? We are in a period of time right now when we can shape how this is going to go.
I like the reference to a social contract. We have long had some strong thoughts about what obligations a state has to its citizenry and what the citizenry has to the state.
There is a place in D.C. called the Institute for State Effectiveness, originally formed by Ashraf Ghani and Clare Lockhart. In a book called Fixing Failed States, they listed out ten obligations of the state to its citizens. They half-apologized that there aren't 9 or 11; it looks like they just got lazy and rounded, but there are ten.
One of those, for example, is that the state has a monopoly on violence. You can't have militias. You can't have people beating each other up for no reason. The state has, and must have, a monopoly on violence. How is that implemented in the face of AI? How is that implemented in the workplace? What constitutes violence when somebody has control over the things you see every day? Right?
The feeds that are tailored to what you like, the things that drive the clicks that earn the advertising dollars, those may not be the healthiest things for you. In the great conversation, we don't have reasoned debate or any enlightened compromise over somebody like, for example, Alex Jones. Right? These are not level playing fields. This is, I think, directly relevant to the future of work and how citizens, or employees, respond to their workplace.
AI Ethics and Disaster Response
Michael Krigsman: We have another question from Twitter. What an amazing conversation that's going on. You can join in and ask your questions using the hashtag #CXOTalk.
We have another question for Dr. Eric Rasmussen. How can AI techniques affect disaster response or the delivery of disaster services and what are the ethical concerns associated with that?
Dr. Eric Rasmussen: Great question, and one that we wrestle with now on a monthly basis. There are entities at the UN, within the donor agencies, and within the bilateral agencies that look at how response teams are constructed. We try to do that by estimating the damage likely to have been done to a population by a given event: a flood, a cyclone, an earthquake, a volcano. That takes a lot of before-the-event data, understanding the vulnerabilities and the risk mitigation strategies that might be available to a population at risk.
That's a big field. There are a lot of people studying that. That's becoming a data-rich field, and so the algorithms that are being drawn around that pre-understanding of vulnerability are getting stronger and stronger and stronger.
Now, they're doing that in part because they're collecting a great deal more data from those populations that are at risk. The populations may not actually know that data is being collected. The teams may not know why they're being assigned where they're being assigned after an event because the teams have no visibility on how those algorithms were run. This is a complicated set of questions.
The actual logistics of delivering aid after an event is another strong AI direction that people are looking at quite carefully because, as it turns out, it's very effective when you can look at a lot of information. Where you never had that capability before, you now have the ability to make decisions that might be considerably more robust than anything you've been able to do before, which means more efficient use of donor dollars, less team fatigue, more effective political positioning for the population you're helping, less social unrest from the long-term consequences of the event, and so forth.
Practical Implications of AI Ethics
Michael Krigsman: David, what are the, can we say, practical implications of all of this for companies, for employers, for workers, for policymakers? That's another very complex set of issues, but maybe just take a stab at peeling that onion for us.
Dr. David Bray: Sure. I'm glad you raised the question of practicality because I think it's one of those things where we need to have the conversations that go deep, and then we also have to recognize that the journey of a thousand miles begins with a single step. What is a single step or two that organizations can take?
I want to recommend something: visualize a two-by-two table. In the top left-hand corner of that table, put down what you perceive your obligations to be in the context of using any new technology; in this case, AI and machine learning. What do you perceive as your obligations? Just put them in simple bullets, ideally no more than half a page.
Then, to the right of that, still on the top row, put forward the things you consider to be blind spots: known unknowns, potential biases as Eric recommended, things you recognize you don't necessarily know or can't know right now, where you may have concerns that you can't see everything or have complete visibility into the situation when you're rolling out that technology. Now you've spelled out your obligations, and you are aware of your potential blind spots.
Now, you want to think about your responsibilities. Right below the obligations, spell out the things you're now going to do in response to those obligations: the proactive steps. Then, underneath the blind spots and the biases, spell out the safeguards, the things you're going to put in place.
For example, maybe you're rolling out a new technology that helps people when they're walking down the street and run across a colleague they haven't seen in three years. The AI will tip and cue you and say, "Ah, this is Bob. Bob has a wife named Jane. They have two kids. You know him through this connection at this company. The last time you saw him was six months ago." It tips and cues you, and that's very helpful.
Recognize that the same technology that could help you do that could also, unfortunately, be used for terrorist or criminal targeting. In fact, if that sounds hypothetical, social media, GPS, and Web searches were, unfortunately, used in the Mumbai terrorist attacks to figure out how to plan and execute them.
It's not a hypothetical exercise. I raise it because, practically, what you've spelled out in a very simple form are what you perceive your obligations to be, what you are aware you don't know, what your responses to those obligations are, and what your safeguards are. Your safeguards might include a group that can provide early warning if your technology or AI is being used in ways you didn't plan for. It could be inside the company, or a combination of people inside the company and outside. Something like that would be a very practical way to at least be a learning organization as you roll out a new technology.
Michael Krigsman: Eric, your thoughts on these practical applications? I really did like the kind of ethical framework for action that David just laid out.
Dr. Eric Rasmussen: I like it too. It's one of the reasons we're friends.
Dr. David Bray: [Laughter]
Dr. Eric Rasmussen: It's good to recognize that, over the past long centuries, we've wrestled with these things. These have been topics that have been shaping how we choose to work with each other and become a society where you can trust the behavior of the other person next to you, that everyone is kind of adherent to the rule of law, that there is the ability to optimize your life in the way that you see fit with significant freedom of action. A lot of that has changed over the years. There's much less violence than there used to be, despite headlines. There's good evidence for that, and most of us have read at least some part of Pinker's book.
It is clear that an ethical framework is already something we consider very important in the way that we go about our lives and the way that we shape our institutions. How we incorporate that ethical framework, how we understand what AI ought to be able to do for us and the constraints around those decisions is something that is worth attention right now in a very practical way.
Michael Krigsman: What about Facebook? Where does Facebook fit into all of this? I love you guys because you're such idealistic thinkers. But, as David said, the tool can be used for good or for evil, and the definition of good or evil is all in perception, because I think Mark Zuckerberg feels that he's doing a great thing for society, and he's certainly doing a great thing for Facebook.
Dr. David Bray: Well, without going specifically to that one company because I don't want to necessarily pick on any one company, I would say never at least remove the ability for individuals in organizations to learn from experiences. We talk about how we want to have the culture of doing experiments and gaining insights from those experiments and pivoting because there is no textbook for where we're going. Maybe there's enough pressure now for an organization to be a learning organization and incorporate some of these things.
Then the other thing is an adage I once heard and try to follow to this day: never ascribe to malevolence that which can be explained by ignorance. It may just be that these were things that were not thought of and are now presenting themselves. Maybe the response has been flatfooted, but give the people, and the organization, a chance to redeem themselves and, if they don't, then hold them accountable. But if we remove the ability for anyone to learn, don't be surprised if no one sticks their neck out and says, "Look, this is hard, but I'm going to try to be idealistic and tackle it."
Dr. Eric Rasmussen: I completely agree with that. I have been a CEO for both for-profit and nonprofit organizations for about ten years. I have made some stupid errors in the course of those ten years, and I have a fair number of people that would be happy to point them out to you for me.
The end result is that I learned. It was very painful at the time, but I got better as a consequence. I got a lot of feedback. I collected information. People helped me think about these things. That kind of capacity is critically important, particularly when we introduce something as far-ranging, society-changing, and humanity-altering as AI is going to be, and already has been.
That's certainly true in other nations, at the moment, even more aggressively than it is here. China, for example, has WeChat. WeChat is on my phone. I work in China quite a bit, and I find WeChat indispensable to daily life in China. It makes things hugely convenient.
I am aware that my data is being collected. I am aware that what I'm doing in the process of using WeChat, along with hundreds of millions of other Chinese users, is building larger data stores, allowing better pattern recognition, and training stronger algorithms, and those may be biased toward things I like or things I do not like as the person that I am, a relatively progressive West Coast liberal who doesn't always agree with what China has to say.
It is clear, though, that there are, by and large, really well-meaning people struggling with every aspect of this as they try to optimize half a dozen factors.
AI Ethics and Public Policy
Michael Krigsman: David, where are we going over the next few years as AI becomes more ingrained in society and, therefore, our understanding of the effects of AI becomes more sophisticated? Again, I'll use that term "realistically." What do you think the impact will be on government policy, on business, on worker displacement, and so on, from all these different perspectives?
Dr. David Bray: Well, [laughs] you want me to pull out my magic crystal ball and predict the future. Eric and I, maybe we'll go off and play the stock market afterward.
Michael Krigsman: David, you're the David Bray. [Laughter]
Dr. David Bray: [Laughter]
Dr. Eric Rasmussen: [Laughter]
Dr. David Bray: I'm just adding the caveat that if I could see the future clearly, I'd probably be in a different profession. That said, where I think things are going is that we're going to be learning a lot in a very intense period of time. As Eric mentioned, some of those lessons are going to be painful. Some of these things will be learned only by doing and saying, "Oh, that didn't work. Let's go this direction," or, "That wasn't right. Let me try this."
The question is, one, do we have enough patience for that? And, two, are enough people willing to aim for the ideal? Maybe we don't reach the ideal perfectly, but if we just give up and say, "Look, it is what it's going to be," then I think we resign ourselves to a future of tyranny by AI, in which the machines no longer benefit us but we serve the machines.
Eric mentioned an example in which, in a certain country, that is already being done to a degree: people are optimizing not just their lives but society as a whole. It may have some biases in it. It may be that right now people are okay with it because it makes their lives good enough that, even if they've lost some locus of choice and freedom, they accept it.
I have to admit, like Eric, my own bias coming from the United States, and it is a bias: I believe that we should still have the ability to make choices and have freedom in how we live our lives within the construct of a larger community and society. I have not yet seen examples where AI is used to amplify human choice and autonomy in societies. If we don't start doing experiments like that now, I think we run the risk that the future will favor autocracies, with fewer individual choices and the illusion of freedom while the machine is really making a lot of the choices, and possibly the subjugation of certain groups, unless we stand up and say, "That's not right. That is not consistent with holding the human spirit dear." Human rights are important and, in this digital future, we need not only to preserve human rights but actually to amplify them for everybody.
Dr. Eric Rasmussen: I completely agree again. We have some interesting documents that have been drafted over the years to help shape that discussion. Coming up on December 10th, there's going to be an event at the Fairmont in San Jose where we are going to look at our digital future. One of the reasons we're doing it then is that it's the 70th anniversary of the UN's Universal Declaration of Human Rights.
That declaration was drafted in 1946 and 1947, during a period when most of Europe and much of Asia lay in ashes. There were still smoking ruins all over Germany. There were still human ashes in the ovens at Treblinka. People had already looked carefully at the consequences of unconstrained government and, as a consequence of that lack of constraint, those biases, those misleading statements, and that propaganda, choices were made that are, I hope, unthinkable in the modern world but didn't seem unthinkable at the time. That's an important object lesson.
This discussion of AI and the ethics of AI needs legislators, academics, courts, users, and the platforms themselves, reaching out for everybody's input and sorting these things out carefully, using history as a guide and recognizing that AI is a special case of a general problem. We should not lose track of that.
We have been wrestling with these issues for a long time. We should simply apply the good thinking we've had before to the special case we have here.
Michael Krigsman: I want to just mention that Eric Anderson, on Twitter, in response to the point that there should be a social contract governing everyday interactions with AI systems, says, "That's why IBM has published a statement called Everyday Ethics for Artificial Intelligence," and he gives a link. You can go on Twitter, hashtag #CXOTalk, and find it if you're interested.
As we come to a close, I'd like to ask each of you to provide advice. There are so many different stakeholder groups that I'll just leave it to each of you to decide which stakeholder groups you would like to offer suggestions or advice to. Here's your opportunity to write the future as you see it for these folks.
Who wants to start? David, let's start with you.
Dr. David Bray: All right. First, if I could real quick give an amplification and a foot stomp to what Eric just said: when he gave his bio at the beginning, he mentioned he was with the Navy, but he didn't mention that he's been in Iraq. I myself have been in Afghanistan, though as a civilian on Secretary of Defense travel orders. We may sound idealistic, but we've also seen conflict. We've seen what happens when humans reach disagreements, and the dark side of humanity, to be honest, which is why efforts exist that say, basically, prepare for war and hope for peace.
We hope that this new era is one in which we hold those ideals but recognize that humans are humans, and there may be things along the way that we want to avoid. Obviously, after WWII, we don't want to see another world war, or anything like it, in which people are treated badly and subjugated.
I share that, though, because there are two groups I want to speak to. First, to everybody who's looking for someone else to solve this: that's learned helplessness. No one else is going to solve it. We have to step up and do it ourselves. If all you did was get angry, upset, or fearful about an issue, all you did was release emotion. Nothing happened.
All of us can step up and do something. Even if it's at a small level talking about this with others, raising awareness within communities, or standing up and saying, "We believe in the sanctity of the human spirit, of human rights in this digital era, and we want it preserved."
The other group I want to talk to are those individuals with the resources to help shape that future, whether they be CEOs or grant-makers, though right now I see the private sector, rather than the public sector, doing much of the funding in this effort. To those CEOs willing to take the leap of faith and invest in community efforts and in solutions that uplift people: yes, you're accountable to your stakeholders and you have to make a profit for them, but if you can think beyond your own organization's profits and think about communities, you can help empower those people who are willing to step out and say, "We want a better future." Together, again, we can all be that cavalry going forward.
Michael Krigsman: Okay. Eric, it looks like very quickly you're going to have the last word here.
Dr. Eric Rasmussen: All right. I'll try to be very quick. I want to speak to three groups. Number one is the State Department. The State Department's mandate is to carry out U.S. foreign policy around the world. If they can do that in concert with the Department of Defense, using AI for pro-social actions that calm people down, that give them the things they need, that reduce the level of social unrest, and that reduce resource shortfalls, we're likely to find that we can have a calmer world than we might otherwise.
I am working with the Pentagon on some aspects of AI that are looking at the pro-social opportunities that are out there. I think that ought to be emphasized.
Also pro-social: I'm a physician. I'm a clinical doctor. I'm a professor of medicine at UW. I was chairman of the Department of Medicine. I've done a lot of medicine over the years. As David mentioned, I had three tours in Bosnia, did two tours in Afghanistan, and spent nine months in Iraq. Out of that, I saw a lot of misery.
If there is the possibility that something can help me do better for my patients, then I think that we as physicians ought to jump on that the way we always have. We really have looked carefully at how we can gain advantages from inventions and creativity and done better for people. We don't have diphtheria much anymore. We don't have tetanus much anymore. We don't have polio much anymore. Vaccines really do work.
We have evacuation routines, logistical protocols for getting the wounded off a battlefield that save a lot of lives, save a lot of limbs, save a lot of eyes that used to be lost in earlier wars. We have had good things happen in medicine and AI that I think can leap us forward rapidly, and that's already happening.
The last group is teachers. I mentioned education earlier in the hour. I want to reemphasize that individualized education is a real thing. It's happening in other countries, it's proving extremely beneficial, and we will do better as a society with a better-educated citizenry. I'll stop there.
Michael Krigsman: All right. Wow. This has certainly been a very, very packed and fast-moving discussion. I sure wish that we had more time today.
You have been watching AI Ethics and the Future of Work on Episode #318 of CXOTalk. Our two genuinely extraordinary guests have been Dr. Eric Rasmussen, M.D., and Dr. David Bray, Ph.D. We've got a lot of brain power in the house. Eric, thank you. This is your first time on CXOTalk. Thank you so much for being here with us today.
Dr. Eric Rasmussen: Thank you for the opportunity.
Michael Krigsman: David, you've been here before. Thank you for coming back and sharing your wisdom and experience with us.
Dr. David Bray: Thank you, Michael. I really enjoyed the insights you shared, Eric.
Michael Krigsman: Everybody, subscribe on YouTube and tell your friends to watch CXOTalk. That really helps us a lot. Please do that. Do it right now.
Thanks, everybody. Go to CXOTalk.com. We have lots and lots and lots of great videos and more shows coming up. Thanks so much. Have a great day, everybody. Take care. Bye.
Published Date: Nov 30, 2018
Author: Michael Krigsman
Episode ID: 569