AI Risks and Opportunities: The House of Lords Public Policy Report

The power of artificial intelligence creates opportunities and risks that public policy must eventually address. Industry analyst and CXOTalk host, Michael Krigsman, speaks with two experts to explore the UK Parliament's House of Lords AI report.

Timothy Francis Clement-Jones, Baron Clement-Jones, CBE, FRSA, is a Liberal Democrat peer and the party's spokesman for the Digital Economy in the House of Lords.

Lord Clement-Jones is a consultant at the global law firm DLA Piper, where his former positions include London Managing Partner (2011-16), Head of UK Government Affairs, Chairman of its China and Middle East Desks, International Business Relations Partner, and Co-Chairman of Global Government Relations.

He is Chair of Ombudsman Services Limited, the not-for-profit, independent ombudsman service that provides dispute resolution for the communications, energy, property, and copyright licensing industries. He is a member of the Advisory Board of Airmic (the Association of Insurance and Risk Managers in Industry and Commerce) and a Board Member of the Corporate Finance Faculty of the ICAEW.

Dr. David A. Bray was named one of the top "24 Americans Who Are Changing the World" under 40 by Business Insider in 2016. He was also named a Young Global Leader by the World Economic Forum for 2016-2021 and served as Co-Chair of an IEEE Committee focused on artificial intelligence, automated systems, and innovative policies globally for 2016-2017. He has been a Visiting Executive In-Residence at Harvard University since 2015 and was named a Marshall Memorial Fellow for 2017-2018, traveling to Europe to discuss trans-Atlantic issues of common concern, including exponential technologies and the global future ahead. Since 2017, he has served as Executive Director of the People-Centered Internet coalition, co-founded by Vint Cerf, which provides support and expertise for community-focused projects that measurably improve people's lives using the internet. He also provides strategy for, and advises, start-ups espousing human-centric principles for technology-enabled decision-making in complex environments.

Transcript

Michael Krigsman: The impact of artificial intelligence on society is truly one of the most profound and important topics of our time, and its importance is going to grow. Today, on Episode #301 of CXOTalk, we're speaking with Lord Tim Clement-Jones, who chairs a committee of the House of Lords of the U.K. Parliament that has just released an important research document and policy advisory statement on this topic. I'm Michael Krigsman. I'm an industry analyst and the host of CXOTalk.

Before we go further, I want you to please, please, please subscribe on YouTube and tell all of your friends.

In addition to Lord Tim, I'm really delighted that we're also joined by an old CXOTalk hand, David Bray, who is my guest co-host and a subject matter expert on this topic. David is the executive director of the People-Centered Internet. He is associated with Harvard University. He is a Marshall Scholar in Europe and is coming to us right now from Brazil, where he has been speaking today on this exact topic for Singularity University. So, David is a guy who wears a lot of hats. David Bray, welcome back to CXOTalk.

Dr. David Bray: Thanks for having me, Michael, and thanks for that very humbling introduction. Glad to be on this show as a guest and, actually, as a subject matter expert to join Lord Tim.

Michael Krigsman: Well, it's always a delight to see you. David, you're in Brazil, and you're giving a talk on these issues today.

Dr. David Bray: Yes, it was actually yesterday, with Singularity University, on everything ranging from the opportunities and challenges of the Internet of Things to the opportunities and challenges with AI, and how we need to think about being more resilient as a society to the impacts these will have on organizations, on societies, and on nations as a whole.

Michael Krigsman: Lord Tim Clement-Jones, it's an honor to welcome you to CXOTalk. Thank you for being here today.

Lord Tim Clement-Jones: Well, thank you very much for inviting me, Michael. I'm delighted to be here and, especially, to be talking it through with you and David.

Michael Krigsman: Tim, please tell us about the committee that you chair.

Lord Tim Clement-Jones: Well, we set up the committee last year, in June of last year, and we had around nine months to produce a report. Our brief was to look at the economic, ethical, and social implications of advances in artificial intelligence in the U.K. And so, we fulfilled our brief.

It was a pretty breakneck operation. In nine months, we produced a report in April. We've now had the government response in June.

But, I think most people think that it's a fairly fundamental piece of work. It's a couple of hundred pages, 74 recommendations, and we took something of the order of 230 separate pieces of written evidence. So, we have a pretty good evidence base for the whole report.

Michael Krigsman: I have to say that the report is filled with links back to transcripts of the testimony, so it's a very rich repository. Anybody who takes a look at it should be aware of that. It's not just the report, but the source material as well.

Lord Tim Clement-Jones: Yes, absolutely. Of course, we also had 66 oral sessions with witnesses as well, and people can see those actually recorded as well.

Michael Krigsman: Why did you decide to invest the significant level of resources and time to create this research?

Lord Tim Clement-Jones: Well, we're very lucky in the House of Lords because not only do we have, if you like, permanent select committees, rather like congressional committees, but we also have what we call ad hoc committees: special committees set up for particular topics. These are decided on by the senior leaders in the House of Lords, and they respond, basically, to suggestions made by members. This inquiry into artificial intelligence was sought by a number of members of the Lords who saw it as one of the really important issues. That somewhat belies the kind of slightly fuddy-duddy reputation that the House of Lords might have.

Dr. David Bray: Tim, recognizing that most of our viewers may not have read the report yet--however, hopefully, we're going to get them hooked and interested in reading it--what would you say are the key takeaways for the public, as well as for business leaders, to think about in terms of what this report finds?

Lord Tim Clement-Jones: What I think we tried to do was to get to grips with the real problem of polarization of opinion, if you like, in this whole area. Both of you will be familiar with Elon Musk calling artificial intelligence more dangerous than nuclear weapons. The late Stephen Hawking had a pretty similar view.

On the other hand, there are others who are grossly optimistic and don't believe that there are any ethical or other societal issues which AI gives rise to. So, what we tried to do is to cut through that and come up with where we thought the opportunities were but, also, be very, very forensic about exactly what we thought the risks were that needed mitigating.

I can give examples: on the risk side, for instance, bias in algorithms, issues of lack of transparency, issues of misuse of data, non-inclusion, and so on. But also, we didn't want to lose sight of the opportunities, which are manifold. It was important, we thought, to get the balance right. That's why, in a sense, you'll see that it is a very balanced report, because we genuinely were optimistic about the future, but we were saying to government, "You've got to sort out some of these issues at the same time."

Dr. David Bray: To build on that, Tim, since the report has come out, have you seen, or do you hope to see things with the U.K. government that they're going to start doing? Then, beyond the U.K. government, do you think there's responsibility for other sectors to play a role as well?

Lord Tim Clement-Jones: Yes, I absolutely do, and it's very interesting. The government produced their response in June this year, and the absolute first principle, in a sense, that they agreed with us on is the whole issue of the need to retain and build public trust.

One of our real concerns was that if the public didn't understand what AI was all about, how it would benefit them, and what impact it would have on them and their jobs and so on, they would abreact against that. It would be a kind of Luddite opinion-forming process.

Government absolutely accept the need to build public trust, and so that leads on to the need to have an ethical framework, the need to make sure we don't have bias, and the need to make sure that people trust the way that their data is used. That is the first principle, basically, that we felt the government really have accepted. That's a very good basis for going forward.

Obviously, there are many other areas where we don't feel that the government have been quite active enough, in terms of skills and understanding the kinds of skills that are going to be needed in the future. But, they accept the very important need for re-skilling and adaptability so that our workforce can meet the needs of the future in terms of working with AI and, indeed, sometimes find new forms of employment where people have been displaced by AI.

I generally feel that, actually, we are in a good place. We've had quite a lot of contact with government after the report. I personally have had conversations with the new chairman of the Center for Data Ethics and Innovation, who will have a lot to do with the formation of the new ethics codes and so on. And so, we think that's a very fruitful relationship that we can build on.

The government also accept our international agenda. They're having discussions with the governments of Canada and France, and more broadly. Again, I think they've accepted that they do have a leadership role to play internationally and that it is essential to build a common agreement across governments internationally to make sure that, bluntly, artificial intelligence is our servant, not our master.

Michael Krigsman: Tim, was your primary goal to establish that baseline, or were there other basic goals that you had in developing the report?

Lord Tim Clement-Jones: I think we had the basic goal that we didn't want to throw out the baby with the bathwater. We were very, very conscious of the very unfortunate experience that we've had in the U.K. and in Europe on GM foods where we could have been far, far more positive about the use of GM, but we failed to build public trust. As a result, the public didn't understand that GM foods could benefit them. Effectively, what happened in Europe is that the European Union banned GM foods altogether. Now, that's not the case in the States. There was a degree of public trust, and GM foods were allowed to carry on.

Now, I'm not saying that we would have adopted exactly the same regime, but it is, if you like, an object lesson for governments in terms of how not to build public trust. We've got some other more positive examples such as the technology on human fertilization and embryology where, in the U.K., we accepted that wholeheartedly because of the way it was communicated, regulated, and so on.

We were really interested in some of that history, and we wanted to make sure that we learned the lessons from the past in terms of how the public would adapt to AI and understand the benefits of AI. Of course, those opportunities are the things that are going to help with our economy and our society, particularly in areas like healthcare and education, personalizing education, for instance.

There are a great many benefits, but we said, at the same time, you've got to deal with these other risks. I've talked about them a little bit today, and I can go into them in more detail.

Dr. David Bray: Actually, I was wondering, Tim, if you might be able to explain for our viewers a little bit about how the House of Lords works in the entire context of Parliament. It sounds like, in some respects, you're able to go forth and try to approach things from a non-political perspective and research them. We don't necessarily have the same thing in the U.S., but maybe if you could tell us a little bit more about what you do and the value of that function in trying, like you said, to help build public trust.

Lord Tim Clement-Jones: Yes. We don't start off like a kind of congressional inquiry or, indeed, a select committee inquiry in the Commons, which is really trying to find the culprits. We are basically starting off with a very balanced way of inquiring into a particular subject, so it doesn't tend to be particularly political. It is actually looking at how policy should be formed and whether the government has got the right policy going forward, and it isn't trying to adopt an absolutely critical stand right from the word go, which some committees in the Commons try to do. They grandstand with witnesses and so on and so forth.

That isn't the way a House of Lords committee operates at all. Basically, it's treating a subject seriously, treating the witnesses very seriously indeed, listening to them very carefully, trying to get as much expert testimony as we can, and then coming to a conclusion. As you say, it is really very nonpartisan. In the committee I chaired, it would have been very, very difficult for an outsider, I'm sure, to establish who was in which party, quite frankly, because they're all independent-minded, and we all listened to the evidence with our critical faculties on full alert. We did agree 100% as to the outcome of the report.

Michael Krigsman: As you were doing the research, what did you learn that was perhaps surprising to you? How did your views evolve as you went through this?

Lord Tim Clement-Jones: That's a very good question, Michael. I think it was very interesting, apart from the polarization point, which, I must say, did surprise me when I first looked at the evidence. The press comment that was out there was highly polarized.

I think the most important thing that we all came to realize is that AI is already here. It's already embedded in our smartphones. It's already in Google Home, Echo Dot, so on and so forth. Actually, we're grappling with issues that are here and now. Therefore, for people to talk about the singularity or strong AI is not particularly relevant.

What we really need to talk about is the implications, all the implications of narrow AI, AI as it is now because that already has given rise to ethical issues, issues to do with bias of datasets, algorithms, and so on. In a sense, we're in a little bit of a race against time to make sure we can establish that framework. I think that was probably the biggest lesson. Therefore, we all had a great sense of urgency about the need to really get on and get our proposals out there.

Dr. David Bray: Tim, shifting from the takeaways from the report to implementation, where do you see the safe spaces, or are there safe spaces in the U.K. where, like you said, people can move forward and learn? Because I assume that, while there are recommendations, there isn't really a textbook on how to do implementation right.

Lord Tim Clement-Jones: No, actually.

Dr. David Bray: Given all the other frictions that are going on, I mean the friction in Europe, the friction in the U.K. itself, the friction in the U.S., where are the safe spaces to do this without it becoming either a media field day or a political field day to try and get ahead of this curve, as you said?

Lord Tim Clement-Jones: I think we're quite lucky in the fact that, up to now, we've had a very active, proactive Secretary of State for Digital, Culture, Media and Sport, the government department charged, really, with the development of our digital economy. I think he, to his credit, was very proactive in terms of starting to establish the framework for the way that ethics were going to be developed, the way that industry and business were going to be involved in the evolution of artificial intelligence, and the way that government was going to connect all the dots and coordinate it. That went into the industrial strategy. Artificial intelligence is one of the grand challenges that's been set out by government and so on.

In a sense, we had some of those ingredients coming onstream from the government at the time we started our work and continued our work. In a way, we've gone with the flow, but we've tightened the bolts, and we've made new suggestions in certain areas for greater priority, moving things up the political agenda.

I think we are quite fortunate that the government recognized that it's not only the economic opportunities which are very great for us as a country, but also that we've got to get it right in terms of public trust and ethics. I think we've been in a good place. You could never move fast enough and, of course, we've got the distraction of Brexit at the same time.

I am quite impressed by the fact that despite some of the external distractions, we have managed to keep moving forward. I hope that our new minister, the new secretary of state who has taken over for Matt Hancock will do likewise. Matt Hancock is now Secretary of State for Health, and I think he will have a very big influence on our national health service in its adoption of artificial intelligence and actually making much better use of the data that it has for the benefit of patients.

Michael Krigsman: I want to remind everybody that we're speaking right now with Lord Tim Clement-Jones who chairs the AI committee that just released an amazing, really great report from the House of Lords in the U.K. We are also joined by my guest co-host and subject matter expert on this topic, David Bray. Right now, there's a tweet chat taking place, and you can ask questions using the hashtag #CXOTalk.

Tim, following up on David's questions about the implementation, how has the government and other stakeholders reacted or responded to the report?

Lord Tim Clement-Jones: Well, they've responded very positively. I mentioned the point about public trust earlier; they accepted that. They've accepted even some quite difficult issues that we've raised with them, such as the issues relating to ownership of datasets and the fact that there is some evidence that small and medium-sized companies and startups are not getting access to those datasets, so there is the possibility, the probability, of data monopolies being established, which need tackling through competition law and so on. That acceptance ranges from the way that the new Center for Data Ethics and Innovation is going to operate to how the National Retraining Scheme will operate.

I think that we are in quite a good place. The government have been very positive. Of course, we want them to move faster, and we do want to make sure they deliver. My select committee is a short-term select committee; it's now done its job, and it's up to others, and to me in Parliament in other ways, to make sure that the government does deliver what it says it's going to deliver. It said it was going to deliver quite a lot in its response to our report.

Dr. David Bray: To expand on that, say it's three years from now and your report's recommendations have been adopted. What do you see that's different about the U.K. or about the world as a whole if the report's recommendations have been adopted?

Lord Tim Clement-Jones: Well, the first thing I want to see is much greater coordination of the government policies in this area. They've set up a new AI council, which includes industry. They've got a new office for AI within the government. And, they've got the new Center for Data Ethics and Innovation. There are many, many other bodies there: The Alan Turing Institute and so on.

They've got to basically make sure that our AI strategy is carried forward in a very coordinated way. That's, in a sense, the domestic agenda. They have to join the dots in domestic terms.

Internationally, I would be extremely disappointed if I didn't see movement to have a code of ethics developed in collaboration with France, Canada, South Korea, and many other countries, including Gulf states, that have a real interest in artificial intelligence and its ethical development. I would very much hope the U.S. would also contribute to that. But, at the moment, I'm not convinced that there is, if you like, a strategy for that within the U.S. administration.

Dr. David Bray: Tim, expanding upon that, you talk about the ethics. Can you talk a little bit more about what kind of ethics we're looking for, or whether there are principles? Obviously, we've had 3,000 years or more of philosophy, and philosophers still haven't really converged on a single code. So, could you tell us a little bit about, if it was a perfect world and you got to help guide the way, what would the ethical framework look like on the world stage?

Lord Tim Clement-Jones: Yes.

Dr. David Bray: For AI.

Lord Tim Clement-Jones: Absolutely. We set out five principles. It could have been six or seven; we chose, rather arbitrarily, to have five. They're high-level principles, rather like when you look at the U.N. Declaration of Human Rights or things of that sort. They are to be implemented at a national level, but you have a high-level set of principles which are applied locally, if you like.

It's that AI should be used for the public good. It's that AI should be transparent and free of bias. AI should not be used for weapons of mass destruction, and so on. People's data rights should be respected so that they have the benefit of privacy and so on. It is a series of principles which relate to the application of AI.

Dr. David Bray: Excellent. That actually ties into why I was in Europe as a Marshall Fellow: it's actually 70 years since the Marshall Plan was put in place after World War II. And, in December of this year, it'll be 70 years since the U.N. Declaration of Human Rights. It may be that we're now at a crossroads in which the world has been pretty much operating on institutions that are post-World War II, and now we need to think about how we refresh them for the 21st Century ahead.

Lord Tim Clement-Jones: Absolutely. One of the big issues is, what is the best framework, in a sense, I mean the best institution, to try and develop this ethical framework? Is it the G20? Is it the U.N.? Or do you start at a different level, with the European Union, which is also extremely interested in this whole area? What is the best way of actually developing such a framework?

Michael Krigsman: What is the best way? David and I were part of a large, well-known nonprofit organization that was trying to promulgate these kinds of standards. One of the problems that I see with this is the co-opting of issues such as AI ethics to serve what amounts to either commercial aims on the part of technology companies or, in the case of nonprofits, basically, popularity. It creates tremendous distortion, so how do you manage that?

Lord Tim Clement-Jones: Well, I think what you can't do is simply try and just let the private sector get on with it, so to speak, and assume that they're going to develop their own code of ethics. I mean, the Partnership on AI is a very valuable initiative. It's got a lot of very good companies on it, many of whom I know are working very hard in these sorts of areas.

I think it has to be a collaboration, at the end of the day. But, at the end of the day, also, governments are the ones who have to control the agenda in terms of the development of AI because, as I said, you have to make a decision that AI is going to be your servant, not your master. If you take that view, for instance on things like autonomous vehicles or whatever it may be, you have to make sure that, at the right time if necessary, you regulate.

I'm not a big fan of regulation at this stage because it is not yet obvious what would need to be regulated. I think it's very important to let innovation take place and so on. But, when you need to translate those ethical principles into regulation, of course, that's the point where governments come into play.

Dr. David Bray: Tim, I'm wondering if I could ask two questions: one at the global level and one at the more community, local level. At the global level, you talked about what might be the right configuration of nations to come together. What are your thoughts that maybe it's actually networks of people, or groups, that span nations? Maybe regulation by geography is increasingly going to be difficult to do, and it's more coalitions of the willing across the world. That's the global-level question.

Then the more community-level question is, imagine some of the more rural parts of the U.K. What would they see three years from now that would be the impact of this report on the more rural parts of the country?

Lord Tim Clement-Jones: Yes, I'll come on to that second question, but I think the first point you make, about networks that are nongovernmental, in a sense, isn't incompatible with government action. The more you can build public opinion and pressure on the political system to develop ethical principles and have a framework, and do that internationally, the better.

If there's an international movement and, of course, the Internet gives us that possibility, I think that's wholly positive, and I think that, as long as it's the right set of ethical principles, it means that you're allowing the development of AI in the right kind of way. What would be wrong is if you had movements that were designed to stop AI at any cost. I'm a great believer in having these principles, so it means we can develop AI in the right way.

Coming on to your second point, if you were in, if you like, a rural area in the U.K. in the future, it's very difficult to see quite how things would work, except that, of course, you'd have much better connectivity. I hope that, in three or four years' time, we'll have 5G. We'll have much faster fiber connections and so on, so people will benefit from better connectivity.

How AI is going to affect them will probably depend on the sector they're in. Professionals may find that they're doing more; they have greater ability to use assistive AI, which helps them work from home. It may be that farmers have better information about the crops they're growing, about the weather conditions, and so on and, therefore, there is greater specialization and the ability to make rural areas prosperous.

It's very difficult, with all matters of AI, to forecast the future, particularly in employment terms, as we've seen from quite a number of the reports. People like Frey and Osborne, and others, find it very, very difficult to predict exactly what the job situation is going to be in just a few years' time.

Michael Krigsman: We have a question from Twitter. Actually, the @CXOTalk account is asking a really interesting question. If you wait to see what goes wrong with AI, it's too late. And so, how do you protect the public proactively from the negative effects of AI?

Lord Tim Clement-Jones: Well, there's a very fine principle--and, as a lawyer, I really appreciate it--called the Precautionary Principle. I think if there is some evidence that there is detriment, then you can act on it. But, you don't have to wait until disaster strikes. You have to be proactive in making sure that you have an idea of what's happening out there and your radar is fully alert.

I don't think it's an either/or situation. I think the Precautionary Principle applies, for instance, on a lot of matters involving the environment. I would say that it applies also in matters involving artificial intelligence as well.

Michael Krigsman: Can I ask; what advice do you have for people in the technology community, for individuals and the technology companies? There are a lot of technology folks who listen to this show, and so this speaks directly to them.

Lord Tim Clement-Jones: Yes. I'm basically very happy with the response of many technology companies to our report, and I think it's genuine. I don't think it's a cynical exercise. They genuinely accept the need for an ethical framework.

What there is a little bit of equivocation about is the question of whether or not they should design AI, algorithms, and so on in a way that's explainable and transparent. There's some doubt whether that's possible. But, the more people I talk to in the technology industry, the more I find this is a matter of design upfront.

Basically, it may not be the case now that AI is explainable, but I'm reliably informed by many, many people in the tech industries that it is possible to make sure, when you design AI, algorithms, and so on, that you build in explainability. I think that's very important. I'd like to really get that point across to the tech industry.
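
As an illustration of the "explainability by design" Lord Clement-Jones describes, here is a minimal Python sketch, not drawn from the report itself: a scoring model that returns a per-feature contribution breakdown alongside every prediction, so each decision can be inspected after the fact. The feature names, weights, and logistic scoring are hypothetical.

```python
# A minimal sketch of "explainability by design" (hypothetical feature
# names and weights, not from the report): the model returns a
# per-feature contribution breakdown alongside every score, so each
# decision can be audited after the fact.

from math import exp

WEIGHTS = {"income": 0.8, "years_employed": 0.5, "missed_payments": -1.2}
BIAS = -0.3

def predict_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Score an applicant and report each feature's contribution."""
    contributions = {
        feature: weight * applicant[feature]
        for feature, weight in WEIGHTS.items()
    }
    logit = BIAS + sum(contributions.values())
    score = 1 / (1 + exp(-logit))  # logistic link maps the logit to (0, 1)
    return score, contributions

score, why = predict_with_explanation(
    {"income": 1.0, "years_employed": 2.0, "missed_payments": 1.0}
)
print(f"score = {score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:>16}: {contribution:+.2f}")
```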

Dr. David Bray: Then, to build on what Tim said, Michael, we're seeing, in the United States, our Defense Advanced Research Projects Agency actually has a program called Explainable AI, where they're trying, right now, to encourage the industry to come forward with solutions. I also think we shouldn't discount the ability of people to help serve as minders to make sure things don't go wrong.

You could imagine a future in which companies, or any organization, have a group of people, from both the inside and the outside, who look at the data being fed to teach the machine, because it could be that the algorithm itself is very simplistic and can be made transparent; it's the datasets that matter. If the data is biased itself, you'll get bias in the machine. So, they're looking at the datasets, and they're also looking at what conclusions the machine makes. It's going to be a tech solution plus a people-centered solution for the future ahead.
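
Here is a minimal Python sketch of the kind of dataset review David describes, again an editorial illustration rather than anything from the discussion: before training, audit the training data for skew in outcomes across a sensitive attribute. The field names and the 10% threshold are hypothetical.

```python
# A minimal sketch of a pre-training dataset audit (hypothetical field
# names and threshold): check the training data for skew in positive
# outcomes across a sensitive attribute, since biased data produces a
# biased machine.

from collections import defaultdict

def positive_rates(records, group_key="group", label_key="label"):
    """Return the fraction of positive labels per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    return {group: positives[group] / totals[group] for group in totals}

training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

rates = positive_rates(training_data)
print(rates)  # roughly {'A': 0.67, 'B': 0.33}
if max(rates.values()) - min(rates.values()) > 0.10:  # hypothetical threshold
    print("Warning: outcome skew across groups; review before training.")
```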

Lord Tim Clement-Jones: It's a hugely important issue. One of the points we make in the report is, there needs to be much better digital understanding by the public. Contributing in that way seems to me to be extremely important for the future. The citizen needs to understand what AI is doing, what the implications are, how it might be biased if it's using the wrong data sets, how it should be explainable, and so on.

I think it needs to start very young. Young people already don't really know what is happening with the AI in their smartphones in terms of targeting them on social media, in terms of the purchases they make on the Internet, and so on and so forth. I think the need for digital understanding really ties in with the point you make, David.

Dr. David Bray: Michael, if I could add one thing to that real quick, I would also say I could easily imagine a new field. There used to be a thing called--and there still is--human/computer interaction. But, I could imagine there's going to be a whole series of anthropology and sociology studies about what happens as we begin to interact with something that is independently intelligent from ourselves: the good, the bad, and the in-between that arise from that in terms of human behavior. That's recognizing that we humans also have general traits that have become part of being human as a result of evolution.

There's a wonderful series of psychology studies that show, if you want something to go viral on the Internet, you make it either angry or fearful. Well, hopefully, the machine doesn't learn to do that when it gets assigned to go promote something on social media. I think that's where we're going to have to recognize it's about making sure the machine doesn't have bad behaviors, at scale or even individually, but also recognizing that we as humans may have some challenges of just being human, some flaws, and some humanity about ourselves that, when coupled with the machine, may create some interesting patterns.

Lord Tim Clement-Jones: Completely. That's precisely the point. What we don't want to do is incorporate the worst of our human features into AI. Of course, that'll be doubly the case if we start cracking the way in which AI can understand our emotions.

Dr. David Bray: Right.

Lord Tim Clement-Jones: Because that, I think, is one of the next big steps that's going to take place.

Michael Krigsman: We have a very interesting question from Twitter on this topic from Arsalan Khan who asks, "To what extent should the government be involved regulating the ethics?" Just as an extension of that, I would add, regulating the datasets to ensure that they are as bias-free as possible.

Lord Tim Clement-Jones: Well, I think there's no doubt about it. We've seen a big change in public opinion and a big change in public policy over the years towards the Internet. We started off by thinking the Internet was an absolutely free and open space that shouldn't be regulated in any shape or form. That view has changed over a period of time. I think the same is happening with artificial intelligence in that sense.

I think we have to make sure that AI isn't simply a free-for-all, developed in any old way. We already have pretty strong regulation through the General Data Protection Regulation, which came in earlier this year in Europe and which makes sure that the use of people's data is properly regulated. It even has a little bit to say about the way that algorithms use data, about explainability, and so on and so forth.

I think it's very important to have that kind of regulation in order to establish public trust. If you don't establish public trust, it means that the technology is suspect, the public won't accept it, and it'll be an abreaction if they see AI taking jobs, taking functions away from them. This is all about traveling together along the road, not somebody breaking off and saying, "I don't need any form of regulation."

Michael Krigsman: We're almost out of time. I wish we had a lot more time, so we'll need to do this again.

In your mind, one of the key takeaways from the report is the importance of communication and transparency so that the public understands what's actually taking place.

Lord Tim Clement-Jones: Absolutely, and it's such a big challenge because we've all seen the narrative. We have our tabloid newspapers in the U.K. with the Terminator robot narrative. Whenever there's AI mentioned in a tabloid, you normally get a picture of some robot from one of the movies, which isn't necessarily a very nice robot.

AI is, if we're not careful, being characterized as the enemy. We've got to change that narrative. We've got to make sure that the way that we develop public policy counteracts that narrative. I'm passionate about that, and I am convinced we can do it, but we all have to work together in doing it.

Michael Krigsman: Let me ask you both a final question. How do we ensure that not just the narrative, but the substance of the way AI develops, leads to the betterment of society rather than to increasing inequality in society? In a sense, that's foundational to all of us.

Dr. David Bray: I think that's actually near and dear to the People-Centered Internet coalition, which is that there is no textbook for the world we're going toward. However, Lord Tim and the folks with the Committee have put out some really great principles. And so, what we like to do is put them into practice. We like to do demonstration projects that we can learn from and, even more importantly, measure how this improves people's lives.

Our goal is that, if successful, localities will copy them, states will copy them, companies will copy them, countries will copy them. It's sort of creating a space in which we can learn. As Tim said, if we sense that anything is not going right, we can make changes. We can shift. We can use the Precautionary Principle and try to address this unfinished work of actually translating the ideals into practice and help shape the future.

Now, that said, we're obviously a small organization, and there are many more organizations out there. I think it gets back to the idea that it's going to take both government and nongovernment organizations, and private sector funding, to do these projects around the world and learn from each other. If we are mindful of trying to bring up everybody, that will be a triple win for society, a triple win for the economy, and a triple win for individuals.

If, however, we get fragmented and people go their own ways, as Lord Tim mentioned, and people say, "We don't need to work together. We don't need to learn together," in the short-term whoever goes off on their own might benefit, but it'll pull us apart.

I worry because I actually wish we were like the U.K. right now, where it seems like you actually have more solidification around a vision moving forward. I can't say the same is true about the U.S. But, if we can learn from each other and move forward together through these living, learning communities, that might be how we make sure AI benefits everybody and we don't fall into the trap of digital inequality and lack of inclusion with AI.

Lord Tim Clement-Jones: I think David has put it beautifully. I think we have to create a climate in which we accept the need for AI to be developed in that ethical way. I do think, also, though, it needs leadership. Of course, there's a sort of organic way of developing opinion and acceptance of a need for ethics and so on.

But, at the same time, we need to have leadership in government. We need to have leadership in industry to make sure that, for instance, if companies are deciding to invest in AI, they really look at the implications, the ethical implications, the job implications, the reskilling implications, so this becomes second nature. It becomes a set of procedures that they know they've got to do in order to have public acceptance. I think that that will require quite some leadership because a lot of our leaders will not really understand fully what the implications of AI are and what the benefits will be, as well as, of course, the risks that they need to mitigate.

Michael Krigsman: Okay. We are pretty much out of time, and what a very, very fast conversation this has been on truly one of the most important issues that we are facing as a society and as a world: the impact of technology. We've been speaking with Lord Tim Clement-Jones, who is the chairman of the House of Lords Select Committee on Artificial Intelligence, which has done a really excellent report on the impact of AI on society. I urge you to search for it and read it. It's very rich.

Our guest co-host and subject matter expert on this topic has been David Bray, who is the executive director of People Centered Internet. He has also been a Marshall. David, is it Marshall Fellow or Marshall Scholar?

Dr. David Bray: Marshall Memorial Fellow.

Michael Krigsman: A Marshall Memorial Fellow in Europe. Thank you so much, everybody, for watching. Be sure to subscribe on YouTube. Tell your friends and go to CXOTalk.com because we have great shows coming up. Thanks a lot, everybody. Bye-bye.

Published Date: Aug 17, 2018

Author: Michael Krigsman
