House of Lords Member talks AI Ethics, Social Impact, and Governance

What are the social, political, and government policy aspects of artificial intelligence? To learn more, we speak with Lord Tim Clement-Jones, Chairman of the House of Lords Select Committee on AI and advisor to the Council of Europe AI Committee.


Tim Clement-Jones was Chairman of the Association of Liberal Lawyers 1982-86 and then of the Liberal Party from 1986-88 and played a major part in the merger with the Social Democratic Party to form the Liberal Democrats. He was made CBE for political services in 1988. He was the Chairman of the Liberal Democrats Finance Committee from 1989-98 and Federal Treasurer of the Liberal Democrats from 2005-10.

He was made a life peer in 1998 and, until July 2004, was the Liberal Democrat Health Spokesman and thereafter, until 2010, Liberal Democrat Spokesman on Culture, Media and Sport in the House of Lords. He is a former Spokesman on the Creative Industries (2015-17) and is the current Liberal Democrat Digital Spokesperson in the House of Lords.

Tim is a former Chair of the House of Lords Select Committee on Artificial Intelligence, which sat from 2017 to 2018, and a current member of the House of Lords Select Committee on Risk Assessment and Planning. He was a member of the Select Committees on Communications (2011-15) and the Built Environment (2015-16). He is Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence and Vice Chair of the All-Party Parliamentary Groups on Music, The Future of Work, Digital Regulation and Responsibility, Ticket Abuse, Performers Alliance, Writers and Indonesia.

Tim is a founding member of the OECD Parliamentary Group on AI and a member of the Council of Europe’s Ad-hoc Committee on AI (CAHAI). He is a Senior Fellow of the Atlantic Council’s GeoTech Center, which focusses on technology, altruism, geopolitics and competition.

Transcript

This transcript was lightly edited.

What are the unique characteristics of artificial intelligence?

Michael Krigsman: Today, we're speaking about AI, public policy, and social impact with Lord Tim Clement-Jones, CBE. What are the attributes or characteristics of artificial intelligence that make it so important from a policy-making perspective?

Lord Tim Clement-Jones: I think the really key thing is (and I always say) AI has to be our servant, not our master. I think the reason that is such an important concept is that AI potentially has an autonomy about it.

Brad Smith calls AI software that learns from experience. Well, of course, if software learns from experience, it's effectively making things up as it goes along. It depends, obviously, on the original training data and so on, but it does mean that it can do things of its own not quite volition but certainly of its own motion, which therefore have implications for us all.

Where you place those AI applications, algorithms (call them what you like) is absolutely crucial because if they're black boxes, humans don't know what is happening, and if they're placed in financial services, government decisions over sentencing, or a variety of really sensitive areas, then, of course, we're all going to be poorer for it. Society will not benefit from that if we just have this range of autonomous black box solutions. In a sense, that's a rather dystopian way of describing it, but it's certainly what we're trying to avoid.

Michael Krigsman: How is this different from existing technologies, data, and analytics that companies use every day to make decisions, where consumers don't have access to the logic and the data (in many cases)?

Lord Tim Clement-Jones: Well, of course, it may not be if those data analytics are carried out by artificial intelligence applications. There are algorithms that, in a sense, operate on data and come up with their own conclusions without human intervention. They have exactly the same characteristic.

The issue for me is this autonomy aspect. With data analytics, if you've got actual humans in the loop, so to speak, then that's fine. We, as you know, have slightly tighter, well, considerably tighter, data protection in Europe (as a framework) for decision-making when you're using data. The aspect of consent or using sensitive data, a lot of that is covered. One has a kind of reassurance that there is, if you like, a regulatory framework.

But when it comes to automaticity, it is much more difficult because, at the moment, you don't necessarily have duties relating to the explainability of algorithms or the freedom from bias of algorithms, for instance, in terms of the data that's input or the decisions that are made. You don't necessarily have an overarching rule that says AI must be developed for human benefit and not, if you like, for human detriment.

There are a number of areas which are not covered by regulation, and yet there are high-risk areas that we really need to think about.

Algorithmic decision-making and risks

Michael Krigsman: You focus very heavily on this notion of algorithmic decision-making. Please elaborate on that, what you mean by that, and also the concerns that you have.

Lord Tim Clement-Jones: Well, it's really interesting because, actually, quite a lot of the examples that one is trying to avoid come from the States. For instance, parole decisions made using artificial intelligence, or live facial recognition technology using artificial intelligence.

Sometimes, you get biased decision-making of a discriminatory nature in racial terms. That was certainly true in Florida with the COMPAS parole system. It's one of the reasons why places like Oakland, Portland, and San Francisco have banned live facial recognition technology in their cities.

Those are the kinds of areas where you really do need to have a very clear idea of how you design these AI applications, what data you're putting in, how that data trains the algorithm, and then what the output is at the end of the day. It's about trying to get some really clear framework for this.

You can call it an ethical framework. Many people do. I call it just, in a sense, a set of principles that you should basically put into place for, if you like, the overall governance and design, and for the use cases in which you're going to use the AI application.

Michael Krigsman: What is the nature of the framework that you use, and what are the challenges associated with developing that kind of framework?

Lord Tim Clement-Jones: I think one of the most important aspects is that this needs to be cross-country. This needs to be international. My desire, at the end of the day, is to have a framework which, in a sense, assesses the risk.

I am not a great regulator. I don't really believe that you've got to regulate the hell out of AI. You've got to basically be quite forensic about this.

You've got to say to yourself, "What are the high-risk areas that are in operation?" It could be things like live facial recognition. It could be financial services. It could be certain quite specific areas where there are high risks of infringement of privacy or decisions being made in a biased way, which have a huge impact on you as an individual or, indeed, on society because social media algorithms are certainly not free of issues to do with disinformation and misinformation.

Basically, it starts with an assessment of what the overall risk is, and then, depending on that level of risk, you say to yourself, "Okay, a voluntary code. Fine for certain things in terms of ethical principles applied."

But if the risk is a bit high, you say to yourself, "Well, actually, we need to be a bit more prescriptive." We need to say to companies and corporations, "Look, guys. You need to be much clearer about the standards you use." There are some very good international standard bodies, so you prescribe the kinds of standards, the design, an assessment of use case, audit, impact assessments, and so on.

There are certain other things where you say, "I'm sorry, but the risk of detriment, if you like, or damage to civil liberties," or whatever it may be, "is so high that, actually, what we have to have is regulation."

Then you have a framework. You say you can only use, for instance, live facial recognition in this context, and you must design your application in this particular way.

I'm a great believer in a graduation, if you like, of regulation depending on the risk. To me, it seems that we're moving towards that internationally. I actually believe that the new administration in the States will move forward in that kind of way as well. It's the way of the world. Otherwise, we don't gain public trust.

Trust and confidence in AI policy

Michael Krigsman: The issue of trust is very important here. Would you elaborate on that for us?

Lord Tim Clement-Jones: There are cultural issues here. One of the examples that we used in our original House of Lords report was GM foods. There's a big gulf, as you know, between the approach to GM foods in the States and in Europe.

In Europe, we sort of overreacted and said, "Oh, no, no, no, no, no. We don't like this new technology. We're not going to have it," and so on and so forth. Well, it was handled extremely badly because it looked as though it was just a major U.S. corporation that wanted to have its monopoly over seed production and it wasn't even possible for farmers to grow seed from seed and so on.

In a sense, all the messaging was wrong. There was no overarching ethical approach to the use of GM foods, and so on. We're determined not to get that wrong this time.

The reason why GM foods didn't take off in Europe was because, basically, the public didn't have any trust. They believed, if you like, an awful lot of (frankly) the myths that were surrounding GM foods.

It wasn't all myth. They weren't convinced of the benefit. Nobody really explained the societal benefits of GM foods.

Whether it would have been different, I don't know. Whether those benefits would have been seen to outweigh some of the dangers that people foresaw, I don't know. Certainly, we did not want this kind of approach to take place with artificial intelligence.

Of course, artificial intelligence is a much broader technology. A lot of people say, "Oh, you shouldn't talk about artificial intelligence. Talk about machine learning or probabilistic learning," or whatever it may be. But AI is a very useful, overall description in my view.

Michael Krigsman: How do you balance the competing interests in, for example, the genetically modified food case you were just speaking about: the interests of consumers, the interests of seed producers, and so forth?

Lord Tim Clement-Jones: I think it's really interesting because I think you have to start with the data. You could have a set of principles. You could say that app developers need to look at the public benefit and so on and so forth. But the real acid test is the data that you're going to use to train the AI, the algorithm, whatever you may describe it as.

That's the point where there is this really difficult issue about what data is legitimate to extract from individuals. What data should be publicly valued and not sold by individual companies or the state (or whatever)? It is a really difficult issue.

In the States, you've had that brilliantly written book Surveillance Capitalism by Shoshana Zuboff. Now that raises some really important issues. Should an individual's behavioral data—not just ordinary personal data, but their behavioral data—be extractable and usable and treated as part of a data set?

That's why there is so much more discussion now about, well, what value do we attribute to personal data? How do we curate personal data sets? Can we find a way of not exactly owning but, certainly, controlling (to a greater extent) the data that we impart, and is there some way that we can extract more value from that in societal terms?

I do think we have to look a bit more at that. Certainly, in the UK, we've been very keen on what we call data trusts or social data foundations: institutions that hold data, public data; for instance, our national health service. Obviously, you have a different health service in the States, but data held by a national health service could be held in a data trust and, therefore, people would see what the framework for governance was. This would actually be very reassuring in many ways for people to see that their data was simply going to be used back in the health service or, if it was exploited by third parties, that that was again for the benefit of the national health service: vaccinations, diagnosis of rare diseases, or whatever it may be.

It's really seeing the value of that data and not just seeing it as a commercial commodity that is taken away by a social media platform, for instance, and exploited without any real accountability. Arguing that terms and conditions do the job doesn't ever convince me. I'm a lawyer, but I still don't believe that terms and conditions are adequate in those circumstances.

Decision-making about AI policy and governance

Michael Krigsman: We have a very interesting question from Arsalan Khan, who is a regular listener and contributor to CXOTalk. Thank you, Arsalan, always, for all of your great questions. His question is very insightful, and I think also relates to the business people who watch this show. He says, "How do you bring together the expertise (both in policymaking as well as in technology) so that you can make the right decisions as you're evaluating this set of options, choices, and so forth that you've been talking about?"

Lord Tim Clement-Jones: Well, there's no substitute for government coordination, it seems to me. The White House under President Obama had somebody who really coordinated quite a lot of this aspect.

There has been an AI specialist in the Trump White House as well. I don't think they were quite given the license to get out there and coordinate the effort that was taking place, but I'm sure, under the new administration, there will be somebody specifically, in a sense, charged with creating policy on AI in all its forms.

The States belongs to the Global Partnership on AI with Canada, France, UK, and so on. And so, I think there is a general recognition that governments have a duty to pull all this together.

Of course, it's a big web. You've got all those academic institutions, powerful academic institutions, who are not only researching into AI but also delivering solutions in terms of ethics, risk assessments, and so on. Then you've got all the international institutions: OECD, Council of Europe, G20.

Then at the national level, in the UK for instance, we've got regulators of data. We have an advisory body that advises on AI, data, and innovation. We have an office for AI in government.

We have The Alan Turing Institute, which pulls together a lot of the research that is being done in our universities. Now, unless somebody is sitting there at the center and saying, "How do we pull all this together?" it becomes extremely incoherent.

We've just had a paper from our competition authority on algorithms and the way that they may create consumer detriment in certain circumstances where they're misleading, for instance on price comparison or whatever it may be.

Now, that is very welcome. But unless we actually fold that all into what we're trying to do across government and internationally, we're going to find ourselves with one set of rules here and another set of rules there. Actually, trading across borders is difficult enough as it is, and we've got all the Privacy Shield and data adequacy issues at this very moment. Well, if we start having issues about inspection of the guts of an algorithm before an export can take place—because we're not sure that it's conforming to our particular set of rules in our country—then I think that's going to be quite tricky.

I'm a big fan of elevating this and making sure that, right across the board, we've got a common approach. That's why I'm such a big fan of this risk-based approach: I think it's common sense, basically, and it isn't one-size-fits-all. I think, also, it means that, culturally, we can all get together on that.

Michael Krigsman: Is there a risk of not capturing the nuances because this is so complex and, therefore, creating regulation or even policy frameworks that are just too broad-brushed?

Lord Tim Clement-Jones: There is a danger of that but, frankly, I think, at the end of the day, whatever you say about this, there are going to be tools. I think regulation is going to happen at a sector level, probably.

I think that it's fair enough to be relatively broad-brushed across the board in terms of risk assessment and the general principles to be adopted in terms of design and so on. You've got people like the IEEE who are doing ethically aligned design standards and so on.

It's when it gets down to the sector level that you then get more specific. I don't think most of us would have too much objection to that. After all, there is already alignment by sector.

For instance, the rules relating to financial services in the States (for instance in mergers, takeovers, and such) aren't very different to those in the UK, but there is a sort of competitive drive towards aligning your regulation and your regulatory rules, so to speak. I'd be quite optimistic that, actually, if we saw that (or if you saw that) there was one type of regulation in a particular sector, you'd go for it.

Automated vehicles, actually, is a very good example where regulation can actually be a positive driver of growth because you've got a set of standards that everybody can buy into and, therefore, there's business certainty.

How to balance competing interests in AI policy

Michael Krigsman: Arsalan Khan comes back with another question, a very interesting point, talking about the balancing of competing goals and interests. If you force open those algorithmic black boxes then do you run the risk of infringing the intellectual property of the businesses that are doing whatever it is that they're doing?

Lord Tim Clement-Jones: Regulators are very used to dealing with these sorts of issues of inspection and audit. I think that it would be perfectly fine for them to do that, and they wouldn't be infringing intellectual property because they wouldn't be exploiting it. They'd be inspecting but not exploiting. I think, at the end of the day, that's fine.

Also, don't forget, we've got this great concept now of sandboxing. The regulators are much more flexible than they used to be.

Michael Krigsman: How do you balance the interests of corporations against the public good, especially when it comes to AI? Maybe give us some specific examples.

Lord Tim Clement-Jones: For instance, we're seeing that in the online situation with social media. We've got this big debate happening, for instance, on whether or not it's legitimate for Twitter to delist somebody in terms of their account with them. No doubt, the same is true with Facebook and so on.

Now, maybe I shouldn't say that it isn't fair to a social media platform to have to make those decisions but—because of all the freedom of speech issues—I'd much prefer to see a reasonably clear set of principles and regulations about when social media platforms actually ought to delist somebody.

We're developing that in the UK in terms of Online Harms so that social media will have certain duties of care towards certain parts of the community, particularly young people and the vulnerable. They will have a duty to actually delist or take off content, or what has been called detoxing the algorithm. We're going to try and get a set of principles where people are protected and social media platforms have a duty, but it isn't a blanket duty and it doesn't mean that social media have to make freedom of speech decisions in quite the same way.

Inevitably, public policy is a balance and the big problem is ignorance. It's ignorance on the part of the social media platforms as to why we would want to regulate them and it's ignorance on the part of politicians who actually don't understand the niceties of all of this when they're trying to regulate.

As you know, some of us are quite dedicated to joining it all up so people really do understand why we're doing these things and getting the right solutions. Getting the right solution in this online area is really tricky.

Of course, in the middle of it, and this is why it's relevant to AI, is the algorithm: the pushing of messages in particular directions, which is autonomous. We're back to this autonomy issue, Michael.

Sometimes, you need to say, "I'm sorry, you need to be a lot more transparent about how this is working. It shouldn't be working in that way, and you're going to have to change it."

Now, I know that's a big, big change of culture in this area, but it's happening and I think that with the new administration, Congress, and so on, I think we'll all be on the same page very shortly.

Michael Krigsman: I have to ask you about the concentration of power that's taken place inside social media companies. Social media companies, many of them born in San Francisco, technology central, and so the culture of technology, historically, has been, "Well, you know, we create tools that are beneficial for everyone, and leave us alone," essentially.

Lord Tim Clement-Jones: Well, that's exactly where I'm coming from: that culture has to change now. There is an acceptance of that, I think. If you talk to the senior people in the social media companies and the big platforms, they will now accept that the responsibility of having to make decisions about delisting people, or about what content should be taken down, is not something they feel very comfortable about, and they're getting quite a lot of heat as a result of it. Therefore, I think increasingly they will welcome regulation.

Now, obviously, I'm not prescribing what kind of regulation is appropriate outside the UK or what would be accepted but, certainly, that is the way it's worked with us, and there's a huge consensus across parties that we need to have a framework for social media operations. It isn't just Section 230, as you know, which sort of more or less allows anything to happen so that, in that sense, you don't take responsibility as a platform. Well, you know, not that we've ever accepted that in full in Europe but, in the UK, certainly.

Now, we think that it's time for social media platforms to take responsibility while recognizing the benefits. Good heavens. I tweet like the next person. I'm on LinkedIn. I'm no longer on Facebook. I took Jaron Lanier's advice.

There are platforms that are out there which are the Wild West. We've heard about Parler as well. We need to pull it together pretty quickly, actually.

Digital ethics: The House of Lords AI Report

Michael Krigsman: We have some questions from Twitter. Let's just start going through them. I love taking questions from Twitter. They tend to be great questions.

You created the House of Lords AI Report. Were there any outcomes that resulted from that? What did those outcomes look like?

Lord Tim Clement-Jones: Somebody asked me and said, "What was the least expected outcome?" I expected the government to listen to what we had to say and, by and large, they did.

To a limited extent, in terms of coordination, they did. But they haven't moved very fast on skills; to touch on skills again, they haven't moved nearly fast enough.

They haven't moved fast enough on education and digital understanding, although we've got a new kind of media literacy strategy coming down the track in the UK. Some of that is due to the pandemic but, actually, it's a question of energy and so on.

They've certainly done well in terms of the climate for research investment and in terms of the kind of nearer-to-market encouragement that they've given. So, I would score their card at about six out of ten. They've done well there.

They sort of said, "Yes, we accept your ethical AI, your trustworthy AI message," which was a core of what we were trying to say. They also accepted the diversity message. In fact, if I was going to say where they've performed best in terms of taking it on board, it's this diversity in the AI workforce, which I think is the biggest plus.

The really big plus has been the way the private sector in the UK has taken on board the messages about trustworthy AI, ethical AI. Now, techUK, which is our overarching trade body in the UK, has a regular annual conference about ethics and AI, which is fantastic. They're genuinely engaged.

In a sense, the culture of the app developer, the AI app developer, really encompasses ethics now. We don't have a kind of Hippocratic oath for developers but, certainly, the expectations are that developers are much more plugged into the principles by which they are designing artificial intelligence. I think that will continue to grow.

The education role that techUK has played with their members has been fantastic, and there is a general expectation (across the board) from our regulators as well. We've reinforced each other, I think, probably, in that area, which has been very good because, let's face it, the people who are going to develop the apps are the private sector.

The public sector, by and large, procures these things. They've now put sets of ethical principles in place for procurement: World Economic Forum principles, ethical data-sharing frameworks, and so on.

Generally, I think we've seen a fair bit of progress. But we did point out in our most recent report where they ran the risk of being complacent, and we warned against that, basically.

Michael Krigsman: We have a really interesting question from Wayne Anderson. Wayne makes the point that it's difficult to define digital ethics at scale because of the competing interests across society that you've been describing. He said, "Who owns this decision-making, ultimately? Is it the government? Is it the people? How does it manifest? And who decides what AI is allowed to do?"

Lord Tim Clement-Jones: That's exactly my risk-based approach. It depends on what the application is. You do not want a big brother type government approach to every application of AI. That would be quite stupid. They couldn't cope anyway and it would just restrict innovation.

What you have to do—and this is back to my risk assessment approach—is say, "What are the areas where there's potential detriment to citizens, to consumers, to society? What are those areas, and then what do we do about them? What are the highest risks?"

I think that is a proportionate way of looking at dealing with AI. That is the way forward for me, and I think it's something we can agree on, basically, because risk is something that we understand. Now, we don't always get the language right, but that's something I think we can agree on.

Michael Krigsman: Wayne Anderson follows up with another very interesting question. He says, "When you talk about machine learning and statistical models, it's not soundbite friendly. To what degree are ignorance of the problem, the nature of what's going on, and the media inflaming the challenges here?"

Lord Tim Clement-Jones: The narrative of AI is one of the most difficult and the biggest barriers to understanding: public understanding, understanding by developers, and so on.

Unfortunately, we're victims in the West of a sort of 3,000-year-old narrative. Homer wrote about robots. Jason and the Argonauts had to escape from a robot walking around the Isle of Crete. That was 3,000 years ago.

It's been in our myths. We've had Frankenstein, the Prague Golem, you name it. We are frightened, societally and existentially frightened, by the "other," by alien creatures.

We think of AI as embedded in physical form, in robots, and this is the trouble. We've seen headlines about terminator robots.

For instance, when we launched our House of Lords report, we had headlines about the House of Lords saying there must be an ethical code to prevent terminator robots. You can't get away from the narrative, so you have to double up and keep doubling up on public trust in terms of the reassurance about the principles that are applied, about the benefits of AI applications, and so on.

This is why I raised the GM foods point because—let's face it—without much narrative about GM foods, they were called Frankenfoods. They didn't have a thousand years of history about aliens, but we do in AI, so the job is bigger.

Impact of AI on society and employment

Michael Krigsman: Any conversation around AI ethics must include a discussion of the economic impacts of AI on society and the worker and economic displacement that is taking place. How do we bring that into the mix?

Lord Tim Clement-Jones: There are different forecasts and we have to accept the fact that some people are very pessimistic about the impact on the workforce of artificial intelligence and others who are much more sanguine about it. But there are choices to be made.

We have been here before. If you look at 5th Avenue in 1903, what do you see? You see all horses. If you look at 5th Avenue in 1913, you see all cars. I think you see one horse in the photograph.

This is something that society can adjust to but you have to get it right in terms of reskilling. One of the big problems is that we're not moving fast enough.

Not only is it about education in schools—which is not just scientific and technological education—it's about how we use AI creatively, how we use it to augment what we do, to add to what we do, not just simply substitute for what we do. There are creative ways we need to learn about in terms of using AI.

Then, of course, we have to recognize that we have to keep reinventing ourselves as adults. We can't just expect to have the same job for 30 years now. We have to keep adjusting to the technology as it comes along.

To do that, you can't just do it by yourself. You have to have—I don't know—support from government like a life-long learning account as if you were getting a university loan or grant. You've got to have employers who actually make the effort to make sure that their worker skills don't simply become obsolete. You've got to be on the case for that sort of thing. We don't want a kind of digital rustbelt in all of this.

We've got to be on the case and it's a mixture of educators, employers, government, and individuals, of course. Individuals have to have the understanding to know that they can't just simply take a job and be there forever.

Michael Krigsman: Again, it seems like there's this balancing that's taking place. For example, in the role of government in helping ease this set of economic transitions but, at the same time, recognizing that there will be pain and that individuals also have to take responsibility. Do I have that right, more or less?

Lord Tim Clement-Jones: Absolutely. I'm not a great fan of the government doing everything for us because they don't always know what they need to do. To expect government to simply solve all the problems with a wave of the financial wand, I think, is unreasonable.

But I do think this is a collaboration that needs to take place. We need to get our education establishment—particularly universities and further education in terms of pre-university colleges and, if you like, those developing different kinds of more practical skills—involved so that we actually have an idea about the kinds of skills we're going to need in the future. We need to continually be looking forward to that and adjusting our training and our education to that.

At the moment, I just don't feel we're moving nearly fast enough. We're going to wake up with a dreadful hangover (if we're not careful), with people without the right skills, so that jobs can't be filled and, yet, we have people who can't get jobs.

This is a real issue. I'm not one of the great pessimists. I just think that, at any rate, we have a big challenge.

Michael Krigsman: We also need to talk about COVID-19. Where are you, in the UK, in dealing with this issue? As somebody in the House of Lords, what is your role in helping manage this?

Lord Tim Clement-Jones: My job is to push and pull and kick and shove and try and move government on, but also be a bit of a hinge between the private sector, academia, and so on. We've got quite a community now of people who are really interested in artificial intelligence, the implications, how we further it to public benefit, and so on. I want to make sure that that community is retained and that government ministers actually listen to that community and are a part of that community.

Now, you know, I get frustrated sometimes because government doesn't move as fast as we all want it to. On algorithmic decision-making in government, our government hasn't yet woken up to the need to have a fairly clear governance and compliance framework, but they'll come along. I'd love it if they were a bit faster, but I've still got enough energy to keep pushing them as fast as I can go.

Michael Krigsman: Any thoughts on what the post-pandemic work world will look like?

Lord Tim Clement-Jones: [Loud exhale] I mean, this is the existential fret because, if you like, of the combination of COVID and the acceleration of remote working, particularly where lightbulbs have gone off in a lot of boardrooms about what is possible now in terms of the use of technology, which weren't there before. If we're not careful, and if people don't make the right decisions in those boardrooms, we're going to find substitution of people by technology taking place to quite a high degree, without thinking about how the best combination between technology and humans works, basically. It's just going to be seen as, "Well, we can save costs and so on," without thinking about the human implications.

If I were going to issue any kind of gypsy's warning, that's what I'd say: actually, we're going to find ourselves in a double whammy after the pandemic because of new technology being accelerated. All those forecasts are actually going to come through quicker than we thought if we're not careful.

Michael Krigsman: Any final closing thoughts as we finish up?

Lord Tim Clement-Jones: I use the word "community" a fair bit, but what I really like about the world of AI (in all its forms) whatever we're interested in—skills, ethics, regulation, risk, development, benefit, and so on—is the fact that we're a tribe of people who like discussing these things, who want to see results, and it's international. I really do believe that the kind of conversation you and I have had today, Michael, is really important in all of this. We've got international institutions that are sharing all this.

The worst thing would be if we had a race to the bottom with AI and its principles. "Okay, no, we won't have that because that's going to damage our competitiveness," or something. I think I would want to see us collaborate very heavily, and they're used to that in academia. We've got to make sure that happens in every other sphere.

Michael Krigsman: All right. Well, a very fast-moving conversation. I want to say thank you to Lord Tim Clement-Jones, CBE, for taking time to be with us today. Thank you for coming back.

Lord Tim Clement-Jones: Pleasure. Absolute pleasure, Michael.

Michael Krigsman: Thank you to everybody who watched and particularly to those who asked such great questions. You guys are the best. You ask superb questions, so thank you for that.

Before you go, please subscribe to our YouTube channel and hit the subscribe button at the top of our website. Thanks a lot, everybody. Have a great day. We'll see you soon.

Published Date: Jan 29, 2021

Author: Michael Krigsman

Episode ID: 689