Responsible AI, Public Policy, and Social Impact: A Conversation from the House of Lords

Lord Tim Clement-Jones of the UK House of Lords offers thoughtful perspectives on AI regulation, stressing practical solutions to assess risks and enable innovation while seeking global collaboration on ethical guardrails and responsible AI.

As AI and other emerging technologies race ahead at lightning speed, establishing ethical guardrails has become urgent. In this forward-looking episode of CXOTalk, Lord Tim Clement-Jones of the UK House of Lords offers a thoughtful perspective on navigating this complex landscape.

With an eye toward practical solutions, he discusses how to assess risks and shape regulations to enable innovation. He advocates bringing together diverse voices internationally to find common ground on AI safety standards. Lord Clement-Jones stresses tackling online harms thoughtfully and avoiding regulatory overreach. Throughout the wide-ranging conversation, his level-headed advice provides direction for policymakers and technologists alike. Lord Clement-Jones aims to balance rapid progress with collaborative, proactive efforts to ensure AI and related technologies benefit humanity.

Our guest co-host for this episode is QuHarrison Terry.

Lord Clement-Jones was made CBE for political services in 1988 and a life peer in 1998. He is the Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology; a member of the AI in Weapons Systems Select Committee; former Chair of the first House of Lords Select Committee on AI, which sat from 2017 to 2018; Co-Chair and founder of the All-Party Parliamentary Group on AI; a founding member of the OECD's Parliamentary Group on AI; and a consultant on AI policy and regulation to the global law firm DLA Piper.

QuHarrison Terry is head of growth marketing at Mark Cuban Companies, a Texas venture capital firm, where he advises and assists portfolio companies with their marketing strategies and objectives.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: On Episode #797 of CXOTalk, we're discussing AI and public policy with Lord Tim Clement-Jones of the House of Lords. My guest co-host is QuHarrison Terry, Chief Growth Officer for the Mark Cuban Companies. 

QuHarrison Terry: Yo! What's up? I'm glad to be here. And, yeah, it's going to be an exciting show. I'm in Israel today, so I've only got 30 minutes, but I'm excited to actually have a few conversations with Lord Tim Clement-Jones.

Opening discussion on regulating emerging technologies

Michael Krigsman: And with that, Lord Tim Clement-Jones, welcome back to CXOTalk. 

Lord Tim Clement-Jones: Really good to be back, Michael, and good to be with you and Qu. Very nice to meet Qu as well.

Michael Krigsman: Tim, tell us about your focus at the House of Lords, as a member of the House of Lords.

Lord Tim Clement-Jones: It's something that has really grown because I started off as the digital spokesperson for my party in the House of Lords a few years ago. But now, as everything has become digital – and the government have created a special department for science, innovation, and technology – what I have to try and do is cover the full range of innovation and technology. 

That is not just AI and all the tech around that. It's also life sciences, the application of new technology to health, and other aspects. So, it's a broad brief. 

Balancing AI innovation and regulation

QuHarrison Terry: You're working on bills that are regulating some really ambiguous technologies related to our future, right? AI is a big deal. Obviously, it's played a part.

I think GDPR was really good because it taught marketers like myself what we could and couldn't do and where to place data. And it gave us some standards.

But marketing had existed for decades. We'd been doing online marketing for decades. It just hadn't been as big as it is today.

With AI, we're literally governing something that is in its infancy. Would you give your kid training wheels before it could even get out of the womb? 

I feel like I'm seeing that argument being brought up a ton, especially here in the U.S. My question for you is, how do you even conceptualize something and the impact that it's going to have on society in the next few decades? 

Lord Tim Clement-Jones: What you have to do is start from the risk aspect. We don't really know, and so what you have to do is try and get some guardrails in place where you do have to assess the risk. Otherwise, if you don't start now, by the time we get to artificial general intelligence, it'll be too late.

What we have to do is try and work out what is appropriate. One of our big debates at the moment is what is proportionate in all of this. A lot of people... There are people who believe that any form of regulation stifles innovation. 

I'm a believer in regulation, proportionate regulation that makes sure that developers have some certainty around what they do. So, actually, that stimulates innovation. 

And I think there's a bit of a philosophical divide. But increasingly, I think when people talk about AI, they do think that some guardrails are appropriate.

QuHarrison Terry: I want to advocate for some of the companies that are being picked on or challenged in the landscape. Is this regulation just for them? 

If I'm Meta and I say I'm getting into AI, and then the UK Parliament comes in and says, "Well, we've got an AI bill," it's almost like you're putting handcuffs on me. And it's not just Meta. It's Google and Microsoft, et cetera. Is that what this really is?

Lord Tim Clement-Jones: If I may say, it's probably the attitude of a few years ago, the kind of "move fast and break things" type of philosophy. We all know that that got us some places, but it also got us some detriment as well.

No doubt, we're going to be talking about online safety and so on. And we don't want the same to happen with AI. We want to anticipate some of the issues that are going to be associated with AI.

It is right that we should be looking at this. I think, increasingly, those big tech companies now recognize that some form of guardrails, some form of regulation is appropriate. 

If you hear people like Brad Smith, you listen to Sam Altman, all of those guys, they're beginning to understand the need for it. Now when you talk to Meta, they're also concerned to make sure that there is some kind of regulation. 

The big issue is, how far do you go in terms of the smaller companies? What do you regulate? How do you regulate? And, in the case of online safety, what impact there is on freedom of speech, for instance.

The “black box, autonomous” nature of AI is hard to regulate

Michael Krigsman: One of the real questions that I have about this is what is unique about AI because, historically, it would be almost unheard of that technologists would approach the government and say, "Hey, you need to regulate us." But with AI, Tim, as you pointed out, that's beginning to happen. What is causing this very unusual dynamic where folks are saying, "Hey, government, you need to regulate us"?

Lord Tim Clement-Jones: I absolutely understand why there is a little bit of bafflement if you don't know what the potential of AI is. It's partly about the black-box nature of AI, the neural networks, the fact that they can make decisions and predictions which impact on you without reasons being completely understood. That's a machine making those decisions, and you can't interrogate it in an appropriate kind of way unless you design it ahead of the game. 

And it's the autonomous nature of AI, too, where you may not have full control over what the AI is actually doing in terms of what it's deciding and what data it's actually ingesting. Now that we've got these large language models, that has become particularly acute because we don't actually know all of the data that has been ingested by them.

As I said a little bit earlier, what we're trying to do is anticipate some of the problems where AGI (artificial general intelligence) would have a greater degree of autonomy, a greater power in terms of prediction and decision-making, and, if you like, just content creation than anything we've ever seen before. And so, we do have to make sure that there is some form of ethical set of guardrails around that.

Michael Krigsman: What is driving this request for regulation from technologists? Is it fear? Is it fear that the government may take control? Is it fear of their own creation, the AI creation?

Lord Tim Clement-Jones: I know the narrative is somewhat polarized about AI, but you've seen a thousand technologists write in. You've seen people like OpenAI, Anthropic, and so on all join in with the view that we should even... I mean I don't actually accept that, but a lot of technologists are asking for a moratorium on further development, pending regulation.

Now I think... Maybe I've got a greater faith in politicians' ability to move fast and regulate rather than break things, but I do believe that there is a possibility of, fairly quickly, moving to at least standards so that we can get sets of standards for testing, sets of standards for risk and impact assessment, standards for continuous monitoring – even on an international basis.

Get that in place. Whether or not it's obligatory, make it part of our technology culture on both sides of the Atlantic, and I think we would have a great achievement there. 

Importance of regulatory standards for AI

QuHarrison Terry: One of the things that I think about: if we jump just a few years into the future, there's going to be a case where you're going to have to litigate AI. The reason that I believe a lot of these entrepreneurs are raising their hands and saying, "Hey, challenge us. Challenge us. Challenge us," is because their lawyers are telling them, "Hey, we need to CYA all of this."

One way to do that is not to lobby. Don't send your lawyers in saying, "This is what we suggest." Instead, you as the entrepreneur say, "I don't know what's going on. If something happens, I've basically created some basis of indemnification from the jump."

Then, when you have to go litigate the AI, your founders are removed from that because they're going to say, "Hey, I told you I didn't know what was going to happen, and this could have massive, deadly implications."

Lord Tim Clement-Jones: I love that. The U.S. approach to this kind of thing is litigation, which establishes standards. It establishes norms. 

When you have people like the National Institute of Standards and Technology setting standards (as they are doing for risk management and so on), that becomes the norm. Therefore, people are going to litigate over this and say to the technologists, "Look, guys. You didn't conform to the risk management framework set out by the national institute," and so they are going to be liable in court.

Now, that's not the way Europeans tend to do it. We tend to try and regulate in advance of this kind of litigious behavior if you like. But they're both valid and they both lead to the creation of standards, which can be common.

I'm a great believer in trying to converge (as much as possible) on these standards so that then technologists can get on with their job. They can feel quite confident that this is what they need to do to avoid a black box, or this is what they need to do to make sure they're not developing high-risk AI systems. 

We've got a lot of potential there. So, to that extent, I'm actually quite optimistic. 

High-risk AI applications like facial recognition

Michael Krigsman: Can you elaborate on the phrase you just used, "high-risk AI systems"? What qualifies, what pushes the application of this technology into that category?

Lord Tim Clement-Jones: Well, let me take a U.S. example, Michael. You know a lot of cities in the U.S. – Oakland, Portland, Chicago – have banned the use of live facial recognition cameras, AI-powered live facial recognition cameras or systems (and rightly so in my view) pending a proper kind of overview, an ethical framework for the deployment of this kind of technology because it's seen as continuous surveillance, basically, of our citizens. Now, for me, that is high risk. 

Before the police or the security services install those cameras, what we need is an assessment of the impact, of how much people need to consent to that being deployed, and of the biases which may be inherent in the processing of the system. For instance, there's the racial prejudice that might be inherent in the datasets when they, in a sense, pick out particular people from the crowd. 

All that kind of thing, for me, is high risk and requires a greater degree of impact assessment before you deploy. That is the kind of regulation that I think is very apposite and, of course, it's what the EU are adopting currently.

UK Online Safety Bill aims to balance harms and expression

QuHarrison Terry: I want to shift a little bit towards something that is just as important, and it's the Online Safety Bill. My question here is, I feel like I'm being overregulated. I'm being regulated on AI. I'm being regulated on all of my terms of service on all the websites that I now maintain that I have to enforce. I've got to make the Internet a safer place. I have to control the users and control my content. 

What do we expect these tech companies to really become, because it seems like a lot of the legislation that you're proposing is just telling me that I have to follow the rules and I can't think outside the box? Thus, I can't bring you the devices or the services that I want. I have to follow the line. Pretty much we're all going to be using the same devices, the same websites because it's going to cost too much to think outside the box.

Lord Tim Clement-Jones: Having spent the last two or three years working on the Online Safety Bill – which hopefully will become an Act in September because we're very, very nearly there – I think we've managed to work our way through without overregulating and without impinging too much on freedom of expression because what we're really doing now, we're really regulating for harms against children. And everybody would agree that what you don't want is minors suffering harms, so you've got to define those harms and make sure the platforms adhere to that so they don't expose those children to harms.

The second thing is illegal behavior. Our principle has been, what is illegal offline should be illegal online. That's been the principle we've tried to establish. 

Everything else is okay as long as everybody knows when they get on a platform, this is what the terms of service provide. So, if you're in a very free speech environment and that's the terms of service – like parts of Reddit for instance, the community that is quite explicit about the things they talk about for adults – that's fine. 

We've also arranged it so that it's obligatory to have certain user empowerment tools, so if you don't want to see particular content, you have the right to press a button and say, "That's the kind of content I don't want to see," and that's freedom of choice for the citizen, basically. So, we've tried to work our way through in all of this without being too negative about new innovations and platforms, and also without impinging too much on the smaller platforms.

Now, we have a debate about whether you should regulate those who are risky as opposed to those who are large, and we haven't yet quite resolved that. At the moment, we're regulating the larger platforms, not the risky platforms.

Now, I believe that even smaller platforms can be quite risky. But that's a debate to be had continuously. We'll see how the Bill works out in the future if we don't manage to establish risk as the basis for regulation.

But nevertheless, that's about the last remaining debate that we've got because what we're regulating is the functions of the systems. We're not regulating content so much. It's how the amplification of the algorithm works and so on. 

Really important stuff. Again, we're back to the black box. We mustn't have those black boxes. 

Regulating platforms and content risk assessments

Michael Krigsman: But the algorithms are inextricably linked with the content, so how do you regulate the container without getting into the messy set of judgment calls for the content that is being delivered?

Lord Tim Clement-Jones: It's all about transparency. Indeed, this is the common link with AI regulation as well. It's all about transparency. 

What we ask is for the platforms to show their risk assessments of their platforms, the technology, and the algorithms they're using. And then the regulator basically sees what they say about the impact because that's what the platforms do.

It's not as if – except in extreme cases – the regulator steps in and actually looks inside the algorithm or gets a technological report on the algorithm. By and large, it's a lot about self-reporting. It's a lot about saying, "These are the harms that we discovered." We also want to see researcher access, so there is some independent, third-party validation.

But it's not a very heavy hand, actually. A lot of this is being done by codes of practice. And so, you just expect people to follow the codes. It's mainly good practice. 

QuHarrison Terry: We've got some pretty big companies already starting to follow the codes. Just the other day, Meta announced that they're open-sourcing one of their large language models, Llama. I'm curious about your take on that. Is that what you want to see from the industry, or is that still just another press release?

Lord Tim Clement-Jones: I think that's a slightly different thing, strangely enough, because you could say that a powerful open-source AI is like opening Pandora's Box on a very powerful creature. I feel quite nervous about that.

The others – ChatGPT, GPT-4, and LaMDA, which is the platform on which Bard is based – are not open-source. And so, I have my doubts about being able to utilize an open-source model like Llama.

That to me is quite high risk, and I think what I would like to see is a full impact assessment from Meta about that. I think we'd all be greatly reassured if we could see that.

But I'm quite nervous about the ability to build quite malign systems off the back of an open-source AI as powerful as Llama. Let's face it; the only people who can build these massive generative AI systems are those who have excellent compute power and massive datasets, and so only a limited number of big tech companies can do that. Therefore, being able to piggyback on an open-source system like that is quite something for any tech player.

Responsible AI and technology industry consolidation of power

Michael Krigsman: At the end of the day, what is the problem with these mega players and their AI? Are you concerned that there is an over-consolidation of power and control and, therefore, impact on society? 

Lord Tim Clement-Jones: In terms of ethical regulation, the big tech companies (for quite a long time) have responded. We talked about responsible AI. They have dialogs all the time with politicians like myself, so I think there's a general acceptance that there is a need for ethical behavior. Indeed, now there's a growing acceptance of the need for regulation or at least for standards. 

I think what is really going to be in dispute is whether or not there should be greater control over the competition aspect. By that, I mean opening up competition because, as I said earlier, with the excellent computing power and the huge datasets that are needed currently (in the current form of machine learning) – it may change in the future, but at the moment – it means that you're concentrating power in the hands of Microsoft, Meta, Apple, Google, and so on. We need to make sure that they don't behave in any kind of monopolistic way. 

I'm afraid I'm a science fiction reader, and when I look back at William Gibson and things like Neuromancer, there were competing AI systems but very limited AI systems. I don't want to get to a situation like the one we have in app stores, where there's a very limited choice of stores. I don't want a very limited choice of AI systems simply because there are only four or five companies – the FAANGs – that can actually build these things. 

AI regulation and impact on competition with China

QuHarrison Terry: We've got a very interesting question from a friend of both Mike and myself. Iavor is over at Harvard Business School, and he's asking the question, "Is there a concern that the regulation in the UK and the EU will slow down innovation within the jurisdiction?"

AI development within the U.S. and China is already much further ahead, and regulation in the UK would make it worse, right?

Lord Tim Clement-Jones: Actually not. I think that it's going to create a certain confidence in the ability to do this stuff.

Don't forget that if you add the UK to Europe, you're well over 450 million consumers. If you then establish norms (and you will have the same norms in the States, believe me) – the standards will be virtually identical at the end of the day – that's what developers will want. That's the market that developers will want.

They would be very silly to start trying to develop AI systems which are effectively black boxes because they'll find that when businesses and governments procure AI, they won't procure black-box systems. If you don't get Kitemarked or certificated as a conforming AI system for the consumer, for instance, you won't be able to sell your product to the consumer.

The same is going to be true of Tencent, Alibaba, and all the Chinese companies as well. In fact, of course, the Chinese government have adopted standards or are adopting standards for generative AI systems themselves. 

They may not apply it to their own government systems, but that's not the commercial aspect. The commercial aspect is both in data and in AI. Private Chinese companies, such as they are, will need to conform. And when they export or sell their services or systems abroad, they will have to conform.

Can the UK Online Harms Bill be used as a template for AI regulation?

Michael Krigsman: We have another really interesting question from Twitter. This is from Lisbeth Shaw who says, "How well would using the UK Harms Bill work as a template for regulating the use of AI?"

Lord Tim Clement-Jones: I think it's going to be a very, very different story for AI regulation because it won't involve anything like the difficulties that we've had over freedom of expression because what we've absolutely had to do is try and make sure that we don't close down free speech on our social media platforms. Therefore, we've had to be very, very particular and quite complicated in the way we do it.

I think that AI regulation is going to be much more straightforward than that. It'll be simply, "Look. Before you institute AI systems (or procure them or adopt them or develop them), and depending on what stage you get to – it's rather like the Data Protection Impact Assessment – you've got to carry out that assessment. If it turns out that it's high risk, these are the kinds of things you've got to do in terms of transparency, further work on bias elimination, further work on explanation, and being very clear about liability in these circumstances."

This is going to be a set of requirements. These are the standards you adopt. They could be those from the International Standards Organization. They could be those that NIST has put forward or IEEE and so on. Those are all coming together. 

It's going to be much more straightforward. It's going to be much more like product liability.

We already have product liability laws in the UK, which are pretty clear. We have health and safety laws, which are even clearer, and so does Europe. In fact, ours are based on those that we had when we were in the EU. And so, that's much more like product liability in a much more straightforward kind of a way. 

But we've got to make sure that, in the way that Qu was talking earlier, we can't have developers saying, "No, no. That was an AI system. Nothing to do with me. It's been running around on its own. I don't take liability for that." That's a really important bit of liability establishment that we need.

QuHarrison Terry: The whole concept of AI regulation today is something that I feel is very narrow to where we are today. As a future thinker, one of the biggest skill sets that we all possess is being able to separate today from tomorrow and then from tomorrow to like ten years. 

Let's just say, for the sake of this discussion, everything you're working on goes exactly as planned. What does the vision of tomorrow and even ten years into the future look like if all these things are running as they are being designed?

Lord Tim Clement-Jones: One of the things that's most difficult is to future-proof our regulation. But that's exactly what we need to do both in online safety and in terms of the general AI approach.

I believe that, at the end of the day, what we need to do is have a system where AI is our servant, not our master. And in ten years, that's got to be the case. That's what I'm working toward so that it's a tool that we use in our employment. It helps us do our jobs. It doesn't substitute for us all the time. It's something that improves the quality of our lives in everything we do.

I'm an optimist about AI. I think it's like electricity or the printing press. It's a beneficial technology that we've got to grab with both hands. But we've got to do it very clearly and knowing what the risks are.

QuHarrison Terry: Lord Tim, you've got some big responsibilities. I'm going to be looking toward you in ten years when AI has replaced me, it's taken my job, and I'm trying to figure out what I'm going to do in society. 

Michael Krigsman: [Laughter] 

Lord Tim Clement-Jones: [Laughter]

QuHarrison Terry: But because you're here, hopefully, that won't be a scenario that I'm living in. 

Lord Tim Clement-Jones: I don't think there's any chance of you being substituted, Qu. 

QuHarrison Terry: I love that. I'm going to take that and put that on my resume, if you don't mind. 

Lord Tim Clement-Jones: Good seeing you.

QuHarrison Terry: Likewise. Thank you, all. It's been a pleasure today. I do have to run, unfortunately. But this is a great conversation.

Michael Krigsman: Thanks, Qu. Take care.

Lord Tim Clement-Jones: It was nice to see you.

Michael Krigsman: Bye-bye. 

How to balance risk against innovation 

Michael Krigsman: Tim, when it comes to this kind of regulation, what are the biggest challenges in balancing the competing interests of, for example, a technological project versus innovation (which you alluded to earlier)? 

Lord Tim Clement-Jones: I think it's really an approach to risk, and I think that there's going to be inevitably a difference of approach in different parts of the world as to what is considered high risk. I think that's going to be where some of the debate and the argument is going to be.

What do I need to do if you determine that something is high risk, because I might have a different opinion about that? I think we've got to get some view about that.

Now, it's quite interesting. I think, in the States, you have very strong views about civil liberties, about surveillance, and the right to privacy, and so on. It's more about data in our case because your social media companies, in a sense, make their money from data. It's that kind of approach that's been traditional. 

But we push back against that quite heavily with GDPR. So, there are going to be cultural differences on this, and I think that's going to be the big challenge. 

Applying responsible AI and ethics to autonomous weapon systems

Lord Tim Clement-Jones: Then, of course, there's weaponry, which we haven't touched on yet. That is where there's going to be an even bigger debate because, with lethal autonomous weapons, for a start, you have to define what an autonomous weapon is, and you have to decide whether or not limiting their use will put you at a competitive disadvantage internationally. There's also the whole business of whether a weapon is offensive or defensive and whether it's appropriate to use it defensively rather than offensively.

Then again, if you use an autonomous weapon in relation to nuclear weapons, that's even more high risk than you can possibly imagine. So, there are a lot of questions there, which I think we need to resolve. But we do need to start resolving them.

Michael Krigsman: Honestly, how practical is this, even when you talk about lethal autonomous weapons and the ability to get countries to agree on standards? Obviously, that's far more complex than regulating Google, Facebook, or any other company.

Lord Tim Clement-Jones: Yes, it is. But of course, we already have international humanitarian law, which applies across the battlefield. And so, what you need to do is make sure that international humanitarian law is fit for purpose and that it does cover those kinds of autonomous weapons.

There are already things like drones on the battlefield that may have quite a high degree of autonomy, so we've got to make sure that those are covered and that we know when it's appropriate to have a human in the loop. It may be that certain weapons must have a human in the loop. Otherwise, there is a breach of international humanitarian law.

We've got some of the tools for this. We're not in Wild West country here. We can move towards agreement on that.

Impact of Brexit on attracting AI companies and talent

Michael Krigsman: We have an interesting question from Twitter. This is from Chris Peterson who says, "Has Brexit complicated the regulatory picture or hurt the UK's ability to attract people and companies in the AI space?"

Lord Tim Clement-Jones: I think, in the medium term, that is already happening, basically. It is much more difficult now to attract and move people around, even the high skilled ones. 

We do have exceptions for highly skilled technologists, but the visa charges are eye-watering compared to many other countries. And of course, between members of the European Union, you can travel visa-free. You can work anywhere within the 27. 

It's a real problem for us and, of course, I would much prefer us to be regulating on the basis of 28 rather than 27. But that's history now, I'm afraid, and what we have to do is try and come to an agreement with the EU about making sure that we do have quite a high degree of convergence in our regulatory systems. 

It would be crazy for us as a country (of no more than 70 million) to start trying to establish completely different rules from the EU, who now have begun to really establish a suite of rules relating to AI, social media, digital markets, and so on. We mustn't diverge too much. 

Otherwise, why should American companies, for instance, come and establish here when, if they went to Paris or Berlin, they'd have a bigger market and could attract all the postgraduates they would need from universities across the EU far more easily? We've got to make sure these barriers to skills and jobs are much lower than they are at the moment.

Michael Krigsman: I know that you said earlier that your focus is not on the content so much as on things like the algorithms. 

Lord Tim Clement-Jones: Yes.

Tackling misinformation and disinformation

Michael Krigsman: But when we start to talk about misinformation, disinformation and, even in the case of large language models, hallucinations – where there's no ill intent but the machine simply gives you an incorrect answer with absolute 100% confidence – how does all of that factor into the regulatory framework?

Lord Tim Clement-Jones: You can't, is the answer. There are two things you can do. 

You can be very advertent to how the system amplifies information (misinformation, disinformation) and, therefore, what you've got to try and do is make sure that the social media platforms are very conscious of that and that they have provenance-checking systems, fact-checking systems, and so on.

I wouldn't make that compulsory, but I think it should be part of your terms of service that, where possible, you try and check the provenance of the sources if you're a social media platform putting out, say, public health information that might be misleading. You've got that. 

The other is you've got to make sure your population is equipped, that they're sufficiently media and digitally literate to have at least some idea of what is being fed to them.

I don't think there's any easy solution to disinformation and misinformation. It's something we're going to have to live with. 

Okay, we might recognize when there's a bot pushing content out, or when it's what they call an inauthentic account. That's a different story. But the ordinary pushing out of misleading information, that is something where, as citizens, we have to be a bit more sophisticated than we are now. 

But let's face it. There are still many people in your country who believe that the election was stolen. And, I'm afraid, how do you treat that? Is that disinformation, misinformation? Many people believe it to be true.

Michael Krigsman: We have another interesting question on Twitter from Chris Peterson who comes back on the subject of an educated populace. I think this also gets directly to the point you were just making.

Lord Tim Clement-Jones: Yes.

Advice for policymakers - Convene and collaborate

Michael Krigsman: He says this: "Listening to Lord Clement-Jones makes me wish pretty much anyone in our "upper house" the Senate, had your knowledge, understanding, and perspective in the U.S."

I guess, as a rhetorical question, how can we expect the population to be educated when, frankly, our own politicians seem to know very little about the subjects they're discussing?

Lord Tim Clement-Jones: I am very heartened by the fact that someone like Chuck Schumer, the Majority Leader in the Senate, is very interested in this subject, and he's got a working group. I think senior Democrats in Congress want to see legislation along these lines.

And it could easily be bipartisan because, quite frankly, AI, which is black box, affects all the population. And I believe there's great room for a bipartisan agreement on that kind of thing.

Of course, from the White House itself, we've seen the Blueprint for an AI Bill of Rights. And even today, I believe, the White House has got all the big tech CEOs in to talk about testing and to get their commitment to particular testing of AI systems before they're put into use.

There's quite a lot happening there. We may not have the answers, but in Brussels, London, and D.C., I think there's a growing understanding of what we need to do and what the issues are.

Michael Krigsman: In that case, what advice do you have for policymakers and for politicians who want to address these issues in a substantive way?

Lord Tim Clement-Jones: Our Prime Minister, having produced proposals earlier this year that were very weak in the regulatory area (in what we call a white paper), has now suggested we convene an international AI Safety Conference in London this November or December, a global conference. Now, I very, very much support that.

I would say, "Look. Come together. Let's involve the Chinese government as well. Let's bring in Chinese technologists, and let's bring EU leaders together. Let's bring congressional leaders together as well. Come to London, and let's try and converge on a particular set of solutions."

Maybe we won't be able to agree on how much we need to regulate across the board. But what we can agree on is what kind of standards – safety, freedom from bias, transparency, accountability, and so on – we need to put into effect.

I think that's really where we should be going.

Advice for business and technology leaders on responsible AI

Michael Krigsman: What about advice for businesspeople and technologists who are creating these technologies?

Lord Tim Clement-Jones: I think what we do is we keep the dialog open. I talk a lot to technologists, and I know that this idea that some governments have that innovation is the enemy of regulation and vice versa is wrong. 

I mean most technologists I talk to are just keen for us to get on with it, to get some guardrails around this so that they can get on with understanding what they need to do to conform to regulation and, therefore, their job as developers and adopters is made much easier. I think they'd breathe a sigh of relief, especially if we got some sort of international agreement on standards.

Michael Krigsman: As we finish up, Tim, any final thoughts that you would like to share?

Lord Tim Clement-Jones: We were all a bit surprised by the speed at which ChatGPT, GPT-4, and then Bard all came thundering down the track. Now we've had Llama in the last couple of days. 

The speed of development is unbelievable. But at the same time, the narrative has been very, very confused.

We've had a thousand technologists on one side telling us that we're doomed (more or less) and the other side saying, "No, no. It's great" – another thousand, just the other day, saying it's all great. 

I think people might find it quite difficult to understand what the agenda is. But what it has done is made politicians think very, very carefully about what we need to do. 

It's heightened consciousness, and it's made us think about the future, and it's made us think that things like artificial general intelligence are coming along much quicker than we ever thought. That is only to the good.

It's really made sure that we've got to speed up. We've got to get going and make sure that we do the right thing for our citizens.

Michael Krigsman: You sound very optimistic.

Lord Tim Clement-Jones: I'm always optimistic until I fail [laughter] because I believe that if you don't throw a lot of energy into this, you will fail.

A lot of people say, "No, we don't need this because it'll stifle innovation." "Oh, we can't have this because the Chinese will get ahead of us," and so on and so forth.

I think there are a lot of spurious arguments out there, and I don't want them to prevail, so I spend quite a lot of energy trying to make sure that we adopt practical solutions. And there are a lot of practical solutions out there. Strangely enough, we've just demonstrated with our Online Safety Bill that it is possible (even in a really complicated world) to do stuff, to regulate, and also, to some extent, to future-proof as well.

Michael Krigsman: Well, congratulations on moving your Online Safety Bill forward. With that, we're out of time. I want to say a huge thank you to Lord Tim Clement-Jones of the House of Lords for taking time to be with us today. Tim, thank you so much.

Lord Tim Clement-Jones: A huge pleasure, Michael. 

Michael Krigsman: I hope you'll come back another time.

Lord Tim Clement-Jones: Definitely. [Laughter] You won't keep me away.

Michael Krigsman: And a huge thank you to our great audience. You guys ask the best questions. Very smart, intelligent, sophisticated audience. We love that.

Now, before you go, please subscribe to our YouTube channel. Hit the subscribe button at the bottom of our website so that we can send you our newsletter, and you will be notified of upcoming shows. Check out CXOTalk.com. Have a great day, everybody, and we'll see you soon. Bye-bye.

Published Date: Jul 21, 2023

Author: Michael Krigsman

Episode ID: 797