AI, Digital Assets, and Public Trust: Inside the House of Lords

UK House of Lords members reveal inside perspectives on AI regulation, digital asset frameworks, and building public trust in emerging technologies. Join the live discussion with Lords Holmes and Clement-Jones on balancing innovation with ethical responsibility.

Apr 04, 2025

In CXOTalk episode 875, we feature a compelling discussion with Lord Chris Holmes and Lord Tim Clement-Jones, distinguished UK House of Lords members. They offer a unique perspective on policymakers' critical challenges in overseeing rapidly advancing technologies. The conversation centers on striking a delicate balance between fostering innovation in artificial intelligence, digital assets, and open data while establishing effective regulation and maintaining public trust. 

Lord Holmes and Lord Clement-Jones examine the complexities of creating "right-sized" governance frameworks for AI, emphasizing principles such as transparency, accountability, and international interoperability. The discussion explores practical implications of regulation on creators' intellectual property rights, the need for human oversight in automated decision-making processes, and the importance of global collaboration amid differing national approaches and trade tensions.

This dialogue offers valuable insights for leaders seeking to understand the policy environment that will shape the future of technology.

Episode Highlights

Implement Right-Sized AI Governance

  • Adopt a principles-based approach to AI governance focused on desired outcomes and understanding the inputs used. This method fosters clarity and consistency, enhances stakeholder confidence, and supports innovation.
  • Establish governance frameworks that are horizontally focused across sectors to ensure comprehensive coverage. Maintain agility within the framework to adapt as technology evolves, avoiding overly prescriptive rules that stifle progress.

Balance AI Regulation with Innovation

  • View well-designed regulation not as an obstacle, but as an enabler of safer and more trustworthy innovation. Clear and consistent rules provide the certainty businesses and developers need to invest and build responsibly.
  • Apply a risk-based approach to regulation, ensuring measures are proportionate to the potential harms of specific AI applications. This targeted strategy supports innovation by avoiding overly broad restrictions on lower-risk uses.
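The risk-based, proportionate approach described above can be sketched in code. The tier names, domains, and obligations below are purely illustrative assumptions for the sake of the sketch (loosely inspired by risk-category thinking such as the EU AI Act's), not the content of any actual regulation:

```python
# Illustrative sketch of risk-based, proportionate AI oversight.
# Tiers, domains, and obligations are hypothetical examples, not real regulatory text.

RISK_TIERS = {
    "minimal": [],
    "limited": ["transparency notice"],
    "high": ["transparency notice", "impact assessment", "human oversight"],
}

# Hypothetical mapping of application domains to risk tiers.
DOMAIN_RISK = {
    "spam filtering": "minimal",
    "chatbot": "limited",
    "recruitment": "high",
    "credit scoring": "high",
}

def obligations_for(domain: str) -> list[str]:
    """Return the proportionate obligations for an AI application domain."""
    # Default to the cautious tier for unknown or unclassified uses.
    tier = DOMAIN_RISK.get(domain, "high")
    return RISK_TIERS[tier]
```

Proportionality is visible in the mapping: low-risk uses carry few or no obligations, while high-stakes domains attract the full set, so lighter uses are not burdened by rules designed for heavier ones.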

Foster International Collaboration on AI Standards

  • Advocate for and adopt internationally recognized technical and ethical standards for AI development and deployment. Common standards reduce complexity for businesses operating across borders and facilitate global market access.
  • Engage with global standards bodies and participate in cross-jurisdictional dialogues to promote regulatory interoperability. This collaboration helps prevent fragmented regulatory environments that hinder innovation and trade.

Protect Intellectual Property in AI Development

  • Scrutinize the data sources used for training AI models to ensure compliance with copyright and intellectual property laws. Obtain proper consent and arrange fair remuneration for creators whose work is incorporated into training datasets.
  • Explore and implement technical solutions like metadata, watermarking, or fingerprinting to protect creative assets used in or generated by AI. Upholding IP rights maintains the value of creative work and encourages ongoing content creation.
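The fingerprinting idea in the second bullet can be illustrated with a minimal sketch: hashing a work's bytes to produce a stable fingerprint, which a dataset curator could check against a rights registry before including the work in a training corpus. All names and the record format here are hypothetical, and a plain hash only detects exact copies; production systems would use perceptual hashing or watermarking to survive transformations.

```python
import hashlib
import json

def fingerprint(content: bytes) -> str:
    """Produce a stable content fingerprint (SHA-256 hex digest)."""
    return hashlib.sha256(content).hexdigest()

def provenance_record(content: bytes, creator: str, licensed_for_training: bool) -> str:
    """Attach rights metadata to a fingerprint, serialized as JSON."""
    return json.dumps({
        "fingerprint": fingerprint(content),
        "creator": creator,
        "licensed_for_training": licensed_for_training,
    })

def is_registered(content: bytes, registry: set[str]) -> bool:
    """Check an incoming asset against a registry of known fingerprints."""
    return fingerprint(content) in registry
```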

Manage Risks in Automated Decision-Making

  • Implement robust processes for automated decision-making systems, including impact assessments and transparency about their use. Ensure customers and employees understand when AI influences decisions affecting them and provide clear avenues for appeal or redress.
  • Maintain meaningful human oversight within automated decision processes, especially for high-stakes applications like recruitment, finance, or public services. Label decisions made by AI systems to foster accountability and user awareness.
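The transparency and oversight steps above can be sketched as a minimal decision record; the field names and the set of high-stakes categories are illustrative assumptions, not any prescribed schema:

```python
from dataclasses import dataclass

# Assumed high-stakes categories for the sake of the sketch.
HIGH_STAKES = {"recruitment", "finance", "public services"}

@dataclass
class AutomatedDecision:
    subject: str                # person the decision affects
    domain: str                 # application area, e.g. "recruitment"
    outcome: str                # the decision reached
    explanation: str            # plain-language reason given to the subject
    ai_involved: bool = True    # label AI involvement for accountability

    def needs_human_review(self) -> bool:
        """High-stakes automated decisions require meaningful human oversight."""
        return self.ai_involved and self.domain in HIGH_STAKES

    def notice(self) -> str:
        """Transparency notice shown to the person affected, with a route to appeal."""
        label = "automated (AI-assisted)" if self.ai_involved else "manual"
        return f"This {label} decision can be appealed. Reason: {self.explanation}"
```

Recording AI involvement and the reason at decision time is what makes the later steps (labelling, explanation, appeal) possible at all.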

Key Takeaways

Adopt Smart AI Governance to Build Confidence

Effective AI governance provides the clarity and certainty necessary for innovation. Implement principles-based policies focused on outcomes, risk assessment, and transparency to foster trust among customers, investors, and the public. This approach supports responsible development and encourages wider adoption of AI technologies.

Respect Data Rights and Intellectual Property in AI

Ensure AI training datasets comply with copyright law and data privacy regulations, securing consent and offering remuneration for creators' works where applicable. Establish transparent processes for automated decision-making systems, including human oversight and clear paths for individuals to seek explanations or corrections. Prioritizing ethical data handling and IP protection builds crucial stakeholder trust.

Champion Global AI Standards for Broader Reach

Promote adopting consistent international standards for AI safety, ethics, and interoperability. Aligning with global benchmarks simplifies cross-border operations and reduces regulatory complexity for businesses developing or deploying AI solutions. Active engagement in standards development helps shape a favorable environment for global innovation and market access.

Episode Participants

Lord Chris Holmes has been a Member of the House of Lords since 2013. His core policy focus is on digital technology for the public good, with a particular interest in technologies such as AI and Blockchain and areas of application such as FinTech, GovTech, RegTech, Assistive Tech, and EdTech. Lord Holmes writes regularly on these and related topics for Computer Weekly and Finextra. Other key policy areas are social mobility, employment, education, skills, culture, media, and sports.

Lord Tim Clement-Jones is the Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology. He is a former Chair of the House of Lords Select Committee on Artificial Intelligence, which reported in 2018 with "AI in the UK: Ready, Willing and Able?" and published a follow-up report in 2020, "AI in the UK: No Room for Complacency". He co-founded and has co-chaired the All-Party Parliamentary Group on Artificial Intelligence since 2017. Tim is Chair of the Board of Trust Alliance Group (formerly Ombudsman Services), an independent not-for-profit group that provides dispute resolution for communications and energy utilities.

Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator known for his deep expertise in business transformation, innovation, and leadership. He has presented at industry events worldwide and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.

Transcript

Michael Krigsman: Welcome to CXOTalk episode 875. Today, we're honored to host two distinguished members of the UK House of Lords, Lord Chris Holmes and Lord Tim Clement-Jones. We'll explore how policymakers at the highest level balance innovation, regulation, and public trust in technology, specifically related to AI, digital assets, and open data.

Let's get into it. Gentlemen, welcome to CXOTalk. It's great to see you both.

Lord Tim Clement-Jones: Hi, Michael. Great to be here.

Lord Chris Holmes: Great to be here.

The Need for AI Governance and Regulation

Michael Krigsman: Lord Holmes, tell us, why do we care about this topic of AI regulation and governance in this way, and especially at this time?

Lord Chris Holmes: They're incredibly powerful technologies, but they're in our human hands. If we're to optimize their potential while being cognizant of their challenges and risks, it seems entirely logical, as we have done through centuries with other advancements and technologies, that we consider the right regulatory framework.

We need to balance all of the competing needs, and we put in what I would describe as right-sized regulation. Right-sized regulation is good for innovation, good for the investor, good for the citizen, good for the creative, and good for the consumer.

Balancing Regulation with Innovation

Michael Krigsman: Tim, why is all of this so beneficial, and how do we avoid interfering with or presenting obstacles to innovation, as Chris just said?

Lord Tim Clement-Jones: Both Chris and I believe that regulation can help innovation. If you get it right in terms of clarity, consistency, and certainty for business, for consumers, and for developers, that will actually lead to a much better and safer form of innovation at the end of the day. That's what we want. We want to see adoption. We want to see the rollout of new technologies, but they've got to be for our benefit.

As you and I have discussed over the years, Michael, this technology has been advancing at an incredibly fast rate. Back in November '22, for instance, suddenly ChatGPT burst on the world, and people have been struggling to catch up ever since. Now, of course, this year, people are talking about agentic AI, which is much more autonomous. You've got this mixture of large language models and autonomy, and you've absolutely got to have some kind of guardrails around that.

Principles for Effective AI Regulation

Michael Krigsman: Chris, when we talk about the guardrails and the necessity for protection, how do you implement that type of governance but at the same time ensure that there are minimal obstacles to innovation, going back to your initial comment?

Lord Chris Holmes: We know what we need to know to make a success of this. All of history suggests to me that we know how to do right-sized regulation. We all know bad regulation, but that's bad regulation. That doesn't mean that as a consequence, regulation is bad.

I think if we take a principles-based, outcomes-focused, and inputs-understood approach, we give ourselves the best chance of making a success of this. To those principles: trust and transparency; inclusion and innovation; interoperability and an international perspective; accountability; assurance; and accessibility. Good principles, I would suggest, for AI regulation, good principles for all regulation.

In the UK, the previous government, in its white paper, set out principles, but set them out on a voluntary basis. Well, they're good principles. If they're good principles, it seems logical that one would want them on a statutory basis to give that opportunity. Whichever part of society, whichever part of the public or private sector, whichever part of the economy you find yourself in, you're likely to experience the same approach to AI, the same guardrails around AI.

Because as Tim said, whether you're an individual, whether you're a consumer, whether you're a business, whether you're an innovator, what ultimately you want is certainty, clarity, consistency. From those three Cs comes what you really need as a human to enable action: confidence.

Achieving Regulatory Clarity and Agility

Michael Krigsman: How do you get there?

Lord Chris Holmes: Well, I think by putting in place a regulatory architecture which is rooted in those principles. Crucially, it's horizontally focused, so it's cross-sector. It goes right across the economy. By doing that, you're able to identify areas where there is minimal or indeed no regulatory cover or regulator in the UK, recruitment being an obvious example to bring to bear on that.

If you have that framework, it has the principles at its core, but it has the agility and the dynamism to develop. Particularly in our common law jurisdiction in the United Kingdom, it can develop. It can be agile.

For a comparator, if one looks at other, more prescriptive jurisdictions, there's no suggestion necessarily that prescription is wrong in itself, but it is necessarily different. Prescription necessarily tends towards trying to capture every last element in the legislation, which always carries the danger of trapping that legislation at a point in time, or of ending up over-controlling through that over-prescription.

I think by having agility and flexibility, it's entirely possible. It's what all governments, it's what all regulators should be seeking to hold at the same time: the needs of the citizen, the needs of the innovator, the needs of the investor. It makes it more complex, but that is exactly what needs to happen if you're going to have this right-sized regulatory approach.

Risk-Based Approach and International Standards

Lord Tim Clement-Jones: I think any form of regulation in the world of AI does have to be risk-based. That gets you to a kind of proportionate approach to regulation.

The other point, and Chris used the word interoperability earlier, is the adoption of international standards. I think that, in the UK's context particularly, but also worldwide, if we're going to see AI applications thrive across the world and not just sit in one jurisdiction or another, that is really the big challenge that we've got.

Whatever form of regulation in any jurisdiction, because the US is going to be different from Europe, is going to be different from the UK, what we need to do is make sure that developers and businesses and adopters and so on aren't trammelled by regulation too much. They really can understand what standards they're meant to adhere to, whether they're of safety or ethics or however you might describe it. Those are the standards which are currently in the offing and are being developed. I think that's one of the most positive aspects at the moment.

The Importance of International Collaboration

Lord Chris Holmes: I think that's right, Tim. Building on that standards point, that sense of bringing the world together and having that international collaborative relational approach has to be right any time, but particularly at the moment, it would seem extraordinarily pressing.

To go to the US, people would be mistaken, but it's easy to be mistaken, to think in the AI space there's no legislation, there's no regulation. But when you go into more detail and you see what's happening at a state level, different regulatory instruments being brought to bear, it's far more interesting and far more complex than that. It demonstrates, again, a need to bring some interoperable approach to it. There'll need to be, at some stage, some federal activity in this space, for sure.

For that, I'm really keen, and it's why I brought it into all of the legislation I've done in this space: the sense of really looking at that international piece. The standards are a key part of it. Also, we could do nothing short of reinvigorating many of those international organizations set up in the wake of the horrors of the Second World War. If we could get a form of engagement and approach from all of those international organizations, that could be extraordinarily powerful for them and, indeed, for the entire planet.

Global Trade Tensions and the Digital Economy

Michael Krigsman: I just want to tell everybody that right now you can ask your questions on Twitter using the hashtag CXOTalk. If you're watching on LinkedIn, just pop your questions into the LinkedIn chat. I urge you to ask questions because when else can you ask two members of the House of Lords pretty much whatever you want on these topics? Ask your questions.

Now, you've both been talking about international collaboration. Seems like a very apt discussion today with what's going on. What are your thoughts in this environment where we have tariffs, where the world economy is being upended because collaboration doesn't seem to be happening? What's the impact there on AI and the development of these tools that we all want to develop properly?

Lord Tim Clement-Jones: People like us who work in the digital world find it very ironic that all the debate coming from the States is about the world of physical goods, the trade in physical goods, the deficit in physical goods from the States, in terms of the States feeling it has to impose tariffs on those imports. Whereas actually, the United States now is the world's most powerful country in terms of digital services.

The big tech companies, Microsoft, Google, Amazon, they are immensely powerful. I think that is the countervailing power in the economy of the US which is not really being taken account of. It was what the digital services tax was going to try to redress. But of course, now that is up for grabs. There could well be some kind of trade-off from what we see in the media about what our government's up to, and it may be what Europe may have to trade off to some extent.

But it is ironic because you should really take the two things together and say, "Well, look, if we're going to come to agreement, we have to take account of the fact that the US big tech companies are hugely dominant."

Lord Chris Holmes: It seems extraordinarily analog in a digital world, or extraordinarily automotive in an algorithmic world. The automotive industry is obviously incredibly important and significant worldwide, but this looks at trade through a lens that doesn't take in services, the digital economy if you will.

Also, I was under the impression that quite a number of centuries ago we'd resolved this issue of international trade. I very much remember reading Adam Smith. It only makes sense to have international trade and all of the economic and indeed social benefits that flow from it. Of course, there are questions around dumping. There are questions around other forms of protectionism. But as a cardinal principle, we can largely win as a connected, relating human society if we try to establish and enable that route to free trade, as opposed to going alternative routes, where we're only beginning to see the more than significant consequences.

Adapting to a Disrupted Global Landscape

Michael Krigsman: Tim, it doesn't seem to me like right at the moment, the idea of a connected world society is among the primary goals, let's just say at least in the US.

Lord Tim Clement-Jones: No, that's very true. In fact, it's almost a disrupted society, and it seems that there is some goal to actually disrupt. We're told that this is going to lead to, and certainly US citizens are being told this is going to lead to, a powerful, even more powerful US economy, higher living standards and so on.

But I've been brought up in a world, and Chris has said so himself just now, we've lived in a world where free trade was seen as something to be desired. The more we could encourage and enable that, the better. Indeed, the whole digital services area has been built on that. The internet has been built on the essence of free trade, effectively, only mitigated in terms of online safety. A whole area of digital markets competition and antitrust policy has been predicated on that: you only interfere with digital services where there's clearly an abuse of a dominant position, for instance.

In a sense, are we having to rewire our brains in order to adjust to all this? I don't think it's going to change our attitude to thinking that it's much more desirable to have an open economy, both digital and analog, but we may have to adjust to a new reality.

Audience Question: Addressing Bias in Regulatory Guardrails

Michael Krigsman: Now would be an excellent time to subscribe to the CXOTalk newsletter. Just go to cxotalk.com and subscribe. We have a number of questions that are coming in. Why don't we jump to some audience questions right now?

This first question is from Arsalan Khan on Twitter. Going back to the discussion earlier about the guardrails and risks, he says, "Whoever makes the guardrails becomes the gatekeeper. How do you ensure that there are no unintended biases that creep in as you're undertaking regulatory efforts?"

Lord Chris Holmes: That's the same truth when one is regulating or legislating for anything, and it's the right point to raise. The reality is, if you understand and are always conscious of the values, the social, the economic context in which you're seeking to bring about regulations and guardrails, and crucially, you have meaningful, sustained public engagement, that gives you the best opportunity. Not just bringing to bear that right-size regulation, but right-size regulation which is really rooted in that social and economic context and thus has the ability to thrive as it develops over time.

Lord Tim Clement-Jones: Legislators and governments and so on, that's their responsibility to set the rules for the regulators. It's not as if these guys are sort of free-standing. They're there to put into practice the principles basically, and there should be regular oversight of them.

One of our complaints in the UK is that we're not active enough as legislators in overseeing how our regulators are doing. As a result, if you're not careful, you get accusations of bureaucracy, red tape, regulators blocking and so on. Whereas actually, for the most part, whether it's protecting consumers or farmers or those who are subject to the weight of technology if you like, it's meant to be beneficial and it's perfectly possible to make sure that it delivers.

Audience Question: Protecting Intellectual Property in the Age of AI

Michael Krigsman: This is from Anthony Scriffignano on Twitter. He's been a guest on CXOTalk a number of times, and I believe, Tim, you know Anthony? Yes. Anthony says this: "Please share thoughts on the ongoing difficulty of protecting intellectual property across borders that is in the category of innovations and artificial intelligence." As an inventor and innovator, it seems antithetical to him that it is so difficult to protect.

Lord Tim Clement-Jones: We could talk about AI particularly in intellectual property till the cows come home, especially given the difficulty of intellectual property copyright, for instance, used for training large language models being used in different jurisdictions but being subject to different forms of copyright protection.

The States, actually, it's a very, very interesting case where you've got the need to register copyright, and yet there is the fair use exemption. But that is being tested. Actually, in respect of people like yourself, creators, I'm actually quite optimistic that in different jurisdictions, whether it's Northern California or Delaware or wherever it may be, judges are beginning to come to the conclusion that fair use does not cover the use of copyright content for training purposes.

In the UK, we have our own debate taking place where our government is proposing to align itself with the EU. That is going to be causing difficulties because their proposal is to have an exception for the training of large language models that requires creators to opt out and say, "Sorry, I don't want you to use my material," when it's going to be incredibly difficult both technologically and in practical terms for creators to opt out.

It is a really live issue and the big question is, should those large language developers have a free pass when creative rights are still very, very important, whether it's authors or musicians or visual artists or filmmakers?

AI Training Data: Consent, Respect, and Remuneration for Creators

Lord Chris Holmes: It seems extraordinary that we're in an era now, just by dint of having a new technology, that necessitates tearing up centuries of well-understood IP and copyright jurisprudence. Absolutely extraordinary.

As I said at the outset: principles-based, outcomes-focused, inputs-understood. To those inputs: inputs understood, respected, remunerated where appropriate, and consented. We're a long way from that in any jurisdiction. I think the Supreme Court decision was extraordinary, really, to come to that conclusion. As Tim says, there's more hope in some of the state judicial decisions and others coming down the track.

In the UK, it's just as live an issue. I published a report just at the beginning of March to try and bring more focus onto a lot of these areas about where AI is currently impacting people's lives and thus the need for cross-sector, cross-cutting AI legislation. I drew out eight realities, eight billion reasons to regulate. Those realities were people at the sharp end of this. One of those archetypes was indeed the creative who finds herself or himself with their work being taken with no consent, no respect, and no remuneration.

It's why in the UK a bunch of our greatest music artists released an album, and it was an album of silence, to make that very point. This is where we're heading. If we accept, which I don't believe for one instant we should, and I don't believe for one instant we need to, that this work can just be taken, then that's what we'll be left with: silence, where otherwise we'd have those sweet sounds that musicians bring us, and blank canvases for want of artists bringing this work to bear.

Then the additional kicker on this is where you have AI-generated content competing with those artists as well. We need to be conscious of that double impact on our creative community who we should be rightly standing up for their IP, their copyright, their rights.

Audience Question: De-Risking AI-Led Decision-Making (ADM)

Michael Krigsman: Tim, Oliver P. on LinkedIn raises a similar issue. He says, "There is not just a need for guardrails for AI. Regulation is definitely required. However, the impact of regulation hits many other areas." He gives the example of AI image manipulation, and he says, "AI in financial services is another key area." Here's the crux of his question, Tim. He asks, "How will regulation help de-risk AI-led decision-making for customers?" He's saying customers, but I think that's really for all of us in general.

Lord Tim Clement-Jones: Chris and I are both fans of putting forward private member's bills. Chris has had a terrific private member's bill on AI regulation in general. I've taken a kind of narrower approach which is about taking decisions by automated algorithm in the public sector. That is also a very big issue in the private sector now.

The increasing use of AI models in the private sector to take automated decisions is going to be a bigger and bigger issue as time goes on. We have to have a risk assessment, an impact assessment to start with. We have to have transparency about the use of these models. We have to have the ability of the citizen or the consumer to be able to make a complaint, to understand what the decision has been made about them, then make a complaint and get redress.

There's a whole series of steps which most governments haven't yet really thought about. But we can't have a situation where decisions are being made by machine, where the citizen, the human, doesn't really have any agency at all.

I think many of us are worried that governments in their enthusiasm to get rid of red tape, to be more productive and so on and so forth, are increasingly going to adopt these models. We're not going to have any ability or any insight into what's happening. Of course, the same must be true of the private sector. They've got the same drive towards productivity themselves. People now talk about workplace impact assessments to make sure that AI is genuinely going to be used for the benefit of employees in terms of augmenting their working experience rather than simply throwing them out of a job.

Human Oversight in Automated Decisions

Michael Krigsman: Chris, thoughts on this notion of AI as the decision-making overlord?

Lord Chris Holmes: Very much so. As Tim said, it's why his bill on the public sector use of ADM [Automated Decision-Making] is so important, because this covers the whole of society and the economy.

Back to the principles, there should always be human in the loop. Potentially you can move to a position of human over the loop. There should always be the right to a public explanation of a decision. There must always be the right to know that you're subject to an automated decision.

We would say, for example, you apply for a bank loan and you get turned down, and it's turned down by an automated decision, and you don't even know that AI was in the mix. That's bad enough. But imagine you're in the state context, where the state, understandably, in exchange for our protection, is granted extraordinary powers, unique powers in any open society. Imagine you have your benefits suspended on the back of an automated decision and you don't even know that AI was in the mix.

This is happening right now. You don't have to imagine it. It's happening right now. It takes us right back, Michael, to your initial question really in the sense of, well, why do we need legislation? Why do we need regulation? Because all of these instances, be you a benefit claimant, a job seeker, the creatives, a teacher in education, transplant patient, a voter, this is not hypothetical. This is not something for the future. We're not having to contemplate legislating or regulating for a thing that is coming down the track. This is happening to citizens. This is happening to individuals right now.

Audience Question: UK's Regulatory Path vs. EU and US Approaches

Michael Krigsman: This next question is from Greg Walters and he says, "Do you feel the UK is better off building a shared AI alliance with the US based on innovation, agile regulation, and safety alignment rather than trying to conform with the EU's slower, more centralized regulatory framework?" EU versus US in terms of international alliances. Who wants to grab that one?

Lord Tim Clement-Jones: I don't think the EU got it completely right on their AI Act. It's too disjointed, really, in terms of how it operates, because it's got almost a special breakout for large language models, whereas actually the risk framework should apply to every form of AI, and the risk assessment should apply accordingly.

But I think it's slightly a false dichotomy to say does the UK need to follow the EU or the US? Because currently, as Chris earlier said, the US has a whole series of states which are beginning to line up different forms of regulation. California, Colorado, the list goes on of states that are beginning, first of all, to deal with the whole issue of transparency, but also the question of what AI does and the risk and those kinds of guardrails that are appropriate.

There have been an awful lot of bills trying to get through Congress over the years as well, without success, but there's clearly going to have to be a point where, in order to get it right across the states instead of having it vary between different states, you're going to have to have some kind of federal legislation. We don't know what that's going to be.

I think the UK is rightly following its own path, but we do have to have guardrails. They will have to be risk-based, and we do have to adopt pretty much the same standards that NIST is advocating and that other standard setters like the International Organization for Standardization, the IEEE, and the EU are all espousing. Indeed, the OECD is trying to put it all together and get convergence between the standards.

I actually think that if only we thought rather clearly and maybe adopted what Chris has put forward in his bill, we'd actually find ourselves in a much better place. I think this argument about whether or not we're going down a US or an EU track is not going to be particularly helpful.

Lord Chris Holmes: No, I'd agree with that. As Tim identifies, there isn't a US track, and there isn't really, though it would appear so, an actual EU track either when it comes to it.

In the UK, what do we have? We have an opportunity because of our common law tradition to reach out to our friends in the EU and the US, and right round the common law jurisdictions of the world, to put in place principles-based legislation which can develop over time. It can deliver for innovator and investor, consumer and creator, and ultimately for citizen.

It's why I brought my AI regulation bill to bear in November 2023. It was in good shape. I got it through all stages of the House of Lords. It was set to go in the House of Commons until somebody thought it was a good idea to call a general election. Well, we all know how that ended up. But I brought it back just over a month ago.

I still believe that we have, if not a unique opportunity in the UK, an important opportunity to play amongst friends and bring to bear cross-sector legislation which can give a coherence, a clarity, and a consistency of approach. That's what you want if you're a citizen or if you're an investor. It's totally possible for us to do that. It's unfortunate that right now the UK government are, shall we say, increasingly reluctant to take such an approach.

Audience Question: The Principle of Proportionality in Regulation

Michael Krigsman: Let's jump to another question. You can see I really prioritize the questions from the audience over my own. This is from Funke Abimbola. Chris, I'll direct this one to you to start. She says, "A risk-based approach is the best way to go, factoring in proportionality. We have numerous examples of such regulations in place and should adopt a similar approach in the UK." I find this notion of proportionality to be particularly interesting because right now on the world stage, there's a lot of discussion of proportionality, and it seems the entire definition of proportionality has completely been blown up. Thoughts on this, Chris?

Lord Chris Holmes: My bill, to go back to that, is very much risk-based, but with proportionality running through it. Let me give one example of why proportionality is such a useful legal construct and, underpinning that, social construct.

One of the clauses in my bill says that all organizations developing, deploying, or using AI should have an AI responsible officer. Now, before anybody thinks, "Oh, overly burdensome compliance, a dead hand on any innovator, any business, any scale, any growth": because of the proportionality clause running through the bill, don't think of an individual, a group, or a team. Think about role; think about function.

For those micro-businesses just starting off, they will obviously have a proportionately different approach to satisfying that AI responsible officer than say, a business of 20,000 employees with multiple sites right around the UK, never mind internationally. It's an important principle. I think we can bring it to life.

There's another principle which appears in a lot of other UK legislation: reasonableness. Again, people sometimes struggle with that concept when they first come to it, asking, "Well, what's reasonable and what isn't?" But that's the joy of drafting in that flexible, agile, developmental way. Quite rightly, what is considered proportionate today may be very different from what's considered proportionate in 10 years' time. Thus the statute or the regulation has that developmental nature within it, thanks to the agility and flexibility of English common law.

Audience Question: Sovereign AI and Sovereign Cloud Infrastructure

Michael Krigsman: We have a question from Vinit Osmani, who says, "How will the need for a sovereign AI affect the possibility of a global agreement on AI regulation?"

Lord Tim Clement-Jones: Sovereign AI is actually important because I think we need the capacity to create our own models. But I don't think it's going to be very helpful for us simply to expect to be building large language models, because we don't have the exascale compute, we don't have the powerful chips, and we don't have, except in the case of health data, the huge datasets that have been accumulated by some of the AI developers.

I do think that in terms of open source development, we can really be pretty high up there in terms of world rankings. I think sovereign AI to that extent is important. But for my money, more important even than sovereign AI, where, of course, our major universities have got great skills, and there are spin-outs and startups and so on, is sovereign cloud. What we lack is sovereign cloud in the UK. Cloud is the platform for so much that goes on, and we haven't got that. It's dominated by two or three major US big tech companies. Again, that's another example, as we talked about earlier, of American dominance in this area.

Audience Question: Pursuing UK Digital Sovereignty

Michael Krigsman: Now would be an excellent time to subscribe to the CXOTalk newsletter. Just go to cxotalk.com and subscribe. We have incredible shows coming up, and just this really amazing library of discussions just like this.

Okay, we have a question from Isobel Doran, and she is CEO of the Association of Photographers. Chris, she asks, "Do you think that the importance of ensuring UK digital sovereignty means we should be pursuing our own agenda without the US and EU, given the EU appears to be rolling back on guardrails for its citizens? Is it possible?"

Lord Chris Holmes: Yes, I think there's a real opportunity for the UK to say how we want things to be in an interconnected, in an interoperable, in an internationally connected way. But yes, to say these are the principles on which we base this legislation, because these are the shared principles on which we've based our society and our economy.

If we don't, we'll potentially end up being a rule taker from across the pond on the one hand while taking rules from elsewhere at the same time, neither of which is beneficial for UK creatives.

We have a way to use the technologies to solve for the technologies. If we think about what we can do with a combination of, say, metadata, watermarking, and fingerprinting, there's a real opportunity to offer effective, sustainable, technology-proofed solutions for our photographers and all of our great creatives.
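The layered approach Chris describes can be sketched in code. This is an illustrative sketch only, with hypothetical names and a SHA-256 digest standing in for the perceptual hashes real provenance systems would use; it simply shows how the three signals combine so that a work retains some proof of authorship even if one signal is stripped.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Content fingerprint. A SHA-256 digest stands in here; a real system
    would use a perceptual hash that survives re-encoding and resizing."""
    return hashlib.sha256(content).hexdigest()

def provenance_check(work: dict, registry: dict) -> list:
    """Return which provenance signals survive for this work."""
    signals = []
    if work.get("metadata", {}).get("creator"):
        signals.append("metadata")      # creator metadata still attached
    if fingerprint(work["content"]) in registry:
        signals.append("fingerprint")   # content matches a registered work
    if work.get("watermark_present"):
        signals.append("watermark")     # embedded watermark detected
    return signals

# A registered photograph that retains all three signals:
registry = {fingerprint(b"photo-bytes"): "registered photographer"}
work = {
    "content": b"photo-bytes",
    "metadata": {"creator": "A. Photographer"},
    "watermark_present": True,
}
print(provenance_check(work, registry))  # ['metadata', 'fingerprint', 'watermark']
```

Even if a platform strips the metadata, the fingerprint and watermark checks still attribute the work, which is the point of using the signals in combination.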

Bear in mind, folks, our creative industries in the UK, and it's a picture mirrored in many other jurisdictions, are growing at twice the rate of the rest of our economy, worth £126 billion. It's worth thinking hard about what we do to support all those great creatives who not only do so much for our economy, but do so much, frankly, to nourish our souls and make life the beautiful thing that it truly can be.

Lord Tim Clement-Jones: I could just add to that, what we mustn't do, and I totally agree with Chris, what we mustn't do is sacrifice all those hard-won creative rights on the altar of some kind of growth strategy. That would be utterly futile at the end of the day, and counterproductive.

Legal Clarity for Digital Assets and Tokenization

Michael Krigsman: Let me toss out a question to either one of you. Digital assets and tokenization. Why is legal clarity around digital assets, cryptocurrencies, NFTs, tokenized property important for innovation and market confidence? Either one of you? Very quickly, please.

Lord Chris Holmes: If you bear in mind, whichever stat you take from various consultancies, it's likely that around 80% of all value is going to be exchanged by tokens by 2030. This is an extraordinary opportunity for any economy, for any jurisdiction.

Again, what do we need? That clarity, that certainty, that consistency, which delivers then the confidence for people to invest in, for people to innovate around, for people to develop platforms, for people to develop tokens of themselves.

In brief, in the UK, we're just bringing forward the Property (Digital Assets etc.) Bill. Tim and I both sat on the special bill committee for that, and we've got report stage in about three weeks. What that seeks to do is give clarity that digital assets can be considered property, really delivering on that certainty point which enables investment and innovation. Again, to the growth point that Tim rightly raised: governments here and around the world talk about growth, and who wouldn't? But if you really want to consider where growth is most likely to come from in the shortest time, in a sustainable, effective, and emancipatory way, it's going to come from these new technologies, and from everything around digital assets and tokens.

Lord Tim Clement-Jones: There's a great deal of impatience in the industry about the fact that government really isn't picking up the ball quickly enough. We may be pinning down exactly what is a digital asset and what is not, and the legal definition, but as an economy we're not really moving fast enough on this.

Open Banking and Finance: Benefits and Security

Michael Krigsman: Tim, can you give us one sentence on open banking, open finance, and initiatives, why they benefit consumers and what should be done there? Literally, just very, very quickly, please.

Lord Tim Clement-Jones: We've got legislation going through that really tries to roll out the concept of open banking: joining up, for the benefit of the consumer, a lot of the data that is held on them by different service providers, which already happens in banking. The idea is to make that available to a wider group of businesses in terms of the consumer and the services provided. Many people think that is a great way forward.

I have my concerns, and I think most legislators do, about making sure that the data is firmly secure. Imagine open banking, where different financial institutions have your data and it's all designed to be for your benefit in a personalized kind of way, your 401(k) or whatever it might be, Michael. This is going to be an important set of services, which is why we need legislation to make sure there are ground rules around it, that it is secure, and that the consumer really does benefit. And if it extends to the citizen, the citizen benefits too.
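The consent-gated data sharing at the heart of open banking can be sketched minimally. All names and data here are hypothetical stand-ins (real open banking runs on standardized, authenticated APIs with token-based consent); the sketch only shows the ground rule Tim is describing: an aggregator sees a consumer's data solely from providers the consumer has explicitly authorised.

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    """The consumer's authorisation: which providers may share their data."""
    consumer: str
    providers: set = field(default_factory=set)

# Stand-in for per-institution account data (hypothetical):
PROVIDER_DATA = {
    "BankA": {"alice": {"balance": 1200}},
    "BankB": {"alice": {"balance": 300}},
    "BankC": {"alice": {"balance": 9999}},
}

def aggregate(consumer: str, consent: Consent) -> dict:
    """Join up the consumer's data, but only from consented providers."""
    return {
        name: records[consumer]
        for name, records in PROVIDER_DATA.items()
        if name in consent.providers and consumer in records
    }

consent = Consent("alice", providers={"BankA", "BankB"})
print(aggregate("alice", consent))
# BankC is excluded: no consent was granted for it.
```

The design choice worth noting is that consent is the gate, not an afterthought: the aggregation function cannot reach data outside the consent scope, which is the kind of ground rule legislation would need to enforce.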

Importance of Open Source AI Models

Michael Krigsman: Tim, just literally one sentence on the importance of open source LLM models in this broad scheme. Very quickly, please.

Lord Tim Clement-Jones: I'm really keen on open source models because even if it's off the back of LLaMA or DeepSeek or whatever, it gives the smaller developer the chance to use the power of those big models and, using their own particular dataset, to really move forward in a whole bunch of places, particularly healthcare, where this could be game-changing in many ways.

Audience Question: Establishing Best Practices in AI Governance

Michael Krigsman: Chris, let me jump to you, and this is a question from Dr. Carolina Sanchez Hernandez. She asks, "From a governance and assurance perspective, how do we agree on best practices within this tech race?" If you can answer this literally in one or two sentences. We're going to run out of time.

Lord Chris Holmes: By the approach we should always take to legislation: considering the context in which we're legislating and ensuring meaningful public engagement. Clause seven in my private member's bill is all about public engagement; it's the most important clause. Technology gives us such an opportunity to transform how we engage with the citizenry, with the public. Having all of that in place gives us the best opportunity to come up with the right results. And, underlining again, I know I sound like a cracked record, but doing it in the legal code in which we legislate gives the agility and the dynamism, so we can develop over time.

Audience Question: Keeping Regulation Agile Amidst Rapid Change

Michael Krigsman: Tim, we have a question specifically directed to you from Bridget E., who says, "Even if the UK regulations are eventually risk-based, like the EU AI Act, will they keep up with rapid changes in technology?"

Lord Tim Clement-Jones: Chris and I are great believers in agile regulation. If it's sufficiently agile and sufficiently principled and outcome-based, actually, you can do that. You don't have to regulate for every possible form of AI or every possible neural network or algorithm, however you might define it. If you define it by the risk and the outcomes, then I think you're in a much better place.

We're used to outcome-based regulation in the UK, principle-based regulation. This is not a foreign concept for us. It's worked perfectly well in a number of different areas, particularly financial services.

Add to that the question of sandboxing, which Chris and I are both very keen on. That then gives you the ability to safely develop, to innovate, under the beady eye of the regulator, without transgressing the regulations. It means that you can innovate safely, and then when you've got to your beta point, your product that is fit for purpose, that's the point when you can take off and you know that you're meeting the regulatory requirements. I'm a fan of proportionate regulation, and I do believe that we have the makings of it.

Audience Question: Ethics, Responsibility, and Public Trust in AI

Michael Krigsman: Let's move on to Michaela DeMello. Michaela says she's seeing a fragile relationship between regulation of AI and the baseline data ethics issues that go a step beyond regulation and are key for building public trust. What are your thoughts about this relationship as regulations evolve so quickly? How can we encourage organizations of all sizes, innovators, et cetera, to choose to be more ethical in their use of AI? Chris, thoughts on the ethical use of AI in relation to innovation, please?

Lord Chris Holmes: An ethical approach is not only the right approach; ultimately, it will always make good economic sense because of the certainty, the reliability, and the solidity that come from it. Even if you only do it for economic reasons, and you'd hope people would do it for broader reasons than that, it's the right approach.

Moving beyond that, you can talk about responsible AI, which I think is a very helpful conception. To the earlier question, that sense of the role of the professional is important. I think we can have professional standards and a professionalization of the data science and AI world, where people who are working incredibly hard come up with great concepts and great models. That world would really benefit from having some professional standards and recognized qualifications. All of that would help.

To the Hong Kong point: I was down in Hong Kong last spring. It's really interesting what's happening there with the Hong Kong Monetary Authority in terms of sandboxing potential AI models in the financial services arena, really using Hong Kong as a Petri dish, not just for AI but, back to our earlier discussion, for crypto and tokenization too. It's fascinating what's happening there. But all of it comes back to another thread running through all of this: professionalization, an ethical approach, responsible AI. Ultimately, what we're saying there is that this is about standards.

Lord Tim Clement-Jones: Public trust is absolutely fundamental. If you haven't got public trust, you haven't got anything, and we have had enough difficulty gaining public trust for the use of data sharing and access in many different ways. We've got to make sure that we don't make the same mistake with artificial intelligence, that we gain that public trust, because that's what gives everyone the license to use it and to make our lives better.

Lord Chris Holmes: Quite right, Tim. I think it's why it's an important clause in my bill because exactly to the point you made there, if the public don't trust AI, then they're not going to most likely avail themselves of the benefits and the opportunities. Simultaneously, they're likely to suffer the sharp end and the potential burdens of it, and that would be utterly tragic. But the great news is, if we so choose, it's utterly avoidable.

Audience Question: Whistleblower Protections for AI Concerns

Michael Krigsman: Here's a simple, very important question from Don Davidson, who says, "Given the unique insights of employees working directly with AI, do you agree that stronger whistleblower protections are urgently needed in the UK to ensure concerns can be raised safely, particularly as this technology rapidly evolves?" Chris, you want to jump into that really quickly, please?

Lord Chris Holmes: Yes. I think whistleblowing has an important role to play in building better relations around all of this in the workplace. There are some real opportunities with the Employment Rights Bill, which has just come to the Lords, so I know Tim and I are going to be working a lot on this and on the numerous issues in that relationship between AI, the employee, and, indeed, the employer.

Lord Tim Clement-Jones: This is going to be one of the big things in the future as AI increasingly is adopted by employers. It's going to be AI in the workplace, and whistleblowing is a crucial part of that.

Audience Question: Addressing the Global Digital Divide

Michael Krigsman: Tim, Arsalan Khan asks another very important question. He says, "What about the digital divide? Should countries that are far ahead in AI even care about those who are far behind?"

Lord Tim Clement-Jones: Well, I think accessibility is going to be absolutely crucial. The interesting thing is that different jurisdictions have very different approaches. We all thought that many sub-Saharan African countries were behind in technology, but then they leapfrogged us in payments, adopting mobile payment systems well before many of us got there.

There is no iron rule that says you can't adopt new technology even if you're a developing country and you don't have full access to every form of internet and so on. Mobile communications are extremely sophisticated in quite a number of countries that you wouldn't think had very advanced economies.

But I do believe that governments have a duty to try and make sure that their citizens have as much access, or have a level playing field, if you like, for access to these technologies. For my money, we've just had a digital inclusion plan published by the UK government, which is broadly on the right track, but I think there should be an absolute right to internet access, for instance, or at least to 5G. That needs to be the bottom line, really, of the digital world.

Audience Question: Redress for Use of Creative Works in AI Training

Michael Krigsman: Vincent Suzara simplifies his question for me. He asks, "Are people ever going to get an apology for their social media chat, published, artistic, and creative works that were stolen from them, taken without informing them that it was to be used to train AIs?" He wants apologies for his work and other people's work being sucked up into the LLM machine.

Lord Chris Holmes: Sadly, not from all the people who purloined their works. But never mind an apology: there should be remuneration for all those works which went into making these large foundation models. There should be remuneration, there should be respect, and there should have been consent. That needs to be the three-pronged approach moving forward.

Audience Question: Regulating Outcomes vs. Processes to Avoid Loopholes

Michael Krigsman: Tim, Lewis H. says, "You..." I don't know if he means you or the global "you," meaning both of you, "...mentioned regulating automated decision-making using bank loans as an example. But as you know, this kind of decision-making is a constant across AI systems. Isn't there a danger that clever lawyers will find loopholes, much like accountants do with tax law? Wouldn't it make more sense to focus on regulating outcomes to ensure they're safe and transparent, rather than trying to constrict the machine?"

Lord Tim Clement-Jones: I don't think the two are incompatible, quite honestly. I believe in outcome-based regulation. On the other hand, I'm a lawyer, so I do believe in the legal process. If there are flaws in regulation, and the regulator isn't doing their job or the regulated financial services provider isn't doing theirs, then we need to make sure we have the remedies. But we're back to proportionate regulation again, basically.

Audience Question: Crypto Fraud and the Need for Labeling AI Decisions

Michael Krigsman: Chris, Anthony Scriffignano comes back because he wants you to know, he wants to make the point... He was the chief data scientist at Dun & Bradstreet, so he's an expert in this. He says, "Malfeasance in the context of cryptocurrency is often not covered by the same protections as fiat-based currency. The best fraudsters change their behavior faster than the best regulators."

Lord Chris Holmes: Well, I think there's fraud in crypto just as there's fraud in fiat, and fiat fraud still dwarfs crypto fraud, though I'm not unaware of the difficulty of getting the right-sized regulatory framework for crypto.

Tokens are far more interesting, as you know, Anthony; it's a far broader area than simply crypto, the so-called currencies. Getting the right regulatory framework in place and building more public understanding of these technologies gives us the best chance of getting the protections in place.

Back to the earlier question that Tim answered, a one-word answer: labeling. If somebody is going to be subject to an AI-driven or automated decision, and it's labeled as such, they'll know that's the case, and that will give them the best chance to decide whether they want it or not.

Final Advice: Building Public Trust in Technology

Michael Krigsman: Tim, to finish up, let me direct this first to you, and then I'll direct the same question to Chris. What advice, Tim, do you have for policymakers regarding the building of public trust in AI, digital assets, and open data?

Lord Tim Clement-Jones: Introduce proportionate regulation that basically makes sure that the high-risk AI is regulated. That would be my advice, and that would create public trust and give us the license to keep innovating.

Michael Krigsman: Chris, same question. Advice for policymakers to build public trust in regards to technology and regulation.

Lord Chris Holmes: Engage, engage, engage. We have what we need to make a success of this because we understand critical thinking, values, ethics, responsible approaches, economics, philosophy. We need to be human. These are tools, they're incredibly powerful tools, but they're tools in our human hands. We decide, we determine ultimately our decisions, our human-led digital futures.

Conclusion

Michael Krigsman: And with that, a huge thank you to Lord Chris Holmes and Lord Tim Clement-Jones. Thank you both so much for being here. I'm grateful to you both.

Lord Tim Clement-Jones: Thank you.

Lord Chris Holmes: Thank you.

Michael Krigsman: Folks, thank you for your questions and for watching. Tim and Chris, I hope you'll come back. I feel like you're both so smart and the questions that I was asking you didn't challenge you enough. Please come back again and let's do this one more time.

Lord Tim Clement-Jones: Brilliant questions.

Michael Krigsman: Everybody, thank you for watching. We'll see you next time. We have amazing shows. Check out cxotalk.com. Be sure to subscribe to our newsletter because we want you as part of our community. Thank you so much, everybody, and I hope you have a great day.

Published Date: Apr 04, 2025

Author: Michael Krigsman

Episode ID: 875