AI FAILURE: Injustice, Inequality, and Algorithms

Discover why AI decision-making is catastrophically failing vulnerable populations, what that means for your organization, and how to prevent AI disasters through accountability, on CXOTalk episode 882.

56:07

May 30, 2025

As organizations rapidly deploy AI across critical functions, we're witnessing unprecedented failures that challenge our assumptions about algorithmic decision-making.

Artificial intelligence makes life-altering decisions for 92 million Americans, determining who gets healthcare, housing, jobs, and government benefits. Too often, these AI systems fail catastrophically, causing human suffering on an unprecedented scale while evading traditional accountability mechanisms.

Kevin De Liban, founder of TechTonic Justice and veteran legal advocate, joins CXOTalk episode 882 to expose how AI decision-making has gone dangerously wrong across both government and private sectors. Drawing from his groundbreaking report "Inescapable AI" and his experience leading successful legal challenges against harmful government AI, Kevin reveals:

  • The shocking scale of AI failures: From 40,000 false fraud accusations in Michigan to 25 million people losing healthcare coverage, learn how AI amplifies errors and bias at speeds and scales impossible with human decision-making
  • Why traditional oversight fails: Discover how automation bias, black-box algorithms, and misaligned incentives create a perfect storm where neither political, market, nor legal mechanisms can adequately protect against harmful AI
  • The hidden business risks: Understand why the same AI systems harming vulnerable populations today could threaten your organization tomorrow through litigation, reputational damage, and operational failures
  • Lessons from the frontlines: Learn from real cases involving Deloitte, Thomson Reuters, and major corporations where AI implementations have led to lawsuits, regulatory scrutiny, and public backlash
  • A path forward: Explore practical recommendations for building accountable AI systems that enhance rather than replace human judgment, including transparency requirements, meaningful oversight, and community involvement

Whether your organization is deploying AI for hiring, customer service, risk assessment, or operational efficiency, this conversation offers essential insights into the pitfalls to avoid and the governance structures necessary to prevent your AI initiatives from becoming tomorrow's cautionary tale.

Key Takeaways

  1. AI is not neutral - Systems reflect the values, biases, and incentives of their creators and deployers
  2. Scale of impact is massive - 92 million Americans affected, with expansion into all economic classes
  3. Self-regulation has failed - Market incentives alone don't prevent harm to vulnerable populations
  4. Accountability drives quality - Meaningful regulation benefits ethical companies and improves outcomes
  5. Legitimacy matters - Not all decisions should be automated; human judgment remains critical
  6. Transparency is essential - Black box systems violate democratic principles and prevent accountability
  7. Community engagement is crucial - Those affected by systems must have meaningful input in design
  8. Environmental and social costs - Full impact assessment must include resource consumption and labor exploitation
  9. Democracy at stake - Unchecked AI deployment threatens fundamental democratic values
  10. Action is urgent - Window for shaping AI's role in society is closing; engagement needed now

Episode Participants

Kevin De Liban is the Founder and President of TechTonic Justice, a newly launched organization to fight alongside low-income people left behind by artificial intelligence (AI). Through multidimensional advocacy, TechTonic Justice supports marginalized communities and their advocates to secure the work, housing, schooling, public benefits, and family stability needed for a thriving life.

Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator known for his deep expertise in business transformation, innovation, and leadership. He has presented at industry events worldwide and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.

Transcript

The Impact of AI on Vulnerable Communities

Michael Krigsman: Today on CXOTalk episode 882, we're examining a critical issue that affects millions yet is largely invisible to most business leaders: how AI is failing our most vulnerable citizens on an unprecedented scale. I'm Michael Krigsman, and I'm delighted to welcome our guest, Kevin De Liban.

As founder of TechTonic Justice, Kevin has witnessed firsthand how AI systems determine who gets healthcare, who finds housing, who gets hired, and who receives government benefits. His groundbreaking report, Inescapable AI, reveals that 92 million Americans now have fundamental aspects of their lives decided by algorithms, often with devastating consequences.

He'll share how these systems fail, why traditional accountability mechanisms don't work, and what this means for your organization. We're discussing real systems, built by major vendors, causing real harm right now.

Kevin De Liban: TechTonic Justice is a new nonprofit organization launched last November to protect low-income people from the harms that AI causes them. And I come to this work after 12 years as a legal aid attorney representing low-income folks in all sorts of civil legal matters, and it was there that I first saw the ways that these technologies were hurting my clients' lives.

And I was involved in several battles and won several of them as well and started understanding that this was a bigger problem that needed more attention and more focus.

Michael Krigsman: Kevin, you were an early pioneer actually winning cases relating to AI harms.

Kevin De Liban: It was really in 2016 when I had clients who were disabled or elderly on a Medicaid program that pays for an in-home caregiver to help them with their daily life activities so that they can stay out of a nursing facility, and this is better for their dignity and independence and generally cheaper for the state as well.

What happened is the state of Arkansas replaced the nurses' discretion to decide how many hours a day of care a particular person needed with an algorithmic decision-making tool, and that ended up devastating people's lives. People's care hours were cut in half in some cases, and it left people lying in their own waste, left people getting bedsores from not being turned, being totally shut in, just intolerable human suffering.

We ended up fighting against that in the courts and also with a really big public education and community activation campaign, and we won. And that's one of the relatively few examples still to this day of successful advocacy against automated decision-making.

The Neutrality Myth and Design Flaws in AI

Michael Krigsman: Algorithms and AI are neutral mechanisms, neutral devices. They're just math, without feelings, without interests, without malice. So given that, what is the problem here?

Kevin De Liban: I would challenge some of the assumptions even in that question of them being neutral, right? I mean, they're programmed by humans. The statistical science that underlies a lot of this stuff is determined by humans using various choices that they have, using historical data that they have, and that isn't a wholly objective exercise.

I think what you really have to look at is the purpose for which the technology is being built to understand it and understand a lot of even the technical aspects that underpin it.

In my world, when we're talking about low-income people and automated decision-making for them, these are not neutral technologies at all. These are designed oftentimes to restrict access to benefits or to empower whoever's on the other side of the equation, whether it's a landlord, a boss, a school principal, a government official, to do something that they want done that might not be what the person who I'm representing is interested in.

I would challenge that premise first.

Michael Krigsman: So you're saying that the design of the system is intended to cause harm. Is that correct what I'm hearing you say?

Kevin De Liban: In some cases, it's intended outright to cause harm. In some cases, it's just intended to facilitate a decision by the decision-maker, right? Make a landlord's life easier, make a boss's life easier, make a government official's life easier.

The problem is that making their life easier ends up making somebody else's life harder. And I think that's where the push and pull of this is: there is the intent issue.

There is very clearly stuff that's built to be harmful. But then there's also this gray area where nobody's scheming in a dark room, plotting to take over the world and destroy people's lives, but the nature of their power positions, the decisions that they're making, and what makes their life easier ends up translating into that for low-income people.

Michael Krigsman: Can you give us an example of where the goals or the incentives are misaligned between the developers of these technologies or algorithms and, can we say, the intended recipients? Is that even the correct way to phrase it?

Kevin De Liban: Take the hiring process, for example. With most big companies now, it's riddled with AI. Everything from resume review and screening to video interviewing to oversight once somebody gets the job, right?

There's nothing inherent in that process that really benefits the person who's seeking work or is an employee, right? That's all intended to facilitate the life and the work of the employer or the bosses.

Same thing a lot of times with public benefits. You've got really dedicated public servants, but oftentimes they're unsophisticated in technology issues. They're thinking, "Okay, well, this new piece of technology is going to suddenly help expand our limited capacity, so let's implement it." And then they don't have what they need to do that in a non-destructive way.

The people who end up bearing the risk of their own lack of knowledge or incompetence are the low-income people that are subject to the decision-making.

Accountability and Ethical AI

Michael Krigsman: These systems are complex. They are developed with algorithms and data as well. Can you isolate where a primary source of the problem lies? I realize underneath it all, you have human intention, trying to solve a problem, trying to achieve a goal, but can you drill into that a little bit, kind of dissect this for us?

Kevin De Liban: There were a couple aspects to the way the algorithm worked. One is the mechanics of it, right? What inputs turn into what outputs? And that's hard enough to discern.

But then there's the reason that those inputs are chosen to lead to those outputs, right? Like, why do you look at this factor and not this factor? Why is this factor shaped to look back three days instead of five days, all of those things.

Those are all human decisions. Now, they're informed by, in the best cases, statistical science. In a lot of cases, "science" is a bad descriptor for it. A lot of times it's junk, right, that somebody just invented and came up with. But in the best cases, it's statistical science that is still riddled with various assumptions.

In our example in Arkansas, whether or not somebody could bathe on their own might not have been a factor that the algorithm considered, and that's weird, right? I mean, we're talking about home care for elderly or disabled people.

Being able to bathe on your own should be one factor that decides how many hours of care you need. It wasn't. Or your ability to prepare meals wasn't a factor, and so you see this disconnect of like, we know instinctively or through medical discretion and judgment how to answer this question of how much care somebody needs.

It might be imprecise, but we know. We know what we should be looking at, but the algorithm didn't do that. They looked at a lot of factors that weren't intuitive, and then they ignored a lot of factors that were intuitive.

Michael Krigsman: How does this come about? Is it simply a lack of understanding of the people these systems are applied to? What happens here?

Kevin De Liban: Some of it is real ignorance about the lives of poor people and the ways that decisions are made and the impact of the decisions. Some is ignorance about certain program standards or laws or anything else. I've seen that a lot in the technology.

Some of it is the lack of having to get it right. For a lot of the developers of these algorithms in particular, they're shielded from any sort of consequences for their actions. And so they do what they think is best or what they can sell to a client, and that's that.

And then the clients that are using it, the government agencies or the employers or whatever, they might not be vetting it, or they're also insulated from accountability because if it hurts poor people, what's going to happen to them? Like, what's going to happen to the person who decided to use it?

I mean, poor people oftentimes are not a particularly empowered political bloc. There usually aren't scandals that end up resulting in lost jobs or lost elections for officials who are in charge of this stuff.

It's easy to get away with really harmful actions just because you're doing it to people who don't have a lot of power that's ready at hand. Low-income communities have always been super involved in advocating for themselves and organizing and everything else, but that's a huge effort, right?

It takes like a concentrated movement, and it's not like you can just call your elected official, and you have that kind of access and say, "Hey, this is a problem. Can you take care of this for me?" Or organize a lobbying effort to get rid of something. No. If you're doing something with poor people and it hurts them, you're not going to face immediate consequences for the most part.

Michael Krigsman: Folks who are listening, I want you to ask questions. We have some questions that are starting to come in on LinkedIn and Twitter, and we're going to get to them in a couple of minutes.

If you're watching on Twitter, just insert your questions into Twitter using the hashtag #cxotalk. If you're watching on LinkedIn, just pop your questions into the chat.

For those of you who are developing these kinds of systems, and we hear a lot of discussion of ethical AI or responsible technology, here's an opportunity to ask somebody who's dealing with the actual fallout of this. So, ask your questions.

The Expanding Role of AI in Decision-Making

Michael Krigsman: Kevin, what about the scale of the problem?

Kevin De Liban: All 92 million low-income people in the United States have some key aspect of their life decided by AI, whether that is housing, healthcare, public benefits, work, their kids' school, family stability, all of these issues.

Not everyone might have all of those issues decided by AI, but everyone has at least one of those issues decided by AI. And then it extends beyond low-income people into higher-income settings as well.

There've been a lot of stories, for example, about employer use of AI, the screening aspect and then the bossware management aspect of it being used against finance executives or against hospital chaplains, against therapists. Recently there was a story about Amazon programmers who are now subjected to AI-based oversight and measurement, and it's affecting their lives.

Even though a lot of this stuff is most prevalent and probably most severe in the lives of low income people, it's happening to all of us. Healthcare is another great example, right? If our doctor recommends a treatment for us, many of the more expensive treatments are subject to health insurance company review prior to being offered, and those health insurance companies are using AI generally to deny those requests.

Michael Krigsman: We all know about United Healthcare and their use of algorithms that they say are neutral, "We don't do that," but you hear doctors complaining about how algorithms are interfering with their ability to render the kind of care that they want.

It becomes pretty evident that what was once targeted at lower income people now, through the acceleration of AI, is broadening and touches all of us at this point, I would imagine.

Kevin De Liban: In one of the examples of the health insurance companies, they ostensibly had a human reviewer reviewing the AI's outputs. But when the investigation dug into what that human review looked like, it showed that the doctor was approving something like 60 prior authorization requests a minute. Like, they had one or two seconds per one.

There's no human reviewing that, right? And it's bad faith to assert otherwise. And that's, I think, one of the key data points, and there are a lot of others, that help us show that this isn't just all purely accidental.

This can't be just attributed to mistakes or errors, that there's a lot of thought and intention that goes behind developing and implementing these systems that are denying people really fundamental needs.

Michael Krigsman: Subscribe to our newsletter. Go to cxotalk.com, check out our newsletter, and check us out for our next shows. We have great shows coming up. Let's jump to some questions.

Ethics, Regulation, and Accountability in AI

Michael Krigsman: Let's begin with Arsalan Khan on Twitter. Arsalan's a regular listener. Thanks so much, Arsalan, for your regular listenership.

And Arsalan says this. He says, "Whoever sets the AI guardrails has the power, but who checks if those guardrails are equitable?" And he asks, "Why don't we have a Hippocratic Oath for us as IT professionals?" He's an enterprise architect.

This notion of whoever sets the AI guardrails has the power, but who checks that the guardrails are right, are equitable?

Kevin De Liban: The Hippocratic Oath idea is not a meaningful source of systemic change to insulate society from these harms. Doctors take the Hippocratic Oath, and while that might be useful, a lot of times it doesn't prevent some of the abuses in medicine either. Lawyers have obligations, and that doesn't prevent us from going and doing all sorts of random harmful things.

I think what you need is actual regulation to reinforce the guardrail notion, right? Safeguard people from having any exposure to the harms in the first place, or, if those kinds of institutional and ethical safeguards fail, make sure there are real consequences that go beyond somebody just violating their oath and feeling bad about it.

I don't know if that's getting at the full essence of the question, but that's where some of my thoughts go. And also, not everybody's as ethical as the person asking the question either, right? And some people are perfectly happy to just do whatever the client wants or program the system in whatever way is going to make it most profitable and attractive.

As long as they don't have anything holding them back formally and officially, with real consequences and accountability, we're not going to get any major change.

Michael Krigsman: So, self-policing is not sufficient in your view?

Kevin De Liban: Definitely not, and I know those questions are asked and posited in good faith. But the people who are pushing that at the policy level are definitely not pushing it in good faith.

They don't want any accountability. They don't want anything that would restrict how they use it, and they're perfectly happy to shunt off all the risks and all the dangers of their systems being bad or going wrong or doing something destructive to the people who are subject to those decisions.

Michael Krigsman: Are you talking about government policy, or in corporate policy, people designing products?

Kevin De Liban: Government policy. The tech industry has been vociferous in their opposition to any sort of meaningful regulation of AI automated decision-making technologies and so forth.

That's the reason why we don't have any real societal protections against this stuff outside of existing laws. And even now, they're targeting some of the European Union's restrictions, which are modest, but big tech doesn't like those.

That's what I'm talking about: how corporate interests end up shaping policy positions in ways that are detrimental to really all of us who are not in that world, but particularly low-income people.

Michael Krigsman: You also have many of the major tech companies pushing forth their own ethical AI initiatives, and lots of discussion around data and building bodies of data that try to weed out bias. I mean, you see this happening everywhere.

Kevin De Liban: That's true. And there are a lot of good people who share my values in these companies and are trying to make the companies do as right as possible.

But I think when the rubber hits the road, we've seen repeatedly that the folks speaking out for ethical uses are sidelined. A few years ago at Google, for example, the whole ethical AI team, I think, was fired because they wanted to publish a paper that Google didn't want published. Or more recently, when Twitter was taken over by its current owner, the whole ethical AI team was disbanded instantly.

You have Google's retrenchment of its ethical AI things, and now its technology is being deployed in unemployment hearings, right, for people who are desperate for benefits, even though we know that a lot of the AI technology involved can be faulty.

Again, you do have these ethical components within institutions that are pushing, I believe in good faith a lot of times, for changes. But the people who are pushing for that don't have the same interests as the institutions who are allowing it.

A lot of times, the institutions are allowing ethical AI because it allows them to go out and talk about their concept of social responsibility. But we see repeatedly, when the rubber hits the road, ethics will go by the wayside and the company's profit incentives and motives are going to be what dictates what happens next.

Michael Krigsman: So basically, money talks, nobody walks.

Kevin De Liban: Yeah. I mean, it's complicated, right? Because again, there's a lot of good people in there that are pushing really hard for these major institutions that have lots of power to do right. And the fact that the institutions allow that to happen is noteworthy.

I think it just comes down to that, yeah, in the end. It ends up being that money talks.

Michael Krigsman: I will say that you are up against the marketing budgets of some really, really large companies here.

Kevin De Liban: I am. This is going to change everything though, Michael. See, CXOTalk, this is going to be the entryway. This is better than all the marketing budgets of the big tech companies right now.

Michael Krigsman: Let's jump to some other questions.

Practical Guardrails for Ethical AI Development

Michael Krigsman: And I'm seeing some themes developing in the questions here. And this next one is from Preeti Narayanan, and she says, "Given your work exposing large-scale harm caused by AI in public services, what practical guardrails would you recommend to technology leaders," like her, like many of our listeners, "who are building enterprise AI systems so we don't unknowingly replicate those same failures at scale?"

Basically, it's the same sentiment as Arsalan Khan just brought up. What can we, as the people creating these systems, do?

Kevin De Liban: Okay. One thing is push for regulation, right? And push for meaningful regulation of what it is that you do. Because that way, it bakes in consequences for getting it wrong. And as long as you have good faith and are doing things the right way, those consequences shouldn't be terribly severe, or you shouldn't be exposed to them in a way that's wholly destructive.

I think pushing for regulation is actually in your own interest. But in the context of developing a particular product, you can ask, "Is this a legitimate use for AI?" For example, should we be using AI to deny disabled people benefits and home care? That might not be a legitimate use of AI.

If it isn't a legitimate use, maybe we shouldn't do it, and we should just say, "That's off-limits. We're not going to do that no matter how much somebody's going to pay us because we just don't believe that's fair."

Now, if it is a legitimate use, and I acknowledge there's a lot of gray areas in this, then you've got to have a really intensive development and vetting process. What are you doing? What data are you using? Are you projecting out the harms? Are you consulting in a meaningful way with actual oversight, the people who are going to be subjected to these decisions?

Do they have some sort of say in how it's developed in a way that would actually stop you from moving forward or force a different development of it? Are you willing to disclose things that might traditionally be considered trade secrets or intellectual property in the interests of having more public accountability?

Are you willing to ensure ongoing oversight so that when your product is deployed, it's deployed, first of all, in narrow, short, phased ways so that we can test the harm before it's applied to everybody?

And then two, are we willing to look over time in a three-month span and see, hey, does our projected impact, which we have documented and would have disclosed to the public, differ from what the actual impact is? And if so, is there an automatic off switch? Is there some way to course correct that?

All of those things, when combined with meaningful legislation that gives people enforceable rights if they're hurt, would reduce the chances of harm on a systemic, society-wide scale.

Michael Krigsman: If I were a corporate leader, you made the assertion that we should question whether AI is the appropriate decision-making tool to use in some of these situations that could cause real downstream harms.

But I would push back, and I would say, "Sir, you don't know what you're talking about because AI is a decision tool. It is not autonomous.

Challenges of AI Development and Bias in Data

Michael Krigsman: It's overseen by humans. The data that we collect is carefully vetted to be unbiased, and it's unfortunate that these downstream harms are happening, but it's not a result of our decision-making. There are systemic underlying societal issues, and frankly, the AI is making the right decision."

Kevin De Liban: I would challenge almost everything that you said there, Michael, starting with the sophistication of the vetting process. The people who are developing enterprise software might be doing a better job when the people who are going to buy their software are wealthier than when it's built for low-income people.

First of all, I think, like, who the audience is, who's going to be subjected to this, dictates a lot of how careful the development process is. And if it's going to be deployed against poor people, the development process doesn't need to be as intensive probably as it would be for corporate clients, right?

I think there's that. So a lot of the so-called science in AI is really junk when it applies to poor people.

One great example of that is identity verification during the pandemic. Hopefully some of your listeners will have some frame of reference. During the height of the pandemic, right, masses of people were unemployed. Congress expanded unemployment benefits to help people float during these desperate times.

At some point, states, encouraged by the federal government, implemented ID verification measures, algorithmic ones. And what they would do is they would run every active claim and every application that was outstanding through these ID verification algorithms. And the algorithm would flag claims that it noted as suspicious.

Then what would happen is the person who was flagged would have to present physical proof that they are who they say they are. That happened, and then the state still didn't have the capacity to process that verification.

You ended up with millions and millions and millions of people who are in desperate circumstances, can't keep their lights on, can't pay their rent, can't get school supplies for their kids, who had their benefits stopped or delayed by months and months and months because of this identity verification algorithm.

Now, what would happen? How did it work? One of the factors is, are you applying from the same address as somebody else? Including with apartment buildings. So if I live in unit 101 and somebody else is applying for unemployment benefits and lives in unit 303, both of us are flagged.

That's ridiculous. That's somebody in their basement coming up with some junk that they think would be associated with fraud. There's nothing statistical about that. There's nothing scientific about that. That's somebody just inventing stuff, right?

But that invented stuff causes millions and millions of people desperation that you couldn't imagine. I had clients who were calling with active mental health crises, talking about self-harm, because they couldn't get unemployment benefits even though they were who they said they were, and they showed that to the state.

That's an example where maybe some companies care more than others, but here, when the rubber hit the road, it didn't matter.

Ultimately, studies that came out afterward assessing the validity of these tools showed that, for the most part, they caught eligible people, right? They weren't targeted narrowly to ensure that we're only catching the few who are actually suspicious. No, they ended up catching essentially everybody, and then just leaving folks to try to wade through the mess on their own. And that's just not acceptable. There's no justification for that kind of stuff.

Addressing Bias and Regulation in AI Systems

Michael Krigsman: Michele Clarke on LinkedIn says, "Can the problem of biased data be solved?" And let me just reframe that. How do you manage the fact that people are struggling to build data sets that are free of bias? I've spoken with many of these folks on CXOTalk, and it's a really tough challenge from a technical standpoint. So what do we do about that biased data?

Kevin De Liban: Biased data is only one part of the problem, right? And there are other parts of the problem. You can have unbiased algorithms that still cause massive harms, and that I think would still be illegitimate in a lot of ways. So, we want to make sure we talk about the risk in more ways than bias.

But bias is a big one, and when we talk about it, there have been various ideas about de-biasing data. And to be fair, I don't have the full technical background to understand the statistical science behind all of the different approaches and which is best at doing what, so I don't want to claim otherwise.

But what I do understand is that there are sophisticated efforts to get more data sets that are validated, to account for historical exclusion, and to test the data on real-world examples that don't carry real-world consequences, so that you're hopefully getting better data.

I think all that is very much possible. But, again, I think the best test against biased data is going to be, once it's out in the world, are you going to face consequences for what you put out there, right?

If you are going to face consequences, then you're going to make sure or you're going to do your very, very best efforts to ensure that your data is not biased in a way that's leading to unfair outcomes for folks.

Michael Krigsman: Self-regulation is not sufficient regulation in this case.

Kevin De Liban: Yeah. Exactly. We see the bias example all the time, right? There are the obvious healthcare examples about who gets transplants, or Black folks' pain being treated as less real than white people's, and various other examples in the healthcare context of AI that's deployed with bias baked in. The ad targeting stuff from social media. Like, all of these things.

Then there's another deeper question, which is, if you can't figure it out, if you can't de-bias your data, maybe you shouldn't be using it. Maybe what you're trying to do is not so important that you're going to go out and reproduce longstanding societal inequities with your technology. Maybe the money's not worth it.

Michael Krigsman: That's a value judgment, I guess, for every person and every company to make. But of course, everybody is going to say, "Well, we are careful."

Kevin De Liban: That's the point. I mean, I think this comes back to one of my points that ultimately meaningful, robust, enforceable regulations are in your interest.

If you are a company that is committed to doing things right, subjecting yourself to accountability is going to be a competitive advantage, right? Because if you have other people who are not doing things right, and they can be subjected to lawsuits that are consequential, they can be subjected to regulatory oversight that's meaningful, that's going to be a competitive advantage for you.

You can say, "Look, we are not caught up in any of that stuff. They are, and so we're a safer bet. We're a better bet." You can tout the societal values that you provide, all of those things. So, I think, ultimately, regulation is in your interest, because it creates a new competitive space for you, a competitive surface, I guess I'd rather say.

Michael Krigsman: I just want to mention, for folks that are interested in the technology, technical underpinnings of data and bias, just search on the CXOTalk site because we have done interviews with some of the leading technologists in the world who are focused on this problem. So, just search for data bias and so forth on cxotalk.com.

And, oh, by the way, while you're there, you should subscribe to our newsletter so we can keep you up to date on shows like this because we have incredible shows coming up. Our next show, not next week, the week after, is with the chief technology officer of AMD. So, subscribe to the newsletter.

AI's Role in Addressing Poverty and Accountability Mechanisms

Michael Krigsman: Our next question is from Greg Walters, who's another regular listener, and Greg, thank you.

Michael Krigsman: And Greg says, "AI is not like old school digital transformation. Broadly, can AI help raise us up out of low income?"

Kevin De Liban: No, not with current incentive structures in the current system that we exist in. People always ask me, like, "What about AI for good," right? Like, "What can we do that would advance justice?"

There's one example I always like to offer, which is with public benefits, say Medicaid, or SNAP, which is nutrition assistance. The government knows, most of the time, what income and assets people have, right? That information is accessible to them in some form.

They know they could oftentimes make eligibility decisions without any, or with minimal, involvement from the person who would qualify for the benefits. And so if you could build a system that would accurately, fairly, consistently make those eligibility decisions and minimize paperwork and other burdens on folks, that would be a net wonderful good that would do more good than 100 legal aid lawyers in our lifetimes ever could.

The problem is, big companies have tried it, big government vendors have tried it, and it repeatedly fails in the same way. Why does it fail? Because of failed accountability mechanisms, right? You don't have political accountability, as we talked about, because hurting poor people generally isn't a scandal that's going to get anybody booted out of office.

You don't have market accountability. Oftentimes, in the government vendor context, it's because there are very few government vendors of the size needed to compete with one another. But even beyond that, you have market failures in terms of transparency about how your product works and what kind of public oversight it's subject to.

Then you have no legal accountability, because the existing laws that we have, while they have been used effectively by advocates like myself, are limited in scope and can only get a certain kind of relief. A lot of times, they can't get money damages for the suffering that's been caused. You can just get a judge to tell the state or the vendor to change what they're doing.

You have all these broken accountability mechanisms, which means that even with this good use, right? Helping people get the healthcare that they are eligible for, you don't see that brought about in real life. And so if you can't do something like that, you're not going to do anything else in terms of alleviating poverty at scale.

You can have some cool projects. Like, in the legal world, there's like know your rights projects, right? Everybody's had a bad landlord at some point in their life, right? Where you needed to request repairs or ask for your security deposit back after you left, and they were trying to hold on to it.

There have been some cool AI-based tools that help people do that, and that's cool stuff. It's great. But it's a grain of sand on the beach that borders the Pacific Ocean, right?

Community Engagement in Ethical AI Usage

Michael Krigsman: Like, it's cool, but it's not scale. This leads us to an important question from Trenton Butler on LinkedIn, who is a community advocate organizer and project manager, and Trenton says this, "For those of us committed to ensuring these tools are used ethically, how can we get involved, especially if one does not come from a law or technology background?"

Kevin De Liban: This is an important aspect: there's a lot of power-building and community organizing that can be done. Some of the AI stuff happens at a very local level, right? Some school districts, actually about half of school districts, use AI to predict which kids might, in the future, commit crime, right? And then target them for law enforcement harassment, or terror in some cases, right?

That's something where you could find out as a citizen. You don't need to be a lawyer. You can do open records requests. You can go to school board meetings. You can ask people, "Hey, is AI being used here, and how does it work?" And if it is and it looks bad, and most of the time it is bad, you can help organize people to get involved.

Another local fight is data centers. These are a big deal, right? They're where all of the data that AI depends on is processed. They're subject to local land use laws, local regulation around utility prices, and other things.

There are a couple of ways, really close to home, to get involved in this: building your own knowledge, building the knowledge of journalists and the public, holding meetings, getting your neighbors involved, and all of that.

It can be daunting, and there's a huge gap in helping people do that right now, which is one of the reasons TechTonic Justice exists. So, as a self-interested plug, please follow us. Please stay in contact as we're building out. It was just me, and my first two employees joined last month, so we're still very much in the building phase. But as we get more established, we want to be working in partnership with folks who want to be engaged around these issues, so please stay up with us.

Challenges and Accountability in AI Development

Michael Krigsman: We have another question now from Twitter, and this is from Chris Peterson, who says, "What agency, or is there just one, would you suggest as the AI ombudsman in the US?" He says also that, "For folks in charge of big AI, 99%+ of us are 'lower income,'" in quotes.

Kevin De Liban: There is no one ombudsperson around AI. And I mean, that's an interesting idea in terms of meaningful accountability because there are ombudspeople in healthcare and in nursing homes and other similar entities. It's a huge gap.

That's part of why we exist, right? To be focused people on the ground. Like, I was a lawyer working with hundreds and hundreds of low-income people to try to fight this stuff. So I think, in the nonprofit ecosystem, there are a few organizations that are trying to build up the capacity to do some of this stuff, to watchdog the use of AI.

Then there are a lot of established organizations that are more focused on the policy level. So there is no one ombudsperson. In terms of the other aspect of the question, I guess I would need more context about what it means that 99% of us are... Maybe it's that we're subject to the big tech?

Michael Krigsman: What he was saying is that it's the billionaire question.

Kevin De Liban: Now is the time to get involved, before these technologies become entrenched as legitimate ways to make decisions about these core aspects of life. Because even though AI a lot of times purports to be, or at least its hype-men purport it to be, this objective way to make decisions, whose objectivity is it, right?

If it's always limiting access to benefits, if it's always making housing or jobs or education harder to get, then it's not really objective. It's the people who are developing or using the AI, it's achieving their ends.

Now is very much the moment for it, I think, because this field is relatively new as a social phenomenon and a social movement. There isn't a lot of the infrastructure that needs to be there to help people get organized and engaged around it, so a lot of my answers are probably unsatisfying.

It's like, well, talk to your community, organize around it, stay up with TechTonic Justice, these kinds of things, because that's what we're trying to build: the infrastructure for people to be able to channel their concerns, their frustrations, their energy towards ensuring something that looks more like justice.

Michael Krigsman: But aren't you, in a way, trying to turn back the clock to a simpler and easier time before we had AI? And AI is not going away, and its growth is going to continue to make incursions into every aspect of decision-making.

Kevin De Liban: That's the overwhelming sense. You talked earlier about the PR budgets of big tech, right? And the overwhelming sense is that it's inevitable. But is it really, right?

Why can't a nurse make a decision about how much home care a disabled person needs? Why is that not viable anymore? Why shouldn't that be the case? Why can't we use technology in a way that supports human-based decision-making rather than essentially making the decision for us with, like, cursory human oversight, if any?

I think those have to be the questions. What is the legitimate use of AI? And then even where the use is legitimate, let's go through all the vetting we talked about earlier, but let's also talk about the bigger-picture questions in terms of what it means for the Earth, right?

We know that AI has environmental consequences. There's debate about how many liters of water each ChatGPT prompt uses or whatever, but, like, we know that it's draining water in certain places where water is scarce. We know that it's responsible or at least correlated with energy price increases. We know it's correlated with the use of non-renewable energies.

You have to factor all these things into the equation in terms of its societal value and its societal cost. And it may be that if we actually make a concerted, reflective effort that accounts for all these externalities, we realize, you know what? This is a net harm. Maybe we shouldn't do it, or we should only do it in these limited circumstances, and I think that's what we have to be engaging in, and that's why I always reject the frame of inevitability.

I'm a practical person. I generally need to solve problems for my low-income clients, and that doesn't always allow me to be pie-in-the-sky principled. But we can be pie-in-the-sky principled while also being practical and start thinking, like, "Is this really worth it? Is the productivity gain really worth all the cost?" And so far, even in the corporate sphere, it hasn't been clear that there are really net productivity gains, particularly when you factor in the human oversight required for its continued use.

I don't think it's inevitable. I think it will be inevitable if we don't, in the next decade or two, really reckon with the implications of it.

Michael Krigsman: My friend, you have an uphill fight. I have to say on this point, you and I have to agree to disagree, because as I look out over the developments of AI and automated decision-making, I cannot see, I cannot fathom, and maybe I reflect a typical technology viewpoint, but I cannot fathom that AI is not going to grow, much as the steam engine influenced every facet of our lives. And you can say the steam engine also caused a lot of problems.

Kevin De Liban: Potentially. I mean, I think in society, it's not like we just accept technology inevitably without restricting its use. I mean, certainly nuclear energy has had significant use restrictions around it and its development and where it can be used and everything else.

Cars have had a lot of restrictions around how they can be used. Everybody, I'm sure, thought Ralph Nader in the '70s was ridiculous for advocating for seat belts, right? And now that's just an accepted facet of the cars. Now, that doesn't take care of all the harms that cars are potentially causing, right? And I'm not saying that it does, but it's one example of movement that way.

All of these things have essentially corporate power and lots of money going against people who seem like they're in the way of inevitability. But we have to be a little bit, what's the word? We have to believe that something more is possible.

Otherwise, we just resign ourselves to accepting the worst version of whatever it is that we're fighting against, and that's not a concession I'm willing to make. Like, I'll fight like hell. Maybe I'll lose, but I bet you that we're better off because of the fight than if nobody fought.

Global Perspectives on AI Ethics and Data Governance

Michael Krigsman: Let's jump to another question, and this is from Ravi Karkara on LinkedIn.

Michael Krigsman: He says, and I should mention he is co-founder and author of @AIforFood Global Initiative, and he says, "How should global stakeholders navigate the ethical challenges and data governance differences posed by China's AI strategy, particularly its state-centric data policies, while promoting international norms for responsible and transparent AI development?" Not sure how much expertise you have in China, but thoughts on a global perspective?

Kevin De Liban: In the global context, the AI discussion becomes even more interesting because there's a lot of people who are pushing AI as a solution to global poverty, right? And inaccessible healthcare, right?

You get the story of, like, people in remote villages in the majority world who are now suddenly able to access medical care or at least knowledge about medical care that they couldn't because they couldn't travel to cities and so forth.

Who am I, sitting here in the US, in Los Angeles, California, to say that that's a bad use of AI? What I care about, though, are a few things. One is the data extraction that comes from expanded use of AI.

Is it fair to be extracting all the data about people's behaviors, who they are, et cetera, et cetera, when you're going to monetize that and when they really don't have meaningful consent, right? Opting into the terms of service on a contract for social media, for example, that's not real consent for most people. So, what's the data extraction relationship?

What's the labor relationship, right? Because just as there's a person who needs to seek healthcare in a village, right? And this is an archetype.

The Costs and Harms of AI Deployment

Kevin De Liban: I'm not trying to use a specific example. There's somebody not too far away who's being paid pennies on the dollar to view really horrific traumatic data and label it, right? There are people being exploited for the supply chain and everything else.

I think as we transition to the global discussion, you're going to have a lot of these use cases of AI for good that are going to be uplifted to justify the continuance of the AI regime, and if we're being reflective people that are serious about the policy implications of this, we need to factor in all the costs.

What are the costs of the data extraction, of the labor exploitation? What are the downstream costs of having other people's lives decided by the not-so-good and not-so-innocent uses of AI?

AI's Role in Government and Society

Michael Krigsman: This is from Lisbeth Shaw, who says, "How does today's AI differ from previous algorithms from the view of social harms, and can AI be part of the solution?" So really, the question is what's unique about AI, and can AI help solve these problems?

Kevin De Liban: A lot of the technologies that are used in government services right now are not the latest generation of AI, LLMs and other things like that. A lot of them are older algorithms based on supervised learning, statistical regression, and these sorts of technologies, and those are really harmful.

I don't think the latest generation of AI has anything to offer in a lot of these contexts. Again, so long as it exists for purposes that essentially limit life opportunities, and in this vacuum of accountability, I don't think the technological sophistication is going to make much of a difference, because they're going to be making the same decisions with the same incentives, right?

One way that we are seeing this now is in the recent developments federally, right, where the administration has implemented AI in, for example, Social Security offices, and it's made Social Security harder to access. It's made people have to wait longer. It's made people not get their benefits, and that is technically the latest generation of AI.

I think that's an example of AI... I always challenge the premise that AI is going to somehow fix existing problems just because the technology is going to get more sophisticated. No. What's going to happen is it's going to make those problems even harder to fight as the technology becomes even more inscrutable, more insulated from public accountability and transparency, and all of those things.

Michael Krigsman: You are not of the school of thought that AI is going to be the great savior.

Kevin De Liban: Oh, God no. Oh, God no. No. If anything, it's the opposite way, right? It's immiserating people, and I think the recent use of AI in the last few months by the administration helps show us this is devastating.

AI is being used to destroy government, destroy government capacity, destroy lives, and violate the law left and right, and everything else. It is a weapon that is uniquely suited, uniquely suited to authoritarianism, right?

Even by its nature, AI is inscrutable. It's sort of just an oracle that tells you what the decision is but doesn't tell you why it's making the decision, doesn't allow you to disagree with it. All of that, that's like an authoritarian approach to thinking and to decision-making. So no. If anything, AI is a greater threat to, I think, our continued existence as a democratic society. It's antithetical to a lot of egalitarian notions. If anything, it's going to make things worse.

Advice for Addressing AI Harms and Advocacy

Michael Krigsman: Can you offer advice to folks who are working in the corporate realm who really have a conscience and who don't want to see the perpetuation of these kinds of harms that you've been describing?

Kevin De Liban: The current uses of AI are destroying its reputation. I think that's brand risk for your companies. I think that's brand risk for AI as a venture. And I think opposing authoritarianism, particularly authoritarianism that's being fueled by AI, is a really critical thing for your long-term survival for various reasons.

Then, on a less global and doomy scale, are all of the things that we're talking about. Push for meaningful regulation. What are you scared of? That's my question. Like, if you've got this great product that's backed by the most sophisticated science we have, what are you scared of?

You should be proud of that. You should be putting that out there and saying, "You know what? Subject us to accountability because our stuff is so strong, so scientifically sound, and produces such clear value for the public that we're willing to embrace being under a microscope." And I don't see that yet, and that's why I even challenge the notion of inevitability in terms of pure efficiency.

There haven't been clear one-sided efficiency gains that have made adoption of AI, even for non-decision-making purposes, universally sensible. Help make AI an electoral issue. Let's start talking about the injustices.

I mean, I think there's going to be some incentive problems there because there's big tech money that funds both parties, and I think there are a lot of people who don't want to be accused of being a Luddite, and there's other incentives there. But I think policymakers have a responsibility to educate the public much more intensely than they currently do about the harms of AI, engage the public, hopefully create a base of people so that there's a balance, a counterbalance to the weight of big tech in these discussions so that you can push for meaningful legislation and regulation and ongoing enforcement and oversight.

I think that's going to be vital to again, sustaining a democratic society, pushing for less inequality, and ultimately having an environment where people have a real chance to thrive.

Michael Krigsman: What advice do you have to individuals who are victims of unjust AI decisions?

Kevin De Liban: This is really hard. A lot of times you don't even know that AI is the reason that you're suffering.

What I would say is contact your local legal aid program if you're hurt by this stuff. Legal aid provides free legal services to folks throughout the country on civil legal matters. Talk to your neighbors. Talk to other people in the same situations. Try to see what's going on and gather information and start engaging in the things that are needed to push back.

If you're in a position to sue, if you're in a position to offer your story to a journalist, take those opportunities to speak for yourself, because there are relatively few stories out there. And the discourse doesn't have the people who are hurt the most, doesn't have the people who are having to live with the consequences of what powerful people do.

Any chance that we have for long-term success is going to depend on you being able to become a leader and to share your story and share your passion and share your injustice so that we can make it better for everybody.

Michael Krigsman: Kevin De Liban, founder of TechTonic Justice. Thank you so much for taking the time to be here, and I'm very grateful to you for sharing a point of view that honestly is quite different from what we usually hear on CXOTalk. So thank you for taking your time and being here with us.

Kevin De Liban: This was really fun, and thank you also to the audience for all the great questions and you for having me, Michael.

Michael Krigsman: Audience, thank you guys. You guys are awesome. I mean, truly that your questions are so thoughtful. You guys are so smart. Before you go, subscribe to our newsletter. Go to cxotalk.com, check out our newsletter, and check us out for our next shows.

We have great shows coming up, and if you're interested again in topics like data bias, all of these issues that we've been discussing, search on the CXOTalk site because we have had lots of perspectives on this from business leaders, from politicians, you name it. So dig into the interviews on cxotalk.com. It's truly a great resource.

Thanks so much everybody. We'll see you again next time, and I hope you have a great day.

Published Date: May 30, 2025

Author: Michael Krigsman

Episode ID: 882