Explore the dual nature of AI in CXOTalk Episode 864. Join Dr. Anthony Scriffignano, Dr. David Bray, and Dr. Anastassia Lauterbach for a discussion of trustworthy vs. deceptive AI, ethical considerations, societal impacts, and actionable strategies for building trust in AI systems.
AI Ethics and Trustworthy AI: Navigating Truth and Deception
In episode 864 of CXOTalk, hosted by Michael Krigsman, the topic of discussion is trustworthy versus deceptive AI, featuring three prominent researchers: Dr. Anastassia Lauterbach, Founder and CEO of AI Entertainment; Dr. David Bray, Distinguished Chair of the Accelerator at the Stimson Center; and Dr. Anthony Scriffignano, Distinguished Fellow at the Stimson Center. The three share practical methods for developing governance frameworks, addressing cybersecurity challenges, and keeping humans involved in automated decision-making.
Key points from this episode for business and technology leaders include:
- Establish accountability for data handling, bias mitigation, and model retraining.
- Balance innovation with compliance to ensure responsible AI deployment.
- Collaborate across legal, technical, and business teams to shape AI strategy and investments.
Episode Highlights
Develop a Governance Approach
- Form a cross-functional group that includes legal, technical, and business leaders to steer AI policies. They should set clear guardrails and accountability measures, so everyone understands roles and responsibilities.
- Establish transparent methods for tracking AI outcomes in data collection, bias mitigation, and model retraining. This ensures any risks or failures are quickly identified and resolved.
Elevate AI Literacy
- Provide employees with short, targeted training sessions that explain AI fundamentals in everyday language. This helps everyone evaluate vendor claims and avoid adopting tools that don’t address real business needs.
- Offer continuous learning paths and clear documentation so team members know how AI models work. These steps reduce overreliance on “black box” solutions and empower more informed decision-making.
Prepare for Evolving Cyber Threats
- Conduct regular security audits and invest in monitoring tools that detect AI-driven attacks. This proactive approach protects corporate infrastructure and sensitive data.
- Form alliances with industry peers and cybersecurity experts to share insights on emerging threats and mitigation strategies. Such collaboration helps you stay ahead of criminals who are constantly adapting AI for malicious purposes.
Balance Innovation and Compliance
- Evaluate new AI opportunities by factoring in revenue potential and regulatory obligations. Early alignment with legal and ethical standards prevents costly remediation later.
- Communicate openly with stakeholders about how AI systems handle data, make decisions, and stay compliant. This builds trust, reduces confusion, and positions the company as a responsible innovator.
Keep Humans in the Loop
- Require that critical decisions made by AI-driven tools undergo human review, particularly when safety or ethical considerations are involved. This process minimizes the risk of unapproved actions or biased outcomes.
- Define specific intervention points where employees can question, refine, or override automated processes. By maintaining human oversight, leaders ensure transparency and safeguard against unintended consequences (see the sketch after this list).
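To make the idea of an intervention point concrete, here is a minimal sketch in Python. It is purely illustrative and not drawn from the episode: the function names, the risk threshold, and the review policy are assumptions, but they show one way to route high-impact automated decisions to a human reviewer before they take effect.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVED = "approved"
    PENDING_REVIEW = "pending_review"


@dataclass
class Decision:
    subject: str           # e.g., a claim or application identifier
    action: str            # what the AI proposes to do
    risk_score: float      # model-estimated risk, 0.0 to 1.0
    safety_critical: bool  # flagged by policy, not by the model itself


# Illustrative policy: anything safety-critical or above a risk
# threshold must pause and wait for a named human reviewer.
RISK_THRESHOLD = 0.7


def requires_human_review(decision: Decision) -> bool:
    return decision.safety_critical or decision.risk_score >= RISK_THRESHOLD


def route_decision(decision: Decision) -> Verdict:
    """Apply the AI's proposal only when policy says no human gate is needed."""
    if requires_human_review(decision):
        # In a real system this would open a review ticket carrying the
        # inputs, model version, and rationale for later audit.
        return Verdict.PENDING_REVIEW
    return Verdict.APPROVED


if __name__ == "__main__":
    routine = Decision("claim-1042", "auto-approve refund", risk_score=0.2, safety_critical=False)
    critical = Decision("patient-77", "adjust medication dosage", risk_score=0.4, safety_critical=True)
    print(route_decision(routine))   # Verdict.APPROVED
    print(route_decision(critical))  # Verdict.PENDING_REVIEW
```

In practice, the review step would record who approved or overrode the proposal and why, so oversight remains traceable after the fact.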
Key Takeaways
Focus on Real Business Needs
Narrow your AI strategy to well-defined problems that impact revenue, customer experience, or operations. Align leaders with the outcomes and metrics that matter most to your organization. This approach helps you avoid chasing hype and accelerates meaningful returns.
Anticipate AI-Driven Fraud and Misinformation
Remain vigilant to new tactics criminals use to exploit large language models and deepfakes. Establish rapid-response plans and cross-functional teams to detect, counter, and minimize reputational damage. By anticipating bad actors’ methods, you protect customers and stay one step ahead of emerging threats.
Foster Cross-Disciplinary Collaboration
Engage teams from marketing, IT, finance, and legal in regular discussions about AI opportunities and risks. Combine diverse viewpoints to ensure your policies and solutions are ethical and feasible. A unified approach strengthens decision-making and drives more balanced outcomes across the organization.
Episode Participants
Dr. Anthony Scriffignano is an internationally recognized data scientist. He has an extensive background in advanced anomaly detection, computational linguistics, and inferential methods, is the primary inventor of over 100 patents worldwide, and is currently a Distinguished Fellow with The Stimson Center.
Dr. David A. Bray is a Distinguished Fellow and Chair of the Accelerator with the Alfred Lee Loomis Innovation Council at the non-partisan Henry L. Stimson Center. Previously he was Chief Information Officer at the Federal Communications Commission and a Senior National Intelligence Service Executive with the U.S. Intelligence Community.
Dr. Anastassia Lauterbach is a Non-Executive Director of Aircision and Freight One. She is also a member of the Advisory Council of Nasdaq and the Diligent Institute. Formerly, she was Non-Executive Director of easyJet PLC, Dun & Bradstreet, and served as Chairwoman of the Board of Directors of Censhare AG.
Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator who has written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.
Transcript
Michael Krigsman: Welcome to CXO Talk episode 864. I'm Michael Krigsman, and today we are exploring how to navigate truth and deception in AI as part of a special series on AI ethics. Our three expert guests are Dr. Anastassia Lauterbach, Dr. David Bray, and Dr. Anthony Scriffignano. We’ve got a lot of doctors in the house.
All of you, welcome to CXO Talk. I'm delighted that you're all here.
Anastassia Lauterbach: Thank you very much.
David Bray: Thank you for having us.
Anthony Scriffignano: Great to be here with you.
Michael Krigsman: Anastassia, let's start with you. Very briefly, tell us about your work.
Anastassia Lauterbach: I spend most of my time with my company. I'm the founder and CEO of AI Entertainment, a company that democratizes knowledge of AI for AI Muggles, not Wizards. This means I try to translate AI, robotics, and quantum computing from difficult into easy-to-understand language.
For example, I write books like this. This is the Romey and Robbie series, and it's a real story about a real cat and his friendship with a robot. Everything in this book is mirrored in the science-behind section. Those are over 100 articles explaining AI in very easy language.
Michael Krigsman: I love it. An AI cat.
Anastassia Lauterbach: Absolutely. He's an influencer. And by the way, did you know that 27% of web traffic is cat-connected or cat-linked content?
Michael Krigsman: Cats are addictive. David, tell us about your work.
David Bray: I try to bring rationality to the world in terms of how technology is changing politics, geopolitics, societies, and the like. I do so both as a Distinguished Fellow with the Stimson Center, as well as Business Executives for National Security. On the side, I have my own S-corp., which tries to help boards and CEOs do the same.
Michael Krigsman: Anthony Scriffignano, welcome back. Tell us about your work.
Anthony Scriffignano: I'm a Distinguished Fellow with the Stimson Center along with David Bray. My philosophy is to try to teach and learn every day. No fair cheating on that; you have to do both.
I've done a lot of work over 45 years with data and data science and AI before it was a thing. I've got multiple patents in my name, lots of invention around things like identity resolution and veracity adjudication—which we'll talk about today—judging the truthiness of things, and also lots of effort around finding people that are doing new bad things, or novel malfeasance. These are all things at the edge of computer science and among the hardest of the hard problems to solve.
Michael Krigsman: Anastassia, why is building trustworthy AI versus deceptive AI important?
Anastassia Lauterbach: Trust is something tremendously important for human psychology. It's one of those constructs paramount for human relationships, so no one will disclose, "Oh, I am building something based on deception."
There are two economic arguments in favor of paying attention to whether AI is considered trustworthy or not. For the first one, I'll open with a quote by Kevin Kelly, a co-founder of Wired. He said that the business of the next 10,000 companies is very easy to predict: you take X, and you add AI to it.
This remark, made around 2016 or 2017, now has a basis in financial figures. For example, by around 2030 the global AI market is expected to grow to 1.85 trillion US dollars. This is tremendous growth: a compound annual growth rate of 33% from 2022 to 2030.
There is another caveat to that. When I looked into the valuation of businesses around 2012, digital assets made up around 12 to 13% of company value. After the COVID pandemic, we are talking about 90%. Ninety! Digital assets are all about data, intellectual property, and the systems and processes to do something with that data. This incredible growth has something to do with the progress in AI, and we have to pay attention.
Anthony Scriffignano: If I were to summarize what Anastassia just said, everybody is leaning into this, but it's a different "this." If you look at what many organizations want, they want to be able to check the box and say, "We're doing AI. We're doing gen AI. We're doing large language models. Look at us, we're great."
That's great. What's the problem you're solving, and by what right do you think that what you're doing is going to help address that problem? In many cases, there's a great argument to be made that you're just accelerating the speed with which you hit the wall. You're helping people understand what you don't know. You're helping people understand more about your questions than about your answers. You're exposing yourself to potentially looking to your customers as if you've focused on that and not this other thing that they want you to focus on.
It's really important to move into this technology. I would never say don't do that. But I would say, just because you see a new tool at Home Depot doesn't mean you go buy it. You have to have something to address, and you also have to understand the unintended impact of those actions you're about to take.
Anastassia Lauterbach: May I quote an early 18th-century English poet, Alexander Pope? He said, "Trust not yourself, but your defects to know." This emphasizes that it's not just about self-reliance but about humility, and I think this humility and thoughtfulness is something we need to learn and apply more as we move into the world with AI.
Anthony Scriffignano: When someone is running headlong into "Let's AI our way out of this problem," I always ask two questions. One is, what do we have to believe to go down that road? And the second is, by what right do you think the answers contained in the AI are going to address that question?
Imagine during COVID if you had ChatGPT, and you said, "What are the most promising approaches to responding to COVID?" Well, we didn't know, right? You would literally get what we love to call hallucination—on steroids—because the LLM doesn't know that the answer isn't in the corpus of data that it's looking at. It's going to give you an answer anyway, and it's going to look awfully good. It's going to look like a human being wrote it, and it's going to be wonderfully articulate but just as good as any other rumor you just heard.
Michael Krigsman: So, can we say that the summary here, to make this very simplistic, is: AI is around us, AI is surrounding us, AI will continue to do so, and therefore, we need to know what's true, and not true? That seems pretty simple.
David Bray: Perhaps instead of calling it AI—artificial intelligence—we should call it alien interactions because these machines do not think like you or me. That's a very real danger when you read articles or, for whatever reason, begin to anthropomorphize these machines.
Statistically, the human brain uses between 15 and 20 watts to do everything it does. These models can consume upwards of 2,000 or 5,000 watts, or more. When we hear about companies thinking about nuclear power plants to power these alien interactions we call AI, that might tell you this has simply been designed to look like it's thinking like a human when in fact it's not.
That also gets to the deeper question—if you remember the Turing test—it was intentionally trying to have a machine deceive a human into thinking it was human. We've been wildly successful at rolling out technologies that will now convince you that the responses they're giving—whether in text, audio, or video—appear to be human-like.
But that's a problem because, as Anthony mentioned in his background, fraudsters and scammers would love to use this technology. We've probably all seen, in the last two years, a step change in scam phishing emails. They're now at the point where you can no longer use misspellings as a tell for whether it's a phishing email or not, and they're often referencing another family member or friend because they've combed your social media network.
They are unfortunately really successful at the Turing test. The whole Turing test was, again, a machine trying to convince you it was human. That might not be the best tool for society at large.
Anthony Scriffignano: There's another little nuance happening right now where we now have to prove to the machine that we're human, right? It's the opposite of that. With CAPTCHA, the way it was originally designed, OCR was kind of not very good, so if I tilt the letters and put them together, then OCR doesn't work and the human can read it. Well, then people started to use the responses from CAPTCHA to train better OCR. Lo and behold, all of a sudden, the computer gets just as good as we are at reading kind of lousy text.
Then they start introducing pictures. Introducing pictures is not an answer. It's not the end. It's just a way of kicking that ball a little further down the road. I was listening to a talk recently with one of the people who actually invented this technology, and he was saying, number one, he wished he never did, and number two, maybe in the future you'll have to do things like walk three steps with your phone, or jiggle your phone, or tilt your phone down, or do something. I can just imagine us: "Do 10 pushups!" At what point do I have to stop proving that I'm human? We don't have a good answer to this right now. It's not because the AI is thinking; it's because it's very good at watching us behave.
Anastassia Lauterbach: That's so true, and actually we need to establish a language to talk about AIs and machines. We are still using human terms to describe what we expect. In social science, for example, trust is described as the belief that another person will do what is expected. Now it's not a person, it's a machine or piece of code. What is really expected?
If I use my vacuum cleaner, I expect it to do a certain task, but this is very different from an LLM. Because I am in AI and I teach AI and cybersecurity at the university, I know it has nothing to do with language; it's a statistical stream of tokens. A token isn't even a word. If we take English, it's maybe two-thirds of an English word statistically. Humans expect that it understands, and it simply doesn't, and it's not even based on a world model.
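A quick way to see Lauterbach's point that a token is not a word is to run a sentence through a tokenizer. The sketch below assumes the open-source tiktoken package and a particular encoding name chosen for illustration; any subword tokenizer would show the same effect: the token count exceeds the word count, and many tokens are word fragments.

```python
# Minimal sketch: comparing word counts to token counts.
# Assumes the open-source `tiktoken` package (pip install tiktoken);
# the encoding name below is an assumption chosen for illustration.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Trustworthiness is not a property a language model can introspect."
words = text.split()
token_ids = enc.encode(text)

print(f"{len(words)} words vs. {len(token_ids)} tokens")
# Decoding token ids individually shows fragments such as 'Trust', 'worth',
# 'iness' rather than whole words; the model predicts these fragments statistically.
print([enc.decode([t]) for t in token_ids])
```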
David Bray: A couple of things to add to the nuances. First, let's take the 30,000-foot view before we dive in. Back in the 1920s, when this disruptive technology called radio came out, there were pundits who were saying something similar to what we're seeing with AI nowadays: that this was going to cause world peace, it was going to create greater understanding, that world leaders would talk to each other regularly on radio.
Then fast forward to 1933, about six or seven years later, those same pundits were saying that it was going to be the end of society as we know it, the dictator's toolkit. I raise that because right now there are a lot of breathless articles either saying AI will save us all and that good times are upon us, or it's the end of society. We're probably doing the same thing to the technology that we did to radio, and we're missing that it's just a tool.
The second thing I would say is, I think we are often using AI as a stalking horse or a scapegoat for things that are deeper in society. We ask questions like, "How do you know if AI is ethical?" But how do I know if an organization is ethical? How do I know if an organization is not biased or is not making bad decisions or hallucinating?
What this may point to is we need to do a better job of figuring that out, whether it's a machine, organization, or person. A slightly different definition of trust, from the background I come from, is the willingness to be vulnerable to the actions of an actor you cannot directly control. That actor could be an individual, an organization, a government, or a machine.
What has been shown is we humans are willing to be vulnerable to those actions we can't control if we perceive benevolence, competence, and integrity. Exactly to Anastassia's point and Anthony's point, there is no way of assessing the benevolence of an LLM. There is no way to assess the competence—these things are just doing very fancy pattern matching—and if it's not in the training data, it will give you something that is made up. On integrity, you say to them, "That's not right," and the responses will be very cheerful in saying, "You're right," and if you say that's not right again, "You're right again."
Integrity, competence, and benevolence aren't there, but let's step back and say, how do people assess that given that we are now connected digitally? How do we assess that CXO Talk is benevolent, competent, or has integrity? How do we do that for governments, for the world at large? Maybe this is the challenge hitting free societies in particular: we lack the ability to adjudicate at speed.
Anthony Scriffignano: A lot of the terms that you're using, David, are also epistemologically fuzzy. Benevolent might be benevolent from your perspective and not from my perspective. We tend to use terms like "for good." Well, good for whom?
I have a lot of background in emergency medicine. One of the things they always tell you is expand the circle. If you're going to make a complicated decision, get advice from other people, and make sure that those other people bring in different perspectives. That's great until something's on fire or until somebody's not breathing, and then somebody has to make a decision. It might be the wrong one, but you have to make that decision with some degree of immediacy.
After the fact, everybody will swoop in and judge you: you should have done this, you should have done that, why didn't you do this, how didn't you know about this? At some point, AI is going to be making decisions for humans because the argument will be that AI can make a decision before the human decision becomes irrelevant. "Do I cauterize this vessel or that vessel? Make a decision right now." Do you know all the vasculature in the brain? Let the AI do it. I don't know, but I could see how we can get there.
I could certainly see how we can get there in a nation-state, military-industrial context. I can see how we can get there in space; it takes so long for a signal to get from one place to another that you need autonomy for the thing to be able to land or do whatever it's trying to do. But as we give up this concept of agency—giving the right to the machine to make the decision on behalf of the human—the values of the human have to be imposed into that code. We are not rules-based creatures. We are very squishy about what we do and we make decisions and we change our opinion based on new facts and new modes and new ways of learning all the time. AI is horrible at that right now.
Michael Krigsman: Please subscribe to our newsletter, and subscribe to our YouTube channel. Check out CXOTalk.com. We have great shows coming up.
We have a question from Twitter, from Arslan Khan, who says, "We know AI can be trustworthy or not trustworthy based on many factors. How can normal, non-technical people know if AI is affecting their lives, if it is ethical, and is there an opt-out for AI?" Often, there is not. Then let's also talk about the role of software companies.
Anastassia Lauterbach: AI is surrounding us. If you have a smartphone, you are immersed in AI. We can really discuss all those technologies like LLMs on smartphones, but even how we see our calendar invites and how it correlates with tweets or notifications from LinkedIn—it's immersed in AI. Navigation technologies are connected to automation and to AI.
We just need to think about how we should behave. Unfortunately, this is where AI differs from radio and every technology that preceded it: we must expose ourselves and learn. AI literacy and technology literacy are a must from an early age. It's like basic financial literacy. Obviously, there will always be treasury experts, taxation experts, and valuation experts, but everyone must understand that there's revenue, there's cost, and one minus the other produces some kind of result.
I love comparing AI and dealing with AI to the kitchen, maybe because I'm a woman. Obviously, you might be a great chef even if you don't work in haute cuisine, and you can produce fantastic stuff, perform on some shows, and that might be the ultimate level of sophistication. But everyone knows how to fry an egg. With AI, we must learn how to fry an egg.
This is basically why I teach kids—and clever parents—to think about these concepts: What is an LLM? What does it do? Who is Wolfram? Why should we know about him a little bit? What does it mean to have a robot in our house? What is the difference between a robot and, let's say, a vacuum cleaner?
I think we must embrace this knowledge and evolve as communities, ask difficult questions, and understand that there's a purpose behind every single application and service. You can't go through a jungle; you must learn. Then you will have some openness to maybe be more precise on what you need.
Finally, I love the quote of Pablo Picasso: "Machines are quite stupid, they just produce answers." I think it's up to humans to ask questions. We need to teach humans how to ask those questions. This is a fabulous opportunity that is opening up in front of all of us, to learn more about humanity and what it means in a world partly dominated by AI.
David Bray: In Western societies, if you apply for a job with a non-Western name, an application with a Western name is three times more likely to be selected. That's not an AI problem whatsoever. Everything we've been talking about here is how we trust the machine. No. We have deeper systemic issues.
We ask, "How do we ask these questions of AI?" without pausing to say, "How do we actually do better as humans?" I really appreciate that. I would also say that as we go forward, it's thinking about how we've had these challenges before. Most of us are not medical doctors, yet we have to go see a medical doctor. How do you know if that medical doctor is going to prescribe an approach that is in your best interest?
The way we solved this in the past—and unfortunately the internet kind of squashed this—was we had professional societies requiring members to have knowledge, experience, and then to pledge an ethical oath. If at any point the member had a concern—because most people couldn't adjudicate if that doctor acted in your best interest—it was other medical doctors who would decide to either say, "Yes, they did the right thing," or "No, we're going to censure them; we're going to remove their license."
Unfortunately, what happened with the internet is it made everyone an armchair quarterback. I worry that we will become so fixated on "We must understand AI" without pausing and saying, "Right now, there are plenty of organizations with issues internally, with bad decision-making, biased decision-making, as well as externally in how they're interacting with employees or customers."
I think we need a deeper approach, not just about the machine.
Michael Krigsman: Implicit in what you have all been saying is this idea that benevolence is our fundamental goal, and we rely on the benevolent human spirit, and we hope that—Anthony's shaking his head—well, this is my interpretation. Let me ask my question anyway, before it's edited, before it comes out of my mouth. Anthony's trying to edit it in my brain.
So my question is this: If we think about the software companies—over my career, I've consulted with over 100 software companies, literally. As far as I can see, the goal of software companies is innovation with the goal of making money. Yes, software companies talk about wanting to change the world, and I'm sure that's also true, but the bottom line is it's about money. When we talk about trustworthy AI versus deceptive AI, how do we overlay this software company issue on top of it? Anastassia, thoughts on this?
Anastassia Lauterbach: Thirty years ago, I was working at Munich Reinsurance as a liability underwriter. Even now, there is no such thing as software liability within the product liability category. The European Union has updated its Product Liability Directive, but the rules will be introduced only in December 2026. In the United States, to my knowledge—and please correct me if I'm wrong—there are so far no rules placing software vendors under the liability regimes that apply to, for example, automotive companies or consumer electronics.
We need to somehow calibrate what we are talking about. In July this year, a bug was pushed out in an overnight software update from a cybersecurity vendor, and Windows machines went down at airports around the world. I think around 5,000 flights were either canceled or delayed globally. In the US, the financial damage known today is 5.4 billion US dollars. There's a legal process going on, but so far it's unlikely that any damages will be paid, because it's very difficult to establish negligence or misconduct, and tort law largely excludes software from these liability rules. And if there are multiple parties involved—who did what and how—this is very, very complex.
If we were dealing with the food industry or the pharmaceutical industry, the world would look completely different. Software is eating the world—Marc Andreessen said it in August 2011—but there's no product liability for software vendors. Now we are going into AI. Fantastic. The European Union is implementing the European AI Act, so all vendors must comply. It has really killed the AI ecosystem in Europe, by the way. I represent myself; I don't represent any big brand. I was really against the European AI Act because it did not solve one single risk in AI. It imposed a huge amount of cost on AI companies and investors. Venture capitalists are saying they're keeping their fingers off the European ecosystem because it's all too costly.
We now need to balance: what do we want? At the same time, five European countries—Italy, Belgium, Austria, Germany, and the Netherlands—are moving into mass retirement this decade. If you have something like that, you must think about automation and AI. Yet there's no policy, no thinking on how to offset the inevitable decline in GDP from this mass retirement. Meanwhile, we have the European AI Act. So Europe shot itself in the foot. This is my interpretation, with all the caveats that it's complicated, that there are issues, that thoughtfulness is needed, and all of the above.
Anthony Scriffignano: I'm in violent agreement with your position on the dangers of regulation. I'm a scuba diver—the regulator is pretty important, right? You die without the regulator. But if you overregulate, then you can't breathe.
There's a tendency right now to take whatever pre-existing laws there might be—GDPR in this case—and write something that looks like that for AI. Well, it's not the same thing. It's not even close to the same thing. You wind up with this tapestry of complexity that makes it very hard to take a step forward.
Now, I'm not saying that was the right or wrong thing to do because, like you, I work with those people all the time, but I'm not in that space, and they have to do something, I get it.
The software issue that you talked about—the CrowdStrike thing—was delivered kind of by Microsoft, right? Because most people had Microsoft. If you unpeel the onion, who produced the blank file that got distributed with the update that got processed by the software that was acting like a driver that was allowed by the kernel of the operating system? This is a nightmare to say, "Where is the smoke detector that should have figured this out?" The world learned a lot about how its own security was working that day.
That was a very telling moment for those in cybersecurity. They understood it relatively quickly and it wasn't a big deal to fix, but oh my gosh, the impact of it was epic. That also tells a story about how connected the world is right now and how much we have to be careful as we move forward.
That is a counter-argument for some regulation, to say you can't just do whatever you want and apologize later because "it had never happened before." We are in a very difficult time right now. In epistemology, they sometimes refer to these as critical incidents or critical moments, where there's a point of inflection going on. It's hard to see when you're part of it, but we are definitely part of it, and we're making little decisions right now that are going to have a very big impact later, with imperfect information.
These companies—your question started with something about what companies can do—companies are trying to serve their shareholders, their customers, their employees, and their future market, and it's impossible to serve all of them at the same time. Absolutely impossible. If you want to maximize shareholder value, you wind up doing short-term things that often kill your company. If you want to maximize employee satisfaction and engagement, then you wind up doing things your customers don't care about. If you do everything your customers want, then you don't make any money because your margins go to zero and your shareholders get upset. So it's a bunch of ways to kill yourself.
Now throw AI into this mix, and then throw all this regulation into the mix, and you get where we are right now in these boardrooms. This is not an easy place to be.
Michael Krigsman: Where are we?
David Bray: We are very similar to how we were when the automobile first came out. It's worth noting how long it took before seatbelts arrived: more than half a century. I'm hoping with AI we don't have that, but to think we're going to immediately get it right within the first six to 12 months is probably not going to happen.
Let me zoom in on a very specific point I think Anthony brought up earlier about how AI works and doesn't. What we should unpack is that not all AI is created equal. We've been talking about generative AI, but there are other forms of AI—everything from expert rule-based systems to decision support systems to computer vision. The way you're going to trust computer vision, which is deterministic and actually much more trustworthy and not prone to hallucinations, is dramatically different from how you trust generative AI.
I raise that because if there's anything that needs to be talked about at the boardroom level, it's first understanding the different tools we refer to as AI. It's not monolithic, and you need to understand whether you're reaching for a hammer or a screwdriver. The trouble with writing regulations is that we treat AI as if it's monolithic.
Another thing that's baffling is how we somehow think there's going to be one AI regulation to rule them all. When IT systems came online, we didn't write one "IT regulation" for health and assume it was just as good for the defense sector or for recommendation systems. There are existing laws in the United States—HIPAA for health, the Bank Secrecy Act for banks. I think the more pragmatic approach is to go back to the existing rules that were written for both human and IT systems and ask, "Where do they fail with AI, maybe because of its speed or scope?" and upgrade those existing laws rather than trying to write one new policy.
I'll give a very specific example. Right now, in the United States—and it's not just the United States—there's a thing called Health Level Seven. It's the international standard for interoperability across medical systems. There's a standard for tracking the provenance of a decision. You record in your medical record when a physician made a decision and on what data. You don't record in the medical record when an AI does. The number one usage right now in medical settings is that physicians love to type or dictate three to four short bullets, then say, "Give me two to three pages of physician notes that I will then put into the file."
Aside from the fact that the company or organization delivering your healthcare should know whether those physician notes were written by the physician or by an AI—and if so, which one—a patient should know that too. There's a very real risk that if that AI output is later read back in by the same AI, there's a thing called model destabilization, sometimes referred to as AI self-cannibalization, where the model overregresses and starts to collapse on itself and make bad decisions.
I worry, Michael, that the question about deceptive versus trustworthy AI again goes back to human organizations and what humans do. We may see lawsuits three to five years from now because companies didn't at least take the first step of tagging and labeling when a decision was made by an AI or an AI plus a human, and then later, if that was fed back into a machine, not doing the appropriate actions to avoid model destabilization.
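Bray's suggestion of tagging AI-generated notes and keeping them out of later training runs can be sketched in a few lines. This is a hypothetical illustration, not an HL7 implementation: the field names and the build_training_corpus helper are invented for the example. The point is simply that provenance is recorded per record and synthetic records are excluded, or at least down-weighted, before retraining, which is one simple guard against the feedback loop he describes.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ClinicalNote:
    patient_id: str
    text: str
    author: str                         # e.g., "dr_smith" or "model:assistant" (illustrative values)
    generated_by_ai: bool               # provenance flag recorded at write time
    model_version: Optional[str] = None


def build_training_corpus(notes: List[ClinicalNote]) -> List[str]:
    """Keep only human-authored notes for retraining.

    Excluding (or down-weighting) synthetic notes is one simple guard
    against the model-collapse feedback loop described above, where a
    model is repeatedly retrained on its own output.
    """
    return [n.text for n in notes if not n.generated_by_ai]


notes = [
    ClinicalNote("p1", "Patient reports mild cough, no fever.", "dr_smith", False),
    ClinicalNote("p1", "Expanded three bullets into two pages of notes...",
                 "model:assistant", True, model_version="2024-12"),
]

corpus = build_training_corpus(notes)
print(len(corpus))  # 1 -- only the human-authored note survives
```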
Anastassia Lauterbach: I really love this example from the medical industry, and it might extend to other industries such as food. I have two further suggestions for regulators and for those serving on boards.
One is—and by the way, this is not an ultimate solution—we know that today more money is spent on capability research than on transparency, safety, or responsibility. Something like 97% of all papers are published on capabilities and only 3% on XAI, or explainable AI. We might motivate companies to spend more money on that, or to support a foundation or university to invest in that. That could be motivational, not just punitive, if we go into regulation.
The second is cybersecurity. AI is a tool not just in the hands of good people; criminals are using it every single minute. We need to rethink how we approach cybersecurity and regulate cybersecurity issues—like exposure and how to quantify it, or what is the financial figure. This is tremendously important, and very few people are talking about it.
Anthony Scriffignano: You mentioned quantum technologies early on, and certainly we are on the verge of the point where everything that was encrypted will now be readable, even the data that was stolen. We are on the verge of having quantum decryption before we have a really good way to do better quantum encryption, other than scattering all the keys all over the place.
The world is going to get much more complicated. We have to start thinking—I'm not going to say "start thinking," because there are some fine folks out there thinking about this—about novel cyber malfeasance. The bad things that happen today—data exfiltration, trojans, malware—we need to be very good at fixing them all the time. That's necessary but not sufficient.
I spend a fair amount of time—and David, you know this—thinking about what future bad guys might be able to do in less than a generation with technology that I suspect will be available by then. It's not hard for me to come up with an example: we've looked at using flocking and swarming algorithms, like the way flocks of birds can bifurcate and swarms can get bigger and bigger. If you apply that to botnet attacks, and imagine that the botnet swarm or flock will get smarter about how it's failing and therefore get better at attacking you, that's horrific to imagine. I know how to write code that would do that, and who am I?
If we can do that sort of thing at an unimaginable scale in the very near future, and we're just getting really good at fixing yesterday's malware, we are in for a world of hurt in that future you're talking about. The good news is, there are some fine folks thinking about that. The even better news is, to your point, we need to incent them to think harder and faster about that.
Michael Krigsman: Anastassia made a very interesting point. She said that investment is far higher for technology capability—AI capability—than it is for compliance or cybersecurity. Human nature tells us innovation is fun, and compliance is not, and innovation makes money while compliance costs money. Yes, there's a cost to society, but it's not to me if I'm a software company, unless of course there's a data breach. Part of this is definitely trying to go against the tide of human nature from that standpoint.
Anthony Scriffignano: This is the argument for smoke detectors and fire alarms and life preservers. I have life preservers on my boat because I'm required to have them on my boat. If I have a boat, I want the best life preserver that's going to preserve my life. Most people will not think that way, and if I'm going to put a ferry out there with 2,000 of these things, maybe I start thinking about how much they cost. "What's the most cost-effective deployment of required life-saving equipment?" rather than, "How do I save David Bray's life because we need more people like him?"
David Bray: Well—and maybe if I can jump on that question. I think we've been talking about cybersecurity, but you can create a whole lot of damage without ever breaching a system. That's why trust versus deception is so important.
With 10 minutes of compute time from WormGPT—the dark-side cousin of ChatGPT—some plugins, and data stolen from healthcare breaches (many of us may have been affected by one in the last year), you can train a model and create a million realistic-looking medical records, complete with chest x-rays and physician's notes, each supporting a claim of around $250. That's below the fraud detection threshold for several services in the United States, because it's more expensive to adjudicate a claim than to pay it out. That's one million records for upper respiratory infection.
What this points to, and this is what Anthony is talking about, is that the solution is to incentivize those who are doing innovation to find innovative solutions to either adjudicate faster or adjudicate with more veracity, whatever it might be—at scale. AI is basically repeating an old pattern: remember, when the Xerox copier first came out, we had to upgrade our dollar bills because people were counterfeiting them, and we did. The challenge, and a unique challenge with AI, is that there are generative adversarial networks, or GANs. The moment you create a solution that's really good at filtering valid from invalid, a bad actor will use a GAN to get good at fooling it. It's a predator-prey relationship.
But I do think this points to how we have to figure out ways to incentivize market-based solutions. Central planning won't get us there. You have to figure out a way to shine a spotlight on it. The speed at which this is moving is almost like what we're seeing in Ukraine, where every six to nine months there's a generational leap in how drones are used in the conflict. The only difference is, this is behind the scenes—AI spoofing. How do you know if that image is real, that video is real, or if the person you're talking to is real?
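The predator-prey relationship Bray mentions is literally how a GAN is trained: a generator learns to fool a discriminator, and the discriminator learns to catch it, each improving against the other. The sketch below is a toy, assumption-laden PyTorch example on one-dimensional data; the network sizes, learning rates, and the Gaussian "real" distribution are arbitrary choices for illustration, not anything production-grade.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a Gaussian the generator must imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator (outputs logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Discriminator step: learn to separate real samples from generated ones.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: learn to make the discriminator call fakes "real".
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated distribution's mean should drift toward the real mean (~2.0).
print("generated mean:", G(torch.randn(1000, 8)).mean().item())
```

The same alternating dynamic is what makes detection an arms race: whatever filter the defender fields becomes, in effect, the discriminator that the attacker's generator is trained against.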
Michael Krigsman: This is a perfect time to subscribe to the CXO Talk newsletter. Do that now so we can notify you of upcoming shows.
Huai Wang on LinkedIn says that data collection right now is being affected not only by linguistic transliteration and translation nuances; we're also seeing complex data patterns as technology languages and platforms change how data is stored. What do you recommend for navigating and preparing for these increased complexities, given how frequently we collect data?
Before you answer, I have a request to the audience: you've got a dumb moderator, so just keep your questions short, okay?
Anthony Scriffignano: I do like a brilliant question. I'm going to paraphrase the question: There's lots of languages out there, and there are lots of people speaking those languages, and we're trying to teach AI to suck in data from them. A lot of times, the systems that are showing you that language are not the way it was originally written.
Most of us have people in our social network where they write something in their language, you see what they allegedly said, and there's a translate button on the bottom. It's been translated, or I'll say transformed. There's a lot of data out there that's gone through linguistic transformation, intentionally or unintentionally, either at the time of writing or subsequent to the time of writing. That introduces all kinds of nuance into the language we consume with our AI systems. That's going to have an impact.
There are two different arguments, and there's no winning argument yet. The Turing argument says, "If I get enough examples of that language, I'll be fine. Just give me millions and billions of articulations, and no matter how complicated it is, I'll regress around it and figure it out." The contrary argument is that when we speak a language, we change it. Nuance gets introduced, such as sarcasm, neologisms, borrowing words from foreign languages. All these things confound our ability to understand.
Right now, we're speaking in a certain way—quasi-academic, intelligent, trying to speak in complete sentences. We know there's going to be a transcript later, so we don't want to look like idiots, right? We're trying to say things in a certain way. If we talk over an adult beverage, it might come out differently.
When you introduce language on top of that, how does it perturb your ability to understand? Is that person upset? Is that person lying? One of these people is an imposter. Which one is it? Right now, any one of those questions could earn you an honorary PhD in computational linguistics by tackling it really well, because I'm sorry to tell you, but the technology hasn't gotten there yet. It's getting better.
There are things, as Huai mentioned, that you can do to recognize commonly occurring graphemes, or to recognize that some misspellings can be intentional or accidental. Every time I write something to Anastassia, my autocorrect changes the spelling of her name because I know someone else who spells it differently. If I don't catch it, I send it spelled wrong. She knows this now, so hopefully she's not offended. These things are happening all the time, and it's going to get worse.
We have to get better at not just accepting that computational linguistics is "Give me the dictionary," because there are low-context languages with no single good reference. Or there are languages where the only good reference is a religious text, and that's not how people speak. So now you've just consumed the Bible and you're trying to speak English. We don't speak like that anymore. So there's the low-context problem. There's the multilingual disambiguation problem where people are writing in more than one language. Then there's the language transformation problem, where people are changing what was written and you're reading what was transformed. All of those are the Wild West right now, and you can make a big impact working in any one of those fields.
Michael Krigsman: Greg Walters on LinkedIn says, "We've lost the EU." I think "alien" is the best way to view AI. In that vein, shouldn't we look at AI through a brand-new lens, without previous anchors or history or KPIs or best practices? AI does not fit only in a vertical or horizontal. It is everything everywhere all at once. There are no experts. What is a new perspective—AI as a mirror?
David Bray: Back in 2001, I was responding to 9/11 and the anthrax events. There were conspiracy theories, but none could really take root. Some said we'd done it to ourselves, a false flag, but they didn't take root. Fast forward to 2009: I'm on the ground in Afghanistan, and there's an event in western Afghanistan where the Taliban took a valid photo of a US fighter jet flying overhead, then took a photo of a propane tank detonating and unfortunately killing innocent Afghans. Both were real photos, but they were out of context. They went on social media and claimed, "US airstrike kills innocent Afghans."
The Department of Defense said, "We're investigating," and it took more than four and a half weeks to figure out what happened. During that time, of course, the news media was blaming the US and our own ambassador apologized. It took four and a half weeks to put that characterization of what really happened versus the deceptive narrative to bed.
As you know, Michael, back in 2017, when I was at the Federal Communications Commission, there was an event where we were getting 6,000, 7,000, 8,000 comments a minute at 4:00 AM, 5:00 AM, 6:00 AM. We were told by our lawyers that we couldn't test for bots—we couldn't use invisible means because that would be seen as surveillance—and we couldn't block what was perceived as spam, which was 100 comments per minute from the same IP address. It took more than four and a half years for the New York Attorney General to eventually adjudicate and say that, of the 23 million comments we got, 18 million were politically manufactured—9 million from one side of the aisle, 9 million from the other.
Over the last two decades, we get longer and longer tails for the more valid, authentic, accurate characterization of scenarios to get out. We went from them not taking root in 2001, to taking four and a half weeks in 2009, to four and a half years in 2017. We are unintentionally, through no one technology—internet, smartphone, generative AI—laying the seeds where it used to be the adage that a lie can go halfway around the world before truth gets on its sneakers. I would say now a lie can get to the other end of the solar system before truth gets on its sneakers.
This is why I'd submit the solutions have to be technology-neutral and also have to account for humans doing things regardless of any one machine. I still would say, at the end of the day, it's going to be sector-specific, because the severity of doing it for a recommendation on a website is much different than making a decision about your health, which is much different than making a decision on a battlefield.
Michael Krigsman: Anastassia, there are two questions, and I'll combine them and direct them to you. Elizabeth Shaw says there are strong agendas that power deceptive and malicious use of AI, such as greed, power, etc. How do you encourage the ethical use of AI as opposed to the deceptive use?
Then Arslan Khan says there are many underlying issues—security, data bias, culture, jobs—and AI can help or make them worse. He's asking fundamentally the same thing: how do we incentivize organizations to take an ethical as opposed to a deceptive approach? I'm paraphrasing these two questions, but that's fundamentally what they are.
Anastassia Lauterbach: Start by incentivizing the prospective customer to be more aware and educated about what's out there, because sometimes the issue is not the technology but the business model and the "why" behind a certain construct, for example, how a social network is configured.
I think reducing confusion and demystifying AI is a noble cause. In my eyes, there are three fundamental buckets of risk in AI. One bucket is everything having to do with design. For example, we are talking about LLMs, and in my view, an LLM will always hallucinate. You might reduce certain behaviors, but hallucination will remain because—I'm not going to go into the theory of computation now—it will hallucinate, period. We're talking about AIs as if they're new, but the roots of the technology are in the '40s, '50s, and '60s of the last century. We need a new wave of architectural design to improve. A risk in this bucket might come from the system's construction or from human mistakes; this is always possible.
Then there is malicious intent. We talked about cybersecurity. Criminals will apply AI for their dark purposes, and some companies might want to seduce their customers into certain thinking. The more educated the customer is, the better. Sometimes we are talking about scalable problems, and sometimes it's just basic stupidity on the part of a customer. When I work with companies and review their AI portfolio, sometimes I ask, "Why are you spending this money on that?" The issue is that the customer believed the sales personnel of a certain vendor. The vendor consists of 85% salespeople and only 15% engineers. Obviously, this vendor cannot execute. I'm not even getting into ethical and trustworthy AI, just the basic configuration. The more knowledge the customer has, the better the outcome.
Last but not least—and this will be very specific to an industry or business function—is the so-called human in the loop. This is the third bucket of risks. Define how much human and in what loop. When are we going to introduce supervision and human feedback? Once again, it's a balancing act, and it's a play between whoever is building and introducing a system and whoever is the customer.
It might not be a very simple answer, but in my view it comes down to education and asking good questions: being capable of reviewing the vendor portfolio and writing down what types of problems we're solving—do we need this, and at what cost? AI might appear new, but ROI is still ROI. If my rule of thumb is that 25% of my revenue line goes to compute and 15% to cleansing data—if I'm running things well—then you have your economics, and you need to keep them in mind before deciding whether to go into a certain AI implementation or not. What will it cost you? Do you really need to automate here and introduce an AI agent, or will a human do just fine?
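Lauterbach's rule of thumb reduces to simple arithmetic. In the hypothetical sketch below, the 25% compute and 15% data-cleansing shares come from her remark, while the revenue figure, the other-costs parameter, and the function itself are invented purely to show the kind of back-of-the-envelope check she is describing.

```python
def ai_project_margin(expected_revenue: float,
                      compute_share: float = 0.25,        # from the rule of thumb above
                      data_cleansing_share: float = 0.15,  # from the rule of thumb above
                      other_costs: float = 0.0) -> float:
    """Rough margin left after rule-of-thumb AI running costs."""
    running_costs = expected_revenue * (compute_share + data_cleansing_share)
    return expected_revenue - running_costs - other_costs


# Hypothetical example: an AI feature expected to add $2M in revenue,
# with $600k of integration and staffing costs on top.
print(ai_project_margin(2_000_000, other_costs=600_000))  # 600000.0
```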
Michael Krigsman: Let me just ask each of you very briefly for a final thought. David, final thoughts on this topic?
David Bray: This is not a tech-specific issue; this is a society and company-level issue. The solutions will look not at the tech specifically, but more broadly. Recognizing we're wrapping up: data, data, data. We didn't talk at all about data governance, data cleaning—I know Anastassia just briefly touched on it. One way you could do everything right in ML or AI and completely fail is if your data is bad.
At the end of the day, the intent of pursuing more trustworthy and less deceptive AI begins first and foremost by putting yourself in the shoes of your different stakeholders and making sure whatever you do—tech or not, data or not—you are thinking about them and how you deploy things.
Michael Krigsman: Anthony, final thoughts?
Anthony Scriffignano: A couple of things: Beware shiny objects. There will be more and more shiny objects. Right now, it's LLMs, and that's great, but what problem are you solving? Always step back and ask, "What do I have to believe? What's the problem I'm solving?" Don't get distracted by the shiny object because there's another one coming right behind it.
The second thing is, to David's point about cleaning the data, regulation will happen with or without you. So get involved. Make sure that regulators understand the unintended impact of some of the things they're considering. If we just wait for these regulations to come out, it's too late.
Third, be humble. You can't solve this alone. Get help. There's lots of folks out there with lots of expertise. We should make new mistakes. We do that by involving other people in our decision process and being humble enough to realize that it doesn't matter how smart your smartest people are—you should bring in people who disagree with you.
Michael Krigsman: Anastassia, it looks like you're going to get the last word here. Final thoughts?
Anastassia Lauterbach: We might be in technology and AI, but we are all in the people business ultimately. It's really about human leadership and thinking and humility. I don't encourage people to wait for some decision from Alphabet or some new regulation from Washington DC or Brussels. I would really encourage local communities to look into the local ecosystems and see what kinds of colleges or universities are out there interested in AI, offering courses. Which schools are interested? What could you do for those kids? What might be the local startups doing? Maybe do startup breakfasts so they explain what they're doing and how they're hiring. Maybe some incumbent companies are adopting one or another type of technology.
I encourage a bottom-up movement rather than waiting for some big guy somewhere to decide for you, because that's how you learn, and that's how you create a dialog. Ultimately, progress will happen through dialog.
Michael Krigsman: We are dealing, as all three have said, with human rather than specifically technology issues. The question then becomes how do we harness and focus this human energy for the ethical use of AI? I think, with many things, it will require a carrot and a stick. There's no simple solution here. As complex as the technology is, the human aspects are always harder.
David Bray: So, a carrot and a stick, Michael—are you building a snowman?
Michael Krigsman: You know, I'd love to build a snowman. It's actually snowing right now here in Boston. Wait, are you saying, "Do you want to build a snowman?" Sorry, I'm sorry.
David Bray: Okay, sorry, Michael. Back to you.
Michael Krigsman: Anyway, a huge thank you to our guests, Dr. Anastassia Lauterbach, Dr. David Bray, and Dr. Anthony Scriffignano. Thank you all so much for being here. I'm very grateful to you all.
A huge thank you to everybody who watched today, and especially you folks who asked such awesome questions. I always say this: you guys in the audience are amazing. Before you go, please subscribe to our newsletter and subscribe to our YouTube channel. Check out CXOTalk.com. We have great shows coming up, and our next show will be back at the usual time of 1:00 Eastern.
Thanks so much, everybody, and I hope you have a great day.
Published Date: Dec 20, 2024
Author: Michael Krigsman
Episode ID: 864