Building Ethical AI: The 7 Principles of Responsible Technology, with Juliette Powell

On CXOTalk episode 826, AI ethics expert Juliette Powell (author, "The AI Dilemma") reveals the 7 principles of responsible technology. Learn how to build ethical AI, avoid common pitfalls, and navigate this complex landscape. Essential listening for CXOs.

52:21

Mar 01, 2024
14,241 Views

Are you ready for the AI ethics imperative? As artificial intelligence rockets 🚀 into our businesses and daily lives, the need for responsible development has never been greater. Join us for an illuminating conversation with Juliette Powell, co-author of The AI Dilemma: 7 Principles of Responsible Technology, on CXOTalk episode 826.

In this episode, Juliette unpacks:

  • The biggest ethical pitfalls facing AI today – from hidden biases to the dangers of "black box" systems ⚖️
  • How your company can build ethical AI from the ground up – practical steps for CXOs to put safeguards in place 🎯
  • Real-world examples where AI ethics have gone wrong...and right ❌✅ – learn from the mistakes and successes of others.

Whether you're a tech leader or simply concerned about the future of AI, this episode will equip you to navigate the complex ethical landscape of this transformative technology.

Episode Highlights

Importance of Ethical AI

  • Juliette Powell emphasizes the growing significance of ethical, trustworthy, and responsible AI in our increasingly AI-driven world.
  • The discussion introduces Powell's book, "The A.I. Dilemma," which addresses the core issues surrounding ethical AI through seven foundational principles.

Risk-Benefit Perspective and Regulation

  • Powell advocates for a risk-benefit perspective in AI development, stressing the need for intentionality in assessing risks to humans.
  • The lack of a regulatory framework in North America is highlighted, with a call for businesses to integrate AI risk assessments into their broader organizational risk management strategies.

Responsible Technology vs. Ethics

  • The conversation critiques the amorphous nature of "ethical AI" and shifts towards a more tangible concept of responsible technology.
  • Powell argues for global responsibility in technology adoption, considering the billions coming online with little awareness of AI's influence.

Balancing Risk and Innovation

  • Examples of OpenAI's ChatGPT and Google's Bard illustrate different corporate risk calculations and their implications for market trust and perception.
  • The importance of diverse perspectives in AI development is discussed, advocating for inclusivity to create universally beneficial technology.

Regulatory Landscape and Compliance

  • Anticipated impacts of forthcoming EU legislation on AI are explored, with Powell advocating for proactive compliance and strategic innovation within legal and ethical boundaries.
  • The necessity for explainability in AI systems is emphasized to ensure accountability and the ability for individuals to redress wrongs.

Societal Implications and Corporate Responsibility

  • The potential for AI to exacerbate or mitigate societal inequalities is explored, highlighting the need for transparency and responsible use.
  • Powell calls for a balanced approach to AI development, prioritizing long-term societal benefits over short-term gains.

Future Directions and Recommendations

  • Recommendations for business and technology leaders include thoughtful enterprise-level processes for AI use and data collection, ensuring responsible technology practices.
  • The importance of being forward-thinking and prepared for upcoming regulations is discussed, advising companies to start preparing now.

Key Takeaways

Prioritize Intentional Risk Management

  • Understand the Unique Risks of AI: AI introduces specific risks that differ from traditional business risks. Senior leaders must ensure these risks are not only assessed but fully integrated into the broader organizational risk management strategy. This approach is crucial for navigating the complexities of AI deployment in a landscape that lacks a comprehensive regulatory framework, particularly in North America.

Embrace Responsible Technology Development

  • Broaden the Definition of Responsibility: Move beyond the narrow focus on ethics to embrace a wider sense of responsibility that includes the global impact of AI technologies. This perspective is essential for developing AI that is beneficial and accessible to the billions coming online, who may not be aware of algorithmic influences. It's about creating technology that serves a broader audience, not just those in privileged positions.

Prepare for Regulatory Changes

  • Anticipate and Adapt to Upcoming Regulations: With significant regulations on the horizon, such as the EU's Artificial Intelligence Act, companies must proactively adjust their AI strategies. This preparation involves understanding the potential impact of these regulations on business operations and ensuring compliance to avoid substantial fines and legal challenges. Being forward-thinking in regulatory compliance can also serve as a competitive advantage.

Foster Diversity and Creative Friction

  • Leverage Diverse Perspectives for Better Outcomes: Encourage diversity within AI development teams and decision-making processes. A diverse range of perspectives leads to more thorough deliberation and more inclusive results. This approach, referred to as "creative friction," can enhance the quality and relevance of AI technologies, ensuring they serve a wider range of people and scenarios effectively.

Episode Participants

Juliette Powell is the founder and managing partner of Kleiner Powell International [KPI], a New York City-based consultancy. As a consultant at the intersection of responsible technology and business, she has advised large companies and governments on the questions of how to deal with the accelerating change underway due to AI-enabled technological innovation coupled with shifting social dynamics and heightened global competition.

Powell identifies the patterns and practices of successful business leaders who bank on ethical AI and data to win. Her co-authored book, “The AI Dilemma: 7 Principles for Responsible Technology” (August 2023), integrates the perspectives of engineering, business, government and social justice to help make sense of generative AI and other automated systems.

Her live commentary on Bloomberg, BNN, NBC, CNN, ABC and the BBC, and powerful presentations at institutions including The Economist, Harvard and MIT, emerged from her lifelong interest in community-building combined with a deep knowledge of the people, technologies and business practices at the forefront of connected society. She also authored the highly acclaimed book, “33 Million People in the Room: How to Create, Influence, and Run a Successful Business with Social Networking” (2009). She is on the faculty at New York University, where she teaches graduate students of the Interactive Telecommunications Program.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Ethical, trustworthy, and responsible A.I. are becoming increasingly important in our A.I.-driven world, and that growth is only going to continue.

We've been doing many shows on CXOTalk about this topic, and today we're speaking with Juliette Powell, author of a book called The A.I. Dilemma that goes right to the heart of these issues.

Juliette Powell: I've been working in all things AI, data, and systems, always. In 2012, I realized that there weren't a whole lot of people who looked like me working in this space and, as a result, decided to dive deeper, went back to school, and ended up turning my dissertation into a book.

Michael Krigsman: So, tell us about the book. You present seven principles. What's the foundational theme of the book?

Juliette Powell: As we were writing the book, the EU was coming up with what would become the Artificial Intelligence Act. And to me, looking at, you know, all of these systems and how, you know, depending on what you're doing, what the use cases are, some people are more aware of them or not. I think that the thing that really resonates with me as well as with our clients for the AI Advisory is this idea of looking at it from a risk-benefit perspective.

So, the first principle is, you know, be intentional about risk to humans. I think many of us, especially in North America, as we're really seeing the capability of these tools, are getting excited. We not only get to lead the world in artificial intelligence, but we get to innovate before anybody else. And of course, because we have no real regulatory framework here, it's about accelerating the process.

It's the Wild West. Let's make money while we can and break things. That's just our philosophy, especially coming out of Silicon Valley. So this idea of being intentional about risk, I think is something that businesses have been thinking about for as long as there have been businesses. But the risk and benefit around AI development and deployment is a separate calculation that then also has to be integrated in the larger risk benefit analysis of the organization.

And right now, it seems that there's a big disconnect between those two things.

Michael Krigsman: When you talk about AI ethics and responsible technology, what actually do you mean? Because these have become buzzwords, and it's all so amorphous.

Juliette Powell: It is. And in fact, I do not like the word ethics when it comes to technology, for the simple reason that I always ask myself: ethics for whom? There's always a point of view when we talk about ethics. I think of it more in terms of responsible technology: that we are all responsible for the technologies that we embrace into our lives, whether it's our phones or any other devices that we have, including the one that we're using right now to communicate.

The entire modern world's underpinning is based on artificial intelligence, at least in G8 countries. And so, when I think of what it means to be responsible, I think it's not just about being responsible for us, those who have and enjoy these technologies. It's also about being responsible for the billions more that are coming online just now who have no real sense that these algorithms are even out there.

Generative AI: what does that mean? Maybe it's my best friend. Maybe it's, you know, what is it? Any new technology comes across as magic. And so we need to really think about responsibility from the perspective of the world, as opposed to just where we happen to sit, which is a very privileged place.

Michael Krigsman: Could you define some of the most pressing kinds of challenges that we face when it comes to this balance between risk and innovation, if that's a fair way to say it, when it comes to AI?

Juliette Powell: Let's just look at the chatbot deployment by OpenAI that happened in November 2022, and then let's also look at the deployment of Bard by Google in February of 2023. OpenAI was an organization that the majority of people in the world had never heard of. It was a nonprofit. Very few people were aware of the massive cash injection that came from Microsoft, and all of a sudden, they've got this amazing chatbot tool in their hands.

They're discovering the power of being able to ask a question and actually get answers, and feel like these systems are starting to understand us: they get me, they know me. In fact, I'm really seeing my students at NYU, who in many cases are turning towards their devices to get all kinds of information: who they should date, where they should go to school, what kind of job they should pursue.

All of these things are coming from systems, and it made sense for OpenAI to look at risk and think: you know what? We're going to be first to market. We're going to get those hundred million users in the first two months. And they did. But Google did a very different calculation. They had these algorithms for a long time, as you probably know, and ultimately the risk for them was greater, because they've got billions of people that already knew them, that already used their products and services, and that ultimately trusted them.

And so, when you're thinking about what it cost OpenAI to be first to market, it's much smaller than what Google had to deal with. So when they did launch their Bard product in February: A, it was flubbed, because they really weren't ready for it, and B, when they did the relaunch a few weeks after that, it still wasn't a launch that captured the imagination and attention of the rest of the world, because ChatGPT was already in so many people's hands. And the way that Google released it was much more responsible, in the sense that they first wanted to make sure that developers got a better sense of it, and then business people: how would you use it?

And eventually, you know, if you'd signed up or you had the right connections, you could get access to these things. Now, fast forward a few months, and when people talk about generative AI, it's still, in their minds, kind of like Frigidaire was the refrigerator for, you know, previous generations.

In this case, when we talk about generative AI, they think about ChatGPT as being the brand for all LLMs, which is really, really fascinating. And as you probably also know, recently Bard got rebranded as Gemini, and then they just kind of pulled back on Gemini because it was behaving badly. It was having hallucinations. If you've read the news, you know exactly what I'm talking about.

But ultimately, that is the other risk calculation of Google: saying, hey, this is not going the way that we planned, this is going to be a horror show for people, and we have to take responsibility and take it back. Now, you know that cost them a lot of eyeballs in the market, a lot of investors. When you think of what tools you want to use, are you still as gung ho about using Google's AI tools as you are about other companies'?

Possibly not. And so it's really shifted their position in the market, or at least the perception around their AI tools in the market. And so again, this risk-benefit calculation for Google takes on a much bigger scale, many more billions of dollars at risk, than if OpenAI does the same calculation. And I think that most of us that use these tools, we're not thinking about the risks and benefits of using them at all.

It's just like: hey, this is the shiny new toy. I get to, if nothing else, get a good laugh out of it, and maybe I actually get some extra productivity. These are very different approaches. You know, one of the labs that we're working with, one of the AI labs out of Finland, just sent us a video.

It's about a snake game. Essentially, the system is asked to create a snake game, and so it creates different agents. They all represent different engineers; one is a quality assurance engineer. Each engineer agent says what it is doing to advance the game, and then the other agents evaluate the work and make recommendations for improvement.

And so, there's a virtuous feedback loop. At the end of the five minutes, there's a game. It's absolutely playable: multiple levels, multiplayer. This took five minutes. So if you think about that, what is the benefit? Well, it means that we can literally reinvent software development, right? That takes on a whole other connotation. The second question is: okay, so does it scale?

Yes, it does scale, but what are the larger implications of that scale, and what happens when these agents interact with each other, through different companies, different algorithms? I guess ultimately there are lots of benefits, but we can't really understand what the potential downfalls might be, and most of us don't spend the time trying to even begin to understand that.
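[Editor's note: the agent feedback loop Juliette describes, where role-playing agents propose work and critique each other until reviewers are satisfied, can be sketched in miniature. The roles, the scoring rule, and the stopping condition below are hypothetical stand-ins for illustration, not the Finnish lab's actual system.]

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str  # e.g. "developer", "QA engineer"

    def propose(self, artifact: str) -> str:
        # each role appends its refinement pass to the shared artifact
        return artifact + f" [{self.role} pass]"

    def review(self, artifact: str) -> int:
        # toy quality score: more refinement passes -> higher score
        return artifact.count("pass")

def feedback_loop(agents, artifact="snake game", rounds=3):
    scores = []
    for _ in range(rounds):
        for agent in agents:
            artifact = agent.propose(artifact)
        # every agent reviews the shared artifact after each round
        scores = [agent.review(artifact) for agent in agents]
        if min(scores) >= 2 * len(agents):  # stop once all reviewers are satisfied
            break
    return artifact, scores

team = [Agent("developer"), Agent("QA engineer"), Agent("designer")]
result, scores = feedback_loop(team)
```

The point of the sketch is the loop structure, not the toy scoring: work and review alternate, so quality criteria are applied continuously rather than once at the end.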

Michael Krigsman: So, the bottom line is that the decisions being made from a risk standpoint are fundamentally, at the end of the day, self-serving business risks.

Juliette Powell: What would you expect? I mean, these companies are in business to make money for their shareholders. That is their fiduciary duty. What I don't expect, however, is for a nonprofit like OpenAI to engage in the kind of shenanigans that we saw at the end of last year. And I think that surprised a lot of people.

Michael Krigsman: Arsalan Khan, who's a regular listener and asks great questions, says: so where is the link between ethics and morals? Ethics might be enforced by an organization or a government, but can you really enforce morals when it comes to AI? And I'll just add the comment: it seems like a very sad commentary, what you're saying, that this risk equation really comes down to kind of the worst aspects of human nature. At the end of the day, greed.

Juliette Powell: Ultimately, we live in a capitalist society, and we can absolutely judge whether we agree with that capitalist society and the way that companies in the United States make money. But if I look at what's happening in the European Union, for example, the laws are much more based on human rights. In the United States, we're much more focused on property rights.

I didn't invent that, right? But to your larger question, and thank you very much for the question on X, I really appreciate it, because it's something that I thought a great deal about. In fact, I had a fantastic conversation with an amazing professor from MIT, the person who came up with the Moral Machine experiment. For those of you who want to experiment with this, it's still online.

And so you can do it at any point, but it's essentially the trolley problem. They did it in such an interesting way in that they put you in the seat of a self-driving car. And the question is, since it's a trolley problem, you've got to hit one or you've got to hit the other. So are you going to hit the males?

Are you going to hit the females? Are you going to hit the younger people or the older people, right? Do you want to save the dog, or do you want to save the baby? And across different territories, different regions of the world, 233 million responses later, they realized that we in the world do not agree on any of this stuff, with the exception that the majority of us would rather save a younger life than an older life.

But if you look at France and Italy, it was the opposite for them. They really valued the wisdom that an older person brings. And so if a self-driving car is going to kill anybody, you kill the younger generation, because we need the older people with the experience to be able to steer society. So again, how do you even begin to program a moral compass into your technologies when, depending on your culture and where you live in the world, you have a very different viewpoint on what is moral and what is immoral?

Michael Krigsman: Subscribe to our newsletter and subscribe to our YouTube channel. Check out CXOTalk.com. We have incredible shows coming up.

What is it about AI that leads us into this kind of conversation? For example, I think if we were talking about computer monitors, we would not have this ethical discussion, or a discussion about human nature in this way. So, what is it about AI that drives us here so quickly?

Juliette Powell: Well, first off, the name artificial intelligence gives you the impression that the technology is more intelligent than a human. And in many cases, we actually seem to believe that. When we're, for example, following our navigator to get from one place to another in our cars, even if you live in the city, even if you've driven there a zillion times, many of us still look at our navigators to make sure the system gets us there.

They're better than a human can be. And it's really interesting. I live in New York City, so there's always the human taxi driver who, stereotypically, knows these roads better than anybody else, and you're trying to tell them what to do based on what your phone's saying, and it's just a mess. So I think that in many cases, this myth of higher intelligence is prevalent.

I think the other piece is that our systems allow us to reduce how many decisions we have to make in a day. They allow us to choose A or B and not necessarily contemplate all the shades of gray in between. One of the things that we talk about in the book is this idea of control: that biologically, we as humans need to feel control, control over our lives, control over our choices, etc.

But when you're pressing that buy button on Amazon, are you really in control? Well, some will say absolutely: it saves me a lot of time and energy. Sure. But how many things have you bought that you don't actually need, or that you don't really want, or potentially can't even afford? But because it's so easy, we do it.

And so real control is when you really take the time, first of all, to do a cost-benefit analysis. Are you going to do a cost-benefit analysis every time you need to go from A to B? Probably not, but that's what a real decision, what real control, requires. So, we delegate our control to our systems.

Michael Krigsman: You describe seven principles in your book. Without necessarily enumerating one through seven, what underlies those principles? Give us a flavor for what these issues are and why you identified them.

Juliette Powell: In 2017, I went back to school and I had been working in industry for a long time, but I really wanted to understand the impact of these technologies on society, because when you're working in it, you're so caught up by the possibility that it's more difficult to kind of step away and go, wait a minute, you know what?

What are the larger implications of what I'm doing? And also, there's very little time; you're just working all the time. It's difficult enough to have a family, let alone really ponder the depth of why you're doing this. So, it took some time, and I did spend a lot of time, and then I tested my own hypothesis against all of the engineers that I knew from all over the world.

And again, I've been very fortunate in that I've worked with many from different industries: for-profit, nonprofit, government, non-government, etc. And what I found is that the real question was about self-regulation. At that time, in 2017, every other company was coming up with its own principles of responsible technology, and so much of it seemed like greenwashing.

This idea of paying lip service to the ideal, but with no real way of implementing any of these ethics. And so I started analyzing what the commonalities were between all of these ethics codes, and that's essentially what you're finding in those seven principles. But ultimately, what I really found is that there are so many pressures around an organization. At startup stage, you're all about trying to grow, trying to scale, etc. But as you do that, it's more and more difficult to keep a moral compass.

It's kind of a slippery slope: as you try to please your investors, as you try to please shareholders if you go public, as you try to stay ahead of the competition, you have to make a whole lot of compromises that you didn't have to at the beginning. And when you become one of the tech titans, there's a lot more that you've had to compromise.

And oftentimes, the "don't be evils" turn into something else years later. I think that's just the nature of the way that we scale companies, at least in G8 countries. I'm a big believer in the scale-up, which is a much slower kind of development, but it's not something that we tend to embrace in the United States, and especially not when you can become a unicorn overnight if you do things right.

But what does right mean in that context? And what did right mean for Google?

Michael Krigsman: Two of the principles are risk and transparency.

Juliette Powell: For me, it's less about transparency and more about explainability, right? Explainability to me isn't just about being able to explain to another engineer what you're doing, let alone sharing what the business decisions or business assumptions are in your code. It's also about accommodating the regulators that will be coming out of the EU, for example. Whether you like it or not, you will likely have to have an outside auditor. Which, by the way, for everybody who's worried about losing their job to AI: there are so many great jobs coming down the pipeline, everything from prompt engineer to AI auditor.

I think that there's a lot of hope for many people who are nervous about this stuff. But this idea of having external people coming in and looking at whether you have transgressed the law also means that you have to tune your business in view of that. That's part of the risk-reward, and of the strategy of being able to scale regardless of the uncertainty in the world.

And so ultimately, I think that this idea of explainability has to work for the engineers, it has to work for the regulators, but it also has to work for the individuals using these tools, who are potentially caught up in something like, let's say, facial recognition. If facial recognition mistakes me for somebody else, a bad actor, how do I defend myself if we're basing ourselves on a closed box into which nobody has any visibility except the companies making it? And many of those systems are proprietary, so as a result, you can't see inside them.

How do you defend yourself? How does a lawyer defend you? How does a judge come up with any kind of ruling? We need to have these mechanisms so that people can defend themselves, at the very least.
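[Editor's note: one concrete mechanism behind the explainability and redress Juliette calls for is a per-decision audit record: every automated decision is logged with its inputs, model version, and a human-readable reason, so an auditor, lawyer, or affected person can reconstruct why the system decided what it did. The field names and the match-threshold logic below are illustrative assumptions, not any vendor's actual schema.]

```python
from datetime import datetime, timezone

def record_decision(subject_id, model_version, inputs, score, threshold=0.90):
    """Build an auditable record for one automated decision (e.g. a face match)."""
    decision = "match" if score >= threshold else "no match"
    return {
        "subject_id": subject_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced this decision
        "inputs": inputs,                # what the model actually saw
        "score": score,
        "threshold": threshold,
        "decision": decision,
        # the reason string is what a non-engineer can contest
        "reason": f"similarity score {score:.2f} vs threshold {threshold:.2f}",
    }

entry = record_decision("case-0042", "face-match-v3", {"camera": "lobby-2"}, 0.87)
```

A record like this doesn't open the closed box, but it gives the affected individual, and a court, something specific to interrogate: which model, which inputs, and which threshold produced the outcome.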

Michael Krigsman: This issue of defensibility, the ability for individuals to redress wrongs or incorrect decisions made by an AI, and the linkage back to explainability: there's a very long chain between having explainable algorithms and the requirement that there's, essentially, a customer service pipeline for people to come back and ask these questions.

You know, what do I do? The AI made an incorrect decision on my taxes, or whatever it might be. Do you know where we are in terms of companies addressing this set of issues? Because there's also a whole layer of competitiveness: the algorithms and the data are so closely tied to company ownership, property ownership, which is a whole other can of worms.

Juliette Powell: And I think it really depends on where you live. I was having this conversation a couple of days ago with a podcaster in the UK, David Savage, and he was describing the companies that he is dealing with, where companies are afraid. They are retrenching instead of investing in AI; they are now very, very nervous.

Fair enough. I don't know a lot of big tech companies coming out of the UK, which doesn't mean they don't exist; I just don't know them. On the other hand, I live in New York City. Most of the people that I deal with on a day-to-day basis are either in the United States or in a G8 country.

And we're seeing the opposite. We're seeing massive injections of money into AI, whether from the VC side or from the M&A side, where large companies are buying smaller companies. We just saw Mistral out of France being bought into. I mean, ultimately, I'm seeing the opposite thing. I think one of the reasons why I get up in the morning excited about my work is because it's all coming at us in different forms, different cultures.

But everybody is trying to find that balance: what is the opportunity that I might be missing here? And oftentimes it's not even coming from the companies specifically, but rather from their boards: hey, we've been hearing about all this AI, how do we deploy it? How do we save money? How do we optimize what we're doing? Ultimately, how do we lay off a bunch of people so that we can make more money again? That might not be what David's seeing in the UK, but that's certainly a lot of what I'm seeing.

Michael Krigsman: We have a very interesting question from Twitter, from Art Kleiner, your coauthor, and I'll just mention that I remember reading Art Kleiner's work in the Whole Earth Catalog and in CoEvolution Quarterly, going back decades. So it's actually an honor for me to see Art Kleiner's name pop up.

Juliette Powell: We should bring him onto the call. I didn't realize you were that much of a fan. Next time, we should definitely bring in Art.

Michael Krigsman: Sounds good to me. So, Art asks: what can a company or a nonprofit do to credibly show that it is taking risk into account?

Juliette Powell: There are many different things. Let me go with this idea of creative friction, which is principle number seven, and which I think Art is trying to allude to. It's the idea that you've got a diverse population within your company, within your organization, and you actually have decision-makers who are also diverse. And when I talk about diversity, it's not just about race or sex.

It's also about neurodiversity, cultural diversity, very different types of education. You tend to have better decisions. Why? Because they tend to deliberate a lot more. It takes longer to make a decision because there's always somebody asking a question, Have you thought about this? Have you thought about that? And as a result, when they finally do make a decision, it tends to cover more people.

And we found exactly the same thing when it comes to the teams that are actually creating the systems we're talking about. Whether or not they're diverse in their own makeup, it makes for better technology when they bring the different communities and populations of interest into their inquiry, into their R&D, to try to imagine together: what is the highest, best possible outcome that this technology could have on our communities?

And on the other side: what is the worst that you can imagine could possibly happen with all of this? And then you literally hedge your bets across the entire range. That takes a lot more time, a lot more energy, and in many cases a lot more money. But the products that you get serve so many more people, at such better quality.

And I think that, you know, it's a very different approach than what we've seen in the past. But I do think that if we want to be responsible to more people on the planet with these technologies, that's what's going to be required.

Michael Krigsman: What about the tension between legitimate business needs, such as being competitive, being profitable, and the need to protect user data, which is such a fundamental issue in all of this?

Juliette Powell: I spoke to one company, a major company that I won't name. They signed a lifetime nondisclosure, believe it...

Michael Krigsman: Or not, a lifetime non-disclosure…

Juliette Powell: Lifetime non-disclosure. But the one thing that I can hopefully safely say is about this idea of people as data. Our data trails are what feed these systems, but there's also that intellectual capability. So when I say bring in your different communities so that they can weigh in on the possible opportunities, but also on the pitfalls that only they can imagine, it's because these technologies are focused on them specifically.

But all of that is intellectual capital. All of that is tacit knowledge. All of that can also be monetized and in many cases is monetized by the companies that are doing it. And so understanding that by bringing them in, you're making them want to give you that data. They want to be a part of that process, which is good for you and it's good for them.

So in some ways it helps you protect your intellectual property because you're developing it as you go with the very people that it's focused on. But at the same time, you can also repurpose that within the organization so that you have better intelligence overall. And the larger idea of that is having a learning organization, not just internally, but also with the communities around you.

That's the ultimate. And that's oftentimes what we saw in the best-case scenarios during the pandemic, for example.

Michael Krigsman: Let's say that you start by making your AI algorithm and its use explainable. How can organizations keep them transparent and explainable, especially as we go forward? There will be self-modifying AI. And even putting the technology aside, we have the human tendency to do something once and then decide, you know, it's not that important that we keep documenting it, right?

So we kind of let it slide. How do we manage that?

Juliette Powell: Right now, we have technologies that allow you to document as you go, right? We have those technologies, whether we decide to use them or not is a completely different issue. But all of that exists already. I think as humans, we have a tendency to want to put our fingers on the scale. So even when we automate things, we still want the outcomes to come out the way that we expect them to come out.

And oftentimes we'll modify the data or we'll add more data. We'll tweak our model so that the outcomes come out the way that we wish. And sadly, a lot of that is not quantified.

Michael Krigsman: Arsalan Khan from Twitter comes back again, and he asks a really thoughtful question. He says: We can try to democratize the definition of a code of ethics, but in the end, it's the business executives who make the decisions around what is ethical and whether to enforce or embrace those ethical standards. So once again, the whole thing becomes very subjective, subject to whatever the corporate motives and priorities happen to be at a given time.

Juliette Powell: I take that as a statement, and I agree with it, so I will start there. Secondly, that goes back to what I was saying about this wish that, you know, big tech would self-regulate. They can't. They literally cannot, which is why, you know, one moment they're testifying before Congress and the Senate saying, you know, please regulate us.

And on the other end, they're saying, well, hold on a second. If you're going to regulate us, we're going to teach you how to regulate because you really don't understand the technology. Ultimately, one of the views that I think is incredibly important to have within the context of this conversation is that it's not just about the corporations and it's not just about government.

It's not just about the engineers, and it's not just about social justice. All of these are logics of power around artificial intelligence. And if you really want to move the needle in one direction or the other, you have to keep the other logics in mind, right? If you want to persuade someone, you have to be able to understand where they're coming from, what it means to them, what you can leverage, and what is less important to them. And the better you can put yourself in the other logics' shoes, if you will, the more sway you're going to have.

So we've seen a whole lot of, you know, collaboration, for example, between Silicon Valley and the government. We have not seen as much collaboration between social justice and corporations, and there's a reason for that: they're often at odds. But I think that the companies that have done the best have been able to strategically understand these different perspectives. I think right now it looks like the regulators in the EU are the ones who have most successfully been able to do that.

Michael Krigsman: Can you give us a sense of what it looks like when a company is doing this well?

Juliette Powell: I was asked by a member of the press, I think it was last summer, about Salesforce. Salesforce had sent a notice, not to their vendors, sorry, to their clients, saying that for anybody using their services, there were certain things they could not do. And one of those things, I believe, and sorry, this is really off the top of my head here, but one of them, I think, was pedophilia, right?

Images of children, or anything that is against the law in general. Now, you would think that these are things that would have been there from the beginning. You would think that a company that has visibility into the servers of all of its clients, that sees the data, that knows exactly what goes against the law and what doesn't, would not only have known about it, but potentially done something about it before.

I don't know whether that was the case or not. But what I do know is, as of this past summer, if I recall correctly, all of a sudden they're putting their clients on notice. And I take that as a positive thing, because if you could get away with it with impunity before, apparently you can't anymore. That's a big step in the right direction.

Whether you like Salesforce or not, I like the idea of setting boundaries, and if you cross the law, that's a boundary.

Michael Krigsman: Let's talk about implementation. What can business and technology leaders do to build responsible technology into their products? What should they be doing?

Juliette Powell: Well, first and foremost, I think it's really important for any company, no matter the size, to actually look at the various use cases that they want to throw AI at. As I said, there are so many boards that are pressuring companies to throw AI at everything. They're now, you know, playing with Gen AI, their kids are playing with Gen AI, their grandkids are playing with Gen AI.

Why is my company not playing with Gen AI? And many of these people have portfolios of companies: why are you all not throwing AI at this? So first and foremost, do you actually need AI? I think that is such an important question that many companies are not asking themselves. You mentioned tax earlier, right? If you have a fairly simple, straightforward tax return, you do not need AI. You need maybe a logic tree, but you don't need AI, not unless, you know, it's so complex that you need to be able to detect patterns. In that case, sure, maybe you need AI.

So, do you need AI or do you not need AI? Also, what are the actual costs of AI? We asked the same thing when it came to blockchain, too. If you're a small mom-and-pop shop, do you really need blockchain? It's going to cost you a lot of money. If you're a small mom-and-pop shop, do you really need a generative AI tool?

Because there's more than, you know, the cost to your reputation, some of the things that we were talking about before. If you care about sustainability at all, every time you put a prompt into a Gen AI tool, it's like dumping a gallon of water into the ground, right? So there are a lot of costs that are not discussed, that are not thought about.

And I think that we do need to talk about them. It kind of reminds me of a different technology. Remember when CRISPR, the gene-editing tool, was launched, and Jennifer Doudna came out? She gave a magnificent TEDx talk, viewed by millions of people, in which she explains the technology, she explains her discovery, and she also makes it a point to say: we are not releasing it now, because we need to have a global conversation.

Yes, this could save so many lives, and at the same time we could also create designer babies. And we have to decide as a global community: do we want that? Do we want designer babies? Because you can't really have one without the other. And that kind of overall debate, we have not had. Now, you recall last year when there was a call by thousands of experts, including Yoshua Bengio and, of course, the famous Elon Musk, asking for a six-month moratorium on advanced AI models.

And ultimately, I think Italy paused. Italy ended up banning ChatGPT. But the rest of the world didn't. I think Montreal, and some other cities, did it. But beyond that, we didn't pause, we accelerated. And I think that that's very much in our human DNA, our capitalistic human DNA. Again, until there's regulation, we're just going to go full steam ahead and make as much money as we possibly can.

And realistically, there will be winners and there will be losers. And I think these first losses will give us an indication of, you know, where things stand. But I would be very, very, very hesitant to tell any company to just go full steam ahead with AI without at least doing some small experiments, which oftentimes just do not scale.

So do not bet the bank on this.
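Powell's "logic tree, not AI" test can be sketched as a short rule-based function. This is a hypothetical illustration only: the field names and thresholds below are invented for the example and are not real tax criteria.

```python
# A minimal "logic tree" sketch: deterministic rules, no AI needed.
# The criteria and thresholds are hypothetical, not real tax rules.

def needs_ai(return_profile: dict) -> bool:
    """Decide whether a tax workflow warrants pattern detection (AI)
    or whether a simple rule-based logic tree suffices."""
    # Simple, well-understood cases: few income sources, standard
    # deduction, no cross-border complexity. A logic tree handles these.
    simple = (
        return_profile.get("income_sources", 0) <= 2
        and not return_profile.get("itemized_deductions", False)
        and not return_profile.get("multi_jurisdiction", False)
    )
    # Only when the case is too complex for explicit rules might AI
    # be worth its cost.
    return not simple

print(needs_ai({"income_sources": 1}))  # a simple return
print(needs_ai({"income_sources": 5, "multi_jurisdiction": True}))
```

The point of the sketch is that every branch is an explicit, auditable rule: no training data, no model, no opacity, which is exactly why it is the right tool for simple cases.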

Michael Krigsman: What about from a data perspective? Arsalan Khan comes back again and asks, Is there a metric that puts an ethical value on the data? And should there be some type of metric that does that?

Juliette Powell: Every company is doing it differently. When we talk about academia, when we talk about science, things become much more precise, because you're trying to solve for, you know, a disease, you're trying to solve for a very specific vector. In the case of business, again, what are you trying to solve for? Yeah, you're trying to create a business, you're trying to create a product.

You're trying to create a service that will solve a need. But ultimately, what is your goal? Is your goal to benefit the world and maybe make a little bit of money? Or is your goal to benefit the world and make a heck of a lot of money because you want to leave a legacy? Who starts companies? What kind of people run companies?

Think about that a little bit.

Michael Krigsman: Sure. And the pressures on people that are starting companies and that have accepted VC funding, you know, those priorities are pretty well set in stone.

Juliette Powell: They are. And, you know, they're in your contract. Ultimately, the way you're funded will absolutely determine the way that you go, the kind of compromises that you have to make. And of course, you can also be like OpenAI and just hedge your bets: let's do nonprofit and for-profit simultaneously.

Michael Krigsman: How can business leaders ensure that their organizations are not just compliant with current regulations but actually being forward thinking? And I think this applies both to business leaders on the business side as well as technology leaders, CTOs, for example, and people leading the development of products that either are A.I. products or that deeply incorporate AI and data.

Juliette Powell: First and foremost, you have to be aware of what's actually going on in the world. You have to understand that there are, whether you want it or not, regulations coming down the pipeline that will affect your company if you are dealing with AI and data. I mean, ultimately GDPR came out of the EU back in 2018, and big tech, and I'm talking big tech, is still grappling with GDPR, still not fully prepared for GDPR.

I can't even imagine the average small to medium-sized company. So when I talk to you about the AI regulation that's coming out of the EU, the fines are much bigger than GDPR's. They're at least 1% bigger than GDPR's, which means, what is it, 6 or 7% of your entire annual revenue, not profit, plus all of your subsidiaries globally, plus any damages?

Right. So, add massive class action lawsuits, and if you lose, I mean, it could literally wipe out a company. And I don't think that the majority of companies are prepared for that. So that preparation has to start now, if not yesterday.
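The scale of the penalties Powell describes can be made concrete with a back-of-envelope calculation. The revenue figure below is purely hypothetical; the 7% rate is the upper figure she cites.

```python
# Hypothetical illustration of a revenue-based penalty ceiling.
# Both numbers are assumptions for the example, not real figures.
annual_global_revenue = 2_000_000_000   # a hypothetical $2B company
penalty_pct = 7                         # 7% of revenue, not profit

# Integer arithmetic keeps the result exact.
max_fine = annual_global_revenue * penalty_pct // 100
print(f"Maximum fine: ${max_fine:,}")   # Maximum fine: $140,000,000
```

Because the base is revenue across all subsidiaries globally, before any damages, even a mid-sized company can face nine-figure exposure, which is Powell's point about preparing now.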

Michael Krigsman: So, you're a believer then, in regulation?

Juliette Powell: I'm a believer that this regulation is coming, whether you want it or not. I'm a realist. I'm saying prepare yourself. Make it so that when it comes down, not if it comes down, when it comes down, you are ready, and you will not get sued and you will not have all of these fines. Yes, you can always, you know, create a separate R&D kind of skunkworks, but that's separate from your core business.

Be very, very careful with that.

Michael Krigsman: So, then what would you advise companies today in terms of, again, balancing those risks against current business demands and the need to be competitive, to show quarterly profit, whatever the case may be?

Juliette Powell: If it were my company, I would have a skunkworks, a place where I can fully innovate at a smaller scale but in a controlled way. And I would make sure that my business processes are able to take some of these innovative products out and test them at a larger scale, but not try to integrate all of that into one larger process.

I would be very, very careful of that. I would definitely automate where I can, where, you know, we're talking about very rote, simple tasks within an organization. But I certainly wouldn't throw AI at everything and everyone. I would be very, very careful with that.

Michael Krigsman: So, you need a thoughtful process at an enterprise level, thinking about where to use AI, what kind of data to be collecting and so forth.

Juliette Powell: Again, in most cases in business, even if your data has been cleaned, even if you can attest to the provenance of your data, in many cases the data does not actually represent the population of interest, right? And so, if there's a disconnect, or if you just have the wrong model, or you haven't thought of it from the right perspective for your business, for your particular use case, it becomes problematic.

So, when I talk about, again, this calculus of intentional risk, it includes all of these things, not just in terms of the AI and the data side, but integrating that risk-benefit analysis into your larger corporate risk-benefit analysis. As I said at the beginning of this conversation, every company that's come to see us in the last year has really had these two bubbles: the risk-benefit analysis for the technology is not being integrated into the overall company risk-benefit analysis, and that is a huge disconnect that will cost a lot of companies a lot of money.

Michael Krigsman: We have another question from LinkedIn, and this is from Greg Walters. He says, We're trying to look at A.I. through the lens of terrorism, the old hierarchies from legal copyrighting, trademarks to business to religion are on the edge of possible dissolution. I guess we could question that. But is now the time to realign the meaning of business beliefs and work?

And how do you see this settling? Is the U.S. in a leadership role around this rethinking of existing norms and expectations?

Juliette Powell: There is pure capitalism. There's American capitalism. There's also triple bottom line investing. And I don't know where triple bottom line investing comes from specifically, but I do see some of it here in the United States. And I think that that's one approach that tries to balance all of these things simultaneously. I don't know if, you know, people that invest specifically in AI are the same people that invest specifically in triple bottom lines.

But hopefully, yes. And I'm sorry, what was the rest of the question? I feel like that question went in two directions.

Michael Krigsman: I think what he's really getting at is the rethinking around ethics and around business priorities, and how businesses balance the risks, given the potential for AI to provide benefit but also to create all kinds of problems, such as you were describing earlier. And I think he's getting at a rebalancing. Are we in the U.S. taking a lead on this?

Juliette Powell: Honestly, during the pandemic, I really thought that we as a global community were rebalancing, that we were going back to what's important as a human being, as opposed to my particular role within a particular organization or whatnot. That's not what happened for the majority of people on the planet, even though it would have been a really interesting time for that to happen.

I do think that AI accelerated exponentially during that period: our need for it, our reliance on it, and our addiction to it, all simultaneously. And is America in a leadership position for the disruption that is AI? In certain ways, yes, because again, we have Silicon Valley, and we also have an AI-first initiative, an executive order that came out, you know, while Trump was still president, for the U.S. to be number one in AI in the world by 2025.

And that, of course, was an answer to a similar directive that came out of China, where they want to be, you know, the number one AI power in the world by 2025. So you've got this backdrop, which is much more of an arms race, versus what's happening in our day-to-day with our devices. And most of us can think about one or think about the other, but thinking about both of these simultaneously is not necessarily our comfort zone.

But if you are going to think about AI, then yes, you do have to look at it from all of these perspectives. Are we ahead of the curve? Many analysts say that we are not. Are we ahead of the curve when it comes to, you know, using it for social good?

No, we're not ahead of the curve for that either. Where are we ahead of the curve? We're ahead of the curve at making money out of it, as fast as we possibly can.

Michael Krigsman: Arsalan Khan comes back. He asks: What role should the government play in ethical AI? Should governments, whether it's this one or governments internationally, have an office that checks that ethical AI is actually being used and that responsible technology practices are being followed?

Juliette Powell: There's a whole other conversation, informed over time by science fiction, about how AI is going to take over the world, robots are going to take over the world, blow up the world. All of these scenarios were actually brought forth again in that open letter that I mentioned from back in March 2023, where you've got thousands of AI luminaries saying, hey, we need to take a pause.

And that larger pause was then discussed at the summit at Bletchley Park. That was back in November, where you had over 20 nations, including the United States and China, signing that they would not deploy, right, advanced artificial intelligence that can potentially destroy the world. Now, this is a real thing. This is no longer science fiction. This is a reality. But again, the people that are reporting on that are not necessarily the same people that are reporting on, you know, Nvidia.

These are, in many cases, two very different broadcasts, very different people that are watching this stuff. And really, if you want to stay on top of it, then you pay attention to all of it. One of the things that we've started doing, my colleague Art and I, is releasing a newsletter, for anybody who subscribes, that really covers the range of how AI is affecting us and does a short analysis, which helps us stay on track, really, but also have more informed conversations.

Because again, are we talking about the short term, the medium term, the long term? Are we talking about the military? Are we talking about business? These are all very different things.

Michael Krigsman: One aspect of this conversation, the broader conversation that I found interesting is there are times you see folks questioning the use of AI in the military and with autonomous weapons. And the reality is that AI and autonomous weapons of various kinds have been in development and in use for quite some time. So there's also a lot that is hidden beneath the surface that's not immediately obvious.

Juliette Powell: Yeah. For anybody who's had a HoloLens, I mean, the number one client for HoloLens for a long time was the U.S. Army. Right?

Michael Krigsman: As we finish up for folks that are building products, again, CTOs and business leaders who are actually building these products, what do you recommend that they do now to ensure that they're on the right side of this responsible technology set of issues?

Juliette Powell: I think it depends what your products and services are. First off, if we're talking about automated systems that are just dealing with other automated systems, it's a very different calculus than if you're dealing with products and services that are actually affecting people. For example, if you are working in the law or in law enforcement, you are really affecting people. If you are building products and services that are being advertised on Amazon, are you using actual images of the products, or are you using AI-generated images which, when your clients get the products, turn out to be completely different?

You know, there's a lot of risk in doing these kinds of things, and yet it's more expedient, it's more optimized, and you make a lot more money in the short term. But will you really in the medium to long term? And so what I really encourage everyone to do is not only to keep those four logics in mind that I mentioned before, the corporate, the government, the social justice, and the engineering, simultaneously as you make your decisions, but also to keep in mind the implications of what we're doing, both the benefits and the risks, in the short, medium, and long term.

And that's one thing that we as humans do better than anything else on the planet, right? We've got this amazing brain that allows us to remember things and to hold those memories in our minds while we think in the present and also imagine the future. We have that power. And so I think more of us need to kind of bring that to the surface and collaborate together to make better tech that serves more people.

But I think getting outside of your comfort zone and putting yourself in more situations simultaneously will make a huge difference already because that takes a little bit longer, it takes a lot more reflection and a heck of a lot more control.

Michael Krigsman: Better tech that serves more people. That seems like a great place to end this conversation. Juliette Powell, thank you so much for taking time to speak with us. I really, really appreciate it.

Juliette Powell: And I've really had a great time with you. And thank you also to everyone that sent in a question. It's been lovely getting to know you.

Michael Krigsman: And do you want to again, quickly hold up your book so we can take a look?

There we go, The AI Dilemma. And as Juliette said, thank you to everybody who watched, especially you folks who asked such great questions. Before you go, please subscribe to our newsletter and subscribe to our YouTube channel. Check out CXOTalk.com. We have incredible shows coming up. Thanks so much, everybody. I hope you have a great day, and we will see you again next time.

Published Date: Mar 01, 2024

Author: Michael Krigsman

Episode ID: 826