What are Ethical AI and Responsible Technology? A Conversation with Accenture.

Explore the intersection of technology and ethics with Paul Daugherty of Accenture in CXOTalk Episode 831. Dive into responsible AI, its impact on business, and the future of work in an AI-driven world. Essential viewing for leaders in technology and business.

In this important episode of CXOTalk, we speak with Paul Daugherty, a leading figure at Accenture, to explore the increasingly critical topics of responsible technology and ethical AI.

As organizations grapple with the rapid advancements in artificial intelligence and its far-reaching implications, Paul shares his expertise on navigating this complex landscape while prioritizing trust, value creation, and risk mitigation. The conversation delves into the urgency for business and technology leaders to proactively establish comprehensive responsible AI frameworks, the evolving role of humans in the era of exponential technology change, and the importance of democratizing AI knowledge and fostering a culture of continuous learning. Join Michael and Paul as they discuss the key considerations and strategies for embedding responsible technology practices into the fabric of your organization.

Episode Highlights

Defining Responsible AI

  • Responsible AI involves taking intentional actions to design, deploy, and use AI to create business value while building trust and mitigating potential risks.
  • Principles, policies, processes, tools, and training are all essential components of a comprehensive responsible AI framework.

The Importance of Trust in the Era of Exponential Technology Change

  • As AI becomes more powerful and pervasive, maintaining trust with customers, employees, and stakeholders is critical for business success.
  • Exponential technology advances mean the consequences of irresponsible AI use can materialize and spread faster than ever before.

Closing the Responsible AI Gap

  • While many companies have responsible AI principles, very few (<2%) have implemented the necessary compliance programs and guardrails.
  • Organizations must move beyond statements to operationalize responsible AI practices across their business.

Human Accountability and Orchestration of AI

  • Despite increasing AI capabilities, humans must remain accountable for the outcomes and impacts of the technology.
  • "Human orchestrated AI" emphasizes designing AI to augment and amplify human potential, rather than simply keeping "humans in the loop" as an afterthought.

Bridging AI Talent and Trust Gaps

  • 95% of workers believe AI will enrich their careers, but 60% feel anxious due to lack of communication from leadership about AI's impact on their roles.
  • Democratizing AI knowledge, enabling continuous learning, and transparently communicating AI's implications for the workforce are crucial for building trust and closing talent gaps.

The Urgency and Benefits of Proactive Responsible AI

  • Implementing responsible AI is essential for driving business value, fostering innovation, and avoiding costly mistakes and reputational damage.
  • Organizations that proactively establish responsible AI frameworks will be better positioned to realize the transformative benefits of AI while navigating evolving regulations and stakeholder expectations.

Key Takeaways

The Urgency of Proactive Responsible AI Frameworks. As AI becomes more powerful and pervasive, organizations must move beyond principles to operationalize responsible AI practices. Proactively establishing comprehensive responsible AI frameworks, including principles, policies, processes, tools, and training, is essential for driving business value, fostering innovation, and mitigating potential risks.

Bridging AI Talent and Trust Gaps Through Continuous Learning. While 95% of workers believe AI will enrich their careers, 60% feel anxious due to lack of communication from leadership about AI's impact on their roles. Democratizing AI knowledge, enabling continuous learning, and transparently communicating AI's implications for the workforce are crucial for building trust and closing talent gaps.

Human Accountability and Orchestration in the Era of AI. Despite increasing AI capabilities, humans must remain accountable for the technology's outcomes and impacts. The concept of "human orchestrated AI" emphasizes designing AI to augment and amplify human potential, rather than simply keeping "humans in the loop" as an afterthought. Organizations should focus on creating AI solutions that maximize human capabilities and ensure human accountability.

Episode Participants

Paul Daugherty is Accenture's Chief Technology and Innovation Officer (CTIO), and is a member of Accenture’s Global Management Committee. As a visionary in shaping the innovation of technology, Paul leads and executes Accenture’s technology strategy, leveraging the company’s leading-edge capabilities and research and development to reinvent the future of business.

As CTIO, Paul also leads Accenture’s Innovation strategy and organization, including Accenture Labs and The Dock in Dublin, Ireland. He directs Accenture’s global research and development into emerging technology areas such as generative AI, quantum computing, science tech and space technology. He leads a dedicated innovation group that not only designs and delivers transformational business and technology solutions, but also invests in, and partners with, pioneering companies to pilot and incubate new technologies. Paul also leads Accenture’s annual Technology Vision report, hosts its annual Innovation Forum event, and leads Accenture Ventures, which he founded to focus on strategic equity investments to accelerate growth.

Previously, Paul served as Accenture's Group Chief Executive – Technology, where he led all aspects of Accenture's Technology business. In this role, he led the formation of Accenture Cloud First to help clients across every industry accelerate their digital transformation and realize greater value at speed and scale by rapidly becoming “cloud first” businesses. He oversaw the launch of the Accenture Metaverse Continuum business group, which brings together market-leading capabilities in customer experience, digital commerce, extended reality, blockchain, digital twins, artificial intelligence and computer vision to help clients design, execute and accelerate their metaverse journeys. Most recently, he helped lay the groundwork for Accenture’s $3 billion investment in its Data & AI practice to help clients rapidly and responsibly advance and use AI, including Generative AI, to achieve greater growth, efficiency and resilience.

Paul is a passionate advocate for gender equality in the workplace and STEM-related inclusion and diversity initiatives. For more than six years, he has been a member of the board of directors of Girls Who Code, an organization that seeks to support and increase the number of women in computer science careers.

Paul serves on the board of directors of Avanade, the leading provider of Microsoft technology services. He also serves on the boards of the Computer History Museum and the Computer Science and Engineering program at the University of Michigan.

In 2023, Paul received the prestigious St. Patrick’s Day Science Medal for Industry from Science Foundation Ireland as well as a nomination to Thinkers50, a ranking of the world’s most influential management thinkers. He also accepted the FASPE Award for Ethical Leadership in 2019 for his work in applying ethical principles to the development and use of artificial intelligence technologies.

Paul is co-author of two highly acclaimed books: Human + Machine: Reimagining Work in the Age of AI (Harvard Business Review Press, 2018), a management playbook for the business of artificial intelligence; and Radically Human: How New Technology is Transforming Business and Shaping Our Future (Harvard Business Review Press, April 2022).

Paul joined Accenture in 1986. He earned his Bachelor of Science degree in computer engineering from the University of Michigan.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Welcome to episode 831 of CXOTalk. We're discussing responsible technology and ethical AI with somebody who I can only describe as a responsible technologist, and that is Paul Daugherty, the chief technology and innovation officer at Accenture. Paul, can you tell us about that role?

Paul Daugherty: My role is to look across Accenture, which is a big company, about $65 billion or so in revenue and 730-ish thousand people. So, a big organization serving clients around the world, helping clients drive greater value and business impact through their technology.

With my role, my job specifically is really twofold, the way I look at it. One is setting the technology strategy for Accenture. That's about looking ahead: what are the next things we need to get into? As part of that, I launched our cloud business.

I launched our AI business a number of years ago. So part of it's about that, and then the second part of the role is bringing the innovation to our clients. So that's really what I do. There are a number of different parts underneath that. I also wrote a couple of books that get into the issues we'll talk about today, one of which is Human + Machine, about seven years ago, where we started talking about responsible AI, and then Radically Human a few years later. And I've got a third one coming sometime soon.

Michael Krigsman: The notion of responsible technology is very vague, and yes, it's very important. So what do we actually mean? Can you drill down into that?

Paul Daugherty: Technology has immense potential and immense value that it drives, not just for business, but for society, for individuals, for advancing all sorts of outcomes in the state of the world. But every technology depends on what you do with it. So, it's about applying the technology so you drive the good outcomes rather than creating some of the consequences that can come. And that's the way I think about responsible technology. It's something that's mattered to me since back in my university days, when I was doing research around the impact of technology on people. And it's really been a theme throughout my career, and I've been really fortunate to be at a place like Accenture, which really values that.

Now, when we talk about what we mean by responsible AI specifically, we can get into AI and some of the unique nuances that make this really important. But we've got a very precise definition that we've crafted for responsible AI, because we think it really matters. We define responsible AI as taking intentional actions to design, deploy, and use AI to create value and build trust by protecting against the potential risks of AI. Each word in there is important. It's intentional action that you need to take; it's not ad hoc or incidental. It's creating value as well as building trust, because you need to do both of those things, and it's avoiding the risks. So that's the way we look at it, and we've got a lot going on to make sure that we operate by that definition.

Michael Krigsman: How did you come up with that particular combination of factors? And at the same time, how does that statement serve as a reference point for decisions that you make along the way?

Paul Daugherty: I spent a lot of time with Julie Sweet, our CEO, on this. Why do you need to spend time on a responsible AI definition? Because we wanted to get it right. We wanted it to shape the actions we're taking internally, to your point, and with our clients. I spent a lot of time with Julie, our CEO, with our general counsel, and with a number of our other senior leaders, because we wanted to make sure we had all aspects of this really rounded out right.

And the reason this is important is because we as an organization believe in taking that intentional action in a number of ways. One way is in the work we do with clients, to make sure we're bringing responsible AI into everything we do and into the solutions that we help our clients with. And it's also about applying responsible AI to everything we do ourselves. We can talk more about this, but we have a compliance program for responsible AI. We report regularly to the audit committee of our board of directors about our progress on responsible AI. And we believe that's the intentional type of action that every company needs to take to really manage this: again, create the value while mitigating the risks.

From the research we've done, though: if I'm in a room with a lot of companies, and I've done this recently, and I say, how many of you believe in responsible AI and have principles defined? Almost everybody raises their hand. And if you say, how many of you have systemic compliance programs in place? Our research shows that less than 2% of companies do. And that's what we call the responsible AI gap that we need to work to close.

Michael Krigsman: It's very easy to put out a nicely crafted statement, much more difficult to put that into practice and have that statement serve as a practical guide or reference for making decisions.

Paul Daugherty: That's why there are different components that you need. You definitely need the principles, and I think that's significant in and of itself, for an organization to decide what its responsible AI principles are.

I can tell you what ours are if you're interested. We've got seven of them; whether you have six or seven or eight, exactly what they are will vary a little bit. But the principles are one thing. The second thing is the processes and policies that you set around those to make sure that you live up to the principles. The third is the tools and the data and such that you track around it and how you implement it. And then the fourth is the culture and the training of people to really understand it and how it applies to every part of the job they do. So those are the four components that turn it from talking a good talk about responsible AI to really taking the steps to put it into practice.

Michael Krigsman: I also find it very interesting that this issue rises to the level of capturing so much attention from the CEO of the organization.

Paul Daugherty: We're at a defining moment with technology, with where we are right now, precipitated by generative AI and the new set of technologies that are coming behind it. We're moving from what I'd call the computing age of technology, where it was all about Moore's law and processing and more powerful technology, to more of a human era of technology, where we're interacting with more human-like technology, and it's about how that impacts us as humans and how we change the way we work and what we do to take advantage of this technology.

So, I think we're at this inflection point, entering this more human era of technology. So that's why it's more important, because everything we do in a company is going to be impacted. We believe every single process in every organization will be impacted. Every role will change, from the CEO to the management teams, to the manufacturing workers, to the frontline workers. And because of that, you need to be serious about dealing with some of these issues.

Michael Krigsman: You're describing the implications of these technologies on business and society as a whole. But what does that have to do with the importance of thinking about the technology responsibly? 

Maybe it sounds like an obvious question, but on the one hand, we're having a technology discussion, and on the other, you're talking about very far-reaching implications with tentacles that go almost everywhere. So how do they link?

Paul Daugherty: They link in terms of the point of the definition that I mentioned: how do you create value and build trust and avoid the risks? That's really the link that we're trying to create. And so if you think about the kinds of principles we're talking about, it's how do we make things human? What we call human by design is one of the principles we talk about, so that we're deploying technology, deploying AI, that maximizes human potential and adjusts for the role of the individual. How do you implement technology and provide solutions that are fair, that avoid bias, discriminatory outcomes, et cetera? I won't go through them all, but we have a set of principles like that.

That's why this link is important, because you're creating the outcomes that are consistent with the values of your organization and that drive the business value that you're looking to create. I think it's inextricably linked. This isn't about doing responsible AI to check off a box or because it feels good to say you're doing responsible AI. We believe it's the foundation of driving the value in the business in the right way, which is why it needs to be inextricably linked. It's interesting to me.

I think we talked a little bit on these themes probably about seven years ago, Michael, when my book Human + Machine came out and we started talking about this. And back then, it was hard to get attention on responsible AI, to be honest. We were kind of pushing the message out there, but there wasn't a lot of traction among executives.

That's changed because I think everybody realizes the power of the technology. They realize the way that this is fundamentally leading to the reinvention of organizations, and that therefore the consequences really matter. And people want to get it right. They want to get this right for their consumers. They want to get it right for the workforce. They want to get it right for the constituents and the communities that they operate in.

Michael Krigsman: Please subscribe to our newsletter, and subscribe to our YouTube channel. We have incredible shows coming up, so check out cxotalk.com. 

Paul, you described earlier that there is a lot of intention or awareness of this issue, but not so much execution. Not much execution yet. And you describe this as being a gap. Can you speak to that?

Paul Daugherty: You need the principles. You need the processes and policies. You need the tools. You need the training. Those are the things you need to do. But I boil it down by asking these questions all the time when I meet with people.

I say, in your company, can you identify and inventory every use of AI in your company? Some hands go up, not many hands go up. That's the start. If you can't do that, then you've got to address that.

The second point is, do you understand the risk level and the risks of every use of AI that you have in your company? Have you assessed that and evaluated it, whether against the NIST framework or the EU AI Act, whatever it might be? Then the hands go up, not many.

And then the third question is, have you documented and implemented the mitigating actions to adjust and compensate for those risks? That's what we mean by putting it into action. You should be able to answer those three questions.

And that's what I said earlier. We're reporting to our board of directors on a regular basis. This is a compliance program, like the anti-money laundering, anti-corruption, and other compliance programs we run, and we treat it that way.

In fact, I just got an update on our progress just this morning before this conversation we're having. And it's a journey to go on, too, because it takes a while to do everything I just talked about. This isn't easy. It's hard. We've been working at this for three years. It took us 15 months to get the first steps of it in place. And that's why organizations need to get started with it today and really think about it systemically and think about how you lay the foundations in the way that I talked about.

Michael Krigsman: We have an interesting question from Twitter, from Arsalan Khan. He actually has two questions that are right on point with this. He says, first, beyond regulations and societal pressures, why should anyone care about ethical AI? And he's also wondering about how ethical considerations cross country boundaries.

Paul Daugherty: I think you could cause material damage to your company, not to mention other consequences, by not doing it. If you're in, say, the financial services industry, and you implement artificial intelligence that has severe, say, racial or gender bias in the lending algorithms and such that you're using, there's going to be severe consequences for it.

If you're in the European Union, which has just ratified its AI Act, and you're using emotional AI in certain ways that violate the act, you can be fined up to 3% of your global revenue for that use of AI. So, these are things that matter, based on the impact it can have on your customers, based on financial consequences, et cetera.

But it's fundamentally, to me, it's about creating the trusted brand that you need for the future, the trusted brand externally, but the trusted brand internally with your people and the employees you have now that you want to attract. And it's going to be harder and harder to maintain that trust if you don't have a systemic foundation on dealing with these responsibility issues.

Because if you can't do what I said earlier, if you can't answer those three questions that I asked earlier, it's inevitable. It'll only be a matter of time before you have a serious issue in your company where you're going to be impaired in some way, and that's going to damage the trust and damage the brand in consequential ways.

So, I believe it is very important. I've used this phrase before. I think people talk about data being the new gold and all these sorts of things. I really think it's trust. Data is an element of trust that you create. I think trust is really a key currency and a key differentiator for the way you're going to attract employees, the way you're going to attract customers, et cetera. Especially as people deploy more consequential technologies, and especially as you see certain people breaking some of the boundaries and some of the rules with it.

The second part of it was the global considerations, the regional differences. And it's a great question, because ethics and responsibility aren't necessarily absolutes. That's why your principles may vary from one company to another; your principles may vary a little bit. There are certain things that are foundational, that are not subjective, but you do have to tune and adjust to your culture, to your community, to the countries you're in, et cetera. There are certain things that'll be foundational, universal, and global, but it does require some adaptation as well.

Michael Krigsman: You have used the word trust a number of times. Of course, trust is not a new concept. So what is new today? Why this renewed emphasis on trust?

Paul Daugherty: It's what I was talking about earlier, with this exponential technology increase we're continuing to see. I mean, we were dazzled 15 months or so ago when ChatGPT came out with this big step change in capability. And we're going to be dazzled with other advances that are even bigger going forward, because that's what exponential technology innovation means.

And it means that the decisions we make are that much more important. The way we use a customer's data and the consequences it can have on that customer, the way that we're learning from patterns of what people are doing and creating experiences, and the way that we then use that to either offer them better services or maybe intrude on what they consider to be their privacy, really matter.

And that's why it's a big issue or a bigger issue, because the way we're deploying technology is getting more invasive. I use that word intentionally. It's getting more invasive on people's lives because we're using technology to get at the heart of the experiences that we're creating. For consumers, for employees, for our customers, et cetera.

Creating more detailed, more personal experiences means you're dealing with more fundamental data. You're dealing with patterns; you know more about your customers. You're dealing with more sensitive information. You have more potential to abuse it in more powerful ways with the technology we're using. And you can go much more quickly from being a trusted provider to completely losing the trust. And that's why it's a bigger issue today. That exponential change means you can lose the trust even faster than you could yesterday.

Michael Krigsman: So exponential change, then, is that the real driving factor why we must be so on top of this issue right now?

Paul Daugherty: I believe so. And I think, again, that's what I'm trying to get across. We get in these moments and we think of where we are today, and so we're thinking about generative AI and how it applies and everything.

But generative AI won't be the biggest or the last innovation that we'll see in our careers, in our lifetimes. And so you have to think about and plan for what's coming next as well. And think about the trendline. Again, the trendline here is moving from thinking of technology as computing to thinking about technology as the human experience and the way that we're creating human potential. That's the moment we're in.

As we look at things, I'll just toss out some other technologies. Think about advances in humanoid robotics. Think about what we call the body electronic, which is human sensing and human biological augmentation. Think about in silico biology and the way it's creating different treatments and care for people. Deeply personal and deeply powerful technology.

And that's what I'd encourage everybody to think about, is not only where technology is now, but how do you lay this foundation? Because we're in a new era in terms of the human impact and potential and consequences, I think it's exciting and the most exciting era we've ever had. But again, you need to bake some of these foundational ways of thinking into it.

Michael Krigsman: And we have a couple of questions on LinkedIn from Greg Walters that start to touch on these issues as well. 

He's got two questions. Let me ask his first one. You alluded to this, Paul, but who defines trust, responsibility, ethics, bias? Isn't it how we use the technology that is ethical, he says, and not the technology itself?

Paul Daugherty: Technology is neutral, so we can't pick on the poor technology and blame it. Technology is neutral. It's what we're doing with it that matters. And why, again, what we're talking about on this program is so important.

So take the same technology: synthetic video production using foundation models, multimodal models. We've done a wonderful thing for a museum in India where, at the family's request, for an artist who is deceased, there's an exhibit of his art, and they wanted to recreate the presence of this artist explaining his art. We did that with synthetic video, using a lot of photographic footage and such from his life. And it was an amazing, emotional, powerful experience.

That same technology can be used to develop deepfakes for a political campaign against an opponent. So the same technology can have amazing, powerful implications or terrible consequences. It's a great point, and I think that's why I wasn't a fan of the ban that was called for a while ago on foundation models, because I don't think that would work, first of all. And I don't think it's about the technology per se. It's about thinking about how we put the right education and the right guardrails around the use of it.

Michael Krigsman: I'm assuming that this is precisely why you have developed a variety of frameworks and processes and compliance metrics and so forth around the responsible use of technology?

Paul Daugherty: Education, one of the things that we very much believe in is creating a baseline of education around what's happening with technology, so everybody understands it and understands how to use it properly.

One thing we do in our organization, we call it TQ, technology quotient. We believe everybody in the organization needs to have a TQ. You have an IQ, your intellectual quotient; you have your EQ, your emotional quotient. We believe the TQ you have, your baseline level of understanding of technology, is just as important, because if you understand the technology, you can use it better.

So that's something that every one of our employees does. Every one of our 700,000 employees takes foundational basic training on a regular basis around these technologies. We have a new gen AI TQ that's coming out; we've already had one, and we have another one because the technology is changing so fast. We believe that helps people learn how to apply it better. We also learn from people: the better they understand the technology, the better our employees can help us figure out how to deploy it in a responsible manner.

Michael Krigsman: So, essentially, you are diffusing the knowledge and the expectations around responsible ethical use of technology through the organization. And this sounds like it's very deliberate, very intentional, to use your term.

Paul Daugherty: If you think about those four pieces earlier, it was principles, policies and processes, tools and culture and training. What I just talked about was the culture and training, but you need the compliance policies and the adherence to them and the tools to ensure that you're measuring it.

So you need all the components of it to get it right. But democratizing the knowledge and involving people is what's key to getting the culture right and key to getting the training right, so people know how to use it properly. So, yeah, it's a key part of it.

Michael Krigsman: I can see that you've developed a very systematic approach and you're far more mature than most other organizations when it comes to these issues.

Paul Daugherty: We're working with some of our partners who are very advanced in these areas, some of our large technology partners, and we're increasingly doing this work for clients. I mean, what we do for a living is helping companies solve these types of problems.

And a big area of our business right now is helping companies establish their responsible AI foundations, which could be everything from, hey, set up your principles to getting all the policies and tooling in place to do it. And we have companies we've been working on that journey with for a year, 18 months, and kind of putting it in place.

This is something that you can package up and figure out how to do. There's a method and such to doing it, and there's tools out there, tools that we've developed, tools that are being developed out in the ecosystem to help with these challenges.

Michael Krigsman: Greg Walters on LinkedIn had a second question. Let me come back to that now. He says, what are your thoughts around responsible AI supporting the financial goals of the company, both for Accenture and for your clients?

Paul Daugherty: That's why, in our definition, we really single out creating value. Some people view responsible AI as risk, like you're going to put in a layer of risk management and stop people from doing things. But that's not the point. The point is how do you do things and create value and allow for the positive impacts in the right way?

So absolutely, I think success should be measured by the positive outcomes you create, not by what you've stopped from happening. I think that's the key thing that we're trying to focus on, and that we're helping companies focus on.

So, yeah, to use an example: if I'm in the insurance industry working on applying generative AI in the underwriting process, which we're doing for a company, and which is very sensitive and regulated and involves a lot of sensitive data, how do I do that in a way that really drives better outcomes and impacts for their customers?

It makes the process more efficient, which is what they're looking to do. And in their case, it's a growth story, because they're constrained by their capacity to underwrite policies, so this lets them do more. It creates growth in the business, so it's all linked together. I think it's a good question, because it gets to the way we believe you need to deploy it.

Michael Krigsman: We have another excellent question from Deepak Seth on LinkedIn, and he says this. Good AI will do good things, bad AI will do bad things. How do you ensure that humans stay in control of making the good versus bad distinctions in the creation of AI? And Paul, you wrote a book on this topic.

Paul Daugherty: Human accountability is one of the principles in our list of seven principles, for that reason. There was a company that had an issue recently, I won't name them, where there was a bad outcome. The AI created an issue, and their response was, it was the AI. And I don't think that can be the response, that the AI caused the problem.

Somebody in your company designed it a certain way. Somebody in your company chose to implement it. Somebody in your company was responsible for ensuring it worked right. It was a human at the heart of that error, not the AI that you can blame the problem on.

And I fundamentally believe that for any technology, if you start taking the human accountability out of it, we're in big trouble. At the end of the day, people have to be accountable for the impacts that we're driving.

So that's foundational to what we believe is the approach. And in fact, there's this idea that's always talked about of human in the loop. And I use the term human in the loop, and I understand it. I think it's a good term to use for the concept of keeping the human involved.

I think it's backwards, though. Human in the loop implies you've got this big thing, and we'll consult the human and see what they think. I think the right way to think about it is human orchestration of AI systems. The human needs to be really in control, and the AI needs to be used in a way that's augmenting the human capability and the process in the right way, rather than human in the loop, which sometimes can be thought of as a little bit of an afterthought: let the human check this just to make sure the result is okay.

And that's the way we think about it: how do you really ingrain this in the processes to amplify the human potential, rather than just simply keeping a human in the loop?

Michael Krigsman: So the human orchestrating AI gets to the heart of the human being ultimately responsible for ensuring the right kinds of outcomes. Is that a correct way to describe it?

Paul Daugherty: Last I looked, we were doing 800 generative AI projects in Accenture. We just announced our earnings yesterday, and we talked about having a billion dollars of generative AI sales in the first six months of our fiscal year. So significant work going on in generative AI. That's all real foundation model related work, so significant stuff going on.

But we're finding from that work that there's a shift in mindset you need to take in applying generative AI: moving from a single use case or a narrow use case view to what you could call a broad use case or a value chain view. You need to look beyond one little thing you augment.

Take a claims reviewer type of role. You can design a generative AI tool to just automate the review of one thing they're doing, but you may miss the chance to really look at the whole claims inflow and what's happening, and how you can completely change how you handle a claim and what the human's role is, really driving greater potential and greater value across that process.

Thinking beyond the use case to fundamentally changing, reimagining, reinventing, reengineering, whatever word you want to use, the work that's happening is really critical. We're seeing this be the key to unlocking the value as we look across the different areas.

Michael Krigsman: This is, again from Arsalan Khan, who now shifts this conversation a little bit. He says, we know that AI is based on data and algorithms. How do you know the data and algorithms are ethical when your organization uses third parties? So I think this speaks more broadly to the ecosystem aspect.

Paul Daugherty: I think the question is getting at, if I use, say, Einstein GPT, or if I use ServiceNow or SAP's new Joule, whatever it might be, if I use the generative AI that's embedded in one of my application partners' products, how do I know?

And I think the answer is working closely with those partners and looking at what they're doing. Each one of them has different approaches they're using, and they're all talking about trusted use of AI in various ways. And then, as you look at deploying it, making sure that you still have your own responsible AI framework around that to make sure it's generating the right results. Because you can use their tool, but it's your data, and it's the way you're using your data and plugging things together and integrating that to put new solutions in place that could have positive or negative impact.

So I think that's a timely question, because the low-hanging fruit for a lot of companies we work with, and what we advise companies, is to take advantage of what the partners you already have are putting out, because that's going to be your shortest path to generative AI value in particular, and in many cases your shortest path to getting the value in a more trusted, safe way, to ensure it's being put into practice in a responsible manner.

Michael Krigsman: We had as a guest John Halamka, who is the president of the Mayo Clinic Platform. They are developing a system to bring together data from lots of different institutions so that healthcare providers can query this data using AI and machine learning to help doctors and physicians with diagnosis. And their solution, of course, using healthcare data, was to build a federated system where each hospital or organization maintains control over its own data, but the platform underneath it allows queries across these various federated systems in order to ensure the protection that you were just describing.

Paul Daugherty: We're seeing more approaches like that, where companies are collaborating. I think what you're describing is, how do I keep my own data protected but also benefit from the insights that I can get by combining data? We're seeing interest in those types of approaches, and solutions using them, across different industries.

For example, in life sciences, looking at shared access to clinical trial information, where you can protect your own data but gain insights on progress on drug trials across companies, things like that. There are more and more conversations like that happening.

And part of it's because people see the value in collaborating, because data is so key. Part of it is the advance in technologies like privacy-preserving compute, homomorphic encryption, and things like that, which provide technology solutions that can help you both protect your own data and expose it in ways that can be queried by others. So we're going to see more of that; there's a lot of interest in doing that in several different industries.

And one thing I'll say is we're probably seeing even more interest in that in Europe than in North America, which is an interesting pattern, as companies in Europe look at how they come to grips with deploying the technology along with other things they have underway, like the EU AI Act and sovereign data and sovereign compute requirements. And the way that puzzle comes together is leading companies to look at how they collaborate and form those types of solutions.

Michael Krigsman: This is from Wes Andrews on Twitter, who says, there was a great question earlier on value and business growth, but can you explore more of the value trust element? As we saw with Wendy's attempt at dynamic pricing, there was a great deal of public outcry. But it's a great example of how the public feels about businesses leveraging AI technology. And obviously this gets right to the heart of many of these trust issues.

Paul Daugherty: It does show the sensitivity around value and trust. I think Wendy's was doing something there that was creating a value proposition for a certain set of customers, but broadly, customers saw it as Wendy's taking pricing advantage over a broad set of customers. So it's really interesting, and there are probably some communication and change management lessons to be learned there, in addition to some of the questions around trust and responsibility.

If you're deploying something like that, being very clear on the messaging and the intent and what you're trying to do with it is really important. So I think that's maybe something to think about in that case.

This connection between value and trust, though, we're seeing as a real continual theme. In customer service and marketing, when we look at how you can use customer data, you can create a new experience through an app for a consumer, get more data from them, and create a really immersive experience for them. But then how do you use the results of that?

There's actually some research that's been done, including research we've done, that says you can actually be too accurate in using the data with a customer. Sometimes you may need to vary it a little bit, because the degree to which you might be able to anticipate what a customer wants may actually erode the trust or cause questions. The uncanny valley is the term for the body of research that's happening around this. It's something to be sensitive to, because you're dealing with human beings, and human beings are going to have a reaction to how the technology is deployed. So it's something worth thinking about as we work on different solutions with customers.

Michael Krigsman: And on the subject of human beings having a reaction. Deepak Seth thanks you for your insightful response, and he particularly appreciates your comments about human in the loop versus human orchestrated AI.

Paul Daugherty: I'm trying to think of a catchier term than human-orchestrated, because human in the loop is very catchy and human orchestrator isn't. I'll come back on when I have a better, catchier term for you, Michael.

Michael Krigsman: As I interview people on CXOTalk, that term human in the loop is becoming more and more common. I hear more CXOTalk guests using that phrase.

Paul Daugherty: Yeah, it is. And I worry a little bit that it turns into, okay, I'm going to throw a bunch of technology at this process, and then I'll have a human checker of the process. I think that's going to lead to the wrong outcomes, rather than unwinding and really looking at an area of work in your organization and asking: where is the human impact greatest? Where can the technology do things that humans can't? How do you put the two together? That's the way you really need to look at it. And there's a whole methodology we're creating around this. For those who lived through Michael Hammer and Reengineering the Corporation, I think this is a revisit of that, and a need for that on a whole new level, as we look at applying technology going forward.

Michael Krigsman: And I understand that you're trying to capture, really, an entire mindset that implies a culture, that implies processes, all within that one phrase.

Paul Daugherty: That's right. I'll just deviate a little bit, but I think it's related. We did a report, I'm happy to send it around if anybody's interested, called Work, Workforce, Workers. We released it at Davos this year, and it's based on some research that I partnered on with our CHRO, Ellyn Shook.

We interviewed thousands of workers, employees all around the world, in different jobs and all sorts of professions, and we interviewed C-suite executives, and there were some really interesting findings that are relevant to what we were just talking about and this idea of the role of the human.

95% of workers surveyed, so 95% of people, believe generative AI is good for them. They believe it's going to enrich their careers, and they believe they can do better in their jobs with generative AI. 95% of people, that's a big number, bigger than you normally see with technology. Very positive.

60% of those people, however, feel a lot of anxiety, and their anxiety is increased with this new technology because they're not hearing about how it impacts their role in their company. They're hearing silence from their leadership, and they're interpreting that as this might be bad for me, even though I think it's good.

And then you have this gap where 80% of workers believe they can learn everything they need for this going forward, but the leaders believe they need a lot of skills that they can't get from their workforce. I talked about a responsibility gap earlier. There's a talent gap and a trust gap between employees and leadership on the technology that really needs to be addressed, with better transparency in talking about what people's roles are going forward, how roles will change, and how we put the learning platforms in place to help people learn as they go.

So I'll just add that, because it relates to the human in the loop issue and what the roles of people are going forward. And I think there's a lot more work that needs to be done immediately to bring the two together, so this suspicion doesn't continue to build in the workforce.

Michael Krigsman: We have a question from Lisbeth Shaw on Twitter, who asks the following. She says, how do you balance profit maximization or extractive management versus responsible AI? And how do you help companies navigate that issue? And I think it relates to what you were just talking about, because, again, it's another dimension of trust and confidence in the organization.

Paul Daugherty: I don't really see a conflict. Again, I think it's the basics in part of the approach and how we need to deploy the technology. I don't think you can separate it out. I don't see it as in conflict with whatever your strategic goals, profit, growth, whatever you're trying to achieve. Rather, you're going to maximize your success by employing the responsible use of technology because it's going to avoid the issues. You're going to understand them and be able to assess the ways to build better trust as you do it so that more customers use your services.

If you release new features, unlike Wendy's, where there's a reaction, your customers trust you to do it the right way rather than reacting against you. Again, I don't want to single that organization out at all, just using it as an example. I don't think it's in conflict. I think on the contrary, it's essential to driving the value. And that's why our responsible AI definition has driving value as part of the definition, because that's where you're going to get the outcomes that you need in the organization. So I'm not sure if I fully answered the question, but I think it's linked.

Michael Krigsman: We have a question again from Arsalan Khan, who says, who should be responsible for AI? Is it the board, the CEO, the CIO? No one specific? Everybody?

Paul Daugherty: There are different responsibilities in different places. I'm speaking more than I ever have to boards on this. There's a tremendous desire among boards to really understand it, so that they can understand whether they're fulfilling their fiduciary and other responsibilities right, with things changing.

And it's clearly a leadership responsibility as well. But trying to figure out the best way to unwind that, I'll get back to the fact that this is different than any technology before, which is why the accountability for it gets pushed up higher in the organization.

Three or four years ago or ten years ago, we weren't talking about responsible cloud. We didn't talk about responsible mobile phones, mobility or anything, because the technology was different. It didn't have the level of potential impact it does now, and it's just getting started. There's even more invasive, pervasive, ubiquitous, human like technology that's coming down the road.

So, it is incumbent on the senior leadership to have a firsthand understanding of this. So we believe, we've been talking about this concept that leaders need to learn first. It's why in Davos this year, instead of doing the normal thing we do in Davos, of a lot of panels and different things, we conducted small group learning workshops for CEOs and senior leaders in Davos. Focused closed door sessions, Chatham House rules, just to help them learn, because we believe learning needs to start at the top and the top leaders need to learn first.

So, I just think it does start with the boards. It starts with the CEOs learning, and CEOs are really leading in learning, from my experience that I'm seeing in the number of conversations I'm having there. And then it's matching that up with what you're doing with the broad-based education.

I talked about TQ earlier and democratizing the learning to everybody in the organization, and trying to bridge that talent gap where you're communicating with everybody in the organization and they're involved in designing the processes and the way they're going to work going forward. So, long answer, and I think I answered it by saying there's accountability and a role at every level, because there is. But more than with any other technology in the past, there's a real accountability and need to have leaders learn first and be at the forefront of this.

Michael Krigsman: Arslan comes right back to you and he says, okay, trust is important, but why should monopolies even care about it?

Paul Daugherty: I think there's a lot of good, active discussion happening on this. Everybody wants this technology to be used, and to be used in the right way. I think there is a lot of serious focus that I see when I talk to our large technology partners. We're the largest partner to most of those companies that are building the models, the hyperscalers and the new companies building the models. And there's a lot of seriousness in really trying to understand the right way to apply it.

Now, that said, they're moving very fast, and they don't provide responsibility and trust out of the box with the foundation models and APIs and such they're providing. That's not their job, though. And I think it is incumbent on anybody using the technology in their enterprise, in their organization, to build that layer around it to make sure that what you're doing with it is responsible, just like with any device you could have purchased previously that could cause harm if not used properly.

So I think that's the way to answer that. I don't think it's in conflict. There's a lot of conversation we're having, whether directly with those companies, or with governments and regulatory bodies and other communities of business, where people are trying to grapple with those issues, and everybody's really trying to make progress.

Michael Krigsman: Another question from Twitter. You briefly touched on this earlier. How should multinational organizations, multinational companies create and implement responsible technology, and can they build a unified umbrella again, crossing geographic boundaries?

Paul Daugherty: I think you can. We're multinational, so we fit that bill. We operate in most places around the world, and that's the approach we're taking. We're helping many other multinationals with that approach. We have a single global set of responsible AI principles and standards that we use, and we're helping other global organizations do the same. Then you need to make sure that that approach accounts for the specifics that you'll have in different places.

So, if you're in Singapore, which has some very specific things around sovereignty and various issues, are you able to deal with that? If you're in Europe, as I mentioned earlier, are you dealing with the sovereignty issues and such that you need to, and complying with the provisions of the recent EU AI Act?

Yeah, I think it's definitely possible to do globally, and I think that's the important way to start it. But then make sure that in each place where you operate, you're also peeling that back to make sure you're taking the right steps. That's the approach we're taking.

And the other thing I'd say is, with GDPR and data, many of you have lived through that, and what Europe did kind of set the way for most organizations. You had to look at Europe's GDPR, and that set the policy you tended to use for the rest of the world, because it was so well defined and rigorous. That's probably the way this is going to go as well: you'll design to the clearest standard, which you can then apply and use to do things in the other parts of the world, too.

Michael Krigsman: Do you have any thoughts or perspective on government regulation and what's likely to happen with regulation? And again, I'll ask you to be quite quick because we're simply going to run out of time.

Paul Daugherty: We're really involved in trying to be an active voice everywhere we can be. So we are involved everywhere. All I can say is I think we do need regulation around this. We do need guidelines and guardrails around it. The EU has taken its position, there's NIST in the US and other things going on in the US, and Canada's got things underway. So different countries around the world are all moving on it.

So I think that's positive and constructive. And will they get everything right? Probably not. It's hard to, but it's important to get some of that in place quickly. Some organizations are waiting to see that, and you know, it's hard to wait, but there does need to be some better clarity.

Michael Krigsman: We have a question again from Deepak Seth, who says organizations seem to be in hiatus with AI. C-suites see the potential but seem to be at a loss to figure out how to realize business value. We have the issue of shiny object versus essential object. How are Accenture and others helping to get over this logjam?

Paul Daugherty: So, five things to address what Deepak's talking about: 

  • First is what we call lead with value, which is starting with a business case and looking at both strategic bets as well as table stakes. There are things like job descriptions in HR or knowledge management, different things that are probably more table stakes, that you can start moving on in different ways. Then there are strategic things for your industry: it might be underwriting in insurance, R&D in life sciences, things like that. That's the value piece of really starting with a business case.
  • The second piece is understanding and building out the digital core that you need for the technology; most companies are behind. You need to start taking steps on that. I could get into that in more detail, but that's the second thing that gets in the way, and there are steps you need to take on that front.
  • The third is addressing the talent issue that I've already talked about, so I won't revisit it. But getting that training and transparency and learning in place with the organization is key.
  • The fourth is closing the responsible AI gap, because we don't think you should go too far in this before you have that systemic framework in place.
  • And then the fifth is setting up a real change management competency, because this is going to be a decade of change that we're going into here. This isn't going to be solved overnight. That ability to drive change on a continuous basis is going to be key to the reinvention.

Michael Krigsman: Technology is changing so rapidly, ethical frameworks are evolving, and government responses are in flux as regulation is considered, accepted, rejected, and so forth. What advice do you have for business leaders in responding to this shifting ground?

Paul Daugherty: I see a lot of leaders doing this, which is good, but probably not the majority yet. It's leaning in and learning first, so that they have personal knowledge of it; communicating with transparency to the organization about where they see this going; and then starting with a systemic responsible AI framework, addressing that responsible AI gap, so that they have the foundation in place going forward.

Michael Krigsman: We have another comment from Greg Walters, who's asking about robotics, the impact of robotics. And I know you're looking at so many different technologies. Any thoughts on that?

Paul Daugherty: The large language models are providing a super boost to advances in robotics, because, for all sorts of reasons, there's just a lot of synergy between what's happening with the foundation models and robotics. So it's creating advances, and it's a space to watch.

One thing that will be interesting is humanoid robotics, which is poised for a real advance. And we're doing some separate research in that vein right now, because I'm sure everybody's seen Figure and some of these recent announcements that have been pretty stunning in terms of what they're trying to do on that front.

Michael Krigsman: This is from Kash Mehdi, who is asking about organizations bringing in a diversity of people in various roles to avoid cultural bias, which is what we're seeing, he says, right now with gen AI images and biases based on color, religion, region, and so forth. So, the idea of diverse teams, in order to help address some of these issues.

Paul Daugherty: I fundamentally believe, and we fundamentally believe at Accenture, that innovation equals inclusion. You can't achieve your potential at innovation without an inclusive approach, for some of the reasons mentioned. If you had an inclusive team developing AI, do you think you'd really have racially biased AI as a result?

Just as one example, I think inclusive teams thinking about the problem, framing the problems, and identifying and developing the solutions in the right way is key. We have a whole body of evidence, which I'm happy to send people, that inclusion and diversity do differentiate and lead to more innovative organizations. So it's a great point to make, and a great point to close on.

Michael Krigsman: And with that, a huge thank you to Paul Daugherty. He is the chief technology and innovation officer at Accenture. Paul, thank you so much for being a guest again on CXOTalk.

Paul Daugherty: Thank you so much, Michael, it's been a pleasure and thanks to the audience for such a great set of questions.

Published Date: Mar 22, 2024

Author: Michael Krigsman

Episode ID: 831