The Generative AI Rocket Ship: Practical Advice from PwC

Explore the future of consulting with PwC's Vice Chair, Joe Atkinson, on CXOTalk episode 821. Dive into the strategic investment of over $1 billion in generative AI, its impact on enterprise scaling, and the evolution of consulting services. Gain practical insights on integrating generative AI with human expertise for superior business outcomes.

56:29

Jan 19, 2024
19,886 Views

In this thought-provoking episode of CXOTalk, Michael Krigsman talks with Joe Atkinson, the Vice Chair and Chief Products and Technology Officer at PwC, about the transformative journey of implementing generative AI in large-scale enterprises. With PwC's decision to invest over a billion dollars in AI, Joe sheds light on the strategic pillars guiding this investment and the profound implications it has for consulting services and beyond. 

This episode is a deep dive into scaling AI, the evolution of consulting in an AI-augmented world, and the cultural shifts necessary to harness the full potential of these technologies. Join us for an insightful discussion that will resonate with leaders and technologists alike, as we navigate the complexities and opportunities presented by generative AI in the enterprise.

Episode Highlights

PwC's Scale and Investment in AI

  • PwC is a global organization with over 400,000 people across more than 150 countries.
  • Announced a $1 billion investment in generative AI to enhance client service, improve internal technology assets, and upskill employees.

Strategic Approach to Scaling AI

  • Emphasizes the importance of scaling AI to achieve a meaningful return on investment.
  • Scaling involves preparing employees for new tools and technologies and aligning them with business value.

Differences in Scaling AI vs. Traditional Systems

  • Generative AI is becoming embedded in enterprise systems like ERP, customer care, and human capital systems.
  • The integration of AI into these systems is expected to replace traditional reporting and visualization tools.

Impact of Generative AI on Consulting

  • Generative AI is anticipated to significantly enhance the consulting industry by automating tasks and enabling consultants to focus on higher-value activities.
  • The nature of consulting work, including development and training, will evolve with the adoption of AI.

Challenges and Opportunities

  • Generative AI presents both opportunities for growth and potential disruption in consulting.
  • The development of consultants early in their careers will need to adapt to the integration of AI tools.

Client Engagement with AI

  • PwC aims to help clients adopt AI responsibly and effectively.
  • Consultants must not rely solely on AI but use it in conjunction with human judgment to meet client needs.

Path from AI Concept to Enterprise Scale

  • PwC's approach started with understanding employee needs and establishing a governance group for AI.
  • The "AI factory" model was used to manage over 3000 use cases, emphasizing the need for discipline and expertise in AI application.

Closing Remarks

  • Joe Atkinson underscores the importance of honest conversations and cultural standards for driving change.
  • The balance between AI regulation and innovation remains a developing area, with risks on both sides.

Key Takeaways

  • Generative AI is poised to transform the consulting industry by automating routine tasks and enhancing the quality of insights.
  • Scaling AI requires a strategic approach that aligns technology with business outcomes and prepares the workforce for new capabilities.
  • The integration of AI into enterprise systems will be widespread, necessitating adjustments in professional development and training.
  • PwC is committed to guiding clients through the responsible adoption of AI to drive business growth and innovation.

Episode Participants

Joe Atkinson is PwC's Vice Chair - Chief Products & Technology Officer. He is a member of PwC's US Leadership Team, responsible for executing on the firm's vision to digitally enable PwC and better leverage technology and talent to bring greater value (and a better experience) to its people and clients.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Today on episode 821 of CXOTalk, we're exploring how to scale generative AI in the enterprise. Our guest is Joe Atkinson, Vice Chair and Chief Products and Technology Officer at the consulting giant PwC. This is his second time on CXOTalk.

Joe Atkinson: I have responsibility for technology strategy and agenda across the firm. That's the tech that our people use to work every day and deliver services to our clients.

It's the tech we use to operate our business — all of our back-office applications, our infrastructure. It also includes the technology that we build, operate, or innovate alongside our clients. So it's a big agenda that we've got, and it's been an exciting time in the space, as you know.

Michael Krigsman: Give us a sense of the scale of PwC to help place context for the folks who are watching.

Joe Atkinson: We have 75,000 people across the US, plus our colleagues in Mexico and what we call our acceleration centers around the world; across the global network, we're now over 400,000 people. So, it's a huge organization with reach across more than 150 countries.

Michael Krigsman: Joe, you have made a very large investment in AI, and especially in generative AI. Can you tell us about that and, especially, the strategy underlying that investment?

Joe Atkinson: It's an amazing space. We're all talking about AI because it's got so much of our attention. But back in April, we announced that we would be investing over $1 billion in the next few years in bringing generative AI capability to our business as well as to our clients. And there are a few pillars in there, Michael.

There are a lot of components. Obviously, when you spend $1 billion over a few years, there are a lot of things happening, but there are a few components worth mentioning. First is how do we serve our clients? Our clients are trying to figure this space out. We obviously want to be good partners and good providers to them and give them good perspective.

How do we help them think about the opportunity that AI represents? How do we help them do that in a responsible way and govern it in a thoughtful way — the cybersecurity, the safety of those tools and technologies? That's a big part of what we're investing in: making sure those capabilities are there and ready for our clients.

And not surprisingly, they're already calling on that. The second pillar comes back to the technology assets we use to deliver our business every day. Our people use technology platforms to serve our clients in consulting and trust, and those capabilities need to keep getting better. So we want to invest in generative AI and put tools in their hands so they can better serve our clients.

And then the third piece is that it's one thing to give them the tool, but you have to give them the skill. So how do we make the investments in their upskilling? How do we actually provide the training and the foundational understanding they need, so that it's not just a toy on the side but a technology that's being used in a responsible way? That gets to this idea of how you actually scale it across the enterprise.

Michael Krigsman: This notion of scaling is obviously central to what you're doing with this very large investment. So as an organization is considering this question, how do you plan the allocation and investment of dollars and resources in this scaling issue?

Joe Atkinson: It's a particularly important question in the C-suite right now, because there's a lot of money being invested in generative AI and there's a lot of potential cost in the operation of gen AI systems and large language models.

So, this question of what's the right formula for scale is one that we're talking about with clients all the time. From a practical perspective, I feel very strongly that it doesn't matter what technology you're talking about: if you're not going to scale its impact — meaning that a material portion of your people and your business is affected by the use of that technology — the likelihood that you'll get meaningful ROI is that much lower.

You have to get it into the hands of lots of people. And of course, that means you have to prepare them for the tools and technology you're giving them and make clear what you expect. Because if you just put technology into people's hands, they'll generally be very well-intentioned, but they may not know what's expected or where to apply the technology to drive actual business value.

That connection to business value is obviously key to ROI. If you're only doing skunkworks or a few little experiments — and experimentation is really important — nobody has ever driven significant ROI or moved the needle at the enterprise level by just having a few people experimenting somewhere off in a conference room or on a floor that doesn't engage with the rest of the business.

Michael Krigsman: So to me, this raises a fundamental question about the differences between scaling, for example, an ERP system — a business process system — as opposed to generative AI. Can you describe some of those differences?

Joe Atkinson: Let me go to enterprise systems like customer care systems and human capital systems, sales and contact management systems, and of course ERP systems that run financial applications. As these generative AI tools have developed, most of us have seen them as essentially chatbots.

There's a window. You can go in and chat. It can do really interesting, very powerful things. At the same time, the enterprise technology providers know the power of these generative AI skills and capabilities, and they're embedding these technologies into their platforms today. So frankly, whether you expect to see it or not, you are going to see it.

Most organizations are going to begin to benefit from generative AI because the technology providers they're counting on today at scale are embedding it in the technology they're already consuming — and in fact, there's a lot of AI in those technologies today. So that means your human capital systems, your customer care and sales systems, your ERP and financial systems will be gen AI powered. And the tools we rely on today — the self-serve reporting tools, the self-serve visualization tools, the search tools — most of those are rapidly going to get replaced, and in fact are already getting replaced, by tools powered by generative AI.

Michael Krigsman: What does this do to, in your case, the consulting business? What kind of changes does gen AI drive in the nature of that work?

Joe Atkinson: Every consultant in the world wants a clear answer to that question, and there's a lot we don't know. But I'll tell you what we do know. There's a lot of hyperbole and there's certainly been a lot of hype around generative AI, but if you're answering the question honestly and sincerely, knowledge work is going to be massively impacted by it — just the nature of the technology and the nature of its use — and at its heart, consulting is knowledge work.

It's how do you take the capacity and capability — for us, of hundreds of thousands of people around the world — and direct that capacity to bring benefit to your clients? From our perspective, this is a game changer, and it's a game changer in a positive way. We know there are tasks that are going to get replaced by generative AI, especially when you think about the agent model — people talk about generative agents.

All of these assistants that will become more and more specialized and more and more powerful are going to make for better consultants. They're going to help us tap into knowledge in different ways. So I believe — and we're already seeing the results of this — that you're actually going to have better consultants because they're going to be better equipped. A lot of the work they do today that probably doesn't create the degree of value clients expect now gets automated with generative AI tools. What you pay consultants for, and why you bring consultants to the table, is their ability to help you think about a problem differently.

To challenge the thinking, to challenge the internal politics, to challenge whatever is necessary to get you to the breakthrough moment and get to the other side — their capacity to do that will simply be unlocked by these tools. So I think it's a massive lift for consulting. But there's no question the consulting profession — the nature of development in consulting, how we develop people, how we train them — is going to look very, very different.

Michael Krigsman: So, from your standpoint, would it be correct to say then that the primary differences will be ones of efficiency, more efficient access to knowledge, or is there something more than that?

Joe Atkinson: I think there's more than that. But let's go to the access to knowledge first and then we'll build on that.

Anybody that's been in the consulting space has been trying to solve knowledge sharing across consulting organizations for as long as I've been in the business, and it's been a few decades now. Over that time, we have had all kinds of technologies to try to gather the collective know-how and knowledge of our people and make it accessible, so that you don't have to be the deepest expert in a particular piece of technology in order to benefit from the fact that somebody in the firm has that knowledge.

That problem has been notoriously difficult to solve with the technologies of the past, and I believe generative AI presents the first opportunity to truly solve it at scale — to give people full access not only to the organization's knowledge inside the firm, which is proprietary and has its own capabilities, but also to the very good knowledge that sits outside the walls of the firm.

And when you can bring those things together, that's really powerful. If the objective of knowledge management 20 years ago was to bring great knowledge to consultants so they could do more, you can now magnify that impact in a gen AI world. I think that just means you have access to much better equipped, much more technology-savvy consultants than we would have had in the past.

But that's the beginning. And it goes to your point on knowledge management and efficiency. There's no question that this unlocks efficiency. You can draft reports, you can draft analyses, you can very quickly summarize large documents so you can get to the heart of issues as you go through whatever the project may be with the client. But the other piece that surprises people is that the studies done so far are showing improvements in quality.

That quality piece surprises people, because we have all these discussions about the power of generative AI and about whether you can rely on the outputs and how those outputs come together. We believe very firmly — and it's core, Michael, to our culture, almost a value touch point — that the power of technology combined with the power of a human exercising good judgment is going to get you a better answer 99 times out of 100, or 999 times out of a thousand.

Michael Krigsman: We actually had as a guest on CXOTalk last week an academic from Harvard Business School who had authored one of those studies. And I said, okay, so this is an increase in efficiency. He was very clear: he said higher quality is actually the thing that matters most.

Joe Atkinson: That's such a great reference point, because that's a study we've been thinking about. He talks about a 25% improvement in efficiency. And I'm going by memory, which is always dangerous.

So, check the study. But it was a 41% improvement in quality, as I recall. That's a huge leap in quality. And again, it prompts a really important conversation, one that the C-suite and boards are having. We've all been talking about when generative AI provides unexpected results — when it provides information that is not rooted in fact, when it provides data that actually isn't what you expect.

And so people hear that and they think we're all headed to a world where generative AI is going to give you less quality. But when you talk to the individuals, the engineers who actually build these tools, they always remind me that the generative piece is where creativity comes from, and the value of that creativity is that it's going to help you get to different and better answers.

Again, coupled with humans providing judgment and oversight. But at the end of the day, that generative capability, that creative capability, is one of the reasons we get these unexpected results. The reality is that some unexpected results are going to be breakthrough opportunities, and we're already seeing some of those show up.

Michael Krigsman: Within consulting, do you anticipate negative impacts? You've described the opportunities, but are there potential obstacles and challenges that this introduces, either expected or unexpected?

Joe Atkinson: I think any technology presents challenges of disruption, and the consulting business will be subject to the kind of disruption that a new technology can create. From my perspective, I always look back historically: any time you see a new technology innovation, we all worry about it.

But the reality is, the track record tells you that great technology breakthroughs bring growth, more opportunity, top-line growth, and innovation. So that's my expectation for the consulting industry. Having said that, a lot of the things people learn when they come into the consulting industry early on — how do I conduct an analysis on a big data set?

How do I build the PowerPoint deck, or the deck in whatever technology I'm using? How do I create the story I'm going to share with the client and create the report? A lot of those activities are now going to be supercharged by generative AI. But those of us who have been in business — not only consulting but anywhere — know that some of those tasks people don't really love doing at the early part of their career are learning tasks as much as they are output tasks.

It's an opportunity to dig into the details, learn the data, learn the process, learn those types of things. From a challenge perspective, for any professional services organization, that means we're going to have to rethink how we develop our people at the start of their careers, to make sure they're rooted in the knowledge they'll need in order to apply that judgment later.

That's going to be coupled with the technology to get to the right answers, to get to the right insights, to get to the right innovations for our clients.

Michael Krigsman: So, in other words, if consultants rely too heavily on the tools as shiny objects disconnected from the client's requirements and needs, then you really end up with nothing — going down the wrong path.

Joe Atkinson: You could end up in a bad place.

And I think the trick to that — we always have these conversations about transformation, and I always come back to values. The trick is: what do you value? We value serving our clients and doing the right thing. We value developing our people, preparing them for professional careers at the firm or wherever their career will take them.

Those are cornerstones for us, and as long as you stay focused on them, I think you can manage the risk of ending up in the wrong place. But the other really important piece is this: if you value the development of professionals — again, whether they stay with us for their careers, go elsewhere and become clients, start new businesses, serve in government, or wherever they decide to go — we feel really strongly that you have an obligation to equip people with the skills and understanding they need in order to be successful.

And that means the way we've developed professionals over the last 10 or 20 years is going to have to change, because the activity that helped them build the foundational skills is now likely going to be delivered to them by generative AI. They're still going to have to learn prompt engineering; they're going to have to learn how to construct prompts in a thoughtful way.

But now that learning path is going to be different. And so for us, that's change agility. We're going to have to embrace that difference and help our people make the most of it.

Michael Krigsman: And on that subject, we have several questions that have come in on Twitter. There's one from Arsalan Khan, who's a regular listener, and when I saw this question, I smiled. He asks: what prevents consultants from becoming AI addicts? And at the same time, if AI is used in consulting for clients, then why can't the clients use AI directly, therefore cutting out the middleman, i.e. the consultants?

Joe Atkinson: Two things I would offer. One is, I don't think it's a question of whether you use enough AI or too much AI.

I think the question is whether you use it the right way and whether you use the right tool for the right purpose. And look, we're in the early innings. This is the concept car stage, to use a phrase that a colleague from one of our technology partners uses. This is the beginning of AI. So as we look at the development, I think that meshing of the human — that consultant, that skill, that person — with the technology has always been key to our business.

It's been key. Before it was generative AI, it was key when it was analysis using data. It was key when it was spreadsheets. And frankly, it was key when it was the abacus or the ten key. It's the people plus the tech. The tech is going to keep innovating, and the people keep innovating. But I might surprise him with my answer to the other question.

Not only do I think our clients will be using generative AI, I think we have an obligation to help them use it, and use it as powerfully and as broadly as they can to improve their businesses. That ability to bring the capability to the client and help the client unlock it — frankly, I don't think that creates a middleman problem.

I think what it does is keep us focused, as we always should be, on what our clients need to grow and compete in a hyper-competitive world.

Michael Krigsman: And I suppose part of that very large investment that you're making, in fact, is helping clients adopt these tools?

Joe Atkinson: We want them to adopt them — and, again, in ways that are responsible and safe, and for the appropriate technologies and the appropriate outputs. What we don't want them to do — and we've been training them on this — is guess at what the outcome is supposed to be and simply accept the output from a generative AI capability.

And I think this is a point that we are all talking about in the AI space. There's a lot of research going on. We have a nice relationship with CMU, and that team has been looking at the question of where the risk lies in the output of a particular generative AI model. So if I asked you whether you'd be comfortable with a generative AI model that flies your airplane, my guess is most of us would pause.

But if I asked you whether you'd be comfortable with a generative AI model that figures out how much detergent to add to your laundry, you'd say, well, okay, that would be just fine — let's go figure that out. In fact, there are plenty of machines figuring that out already. So there's this relationship between the value of the output, the risk of the output, the reliance on the output, and then the degree to which all of us collectively as humans are going to be comfortable with the technology taking that role.

Those frameworks are frankly just developing. And as they develop, I think what you're going to see is that we'll take more and more of the low-value, low-risk tasks and say, I'm perfectly happy having technology take those off the plate. That has two benefits in my view. One is it frees us all up to focus on the high-risk, high-value things we all generally want to focus on anyway.

But it also means we're going to have better context for the technology outputs, so that we can apply our expertise along with the technology. So do I worry that our people are going to overuse generative AI? No — I actually want them to use it. I just want them to use it for the right use cases.

Michael Krigsman: Please subscribe to the CXOTalk newsletter, and subscribe to our YouTube channel.

Joe, when we talk about scaling AI, and generative AI especially, from consideration to proof of concept all the way through to scale inside an organization, can you take us through that path?

Joe Atkinson: It depends on the use cases you're applying, but I'll share the PwC story, Michael, because I think it's illustrative of how you start to think about going from concepts in small groups, to factories, to beyond.

When we started — and this actually predated our announcement back in April of the billion-dollar investment — we knew there was a lot of interest in what these generative AI capabilities were going to unlock for the firm. So we started with a view of: let's make sure we understand what our people are going to be looking for with these technologies.

We brought a governance group together that governs on a one-firm basis — think of that as the people who have responsibility across all of our businesses. And then we put together what we call our AI factory. We believed very strongly that the nature of these use cases was going to require discipline and expertise in an AI center, if you will. And that's not to say that everything has to go through a center of excellence.

But as you start this journey, it's a really, really important place to start. In our early days, we saw over 3,000 use cases — and that wasn't six months of collection. That was a few weeks of our people saying, hey, could it do this? How can it do that? It became very evident to us, at least in the very beginning, that those 3,000 couldn't be addressed with your typical use case model.

Evaluate the use case, figure out how much it's going to cost to build, how long it's going to take, what the risks are, and then what the returns are going to be — if we tried to do that over 3,000 use cases, I'd still be here talking to you about the fifth or sixth use case.

So what our teams developed was a view on patterns, and this is emerging across the AI space: what are the common asks people bring to generative AI models? There are a few examples I would give you. We all want the ability to take large, complex documentation and query it in a reliable way — getting not just search results, meaning here it is on page 35, but insight that says page 35 describes the policy associated with this, or the reporting associated with that.

We also want to be able to take documents and have the tool show us whether there are comments or document wording in there that needs attention — and it might be a big document. And then we're going to want other things, where we'd say, not only do I want deep Q&A, but I want to summarize it.

I don't need to know all the details in this document, but I'd like to know the top three themes it tells me, because that's going to be an input to something else. That was the framework our team applied to those 3,000 use cases, and it allowed them to move very rapidly. You break down the one-to-one relationship of use case to answer, and you get one answer that serves many use cases.

That opportunity helped us move really fast, and it also set a framework that gives us the opportunity to scale. So if that is the beginning, then you get a little bit broader with the use cases in the factory, which gave us the standards, the footprint, the platform, and the responsible AI frameworks to go even further and faster in the way that we adopt.
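
As a rough illustration of the pattern-based approach described above — one reusable prompt template per common ask (document Q&A, flagging wording that needs attention, theme summarization), applied across many use cases — here is a minimal Python sketch. The pattern names and the `complete` callable are hypothetical placeholders, not PwC's or any vendor's actual tooling.

```python
from typing import Callable

# One reusable prompt template per common "ask", instead of building
# every one of thousands of use cases from scratch.
PATTERNS = {
    "deep_qa": (
        "Answer the question using only the document below. "
        "Cite the page or section you relied on and explain what it says.\n"
        "Question: {question}\n\nDocument:\n{document}"
    ),
    "needs_attention": (
        "Review the document below and list any wording that needs attention, "
        "quoting each passage and explaining why.\n\nDocument:\n{document}"
    ),
    "summarize_themes": (
        "Summarize the document below as its top three themes, "
        "one short paragraph per theme.\n\nDocument:\n{document}"
    ),
}

def run_pattern(pattern: str, complete: Callable[[str], str], **fields: str) -> str:
    """Fill the chosen template and send it to whatever model client you use."""
    prompt = PATTERNS[pattern].format(**fields)
    return complete(prompt)

if __name__ == "__main__":
    # Stub model call so the sketch runs without any external service.
    stub = lambda prompt: f"[model response to a {len(prompt)}-character prompt]"
    print(run_pattern("summarize_themes", stub, document="...policy text..."))
```

The point of the sketch is the one-to-many mapping: a handful of templates can cover thousands of requests, which is what makes the "factory" model tractable.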

And now where we are is we're taking what we call ChatPwC, which is part of our partnership with OpenAI and Microsoft. That's the chat capability that unlocks the power of the public model but brings it into the safe environment of the firm, so we can use it to serve our clients and to serve our firm.

That brings the capability to all 75,000 of our people; in fact, we just opened it up earlier this month. At the same time, we're bringing in tools like Microsoft Copilot and deploying those as well. So if you think of that progression — it's a very quick story of a ten-month progression — think of it this way:

How do we want to use it? What governance do we want to put in place? What does responsible use look like? Let's start experimenting, then find the patterns so we can address more. And then, what are the publicly available or large-scale tools we can bring in to supplement that and get to the kind of scale we're after? That's the journey we've been on, and that's the journey we're talking through with our clients.

Michael Krigsman: Can you describe some of the differences relative to traditional business process technology? Obviously, gen AI is quite different. So how is that deployment and scaling process different?

Joe Atkinson: Yeah, it's funny. As I was preparing for this discussion, I went back and looked at some of the conversation you and I had four years ago, and it's amazing how things change. We've been talking for a long time about business process and then the application of technology in that business process.

So that's the typical model: here's how we do order-to-cash, here's how we do purchase-to-pay, this is what the process looks like, and now let's think about where the technology applies in that process. My view — and this is becoming more and more clear — is that generative AI is going to demand a redesign of the process at its core. We're going to have to rethink what we're achieving, what the inputs are, and what the outputs are going to be, because the capability of generative AI is so broad that applying it is going to require a different way of going about it.

Now, there are some things that remain the same. Nobody wants to throw darts at a board and say, hey, let's go after this process or that process. That's not good discipline in organizational transformation. But this idea of business model reinvention — it's a discussion that was taking place last week in Davos, and it's a discussion taking place in the C-suite all the time.

The technology enables things that were just too difficult to do before, which means we've got to rethink them at their core. And that's a prioritization question for most of our clients: okay, if it can really change the way we're working, let's figure out where the biggest bang for the buck is and start tackling those problems, because that's where we're going to see the most compelling near-term returns.

Michael Krigsman: So, Joe, you were just describing the importance of integrating processes and technologies. Can you elaborate on that? Because that's obviously a very important piece of it.

Joe Atkinson: Let's go to customer care, which is a place where we're starting to see these generative AI tools really show themselves. One of the challenges we've all had with customer care is that if I said to you, we're going to automate the customer experience, some of us already cringe a little bit, because we've been through that process of automating the customer experience.

Fifteen years ago, that was press one to talk to somebody about this, or press two to talk to somebody about that. Well, then the agents got a little bit better — the technology agents, the voice recognition got a little bit better. But most of us, even as it improved, would still from time to time get frustrated and say, well, it's really not the same.

And we all figured out what words we have to use to talk to the voice agent in order to get to a live person and get the resolution we need. The power of generative AI means that that interaction is much more likely to satisfy a customer in the future than it was able to in the past.

And companies are all over this. It requires rethinking how the customer engages with the company as a whole, and it requires looking at the whole pathway, because it may mean you can actually increase customer interactions and customer activity. And these ideas of personalization of the experience, which have been experimented with over the last couple of decades, become much, much more powerful.

So if I can know more about the customer, but not in a way that makes the customer uncomfortable — the word people usually use is that it doesn't feel creepy — if I can do it in a way that doesn't feel creepy and it just feels like, hey, this is a technology agent, I know it's a technology agent.

It's a trusted technology, it has a responsible approach to me as a customer, and it has a responsible approach to the use of my data. Then I'm going to be more willing, I think, to engage with that kind of technology and open up the aperture of the relationship, which is going to be an opportunity for the company, but also an opportunity for a much better experience for the customer.

That requires a rethink, because some of the technologies and attributes I've just described simply weren't available five years ago. And so that rethink, I think, is really at the heart of this business model reinvention. The other thing I would say is — and again, lots of different studies are looking at this, and I think we're all going to see many more —

it doesn't matter, in my view, whether you talk about a 20% improvement in efficiency, or 30%, or 40%, or you go up from there. Efficiency plus quality means we can get much better outcomes from business processes using these technologies than with some of the technologies available in the past. That means most organizations are going to have a remarkable opportunity in the coming years to take massive amounts of cost out of their operating model and put that resource, those assets, that capacity into other innovations — whether it's new products, new technologies, new capabilities, new ways to serve their customers, expanding to new markets, or acquiring new businesses.

So I do think there is massive upside to what's about to happen. Now, when people like me say there's a lot of cost you can take out, I want to make sure people don't just buzz past that, because when you take cost out, you're impacting people. And this is why I think the upskilling process, and the obligation that employers have, is so important.

The employer and the employee have to shake hands on this point, right? Everybody's got to be in on it. But at the end of the day, I do think employers can help employees make the transition across the bridge from where they are today to the kinds of jobs and opportunities that are going to help them be more secure in the future.

I realize that is really easy to say and devilishly difficult to make happen, but I think that's the game. That is the whole game for us in the coming years: can we take advantage of all these great resources we have in talented people who are doing a job today where some of the tasks in that job are likely going to go away, and equip them with the skills they need so they can do a job tomorrow where more of those tasks are not going to be replaced by generative AI, at least in the near term?

And that's why I think the growth orientation is the right mindset, and the skills orientation is the right mindset, as we all go down this business model reinvention pathway.

Michael Krigsman: We have some questions stacking up, so let's jump over to them. Susan Huddleston has an interesting question that's right on point here. She asks: can you share a client example of this journey and the transformation? Which I think is right to the point of what you were just talking about.

Joe Atkinson: The good news is we now have hundreds of clients we've engaged with to help them on this journey. It's everything from helping them stand up the governance and the AI factory.

Lots of clients start with the question of how do I create a framework around my use that's going to satisfy not only our corporate values but also the regulators and others from a responsible use perspective. So there are dozens of examples of that, but I'll share one — without naming the company, of course — where a client of ours is looking at the future of internal and external financial reporting.

They're thinking about the question of whether generative AI can produce better insights for their managers inside the organization than the traditional reporting models have been able to produce over the last few years. Any of us who have played with and worked with the generative AI tools know that their ability to unlock an insight is massively better than some of the report-construction projects many of us have been subject to over the years — how do I create a report that actually shows what I need to show? The dynamic aspect of it, the agility of it, will mean we no longer distribute reports to organizational heads that say,

here's last week's view of some question we had three weeks before that. Instead, executives will be able to say, on demand: here's the question I have today, based on the circumstances I'm seeing today. One of our clients is taking a very aggressive look at that. The other one is supply chain resiliency. These are complicated, data-centric problems.

A lot of our clients are now starting to ask whether AI tools broadly — which are already being applied — and generative AI specifically can help them think about where the sources and resources in their supply chain are, monitor their vendors and suppliers, and give them early warning that there are vulnerabilities in the supply chain that their control systems might not otherwise have detected. That combination of insight and the ability to apply it is really powerful. Those are a couple of the really powerful use cases.

And then the last one: we're seeing a number of insurance companies look at this in the claims management space. For anybody that makes an insurance claim, that's not the thing you want to wake up in the morning having to worry about. So if you can handle it really rapidly and get to an answer that's fair for the claimant and fair for the company,

that's a pretty good thing. If you can shrink that process down, it's less expensive for the company, and it's also less expensive for the consumer, the client. That is a space where we're seeing the application of generative AI really have a big impact — 20, 25, 30% improvements in the speed of those claims management processes.

Michael Krigsman: It's amazing the diversity and breadth of the use cases that you're seeing among your clients.

Joe Atkinson: Yeah, it's such a great point, and honestly, it's one of the points I love talking about. When you talk about the 3,000 use cases we saw at PwC for how we can improve our own business, one of the reasons we went down that pathway is because we knew that if we could improve our own business, we could not only help our clients, but help them from a position of experience — things we've done ourselves, where we have the lessons learned because we lived it.

But if you think about generative AI in terms of the power of this technology — in my 30-plus years, Michael, it is the broadest technology application, the broadest ecosystem opportunity, that I have ever seen. We're always talking about where it fits into the pillars and what happens, but there are generally very few places or organizations in the economy — in the way we conduct government, in startups, in community services, and so on — where you don't see some degree of application of generative AI. And lots of people have pontificated on this point.

But if you really think about the breadth of this impact, you're talking about the kinds of technology innovations like electricity, like the implementation of the Internet at scale. These were transformational moments in industry and in society, and I believe we're right in the middle of the next one.

Michael Krigsman: Emily Perry asks about employee and internal use cases. You've been describing external use cases; what about upskilling and internal employee use cases of gen AI?

Joe Atkinson: Again, this is one of the things I'm really passionate about. For employees, we've talked over the years about this idea of citizen-led innovation.

And one of the interesting things about this space is I think it creates the same kinds of opportunities. These are more business led in terms of the enterprise scale. But the reality is, if you can put generative AI tools in the hands of individuals, they're going to come up with the use cases that are going to be most meaningful to the way that they work every day.

And what does that require? It requires that they have access to the tool — again, in a controlled and secure fashion — but it also requires that they have the skills to use the tool. So I always encourage everybody — again, we have the partnership with OpenAI — if you haven't gone out to ChatGPT to play, I encourage you to play.

Just experiment with it, innovate with it.

You know, you don't want to use any of your company data; no, you don't want to use any of your sensitive data. But in the public models, go play and just see the power of the tools. That experimentation, I think, is so important in the internal employee use cases. It was one of the reasons we deployed ChatPwC so early: we wanted our people to experiment and innovate on the tools, because we knew they could help us get better in the way we were deploying it.

And I think that coupling of the availability of the technology in a safe environment to experiment and innovate, plus the really important element of how you skill people in the capability and what the upskilling journey looks like — that, to me, is kind of the secret.

So we've asked everybody to lean in to generative AI, regardless of your role, your title, or your tenure. We think there's a role for everybody to learn and develop their skills in these tools. We've rooted everybody in responsible use. We said: no matter what you do, start here — what does responsible use look like? Understanding the output, and recognizing that the output of any technology, including generative AI, is the responsibility of the individual who is actually going to rely on or provide that output.

And then we gave them the other really fundamental skills. Prompt engineering is a phrase most of us didn't know two years ago; now we all talk about it. When you play with the technology, you realize very quickly how important it is to iterate on your questions and the way you construct them. Have you provided the right persona?

Have you provided the right expected output? Have you provided the right context to the technology to give you the most appropriate answer? Have you restricted it to the data that is most relevant, to provide the most competent answer? All of those questions become skills questions for people.

So — and I love the question — I feel very strongly that if organizations at scale have not started to upskill their people already, frankly, regardless of whether the technology is in their hands yet, then I think you're at risk of falling behind very, very rapidly.
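
As a minimal illustration of the prompt-engineering elements mentioned above — persona, expected output, context, and restricting the model to relevant data — here is a short Python sketch. The function names and the `complete` callable are hypothetical placeholders, not a specific PwC or vendor API.

```python
from typing import Callable

def build_prompt(persona: str, task: str, expected_output: str,
                 context: str, approved_data: str) -> str:
    """Combine the prompt elements into a single instruction for the model."""
    return (
        f"Act as {persona}.\n"
        f"Task: {task}\n"
        f"Respond in this format: {expected_output}\n"
        f"Context: {context}\n"
        "Use only the data below; if the answer is not in it, say so.\n"
        f"Data:\n{approved_data}"
    )

def ask(complete: Callable[[str], str], **parts: str) -> str:
    return complete(build_prompt(**parts))

if __name__ == "__main__":
    # Stub model call so the sketch runs without any external service.
    stub = lambda prompt: "[model output would appear here]"
    print(ask(
        stub,
        persona="an experienced financial reporting reviewer",
        task="flag unusual quarter-over-quarter expense swings",
        expected_output="a bulleted list with one line of rationale per item",
        context="the company closed an acquisition during the quarter",
        approved_data="Q2 expenses: ...\nQ3 expenses: ...",
    ))
```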

Michael Krigsman: We have another question, this time from Greg Walters on LinkedIn, related to what you were just talking about. He quotes what you said earlier — "redesign the process at its core" — and asks: can you elaborate on that, and on the role of redesigning the process, when you talk about the kind of upskilling you were just discussing?

Joe Atkinson: Take the organizational processes that people use to produce management reporting; I'll use that as an example.

The reality is that most of us are trained to think about processes the way those processes have always been conducted. I'll actually go even further back, to the famous stories about when electricity hit the factory floor: they took the steam-powered equipment out of the factories and put the new electrical equipment in, but they located it where the steam ports were, because that's the way the process was built.

We're seeing the same thing in business processes today. We have business processes that have executed the same way for years, and we're seeing this powerful change agent in the power of generative AI technology. And all of us come at it and say, well, maybe I could apply it here, and maybe I could apply it there.

And by the way, you could — there's no question. But then the new question becomes: am I even doing what's necessary? Are these steps even necessary? Do I have to rethink what my input is and what the required output is? You see that from a consulting perspective, from a professional services and assurance perspective, from a compliance perspective.

It's that zeroing in on the output — what's the outcome I'm after? — and then rethinking what the inputs are. And by the way, those inputs are the people, the process, and the technology, all the elements. When I say redesign to the core, that's what I'm thinking about: the outcome-driven redesign of the inputs necessary to achieve the right outcomes.

Michael Krigsman: On this point, we have another question from Twitter, from Myles Suer, who I know; he's an expert who speaks with a lot of CIOs. He says that when he has spoken to CIOs over the last few years, they have been focused on using technology to affect top-line revenue first. His question is: does gen AI mean focusing on both top- and bottom-line impact?

Joe Atkinson: My short answer is yes, I think it does. I had the good fortune of being down in Florida at a CIO conference, and one of the things I said to the CIOs assembled is that I actually think this is the golden age of the office of the CIO.

What I mean by that is, we're all dealing with the technology infrastructure, and as Myles probably knows as well as anybody, it's not easy to deploy large-scale cloud infrastructure that is secure and provides the data at scale that you need. And the tech is getting better and better at delivering the infrastructure, the capability, and so on.

But now you get to the core of the question: can you start to rethink not only the revenue opportunities — all of us are always focused on growth — but also, if you get to this idea of core redesign of business process, how you deliver the same, similar, or better outcomes at a much lower cost point, so you can free up capacity in the organization to invest in the growth opportunities?

One of the most powerful examples is actually the coding and development of technology systems. This will change everything in the way that we deploy. I'll date myself — Myles will probably chuckle when I say this — but we all came through the year 2000, and we were all trying to find coders who could go fix the date-coding problems we had as the Y2K problem showed itself.

And if you think for just a minute about what solving that problem would look like today with generative AI, it becomes immediately evident how different the world is. That's 20-plus years ago — 24 years ago now — and today we would solve that problem completely differently. One of the issues we had was that we didn't have COBOL programmers available at any reasonable scale, because most of them had retired.

Well, now the technology can translate the code in most cases. Will the quality be 100%? It won't be. But if I can get to 85% and then apply some good human judgment to get the rest, I can start to build, deploy, operate, and manage systems at a much lower cost point than I've been able to previously.

Now, if I'm the CIO of an organization, would I promise my CEO and my board that this is going to happen overnight and I'm going to give you 30 or 40%? I would not — it's hard to change these processes quickly — but I absolutely would start to paint the roadmap of where these technologies can start to be applied.
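
As a rough sketch of the "translate most of it, then apply human judgment" pattern described above, here is a minimal Python example. The `complete` callable stands in for whatever LLM client an organization might use, and the review queue is just a list; none of this represents PwC's actual tooling.

```python
from typing import Callable, List, Tuple

def translate_legacy(complete: Callable[[str], str], cobol_source: str) -> str:
    """Ask the model for a draft translation; ambiguities are flagged for humans."""
    prompt = (
        "Translate the following COBOL routine into idiomatic Java. "
        "Preserve behavior exactly and mark any ambiguity with a TODO comment.\n\n"
        + cobol_source
    )
    return complete(prompt)

def translate_batch(complete: Callable[[str], str],
                    routines: List[str]) -> List[Tuple[str, str]]:
    """Every machine translation is paired with its source and queued for human review."""
    return [(src, translate_legacy(complete, src)) for src in routines]

if __name__ == "__main__":
    # Stub model call so the sketch runs without any external service.
    stub = lambda prompt: "// draft Java translation (needs human review)"
    for src, draft in translate_batch(stub, ["ADD YR TO DATE-FIELD."]):
        print("NEEDS HUMAN REVIEW:\n", draft)
```

The design choice the transcript points to is that the model produces the bulk of the draft while a human reviewer closes the remaining gap, rather than the model's output being accepted as-is.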

Michael Krigsman: Arsalan Khan comes back again and says: if AI development and investments are dependent on specific use cases, how do you measure — and it's a great question — how do you measure in dollars that a specific use case has more value? It seems that executives have veto power over which use cases to pursue. What about ideas from the bottom up?

Joe Atkinson: The answer is probably not as precise as he'd like, but my view is pretty simple.

If you take generative AI as a technology — and assume for a second we're talking about a black-box technology; it's not generative AI, it's just technology, it doesn't matter what it is — the challenge that any organization has, including the executives in the C-suite and the CEOs, is the same: what's the cost of that technology, and what's going to be the business outcome on the other side? And that business outcome — to the great question from Myles — is it less cost?

Is it more revenue growth? There's always a mathematical question to be had. Now swap technology X with generative AI and you have, in my view, a much different problem because of the breadth of applicability that we've talked about, because it can be applied in so many cases and because the application of it, depending on how you apply it, may not actually require all that much effort.

I think you're going to see — and you're already seeing — much more innovative experimentation at scale, with a lot more difficulty figuring out whether that innovation and experimentation is getting returns. It's one of the reasons we took a factory model across PwC: it gave us visibility into all the activity that was happening.

So I do think that's a bigger challenge in the current environment. But the fundamentals of deciding whether a technology investment provides a return — think about our multi-year, billion-dollar investment. It's pretty basic in terms of what we expect. I want to serve clients better and therefore get more of the market. I want to be able to earn a premium in the market where our skills are differentiated and our clients value that. And I want to be able to provide efficiency and quality capability for our people so they can deliver more value and do it in a more cost-effective way.

If I can get those returns — and I've got to measure them — this becomes pretty easy. And I'll add one more point, because I think it's such an important question, Michael. If I said to any organization, you're going to spend $100 million, or $1 billion depending on the scale of the organization — you're going to spend $1 billion, but you're going to get $10 billion of return in either cost savings or revenue —

it's an easy discussion. Part of the problem is that these costs are expensive, fluid, and broad. And so people get understandably and rightfully nervous: how do I get the right discipline applied to these investments in order to direct them the right way?

That's, to me, a governance question. Get the governance right, make sure the right people are at the table.

And I think you can navigate that complexity.

Michael Krigsman: Susan Huddleston comes back and says, what lessons have you learned from launching ChatPwC?

Joe Atkinson: People will lean in — but you've got to equip them with the knowledge they need to lean in.

And what I mean by that is that the experimentation, the innovation drive that most people have, is really powerful. You don't want to try to harness that; you actually just want to embrace and empower it. That's been a real lesson for us. Our people have pleasantly surprised us. They have engaged in our My AI training and all the assets that we provide.

In fact, 75% of the firm at this point, give or take, have now engaged with the training assets that are there and available to them. The other lesson I would share with Susan is that people are hungry for this. They're really looking for it. They're reading everything, they're seeing all these discussions, and they're expecting their employer to help them navigate through this.

I realize not everybody has the investment capacity that we enjoy given our business, but that opportunity to really invest in your people — I think that's one of the most important lessons we've learned in terms of the tech plus the people.

Michael Krigsman: Arsalan Khan comes back again — and get ready for this one. He says implementing an ERP is different than implementing AI inside the enterprise, due to the danger of job loss. How do you make sure people are not sabotaging AI projects on purpose? And what is the role of culture in this?

Joe Atkinson: I enjoy one of the greatest cultures of any organization on the planet.

It's one of the great privileges of being at PwC: our people. They embrace a culture of doing the right thing, acting with integrity, and so on, and I think most organizations enjoy that. But people can also embrace fear; they can feel anxiety. And if they're worried about their job, they will not always act rationally. They'll do things to protect what they think is the more important thing to protect, which is their security, their family, and all the things they care about.

So, to that great question, the answer, in my view, is that you've got to bring your people along on the journey. Not everybody will lean in. Some people may not be afraid, and they may not care that much; they're just going to let it play out, because they may be at a stage in their career or their life where they're content to let that happen.

But particularly for people who are mid-career or at the start of their career, I really do think they're looking for leadership. How are people leading through this moment of disruption? Are you speaking, as a leader, like somebody who doesn't care what the impact is on your people? Are you implementing programs and technologies that don't incorporate the training and upskilling of people?

If you're doing those things, you're likely going to get the unintended result where people say, well, I'm not going to lean into this technology, because all I'm really doing is automating myself into a riskier position. I think we owe people clarity. Having said that, when you do that, I think you can actually unlock more speed and confidence in your people, because they know where they stand.

There's been a lot of discussion on this topic about job losses and implications, and I'm not naive. There are certainly jobs in certain classifications in certain places where the full stack of tasks in those jobs will likely be replaced by technology. But you have human capacity, you have human potential, and taking advantage of that means plugging people in, training, upskilling, and making those investments.

We've been fortunate over our 170-plus-year history to find out that that's a pretty good investment every darn time.

Michael Krigsman: Greg Walters asks, what advice do you have for the SMB segment, $20 to $800 million in revenue, on how to engage AI internally and for their customers? And he's asking for three points.

Joe Atkinson: I do think it applies regardless of the scale of the organization.

So, first, get engaged with generative AI. Make sure somebody in your organization has ownership of the generative AI strategy, and start thinking about that strategy. The second, I would say, is think carefully about build versus buy or rent. These technologies are complicated and the scale is complicated, and so in many smaller organizations you may very well be better off purchasing the capabilities that are available from some of your suppliers.

The flip side of that, and this would be my third, is make sure your people are prepared. So if you go out and acquire or rent, and I'm probably a broken record on this point today, even in an organization of that scale, the anxiety will be real. So embrace the upskilling, the journey your people will want to be on. Get to them, give them the skills, help them understand, and make sure that they're part of your journey.

It's not just something that's going to happen to them.

Michael Krigsman: And Steve Jones on Twitter wants to make the point that if you're looking at a future where 50% of organizational decisions are made by AI, then it's not a top versus bottom line issue. It's about total transformation.

Joe Atkinson: Amen. I totally agree with that. And I don't know.

Again, there's lots of data out there, and people are making lots of guesstimates, estimates, and research on what the answers will look like. I don't know if it's 50% or 30% or 70%. I don't know what that balance will ultimately look like. But I agree the nature of this technology is impactful to the top line and the bottom line, and it is about business transformation.

It's something different from what we've seen over the last 10 or 15 years.

Michael Krigsman: Lisbeth Shaw on Twitter says, How do you balance responsible and ethical use of AI against the business pressures? And how do you help your clients draw that balance as well?

Joe Atkinson: I'll start with culture and I'll start with leadership. We all have a responsibility to run our organizations in a responsible way, and that includes the application of generative AI.

So if, as an organization, when you're adopting these technologies, you're not setting out an expectation of responsible, ethical, and safe use, then I think that's a failure of leadership. That was the first step we took as a firm, and it's the first step we recommend to our clients. And by the way, that is more than writing a policy or making a statement.

That's an important starting point, but it's more than that. You have to have the governance infrastructure, the understanding of what safe use looks like. You have to be engaging not only with your own organization but with external organizations. NIST has been very, very strong on the frameworks here, the responsibility frameworks that NIST is looking at. Engage with those, bring them to the table.

And that's the advice we're giving to our clients.

Michael Krigsman: On this responsible AI issue, as you've been talking, it's very clear that this is woven deeply into your thought process and your governance as well.

Joe Atkinson: It is. And actually, I often point out that we're talking about responsible AI now, but we started looking at responsible AI eight or nine years ago.

I mentioned our relationship with Carnegie Mellon University a little while ago. That was research we were doing with Carnegie Mellon going back seven or eight years, on what the responsible application of AI technology looks like. And we talked about, you know, what feels creepy to a customer. There are all these attributes that are kind of hard to quantify, but the reality is that you're going to have access to data and the ability to do things.

Just because you can do something doesn't mean you should. And you're going to need governance processes in place, with people of sufficient responsibility and influence in the organization, of organizational heft, if you will, to make the decisions and say, you know what, we're not going to pursue that use case because we don't think that's the right, responsible way to apply it.

Michael Krigsman: There are so many efforts right now in the United States and in Europe to consider regulation and what form it should take. Do you have any thoughts on that?

Joe Atkinson: You have the executive order in the U.S., you have the EU regulatory framework that's developing, and you have a little bit of, I'll say, a regulatory race.

I think regulators want to be the organizations that actually set the standards. We have said, and we've been on the record, that we think this is a place where it is appropriate for there to be a regulatory framework, and that the rule-setting environment is important to the use of these technologies. These are new technologies, they're broad technologies, with massive implications.

So we invite the application of thoughtful and intelligent regulation. And we think part of the opportunity for organizations, regardless of where you are, small or medium businesses as Greg was asking about earlier, the largest organizations, or policymaking organizations, is to get engaged in the process. We've been very, very positive about the fact that regulators around the planet want to learn.

They're engaging with thought leaders, not just us, obviously, but they're engaging with our technology partners and others. So engaging with that, I think, is a good thing. Their desire to regulate it in a thoughtful way is a good thing as well.

Michael Krigsman: As we finish up, what advice do you have for business people in general for approaching this change that is going on all around us, as represented by gen AI?

Joe Atkinson: I think the most important thing for leaders and organizations is to bring your people along. Bring your organization along, bring your customers along.

Be transparent about how you're applying it and be thoughtful about how you're applying it. And I do think very, very strongly that you need to be engaged already in thinking about what the future of your people is in a gen AI-driven world, and what the future of your business is in a gen AI-driven world.

And the problem with this pace of change, well, we have all these things that we say: the pace of change has never been slower than it is today, and it's always going to keep speeding up. And like all of these things about how it all plays out, the reality is that it requires committed, visible leadership, and you're not going to have all the answers.

I don't think anybody does. But if you start to set a pace, you do it in a responsible way, you use that theme of responsible and safe use of AI, and you talk about innovation, experimentation, iteration, and learning together, then I think organizations can be very successful, despite how scary this can sometimes feel.

Michael Krigsman: Joe, one of the key points that you have made throughout has been the careful and thoughtful examination of the issues, as opposed to painting with very broad brush strokes. And I just have to say that seems to me such a foundational point for being successful with these transformation efforts.

Joe Atkinson: I think it's a really, really important point of emphasis. And I'm glad you made it, because you think about the press and the media, and we've got a lot of folks on the X platform and on LinkedIn today, and we're all trying to digest it all.

I say to clients all the time, when they ask, how do you keep up? My answer is, I don't. None of us are keeping up. There's too much to keep up with. Now, you've got to keep the pace going, and you don't want to stand still. But the reality is, even as you try to keep up, you have to also recognize that we're all going to have a little bit of a knot in our stomach about what is it that I don't know, what is it that I haven't looked at, and how do I find out what I need to know?

And that's about engaging good, thoughtful people around you and, to use your great words, Michael, examining the question and getting comfortable asking questions that may make you feel like you're stupid. It's that courage of saying, look, I don't know what you're talking about. I don't know how to do that. I don't know how we should do that.

I'm not sure what the implications are. That honest conversation, I think, is critical to the cultural standards, the cultural platform that makes great change possible.

Michael Krigsman: Greg Walters comes back on LinkedIn and he says, Where do you see the demarcation between regulation and stifling AI innovation? How do you draw that balance? Just very quickly?

Joe Atkinson: I think we're all going to be figuring out where that line lands. Honestly, I think there's some risk that regulation will stifle innovation. I think there's some risk that it doesn't go far enough to prevent harm. So that's a line we're all going to be working together to find.

Michael Krigsman: And on that, a huge thank you to Joe Atkinson from PwC for being a guest on CXOTalk. Joe, thank you so much for being with us again today.

Joe Atkinson: Pleasure, Mike. Really enjoyed it and thanks for all the questions.

Michael Krigsman: And thank you to everybody who watched. You guys are an amazing audience. Before you go, please subscribe to the CXOTalk newsletter, subscribe to our YouTube channel, leave comments, and check out cxotalk.com. We have incredible shows coming up, and we'll see you again next time.

Published Date: Jan 19, 2024

Author: Michael Krigsman

Episode ID: 821