Paul Daugherty of Accenture joins CXOTalk for an incisive look at generative AI's impacts on business. Gain insights into responsible AI adoption, transforming workflows, assessing AI maturity, investments, the transhumanism timeline, reading recommendations, and more.
At the vanguard of technological innovation, this episode of CXOTalk examines the transformative impacts of generative AI.
Host Michael Krigsman engages acclaimed author and thought leader Paul Daugherty, Group Chief Executive of Accenture Technology, in a far-reaching dialogue exploring this breakthrough technology. With decades of experience advising Fortune 500 companies, Daugherty provides unique insights into AI's monumental potential alongside prudent recommendations for responsible adoption.
From reimagining workflows to anticipating future capabilities, Daugherty and co-host QuHarrison Terry delve into AI's promise and perils, offering expert perspectives for executives seeking to leverage AI's capabilities while safeguarding ethics and values. It is a must-watch for leaders aiming to remain competitive in the AI-driven economy: an informed vision of the coming AI revolution from one of the foremost minds accelerating it.
The conversation includes these topics:
- Overview of generative AI
- Timing of generative AI advancements
- Adopting Generative AI
- Impact of AI on work and productivity
- Explainable AI
- Organizational maturity for AI
- Accenture's AI investment
- How to invest amid technology ambiguity
- Science fiction and the future
- Paul Daugherty’s reading list
- Comments on managing a large organization
- Quantified self and personal data
Paul Daugherty is Accenture's group chief executive – technology & chief technology officer. He leads all aspects of Accenture's technology business. Paul is also responsible for Accenture's technology strategy, driving innovation through R&D in Accenture Labs and leveraging emerging technologies to bring the newest innovations to clients globally. He recently launched Accenture's Cloud First initiative to further scale the company's market-leading cloud business and is responsible for incubating new businesses such as blockchain, extended reality and quantum computing. He founded and oversees Accenture Ventures, which is focused on strategic equity investments and open innovation to accelerate growth. Paul is responsible for managing Accenture's alliances, partnerships and senior-level relationships with leading and emerging technology companies, and he leads Accenture's Global CIO Council and annual CIO and Innovation Forum. He is a member of Accenture's Global Management Committee.
Paul also served as chairman of the board of Avanade, the leading provider of Microsoft technology services, for five years and remains on the board of directors. He serves on the boards of Accenture Global Services Limited, the Computer History Museum and the Computer Science and Engineering program at the University of Michigan. He also sponsors Accenture's partnership with Code.org, which is focused on bringing Computer Science education to students around the world.
QuHarrison Terry is head of growth marketing at Mark Cuban Companies, a Texas venture capital firm, where he advises and assists portfolio companies with their marketing strategies and objectives.
Previously, he led marketing at Redox, focusing on lead acquisition, new user experience, events, and content marketing. QuHarrison has been featured on CNN, in Harvard Business Review, WIRED, and Forbes, and is the co-host of CNBC's primetime series No Retreat: Business Bootcamp. As a speaker and moderator, QuHarrison has presented at CES, TEDx, Techsylvania in Romania, Persol Holdings in Tokyo, SXSW in Austin, TX, and more. QuHarrison is a 4x recipient of LinkedIn's Top Voices in Technology award.
Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.
Michael Krigsman: Today, we're talking about generative AI and leadership with Paul Daugherty, Global Chief Executive for Accenture Technology. My guest cohost is QuHarrison Terry, Chief Growth Officer for the Mark Cuban Companies.
QuHarrison Terry: Thanks for having me, Mike. It's exciting to be able to talk with Paul on AI today.
Michael Krigsman: Paul, why don't we begin by asking you to tell us about your work as the chief executive for technology at Accenture?
Paul Daugherty: Accenture is a large organization. We're about 740,000 people, over $60 billion in revenue. We help companies do amazing things with technology. That's what we're all about.
Michael Krigsman: Do you want to give us, to start, just kind of a brief overview of generative AI? I think everybody in the audience knows what it is. But in the context of business and in our world, where does it fit today?
Paul Daugherty: To talk about generative AI, you have to talk about AI first. AI has been around for a long time, and all of us use AI continuously.
The three of us talking here and anybody listening has used AI dozens if not hundreds of times today. AI has become a pervasive part of our life through the advances in machine learning and deep learning and such that have come before.
AI, as I'm sure most of the audience knows, is an old field. The term was coined, I believe, in the mid-1950s at a conference at Dartmouth, nearly 70 years ago. And it's gone through a lot of iterations over the years.
I like to think about three forms of AI:
Diagnostic AI, which is using AI to diagnose things. Often, deep learning and the like to look at, for example, using machine vision to look for manufacturing defects (a thing we do commonly), to unlock our phones (as we do every few minutes of every day), or assistive driving features in cars. That's diagnostic.
Then there's predictive AI, such as AI we do to do retail forecasting for companies, often machine learning and optimization models. Those are well-established techniques. We have lots of people doing that work for lots of clients around the world, and many companies use it.
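The retail-forecasting flavor of predictive AI that Daugherty mentions can be sketched in a few lines. The moving-average model below is a deliberately minimal toy illustration—not Accenture's method, and the product data is invented—but it shows the basic shape of the problem: learn from past demand, predict the next period.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next period's demand as the mean of the last `window` observations.

    Real retail forecasting uses far richer machine-learning and
    optimization models; this is only the simplest possible baseline.
    """
    if len(history) < window:
        raise ValueError("need at least `window` observations")
    return sum(history[-window:]) / window

# Hypothetical weekly unit sales for one product.
weekly_units_sold = [120, 135, 128, 142, 150, 147]
next_week = moving_average_forecast(weekly_units_sold)
print(round(next_week, 1))  # mean of the last three weeks: 146.3
```

Even production systems are often benchmarked against a naive baseline like this one, since a complex model is only worth its cost if it beats the simple forecast.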
Generative AI is the new thing on the scene, and it really is a massive breakthrough, probably the biggest breakthrough in AI to date. And what we're really talking about with generative AI is foundation models, which are really powerful models that can be reused across many different use cases. That's why they're called foundation models.
Large language models are a type of foundation model that understand language and have allowed us to really master language through artificial intelligence. Then the transformer technology added on top of that allows us to generate things. GPTs (generative pre-trained transformers) are these large models built on transformer technology. They can create new sources of content.
That's really the breakthrough of generative AI: foundation models that are very powerful and can be reused, rather than built as bespoke data science projects, combined with this creative capability to produce content, whether it be language, graphics, video, et cetera. It really is transformational in terms of what it allows us to do as individuals and what it allows companies to do. But we're at the very early stages still.
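The core mechanism Daugherty describes—a statistical model that generates new content by predicting what comes next—can be shown in miniature. The toy bigram sampler below is only an illustration of the principle (real foundation models learn billions of parameters over vast corpora; the tiny "corpus" here is invented for the example):

```python
import random
from collections import defaultdict

# Count which word follows which in a (tiny, made-up) corpus.
corpus = "the model predicts the next token and the next token follows".split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:  # no observed continuation: stop
            break
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("the", 5))
```

Everything the sampler "knows" is frequency statistics from its training text—which is why scale matters so much: the breakthrough in foundation models came from learning these next-token statistics over enormous corpora with transformer architectures, not from a different kind of knowledge.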
QuHarrison Terry: Hey, Paul. One of the things that I want to talk to you about today is the whole concept of you thinking about this stuff almost a decade ago. In your book, Human + Machine: Reimagining Work in the Age of AI – sorry, I don't have it in front of me, but I did read it a while ago – when I was looking back at that book, one of the things that you talked about was how AI would ultimately become the ultimate innovation machine.
It's fascinating that it's 2023, almost 5 years later since you published that book. What's your take? It seems like you're spot-on, but what things happened in generative AI that you didn't envision or forecast back in 2018?
Paul Daugherty: I think the premise and all the precepts in Human + Machine really have stood the test of time well and the concepts we talked about, the human plus machine, and the idea that AI gives humans superpowers to do new things really has stood the test of time. We see generative AI as an even bigger step forward in terms of the augmentation and enhancement of what it can do for all of us in terms of giving us greater tools and productivity to do new things.
I think the surprise, we did talk about all this technology in that book, and then our next book that my co-author and I wrote – Jim Wilson is my co-author – which was called, Radically Human. That was the second book.
The pace of the advance is what surprised us more so than the capability. We were anticipating that some of these capabilities would come along.
But the pace of development of the foundation models, the rapid growth, the size and complexity of the parameters, and the weightings and everything, and the breakthroughs that came about with that, were probably the biggest surprise, Qu, in terms of what we saw.
QuHarrison Terry: Then one last thing on that. When you talk about the timing and how fast everything is coming together, it's fascinating to think that even OpenAI's ChatGPT is still only nine to ten months old (as we stand today).
Paul Daugherty: Yes.
QuHarrison Terry: When we fast-forward to just yesterday, Elon Musk announced X.AI, which is another fascinating AI company. As a business leader and executive, how should I think about AI? It's happening fast, but does that mean I take the "move fast and break things" approach, or should I wait and see where things settle?
On the flip side of that, an organization might be behind. How should I think of that?
Paul Daugherty: Our belief is that generative AI is a participant sport. You have to jump in and start using it, experience it, and do some experimentation. We're encouraging companies to do that, and that's the approach we're taking in our own organization.
It's very early with the models. You just highlighted that with how young the GPT and ChatGPT models are. A lot of companies have not reached GA (general availability) status of their models and products, so it's early evolving.
Elon's company was announced recently, and there are new companies sprouting up continuously. And so, I think the key for companies is, first, look across your business and decide where it's applicable. Second, pick some use cases where you can jump in and experiment with the technology and manage some of the complexity and risk. Then third, develop the foundational capabilities that you need to then scale it faster.
Those capabilities include technology capabilities like understanding the models: the prompt engineering, the pretraining, and other things that you might need to do, and how to integrate these models back into your business. They also include the business skills of understanding how and where you apply it, how you develop a business case for it, and how much it costs to apply these models.
Those three steps—looking across the landscape, experimenting, and laying the foundation—are what we're helping a lot of companies do today.
QuHarrison Terry: From an industry-specific standpoint, it seems like each industry is dealing with AI at its own speed. The two that I want to bring up right now that have had probably, I would say, some of the most impact is, one, education, and two, the legal sector.
The funny thing about it is they dealt with this in entirely different ways. In the education sector, everything is pretty much a chaotic mess: you have schools banning things, turning things off, then re-enabling them. We could have a whole show on this but, on the legal side, you've got—and this surprises me the most as a technologist—lawyers really embracing this technology.
There's obviously a little resentment, but there are legal LLMs, and there's a lot of adoption as to how you can integrate it and adopt and make your law firm or your practice move faster. I would have never predicted that in 100 years, but it's happening.
Now, on the flip side, Lisbeth Shaw from Twitter has this really good question where a lot of organizations and individuals have begun using generative AI for work without any AI governance in place. She's wondering how you can apply governance once the horses are out of the barn and racing.
The reason I brought up those points earlier is education: that whole sector is dealing with this dilemma right now. I'm curious about your take, because you're seeing it on the enterprise side where, if I input an email or the contents of a document, there is a true risk, whether it be IP or trade secrets. With school, if I put my quiz and test questions into the program, it really only impacts me and the knowledge that I retain and gain.
Paul Daugherty: We're seeing broad adoption across industries, unlike any other technology I've seen. Client-server, ERP, mobility, cloud, SaaS—each had very specific industry adoption patterns. Generative AI is super-broad in terms of the industry adoption we're seeing and the potential use cases we're seeing across industries.
The two you mentioned are super interesting, Qu. Education, I think, will be literally transformed through generative AI. It enables truly personalized learning in ways that are significantly different than our current educational system. It'll take a while for that to work through but, yes, it's going to be pervasive and powerful.
Legal, I agree with you. The interesting thing about the legal profession is it can help paralegals work more effectively and do higher-level work, and it can allow experienced lawyers to leverage themselves more effectively in terms of the work they get done. So, we're seeing it being adopted across the different types of work in the legal profession.
But I think to the horse out of the barn question, you can still apply responsible AI. You can go back through and do it.
It's a matter of being systematic and rigorous. It's about having C-suite and CEO support.
We report on responsible AI to our board. It's part of our formal compliance responsibility that we do. And we encourage organizations to do the same.
If you already have AI out there, and most organizations do, and most organizations don't have enough responsible AI in place, we believe it's time to do that. Inventory the AI. Know where you're using it.
Understand the risk level. Know the mitigation techniques and tools and have them at your disposal. Know if you've mitigated the risks. You have to go back retroactively and do that if you haven't, so that you know what your baseline is as you start to apply more AI and generative AI going forward.
Michael Krigsman: We have a bunch of questions that are stacking up on LinkedIn and Twitter. I have to say you guys in the audience, you are so intelligent, so smart and sophisticated, and your questions are absolutely great.
Our next question comes from Florin Rotar. He is the chief technology officer at Avanade. I have to say that I did a video with Florin years and years and years and years ago in Seattle. Florin, it's great to see you pop up.
Here is Florin's question, and I think it gets right to the heart of some of the key issues. He says, "How will generative AI change the future of work? Can it also play a role to enable people to realize their full potential, to thrive and to grow, not just to drive productivity? Will it blur the lines between white collar and blue collar?"
I'll just add to that. To me, this question is also getting to the point that QuHarrison just raised, which is, generative AI brings out the best and brings out the worst (in people).
Paul Daugherty: We talked, in Human + Machine, about the idea of no-collar jobs, and exactly what Florin highlights, eliminating this distinction between blue-collar and white-collar, as you look at it. Think about a hands-on service technician. Think about a plumber or an electrician that now has access to large language models that give them tremendous amounts of additional information and potential.
It can give them tools to run their business more effectively. Maybe they can be a service provider to others in their profession rather than just being the specialist at the physical trade that they have. I think that's the blurring capability that AI allows.
Think about a small business (or any part of a larger business) that wants to go international overnight. They can start communicating in dozens of languages seamlessly as they expand their business. It's these superpowers that give people more capability, and that leads to a lot of new entrepreneurial activity and ideas.
Think of what GoDaddy did to the Internet in creating a generation of entrepreneurs in a lot of different ways, or eBay marketplace, and such. We're going to see that to the next exponential multiple with generative AI, creating all these new possibilities of what people can do. That's what we see happening there.
To get more specific around it, we see the new opportunities for jobs and the way generative AI impacts that fall into five categories.
The first is advising. This is advisors, assistants, or copilots to help people do their jobs more effectively.
For example, there's a large European service organization we're working with where we're using generative AI in their customer service organization. It allows them to answer questions with a lot more accuracy and quality because, as I mentioned earlier, they can pull tremendous amounts of technical information together to answer customers' questions better, faster, and with higher quality. They can also cross-sell more effectively because they get the ideas, prompts, and support on how to cross-sell. That's advising.
Creating is the second category. A good example here is the work we're doing in the pharmaceutical industry where, in the drug discovery and clinical trials processes, we're able to create some of the regulatory and compliance documents they need. Those then get reviewed at the final stage by humans in the loop, avoiding the rote work that a person would normally do and allowing them to apply their judgment and expertise to the final product. That's the creating side of it (in addition to applying it in marketing and other areas that I could talk about, which is super interesting right now).
There's automating where you can use generative AI to automate some of the transaction processing. An example here is a multinational bank. We're using generative AI in their back-office processing to read and correlate tens of thousands of emails that come in with transaction activity. Normally, people need to sort through all this to reconcile and do their post-trade processing more effectively.
Again, you can do this with other technology. You can do it with generative AI. You can make people's jobs more productive and effective and take out some of the drudgery.
The fourth category is protecting, which I think is super interesting. An example here is we're working with a large energy company on a safety application so that workers in real-time can get all the information on what's happening. Real-time conditions, weather conditions, and other things in a complex, say, refinery, and then combine that with all the information they need to know from safety procedures in manuals and regulations and such that they can operate in a more safe manner in real-time. Again, couldn't put all this together before generative AI.
Then the final use case we're seeing a lot is in technology itself, using AI and software development in technology development.
I'm sure we'll find more examples as we go. Those are five that are kind of standing out right now, just to drill into some of the ways that it's transforming work (in response to Florin's question).
QuHarrison Terry: We've got another question in from Twitter from Chris Peterson. The question is, "One of the opportunities mentioned in Human + Machine was the AI explainer role. Is that even possible for something as complex as GPT-4 with billions of parameters and almost unlimited training data?"
Paul Daugherty: In some industries, in some problems, if you can't explain it, you can't do it. That's part of that screening that I talked about earlier with responsible AI.
If you have kind of a regulatory or ethical or business need to explain exactly how something is happening, you need to use the right type of approach (where you can do that), and you can't do that with (to your point) some of the models that are there.
There's a lot of advance happening in explainability. There are ways to create the models to understand how they're processing.
There are areas like GAN (generative adversarial network) that we can use in different ways to get some insight into how models are working and such. So, there are a lot of different advances there, and there are new fields, in addition.
New fields like prompt engineering are cropping up because of generative AI. We're also seeing demands in the market for explainability engineers or explainability specialists who can bring that understanding in to help understand those kinds of conditions.
The other thing that's sometimes important is that, in some applications, you don't necessarily need to explain exactly how you got the answer. You need to provide the transparency of what information you're using, what data you're using, and the process itself.
You need to differentiate where you really need to explain exactly all the math you did and how you did it, so to speak, and where you just need to provide transparency into how you're doing it and show that you're using information in the right way. Distinguishing that can help organizations unlock some of the potential, too.
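One common, model-agnostic way to get the kind of insight into model behavior discussed above is permutation importance: shuffle one input feature and measure how much accuracy drops; a big drop means the model relies on that feature. The sketch below uses an invented toy model and data, and is not a technique attributed to anyone in the conversation:

```python
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    shuffled = [row[feature] for row in X]
    rng.shuffle(shuffled)
    X_perm = [dict(row, **{feature: v}) for row, v in zip(X, shuffled)]
    return baseline - accuracy(model, X_perm, y)

# Toy "model": flags a transaction purely by its amount, ignoring region.
model = lambda row: row["amount"] > 100

X = [{"amount": a, "region": r}
     for a, r in [(50, "east"), (150, "west"), (200, "east"), (30, "west")]]
y = [False, True, True, False]

print(permutation_importance(model, X, y, "amount"))  # non-negative; the model reads this feature
print(permutation_importance(model, X, y, "region"))  # exactly 0.0: the model never reads region
```

This answers "what does the model rely on?" without opening the model up—closer to the transparency end of the spectrum than to a full mathematical explanation of each individual answer.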
Michael Krigsman: We have another question from Wayne Anderson. You can see we love taking questions from the audience. Again, the audience is amazing.
This is from Wayne Anderson on Twitter. Wayne also has a question coming up on LinkedIn, so he's sort of a multi-tenanted—
Paul Daugherty: Multi-platform.
Michael Krigsman: –multi-faceted social media happening here. Wayne says, "What is the litmus test? Is there one, a question, set of questions, that you use to quickly evaluate a client's place on the operational maturity journey for AI and ML?"
Paul Daugherty: We have a maturity framework we use to assess for ourselves as for our clients. There are steps of maturity that you go through in assessing it.
There's assessing talent and where you are with the talent and the expertise that you have in the organization. That's about the technology talent as well as the skills you have in the business and the kind of training programs that you have around that.
There's assessing the data readiness for it in terms of (as we talked about earlier) your data maturity and the maturity of your platforms, data platforms, to support what you need to do.
There's then the maturity of how you need to use the models and your sophistication around that. That depends on the strategy that you have. Is your strategy to use proprietary, pre-trained, publicly available models, or is your strategy to do some of your own pre-training or customization using your own data? That requires far different operational skills and, therefore, you need to evaluate where you are on that spectrum.
Then there's the operational skills around it. So, how do you put the AI in place, and how do you monitor it on an ongoing basis for the right outcomes?
Then finally, the responsible AI dimension of it.
Those are kind of the dimensions; there are more underneath that. But there's a process that we use to go through it, and I think it's important for every organization to have an understanding of that and a way to evaluate their maturity, so they know how they're making progress.
Michael Krigsman: Paul, let's shift gears a little bit and talk about investment, technology investment. AI is changing so rapidly. The capabilities are changing. The models are changing. The implications for the enterprise, and for society at large, remain very unclear. Given this ambiguity, how do you recommend that organizations should be investing?
I will mention that Accenture recently announced a $3 billion investment in this. Obviously, this is something that you're giving a lot of thought to.
Paul Daugherty: As you said, we announced a $3 billion (billion with a B) investment in data and artificial intelligence—we don't do that too often. A good part of that is for generative AI, but it's across data and artificial intelligence, so we're doubling our workforce.
We have 40,000 people that work in data and AI today. We do a lot of work in that area. We're going to double that over three years.
We're developing a new tool called AI Navigator for Enterprise to help companies apply AI more quickly, including generative AI. The tool itself uses generative AI to help companies understand the roadmap they need to follow and, industry-by-industry, how they can drive value from AI.
We're creating a center for advanced AI where we're looking not just at generative AI but the next breakthroughs that will come as well.
Yeah, we're excited about it. We're putting a lot of money and focus on it because we do believe this is transformational for business and this wave will build faster than cloud and faster than some of the other technology waves that we've seen before.
Yeah, a big focus, and we see companies doing the same. We did a survey recently, and 97% of executives that we surveyed—this is just a couple of weeks ago—believe this is going to be strategic for their companies, and it's going to change their business or industry. Ninety-seven percent, that's basically everybody.
Over 50% believe it's game-changing. Not just some change, but game-changing for their industry or company.
About 46% are going to invest a significant part of their budget in generative AI in the next two years. This is a fast build, and maybe some of this is companies getting a little over-excited, but we believe that pattern will hold and companies will move and invest in this technology more quickly than we've seen with other waves of technology.
Michael Krigsman: But what about the risk associated with investing in something where the end trajectory is so unclear?
Paul Daugherty: You need to look at the horizon. I think there are a lot of things that are clear.
I think the key thing is to look at this from two dimensions: business case dimension and the responsible AI dimension, which helps you balance the risk. The business case helps you look at the value. The responsible AI helps you look at applying the human values and the right risk profile.
If you take those two lenses, I think you can find the intersection of the right things you can start on now with no regrets. Obviously, you have to make sure that the use case you look at can be supported with the technology that's available today, which is moving super-fast. I think, Michael, you can identify no-regrets things to do.
We believe, in the near term, this is going to be human-in-the-loop types of solutions for the most part. It's going to be solutions that bring in tremendous new capabilities for people. It's going to be new, exciting capabilities for consumers to use more directly.
In one case, a retailer we're working with is using generative AI to create all sorts of new product configuration capability for their customers. It's going to create new capability for employees, et cetera.
This is all stuff that's doable today, I think, with no regrets, without really worrying too much about the risk. You can apply the right principles to do it in a responsible way.
QuHarrison Terry: Sci-fi has shown us what the future looks like. We see some of the gadgets and gizmos that are real-life objects from Star Trek. We see some of the unforeseen and uncomfortable futures from Black Mirror start to arise.
One of the things that I'm wondering your take is, I mean, you wrote the book Human + Machine, and then you've got another one since then. I'm guessing you've been thinking about this whole concept of transhumanism and merging the brain-computer interfaces that Elon talks about with some of these AI models. How near do you think that is, or do you think that that is still fodder for science fiction novelists?
Paul Daugherty: First of all, I'm a massive fan of science fiction, and I believe most science fiction eventually becomes real. It's a matter of the timeline.
If you want to read about where technology is going, you pick up somebody like Neal Stephenson and read his books where he coined the term metaverse among other things, and his book Fall previewed where we are with technology right now (a number of years ago) really well. Science fiction can be incredibly illuminating into where we're going.
In terms of transhumanism, I'm not a real expert per se in that field, but I talk to a lot of friends and colleagues who are. I believe it's quite far away.
Think about how blown away we are by large language models today and ChatGPT and everything. There is no intelligence inherent in these models. These are statistical models.
People ask me how intelligent these models are. The models have no intelligence. The models are a bunch of data with technology that can statistically create results from them. There is no inherent knowledge.
Now, some of the breakthroughs we're looking for in AI, the next generation of things like common sense AI, the way knowledge graphs come in and can be combined with generative AI, that starts to create systems that have more intelligence inherent in the models, along with the generative capability. I think that's where you see some interesting advances.
But truly getting to the human and surpassing human level, I think we're quite far away from it. We're multiple breakthroughs away, I believe, from seeing that.
I think that discussion distracts us a little bit from what we need to do today, which is some of the great questions that listeners have asked about human values and ethics. Let's prevent people from using today's technology in bad ways and avoid getting a little bit too distracted by the things that are pretty far down the road.
QuHarrison Terry: As an author, I'm sure some of your pastime includes reading. What books are you reading these days, and what's keeping you sane?
Paul Daugherty: One of my favorite authors and heroes is Neal Stephenson who wrote so many great science fiction books, so I'd put him out there.
A great book that I read recently is Cloud Atlas, which is a fantastic story that gets into some of the topics that we talked about. It's a prize-winning novel that covers everything from the fall of the Ottoman Empire to space travel in the future (through a series of parallel stories). It's a very interesting read.
There's a book called Reality+, which I'd recommend to anybody, anyone that's interested in, first of all, the transhuman topic you mentioned, the metaverse, or related topics. Reality+ is by a philosopher from NYU who is exploring the question of whether we are living in a real world or a simulation, and how you would know the difference between the two. It's a fascinating book and super well-written.
I read a lot, and those give you a sense of the realm from fiction to science fiction to philosophy as well as technology.
Michael Krigsman: You're the senior person for technology at Accenture, which employs about 740,000 people. Just that number in and of itself is almost incomprehensible. How do you spread yourself over 740,000 people and manage the pressure and the expectations?
Paul Daugherty: It's an amazing privilege to have a role like this. Our mission is to deliver on the promise of technology and human ingenuity. The human ingenuity that we have in those 740,000 people is just amazing.
What I like most about my job is the ability to learn from 740,000 people. I don't talk to each of them individually, but the work we do for clients, the innovative ideas they come up with, and the projects we do to improve communities and society are just super inspiring.
It's really a privilege to do it. I'm just honored to have the role and to represent the amazing group of people that we have and the amazing leadership that we have.
It is a big company. It's a lot of people. But the way to think about it is as a lot of small communities that come together with a common culture.
We have systems in place, so we know how to hire people in volume if we need to. We know how to build community and build culture in our organization in a lot of different ways.
As you scale up and get bigger, some things aren't that much harder to do, and they scale very well as you grow. That's what I've found as we've grown the organization.
It's a lot of fun and, again, it's just a privilege to be in an organization like this and have the role that I have.
Michael Krigsman: What's the hardest part?
Paul Daugherty: I don't know all the 740,000 names, but I'm working my way through as best I can.
QuHarrison Terry: Hey, Paul, a question for you regarding just being a techie. What's your favorite device?
Paul Daugherty: Probably the apps that I use. One of the devices I'm really getting a kick out of is my Oura Ring. Not to do any marketing for a specific product, but it's a simple device.
The ring is connected to the app on the phone. And I'm finding it's really helping me understand some patterns and how I can be a little healthier and happier and get better sleep and such.
I can track and correlate my heart rate, my oxygenation, my breathing patterns, all sorts of things, against my sleep cycle and my activity cycle.
We're data-driven, and if you get better data, you can improve patterns and such. That's one of the things I'm playing around with right now that I'm getting a lot of value out of.
QuHarrison Terry: One of the things that's interesting about the Oura Ring is it represents the whole quantified-self movement.
Paul Daugherty: Right.
QuHarrison Terry: You now have your own personal database of data that you can do whatever you want with. Are you going to build anything using your health data or is it just a personal experience?
Paul Daugherty: I don't know, but I'm on that exact journey you mentioned. I'm starting now with the personal biome, understanding your biome more using self-diagnostics, which has another big impact on health and wellness.
Yeah, I've been trying to get more and more data-driven and understand what makes me work and what makes me healthy or not. Yeah, that is something I'm going to continue doing.
QuHarrison Terry: It's funny because that's the big data that comes off of your body, and then you could take what works for you and implement that at the enterprise at scale. I see what you're doing.
Paul Daugherty: [Laughter] Exactly.
QuHarrison Terry: [Laughter]
Michael Krigsman: Okay. With that, we are out of time. A huge thank you to Paul Daugherty. He is the chief executive for Accenture Technology. Paul, thank you for coming back again to CXOTalk. We really, really do appreciate it.
Paul Daugherty: It was a pleasure, Michael, and it's great to do this with Qu as well. Thanks to you both and to the audience. Those were amazing questions. I wish I could be there and ask the audience a lot of questions as well, but it's been a great experience. Thank you.
Michael Krigsman: QuHarrison, it's great to see you. Thank you for being such a great co-host. That was a lot of fun, wasn't it, Qu?
QuHarrison Terry: Indeed, man. Thank you for having me.
Michael Krigsman: Everybody, thank you for watching. And as Paul said, you guys are an amazing audience.
Before you go, be sure to subscribe to our newsletter. Subscribe to our YouTube channel. Check out CXOTalk.com, and we will see you again next time. We have amazing, really great shows coming up.
Have a great day, everybody. Bye-bye.
Published Date: Jul 14, 2023
Author: Michael Krigsman
Episode ID: 795