Centaurs and Cyborgs: Navigating the Jagged Edge of Generative AI Productivity

Explore generative AI productivity in business with Harvard Business School and Boston Consulting Group on CXOTalk episode 820.

Jan 12, 2024

In this episode of CXOTalk, we explore the transformative role of artificial intelligence in reshaping the landscape of knowledge work. Our special guests are primary authors of the important research paper "Navigating the Jagged Technological Frontier," which examines the profound effects of generative AI on professional productivity and work quality.

As we navigate the complex terrain of AI in the workplace, our discussion focuses on two crucial metaphors: Centaurs and Cyborgs. These concepts capture the nuanced ways in which professional knowledge workers collaborate with AI technologies, blending the strengths of human intuition and creativity with the precision and speed of AI.

This episode not only highlights the significant productivity gains and challenges posed by AI but also examines the ethical and practical implications of this relationship. We tackle critical questions: How does AI redefine the boundaries of human expertise? What are the best practices for professionals to harness the full potential of AI without losing the essence of human judgment?

Join us for a thought-provoking conversation that cuts through the hype, offering a balanced view of generative AI's role in the future of work. Whether you're a business leader, a technology enthusiast, or a professional keen on staying ahead in the AI-driven world, this episode offers valuable insights into the evolving dynamics of human-machine collaboration.

Karim R. Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at the Harvard Business School. He is the (co)founder of several Harvard-wide research and educational initiatives centered around the intersection of technological innovation, artificial intelligence (AI) and company strategy. He is the co-founder and chair of the Digital, Data & Design (D^3) Institute at Harvard, founder and co-director of the Laboratory for Innovation Science at Harvard, and the principal investigator of the NASA Tournament Laboratory. He is also the co-founder & co-chair of the Harvard Business Analytics Program, a university-wide online program transforming executives into data-savvy leaders.

Karim has published over 150 scholarly articles and case studies and is known for his original scholarship on open innovation. He has pioneered the use of field experiments to help solve innovation-related challenges while simultaneously generating rigorous research in partnership with organizations such as NASA, Harvard Catalyst, and The Broad Institute. His digital transformation research investigates the role of analytics and AI in reshaping business and operating models. He co-authored Competing in the Age of AI (2020), an award-winning book published by Harvard Business Review Press. He has developed six online courses that have educated thousands of executives on AI strategy, technology-driven transformation, and entrepreneurship.

François Candelon is the Global Director of the BCG Henderson Institute, Boston Consulting Group’s think tank dedicated to exploring and developing valuable new insights from business, technology, economics, and science by embracing the powerful technology of ideas. He is also a leader of BCG GAMMA’s AI@Scale effort for technology, media, and telecommunications companies. GAMMA is BCG’s data science and advanced analytics unit.

François joined BCG in 1993. He has worked in several European countries, led BCG's global telecommunications work from 2008 to 2012, and spent seven years in China, where he worked with the most advanced tech companies. His research focuses on AI and digital as a source of competitive advantage for businesses and national economies. His work has been published in prestigious reviews (Harvard Business Review and MIT Sloan Management Review), and he has spoken at global conferences such as Mobile World Congress, TED@BCG, Web Summit, the Politico AI Summit, and the Wuzhen Internet Conference.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Today on Episode 820 of CXOTalk we're answering the question, "Does generative AI actually make you more productive?" Our guests are Karim Lakhani, who heads the Digital Data and Design Institute at the Harvard Business School, and Francois Candelon, who is Global Director of the Boston Consulting Group's think tank called the BCG Henderson Institute.

Karim, tell us briefly about your work at Harvard Business School.

Karim Lakhani: I've been here, gosh, 18 years now at Harvard Business School in 2 big roles. One is this new institute that we set up, Digital Data and Design Institute (D³) because we feel that the power of digital data and design is exponential, not linear. Then also, the Laboratory for Innovation Science, which is a place where I do all of my research as well.

Michael Krigsman: Francois, welcome to CXOTalk. I'm thrilled that you're joining us today.

Francois Candelon: I've been with BCG for more than 30 years, and I have 2 roles today. One, I am, as you said, the global director of BCG Henderson Institute where I focus my own research on the impact of tech, in general, and AI, in particular, on business and society. And on the other side, I am a practitioner, as I am leading, let's say, AI transformations for our clients in the tech and telecom sectors globally. It gives me the opportunity to work with Karim.

Research into generative AI and knowledge worker productivity

Michael Krigsman: You both were involved in leading this very interesting research study. It's called Navigating the Jagged Technological Frontier. Tell us about your research. Karim, do you want to jump in and give us an overview of that work? 

Karim Lakhani: The whole idea came about when Francois came with his team to my office and we were talking about generative AI and what that was going to do to knowledge workers. Through a series of conversations (we were having before and after), we realized that we were at this very special moment in the history of knowledge work where, all of a sudden, this new tool that could generate text, that could be a cognitive aid, was becoming widely available and many companies were adopting it, but nobody was approaching it in a scientific way.

We felt like this was a unique opportunity for us to partner together and do a study, a rigorous scientific study, an experiment, a field experiment, where we would be able to study the implications, the impact of this tool on knowledge workers at BCG. That's what the study was designed for. 

We ran it as a randomized control trial. There's a control group, which does not use ChatGPT for a couple of tasks. Then a treated group that does use ChatGPT. Then we can measure a range of outcomes. 

That was the design of our study, and it was an amazing collaboration. We loved working with Francois and the rest of the BCG team. We pushed and pulled each other quite a bit.

Francois Candelon: Yes, a lot.

Karim Lakhani: It was quite eye-opening for us to do this study and put it together. Then the reception, since those studies came out in September, has been quite amazing as well.

Francois Candelon: Yes, and I believe it was quite amazing because it was one of the rare studies I have seen working at this scale: more than 750 BCGers, real knowledge workers working on real tasks.

It was not something where you were just having, let's say, students. And no offense to any student.

Karim Lakhani: [Laughter]

Francois Candelon: But the fact that, yes, this is our day-to-day job, so that makes it different.

Karim Lakhani: Yes.

Francois Candelon: And what is true as well is that most of what people were working on was, let's say, the macroeconomic impact and the potential of these technologies. There is little that is said on how to implement it—

Karim Lakhani: In companies.

Francois Candelon: –and what is the impact on the relationship or the collaboration between humans and generative AI. So, I think this is why it resonated so much.

Michael Krigsman: Can you describe the methodology? How do you set up a scientific experiment with consultants and something like ChatGPT in a way that is reliable?

Karim Lakhani: One of the things that I've been doing at HBS (since I started my academic career) is to run field experiments. The notion is that when there are policy questions, strategic questions, you can go by intuition or you can run it as a proper A/B test.

My lab at the Laboratory for Innovation Science, we've done a range of experiments on online platforms, in-person meetings at the Harvard Medical School, at the Broad Institute, and so on and so forth. We took that apparatus and that experience for about 18 years to be able to run experiments inside of organizations to solve that problem. 

Here's what we did. First, we collaborated with Francois and his team to really understand what kinds of tasks knowledge workers – so, these are individual contributors in the consulting pyramid – do, and then designed two tasks that were representative of what the consultants did. Those tasks were then standardized.

One task was a creative problem-solving, brainstorming innovation task. The other one was a business analysis: read customer interviews, look at a spreadsheet, and make some decisions and recommendations.

Those tasks came about in us working together to say, "Are these really representative of knowledge workers?" 

Francois Candelon: Yes.

Karim Lakhani: And we wanted to, on the one hand, make sure that they represented what BCG consultants were doing, but also, more generally, what we see in the economy with knowledge workers – basically, the graduates from the top business schools and where they were going. That was the first task, to do that.

The second task was to then pretest these tasks. Make sure that people can do them and do them in the right amount of time, and so on and so forth.

Then we were able to (working with Francois) recruit 758 consultants. 

Francois Candelon: Yes.

Karim Lakhani: Nontrivial. The CEO of BCG sent an email.

Francois Candelon: Yes.

Karim Lakhani: Spammed the entire company, encouraging people to participate. Again, the idea was, let's be scientifically rigorous about this. 

We could all make predictions about this or that or find a little bit of data here or there. We wanted to make it a scientific study, just like a drug study for the FDA, right? You want treatment and control.

The CEO sent the message out. We were able to recruit 758 consultants. We made it incentive-compatible.

Francois Candelon: Mm-hmm.

Karim Lakhani: We offered incentives for it as well. Then we basically pretested everybody on all the tasks, and then we randomly assigned people either to a control group or to the treatment group where GPT was provided. 

One more thing. Again, this is the cooperation we got with BCG. We got GPT-4 access through APIs. Remember, this is April-May of 2023, last year.

Francois Candelon: Yeah.

Karim Lakhani: We got API access, but then we built; the BCG team built a Chat interface so that we could capture every single prompt being used as well and track all of that. And so, that was a big effort.

The last bit was that we had to evaluate all of these submissions that we were getting. Some evaluation was automated, but there was also a lot of human involvement in the evaluation.

Francois Candelon: Mm-hmm. 

Karim Lakhani: There again was an HBS team and a BCG consulting team that went ahead and evaluated all of the responses from the 758 consultants. That was the setup.

Lessons from the research into AI and knowledge workers

Michael Krigsman: Francois, can you tell us what did you learn? Can you give us an overview of the results? I mean 758 consultants, that's an extraordinary number.

Francois Candelon: It is, and I would say I have three big learnings. The first one is that there are tasks where the augmented human (or human + AI) is much better than humans. But there are some others where the augmented human is worse than humans. It's not just that it is not creating value; it is destroying value. 

What surprised me there is that these consultants who are trained to be strong critical thinkers were basically not able to differentiate whether it was creating or destroying value. This is the first key finding.

The second one is that when a task is within the jagged frontier – when AI is adding value and so the augmented human is better – basically, it is a great enhancer. Ninety percent of the consultants, 9-0, were benefiting from it.

But it is a great leveler as well, in the sense that the ones who benefited most were the consultants that were below average on the first task.

Karim Lakhani: I thought nobody was below average at BCG?

Francois Candelon: No, you have an average. You are above the global average.

Michael Krigsman: [Laughter]

Karim Lakhani: Okay. [Laughter]

Francois Candelon: But then you are below the average of the group.

Karim Lakhani: Of course. Yeah, yeah. By the way, full disclosure, I spent a few years at BCG myself. That's how I got to know—

I had not met Francois before, but our view was that only the best get into BCG, right?

Francois Candelon: Sure. 

Karim Lakhani: But there is a distribution.

Francois Candelon: There is a distribution even in the best. 

And the last thing that is important as well is that there is a kind of a tradeoff in terms of creativity. You improve the creativity.

The performance of the augmented humans was much better. However, it is at the expense of the diversity of the ideas of the group. 

And so, there is a kind of tradeoff, and we are currently working with Karim on trying to see how we can change that. It may be because very often people talk about we should start with AI or gen AI, and it will then get enhanced by humans. But maybe what needs to be done is to start with humans, to have the diversity of ideas, and then leverage AI to improve it. 

On creativity, one more thing, which is that 70% of the consultants – we interviewed each of them after they performed the task – 70% of them were saying, "Actually, we realize there is maybe a risk to have creativity muscle atrophy." 

I think that's a very important element for us and for companies as well to try to say, "Okay, what happens? What are the skills that I need to keep, to build, to increase, and to leverage to make sure that I have a great source of competitive advantage later on?"

Impact of generative AI on business performance

Michael Krigsman: Am I correct in hearing that this jagged technological frontier that you described indicates that, in some cases, generative AI, ChatGPT can significantly enhance productivity? However, when not used correctly, misunderstood – whatever it might be – you can have a significant decrease in productivity. Did I understand that correctly?

Francois Candelon: That's true, but I'm not sure I would use the word "productivity"—

Karim Lakhani: Yeah.

Francois Candelon: –which is very often related to, "Okay, I do things faster." Here it's more a question of quality of the performance. 

I'm performing better. Basically, the quality of my outcome was better. This is what happens with generative AI.

Karim Lakhani: Yeah. What we found is that, overall, there is a speedup. With these tools, people are faster and can do more. 

But the quality question is also important. How good is the work that's being done? 

To go back to your question about the jagged frontier, here's the thing. There are no user manuals for these generative AI tools. 

If you remember, when you'd get software, there'd always be a user manual. I remember getting the browser, and there was a user manual on how to use a browser (way back when). There are no user manuals for these tools because even the creators of these tools have been surprised by the capabilities demonstrated by these tools overall. 

For a consultant, imagine a consultant that does a stream of activities for the same level of difficulty. In some cases, the generative AI tool is great. It performs really well. In other cases, it performs poorly. That's what we mean.

The jagged frontier is that for the same difficulty level, some things are just easy and some things are actually wrong. You shouldn't be doing them with the AI either. That's what we discovered in our prework with the BCG consultants, and that's how we designed the study as well.

Francois Candelon: What I would add, as you are talking about this software manual, is the fact that this frontier is shifting over time.

Karim Lakhani: Yes.

Francois Candelon: So, you cannot say, "Oh, forever, this is within the frontier. This is a good task where you should use ChatGPT or generative AI, and for that one you should not." No, because, over time, you will see that this is shifting and, therefore, you always need to experiment and to revisit your belief.

Karim Lakhani: Yeah, 100%.

Michael Krigsman: This jagged frontier (which is, in a sense, the shifting boundary between being effective with ChatGPT, producing quality results), is this due to the nature of ChatGPT, the type of tasks or activities that were being requested, or did it have to do with the specific individuals?

Karim Lakhani: No, this is a feature of the technology because, again, as a randomized controlled trial, what we can now say is that basically when you give people a task that's outside the frontier and they are using the AI, then most likely their performance is going to drop compared to those humans of equivalent skill, equivalent background, equivalent job history, approximately, who don't have the AI. This is a feature of the technology itself.

Generative AI in business operations

Michael Krigsman: Can you give us some examples of when this technology is effective and when not? For so many of us, we're using ChatGPT, we're experimenting, and we're figuring all of this out on an ad-hoc basis as we go.

Francois Candelon: It is as of today (or even yesterday). 

Karim Lakhani: Yes.

Francois Candelon: It was very good at writing, brainstorming, these types of things. But where it was not good was, let's say, linking different sources of content.

For instance, summarizing interviews and, at the same time, relating the content of these interviews to some quantitative—

Karim Lakhani: Spreadsheet, yeah.

Francois Candelon: –spreadsheets and so on. It was not good at doing quantitative analysis. But again, as I'm saying, this is shifting over time. 

I don't think that I want to give people the feeling that we know which tasks will be the right ones. This is why companies should build an experimentation muscle: before they deploy generative AI at scale for a given use case, they should experiment first to make sure that the way this technology and their employees interact and collaborate is creating and not destroying value.

Karim Lakhani: Yeah. I think the way I also think about this, Michael, is building on what Francois said. The frontier is going to keep expanding and keep changing. By the way, different large language models will have different capabilities as well, so that's part of the complexity we now face today.

The challenge that we will face is that you can't take at face value what these systems can do. You have to learn how they apply in your work setting, in your particular domain, and make sure that the expected answers are the same as what you would approve of without these tools. Once you can figure that out, then you're off to the races.

That's why, as Francois was saying, companies need to be not just passive receivers of this technology from the providers but, in fact, active adopters, applying their critical thinking to this kind of technology.

The other thing I would say is that the lead author on this paper, Dr. Fabrizio Dell'Acqua (a post-doctoral fellow at our institute), has shown in his research that humans can fall asleep at the wheel with AI. When the AI is too good, they just take it for granted, and they fail to apply their critical thinking skills to what the AI is telling them. That's what we observed as well in our study.

There's a little bit of human behavior that we need to also unpack, and we need to make sure that we inoculate people against falling asleep at the wheel as well. That's a very important message that I think we want to convey to the audience that's out there.

Francois Candelon: Yes.

Michael Krigsman: Please subscribe to our newsletter and subscribe to the CXOTalk YouTube channel. Check out CXOTalk.com. We have incredible shows coming up.

We have an interesting comment from Simone Jo Moore on LinkedIn. It's outside the scope, I think, of what you studied, but I'd be interested in your opinion because it's thought-provoking. She says, "Are you concerned that ChatGPT and similar will spiral into eating their own data into oblivion as humans rely more on it rather than creating more human content?" In other words, that the ChatGPT will rely on other AI-generated content.

Karim Lakhani: From my perspective, we're in brand new territory. It's a great question – I think about it quite a bit – which is, how will the human-AI combination (which is the future for us) work together, and how do we think about the role of all this content that gets generated, because the marginal cost of creating content is going to zero?

Francois Candelon: Mm-hmm. 

Karim Lakhani: And the scale is infinite in many ways. You've just got to throw compute at it. 

That means that we can create as much synthetic data, as much synthetic content that we want. 

Francois Candelon: Mm-hmm.

Karim Lakhani: But is that going to be useful or is that going to be garbage? I think the question is TBD. 

Francois Candelon: Mm-hmm.

Karim Lakhani: I don't think we know yet because I don't think we as humans and as knowledge workers have even absorbed how to adapt these tools and do the new stuff.

Francois Candelon: Yes, but this is exactly... I fully agree with what you say. We are just scratching the surface. 

Remember, it's a new industrial revolution. Look at the time it took between electricity and the modern, standardized factory. It's going faster now, but we don't know.

What we know is that there are plenty of things that we don't know, and this is why we need to experiment. This is why we need to have guardrails and responsible AI and other things.

We need to look at it, but we need to embrace it because it will probably be a source of competitive advantage. The capability to adopt and to increase the rate of learning of the adoption of these technologies is very likely to be (in my opinion) the main source of competitive advantage for the decade to come.

Karim Lakhani: I fully agree, and not just a decade – for a while, I think. [Laughter]

Francois Candelon: Okay, but you know a decade today is, to us, infinite. 

Karim Lakhani: Yeah. Yes, yeah. Yes, yes, yes.

Augmenting human expertise with generative AI for business innovation

Michael Krigsman: We have another really interesting, thought-provoking question from Twitter. This is from Arsalan Khan. He's a regular listener. He always asks very interesting questions. He says, "What about hallucinations in the sense that AI can result in humans hallucinating that they are experts in anything just because they can ask questions to a generative AI?"

Karim Lakhani: The way I talk about this in my framing of this tool is first to go back to the browser. If you remember the browser 30 years ago [laughter], the browser got invented. There were 30 years of the Internet beforehand, and then we consumerized it. 

What the browser did is that it lowered the cost of information transmission. Anybody could have a webpage. Even the Oxford coffee pot could have a webpage. It lowered the cost of information transmission.

What is generative AI doing? My claim, and it's a strong claim – I think we have some evidence for it – is that it's lowering the cost of cognition. It's lowering the cost of problem-solving. It's lowering the cost of creativity.

The example I give is that I'm terrible at art. I'm a disaster at art. But through Midjourney, I can create beautiful pieces just by speaking it. By speaking it, I can do a lot more things in art, and I don't have to acquire and spend five years in art school to generate all of this great art.

For me, this drop in the cost of cognition is a really important thing. And so, the hallucination problem is different from what he's saying.

Francois Candelon: Yes.

Karim Lakhani: What he is saying is, "Will people be faking expertise?" I'm saying I think, again, it's a really good question. 

What I imagine is that you could get access to the world's expertise through these approaches. Then the question becomes, where do you apply your own judgment on it, and do you have the basis to apply your judgment on it? 

I think, again, this is where the education system that we're in is facing a big upheaval because we don't know yet how to bring this into the classroom in a proper way and then tell people how to use these tools in a systematic way.

Francois Candelon: Yeah. Yeah, but I think, on the first part of your answer, I partially disagree with you.

Karim Lakhani: Sure.

Francois Candelon: In a sense that—

Karim Lakhani: You're wrong, but anyway—

Francois Candelon: No. You know. Let's see.

Karim Lakhani: [Laughter] Yeah.

Michael Krigsman: [Laughter]

Francois Candelon: Let's see. Time will tell. 

Karim Lakhani: Yes, yes, yes.

Francois Candelon: Of course, you don't need to spend five years at school to do something, but what will the quality be of it? What will the quality be? 

This is a challenge because I think we are lacking imagination; we don't see yet all the possibilities. We are too focused on what we can already do without it.

Karim Lakhani: Yes.

Francois Candelon: As you were referring to the Internet times, remember, you have the class of '95. In 1999, the website of the New York Times was a PDF of the printed version.

Karim Lakhani: Yes.

Francois Candelon: I think it's too early. 

Karim Lakhani: No, I agree. 

Francois Candelon: I am trying to see whether, for instance, on data science—

Karim Lakhani: Yes.

Francois Candelon: –for data scientists, very often people say, "Oh, yes, but with gen AI, we don't need data scientists anymore." We might need them in a different way.

Karim Lakhani: Yes.

Francois Candelon: There will be new, but I still believe (and I don't have evidence, so it's more, let's say, an intuition) that there are things that today, without it, cannot be solved by data scientists; but augmented data scientists might solve them. 

Karim Lakhani: I agree. But also, again—

Francois Candelon: So, you see. You see I'm right. [Laughter]

Michael Krigsman: [Laughter]

Karim Lakhani: Meh... Let me put this in a frame. The way to think about this here in this setting is to say that a business person needs a data scientist and a data scientist needs a business person – oftentimes. There's a partnership.

The question is, how much of the work that the data scientist does can be augmented with a smart business person to drive the analysis? Then what does the data scientist do? 

That's the cost of cognition dropping. 

Francois Candelon: Yes.

Karim Lakhani: At the same time, how much of the work that the business person provides to the data scientists could the data scientists do by themselves as well? That's the interesting space we're going to find ourselves in.

The question is really going to be about how we think about judgment. How do I know that this answer is the right answer? Just because it's written well, that's not the right thing.

Francois Candelon: Yeah.

Karim Lakhani: The quality of writing is high, so how do I know it's the right answer? I think that's going to be where all the value is going to be created and the value is going to be destroyed.

Francois Candelon: Yeah, and this is where I would say, at the individual level, critical thinking becomes more important than ever. We should not forget that generative AI is a little bit like your right brain, about creativity and so on. That's great, but it's not reliable. 

If you want, it is as if you were talking to a friend who has read everything, remembers everything. But we all know that memory is not reliable. So, don't take it for granted. That's for sure. 

But at the same time, what we can say is that we have these hallucinations. Are they bad (at the end)? 

Karim Lakhani: Oh, yes.

Francois Candelon: I don't think so.

Karim Lakhani: I think it's a feature, not a bug.

Francois Candelon: It's a feature and, basically, it can create new ideas, new relationships, so it's a feature not a bug.

Advice on using generative AI in business processes

Michael Krigsman: Given your research, what advice do you have for folks in organizations, for knowledge workers, as far as using (in a practical way) ChatGPT and similar tools for maximum benefit and effectiveness?

Francois Candelon: For the individual, I believe you need to embrace it. You need to embrace it. You need to recognize as well that you will need to because, as we all know, humans are not replaced by AI, be it analytical or generative, but by humans using AI. This is one thing; you need to augment yourself.

But you need to remain flexible as well because, over time, your job will change, be it to have been reskilled, upskilled, trained, whatever. Make sure that you're not becoming lazy. I think we are back to that.

I think that for companies it's a little bit different, but especially coming back to the previous question. When you revisit your workflows, you need to make sure that there is enough time for quality control. If you are not having this critical thinking at the individual level and the quality control at a company level, you will be really in deep trouble.

Karim Lakhani: Yeah. Michael, if I may add a complementary view on this as well. For the individuals, I wrote a Substack on this a few months ago.

I was taken by a comment from a colleague of mine at Flagship Pioneering. He said Jobs had called the computer the bicycle for the mind. What the bicycle did was enable humans to go faster and further than by simply walking. Generative AI, in many ways, is a bicycle for the mind.

Here's the thing, though. Most of us learn how to ride bikes when we're kids, right? We can, in a few weeks, learn how to ride the bike and get going. 

Riding a bike as an adult is actually kind of difficult because it's embarrassing that you don't know how to ride a bike. You're going to fall down. You're going to get a concussion. You're going to get a scraped knee, scraped elbow, bloody everything. 

What we're finding is to use this bicycle for the mind for individuals, you've got to invest in the learning. You've got to be able to push yourself and keep learning these tools. 

These tools keep evolving, so the learning mandate is pretty high. As Francois said, you also want to keep applying your critical thinking to this. 

I think that's the first. On the individual level, the learning mandate is massive. I spent so much time on YouTube videos [laughter] even now to just be up to speed on what's going on in this space. 

For organizations, I agree with Francois, again, that this is critically important and it's going to be a source of competitive advantage. But my view is that the workflow will change. The processes will need to change. 

If we believe the numbers, and I do, that software developers get a 40% boost in productivity, or that we can create as many images as we want with DALL·E or Midjourney, then the marketing function will need to change. How we do marketing will need to change. How we do software development will need to change.

The process analysis needed to redo your organization will be very critical. That I think many companies aren't thinking about yet. 

Then I would add in a layer that you need to add a perspective on, "Oh, for this particular task, do we believe the outputs of generative AI or not?"

Measuring bias for generative AI in business transformation

Michael Krigsman: We have another really interesting question from Hue Hoang on LinkedIn. Get ready for this one. She wants to know what to consider when assigning meaningful values. It's a long question, so I'm trying to interpret this. 

When evaluating the diversity of ideas, how do you bring in the subject's cultural or environmental background when considering bias (which is really the key here)? Then she goes on. "With the assumption that a decision can be influenced through the human perspective based on their experiences and how it can be translated or transliterated, can we measure the potential bias which may impact the end result?" 

I think what she's asking is how do you measure bias given the number of variables in a person's background and their experience. A complex question.

Karim Lakhani: What did we find in our research? We have this unique capacity to see the same tasks being done by hundreds of consultants. Then we can look at the ideas—

Francois Candelon: From all over. From all around the world. All around the world.

Karim Lakhani: From all around the world, exactly. What we observed is that, while ChatGPT makes you smarter individually, collectively, the ideas of those who use ChatGPT sound similar.

We can measure that semantic diversity in those ideas. Technically, we have natural language processing tools to be able to measure the semantic diversity.
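The study's actual NLP tooling isn't shown here, so as a toy illustration of the idea, pairwise word-set (Jaccard) overlap can stand in for semantic similarity; a real pipeline would more likely use sentence embeddings and cosine similarity. The idea lists below are invented examples:

```python
from itertools import combinations

def jaccard_similarity(a: str, b: str) -> float:
    """Similarity of two ideas as the overlap of their word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def semantic_diversity(ideas: list[str]) -> float:
    """Diversity = 1 - mean pairwise similarity; higher means more varied ideas."""
    pairs = list(combinations(ideas, 2))
    mean_sim = sum(jaccard_similarity(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - mean_sim

varied = ["launch a soap subscription box",
          "partner with university bookstores",
          "sponsor a chemistry podcast"]
similar = ["launch a soap subscription box",
           "launch a soap subscription service",
           "launch a monthly soap subscription"]

# The homogenization finding: groups leaning on the same model
# produce idea pools that score lower on diversity.
assert semantic_diversity(varied) > semantic_diversity(similar)
```

The point of the sketch is only the shape of the measurement: compute a similarity for every pair of ideas, average it, and invert it, so a group whose ideas converge scores visibly lower.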

This becomes a concern to us, in the first place. The way I talk about it is that if two consumer products companies are going to be using Microsoft Azure generative AI services – which they probably will – and they're both launching a new soap campaign geared towards academics (soap for academics) and they both use ChatGPT, it will then create similar ideas for both of them.

One of the things we're working on heavily right now is to do two things: unpack the source of this homogenization in the ideas and, secondly, rerun all those prompts through other large language models as well. We're running them through Claude, through Llama, and through Gemini to see whether going to the other models gives similar results. Do they have the same problems or not? We'll be able to come back to you all with some findings in the next few months.

Here's a really important thing, which is, ChatGPT (and all of these large language models) have been trained off the Internet. If your culture, your point of view is not publicly available for training, then that won't be represented.

That's the worry that I hear her saying is that the training data is so critical. That's why the French (with Mistral) and Macron have said, "We need to make sure that French cultural activities are represented."

Francois Candelon: Yeah, that's true.

Karim Lakhani: Mistral is driving that, and the French government is driving that. But then others can also absorb it. 

This becomes a key national, cultural concern that these tools are very good. We could get trapped in Hollywood-type homogenization of our cultural artifacts—

Francois Candelon: Yes.

Karim Lakhani: –if we don't make sure that they are available for training. 

Francois Candelon: May I come back just for one second to your gen AI bicycle analogy?

Karim Lakhani: Yes. Did you like it or not?

Francois Candelon: I love it.

Karim Lakhani: Okay. Again, full credit, Armen Mkrtchyan at Flagship Pioneering reminded me of that, and we've been talking about it. 

Francois Candelon: But I think it is good not just because it helps you do better, what you are doing on that, but because it frees up time as well. Therefore, it allows you to move and develop all the things.

If I come back to our experiment, the thing that really surprised me most was how much our consultants were embracing this technology. They were feeling that they were augmented, not threatened. 

And they were telling us, "But you know, it's a great opportunity because there are many things I won't do anymore. I will then be able to focus and become a much deeper expert (for instance, in change management, as you mentioned)," because we can see that the rate of learning and adoption of these technologies is becoming a source of competitive advantage. Therefore, change management will become more important. So, it's an opportunity for consultants and so on.

You were talking about consulting is dead. Long live consulting. 

Karim Lakhani: [Laughter] You said it. I didn't.

Francois Candelon: You said consulting is dead. 

Karim Lakhani: I didn't say it publicly. [Laughter]

Francois Candelon: [Laughter] But I think that's very important, and it is true for every company.

Karim Lakhani: Yeah. Yeah, but can I say one thing? I think there's a heterogeneous response to this technology with knowledge workers. 

Francois Candelon: Yes.

Karim Lakhani: We'll go under the hood a bit, and I want to really acknowledge our full research team that has been working on this. We basically had Saran and Lisa from BCG working with us. Then on our side, we had Dr. Fabrizio Dell'Acqua, Professor Edward McFowland from HBS, Professor Kate Kellogg from Sloan at MIT, and Professor Hila Lifshitz from Warwick University.

Francois Candelon: And Ethan.

Karim Lakhani: And also, of course, our good friend Professor Ethan Mollick from Wharton as well. This made up the team, and we did this all collectively.

Here's the thing: when we first designed the study, it was going to be purely a quantitative exercise. We were just basically trying to understand these effects. We had all these numbers and all these measures, and that's what we were going to do.

In some prework and pretesting of the study, we had consultants go do the work without ChatGPT and then with ChatGPT. Then we did some interviews. Our team did some interviews.

Fabrizio reported back to me this one incident which really blew my mind, and I said, "Oh, shoot!" I said other French words, but I said, "Oh, shoot. We better be careful about this."

This consultant did the task. It took them two hours. Then this consultant did the task with ChatGPT. It took them 20 minutes. 

The reaction was, "This feels like junk food."

Francois Candelon: Mm-hmm.

Karim Lakhani: Empty calories. I was stunned, Michael. I was stunned.

I called up Francois. I said, "What the hell?" Here, I'm a techno-optimist, "This is the best thing ever. It's going to be incredible." 

Now these people, these very smart people, are saying, "Empty calories." What's going on?

That triggered us to bring Hila and Kate into our research group because they were experts at doing qualitative interviews and really understanding the dynamics of what's going on. Then we added a whole layer of interviews where more than 500 consultants got interviewed by our team (led by Kate and by Hila) to really understand what's going on.

Francois Candelon: Yeah.

Karim Lakhani: That's where we're going to have even more. I mean the study is just the start of a range of additional papers that'll come out.

Francois Candelon: Yes.

Karim Lakhani: This notion that there is a heterogeneous view. Some people are just like, "Shoot, my job is at risk. Shoot, I spent all of this time being taught. I went to the top in high school. I worked my butt off. I got in the top undergrads. I had good jobs as an undergrad. I'm in the top business school. Now I'm working at the top consulting company in the world. It takes me 2 hours to do this thing. In 20 minutes, are you kidding me?"

Francois Candelon: Yeah. Yeah, but this is where there is a question about what can you do with this time that is freed.

Karim Lakhani: Yes. Yes.

Francois Candelon: But it's true. When I say I was really amazed by the way they embraced it, that is maybe not true for every job.

Karim Lakhani: Yes.

Francois Candelon: But for this, I would say high-skill workers, knowledge workers, then they see it as an opportunity to refocus on something maybe more important.

Karim Lakhani: Yes. Yes. 

Michael Krigsman: Greg Walters on LinkedIn says, "When will AI move beyond requiring historical data?" He says, "When will AI become so predictive it will not need static, past data? The models being so optimized that they have outrun the need for 'old' data."

Francois Candelon: I don't know.

Karim Lakhani: Yeah, same here. I don't know.

Francois Candelon: Point one. Point two: you know, even we humans work with and deal with old data. Basically, when we make connections—

I would like to distinguish between generative AI (your right brain) and analytical AI (your left brain). These are different things. At some point, there might be a convergence. Then you will have a right and a left brain – whatever it means.

Karim Lakhani: Yes.

Francois Candelon: But I think it's very... We make connections, so I think the transformers – and this is why I like, basically, hallucinations—

Karim Lakhani: Yes.

Francois Candelon: –as a feature because even when we want to be creative, we are very often connecting things that were not connected before. This is what the transformer architecture does. 

I don't think that relying on past data prevents you from being creative. Of course, you might not be able to create, let's say, quantum mechanics. But even as humans, we don't often create quantum mechanics. 

Karim Lakhani: Yes.

Francois Candelon: I think that for the traditional level of creativity, a lot can be done leveraging this past data, so gen AI doesn't have to get rid of it to really reach out to the next level.

Karim Lakhani: Yes.

Generative AI and outlier outcomes for decision-making

Michael Krigsman: I'm going to jump the line to Wes Andrews on Twitter who asks a question, "Will generative AI only make it harder to consider outlier outcomes?"

Karim Lakhani: I would say no because one of the things in my own usage of generative AI is to throw it scenarios that seem improbable and get it to explain things or do things. And so, I would say that here's the thing that I've encountered in my use of it, and I use it a lot. 

Unlike my human RAs, the gen AI doesn't get tired. It doesn't complain to me that I'm asking too many questions or that I changed my mind. I can keep going back and back and back and back.

I don't think reasoning is even the right thing to say. The ability for you to think with it and get yourself to imagine improbable scenarios and have it game them out with you is quite interesting. Again, this is where, like, how do we develop skills in using this tool which never gets tired, which can draw connections and find patterns in ways that most humans can't?

The question then becomes, how am I going to ask it to imagine scenarios that are not possible, and will I even pay attention when those come up? In fact, I think the creative, hallucinatory scenario-generating capacity is infinite in many ways. Then it's a human limitation to absorb it and say, "What's the likelihood of these things happening?"

Francois Candelon: Yes, and I agree with you that this is a human factor that is a limiting factor because, at the moment, it's still quite expensive to use.

Karim Lakhani: Yes.

Francois Candelon: But with the compute power—

Karim Lakhani: $20 a month.

Francois Candelon: You know for a company it's expensive, if you do it for all your employees. 

Karim Lakhani: $20 a month.

Francois Candelon: Yeah, for all your employees. Yes, because... I will come back to that.

Karim Lakhani: [Laughter]

Francois Candelon: With the compute power that will explode, continue to explode over the next few years—

Karim Lakhani: Yeah.

Francois Candelon: –probably the cost will drop dramatically.

Karim Lakhani: Yes.

Francois Candelon: So, it won't be $20. I think that's very important because then you could run as many scenarios. You could be in a wargame play all the time. 

Karim Lakhani: Yes.

Francois Candelon: Therefore, then our ability to absorb and decide will be the limiting factor.

Karim Lakhani: Yeah.

Francois Candelon: It is true for the individual in that case. It is true for a company at all.

Karim Lakhani: Yes. Yeah.

Styles of AI-enabled human-machine collaboration

Michael Krigsman: Tell us how centaurs and cyborgs fit into your generative AI research.

Francois Candelon: Because, as you said, we were able to look at the way people were prompting and working, we found that there were two different categories. You had what we called the centaurs, because they divide the work with AI, and the cyborgs, consultants who co-work with AI.

What does it mean? It means that for the centaurs, basically, they were using AI for sub-tasks (mostly writing, brainstorming), and the cyborgs who were using it for everything (including, on top of these two, framing, qualitative analysis, quantitative analysis, recommendations, and so on).

At the moment – and I say, "At the moment," given the current jagged frontier – the centaurs had better results because they were using gen AI with sub-tasks from within the frontier, so they were not making the same mistakes as the cyborgs. But what will happen over time, we don't know. 

What it says is that, for a company, you absolutely need to understand how people will act and collaborate with AI because you will have different types of collaborations. 

Karim Lakhani: Yeah.

Francois Candelon: If I may summarize—

Karim Lakhani: Yeah, absolutely. I just want to add one more thing, which is, I remember the moment when we discovered those two categories, and the names actually came from our good colleague Ethan Mollick. 

He's the one who said, "Oh, this looks like a centaur-type behavior. This looks like a cyborg-type behavior."

Both Ethan and I are big science fiction and mythology fans. [Laughter]

Francois Candelon: Yeah.

Karim Lakhani: And so, we were very happy to bring in... Yeah.

Francois Candelon: Because to know exactly how a centaur behaves—

Karim Lakhani: Yes. [Laughter] 

Francois Candelon: [Laughter] –is really—

Karim Lakhani: You need to know a little bit of fiction. 

Again, this was a great collaboration. I remember I was in my car on a conference call with the team. All of us were trying to make sense of this. It was a hot summer day, and that's when we sort of homed in on the typology and the categories.

Michael Krigsman: Would it therefore be accurate to say that the cyborgs were more willing to take risks and experiment?

Karim Lakhani: No.

Francois Candelon: No.

Karim Lakhani: No. No. No. No.

Francois Candelon: No. No, not at all. I would say almost the other way around because, basically, centaurs were delegating some tasks, sub-tasks, to AI. But I think it's more about the way you behave, and we'll see different types of augmented humans.

Karim Lakhani: Yes. In fact, it could also depend on the tasks. It could also be task-dependent. Some tasks, you may act like a cyborg. In other tasks, you may act like a centaur. 

It may not just be typologies of people. It may be task-dependent as well. 

Francois Candelon: But as said, we are just at the beginning. 

Karim Lakhani: Yes, 100%. What's incredible about the study – again, this is a true partnership – is that we could build systems that kept track of every single prompt and every single reply from the consultants, and then we could go back and reconstruct these types of behaviors. That was, again, this incredible partnership that we had with BCG and our academic team.

Generative AI for software development

Michael Krigsman: On X, formerly known as Twitter – I feel like we're talking about Prince here – Domenic Ravita says, "Based on this study, what new questions and possibilities do you see for how software development is done? Could this technology help mitigate the impact of the growing software engineer shortage?"

Francois Candelon: Yes for two reasons. The first one, because we see that there is a productivity opportunity because instead of taking 2 hours, it takes 10, 20 minutes in software development. 

Andrej Karpathy says that it is not developing or coding anymore; it is prompting and testing. I think that's the first part.

The second part is, because of what we were talking about earlier on, the ability of the augmented human to deal with tasks they were not originally prepared for. It can drastically increase the talent pool because a series of tasks will be performed by people who do not have a software engineering diploma.

Karim Lakhani: Yeah. There was a team out of China that, as a demonstration project, showed how you could basically build all these agents to go in and be the customer's voice, be the product management. 

Francois Candelon: Yep.

Karim Lakhani: Be the developer, be the tester, be the deployment person, and all that. They showed that what would take you two weeks to get done with lots of humans you could get done in ten minutes with generative AI.

That was just a proof of concept. There are lots of details that need to be figured out.
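A minimal, hypothetical sketch of the role-based agent pipeline Karim describes: each role takes the previous role's artifact and hands off its own. The `call_llm` stub stands in for a real chat-completion API; the role names follow his description.

```python
def call_llm(role: str, task: str) -> str:
    """Hypothetical stand-in for a chat-completion call.

    In a real system this would prompt a generative AI model with a
    role-specific system prompt; here it just echoes the handoff so the
    pipeline's shape is visible.
    """
    return f"[{role}] output for: {task}"

# The roles Karim lists: customer's voice, product management,
# developer, tester, deployment.
ROLES = ["customer voice", "product manager", "developer", "tester", "deployer"]

def run_pipeline(requirement: str) -> list[str]:
    """Pass the artifact through each role in turn, like the demo's agents."""
    artifact, transcript = requirement, []
    for role in ROLES:
        artifact = call_llm(role, artifact)  # each agent builds on the last
        transcript.append(artifact)
    return transcript

log = run_pipeline("build a to-do list app")
assert len(log) == len(ROLES) and log[-1].startswith("[deployer]")
```

The design choice the demo illustrates is sequential role hand-off: the same underlying model plays every role, and the "organization" lives entirely in the prompts and the order of the chain.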

Francois Candelon: Yeah.

Karim Lakhani: But I would agree that I think the demand for software development is infinite. It truly is. And there aren't enough software developers in the world. 

Just as the consultant's job is going to change, the software developer's job is also going to change dramatically. And the software development process is going to change dramatically.

Francois Candelon: As you were mentioning agents, it will change permanently. I love to quote Leon Trotsky to say that we are entering a permanent revolution era. 

Karim Lakhani: Yeah.

Francois Candelon: Yesterday, we had LLMs. Now we have LMMs—

Karim Lakhani: Multi-models, yeah.

Francois Candelon: –with multi-models. Tomorrow, we will have autonomous agents. So, I think that we need to understand that it will dramatically change over time.

Karim Lakhani: Yes.

Francois Candelon: So, even in a short period of time.

I was discussing with ... [indiscernible, 00:49:09] let's say a few months back where he was telling me, "Okay, autonomous agents. Okay, there are still things to be done. It's not just by growing compute power or the number of parameters, but it is likely to happen before the end of the decade."

Karim Lakhani: Yes. 

Quantifying the business value of generative AI

Michael Krigsman: Arsalan Khan comes back on Twitter, X, and he has an economic question. He says, "Typically, in organizations, humans are either—" This is a great question. He says, "Humans are either full-time or part-time. What financial value do you assign to an AI that replaces humans or even augments humans? Is AI a full-time equivalent? How do you budget for AI?"

Francois Candelon: More and more, AI will become a kind of colleague because, of course, every individual will get augmented individually. But it's true that when you work as one team – and this is something we will need to work on because it's very difficult to measure – you can imagine that you would have AI as a facilitator, a moderator, an expert, whatever.

I think it was last year; Google spent more on computational power than on HR. 

Karim Lakhani: I didn't know that. Wow.

Francois Candelon: Yeah.

Karim Lakhani: Wow.

Francois Candelon: I got it maybe from Azeem Azhar.

Karim Lakhani: Yes.

Francois Candelon: By the way, Azeem is really a fantastic person.

I think that we need to realize that, yes, you will have to deal with that. But again, I think that, for companies, the way to give a valuation is really to think about gen AI, how you can solve real business questions. 

Very often when I'm talking to companies, they say, "Oh, what can I do with gen AI?" That's not the right way to think about it.

If you want to be able to bring a valuation, you need to say, "Okay, how can I solve a business question leveraging the power of gen AI?" It can be by reducing the cost of a given function and reshaping it, as you said, with the processes (customer service, marketing, whatever). Or it is by creating new products.

As a bit of an advertisement, you should look at L'Oreal's Genius, which is their new virtual beauty advisor that was presented this week at the CES in Las Vegas. 

I think these are things that are really important. Solve real business questions if you want to have a real impact. Then you will be able to bring value, to put value on it. Otherwise, you will have an impact, but it will be diffuse productivity and, therefore, a negative impact on your P&L.

Training generative AI to take business risk

Michael Krigsman: We have a question, another really interesting one, from Greg Walters on LinkedIn. This is related to the outlier question earlier. He says, "Can gen AI be trained to take chances, to make and provide 'risky' decisions and answers?"

Karim Lakhani: I think our colleague Edward McFowland has been thinking about this from a very statistical perspective. If you think about what's going on – and I'm going to butcher this perspective. Edward will provide a much better explanation.

It's trained off the Internet, right? Then it has to give you a statistical answer. The way LLMs work is by computing the statistical probability of the next word. That's what it is doing. That's all it is doing.

Then, if it's a statistics-based system that is going to give you the average answer, the open question is, "Can you get it to give you the two-sigma, the three-sigma, the six-sigma answer as well?" I believe the answer is yes, but I haven't seen any significant studies where we have, for the same type of task, prompted the AI to give us outliers. There's no technical reason why you can't sample from the distribution further away from the mean, but we'll see if, in practice, we can actually pull that off or not.
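One concrete knob for sampling away from the "average answer" is the temperature parameter most LLM APIs expose. This toy sketch (invented logits, not a real model) shows the mechanism: dividing logits by a higher temperature flattens the softmax distribution, so low-probability "outlier" tokens get sampled far more often.

```python
import math
import random

def sample_with_temperature(logits: list[float], temperature: float,
                            rng: random.Random) -> int:
    """Sample an index from softmax(logits / temperature).

    Low temperature sharpens the distribution toward the modal (average)
    answer; high temperature flattens it toward the outliers.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, cum = rng.random(), 0.0                   # inverse-CDF sampling
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

rng = random.Random(0)
logits = [5.0, 1.0, 0.0]   # token 0 is the high-probability "average" answer
N = 10_000
low = sum(sample_with_temperature(logits, 0.5, rng) != 0 for _ in range(N))
high = sum(sample_with_temperature(logits, 3.0, rng) != 0 for _ in range(N))
assert high > low   # hotter sampling picks non-modal answers far more often
```

Whether flattening the distribution yields *useful* outliers, rather than just noise, is exactly the empirical question Karim leaves open.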

Michael Krigsman: Did this research change how you or your teams or you personally use these tools like ChatGPT?

Karim Lakhani: I've become a cyborg. [Laughter] I'm very much—

On my massive monitor, I always have one window open with three tools. I use ChatGPT, I use Perplexity, and I have Poe up and running.

And so, I am using it as a companion all the time, and also even in my smartphone. I've got Poe and ChatGPT, and so I'm continuously using it all the time. And my usage has increased since we started the study. 

Francois Candelon: On our side, I would say at the team level, we are deploying it as much as we can just because we want to better understand the second-, third-, and fourth-order effects.

Karim Lakhani: Yes.

Francois Candelon: On the one hand and because it was part of our discussions as well. I think that these AI transformations, more than any other transformations, are critical and impacting our professional identity. This is something that is important and that we need to deal with because it will be critical in the change management in companies.

The other thing this study impacted in my team is that we really reinforced our research on change management.

Karim Lakhani: Yes. If I can add one more thing – I've been out talking a lot about this – what I've been surprised by (contrary to Francois's assessment of how the BCG consultants looked at that) is that a lot of people are worried. Many people are worried about job loss and de-skilling and so forth. And that has made me aware.

We can't take for granted that acceptance of these solutions is going to be easy. That's why the change management story is so important. It's really personal change management, identity.

Francois Candelon: Yes.

Karim Lakhani: Like, what are your skills now given that these tools are so powerful?

Francois Candelon: You can say that, in a company with analytical AI, contributors don't need managers because AI will tell them what to do.

Karim Lakhani: Yes.

Francois Candelon: Managers with generative AI don't need contributors because basically, they will have all the things that are done for creativity.

Karim Lakhani: Yes. Yeah.

Francois Candelon: I think that we need to invent what will happen. 

Karim Lakhani: Yeah.

Francois Candelon: But it is a new world. 

Karim Lakhani: Yeah.

Francois Candelon: It's an industrial revolution.

Karim Lakhani: Yes.

Francois Candelon: Even I was discussing with Professor Sinan Aral—

Karim Lakhani: Yes.

Francois Candelon: –from MIT. He was saying, "No, no. It's not an industrial revolution. It is a revolution. We had agriculture, industry, and now this is a third."

Karim Lakhani: Yes. 

Francois Candelon: I don't know if he is right, but for sure it's big.

Karim Lakhani: Sinan is a pretty smart guy.

Francois Candelon: Yeah, he is.

Karim Lakhani: Yeah. [Laughter] 

Bias and generative AI

Michael Krigsman: On LinkedIn, Benjamin Huynh asks a very serious question, comment, but we don't have much time to answer it. He says, "Generative AI is basically a tool that supports repeated tasks with a known, existing data set."

Karim Lakhani: No. That is incorrect.

Michael Krigsman: Okay. He goes on to say, "Based on that assumption, which we already know you do not agree with (that is wrong)," he says, "any prediction that AI generates on this limited data set is therefore at high risk of errors, not biased?" I can tell this is wrong, but anyway. "Bias is something where you know right and wrong and consciously take risks contradicting the existing fact, which then becomes an ethical issue." I think there's a language thing. Obviously, he's not considering the bias built into the data. But Karim, go ahead.

Karim Lakhani: I just don't think it's the right portrayal of the tool and what it does. I would encourage him to actually use the tool in his workflow, personal as well as professional, to see what it can do.

Francois Candelon: What is true – and we touched on it – is that given 70% of the Internet is written in English, there is a real risk in terms of biases and in terms of social power. 

Karim Lakhani: Yeah, 100%. That is a national issue. 

Francois Candelon: It's a national issue.

Karim Lakhani: That's why France, Macron—

Francois Candelon: Yeah, yeah. I agree. I agree. 

Karim Lakhani: Dubai, the UAE, India, everybody is freaking out and saying, "We better make sure that we can build our own homegrown systems," and you should.

In fact, I have to tell you this. At HBS, we had the same issue. We write all of these cases. We have a publishing arm that copyrights everything. 

The question for us is, "Should we let these models train off of our information and our knowledge?" There's no agreement in the school. Our policy is that we don't upload our documents into these systems.

My view is, of course we should, because my research is going to get lost in these generative AI tools if they haven't trained off of my work.

Francois Candelon: Yes. Yes. 

Karim Lakhani: Personally, I'm worried sick that we're going to make the wrong choices about this.

Francois Candelon: Yes, clearly. I'm on your side. 

Karim Lakhani: Yeah.

Michael Krigsman: Also, it's not the technology that is making a biased decision. It's the bias that is inherent, built into the underlying data.

Advice for business leaders on investing in AI

Can I ask each one of you in turn to share your thoughts or advice for business leaders on the use and the adoption of these technologies in their organizations? Francois, do you want to go first maybe?

Francois Candelon: Today, many people are a little bit worried about, "Should I do it? Should I wait for the next generation?" 

You need to start now. As I said earlier, the main source of competitive advantage moving forward will be your ability to adopt these technologies and unleash their power.

The clock speed of these technologies is much faster than the clock speed of the organizations to adopt them. So, if you are able to adopt them a bit faster, then you will have a great competitive advantage. So, go for it. Don't wait.

Karim Lakhani: I totally agree. I felt that, a year ago, as these tools became generally available, many companies were misguided in banning these tools and saying this is all terrible and bad. 

I don't know if anybody has done any studies to see if people have updated their policies or not. 

Francois Candelon: They are updated.

Karim Lakhani: Have they stopped the bans?

Francois Candelon: Yes, they have stopped, especially because of shadow AI.

Karim Lakhani: Yes.

Francois Candelon: Basically, it was banned but people were using it.

Karim Lakhani: Yeah, exactly. Exactly. And so, my view is that, for leaders especially, it is a big learning mandate.

Francois Candelon: Yeah.

Karim Lakhani: You have to know how these systems work and use them yourself in order to be able to help your organization. You can't delegate this to a junior person or to somebody else in your team. 

I really think, and I tell this to boards, I tell this to the C-suite, I say, "You have to invest in learning it yourself. This is a bicycle for the mind. You have to learn to ride the bike. You can't just vicariously watch a movie about it and say, 'I understand it.'" 

You don't understand balance until you learn to ride a bike. The same thing here as well.

Michael Krigsman: Okay. With that, a huge thank you to our guests, Karim Lakhani from the Harvard Business School and Francois Candelon from Boston Consulting Group. Thank you both so much for taking the time to be with us today. I really, really appreciate it.

Karim Lakhani: Thank you. It was a lot of fun.

Francois Candelon: It was. 

Michael Krigsman: To everybody in the audience, thank you for watching. You guys are amazing. 

Before you go, please subscribe to our newsletter, and subscribe to the CXOTalk YouTube channel. Check out CXOTalk.com. We have incredible shows coming up. 

Everybody, we'll see you again next time. Have a great day.

Published Date: Jan 12, 2024

Author: Michael Krigsman

Episode ID: 820