McKinsey: CEO Guidance for Enterprise AI
In CXOTalk episode 851, Kurt Strovink, senior partner at McKinsey & Company, discusses the critical role of leadership in navigating the AI revolution. As co-leader of McKinsey's global CEO services and a member of the firm's board of directors, Strovink brings a wealth of experience to the conversation about how CEOs and other senior leaders can effectively integrate AI into their organizations.
Strovink explores the challenges leaders face in balancing AI's potential with its risks, emphasizing the need for a strategic approach beyond individual use cases. He shares practical advice on fostering collaboration between business and technology teams, addressing ethical concerns, and cultivating the adaptive leadership skills necessary in this rapidly evolving landscape. Drawing from his new book, The Journey of Leadership: How CEOs Learn to Lead from the Inside Out, Strovink also offers insights into the personal development required for leaders to guide their organizations through AI transformation successfully.
Episode Highlights
Navigate AI's Ethical Challenges Proactively
- Develop a comprehensive risk framework that addresses bias, privacy, and job displacement concerns before implementing AI solutions.
- Create a balanced approach that considers AI's potential benefits and ethical risks, involving legal, risk management, and technology teams in the decision-making process.
Foster Business-Technology Collaboration
- Encourage technology leaders to become more business-savvy and business leaders to increase their tech fluency to drive effective AI implementation.
- Create cross-functional teams that combine business and technology expertise to develop AI strategies aligned with company goals.
Cultivate Adaptive Leadership for AI Integration
- Embrace a learning mindset that acknowledges the rapid pace of AI evolution and the need for continuous adaptation.
- Model openness to iterative learning and empower teams to take calculated risks in AI implementation.
Balance Short-Term Hype with Long-Term Value
- Look beyond immediate AI use cases to identify domains where multiple AI applications can drive significant business outcomes.
- Develop a multi-year AI roadmap that considers quick wins and long-term transformational opportunities.
Build Trust Through Effective Communication
- Simplify complex AI concepts for non-technical stakeholders, focusing on business value and practical applications.
- Create "journeys of discovery" for senior leaders to experience AI's potential firsthand rather than relying solely on presentations and reports.
Key Takeaways
AI Strategy Requires CEO Calibration
CEOs should act as “chief calibration officers” in AI strategy. This involves balancing the need for rapid innovation with careful risk management. Leaders should set clear parameters for AI initiatives, prioritize high-value domains over individual use cases, and foster a culture of learning and adaptation as AI technologies evolve.
Bridge the Business-Technology Divide
Successful AI implementation demands close collaboration between business and technology teams. CEOs should encourage tech leaders to become more business-savvy and business leaders to increase their tech fluency. Creating cross-functional teams and ensuring technology strategies align closely with overall business goals are crucial to bridging this divide.
Navigate AI's Ethical Maze
Addressing AI's ethical implications is a critical leadership challenge. CEOs should develop comprehensive frameworks considering potential biases, privacy concerns, and workforce impacts. This involves bringing together diverse perspectives from legal, risk management, and technology teams to make balanced decisions that maximize AI's benefits while mitigating risks.
Episode Participants
Kurt Strovink co-leads McKinsey’s global CEO services. He has previously led the firm’s global work in the insurance sector, the Strategy & Corporate Finance Practice in the Americas, the Global Client Council serving priority clients of the firm, and the New York Office. He is a senior partner in the firm’s financial institutions group and has led major McKinsey governance efforts, including reshaping senior partners’ evaluation and development. Kurt serves life- and wealth-management, property and casualty insurance, health, and asset-management groups on strategy, organization, operations, and transformation. He works with CEOs, CFOs, chief strategy officers, and boards on strategy formulation and execution. He has expertise on strategy across industries, corporate governance, leadership in talent-rich organizations, the future of hybrid work, CEO transitions, and leveraging the role of the CEO as a catalyst for transformation. Over the past ten years, Kurt has led pro bono efforts in education, including work with Teach for America, where he was formerly on the national board. He is a member of the board of trustees and the executive committee of Carnegie Hall.
Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator known for his deep expertise in digital transformation, innovation, and leadership. He has presented at industry events worldwide and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.
Transcript
Michael Krigsman: Welcome to CXOTalk, episode 851. I'm Michael Krigsman, and we are discussing leadership, strategy, and AI. Our guest is Kurt Strovink, senior partner at McKinsey & Company, where he's head of the firm's global CEO services and a member of their board of directors.
Kurt, tell us about your work at McKinsey. You also have a new book coming out.
Kurt Strovink: Our group's work has focused on helping CEOs achieve their full potential in the role. We think that's one of the biggest opportunities for global business.
The book that's coming out is called "The Journey of Leadership: How CEOs Learn to Lead from the Inside Out". It builds on some earlier work we did around CEO excellence, where we examined what excellence looks like for high-performing CEOs who have delivered significant value in terms of total shareholder return and other metrics.
The thesis of the book, in a nutshell, is that learning to lead others often involves first learning how to lead yourself. There's a link between leading oneself and leading others. The book explores the relationship between these two sides of leadership.
Michael Krigsman: What I found particularly interesting about your book is the focus on what we might call soft attributes - being self-aware, mindful, and collaborative. Of course, these are important personal and professional attributes for anybody. But you focus on this for CEOs.
Kurt Strovink: As you point out, Michael, it's kind of counterintuitive because you might think the senior-most integrating officer of a company is always in charge, with a strong form of clear leadership. But we found that some of the best ones show traits of humility, vulnerability, and learning to lead through others.
These softer skills are really important. Sometimes, when people struggle with these skills, it's traced to how they govern themselves—how they react to stress, ambiguity, or complexity. We found a link between these aspects as we examined hundreds of CEOs.
It's strange because we're coming out of an imperial CEO era, where CEOs were seen as all-knowing and all-powerful. I think we're definitely beyond that era now. The expectations of CEOs with multiple stakeholders are different. That's part of why these softer skills are much more natural to put in scope.
People sometimes don't recognize the true value of the softer side. We've found that CEOs who operate at the top of their game often treat the soft stuff as disciplined matters, just like they treat the hard stuff of numbers. They manage both sides and don't see it as soft. They see it as highly important for culture, leadership development, and those things.
There are probably several reasons why the softer skills historically weren't as focused on, but I think they've now come back into scope. Given the rapid changes in the world, I think the requirement for adaptability, change, and agility in organizations and in leaders has gone way up.
Michael Krigsman: I can certainly say that, having interviewed hundreds of senior leaders from the largest companies on CXOTalk, one of the surprising aspects is the humility and humanness of the folks who I think are the best leaders I've interviewed.
Kurt Strovink: I find that too. It might be that CEOs stand at an intersection where they get to see the effects of different leadership forms, and they learn over time. You'll often see this during somebody's CEO tenure: initially, they might still be using the operating model from their previous role, and over time you'll see them evolve how they lead and learn.
But you're right about humility, especially if it can be done decisively, creating an open space for learning and continuous improvement. Those kinds of skills go a long way toward making organizations more agile, bold, willing to take risks, and able to learn how to respond.
Michael Krigsman: When we talk about leadership, disruption is a crucial part of that. Right now, we're facing this disruption caused by AI. From a leadership and strategy standpoint, how do you speak with CEOs and boards of directors relating to AI?
Kurt Strovink: There are a number of thoughts on people's minds. One is trying to assess the full potential of what AI can do for their businesses. This involves looking at the businesses they're in and the use cases that may be most accretive, trying to isolate and identify those. So there's a question of where the opportunity is greatest.
Strategically, there's also a question of whether to be a fast follower or to lead on this. Sometimes people think in even broader terms - do I want to be a taker, a maker, or a shaper? Where in this continuum do I think benefits my business, shareholders, and employees most? So there's a calibration question.
CEOs often deal with calibration questions. How much should we change? Is the extent of our change enough to matter? Those kinds of things are typically CEO-like considerations.
There are also questions around how to actually industrialize AI. Everyone has their use cases, but how do you actually scale something? Often there's a focus on which domains - larger areas and processes - might create greater business value if enhanced with AI. So that's another question - how to get the aggregation right so you're not just doing individual use cases or pilots, but trying to actually scale and industrialize.
In fact, we have a part of our QuantumBlack group, called Iguazio, that focuses just on that industrialization of use cases, to do work in these domains.
Another key consideration is risk. Boards, management teams, and CEOs are thinking about the risks of AI - unfettered access to it, frontline use of generative AI. Is that okay? Should we ban it? Should we limit it? What areas are most appropriate for us to move into now? Which ones might we prefer to wait on until there's more learning and experience?
Typically, people will insulate aspects of the process that involve customers directly until they have proven, deployable solutions, starting instead with employee training or other internal areas.
So it's a full range of considerations, Michael, from strategic opportunities and stance to risks. And trying to calibrate what's real versus hype. I'm sure we'll talk more about that too.
Michael Krigsman: We have an interesting question from LinkedIn from Michelle Clark about one aspect of the risks you were just describing, Kurt. She asks: Where should a CEO draw the line in putting AI into operation when the data is overwhelming? Particularly in healthcare, she notes that the data used to train the models is biased, even "criminally biased" as she puts it. But let's make it broader - how do the CEOs you speak with calibrate those distinctions of bias? It's not just a switch you flip between biased and unbiased. There are gray areas.
Kurt Strovink: It's an issue that many CEOs I work with grapple with. Obviously, it varies by context, but I think the ever-present reality that "we could create bias with some of the tools we're using" is really important.
What I see is a focus on being aware of it and mindful of it. Front-loading a risk framework that takes this into account from the ground up, rather than as an afterthought, is super important.
Sometimes looking outside the organization for independent validation of approaches, so it's not just an internal decision, can be valuable. But I think it's a critical area that many companies and management teams are still learning about and working through. It's an ongoing challenge, but a really important one to address.
It's not just because of potential regulatory concerns. I think leaders are genuinely concerned about these issues themselves.
There are other related risks as well. One fascinating question many thoughtful CEOs and management teams are considering is: As we use more generative AI in our companies, how do we ensure we're still developing people who previously would have done tasks themselves, rather than relying on AI? Are we at risk of creating a generation that hasn't really learned and been apprenticed in the same ways as previous generations, because they've become so equipped with AI tools?
That's a fascinating question for future leadership development. It's a sort of second-order effect to consider. How do you both leverage AI, but also continue developing your human capital over time? How do you ensure people are gaining the skills and experience at each stage so they can become senior leaders with enough breadth and pattern recognition?
So while it's not a regulatory risk per se, it is an important long-term human capital consideration that's quite strategic.
Michael Krigsman: Please subscribe to our newsletter and YouTube channel.
Kurt, as you're describing these various issues, the level of complexity is astounding - how intricate it all is and how many directions the implications and tentacles go. How are CEOs and boards parsing all of this? In some cases, there are conflicting goals as well.
Kurt Strovink: It's definitely a space where just a show of hands or democratic process is difficult. From what I've seen, Michael, people try to set parameters around this.
Clearly, there's a question of where to start. Where is the business value highest and the risks we discussed earlier lower or more manageable? So I think a clear heat map of opportunities in business terms, relative to risks, is one way to parse it and begin.
Another important consideration that's sometimes overlooked is: What's the story of how you imagine AI playing a role in your company over time? No one will have that fully fleshed out, but I think it's crucial to consider the purpose, meaning, and underlying message.
Everyone is uncertain - will this replace my job? Will it augment me? Are we not sure yet? What's our thesis, and how does it relate to our overall strategy?
I think CEOs and boards that get ahead of that curve, perhaps with their senior teams leading, will be much better prepared. That's another way to parse it. I'd suggest doing that early in the process, as it tends to be the kind of thing that gets incrementally pushed back. Then you risk antibodies developing in the organization because people aren't sure what it means for them.
So that's another key aspect - frontloading meaning, purpose, and articulation of the "why" behind AI initiatives.
A third way to parse this is by considering the multi-year roadmap. How do you graduate to other applications over time? Take a longer view beyond just early use cases. If you look out 6, 12, 18 months, where could you be? What does that roadmap look like?
Finally, I think it's important to identify risks you absolutely want to avoid from the start. How do you parameterize those and isolate them as areas to steer clear of?
And related to that, how do you create optionality? With so many providers promising a lot and evolving quickly themselves, how do you maintain flexibility and avoid becoming overly dependent on any one area?
So those are some ways to parse it - identifying high-value, lower-risk opportunities; articulating the longer-term vision and purpose; mapping out the multi-year journey; defining no-go areas; and preserving optionality.
But as you point out, Michael, it is a complicated set of considerations. The length of my answer probably reveals that complexity.
Michael Krigsman: To what extent do you see CEOs actually getting involved in the details of this list that includes so many different pieces? And to what extent is it simply delegated down for somebody else to handle and manage?
Kurt Strovink: One of the things we've been thinking about more broadly - both in this book, Journey of Leadership, and in other areas - is what are the undelegatable roles of the CEO in different functional areas? Because in most cases, you should let your team lead rather than wrestling leadership away from them.
But in the AI space, I think one of the few things a CEO needs to remain involved with is probably the decision on whether to lead or follow. This is in consultation with the senior team and ultimately the board, of course.
I think many more are deciding they need to lead, as this may be an area where you can't truly fast follow. Fast followership risks becoming permanent laggardship. They're worried this isn't like other areas where they can watch and then execute. They feel they need to lean in more proactively.
That strategic stance is a question the CEO typically gets involved in.
I also think CEOs need to push on identifying the highest value uses within this landscape, making sure they're not just pursuing a litany of use cases, but actually focusing on key domains. That's super important for boards and CEOs to insist on.
Sometimes companies content themselves with doing a handful of use cases that are going well. The board feels okay that they're doing something, and the CEO feels progress is being made. But that's insufficient.
You really need to think about end-to-end processes - what we call domains, which are categories of use cases that cohere into business outcomes. I would say that's a CEO responsibility - to insist on that level of aggregation and strategic focus.
And then as we discussed earlier, I think having a sharp view of risks, front-loading that consideration, and ensuring legal, risk, business, and technology teams are working together on these questions is critical. You need empowerment across multiple parts of the organization.
Finally, I'd say this is an area where you have to have a learning mindset. You're not going to know everything upfront. You'll have to iterate and learn. I think modeling openness to that as a CEO - showing that's the space we're in and we expect to learn as we go - is important. It sets the tone at the top for learning and encourages people to be learners themselves.
Michael Krigsman: Much of what you're describing is part of the cultural fabric of an organization. We have a question from Twitter on this topic from Arsalan Khan. He says the CEO's personality can shape the organizational culture, but CEOs often change every 3 to 5 years, leaving the culture in flux. How should an organization respond to these leadership changes?
Kurt Strovink: First, I'd challenge the premise that culture necessarily changes as the CEO changes. I think some of the best CEOs lead in a way that creates and institutionalizes things that persist beyond their tenures. That's not easy to do, but it starts with focusing on the ten-year view: what can this organization do in ten years that it can't do today? How do we create systems and embedded processes for learning and adaptation?
I actually think the best CEOs outlive themselves in terms of their impact on organizations. So while I recognize the premise of the question - that CEOs rotate frequently and we need to manage that flux - our view is that CEOs should think about what happens after they leave. They should try to lead in a way that sets up the next generation.
In fact, that's something we talk a lot about in the CEO lifecycle. We discuss stepping up when you're not yet CEO, starting strong in your early years, staying ahead by upping your game mid-tenure, and "sending it forward" when you're setting up the next generation. There's a commitment to a through-cycle mindset much broader than just your own tenure that I think is part of CEO excellence and leadership.
But having said that, it is still challenging as leaders change. I would suggest focusing on the processes that are stable in an organization and that will be part of your signature strengths. I see many CEOs and senior teams thinking about what's stable versus agile in their organization, trying to get a better understanding of the stable core. That's something meant to live on beyond individual CEO tenures.
It's a great and persistent question though. I think CEO tenures will likely hold steady or become slightly shorter given the speed of change in the world. We know many CEOs don't even last three years. So it's a constant tension in some organizations.
Michael Krigsman: One of the challenges with AI is it's very difficult to determine the distinction between hype and reality. You combine that with the enormous investments required. So how are CEOs managing that bundle of issues?
Kurt Strovink: I think maintaining optionality with partners is important, as we discussed earlier, because things are changing so rapidly. We have a group in our own firm with some of our best analytics people who constantly monitor the ecosystem, meeting with various players to understand which new applications are real versus vaporware that doesn't live up to its claims.
There's a lot to sort through. I'd say organizations need that kind of capability to be strong - to constantly monitor this changing landscape of potential providers and partners.
So one key is to be mindful of optionality and the partners you work with. I think CEOs that ask their teams, "How do we maintain optionality here?" are on the right track. How do we evolve as the space evolves over 3 or 6 months? That's crucial, because you can make investments too fast or too soon in one major area or partner, which can limit your strategic degrees of freedom downstream.
I also think you need a strong ROI equation - investment and return in business terms. Technology leaders can help a lot here in partnership with business leaders. But having a sense of what returns will look like often leads people to focus on earlier areas where there can be clear impact - training and onboarding, for example. Getting a three-year employee to operate like a six-year employee through AI-enhanced training. Those kinds of early, no-regret moves tend to be good starting points.
So those are a couple ways I see people approaching investments - maintaining optionality and focusing on higher ROI, lower-risk opportunities earlier in the process.
Michael Krigsman: But what about situations where the technology is moving so quickly and an organization feels it must invest right now despite the uncertainty, or risk getting left behind? Yet the cost of that investment is very high, and the risk is equally high. How do they balance that strategy?
Kurt Strovink: Strategy by definition involves hard-to-reverse choices made under uncertainty that create enterprise value. The "hard to reverse" aspect is exactly what you're describing. Some of these decisions are one-way doors, not two-way doors.
Reasoning through these trade-offs is challenging, but ultimately it's where senior leadership adds the most value.
The approach I've seen work in practice is to look hard at the domains - not just individual use cases - where the greatest value potential exists. Prioritize those and overlay a risk assessment so you're focusing on lower-risk areas early on. Understand there will be iterative learning, and adjust as you go. Try to create optionality with partners for the longer term.
But leaders do realize they need to commit, and they will need to invest, sometimes substantially depending on their strategic context.
It's an interesting question because while it's particularly salient for AI right now, it's really the essence of strategy more broadly. How do you make big, bold moves in uncertain environments without betting the farm on bad decisions?
Michael Krigsman: Arsalan Khan on Twitter follows up with another interesting point. He says in some cases, employees may just be tired of hearing the hype from their own marketing teams, even though the reality is very different. Where does that aspect fit in?
Kurt Strovink: It's definitely true that there's an incredibly high level of hype around generative AI across all of business right now. So I understand that fatigue can set in for many people.
Having said that, what usually happens is even if something is slightly overhyped in the short term, it's often underestimated in the medium term.
So just about the time when we're getting tired of hearing about it, real opportunities start emerging for leaders and businesses to innovate. We've seen this pattern with other technology transformations - from the dot-com era and other areas. Even electrification way back when - it took time to gain traction, but once it did, it changed the world. And I think most people see AI as following a similar trajectory.
It's not going to be a brief, overhyped event. It's likely to be truly transformative, but it may take longer to get there than all the energy, social media buzz, and commercial interests would suggest.
I think the way to approach it is to persist. Hold to what is changing. Ensure there's a tight business-technology interlock. Marketing is important, but I think the key is going to be delivering real business value through technology in a risk-adjusted way. And I would focus on the medium term rather than getting distracted by short-term hype cycles.
Michael Krigsman: That's an interesting framing - the business value enabled by the technology, and then risk adjusted. That encompasses a lot of the key pieces.
Kurt Strovink: I think it does. And that will play out differently in various contexts. Every senior team will have to look at it through their own lens.
But I would try to keep that as a North Star, recognizing there will be many voices saying it's overhyped, and others saying we're not moving fast enough.
Another risk is complacency - thinking you're making good progress because you have a few use cases, or even a couple dozen. But that's table stakes now - everyone has use cases. The real questions are: Are you starting to industrialize the right ones? Do you have coherent domains? Are you beginning to affect business outcomes that involve multiple, sequenced use cases? That's where I see the real opportunity.
Michael Krigsman: Kurt, there are a host of ethical issues associated with all of this. Potential job displacements from AI. Customer privacy concerns. We discussed bias earlier. There are questions around algorithmic decision-making and lack of transparency. How should business leaders and CEOs navigate this complex set of ethical and governance considerations?
Kurt Strovink: There really is an ethical dimension to leadership generally, but especially when it comes to thinking through these AI issues. Leaders need to try to see around corners, clearly consider what could go wrong, but somehow not let that paralyze the organization. It's a very difficult balance to strike.
If people are always worried about what can go wrong, they often end up idling and not moving forward. On the other hand, if they're not sufficiently imaginative about potential risks, they're underserving their senior roles.
So I would say it's an ever-present challenge and trade-off. Senior-most executives, not just CEOs, deal with these kinds of gray area trade-offs that require judgment. And maybe some imagination as well. I think imagination is sometimes as important as judgment and ethics. It's the ability to envision something that's not there yet and ask yourself, if it were to happen, how would that change things?
But it's equally important not to fall into the failure mode of just stopping and not starting because it's risky and there are many potential pitfalls. We've seen situations - and I think many CEOs would attest to this - where they get locked down by their legal department, compliance teams, and others. They can't move forward in any workable way because, of course, there are lots of risks.
So how do you create the right bounded entrepreneurship? I think that's the real test of leadership. It's a good way to define at least some elements of what modern CEOs need to do.
There are other analogous areas too, like deciding when to take public positions on external issues. When to speak out and when to stay quiet. How to navigate when half your employees want you to do something and half don't. How do you thread those needles? How do you reason morally through these kinds of things?
Usually it's about creating a framework for what you will and won't engage in that's unique to your organization. So you might decide to take positions on areas A, B, and C, but not wade into commenting on everything like a public citizen would. That may be beyond your business model or remit.
But thinking through those trade-offs - I'm using that as another example similar to these AI risk trade-offs - where you have to move forward, but you have to do it within a framework that you've carefully conceived and then socialized within the organization.
Michael Krigsman: Would it be fair to say that it's really the CEO's job to manage through these conflicting goals and sets of issues?
Kurt Strovink: I've often thought that part of the role of a CEO is to be a chief calibration officer. You need to calibrate across these polarities or tensions.
In fact, one of the things we discuss in our Journey of Leadership book is that all leadership involves trade-offs. How do you balance competing priorities? It's a deep concept in human-centric leadership. And it starts with managing the stresses on yourself as an individual before you can actually help lead others.
But I do think you're right. Part of the CEO role is dealing with things that can't be resolved before they reach your desk. Those are typically thorny issues with very difficult trade-offs. Or they have deep political implications that require your attention.
This concept of the CEO as chief calibration officer speaks to the fact that ultimately, CEOs - informed and supported by their senior teams - need to grapple with questions like: Are we moving fast enough? Are we moving with sufficient scope and scale? Those are calibration questions. And I think that is, to a first approximation, co-extensive with the role of a CEO.
Michael Krigsman: Speaking of calibration and working through really hard problems, your book describes the story of Moderna's CEO and how he and his team managed the onset of COVID-19. Can you tell us about that?
Kurt Strovink: It's one of the 24 cases we share in the book. We wanted to be practical but also highlight the human side of how these individual leaders developed and grew.
It's a fascinating case. As many know, Moderna succeeded in developing its vaccine in record time by the end of 2020, and then scaled production toward a billion doses. The leadership required to do that, at a time when it wasn't believed possible, was quite substantial.
Stéphane Bancel, Moderna's CEO, was a great example of someone who was decisive and bold, had high aspirations and put them forth, but also empowered his people. There's one famous story where he's talking to his head of manufacturing, and he asks, "How do we get to a billion doses by the end of the year?" The manufacturing lead responds, "There's no way to do that." And Bancel says, "Sorry, let me reframe. I've spoken incorrectly. What do you need for us to be able to do that?"
I think that's an interesting moment that captures his leadership approach. He was empowering, but also very clear on the end goal, saying "I will move heaven and earth and do whatever I can on my side." And there were many things he did do to create and underwrite the chance for them to succeed. He was decisive and unbending on the goal, but also hugely empowering of his team.
It's a great story of a company energizing itself - coming back to the FDA in just 34 days and then ultimately getting to that billion dose target, which they did achieve. The inside story of how that happened is quite remarkable.
Interestingly, there were also moments where Bancel and his team slowed down when they felt they needed to. For example, to get more diversity in their clinical trials. So they weren't just about speed at all costs. They had a quality overlay as well.
But they were able to manage this incredibly high-stakes moment for humanity with a strong, bold aspiration, radical empowerment of the team, and a sense of purpose and story around why this was so important. Everything was shut down, but this was an exciting moment where they could do something transformative for humanity. That was very motivating for people.
So while it was a unique moment in time, I think it has a lot of elements that can apply to leadership in other contexts. That's why we chose to tell that story.
Michael Krigsman: One aspect of the story in the book that I find compelling is there was a real bet-the-company moment. It was a situation where the need was so great and they had to make a decision before they knew the likelihood of the outcome. Which is very much like the AI issues we were talking about earlier.
Kurt Strovink: The analogy to AI is a powerful one. You have a compressed timeframe, which in the Moderna case was driven by COVID and the urgent need for a vaccine. But with AI, it's this sense of incredible, kaleidoscopic speed and evolution of the technology, yet still needing to make decisions quickly. So I think you have this compressed-timeframe feature, which is quite similar in both cases.
You also have this idea of empowerment and how to get the full organization to collaborate, to break through silos. That's required in the AI space as well. There's a commitment to iteration and learning - "we will figure it out together" - and having some sort of bold vision. We've talked about those themes throughout our conversation.
So I think there are a lot of analogies. And it's also a space with risks that have to be negotiated, but we can't be stymied by them. We have to figure out a way through this forest. I think that's similar to what we've been discussing around AI.
So it is a provocative analogy for leadership in an AI context, to think about the Moderna case more broadly.
Michael Krigsman: On this issue of calibration, we have a really interesting question from Lisbeth Shaw on Twitter. She asks: How can CEOs work through the ethical issues of AI and take their organizations and boards through these issues when they must balance values and revenue?
Kurt Strovink: It's about trying to get that balance right - having a full-throated conversation about business value, but also the attendant risks. You need a way to say: here's how we've ranked and stacked the business opportunities, aggregating them into domains rather than just use cases. But we've also looked at the risks - not just first-order risks, but second- and third-order effects. Because there are some things that can bite you that are secondary or tertiary consequences.
I think you need those two lists - opportunities and risks - and you need to look at things that are high on one side and low on the other. That's how I've seen CEOs and boards in companies that have pursued this well step through the process.
What they try to do is discipline it with facts and discussion of what's known and unknown. Then they iterate and commit to a culture of learning. They also try to get external validation and involvement. So it's not just listening to the people who know this space inside the company, with everyone else just getting updates. It has to be a much deeper exploration with the full team.
I think it's also valuable to bring in external partners, whether technology providers or others, to help illuminate what's possible.
Another approach organizations can use is challenge sessions. Bring people together and say, let's look at this proposal and poke holes in it. Let's really question our assumptions. Sometimes people do what are called pre-mortems, where they imagine the initiative has failed and ask, "Why did it fail?" Go around the table and have everyone give a reason. The insights you can get from teams reasoning like that are hugely valuable for troubleshooting.
Sometimes the results of those exercises can be shared with the board. And I find boards are very interested to see that a management team has thought through these issues thoroughly and tapped into the full intuition and experience across multiple people.
But it's not easy. And the question is a great one because I think it's going to be one of the key challenges going forward. Balancing these considerations is really at the very heart of senior leadership.
Michael Krigsman: And when it comes to AI, there's so much uncertainty. The technology is evolving so quickly. We know AI will be very important, but we don't know exactly how important or where the ultimate implications will fall. We know it will affect jobs, for example, but precisely where and how we don't know. I think that makes it very difficult for CEOs and other senior leaders inside companies.
Kurt Strovink: Absolutely. I keep coming back to thinking about hard-to-reverse decisions made under uncertainty that create enterprise value. We often offer that as a definition of strategy, but all three of those terms apply acutely to AI. The decisions are hard to reverse, they're made under great uncertainty, and they have the potential to create significant enterprise value.
So we have to navigate through that complex equation. Exactly as you said, Michael.
Michael Krigsman: What do CEOs want from their senior technology leaders?
Kurt Strovink: I see a lot of CEOs asking for business-technology interlock. They want business and technology coming together to develop roadmaps, plans, and return-on-investment projections.
I get worried when I see the technology group treated as just a fulfillment entity and not a co-creator. And I get concerned when the business isn't investing to understand technology and treats it as just "IT" rather than a strategic capability.
CEOs really want those business-technology interlocks. They're demanding that from their teams. They're trying to get business leaders to operate that way and become more tech-fluent. And they want technology leaders to become more business-savvy and see it as their role to orchestrate outcomes and co-create paths forward with the business.
That would be one key thing. A second is sometimes having the leadership capabilities to create collaborative dynamics and lead people to outcomes even when you don't directly control them. There's a human capital and leadership element to this. CEOs look for their technology leaders to step up in human-centric leadership terms. Becoming more collaborative, better listeners, showing more vulnerability - all the things we talk about in the Journey of Leadership apply to technology leaders as well.
A third aspect is the ability to demystify and simplify technology for the board. Can you isolate and reduce complex concepts to their essence so that business generalists can understand, rather than requiring deep technical knowledge? That's a capability most CEOs and boards really appreciate from their technology leaders. When they see it, those technology leaders become very influential because they can translate between business and tech. They can explain how to move forward in this new world.
Finally, I've seen many cases where a company has an overall strategy, and then also has a separate technology strategy, but the two aren't as tightly linked as you'd want given how central technology is to overall strategy these days. So technology leaders need to be not just conversant in, but real students of, the broader strategies of the firms they help lead. They should be able to explain why, under this enterprise strategy, we're pursuing technology in this particular way.
It sounds basic, but I think that linkage often isn't as strong as it needs to be, at least in the CEO's eyes.
So those are three or four key things. I'm sure you've seen other patterns as well in your conversations with leaders.
Michael Krigsman: There have been relatively few CIOs, CTOs, or Chief Data Officers in large organizations that have equal footing on both the technology side and the business side. I've interviewed a few of those people on CXOTalk, but I think the challenge is that the technology career path is focused on technology, and the business career path is focused on running the business. There are very few people who have an equal grasp of both. I think that gets to a fundamental issue.
Kurt Strovink: That's a great point. Interestingly, I can think of three CEOs in just the last seven or eight months who have told me that in their succession planning, they're considering heads of technology as potential CEO candidates. That wouldn't have been the case five years ago.
So I think your point about career paths is really important. It's noteworthy that we're starting to see CIOs being considered as future CEOs. People are beginning to imagine that career trajectory as a possibility.
If that's going to happen more broadly, we need to think about how to develop that kind of business fluency earlier in technology leaders' careers. And also how to build their ability to create bridges across the organization and get things done in agile ways that involve both technology and business.
I love that point you raised. It just struck me that this is really the first time in the last 6-12 months that I've heard CEOs explicitly talking about technology leaders being part of the CEO succession planning process. That's a significant shift.
Michael Krigsman: On this issue, we have another question from Arsalan Khan. He says not all CEOs are tech savvy. How should CIOs create trust with the CEO in order to deploy these emerging technologies? And he wants to know, does it matter where the CIO reports? Should they report to the CEO, CFO, COO, board, or someone else?
Kurt Strovink: Building capability and trust between chief technology officers or chief digital officers and CEOs often involves creating journeys of discovery. Usually that's about experiences, not PowerPoint decks. It's references, talking to other people, convening groups to speak about issues. It might involve creating a neural network of connections. The goal is expanding the ambit and field of vision for CEOs and boards by bringing in folks who can talk about the state of the art in different areas of your industry or beyond.
I think that's a big part of it - create experiences. Many people will discuss facts analytically and bring business cases. But there's often a missing premise - what do I need to know to truly appraise this? There's insufficient attention paid to learning and experiences as opposed to just facts and recommendations.
So if you can take your CEO on journeys of discovery - both inbound and outbound - I think that's a big part of developing conviction and experience. That would be one key idea.
I also think communication is really important. Being able to find the right level for speaking to CEOs and boards is crucial. It has to be clear, not dumbed down, but it also can't immediately dive into technical details that aren't necessary for the discussion at hand. Learning to elevate the dialogue and frame it in the right way is a task for every business leader.
It's an executive skill, but I think it's particularly important for technology leaders to master. Because they have so much deep technical knowledge that others don't, it can be tempting to get too deep too quickly. Resisting that impulse is key.
Michael Krigsman: Earlier, when I described those unique individuals - relatively few - who possess both deep technology skills and deep business skills, what I left out was the glue you just mentioned: excellent communication skills.
Kurt Strovink: Absolutely. I think it's super important in large organizations.
Another thing I would add is the importance of role modeling continuous learning. Showing that you don't see yourself as the resident expert on every topic immediately is a form of humility and vulnerability. But it's used to an end that helps lead other people, much as we talked about in The Journey of Leadership.
I think those skills can be doubly valuable in the technology space. When you're not just relying on your expertise in one domain, but showing that you're continually learning and bridging to new areas, you're modeling what you're demanding of others. You're showing how your team and others will need to grow as well. And you're demonstrating how to do that together.
Those are powerful ideas. And one other thought I've long been interested in, especially in innovation and disruption areas: Some of the most powerful motivators for people to take action are when they see analogies in other industries that they think might apply to their own.
So I think a technology leader can also be a student of where else technology and business interlock is happening at scale and to great benefit. Why is it working there, and what would it mean for us to do something similar here? Maybe we get to know those people. Maybe we bring them in to share their experiences.
That falls under the heading of expanding learning and creating discovery paths. But I think analogies to other contexts can be a very powerful way to build conviction. Technology leaders, in my estimation, should be much more focused on finding and sharing relevant analogies than they often are.
Michael Krigsman: What advice do you have for boards of directors when it comes to this whole AI situation?
Kurt Strovink: I think one key area is helping the management team decide on their strategic stance. How fast do we believe we need to move in this space? This is subject to the risks we've discussed, but boards should be part of that collaborative conversation. It's ultimately the CEO's decision, but boards can engage. How fast should we move? Why not faster or slower? Having that discussion as a board is important. Collaboration on strategy is crucial, in my opinion, but particularly so in the AI space.
Second, I think boards need to push to get above the use case level. Having a few AI use cases is table stakes now. It might sound like progress to the uninitiated, but boards need to push for real aggregation of efforts. I call these "domains" - categories of use cases that lead to meaningful business outcomes.
Boards can challenge, motivate, and expect to know which domains the company is focusing on. Individual use cases are fine, but they won't move the needle much on their own. What are the areas where multiple use cases come together to drive real business impact? Boards should push on that question. I think that can be very useful.
Third, I think boards have a role in modeling a learning mindset and capacity for adaptability. We're in a space that's going to require adaptive leadership. So celebrating the fact that people are going to step forward, take some risks, and that not everything will play out perfectly - that's okay. Boards can show they're aware of the environment we're in. It's a bit like modern leadership generally, but I think it takes particular expression with AI.
Boards can do a lot to show they're interested in entrepreneurship and proactive leadership, while still being thoughtful about risks. But they need to recognize we won't get it all right the first time. Patience and encouragement of learning are key. Role modeling that mindset for the CEO and management team is really important.
And then fourth, I would caution boards to be wary of over-applying what they think they know from 5 or 10 years ago to this space. The overreach of a board member making a bad analogy can be quite problematic. Many board members are trying to add value, not cause problems, but they go back to their past experience base. I would just suggest putting an extra filter on the analogies you think are relevant to AI.
You may have unique insights to offer, but you may also inadvertently lead folks astray. Or you might quash innovation because you get nervous about things you've read in the media. That's not necessarily helpful to management teams who are in the thick of navigating these issues day-to-day.
Michael Krigsman: What do CEOs want from middle managers, and what advice do you have for middle managers?
Kurt Strovink: For middle managers, that last mile of execution is absolutely critical. Getting AI and other technology innovations to really take hold on the front lines and influence change - there's so much value in that for organizations. And middle managers play a huge role in creating those conditions. So I think they're super important.
They can also help institutionalize "what works around here" - creating standard work practices and continuous improvement mindsets. They can build systems of replicable processes that make companies better. That speaks to your earlier question about what's left when leaders depart. What's left are these institutionalized processes. But middle management is often where that happens - or doesn't.
So I think they play a huge role there. They're culture carriers. When they're committed to self-development, learning, and iterative improvement - what we might call proactive leadership within an appropriate, adult-like view of risk - I think that's super important to creating the cultures we need in high-performing organizations.
Those would be some key pieces of advice. And then I would humbly suggest that middle managers take a look at the six practices for governing oneself that we identified in our research with 500+ CEOs over ten years. And the six other elements of leading others externally that we think are closely linked to those self-oriented practices.
Just reflect on your own self-awareness around these things. Where do you think you're strong? Where are you mindful? Where might you have room for growth? Are there opportunities to enhance your leadership of others that might come from better governing and developing yourself?
I think if more people engaged in that kind of reflection and development, we could see real positive waves move through companies. It could be quite productive for organizational energy and leadership capacity.
Michael Krigsman: Here is Kurt's book. And with that, a huge thank you to Kurt Strovink. Kurt, thank you so much for being here. I'm so grateful for your time today.
Kurt Strovink: No, thank you, Michael. It's been great to hear your questions and really think through these issues. And the audience questions have been excellent as well.
Michael Krigsman: Everybody, thank you for watching. Especially those of you who asked such insightful questions. Before you go, please subscribe to our newsletter and YouTube channel so we can notify you about upcoming shows. We have some really amazing episodes coming up this fall, so check out CXOTalk.com for details.
Thank you all so much. Kurt, have a great day. Take care, everyone.
Published Date: Sep 06, 2024
Author: Michael Krigsman
Episode ID: 851