Data science leaders Inderpal Bhandari and Anthony Scriffignano join Michael Krigsman on CXOTalk Episode #843 to discuss the impact of AI on leadership, outlining the skills and strategies executives need to thrive in an AI-driven world.
The AI Imperative: New Rules of Leadership
In Episode 843 of CXOTalk, join us for a discussion on artificial intelligence and leadership with two prominent guests. Anthony Scriffignano, a Distinguished Fellow at The Stimson Center and former Chief Scientist at Dun & Bradstreet, brings over four decades of expertise in data science and advanced anomaly detection. Inderpal Bhandari, the founder of Virtual Gold and former Global Chief Data Officer at IBM, shares his experience in leveraging data and AI to drive business value.
Together, they explore how AI is reshaping leadership roles and the essential skills executives need to thrive, emphasizing the need for a deep understanding of technology and data. Scriffignano and Bhandari highlight the challenges and opportunities presented by generative AI, offering strategies to integrate AI into business processes and decision-making. This episode explains how to understand and overcome the complexities of AI adoption and harness its potential to drive organizational success.
Episode Highlights
Adapt to the Changing Nature of Work
- Recognize that AI is transforming job roles and workflows; prepare the workforce through training and upskilling.
- Align new technologies with current business processes to enhance efficiency without disrupting workforce stability.
Manage AI Ethics and Bias
- Implement checks and balances to ensure that AI systems are used ethically and that outputs are free from bias.
- Regularly review and update AI models to reflect ethical standards and societal values, ensuring transparency in AI operations.
Address AI and Workforce Concerns
- Communicate openly with employees about AI implementations and future implications for their roles.
- Develop clear pathways for employees to transition into new roles or enhance their skills in an AI-driven workplace.
Cultivate Data-Driven Leadership
- Leaders must develop a thorough understanding of AI and data analytics to drive company strategy effectively.
- Encourage a culture where decision-making is guided by data insights to enhance accuracy and strategic outcomes.
Understand Investment and ROI in AI
- Carefully evaluate the potential returns on AI investments, considering both direct benefits and indirect impacts on competitive positioning.
- Balance the focus on immediate ROI with long-term strategic benefits and the potential costs of inaction. Maintain ultimate decision-making authority with experienced executives.
Key Takeaways
Navigate Ethical Challenges and Unintended Consequences
Implementing AI requires navigating complex ethical challenges and mitigating unintended consequences. Establish governance frameworks to ensure responsible, unbiased AI use and regularly review models for adherence to ethical standards. Leaders must strike a careful balance between pursuing innovation and managing risk.
Cultivate Essential Leadership Skills for the AI Era
Succeeding in an AI-driven landscape demands new leadership skills. Leaders must become comfortable with ambiguity, develop deep technological understanding, and effectively communicate AI's implications to stakeholders. Encouraging experimentation, inviting dissent, and leveraging diverse perspectives will be crucial for making sound decisions in a rapidly evolving environment.
Prepare for AI's Transformative Impact on Work
The nature of work itself is fundamentally changing due to AI. Leaders should recognize this shift and proactively adapt by upskilling employees, aligning AI with current processes, and fostering a data-driven, innovative culture. Transparent communication and clear pathways for employee growth in an AI-driven workplace are essential.
Episode Participants
Anthony Scriffignano, Ph.D. is an internationally recognized data scientist with experience spanning over 40 years in multiple industries and enterprise domains. Scriffignano has an extensive background in advanced anomaly detection, computational linguistics, and advanced inferential methods, leveraging that background as primary inventor on multiple patents worldwide. He also has extensive experience with various boards and advisory groups.
Scriffignano was recognized as the U.S. Chief Data Officer of the Year 2018 by the CDO Club, the world's largest community of C-suite digital and data leaders. He is a Distinguished Fellow with The Stimson Center, a nonprofit, nonpartisan Washington, D.C. think tank that aims to enhance international peace and security through analysis and outreach. He is a member of the OECD Network of Experts on AI working group on implementing Trustworthy AI, focused on benefiting people and planet.
Inderpal Bhandari is the Founder of Virtual Gold, a Data and AI company. He has served as a director of The AES Corporation, a global energy company, since January 2024. He serves on the Financial Audit Committee and the Innovation and Technology Committee of the Board. Additionally, since September 2022, he has been a member of the Board of Directors at Walgreens Boots Alliance, a global retail pharmacy company, where he serves on the Finance & Technology, the Audit, and the Nominating & Governance committees.
Dr. Bhandari is the former Global Chief Data Officer of IBM, where he led the company's global data strategy to ensure IBM maintained its leadership as the top AI and hybrid cloud provider for enterprises. His career spans over two decades in healthcare, including roles at Cambia Health Solutions as Senior Vice President and Chief Data Officer, and at Express Scripts / Medco Health Solutions as Vice President of Knowledge Solutions and Chief Data Officer.
Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator, known for his deep expertise in the fields of digital transformation, innovation, and leadership. He has presented at industry events around the world and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.
As the founder and host of the widely acclaimed CXOTalk, Michael has interviewed almost 1,000 of the world’s top business leaders, technologists, and academics, providing valuable insights into how organizations can harness technology to drive success and create value. With over three decades of experience, Michael has built a reputation for his ability to distill complex concepts into clear, actionable strategies. His work spans a broad range of industries, including technology, healthcare, finance, and manufacturing.
Transcript
Inderpal Bhandari: How do you tie your business fundamentally to what's happening with the technology?
Anthony Scriffignano: Some of these gen AI initiatives can be extremely expensive, especially long term. You introduce something like this to your customers, they fall in love with it, and then you can't maintain it.
Michael Krigsman: Welcome to Episode 843 of CXOTalk, where we explore the intersection of AI, leadership, and the digital economy. I'm Michael Krigsman, and today we are discussing how AI changes the nature and practice of leadership.
Our two guests are both brilliant and extraordinary people. Anthony Scriffignano is a distinguished fellow with the Stimson Center, a DC-based think tank. He's the former chief scientist at Dun and Bradstreet. He consults and advises governments around the world and has over 100 patents to his name, actually, 102.
Inderpal Bhandari is the founder of Virtual Gold, a data and AI company. He's the former global chief data officer of IBM and sits on the board of directors of the AES Corporation and the Walgreens Boots Alliance. He is also on the faculty of Carnegie Mellon University.
How are AI and other disruptive technologies changing traditional leadership and traditional leadership roles?
Inderpal Bhandari: AI itself has been around for a very long time. When I did my PhD thesis at Carnegie Mellon, graduating in 1990, it was an AI engineering thesis. It's really been around for a very long time. But I think what's happened most recently is that it's become far more accessible to everyday users, to the common person. What used to be very rarefied and sophisticated technology is now in the hands of people so that they can make use of it in an everyday context.
And so, fundamentally, I think it's not just that. I mean, yes, obviously it's going to change the nature of leadership, but I think what's happening underlying that is that it's changing the nature of work. I think, fundamentally, that is the major shift that's happening.
And then, obviously, with that, you're going to have leadership change as well. And you see this in multiple, multiple ways. And obviously, we'll get into all that. But I wanted to get Anthony's perspective before we start getting into the details as well.
Anthony Scriffignano: I did a lot of AI in my dissertation work as well, but I didn't call it that because I didn't want to scare anybody. Now, it's okay to actually admit to using some of these capabilities, and it's kind of a shame if you can't admit it. There are a lot of leaders who haven't necessarily updated their knowledge of these underlying technologies, but they've certainly updated the frequency with which they mention them.
It's almost like they have to say gen AI every time they open their mouth. They have to somehow just say it, like big data or cloud computing, or any other disruptive evolution. Just say it enough times and you'll sound like you're doing it.
But it's not like that, because the way generative AI is different (generative, meaning it generates its own content) is that it's going to be there whether you use it or not, and it's going to be part of how you lead, whether you think about it or not.
So the data that you use to make decisions, the data that your customers use to evaluate how your organization is doing, the opinions of you in the social space, all of these things are influenced by the democratization of this technology.
Our challenge as leaders is to not just talk about it, but to understand how the skills that got us to where we are today are different than the skills we're going to need going forward when everything is ubiquitous and discoverable. Other than that, it's pretty much straightforward.
Michael Krigsman: So, what are the implications here? What does this mean for leaders, and what do leaders need to do that is different? And oh, by the way, what exactly is different?
Anthony Scriffignano: You need to be a servant leader, you need to be very participative, you need to invite dissent. You need to make sure that you get the right cross-functional group of people around the table when you're making big decisions. You can't possibly know everything that's going on right now. It is far too complex and far too dynamic.
Inderpal Bhandari: The nature of work itself changing and evolving, there is a lot to learn as the situation evolves. I'll give you some context in terms of my experience.
When I was leading data and AI internally for IBM for about eight years, what we learned there was that to have it successfully take hold within the enterprise, obviously you have to get the data right and you have to get the technology right, but you also had to figure out ways to integrate it into the business processes that they had, which is now a bit softer than the hard technology, because you start dealing with people who have already been working those jobs and they might feel threatened by AI and so forth.
But then lastly, also the culture. The culture had to change. It had to become much more data and AI driven. It couldn't be the typical pyramid kind of decision-making scheme that you would have traditionally. You'd have to give a lot more freedom to the people who were on the front lines who were actually doing the work, they had to be empowered.
And so that just makes sense. If you say you're going to have a data and AI-driven company, and then the employee has to come back up the management chain to get permission, they'll never be as quick, they'll never be as nimble, the decision won't be as good, because they just don't have the right culture to go with it. So I think those are some of the factors that go into a successful adoption of AI.
I think what's different now, because this is kind of true even of traditional AI, machine learning, all that stuff; it falls into that same paradigm. I used to call it the cognitive enterprise blueprint. But what's happened now, going back to the generative piece, is you now have an AI system that will create its own content. And like Anthony said, it's going to do that no matter what you do; you can ignore it, you can choose not to use it, etcetera.
But I think it fundamentally changes the nature of work, because you have an AI system that's now generating its own content; it can kind of earn the right to be much more of an assistant than you would previously have had. It's almost becoming an assistant that's going to come up with ideas, not just ideas that were hidden patterns in data, but something creative. And so that aspect is different.
It's extremely different from what was there before. And I think it's changing the nature of work. And correspondingly, it'll go beyond the four pillars of the cognitive enterprise blueprint that I talked about, just because it's going to be very different from what we had in the past as we move forward.
Anthony Scriffignano: I'm not accusing you of this, but I am definitely accusing others of anthropomorphizing this technology and thinking that it is intelligent and that it is creative. It is convolutional: it is taking a bunch of information that's available and presenting it in ways that maybe you never thought about presenting it.
Now, if you want to call that creative, that's great. But the creation is actually happening in your human mind and in your ability to interpret that.
So I usually try to recommend three frames. One is the epistemology: what do you have to believe in order to believe the method? You have to believe that the data it consumed contains the thing you want to learn about. Right? If you're trying to learn about, I don't know, I'm doing a lot of work lately with understanding things that are happening in the environment.
Say you want to know what's going to happen in the environment in the context of some cataclysmic industrial event that never happened before, that just happened yesterday, and you believe that the AI is going to have enough data in its corpus to help inform you. You need to think about that, right?
Maybe it does; probably it doesn't. So you have to ask a different question. The second frame, besides what you have to believe, is, and I think this is what you're talking about, don't hold the reins too tightly. Allow the people in your organization to make decisions, but don't allow them to just go off and do whatever they want, because what they'll do is chase 4,000 shiny objects and make zero progress.
They'll have a lot of fun, and they'll have an amazing lab and incredible toys to show, but nothing will ever get done and nothing will ever have any kind of continuity to it, because they'll be on to the next thing.
So you still have to hold the reins. You just have to hold the reins loosely. And then the third frame that I would recommend: I used to play water polo, and when there's a fast break in water polo, it's really important to swim as fast as you can, which means keeping your head in the water, and to pick your head up so you don't get hit in the head when they pass you the ball.
So you have to pick your head up when you're involved in these initiatives and make sure that the environment hasn't changed while you're doing this super cool gen AI thing that you think is going to change the world, that the world didn't change before you got to it, and now the thing you're building doesn't matter as much as you thought it did.
Michael Krigsman: Please subscribe to our newsletter and subscribe to our YouTube channel.
This super cool gen AI thing that you're talking about: in many cases, it's obvious that it will reshape the future exactly as Inderpal was describing. And that means that there is a level of investment that is absolutely required. And so, Anthony, you just described this as chasing the shiny object, in a sense. What if we were to describe it instead as examining necessary potential investments, and therefore this is required?
Anthony Scriffignano: I don't disagree with you at all. As long as you do it in a systematic way, as long as you understand how you would know if you were succeeding, as long as you understand the possible opportunity cost of doing this thing versus doing another thing. The leadership, which is what we're trying to focus on here, is about holding those reins loosely, allowing that sort of innovation, but not doing it in such a way that you just allow people to do whatever is amusing.
Inderpal, I see you've taken three breaths to try to talk. I want to give you a chance to jump in here.
Inderpal Bhandari: I sense that there is something deep there with the AI programs, because I think it's not so much just a question of the data like we used to think about it before. It's now, I think, best understood in the context of text rather than imagery, although I think the thought could carry over to imagery. Within text, within language, there are nuances. When those nuances actually get captured in the network, and it's able to essentially find links across these nuances in very subtle ways, you will get some very counterintuitive ideas that I think would be very difficult to get otherwise. So I think that aspect is there.
But to come back to the point that Michael made with regard to the necessary investment, I'll add another context to Anthony's point. In addition to being systematic, the issue is that we know there's going to be complexity even without generative AI. If you just go with what I'm going to call conventional AI, prior to the gen AI work, we know that there's a lot of complexity in doing that successfully, at least in an enterprise context.
You definitely can have applications aimed at consumers that do one or two things that could be very successful. But with regard to an enterprise, we know you've got to get the data right, you've got to get the technology right, but then you also have to figure out how to adapt the business processes.
And finally, the hardest part: how to change the culture. I think with the gen piece on top of it, where now you have this technology which can be creative, it can be creative in good ways, it can be creative in bad ways, it can hallucinate, etcetera, it's going to add a fifth dimension. It's going to make it even more complicated for enterprises to get this right in a broadly scaled way where you can adopt it across the entire organization.
So to your fine point, Michael, yes, necessary investments. But unless you get the formula right, those investments are not going to bear the fruit that you're hoping for.
Anthony Scriffignano: If we take AI out of this for a second and just look at the world that we're trying to operate in, and look at the new normal of disrupted disruption: the fact that everything gets disrupted, and our response to that disruption itself gets disrupted by something else happening, either geopolitically or environmentally, or from a regulatory standpoint, and so on.
Never before has it been more important to quickly embrace new technology that can help us keep pace with this.
The caveat would be there never has been and still never will there be this easy button where you can just push the AI button and let AI go solve it for you. There's no substitute for leaning into this and figuring out what new skills you need and how you're going to use it, and how you're going to get the organization to adopt it, and how you're going to get your customers to embrace what you're doing and your shareholders to like it.
All of those things are still important.
Michael Krigsman: Effectively, what you're both saying is this is no different than anything else. We focus on the culture. We need to, you know, keep our eye on the ball and we can implement our ERP systems.
Anthony Scriffignano: No, no, because you can't move fast enough anymore as a human being. You need this fifth dimension that Inderpal is talking about. We can't think fast enough on our own, without help, to understand what this content creation is doing without involving it in that process.
Inderpal Bhandari: If you take an ERP system and you try to implement it, the impact on changing the culture of the organization is just not comparable. This is in a different realm altogether. I think an AI platform that tries to enable everybody in the company to make use of data and AI easily, that is a much, much bigger cultural lift than doing an ERP system or a traditional system.
And I think that's where the angles about leadership come in. And I think that's what I said was true even of conventional AI. But I think with GenAI, you have a further complexity because the nature of work itself is changing.
And I'll give you a couple of examples, just because people don't really understand it yet. It's evolving, and it's going to get clearer as things go on, but it's important to understand that the nature of work is changing. You see it on the governance front. I serve on a couple of boards, but look at what happened with OpenAI and its board and Sam Altman. They took a stance saying, we don't think the company's going in the right direction, and then pretty much ran the risk of every employee essentially departing from that company, because the employees believed something completely different.
So if you're doing governance in the traditional way, if you're on a completely different page with regard to the employees, there's nothing to govern. I mean, it's not going to work, but it goes back to the nature of the situation, the nature of the beast. It's different. The nature of work is changing. You see that also reflected in some ways with Tesla and Elon Musk's package.
Again, you kind of see that because Tesla is an AI company. No matter what we say it is, fundamentally, it's an AI company. I mean, they've got the most sophisticated algorithms figuring out what cars do and so forth, and they'll apply it to other areas as well.
But you fundamentally, again, have a situation where the shareholders are divided: some of them are totally for the pay package, and some others are not, because they're looking at it more traditionally. And I do think that the situation has just completely changed. And so we have to reflect on how it's changing.
And my sense is it's more complicated, because just conventional AI itself is complicated in terms of changing the culture. It takes a rare leader to be able to pull all that off. There's the servant leadership, but there's got to be enough appreciation of the technology, of the data.
If you just think, oh, I've got my numbers to meet right now, and there's this AI thing, and we're going to do some project on it here and there, it's not going to happen. I think it's going to result in a different sort of company. I recall a survey that we did around the time, maybe a little before, I left IBM, where 50% of the CEOs we surveyed basically said that more than half of their products in the next, I forget if it was two years or five years, but some narrow timeframe, were going to be completely new; they would be reinvented through the technology that was surfacing.
So it's that kind of situation that we are grappling with. I think it's going to be more complicated for an enterprise, especially a traditional legacy kind of enterprise, to navigate through all this. And they could also spend a lot of money trying to do this without getting anywhere.
Michael Krigsman: What I find particularly interesting for both of you is you're both PhD data scientists, and yet you're both very focused on the cultural aspects. So where and to what extent does the technology itself start to play a role here?
Anthony Scriffignano: The technology evolution is very disruptive right now. We've had a couple of years of GenAI, this, GenAI that, that isn't a giant surprise to me. I suspect it was not a giant surprise to Inderpal. These are not new technologies. These are technologies applied in a new way that has caught the attention of the world. And now all of a sudden, we have enough data and we have enough compute power to start doing some things that were only theoretically possible before.
All of that is very disruptive in the technology world. Quantum computing will be very disruptive pretty soon, and then there'll be quantum AI, whatever that is, and there'll be the next disruption. And these sort of giant things that happen like this one, are not overly surprising. They play out in ways that hopefully surprise us and delight us and give us new opportunity. But I'm not shocked by it.
What I have to do as a student of leadership is to be reflective. The research shows that the most effective leaders are those who can stop and not just intuitively say, this is what we're going to do, but who can examine the first principles of why they think so, who can communicate effectively to their organizations, and who can get the kind of buy-in that Inderpal is talking about so that you don't have this sort of revolt of the people you're trying to help and serve. These are the tough things to do. The soft stuff is really hard. There's no easy button for that.
Inderpal Bhandari: Just to stay with the four for the moment: data, technology, business process, and culture. I'm not downplaying the importance of technology or data there. I think those are two very important pillars. But to Anthony's point, the softer pillars are the harder ones to affect. Any senior executive who's been in this role is going to arrive at that kind of conclusion.
That's not to say that the technology and data are not important. In fact, that's one other way this is going to make leadership more complicated. Because if you have leaders who really don't have a deep appreciation, or at least an intuitive understanding, of the technology or the data, they are not going to be successful, because they'll downplay that stuff.
Then they'll just think, my CIO will figure this out, and it'll become an issue, because eventually you're going to have to align whatever business strategy you're on with the tech, with the data, with whatever you're trying to do. And on this stuff, again, somebody's going to have to make some very tough calls as to how you do that alignment.
Anthony Scriffignano: I tend to look at people, process, technology, and mindset. The people, process, and technology, I think, are well understood in change leadership. All the pundits of change leadership talk about those as the drivers. What often gets missed is the mindset: how you capture learnings, how you get people to embrace this change, how you deal with people who are dissenting.
If you think about it from a board perspective, and Inderpal, you have much better experience with this than I do for what I'm about to say, but my experience on and with boards is that very often you have a lot of very important people on these boards who are, dare I say, very impressed with their progress in life and not necessarily well informed about the kinds of disruptive technology we're talking about here.
So the mindset problem there is them sort of becoming the student instead of the teacher and being okay with that. Really hard to do. Really hard to do, because they think they know what you're going to say before you open your mouth.
Michael Krigsman: Inderpal, explain to us how as business leaders, we can talk to the board and get them to listen, and then we have a bunch of questions that are stacking up and we'll go to some questions.
Inderpal Bhandari: Boards are far more aware of the need for a permanent seat on the board for somebody who actually, you know, grew up with technology, somebody like myself. I mean, that's the reason why I am on two boards, and that happened very quickly. But I think what you see is there is an awareness now, because previously this would have been relegated to one committee, and they would have said, we'll hire a consultant and they'll advise us on this.
Now they want somebody who's actually a technologist, you know, who's kind of grown up with technology, sitting on the board itself so that they can ask that person those questions and get into the discussion in a very deep and detailed way. And that's relatively new. I don't think that was a phenomenon that even existed two or three years ago. To your question, in terms of how you make the boards more aware of this, and even the executive team for that matter, it's all about education.
So there's a big thrust now in terms of educating the boards around technology and aspects of technology. And right now, of course, data and AI are right up there. It started with cybersecurity. That became something extremely important. Boards needed to know that. And there's always somebody on the board who's well versed in cybersecurity. But now you kind of see that also for AI and data.
So I don't think that the awareness now is an issue. I mean, people kind of understand the importance of this, but there's still a need for education at the board level and the C-suite level about these technologies. So part of the work that I'm at Carnegie Mellon for is really to develop those kinds of offerings.
Michael Krigsman: Do you have some short bits of advice for communicating these very technology-based issues and implications to board members who don't have a technology background? And very quickly, because we have questions stacking up and we have to get to the audience.
Inderpal Bhandari: One, complete transparency is essential in terms of what the company is doing, because part of the reason that boards become nervous is they don't fully understand. And if they don't fully understand and the executive team is not being completely transparent about what they're trying to do, it's going to be difficult for them to make a decision. Just as, if there's a vendor coming into a C-suite and talking about AI, unless they fully trust the vendor, they're not going to surrender their data and so forth in that context.
So I think transparency, trust, education, these are the aspects that are number one on my list. I think they also go towards operating ethically, operating in a way that leads the world in the right direction as well, because that is another big concern, which you don't see right now being addressed in any systematic way, other than some of the regulation, which is also fast and furious but obviously will never be able to really keep pace with the technology.
Michael Krigsman: Anthony, let me direct this one to you. This is from Greg Walters, and it goes back to the investment question that we were discussing earlier. And he says on LinkedIn: "Is it best to ride the wave and respond versus trying to create and control that wave of adoption, which implies how do we make these investment decisions? How do we know if something's a shiny object or not?"
Anthony Scriffignano: It very much depends, I think, on your industry and how mature it is. It very much depends on those "what do you have to believe?" kinds of questions when a new technology comes along. There was a wonderful piece in HBR recently about dimensions of risk in AI: misuse, misappropriation or misapplication, misrepresentation, and my favorite one, misadventure. Just a fabulous way of picturing what happens when you get this wrong.
You know, how do you decide whether to ride the wave or to control the wave? Depends a little bit on what you have to believe in order to be riding it or controlling it. So if you're an industry leader. If it's a nascent industry, if you are creating your own market, then absolutely it's a great opportunity to try those ride the wave things.
If you're in a really mature industry, the opportunity cost of betting in the wrong direction matters: some of these gen AI initiatives can be extremely expensive, especially long term. You introduce something like this to your customers, they fall in love with it, and then you can't maintain it. That's a whole different problem.
You have to think about where you are with respect to the pack, what the cost, the opportunity cost, the cost of doing nothing is, and also the level of disruption in your industry and how mature it is. Those are some of the drivers that I would think about.
Michael Krigsman: Inderpal, can you share any further thoughts on the potentially unexpected costs of investing in GenAI? These costs can skyrocket.
Inderpal Bhandari: You address that by aligning your business fundamentally to the technology. I'll give you an example. You had Nvidia, where they really aligned this with their data center work, and you could argue that's natural, that's obvious, they would obviously go there. Nothing major about that.
But we also have a more recent example in what happened with Apple at WWDC, their developer conference, where they came out and talked about AI, and the initial reaction from the market was kind of ho-hum: oh, I don't think there's much new here, et cetera.
And then people figured out that what Apple had actually done was tie the AI to the upgrade cycles for the hardware, the phones, which is their core business. And you saw the Apple stock just skyrocket as soon as people realized that's what had happened. So they figured out a way to tie their core business to the technology, and vice versa.
And once you do that, then you can ride the wave and you can ride it very effectively. I think that's the problem that at the C-suite, the CEOs, et cetera, have to unravel. How do you tie your business fundamentally to what's happening with the technology?
Michael Krigsman: You should subscribe to the CXOTalk newsletter. Go to CXOTalk.com and subscribe. And also subscribe to our YouTube channel so we can keep you up to date on these incredible conversations, which come up almost every week, and they're live.
So here's a question from Arsalan Khan, who says: "In the age of AI, do we really need executives in organizations to make key decisions? What's the time horizon for getting rid of executives?" This is, I realize, a kind of tongue-in-cheek question, but it actually raises a very important point, which is the level of judgment and potentially sentience of these LLMs. So, shall we talk briefly about that? Anthony, what do you think about sentient LLMs?
Anthony Scriffignano: Of course we need executives to make decisions. We just need them to make different decisions. The important thing here is that we have a new seat at the table, a virtual seat, occupied by something that is not sentient but is pretty knowledgeable about what your customers are saying, or how the market is changing, or how the regulatory environment just evolved, or what the medical implications of a given decision might be.
Should you take advice from that? Absolutely. Should you use your human brain, your human knowledge, your sentience, and your actual ability to make new mistakes and learn? Yes, you should. So no, we're not going to replace leaders. Hopefully, we're going to get leaders who make better-informed decisions and are better able to support those decisions with very current information.
Michael Krigsman: The questions are stacking up, which is why I'm jumping through them pretty quickly. And I love questions from the audience. You guys in the audience are so smart, and I am humbled by hearing your questions.
So, Inderpal, Wes Andrews says: "Here's a question that a few of us in the data analytics community are currently wrestling with." He says "How do we temper the fear and trust factors that are being perpetually surfaced in the media, movies, etc? Caution is warranted, but how do we calm outright fear and distrust?"
Inderpal Bhandari: You have to ask yourself: what is it that foments that fear? Just peel the onion a little bit. If you're trying to bring AI technology into an enterprise and integrate it into a business process, there's a substantial amount of automation in that, and the person who's been working that process is going to be afraid for their job.
And to go back to Arsalan's question: it was tongue-in-cheek, but it goes to the same thing. You question, well, is the person really going to be needed? Part of the answer, I think, lies in Anthony's response, where he says you're going to need that human judgment to eventually make the right decision. You can't really get that from a machine or a robot, because it's not living through the human experience.
It's not something you can just expect these machines to have learned. And that takes me back to an anecdote. There's a movie, American Sniper, where they show the sniper making a couple of decisions about taking the shot or not, and both turn out right. And they explain that the only reason the decisions were right is that, as a father and a family member, he could really relate to what was happening on the ground and determine whether somebody was a combatant or really just a civilian.
If you don't have that human experience to relate to certain contexts, you're going to miss the boat. So I think those aspects are critical.
But to go back to Arsalan's point, and to what I said earlier: I do think the nature of work is fundamentally changing. I don't know if we're going to have less work; we might even have more. But the nature of work is changing.
For instance, do I think the companies of the future are going to be huge behemoths like we've had in the past? No, I just don't. I think far fewer people will be working at companies. I hear talk of a company of one with a billion-dollar valuation. I think you'll see much more of that kind of effect take place. So things are changing, and that's part of the complexity here.
Michael Krigsman: Hue Hoang says: "As leaders, can you share examples of solutions or situations you've experienced with your teams when unintended risk was discovered after AI methods were unknowingly implemented incorrectly?"
Anthony Scriffignano: Can I do that in a public forum that is being recorded on the Internet? Yes, of course I can. I can give you one example, and Inderpal, I'm hoping you have a much better one; I'm giving you a chance to think about it.
I was involved with a situation where, when bad things happen environmentally (hurricanes, tidal waves, things like that), it might shock you to know that people take advantage of the resulting chaos to do other bad things, like committing insurance fraud, or taking people's money and not doing what they promised to help them do to recover, that sort of thing.
So, using a fairly advanced method of finding clusters of likely malfeasant behavior, it's possible to uncover that. And that's great, that's awesome. But then: what do you do with the answer? Do you know with certainty that these clusters of likely malfeasant actors are actually malfeasant?
No, you don't. And you can't go accuse them, because all you have right now is a light shining on them. You need human beings who can use their brains and their experience to, as Inderpal said, take the shot or not. So I wouldn't say that, as a result of this, they did something wrong, but it could easily have gone there if somebody had said, well, just do whatever the AI recommends. Because it's not going to be perfect. Nothing's perfect.
Inderpal Bhandari: There are always two things going on simultaneously in something like this. There's a rush to get it out there, and there's also the need to temper what you're doing, and you can easily make an error in judgment. In my career, and not just with AI but with new technology generally, I've seen us launch products that just were not ready, and so they had huge numbers of bugs.
We've had products that were automated in some ways and ended up downloading data at exorbitant cost, without anyone understanding that you didn't need everything. So there are lots of examples. I'll give you one I think everybody is likely to experience. You have all these cars out there that are essentially smart now, and they will apply the brakes automatically if they think there's a risk of a front-end collision.
And I've actually had this happen to me: if you're making a left turn and somebody is approaching you at speed with their left-turn indicator on, do not make that turn, because the automatic brakes will slam on; the system doesn't pick up the other car's indicator. So again, it's an example: the system is out there, and there are unintended consequences that can come of it. That happens because you can't really anticipate every situation in something like this, and to some extent you have to learn.
It's a game of managing the risk. That's really what it comes down to. And talking about the skills for leaders, that's going to become even more critically important because you're not going to learn if you don't get it out there. At the same time, if you're going to lose your shirt by what you do when you get it out there, your business is over. It's a question of managing and balancing that risk very effectively.
Anthony Scriffignano: When the self-driving cars were first being tested on the streets, there were a bunch of kids who thought it would be hilarious to torment them by making fake stop signs and stepping out on the side of the road, making the cars slam on their brakes. Very funny, until somebody goes to the emergency room. But that led to the evolution of the logic: Is this a likely place for a stop sign? Did the car in front of me stop? Was there a stop sign here yesterday? So we can also learn from these oops moments.
Michael Krigsman: John Scott says: “Dr. Scriffignano, how do you best put your staff at ease with the changes that AI is presenting? At my firm, teams are concerned that their jobs will be impacted.” If I can restate that, everybody's concerned that this AI is going to take their job away. Why in the world should I train you, you being the AI, about my job? Why should I even do that?
Anthony Scriffignano: Why do you assume that I put them at ease? But without being facetious: part of this is addressing the elephant in the room. If a massive change like this comes about, you don't just put your head down and hope nobody will notice. You bring people together, you talk to them, you say out loud what they might be thinking.
And in some cases, the reality is: yeah, you know what? This is going to affect the types of people we need, just as we don't need people to add up lists of numbers anymore. So let's talk about how this frees up your time to do something we really need to do and can't afford to hire somebody else to do right now. Let's talk about how we can bigify your job. Let's talk about how we can address some of the known unmet needs of our customers, or some of the unknown unmet needs, like Inderpal was talking about with tying the evolution of the technology to the evolution of the product. There's some fantastic opportunity here.
If we don't talk to the team, if we hope the team won't notice, or if we hope the team will just hunker down and work harder and faster, I can pretty much guarantee you're going to lose the best people, and the ones who remain will develop a culture you're not going to be very happy about if you don't pay attention to it.
Michael Krigsman: Chris Petersen has a question. And Inderpal, I'll address this one to you. Chris Petersen says: "Organizations that already have matured business intelligence, analytics, data governance, and so on, seem to be many hurdles ahead in the GenAI race. For organizations that haven't matured these prerequisites, is there any chance that they can catch up?" Rather than be hopeless, Inderpal, what should they do?
Inderpal Bhandari: If you're going to do AI, you have to have good data. That's a hard requirement, right? Garbage in, garbage out. So if you haven't taken the step of getting your data fit for purpose, you'll have to do that simultaneously while you embark on the AI journey.
That's a necessary step. It's one of those four pillars I talked about, right? I think your sense is right: if you've already done that, it also means you'll have made some changes to your organization's culture, your business processes will have changed, et cetera, so you're ahead in that department as well.
But if you haven't done anything like that at all (and I suppose there are some organizations that haven't, because they were so secure in their businesses), the only way to catch up is to do it simultaneously now, because you don't want to spend the time doing it in sequence, first getting the data ready and then the technology. But I'll go back to what I said earlier about riding the wave and how you do it. The very first, completely necessary condition is that you align your business to the technology, and that's a strategic decision about the AI strategy you're going to use.
And once you do that, it'll narrow the focus greatly onto what you truly need to do to move the business forward, and then you do it simultaneously, both the data and the AI.
Michael Krigsman: Could I correctly paraphrase you by saying, if you've taken your eye off the ball on your business, you better just catch up and you better do it?
Inderpal Bhandari: The reason I hesitate to just go with that, Michael, is that I think there are probably more companies today that will struggle with the alignment I spoke about than with jumping in and doing things with the data. I think there will be people who now understand: we do need to improve the data, we do need to bring in AI. But that first step, the alignment, I don't think is going to be that easy.
In most cases, I think it could be quite subtle how you go about doing that, and that's going to be your test of leadership. It's not simply a matter of, okay, I'm going to appoint a chief AI officer and have them report into the executive committee. That might be a start. But if it's just meant as, he or she is going to figure it out, it's not going to work.
Anthony Scriffignano: One of the most critical components of change leadership (and I'm not saying change management, I'm saying change leadership) is creating a credible vision of what it will look like when you've implemented the change, embracing that change when you bring it about, and using it to bring on the next change in the organization. And you can't outsource oversight. You can't just hire people to go do that for you. You have to bring it with you when you come into the room.
Michael Krigsman: Arsalan Khan comes back, and he says: “Sometimes it feels like AI is everything everywhere, all at once. Are there practical steps businesses can take? For example, are there enterprise-architecture-type levels of analysis to determine the progress of AI in organizations?”
Inderpal Bhandari: There are obviously ways to judge readiness: data readiness, AI readiness in terms of technology, AI readiness in terms of the preparedness of employees (are they educated enough, do they understand enough, and so on). But I'll go back to my earlier comment. I think if a business really goes after that alignment, there will be only one or two things it needs to do.
They're not going to have to boil the ocean. It won't be everything everywhere all at once. It'll be just one or two things that they really need to focus on to take their business to the next level and reinvent it. And the price to pay on that is the C-suite has to really knuckle down and get that done.
Anthony Scriffignano: Be very careful about confirmation bias when you're measuring: if you think you're smart and doing a great job, you'll find the data that says you're smart and doing a great job. You've got to make sure you're measuring the right stuff, and you've got to make sure you're open to what you don't see.
Michael Krigsman: But, Anthony, we human beings like to be told we're doing a good job, and we like to seek things out that confirm that.
Anthony Scriffignano: Yes, we do, and yes, we are. And that is often the first chapter in the book of failure. So we have to be careful.
Inderpal Bhandari: That's why you need friends like Anthony, so you can bounce things off them.
Michael Krigsman: Hue Hoang comes back and says: “On behalf of a colleague named Joy, for those new to AI, how and where would you encourage them to start their journey to learn?”
Anthony Scriffignano: One of the best pieces of advice I ever got was from someone in my organization who was a runner: “The most difficult thing about going for a run is putting on your sneakers.” It will never be easier than today to start this journey.
So it's very easy to get overwhelmed by how much there is to learn and how much you think you don't know.
First, you'll be surprised at how much you do know. And second, there will be more you don't know tomorrow if you don't start today. So I can't stress enough: pick a horse. Find something that resonates with you, whether it's online courseware, a certificate program, or a mentor.
But don't just think about thinking about getting ready to start, getting ready to go to the dance. You've got to start taking some steps.
Michael Krigsman: We have a question from Lisbeth Shaw. She says: "Currently, executives are applying the 1990s concept of return on investment when thinking about investing in AI initiatives and projects, especially beyond proofs of concept to scaling. How should business leaders be thinking about value and investment when it comes to AI and GenAI?"
Inderpal Bhandari: Businesses are going to think about ROI. That's the way business is structured today; it's all about return. You want to drive the share price up; you want the return on investment. So in a sense, you can't really fault them for going in that direction.
I think the deeper question is: given disruptive technologies like the ones we have today, which are going to change the nature of work, the nature of companies, et cetera, where should things evolve so that you have metrics that go beyond ROI? But just as a practical matter, if a CEO walks into the board and says, hey, I'm now going to think about how to improve the world instead of my share price, I don't think it'll be a long discussion.
Michael Krigsman: Anthony, it looks like you're going to have the last word here. Thoughts on investment and ROI when it comes to GenAI?
Anthony Scriffignano: We need to think about the cost of not doing anything as well. It's very easy to take the cost of the investment, the investment horizon, and the expected return, do the math, compare it to some hurdle rate, and move on. The cost of not taking some of these initiatives is much harder to measure. How do you measure marginalization? How do you measure your competitive moat getting smaller? How do you measure becoming increasingly irrelevant? Those are not things we have established calculations for.
So you're absolutely right about 1990s thinking around investment. I would say we need to think a lot more about opportunity cost, marginalization, and those sorts of soft things that are hard to measure.
Inderpal Bhandari: No decision is a decision.
Michael Krigsman: With that, we are out of time. This has been a fascinating discussion; it went by very quickly. I want to say an enormous thank you to Inderpal Bhandari and Anthony Scriffignano. Gentlemen, thank you both so much for taking the time to be with us.
Anthony Scriffignano: Thank you. Thank you, Michael. It's a pleasure.
Michael Krigsman: And thank you to the audience. You guys are awesome, especially you folks who ask such thoughtful and incisive questions.
Before you go, please subscribe to our newsletter. Subscribe to our YouTube channel. Next week, there's no show. Two weeks from today, we have two of the world's experts on misinformation and disinformation on social media, and AI plays a role in that for sure.
Everybody, thank you for watching. I hope you have a great day, and we'll see you again next time. Take care.
Published Date: Jun 14, 2024
Author: Michael Krigsman
Episode ID: 843