Why Enterprise AI Fails and How to Fix It

Discover why AI projects fail and learn actionable strategies for success. Tech executive and author Sol Rashidi reveals common pitfalls and offers practical advice for implementing AI in your business, in CXOTalk episode 840.


Artificial intelligence holds immense promise for businesses, but real-world deployments often fall short of expectations. In Episode 840 of CXOTalk, we explore the common reasons why AI projects fail and discuss practical strategies for achieving success. Our guest is Sol Rashidi, author of "Your AI Survival Guide" and a seasoned technology executive who draws on her experience leading data and analytics initiatives at companies like Sony Music and Estée Lauder.

Join us as we discuss the critical factors that determine AI success, from establishing clear objectives and ensuring data quality to navigating the complex and occasionally daunting landscape of AI. This conversation provides valuable guidance and advice for business and technology leaders who are looking to navigate the complexities of AI implementation.

Episode Highlights

Establish a Clear AI Strategy

  • Define whether your organization aims to be AI-centric or to embed AI across specific workflows. This decision will guide the overall direction and resource allocation for AI initiatives.
  • Formulate a comprehensive plan that includes selecting use cases, forming a dedicated team, and designing and deploying AI projects. Avoid common pitfalls by ensuring alignment with business goals and technical capabilities.

Consider Projects’ Criticality and Complexity

  • Prioritize AI projects based on their criticality and complexity rather than just business value, to avoid internal conflicts and ensure objective decision-making. This approach helps in selecting projects that are truly feasible and impactful.
  • Consider factors such as competitive threats, regulatory requirements, and market consolidation when evaluating AI projects.

Engage Leadership and Manage Change

  • Secure top-down support for AI initiatives to overcome resistance and ensure alignment with strategic goals. Leadership involvement is crucial for driving AI adoption and managing organizational change.
  • Communicate the benefits and realistic expectations of AI projects to all stakeholders. This helps manage fears and misconceptions about AI and fosters a culture of collaboration and innovation.

Leverage Existing Data Investments

  • Maximize the use of existing enterprise data before seeking new data sources. Many organizations actually underuse their current data assets, which can be a rich source for AI applications.
  • Focus on connecting and integrating existing data to uncover new insights and drive value. This approach is cost-effective and accelerates the deployment of AI solutions.

Avoid the “Shiny Object” Syndrome

  • Be cautious of investing in AI projects just because they are trendy. Ensure that the chosen AI solutions are appropriate and necessary for your business needs.
  • Start small with AI projects to test their viability and gain buy-in before scaling up. This helps to ensure that resources are not wasted on unproven technologies.

Critical Takeaways

Rethink Use-Case Selection for AI Projects

Choosing AI use cases based solely on business value is a flawed approach, according to Sol Rashidi. Instead, she advises evaluating use cases based on criticality and complexity, ensuring that projects align with the organization's current capabilities and infrastructure. This method helps avoid the common pitfall of projects getting stuck in "pilot purgatory," and increases the likelihood of successful deployment and scaling.

The Importance of Continuous Monitoring in AI Deployments

AI projects differ from traditional IT projects because they require ongoing attention and adaptation. Rashidi describes AI as a "live wire" that needs continuous monitoring and human involvement to ensure data accuracy. Company leaders must plan for this ongoing involvement to maintain the effectiveness and reliability of AI applications, post-deployment.

Overcoming Organizational Resistance to AI

Rashidi highlights the significant resistance AI projects can face within organizations due to fear of redundancy and job loss. She suggests that successful AI implementations require strong top-down leadership and clear communication about the benefits of AI. Engaging stakeholders early and addressing their concerns can help mitigate resistance and create a culture that embraces technological change.

Episode Participants

Sol Rashidi currently holds 7 patents, with 21 filed in the Data & Analytics space, and is a keynote speaker at several technology conferences, speaking on topics such as Machine Learning, Data & Analytics, and emerging operating models for organizations taking on transformations in the D&A space. Prior to joining Estée Lauder as their Chief Analytics Officer, Sol was the Chief Data & Analytics Officer for Merck, EVP and CDO for Sony Music, and Chief Data & Cognitive Officer for Royal Caribbean.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Welcome to episode 840 of CXOTalk. I'm Michael Krigsman, your host. On CXOTalk, we explore stories at the intersection of AI, leadership, and technology. Today, we're discussing what makes AI projects succeed or fail. Our guest is Sol Rashidi. Sol has been Chief Analytics Officer at Estée Lauder, Chief Data and Analytics Officer at Merck, Chief Digital Officer at Sony Music, and Chief Data and Cognitive Officer at Royal Caribbean. She's just written a new book. I could go on and on, but I will simply say it's great to see you. Welcome back to CXOTalk.

Sol Rashidi: Thank you. Thank you. You and I are not strangers, Michael. This is the third or fourth talk that we've done together, so I'm happy to be back.

Michael Krigsman: So, you've just written a new book. Tell us about your book.

Sol Rashidi: It's called Your AI Survival Guide: Scraped Knees, Bruised Elbows, and Lessons Learned from Real-World AI Deployments. I call myself a recovering C-suite executive from the enterprise.

Michael Krigsman: When we talk about AI and these AI projects, and you've seen so many data analytics and AI initiatives, when we talk about AI success or AI failure, what actually do we mean?

Sol Rashidi: The standard flow is, let's establish an AI strategy. Do we want to be AI-centric, or do we just want to embed AI across a few different workflows? Let's establish use cases. Let's pick those use cases.

Let's form a team, a committee. Let's conceptualize, design, and deploy. Not much has changed. And that sounds easy, but there are so many mistakes you can make in the strategy, and so many mistakes in use-case selection. There are so many mistakes that, honestly, I call it perpetual purgatory. You can select a use case.

It's great for a proof of concept (POC), but because of existing maturity levels, you'll never be able to scale it to production. And companies unfortunately don't know which POCs can go past that stage, which use cases are the best to pick. So the book outlines for you, "Don't make these mistakes; if this, then this; if this, then that." And so, it's a really good way of helping you understand how to start.

How do you even draft a strategy, what questions to ask, who to bring onto the team, who to keep off the team. Don't ever pick a use case based on business value alone, which goes against the grain, because everyone picks use cases on business value. It doesn't work with AI. So, it kind of walks you through everything.

Michael Krigsman: Why is that? Because what you're describing, selecting a use case based on business value, is the conventional wisdom. When guests come on CXOTalk, that's what everybody says: find the highest value. So, what's wrong with that?

Sol Rashidi: If you have a problem in manufacturing that needs to be solved, there is inherent value in there. If you have a problem within supply chain, there is inherent value in there. If you need to fix the problem across procurement or across inventory management, or across forecasting and projections, or across safety measurements, with all due respect, it doesn't matter the function.

All of those business problems have inherent value and deserve to be fixed. So, if you're going off business value, what you're going to get is top-line growth, or efficiency and productivity, or increased safety. They all inherently have value. And yes, you can get into a game of saying, "Well, this has more value than that," but does it? Because you're not telling the head of HR, "Sorry, your business problem isn't as important as the business problem we have in marketing." Or, "Sorry, marketing,

your business problem isn't as important as the business problem we have in inventory." You're creating an environment where you're prioritizing what is and is not important, when in actuality they're all important, maybe to different degrees. But you're not in a position with AI applications to go, "Your business is more important than your business." And so, as a leader, I don't ever want to get into the situation where I'm the tiebreaker, because automatically I build bad blood in those relationships by deciding what's of business value and what's not.

So, one of the frameworks in the book, and this was something I invented at IBM when I was releasing Watson, is criticality versus complexity. It's not about business value per se. You have to ask, "Is there a competitive threat that fundamentally is threatening your position in the marketplace?" "Is there a regulation or fine that's expediting your need to pay attention to this business problem?" "Is the market consolidating?" There are other questions to be asked and answered, yes or no. And then business value is the last one: "What is the percentage increase in market share, or what would be the percentage capacity savings and productivity gain?"

Each of those five criticality questions gets a score and a weight. And then you go into complexity. Everyone likes to talk about, "You can't do AI with just data." That is correct. It's not just data; it's also infrastructure, and it's also time. And you have to understand where you are in the maturity curve.

So you know which use cases you realistically have the ability to deploy and push to production. Because we're not doing these things just for show. There's an environment that will foster an AI application going into production at full-blown scale, versus those that will stay stuck in POC. Almost anything can be done in a POC, but scaling it, that's a different ballgame.

So, from there, it's a series of questions around the state of your data, knowing that data is never going to be perfect. It's never of great hygiene, but not all data is created equal. So, the book talks to you about the different budgets and tiers, the state of your infrastructure, and the state of your talent. And the goal is to plot these things across criticality and complexity. Things that are very critical and low in complexity: deploy those. Those are your use cases, because you have the likelihood of being able to push them into production. But things that are highly complex and highly critical are going to require time. It's not going to be a 3- or 4-month POC. Those things are going to require 14, 15 months, because you have to put the infrastructure in place, build the data factory, or fix the IT issues.

So, it's a more objective way of approaching it, visually demonstrating why you should pick some use cases versus others, without having to be the tiebreaker on business value.
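To make that scoring exercise concrete, here is a minimal sketch in Python of how a criticality-versus-complexity assessment might be implemented. The questions mirror the ones Rashidi lists above, but the weights, the 0-to-5 answer scales, the 2.5 threshold, and all names are illustrative assumptions rather than the exact rubric from her book.

```python
from dataclasses import dataclass

# Illustrative weights for the criticality questions described above;
# business value is deliberately scored last, not first.
CRITICALITY_WEIGHTS = {
    "competitive_threat": 0.30,    # Is a competitor threatening our market position?
    "regulatory_pressure": 0.25,   # Is a regulation or fine forcing the issue?
    "market_consolidation": 0.20,  # Is the market consolidating around this capability?
    "strategic_alignment": 0.15,   # Does it map to a stated corporate priority?
    "business_value": 0.10,        # Expected lift in market share, capacity, productivity.
}

# Illustrative weights for the complexity factors: data, infrastructure, talent.
COMPLEXITY_WEIGHTS = {
    "data_gap": 0.40,            # How far is the data from "good-ish"?
    "infrastructure_gap": 0.35,  # How much platform work is needed to scale?
    "talent_gap": 0.25,          # Do we have the skills in-house?
}

@dataclass
class UseCase:
    name: str
    criticality: dict  # each answer scored 0 (no) to 5 (strong yes)
    complexity: dict   # each gap scored 0 (none) to 5 (severe)

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine the individual answers into a single weighted score."""
    return sum(scores[question] * weight for question, weight in weights.items())

def quadrant(use_case: UseCase, threshold: float = 2.5) -> str:
    """Place a use case in the criticality-versus-complexity grid."""
    crit = weighted_score(use_case.criticality, CRITICALITY_WEIGHTS)
    comp = weighted_score(use_case.complexity, COMPLEXITY_WEIGHTS)
    if crit >= threshold and comp < threshold:
        return "deploy first: high criticality, low complexity"
    if crit >= threshold:
        return "plan a longer roadmap: high criticality, high complexity"
    if comp < threshold:
        return "optional: low criticality, low complexity"
    return "non-starter: low criticality, high complexity"

if __name__ == "__main__":
    demo = UseCase(
        name="demand forecasting",
        criticality={"competitive_threat": 4, "regulatory_pressure": 1,
                     "market_consolidation": 3, "strategic_alignment": 4,
                     "business_value": 5},
        complexity={"data_gap": 2, "infrastructure_gap": 1, "talent_gap": 2},
    )
    print(demo.name, "->", quadrant(demo))
```

Plotting each use case's two scores on a simple two-by-two grid then produces the visual Rashidi describes presenting to stakeholders, so no one has to act as the business-value tiebreaker.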

Michael Krigsman: So for most organizations, as business and technology leaders are choosing a use case, business value, which in many instances is treated as the primary selection metric, is in fact just one of a number of factors that need to be considered, assuming your end goal is ultimately to have a meaningful impact and to scale.

Sol Rashidi: The name of the game for being AI-centric or scaling AI applications is understanding what you can technically scale based on the existing capabilities at hand. You know, there's even a chapter in the book, Bend It, But Don't Break It. And that applies to the use cases. Stretch. Come up with stretch goals and come up with stretch ideas.

But your goal is not to spend millions of dollars and waste a ton of hours of high-value talent, only to keep something in a POC for six months and then not be able to push it to production. So you have to take into account your ability to push it into production and what that's going to take.

And not all companies are strong in all areas. So it's what I say: have the discussions around the art of the possible, but your deployment strategy has to be the art of the practical. And people tend to forget that.

Michael Krigsman: You mentioned the term pilot purgatory. Tell us about that. What do you mean by that?

Sol Rashidi: I have had a lot of experiments, projects, and programs that have died in POCs. And it is perpetual POC purgatory. It's where you have a concept and an idea, and you want to do a proof of concept around it to see if it's viable and if it's something worth putting the science behind. My teams and I, and I'm so proud of the teams I've managed, have put things into POCs where we said, "Yes, 100%, this is amazing," only to find that it actually doesn't work for us, and it stays in POC. It never sees the light of day. So, I say I have nearly 40, but I have done nearly 90. You have a lot of failures to have a few successes. And not everything is going to make it into production, but at least picking the right use cases that have a likelihood of being successful in production is your first step. And then you take it from there.

Michael Krigsman: Which goes back to what you were describing earlier: looking at the right set of factors before you start a pilot or a proof of concept, so that there's a reasonable likelihood you can actually put it into production.

Sol Rashidi: 100%. 100%.

Michael Krigsman: Subscribe to our newsletter. Subscribe to our YouTube channel so you can stay up to date on live shows.

How are AI projects different from any traditional technology implementation or project?

Sol Rashidi: The upfront work that has to go into it is different. What I call the back-office work is the same; it's the front-office work that needs to change. So, whether you're doing a data transformation or a data project or business process optimization, you have to get the knowledge workers, the people that know the data, that know the processes, that have subject matter expertise in that area.

You have to understand the current state and where the desired state is. You need to be able to measure and benchmark where you are right now versus where you want to be. And then, you put the task force together, you get the funding needed, and you deploy. And then, you monitor, like that's classic. But there are a few nuances with AI applications.

With the previous scenario, you still fundamentally have some resistance from certain knowledge workers if it threatens their area. That's normal. You can get over the hurdles, but the resistance with data or business process optimization is not nearly as much as the resistance with AI. So, there's an element where it has to be a little bit top down, which is what enterprises are used to doing or prefer doing.

But it's not a grassroots effort whatsoever. Now, you have different functions and individuals who are futurists and want to go rogue and do their own thing, but they can't. Fundamentally, AI has to be a top-down initiative because there are a lot more people who don't understand it than do. And so, there's greater resistance because there's greater fear.

So, leadership has to be really involved. I think the second is, it's not something that you should do or can do just because the board of directors said, "Could we use AI?" There is a lot of time, energy, effort, focus, and funding that goes into it. So, you fundamentally have to have legitimate business reasons to deploy it, because the cost of capital is really high to deploy something like that.

And so, when I hear things like, "Well, the board told us to," it's like, "Oh boy." But the other thing, and I think this is a trap I've seen countless companies fall into, is you have a business problem, and not everything is meant to be solved by AI. Why not use a hammer to solve the problem instead of a sledgehammer? AI is not meant to solve every single problem. So, you have to apply a level of discernment that's greater than with normal projects.

Because right now, AI is very hot, and everyone wants to say they are an AI-based company, whether or not the tool or the solution is appropriate; in many cases, it's overused. So, first, leadership engagement: it has to be a little bit top-down driven. Second, AI is not meant for everything, even though people want to force it and shoehorn it in. Third, what ends up happening is that, at the very end of it, there is fundamentally a shift that has to happen in the organizational model. Unlike software projects, data projects, and business process optimization projects, where, when you're done, for the most part, that piece of code that got pushed to production is relatively static.

If I'm going to build a button on an app, I write the code, containerize it, push it into production. That button exists. If I'm going to build a semantic layer that's going to feed a series of dashboards and reports, I need to build the data pipeline and I need to monitor it. The data is going to go through the data architecture and the transformations in the semantic layer.

That data will always appear for that report. That's not how AI applications work. It's a live wire. You're only as good as your last data, which means that if a job fails, or if you're not pulling in the right data sets from the appropriate data sources, the content that you're providing internally or externally immediately becomes stale.

And so, there's an element of human in the loop, and that feedback loop needs to be consistently monitored and discussed, because it's a live wire. And folks aren't really looking at that. If something feels off to a knowledge worker, they've got to go back and say something feels off, and then you get to understand what happened along the chain, because it could be the source system or the third party or the API. It's not that they changed their format and structure.

They introduced new information that the model wasn't trained on. And so, now you've got to go through retuning; the tuning of the algorithm dictates that. And keeping a watchful eye and having a human in the loop on the output is very taxing. And most people aren't accounting for that.

Michael Krigsman: So, the approach and the mindset. Because of the nature of AI, the reliance on data and the impact of that data on the results is so different from traditional programming that it requires a change in how both the technology folks and the business folks relate to these projects. Is that an accurate way of saying it?

Sol Rashidi: Yeah, I would say so. Because, like, all projects have infosec concerns. You cross those hurdles, you can do the same with the AI projects. There's still a task force that needs to come into play. But those are the unique nuances. This is not something you push to production, and then you just passively monitor. It's a live wire.

Sol Rashidi: You need to have active participation in it for sure.

Michael Krigsman: There's a tweet chat taking place. Ask your questions on Twitter using the hashtag #CXOTalk. If you're watching on LinkedIn, pop your questions into the LinkedIn chat. And what greater opportunity can you have than to ask Sol Rashidi pretty much whatever you want? So take advantage of it.

Arsalan Khan says, "How do you make sure that in AI, we are not just always going for the lowest hanging fruit? If you go AI big bang, wouldn't that create disruption that will ripple across departments? And how do you manage this? And who should be held responsible?"

Sol Rashidi: One, I always say, "Think big, start small, and then decide if you want to scale quickly." There is a muscle that has to be developed internally, period. And there's an appetite for this at most companies and in the marketplace in general, especially in our space, right? Every two years there's a new invention.

And it doesn't matter if it's the big data ecosystem, blockchain, Web 3.0, the metaverse; there are just new buzzwords that come every two years. And so, we as executives, and the boards of directors, go, "Okay, we've got to jump on that." And there's an element of FOMO that takes place. I think AI, fundamentally, is different.

I think we may get fatigued with the term, meaning we may not use AI as a term everywhere we go, but it's fundamentally going to penetrate and immerse itself within different organizations, for sure. And there are just going to be new ways of doing things, just like right now we can use different apps, we can use a variety of different tools.

We don't even think about whether any of it has embedded robotics or automation or AI. We adapt so well that it just becomes normal after a while. So, I think AI applications are going to become normal after a while, whether we actively call them that or not. But for organizations to start that process: think big.

Yes. What are the areas that need a transformation and could be transformed? I think AI could be the accelerator to finally do something about the things that people haven't addressed for a long time, you know, the corporate backlogs that developed because there wasn't enough time, there weren't enough individuals, there wasn't enough budget.

You've got to make sure that your organization is not falling for the shiny toy syndrome and putting a ton of investment and time and energy into something it's not going to use anyway. It's like paying half a million dollars for a piece of software; at the end of the day, you've bought it, it's in your tech stack, but there's no usage or adoption.

And it took a ton of time and energy to deploy. You just don't want that to happen. So, start small, see if there's buy-in, and then ringfence funds, team, talent, and funding for at least three years across several functions. Because what also ends up happening, as we all know, is economic factors change, or people move toward other strategic priorities.

So, you also want to be able to ringfence enough funding to last you for several years so that you can actually finish what you started. Thinking big, starting small, and then scaling quickly, I think that's a good way to start. The second thing, getting to that answer: your POCs need to be done quickly.

They can't be nine months, ten months long. You lose the attention span of most of the executives, so you've got to pick something and make sure it builds enough momentum with them, like, "Okay, this is valuable. Let's move forward." Because if it takes ten, 11, 12 months, you've created a project that just drags on, and their attention span has gone somewhere else.

Michael Krigsman: It sounds like, in many instances, the hype around AI becomes its downfall, because, as you said, people are chasing the latest shiny object when it doesn't fit.

Sol Rashidi: When it doesn't fit, exactly. And there's a lot of snake oil out there, like with anything, right? There's really good marketing, and you can't tell the difference between fact and fiction. Luckily, there's a lot of great stuff out there, but there's also a lot of bogus stuff, too, where I'm like, "Well, that's just a SQL statement running a massive, big decision tree right there.

Like, if this, then that. That's not AI." But I think most of us have enough knowledge and education and a discerning eye that if we ask enough questions, we'll eventually be able to understand if it is or if it isn't. And I always categorize AI into three buckets when I'm in front of a board or when I'm doing a presentation.

I do need to set expectations because I'm more of a realist, and I'm very transparent. I'll sell the story and the vision and the mission, but I'm also very reality-based. And I say that nothing we are going to do is artificial. Yes, we call it artificial intelligence, but I'm like, "Nothing we're going to do is artificial."

There's automated intelligence, there's augmented intelligence, and there's anticipatory intelligence. Automated is around automating tasks. But, you know, what's the difference between automation with RPA and automation with AI? Well, with RPA, you fundamentally have to write the scripts and give it the instructions. The intention with automation within AI is that, over time, there will be embedded intelligence, and so it's going to be able to make decisions on its own.

Not just based on the pre-trained rules that we provided. Then there's augmented intelligence: that knowledge store where you can do multiple multimodal interactions, or you can retrieve information much quicker so you can better serve clients and customers who are calling the 1-800 number or customer support. And then there's anticipatory, which is next-best-action recommendation engines and predictive models, based not off of a static and finite data set, but off of multivariable data sets that are constantly changing.

But the model can adapt, too. So, in that conversation, I always set the expectation: "This is the AI we're going for. It's more automated than it is anything else, but nothing's artificial." And then, in my conversations, to make sure no one is disappointed about what we're going to produce: there is a high expectation that machines need to get it perfect.

And if they don't, then people question accuracy. I always measure the human error rate so that I can measure the machine error rate against it. We don't like to talk about it; we don't measure human error rate. But you can only have expectations of perfection if we're already performing perfectly. So, I will always establish a benchmark: say our data entry accuracy is at 82%.

With the current process, I think we can get to 96% accuracy, reducing the error rate by a few percentage points, if we were to automate tasks A, B, and C. If we don't meet that threshold, this does not go into production. If we meet that threshold, it's better than the current error rate, and we should go into production. That's a conversation that isn't often had, but it's a conversation I always have.
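For readers who want to see that benchmark logic written down, here is a minimal sketch of the go/no-go check, using the 82% human baseline and 96% target from the example above. The function and variable names are illustrative, not taken from the book.

```python
def production_go_no_go(human_accuracy: float,
                        machine_accuracy: float,
                        target_accuracy: float) -> bool:
    """Promote an automated task to production only if it meets the agreed
    target threshold and beats the measured human baseline."""
    return machine_accuracy >= target_accuracy and machine_accuracy > human_accuracy

# Figures from the conversation above (illustrative).
human_baseline = 0.82   # measured accuracy of the current manual data entry process
target = 0.96           # threshold agreed with stakeholders before the POC starts
poc_result = 0.94       # accuracy measured during the proof of concept

if production_go_no_go(human_baseline, poc_result, target):
    print("Meets the agreed threshold: push to production.")
else:
    print("Below threshold: stay in POC, retune, or revisit the use case.")
```

The point is that the threshold is negotiated before the POC starts, so the production decision becomes a comparison against an agreed benchmark rather than an expectation of perfection.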

Michael Krigsman: We have an interesting question from LinkedIn. This is from Greg Walters, who finds the static versus live wire discussion very interesting: static data versus active, ongoing, flowing data. He asks, "Do you think this is the biggest obstacle when looking to move AI into a process?" He has a follow-on question, but let's address this one first.

Sol Rashidi: And I always refer to it as static code versus dynamic code. The easiest analogy is software development versus AI development, because in AI, you're constantly having to retune as you get new data sets and more anomalies. So, there's an element of babysitting, for lack of a better term, that needs to go into the AI applications you're scaling, and I don't think most people are talking about it yet, because 2023 was all about kicking the tires and ideation, and 2024 is about POCs, or finishing POCs.

The pivot from 2024 to 2025 is going to be pushing things into production. And how do we build an organizational model that can continuously babysit the output, the results from our AI capabilities, to make sure the context is right, the situation is right, the circumstance is right? Because you're feeding it new information constantly, with anomalies that haven't been detected before.

I call it a live wire because it's not one and done. There's never a finish to AI projects. You know, sometimes you can build a mobile app and you're like, "Oh my God, that was intense. Woo! Let's have a celebration. We're done." You can't do that. Not with AI applications. There's always going to be an element of babysitting.

It's kind of like having children. You think they graduated high school? "Oh, I'm done. They're off the paycheck." They're not. They come back to you in college. Oh, and guess what? They graduate college. The older they get, the more complex the problems are. They're your children. They're always going to be around, and their issues are always going to be your issues.

Michael Krigsman: So, you're really going back to the human in the loop that you were describing, but overlaying this need for a culture change, a mindset change, in the way that an organization relates to AI apps as opposed to traditional static software applications.

Sol Rashidi: And for all middle managers, practitioners, and executives, whether you work in an enterprise or a large company, we all kind of know it's never the technology that's been the hindrance for us. It's the change management, the thing we always talk about but always short-circuit or shortcut when we're actually doing these large transformational programs.

That's the reality of it. How do you upskill an existing talent base to understand this new application that's been rolled out? Then, how do you increase literacy to let folks know that jobs aren't going away, they're just going to shift and look a little bit different? How do you embed it within existing processes that are now being changed, and create a new org model to be able to support it?

How do you now change the incentive programs of both the executives and the team to make sure there's adoption? Because fundamentally, there's business value determined at the end of it, versus it being another tool in the tech stack that doesn't get used. These things need to be discussed, because this isn't standing up a data warehouse. It's a high cost. It's a high capital investment.

Michael Krigsman: How many business leaders and technology leaders relate to AI projects as if they are traditional software projects? They leave out some of these elements that you're describing and, as a result, have what they would then call a failure, because the project is not meeting their expectations.

Sol Rashidi: That's what I'm trying to get folks to avoid. Because there's a stat right now that 73% of AI projects are leaving companies underwhelmed. And there could be a series of reasons. They never finished it to the intent and the vision they had originally planned, they cut it short, they ran out of funding, or whatever it may be.

Or the expectations were never set up right. And so, everyone thought it was going to solve world hunger, only to find out, "No, we just automated a few steps, but we're not getting what we expected to get." But they never measured the human error rate so they could understand, or at least manage what I call the time in transition, that workflow.

There weren't enough statistics to do the before and after, so they could see the overall halo lift it created. There could be a variety of reasons for that stat, that over 70% of companies are underwhelmed. There are exceptions. There are some amazing companies that have done some really amazing work, and I don't want to discredit that. But for the majority, folks are still struggling, and I think a lot of it comes down to that.

So, as a shameless plug, some of this is what the book covers. I don't know everything. It's impossible to know everything. And I automatically raise an eyebrow when someone says they're an AI expert. I'm like, "Really?" I've been in the field for a really long time, and I feel dumber than ever right now because the pace of change is so fast, and I can't keep my finger on the ball.

So, you're an expert? I automatically raise an eyebrow. The intention of the book is, at least, "Don't go away not knowing what you don't know. Be informed." And I don't mean the high-level articles and white papers you're reading from the management consulting firms and on LinkedIn. There's grittiness in the book. There's a series of failures and why those failures happened. At least become aware of the assumptions that were made that didn't hold true and the mistakes that were made that didn't pan out, so that if you were going to make 19 mistakes in your project, you shrink it down to like three, and you still have a high likelihood of succeeding.

Michael Krigsman: So, Greg Walters had a second part to his question, and I asked you this in a different way earlier. He says it seems that you don't see a big difference between AI and old-fashioned IT projects. Is that correct? Is that a fair way of looking at it?

Sol Rashidi: No, no. I think we've established three clear distinctions. One, you're going to face higher resistance from the team because there's an element of fear, so it kind of has to be top-down. Two, because you're only as good as your last data set, it's a live wire; it's not something that's one and done and that you can finish. And three, you do have to change the organizational model to account for when something goes into production, because you always need a human in the loop in that feedback loop to make sure the output is aligning with the expected intention. Another aspect is measuring human error versus machine error, so you can set expectations upfront. So, there are definitely some distinguishing components.

Michael Krigsman: And we have another question from Twitter, again from Arsalan Khan. And again, I want to remind everybody, there's a tweet chat taking place. Ask your questions on Twitter using the hashtag #CXOTalk. If you're watching on LinkedIn, pop your questions into the chat. And oh, by the way, this would be an excellent time to subscribe to the CXOTalk newsletter.

You should do that. So, Arsalan Khan comes back and he says, "When you're creating AI use cases, won't the proposal's acceptance or rejection be highly dependent on the executives you are presenting to? What if the executives are just drinking the Kool-Aid of shiny objects without knowing the true implications of AI?"

Sol Rashidi: That is exactly why I never use business value as the marker in choosing use cases. Arsalan, you hit it. When we get together as a steering committee, or there's a task force where you're working with a management consulting firm, and you come up with nine, ten, 15 different ideas and use cases, you could go through the measurement of business value. But the goal is not to go in front of those executives from manufacturing, supply chain, finance, procurement, the divisions that own the P&L, and say, "Here are the use cases we're going to do and not going to do," because inevitably, you're going to make some stakeholders happy.

You're going to piss other ones off. That's not our job. My job is to let the company know what's realistically possible, in what timeframe, what it's going to take, and how amazing it's going to be at the end, while outlining the reality of getting there. So, when you actually follow that method of complexity versus criticality, it's a very objective way of measuring what we, as a company, can realistically do internally.

If we have true intentions of scaling this past POC. And then I actually provide the visuals, and I don't decide for the organization. I always present and go, "Here are our options. Now, in my opinion, it's in our interest to go after the things in the quadrant that are highly critical and low in complexity to deploy, because it means we actually have the maturity and the talent, and the infrastructure, and the data to be able to do so."

So, the lift in getting it done is minimal. It may not be your favorite use case, but as a first time in, we're developing those muscles. But I will always say, that's just an opinion. Then, there are things that are low in complexity and low in criticality. But if they're not critical, what's the point of even doing them?

I said, "I suggest we don't explore those things that are low criticality and high complexity." That's a nonstarter. But, there are things that are going to be very critical for us and complex for us to deploy because we don't have the maturity, and that's going to require us some planning. But, before we make the investments here, I say, "We be maniacally focused on getting a few POCs out the door within a 3-to-5-month timeframe."

And we start with high criticality, low complexity. It's just a suggestion. I actually let them tell me what they want to do. So, that is a very objective way of doing it. And you visualize it without having to play the tiebreaker and make the decision on behalf of the enterprise. But you're leading them down the path, because you don't want to take on a nine-month data cleansing project just so you can do a three-month POC.

Michael Krigsman: Do you see yourself as a Kool-Aid buster or a myth buster when it comes to giving accurate, truthful feedback to business and technology executives who have a mandate from the board, "Get this done"? You must be a lot of fun at parties.

Sol Rashidi: No, I'm not a Debbie Downer, and I'm very optimistic as an individual. And I will push boundaries. You know, there's part of a chapter in the book about what a rogue executive is, and I've never really fit in, because I've always pushed boundaries, I've always pushed people, and I've always challenged the status quo.

I enjoy that role. And then, from it, I've seen some great things happen. So, I never say no. I just want to understand the reason and the intent behind the why, so I know how to navigate it internally. And so, I think if you ask, you know, past CEOs and past peers, I do definitely think big for sure.

But, I'm not a futurist. I don't think ten years down the road. I think of where we should be in 2 to 3 years, and then I stay very, very focused towards that. I'm also very comfortable exchanging progress for popularity. I know in my role, I'm not going to always be well liked, but I do know how to get things done.

And so, I automatically gain a level of respect because I can steer people in the direction of, "This will have a higher chance of success because of the following reasons. We can try this other one, but I just want you to know that I can't guarantee, nor can I fix, what's variable." And so, I will always let them make the choice, but I will make sure that they are fully informed with the facts, and that I'm not giving them more snake oil.

And so, oddly enough, I actually build a lot more relationships that way because I'm not sugarcoating it. But I'm also kind of charming in the meetings, too, so it's like, "That's fine." I'm laughing and smiling and joking, but I'm very reality-based. The art of the possible, but our deployment is the art of the practical.

Michael Krigsman: Well, you are very charming. And I think people like being told the truth, because, as we know, in the corporate world so many of the harsh realities are sugarcoated. People know that by sugarcoating things, they won't piss off management and other colleagues. And the project or POC may take so long that by the time it's done, they'll be gone, so there's no accountability or responsibility. And so, I think people really appreciate being told the truth.

Sol Rashidi: At least with the business relationships I've made, 100%. I made that mistake in my first C-suite gig; I didn't do that. But thereafter, even now, when I get new positions, my references are always the business executives that I served, because my intention was always to steer them right. It makes no sense to steer them any other way.

Their success is my success, and vice versa. And so, if I fundamentally know something can't be done because someone wasn't transparent or truthful, or they're spinning their wheels or not being obvious, I have no issues in a one-on-one saying, "Listen, I know you heard this. I peeled back the layers of the onion. Here's what I discovered.

I don't think this is the right time to explore this opportunity because we have quite a bit of technical debt in area A, B, and C, and we have a dependency on those areas." I'll always be very factual. I'll never say anything that I'm making up. I will do my due diligence. If anything, I get into the weeds a little bit too much, and that annoys some people. But, I want to make sure I know before I give recommendations.

Michael Krigsman: Gia Tendria Jama Dar says, "What leadership qualities are most essential in leading AI products in the multi-dimensional growth of an organization?"

Sol Rashidi: You have to be okay with not succeeding. Whether you buy versus build, leverage off-the-shelf or build your own foundational models, there is an element where you're consistently going to be tripping over yourself, and things aren't going to work. Things aren't going to work as planned, or things are going to take a lot longer, or you're going to have members and parties that are going to be a bit more difficult to work with.

First and foremost, you have to be okay with these micro-failures because you're learning along the way. So, this is not an instant gratification type of thing. You're going to look back on it, two, three years, and go, "Oh my gosh, I did do that. This is amazing." But, when you're in it, it's not going to feel great.

I think the second aspect of it is, you know, you're going to have to make some key decisions along the way of, "Are you going to prioritize progress over popularity or popularity over progress? Are you going to prioritize collaboration versus control or control over collaboration?" Because you may have a vision, and you may have the task to lead the company down this path, but sometimes you're going to have to make the judgment call.

"Am I going to have to concede and compromise for the name of collaboration with this particular choice?" And in other meetings, you're going to have to say, "No, I need to make control. I need to maintain control." And while it's decision making, through consideration and not consensus, and you have to be okay with not being popular sometimes just because you are goal obsessed, and you have an outcome to produce, and you understand what it takes to go from A to Z, and you got to keep everyone on that path.

I think there's also an element of grittiness. You're not going to have all the answers. I didn't have all the answers. I also didn't believe everything every management consultant told me. I would just ask everyone and then form my own opinion; regurgitating what someone else told you can put you in a very bad position. So, you've got to apply your own critical thinking skills. I would say those are the three main ones, going in at least.

Michael Krigsman: And this is from Premkumar Shah. Prem says, "Is it advisable to allow a sort of Cambrian explosion from the ground up, or should organizations drive these projects from a centralized, top-down approach only?"

Sol Rashidi: Honestly, it depends on the culture of the company. And I know we hate that answer. The challenge with large transformational programs and projects, and advents like AI, is that in the old model, everyone did their own thing. That created a lot of technical debt, and you couldn't streamline processes. It was expensive because you were duplicating talent. So, being fragmented didn't work.

Then, we pivoted into being federated. The challenge in a federated model is, yes, some things are streamlined, but there's still some technical debt, and there's a heavy dependency on change management and communication to make sure that everyone is aligned on the capabilities that have been deployed across these federated verticals. And we don't have those disciplines in place. I don't know all enterprises, but for the most part, it's not there.

And I was the same. It's not a muscle we have. So, then, everyone pivoted to being centralized, where everything is managed at the core. The challenge with that is you don't have enough domain expertise, and it becomes too slow. So now things are in this pivot of, "What's the right model?" I personally prefer hub and spoke, and I'll tell you the reasons why. My goal is to enable the businesses, not disable them. The goal is to expedite what they want to do, not slow it down. And I don't need to have control over anything, but I do need to control our risk exposure. And I want to make sure they're making the right decisions and not buying into the snake oil.

So, the models that I've loved give them autonomy but give us line of sight. The businesses say, "I want to do X, Y, Z," and I've outlined a manifesto of sorts. It says, "We get to interview the vendor, the partner, ask a series of questions, make sure that if you're going to pay a premium for the AI price tag, it truly is AI and not just a marketing logo they put on top of it, and help inform you of what this thing really does behind the scenes versus the marketing tagline that sold you on it."

So, we will play the role of your internal consultants and advisors because we're in a position of expertise. If they choose to move forward, they go through their normal procurement process. But then we also establish standards and protocols in partnership with the IT department: What are the AI data security protocols? Okay, if you're going to feed enterprise data into this thing, we've got to leverage stochastic modeling so that in the rare case there's data leakage, our information is protected.

So, here are our infosec protocols, here are our security protocols, and at a minimum, you need to have the following individuals involved. The goal is to help set them up as successfully as possible. And then, third, if they choose to bring us into the fold, of course we'll be a part of the task force. Sometimes they don't; they just kind of want to go do their own thing, which is fine.

But we always have one person from our team on the task force, so we have line of sight into what's being deployed. I've been in organizations where I've had 11 individuals sitting across different functions and businesses on the AI task force. And then, if another function is looking at automating a process or looking at forecasting, whatever it may be, we'll look and say, "Oh, we've done that before."

We actually did it here. We've already got the vendor approved. It's proven to be successful. We'll just expand the existing scope to you so you don't have to go through the same things. That tends to work a little bit better, at least for now, based on my experience. The fragmented, federated, and centralized models carry too many disadvantages relative to their advantages. I don't think they're ideal. Definitely not centralized; it's hard, and you move too slowly.

Michael Krigsman: From Lisbeth Shaw, who says, "Yeah, getting back to pilot purgatory. What are the options for exiting pilot purgatory? Real life is going to mean that many use cases and pilots will not be chosen based on your five critical points." And very quickly, please.

Sol Rashidi: Once you're in POC, honestly, in my opinion, you can't control it. It's going to have its own lifeline. It's about controlling what goes into POC. I would estimate the project end to end, what it's going to cost to go into production, and ringfence that budget immediately, because you always tend to lose budget when you're in POC mode due to different variables or external economic factors.

Whatever it may be, go through the due diligence and understand, across infrastructure, data, and talent, whether you have what it takes right now to be able to push it into production. Not all data is created equal, and not all data is perfect; perfect data isn't a reality I've ever encountered. But you can tier your data sets into good-ish, okay-ish, and the rest. Don't pick use cases that depend on okay-ish data. Try to focus your efforts on use cases where the data is good-ish, so that when you get the outcomes, people aren't blaming the data, because you picked a use case that needs good-ish data the enterprise already has. That's what I would say: ringfence budget for pushing it into production, and pick good POCs knowing nothing's going to be perfect, so you have a higher likelihood of producing the results that you want.

Michael Krigsman: We have another question from Twitter, again from Arsalan Khan, who says, "When it comes to data for AI, do you propose to collect new data from all the different departments, or to use data in existing systems, for example, your ERP or CRM system?"

 

Sol Rashidi: I'm a big fan of leveraging investments that have already been made and leveraging enterprise data. There's so much untapped there. And the challenge within enterprises is that we collect data; we don't connect the data. So, there's always this thing of, "We need more data." But there is plenty of data. And there is this interesting stat that, at the end of the day, if you take a look at all the reports and dashboards and everything, we're leveraging only a fraction of the enterprise data that's been collected.

So, we've made these massive investments in warehouses and marts and databases and data lakes, and in doing the data architecture for the conformed layer and the semantic layer, and we're not even tapping into their full potential. So, I don't want to throw good money after bad. If we've already made the investments in putting together the data ecosystem, the happiest moments I've ever seen from the executives I serve are when we discover new things with existing data sets, or we've been able to leverage existing data sets. It just accelerates things a lot quicker.

Michael Krigsman: Another very interesting question from Twitter. This is from Wes Andrews, and I'll just tell everybody you have a few minutes left to get your questions in. Why would you not be asking questions? I mean, when else will you have a chance to ask Sol Rashidi pretty much whatever you want? So, on Twitter, use the hashtag #CXOTalk; on LinkedIn, pop your questions into the chat.

You have a few minutes left. Wes Andrews says, "Brilliant talk, Sol," and he looks forward to reading the book. "A key piece you highlight is the disarray, disruption, and change management aspect. Any big suggestions on how one strategically overcomes the inherent organizational tensions, especially in the cross-functional food fight over resources?"

Sol Rashidi: Part of my process of selecting your use case, the first question under complexity to deploy, is, "Who's your partner? Who's your stakeholder? Which function are you serving?" And the question is, does that leader have a reputation for being actively involved or passively involved?

"Are they a good partner? Would they be engaged? Are they going to give you time of day on their calendar?" Because this is something that's really, really important to them. So, you have to understand, there may be a ton of ideas, but you also have to partner with the right individuals, and you're going to get a lot of cross-functional arrows going left and right.

But part of it is to understand who's going to be a good partner for you, so that you have a higher chance of succeeding, because they need to be engaged. This isn't, "Here's my requirement; now go do it." It just doesn't fundamentally work that way. So, there's that: you must know who you need to partner with. The second aspect of it, I would say, is around that cross-functional warfare. It's so, so true.

Michael Krigsman: I love that, the cross-functional food fight for resources.

Sol Rashidi: It is so true. And here's the other thing. Part of the use case is also understanding how many strategic objectives do you have across the organization? And are any going to be deprioritized to fit this in, or is this now getting added to that list of strategic priorities? Because if you've already got nine strategic priorities at a corporate level, and each region, division, business, has like ten strategic priorities to align with one or many of those strategic priorities, this now gets slapped on top.

Guess what? We're tapping into the same resources. So, part of your discussion in that criticality versus complexity, there's another question. The second one, after stakeholder, happens to be, "Are you tapping into the same talent pool and workforce that all the other projects are, or can you do this independently, in autonomy?" And, "How many dependencies do you have on other functions?"

Because if you're going to have to wait for their schedules or their calendars to open up, you're going to create so much drag that it becomes a project killer before you even start. So, part of that complexity: I keep saying data, infrastructure, talent, but I want you to read the book to understand the other factors as well.

The first is the personality, the style, the reputation, and the respect of the stakeholder. The second is dependency on the workforce and the talent, and how to measure whether they're going to be available when you need them, so that you can deploy successfully.

Michael Krigsman: Sol, do you have a copy of the book that you can hold up for us?

Sol Rashidi: I do, I do.

Michael Krigsman: There it is: Your AI Survival Guide. Well, I have enjoyed reading your book, and I have enjoyed our conversation today. Unfortunately, we're out of time. So, thank you so much for coming back and being a guest again on CXOTalk.

Sol Rashidi: Thank you, Michael. This has been amazing, and I've known you for how many years now? I can't even count. And for all the listeners, thank you. You guys are so busy, and it's a Friday; we're all tired. So, Arsalan and everyone, thanks for joining and asking those great questions.

Michael Krigsman: I absolutely second the thank you to everybody who watched. Before you go, subscribe to our newsletter. Subscribe to our YouTube channel, so you can stay up to date on live shows. We really do have amazingly great shows coming up. Everybody, thank you so much for watching. I hope you have a good day, and we will see you again next time.

Published Date: May 17, 2024

Author: Michael Krigsman

Episode ID: 840