Mastering AI: Designing and Implementing Effective Data Strategies

Join industry experts Dr. Inderpal Bhandari and Dr. Anthony Scriffignano on CXOTalk Episode #793 as they discuss data management and AI integration in businesses, including data quality, AI methodologies, and strategies for aligning AI with business goals.


Welcome to Episode #793 of CXOTalk. This platform brings together industry leaders and experts to discuss emerging technologies and trends. Today’s episode features a conversation on data management and Artificial Intelligence (AI) with esteemed guests, Dr. Inderpal Bhandari and Dr. Anthony Scriffignano.

Dr. Inderpal Bhandari holds the position of Global Chief Data Officer at IBM. In his role, he is instrumental in optimizing data utilization within the organization, focusing on integrating AI technologies to align data strategy with business objectives.

Dr. Anthony Scriffignano, former Chief Data Scientist at Dun & Bradstreet, is currently a Distinguished Fellow at the Stimson Center, a think tank. He has an extensive background in AI, with a particular emphasis on data quality and its critical role in implementing viable AI solutions that are congruent with business requirements.

In this episode, the discussion explores using data in AI initiatives, the importance of data quality, and strategies to ensure that AI initiatives are constructive and aligned with business goals. The conversation also addresses the practical integration of AI in organizational operations, including technological considerations, human resources, and the cultivation of an enabling culture.

Specific topics include:

- The relationship between data and AI, and the importance of quality data in training AI
- Aligning data strategy with business objectives
- The four elements of AI adoption: data, technology, workflow, and culture
- Trust, privacy, explainability, and job displacement
- The costs of not implementing governance or data quality
- Differences between generative AI and more traditional AI projects

This insightful dialogue offers valuable perspectives for individuals seeking a deeper understanding of the relationship between data management and Artificial Intelligence in a large-scale corporate environment.

Transcript

Michael Krigsman: Today on Episode #793 of CXOTalk, we're speaking about data and AI. Our guests are Inderpal Bhandari, the global chief data officer of IBM, and Anthony Scriffignano, the former chief data scientist at Dun & Bradstreet. 

About Inderpal Bhandari

Inderpal, welcome to CXOTalk. It's great to see you. Please tell us about your work at IBM. 

Inderpal Bhandari: I'm actually a four-time chief data officer. When I first became a chief data officer in 2006, there were just four of us globally. I was the first in healthcare. Then the profession and the related professions, like chief analytics officer and chief transformation officer, took off, and I happened to be fortunate enough to ride with it. I've done this job four times, with IBM being the fourth and perhaps the most complicated.

At IBM, my data strategy has been to make IBM itself into an AI enterprise and then use that as a showcase for our clients and customers, because our clients look very much like us. That's what I've been doing for the last seven and a half years or so.

About Anthony Scriffignano

Michael Krigsman: Anthony Scriffignano, welcome back to CXOTalk. You're a good friend. It's great to see you. Tell us about your work these days. 

Anthony Scriffignano: Thank you very much, Michael. It's great to see both of you.

As you mentioned, I was with Dun & Bradstreet for quite a long time (over 20 years). Right now, I'm doing a number of things. Front and center is my role as a distinguished fellow with the Stimson Center, which is a think tank. "Think tank" I'll put in quotes because there's a lot of what I would call applied research or action research, where they get involved in doing things, not just writing about them.

I've been involved with things that are called AI. The term has been around probably since the '50s, but I've been involved with it as it's become computational from its birth. I know Inderpal has as well. 

There is a lot going on in the world right now in terms of regulatory focus on AI, as well as new types of AI becoming the latest shiny object that everyone pays attention to. I stand for the science behind it.

What do you have to believe? 

What has to be true in order for you to do that thing that you think is so cool? 

And why is it better than what you're doing today? 

And what is the cost of it?

I've tried to ask those emperor's new clothes kind of questions, and that's the role I'm playing right now.

The relationship between data and AI

Michael Krigsman: We're talking about data and AI. I think where we need to start is, when we talk about an AI data strategy, what actually is that? Inderpal, do you want to maybe take a crack at that to start? 

Inderpal Bhandari: AI is only as good as the data that is used to train that AI because AI has a training sequence and then an inference sequence. The training sequence has to do with seeing all kinds of related data so that it can then train itself to figure out what the right output is when it's shown an input that it may not have seen before. 
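To make those two sequences concrete, here is a minimal sketch in Python using scikit-learn; the library, the synthetic data, and the model choice are illustrative assumptions, not anything specified in the conversation.

```python
# A hypothetical sketch: a model is first *trained* on labeled historical
# data, then used for *inference* on inputs it has never seen before.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for historical business data (e.g., client attributes -> outcome).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=42)

# Training sequence: the model sees many related examples.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Inference sequence: the model figures out the right output for inputs
# it may not have seen before. Flawed training data degrades this step.
predictions = model.predict(X_new)
```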

Importance of quality data in training AI

If the data (to begin with) is flawed or low quality, the AI will not work effectively. It's that garbage-in, garbage-out phenomenon.

They go hand in hand. Very often, people talk about AI without having really looked at the data. If they embark on an AI strategy anyway, that is going to be very high risk. It will most likely fail, because they will have to go back and straighten out the data strategy first, just so that it's fit for purpose.

Now, when you say fit for purpose, what that means is knowing the business objective you're trying to serve. It could be something quite narrow, a specific objective, like: I want to understand which segments of my business I should try to expand to increase my top line.

In that case, if it's segments of the business, then data about your clients, your products, et cetera becomes very important. You'd want to make sure that data is of very high quality.

Aligning data strategy with business objectives

On the other hand, it could be something at a strategy level, which is kind of what happened when I joined IBM. IBM wanted to be a cloud and AI company. To be a cloud and AI company, eventually we landed at the point of: let's transform ourselves internally before we show this off to our clients and customers.

That became an enterprise-wide strategy. And we realized that, for instance, not only did we have to make sure our structured data was in order but also our unstructured data, because we were going to go after this and transform ourselves into an AI company.

There are two aspects there that are relevant. One is at the strategy level, when you're aligning to the business strategy; the other is narrower, aligning to a specific business objective.

How to align data strategy with business objectives

Michael Krigsman: Anthony, the challenge of aligning the data strategy to the business objectives is something that many organizations struggle with. What thoughts or advice do you have on making that work? You've seen so many different scenarios. 

Anthony Scriffignano: You would really have to unpack what Inderpal just said quite a bit to really get at the essence of it. And I did (when I was listening to him), but he was using some terminology very carefully there. 

A lot of times, organizations don't have one strategy: Make more money. Grow... fill in the blank. The things that we learn in business school – you can serve your shareholders, you can serve your customers, you can serve your employees – it's kind of hard to do all of those things at the same time because, very often, optimizing for one is less optimal for one of the others.

The strategy of which we speak, when we start to talk about AI, has some very serious implications for the methods that we're talking about. And I should say that these days it's rare that only one method gets applied. Very often, many methods are being applied simultaneously.

Importance of “organizational agreement” on AI / strategy alignment

There are some commonalities. One of the commonalities is that the quality of the data has many dimensions. 

Truth: If your AI is going to ingest data, it's going to probably presume it's all true. Well, all data is not necessarily simultaneously true. 

It may have been true at the time that it was created but maybe not so much anymore at the time that it's curated. So, how old is the data? Is it still true? How would you know that it's still true before you consume it into an algorithm or an approach that presumes that? 

I love to say that, when we go to court, we swear to tell the truth, the whole truth, and nothing but the truth. That's because those are three different things, and those are three different ways to manipulate veracity or understanding. 
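To make those data-age questions concrete, here is a minimal sketch of a freshness gate applied before ingestion; the record layout and the 90-day staleness threshold are hypothetical assumptions, not anything from the episode.

```python
# A hypothetical freshness gate: check how old each record is before
# consuming it into an algorithm that will presume it is still true.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # "too stale" is a business decision

records = [
    {"value": 101.5, "as_of": datetime(2023, 1, 5, tzinfo=timezone.utc)},
    {"value": 98.2, "as_of": datetime(2023, 6, 1, tzinfo=timezone.utc)},
]

now = datetime(2023, 6, 16, tzinfo=timezone.utc)
fresh = [r for r in records if now - r["as_of"] <= MAX_AGE]
stale = [r for r in records if now - r["as_of"] > MAX_AGE]
# Fresh records can be consumed; stale ones need re-verification,
# because "true when created" is not the same as "still true".
```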

When you get back to this concept of strategy, well, whose strategy? What part of the organization specifically? What objective? How would we know when we were successful? 

Asking those questions is very often a source of contention because the people in the room that all think they want the same thing realize they don't. And when you unpack it a little further, they realize that to get what the guy on the left wants, you have to get less of what the person on the right wants, and it's not pretty.

It's not really a technical problem as much as it is an alignment problem and getting everybody to agree on what they want so that we would know that this strategy of which we speak is actually what this AI of which we speak is delivering. It's really difficult.

CXO roles must be part of the AI / business alignment discussion

Inderpal Bhandari: These roles (like the chief data officer, the chief transformation officer), the reason these are CXO roles is because of what Anthony just said. You have to be part of that discussion. 

It's not so much like there is a concrete business objective. Sometimes you get into those situations where it's very clear-cut. But often, it's a strategic discussion in terms of understanding, clarifying, perhaps even adding to the business strategy; and then relating it back to what you are trying to do with data and AI. 

Unless you're in a position to have that kind of conversation (and you also have the wherewithal to pull that off), you won't really be successful. That's why these are CXO roles because it's really part of the negotiation that goes on to align the business strategy to the data or the AI strategy.

Anthony Scriffignano: Maybe I can just add a little bit to that. The whole concept of being in the room is so important. 

Back in the day, the goals and the objectives would come down from on high, and the folks with the keyboards would just make it so. It doesn't work that way anymore, and it can't really work that way anymore. It's so critical that the folks in the roles Inderpal and I have had get a seat at the table and understand what went into the ask, not just the ask itself.

Very often, what folks want and what they need are two very different things. And so, without being arrogant about it, you ask a lot of questions and you get at what they really needed in the first place, which is probably not what they started out asking for.

Focus on strategy first and technology tools later

Michael Krigsman: You're talking about organizational alignment with business strategy and, at a high enough level, this is true for every business decision that needs to be made. Yet, when you hear people talking about AI and data strategy, the conversation turns very quickly to: What kind of data do we need? Where do we get that data? What's the technology that we're going to use to aggregate and to manipulate that data? What kind of models are we using? Now I'm confused because you're talking about one thing and I hear the entire world talking about something different. 

Anthony Scriffignano: The world tends to focus on the hammers and the nails, and it tends to focus on the tools that are going to be used for the purpose. If I come to your house and I say, "Do you want to put an addition on your house?" and I've met with the architect and I understand your objectives, let's talk. 

If someone else comes to your house and says, "I'm going to build you a beautiful addition, and I'm going to use the hammer," you don't really care about the hammer. Of course, the hammer is important.

It's very important that we have the right data, the right tools, the right technology, the right people. It's people, process, tools, and mindset. All of those have to be aligned in order to get this right. But it starts with making sure you're focused on the right mission.

Inderpal Bhandari: Yes, that never changes, that piece of aligning back to the business. 

The way I would put it is: no matter how promising the technology, no matter how dramatic the advance, it doesn't let the organization off the hook for coming up with a sound business strategy and then aligning these elements to that business strategy. That's still going to be very much needed. In fact, maybe even more so than before, as you try to go after these new approaches and methods.

Michael Krigsman: Be sure to subscribe to our YouTube channel and hit the subscribe button at the bottom of our webpage. You can subscribe to our newsletter, and we'll notify you about our excellent upcoming shows and guests. We have lots of them.

Comparing business and technological challenges

Would you say that the business side is more difficult to get right than the data and technology foundations (in your experience)?

Inderpal Bhandari: I would say you probably want to draw the distinction between a mature technology and a technology that's more recent, nascent, or emerging. If you take something new like the latter, then there is a tremendous amount of complexity on the technology side as well.

Four elements of AI adoption

Early on, when we were getting into this game and working on AI (for instance, at IBM), it became very clear, as we went forward, that there were four elements that had to move in lockstep: data, technology, workflow, and culture. Those four had to move at the same time. Otherwise, the adoption was not going to be effective.

On the technology piece: at that time, cloud was an emerging technology, and a lot of AI techniques were emerging, the deep learning stuff with GPUs and things like that.

You had to make all that stuff work together, so there is significant complexity in the technology piece. But there is also significant complexity in the data piece and the workflow piece and then, eventually, in the culture piece of the organization. The stuff that we were talking about in terms of the negotiation, working with the C-suite, there's a lot of the cultural aspect that goes into it. 

Partnership with the business to overcome resistance to organizational change

There are many organizations one could go into and, essentially, as Anthony said, they would still want to give you a set of objectives and say, "Here. Go off. Implement this. We really don't want to hear from you about anything else. These are your marching orders. Go off and implement this." But that's the wrong approach when you're trying to bring in an emerging technology and use it to impact the business. 

Alignment challenges across business and technology domains

Michael Krigsman: Anthony, we have a question exactly on this topic from Twitter, from Arsalan Khan. Maybe you can share your thoughts on this. He says, "When we talk about alignment, there's business strategy, enterprise business architecture, change management, culture, and now data strategy." All right, Anthony. What's your prescription then to make all these layers work together and align? It sounds almost impossible.

Anthony Scriffignano: Almost impossible is a synonym for possible. So, if you said it was impossible, then we have to talk. Right? 

First of all, thank you for the question (from someone who knows how to ask a good question). 

I would say it's really important that you start with the question, with the objective. Everybody wants to jump to the technology. They want to jump to the data, the deal, the thing. 

There are two factions in the room. The one faction is focused on the revenue, the growth, what's going to happen to the organization. The other faction is focused on, "All right. Let's get going. Let's start doing stuff. Let's start cooking in the kitchen."

I'm usually the one somewhere in the middle of those two saying, "Let's make sure we're answering the right question here." I'm not slowing you down. I'm actually making sure we get it done in a way that we don't fall over the finish line.

It is very difficult to get all those factions in the same place. Probably the most important thing you have to do is be able to listen to each other and not start immediately talking about hammers and nails or immediately start talking about what color we're going to paint the finished product. 

Somewhere in the middle is: Why are we doing this? What are we not doing while we're doing this? Do we know? 

There's a big difference between can we do it and should we do it. What are we giving up while we do it? What about compliance? What about regulatory? 

How do we know that the data that we have is the right data to make the decision you want? Just because you believe it and you have your confirmation bias and you found one or two pieces of data that support your hypothesis doesn't make you right.

Ask the difficult questions!

We have to ask these difficult questions, and there's a very fine line between being right and being dead. You have to be able to ask them in a way that doesn't annoy. It can annoy them a little bit, but you have to annoy them just to the point where they don't kick you out of the room. 

Keep asking those "help me understand" kind of questions until we get to a shared understanding of what it is we're trying to achieve and the opportunity cost of all the other things that we're not doing.

The role of technologists as change agents in organizational alignment

Michael Krigsman: Inderpal, you're a technologist. So, if this is strictly then a business issue of organizational alignment, why do technologists play such an important role in this discussion, such a foundational, fundamental role? 

Inderpal Bhandari: I think the best way to think of my role, and of people in similar situations, is that of a change agent. The catalyst for the change is the technology, but the change has to be effected in the organization and in the business. You have to be able to bridge those two to do this successfully.

It's a transformation, and the transformation typically has those elements I talked about: data, technology, workflow, and culture.

I'll give you one other thing. There is a lot to be done in terms of changing the culture of an organization when you try to bring about this change. 

AI adoption must include a bottom-up organizational change strategy and an empowered team

What we saw at IBM, when we pushed forward with our data and AI strategy, was that adoption of the platform was triggered far more by the bottom-up measures we put in place. We had a team that was empowered to engage with other teams working on business workflows: quote-to-cash, procurement, supply chain, things like that.

We had an empowered team on the technology side that didn't really need to come back for direction or instruction. If they found a like-minded team, they could go ahead and move forward with the transformation. We found that 85% of the adoption actually came from that path, as opposed to the top-down path.

It really is all about how you effect the change. But obviously, if the catalyst is the technology, then you've got to be able to walk that walk as well.

But you can't discount the other side of it. You have to really be the bridge. 

“Massive federation of data and AI capability” in the enterprise

Anthony Scriffignano: Michael, I smiled when you called Inderpal a technologist and he very diplomatically didn't respond. I think you can tell by that answer that you have to be much more than just an expert in the technology to get what he just said right. 

In large organizations, what's happening right now is a massive federation of data and AI capability. It's not like you go to the room where the people that know how to do that live and ask them to do it for you. 

Almost anybody can get these capabilities on their desktop. It doesn't mean it's the right place to do it, but they can start doing it there. 

Everyone feels like they're an expert. Just like when we all first got—I'm trying not to name a product, but I think I can say—Harvard Graphics. In the days even before PowerPoint where, all of a sudden, we could all lay things out on the screen, we all thought we were experts in design, layout, font selection, and all of that. Of course, we weren't.

There's an old joke where the punchline is death by PowerPoint. We all know versions of that joke. And I'm not picking on PowerPoint. [Laughter]

Federating a capability like that across an organization, or across the world, comes with some risk. Those who really are practitioners, who know the difference between what you can do and what you should do, who understand the implications of going down a certain path and the difficulty of changing course once you get too far down it, have to be able to hear what's going on.

For what Inderpal is describing to work well 85% of the time, it does require an organization that actually talks to each other or at least talks up to people who talk to each other (up and down). But that's not always the case. 

You can't just throw everything out in the middle of the floor and say, "Here you go. Everybody, play with this and do whatever you want." That will not work. That will end in tears.

Managing and selecting technology to solve data and AI challenges

Michael Krigsman: You have governance. You have focus on these foundational pieces. What about the interface between the technology and what you're describing? The whole world (and organizations, by and large) tend to focus on that technology piece. Can you now maybe talk a little bit about technology management as it relates to what you were just describing, and also selecting the right kinds of technologies and especially, selecting the right kinds of data to match with the problems that you're trying to ultimately address?

Anthony Scriffignano: I would add time to that list. Time is still relevant.

I think I have a good example for you. When the pandemic broke out, I don't think anybody was really expecting that. All of a sudden, organizations shifted to almost exclusively working from home. 

There are laws about what data you can access from home and what data you can access at your desk. You have a different firewall when you're working in the office than you do when you're working at home. 

Impact of the pandemic - hybrid and remote work - on business operations

You've got developers that used to be co-located that are not co-located anymore.

Organizations had to absorb all of that change while still trying to serve their customers and, in some cases, failure to do so could have been fatal.

There is an urgency about this as well. You can't take forever to do it. And you have to have good discipline in place so that when the unexpected happens (in the middle of the other unexpected that was already happening), you have the resiliency to survive that and come out of that stronger.

I'm not going to suggest (although I could) that IBM is one of those organizations, but mature organizations that get it can do that. We saw a lot of organizations that weren't so mature not getting it (in the middle of all that disruption). So, it's a very big question you're asking.

Inderpal Bhandari: The example of the pandemic was particularly instructive, and I think it goes to your data and AI questions earlier in the segment as well. 

When the pandemic hit, in terms of being able to run your business (for instance, make financial forecasts, make forecasts about your supply chain, about your procurement abilities, et cetera), all the models that were in play were essentially useless because we had now embarked on a situation that was completely new. No matter what technology we had in there from an AI standpoint or a model standpoint, it had been trained in a completely different world. 

That was Anthony's point, right? It may not be true now. [Laughter] 

In fact, what was true was this: if we were able to get accurate, pristine data into the hands of the people who were running those different departments, along with an overlay of what was actually happening in the pandemic (where Covid-19 was breaking out, what was in the incident reports in different areas), then they could geographically overlay that on whatever they were working on, whether financial forecasts, sales they expected to close, or procurement sites that were in danger, and they could make something out of it and move forward. That, I think, is also an instructive example of the relationship between data and AI, and of how it plays out when truly unexpected things unfold.

Understanding decision elasticity

Anthony Scriffignano: Let me draw first blood on saying something super nerdy. There's a concept I call decision elasticity. I kind of stole it from economics.

How wrong can you be and still make the same decision effectively? You don't have to be perfect to make a decision.

Inderpal is talking about training. There's an implication there that you have longitudinal data, data from the past that you can project into a near-term future that looks reasonably similar. And you can measure the elasticity of your decisions: How wrong are they? Then if they start getting wronger and wronger (to coin a term), then you can stop and re-examine those methods. 
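One hedged way to picture that measurement, sketched in Python: track a rolling error rate on recent decisions and flag when it drifts past what the decision can tolerate. The window size and threshold here are hypothetical assumptions, not part of Scriffignano's definition.

```python
# A hypothetical monitor for "decision elasticity": how wrong can recent
# predictions be before we stop and re-examine the method?
from collections import deque

class DecisionMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.2):
        self.errors = deque(maxlen=window)  # recent hit/miss outcomes
        self.threshold = threshold          # tolerable rolling error rate

    def record(self, predicted, actual) -> None:
        self.errors.append(predicted != actual)

    def should_reexamine(self) -> bool:
        # True when decisions are getting "wronger and wronger".
        if not self.errors:
            return False
        return sum(self.errors) / len(self.errors) > self.threshold

monitor = DecisionMonitor()
monitor.record(predicted="approve", actual="reject")
if monitor.should_reexamine():
    print("Rolling error exceeds tolerance: re-examine the method.")
```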

The problem is, when you have something completely disruptive, there is no data. And the most dangerous situation you can find yourself in is when the world is changing faster than the data that describes it. That's exactly where we were at that moment.

You can't just throw your hands up and say, "Well, wait. When you have five years' worth of data, come back and I'll retrain everything and we'll be good to go."

You have to have methods in place that are effective in a situation, and that's what this environment taught us. That you can't just rely on one type of learning, one type of projection into the future. 

At that time, I was very involved with watching bad guys do bad things. Well, when there's disruption, the best bad guys, especially if they think they're being watched, change what they're doing.

If you model based on what they were doing, you're modeling how the best ones are no longer behaving. Kind of a dangerous thing to do, right? But we know this. 

The flip side of that coin is that if you know the environment changed such that the bad guys are probably going to try to take advantage of it, many of them are probably going to do so unartfully. And so, you may more easily be able to see them as they run.

You turn on the light and the little creatures run away. You can see that, and so there might be an opportunity there along with that risk. 

Sometimes these situations, I would say, are almost never all bad or all good. There's always something in it that can teach you. 

There's always something in it that can make what you're doing better, if you have enough time to breathe, observe what's going on, and use your energy in the best possible way. It doesn't mean the bad thing will stop happening, but it may mean that you emerge from it in a better way because you took that time to be more thoughtful about it.

Challenges in AI-enabled initiatives

Michael Krigsman: We have a question from Twitter. Lisbeth Shaw says, "The issues you're describing are true of any business or technology transformation. Are there particular points, issues, that are more problematic for AI-enabled initiatives? Can you kind of drill down into that?"

Inderpal Bhandari: If you look at the advent of AI, the progression of AI, it's moved very, very quickly in the consumer space but not so fast in the business space.

Trust, privacy, and job displacement impacts of AI

That's because, in the business context, people don't trust AI, and they don't trust AI for multiple reasons. 

We talked about some of the issues with the data: the robustness of the data, the quality of the data, the currency of the data. Then we also get into issues that have to do with the fairness of the algorithms, ensuring that the results they produce treat people fairly (if the data pertains to people).

You have the issue of privacy being invaded when the algorithms discover something new. There's the famous example, or infamous example, of a retailer looking at shopping patterns and shopping data, inferring that a customer was pregnant, and mailing promotions to her home. It turned out to be a young woman, and it was a complete invasion of her privacy. So, those aspects come in.

Then there are the issues around job displacement and things of that nature. If you're applying AI in the enterprise, there are two flavors, often. 

There's the automation flavor, which applies when things are kind of straightforward: you go from one step to the next, you know what those steps are, and you can automate all of that. So there's job displacement associated with that.

But even on the decision-making side, where the AI is helping make a decision, there's a decision-maker in play, and they have to trust it. They have to say, "Well, this won't displace me."

Extending that further to the executives: as you put AI in, we kind of know by now that it has to be infused into the major workflows of the business, things like procurement, supply chain, et cetera. Those workflows are the kind of IP that doesn't get published in papers or patented. They are the trade secrets of a company.

They have to be able to trust that whoever the vendor of this software is, it's not something that's going to disintermediate them. Furthermore, the decision-maker working with the system has to understand it.

Years ago, I built a computer program called Advanced Scout that ended up being used by every coach in the NBA. I remember the first time it produced a counterintuitive finding: it basically asked the coach to play two backup players in a playoff game when the team was on the verge of elimination.

He was very concerned about that because he felt, "If I do this and I lose, I'm going to lose my job and reputation as well, in addition to the series." We kind of solved that problem by letting him see the video clips of when those two players were on the court, but that's the explanation piece. 

If you tell a doctor, "Amputate the left leg," they're going to have all kinds of questions. Okay, why amputate? What other options were considered? Why is amputation the right one for this patient, et cetera?

Explanation is another big part of it, and the AI systems today don't do a good job of all that. Those are the special aspects of AI and trust that come into play.

Anthony Scriffignano: I think that was a fantastic list, and I won't presume to add to it, but I will suggest another dimension.

Great question: What's special about the AI issues?

I would say another one is that you have the opportunity to fail faster and at larger scale. There's a tendency, once these sorts of systems are implemented, for someone to say, "Well, it's 99% accurate," or "It's 92% accurate," or "It's 87% accurate," and to assume that means the prediction will be right 87% of the time.

Well, no. That's based on the past, not the future. Very rarely do we measure fast enough to stop every conceivable bad thing from happening. 

Inderpal hinted at something, which is an observer effect. People, when told what to do by a "machine," will sometimes think they know better, or simply not want to be told what to do by a machine, and do something different just because a machine told them to (to prove that they can), without necessarily thinking it through out loud like that.

The question I get asked a lot is, "What about someday, will people be reporting to robots or robotic bosses of some sort?" 

You say, "Oh, of course not. I would never do that." Then the GPS tells you to turn left or right, and you do. Outlook tells you to go to a meeting, and you go.

We're already taking a lot of direction from automation. I won't call it AI necessarily but from automation. 

The human effect, what we do as human beings to accept or reject that device, is essential to getting at trustworthy AI and to making sure we don't further marginalize people who are already marginalized because they don't have access to these technologies. The concept of good and not good kind of depends on where you're sitting sometimes.

On trust and explainability in AI

There are certainly lots of volumes, books, and committees focused on trustworthy AI and explainability. There's legislation being considered as we speak that will hold the feet of anyone implementing anything called AI to the fire.

To say that it's not being adopted by business isn't quite right; the adoption is lower, I think, in some ways because of some of these human factors. It's not a lack of technology. It's a reticence to just push that button so quickly.

Technology will always outpace regulation, so you have to be careful, or you could find yourself in a world of hurt where now they're coming after you because you used that technology that made a better decision. Good luck trying to prove that sometimes.

Costs of not implementing governance or data quality in AI projects

Michael Krigsman: This is a question from Hue Hoang, and he says, "We can sometimes measure the cost of implementing data solutions, but how can we measure the operational costs when a business decides not to implement certain solutions such as governance or data quality?"

Anthony Scriffignano: The opportunity cost, the cost of not doing something. Thank you, Hue, for that question. That's a big one, and I think it's an important one.

If we're going to decide not to do something, we should decide not to do it on purpose, not just because we got tired of arguing about it or because we didn't want to take the effort to get all the data that will be necessary to make that decision. 

One of those annoying questions that I usually bring into the conversation is: if we're going to decide not to do this (because there's some other thing that we want to do, and that other thing has been deemed more important), great. Then let's make that decision. But let's understand the opportunity cost, the cost of not doing it.

Inderpal Bhandari: In many cases, it does become clear-cut because you might have regulations that levy huge fines. In the European Union, for instance, under GDPR, if you don't have the right setup for governance and privacy and so forth, you'll be hit with a major fine.

In other cases, though, they might choose not to do the governance of the data, but it will end up reflected in the actual output that's being produced. Then somebody has to go back and fix it, so keep tabs on that.

I'm assuming here that you've lost the argument and they've gone ahead with it. By keeping tabs on that and raising it every time it happens, I think you'll very quickly be able to make a difference in the way people view it, because nobody wants a disaster.

Sometimes they'll have skipped a step that has consequences of major magnitude. Collaboration is the name of the game, so you just want to keep an eye on it, warn people that this is going to happen, and every time it happens, or even before it happens, raise your hand and say, "Look, I told you about this. Now let's do it."

Differences between generative AI and more traditional AI projects

Michael Krigsman: This is from Iavor Bojinov, who is a professor at the Harvard Business School. He's also been a guest on CXOTalk. He says this. Inderpal, maybe I'll ask you first. "Is there anything different between generative AI and more traditional AI, and how should organizations approach this?"

Inderpal Bhandari: I think the best way to think about generative AI is that the promise is you can do things conversationally. Just as you and I can have a conversation, discuss something, and try to get to some resolution, that's the whole promise.

Now, if you apply that in a large organization and say, "I've got some intelligence that can now conversationally help me do client support, employee support, my IT operations, et cetera," that's hugely, hugely promising. 

On the other hand, the way these systems work today, the best way I've been able to get my mind around generative AI is that, in a sense, each word is predicted. Then that word is fed back into the input, and the next word is predicted.

It's almost like when you and I are talking, I'll sometimes do this. I'll go out on a limb. I'll start saying something, and the thought hasn't fully formed. Usually, I manage to come out of it. But many times, I'll end up with my foot in my mouth.

The generative AI techniques are essentially going out on a limb every time, which is also why they're not always consistent in their responses. The same prompt might give you a different response because the model is working off a probability distribution.

I think there's a tremendous amount of promise but also a tremendous amount of work that needs to be done to address some of the issues that we've raised earlier.
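For readers who want the loop Bhandari describes spelled out, here is a toy sketch in Python; the bigram table stands in for a real language model and is entirely hypothetical.

```python
# A toy autoregressive loop: predict a distribution over next words,
# sample one, feed it back into the input, and repeat.
import random

# Hypothetical next-word distributions, conditioned on the previous word.
bigram = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"model": 0.5, "data": 0.5},
    "a": {"model": 0.5, "data": 0.5},
    "model": {"predicts": 1.0},
    "data": {"matters": 1.0},
    "predicts": {"<end>": 1.0},
    "matters": {"<end>": 1.0},
}

def generate(seed="<start>"):
    words, current = [], seed
    while current != "<end>":
        dist = bigram[current]
        # Sampling from a probability distribution is why the same
        # prompt can yield a different response on each run.
        current = random.choices(list(dist), weights=list(dist.values()))[0]
        if current != "<end>":
            words.append(current)
    return " ".join(words)

print(generate())  # e.g., "the data matters"; varies run to run
```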

Michael Krigsman: Anthony, differences between generative AI and traditional AI and implications for the enterprise – pretty quickly, please. 

Anthony Scriffignano: Generative AI is making stuff that didn't exist before based on stuff that it observed, and that stuff can be text, it can be images, it can be anything that we as humans consume. The challenge is that it looks at all this stuff from the past, computes on it, does a lot of math, and then generates something that looks like a human said it, and a human didn't say it.

When the world changes and the corpus of data it's looking at doesn't change fast enough, nuance gets lost, and we lose the ability to understand something nuanced. So, if the purpose is to provide customer support based on frequently asked questions, or to summarize a whole bunch of things you should have read and didn't have time to, it's a fantastic idea. If the purpose is to write new thought leadership on something, maybe it's a starting point, but I would be very careful about considering it an ending point.

Advice to business and technology leaders on data and AI

Michael Krigsman: Share final thoughts on advice that you would give to business and technology leaders who want to be more effective using data, using AI. Inderpal, do you want to jump in with that one first?

Inderpal Bhandari: I've been doing this for the last 20, 25 years, starting from the days when I did that program for the NBA to now. Whenever I was doing it, I always felt, "Oh, it can't get better than this," but it always seems to get better than that.

I think we are now in one of those moments where there is the potential and the opportunity to have a tremendous impact, not just on business but also on society. Because of that, there are major societal considerations as well. We absolutely have to get involved, and that would be my biggest advice to people, either on the business side or on the technology side.

You need to really get involved with what's happening here. There's just tremendous, tremendous potential. There's never been a better time to be involved in data and AI.

Michael Krigsman: Anthony, it looks like you're going to get the last word here. 

Anthony Scriffignano: Number one, I would say, ask "Why?" a lot. Why are we doing this? What do we have to believe? Why this data?

Make sure that you understand before you jump into that method with that data. Make sure that method and that data are, in some way, justifiable (not only against what you intend to do but against what you are not doing by doing that instead). 

Then the second thing is to make sure that you pay very close attention to how the environment is changing so that you don't get caught by the change that makes what made sense no longer sensible. 

Then the last thing is something I always advise, which is to be humble. It is extremely rare when you know everything you need to know and have all the information you need without widening that circle and bringing in others that have some sort of expertise or some sort of perspective that you don't have. 

Inviting that expertise and that perspective is not a sign of weakness. It's a sign of great strength. 

Michael Krigsman: With that, unfortunately, we are out of time. I just want to say a huge thank you to Anthony Scriffignano and Inderpal Bhandari. Anthony, thank you. It's wonderful that you've been here again, and I hope you'll come back another time.

Anthony Scriffignano: Absolutely. Thank you so much, Michael.

Michael Krigsman: Inderpal, I'm so honored that you joined us. Again, I hope you as well will come back and be a guest on CXOTalk again at another date.

Inderpal Bhandari: Delighted to do that, Michael. Thank you for having me. For those with unanswered questions, please connect with me on LinkedIn, and we can continue the conversation.

Michael Krigsman: Everybody, thank you for watching, especially those folks who asked such great questions. You are such a smart and bright audience. We love your questions. Keep watching CXOTalk.

Go to CXOTalk.com. Be sure to subscribe to our YouTube channel and hit the subscribe button at the bottom of our webpage. You can subscribe to our newsletter, and we'll notify you about our excellent upcoming shows and guests. We have lots of them.

Everybody, thank you so much. Hope you have a great day, and we'll see you again next time. Bye-bye.

Published Date: Jun 16, 2023

Author: Michael Krigsman

Episode ID: 793