Get a proven AI playbook from IBM Consulting's Chief Operating Officer. Learn strategies to cut through the hype, overcome implementation hurdles, and build responsible AI systems.
AI Playbook for Business Results: Insights from IBM Consulting
In episode 832 of CXOTalk, Michael Krigsman talks with Mohamad Ali, the Chief Operating Officer of IBM Consulting, about practical aspects of integrating AI into business operations. The discussion explores challenges and opportunities that companies encounter as they transition from initial AI experimentation to scaling these solutions effectively.
Ali emphasizes the changing landscape of AI consulting, highlighting the shift from building initial use cases to addressing the complexities of scaling, which involve security, bias, data privacy, governance, and cost optimization. He introduces IBM Consulting Advantage, a platform designed to equip consultants with AI tools and resources while ensuring responsible and ethical implementation.
Episode Highlights
Embrace AI as a Transformative Force
- AI is a revolutionary force, similar to the internet; leaders must adapt proactively and integrate AI to remain competitive.
- A comprehensive AI strategy must address implications that reach beyond individual projects.
Prioritize Scaling and Governance
- Moving AI projects from sandbox to production raises challenges around security, bias, data privacy, and cost.
- Governance should be embedded throughout the AI lifecycle to keep practices responsible and ethical.
Develop a Multi-Modal AI Approach
- Leveraging diverse AI models and platforms, chosen for specific use cases and requirements, adds flexibility and mitigates risk.
- Understanding data provenance and model limitations is essential to data quality and governance.
Invest in Employee Reskilling and Training
- Empowering the workforce with AI skills and promoting a culture of continuous learning maximizes the benefits of AI adoption.
- Training initiatives should be accessible and cater to diverse skill levels.
Operationalize AI to Drive Efficiency and Innovation
- Integrating AI into workflows and processes enables new ways of working across departments and drives business impact.
- AI solutions should be designed with operational feasibility in mind.
Govern AI Responsibly
- Robust governance underpins bias mitigation, data privacy, and ethical use.
- Governance checkpoints belong at each stage of AI deployment, backed by monitoring tools.
Key Takeaways
Scaling AI Requires More Than Technology: While initial AI implementations often focus on building specific use cases, the real challenge lies in scaling these solutions across the organization. This involves addressing critical factors beyond technology, such as data governance, security, bias mitigation, cost optimization, and cultural change management. Business leaders must recognize that successful AI integration requires a holistic approach that encompasses these broader organizational aspects.
Multi-Modal AI Strategies Offer Flexibility and Risk Mitigation: Adopting a multi-modal approach to AI, leveraging various models and platforms based on their strengths and weaknesses, allows businesses to tailor solutions to specific needs while mitigating risks associated with any single model. Leaders should carefully evaluate the capabilities and limitations of different AI models, considering factors such as data sources, transparency, and potential biases.
Investing in People is Crucial for AI Success: As AI transforms the nature of work, reskilling and upskilling employees is essential to ensure they remain valuable contributors. Leaders should prioritize training initiatives that empower employees to leverage AI tools effectively and adapt to evolving job roles. This investment in human capital is crucial for maximizing the benefits of AI and fostering a workforce that thrives alongside intelligent technologies.
Episode Participants
Mohamad Ali is Chief Operating Officer for IBM Consulting. In this role, he is responsible for the global operational performance of IBM Consulting, including Global Delivery, Cybersecurity services, asset development and scaling AI-enabled solutions that make greater use of IBM technology to improve delivery profitability and innovation.
Mohamad was most recently CEO of IDG, a leading market intelligence and demand generation company serving the technology ecosystem. Before this, Mohamad was CEO of Carbonite, a publicly traded data-protection and cybersecurity company where he grew revenues four-fold in four years.
Mohamad also previously served as the Chief Strategy Officer at Hewlett Packard where he played a pivotal role in the company's turnaround by directly managing over ten billion dollars of organic cost reductions and leading the process to split HP into two companies.
At IBM, Mohamad acquired and integrated various companies to help create the organization’s multi-billion dollar analytics software business. At Avaya, he oversaw the two-billion-dollar services group, and served as the head of the company’s research labs.
He currently serves on the boards of iRobot (NASDAQ: IRBT) and Henry Schein (NASDAQ: HSIC), and previously on the boards of Carbonite (NASDAQ: CARB) and City National Bank (NYSE: CYN). Mohamad is also a member of the Council on Foreign Relations.
Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.
Transcript
Michael Krigsman: Welcome to episode 832 of CXOTalk. We're discussing operationalizing AI in business. And what does that actually mean? We're speaking with Mohamad Ali. He is the chief operating officer for IBM Consulting.
Mohamad Ali: We're a $22 billion services company. And of course, we have a very broad range of services that we deliver - everything from user experience design to SAP implementations, to product engineering, to gen AI applications. And now we're even doing quantum applications for our clients.
Michael Krigsman: You talk about operationalizing AI, and I know that's very important to you, and it's important to your work. What does that actually mean?
Mohamad Ali: As many of you know, the consulting market, which is a $1 trillion-plus market, is about to change dramatically, and a lot of it is going to be around our operations - how we combine our colleagues, our consultants, with digital workers and build a new type of operation. And the lever we're using to implement this is actually software.
And so, we recently announced something called IBM Consulting Advantage, which is our workbench that lets these 160,000 consultants operationalize all this great technology.
Michael Krigsman: Would it be correct to say that your focus is thinking forward? What does work look like in a world that is the combination of people working together with various AIs?
Mohamad Ali: We're doing this every day now, right? As you know, we have several hundred of these AI projects going. So do our competitors. And, you know, there are thousands of these AI projects going, and every single one of them has to do with how do you make someone who is doing something efficiently today do it even more efficiently with these AI tools?
Michael Krigsman: When you look at this, obviously you're responding to your clients and to the companies that you work for. What are you seeing out there? What are the things that they are telling you that they need the most right now?
Mohamad Ali: I would say about 12 months ago, a lot of it was, "Let's get started, let's create some use cases." And there are now thousands upon thousands of use cases. But one of the things that we've noticed is that about 40% of these are what we call "stuck in the sandbox" - they're just not advancing. And so the first phase of this was, "Please help us build these things."
And now, you know, we've helped them build these things. We're also being called in by clients who have said, "We've tried to build this on our own. We've used some partners to build these things, but we can't scale them." And so, the second phase is about scaling, and some of the things that I noticed - I'll just give you an example.
I was in Dubai recently and I met with a client, and he had built this really great AI solution. And he said, "Now I want to roll it out to about 1,000 stores and the agents of these stores and so forth. But I have a problem. I'm running out of tokens. This stuff is expensive. How do I deal with this?"
And so we started showing him how he could optimize for cost. Then he turned and he said, "Well, I think I have a security issue as well." And so we started helping with security issues. So, I would say this next phase is about scaling, and the things that are impediments to scaling have to do with security, bias, PII, governance, cost. And so we're heavily, you know, focused on helping our clients with those types of issues now.
Michael Krigsman: What I find really interesting is you start with a - we can say a technology problem: How do we use AI in our business? But then as you're describing it, it's like one of those balls of string that you pull on the string and now it starts touching throughout the organization all these very different aspects that really have very little to do with technology.
Mohamad Ali: I think a lot of technologies are like that, Michael, where, you know, there's some exciting thing, but that exciting thing touches everything in your organization, right? So if you think about just building the assistant or the copilot or whatever the AI solution is, that is almost a small part of the problem. How it connects everything else, how the data that you have in your organization feeds it, how it gets secured, how you deploy it so that it is used efficiently, effectively, ethically, responsibly - these are big, sort of cultural and change management type of issues.
Michael Krigsman: Are there unique aspects to AI consulting projects or - or maybe I should phrase it this way - unique aspects to the technology and the business implications with AI that are different from other kinds of technologies?
Mohamad Ali: There definitely are. You know, if you think about AI in the early days, we were reasonably deterministic, right? You had sort of rule-based AI, you had some neural networks, you had constraint-based approaches and machine learning approaches - generally deterministic. With large neural networks, there are some unpredictable aspects of that. And so it's really, really important that proper governance be applied to these neural networks.
Right. And when I say governance, I mean things like being able to determine if an AI module is doing something it shouldn't be doing outside of certain bounds, looking at things like bias, looking at things like PII. And so once again, I come back to this platform we've been building for IBM Consulting Advantage.
And so, as our consulting engineers use AI, either internally or for a project, every time they're interacting with one of these AI things - and they're doing it all the time, right? Every prompt that you issue is an interaction. Every time that happens, there's a security check, there's a bias check, there's a PII check. It's continuous, right? And I think this is a little bit novel in the consulting world, because we're not just doing it after the fact.
We're checking throughout the process and we're checking at every instance, right? And so, the thing that's a little bit different now is that with this new phase of AI, I think we have to be especially vigilant about how we use it. It's one of these Spider-Man things, right? With incredible power comes tremendous responsibility.
And so, we have to be even more responsible now.
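For readers who want to picture the kind of per-prompt checking Ali describes, here is a minimal sketch in Python. The check functions are illustrative assumptions, not IBM Consulting Advantage's actual implementation:

```python
import re

# Hypothetical per-prompt governance pipeline: every interaction with a
# model passes security, bias, and PII checks before and after the call.
# The check implementations here are deliberately naive placeholders.

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like pattern
    re.compile(r"\b\d{16}\b"),             # bare 16-digit card number
]

def check_pii(text: str) -> bool:
    """Return True if the text appears free of obvious PII."""
    return not any(p.search(text) for p in PII_PATTERNS)

def check_security(text: str) -> bool:
    """Placeholder: block prompts that look like injection attempts."""
    return "ignore previous instructions" not in text.lower()

def check_bias(text: str) -> bool:
    """Placeholder: a real system would call a trained classifier here."""
    blocked_terms = {"<blocked-term-list-goes-here>"}
    return not any(term in text.lower() for term in blocked_terms)

def governed_completion(prompt: str, model_call) -> str:
    """Wrap a model call so checks run on every prompt and every response."""
    for name, check in [("security", check_security),
                        ("pii", check_pii),
                        ("bias", check_bias)]:
        if not check(prompt):
            raise ValueError(f"Prompt rejected by {name} check")
    response = model_call(prompt)
    for name, check in [("pii", check_pii), ("bias", check_bias)]:
        if not check(response):
            raise ValueError(f"Response rejected by {name} check")
    return response
```

The design point, as Ali notes, is that the checks wrap every interaction rather than running as an audit after the fact.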
Michael Krigsman: It sounds like you're almost building in various layers of governance throughout every step, that it's not distinct really from the project itself, but it's actually fully embedded, fully intertwined, inseparable.
Mohamad Ali: Absolutely. It's kind of nice that we're in this cyber range here because, you know, I think about this - when I was CEO of another company and at the time, you know, we were trying to make sure that our product was incredibly secure. And so it used to be that you would build the product, and then at the end of it, you'd do some pen testing, you know, you check it for security.
And then over the years, people started embedding security earlier and earlier and earlier in the process, to the point where every time you do a code build, you do a security scan, right? And so all the way through. And so this is a little bit like that. We're starting to embed governance and security and privacy all the way upstream in the process.
And so, you know, you think about DevSecOps, right? This is like DevSecOps with governance put together. And so when we get something out to a client, we need to make sure that, like, the whole chain was good.
Michael Krigsman: Please subscribe to our newsletter and subscribe to our YouTube channel. We have incredible shows coming up. Check out CXOTalk.com. Mohamad, it sounds again like you're going into the organization through so many different strands, so to speak. Is that why the operating aspect, the operating model, becomes so important?
Mohamad Ali: Everyone in the consulting business, I think, has had this innate feeling that something big's about to happen. And I think they're right. We're really at an inflection point, right? And, you know, maybe it's the Big Bang, but it's a big deal. Something's about to happen here. And in order for these positive changes to happen in a productive way, it needs to be tremendously organized and operationalized into a consulting company.
And so it's not just about - this is not just another tool. You know, we've had a lot of great other tools that we go in and implement, right? I mean, we all do these projects all the time. We have, you know, I'm involved in a very large SAP implementation. It is an incredible tool that we're putting in there.
But this is not that. This is not another new ERP product. This is something that changes fundamentally how we're able to work, right? So if you look at, you know, almost any consultant, any engineer within our organization, what they do can be improved or made more efficient by someplace between 5 and 95%, right? I mean, these are big numbers.
And so, if that's the case, it's going to touch everything. A lot of our processes have to change, how we approach clients has to change, the value that we bring to a client on day one has to change, right? And so, you know, we're going to have people who have been working for five years who, with these tools, which we're making available in an organized way through IBM Consulting Advantage, that five-year experienced person will be able to do the work of somebody who's been doing it for seven years or eight years, right?
It's incredible what those people will be able to do. I mean, it's kind of exciting, right, for new people coming into this or, you know, existing people getting retrained in this. All of a sudden, you have - it's like a superpower. You know, you get all these people - 160,000 of them are getting this superpower.
And so we have to do it in an organized way. And that's why, you know, the operations of the organization is a key element to harnessing it.
Michael Krigsman: So, IBM Consulting Advantage - do you want to briefly tell us what that is?
Mohamad Ali: I think of it as a workbench for 160,000 consultants. And as I described, it allows us to use a variety of things, including gen AI, in a very organized way. So at the heart of it are what we call our methods, our assets, and our assistants. And our methods are effectively how we do a project. Our methods get turned into what is like a 300-page document.
It tells you how to implement this as a project, and that gets turned into a work breakdown structure. And that work breakdown structure is the project plan, right? And the project plan gets populated with resources. And those resources are our colleagues, our consultants, but they're also our assets, which are pieces of software. But now they're also our assistants.
And we have about a thousand of these assistants. And so we're doing it in a very organized, operational manner. We're connecting all these resources to the tasks in the work breakdown structure. And so when our engineers go to deliver a project, they're delivering it through this platform. So, you know, it's a very powerful and all-encompassing platform, in some ways meant to harness all this great new technology.
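To make the structure Ali describes concrete - a work breakdown structure whose tasks are staffed with consultants, software assets, and AI assistants - here is a hypothetical sketch. The class and field names are assumptions for illustration, not the platform's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical model of a work breakdown structure whose tasks are
# staffed with three kinds of resources: human consultants, reusable
# software assets, and AI assistants.

@dataclass
class Resource:
    name: str
    kind: str  # "consultant" | "asset" | "assistant"

@dataclass
class Task:
    description: str
    resources: list[Resource] = field(default_factory=list)

@dataclass
class WorkBreakdownStructure:
    method: str  # the delivery method this plan was generated from
    tasks: list[Task] = field(default_factory=list)

    def staff(self, task_index: int, resource: Resource) -> None:
        """Attach a consultant, asset, or assistant to a task."""
        self.tasks[task_index].resources.append(resource)

# Example: a tiny project plan derived from a (hypothetical) method.
plan = WorkBreakdownStructure(
    method="SAP implementation method v1",
    tasks=[Task("Gather requirements"), Task("Generate migration code")],
)
plan.staff(0, Resource("Jane, senior consultant", "consultant"))
plan.staff(1, Resource("Code-generation assistant", "assistant"))
plan.staff(1, Resource("Data-migration toolkit", "asset"))
```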
Michael Krigsman: How is this different from traditional non-AI approaches to project management?
Mohamad Ali: In the past, it was very manual, right? And in some ways, it still is. We're just beginning to work on this. But I'll give you an analogy. So if you go back 15 years in the infrastructure world, let's say you're in the CIO organization, you've got this new project, you're building a new website for something, and you need some CPU resources.
So what you would do is you would call down to the server room and you'd say, "How many spare servers do you have? I need some boxes, some storage. Can you wire it all up for me?" And then four weeks later, somebody would do it and you'd have your compute resource and you'd be off, you know, you'd be able to run your application. Today, there's this thing called infrastructure as code. And what you do is you go to a screen and you type in your request and almost instantaneously, you have the compute capacity that you need. So today, the consulting world looks a little bit like the infrastructure world 15 years ago. When a project starts, somebody comes, they're excited.
You have a little, you know, celebration, and then you kick off the project. Somebody builds a project plan, and somebody calls around and figures out what resources are available. They manually staff the project plan. Somebody goes and figures out what assets might be available, and you sort of plug them in there. These days, you also figure out what assistants and copilots are available.
You plug them in there. It's still all very manual. And so what we're looking to do with IBM Consulting Advantage, and we've already started doing it, is to automate some of that, to provision some of this dynamically, and make it easy for the engineers who are going to be delivering on these projects, so they have the tools readily available and we can ensure that these tools are shared.
So, when they use them, they're high quality. I tell you, in consulting companies like ours and our competitors, everybody's building assistants and copilots and whatever else. There are thousands of them. How do you know which ones are good? And this is a process for curating the good ones and associating them with the people who need to do the work.
So, yeah, I mean, that's what's going on here.
Michael Krigsman: As you're talking, I'm thinking you're changing the workflows and the work processes for 130,000 engineers. That cannot be a simple task.
Mohamad Ali: No, it's not. So we recently hired a guy, and we said to him, "Your job is to think about it as if you were a large enterprise, a large bank, that decided to deploy SAP everywhere and hired a consulting company to come in and manage the implementation of this project."
So the guy that we just hired, he's not on the Consulting Advantage team - there's a whole Consulting Advantage team, and they're building it. His job is actually to get this thing deployed to our 130,000 engineers, right? Because us just building this thing doesn't necessarily mean it's going to get used. And so we have to have a whole team, a whole process, to get it adopted and used by the organization.
So, you know, similar to the consulting projects that we work on, where we go and we build something, and a big part of it is the change management to get people to absorb it. That's the person we just appointed. So we're just at the beginning of this process. We have a great platform. We have about 20,000 people on the platform.
We have to get it to 130,000 people.
Michael Krigsman: Greg Walters on LinkedIn is asking about expert generalists versus industry/business specialists, and about the difference between AI consulting and traditional technology business advising. Does AI consulting demand more of a universal approach instead of addressing one silo at a time? I think you started touching on this earlier.
Mohamad Ali: I think there are quite a few similarities between AI consulting and prior consulting, in that you really have to have domain expertise. And when I say domain expertise, I don't mean in AI necessarily. You have to have that, that's sort of table stakes, but you also have to understand the industry use cases, right? So if you're trying to apply it to a contact center in a bank, it's a very different problem than if you're trying to apply it to a contact center in a router company, a networking company.
And let me explain that, right? So, we're doing two projects right now. One of them is at a bank. And imagine somebody calls in - I'm going to make up a bank, say Bank of America - you know, Michael calls in and says, "There's a $50 charge on my account, and I don't know why I have this charge."
It's actually not that hard for, you know, a piece of AI to go and look at a bunch of stuff and come back and tell the agent the reason the $50 charge is there, so that agent can very quickly explain to Michael, you know, the reason. Now switch that to a networking company. I'm making this up.
Let's say Cisco, right? Somebody calls in and says, "Yeah, I'm the IT person, a networking person at this large enterprise. And you know, my network, my router is sending packets to Singapore instead of Indonesia when I have encryption turned on." That's a hard problem for an AI tool to go figure out, especially if it's trained on, you know, just a general purpose LLM.
Right? So you can take the LLM, you can, you know, add some documents and there could be some RLHF capability that you associate with it. And maybe you can answer that, but there are a bunch of really smart things that you actually have to do with the LLM, with the prompt, and with the RLHF in order to make that use case work.
So, this is just an example of where, yeah, it's great that you, you know, you have the AI skills, which I said are table stakes, but you still have to have that industry use case knowledge. Otherwise, you're not going to be able to apply it. And in some ways, that is extremely similar to what we've always had to do.
Michael Krigsman: So at the end of the day, yes, you have your toolset, but as you have just been describing, it's really the detailed understanding of the process, the business use case, the players, all of those, the business goals, all of those aspects. That doesn't really change.
Mohamad Ali: It doesn't change.
Michael Krigsman: We have another question, and this time from Twitter, from Arsalan Khan, who always asks great questions. He's a regular listener. And, Arsalan, thank you for always listening. He says, what is the importance of business process re-engineering in operationalizing AI? How do you capture and understand processes that are undocumented before AI can be deployed?
Mohamad Ali: I think AI actually has a really important place in capturing undocumented processes. So we also have a BPO business. And in the past, what you would do is you would interview the people who would do the task and you'd write it down. You'd document it, and these are for tasks that are effectively undocumented. You'd have to go through all the hard work of documenting it.
And with gen AI now, what you can do is interview a person on Zoom, and all of a sudden, you know, they start telling you how this all works, and you can use AI to extract out of that a model that you can query later, right? So you don't have to go look up some document later.
You could just ask this model questions and it will tell you how this business process works. So I think, Arsalan, you're exactly right. Having gen AI is just one piece of the puzzle. Then how you slam it into the process is extraordinarily important. And the scenario I just gave you is how we're actually using gen AI to help with figuring out what that process is.
So then you can plug it into the process.
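A rough sketch of that pattern follows, with a hypothetical ask_llm function standing in for whatever gen AI service is actually used; the prompt wording and JSON fields are illustrative assumptions, not how IBM's BPO tooling works:

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a gen AI service."""
    raise NotImplementedError("wire this to your model provider")

def extract_process_model(interview_transcript: str) -> dict:
    """Turn an interview about an undocumented process into a queryable model."""
    prompt = (
        "From the interview below, extract the business process as JSON with "
        "fields: steps (ordered list), actors, inputs, outputs, exceptions.\n\n"
        + interview_transcript
    )
    return json.loads(ask_llm(prompt))

def query_process(model: dict, question: str) -> str:
    """Later, answer questions against the extracted model, not a document."""
    prompt = (
        "Answer the question using only this process model:\n"
        + json.dumps(model)
        + "\n\nQuestion: " + question
    )
    return ask_llm(prompt)
```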
Michael Krigsman: This is from Simone Jo Moore, who says, "How do you see that we can blend country, cultural, and legal AI ethical aspects while some people are attempting to do all of that, but determined to keep it with an in-country focus?"
Mohamad Ali: At IBM, we were very, very supportive of the EU AI Act. And why? Because we thought it establishes a set of ethical and responsible standards that are important. It'll apply to the EU, not necessarily to all countries, right? Different countries will do their own things. One of the important aspects of the EU regulation is transparency, and this operates on multiple levels.
On one level, it's that AI should identify itself as such, right? Nobody should be tricked into interacting with an AI if they don't want to. There are multiple other levels of transparency that this sort of suggests or implies, including where the data comes from to train the model. And, you know, in some cases, some of our clients are fine with models that don't necessarily conform to the highest level of rigor because of how they're using them.
Maybe that's okay. But in other cases, it really, really matters. And for those cases, you know, we're heavily focused on that, right? So just this morning, I was having breakfast with a financial services company, and for them it is extraordinarily important to know what data was used to train the models. And so we're working with them to train some models to deliver certain results using effectively their data, or as much of their data as we could find.
And then synthetically creating additional data, because sometimes you just don't have enough data to train the model, and you need to be able to do that. Now, in terms of other countries, you know, at IBM, what we try to do, and I tell you, you know, this is actually one of the reasons I came back to IBM. I worked for IBM for 14 years.
And one of the important things for me was the ethical standard that the company has. Now, not everybody gets everything right. But I always felt that IBM really, really tried to do things right. And so it was a big attractor for me coming back to the company. The approach that IBM has historically taken is to try to set a high bar for ethical and responsible approaches to things as opposed to country by country doing the minimum to comply.
And so I think there are people who do that. And I think that's going to be problematic, and I think particularly so in the AI world, because you just don't know where you're going to slip up. And having the highest standards and applying those high standards across the board is probably a better approach.
Michael Krigsman: And of course, different countries, as you said, have different standards, different approaches. And for a multinational organization, that also becomes very complicated.
Mohamad Ali: Yeah. If you take this to an analogy of, say, GDPR, some companies, what they'll try to do is they'll try to comply with GDPR because it's sort of the highest bar and then apply that approach to all other countries. Now you still have to be knowledgeable about specific country regulations in order to make sure that there's not something you're missing.
Michael Krigsman: Let's go back to LinkedIn for another question, this time from Emily Elgin Fritz, who says, "Can you discuss the importance of data management and governance in building an effective AI strategy?"
Mohamad Ali: After college, I actually started a neural network company. This was before people knew what that was - it was the early '90s - and I was very excited about it because I had learned about this in college and had built some models, and I wanted to figure it out in the real world. And I quickly came to the realization that it's not so much about the models; it's about the data that's available to feed the models. And so, we tried to do this for businesses, and eventually, we started doing it for a steel plant.
We built a model for a steel plant. And it turns out that in a factory, the data has to be high quality because, if it's not, people die, right? This is not like a business where you just get something wrong; in a factory, somebody could get hurt. So the data has to be right. And so that was one of the few places where you could find clean data in the early 90s.
And we were able to build these neural networks that did quite well in the factories. Many years later, after I joined IBM, I was part of a team building IBM's early AI software. This was before it was called Watson, and back then, the first thing that I did was buy a company called Ascential Software.
Some of you might remember it was an ETL company, and I remember my colleagues looking at me saying, "Hey, I thought you were going to build an AI business here? What's the deal with this ETL company?" And at the heart of it is you have to have good data. And so the first thing we had to do was get access to the data.
Right. And as you all know, there are two primary ways to get access to data: federation and ETL. And we already had the federation capability; we needed the ETL capability. So now we have access to data. Then you have to cleanse the data, you have to have good quality of the data. Then you have to put the data into proper structures; you have to have the tables the right way. And then there's unstructured data. And so the fundamentals of those problems have not changed. What have changed are all the tools that we have to get the data into the format that we need. So, you know, when we engage a client, one of the first things we ask about is the data.
And if the data is not ready and it's going to be, you know, a 12-month project to get the data ready, we make sure we identify that upfront. Because the last thing we want to do is to start an AI project and then realize that the data is not ready. So there are times when we will back up.
We'll say, "Let's focus on the data, get the data right, and then do the AI." Governance is the other really, really important thing. Right. And so I talked about this a little bit earlier. But in the world of AI, you know, good governance is really critical. So we have actually been doing governance in software for decades. And so the fundamental concept of governance, again, hasn't changed.
What has changed is how you do the governance. Now you're doing governance around a piece of AI, and you need to understand what the AI can do, what it can't do, what it should do, and what it shouldn't do. And that you can plug into existing governance software, retrofitted for these types of things. And so it is not a hard or impossible task.
You just have to know it and do it right. These are extremely important points, and thank you for raising data and governance.
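As a toy illustration of those data-readiness steps - extract, cleanse, structure - here is a minimal ETL pass in Python with pandas. The file names, columns, and quality rules are assumptions for the example, not anything from IBM's tooling:

```python
import pandas as pd

# Minimal extract-transform-load pass illustrating the data-readiness
# steps described above; source file and cleansing rules are invented.

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()
    df = df.dropna(subset=["customer_id"])  # reject rows missing their key
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df[df["amount"].notna() & (df["amount"] >= 0)]  # basic quality gate
    return df

def load(df: pd.DataFrame, path: str) -> None:
    df.to_parquet(path)  # structured, typed storage for downstream AI work

if __name__ == "__main__":
    load(transform(extract("raw_transactions.csv")), "clean_transactions.parquet")
```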
Michael Krigsman: This one is from Twitter, from West Andrews, who says most generative AI models being widely operationalized are heavily reliant on data availability, as you were just describing - for example, those from Google, Meta, Microsoft, OpenAI, and so forth - all of them using internet data scraped for free to train the models. And here's the question: How do you see the legal aspects affecting the evolution of this?
And let me also say, how do you advise your clients about this data aspect, given that so much of it is coming through an uncertain provenance, shall we say?
Mohamad Ali: The first thing I want to say is that we take a multi-modal approach. And what that means is that in IBM Consulting, we work with all the labs right now. It's like any hybrid environment: you're utilizing a variety of things. Each one of them has its strengths and its weaknesses. And for certain applications, you may want to pick one that has a particular strength and acknowledge the weakness.
But you have to know what the weaknesses are, right? You don't want to go into it blind. And so, of course, IBM Consulting Advantage can leverage GPT from Microsoft, Bedrock from Amazon, Vertex from Google, Watson from IBM, Hugging Face, Llama from Facebook, etc., etc. And so what we do is we advise our clients on which of these models are trained a particular way, which might have deeper concerns, and which, you know, were trained in cleaner environments.
And some of them, which were not trained in 100% clean environments, are extraordinarily powerful and can solve certain problems. But you can also solve these problems and add restrictions as to what they can do and what they can't do. For example, code generation within IBM Consulting Advantage. We have a tool in there where, if you ask one of our assistants, one of these Consulting Advantage assistants, to write a piece of Python code - a bubble sort, say, or some other piece of code - then you can click on a button that goes out into the world and checks to see if there is another piece of code that looks exactly like it.
And it will come back and say, "Hey, this piece of code that was written looks like it came from this source, this source, or this source. From these sources, you know, it's open source, so you can actually use it; from those sources, it's not, so you might not want to use it." So, what we've done in that particular case is we've taken a very large and successful LLM that's out there in the world, that everybody loves and everybody wants to use, but we've put some guardrails around it to reduce the possibility of you infringing on somebody.
Now, that may not, you know, totally resolve it, but then we advise the client on the risk they potentially could take. And in some cases, it's unacceptable risk. In other cases - like the coffee that I had this morning with this financial services company. And I tell you, I was super impressed with this guy, the EVP of a business unit.
So he's not really a technical guy. But I tell you, the technical questions he asked about gen AI and also about quantum computing were phenomenal. And it's these kinds of people, business leaders who are so technically savvy, who are going to win. Maybe we can come back to skills at some other point here.
But in that particular case, he's very, very concerned, right? And so what we're doing there is we're actually starting with a model that was built on clean data that he already has, and we're augmenting it with some other capabilities to get the results that he wants. So yeah, I mean, I would say the key thing is to be aware, right?
Mohamad Ali: And if the client is aware - if you are aware of what the strengths and weaknesses are, and what the opportunities and risks are - then you can make the appropriate decision.
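To illustrate Ali's example, here is a bubble sort of the kind such an assistant might generate, together with a toy stand-in for the provenance check; find_similar_sources is hypothetical, since the real tool queries an index of known code:

```python
# The bubble sort an assistant might generate, plus a sketch of the
# kind of provenance review described above. A real check would query
# an index of open source code; find_similar_sources is hypothetical.

def bubble_sort(items: list) -> list:
    """Classic bubble sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

def find_similar_sources(code: str) -> list[dict]:
    """Hypothetical: return [{'source': ..., 'license': ...}, ...] matches."""
    raise NotImplementedError("would query a code-search / provenance index")

def review_generated_code(code: str) -> None:
    """Flag matches whose licenses make reuse safe versus risky."""
    for match in find_similar_sources(code):
        if match["license"] in {"MIT", "Apache-2.0", "BSD-3-Clause"}:
            print(f"OK to use: matches {match['source']} ({match['license']})")
        else:
            print(f"Caution: matches {match['source']} ({match['license']})")
```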
Michael Krigsman: You know, when I do research personally - research, writing, all kinds of different things - I use multiple models on a daily basis and then compare them: you know, Gemini Advanced does this, what have you. It's like having different advisors with different strengths, each of which goes crazy at one time or another, but hopefully at different times.
Mohamad Ali: Right, right. Yeah, that's absolutely right. And so you do that on a daily basis. I do that on a daily basis. What we want to do, though, is to make sure that our consulting engineers who are doing that on a daily basis do it in an organized way, and that they have at their disposal the knowledge of what the risks are with each option.
That's actually a big part of why we built IBM Consulting Advantage. So as they operate in this multi-modal way, they get that, you know, they get the reminders as they go along. Whereas when we as individuals use all these tools, we don't - there's nothing to remind us.
Right. And sometimes we'll forget. And as an enterprise, you can't forget.
Michael Krigsman: Systematization - being systematic - is a term you've used a number of times.
Mohamad Ali: Yes, systematic is very important, especially when you have 160,000 people. Right.
Michael Krigsman: I have a question about ROI. With AI projects, how do you establish the ROI? How do we know what we should be investing in and whether we're getting a benefit from that investment in this amorphous technology, which is changing all the time and which is an algorithmic black box in many cases?
Mohamad Ali: We've done hundreds of these projects now, in the last 12 months. And measuring the ROI isn't hard if you have the environment set up to measure the ROI. And that's actually one of the biggest challenges that we are seeing. It's not about the AI and what the AI does and the results of the AI. It's once you've embedded it into your organization, do you have the processes?
Mohamad Ali: Do you have the organization? Do you have the KPIs to actually measure this? So, some of the measurements you look for - probably one of the simplest - is productivity. You know, we had this process that took this long, and now we're doing it in 50% of the time. That's a productivity measurement. There are others.
So, we are now processing our accounts receivable much more efficiently, and we've improved our DSO by two days. That's something that you can measure. Another thing is, you know, I'm using AI in a contact center, and I'm able to avoid 20% of the incoming calls because they're automatically resolved, or when they come in, my mean time to resolution is shorter. So these KPIs and these ROI measurement elements - I think we know what they are, and you can identify them. The difficult part is actually ensuring that, as a company, you have the management system and so forth to be able to measure. And that's not different from any other project that you do.
How you measure it is actually quite similar. But I think we as an industry - not just here at IBM Consulting, but our competitors and so forth - are getting our arms around how to measure the return.
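The KPIs Ali names reduce to simple before-and-after arithmetic once the measurement environment exists. A short sketch, with all input numbers invented for illustration:

```python
# Before-and-after arithmetic for the KPIs mentioned above; every
# input number here is made up purely for illustration.

def pct_improvement(before: float, after: float) -> float:
    """Percentage improvement from a baseline value to a new value."""
    return 100.0 * (before - after) / before

# Productivity: a process that took 10 hours now takes 5.
print(f"Productivity gain: {pct_improvement(10, 5):.0f}%")   # 50%

# Days sales outstanding: improved from 42 days to 40 days.
print(f"DSO improvement: {42 - 40} days")                    # 2 days

# Call deflection: 2,000 of 10,000 monthly calls auto-resolved.
print(f"Deflection rate: {100 * 2000 / 10000:.0f}%")          # 20%

# Mean time to resolution: from 30 minutes down to 24.
print(f"MTTR improvement: {pct_improvement(30, 24):.0f}%")    # 20%
```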
Michael Krigsman: And so would it be correct to say that the measurement of an AI project, ultimately, as you were just describing, is not much different from any other business process project? But then that raises the issue that essentially, at that point, you're measuring efficiency, but you're not necessarily measuring the opportunity that AI presents and the innovation potential, which, of course, is far more amorphous.
Mohamad Ali: That's right. I was at Dun & Bradstreet - we are working with Dun & Bradstreet to create a whole new product that they never had before with the data that they have. And this is how I look at this: the light bulbs are going off here. Yes, we're saving money; there's the operational efficiency of this approach. But this is also a totally net new opportunity.
Right. And so that is a longer-term thing. We don't know how that's going to play out. We don't know how that business is going to grow. So Dun & Bradstreet has a lot of data. We're working with them using AI to optimize procurement. And so they're going to go build their procurement solution, take it to the world. Another example is, you know, we, as some of you might know, we actually have a quantum computer.
And, you know, it's actually the second generation. We have a quantum cloud. And so we're probably the only consulting company who can help you build a quantum solution. So, there's a financial services company that wants to build a portfolio mix risk-reward solution using quantum technology. If they're able to do that, they will have something materially better than anything that exists today, and that has billions of dollars of implications to them.
And so they're investing today for something that might not pay off for 3 or 4 years, before they have a quantum algorithm that does what they need it to do. And so you really can't measure that in the near term, right? So you're right, some AI projects are like that. They're much longer term. It's harder to measure them. But it's not unique to AI, because that quantum project is similar.
You know, you can't measure it because it's long term. But the upside opportunity, I think, for both AI and quantum is huge. And again, it's another reason why I came back to the company: IBM Consulting is one of the few places where you can do cloud, gen AI, and quantum. We have the capability, we have the people.
We have the skills to do that.
Michael Krigsman: The measurement of these innovation-potential kinds of projects is really not that different from how you would measure or examine traditional R&D, or any other kind of technology project where there is this great innovation potential.
Mohamad Ali: I agree with that. I think the one difference here, though, is that the upside is so much larger than almost anything we have seen recently, right? I mean, in my career, I think the internet was the big thing, and now I think AI is going to be as big as, if not bigger than, the internet. And then I think quantum is going to, you know, be the next leap.
Right? And so I'm hoping that in my lifetime I get to see three of these things. But we're in the middle of one right now. And so I think while the measurements are similar, the scale of what's possible is off the charts.
Michael Krigsman: Simone Jo Moore comes back and thanks you for your earlier comments. And I liked your comment about data and nutrition. On the subject of data, she says, "How can you teach AI to not be a data junkie, but rather be knowledge wise?" And I'll ask you to answer very quickly.
Mohamad Ali: Gen AI is a data junkie today. But I think there are algorithms being developed - some of them in a research facility here - that are much, much more focused. So the large models have an incredibly important place, but there's also this whole wave of small models being developed: knowledge-based models, you know, models that can read books and understand them and become experts in a particular area.
So you can imagine a calculus book, right? You can train this model to be a calculus expert. And so these things are starting to come, and I think there'll be more and more of them. That's sort of an exciting evolution of AI.
Michael Krigsman: This is from Lisbeth Shaw, who says ROI is a simplistic way to, quote-unquote, "value investment." Can ops people look at value creation and value change in a broader way? How does that factor into the value discussion of using and operationalizing AI tools? Well, we could spend a few hours on this one.
Mohamad Ali: Very good question, right. Yeah, I mean, I think she's right. ROI is a simplistic approach, right? It doesn't take into account, you know, the impact of all this computing on society. It's a limited scope, and it's what we do when, you know, we're trying to answer questions and problems quickly. But you're right, there are a lot of implications here.
There are implications for society; there are environmental implications. And so there is a bigger calculus that needs to be done. We have a whole practice within IBM Consulting that focuses on the, you know, environmental impact of a lot of what we do - all this computing. So it's a good question. I don't think it's a 60-second answer, but it's a good question.
Michael Krigsman: We have another question from Arsalan Khan, and this is an interesting one. Again, really quickly, please. When reorganizing and laying off, employees are asked to train the people who will replace them. Now we are asking employees to train AI since their job is going to change. Why would anyone train AI so that they can be replaced?
Mohamad Ali: So what we are doing - and of course, not everybody's doing this, right? - what we're trying to do is enable our 160,000 people to do higher-value work, and then do that higher-value work for our clients. Right? So we're actually trying to not only change our skillset but also change the kind of work we do.
And we think the kind of work consulting companies do is going to have to change, right? They're going to have to become higher-value types of things, because that's what the clients are looking for. And so what does this mean? If we are building an AI solution like Consulting Advantage that lets our engineers write in half an hour a piece of code that used to take an hour, I don't think that means we need half the engineers.
What it means is that now, all of a sudden, we have engineers who can do things at a higher level, faster. And so we should be able to do more. So one of the things that we're doing in parallel to all of this work is training our employees, because if these employees are trained on how to use these tools, then they are more valuable to us, to our clients, and to other people.
If they leave the company and go someplace else, they're valuable, right? So think about a machinist. There used to be a time when, you know, if you were a machinist, you had some amount of value. But if you're a machinist with CNC certification, you're even more valuable to the company or outside in the world.
So what we're doing in parallel to all this is training people, because if people feel like they're getting something personally out of all this work, they're going to be a lot more willing to participate in the work, because at the end of it, they'll be valuable to you. And if they're not valuable to you, they'll be valuable to somebody else.
So I think you have to invest in the employees simultaneously with doing this work, improving the processes, and making the processes more efficient.
Michael Krigsman: So, you're training to benefit the company, of course, but this is also beneficial to the person being trained. So essentially, you're helping with the reskilling that might be necessary.
Mohamad Ali: That's right. I mean, reskilling is going to be absolutely essential. As one of my colleagues says, the people who know gen AI and use gen AI are the people who are going to be replacing the people who don't. Right? So it's really important for people to learn how to use these tools.
I mean, it's like Excel: you know, if you don't know how to use Excel, you're less valuable than the person who does. So we actually have three of these training initiatives. We have something called SkillsBuild, where we've committed to training 2 million people over the next three years on gen AI. And this is just for the community at large, right?
Underprivileged communities, people who don't have access - now they have access. Second, internally within the company, we're training tens of thousands of people - hopefully all 160,000 by the time we're done - on gen AI tools so that they're more valuable. And thirdly, we have a talent transformation practice that will go out to clients and help them train their employees.
So I'm actually really glad this question was asked, because if you're not training your employees and investing side by side with them, they're going to see all the value accruing to the company and not to them. If you accrue value simultaneously to both parties, you'll get to a better place.
Michael Krigsman: You've alluded several times to ethical AI, responsible technology. I'm not sure if you've used those terms, but you alluded to the concept. Any thoughts on that bundle of issues? Very quickly.
Mohamad Ali: The world is about to change dramatically. Gen AI is going to play a big role. Ethical AI, responsible AI, is extremely important. And it's actually something IBM has been doing for decades now. Here in IBM Consulting, we're applying that to our projects, we're taking it to our clients, and there's a whole toolset for how to do this.
And I can't say that everybody's applying these toolsets appropriately. It's going to be important, and not only for doing things the right way - it's also going to keep people from getting into trouble. I mean, we've already seen some of the issues around gen AI, and enterprises can't afford to have these issues. Ethical AI is really important, and responsible AI as well.
Michael Krigsman: And very quickly. Any final thoughts on operationalizing AI for the enterprise that you'd like to share?
Mohamad Ali: I feel like I've shared a lot, and I'm happy to share more, but we're out of time. I want to thank you for having me on the show. I want to thank the audience for all these great questions, especially Simone, for asking two questions, and Arsalan. This was great. Thank you.
Michael Krigsman: And a huge thank you to Mohamad Ali. He is the Chief Operating Officer of IBM Consulting. And a huge thank you to everybody who asked such excellent questions. Before you go, please subscribe to our newsletter and subscribe to our YouTube channel. We have incredible shows coming up. Check out CXOTalk.com. At the bottom of our homepage, you can see who's coming up.
Michael Krigsman: I mean, it's amazing. So join us. Thank you so much, everybody, and I hope you have a great day, and we will see you again next time.
Published Date: Apr 05, 2024
Author: Michael Krigsman
Episode ID: 832