Enterprise AI Strategy: Planning to Execution, with Citi

On CXOTalk episode 814, gain insights on enterprise AI strategy, integrating technology, focusing on outcomes, and fostering a culture of innovation.

43:14

Nov 17, 2023
15,497 Views

In this episode of CXOTalk, explore the evolving role of artificial intelligence (AI) and machine learning (ML) in the corporate environment, featuring Murli Buluswar, the Head of Analytics for the U.S. Personal Bank at Citi. 

The discussion focuses on integrating AI and ML into core business strategies and operations, offering a nuanced perspective on leveraging these technologies to improve decision-making and business outcomes.

Key learnings from this episode include:

  • Integrating AI/ML into Business Strategy and Operations: Gain insights into how AI and ML are reshaping strategic planning and operational processes, emphasizing the need for a holistic approach that goes beyond simply adopting tools.
  • Emphasis on Outcome-Driven AI Initiatives: Discover the importance of focusing on measurable results in AI/ML projects, highlighting the significance of financial outcomes and the value of cross-functional team collaboration for impactful implementation.
  • Fostering a Culture of Innovation and Curiosity in AI Adoption: Learn about the challenges and strategies involved in creating an organizational culture that is receptive to AI, underlining the necessity for continuous learning, innovation, and curiosity to fully harness AI's potential in business.

Join this thought-provoking episode with Murli Buluswar, which examines the strategic and operational facets of AI in the enterprise and uncovers key insights and actionable strategies for businesses aiming to navigate the enterprise AI landscape.

Murli Buluswar is an ‘intrapreneurial’ analytical and strategic C-Suite Financial Services leader who has successfully built alignment and achieved commercial outcomes by challenging conventional processes while harnessing data-driven intelligence.

A former member of The Operating Committee of The American International Group (AIG), Murli has influenced Fortune 100 board members and senior leaders to expand innovative thinking and execution through data insights. As Chief Science Officer for AIG, Murli built a (data) Science team to ‘serve as a catalyst for consistent evidence-based decision making at AIG.’ 

As Head of U.S. Consumer Analytics (USCA), Murli is leading the execution of the vision to ‘be a critical partner in achieving a quantum leap in customer intelligence’ across Citibank’s 30MM+ customer base. USCA is realizing this vision by delivering transformative outcomes on three dimensions: Asking deeper (customer level) business questions, harnessing wider (real-time) data assets, and enhancing algorithm sophistication.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Welcome to CXOTalk Episode 814. Today, we're discussing enterprise AI from planning and strategy to execution. Our guest is Murli Buluswar, who is the head of analytics for the U.S. Personal Bank at Citi. With that, Murli, welcome back to CXOTalk. It's great to see you.

Murli Buluswar: Michael, it's just such a delight to reconnect with you. It's almost been a year since our last conversation. So much has changed in the world since then. I'm thrilled to be spending some time with you this afternoon. 

By the way, congratulations; 814, that's a big number. Well done.

Michael Krigsman: Murli, tell us about your work at Citi.

Murli Buluswar: Our mission statement for our team, Michael, is to deliver faster, wider, and deeper data-driven insights to achieve superior decisions and material financial outcomes. Why we care about that mission statement is simply because we see our remit as orchestrating outcomes through the power of data-driven intelligence and curiosity and recognize that success isn't the delivery of an analysis or a tool. Rather, success is measured by what is the change end-to-end that we've been able to effect through cross-functional collaboration, whether that change is measured in financial outcomes that are material or whether that is measured in adoption and engagement with some of our solutions.

AI / ML strategy in the enterprise

Michael Krigsman: A lot of times with technology, people talk about the tools. Yet, you're very focused on the end outcome.

Murli Buluswar: One of my subtle quibbles with the world today is, even in the world of gen AI, we spend quite a bit of time talking about which large language model is better than the other. I'm not suggesting that isn't an important conversation at all. That is a very important conversation. 

My perspective, Michael, is if we were to zoom out just a tad bit, we all recognize that machine learning and artificial intelligence are fundamentally and profoundly reinventing society in ways that have not happened in the history of humankind. If we were to juxtapose that with what is the role that this capability is playing in reinventing institutions to make them fit for purpose in this world of AI, the answer to that (certainly from an outside-in perspective) isn't as consistent as perhaps we'd like for it to be.

The opportunity that I see is for this capability to reimagine problem and opportunity statements in some important part of the business and think through how we could architect that through data-driven intelligence. Recognize that, while through a pure functional lens you may not have end-to-end accountability, you must take responsibility to answer the question of, "Why did this matter? How did this matter? And how do we measure those outcomes in a way that everybody can agree upon as being material and meaningful and a step-change?"

Strategic planning for enterprise AI / ML initiatives

Michael Krigsman: Murli, you mentioned gen AI, generative AI. When we talk about enterprise AI and machine learning, in general, how does this focus on outcomes translate into the way that you think about strategy and the planning of these kinds of projects?

Murli Buluswar: The common mindset is, "We've got a tool or a set of capabilities, so let's go figure out how we could apply that in a particular area." To me, that's actually starting at the wrong end of the spectrum.

The way I think about it is the true power of this capability is where curiosity meets data, meets insight, meets strategy, meets change. And so, for me, it's less about a use-case-by-use-case approach. Quite frankly, I'm somewhat allergic to the phrase "use case" because, at its most fundamental level, these capabilities need to rearchitect the what and how of decision-making, how people interact with tools and data, and what is sort of that interplay between human intelligence and artificial intelligence. 

In order to do that, you can't think of it as a series of individual point problems. Rather, you have to reimagine some important critical aspect of your operations and be able to envisage (6, 12, 18 months from now) what progress should look like, what would be the metrics, and then back into how you use the tools, whether that's generative AI or whether that's machine learning or, quite frankly, perhaps in an odd instance it could be an abacus. 

The implementation process for AI technology

Michael Krigsman: Can you give us an example of how this plays out when you're constructing this kind of project and what you're thinking about?

Murli Buluswar: We try to have a very methodical systems-driven approach to identifying opportunities, Michael. We ask a series of questions. The questions are:

  • Number one, is the process manual (i.e., is it heavily dependent upon human interpretation of some content)?
  • Number two, is the coverage partial? If you're listening to calls to understand customer sentiment, are you listening to a fraction of the calls?
  • If you make a mistake, is there a meaningful regulatory or customer implication?
  • Is there an out-of-pocket cost associated with this that is non-trivial?
  • Has the process remained essentially unchanged over the last couple of decades?

If you apply that framework, you start thinking about individual problem statements, and you think about how you could rearchitect that process in a meaningful way.
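The screening questions above can be sketched as a simple scoring rubric. This is a hypothetical illustration of the framework, not Citi's actual tooling; the criterion names and the equal weighting are assumptions:

```python
# Hypothetical rubric for screening a process as an AI/ML opportunity,
# based on the questions above. Names and equal weights are illustrative.

CRITERIA = [
    "manual",              # heavily dependent on human interpretation
    "partial_coverage",    # e.g., only a fraction of calls is reviewed
    "error_impact",        # regulatory or customer consequences of mistakes
    "out_of_pocket_cost",  # non-trivial direct cost
    "stagnant",            # largely unchanged over the last couple of decades
]

def opportunity_score(process: dict) -> int:
    """Count how many screening criteria a process satisfies."""
    return sum(1 for c in CRITERIA if process.get(c, False))

# A call-review process that hits all five criteria scores as a strong candidate.
call_review = {
    "manual": True, "partial_coverage": True, "error_impact": True,
    "out_of_pocket_cost": True, "stagnant": True,
}
print(opportunity_score(call_review))  # 5
```

In practice each criterion would carry its own weight and evidence, but even a flat count like this turns the framework into a repeatable screen rather than an ad hoc judgment.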

One simple example is many institutions, whether they're B2B or B2C, have sizable marketing investments in the interest of attracting new consumers or customers, as it were. We make those allocation decisions of that budget, historically, based on a set of artifacts.

The questions that we ask within the context of Citi are: Number one, could we bring more specificity to understanding how an increase or a decrease in the budget would have specific financial implications and what tradeoffs it would imply? Number two, what is the best decision possible given a set of constraints? Number three, the macro-economic environment is changing on a fairly consistent basis, whether through interest rates or other factors. 

If you start taking some of these dimensions into consideration, you realize that part of the solution is to be able to build a software tool that can help you understand:

  • What's the best outcome possible?
  • How do you translate an economic metric such as net present value into an in-year financial P&L?
  • And how do you create transparency so that analytics, product, marketing, finance, and other functions can have access to the same set of knowledge to make more sophisticated decisions? 

In order to serve that macro problem statement, we actually built a software solution that helps us understand answers to these questions at a remarkable level of granularity. Just giving you one example, but obviously happy to unpack others.
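One way to make the budget tradeoff concrete is a marginal-return allocation: assuming each channel has diminishing returns, assign each increment of budget to the channel with the highest marginal NPV. This is a toy sketch with made-up channels and response curves, not the software described above:

```python
import math

# Toy diminishing-returns response curves: NPV as a function of spend.
# Channel names and coefficients are illustrative assumptions.
CHANNELS = {"direct_mail": 3.0, "digital": 5.0, "partnerships": 2.0}

def npv(channel: str, spend: float) -> float:
    """Concave response curve: each extra dollar earns a bit less NPV."""
    return CHANNELS[channel] * math.log1p(spend)

def allocate(budget: int, step: int = 1) -> dict:
    """Greedily give each budget increment to the channel with the highest
    marginal NPV. For concave curves this greedy allocation is optimal."""
    spend = {c: 0.0 for c in CHANNELS}
    for _ in range(0, budget, step):
        best = max(CHANNELS,
                   key=lambda c: npv(c, spend[c] + step) - npv(c, spend[c]))
        spend[best] += step
    return spend

plan = allocate(budget=100)
total_npv = sum(npv(c, s) for c, s in plan.items())
print(plan, round(total_npv, 2))
```

The same loop also answers the "what if the budget shrinks" question: rerun `allocate` with a smaller budget and compare the two plans' total NPV to quantify the tradeoff.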

Aligning the AI business plan with organizational business objectives

Michael Krigsman: It sounds like this is your mechanism for ensuring that, in general, technology projects align to the many different dimensions and business objectives that you have.

Murli Buluswar: If I were to step back, you think about people in functions similar to this. In my experience, we get stuck in one of three gears. 

Gear number one is being a pure sort of service orientation mindset of "You ask me a question and I answer it." It's valuable from a tactical standpoint, but it's not necessarily shaping and architecting the strategy in a meaningful way.

Gear number two tends to be "Goodness, I've got to get my data infrastructure and architecture perfected. We need to be cloud native." That's a two-, three-, four-, five-year process of $100+ million investment. Up until then, we cannot deliver strategic material value in the here and now.

Gear number three tends to be a little bit of, in my view, an excessive obsession with the latest technologies or capabilities, whether that was blockchain/crypto (the day before yesterday – perhaps Metaverse yesterday) and generative AI. 

All of these are profound and meaningful, so the question isn't whether they're valuable or not. The question is, how do you develop a blueprint for what success looks like in critical, material aspects of your operations? Then how do you back into thinking about how these tools can be in service of that objective? 

Which is why my firm view is that the solution doesn't start with the tool. Let's actually set the tool aside. The solution starts with imagination and curiosity and being able to rethink how decisions should be made. Then asking the question, how could we achieve that promised land, and what tools, technologies, and data assets do we need in service of achieving that broader objective? 

Michael Krigsman: Please subscribe to the CXOTalk newsletter. Hit our site, CXOTalk.com. Subscribe to the newsletter. Subscribe to our YouTube channel.

How to incentivize innovation, curiosity, and AI culture 

We have a question from Twitter from Arsalan Khan—who always asks the best questions. He's a regular listener of CXOTalk—specifically about this point on curiosity that you made. He said, "When it comes to AI, how do you incentivize curiosity among employees? Is it a personal attribute, or can you teach folks to have the kind of intellectual curiosity that seems so important for what you're doing?"

Murli Buluswar: Most things in life, the answer is it's a little bit of both. I'm not hedging, but I do believe some of it is our innate mindset of, how do we see our identity? If we were to see our identity as factory workers or providers of analytics that somebody else is asking us questions about, then we're not giving ourselves the room to be able to have that imagination. 

For me, it is part, innately, how are people wired? Part, how do we create a culture where we do value that pragmatic innovation and that mindset of challenging and asking questions and being able to reimagine how a particular issue could be resolved? And some of it is also probably incentives. 

Sometimes, the hardest part is getting started. In my view, the more real we make it, the more concrete, pragmatic examples that people have to latch onto, they can then gain some of that inspiration and connect that to their day-to-day routines to be able to apply some of that to the best of their abilities. 

Everybody has their own set of core competencies and capabilities, so not everybody is going to be at the same point. However, as long as we're steadily progressing in our individual and collective journeys at an appropriate rate, we're making progress.

How are AI implementation projects unique?

Michael Krigsman: Do you see AI and machine learning projects as being different than other technologies? If so, how? 

Murli Buluswar: Yes, and the reason it is different is the power of these capabilities continues to accelerate in ways that we probably could not have even imagined ten years ago.

You think about generative AI. Large language models have existed for a while. But that tsunami of gen AI has really hit us in a pretty profound way in the last 12 months or so. 

Why is that? In part, it's because the compute environment continues to make step-change progress that allows you to process at a faster pace than before. The acceleration in the availability of data has been a benefit for the better part of the last several decades, but the pace at which the definition of data is expanding, and at which its availability and access are expanding, continues to accelerate. 

It's really, to me, that sort of nexus of compute and the sophistication of algorithms and the access to data that is really thrusting us to a whole new level of opportunity in terms of the questions we could ask and answer, and how we could reimagine the world being different in 6, 8, 12, 24 months from now.

Investment planning for enterprise AI applications

Michael Krigsman: How do these differences then translate into what you do, thinking about the opportunities that you mentioned, going into the future and the way that you plan and invest?

Murli Buluswar: Number one, I have a fundamental belief in this notion of pragmatic innovation. You could think about innovation as roaming through the forest and being a pure explorer and hoping, perhaps, that you find something interesting and having innate satisfaction in the journey itself.

I have a slightly different philosophy toward innovation. My philosophy toward innovation is very much about that disciplined, rigorous orientation of having a perspective on some important change that you want to be able to architect and working your way back.

That allows my team and me to start with what is the question, what is the opportunity, what is the problem statement. Can we reimagine why our worlds, our collective worlds, would be better in a meaningful way if we were to solve that? Then get to the brass tacks of how we bridge the gap from where we are to where we'd like to be 8, 12, 14 months from now. 

That's how we approach most of our innovation. In that universe, you have an operating discipline that is very clear on how we measure success. Is it financials? Is it some form of adoption? Is it some form of improvement in risk and controls? Is it a noteworthy reduction in customer friction, some of which can also be connected to financials?

Having a very clear sense of:

  • Why does this matter?
  • How would we measure whether this is having the impact?
  • What is the scale of that? 
  • And how could we back into how we bridge the gap from where we are to where we would like to be? 

That's the pragmatism with which we approach this, and that allows us to have a keener sense of measurement and an understanding of what success looks like. Not that we wouldn't course correct, but you do have a perspective on what success looks like.

I'll give you one simple example. We all know that interest rates are going up. They have been for the better part of the last year and a half.

For banks, the problem statement of, "How do I get smarter in my pricing strategies for my deposit products?" is an important question. 

How do you answer that? Historically, in many institutions, they probably have made decisions based on instinct, and it also perhaps wasn't as big of a question, historically, given that the interest rates were quite low for the better part of the last decade-plus.

But in a rising interest rate environment, those questions of:

  • How do I understand the tradeoffs of offering a higher interest rate in this product?
  • How will that affect my deposit growth, my P&L?
  • How will that affect my customers and my engagement on a suite of products? 
  • How do I put numbers around that? 
  • How do I match human intelligence with data-driven intelligence?
  • Can I build a software solution that can simulate, forecast, and tell me what's the best outcome possible?

Those sets of questions are what guide us to say, "Gosh, we can go back, and we can do all of the deep dive analytical work from an algorithm development standpoint. But then we can also build software that can bring more agility, transparency, clarity, and sophistication to decision-making at a segment level."
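The deposit-pricing tradeoff described above can be sketched as a tiny simulation: balances grow with the offered rate, while the spread earned on them shrinks, so there is an interior rate that maximizes in-year P&L. Every parameter here (base balances, elasticity, margin) is an illustrative assumption, not a real pricing model:

```python
# Toy simulation of the deposit-pricing tradeoff: a higher rate attracts
# more balances but compresses the spread. All parameters are illustrative.

def simulate(rate: float, base_balance: float = 1_000.0,
             elasticity: float = 50.0, margin: float = 0.05) -> float:
    """In-year P&L: balances attracted at this rate times the spread earned."""
    balances = base_balance * (1 + elasticity * rate)  # deposit growth
    spread = margin - rate                             # earn margin, pay rate
    return balances * spread

# Scan candidate rates from 0.0% to 5.0% in 0.5% steps and pick the best.
candidates = [r / 1000 for r in range(0, 51, 5)]
best_rate = max(candidates, key=simulate)
print(best_rate, round(simulate(best_rate), 2))
```

With these assumed curves the simulator lands on an interior optimum rather than the extremes, which is exactly the kind of tradeoff transparency the interview describes: a segment-level version would fit separate elasticities per customer segment and rerun the same scan.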

Technology and regulatory challenges with generative AI

Michael Krigsman: What are some of the challenges that emerge when you're trying to grapple with such rapidly evolving technology like generative AI, and you want to maintain that rigorous, structured approach?

Murli Buluswar: The first thing that I would say, Michael, is to not get enamored with the technology. The technology is a tool. It's in service of something that you're trying to solve for. 

The issue that I see across institutions (having friends and colleagues in a variety of industries) is this notion of "We need a few gen AI use cases." I can, to some extent, empathize as to why we would want that because we're trying to build that muscle of saying, "We want to engage with this capability, and let's create some momentum." 

On the flip side, that's sort of like saying, "I've got a hammer. Now, could you help me find a nail?" 

Versus really the way you'd start with is saying, "I could actually imagine what my home would look like. I could imagine that I have a vision of what my home would look like. And in order to create that structure, I need wood, I need hammers, I need nails, I need saws, I need a bunch of different things. How can I start coalescing those things together? And what does my project management and operating discipline look like? What are my measures and milestones every step of the way?"

That's a different way of thinking. My view is, do not get obsessed with the technology. Understand and appreciate what it could do, but start with the opportunity and problem statement, and work your way toward why it's worth solving and how confident you are. 

One of the questions that we think about is toggling time, value, and certainty and how we think about prioritizing a problem or opportunity statement as a way of building a discipline of having a healthy portfolio that collectively can be very powerful in rearchitecting critical aspects of our business.
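That toggling of time, value, and certainty can be sketched as a simple portfolio-ranking heuristic. The scoring formula, the initiative names, and every number below are hypothetical illustrations of the idea, not Citi's prioritization method:

```python
# Hypothetical prioritization sketch: rank initiatives by expected value
# per unit of time-to-impact. All inputs are illustrative assumptions.

def priority(value: float, certainty: float, months: float) -> float:
    """Expected value per month: higher value and certainty rank an
    initiative up; a longer time-to-impact ranks it down."""
    return (value * certainty) / months

portfolio = {
    "call summarization":  priority(value=5.0, certainty=0.8, months=6),
    "pricing simulator":   priority(value=9.0, certainty=0.6, months=12),
    "marketing allocator": priority(value=7.0, certainty=0.7, months=9),
}
ranked = sorted(portfolio, key=portfolio.get, reverse=True)
print(ranked)
```

A real portfolio discipline would also balance the mix (some fast, certain wins alongside bigger, less certain bets) rather than rank on a single score, but the sketch shows how the three dials interact.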

The other thing that I would say, Michael, is let's recognize that, at the end of the day, you're reinventing roles. You're reinventing how people, human beings, interact with data and tools. 

There's a human element of it because you're reshaping what decisions are made and how decisions are made. That is a nontrivial cultural change as well because our sense of identity will feel a little bit different as this capability continues to progress. 

And so, our ability to adapt, our ability to have a bit more of a growth mindset, our ability to recognize that yesterday's paradigm might not be perfectly portable to today or tomorrow, is critically important. And our ability to think about how that rearchitects our human structure within institutions, how people interact with each other, and what their roles are, are absolutely critical. 

You'll notice in my answer, I didn't actually talk about data infrastructure. I didn't talk about large language models. I didn't talk about which cloud partner and which tools. 

The reason I didn't do that, Michael, is it's not that those aren't important. They absolutely are. 

In my view, there's so much oxygen being given to those conversations and, on a proportionate basis, in my perspective, there's not enough discussion, energy given to these concepts of recognizing that you're trying to rearchitect how institutions work to be fit for purpose in the next decade and beyond. That's a fundamentally different mindset versus necessarily only exclusively focusing on the tangible and the concrete of which tool am I using or which large language model am I using.

Michael Krigsman: Does it feel weird not to talk about the tools and the technology when you're having these discussions?

Murli Buluswar: It's important to talk about them, and I'm not suggesting that I do not talk about them. My point is that those, the discussion about tools and technologies, take up so much of the energy that not enough is being given to the more profound, fundamental, and strategic implications of these capabilities and how they will reshape institutions and, ultimately, society.

That's a very powerful question, and so the magic is in being able to connect the concrete, tactical nature of the tools to saying, "All right. In the next 6, 12, 18, 24 months, in a very well-defined, specific way, could I articulate what my milestones are? What are my key performance indicators of success? What is the financial implication? What does that mean in terms of cultural evolution in terms of how people need to be able to interact with this technology? What is that harmony, that interplay between human intelligence and machine intelligence?"

All of those, even though they might sound a little bit abstract, have incredible, pragmatic implications in the here and now. And you need to be able to envision that and put parameters around that, and then have an operating discipline, recognizing that there's not 100% certainty. 

It's still a probabilistic world, so you're going to have to make some adjustments as you gain more information. But it is to acknowledge the fact that the implications of this can be much more profound and strategic. You've got to be able to see that end-to-end.

In many ways, when I think about what guides my team, there are three axioms that we believe to be non-negotiable.

Number one is this notion of follow-partner-lead. If you were to ask me to put percentages, I'd say probably rough order of magnitude 70% of what we do is lead.

Lead, for me, is being able to reimagine some important aspect of something that we do and being able to guide all of these different teams, these functions for whom that is relevant, and to be able to guide that change, orchestrate that, and achieve a set of measurements. Playing that sort of conductor role, or, in the context of rowing, that coxswain role where you're actually harmonizing functions in service of a bigger commercial objective, is one.

Number two is this notion of analytics as a software, which is the recognition that it's no longer good enough to build models or to deliver algorithms. You have to be able to understand what is that last mile of the adoption and the interaction and the interplay. Do you need to build software tools and such in order to embed these algorithms to be able to orchestrate that change?

The third axiom is measuring success through the lens of a CEO, CFO, and head of audit. 

  • CEO: materiality. 
  • CFO: it's not real unless and until it hits the financials, as confirmed not just by my team's perspective but by the finance function and other partners involved. 
  • Head of audit: traceability and attribution, to be able to answer the questions: Are we architecting the change that we intended to? Do we have the right measurements in place? And is this materially reinventing a critical aspect of our firm?

Impact of generative AI on banking and financial services

Michael Krigsman: Julio Gonzales on LinkedIn has a couple of questions. "What's your opinion of the influence of generative AI on banks and financial services in regard to data and data analytics?"

Murli Buluswar: The capability of generative AI is going to be very powerful across industries. If you think about what (at the core) generative AI does, there's some form of code and content generation, there's some form of content extraction and summarization, and there's some form of conversational intelligence.

These are not necessarily mutually exclusive by any stretch, so we all recognize that there's overlap in these three categories that I just described. But at a slightly macro level, those are the three things that generative AI can be profoundly influential in.

And so, whether it's banking or any other industry, is code being created or do you need to create content (whether that is advertising, brochures, or what have you – things of that nature) from the data that you have? Well, that problem statement spans pretty much any industry that we could think of.

The second is content extraction and summarization. Well, do you have people reading documents trying to synthesize that? Do you have people that are having to process a bunch of documents to look for particular fields and particular pieces of information and are looking to be able to pull that out in the context of some other issue that they're trying to resolve?

Then last but not least is, do you have data where you want to know whether it is customer behavior or whether it's operating metrics? What's different today than yesterday? 

Do you have a curiosity to want to be able to understand that without getting inundated with spreadsheets, tables, and such where you're looking for the needle in the haystack rather than being able to say, "I want that needle in the haystack to be identified for me so that I can then understand what it is, why it matters, or whether it matters, and what I do as a consequence"?
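That "identify the needle in the haystack for me" idea can be sketched as a minimal anomaly flag over operating metrics: surface the series whose latest reading deviates sharply from its own recent history. The data, metric names, and threshold are illustrative assumptions; a production system would do far more:

```python
# Minimal "find the needle for me" sketch: flag metrics whose latest value
# is far outside their recent history (z-score). Inputs are illustrative.

from statistics import mean, stdev

def anomalies(metrics: dict, threshold: float = 3.0) -> list:
    """Return metric names whose latest reading sits more than `threshold`
    standard deviations from the mean of the preceding observations."""
    flagged = []
    for name, series in metrics.items():
        history, latest = series[:-1], series[-1]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(latest - mu) / sigma > threshold:
            flagged.append(name)
    return flagged

data = {
    "call_volume":   [100, 98, 103, 101, 99, 102, 100, 340],  # sudden spike
    "approval_rate": [0.71, 0.70, 0.72, 0.71, 0.70, 0.72, 0.71, 0.71],
}
print(anomalies(data))  # only the spiking metric is surfaced
```

The point is the inversion of effort: instead of a person scanning spreadsheets for the outlier, the outlier is handed to the person, who then spends their time on why it matters and what to do about it.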

If you think about those three categories, practically every industry that we know of, whether it's B2B or B2C, has profound implications for it. Yes, banking is probably, in that sense, no different than any other sector in terms of how profound this capability could be. 

However, let's recognize that in order to achieve that, you've got to be able to think. You've got to be able to be a systems thinker. 

You cannot have a functional lens of "I'm going to build this algorithm," or "I'm going to use this LLM to do this, that, and the other." You have to be able to think, "How do I measure success end-to-end?" And you have to recognize every step of the way.

The development of the models is an important piece of it. It is one important piece of it.

If you're trying to orchestrate end-to-end change, you've got to be able to be a systems thinker and a systems operator. That to me is critically important.

Where I suspect many firms are going to miss this bus in the near to medium term is they say, "Well, let's go do this use case and this, that, and the other." But there's no real cogent strategy. It becomes a one-off, piecemeal approach versus thinking about how you want to rearchitect some critical aspect of your operations, whether it's how you interact with customers, whether it is how you process paperwork, or whether it is how employees on a large scale consume data, reports, and such. 

Have the courage to be able to reimagine that. Break that down into bite-sized chunks and work your way backward toward how you can achieve that in a systematized way.

Michael Krigsman: This is from Tony Clark on Twitter who says, "How do the potential headwinds of AI regulation play into your approach and planned implementation velocity for the Citi AI program? Is there a thesis that underlies where you're focusing and where you're not?"

Murli Buluswar: We're breaking new ground as a society. It's not just the banking industry that is regulated. Many other sectors are also regulated. 

Quite frankly, regardless of whether an industry is regulated or not, the bar for all of the institutions within that industry should surpass anything that any regulator could set for you.

Why is that important? It's important because, number one, we tend to hold AI to, in my view, a whole different and higher standard than we hold HI (human intelligence) to.

Number two is the underlying data that you have has inherent biases in it. I think we all know about it, and so let's not be oblivious to it.

Any model – AI or, more specifically, gen AI – has the risk of perpetuating bias that exists in your data. One risk is that it can accelerate that bias.

The key is, what guardrails do you have to understand false positives, false negatives, hallucinations, and things of that nature, and how do you ensure that you have a behind-the-scenes discipline around understanding exactly what the machine is doing? How do you have humans in the loop so that you don't have things running amok, but rather, you're bringing that sensibility of recognizing that the magic is really at that interplay between human intelligence and artificial intelligence? 

And so, it's not about a pure AI or pure HI. It's about how you find that harmony between the two in a way that brings the best of both because, let's face it, as much as we admire and appreciate machines, essentially, what they do is compute stuff faster based on historic data. They're not able to draw inferences. They're not able to have imagination. (Not as of yet. Not that I know of.) And they're not able to rethink something.

Impact of regulation on enterprise AI initiatives

Michael Krigsman: Any thoughts on the impact of potential regulation on slowing down AI efforts? Any thought about that – just very quickly? 

Murli Buluswar: I don't think of it as slowing down as much as I think of it as bringing measured discipline and having a consistency in how an entire industry approaches something and, ideally, even across industries. I think of that as a good thing. 

One of my beliefs is that, in this first iteration, the first few iterations, let's focus on problem statements that are a little bit more internally focused that don't have customer-facing implications. And let's recognize that, yes, there is a crawl-walk-run dimension to this. 

And let's not think of it in the world of regulation slowing things down. Let's frame it as regulations being a perspective that is adding more operating discipline, consistency, and fairness to how we approach this.

Michael Krigsman: Okay. Fair enough. We have a question from Arsalan Khan, again, on enterprise architecture – a really interesting one. He says, "Data plus enterprise architecture gets you ahead in your AI journey, and AI helps in disciplined, data-based decision-making. How do you challenge or manage an executive who thinks AI should be just another project under digital transformation?"

Murli Buluswar: Clearly, a pretty steep learning curve for many people or anybody who has that perspective. Quite frankly, if one steps back from their day-to-day work (wherever they work, whichever industry they're in), you can see you cannot escape the fact that this capability has profound implications.

For me, there's a learning and education process for all of us, from different angles in this space, to understand and recognize our own paradigms and biases, and to rethink to what extent we need to shed them because the future is going to look different. We need to build new skills and new ways of thinking. 

That to me is the most critical question because, at the end of the day, this capability is rearchitecting our sense of identity, how we work, how long we work, what we do in our work. For me, it starts with those fundamentals versus saying, "Gosh. You know what? This is another hammer, and I'm going to use it to hammer a different kind of nail than I did before."

Michael Krigsman: How do you take the concept of an AI project and turn that into a tangible initiative that accomplishes all the goals? Again, very quickly.

Murli Buluswar: I don't think about projects as AI projects. I think of it as a business problem/opportunity we seek to solve with a lens on what does success look like 6, 12, 18 months out. Then back into, how do we get from here to there?

Part of the answer is the algorithm development and the data that goes into the algorithm. An even bigger part of that journey is coordination across different functions and the orchestration of the outcomes. 

Understanding the technology underpinnings; understanding the legal, compliance, regulatory underpinnings; understanding the strategic underpinnings; understanding the human capital underpinnings; and being able to connect that to metrics that we can smell, touch, see, feel, and experience; that to me is much more critical. The algorithm development, in that sense, is important, but just one piece of it. 

Measurement and metrics for evaluating AI solutions

Michael Krigsman: Are there unique aspects of these AI projects, business initiatives, that are different? For example, if you're talking about the measurement and the metrics, AI is different, and so how does that get reflected in this?

Murli Buluswar: It's less about whether the metrics are different. It's more about a recognition of the art of the possible. 

I'll give you one example. I've been thinking about this quite a bit. Many institutions get streaming data on a daily basis, and that data can be related to specific operating metrics or it could be related to customer behavior.

In the context of banking, it could be where somebody is spending money. How much of their bill are they paying? Are they logging onto their mobile app? In the context of metrics, it could be, how does my digital funnel look, and so on and so forth. 

I've been thinking about how, today, that information is consumed through a bunch of different reporting platforms. Sometimes we have automated alerts. But for the most part, they're consumed, and you start searching for it.

What I thought about was, how could I reimagine this process? How could I reimagine this process to have an anomaly detection engine that'll give you exactly a view on what is different today than yesterday without your having to ask or go searching for it? 

That is a capability that is solvable through the lens of generative AI because you have all of your streaming data. You can have a conversational intelligence veneer on top of it.

Now you could code in how you define anomalies. Then you don't have to know which metric you're searching for. You just know that things are different, and you know what's different. 

Then this engine can even answer why is it different. It could also ultimately build predictive capabilities into it.
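The anomaly detection engine Murli describes – surfacing what is different today versus recent history, without the user having to search for it – can be sketched as a simple rolling z-score check over daily metrics. This is an illustrative sketch only; the metric names, data shapes, and threshold are assumptions, not a description of Citi's actual implementation:

```python
from statistics import mean, stdev

def find_anomalies(history, today, threshold=3.0):
    """Flag metrics whose value today deviates sharply from recent history.

    history: dict mapping metric name -> list of recent daily values
    today:   dict mapping metric name -> today's value
    Returns a dict of anomalous metrics with their z-scores.
    """
    anomalies = {}
    for metric, values in history.items():
        if metric not in today or len(values) < 2:
            continue
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # no variation in history; z-score undefined
        z = (today[metric] - mu) / sigma
        if abs(z) >= threshold:
            anomalies[metric] = round(z, 2)
    return anomalies

# Hypothetical daily banking metrics (e.g., app logins, funnel conversion)
history = {
    "app_logins": [1000, 1020, 980, 1010, 995],
    "funnel_conversion": [0.12, 0.11, 0.13, 0.12, 0.12],
}
today = {"app_logins": 1005, "funnel_conversion": 0.02}

print(find_anomalies(history, today))  # only funnel_conversion is flagged
```

A conversational layer on top of output like this would then let a user ask in plain language why the flagged metric moved, which is the "why" and predictive capability Murli alludes to next.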

How to encourage AI adoption in the enterprise

Michael Krigsman: What about the adoption aspects and creating a culture where people are understanding what actually is here in order to take advantage of it? Again, please answer very quickly now.

Murli Buluswar: Not easy because we're all victims of inertia. For me, adoption either shows up in your financials because it's been fully executed and you can measure it, or it shows up in some form of engagement with a particular platform and how they're using it. 

There can be metrics around both, and not all metrics have to be hard. Some of them can be around culture change and such. Those are equally as important as the hard financial metrics.

Michael Krigsman: Where is all of this headed at Citi as far as AI adoption and your thinking?

Murli Buluswar: The broader question is, I think there's good energy recognizing that there's a massive opportunity here, perhaps in ways that we didn't fully appreciate before the world of generative AI. 

There's a good bit of energy now to say, "Gosh, you know what? The implications of this are pretty profound. How do we build that muscle along the analogy of crawl-walk-run in order to have some material successes in the next 6 to 12 months that give us the spring in our feet to be able to then be even more ambitious in our strategy through the lens of generative AI or, more broadly, AI?"

Michael Krigsman: You had mentioned (in the past when we spoke) about issues like anomaly detection and specific use cases. Do you want to touch on that just really quickly?

Murli Buluswar: We don't have a dearth of data in most industries. We have, to some extent, perhaps a dearth of curiosity, and we have a dearth of information that is insightful and actionable.

The question is, how do you bridge that gap? You can't bridge that gap by creating more spreadsheets and more platforms where data is thrown at you, however prettily it's presented to you. 

The problem statement is, you don't know what anomaly to look for. You might not have an operating hypothesis. 

What you want to know is what's different today. And you may not necessarily act upon everything that is different today, but good golly you'd want to know. And perhaps you have a slightly different functional lens than I might. 

The concept of the anomaly detection is to say, "We can build an engine that'll tell you what's different today than yesterday or last week and last month." Ultimately, we intend to be able to build capabilities that help you understand why that's happening and what might happen in the future.

But this notion of being able to democratize access to intelligence in a way that is consumable through perhaps an interactive conversational intelligence layer, to me, is profound. And in this aspect of the world, I think that it can be wonderful in stoking that next-level culture of curiosity and access to knowledge versus access to data.

Michael Krigsman: We have heard the term "democratization of data" for a long time now. You've shifted that to "democratization of intelligence" using AI as the mechanism or the vehicle to accomplish it.

Murli Buluswar: Indeed. Well said. Democratization of data is just a thing unto itself. It doesn't accomplish anything. Democratization of intelligence is much more fundamental and a massive cultural change in how you understand your business and how you reduce the friction between understanding a decision and outcome, whatever that might be.

Michael Krigsman: With that, I want to say a huge thank you to Murli Buluswar for being our guest today. Murli, thank you so much for taking time to be here with us.

Murli Buluswar: It's been an absolute delight, Michael. I cannot believe that it's 40 minutes or 45 minutes past the hour. I always end up speaking more than I think I did or intend to. Thank you for the opportunity, and I look forward to staying connected and tracking the growth of this space with you.

Michael Krigsman: Thank you for being here. Everybody who watched, thank you. You guys are an amazing audience. You're so smart, and the questions you ask are fantastic. 

Before you go, please subscribe to the CXOTalk newsletter. Hit our site, CXOTalk.com. Subscribe to the newsletter. Subscribe to our YouTube channel. Check out CXOTalk.com, and we will see you again next time. We have amazing, great shows coming up, everybody. Take care. Bye-bye.

Published Date: Nov 17, 2023

Author: Michael Krigsman

Episode ID: 814