Generative AI Strategy in the Enterprise

Join CXOTalk episode 806 with CapitalG's Bruno Aziza (formerly Google) to navigate generative AI in business. Learn to identify key use-cases and manage costs for informed decision-making.

44:39

Sep 22, 2023
16,949 Views

In episode 806 of CXOTalk, we welcome Bruno Aziza, who was a distinguished voice at Google before joining CapitalG, to discuss practical applications of generative AI within the enterprise. 

The conversation explores the technical aspects, the importance of data quality, and the ethical considerations surrounding generative AI deployment.

Be sure to watch episode 806 for a live, nuanced discussion aimed at elevating your strategic roadmap for enterprise AI.

Bruno Aziza is a Partner at CapitalG, Alphabet's (parent company of Google) independent growth fund. He is a seasoned operator who specializes in high-growth SaaS and enterprise software. Bruno has led product, marketing, sales and business development teams across all phases of growth, from startups to mid-size companies and Fortune 10 software leaders.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Today on Episode #806 of CXOTalk, we're discussing generative AI in the enterprise with Bruno Aziza. He is a partner with CapitalG. 

Bruno Aziza: CapitalG is Alphabet's independent growth fund. 

I've been in the data, AI, and analytics space for a very, very long time. Before joining CapitalG, I was at Google Cloud.

The idea here is, how do we help this field of data, gen-AI, and analytics innovate up to its potential? That's what I'm going to be focusing on in this next role.

Understanding Generative AI in the Enterprise: A Deep Dive

Michael Krigsman: When we talk about generative AI, what exactly do we mean when it comes to the enterprise?

Bruno Aziza: If you are a CEO or CIO, you have had experience yourself in the consumer context with gen-AI chatbots and so forth. If you've looked at the recent research from Gartner and you checked out their latest hype cycle for emerging tech, you've seen that gen-AI is at the top of the hype. It's at the peak of inflated expectations. 

The bottom line is AI (or gen-AI, I should say) has hit its browser moment. It's in front of everyone. It's really captured the collective imagination. 

There are a few things you've got to consider, though, when you're dealing with it in the enterprise. Even though it's interesting from a consumer standpoint (and it's rewarding, it's an interface that responds to you and is very helpful), you've got a few things in the enterprise you've got to think about. 

The first one is it's not magic. It's probabilistic, so you've got to remember the way it works is by completing information or sentences based on a model that's been trained. There's a high level of probability involved, and it's not always correct. 

Second, you also have to think about how you orchestrate your teams around the gen-AI opportunity. 

The Human-Machine Symbiosis in Generative AI

It's not man versus machine or human versus machine. It's machines plus the humans. 

In a way, I think one of your guests recently called it the Iron Man suit, which is a great analogy. You think about how this is going to help your current employees become more productive, expand the scope of their skills, and get to outcomes a lot faster.

Now, the key trend here is research that came out today, Michael, so we're really doing this live. What we're noticing is how people are adopting gen-AI. 

They tend to mistrust technology in areas where it can contribute massive value. And they tend to trust it too much in areas where the technology is not competent. And so, you really need a good framework if you're in the enterprise today to deploy this in a way that's ethical, in a way that is great for your business, and a way that provides attribution to the source of the information because this technology generates new content, so you've got to think about all the implications of that. 

Trust and Data Quality in Generative AI 

Michael Krigsman: You just mentioned this very important issue of trust in the data and also being able to discern where you should trust data and where you should not. Can you drill into the relevance for generative AI? 

Bruno Aziza: If you don't start with a foundation of trust in this business, using gen-AI is the equivalent of having found something that is really good at getting you the wrong answer very quickly. 

There is something new about this technology, but there's also something old about it. The fundamentals of having trusted platforms that have data that people can rely on is tremendously important because that's what's feeding your model, and that's what is providing information at scale to your employees. 

In a way, if you don't do that well, gen-AI will expose to more people the poor quality of your data. That's why it's really important to think about data quality as a fundamental block.

We see customers who (before pointing gen-AI to consumers or customers) point it to their data management problems because it can effectively help with labeling at scale and with identifying issues with the data, where the data is empty, or where there is uneven data quality. Really starting with this concept of data quality is probably the first step of adopting gen-AI.

Financial Impact and Adoption of Generative AI

Michael Krigsman: What about the traction? I know you talk with a lot of CIOs and CTOs. Where are you seeing use cases starting to develop and just, overall, that issue of traction (at this point today)?

Bruno Aziza: There's tremendous traction and tremendous interest. I think if you look at research, if you look at macro research, I think McKinsey said the opportunity is $4.4 trillion. By the way, if you look at the GDP of the UK, it's $3.1 trillion, so it's a gigantic opportunity. I think Morgan Stanley also came up with an opportunity size of $7 trillion. 

Certainly, there is a lot of hope and a lot of awareness and a lot of attention. 

Where we see customers that are doing it in production, they're focusing on a few trends. The first one is this idea – and this is true for any technology – of shrinking time. 

  • How can I get to my outcome faster? 
  • How can I provide a more compelling customer experience? 
  • How can I help maybe junior folks on my team accelerate their learning? 

A typical issue we had before we talked about AI or gen-AI is skill availability and acceleration of proficiency in a particular team. Here, if you're doing this right, you have the opportunity of taking a team maybe that's under-resourced or maybe a team that is more junior or just starting to learn, and now you're using technology that's accelerating their proficiency because it can summarize information. It can get them started in content and so forth. 

We have certainly found that gen-AI is this very powerful lever of performance. There's, again, recent research from BCG showing that nearly every participant, about 90% (irrespective of their baseline proficiency), produced higher quality results when using gen-AI versus people who were not using it, particularly when it comes to innovation. 

It's really, really interesting. I know we'll get to talk about organizations like Wendy's, TELUS, Cartier, and Twilio. There are a lot of examples now of people that are doing this well, and they start with guiding principles, data quality, and picking the right use cases of where they can get the best outcome for their teams.

Michael Krigsman: Please subscribe to our newsletter. Subscribe to our YouTube channel. Check out CXOTalk.com. We have just extraordinary shows, extraordinary people coming up.

Enterprise Trends and Use Cases for Generative AI

Bruno, you alluded to the use cases. Are these use cases right now primarily focused on efficiency, meaning cost-savings, or innovation? How are folks in the enterprise thinking about generative AI in relation to these?

Bruno Aziza: Certainly, there is a lot of literature out there talking about gen-AI saving money for a CIO and so forth, but the organizations that we've seen succeed with it, they really focus on innovation. Actually, if I were to break down the types of use cases, the priority goes a little bit something like this.

The first one is this idea of increasing efficiency and productivity. You can see it in the case of marketing or software engineering, where you are trying to automate repetitive tasks that were not enabling people to focus on creative work. This idea of efficiency is an important one. 

The second one is around improved customer experience. You certainly see chatbots and the ability to start conversations from a much better place and providing a more compelling customer experience, in general. 

We'll talk about a few companies but, in general, the theme is productivity for employees, leveling up their proficiency, if you will, and then customer-facing experiences.

I think the third scenario is really about increased innovation, so this idea of generating new ideas. What's interesting here, I know we talk a lot about the models and how they're supposed to automate a lot of work, but there is this amazing dimension of companionship.

If you're an author – I know you're a content creator yourself – when you get started, it's very lonely. You have got an empty page. It's true, by the way, for a lot of software engineers. Well, having this help allows you to kind of test ideas, get feedback, so you can get started a lot faster. 

What we found as well is that it is a tremendous help for people that need to get started faster in the case of marketing, sales, engineering. Great ways to kickstart, if you will, innovation.

Michael Krigsman: There was just an article that came out that said companionship is one of the major growing use cases. You just alluded to that. I personally find it hard to imagine that that's the case, but it seems to be (according to the research).

Bruno Aziza: I think it's very true. The way I would think about it is if you're using any of these tools today (Bard or any of the other gen-AI chatbots), think about it as dealing with another person. Thank it. Give it feedback. It's a really interesting relationship you can build.

Now, of course, you've got to remember it's not a human behind this, so you've got to remember there's a fair amount of probability going into it. But if you break it down now into the various use cases, what research shows you is that, in existing use cases when you're trying to expedite manual work, repetitive work, I think there's research showing that effectively interacting with gen-AI can cut time by half.

In new ones, just like we talked about, in jumpstarting your first draft, you can really accelerate in the case of code refactoring, for instance. Research found that nearly two-thirds of the time can be cut. 

Then there is this aspect of going beyond, meaning if I throw a hard problem at a developer or a person creating content, they're 25% or 30% more likely to perform the task to its completion compared to folks who did not have gen-AI tooling available to them. 

I think a lot of it is we're learning how to interface with them, and you should just give it feedback and see how it works for you. It certainly works well for me. 

Michael Krigsman: I've discovered the same thing, and I also read an article about exactly this the other day: if you provide prompts to the gen-AI that are more interactive and more friendly in tone (because it is probabilistic and it's based on training data that ultimately comes from humans), the responses are better, more accurate, and more complete. 

Bruno Aziza: If you see the examples of organizations that are doing this well today, a big part of it is a change management issue as well, where it's really about training people how to interact with these new tools (just like we saw with the advent of the Internet and the browser). This is not uncommon, but it also takes a little bit of change management work, a little bit of a mindset shift.

Also, I'd say if you're a CIO or CXO listening to us today, it's really about focusing on the high-impact use cases because there is also a risk of getting stuck between what I call FOMO and FOMU. FOMO is the fear of missing out. You want to try everything. But you also have FOMU, the fear of messing up (if you will), where you don't want to point this technology to problems that don't exist. 

We need to have some kind of framework so you can think about, "Okay, where is this technology going to be most helpful to me based on the value to the business and also on the capabilities of my team, and how successful we've been so far in bringing this technology into their daily workflow?" 

Michael Krigsman: We have a really interesting comment on LinkedIn from Alexander Vasilyev who is a CIO and CTO. He says in his organization they use it for quality control and to check voice records.

Bruno Aziza: People tend to point gen-AI directly to customer-facing examples. But in fact, if you look at where it can also be helpful, it is in this case of data management, and so data labeling, classification, data cleansing automation, data quality automation, MDM automation. 

There's also this whole idea of data augmentation. There are some professions, some domains – for instance, think about healthcare – where it's not easy, and it's also expensive and sometimes impossible, to test on real-world data. 

You can use gen-AI to generate synthetic data. You can still work and make progress towards your goal without having to work with data that's just simply not available to you.

There's a lot, I think, about gen-AI that is about helping you gain better quality on your data, which, like I said, is a foundational block of any data strategy. If people don't trust your data, if you don't trust your data, it really doesn't matter what application sits on top of it.
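To make this concrete, here is a minimal sketch of the "point gen-AI at your data management problems first" pattern described above: using a hosted LLM to label records at scale so that only the exceptions need human review. It is an illustration only; the OpenAI-style client, the model name, and the category list are assumptions, not anything referenced in the conversation.

```python
# Illustrative sketch: labeling free-text records with an LLM as part of a
# data-quality pass. Assumes the OpenAI Python SDK and an OPENAI_API_KEY in
# the environment; the model name and categories are placeholders.
from openai import OpenAI

client = OpenAI()

CATEGORIES = ["billing", "delivery", "product defect", "other"]

def label_record(text: str) -> str:
    """Ask the model to pick exactly one category for a customer message."""
    prompt = (
        "Classify the following customer message into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n"
        "Reply with the category name only.\n\n"
        f"Message: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # keep labels as repeatable as possible for review
    )
    label = response.choices[0].message.content.strip().lower()
    # Anything outside the allowed set gets flagged for human review.
    return label if label in CATEGORIES else "needs_review"

if __name__ == "__main__":
    print(label_record("My invoice was charged twice this month."))
```

In practice a batch of such labels would be sampled and spot-checked by people, which is the human-plus-machine pattern the conversation keeps returning to.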

Deploying Generative AI: Ethical and Practical Considerations

Michael Krigsman: We have a question from Twitter. This is from Arsalan Khan, who is a regular listener and he always asks great questions. He says this: "AI is a combination of data and algorithms. We often talk about managing the data used by AI, but what about managing the underlying algorithms to ensure they don't have biases and are changing, evolving with societal changes?"

Bruno Aziza: There are a lot of dimensions to quality. I think if you Google "quality," there's a paper that probably dates back 25 years that breaks down the 16 various dimensions of quality. 

I think what Arsalan is alluding to, one is you've got to start with a framework. You also have to make sure that the framework evolves over time as new data comes in and as new situations are coming into your environment.

There is this principle. I didn't even invent this one. This is from a company called DataIQ. The CEO, who has been in the field for a long time, calls it the RAFT principle. RAFT is an acronym, and it stands for reliability, accountability, fairness, and transparency. 

I think, as you think about your models and the data that is feeding these models, you should run them through these dimensions. Like I said, there are many more, and Arsalan is referring to a few of them, but these four here, I don't think you can compromise on them. Reliability, accountability, fairness, and transparency are key, key dimensions. 

Frankly, in this world of gen-AI, this is going to sound counterintuitive, but data quality is the moat. Your moat as an enterprise is not just the model and the algorithm. It is the quality, the provenance, the trustability of your data. This is an important question, I think, an important consideration, and very, very relevant in the enterprise.

“Trust and Data Quality are the Competitive Moat”

Michael Krigsman: That's really interesting. Why do you say data quality is the moat, the competitive moat?

Bruno Aziza: If you're looking to differentiate yourself as an enterprise today, the work that you've done over many, many years – some organizations that are listening to us have decades of data – is the basis of your recipe. Just like if you think about making a great dish, of course, the plate matters, the fork and the knife matter, but really, if the meat inside the dish is really terrible, it ruins the entire dish.

This is where I think organizations have the opportunity to truly differentiate. Conversely, think about what happens if you have not done a great job with your data, you have not curated it well, and you haven't established provenance. 

Remember, one of the big issues in this field is, as you're using gen-AI to create new content, you've got to understand provenance because you've got to understand attribution. Where does this data come from in the first place? If you haven't done this hard work, then it's really going to be difficult for you to differentiate.

This is where organizations that are data first, that are AI first, have a competitive advantage because they've already thought about what it takes to activate this data at scale with the highest degree of quality, trustability, reliability, and ethics around that data, which is what Arsalan was referring to here.

Data Strategy is the Foundation of Generative AI Strategy

Michael Krigsman: Your enterprise AI strategy (including generative AI, but in its entirety) needs to be founded right at the outset on a very solid data strategy.

Bruno Aziza: Absolutely. This is where I was saying there's something old and something new here. The new is amazing applications that really don't require much training for people to adopt them. 

The old doesn't go away. All the discipline you needed to have around data quality, trustability, traceability—like I said, there are 16 to 20 dimensions in there—don't divorce yourself from them. 

In fact, a lot of the organizations that we talk to, when we ask them, "How do you prioritize your budget for gen-AI?" the assumption could be, "Well, they're going to take the data budget and the AI budget and divert it to gen-AI." That's absolutely not what's going on here. 

They're adding a new budget for gen-AI because they realize that the fundamentals don't change. In fact, they're becoming more important because if you now activate an application that everybody can use on top of data that's not great, everybody is going to see it, so it exposes the problem even more.

Michael Krigsman: We have another question, again from Alexander Vasilyev. He comes back, and he says, "Have you seen the use of ChatGPT and other generative AI in the real estate market?" He says that "In the UAE, it's an exciting question with market growth every year." Use of generative AI in real estate. 

Bruno Aziza: Just like other businesses, we've seen examples in customer-interfacing where someone is trying to create a full experience for themselves, very much like you'd see in the travel industry, for instance. Today, in order to really satisfy the answer of a consumer, there are a lot of steps that are required, right? I want to take a trip over here, and the trip needs to have a hotel, a car, a flight, and so forth. 

Real estate is a very similar example where I'm looking for a house. It needs to have two beds, two baths, this location, close to this school district, and so forth. 

There are certainly organizations that are using gen-AI in order to interface with the consumer, one, making sure that they're completing the information that's needed before they package the solution to them. It's a great example of where you can create truly compelling integrations in industries that are traditionally not vertically oriented or integrated, I should say, where, for you to complete the job, if you will, of the consumer, there might be 5 different steps, maybe 15 questions leading there. Gen-AI can truly help you with that because it can get the answers out of the person, if you will, and integrate that in a solution and propose it back.

It's a key dimension, I think, of what works for use cases in gen-AI is that they need to be context-aware. They need to follow the conversations so they understand that they're completing the job you're hiring them for. They're not just answering this one question, like you would see maybe in a typical search interface.
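As a rough illustration of the context-aware, "complete the job" pattern described above, the sketch below tracks the details a buyer has already provided (bedrooms, bathrooms, location, school district) and asks only for what is still missing before packaging a search. The field names come from the example in the conversation; the code itself is purely hypothetical.

```python
# Illustrative sketch of a context-aware assistant: keep the buyer's
# job-to-be-done in one place and ask only for the missing details,
# rather than treating each question in isolation.
REQUIRED_FIELDS = ["bedrooms", "bathrooms", "location", "school_district"]

def next_question(conversation_state: dict) -> str | None:
    """Return the follow-up question for the first missing field, or None
    when the request is complete and a listing search can be assembled."""
    prompts = {
        "bedrooms": "How many bedrooms do you need?",
        "bathrooms": "How many bathrooms?",
        "location": "Which area are you looking in?",
        "school_district": "Is there a school district you want to be near?",
    }
    for field in REQUIRED_FIELDS:
        if field not in conversation_state:
            return prompts[field]
    return None  # all details gathered; hand off to the search step

# Example: two details captured so far, two still to collect.
state = {"bedrooms": 2, "bathrooms": 2}
print(next_question(state))  # -> "Which area are you looking in?"
```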

How to Identify Use Cases for Enterprise AI

Michael Krigsman: How can we best identify the ideal use cases or the optimal or the use cases in the enterprise that will yield the most fruitful results the most quickly?

Bruno Aziza: Today, there is a lot of press around this. It's very tempting to just look at the list of use cases and think you've got to try all of them. 

In fact, I think there's research recently—McKinsey released this research—showing 63 various use cases. But in fact, if you double-click, 75% of the value in the enterprise for gen-AI use cases comes from 4 areas: customer operations, marketing and sales, software engineering, and research and development.

I would first start with that. I'm not suggesting you ignore the hundreds of use cases you could have. But that could become really distracting, especially if you don't remember the research telling you 75% of the value is in these 4 areas. That's where I'd focus first: identifying these areas and then keying off of the organizations that have done this well. 

If you take the example of Wendy's, for instance, a great example of customer-facing use cases, 75% to 80% of Wendy's customer orders come through the drive-thru. Customers can customize their menu. There are billions of possibilities. 

Here they're using gen-AI for two things. One is making sure that they can interface with consumers within the time constraints of a situation with billions of possible combinations, and the other is making it effective for their employees to get the order right. 

I would look at these dimensions where you have the type of technology that enables you to provide a compelling customer experience while taking care of your employees because happier employees are going to just be better for your customers. 

I'll give you another example of an organization in the medical field, the Mayo Family Foundation. They're looking to minimize their employee burnout.

What happens is an average patient, when you see a patient, represents on average 7,000 to 8,000 data points. An average physician will see anywhere between 10 and 15 patients per day. That means up to 120,000 data points that this individual has to go through in order to provide a diagnosis or to serve the patient. Here they're using gen-AI to summarize the records, and so it helps the doctor really focus on their job in a better manner because they're supported by that technology.

That's how I would look at it. I would say, step one, sure, read about all the use cases. But remember there are four: customer operations, marketing and sales, software engineering, R&D. Then look at the ones where you're hitting value on both sides of the equation – a compelling customer experience and an improved employee experience as well – because these two things are highly correlated. 

Michael Krigsman: What are the risks of choosing the wrong use cases?

Bruno Aziza: You're going to end up with a lot less value than you think you're going to get, and so that's problematic. We know chief data and analytics officers today, while there are many more than there were ten years ago, also have a very short tenure. I think 2.5 years to 3 years. 

I think often it's because the first use cases they focus on tend to be the ones that don't generate value for the business, in general. Or maybe they might be perceived as use cases on the edge that really are not advancing the organization. I think that's risk one for you personally as a leader.

A lot of it is how your career is going to evolve, so you want to make the right bets. We see a lot of organizations, like Wayfair, for instance, that have assembled an internal generative AI council so they can evaluate the potential use of the technology based on value to the business but also on current capabilities inside the organization. These two things, again, are highly correlated. That's risk number one.

Risk number two, of course, is you might be using the wrong data. You might not be attributing the data, and so now you're running into the typical issues of governance, quality, and data security. Having guiding principles, a council, is going to allow you to minimize these risks, for sure.

How to Align Enterprise Architecture and AI Strategy

Michael Krigsman: We have another question from Twitter. Again, this is from Arsalan Khan, who comes back and says, "Enterprise architecture can help organizations become better decision-makers through data used to drive strategic alignment. AI accelerates this. There is a symbiotic relationship also between enterprise architecture and AI. Can you comment on this relationship between enterprise architecture and AI?"

Bruno Aziza: Highly related. The industry chatter today is that these two things are separate. I'd argue against that point; they are extremely connected.

If you have an environment that is not tightly integrated, that is difficult for your data scientists and your data analysts to work with, it's going to be really, really challenging for you to build a gen-AI application. In fact, we've identified that it goes beyond just the technology. It also goes to how your team is structured. 

What we're seeing now is that data teams and AI teams, by the way, are converging. That's another key point here, I think, that probably Arsalan is asking about: "Should I have my gen-AI team here and my data team over here?" 

What we've seen is teams that are converging because these two topics are highly correlated. We also see this culture of moving to building data products. 

If you think about what gen-AI is, or what a gen-AI application is, it's ultimately a very intelligent data product. And so, we see customers hiring a data product manager, a UX leader, a program manager, to pair with their data scientists, their data engineering team, and their chief data officers to really build an integrated system that starts with the ownership of the data all the way to the activation and the value that this data is going to provide.

I'd say they are highly correlated. Don't make the mistake of ignoring one to the benefit of the other because you will probably regret it if you do. 

Michael Krigsman: Given all of these use cases, what kind of metrics or key performance indicators (KPIs) can companies use to evaluate these AI projects?

Bruno Aziza: I think, in general, the themes that we see are around making money, saving money, innovating. Those are kind of the places where it gravitates.

I look at examples of organizations that are doing this well today. Walmart, for instance. 

Walmart has 50 million customers who are interfacing in some way or shape with their conversational experience. They have over a million associates who are also experiencing that.

What they've looked at is how can we help our associates, our store associates, be of better service to our customers. They have built this application called Ask Sam, which is an AI tool. They have over two million associates using the application today.

Here a metric would be, are we helping our customers navigate through the experience faster? Some of it is not going to be through their direct interface with the tooling. It's going to be with the ability of our employees to be more productive at providing an amazing experience. 

You see areas in the legal field. For instance, Accenture in Europe has this project called ALICE. ALICE is an acronym. It stands for Accenture Legal Intelligent Contract Exploration.

They have over a million contracts that their legal team has to go through in order to identify Accenture's rights and obligations in these contracts. And so, ALICE is this NLP tool that enables them to interface with the contracts in English to find the relevant metadata.

Again, I think here it's 2,800 professionals globally that are using that tool. Again, this idea of, yes, productivity for my employees and also to the goal of providing better service either for the organization or directly for the consumers.

Michael Krigsman: Would it therefore be correct to say that evaluating these generative AI projects actually is not much different than looking at any other business project? You're looking at the business results, whether it's innovation, cost savings, whatever it might be. 

Bruno Aziza: The catch here is we don't yet have a great model for how much it's going to cost you to deploy some of this technology. I think that's probably a key conversation to have: if you're a CXO today, how do you think about the cost structure? What is your approach to gen-AI to maximize the results against the cost? 

Today, like I was saying earlier, customers are not diverting their AI and data budgets to gen-AI. They're adding to them. And so, there is a huge conversation here.

Of course, it's now becoming more and more affordable to run gen-AI. But you also have to think about the value and cost equation inside your organization. 

Michael Krigsman: As companies are making investments in AI, in general, and generative AI specifically, generative AI produces open-ended results. If you add up two plus two, it's always, always, always going to equal four. But if you put a prompt into generative AI, you don't really know what you're going to get. 

In addition, the technology is changing and evolving so quickly. The data needs are evolving. And so, how can organizations invest effectively given all of these changing circumstances and uncertainties?

Bruno Aziza: You've got great examples in companies like Wayfair and Walmart who have published their principles. They've really thought it through: start with the principles first. How do we think about treating data? What are the types of experiences we want to create with that? 

Second, does the architecture accommodate these goals? Here, I mean architecture in a broad definition. The technical architecture, but also the organizational blueprint that enables you to do that. 

The last component is how you cost that out. Now, luckily, there's research on that.

Now, the research is not always perfect because, in this field in particular, it's just moving so fast that some of the costs I'm going to share with you here might change in two months. But there are essentially three ways that you can interface or a combination of these three ways you can interface with this technology. It's McKinsey's research on the taker, the shaper, and the maker. 

The taker is a use case where you are using publicly available models. Maybe it's a chat interface, an API. There's not a lot of customization. Here, their estimate is a one-time cost of $0.5 million to $2 million, and then recurring costs of about $0.5 million. That's the very first one. 

The second one is the shaper, which is about integration of models. An example here is supporting sales deals by connecting a gen-AI tool to a CRM, for instance. Here, with this act of fine-tuning, the cost starts at about $2 million to $10 million as a one-time cost, and then, ongoing, anywhere between $0.5 million and $1 million recurring annually. 

The third one, which is the most expensive one, is the maker. That's the organization that's building a foundation model and wants to address a very discrete use case. Here, to get started, it's anywhere between $5 million and $200 million. And then, as a recurring cost, $0.5 million to $1 million.

You can see there's a very, very wide range here between using technology that exists, fine-tuning it, and starting from scratch. 

Now, the good news here is you have examples of organizations in the public domain. HBR, I think, just published a story about Morningstar, which prompted an existing LLM.

Morningstar is an investment research company that looks at a lot of information. They have this research agent called Mo that's built on top of gen-AI. 

They look at 10,000 pieces of Morningstar research, and they were able to get Mo, the agent, to answer 25,000 questions at an average of $0.002 (two-tenths of a cent) per question, so a total cost of $3,000. 

A wide range here in the cost. And most likely, as an enterprise, you're probably going to take one of those three paths, if you will. Probably one of the first two, for sure, because creating a foundation model is not a trivial endeavor for any organization. But that's a key consideration: the value and cost relationship.

The good news is things are getting better and getting cheaper. But also, as that's happening, the ambition and the types of questions we want to ask are getting more sophisticated.
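As a back-of-the-envelope illustration of how the taker, shaper, and maker ranges quoted above compare once recurring costs accumulate, the sketch below totals each approach over an assumed multi-year horizon. The dollar ranges are the ones cited in the conversation; the three-year horizon and the code itself are illustrative assumptions.

```python
# Rough comparison of the taker / shaper / maker figures quoted above.
# All values are in millions of dollars; recurring costs are per year.
COST_MODEL = {
    # approach: (one-time low, one-time high, recurring low, recurring high)
    "taker":  (0.5,   2.0, 0.5, 0.5),
    "shaper": (2.0,  10.0, 0.5, 1.0),
    "maker":  (5.0, 200.0, 0.5, 1.0),
}

def three_year_range(approach: str) -> tuple[float, float]:
    """Return a rough 3-year total cost range ($M) for one approach."""
    one_lo, one_hi, rec_lo, rec_hi = COST_MODEL[approach]
    return one_lo + 3 * rec_lo, one_hi + 3 * rec_hi

for name in COST_MODEL:
    lo, hi = three_year_range(name)
    print(f"{name:>6}: ${lo:.1f}M - ${hi:.1f}M over three years")
```

The point of a model like this is not precision (the underlying figures are moving quickly, as noted above) but forcing the value-versus-cost conversation before a path is chosen.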

Michael Krigsman: Again, Alexander Vasilyev wants to know, "When you gave the example about Accenture, did you mean that they are using a vector database to collect all contracts?"

Bruno Aziza: No, and there's a video that's available from their leader. This is really about using AI foundations to go through these documents and identify and summarize and organize the metadata of these contracts. 

What Does Data Corruption Mean for Generative AI?

Michael Krigsman: We have a question, a really interesting one, from Lisbeth Shaw on Twitter who says, "Can generative AI be corrupted? If so, how does it get corrupted and how can you tell? What does corrupt data (when it comes to generative AI) actually look like?" It's a really interesting question.

Bruno Aziza: It's a wide question as well because what is corruption? Is it the quality? Is it the fidelity? Is it the legal use of the information? There are multiple dimensions of corruption.

I would say I have this acronym. I'm going to take a little bit more time to answer this question, if you don't mind, because I have this acronym that I think is key when you think about gen-AI. I call it MTCAR. 

When you think about gen-AI applications, the first bit is that you're looking for experiences that are multimodal, input and output – image, text, code, math – in and out. You want that type of interface.

This question is about the T, trustable. There are many dimensions of trustability. 

  • Privacy: Which data is used as a source? Is this data protected? Is it shared with others? 
  • Reliability: fidelity of the data, robustness of the data, because this affects the quality of the insights.
  • Fairness, transparency, accountability. 

T is very important. 

Then the C-A-R:

  • C is about contextual, being able to have the conversation just like humans do. 
  • A is about applied to a workflow, so that's also an area where, to your question, if it's not totally integrated into the existing workflow, it could get corrupted. It might not be just the data provenance. It may be the way it's integrated inside the workflow.
  • R is about recency. A lot of what's happening around data quality and trustability of the data is having data that's relevant to the question being asked. 

A lot of the issues with some of these models today is because it takes so much time to train them. You're asking something that happened yesterday. Well, when was that yesterday? Is it yesterday for you or is it yesterday for when the model was trained? 

That's another type of corruption, you could call it, because it's incomplete data. It doesn't have data from the last 24 hours. Those are all the dimensions to think about. 

Deploying Generative AI: Ethical and Cultural Considerations

Michael Krigsman: Arsalan Khan comes back, and he reminds us that we have not talked at all about a very important topic, which is culture. His question then is, "AI requires a huge culture change. IT alone can't change the culture. Who should lead AI? Is it the CIO, the CFO, the CTO, the CEO?" So, culture and leadership with respect to AI.

Bruno Aziza: Culture is typically the number one barrier that most organizations have to break through in order to succeed with data, AI, and analytics. The person that ideally will lead that is the CEO of the organization.

Often, we say, "Hey, this is why I hired the chief data officer," or the CIO. But the reality is that they need more than just sponsorship. They actually need a mandate of an organizational leader, the CEO of the organization that says, "This is the principles."

Culture is not something you print on the wall, right? Culture is what you do, which then leads into how your organization is optimized.

That's why earlier I was talking about these roles of the data product manager and so forth because the CEO has to have the mandate, and they have to enable their team to hire the right types of individuals and right types of leaders in order to apply this culture.

Absolutely a great question. But it's not, again, something you solve with posters on the wall. It's about the leadership and how your organization is structured.

Michael Krigsman: Bruno, let me come back and play devil's advocate for a moment on that point. The CEO says, "Great. Thank you so much for your input. But you know what? This AI stuff is no different than... You know we've had new networking technologies and new programming languages, and you people are always coming to us with this song and dance. And I don't see a difference."

Bruno Aziza: Ideally, you want a CEO that comes in and says, "This is the way we're going to do things." Now, what we've seen, organizations succeed in an environment like this, one thing that leaders really hate is losing. [Laughter] 

Michael Krigsman: [Laughter]

Bruno Aziza: And so, if you're able to quantify the costs of the missed opportunity, it's an effective way to turn around that relationship with the CEO. 

Now, if it doesn't work, I would say you probably want to make a more drastic personal choice because, ultimately, you need that leader to support you, particularly as you experiment. You might not be right all the time, so you will make mistakes in the process.

But I had this customer that quantified the value of each mistake, and I think it was $50,000 or some number like that. Every day, she was reporting to their CEO. "Well, today we lost another $250,000 because we made these mistakes, and here's the proof that it was the wrong decision."

If you can't convince with logic and the positive, then convincing with emotion and the negative is where I would take this. That's certainly what's worked for some of the organizations we work with.

How to Drive Adoption of Generative AI in the Enterprise

Michael Krigsman: What advice do you have regarding adoption of generative AI in the enterprise given all the issues that we've been talking about today?

Bruno Aziza: A few things: I think the first bit to remember is there's something old and something new here. It's amazing that you're going to get the excitement and the attention. 

There's no better time to work in data, AI, and analytics than right now. I know when I started, it was a back office problem. Now it's front and center. That's the good news is everybody is going to pay attention.

The other piece of good news is the interface to data has changed. It used to be code. Now it's language. That means it could be available to many, many other constituents inside your organization. 

In the past, we had this idea of, like, "Well, get it to the data scientists first. Get it to the data engineer first. Get it to the business user last," in a way here. Now that framework is really speeding you into putting it in front of business users, so that's the good news.

The bad news about it, or the thing that could really be a gotcha, is to misunderstand that the moat really here is data: data quality, trustability, provenance, reliability, governance, ethics around the data. That's really where your center of design for your gen-AI product resides. 

You want a strong set of capabilities around that. You want people who know how to orchestrate that because if that piece doesn't work, the opposite of the value you assume from gen-AI will happen. You'll actually expose to many more people inside your organization (maybe your partners, your customers) that the data you started with is in fact bad. 

The good news is lots of great attention. The not-so-new news is that you need to continue working on data quality because that is ultimately your moat. 

Future Prospects: The Evolving Landscape of Generative AI and Enterprise

Michael Krigsman: We have one more question that's come in from Twitter, again from Lisbeth Shaw, who also points out an entire area we have not spoken about, but I will ask you just very briefly, which is this: What about the future of work and the workforce as generative AI use expands in the enterprise?

Bruno Aziza: There's research. I think, again, it was McKinsey that originally identified that 50% of our tasks would be impacted. In fact, they found now that it's about 60% to 70%. 

I think the question is right on here where it's going to require a lot of different interfacing with the machines. But again, the keyword here is humans and the machine. 

This is not a scenario where it's the machine only. Humans with the machine are going to be more productive and create more compelling experiences than humans without the machine. I think that's going to be very true for a very, very long time. That's how I would approach it. 

It certainly changes the way we should think about the educational system as well. If you think about 20 years from now, how do you train certain professions? 

I think that's a key consideration as well if you're doing enablement at your organization and onboarding new employees. You've got to start with gen-AI being a component, which maybe a year ago that wasn't the case at all.

The good news here is that, again, it's a very powerful lever of performance. You can onboard people who maybe are new to the craft or maybe have fewer resources. This new technology (if used correctly with the right data platform and the right set of guiding principles) can truly accelerate the productivity of your employees.

Michael Krigsman: Using generative AI as a lever inside your organization to accomplish things that you couldn't do as easily or even to do new things.

Bruno Aziza: That's correct.

Michael Krigsman: With that, we are out of time. A huge thank you to Bruno Aziza. He's a partner with CapitalG. Bruno, thank you so much for coming back and being here again with us.

Bruno Aziza: Well, thank you so much for having me, Michael.

Michael Krigsman: A huge thank you to everybody who was listening. You guys are an amazing audience. CXOTalk is for you, and I love your questions. 

Before you go, please subscribe to our newsletter. Subscribe to our YouTube channel. Check out CXOTalk.com. We have just extraordinary shows, extraordinary people coming up. 

We'll see you again soon. Thank you so much, everybody. Take care.

Published Date: Sep 22, 2023

Author: Michael Krigsman

Episode ID: 806