Enterprise AI: The Leadership Lessons

Discover the secrets of Enterprise AI leadership with Sunil Senan, Infosys' Head of Data, Analytics, and AI, exclusively on episode 799 of CXOTalk.


Artificial intelligence promises great opportunities for the enterprise but realizing that potential requires strong leadership and a strategic approach. On episode 799 of CXOTalk, we spoke with Sunil Senan, Global Head of Data, Analytics and AI at Infosys, about the leadership lessons around enterprise AI adoption.

While interest in enterprise AI is high, companies still have more questions than answers, especially around how to translate the potential of AI into practical benefits and outcomes.

Among the topics discussed are:

  • There is heightened enterprise interest in AI but more questions than answers on how to apply it. Companies want help translating AI's potential into business impact.
  • AI adoption differs from past "big bang" software rollouts. It requires continuous, iterative development and learning.
  • AI success depends on envisioning goals, preparing for organizational change, and responsible design. Ethics and compliance can't be an afterthought.
  • AI offers opportunities like accelerated growth, operational efficiencies, and connected ecosystems through data sharing.
  • Realizing AI's benefits takes thoughtful leadership, cultural change, and aligning AI to solve specific business problems rather than pursuing "AI for AI's sake."

Sunil Senan is Senior Vice President and Global Head of Data, Analytics, and AI at Infosys. In this role, he works closely with Infosys’ strategic clients on their data & analytics led digital transformation initiatives. He is passionate about how data & analytics create economic impact in society and how enterprises and governments can engage in driving this transformation. He wrote the “Data economy in Digital times” paper, articulating how the new data economy presents a set of new possibilities for enterprises and governments to serve their citizens and consumers. Sunil holds a bachelor of engineering degree with a specialization in computer science and completed his executive MBA at the Indian Institute of Management (IIM) Bangalore.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Today, on Episode #799 of CXOTalk, we're discussing enterprise AI: the leadership lessons. We're speaking with Sunil Senan, Head of Data, Analytics, and AI for Infosys.

Sunil Senan: I've been with the company for over 22 years, working with customers across the globe, many of them industry leaders, on their digital and AI-led transformation journeys.

Michael Krigsman: Tell us about your role. You have a really interesting title: Global Head of Data, Analytics, and AI.

Sunil Senan: I work very closely with our clients and their CXOs, helping them understand how they can look at data and AI for their transformation: sifting through all the hype to convert it into a meaningful blueprint that delivers value, that delivers on the promise of data and AI, not just for their enterprise but also for the society that they touch. We also focus on creating what we call a data economy around these enterprises, which is a very meaningful way to create value for all stakeholders and bring a network of entities and partners, citizens and consumers, together to create net new value. That is how data and AI work for nations, societies, and communities.

The state of AI adoption in the enterprise

Michael Krigsman: Sunil, you're speaking with so many different companies of varying sizes. What do people tell you about AI? Everybody is curious. Everybody is interested in AI. Everybody knows they have to do something. But can you (with a broad brushstroke) describe the general state of the market as you're seeing it?

Sunil Senan: This is the conversation in the boardrooms that our customers are having. Clearly, there's a very, very heightened interest in learning about AI and what it means for enterprises.

But I think the key questions that our customers have are, "How do I translate the potential of AI for my business? How can I reimagine my business and my business models? What does it mean for my products and services, for my customers, and for the other stakeholders whom I serve? And, most importantly, how do I go about it?"

There isn't a big-bang approach to AI. It's something that touches the roots of the organization. 

There are cultural aspects of things. There are processes. And obviously, there is the impact on people. That needs to be well understood and articulated for how it will amplify the potential for people within the organization and outside. How do you translate this really into an execution blueprint so that you can deliver on the value that data and AI promise for the organization?

These are some of the key questions, and I would say there are more questions than answers in their minds. That's why they are reaching out to us, and we're engaging with them to figure this out, for them as well as for the industry in which they operate.

Understanding enterprise AI: A departure from big-bang projects

Michael Krigsman: You made an interesting comment just now. You said that there is no big-bang approach to AI. For folks who may be younger that have not been through large ERP projects – quick summary; big bang means do it all at once, take the company live as one huge, expensive, long project – and you're saying, Sunil, that AI is different. Can you elaborate on that?

Sunil Senan: We live in a world where there is a continuous delivery of new capabilities that allows not only the enterprise to learn as to how to operate newer systems such as these, but also the users, the customers, the consumers to really embrace that. There is a continuous feedback that then makes this system evolve.

If I have to break it down into two or three elements as to why AI systems are like this compared to the ERPs and others that you spoke about, first, there is a clear adoption problem, which is the trust deficit in terms of what AI systems would tell you versus what the tribal knowledge is. I think it takes a certain experience (both for the AI systems as well as for the humans who are interacting or using such systems) to then build the trust and the way of utilizing such capabilities for amplifying the potential, getting that productivity that is needed, making that impact on the business and the customers.

But there are also underlying problems, such as data quality. How do you govern such systems? How do you make sure the system is operating within ethical considerations? These are very important for society and for making that larger impact: converting this AI effort into something that can deliver good for society and for everyone who is going to interact with it.

I think it takes a certain amount of maturity for these systems to be tuned, really looking at how this is working with the ecosystem, and then you put this at scale, which is very, very important. You can't get the value out of these systems by limiting them to small POCs or experiments. Those are important to get started, but the end goal is to scale at the enterprise level.

That is why the journey goes through quick iterations, what we call agile digital, to take this to business functions, operate it at an ecosystem level, and so on.

Michael Krigsman: With these large, traditional software projects, they were highly technology-based. But still, they had impact across the company (if you were doing an ERP system, for example). With these AI systems, there still is impact across the company, as you were just describing. But it's very different. So, how are these AI systems different from traditional enterprise software?

Sunil Senan: The enterprise software gave a new way or an automated way of executing, which is how you could run a process at a global scale. You could standardize a process, even though there were specific customizations for how individual regions needed to cater to local compliance laws and so on. The idea was to bring an automated system and industrialization and standardization of that process.

What we are talking about in terms of AI is to bring the cognitive capabilities into a system that would interact with humans and the other systems at large. This has to learn from the data that exists in the ecosystem and within the enterprise.

As you can imagine, if you have a bias existing in the current data, AI will amplify it. At a minimum, that would distort results and produce inaccurate decisions, but it would also not be fair or free of bias. It would not meet the ethical considerations with which we all operate.

Hence, the AI systems need to be governed and need to be looked at differently from that full automation journey to then say, "How am I tuning this? What business problems am I solving? And am I solving it in the ways that are acceptable to the enterprise standards and also to the societal standards?"

Michael Krigsman: Would it be correct to say that these AI projects retain the elements of traditional software, but now you have these layers that did not exist before, such as learning from the data (as you were just describing)?

Sunil Senan: At some level, I think it goes beyond that, Michael. In my view, when you start to look into the trust deficit aspect of things and how you bridge that trust between AI systems and the tribal knowledge, the story starts to diverge from that of ERPs and other rollouts that we've done. But there's also an aspect of culture: the culture of data, the culture of insights-driven or data-driven decision-making, is a journey.

As you would imagine, in large enterprises it is not just individuals who have to get onto a system like this and really understand how to work with it, but also groups of people: teams not necessarily in one department, but cutting across other departments and, oftentimes, even across other companies.

How do you bring an ecosystem to that level of understanding and having that expertise to say how you leverage data and AI and solve problems together? This is where this journey starts to diverge and look at how the adoption and the utility of AI for different business functions would emerge. 

The other thing is most of what we're going to do in AI is to reimagine the model. You're going to see things that we have not seen before. And in that sense, it's a great opportunity for organizations to differentiate, to create that discontinuous growth potential, and also create new models, et cetera.

But on the other hand, it's also something that needs to be imagined, tested, experimented with, and then put into place. Hence, pivoting AI on what it means for the business, on solving problems, is the starting point.

AI cannot be done for the sake of AI. It's not a system that you're looking to put in place because the system itself is your end goal. The end goal is essentially driving transformation for the business, getting that outcome for the enterprise, not only for itself but also for the entities that it interacts with and the stakeholders that it is serving.

Michael Krigsman: Please subscribe to our YouTube channel. Hit the subscribe button. It's at the bottom of our website. Check out CXOTalk.com. We really have great shows coming up. 

We have a very interesting question from Arsalan Khan. Arsalan is a regular listener. He asks wonderful questions. Thank you, Arsalan, for that. 

He says, "When thinking about the ethics in AI, are there ethical standards that organizations can follow? If not, then who decides what is ethical? How do you make sure that your competitor's AI is ethical, that they're not cheating and putting yourself at a disadvantage?" Any thoughts on this? It's a really thorny topic.

Sunil Senan: You can see this as an evolution of standards and regulations that we have begun to see. But also, there are more that are going to come. 

At ground level, if you distill it down to two or three things that companies can look at, one is that you have standards around privacy, for example. It's a very, very important consideration for how you build that relationship with your consumers, partners, and employees, who all have a stake in the data that you're processing: really making sure that you have the permissions or consents necessary to utilize, store, or process that data for the purpose you're stating and for as long as you say you will. That much is known; it has been defined in many regulations, both within the United States and across Europe and other geographies, even though more is coming on that front as well.

Most corporates operate with values and standards that they're known for, and that's a good guardrail as well. Most organizations (successful ones, at that) have looked at the societal values and how they have created more value for everybody. Not only their own consumers, but also others who operate in the societies in which they operate. Those guardrails apply to AI as well, and that's something that's known.

Most importantly, I think it is also to anticipate and see what kind of regulations you're going to see in the industry around the impact of AI on people. That would benefit them (if done right), but it could also create negative impact in society. 

Anticipating some of those, preparing for that journey, and making sure that you're doing the right things from that aspect would put you on the right side of the laws and regulations when they do come into effect. And we know that they will. And I think those organizations and enterprises would find success far more than the ones who don't.

I think, beyond this, there are companies that are working together to lay down ethical standards that can be referred to. We are working on some of these as well. 

We do help our customers adopt some of these processes and standards, as we build those systems. How do you take care of biases, for example? There are ways to do this, and we do incorporate those frameworks into every project and every AI-driven initiative that we take up for our customers. 
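
To make that concrete: the specifics of the bias framework mentioned here are not described in this conversation, but a basic check of the kind Sunil alludes to often starts by comparing selection rates across groups and flagging large gaps (the "four-fifths" heuristic). The sketch below is only an editor's illustration; the function names, threshold, and toy data are assumptions, not the actual framework.

```python
# Hypothetical sketch of a basic bias check: compare positive-outcome rates
# across a sensitive attribute and flag large disparities. Illustrative only.
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="approved"):
    """Return the share of positive outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        g = row[group_key]
        totals[g] += 1
        positives[g] += 1 if row[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold times the highest group's rate."""
    best = max(rates.values()) or 1.0  # guard against an all-zero edge case
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Toy usage: two groups with different approval rates.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
rates = selection_rates(records)
print(rates)                          # {'A': 1.0, 'B': 0.5}
print(disparate_impact_flags(rates))  # {'A': False, 'B': True}
```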

Marketing, for example, is one of the most common areas where AI has been applied. We have, without exception, always applied trust, ethics, privacy, compliance, and security standards to each one of our projects. Those customers have benefited from the use of AI and shared that value with the consumers they serve. There are frameworks that you can adopt while working on AI projects.

Opportunities for enterprise AI

Michael Krigsman: We have another great question from Twitter. This is from Kayla Aragones. Kayla says, "What are the biggest opportunities that you predict AI will yield for enterprises, Sunil?"

Sunil Senan: For the enterprises, it's going to drive (in our view) three theaters of value creation. AI is going to accelerate growth for enterprises. This is by identifying newer markets, newer segments, newer needs that they can serve or serve those needs in a different way which is far more valuable for customers, or to even figure their play in the industry or across industry value chains. 

One of the things that we always discuss with our customers and guide them on is that physical products don't transcend industry boundaries. But when you think about data, it does. That means tremendous opportunity and potential for looking at newer ways to create these new data-driven, AI-driven products and services.

You could be a medical device manufacturer, for example. One of our clients makes medical devices, in this particular case for diabetes. Using data from those devices, they were able to really help the other parts of the value chain that interact with those very patients.

It could be hospitals, which are in the same value chain. How do you turn the brain of the industry, which today is post-factual (anything that happens is addressed only after the fact, after the blood-sugar event)? Using data and AI to predict those events, you could turn this into something pre-factual: really working proactively to help the well-being of those diabetes patients.

But it also goes across other parts of the industries that touch diabetes patients: consumer products on one side, and the physical lifestyle products that can increase the activity levels of these very patients, which we all know has an impact. The food industry is a big recipient of all this data, and you could use it to stitch together an ecosystem that improves well-being not only for these patients but for society in general.

Figuring out accelerated growth is one big theater of value creation. The second is unlocking efficiencies at scale. You could really push those economic frontiers to do things at far lower cost, if done right, and drive more efficiencies into your operational processes, your field operations, and how you operate your business globally.

But more importantly, the third is building connected ecosystems. The kind that I was talking about (both in the medical device example and in general), where you are creating an economy around you through new data and AI-driven products and new business models, is a tremendous opportunity. The network effects of such data and AI products and services can create immense, unprecedented value in the industry.

This is what we have embraced in our Infosys Topaz offering that we launched. It's a services brand that brings together all of what we have to offer as Infosys, the network of partners we have stitched together, and the solution investments we're making to help drive these three objectives for our customers.

Factors that drive enterprise AI success

Michael Krigsman: Given the differences between AI projects and traditional enterprise software projects that you were describing earlier, what are the conditions that need to be in place in order to get started in the right way? In other words, what are the factors at the beginning that will drive downstream success?

Sunil Senan: AI should not be done for the sake of AI. What that essentially means is to emphasize and envision what AI means for the business and the industry in which the company operates. 

Really thinking about the fundamentals of what makes AI successful to deliver those objectives is the very next thing. Data, is it in place? Is it accessible and available? Does it have the quality that you could trust? If there are specific AI projects on the horizon, you could even start to look into whether this is the data that you want to base your AI systems on. 

The other thing that I would say is preparing for the journey. Oftentimes, we see enterprises run into a lot of surprises as they start.

For example, there are many POCs that don't see the light of day because the impact, the cultural change, or the enablement of the people who will be working on such systems is often not thought about. Even costs are not properly understood.

Risk mitigations and contingency planning for how you govern such an AI system are not thought through. Hence, a tech-first project, where a cool, shiny technology is used for its own sake, often stays stuck at that stage rather than really being brought to the business.

Think about and prepare for the journey. You should look at how AI can change the business, but then break it down into smaller blueprints with defined, very specific objectives, and bring an ecosystem together to really work on them.

The other thing that I would say is to take a responsible AI design approach, which is to say that ethics, trust, security, compliance, and privacy cannot be an afterthought. They need to be baked in right at the front. Even as you communicate what AI is for your business, you lay down some of those principles so all stakeholders know what it is that AI is seeking to do for the business, how they can engage, and what the fundamentals and underpinnings of such a system are.

Impact of AI on organizational culture

Michael Krigsman: Arsalan Khan comes back on Twitter, and he says, "Organizations want to do something useful with AI but still struggle with shadow IT." And so, he wants to know how using AI affects the organizational culture and makes AI more of an enabler rather than an obstructor. I think this gets to the dimension of culture and organizational change, Sunil, that you were alluding to earlier. 

Sunil Senan: Absolutely. I think it's a shift in the way in which we view AI. AI is not to displace but essentially to amplify the potential. 

Take IT, for example; this is something that we have embraced at Infosys as well: using AI to improve productivity across software engineering lifecycles, in the way in which we test our systems or the systems that we build for our customers, in how we ensure data standards or data privacy across all our projects, and so on.

There are multiple ways in which you would look at AI. What this does is to shift the work value chain where humans (software engineers in this case) would then shift to more complex, more value-adding activities. You would have AI really amplify the productivity of people by running many things autonomously.

The same would happen on the business front as well. We need to look at AI as a way to change or reimagine the business processes or business functions or business models and embrace this to design those new systems in the way in which we need to.

The thing that I was saying earlier, AI for the sake of AI, would not achieve all of this. I think if we put the right foundation in place and envision the future from a business lens, it tends to clearly communicate the purpose of the AI project and also bring together the various teams that need to come together, IT and business, and find those champions who can then lead the way to create those systems at scale.

Ethical considerations in AI, including job displacement

Michael Krigsman: As you talk with senior business leaders and with boards, to what extent do you think there is an understanding of the complex impact that AI will have on their organization, because even when you talk about AI amplifying the benefit rather than displacing humans, the reality is that there is going to be job displacement as well? It's very complex. And so, again, to what extent do boards and senior business leaders recognize the depth of complexity on their organizations?

Sunil Senan: I think there is a great appreciation for the complexity that exists. But I think I would say that understanding that complexity and what are the ways in which you could manage that complexity and turn this into a positive cycle is where the effort and the focus is shifting. 

That's where we are helping our customers clearly understand how to bring those aspects in. For example, we take a responsible-by-design approach to AI, which brings that thought process upfront in the process so that you are putting the right underpinnings in place for these systems as you build them, rather than letting it be an afterthought that can be a nightmare for the organization.

Similarly, when you are envisioning your business blueprints, the thought process on why you're doing it, how you want to actually do this, and how you're going to bring things together in order to execute on it is a conversation that we have upfront. That prepares the organization to then run such systems at scale.

There are several examples of this where we have changed existing models. We have put new models in place as well, new processes and new entities together to do things that were not done before. Let me take a few examples. 

For a food and beverage company, we helped build the AI core that let them pivot to a more off-store model for selling to their customers and integrate digital partners seamlessly, while taking care of privacy, compliance, and those other aspects. It became the core for the company, whereby they were able to plug in new partners as they evolved this model, and they very successfully maintained consumer loyalty and, in fact, built new loyalty on the digital channels, which was something they were able to take advantage of.

Similarly, for a national realtor company, they were envisioning a new ecosystem that they could create to improve the yield and the throughput of the value chain. This included not only the other partners, the first- and mid-mile partners, but also their competitors, who could be part of this ecosystem so that the entire industry is able to increase its economic throughput, and so that they could shift their position from being a commodity provider (capacity, in this case) to a value-added player. They could look at the end-to-end business outcomes for their customers and orchestrate a very complex web of partners who can dynamically come together, and so on.

There are a number of examples where we have delivered these systems at scale and worked through the underpinnings to make sure that we are doing this right. We bring in those micro-change management principles, which allow organizations to scale this and then bring the teams together through those learnings.

Consequences for business leaders of acting too slowly or too quickly in adopting enterprise AI

Michael Krigsman: We have a really interesting question from LinkedIn. This is from Mike Prest, who is Chief Information Officer at a private equity investment group. He says, "As new adversarial AI agents are introduced without ethical limitations to penetrate enterprise systems, technology leaders often struggle in balancing optimization and innovation within their organizations." Here's his question: "What would you say to leaders who are under pressure to develop AI and the consequences of acting too fast or too slow?"

I would just add to that. As I speak with business leaders, one of the challenges they face, which I think is very much along these lines, is that there's an expectation that they will make these investments. And yet, it's all shifting ground. So, how do you invest in something that you know you need to invest in, but you don't know exactly what you're investing in because it's all changing?

Sunil Senan: I'll take this in two parts. One is, how do you balance the need for moving with speed, but also keeping that purpose and the responsible design in consideration?

I think the first thing is to be able to clearly articulate what the vision is and what it is that you're trying to achieve, and to have that translate into a blueprint, because that brings an appreciation for the design and ethical considerations that need to go into this. That also makes all the teams involved ready to deal with that critical challenge and, hence, less likely to make decisions that don't meet those considerations. I think that articulation is very important.

The second is to look at not only addressing this on a case-by-case basis with each project, but also developing standards that your projects can refer to and building your platforms in ways that have those underpinnings. In fact, for one of the global retailers, we looked at building a privacy-first data platform.

What that essentially did was that, when they were engaging their partners, running various different projects, and working with their internal teams, each team did not have to deal with the complexity as if they were doing it for the first time. The lessons learned and best practices were baked into the platform.

For example, we used AI to discover privacy-sensitive information, which was very useful for every AI project as they were coming out and trying to leverage the data that existed there. We had automated workflows for privacy-sensitive information that was not properly masked. 

It was not left to each project to decide what they should be doing. The workflow has baked-in rules and the actors to whom such an approval should go. That allowed the organization to scale this while protecting the underpinnings that are so very important for doing this right.
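
As a rough illustration of that pattern only (the actual platform is not described here, so every name, regex, and rule below is a hypothetical stand-in): detect likely privacy-sensitive values with simple rules, mask them, and record an approval item routed to a designated privacy office rather than leaving the judgment to each project.

```python
# Minimal, hypothetical sketch of a rule-based privacy workflow:
# detect likely PII, mask it, and route an approval item to a named office.
import re

# Illustrative detection rules; a real platform would use far richer classifiers.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_pii(value):
    """Return the categories whose pattern matches the value."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(value)]

def mask(value):
    """Keep the last four characters, replace the rest with asterisks."""
    return "*" * max(len(value) - 4, 0) + value[-4:]

def process_record(record, approver="data-privacy-office"):
    """Mask detected PII and log an approval item for the designated office to review."""
    cleaned, approvals = {}, []
    for field, value in record.items():
        text = str(value)
        categories = detect_pii(text)
        if categories:
            cleaned[field] = mask(text)
            approvals.append({"field": field, "categories": categories, "route_to": approver})
        else:
            cleaned[field] = value
    return {"record": cleaned, "approvals_needed": approvals}

# Toy usage with a made-up record.
print(process_record({"name": "Asha", "contact": "asha@example.com"}))
```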

I think those things can meet the need for speed on the business side because they want to move faster. But you also have a way to ensure that you're not violating the privacy considerations or not meeting the compliance standards or the ethical standards. Those are a few considerations to keep in mind and work with partners that can build an ecosystem across people, processes, and technology.

Organizations must learn from ongoing AI implementations

Michael Krigsman: Now you just spoke about the retention of lessons and incorporating lessons that are learned into new projects. We have a question on exactly that topic from Twitter. Lisbeth Shaw asks, "How do you take lessons learned from prior AI implementation engagements and use them to support new client engagements?"

Sunil Senan: This is where this becomes an evolving practice. There are a few ways in which we do this. 

One is, we maintain blueprints that are available to our practitioners globally. These contain all the updated standards, best practices, and lessons learned.

More importantly, we bring a community of practitioners together wherein they share the learnings. They share their experiences and look at ways in which they've dealt with some of those challenges.

Many of our customers look to understand how these things are taken care of. Obviously, the confidentiality of each project is maintained; it's only the ways in which we deal with some of these challenges that get discussed in the community of practitioners. We bake this into our solutions, so any solution used by practitioners globally has these standards baked in as well.

Of course, for our customers, when we're engaging on these projects, we lead with data strategists who are able to engage with the business stakeholders, the CXOs, and envision the blueprint or the business potential. Like we say, the biggest problem to solve in the industry is to find the right problem to solve, and that's where our data strategists come in. They are well versed in the standards and the compliance laws.

We guide many of our customers on privacy standards or on the remediations that are necessary in their systems to operate at scale. When you are doing any such project, building that ecosystem, where you can push this into the multiple vehicles your teams will use to implement such projects, becomes important.

Putting best practices and the industry or internal standards documents into your training systems, so that anybody who is being enabled is well aware of the standards that need to be embedded in the projects they will execute, and also making that guidance available through a data privacy office or the compliance office, is a very meaningful way to make sure people know who to go to for guidance. That office can really take the initiative to make everybody aware, enable them, engage them, and become a resource to guide those teams when necessary.

Composition of an enterprise AI team

Michael Krigsman: On the topic of teams, what would you say is the team composition that an organization needs to look for?

Sunil Senan: It's clearly a business-first approach, where the business teams really come together, along with the IT or technology teams, to deliver this. But, like I was saying, there are the considerations of Responsible by Design, so you would definitely have your data privacy leads in play. Your compliance leads can audit the project or give blueprints upfront for what the projects need to comply with. Similarly, you bring in experts for the other considerations, such as data security.

It's essentially a tribe, so to speak, that brings together these skills and, in an agile fashion, compose these teams to address the skills required for delivering on that project. 

Like I was saying earlier, it's a dynamic composition because you take on business needs in an agile fashion. Hence, you build that tribe wherein you're able to pool these resources and put together what is necessary to address them. It's more of a cross-functional team that you would have working on your AI projects.

Balancing AI projects against traditional business transformation

Michael Krigsman: We have a very interesting question again from Arsalan Khan. He says, "Given the emphasis on AI systems today, to what extent should organizations be focused on AI tools and projects as opposed to traditional business and digital transformation projects using traditional enterprise software?"

Sunil Senan: Over the past few years, we saw businesses embracing digital, businesses embracing cloud. In fact, the businesses that embraced cloud were able to respond to events like the pandemic way better than those who did not.

They have invested in cloud. They have invested in digital. And getting to AI is the very next logical step where you are able to then amplify the outcomes that you can get through this.

It's kind of a continuum on that particular chain, but it also leverages the investments that businesses have made in the digital and data thus far. It then enables them to get quicker ROI through AI projects, through AI initiatives, and that's a very important consideration. Even as you look at scaling the AI initiatives to an enterprise level, the underpinnings that you have in your digital and data would allow you to scale it at that level. 

A simple example: if you are using generative AI to enable users or consumers to ask questions and get answers, you want the right level of authorization built in. For example, let's say I'm a non-finance person and I shouldn't be seeing certain numbers. You want to make sure that the generative AI system does not give out information that I'm not supposed to see.

Those things are well baked into the digital and data foundations that most enterprises have laid, and they can be extended to the newer systems as you are doing this. I think this is a continuum that you build on.
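
To illustrate the authorization idea in that example (neither the enterprise's system nor any Infosys implementation is specified here, so the roles, labels, and stubbed-out LLM call below are hypothetical): the caller's role is checked against each document's classification before anything reaches the prompt, so a non-finance user never has finance figures surfaced to them.

```python
# Hypothetical sketch: enforce role-based authorization before retrieved data
# is handed to a generative AI model. All names and data are illustrative.
ROLE_PERMISSIONS = {
    "finance_analyst": {"finance", "general"},
    "store_manager": {"operations", "general"},
}

DOCUMENTS = [
    {"text": "Q3 revenue grew 8 percent year over year.", "classification": "finance"},
    {"text": "Store opening hours are 9am to 9pm.", "classification": "general"},
]

def retrieve_authorized_context(question, role):
    """Return documents that match the question AND that the caller's role may see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    terms = {w.strip("?.,").lower() for w in question.split() if len(w) > 3}
    return [
        d["text"] for d in DOCUMENTS
        if d["classification"] in allowed
        and terms & {w.strip("?.,").lower() for w in d["text"].split()}
    ]

def answer_question(question, role):
    """Build the prompt only from authorized context; the LLM call itself is stubbed out."""
    context = retrieve_authorized_context(question, role)
    if not context:
        return "No information you are authorized to see matches this question."
    context_block = "\n".join(context)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {question}"

print(answer_question("What was Q3 revenue growth?", "finance_analyst"))  # builds a prompt
print(answer_question("What was Q3 revenue growth?", "store_manager"))    # refused
```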

It allows you to get ROI, quicker ROI on the investments that you already made. It allows you to scale at the enterprise level and with the right considerations put in place. 

With Responsible by Design, you can operate with confidence as well. So, it's a continuum, but there is an aspect of experimentation that has to take place with AI. That's a very important aspect: figuring out and learning which new opportunities a business can really take advantage of, how ready you are, what kind of data you have, and what data quality you have to solve some of those problems, all of that comes through the experimentation funnel.

When you are scaling it, it's going to go back to some of the foundations that have been put in place. You're moving from digital and cloud to AI now.

Iterative development and organizational culture in relation to AI

Michael Krigsman: That experimentation process, do you find that organizations are having trouble with that, or does it seem to go pretty smoothly? For folks that are very process bound, I would imagine that this experimentation is just a very different way of thinking.

Sunil Senan: The enterprise has to think about setting up their experimentation ecosystem. We guide our customers, and we do a number of engagements for our customers where we're thinking through the experimentation that is not wasteful but is productive. 

There is a way to think about it. How do you funnel ideas into the experimentation zone? There are ways to do this through design thinking on one side where you're exploring with the business what problems can be solved and in what ways can they be solved.

You could use data to nudge and recommend what areas you could look at. For example, data could tell you what trade promotion of a certain kind could improve the sales for other teams and could become an idea for you to experiment on. Or it could be the business teams coming out with newer ideas that they would like to look at because they are hearing those problems in the field or they are experiencing certain bottlenecks in which the business is experiencing problems. 

How do you then run this through the idea funnel to a scenario? There are certain things you could simulate to better understand them, and then work those into real POCs and small experimentation projects. Really put measurements in place so that you are able to evaluate what the experimentation is telling your business in terms of what it can gain, and then connect that to how you can scale successful ideas when they need to be pushed forward.

But more importantly, also feeding those experimentations back into the funnel so that the next time when the business is looking to do an experiment that has already been conducted by somebody else, one could discover that and use that to then see whether there is a need to do this. I think there is a clear experimentation design that one could adopt, making sure that the whole experimentation cycle is serving the need to innovate at speed, but also gives you the basis on which you could scale those ideas and make this a very proactive cycle for yourself.

Michael Krigsman: With that, I have to say a huge thank you to Sunil Senan from Infosys for taking the time to be with us. Sunil, thank you for being here. I really, really appreciate your time and your expertise. 

Sunil Senan: Thank you so much for having me on your show. It was great talking with you.

Michael Krigsman: Thank you to everybody who watched and especially to those folks who ask such great questions. I always say this, but you guys are an amazing audience. You're so smart, and we love your questions. They add so much to CXOTalk.

Now before you go, please subscribe to our YouTube channel. Hit the subscribe button. It's at the bottom of our website. Check out CXOTalk.com. We really have great shows coming up.

We'll see you again next time. Thanks so much, everybody, and have a great day.

Published Date: Jul 28, 2023

Author: Michael Krigsman

Episode ID: 799