Generative AI and Business Transformation at New York Life

Learn how New York Life leverages generative AI for business transformation on CXOTalk episode 845. Gain practical insights on aligning AI strategy, fostering agility, and enhancing CX.

56:01

Jul 12, 2024
13,639 Views

In this episode of CXOTalk, Michael Krigsman speaks with Don Vu, Chief Data and Analytics Officer at New York Life, about the role of generative AI in driving business transformation. As a 179-year-old company and the largest mutual life insurance company in the US, New York Life is leveraging AI and data strategies to enhance client and agent experiences while navigating the challenges of legacy systems and change management.

Throughout the conversation, Vu shares insights on aligning AI initiatives with business goals, fostering collaboration between teams, and empowering employees with AI tools. He discusses the company's multi-threaded approach to AI adoption, which includes leveraging existing tools, targeting specific use cases, and exploring innovative solutions. Vu also highlights the importance of integrating AI and data strategies, addressing technical debt, and preparing for an AI-driven future in the life insurance industry.

Episode Highlights

Integrate AI and Data Strategies

  • Ensure that AI and data strategies are not just aligned but deeply integrated with the business strategy. That integration is what drives meaningful transformation.
  • Foster a culture of collaboration between business, technology, and data teams. That synergy is what supports the company's goals.

Honor Legacy While Innovating

  • Build upon the company's historical strengths and mission-based approach while incorporating new technologies.
  • Utilize the company's legacy as a foundation for future growth, leveraging it as a springboard for innovation.

Address Technical Debt Collaboratively

  • Navigate technical debt by promoting a strong partnership between business and technology teams.
  • Develop a strategy to shift from outdated systems to more flexible, cloud-based solutions.

Manage Change for AI Adoption

  • Recognize and address the "last mile problem" in AI projects, including operationalizing solutions and managing organizational change.
  • Engage business partners early to set clear expectations and tackle change management challenges collaboratively.

Empower the Workforce with AI

  • Invest in training and tools that allow employees to use AI capabilities in their daily work.
  • Develop a comprehensive AI strategy that involves using existing tools, targeting specific use cases, and exploring innovative solutions.

Key Takeaways

Combine AI, Data, and Business Strategies. To achieve impactful transformation, it's crucial to tightly integrate your AI and data strategies with your overall business goals. This alignment creates a collaborative environment where AI can effectively contribute to the company's strategic objectives. Emphasize teamwork between business, technology, and data teams to maximize the impact of AI initiatives and address technical debt challenges.

Embrace an AI-Empowered Workforce. Invest in training and tools that enable employees to leverage AI capabilities in their daily work. Develop a multi-threaded AI strategy that includes defending existing processes, extending targeted use cases, and upending traditional models with innovative solutions. This comprehensive approach ensures broad AI adoption, empowers employees to be more productive, and maximizes organizational impact.

Prepare for an AI-Driven Future. Recognize that AI is a transformative technology with far-reaching implications. While the full potential of AI may take time to manifest, early adopters are already seeing tangible benefits in specific use cases. To stay competitive, organizations must proactively invest in AI, manage change effectively, and cultivate a culture of continuous learning. Embrace the journey and adapt to the evolving landscape to position your company for long-term success in an AI-powered world.

Episode Participants

Don Vu is a Fortune 100 Chief Data & Analytics Officer with 25+ years of experience leading teams and developing data, AI, and ML solutions that drive business outcomes. He currently serves as Chief Data & Analytics Officer at New York Life, overseeing AI and data strategy for the company's $58.5B business.

Previously, he held leadership roles at:

  • Northwestern Mutual: Chief Data Officer
  • WeWork: Led Central Data & Analytics team
  • Major League Baseball: VP Data & Analytics, overseeing Analytics Org
  • MLB Advanced Media/BAMTech: VP Data & Analytics

Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator, known for his deep expertise in the fields of digital transformation, innovation, and leadership. He has presented at industry events around the world and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.

Transcript

Michael Krigsman: Welcome to CXOTalk, episode 845, where we discuss leadership, AI and the digital economy. I'm Michael Krigsman, and today we're exploring how generative AI can drive business transformation. Our guest is Don Vu, the chief data and analytics officer of New York Life. The company is 179 years old, privately held, and they're number 78 in the Fortune 500. They're also the largest mutual life insurance company in the US. So we are really talking scale.

Don Vu: I'm the chief data analytics officer at New York Life. As you mentioned, I joined the firm about nine months ago, but most of my 25-plus-year career in data and analytics was outside the industry. I spent 13 years at Major League Baseball Advanced Media and had a brief stint at WeWork.

Michael Krigsman: When we talk about Gen AI and transformation at New York Life, how do these two intersect?

Don Vu: Under my purview is our entire AI and data strategy and its execution. And I say that intentionally, those two things in combination. I think it's really critically important for our organization to realize that the data strategy and the AI strategy do go together and need to be synergistic.

Another required synergy is between the AI and data strategy and our business strategy. And it really all starts with that. Our CEO, Craig DeSanto, became the CEO and the chairman of the board over the last two years, and he really started to chart a fresh course with our business strategy focused on client and agent experience. He understood that having an industry-leading client and agent experience meant having one that's powered by technology, data, and artificial intelligence.

That really is the top node of our efforts and our North Star. Everything that we do with respect to AI, generative AI, data, and analytics is really meant to support and enable that business strategy. It's really an amazing time. I think the partnership between the business, technology, and the data teams has been incredibly fruitful.

This incredible moment where we've been shifting our strategy has provided a major lever for us that's really been complementary to generative AI's rise. We all recall that ChatGPT moment in the fall of 2022 where it became very tangible for folks. People really understood the magic of this technology and I think opened their minds to the many possibilities and the art of the possible within many different areas of the business.

So that's really provided fertile ground to reimagine the way client and agent experience might be powered by these new capabilities, really reimagining the ways that we work differently - work in tighter partnership and in a more iterative fashion, in an agile fashion, to bring this all to life.

Michael Krigsman: You are a 179-year-old company. How does that factor into the kind of transformation that you're describing?

Don Vu: We've been around for that long for a reason. There's been incredible success and so much of a strong foundation to build on. What's been really amazing is honoring and understanding those strengths and all those things that have made us so persistent and given us the longevity that we've had thus far and then building on that.

When we think about the things that make us really unique - our mission-based way by which we go to market, our focus on our clients and our policyholders, the partnership we have with our agents - that really provides a lens through which we build our business strategy, and then also build these new capabilities.

We don't shy away from our history. I think oftentimes as folks want to evolve and they intentionally want to avoid becoming obsolete as things move on, they shy away from their history. But we really have embraced it while also understanding that we don't want to be defined by it and we don't want any of the past to be an anchor. We want to use it more as a jumping off point to continue to evolve.

Michael Krigsman: Any company with the history of New York Life has an established culture. You have established technologies. There has to be technical debt. There's a whole range of different issues. And so, as you're looking to this AI and gen AI-driven transformation, how do you grapple with the needed transformation?

Don Vu: The reality on the ground is you don't have an organization and business that's run for almost two centuries and not have some level of technical debt. And like many other companies, that's something that we always try to balance. I think it's one of the things where we've been really, really grateful to have leaders that have navigated a lot of this journey before, and we've been very intentional about it.

The key, candidly, has been a true partnership between business and technology. Technology, data, AI and business - these groups can't work in isolation. Any sort of navigation of technical debt - the investments that you need to make, the simplification of the environment - all of those things require tight partnership and synergy and alignment on the path forward, and on how you can not only progress your business strategy but also clean up the environment along the way.

In business, I think folks want to have their cake and eat it, too. It's hard to be just one thing or the other. It's hard to be simply operational with no innovation at all. And so I think the key to our success thus far has been a tight partnership between business, technology and data to work through this problem together. Because there is no easy button. There really is just good old elbow grease and planning and challenges related to getting off things like mainframe technology, related to getting into the cloud where we can be more agile with our solutions. And so really, I would say it's that partnership, that synergy, the collaboration between business, tech and data, that's been critically important.

Michael Krigsman: Are there tensions there? Because you are responsible for driving AI strategy, which by definition is all about innovation. At the same time, the company has lots of existing customers and processes. And so on the one hand you're driving disruption, but on the other hand, you can't disrupt how the business functions. So again, how do you manage that tension?

Don Vu: One of the things, as I talked to peers in the industry, and candidly, having been in this space for 25 plus years and doing applied AI and ML for close to a decade, the last mile problem - this notion of operationalizing solutions, which also is inclusive of change management, business process reengineering, changing the way that we do things, given the unlock of these new capabilities - that really is the hardest part.

Working with our business partners on that change management, acknowledging collectively that we have a bit of an innovator's dilemma at times, that we have to deal with the same processes, folks that have optimized a certain system, maybe are challenged, or have blind spots when you need to disrupt them. We've found that that's been really, really healthy to acknowledge openly and then really work on.

How would we have to re-engineer these business processes? How do we do these changes? What are the human routines and processes and workflows that need to be rewired in order for us to be collectively successful? What we found is partnering on that and working on those hard questions and those hard last mile issues from the beginning, which sets expectations of the road ahead, has been really, really critical. And it's helped us. We're not perfect. We don't have any particular secret sauce. It really is about collaboration and proactive communication.

Michael Krigsman: And you mentioned that this is an important project for your CEO. So maybe talk about the importance of having senior leadership support in an initiative like you're engaged in.

Don Vu: I'm an old school John Kotter disciple, so John Kotter wrote one of my favorite books, Leading Change. I think it's one of Harvard Business Review's Top 25 business books of all time. So if you haven't read it, I highly recommend it. And one of the things he talks about is a sense of urgency. He talks about a coalition of the willing or a guiding coalition and putting points on the board and quick wins. Those are three of the principles that he talks about.

And with respect to a sense of urgency, the reality is New York Life and a lot of insurance companies have been incredibly successful, and they have very sturdy businesses. These are businesses that have lasted through not just things like Covid, but also the Spanish flu and the Great Depression. These are incredibly sturdy businesses. And oftentimes that level of strength, you can imagine the flip side of that. The sense of urgency and competitors nipping at your heels is not quite as pervasive as in other industries.

And so I think it's really critical for a leader such as Craig, and I think he understands this so intuitively, to really set the tone from the top, and he's really done that. That's something I've been very grateful for; these programs could not have any success without that executive, top-line leadership making the case for change and why we need to do this.

This is all about the policyholders and delivering for our clients. It's really important for us to continue to evolve our client and agent experience for that reason, and generative AI and this tidal wave of capabilities that are opening up new possibilities. We certainly have to skate to where the puck is going with that as well in order to make sure that we meet our objectives. And he sets the tone. He really has been incredible with doing that and absolutely couldn't be successful without that.

Michael Krigsman: So, there's a recognition that despite the success and the longevity of the business, you need to be thinking ahead and therefore taking active steps to drive disruption yourselves.

Don Vu: Absolutely. I think there's an acknowledgement that we don't want to be on our back foot with this. We're being cautious, candidly. Of course, we're in the business of risk, so it's not as if we're charging through haphazardly. There's plenty of things like CBAs, cost benefit analyses, and things of that nature - ROI discussions that are going on, and I think those are healthy activities.

We also balance that with understanding the broader, let's call it business case and the strategic importance of certain things. But we certainly do look at hard cost benefits along the way in tandem with these other things. But yeah, it's absolutely been critically important.

Michael Krigsman: We have incredible shows that are coming up, so you should subscribe to our newsletter so we can notify you of these live events. Don, you have a framework for AI. Maybe you can describe that to us.

Don Vu: We have a few different objectives with our AI and data strategy. For AI specifically, we for sure want to make sure that we have incredible business impact with these AI solutions. So that's of course something that we're always mindful of. But like we talked about from the very beginning, we also think that this is a bit of a catalyst moment, and having an AI-empowered workforce is something that we're also mindful of.

So that's almost like a reach sort of consideration if you think about KPIs, whereas the impact might be more of like a revenue or efficiency gain sort of metric that you're orienting towards. And in order to kind of optimize on both of those dimensions, we think about it through a bit of a multi-threaded approach, or you can call them different buckets.

And in talking to a lot of peers across the industry and many other industries, I think there are very similar themes and others are approaching it in similar ways. But the way we try to frame it is what we call "defend, extend and upend". That's just one way that happens to rhyme. We try to compartmentalize a couple different avenues by which AI is moving the needle for the organization.

The first is defend. One of the interesting things about generative AI is it basically rebooted so many different roadmaps for software companies. I think every software company basically started to fold in generative AI capabilities. And so generative AI is naturally manifesting in tools that our employees already use today, like Microsoft Office 365 and what they now call Copilot, with all sorts of generative AI capabilities.

There are tools such as OpenAI Enterprise that have been created that really try to give the capabilities of creating generative AI solutions to individuals that are closest to the problem. So when you think about defend, this bucket is really trying to leverage those types of tools, these tools that have generative AI and AI capabilities already baked in, and trying to get that into the hands of as many employees as possible to again try to increase the breadth of our AI-empowered workforce.

When you then look at extend, these are tailored, these are specific use cases that we're targeting. These use cases try to solve specific problems, and they might be in service. It might be in the context of marketing, it might be in the context of underwriting. Then we have generative AI solutions that we build mostly in-house, but of course using foundation models and other tooling that's available to us through the various technology providers and we build a more pinpoint and a more targeted solution for that. That's the extend part of our portfolio. That's where we're leaning into existing challenges that exist today.

Upend is really about that next horizon. Upend is really about what are the things that might completely upend or disrupt our business model and how might we place a bet? How might we start to work on some innovative ways by which that may come to fruition and really just start to understand the problem space a little bit better with respect to things like ROI and having a tight understanding of what the road ahead might be.

I think it's pretty clear that for extend, for a targeted use case, understanding the ROI given the clarity of the use case is the most straightforward of the three. I would say for the defend bucket where you have a bunch of people using tools like Microsoft Excel that now have generative AI capabilities, or Microsoft Outlook which now has generative AI capabilities, that productivity measurement's a little harder, it's a little more diffuse.

So I would say that if we had to stack rank them, I'd put that one behind the extend bucket. And upend is very hard to measure the ROI. If you're talking about completely new business models, you can certainly conjecture what those business models and those revenue streams and what the impact might be of that, but it is a bit of a guess.

So amongst those three, I think we're mindful of the fact that ROI could be a little shakier, but we're really mindful of trying to not necessarily disrupt ourselves, but try to peek around the corner of what might be coming, at least educate ourselves on the aspects of it all.

Michael Krigsman: I'm assuming that say initially minor productivity gains, for example, somebody using Office 365 and they have a copilot to help them write an email that is easy, relatively easy to roll out. And then it becomes more difficult the more you're changing processes or giving people tools that they have to relearn how they do things.

Don Vu: That's absolutely right, Michael. You hit the nail on the head and it's really connected to one of our prior conversations. It's really that change management, the change of habits, humans changing and the way that they go about solving problems or going through about their workflow is actually probably one of the biggest parts of the success of all of these solutions.

Which is why exactly like you described, some of the easiest rollouts and some of the most successful traction that you see is with folks that are using the existing tools today. They just happen to have enhanced features. It's not lost on me that when Apple rolls out their Apple Intelligence, suddenly over 2 billion users with iPhones and iOS devices across the globe are going to suddenly have a lot of these capabilities right in their hands and it's going to be more straightforward to use them because they're accustomed to using these things every day.

Whereas in our case, as we build these custom solutions, the targeted use case, you have to change the workflow, you have to change the way you do certain things. And then upend is a completely different paradigm. Typically there's real massive change that's required, but you really hit the nail on the head.

Michael Krigsman: I have a friend who's a venture capitalist, Evangelos Simoudis, who actually has been a guest on CXOTalk, and I was describing our upcoming conversation to him and he asked me to ask you a very good question. He said, do you have real world Gen AI deployments at New York Life today or are you still in the proof of concept or the prototype stage?

Don Vu: Last year, I think very much was the year of the POC, the proof of concept. Some people called the state that we were in even earlier this year, this notion of "PoC purgatory." And I think that was natural. It was a new technology. Folks were swarming to it. Folks were learning and experimenting and trying to find, let's call it product market fit, if you want to take a startup term.

For us, we had similar efforts, but we were fortunate to find some traction in some use cases where we have pilots in production. So that's a production deployment on a smaller scale and they're scaling out throughout the year. And so I think that we do have some. They're across domains such as content creation. They're in areas such as marketing and claims as well.

And then we also have a whole class of other ones, more along the lines of things like knowledge management and copilots that are in other areas that are right on the cusp as well and looking like they're going to be in production later this year. So we're trying to actually create even more kind of experience and more POCs. But we're really grateful that we have a few pilots in production that expect some scaled production later this year as well.

Michael Krigsman: What about your peers and colleagues with other companies? What do you see as the overall maturity in that flow from proof of concept to putting these applications in production?

Don Vu: So two thoughts come to mind. The first is one that we've been talking about a lot, this notion of change management and that last mile being one of the biggest challenges, I would say. The other thing maybe to mention is that not all gen AI use cases are the same.

So those solutions, and this is inclusive of ours that have tended to have more success, particularly with respect to going to production, are solutions that have a human in the loop where you're using generative AI for the things that it's good at and also guarding against the things that it's not as good at.

So generative AI is incredibly great at creating a first draft piece of content. But I think most folks that have looked at generative AI are familiar with the term hallucinations. Some people might call it creativity, but either way, what comes back is not necessarily an accurate, factual answer.

And so to guard against that challenge with generative AI, having a human in the loop is often really, really important. At New York Life, that's something that we're really mindful of. Although we have great generative AI solutions that are going to production and actually delivering incredible efficiency gains, which we've seen already, there is a trusted New York Life professional who really takes it over that last mile.

They've just had, let's call it like 90% of the kind of the gunk that would have preceded that, automated in some fashion or streamlined. But really, the final decision and the final touches on the content that's being created is through a trusted professional. And that's a pretty common theme across many of my industry peers. Just having that as one frame that we've seen to have a bunch more success with. And that's why you see areas such as marketing having a bit more traction than others.
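To make that human-in-the-loop pattern concrete, here is a minimal sketch in Python. It assumes nothing about New York Life's actual systems: `generate_draft` is a placeholder for whatever model call an organization uses, and the point is simply that every model draft passes through a reviewer who owns the final decision.

```python
# Minimal human-in-the-loop sketch: the model produces a first draft,
# and a trusted professional approves, edits, or rejects it before anything ships.
from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    prompt: str
    draft: str
    status: str = "pending"      # pending -> approved / edited / rejected
    final_text: str = ""
    notes: list[str] = field(default_factory=list)

def generate_draft(prompt: str) -> str:
    # Placeholder for an LLM call (e.g., a hosted frontier model).
    return f"[model draft responding to: {prompt}]"

def submit_for_review(prompt: str) -> ReviewItem:
    return ReviewItem(prompt=prompt, draft=generate_draft(prompt))

def human_review(item: ReviewItem, approved: bool,
                 edited_text: str | None = None, note: str = "") -> ReviewItem:
    # The human always owns the final decision; the model only removes drafting "gunk".
    if approved and edited_text is None:
        item.status, item.final_text = "approved", item.draft
    elif approved:
        item.status, item.final_text = "edited", edited_text
    else:
        item.status = "rejected"
    if note:
        item.notes.append(note)
    return item

item = submit_for_review("Draft a plain-language summary of a term life rider.")
item = human_review(item, approved=True,
                    note="Checked product details against the source filing.")
print(item.status, "->", item.final_text)
```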

Michael Krigsman: How about an area like call centers, for example, contact centers? You mentioned agents earlier. How do you manage this issue of hallucinations where there is no opportunity for a human to be in the loop, as you were just describing?

Don Vu: It varies. The interesting thing, service is definitely another area in the notion of call centers where we've seen pretty consistent traction across industries. And one of the canonical examples of that is a company called Klarna, and they've been very open about some of the benefits they've seen.

I think there's a lot of dimensions to the success you've seen there. I think part of it relies on where each organization is with respect to their deployment and maturity of solutions for service as a whole. So certain companies have chatbots that are already consumer facing. And so leveraging generative AI is really about having an evolution of that infrastructure.

For those folks that don't have that and have a more traditional phone call center, then that's not really an option. It's more about understanding the workflow in the place that each organization is in in order to understand what the opportunity is. So perhaps there are some preceding steps that are required for generative AI to have the true impact that it can. But I think oftentimes you can work within those confines.

So one thing that I've heard is there could be a time where a call center, a call comes in and there's a level one, or call it an L1 service professional that fields the call. Typically, if they weren't able to solve that issue, they would escalate that to an L2. Perhaps they would look through a knowledge base or a wiki in order to get the answer. We've seen generative AI be used instead as a little knowledge management bot in order to navigate that.

That can be measured with its impact in a couple of different ways. Perhaps that LLM and that Gen AI Bot is more effective and gets people to the answer more quickly than the internal knowledge management tool. Perhaps that leads to less need for escalations to an L2 as an example.

And the way that one of the measures by which you can understand whether you got the accuracy correct or not is understanding whether or not it solved the consumer's problem. The thing that they called the call center for - all of these deployments, they'll look at those metrics and that telemetry, and then even have what they call RLHF, reinforcement learning from human feedback. So there's this notion of a thumbs up and thumbs down.

So all of these things together are guardrails or mechanisms by which we continue to optimize these things. But really, we're early days with this stuff. It's amazing how quickly it's moving. That's a lot of what I've seen and heard about thus far, and I'm sure it's going to continue to evolve from here.
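The L1 support flow Vu describes can be sketched roughly as follows. The function and field names are hypothetical: `answer_from_knowledge_base` stands in for a knowledge-management bot over an internal wiki, and the log captures the thumbs-up/down feedback and escalation telemetry he mentions.

```python
# Sketch of an L1 call-center flow: a gen AI knowledge bot suggests an answer,
# the service professional records thumbs up/down, and unresolved calls still
# escalate to L2. Illustrative only, not a real product.
from datetime import datetime, timezone

FEEDBACK_LOG = []   # in practice this telemetry would flow to an analytics store

def answer_from_knowledge_base(question: str) -> str:
    # Placeholder for a RAG-style lookup over the internal knowledge base.
    return f"[suggested answer for: {question}]"

def handle_call(question: str, resolved_by_bot: bool, thumbs_up: bool) -> dict:
    suggestion = answer_from_knowledge_base(question)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "suggestion": suggestion,
        "thumbs_up": thumbs_up,            # the human-feedback signal Vu mentions
        "escalated_to_l2": not resolved_by_bot,
    }
    FEEDBACK_LOG.append(record)
    return record

handle_call("How do I update a beneficiary on an existing policy?",
            resolved_by_bot=True, thumbs_up=True)
handle_call("Why was this claim routed for manual review?",
            resolved_by_bot=False, thumbs_up=False)

escalation_rate = sum(r["escalated_to_l2"] for r in FEEDBACK_LOG) / len(FEEDBACK_LOG)
print(f"L2 escalation rate: {escalation_rate:.0%}")
```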

Michael Krigsman: Definitely. It seems like there's some experimentation, of course, that's still going on at this point in time, where the gen AI business, so to speak, is still relatively immature.

Don Vu: It is, absolutely. And I'll give you, actually another example that I found to be pretty interesting. Another common discussion area and a common opportunity that folks have dug into is around software engineering. So, coding and leveraging generative AI to, let's say, autocomplete certain things, or to help engineers with all the different things they can and all the different aspects of their job, that's been a really critical area for folks.

I think one of the interesting applications that, again, guards against the things that generative AI is not as strong at, but then leverages the things that it's good at is in the context of documentation or understanding really old code. So one of the things about being at a company that's 179 years old that has things like mainframe, you have these really old languages for which there might not be as many people around that really understand it.

There's subject matter experts that understand your particular code base that was written a certain way that maybe doesn't have the robust documentation that you might like. And generative AI can be used as a tool, not a perfect tool, but a tool to kind of pour through that code and start to understand what that logic is and start to provide some commentary around that.

Now, of course, we mentioned before it needs to be validated, but it then does really help facilitate the overhead on certain subject matter experts that oftentimes are needed across many initiatives and end up being the bottleneck. So that's been kind of an interesting place to be. And you mentioned immaturity. I think this use case navigates some of the immaturity in a mindful way.
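A hedged illustration of that legacy-code use case: feed an old COBOL paragraph to a general-purpose LLM and ask for an explanation that a subject matter expert can then validate. The snippet, prompt, and `call_llm` stub below are invented for the example, not drawn from New York Life's codebase.

```python
# Illustrative only: asking an LLM to explain a legacy COBOL paragraph so a
# subject-matter expert can validate and document it faster.
LEGACY_SNIPPET = """
       COMPUTE WS-ANNUAL-PREM = WS-BASE-RATE * WS-FACE-AMT / 1000
       IF WS-SMOKER-FLAG = 'Y'
           MULTIPLY WS-ANNUAL-PREM BY 1.5 GIVING WS-ANNUAL-PREM
       END-IF.
"""

PROMPT_TEMPLATE = (
    "You are documenting a legacy COBOL codebase.\n"
    "Explain what the following paragraph does in plain English, list the variables "
    "it reads and writes, and flag anything a reviewer should double-check:\n\n{code}"
)

def call_llm(prompt: str) -> str:
    # Placeholder for an approved model endpoint; the generated explanation
    # must still be validated by a human SME before it becomes documentation.
    return "[model-generated explanation of the COBOL paragraph]"

print(call_llm(PROMPT_TEMPLATE.format(code=LEGACY_SNIPPET)))
```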

Michael Krigsman: I'm thinking about the COBOL programmers who are retiring and getting older and Gen AI helping bring that code into the future and at least make it understandable.

Don Vu: I'm still old enough to remember I graduated before Y2K and I had some COBOL skills, and so they were in demand. And I'm with you. I think this is a really helpful bridge. This is a really helpful bridge to understand, kind of really this code that's been out in production really for literally for 25 or, sorry, for 50 plus years.

Michael Krigsman: If this AI thing doesn't work out, you may have an alternative career as a COBOL programmer, perhaps.

Don Vu: You never know. You never know. Pay the bills.

Michael Krigsman: Most enterprises are using LLMs out of the box for their experiments. A smaller group is fine tuning the LLMs. A very small group is developing their own large language models and small language models. Where does New York Life stand on this spectrum and why?

Don Vu: We're early in our journey, and maybe I'll just almost start backwards. I think that group that you ended with, folks that are creating their own models, that's not a bucket that we've really leaned into to date. I think there's so much for us to unlock with so many of the resources that are here today and understanding candidly how resource intensive that last endeavor is.

We've been pretty mindful of that, and we think there's plenty of business value to be achieved via other mechanisms before we dig into that very candidly costly sort of path. And so for now, that's not something that we're really concentrating on.

And so for us, we're doing a lot of what we call prompt tuning or prompt engineering as a starting point for these Gen AI LLMs. I've been in data for 25 plus years, and I've always felt incredibly grateful to be in this space, having seen how far we've come. It's really amazing, all the capabilities of these frontier models and the incredible investments that organizations like OpenAI, Google, and Anthropic have made as they lean into productizing research into these incredible frontier models that we have.

We have really leaned into leveraging those frontier models, leveraging prompt engineering, and in some cases doing some fine tuning. But you'd be surprised how far you can get with really, really robust prompts. That's been a lot of where we've been focusing our efforts. We certainly have been doing some fine tuning, of course.

And it's hard not to think that leveraging open source for more narrow use cases, when we start to think about the long-term economics, is something that we're really going to concentrate on as well. But right now, as we look for, call it, product-market fit and for the efficacy of these solutions for certain business use cases, the way we've found we can get speed to market, speed to value, and the ability to iterate most quickly has been by leveraging these frontier models, a lot of prompt engineering and prompt tuning, and a bunch of other techniques, including some fine tuning, though not as much.

It's a multithreaded effort. Everything's changing so quickly, and we certainly are trying to do our best to use as many techniques and approaches as we can.
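As one illustration of that prompt-engineering-first approach, the sketch below sends a carefully structured prompt to a hosted frontier model using the OpenAI Python SDK. The model choice, prompt content, and guardrails are illustrative assumptions, not New York Life's configuration; the same shape applies to other providers.

```python
# A minimal "robust prompt" example: no fine-tuning, just a structured system
# prompt plus grounding context sent to a hosted frontier model.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = """You are an assistant for life insurance service associates.
Rules:
- Answer only from the policy excerpt provided by the user.
- If the excerpt does not contain the answer, say "I don't know" and suggest escalation.
- Respond in two short sentences, then cite the excerpt section you relied on."""

user_message = (
    "Policy excerpt:\n"
    "Section 4.2: The grace period for premium payment is 31 days from the due date.\n\n"
    "Question: How long does a policyholder have to pay a missed premium?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",          # illustrative model choice
    temperature=0,                # favor consistency over creativity
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ],
)
print(response.choices[0].message.content)
```

Much of the "engineering" lives in the system prompt: constraining the model to the supplied excerpt and giving it an explicit way to say it doesn't know.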

Michael Krigsman: And we have a question from Greg Walters, who says, going back to the ROI issue, he said there's been recent chatter around the quote-unquote AI bubble and over hype. How do you see this? He says he sees similarities with past bubbles, such as Bitcoin, metaverse, and others. He thinks AI is unique. What do you think?

Don Vu: First off, I tend to agree with him. I don't think it's a bubble, and I think it's of a different nature than some of these other kind of bubbles that have been referenced. I've been around long enough to, I started my career in '98, and in '99 I worked for a startup that built startups during that whole Web 1.0 wave. And there was a bubble there eventually from a stock market perspective that I was personally a part of. So I've seen some of this play out before.

The thing that I think is very interesting about AI, I think where we're essentially in what I would call, and I'm stealing shamelessly from a professor, Ajay Agrawal, the author of Prediction Machines. We are in what I would call the in-between times, where we have this new technology that in my opinion is truly revolutionary, and I think many others agree, hence all of the incredible investments there.

But a lot of the scaffolding that we need to truly unlock its potential is still under development. And the analog, or the analogy that I've heard oftentimes shared, is when electricity came on board and really started to challenge steam as the primary energy source. And so you had factories that were completely designed around maximizing the power of steam that had to be completely refactored in order to have electricity flow through it. And so that took time, that took decades.

Now, do I think this will take decades? No, because I think this technology wave is standing on the shoulders of prior ones. We have obviously Web 1.0 that washed itself out. We then had the advent of mobile, and we have billions of people with these incredible devices in their pockets with tons of telemetry and context for what you would call inference for AI that is used to really customize what an LLM or an AI solution is going to be providing.

So I think that the in-between times that we're in is not going to be as long as it was before. But I think the fact that we're already seeing some early returns and some specific use cases for which AI is actually moving the needle in what I would call point solutions will give us enough time, will buy us time such that the systems level disruption that's on the horizon in that next, as I talk about being in the in-between times, like traversing that and getting to that next horizon, there'll be enough there. So that's my personal opinion, and I think you'll see that tangibly as the space matures. I think even in the near future.

Michael Krigsman: Let's move on to another question. I love taking questions from the audience. The people listening are so smart, and the questions they ask are fantastic. Here's a question from Lisbeth Shaw. She says, can you give examples of business transformations that New York Life is contending with, and how does AI help?

Don Vu: One of the things that's interesting about organizations that have been around for a long time is a lot of the processes that have been in place are incredibly manual and oftentimes paper based. One of the things that's amazing about generative AI is it's actually very good at what they call unstructured data.

Structured data is basically data in tables and in databases. You can almost think of Excel, whereas unstructured data are things like PDFs and images. When you deal with problem sets such as underwriting, where you have an attending physician's note with all sorts of doctor's handwriting, or when you have very paper-based processes that have been traditionally used, then these technologies really unlock new ways to approach those problems.

All the business process reengineering that we've been focused on in many of these areas, those are just kind of some really light touch ones, are ones where we feel like there's a really, really great fit for this technology and we're digging into all those areas.
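To illustrate the unstructured-data point, here is a minimal extraction sketch: an LLM, represented by a placeholder function, turns free-text physician-style notes into structured JSON that downstream systems could consume, with a human still validating the result. The note text and field names are invented for the example.

```python
# Unstructured-to-structured sketch: pull a few underwriting-relevant fields
# out of free-text notes into JSON. Placeholder model call; output needs review.
import json

NOTE = (
    "Patient is a 54-year-old non-smoker. Blood pressure well controlled on "
    "lisinopril. No history of cardiac events. Last A1C 5.6."
)

EXTRACTION_PROMPT = (
    "From the physician note below, return JSON with keys: age (int), "
    "smoker (bool), medications (list of strings), cardiac_history (bool).\n\n" + NOTE
)

def extract_with_llm(prompt: str) -> str:
    # Placeholder for a model configured to return strict JSON.
    return ('{"age": 54, "smoker": false, '
            '"medications": ["lisinopril"], "cardiac_history": false}')

fields = json.loads(extract_with_llm(EXTRACTION_PROMPT))
print(fields["age"], fields["smoker"], fields["medications"])
```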

Michael Krigsman: We have another question from Wes Andrews, who says the encapsulation of transformative business change technologies... Hold on, I'm reading through to get to the meat of the question here. As you well state. He says to you, Don, managerial change is often quick win fixated, yet data strategy requires complex, more of a long view effort, and isn't a finite kind of project. How do you see managing the costs?

Don Vu: I think that's a really astute observation. I think one of the paradigm shifts that I've seen organizations undertake when they go on journeys such as this, and this is not my first time having gone through a similar sort of data and AI transformation, is moving from exactly what you describe - I call it annual project-based funding, to more so multi-year product-focused investments.

And to me a product is something that folks acknowledge is a bit more evergreen. It's something that requires investment and has a strategic objective. And it has multiple years for which you have a roadmap and a plan. And I've seen organizations that have been in the prior kind of mode being more project-based. What are we going to get done in the next twelve months? Really a need to turn the corner and be more about persistently funded teams, persistently funded products.

And then I think it's incumbent on those AI and data leaders that are responsible for those projects to be able to tell the story and the narrative and connect to the business strategy, why these things are so relevant, why they need multiple years. So I think that's probably the advice or the perspective that I would give.

It's absolutely this person - the question hits the nail on the head. I mean, that's the requirement for success. In all candor, I think there's a pretty popular stat that most chief data officers last 18 months, I believe, and it's very much because of this exact problem. I think these challenges take a long time to manifest.

And so one of the jobs for folks in my seat is to make sure that we find those tangible kind of early wins that are part of the path to get us where we want to be in three years. And you stitch that all together. So navigating, again, these two time dimensions is really, really important.

Michael Krigsman: And of course, as you described earlier, the farther out on the innovation curve that you go, the more difficult it is to assess and balance the risk between the investment that's needed today versus what will ultimately become essential in the future.

Don Vu: Yeah, absolutely.

Michael Krigsman: We have another question from Twitter. This is from Arsalan Khan. And Arsalan Khan says, what are the drawbacks of Gen AI to organizations? It's an interesting one.

Don Vu: It's fair. I mean, there's no perfect technology. And I think one of the things that you could take a bear or a bull view to this. I think for most folks that have been following the commentary and the discussion, there's been plenty of bullish perspectives and optimism, particularly if you look at the venture space naturally, and also the deep tech space as well, and you've seen some bearish points of view. I know Goldman kind of recently published a bit of a long treatise on that, and I think both perspectives are worthy of discussion.

I think that the amount of capital investment that's required for AI solutions to come to bear is something that we need to be mindful of. The amount of energy consumption that's required is something to be mindful of. So those are the things I think people need to understand that that's part of the entire package of things.

It will take time to rewire, like I described that scaffolding that's going to allow us to fully unlock the billions and billions of dollars that have been invested so far. I think if you look at the ROI of generative AI as a whole, then there is an imbalance between what's been put in from a capex and investment perspective and what revenue can be directly attributed to generative AI. I think that's a fair point to make.

So I would say on the macro, those are the things that come to mind. On the micro, it's still an imperfect and immature technology. We've talked about hallucinations, the fact that you have to do, quote unquote, prompt engineering, which is basically asking the question in a more articulate way, in a more robust way. You can imagine that that's going to be far more automated as things go down the line.

It's hard not to think about the early days of search and the way search was a bit clunky early on. And now you have things like autocomplete, and they're using that corpus of all those queries that have come before it to provide the most common queries, streamline that experience. So we are so early in this, I think a lot of the shortcomings are going to be overcome.

Michael Krigsman: Arsalan comes back again, and he says, do you consider business transformation to be a journey or a destination? And here's the kicker, he says, what about exhaustion in pursuing AI if it becomes a journey that's too long?

Don Vu: I think in some ways it's almost both, and it depends on the DNA of the organization. For some organizations that might be really, really behind, it's certainly a journey, but it has certain mile markers that you think about. I think destination implies that you'll stop moving. And this is just a personal perspective, but I feel like an organization's standing is not in isolation or in a test tube. Ultimately, there is a competitive and marketplace environment.

And so standing still just feels like something that doesn't really behoove an organization to kind of do. And so to say it's a destination for me, I think maybe I would think of things differently. And so I think the journey is something that is always going to be the case. You're always going to be walking towards that horizon. Now, how fast you walk, how exhausted you might be along the way, I think are really great points to make.

The reality is humans can take a certain pace and change. So I think it's really important to calibrate that and understand it. And that's where the why is so critically important. Going back to John Kotter and really anchoring the folks on the case for change and the why we're doing these things, I think helps quite a bit. But it's absolutely accurate to acknowledge that it can be really tiring, it can be really rough. I think it's up to leaders to calibrate their pace appropriately, clearly, with any kind of transformation.

Michael Krigsman: Calibrating the pace as you just described is crucially important because otherwise the team will feel burned out and the results will be stillborn, you won't get what you want, even though the promise is there.

Don Vu: Absolutely, absolutely.

Michael Krigsman: And this is from Pavda on Twitter. And he says he has a question around AI capabilities that the technology team may utilize which impacts the business, since there is such a tight relationship between the two.

Don Vu: Sorry, what was the question exactly? It seems more like a statement than a question.

Michael Krigsman: I'm trying to figure that out as well. And I am wondering whether that is an AI generated question, because when I look at his account, I see a lot of material that is completely unrelated to anything to do with this. And so I'm wondering whether, who knows?

Don Vu: It's so meta, an AI, the little AI incident during our discussion. So that's good stuff.

Michael Krigsman: I love it. Okay, let's talk about data strategy. Okay, and can you talk about the connection between the data strategy and the AI strategy?

Don Vu: Yeah, no, I love this angle because having been a data practitioner for over 25 plus years, I think one of the phrases that those folks like myself who have done this for a minute, one that we've always been anchored to, is this notion of garbage in, garbage out. And it's meant to essentially capture that the quality of your data is so critically important to the solutions that you're trying to bring to bear.

And with LLMs, that's really shed sunlight on that in particular. And so this notion of if you don't have great data and great unstructured data in particular, and great knowledge captured in that data, if it's not up to date, then what you're training your LLM on and the way the answers that it provides is going to be suboptimal as well.

And so it's been amazing, actually, with generative AI being an unlock that has really opened the possibilities and the imaginations of business partners and wanting to have these things be successful, and then as you try to bring them there, they start to understand that data quality is so critically important.

I've never before seen more folks and more business partners talk about data quality, data ownership, data governance, things like that, candidly have been typically below the waterline. Oftentimes, data strategy and the management of all of these considerations, which is within a data strategy, has become just critically important.

All these things that we've built up from a data governance and data management perspective mostly focus on structured data, databases, and data warehouses; now they need to be expanded to unstructured data. And now more than ever, it's just really critically important to move away from human-based processes in these data strategy initiatives and have them be far more systematic, because the scope has expanded and exploded so much given the generative AI push that we've had.

Michael Krigsman: Why do so many organizations find this data strategy piece to be so challenging?

Don Vu: Candidly, I feel like a lot of times data strategies haven't been so explicitly connected to the business impact they provide. Business leaders know intuitively that they need data. I oftentimes use the analogy that it's like clean water, but oftentimes connecting that data, that clean water, that incredible proprietary asset that enterprises have to the business outcomes they achieve and call it the attribution of that can be a challenge.

I think that's been an area where a lot of organizations have struggled and a lot of organizations have focused on things like reporting as the main kind of activation method. But for those organizations that are more mature and focused on things like client experience, of course, if you don't have, let's say, a canonical source of truth for your clients' information that then manifests consistently across all aspects of the experience, then you're not going to see that direct connection of data to something that's really critically and business impactful, such as your client experience. I think that's where oftentimes organizations have challenges, but we're getting more and more mature, and fortunately, the space is getting better and better every day with it.

Michael Krigsman: It turns out that Pav, his name is Mike Pavelek and he's not a bot, he said that didn't go as planned, so he's going to ask his question again. Meanwhile, Arsalan Khan on Twitter and Greg Walters on LinkedIn suddenly have the same question: the impact of AI on the number of employees. Is there a decrease or an increase? Is it a shift? That's what Greg Walters says. And Arsalan Khan says, again, along the same lines, how important is it for all levels of an organization to understand AI and its implications to their jobs? Are we going to cut FTEs, full time equivalents? Because AI can do anything? So the impact of AI on jobs.

Don Vu: First, I'll start with this: no, AI can't do anything. It's quite limited in what it can do right now. Will there be a future, five or however many years down the line, where we see these capabilities become far more extensive and encroach on these areas in a different way? Perhaps.

I will say that there's a bit of a cliche that AI is not going to take your job, but someone using AI will and I think to a certain extent that's true. I think that folks that are more AI empowered are going to be more effective at their jobs. And so I think you'll see a lot of productivity gains manifest in that way.

That being said, kind of going hand in hand with that, I think you'll maybe need less people to do the same job in certain areas. We talked about things like marketing or copywriting or certain areas where maybe those same folks that had to do first drafts of certain copy, you might see you can do more with less. You've seen organizations like Walmart that have implemented AI really shift people from certain roles to other roles instead.

And so I think it'll vary by organization. In most instances, there's a lot of strategic work that instead can be focused on, but there are some areas that I think will be truly disrupted. There's a lot of organizations I think, is it Goldman or a couple other research firms that have done a per job family sort of view on what can and would be automated more? What percentage of the tasks would be automated? I think that's actually a useful frame when folks think about job replacement or job impact.

I think the way to think about it perhaps is more about what are the tasks that are comprised and make up a job. In all likelihood, some subset of those tasks for a job might be able to be replaced by some of these capabilities. But again, having a human in the loop, having a human involved to do all the other things that are tangential to what that job really is, is going to be critically important. So it might help with a lot of the kind of what I would call like the gunk in some aspects of the job. But when you think about most jobs, they're very multidimensional, and these subset of tasks are only one aspect of them.

Michael Krigsman: We have another question from Twitter. This is from Roger Dodger, and who asks an excellent question, right, within context. And I am totally confused because I look at the Roger Dodger account on Twitter and it has nothing to do with any of this. But here's an excellent question. What is your top use case for harnessing unstructured data? And thank you for asking that.

Don Vu: I think the most common one is around anything that's like knowledge management or like a copilot. The specific application is called retrieval augmented generation. So RAG. RAG is essentially taking an LLM that has already been trained on the corpus of the Internet, like most of these leading LLMs have been, and other sources as well, and other techniques.

But then you leverage, quote unquote, your data, whether it's an enterprise's or an individual's - and let's say, quote unquote, your data within your domain might be 5,000 PDF documents. Taking that corpus of unstructured data that's focused on a certain domain, whether it be marketing or sales or some specific product line, and leveraging retrieval augmented generation against it is probably the most frequent use case that enterprises have been tackling.

Now, RAG is not perfect either. RAG is challenging, particularly if you have certain accuracy levels. That's not a silver bullet, but that's probably the most common use case. And the most prevalent one that I've seen in the enterprise is retrieval augmented generation.
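A toy version of that RAG loop is sketched below, with a keyword-overlap retriever standing in for the vector embeddings and vector store a production system would use, and `call_llm` as a placeholder model endpoint.

```python
# Toy retrieval-augmented generation (RAG) loop: retrieve the most relevant
# internal documents, then ground the model's answer in them.
DOCUMENTS = {
    "whole_life_faq.pdf": "Whole life policies build cash value and premiums stay level for life.",
    "term_conversion.pdf": "Term policies can usually be converted to permanent coverage without new underwriting.",
    "claims_process.pdf": "Beneficiaries start a claim by submitting the death certificate and claim form.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    # Real systems use embeddings; keyword overlap keeps this sketch dependency-free.
    q = set(query.lower().split())
    scored = sorted(DOCUMENTS.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [f"{name}: {text}" for name, text in scored[:k]]

def call_llm(prompt: str) -> str:
    return "[answer grounded in the retrieved passages]"   # placeholder

question = "Can a term policy be converted to permanent coverage?"
context = "\n".join(retrieve(question))
prompt = ("Answer using ONLY the context below. If the context is insufficient, say so.\n\n"
          f"Context:\n{context}\n\nQuestion: {question}")
print(call_llm(prompt))
```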

Michael Krigsman: Mike Pavelek comes back and he says, and this one's a complicated one, he says, how does New York Life plan to integrate generative or hyper modal AI capabilities into your monitoring and observability strategy to ensure application efficiency that will drive revenue and improve customer experience? And have you thought about the impact of this on the business?

Don Vu: The word he meant to use was, or what he was referring to was like the multimodality aspect of AI. And so for folks that aren't familiar, that's essentially being able to take as input, not just text and a prompt that most folks are accustomed to through ChatGPT, but also things like images, also things like video and audio.

So I think multimodal is definitely something that is on our minds. One of the things that I'm so mindful of is how the client experience and expectations of end consumers is going to change dramatically as they continue to use AI tools every day. I look very closely at Apple and Apple Intelligence and how they're going to be having Siri make a step function improvement, and how GPT-4 and the voice mode by which you interact with that is going to be in again, billions of people's hands. And it's not hard to understand how customer experience and the expectations will shift with that. So 100% we're looking at these sorts of things. It's absolutely something on our radar and something we need to be prepared for.

Michael Krigsman: I have a question that relates to the use of AI in your core business, and I'm sure you've thought about this, which is the role of AI in evaluating life insurance risk.

Don Vu: I think the interesting thing about AI is it's just given a lot more focus on just the domain as well. And I think the power of these solutions is not in any one method of AI or what I would even call machine learning and data science in isolation. I think there's this movement towards what they call compound AI systems, where it's multiple models working in concert together to provide a solution. That is really where I think most folks understand that this is going to be the true unlock.

Even things like GPT-4 and the voice mode and a lot of what's being delivered in the market is the combination of compound systems. Similarly, with respect to things like risk, there's traditional ML models that would be used. You can imagine LLMs being used for things like unstructured data and processing. Those may be part of a broader workflow. I think the unlock is going to be in all these things coming together.
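A rough sketch of such a compound system, using placeholder components purely to show the orchestration: an LLM-style extractor turns a document into features, a conventional ML-style model scores them, and the final decision stays with a human underwriter.

```python
# Compound AI system sketch: multiple models working in concert in one workflow.
# Both components are placeholders that show the shape of the pipeline only.
def llm_extract_features(document_text: str) -> dict:
    # Placeholder for LLM-based extraction from, e.g., a physician statement.
    return {"age": 54, "smoker": False, "bmi": 27.1}

def risk_model_score(features: dict) -> float:
    # Placeholder for a trained ML model (gradient boosting, etc.).
    score = 0.1 + 0.01 * max(features["age"] - 40, 0)
    if features["smoker"]:
        score += 0.2
    return min(score, 1.0)

def underwriting_workflow(document_text: str) -> dict:
    features = llm_extract_features(document_text)
    return {
        "features": features,
        "risk_score": risk_model_score(features),
        "needs_human_review": True,   # final decision stays with an underwriter
    }

print(underwriting_workflow("Attending physician statement text..."))
```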

Michael Krigsman: Let's take another one from Greg Walters on LinkedIn. Aren't we organizing past data? When will AI no longer require historic data as it connects to real time data? He's thinking sensors and optimized training results.

Don Vu: Interestingly enough, I keep bringing up this example, but it's a tangible one. But I think we're going to see that sooner rather than later. If you think about Apple Intelligence and the fact that these LLMs and these technologies are going to be in your pocket and have all of this ambient information, as we all know, these devices understand where you are from a geolocation perspective. There's an accelerometer in there. They have the context of what you're doing. So sooner rather than later. And even in some cases today, this is already happening.

It's also why you've had these startups like Humane pop up. There's new paradigms for engaging with AI, and new ways by which I would call ambient information is being fed into these AI such that they can use them during the inference stage of these solutions. That's only going to continue, and it's going to be in a lot of folks' pockets later this year.

Michael Krigsman: Arsalan Khan comes back. He says, everything - process, data, people - are coming together. What skill sets are needed for future holistic thinkers?

Don Vu: One of the things that I've reflected on, so I'm a father of two, my kids are eleven and nine. I think all the time what skills are they going to need in this new horizon? And the thing that I always anchor to is, I would say, the ability to learn and the ability to be agile and embrace kind of new technologies.

I think about, you know, I'm really grateful for my education. I loved going to the University of Virginia and the McIntire School of Commerce. But the amount that I've learned since leaving university in the 25 plus years is pretty significant. I think that served me well and I think that self-learners are going to only be more rewarded in this next horizon because they're going to be augmented and supercharged by these tools. And so the iterative way by which they can keep getting better I think is going to be a huge differentiator.

Michael Krigsman: And with that, we are out of time. A huge thank you to Don Vu from New York Life. Don, thank you so much for being with us here today.

Don Vu: Thank you, Michael. It was really fun. Thanks so much.

Michael Krigsman: And an enormous thank you to everybody who watched and especially you folks who asked such excellent questions. You guys are amazing. We have incredible shows that are coming up so you should subscribe to our newsletter so we can notify you of these live events that we have every week. And subscribe to our YouTube channel. Check out CXOTalk.com. We really do have extraordinary shows coming up. Thanks so much everybody. Thanks to Don and have a good one. Take care everybody.

Published Date: Jul 12, 2024

Author: Michael Krigsman

Episode ID: 845