In CXOTalk episode 869, two world-leading CIO advisors share actionable strategies for aligning AI innovation with business objectives, along with the pitfalls and lessons they have learned along the way.
CIO Survival: Lessons from AI Early Adopters in the Enterprise
Join host Michael Krigsman for CXOTalk episode 869, exploring enterprise leadership at the intersection of AI and business with two of the world's most influential CIO advisors: Tim Crawford and Isaac Sacolick.
In This Episode, You’ll Discover:
- Strategic Integration: How CIOs can align AI initiatives with core business objectives to drive growth and competitive advantage.
- Balancing Innovation and Governance: Approaches to managing the trade-off between pioneering AI applications and maintaining robust ethical standards and risk management.
- Building Agile Teams: Insights on fostering cross-functional talent that bridges the gap between technical execution and strategic vision.
- Real-World Lessons: Practical examples and candid reflections on successes and pitfalls encountered in the journey toward enterprise AI transformation.
Whether you’re a CIO, a digital strategist, or a senior executive at the forefront of business innovation, this episode promises actionable insights and forward-thinking strategies to enhance your AI leadership. Discover how to overcome enterprise AI challenges, translate lessons learned into long-term value, and shape your organization's future.
Episode Participants
Tim Crawford is a seasoned CIO strategic advisor and host of the CIO in the Know podcast. Ranked among the most influential CIOs, he is regularly quoted in the Wall Street Journal, CIO.com, Forbes, SiliconAngle, and TechTarget. Tim has extensive experience helping Fortune 500 companies navigate digital disruption and innovation.
Isaac Sacolick is the President of StarCIO and the author of Driving Digital. He is known for his expertise in agile methodologies and delivering measurable technology-driven growth. He is a key voice in modern enterprise transformation and one of the foremost and most influential CIO advisors. He writes for CIO, InfoWorld, and other technology media outlets and hosts a live, weekly discussion series.
Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator known for his deep expertise in digital transformation, innovation, and leadership. He has presented at industry events worldwide and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.
Transcript
Michael Krigsman: How can Chief Information Officers survive and thrive amidst the turmoil, madness, and opportunity of AI? Today, on episode 869, we're getting advice from two of the world's foremost CIO advisors, Tim Crawford and Isaac Sacolick. Isaac, let me jump to you first. How can early adopters create effective AI strategies for CIOs?
Isaac Sacolick: The first thing I think of when I think about AI strategy is governance. I need to make sure that my staff understands what data they can work with, that the data has enough volume and is cleansed enough to be used with whatever AI model I'm going to apply it to, and that people understand our high-level objectives in terms of what we're trying to go after. I'm starting with governance instead of just going out and trying new things.
I think the second thing, and this is where many organizations do fall off, is that I'm going to talk about change management very early on. Many of the early AI successes have been happening in IT groups. If I want to start bringing AI into my sales, marketing, and customer support groups, where there's a lot more value, I need to be thinking about how I'm going to educate those groups and where I'm going to give them some leeway to use some tools and work with them. I'm going to be thinking about change management much earlier in an AI strategy than maybe I was able to get away with in the past.
Then lastly, I'm looking for value beyond just productivity. There's a lot of talk about how AI will let us be more productive, but I can't fund transformation efforts, or AI for that matter, on productivity improvements alone. I want to hear where it is transforming our business. I want to hear where workflow is going to be improved, in what ways, what value it creates, and where we're driving quality. I'm looking for value beyond just productivity to be part of my strategy.
Michael Krigsman: Tim, thoughts on this set of issues? Isaac has just raised a bunch of topics from innovation to culture and how AI is different from traditional IT.
Tim Crawford: You have to think about the end in mind when you start down this path. But as part of the prerequisites for AI, you have to start to break things down. Isaac talked about transformation. I think about it similarly, but I use a couple of different terms in defining different types of AI initiatives.
One type is around innovation, and the other type is around efficiency. And as Isaac mentioned, efficiency becomes hard because how are you really driving toward efficiency and what metrics are you using and how are you actually measuring that?
There's a lot of gamification that goes into those metrics, and a lot of questions around whether you can actually achieve reasonable efficiency gains considering the amount of effort you're putting in. Innovation efforts, by contrast, have a much higher output and much higher value to the organization in the long run, so they tend to face lower hurdles: efficiency is something you could technically achieve with more people, and you're just using technology as a means to advance your position.
Innovation, by my definition, covers efforts you couldn't accomplish any other way. These are insights you could not get by throwing more people at the problem.
Then the last component that you have to think about before you even start to lift a finger on AI is you have to start thinking about your data strategy. You have to have a comprehensive data strategy in place because if you don't have good data going in, you're going to have bad results coming out.
This is going to require a different way of thinking for a lot of organizations in terms of how they think about data, not just where the data is and how they can start to bring it together, but also, as Isaac mentioned, where the governance sits. How do we start to think about governance? There's a lot of regulation coming: over 400 regulatory components across the US states alone, something like 40 in California. There's a lot to get your arms around. I think you have to start with the end in mind and be really methodical about how you traverse this path.
Isaac Sacolick: On the efficiency side, I think we're at a point where CIOs can start looking top-down and reinventing workflow. How are we hiring people? What does this mechanical operation look like if we built it from the ground up using data, analytics, AI, and automation? A lot of what we've been doing over the last two years in fitting AI in has been taking a piece of a process, or taking a process we haven't done well, and saying we're putting in a technology or a step that's going to make it more efficient, more scalable, or higher quality.
We're really at a point where we can start reinventing our workflow. That's going to rattle a lot of heads, where we are always used to how things are running today. We're always used to what our job is in that function. And so that flows back into what I was saying earlier around change management. We need programs for our organization, training programs, learning programs, so that they start understanding and embracing what their work is going to look like as AI is brought in more fundamentally into processes.
When I think about innovation, Tim, I go back to the days when we launched mobile technologies. In the early days of mobile, we were taking web interfaces, slapping them on a small screen, and saying, hey, we have a mobile site. Then the technology got a lot easier, app stores came out, we started seeing mobile-first capabilities, and we began reinventing the user experience. We haven't seen that so much with AI yet. Even with the agents we're talking about bringing into workflow, most of the conversation and most of the value we're describing is inside our organizations. Look out over the next year or two and ask: how are we going to do something completely different now that we can put an agent in place, and how does that change the customer experience?
Michael Krigsman: If you're watching on Twitter, pop your question onto Twitter using the hashtag CXOTalk. If you're watching on LinkedIn, just ask your questions, share your comment in the LinkedIn chat and ask these guys pretty much whatever you want. It's a great opportunity. You can get free consulting right now. Take advantage of it.
You're both describing attributes of IT, culture, deployment and so forth that are not really unique or new. Culture is a human-nature phenomenon. Change management and all of that, is there anything that's really new about AI that changes the CIO role, that has a big impact here, that's different from what we've had in the past?
Tim Crawford: We have to get more comfortable with how we trust technology and with how we bring automation into the fold. Even today, a lot of applications still rely on a human component. Artificial intelligence and some of the newer technology coming out require us to get comfortable with automation in ways we haven't had to deal with in the past. And that is new. That's very new, because we haven't necessarily had to focus on our processes to make sure they are sound and clear.
This touches into some of the things that Isaac mentioned around change management. Do we have good processes in place and good change management in place? If you just turn around and say, oh, we're just going to automate it, that's a very dangerous decision to make.
One of the things that's new is that it's causing us to go back and rethink how we do things: to speed them up, to ensure we're getting more accurate output, and to ensure we're not automating bad processes. What that means is a cultural shift to embrace technology more fully than we ever have. And that is change. That's a hard cultural change for a lot of organizations, especially larger ones, that have been accustomed to more incremental change. That doesn't suffice anymore. You have to make tectonic improvements. You have to make transformational improvements because that's exactly where your competition is going, and if you don't make them, you'll be left in the dust.
Isaac Sacolick: One of the differences, Michael, is that it's happening to employees in our back offices a lot faster than what automation did. Their jobs are changing and it's looking like the things I used to do, I maybe no longer have to do it at all or do it fundamentally differently than I've ever done before.
Software development. If you asked any of us four or five years ago, could machines code for us, we probably would have said no. I have a blog post around this. Now, somewhere around 20 to 30% of code written by AI is being accepted by development groups and pushed into production. It's fundamentally changing how our employees work. Some embrace it and learn it, while a lot of other people are still scratching their heads and asking, what do I want to do now that AI can do some of the things that I'm doing?
I think of that as a writer. Am I writing for people or am I writing for LLMs that are going to bring my data in as content and use it to answer questions? It's a fundamental shift.
With the CIO role, we've seen this a little bit in the past, but it's getting harder. Every time there's a whole new discipline we had to learn, whether it was digital or data and now AI, if we didn't learn it well enough as CIOs and build up enough competency around us to lead these areas, the board and the executive group would look at that and say, we're going to go hire a chief digital officer. We're going to go hire a chief data officer. Now it's: we're going to go hire a chief AI officer.
Learning AI is fundamentally a bit more difficult. We have to scale our ability to learn from the organization. I think the goal for CIOs is that we need to move really fast and make sure our lieutenants and our high potentials are getting enough leadership, getting out there, and learning enough that they can inform me as a CIO about how to invest, rather than me going back to them and saying, this is the area I want to focus on.
Michael Krigsman: On Twitter, Mike Boysen says that to take advantage of these AI opportunities, you would need completely new business models to weave with AI and those business models will drive what your data needs are.
And on LinkedIn, Greg Walters asks about transforming from manual thinking to artificial thinking. Does this mean we will proceed by ignoring silos and hierarchies to change everything and ensure fully integrated organization-wide adoption of AI?
The common thread here is the power of AI to drive disruption, but what kind of changes do we as CIOs need to make to take advantage of these opportunities?
Tim Crawford: It comes back to whether AI is going to drive changes to organizational structure and how we think about and operate organizations. Let's not forget that an organization is a living being. It's an evolving living being that changes over time. Culture is just one component of that.
I do think AI is changing how we think about structures within an organization, and we already have examples of that. Has it gone so far as to broadly break down the silos? Not yet. I think at some point that might start to happen.
You could envision a point we get to sometime down the road where an AI solution looks at how we work as an organization, starts to learn how we engage with customers and employees, what the market dynamics are, what the environmental factors are, from geopolitics to weather, and is able to actually tell us how to run our business more efficiently. That's something we can envision and get to.
Then, building off of Isaac's earlier point: imagine you need software to do it, and that same system can then go and start to build some of the software. There's already talk amongst vendors, big vendors with really large, complicated applications and platforms, around whether we can use AI to essentially rewrite the entire platform, and how we can learn from the way people work to improve on that and feed it back into the software. The short answer is not yet, broadly, but we can completely envision it. I do think AI is already having an impact on org change, though.
Michael Krigsman: We have a question now from Arsalan Khan on Twitter relating to this. And since you're the one who first brought up culture, let me address this to you. He says, who decides what's a good process versus a bad process? Most standard operating procedures become archaic since people find other ways of doing things that are not even documented.
Isaac Sacolick: Our notion of process is changing dramatically. We used to think of linear processes and linear handoffs, and about how to automate pieces of them. What's changing is our ability to apply intelligence to handle all the nuances in complex decision-making.
We want to be able to ask questions, knowing that the analytics and the data behind them are far more complex than what can go into a linear process. I'm going to ask a question: given what's happening in the United States over the last few weeks, how should I evolve my supply chain? What are my options? This is not a linear process anymore. This is about having the right data in place to make tactical decisions as things change and evolve quickly. And going back to what Tim was saying, we may not be able to handle strategically oriented questions at that scale just yet, but we certainly can get our data ready for that. I think that's a big challenge for organizations to think about today.
Michael Krigsman: Martin Davis asks this question. He asks how a CIO can avoid pilot purgatory with AI.
Isaac Sacolick: I just read a statistic that said 69% of companies are running 10 or more POCs, and 10% are doing 50 or more of them. To some extent, I don't have an issue with that. Not all pilots and POCs are going to make it into production. A lot of them are learning exercises. A lot of them are really fishing into the underlying data: can the data provide enough backing to support the hypothesis around the AI? And even when it can, does the AI you've created drive enough business value to put it into production?
When you put that all into play, it's not surprising, to me at least, that a good number of these are not making it into production. I think the real issue is that organizations don't have a process to make that happen. When they really have a winning POC, a pilot that's delivering, do they have the change management in place for the people who are going to be affected by it? Can they bring that AI out at scale? Can they manage the operations around it, the model ops and the ML ops that go into monitoring a model and making sure it's still relevant? There's so much work just to figure out which POCs and which data to go after that even when you have a winning formula, you haven't done enough of the productionizing work to figure out how to run it at scale.
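As one illustration of the model-monitoring work Isaac mentions, a minimal drift check might compare a model's production scores against the baseline it was validated on and flag significant shifts for review. This is a sketch, not from the episode; the score distributions and the 0.05 significance threshold are hypothetical.

```python
# Minimal sketch of a model-relevance check: compare a production score
# distribution against the validation baseline and flag drift for review.
# The data and the 0.05 threshold are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live distribution differs significantly from the baseline."""
    result = ks_2samp(baseline, live)
    print(f"KS statistic={result.statistic:.3f}, p-value={result.pvalue:.4f}")
    return result.pvalue < alpha

if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    baseline_scores = rng.normal(loc=0.60, scale=0.10, size=5_000)  # scores at validation time
    live_scores = rng.normal(loc=0.52, scale=0.12, size=1_000)      # scores observed this week
    if check_drift(baseline_scores, live_scores):
        print("Drift detected: route the model to the MLOps team for review or retraining.")
    else:
        print("No significant drift detected.")
```

In practice this kind of check would run on a schedule against each production model, which is part of the operational discipline Isaac argues most POCs never acquire.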
Tim Crawford: Every time I turn around, I see another survey, more anecdotal information that comes in that basically supports that. Now, I don't want to come down on the idea of experimentation. Having a culture of experimentation, especially within a technology org is really important. That's important because you want people to use their imagination. You want people to bring that creativity in because ultimately, that's what leads to change within an organization and potential ways for your organization to differentiate yourself from your competition.
Leading CIOs know they need to find ways to explore these new areas. But as Isaac said, you do need to be somewhat methodical about it. As we've gone through the last 12, 18, 24 months, there's been a market shift: the bar has gone up, and before you even start an AI experiment, you need some very clear business outcomes and you need to understand how it ties to one of your company's objectives. Typically, that falls into one of three camps: customer experience, employee experience, or business operations and supply chain.
The thing to consider there, and to Martin's original question, is that you have to think about what you're doing and what makes the most sense, but don't expect everything to be buttoned up from the start. You still have to experiment, but you need to come to an outcome more quickly to determine whether a project has legs or whether you need to pull the ripcord on it.
Isaac Sacolick: Tim, I like that part of it. I think that's what companies miss a lot: how long do you give a team to experiment? Even if the experiment is a little open-ended, when do they come back with their vision? How is it aligned to the strategy? You can give people some time to go out and learn what the data is telling you, but you can't give them a mile-long runway anymore. The expectation is that AI is going to be delivered in about eight months and that its ROI is going to be measured in about 13 months. That's pretty quick. We couldn't do apps at that rate 10 years ago. We're doing all this data work, all this change management work, putting all this AI in place, and we still have a skills issue, yet we're trying to get to value in eight to 13 months while the sentiment from boards and CEOs is: the honeymoon's over, I'd better start seeing some value from all this investment I'm making.
Michael Krigsman: Liz Martinez on LinkedIn is screaming out during this conversation: business case.
Tim Crawford: I agree. There needs to be some connection to a business outcome. I would just caution on using the phrase business case. And the reason why we might be agreeing or disagreeing on this, Liz, is because a lot of times when people think about the business case, they put together this multi-page document that could be 20, 30 pages defining why they want to make this investment into this experiment and what the potential outcome is. And that's not what we're talking about. We're talking about these very quick wins.
But I agree, it needs to have some connection to the business, as I mentioned earlier. As long as that business case is short, direct, and to the point, and as you start down the path you can connect the dots to the outcome in short order, in the time frame Isaac mentioned, then I'm good with that. Just don't look for the fully assembled business case or business plan.
Isaac Sacolick: I use a one-page vision statement. It's got information about the customers. It's got information about the strategy. You have to project a timeline and which OKRs you're impacting. Very importantly, you've got to have a value proposition for the end user or customer who's benefiting. If anyone wants it, reach out to me and I'll share it with anybody who's interested. But I agree with you: it's got to be simple, and most importantly, what we used to call the business case is really about alignment, making sure the people working on this, the stakeholders, and the executives all know what the objective is.
Michael Krigsman: I'm going to combine two questions from Joseph Puglisi and Ashish Pathak. Joseph says, "Speak to the need for governance. What lessons have you learned around early efforts to use AI?" Ashish says, "How can the company communicate its AI strategy and progress to employees and stakeholders in a way that is inclusive and gets adoption?"
Isaac Sacolick: AI governance is an extension of what we've been doing with data governance. There are some important nuances to it around bias and ethics in AI. The main lesson I've learned is that we treated data governance and data security as very separate tracks from our innovation, and very often lagging behind it. We'd figure out what the innovation was around the data, we'd cleanse our data, and then we'd start looking at the governance implications.
With AI, you need to flip that, and even with data. The advice I've given to groups I work with is: I do everything in Agile. I have my data scientists working on the model and my application developers working on the user experience. I need my data governance specialist there. I need my AI governance specialist on that Agile team to make sure that as we work through this problem, we're doing things that align with compliance and regulation. As Tim mentioned earlier, these are changing very rapidly, and there's a lot of depth to them, so not everybody has the answers, but I'm putting people who are experts in this right on my team so that I can do governance in parallel with my innovation.
Tim Crawford: A couple of things to add on that second question, briefly. Number one, make sure you have good relationships with your head of legal and your head of audit. If you're the CIO, those are some of the first relationships you should be building within your C-suite and your peer network.
Then there's the second part of the question, around communicating this out. Many organizations have had success creating a governance body or council. That does two things. One, it starts to bring different ideas and perspectives into the mix; you get those different personas engaged. But it also has a dual purpose, which is communicating those shared objectives out to the rest of the organization. By bringing that together, it's not just sitting with the CIO or one individual, whether that's the head of audit, the head of compliance, or whomever, depending on your industry; it becomes a shared responsibility. I think that's probably the best way to go at this stage of the game.
Isaac Sacolick: I'm going to add one person to that, and that's the CHRO. I think it's really important to have them there when we talk about change in people's jobs and how quickly that's happening. And a tip for CIOs: if you're looking for budget around training and learning and development programs, a lot of enterprises have that with the CHRO. Bring them into the conversation around how AI is going to impact the business.
Michael Krigsman: We now have a question about data from Michele Clarke on LinkedIn. Michele Clarke says, "How do you get clean data? How do you know that your medical diagnosis data, for example, isn't full of racial and gender bias?" This is really important for every business.
Isaac Sacolick: A lot of financial services companies have been using these data platforms for much longer and with a lot more skill than companies in other industries. Because so many of us are investing in AI and looking at our data, we need to have those platforms in place. If you don't know whether your data is cleansed enough, healthy enough, or trustworthy enough, go look at the platforms and see how to tune them for your type of data.
Tim Crawford: Completely agree. Use technology to your advantage. These platforms are getting far more sophisticated; they're actually showing ongoing, real-time dashboards about the health of your data. Leverage that. I would just add that you have to bring the human component into the mix too: help people understand how things like bias, and how they put data into these systems, start to impact things downstream.
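As a small, hypothetical illustration of the data-health reporting Tim describes, a first pass might surface null rates, duplicate records, and how each demographic group is represented, which is often where bias first shows up. The column names below are invented for the example; this is not a production data-quality platform.

```python
# Minimal sketch of a data-health report: null rates, duplicate rows, and
# representation by a demographic column. Column names are hypothetical.
import pandas as pd

def data_health_report(df: pd.DataFrame, demographic_col: str) -> None:
    print("Null rate per column:")
    print(df.isna().mean().round(3))
    print(f"\nDuplicate rows: {df.duplicated().sum()}")
    print(f"\nRepresentation by '{demographic_col}':")
    print(df[demographic_col].value_counts(normalize=True, dropna=False).round(3))

if __name__ == "__main__":
    sample = pd.DataFrame({
        "diagnosis_code": ["I10", "E11", None, "I10", "I10"],
        "age": [54, 61, 47, 54, 33],
        "gender": ["F", "M", "F", "F", "F"],  # skewed on purpose to show the representation check
    })
    data_health_report(sample, demographic_col="gender")
```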
Michael Krigsman: Let's go back to Twitter, a really important question from Niya on Twitter, who says, what are the top three challenges organizations face when pioneering AI apps?
Isaac Sacolick: There are two sides to this. One area where you're going to see a lot of pioneering is built directly into the platforms people are already using. CRM and CRO platforms are all putting agents in place. They're all saying: put more data in our environment, or make more data accessible in my environment, because I'm going to bring the AI to you.
The real question there is understanding whether your data is ready for this. One of the things I don't see in some of these platforms is that they can do sales forecasting or assistance around hiring, but they're not telling you enough about whether the data in your platform is ready for that. Ask your people. This is a great place to engage them. Let them use the platforms you've already sanctioned to try out these agents and come back and say: where is it delivering value? Where does the data need to be improved? And go from there.
When it comes to building your own agents, and I know a number of companies that are starting to explore this, it really comes down to a build-versus-buy discussion: where do you have proprietary data and proprietary value to the customer, such that investing in building out an AI agent on that data is really going to be a game-changer for you? That's what I'm looking for first: where is there something that's going to be a game-changer in our industry or our company, or where do I have data worth investing in to build out that capability?
Tim Crawford: I'm going to counter Isaac a little bit on this one, because I do agree with some of the points, but one thing is people. That is one of the biggest challenges organizations have. The lighthouse companies that started to roll their own AI learned very quickly that they just do not have the capability to truly do that and maintain it. Because let's keep this in mind: it's not a one-and-done project.
What you're seeing now is companies starting to adopt AI technology built into their existing enterprise applications as opposed to building their own. Those vendors understand the challenges around data management and data governance, and they're able to put the right guardrails in place. Most enterprise organizations don't necessarily know how to navigate that; this is all new for them. But those enterprise application vendors have the girth and scale to do it for many different companies.
A good way to think about this is: I can build off the backs of those organizations as opposed to having to figure out how to do the basics myself. Unfortunately, there are a lot of assumptions that IT organizations make. We saw this with data centers: oh, my data center is super secure. Guess what? Of all the data centers I've done assessments on, they are not as secure as you think they are. You have to take those assumptions out, and unless you're willing to do that culturally, it's really hard to ensure that things are buttoned up to the same degree.
Isaac Sacolick: I agree that today it's buy over build, but what CIOs have to keep in mind is that build is going to get easier and cheaper. We saw that with mobile technologies: if you built too early, you were writing a lot of proprietary code, taking on technical debt, and creating user experiences that needed to be rewritten. It's going to get easier to build your own agents.
Tim Crawford: It is, but the other piece that hasn't come up in our conversation is adoption. We're already seeing a lot of dragging of feet and not very good adoption numbers for simple things like Copilot within existing productivity applications. This gets back to the people and the cultural pieces, but adoption of these tools is going to be really key.
Michael Krigsman: Ashish Parulekar comes back and he says, are organizations adjusting their OKRs and metrics to assess the impact of AI on business outcomes or how are they planning to measure the ROI? This comes directly back to the question of alignment between AI initiatives and business strategy that you both alluded to.
Tim Crawford: It depends on which OKR or which particular strategy you're talking about because in some cases it shouldn't impact them whatsoever. Whether you use a blue technology, a yellow technology, red, green, it doesn't matter. What matters is the business outcome. There are different ways that you're going to accelerate or hinder your business, but you need to stay focused on what that outcome is, whether it's AI or not.
When you get into some of the operational pieces and you want to measure things like before and after bringing new applications or technology into the mix, I could see those changing. But you need to distinguish between the business strategy pieces and the ones that are more in the weeds. That's a really important distinction in answering this question.
Michael Krigsman: Isaac, here's another question. This is from Chris Peterson on Twitter, and he says, regarding POCs, to what extent are legal and audit functions actually ready to be involved in the development and innovation and not just be reactive after the POC? In other words, to what extent are legal and audit functions involved and should be involved?
Isaac Sacolick: They're understaffed to keep up with the pace of change, and the technologies aren't transparent enough for them. That might change. Look at what's happening with agents: they take natural language input and can share their natural language reasoning and output when they have a conversation with another agent. We're now starting to be able to create an audit trail in English that an auditor or somebody in legal can follow. It's still early on this path, but because we're taking our services off APIs and moving them to natural language interfaces, where agents share their thought streams and the questions they pass to other agents, we're going to be able to put a lot more audit and legal controls around them.
Tim Crawford: They have to be right up there at the front of the line. At the beginning of the conversation, audit and legal have to be part of your data strategy. The other thing is, there's been a lot of concern around the black-box nature of these LLMs, and that's starting to change. We just saw OpenAI come out and start to share some of the reasoning, and I think DeepSeek is driving some of this, but now you're starting to see companies like OpenAI say, okay, we're going to expose some of the reasoning that went into the answer we gave you from your prompt. I would expect to see more of that.
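A minimal sketch of the natural-language audit trail Isaac describes could look like the following: every message one agent sends another is appended, with its stated reasoning, to a log that legal or audit can read directly. The agent names and fields are hypothetical, and a real system would add access controls and tamper protection.

```python
# Minimal sketch of an append-only, human-readable audit trail for
# agent-to-agent messages. Agent names and fields are hypothetical.
import json
from datetime import datetime, timezone

AUDIT_LOG = "agent_audit_trail.jsonl"

def log_agent_message(sender: str, receiver: str, message: str, reasoning: str) -> None:
    """Append one natural-language exchange to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sender": sender,
        "receiver": receiver,
        "message": message,
        "reasoning": reasoning,  # the agent's stated rationale, in plain English
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_agent_message(
        sender="loan_intake_agent",
        receiver="credit_check_agent",
        message="Please verify income documents for application #1234.",
        reasoning="Applicant income exceeds the auto-approval threshold, so additional verification is required.",
    )
```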
Michael Krigsman: This is from LinkedIn, from Jason Gutierrez who says, quick wins, take smaller bites, BYTES as well, out of the problem you're trying to solve.
Here's the question: What KPI are you trying to influence? Is your dev team skilled enough to deliver an AI app quickly, or are they still upskilling? Which raises the very important question of talent. You both spoke about build versus buy for technology earlier; what about talent? Developing talent in house versus going out to the market and recruiting people who have those skills: how do you think about that, and how do you balance it?
Isaac Sacolick: The skill set has changed from knowing as much as you can about how to do something to knowing what to do and whether what's being built is secure, robust, and high-performing. It's a shift in mindset toward knowing how to ask the right question rather than knowing how to roll up your sleeves and get something done. That's perplexing, particularly for us engineers and those working in IT.
But that's the nature of what AI is allowing us to do. It's not just about whether I'm more productive; it's about being able to do things I wasn't able to do before because AI provides the assistance. Last summer, I did coding for the first time, but I didn't write a line of code. I had AI write the code, and I was the person asking the questions: I need help with this function, how can I get some code to do this? After five or six prompts, I was able to do it.
I think the same thing is happening inside our IT organizations, and there's a real question for CIOs in particular. I saw a data point from McKinsey saying that IT is more than twice as advanced in using GenAI as other departments. But when we use the word productivity, when we use the word capability, and we start asking questions about ROI, if we're not showing where that's delivering value, we're going to be asked to give those costs up.
Michael Krigsman: Jason Gutierrez comes back to both of you guys. Maybe Tim, you can grab this. And he says, sure, but doesn't that necessarily mean more OPEX?
Tim Crawford: There isn't a one-size-fits-all answer for every organization, and one of the things I'd back away from is trying to get too granular in that answer. What's important for the CIO, and one of the things I'm looking at, is how I measure my organization's value and impact to the business. When I say the business, I don't mean a particular department of the company; I mean the business and our company's customers.
I'm looking at how I tie what we do as an organization within IT with our business partners in HR, with our business partners in finance and operations and engineering, and I'm putting together OKRs and metrics around our impact and performance against those objectives that we all share. That's where you have to start. Now, you can delve further into developer productivity and call center productivity and that's great. But when you start to get further into the weeds, that's where a lot more variables come into play and there is no one size fits all answer at that point.
Michael Krigsman: Isaac, we have a question from Derrick Butts, and it relates to risk and security. And he says, AI, risk guidelines and standards are continuously being developed. Can you recommend any AI frameworks to securely roll out and mature the value of AI tools for your business operations without increasing the risk across your business culture?
I think really fundamentally, we're talking about risk, culture and rolling out at scale.
Isaac Sacolick: When you're doing innovation, when you're driving change, you are by definition increasing risk. There's no way around that. If you want to keep doing what you do today and continue to perfect it, that's when you end up in the box that gets disrupted. So as soon as I'm innovating, as soon as I'm looking at any new capability or bringing change to my organization, I'm taking on risk. The question is: are you taking smart, parallel measures around that risk? Are you identifying it? Are you bringing in the right people who are going to ask the questions from a risk management perspective?
Tim's brought up legal, audit, and security. Am I bringing those people into the right conversations so that we're asking those questions earlier? What's going to be the impact if this data leaks? Should we be using this data for this purpose? How are we securing this data? Is our data masked, so that if anybody comes into our organization and starts using that data in a new way, the PII is masked? These are the building blocks we need in place when we're doing innovation so that we can do those things securely.
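As a toy illustration of the masking Isaac mentions, the sketch below replaces obvious PII patterns (emails, phone numbers, SSNs) before text is shared with an AI tool. The regexes are deliberately simplistic; a real deployment would rely on a vetted masking library or data platform rather than hand-rolled patterns.

```python
# Minimal sketch of masking obvious PII before text is shared with an AI tool.
# The regexes are simple and illustrative, not production-grade.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matches of each PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Customer jane.doe@example.com (555-123-4567, SSN 123-45-6789) asked about her loan."
    print(mask_pii(note))
```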
Michael Krigsman: Tim, this is something that's orthogonally related, and it comes from Arsalan Khan on Twitter, who says: how do you address Shadow IT from frontline employees versus executives? Should we encourage Shadow IT? I think it's orthogonally related because one of the traditional arguments against Shadow IT has been, oh, we're going to increase our risk footprint. So what about Shadow IT, especially in this age of AI where everybody's using ChatGPT?
Tim Crawford: I actually support Shadow IT. And there are a lot of folks that think it is a dirty word or can be problematic for the organization, but you have to go back and understand why is Shadow IT coming into your organization to begin with? And typically it's because someone isn't getting what they need in the way they need it. That's the simple answer. The longer answer may have a lot more components to it.
When it comes to AI, sure, there's potential for introducing more risk into the equation. However, if you put the right guardrails in place, you can actually enable Shadow IT and that creativity, especially in business units that understand the business better than your IT organization does. Organizations have been highly successful in creating a culture around Shadow IT, so it becomes more co-development, more collaborative in nature, rather than us versus them. But it starts with the culture and then fans out from there.
Yes, of course, you have to think about the data governance pieces. Yes, of course, you have to think about cyber security and those kinds of components. And you also want to think about simple things like how many different versions of generative AI do you want running in your organization and all the data that goes behind it. All of those do come into play, but I would start with what is your current stance with regards to Shadow IT and then let it fan out from there.
Michael Krigsman: Let's go to another question. This is from Mark P. McDonald, who's been a guest on this show. He's a distinguished vice president and research fellow at Gartner. He asks: which companies do CIOs see as the AI leaders right now, the ones doing AI well that we can study and follow?
Tim Crawford: There are some companies doing some amazing things. Can I talk publicly about them? No. The unfortunate piece is that because we are in the very early days, they're using this as a differentiator for their business strategy. They're finding ways to really change the game, not just move the chess pieces but change the whole game, using a very different approach. We're not quite there yet.
You do see some smaller public examples of how AI is being used, everything from code generation to summarization. Think back to the earlier conversation about legal: being able to summarize the body of work around a legal matter is a massive opportunity.
So there are some of those opportunities in the efficiency space, but the ones I'm familiar with, the really big, demonstrable ones that would be the lighthouses, the beacons to follow in their footsteps, those are still pretty close to the vest.
Michael Krigsman: Is it close to the vest because they have figured out the magic silver bullet or is it close to the vest because they're crying in the corners and don't want anybody to see it?
Tim Crawford: There's probably some of the crying in the corners too, to be honest. Nobody wants to cry in public; we want to go back and lick our wounds. Let's face it, you've got to have a few cuts to have success. The examples I'm thinking of are pretty significant, but I wish I could give you some context without exposing who it is, and I don't know how to do that.
Isaac Sacolick: Look, I'm looking for the small examples, okay? The reason companies are slow to announce them is fear of people coming in, using the technology to find the gaps and what the AI isn't doing well, and then ending up in the headlines for the thing it shouldn't have been doing in the first place. So they're rolling these things out slowly, but I'm starting to see customer-facing agents come out. I got one in my email this morning from a bank about being able to go through an agent for a car loan. We hate going for car loans; it's a horrible experience. When you start seeing that being publicized, the little things that agents are starting to help with, you know that something is changing in the marketplace.
Michael Krigsman: Jason Genovese comes back and he says, regarding Shadow IT, which you're in favor of, he says, as Shadow IT steps in, at what point does enterprise architecture become a concern?
Isaac Sacolick: Enterprise architecture still has to be there. I will say a lot of enterprise architecture organizations get a little too big for their britches, so there has to be a bit of a check and balance there. But if you take the right approach with EA, so that it becomes more modular and more of a framework as opposed to a heavily over-architected structure, then it accommodates Shadow IT.
Michael Krigsman: Tim, here's one I'll direct to you, from Ashish Pathak on LinkedIn. He says buying AI may come at a cheaper cost, but what about the fear around data privacy and security when the data is governed by the AI service provider? What are the key parameters to keep in mind when choosing an AI service provider?
Tim Crawford: This goes back to what I said earlier about why many companies are choosing to buy and use AI built into their existing enterprise applications rather than building their own: those vendors understand the challenges around data management and data governance, and they can put the right guardrails in place at a scale most individual enterprise organizations can't match on their own.
Michael Krigsman: Isaac, Jason Genovese wants to be clear that he disagrees with Shadow IT. He thinks there's too much risk with data exfiltration. If you're going to allow it, you must have appropriate security controls in place. And he does agree, however, that responsible sandboxing of apps for testing or a POC is necessary. You just can't forgo security controls.
Isaac Sacolick: I don't think we're fighting.
I don't agree that Shadow IT is a good thing. But what I think CIOs have to understand is that Shadow IT, and now Shadow AI, is happening. They can't put up walls and prevent it. Then you can go back to Tim's comments and ask: what can we learn from what people are trying to do that we're not servicing well, or where they don't understand the risks, or where they don't know about the technologies we've already put in place that they could use? So I think the real question is: how are CIOs monitoring for this and responding to it?
Michael Krigsman: All right, so you're on Jason's side. I'm on Tim's side because personally, I think that if an organization has shadow IT, it means that the people out in the trenches are not getting what they need and so they're bypassing and they're just doing it themselves. But that's why I say, I wish we were together and we could go out and fight about it.
Tim Crawford: Just to be clear on that, I'm not saying a Shadow IT free-for-all is okay. I'm talking about a managed, integrated approach to Shadow IT, not the free-for-all that many people think it is.
As a CIO, you have a responsibility to the organization, right? On one hand, you need to make hard decisions about what you do, how you support the organization, and how you engage it. That's why I think Shadow IT can be really powerful in a collaborative sense. But you're right: if it runs amok, with people just going off and doing whatever they want however they want, sure, that's risky. That's not a good thing. Again, that requires some mature thinking from a leadership standpoint to get to that point.
Michael Krigsman: Let's go to Martin Davis, who says AI as a service has the potential to turn all business apps into simply databases with an AI layer above them to handle all business logic in one place, no longer siloed by function or purpose. How should we prepare and plan for this future?
Isaac Sacolick: It's a great vision. I don't think we can get there that easily; we've had a vision of a utopian connected system for a very long time. Executing on that vision really comes down to developing your people. The technology is changing so rapidly. When I talk about transformation, it evolves every 18 to 24 months, and this is just the next wave, with its own language and its own risks. It really comes down to having leaders in place who are looking for the opportunities, where to spend time, where to run the experiments, and how to evolve not just the cost equations and the efficiencies we're working on, but how we're going to really change our business model, because AI is a brand-new capability for us to consider.
Michael Krigsman: Tim, you're going to get the last word.
Tim Crawford: I actually agree. One of the things you need to think about is how you can accelerate the rate at which you're transforming. Transformation is an ongoing process; it doesn't have a start and a stop. But you need to look at how you can accelerate not just the technology and innovation you're using, but also how your organization, both within IT and outside it, is evolving. That's going to require change in terms of people, personas, and skills; reskilling and upskilling are going to be heavily engaged in this process, as will the relationships you have both inside and outside the organization. That's a huge remit for CIOs, and it's a different remit than they've had at this scale and pace in the past.
Michael Krigsman: All right, we have covered a lot of territory today in this hour. I want to thank Tim Crawford and Isaac Sacolick. Thank you both so much for coming back and spending your time with CXOTalk. Really appreciate you both.
Tim Crawford: Thanks for having me.
Isaac Sacolick: Thanks for having me, Michael. Great show.
Michael Krigsman: And thank you to everybody who watched. You are an incredible audience, with great questions and insights. Before you go, join our community: go to CXOTalk.com and sign up for our newsletter so we can notify you of upcoming live shows and you can see the discussions we have. So join us now.
In two weeks, we are speaking with AT&T's president of Consumer. She owns the lion's share of AT&T's revenue and business. So join us then, you can ask her questions and share your comments. And with that, I hope everybody has a great day and we will see you again next time. Take care everybody.
Published Date: Feb 07, 2025
Author: Michael Krigsman
Episode ID: 869