Learn how AI is transforming software development. Ashley Kramer, Chief Marketing & Strategy Officer at GitLab, discusses DevSecOps, measuring AI's ROI, boosting developer happiness, and navigating the hype. Discover practical strategies for integrating AI effectively and building better software faster.
AI is transforming software development by boosting productivity, enhancing security, and accelerating innovation. In CXOTalk episode 861, Ashley Kramer, Chief Marketing and Strategy Officer at GitLab, discusses the importance of integrating AI into the software development lifecycle and the role of DevSecOps in creating a secure process.
When adopting AI, Kramer emphasizes the need for clear business objectives, developer satisfaction, and robust data privacy measures. She also warns against focusing solely on code generation and encourages leaders to explore AI's broader applications.
Episode Highlights
Integrate security into the software development lifecycle
- Make security an integral part of the development process, not a separate project phase. This will foster a culture of shared responsibility for security and help identify vulnerabilities earlier, reducing remediation costs.
- Use a DevSecOps platform to automate and integrate security checks into the developer workflow. This will streamline security practices and make them a natural part of the development cycle.
Measure the ROI of AI in software development
- To assess the value of AI integration, focus on business outcomes, such as faster time to market, improved security, and increased developer happiness. Tie these outcomes to measurable metrics.
- Track metrics like task completion time, team collaboration effectiveness, and time to release to quantify the impact of AI on developer productivity and overall software delivery performance.
Boost developer happiness to drive productivity and retention
- Empower developers by automating mundane tasks like code commenting and test suite generation, allowing them to focus on more challenging and rewarding work. This can increase job satisfaction and reduce turnover.
- Foster a collaborative environment where developers feel valued and can easily access information and support from other team members, including security and operations professionals.
Implement AI strategically with a focus on clear goals and guardrails
- Define clear business objectives and use cases for AI adoption in software development. Rather than trying to implement AI across the board at once, start with a specific problem area, such as security vulnerability detection or code testing.
- Prioritize data privacy and transparency by carefully evaluating AI tools and vendors. Ensure alignment with organizational security policies and regulatory requirements. Establish clear guidelines on data usage and access.
Look beyond the hype of AI-powered code generation
- Recognize that coding represents only a small portion of the software development lifecycle. Focus on leveraging AI to improve other critical areas, such as planning, security, testing, and deployment.
- Aim for better code, not just more code. Prioritizing code quality over quantity reduces technical debt and improves long-term maintainability. AI can achieve this by automating quality checks and providing insightful feedback throughout the development process.
Key Takeaways
Prioritize Developer Happiness for Higher Productivity: Happy developers produce better code faster. Streamlining workflows, automating mundane tasks, and fostering collaboration create a more positive and productive work environment, increasing efficiency and decreasing turnover. Invest in tools and processes that support developer satisfaction.
Move Beyond Code Generation to Unlock the Full Potential of AI: While AI-powered code generation offers some benefits, its impact remains limited. Focus on integrating AI across the entire software development lifecycle, from planning and security to testing and deployment, to maximize its value and significantly improve efficiency, quality, and speed.
Establish Clear Goals and Guardrails for Successful AI Adoption: Don't get caught up in the hype. Define specific business objectives and use cases for AI in software development. Implement guardrails around data privacy, transparency, and security to mitigate risks and ensure responsible AI usage. Start small, measure results, and iterate based on learnings.
Episode Participants
Ashley Kramer is GitLab's Interim Chief Revenue Officer and Chief Marketing & Strategy Officer. A former GitLab customer, Ashley works closely with GitLab's customers and is the executive sponsor of many enterprise accounts. She leverages experience from her marketing, product, and technology roles to message and position GitLab as the leading DevSecOps platform through the next stage of growth. She is responsible for setting GitLab's long-term strategy, including leading the company's enterprise data team, open source strategy, and AI vision, and for driving core marketing and pipeline generation. She is on the boards of Seeq Corporation and dbt Labs and is an advisor for Snorkel AI and several other startups.
Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator known for his deep expertise in digital transformation, innovation, and leadership. He has presented at industry events worldwide and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.
Transcript
Michael Krigsman: Welcome to CXOTalk. I'm Michael Krigsman, and we are exploring DevSecOps and the impact of AI on software development. Our guest is Ashley Kramer, Chief Marketing & Strategy Officer at GitLab where she is also interim Chief Revenue Officer.
Ashley Kramer: GitLab is the most comprehensive, AI-powered DevSecOps platform for software innovation. We were honored to recently be recognized as a Leader in the Gartner Magic Quadrant for both DevOps Platforms and AI Code Assistants.
Michael Krigsman: Ashley, what is DevSecOps?
Ashley Kramer: DevSecOps means integrating security as part of the overall software development lifecycle. Historically, we've thought of it as developers writing code and IT professionals deploying code. But we were missing one really big piece, which is security.
Security was still happening, but it was happening as a separate process, a separate workflow and, most importantly, a separate team. And what DevSecOps does is it shifts all of that left and makes it part of a developer's every day.
It's no longer the developer finishes writing their code and throws it over to the security professionals to check, throws it back over. It is all integrated in one place and always keeps security top of mind.
We've run many surveys at GitLab where developers know that security is important. But then you go ask the security professional, and they say, "Developers aren't paying enough attention." So, by having DevSecOps and a DevSecOps platform and workflow, it makes it a natural part of the process to help solve any security vulnerabilities or anything that could cause a negative impact, which we know nobody intends to do when they build their software.
Michael Krigsman: The rise of DevSecOps seems to coincide with the rise of AI in software development.
Ashley Kramer: Historically, in software development, we have thought of AI being integrated as helping developers write code. That is one very, very minor part of the process.
Writing code is only about 21% of a developer's day. The other 79% is spent helping them better plan, helping them secure the code, write their test suites, and get it out to production in a secure way. And so, with this concept, this platform, this workflow of DevSecOps, AI has never been better positioned to be integrated into all of those steps I just mentioned.
Michael Krigsman: What is the current state of AI in software development?
Ashley Kramer: Originally, we saw it as helping developers with code generation, giving them code suggestions, and that's really great. But that's only a small portion of what needs to occur within software development. By properly integrating AI throughout every step of the software development lifecycle, you can help customers increase developer productivity, improve operational efficiency, reduce security and compliance risks, and really accelerate their digital transformation.
Michael Krigsman: It'll be interesting to hear some of those AI use cases when it comes to software development.
Ashley Kramer: Our customers are leveraging it to help their developers with code generation, detect vulnerabilities, remediate those vulnerabilities, take care of the mundane things developers hate (like writing comments, which we know doesn't happen very often), and help new developers onboard to a project. Those companies are usually highly competitive. They're trying to improve their time to market, their speed to market.
Now, if I think to somewhere where GitLab also really shines, which is highly regulated industries, a really interesting one I keep seeing come up more and more is app modernization. You have large banks, for example, large FinServ companies that still have a bunch of code in COBOL and they want to upgrade that to Java.
Now, you can either go hire a giant team of developers – nobody wants to do that job, by the way – or you can operationalize it. You can bring AI in as part of that.
Now, if we flip to even more highly regulated, something like a government customer, they're really, really interested in understanding how to use AI, but they can't use large language models that are generally hosted. They can't use models where they would have to go through a cloud service or run any risk of their confidential code ever being shared. That's where we're seeing more of a need to bring AI, to bring the models on-prem in an air-gapped way, to drive forward the innovation they need to do, but with a lot more guardrails around it.
Michael Krigsman: The use cases are tied to the business objectives and the constraints of the particular organization.
Ashley Kramer: A really interesting question I discuss with these executives is, how do I measure the ROI? It is very different for each of them. I usually flip that back to them.
I was talking to a large automotive customer in Germany just on Friday. I flipped it on him, and I said, "What are the main business outcomes you're looking for?"
In this case, it was speed to market. The auto industry is highly competitive, as we all know, and they're all turning these great luxury cars into software devices, not just cars you drive. For that customer, the business outcome was speed to market.
Then when I talk to government customers, I hear about all of the power they know they can extract from AI, but what's so important for them is that all of it must stay within an air-gapped, on-prem environment. The AI must be context aware using only their confidential IP and, obviously, their confidential information.
Their business outcomes are privacy first and transparency first; those are top of mind. So the outcomes often overlap, but they do differ from customer to customer.
I'll give you another really important example that I hear quite a bit. There are a lot of companies that don't have tens of thousands, hundreds of thousands of developers, but they're really trying to keep up with the market, upskilling developers via AI (having AI help them understand how to integrate security, how to properly test their code). So, we're hearing the upskilling scenario a lot as well.
Michael Krigsman: Are there certain types of development challenges, issues, problems that AI is particularly useful for solving?
Ashley Kramer: The most important ones that we hear really are around removing the mundane from a developer's every day; being able to help them quickly come up to speed on a new project, whether they're new to the company or new to a project. Those tasks that everybody comes in every day and says, "Ugh. I don't want to comment my code," or "I don't want to go through all of these different documents to understand what's happening."
That first one is eliminating the mundane. Just as important, maybe more important, is the security piece.
I want to shift security left and make it part of a developer's every day. But they might not be super-deep in understanding security around the code they're writing just yet.
AI can help them with that. It can find vulnerabilities in their code, explain it to them, help them remediate it and, more importantly, train the rest of the organization on how not to do that again (throughout the entire codebase).
On the executive side, if you have a platform like GitLab, you can start to measure productivity metrics because everything you're doing (from planning the project to writing the code, testing, securing, and deploying), all of that information is gathered in a single, unified data source. Now, as the CTO of a company or the CISO of a company, I have all of these metrics that I can look at and say, "Oh, AI is actually impacting my release time; the security of my code," and you're able to really have that one single pane of glass view into the ROI and the impact that AI is driving.
That is the number one question we're getting asked, outside of privacy and transparency: "How can I measure, after I integrate AI, whether it's actually having an impact?" And so, that's another way: being able to measure it via metrics.
Michael Krigsman: Can you elaborate on the metrics and how you actually measure that delta, the impact of AI on developer productivity?
Ashley Kramer: I like to break it down into something that we call the three T's.
The first is task. Are you making tasks easier for developers? Some of the things like commenting the code, writing the code faster, generating code for them, is that becoming easier?
The second is the team. Is the team more effective in collaborating? Instead of throwing work over the wall to security and back, are you able to see team progress and teams moving faster in delivering more (whether it's day after day or month after month, however your company recognizes and sees that)?
Then the final one is time. There are different ways you can measure the three T's: via standard DORA metrics, or by benchmarking: "This is how we used to deliver software. This is how many bugs we had in our last delivery, or any security issues we had (hopefully none)."
You can also do this benchmarking and see, release after release, what improvement we're seeing if we're following the task, team, and time metrics for measuring the ROI of AI throughout the SDLC.
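For illustration only, here is a minimal Python sketch (not GitLab's implementation; the record fields are hypothetical) of how the "time" dimension might be benchmarked with two DORA-style metrics, lead time for changes and deployment frequency.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical release records: when a change was committed and when it reached production.
releases = [
    {"committed": datetime(2024, 9, 2, 10, 0), "deployed": datetime(2024, 9, 4, 16, 0)},
    {"committed": datetime(2024, 9, 9, 9, 30), "deployed": datetime(2024, 9, 10, 11, 0)},
    {"committed": datetime(2024, 9, 16, 14, 0), "deployed": datetime(2024, 9, 17, 9, 0)},
]

def lead_time_hours(records: list[dict]) -> float:
    """Median lead time for changes (commit to production), in hours."""
    deltas = [(r["deployed"] - r["committed"]).total_seconds() / 3600 for r in records]
    return median(deltas)

def deployment_frequency(records: list[dict], window_days: int = 30) -> float:
    """Deployments per week over a trailing window ending at the most recent deploy."""
    end = max(r["deployed"] for r in records)
    start = end - timedelta(days=window_days)
    count = sum(1 for r in records if r["deployed"] >= start)
    return count / (window_days / 7)

print(f"Median lead time: {lead_time_hours(releases):.1f} hours")
print(f"Deployment frequency: {deployment_frequency(releases):.2f} per week")
```

Comparing these numbers for releases shipped before and after AI adoption is one straightforward way to put figures behind the "time" T.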
Michael Krigsman: Are the key points streamlining development workflows as well as development productivity?
Ashley Kramer: For some, it might be time to market. It might be how secure the code is, how well (after the release goes out) customers are responding. That's a big piece of it, too.
Another piece, when it comes to measuring the ROI and the success of AI, that I really like the executives I speak to every day to remember is developer happiness. Everybody knows how hard it is to find great developers, and then, if they decide to leave, how hard it is to bring on new ones and ramp them.
A new one that I've been hearing more and more is, "We've integrated AI. We've used something like GitLab Duo." After six months of use, they run internal CSATs, internal satisfaction surveys, to make sure it is making an impact and making developers', security professionals', and operations professionals' lives easier.
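To illustrate the satisfaction surveys Kramer describes, a CSAT score is commonly calculated as the percentage of respondents who choose the top ratings on a five-point scale; the short Python sketch below assumes that convention and uses made-up responses.

```python
def csat_score(ratings: list[int], satisfied_threshold: int = 4) -> float:
    """Percentage of respondents rating satisfaction at or above the threshold on a 1-5 scale."""
    if not ratings:
        raise ValueError("No survey responses provided")
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return 100 * satisfied / len(ratings)

# Example: developer responses to "How satisfied are you with the AI tooling?" after six months.
responses = [5, 4, 3, 5, 4, 2, 5, 4]
print(f"Developer CSAT: {csat_score(responses):.0f}%")  # 75%
```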
Michael Krigsman: Developer happiness, I have not heard that phrase spoken too often. How does streamlining the process have an impact on making happier developers?
Ashley Kramer: It's everything from, they feel like they can collaborate better with the rest of their team. There are chat experiences where they feel like they're not reinventing the wheel.
"Somebody else has already done it. Great. I'm going to leverage what they did," or even better, "I'm not going to make the mistake they did. I don't have to do some of those mundane things anymore. I don't have to go read through a giant epic or an issue to understand how to start contributing."
In the end, developers want to feel like they are producing great software for their end customers, and they all want to do it in a secure and efficient way. And so, seeing AI make that go more seamlessly and smoothly, and seeing it help them collaborate with their product managers and their security professionals even better, is a really important metric, and that's something I'm hearing more and more from executives measuring ROI.
Michael Krigsman: In many ways, AI frees the developer from these mundane tasks, things like code commenting, as you mentioned, enabling the developer to focus on higher level, more important, more interesting activities. The developer is happier as a result, and you're able to measure that gap.
Ashley Kramer: We constantly want to be using our brain power to solve challenging, tough issues, solve problems for our customers. But we also know we have things we have to do.
There are things we have to do when we come in every day. I have to check my email every day. Hopefully, somebody solves that for me some day. But it's not me solving challenging problems for people.
From the developer's perspective, the more AI can do those different steps for them, the more that they can use their brainpower to solve challenging problems and drive great business outcomes for customers. That's what keeps developers happy (in my experience).
Michael Krigsman: You were describing the metrics that you look at. What kind of results are you seeing among your customers at so many different organizations?
Ashley Kramer: We've seen customers report to us that they're experiencing 50% gains in developer productivity. We now have customers reporting that they are releasing with 50% fewer vulnerabilities because we have vulnerability detection to help them understand the value.
As we continue to work with customers, we're hearing more and more about what they need next: how they can keep AI context aware within their environment, via things like on-prem models, and how they're going to measure that next wave of efficiency, developer happiness, and everything it will take to get the ROI they expect from integrating AI throughout the SDLC.
Michael Krigsman: Again, you're combining efficiency metrics with developer happiness metrics.
Ashley Kramer: It all goes hand in hand (in my mind). If you have developers that aren't that engaged and aren't that excited about what they're working on, they're probably not the most efficient and productive developers. That's just how the human brain works.
If they're trying to go through old legacy code and understand something, some old, different types of code that was produced back in the day, that's not helping them use their brains to solve future challenges. AI can do a lot of that, help modernize code, help them not sit there and write comments around their code, and help them really feel valuable in getting up to speed fast and delivering great value, modern value for their customers.
Michael Krigsman: It's a great idea to combine efficiency with satisfaction because, if you go to one extreme or the other, you're going to end up with either inefficiency or unhappy developers, and you're trying to accomplish both together.
Ashley Kramer: You can have the most efficiently run org in software development in the world (at least in your mind). But if your developers aren't happy and they are leaving the company to go work on what they consider to be more interesting projects and more interesting things, then that efficiency is going to take a dive because it is not easy to find great developers, ramp them really quickly, and have them start adding value. And so, I do believe that now is the time to bring those two together.
Michael Krigsman: The important thing is it is a unified platform-based approach that brings all three aspects together, as you just described: development, security, and software operations.
Ashley Kramer: That's right. Security is now integrated into what we used to call DevOps platforms, so it's a DevSecOps platform: one place for people to collaborate, to make sure the code is written securely and deployed efficiently, all in one process.
Michael Krigsman: How is GitLab's approach to AI in software development unique?
Ashley Kramer: It's because of how the platform was architected many, many years ago. We made a decision back in probably 2013 or 2014 that every time we added a new stage to help customers achieve software delivery across the entire SDLC, we would integrate it as part of the platform.
We wouldn't bolt it on. We wouldn't rely exclusively on a partner. We would build a complete, end-to-end platform.
Enter the age of AI. Now we have one platform with one single unified data store to hold all of the important information, so integrating AI throughout (from planning to writing code to testing, securing, deploying, and more) is easily achieved.
Other vendors that don't have the platform approach cannot leverage that end to end, and they also cannot measure it via what we call Value Stream Analytics, which is where executives want to see, "Okay, of all of the AI capabilities my team is using throughout the SDLC, how can I measure success, productivity, efficiency, and developer happiness?"
You can only do all of those things with a platform, and GitLab made a decision over ten years ago to do a platform from the beginning.
Michael Krigsman: How can business and technology leaders adopt AI in their software development workflows and processes without causing disruption inside their organizations?
Ashley Kramer: They need to have the right practices built around any AI that comes in, but particularly around software development. That involves the guardrails.
What are the large language models being leveraged to bring AI into the organization built on? Are they trained on others' data, like our competitors'? Are they going to take our data and share it? Having that privacy and transparency first, and the guardrails around it, is really important.
I've seen people have different councils or have different groups within the company that can go and check that and make sure what we're bringing in is going to be secure.
Having those success metrics. After I bring in AI and the team tests it, how am I going to prove the value, and what is that going to change about our organization, our business results, and how we plan for next year? All of that is really important to understand upfront.
We've often seen this shift in the past. Everybody just starts using this tool that helps you write code. But then you're not clear if it's secure. There's no transparency in what's happening to your code. Is it actually providing ROI? Having those defined upfront is really important for great end results.
What's important next is moving from reactive AI – which is largely what we've been talking about: I write my code; it checks the security for me; it does the test suite for me and moves it along – to an AI autonomous agent that is my collaborator and my partner. And when I come in as a developer every day, it says, "Welcome, Ashley. Here are a bunch of issues you need to tackle first. We found these security problems while you were sleeping, and you should probably focus on those."
It becomes your partner. It's more of a proactive approach versus reactive. Now there's a paired programmer matched with me all day every day to help create secure software.
Michael Krigsman: All this implies a tremendous amount of change and a great deal of opportunity for CIOs, for CTOs, for CISOs.
Ashley Kramer: Think of some of the things we've been talking about: more secure code, faster time to market of code, happier developers. If you put your mindset in that position and you have the right guardrails to make sure that's exactly what you're doing, I think AI goes from being this scary topic that people are still trying to wrap their heads around to, "Okay, here is my business plan (around AI particularly) within the software development in my organization, and this is how I'm going to measure success in three months, in six months, in nine months."
I would also encourage organizations not to try to boil the ocean with AI. Go use case by use case: what's your biggest issue to solve? Maybe your developers aren't actually writing test suites or testing their code, or you have way too many bugs or way too many security vulnerabilities. Go after that use case first.
Michael Krigsman: Ashley, what should business and technology leaders do now to prepare for this AI-driven future that is right in front of us?
Ashley Kramer: Now is the time to get educated on it. Now is the time to partner with vendors that deeply understand privacy-first, transparent AI and what it can mean for their organization: how they can measure success, how they can properly run a proof of concept or proof of value, and how they can take it step by step, partnering with the right organizations to help them understand the gotchas to look for and the elements they may have concerns around.
There are a lot of us that have been doing this all day every day, and we want to be the trusted advisor to these organizations. We see it organization after organization, and we can help them be successful in their AI journey when it comes to software development.
Michael Krigsman: Are there some best practices or things you've seen the best organizations do now again to be prepared for the future?
Ashley Kramer: The best organizations understand, first, what is the value or the ROI I'm looking for as an end result. They may not actually be able to see it and measure it at first. It might be six months down the road.
What are those different use cases that I want AI to improve: developer productivity, developer happiness, fewer security vulnerabilities? Define that upfront and measure it constantly.
The second is, what are the guardrails? What are the showstoppers and the nonstarters when it comes to AI? Ask the deepest questions you can around it.
Companies like GitLab and some others have things like AI transparency centers. You can go onto our website and read everything about our AI: the large language model providers we partner with and what happens with customer data (the answer is that their data remains safe and secure, and their IP remains their IP).
Understand that upfront before you start buying into all the buzz and hype of what AI can do. Understand the end results. Start with the end result in mind, and then partner with the right people and organizations to help you get there.
Michael Krigsman: This hype issue is so important. Do you have any advice to help technology leaders see through the hype?
Ashley Kramer: I do and thank you for asking this. The biggest hype in software development for years now has been AI helping developers write code faster. Writing the code is only 21% of a developer's time, and when you have only that in mind (and of course, that's what the developer will be most excited about), you are forgetting the rest of the steps that it takes to deliver software: keeping it secure, properly testing it, iterating, and making sure that you're continuously releasing that secure code in the right way.
My advice to leaders would be to get beyond that hype and understand everything else that is involved throughout the SDLC and the deep value AI can provide there, because, as I always like to say, more code does not mean more secure or more valuable code for your customers.
Michael Krigsman: You want better code, not just more code.
Ashley Kramer: Better, more secure code. Sure, delivered faster eventually. But more code to me might mean more tech debt in the future or more problems if you don't focus on the 79% that goes beyond coding in the process.
Michael Krigsman: Ashley, thank you so much. I enjoyed speaking with you today.
Ashley Kramer: Thanks for having me, Michael. I enjoyed the conversation as well.
Published Date: Nov 25, 2024
Author: Michael Krigsman
Episode ID: 861