Cybersecurity Strategy and AI in 2023


In this exclusive interview, NVIDIA's Chief Security Officer, David Reber, discusses the complexity of the company's product line and its impact on security. He explains why NVIDIA’s complex hardware and software product line demands a diverse portfolio of security solutions. Reber highlights the role of AI and machine learning in security and data protection, and explains how AI changes cybersecurity.

Reber presents an ecosystem view of security, where knowledge and trust play a crucial role in protecting against breaches.

The conversation also sheds light on the interlinked relationship between data protection and protecting against breaches, the importance of monitoring the data supply chain, and customer experience in security. Reber concludes by stressing the importance of innovation and how security teams should focus on enabling innovation instead of stopping it.

David Reber serves as the chief security officer and head of product security at NVIDIA. Before NVIDIA, Reber served for over a decade as a senior staff officer in the US Intelligence Community. He held the position of senior director of security for Nutanix Frame and Government Cloud Services. His background spans enterprise secure cloud service architecture, advanced cyber operations, offensive security research, global enterprise security, secure mobile tactical communications, and insider threat. Reber holds a bachelor's degree in Information Sciences and Technology from the Pennsylvania State University.

Transcript

Michael Krigsman: Today on CXOTalk, we're speaking with David Reber of NVIDIA. We're talking about security in 2023 and especially security in this world of AI in which we live.

David Reber: I come to NVIDIA from a career in the U.S. government in cyber defense, combined with working across Silicon Valley startups, and a few other areas. Today I serve as our chief security officer where I look at how we protect NVIDIA, our customers, and our people.

How does NVIDIA's product line make security complex?

Michael Krigsman: You have a very complex product line, right? You have hardware. You have software. You are deeply entrenched in the AI world. What does that complexity mean for your security posture and strategy? How does it inform security?

David Reber: In my previous roles, I either worked for a cloud company, where you're focused on cloud services, or for a software company, where you're working on how to securely deliver software but without the service element.

What gets complex here is that we're doing hardware development and hardware bring-up, which is a different suite of technologies and capabilities. Then, on top of that, there's how we deliver software, how we run it, and how we protect our customer data.

What we have to do is look at a very diverse portfolio of build systems, backend systems, and infrastructure that require different security approaches. We're not able to just go in and say, "Hey, here's the one solution that's going to meet all of our developers' needs to be secure."

We have to get creative. We have to innovate different ways to enable all of our employees to help contribute to the security of our company and our ecosystem.

How does NVIDIA manage security risk?

Michael Krigsman: How do you manage then essentially multiple security solutions falling under this broader umbrella?

David Reber: I take a depth and a breadth approach. We have experts within the central security organization who set common platforms, a common suite of capabilities, to enable our developer ecosystem.

We also have security experts distributed within our business units, right there in the code with our developers, helping them with their specific set of solutions or helping ensure the right features are in the hardware we're developing, because they're experts in that domain. Then we share those learnings across our entire virtual security team, as we refer to it here, making sure those lessons reach all of the different business units so that we cover both depth and breadth in the defense.

What is the impact of AI and machine learning on security and data protection?

Michael Krigsman: How does AI change your thinking or affect your thinking and your strategy in relation to security or data protection?

David Reber: Ten years ago or more, we had a network-centric security model. You'd set up your perimeter, and you'd have this soft, gooey center of your network.

Over time, what we've started talking about is app-centric security. How do you protect those applications? How do you protect what's going on in your network?

You start protecting against lateral movement, against attackers once they're in your network. That's where the zero trust concept started to come alive within the global community.

The phrase zero trust came from the idea that I don't want to just naturally trust my network. When we look at AI, it brings in a different dynamic that has really shifted us to a data-centric security model.

At the end of the day, we need to be able to protect data where it sits, as it transits, and while it's in use. This is really driving evolution and innovation in how you do distributed data and distributed security, all the way to the edge: in devices you can't necessarily trust from a physical-access standpoint, in your edge data centers, and at cloud service providers.

It's also bringing in a shared responsibility model where you have to trust your vendors. You have to trust your suppliers. You have to learn how to trust a cloud service provider not to lose your data. That, in turn, drives evolutions in confidential computing, so we can protect data while it's being processed.

But the other thing we look at is the whole data supply chain: data isn't just moving within my organization; where did it come from? How can I trust it, so that we can deliver trustworthy AI?
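
To make the data-supply-chain idea concrete, here is a minimal Python sketch of recording and verifying dataset provenance; the file names, manifest format, and function names are illustrative assumptions, not NVIDIA tooling.

```python
# Record where each dataset came from and fingerprint it, so downstream
# consumers can verify that training inputs were not silently altered.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(files: list[Path], source: str) -> dict:
    """Record provenance (origin + hash) for each dataset file."""
    return {
        "source": source,
        "entries": [{"file": str(f), "sha256": fingerprint(f)} for f in files],
    }

def verify(manifest: dict) -> bool:
    """Re-hash every file and confirm it still matches the manifest."""
    return all(
        fingerprint(Path(e["file"])) == e["sha256"]
        for e in manifest["entries"]
    )

if __name__ == "__main__":
    sample = Path("train_batch_000.parquet")  # hypothetical dataset file
    if sample.exists():
        manifest = build_manifest([sample], source="vendor-feed-A")
        print(json.dumps(manifest, indent=2))
        print("verified:", verify(manifest))
```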

What is NVIDIA’s security ecosystem and why is it important?

Michael Krigsman: That's really interesting. The data supply chain then is really part of an ecosystem view of security. It sounds like that ecosystem view is quite central to your strategy and your thinking.

David Reber: It's really become: all of us are in this together. We have a common adversary, the attacker, and as attackers keep getting better, defense has to keep getting better. It's a collective defense.

You'll see it in major breaches that happen, and in a lot of the publications; the impact tends to be broad. Every single cloud service provider that has a breach will impact tens to hundreds, if not thousands, of other customers.

It really takes all of us sharing our knowledge and then having trust in what we're getting from each other.

Michael Krigsman: How do you corral an ecosystem of players where each participant has their own set of concerns and their own agenda, and they may not fully align? How do you corral, or how do you herd, these cats?

David Reber: It's less about what the informed experts in security do every day and more about the thousands of employees. Every single one of them could be the next entry vehicle for an attacker into this company.

Our goal is to figure out how we give people the information to make informed, secure decisions when they need it most, whether it's the links they click, that phishing email, the line of code they're writing, or how they're going to publish and distribute to the customer. We want to get the knowledge from the security teams to those individuals at what we call the speed of light, as fast as we can possibly deliver it, so that feedback loop is quick.

It's not easy. It's always an innovation conversation: how do we look at what they need to do and give them the information we know can help them make a better decision?

We still have the standard common platforms, whether it's code scanning, malware scanning, or audit logging and monitoring. We have those capabilities that we've put in place to enable our diverse ecosystem.

Really, it is about educating the humans that aren't experts in security. They're experts in their domains, but how do we help them make smart decisions?

Michael Krigsman: You mentioned the term innovation. Can you elaborate on that? When you talk about innovation in this, what are you referring to?

David Reber: We need to be creative. One of the things I've noticed in many security organizations is that they take a hammer approach. A lot of companies will say, "Oh, our security teams, they're the 'no' people."

Our goal is how do we start with yes, and how do we move forward? Innovation is about taking steps towards a mission and constantly iterating to figure out what that best solution is.

I view our team's job as enabling the company's innovation rather than stopping it because it's not perfect. When we look at finding innovative techniques, every day there's a new technology coming out, and security tooling has to catch up. That's an industry-wide problem.

Rather than waiting for that, how do we help mitigate it, put controls or other measures around it, and make it automated, part of your deployment processes, so we can get there and continue to find unique ways forward?

We then contribute back to our security partners and vendors within the community: "Hey, we're learning how to do, say, whole MLOps pipelines." We contribute those learnings back into the community, into that collective defense, as we're learning.

We're dealing with massive data sets that some of these platforms have never dealt with before. As we solve those problems, how do we keep everybody moving forward rather than letting perfection be the enemy of good enough?

Michael Krigsman: How do you think about the distinction between data protection and protecting against breaches? Or, for you, are they essentially one and the same?

David Reber: It's highly interlinked. At the end of the day, as an attacker, as an adversary, your goal is to get information or cause harm. Generally, it's around the data.

If I focus on moving to that data-centric security model, that allows us to also kind of defend against those breaches.

Now, when you look at those strategies, it's also a blast radius question. What do we do if we assume something is going to happen someday?

It's an "assumed breach" strategy. The industry has moved towards it. How do we make sure that when they get in we can detect them as fast as we can, we have rapid response capabilities to be able to deal with it?

More importantly, how do we make sure they're limited? It should require an attacker to be noisy to move laterally to the next set of data, and the next.

One of the things we also face a lot, as an ecosystem, is that identity attacks are on the rise. How do you apply least privilege so that developers can be innovative but, at the same time, if their account is the path in, the adversary only gets limited access?
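
A toy sketch of the least-privilege idea Reber raises: compare what an identity is granted with what it actually uses, and flag the excess that would widen an attacker's blast radius if the account were compromised. The identities and action names are invented for illustration.

```python
# Permissions granted but never exercised are candidates for revocation:
# they add blast radius without adding productivity.
granted = {
    "dev-alice": {"repo:read", "repo:write", "bucket:read",
                  "bucket:delete", "admin:create-user"},
}
observed_usage = {
    "dev-alice": {"repo:read", "repo:write", "bucket:read"},
}

def excess_privileges(identity: str) -> set[str]:
    """Return permissions granted to an identity but never used."""
    return granted.get(identity, set()) - observed_usage.get(identity, set())

for who in granted:
    unused = excess_privileges(who)
    if unused:
        print(f"{who}: consider revoking {sorted(unused)}")
```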


Why are technology, education and training all essential to maintain cybersecurity?

Michael Krigsman: It sounds like you have a combination of technology measures and education and training for employees and ecosystem participants because, as you said, any one of your employees could be a potential pathway for an attack.

David Reber: It's people, processes, and technology. I see many organizations focus on just the technology problem. But we also look at the people, through training and real-time information that enable them to make good decisions, and we put in guardrails.

We don't want rigid processes, but we need enough of them. You need guardrails to make sure you're not putting people in harm's way when they have to make a decision.

I've been in those interview cycles where you have to talk to the person who made the bad decision, and their heart has just dropped because it was their fault (or they view it that way). How do you help protect them so they can make good decisions every single day, across people, processes, and technology, and enable that innovative culture?

Michael Krigsman: What about the AI aspect? You're an AI company. Does that change the nature of how you interact with security?

David Reber: It compounds it. Before, when you were dealing with just development, you were dealing with developers and engineers building solutions. As you move more toward data scientists, they can get very technical in their domains, but they're not necessarily experts in the underlying networking or application infrastructure.

They're great at data science, at the tools, and at extracting information. We need to make sure we have common platforms so they can focus on their unique value, what they're good at, while the burden of protecting the other parts of the infrastructure is taken care of for them.

Now, the unique challenge is, when you start talking about data and data access, you also start to see that you need playgrounds, experimental areas for data scientists to work. Then, when they generate a model from the data and you start using it in production, that entire training environment has just become part of your production infrastructure.

Traditionally, you could separate your dev environments out. But now those training environments are production, and we have to really focus on securing them against the new, novel threats we're starting to discuss as a community.

We're starting to see aspects of this around data poisoning and similar threats, which you now have to truly monitor for in environments you previously could have isolated off your network.
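
As a rough illustration of monitoring a training feed for poisoning-style anomalies, here is a minimal Python sketch that flags incoming batches whose statistics drift far from a trusted baseline; the z-score test, threshold, and data are assumptions, and real defenses are far richer.

```python
# Compare each incoming batch against a trusted baseline and alert when the
# distribution shifts beyond a threshold.
import statistics

def batch_is_suspicious(baseline: list[float], incoming: list[float],
                        z_threshold: float = 4.0) -> bool:
    """Flag a batch whose mean drifts far from the trusted baseline."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard against zero spread
    z = abs(statistics.mean(incoming) - mu) / sigma
    return z > z_threshold

baseline = [0.98, 1.01, 1.00, 0.99, 1.02, 1.00]  # trusted historical feature means
clean    = [1.00, 0.99, 1.01]
poisoned = [3.90, 4.10, 4.05]                    # e.g., injected outliers

print(batch_is_suspicious(baseline, clean))      # False
print(batch_is_suspicious(baseline, poisoned))   # True
```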

How does AI make security a data problem?

Michael Krigsman: AI introduces a new set of dynamics inside your environment that security then has to address.

David Reber: Yes, and we start hitting some limitations. I mean, you're talking huge data sets, in the petabyte range, that we want to work on.

Your traditional security processing tools just can't handle that load, so you have to think about the process and the technology differently. As I said before, it's that blast radius: how do you make sure that if and when something goes wrong, you contain it to that area of your infrastructure and limit the impact to your products?

Michael Krigsman: Where does AI come into play as far as helping you address these issues, narrow that blast radius, or help in other ways?

David Reber: I'm a big believer that security is more of a data problem than we've realized in the past. The number of servers, systems, everything you have to monitor across just continues to grow exponentially.

You can't scale human beings linearly with the alert volumes. You have to work smarter, not harder.

Really, what we try to focus on is letting the machine do what the machine is good at, and giving that information to the human so you use the human for what the human is good at.

This is where we look at using AI technologies to process through everything, highlight those needle-in-a-stack-of-needles challenges, or simply visualize the data so our analysts can start looking at different problems. With the OpenAI initiatives like ChatGPT, we're starting to look at how you use it to ask questions.
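
One hedged sketch of "letting the machine do what the machine is good at": an unsupervised model that surfaces unusual events from telemetry so a human only reviews the outliers. The features and data are invented, and scikit-learn's IsolationForest merely stands in for whatever tooling a production SOC would use.

```python
# Triage a flood of telemetry: fit an unsupervised outlier detector and
# escalate only the anomalies to an analyst.
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, distinct_hosts_contacted, failed_logins]
events = [
    [1200, 2, 0], [900, 1, 0], [1100, 2, 1], [1000, 1, 0],
    [950, 2, 0], [1050, 1, 0], [980, 2, 1],
    [250000, 40, 15],  # the needle: exfiltration-like behavior
]

model = IsolationForest(contamination=0.125, random_state=0)
labels = model.fit_predict(events)  # -1 marks outliers

for row, label in zip(events, labels):
    if label == -1:
        print("escalate to analyst:", row)
```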

When you're doing development work or doing a configuration, you're trying to, say, harden an S3 bucket. The information is on the Internet for you to be successful.

The question is, how can I get it to that individual's fingertips, in exactly the context of what they're trying to do, as fast as possible? That's where we're looking at different ways to get that information to cyber and non-cyber individuals so they can make good decisions.

How do you protect distributed data sets?

Michael Krigsman: There's a combination of information then. You have general background information that employees need, for example, on how not to be a victim of a phishing attack. But then you need (I would imagine) close to real-time information to give to security folks who are trying to defend against an active, ongoing attack.

David Reber: Correct. It's looking at how we shift security left, being both proactive and reactive in the defense, and processing all of the telemetry and information we're getting to rise above and filter through the noise.

We're also starting to see, in that second category, opportunities. It's a data gravity problem.

If we're collecting logs from all around the world, you don't want to ship them all centrally. With a lot of laws that are coming out, you can't. You need to keep your information regionalized for your customers.

Now the question is: how do you do defense across distributed data sets, doing that processing and sharing that knowledge, so your SOC teams, your security experts, can truly focus on the real problem?
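
A minimal sketch of that data-gravity pattern: process logs in-region and ship only compact summaries to a central view, so raw, regulated data never leaves its jurisdiction. The region names and summary schema are illustrative assumptions.

```python
# Reduce raw in-region events to counts the central SOC can correlate;
# only the summaries cross regional boundaries.
from collections import Counter

def summarize_region(region: str, events: list[dict]) -> dict:
    """Collapse raw events into a compact, shippable summary."""
    by_type = Counter(e["type"] for e in events)
    return {"region": region, "event_counts": dict(by_type), "total": len(events)}

eu_summary = summarize_region("eu-west", [
    {"type": "failed_login"}, {"type": "failed_login"}, {"type": "malware_alert"},
])
us_summary = summarize_region("us-east", [
    {"type": "failed_login"}, {"type": "policy_violation"},
])

global_view = Counter()
for summary in (eu_summary, us_summary):
    global_view.update(summary["event_counts"])
print(dict(global_view))  # cross-region picture without moving raw logs
```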

“It's AI until you trust it. Then it's automation.”

Michael Krigsman: What about issues such as algorithms and transparency and privacy? Does that intersect your work at all?

David Reber: Yes. It's AI until you trust it. Then it's automation.

Really, that comes into play as: how do you have transparency that the AI is going to do what you need? How do you trust it? How do you assert trust to its users and drive that transparency? That's where we're looking at trustworthy AI and AI ethics to make sure we deliver that trust and transparency.

The reason it really intersects our world is that it's not just about the data the model was trained on and the algorithms that were used; it's also the entire underlying infrastructure.

Where did all of that data come from? How did it come through our network? It's a provenance conversation.

Those same primitives are what you use for confidential computing, and what we use to represent that as a cloud service provider.

How do you enable people to have trust in what you do? That's where we're looking at that.

It's a common attestation solution: we can attest that this is what occurred. Then you, as a customer, can make your decisions. You can decide, based on the information, how it fits into your risk profile, rather than us making the decision for the customer.
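
To illustrate the attestation idea, here is a toy Python sketch in which a provider signs a statement of what occurred and a customer verifies it before making their own risk decision. HMAC with a shared key keeps the sketch self-contained; real attestation schemes rely on hardware roots of trust and asymmetric keys.

```python
# Provider signs a statement of measurements; customer verifies the signature
# and decides for themselves whether it fits their risk profile.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"  # stand-in for a real trust root

def attest(measurements: dict) -> dict:
    """Produce a signed attestation document over the measurements."""
    payload = json.dumps(measurements, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": measurements, "signature": tag}

def verify(document: dict) -> bool:
    """Recompute the signature and check it in constant time."""
    payload = json.dumps(document["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, document["signature"])

doc = attest({"firmware_hash": "ab12...", "boot_chain_ok": True})
print(verify(doc))                        # True
doc["payload"]["boot_chain_ok"] = False   # tampering breaks verification
print(verify(doc))                        # False
```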

Michael Krigsman: We have an interesting question from Twitter from Arsalan Khan (who is a regular listener). He asks great questions. Arsalan, I really appreciate that you're watching.

How do you balance security and protection against employee productivity?

He asks, "How do you make sure (on an ongoing basis) that cybersecurity is top of mind but also does not interfere with the day-to-day tasks of employees?" If I can rephrase that: how do you make sure that everything is secure without stepping in the way of people doing their jobs?

David Reber: Part of our strategy – and I constantly mentor our teams on this – is that you've got to deliver a good customer experience. Think about going to a store: if you have a bad customer experience, you may not go there again.

In a cyber world, an IT world, or an engineering world, if you have a bad experience with security, that's where shadow IT starts coming up. It slows everything down.

A lot of what we do is understand what the customer's problem is and how they like to work, and meet them where they are. Sometimes we have to be aggressive and move the needle for the organization. But a lot of it really is trying to have that dialog: start with where we need to secure, and how do we work together?

We are also seeing a pivot in the industry, and we have been doing this for a while: ensuring that your security teams have either walked a mile in the developer's shoes or know how to write code, so they can write code to help the developer out and be part of the solution.

Then you create this dialog. You create this relationship with the business leaders.

Then what you see is a pivot from, "Oh, that's security," to "Hey, security. How do I make sure I do this right?"

That's the pivot, that transition, when you're part of the solution, not just the policy team, not just the team telling you, "You have to do this," without an offer of help. That's where you'll see that change.

I mentioned that depth and breadth conversation earlier. We have breadth of expertise across the company, but we also distribute security individuals, architects, and engineers who report directly to those engineering teams and managers, so they're on the ground helping figure out how we actually take those steps forward.

What does customer experience mean in security?

Michael Krigsman: This point you just made about security needing good customer experience, can you elaborate on that? I've never heard anybody talk about security in quite that way.

David Reber: It's a mentality of service. You build any product. If your customers don't like it, they're going to go away.

When we build our developer ecosystems, our websites, our GPUs, we try to create a great experience so that it's easy for the developer to use. We take that same mindset when we build our common platforms (like our code scanning platforms): how do we make sure you can easily integrate, you have the right documentation, and you understand how the product will be used?

I don't assume that they have to use us; I assume that they can choose. Even though, as security, we could say "you have to do this," when you have that mindset, you change your product so that it's a good experience for what needs to be done.

We also drink our own champagne within our own security teams, and we're always our first customer. We always focus on how do we use our own technologies to secure the technologies that we're building, so that we reduce that developer friction.

Michael Krigsman: The point you made earlier about not being the people who always say no, it reminds me of conversations in the past that I've had with CIOs and, historically, CIOs were the people who said, "No. You want to do this, you want this report, you want... No. We can't. We can't do it." This then sounds like it's really foundational to the integration of security inside the culture of NVIDIA and the culture of your ecosystem.

David Reber: When you're dealing with new technologies, things that no one has ever done before, we're all learning this together. There's no right answer.

When you look at AI, regulations are coming out around the world. We're just at the beginning of the journey of what you have to do and how exactly you need to do it. We don't know yet, so we're learning together.

That's where, when you realize the environment you're in, that we're on a learning journey, with every question we get, even the craziest questions security people get, I can say yes to something. Something in that dialog, something in what they're trying to do, we can say yes to. Then we can learn on that journey together.

We know that we can take a risk on that first step. We can learn and see what's going on. It lets the security team learn and see what we need to do, and the developer continue to figure out where they're going. And we can take that next step.

Now, as technologies start being defined, it's kind of, "Here's your standard platform. Here are your commonalities." The goal is to make sure you have infrastructure as code, with those examples right there at the ready, so you don't need to have those conversations again and you can focus on the next thing.
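
A small sketch of the "guardrails in your deployment process" idea: a policy-as-code check that rejects an insecure infrastructure definition before it ships. The resource schema and rules are invented; real pipelines would use a policy engine such as OPA.

```python
# Evaluate a declared resource against simple security rules and block the
# deploy if any rule fails.
RULES = [
    ("public_access", lambda r: not r.get("public", False),
     "resource must not be public"),
    ("encryption", lambda r: r.get("encrypted", False),
     "encryption at rest required"),
    ("logging", lambda r: r.get("logging", False),
     "audit logging must be enabled"),
]

def check_resource(resource: dict) -> list[str]:
    """Return the list of policy violations for one declared resource."""
    return [msg for _, ok, msg in RULES if not ok(resource)]

proposed = {"name": "training-data-bucket", "public": True, "encrypted": True}
violations = check_resource(proposed)
if violations:
    print(f"deploy blocked for {proposed['name']}:")
    for v in violations:
        print(" -", v)
```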

We can all be learning and innovating with the business. There's no reason that we have to be behind in this battle because, at the end of the day, our goal is to help solve the world's hardest problems. You can't do that if you're not moving forward.

What is the role of AI governance in security programs?

Michael Krigsman: Can we jump back to the discussion very specifically around AI and issues such as the governance of AI? Again, how does that kind of topic intersect with your role as the chief security officer and, frankly, why? Why is there an intersection there?

David Reber: That governance, as I said before, is all about trust. How do I trust the data the model was trained on, the algorithm, the infrastructure that it was trained on, the source of where the data came from? Who gave it, and then those continuous updates? And then how is it delivered to the customers?

When you look at the problem that way, it's nothing really different from a standard CI/CD system, a DevOps pipeline. Now we're just talking about the data pipeline.

We, as a security team, may not be experts in the specific ethics of a given model; there are domain experts for all of that. But they're not domain experts in how you do the data supply chain. That's where we intersect and come together, so I can talk with the experts in ethics, in law, in that domain area to figure out what information needs to be guaranteed trustworthy across that entire supply chain.

Then what we do as a security team is figure out how to architect that data supply chain all the way up into our vendors, into our contract requirements for them, into the security controls they need to implement, so that the data we're getting can be trusted as accurate. That's where the interaction comes in, and we are heavily involved in that formation as trustworthy AI (even as a primitive and a concept) really takes form in the ecosystem.

How does NVIDIA integrate security into product development?

Michael Krigsman: What about from the product side? NVIDIA is creating new hardware, software, cloud services all the time. Where do you become involved? Again, what is the intersection between product development, product release, and your security team?

David Reber: The way I look at it, there is security of the product and security in the product. Security of the product is everything we need to do behind the scenes to make sure we have quality code and reduce the bugs in it and, if we're running the service, how we log, monitor, and protect our running infrastructure for our customers.

Our organization and our security teams focus on making sure we do everything that needs to happen behind the scenes. Then we need to make the translation from security of the product to security in the product. It's a shared responsibility model, so we have architects and engineers who make sure we're being transparent with our customers.

"We do this for you, and here are the features that we've put into our services or our product that enable you to do secure workloads with what we're providing. You're responsible for monitoring it," just like any cloud service provider does that.

"Here's your feature. Here's your logging feature, so that you can then monitor your use," because, in a collective defense world, I don't know what is a good user or a bad user for our customers. I want to enable them with all the information that they need to be successful while we protect what we need to protect at our layers.

As a product security organization, that's where we look at both hardening what we need to control and monitor, clearly articulating that trust model of what our customers do versus what we do, and architecting those features, because we use our own products. When we use our own products, we have to secure their use as well. That's why we need to make sure we're building the right things into that software and hardware to enable that collective defense.

Michael Krigsman: At what point in the product development lifecycle does security start to become a foundational issue on par with core product features, say?

David Reber: The goal is always to start from the beginning, so we do get involved with the product definitions and the product teams. We always look at, okay, who is the first customer, and we work on deeply understanding our customers' problem sets.

What compliance do they need? What features do they have to be able to protect their workload? That goes into the product definition.

From there, we integrate through the entire lifecycle, from design to development to operations, all the way until the product is retired and out of support.

We look at it across the board, with a variety of techniques, from the central security organization all the way to the distributed architects and engineers we have across the company who specialize in that product or domain. That's how we look at the problem set: defining the right product, building the right product, operating that product, and then being transparent if we have to do updates, if there are bugs or security vulnerabilities, and communicating that to the customers.

Michael Krigsman: We have questions stacking up on Twitter, and I love taking questions from Twitter. You guys in the audience, you guys are marvelous. Why don't we do that?

The first question comes from Lisbeth Shaw. You kind of touched on this earlier, David. She says, "How do today's AI platforms, techniques, and technologies change the nature of both cyber-attacks and defense?"

David Reber: It's accelerating both. The attackers, generally, their disadvantage has been scale. How do I scale customizations to those companies?

With OpenAI and ChatGPT, you're starting to be able to create phishing emails that read as very convincing. It's accelerating, at machine scale, the ability to customize attack vectors to organizations.

As I talked about earlier, though, on the defense side it's how we process and look at the data. How do we make sure the machine is doing what it's best at, so it can give information to the defenders?

Michael Krigsman: Arsalan Khan comes back, and he asks, "Since AI is the future and it relies on data and algorithms, how do normal, mortal, non-technology people make sure that data and algorithms don't have biases, since decisions might be based on AI suggestions?" Thank you, Arsalan, for asking that.

David Reber: When we look at trustworthy AI, our goal is transparency. How do you give transparency into how it was trained? What do we know about it?

As the experts who pre-trained the models: what do we know, and how do we know it? We present that information so that you can make your decisions.

That, combined with constant testing and constant feedback, makes that knowledge known rather than an opaque box where you don't know what's behind the scenes. As for the biases in the models: as long as you know what the model is telling you when you hand it off to a human, you can make decisions with that information and knowledge. When you know a bias is there, you can work to make it better and, collectively, we can work to make it better.

How do we handle negative consequences of autonomous algorithmic systems?

Michael Krigsman: Yesterday, I had my account at a major technology company – and I won't say who it is – suspended because the AI system said that there was some problem. I did something wrong. It's an account that's very important to me in how I conduct my daily activities and business affairs.

There was no way for me to get to a human person. Fortunately, I know enough people, so I was able to get it (through the backend) taken care of. By evening, the account was restored.

What do we do about essentially autonomous, algorithmic, data-driven systems that make decisions that affect our lives and there's no person at the other end to help us rectify mistakes? I mean we can easily think of consequences in every sphere of life. Any quick thoughts on that? It's a deep topic, and I'm asking you for quick thoughts.

David Reber: It's a common problem where you've created a system that can really help you do your job better today, but you haven't wrapped around it the feedback loop, the customer service, or the features within the product to bring it all together.

What I see in a lot of different cloud services, with account lockouts and so on, is that their first priority is trying to protect you and protect themselves. Their second and later priorities are how to give you the data as quickly as possible to say why it happened, and that's often what they don't do.

That's why you need the human. Why? What happened? Then, how can you give information back in a trustworthy way so that it can (hopefully automatically) unlock?

That's a tricky dynamic, because attackers use social engineering all the time. They're going to try to exploit even that part of the process. You have to get that human in the loop.

We do this even internally, with some of our systems and the cyber AI options in vendor products, to identify geographically improbable logins. How do we have a great feedback loop and customer experience with our SOC and our helpdesk, so you can resolve the situation as fast as possible?
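
A minimal sketch of the geographically improbable login check: if two logins on one account imply travel faster than an airliner, escalate to a human. The coordinates and speed threshold are illustrative assumptions.

```python
# "Impossible travel" detection: flag login pairs whose implied speed
# exceeds what a traveler could plausibly achieve.
import math

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula, in kilometers."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def improbable_travel(login_a, login_b, max_kmh=1000.0):
    """True if the implied speed between two logins exceeds max_kmh."""
    dist = km_between(login_a["lat"], login_a["lon"],
                      login_b["lat"], login_b["lon"])
    hours = abs(login_b["ts"] - login_a["ts"]) / 3600 or 1e-9
    return dist / hours > max_kmh

santa_clara = {"lat": 37.35, "lon": -121.95, "ts": 0}
frankfurt = {"lat": 50.11, "lon": 8.68, "ts": 2 * 3600}  # 2 hours later
print(improbable_travel(santa_clara, frankfurt))  # True: ~9,100 km in 2 hours
```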

Michael Krigsman: We have another question from Twitter. This is from Gus Bekdash who says, "The fundamental dilemma of cybersecurity is that the cost and inconvenience are certain and immediate but the benefits are delayed and uncertain. Can AI help with that?"

David Reber: The traditional model of security in organizations is an insurance policy. How much do we want to invest and what is the responsiveness of it?

I think what we're seeing are the starts of AI in different products, especially in the cyber world, that help better inform where you should make your investment. You have attack surface management tools out there, and you're starting to see more intelligent tools that look at whether a given vulnerability is exploitable.

Really, I think those are the initial investments, and I'm very curious to see how we can progress this year, next year, and beyond. Vulnerabilities come out in thousands of things in your enterprise every day. How do you know which ones are exploitable? How do you know where to make the most investment?

It comes back to putting information in the hands of the decision makers, the people patching your networks. How do you help them? Rather than looking at hundreds of CVEs that are missing patches, let them focus on the three that have the highest probability (within your network environment and context) of being exploited successfully. It's that balance of how you make those investments.
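
As a hedged illustration of that prioritization, here is a short Python sketch that ranks findings by exploit likelihood and exposure context rather than raw severity alone; the CVE IDs, scores, and weighting are invented for the example.

```python
# Rank vulnerability findings by contextual risk so operators patch the few
# that matter first, instead of wading through hundreds of CVEs.
findings = [
    {"cve": "CVE-0000-0001", "exploit_probability": 0.02,
     "internet_facing": False, "severity": 9.8},
    {"cve": "CVE-0000-0002", "exploit_probability": 0.91,
     "internet_facing": True, "severity": 7.5},
    {"cve": "CVE-0000-0003", "exploit_probability": 0.40,
     "internet_facing": True, "severity": 6.1},
]

def risk(finding: dict) -> float:
    """Weight exploit likelihood and exposure above raw severity."""
    exposure = 2.0 if finding["internet_facing"] else 1.0
    return finding["exploit_probability"] * exposure * finding["severity"]

top_three = sorted(findings, key=risk, reverse=True)[:3]
for f in top_three:
    print(f"{f['cve']}: risk={risk(f):.1f}")
```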

Additionally, how do you change the culture so that security is less of an insurance policy and more of an enabler of shared defense between you and your customers? It becomes product value, and that's where you also look at those investments.

Why do security breaches happen so often?

Michael Krigsman: There are so many security breaches out there. How does that happen? It seems like companies invest money in security, then they get attacked, and millions of names are released. Then they say, "Well, we're going to do a better job. We're going to invest more." But how? Why? How does that happen? To the layman outsider, it just seems to make no sense.

David Reber: The defense is always at a disadvantage. The attacker has to be right once. We have to be right every time.

When you're dealing with organizations of tens of thousands, hundreds of thousands of people, all it takes is one. One human, one bad technology decision.

When you actually look at most of the major breaches, they generally are not that sophisticated. They're not using fancy exploits to go after the big, high-profile targets.

It's simple. Somebody clicked on the wrong link, or somebody approved the multifactor prompt they shouldn't have. Now the attacker is in as that human, because the attacker's goal is to become someone in the network. Then they move around.

Looking at your question more broadly, though, it requires all of us to work together, share information, and have relationships. A lot of times now you're seeing supply chain attacks: one company gets breached, it impacts the next, and you have a chain reaction.

Building relationships across companies to share information and reduce the impact on our joint customer base, that's where we need to look and what we really need to invest in as a community so that we can help make it safer.

Advice to business leaders on AI and security?

Michael Krigsman: David, what advice do you have for business leaders in terms of managing security in this rapidly changing world where AI is accelerating everything, as you described earlier?

David Reber: It's a people problem. Start with how your security organization is integrating with your business leaders. How are you making security part of your product portfolio, so that you enable your customers? Because they're going to ask the questions.

When you start looking at those relationships, push your security organizations to be part of the development solution. That allows the relationship to form.

Everyone talks about fixing security culture. It starts with those relationships, understanding both sides, and being able to do that.

The other thing I would say is: make sure you have a clear data supply chain strategy as we go forward into this new world of AI. That's the new area: can you trust where everything came from (from software to data), how it moves, and how it's delivered to your customers? Make that a priority as you evolve into AI.

Michael Krigsman: That's interesting. Having a supply chain security strategy is fundamental to successful security today.

David Reber: Correct, and we've seen it in many high-profile breaches and issues over the past couple of years. It's becoming more and more prevalent across the industry.

Understanding your vendors and being able to have those relationships is important even if you're not an AI or data company. It's important for what every enterprise does today, and it's only going to compound as the data supply chain continues to grow.

Michael Krigsman: What about the technology? We're just out of time. You've emphasized the people. Where does technology come into play as far as the defenses go?

David Reber: Use your standard technology layers, your standard set of tools. You need to continue to invest. AI is not going to replace a good foundation.

Having a good foundation of common security controls, from auditing to monitoring to hardening to lockdowns, all of that has to be there. Then, over time, technology is going to enable us to analyze all of the data coming in and make more rapid decisions.

As I said at the start, it's about getting information into the hands of the people who make decisions every day. Technology is going to help us solve that problem: understanding what they're doing, what the best and least risky options are, and how they get that in near real-time. That's where technology is going to help us.

Michael Krigsman: Okay. With that, we're out of time. A huge thank you to David Reber of NVIDIA. David, thank you so much for taking your time and sharing your expertise with us today.

David Reber: Absolutely. It was a great conversation. Thank you for having me today.

Michael Krigsman: A huge thank you to everybody who watched. Now before you go, subscribe to our newsletter. Hit the subscribe button at the top of our website. Subscribe to our YouTube channel. Check out CXOTalk.com. We actually have amazing shows coming up, and you can participate live.

Thanks so much, everybody. I hope you have a great day, and we'll see you next time.

Published Date: Jan 27, 2023

Author: Michael Krigsman

Episode ID: 775