AI and Cybersecurity Update from Palo Alto Networks

In CXOTalk episode 848, Anand Oswal of Palo Alto Networks discusses securing AI applications, balancing productivity and security, and adopting a platform approach to simplify operations.

Aug 05, 2024

In this episode of CXOTalk, Anand Oswal, Senior Vice President and General Manager of Network Security at Palo Alto Networks, discusses the rapid adoption of AI in business and the critical importance of securing AI applications. As organizations increasingly use AI to enhance productivity and transform customer experiences, they face new challenges in mitigating risks associated with data exposure, supply chain vulnerabilities, and runtime threats.

Anand emphasizes the need for a comprehensive approach to securing AI-powered applications, from ensuring visibility and control over employee usage to protecting against configuration risks and runtime attacks. He highlights the importance of balancing productivity and security while enabling organizations to harness AI's full potential. Anand also shares insights on the evolving AI security landscape and how Palo Alto Networks collaborates with industry leaders to develop robust security frameworks for AI applications.

Episode Highlights

Secure AI Applications by Design

  • Ensure that AI applications are integrated into the enterprise environment with complete visibility and control over data protection and threat protection policies. This involves setting the right level of data protection policies to protect sensitive data and enabling threat protection for responses from AI applications.
  • Implement AI security posture management to secure AI-powered applications from configuration risks, supply chain risks, and runtime threats such as prompt injection attacks, model DoS attacks, and data leakage.

Manage Shadow AI Risks

  • Identify and monitor AI applications used by employees, whether approved by IT or not, to ensure that sensitive data is not exposed. This includes having visibility into all application attributes to make informed decisions about usage.
  • Develop policies and recommendations to allow, deny, or limit the usage of AI applications based on their attributes and potential risks, ensuring that productivity is not compromised.
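The allow/deny/limit decision described above can be sketched as a simple policy function over application attributes. This is a minimal illustration only; the attribute names and thresholds here are hypothetical, not Palo Alto Networks' actual attribute schema:

```python
from dataclasses import dataclass

@dataclass
class AIAppAttributes:
    """Hypothetical attributes an AI-app inventory might record."""
    name: str
    vendor_hosted: bool        # prompts leave the enterprise boundary
    trains_on_user_data: bool  # provider may train models on submitted data
    it_sanctioned: bool        # approved by IT

def decide_usage(app: AIAppAttributes) -> str:
    """Return 'allow', 'limit', or 'deny' for an AI application."""
    if app.trains_on_user_data and not app.it_sanctioned:
        return "deny"   # highest risk: prompts may enter training data
    if app.vendor_hosted and not app.it_sanctioned:
        return "limit"  # usable, but only with data-protection policies enforced
    return "allow"

print(decide_usage(AIAppAttributes("notetaker", True, True, False)))  # deny
```

In practice such a decision would draw on many more attributes (hosting region, data retention, model provenance) and feed a policy engine rather than a single function, but the shape of the decision is the same.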

Protect AI-Powered Applications from Threats

  • Implement holistic security measures to secure AI-powered applications from classical and AI-specific threats. This includes protecting against supply chain and configuration risks, as well as runtime threats like prompt injection and model DoS attacks.
  • Collaborate with other leaders and vendors to develop joint reference architectures for securing AI applications, such as the partnership between Palo Alto Networks and NVIDIA.
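Two of the runtime controls mentioned above, screening prompts for injection attempts and inspecting links in model responses, can be sketched as follows. These regex heuristics are purely illustrative; production systems use ML classifiers and threat-intelligence feeds, not keyword lists:

```python
import re

# Rough, illustrative patterns only; real detectors are ML-based.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"disregard your (system prompt|guardrails)",
]

def looks_like_prompt_injection(user_input: str) -> bool:
    """Flag inputs that match known jailbreak phrasings."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def extract_links(response: str) -> list[str]:
    """Pull hyperlinks out of a model response so each can be
    checked against URL-filtering / threat-intel services before
    the response reaches the user (e.g., an AI-written blog draft)."""
    return re.findall(r"https?://[^\s)\"']+", response)
```

The second function addresses the scenario raised in the transcript: a generated blog post whose hyperlinks might point to phishing or malware pages.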

Balance Security and Productivity

  • Ensure that employees can access AI applications without compromising security. This involves providing complete visibility into AI usage across the enterprise and implementing data protection and threat protection policies.
  • Automate policy creations and recommendations to enable agile and fast deployment of AI applications, ensuring that security measures do not hinder productivity.

Adopt a Platform-Centric Security Approach

  • Simplify and unify network security by adopting a platform-centric approach rather than relying on multiple point products. This helps reduce operational costs and complexity while improving security outcomes.
  • Use AI copilots to help customers use platforms and products effectively, simplifying operations and enhancing security posture.

Key Takeaways

Securing Shadow AI: Balance Productivity and Risk

The rapid adoption of AI tools by employees, often without IT approval, creates a "shadow AI" phenomenon. Over 57% of employees use AI applications to boost productivity, potentially exposing sensitive company data. Organizations need visibility into AI usage to make informed decisions about allowing, denying, or limiting access. Implementing robust data protection policies and threat detection measures is crucial to balance productivity gains with security concerns.

Holistic Security for AI-Powered Applications

As companies develop AI-powered applications, they must adopt a comprehensive security approach. This involves protecting against supply chain risks, configuration vulnerabilities, and runtime threats specific to AI, such as prompt injection and model denial-of-service attacks. Organizations should implement AI security posture management and runtime security measures to safeguard their entire AI ecosystem, including models, infrastructure, tools, datasets, and plugins.

Embracing a Platform-Centric Approach to Network Security

The complexity of managing multiple point solutions for network security is becoming unsustainable. CIOs and CISOs should adopt a platform-centric approach to simplify and unify their network security infrastructure. This strategy can lead to better security outcomes, lower operational costs, and increased agility in implementing new policies consistently across the entire infrastructure. A unified platform allows for easier management of traditional and AI-specific security concerns.

Episode Participants

Anand Oswal is the Senior Vice President and General Manager at cybersecurity leader Palo Alto Networks, where he leads the company’s firewall-as-a-platform efforts. He holds more than 60 U.S. patents and earned a bachelor’s degree in telecommunications from the College of Engineering, Pune, India and a master’s degree in computer networking from the University of Southern California, Los Angeles.

Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator, known for his deep expertise in the fields of digital transformation, innovation, and leadership. He has presented at industry events around the world and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.

Transcript

Anand Oswal: It's not just an application plugging into a model. It's an entirely new AI app stack infrastructure. How do you secure that holistically?

Michael Krigsman: Welcome to CXOTalk. I'm Michael Krigsman, and we are exploring AI and cybersecurity with Anand Oswal, who leads the network security business at Palo Alto Networks. Anand, as one of the world's foremost security experts, how do you view this new wave of AI?

Anand Oswal: I think of it as three broad use cases. One is how we are using AI to secure our customers' environments. Second, as companies adopt AI, as employees use AI applications, and as companies build their applications, how do you secure AI by design? And third, how do you ensure you use AI to simplify your operations or cybersecurity?

Michael Krigsman: Anand, you used a very interesting phrase, "securing AI by design for the enterprise." Tell us more about that.

Anand Oswal: Over 57% of employees today are using AI applications (whether IT knows about it or not). We sometimes call it the shadow AI problem. Employees are using AI applications to increase their productivity, to get their work done better, to be more efficient. That's happening. That's one.

The second, almost every single organization is thinking of building AI-powered applications to simplify their operations, to give new experiences to their end customers.

Now, both of these use cases in securing AI by design, as I call it, will definitely increase a company's top line; will help them be more profitable. But if we don't make sure it's secure from the get-go, make sure that it's secure by design, we'll have an issue. That's what I call securing AI by design: both for employees as they access AI applications and for organizations as they build these AI-powered applications.

Michael Krigsman: Anand, as AI explodes, what is it about this technology that creates challenges, creates complexity from a security standpoint?

Anand Oswal: You have examples of notetaking applications. They are taking notes while you're having a meeting. They summarize the meeting minutes for you. But imagine that you're discussing a sensitive topic in the meeting. If that gets leaked, or if this is an application that is not approved by IT, IT cannot do anything to mitigate the risks.

The issue we have is, as employees use AI applications, the organizations need to have full visibility into who is using what applications. Should they allow the application? Should they deny the application? Or should they limit usage of the application? If you allow a given application, do you have the right controls to protect your sensitive data, your crown jewels?

Many of these AI-powered applications – or I would say all of them – give responses back. How do you ensure that the responses that come back from the applications don't have any malware?

We've seen examples, cases where, say, marketing professionals are using AI-powered applications to write a blog. The blog may have a bunch of hyperlinks. What if one of those hyperlinks is a phishing or malware attack?

You need to make sure that you're having the right visibility, control over which applications are used and not used, have the right level of data protection capabilities enforced and enabled, and protect from these threats and malware when the responses come in. That's securing it completely for employees accessing the AI applications.

Michael Krigsman: Fundamentally, you have AI proliferation everywhere. Yet, this proliferation is taking place in a very uncontrolled manner (from a security standpoint).

Anand Oswal: That's why you need to have all the visibility, the controls from a data perspective, and threat protection. You can only secure something if you know what's happening.

Once you get that, then the organization can decide what to allow, what to deny, what to limit, what data protection policies they want to apply, what threat protection capability they want to enable. That gives them a holistic view of employees accessing AI applications securely.

Most organizations would want employees to access the applications because it makes them more efficient. They're more productive. They can get things done faster. But not at the expense of security.

Every organization is building a new AI-powered application because they want to transform their business operations. They want to give new experiences to the end customer. Building AI-powered applications is not just taking an application and plugging into a model. It's much more than that. You're building an entire AI ecosystem along with that, and that's what you need to secure as well.

Michael Krigsman: From a development standpoint, how are these models built? Again, what are the implications for cybersecurity?

Anand Oswal: If you think of application evolution, the first wave was around three-tiered application models. You had a front end, you had a database tier, and the back end. Then came the cloud and gave the organizations the ability to transform their applications in the cloud using a microservices architecture.

AI-powered applications represent the third wave for application transformation. It's not just taking an application and plugging into a model. You're bringing an entire app ecosystem, AI infrastructure, AI models, tools, datasets, plugins. And all these components, Michael, they talk to each other.

Your datasets and your plugins, developers love them because they do some amazing things. They'll talk to the Internet. They'll search. They'll do things like text-to-SQL queries. But they need to talk to all the components of the infrastructure I talked about, and in many cases that requires access permissions. Now you need to protect from all the classical threats that you saw in applications and the new AI-related threats as well.

Michael Krigsman: What does this explosion mean for enterprise? What does it mean for employees?

Anand Oswal: Over the next few years, you're going to see every organization building these AI-powered applications. You have a million downloads of HuggingFace models every single day. Sixty percent of these models have only been added in the last few months.

Now imagine a developer downloads a model and it has some known vulnerabilities. Or they build an application, put it in a runtime environment, and the data is exposed to the Internet. You've got to protect these applications from what I call supply chain and configuration threats.

Then you also need to protect the applications from runtime threats. Attackers are looking at making the model do things beyond its guardrails, what we call prompt injection. So, I'm talking to the application and I'm making it do things that it's not supposed to do. Or I can do a model DoS attack.

In the early days, people said, "I can ask an LLM, 'Print Michael a billion times,'" and that can suck up CPU and memory. Of course, that's a simplistic case, but we need to protect in runtime from all these new threats, prompt injection threats, model DoS attacks, data leakage. All of this requires a very thoughtful approach, protecting your applications from supply chain risks, from configuration risks, and runtime risks.

At the same time, you also need to protect your application from all the classical risks that you've always seen for many years. That's why you need to secure your environment by design.

Michael Krigsman: Anand, what you're describing is a heterogeneous environment that works across many different vendors, many different kinds of software applications. How are you working with other leaders to protect those of us in the enterprise?

Anand Oswal: As we started building AI security posture management and AI runtime security at Palo Alto Networks, we partnered with NVIDIA to build a joint reference architecture on how organizations can go about securing these applications at runtime. It's available today on our website and on the NVIDIA website.

Michael Krigsman: Anand, earlier you mentioned the term "shadow AI," such an intriguing phrase and, of course, reminds us of shadow IT. Tell us about that.

Anand Oswal: We have a large number of employees in organizations accessing AI-powered applications whether the organization's IT department is aware of it or not. That is a shadow AI problem.

Now you have the risk of exposing your sensitive data to the AI-powered application. So, organizations want to ensure that you get all the benefits of using these AI applications, but you use ones that they allow.

It's important to have visibility into all the usage of employees. So, organizations can decide what to allow, what to deny, enable the right data protection policies and threat protection policies.

But in many cases, you may find that organizations also want to see, "Hey, look. If I'm finding a large number of employees using an application, can I learn more about this application?"

At Palo Alto Networks, we researched over 1,000 gen AI applications and have 60+ attributes: infrastructure, hosting, what model is used... data use between the models, and so on and so forth. Then you have full visibility: if employees are using a given application, you can see all of its attributes, making it easy for the network security professional, the IT professional, to decide, "Hey, now I can allow this application because I looked at all the attributes. I understand what it's used for." That is what we want to enable: enable organizations to safely use these AI-powered applications, to not impact productivity but do it securely.

Michael Krigsman: Anand, in this environment, how do you balance security and compliance (as you've been describing) against the need for employees to remain productive and not face obstacles as they do their work?

Anand Oswal: Employees are and want to use these applications to enhance their productivity. Organizations want them to use it but would like to ensure that we have full visibility into what's being used, what data is being sent to these LLMs, and look at the responses that come back.

With AI access, what we're trying to do is really ensure that employees can access these applications without compromising on security. At the same time, give the organizations full visibility into usage of AI across their enterprise.

Michael Krigsman: What happens at scale? As we know, organizations are moving from the proof of concept phase to the scale phase.

Anand Oswal: In a few years, you'll see every application will most probably be an AI-powered application. Right? So, what we're also doing is giving the right recommendations.

If you're using a bunch of these applications that are getting authenticated through your AD provider, most likely you want to be able to allow these applications with the right set of data control policies. We're going to automate these policy creations and the recommendations we can drive, so we are able to move in a very agile, very fast way.

Michael Krigsman: The AI environment is so complex when it comes to security. How should CIOs and chief information security officers navigate these changes?

Anand Oswal: They can leverage network security products and platforms to secure their environment through best-in-class security services they can enable, services that are able to prevent zero-day threats, threats before they happen. Then there is securing AI by design, which is ensuring that users can access AI applications while the organization has full visibility, control, data protection, and threat protection capabilities.

Then as developers in the organizations are building these AI-powered applications, how do you ensure that you are protecting from configuration risks, from supply chain risks, and (as applications get deployed) again all the runtime threats, the prompt injection attacks, the model DoS attacks, the data leakage attacks, and so on and so forth?

Michael Krigsman: Do security leaders need to think about AI in a different way from the mechanisms and approaches that they used for traditional security?

Anand Oswal: Yes and no. A lot of the traditional approaches, of course, they apply. The thing is that these applications are being used at a scale and adopted at a very, very fast pace.

How do you ensure that you are agile, that your policies, your recommendations that you're doing for your employees accessing, are not slowing your organization down?

As you build AI-powered applications to transform your operations, to give new experiences, richer experiences to your customers, how do you ensure that from configuration risks, from supply chain risks, and runtime threats, you're protecting it holistically?

Michael Krigsman: As a CISO looks across their organization, they see AI proliferation. They see shadow AI. Is the fundamental question security against maintaining productivity, as you were just describing?

Anand Oswal: For the AI access part, it's ensuring that employees can use the applications, be productive, without compromising security. As you think of applications the organizations are building that will transform their operations, that'll increase their topline, that will reduce their costs, you want to make sure that you're building this by design across all aspects of security—supply chain, configuration, runtime, classical threats—because these applications that you built are talking to a lot of different components, the AI models, AI infrastructure, tools, plugins, Internet, users, other applications in order to secure it holistically.

Michael Krigsman: As you talk with CIOs and CISOs, what concerns do you hear the most?

Anand Oswal: Every CIO and CISO I talk to knows this is coming at a very fast pace. Their employees are using these AI-powered applications. They want to have visibility, control for data protection and protect their sensitive data. That's very important for organizations.

Second, as they're building these new AI-powered applications, they are now realizing it's not just an application plugging into a model. It's an entire new AI app stack infrastructure coming along with it. How do you secure that holistically from supply chain, configuration, and runtime threats, and do it with ease, having policies and recommendations that are easy for them to adopt and deploy?

Michael Krigsman: Easy is a crucial part of this.

Anand Oswal: Yes, absolutely.

Michael Krigsman: What is Palo Alto Networks doing in response to this new AI environment?

Anand Oswal: Michael, a few weeks ago, we announced some amazing innovations. The innovations were around AI access security, AI security posture management, and AI runtime security.

AI access security helps our customers ensure that employees that they have can use these AI-powered applications without compromising on security. The enterprise has full visibility into who is using which AI-powered application. They have control in deciding to allow, to deny, or to limit usage of an application. They can set the right level of data protection policies to protect the sensitive data and also have threat protection, as responses from these applications come in.

Then we announced AI security posture management. As organizations build these AI-powered applications, we can secure them from configuration risks and supply chain risks. And AI runtime security goes hand-in-hand with it to protect these applications from all the threats we see: the threats we've seen from classical applications, but also new AI threats like prompt injection, model DoS attacks, and data leakage, as these applications talk to an entire ecosystem and to various components, the outside Internet, other applications, users.

Michael Krigsman: What I find so striking is the environment continues to maintain this traditional, established set of security issues. Now, overlaid on top of it, you have all of these AI-specific concerns. Of course, from a CISO or CIO perspective, this landscape is very challenging to manage and to navigate.

Anand Oswal: I agree with you, Michael. That's why most customers that I talk to are saying, "How do I simplify and how do I unify my network security? How do I move to a platform-centric approach instead of using a point product for every individual thing I need to solve?"

I want a holistic solution where I'm getting better as these things get integrated, having a single dashboard, single policies, single workflows that can help manage the entire infrastructure. Otherwise, what happens is that you have a point product solution for everything.

It increases your operational costs. You don't have enough trained resources. As you know, we have a shortage of cybersecurity professionals. And it doesn't give me better outcomes. They're looking to simplify and unify or platformize network security.

Michael Krigsman: Platformize network security, that seems like a crucial point here.

Anand Oswal: An average organization today can have 20+ tools in network security. By using this variety of point product solutions, they're not getting to better security outcomes. They're having higher operational complexity. It's hard to manage. It's hard to be agile.

They want to simplify it. They want to unify how all of this comes together. And that can only happen when you have a platform-centric approach that gives you a better security outcome that lowers your operational costs, and helps you roll out new things, new policies easily, consistently across your entire infrastructure.

Michael Krigsman: Anand, where is this all headed over the next couple of years?

Anand Oswal: The usage of AI has just begun. This is the fastest-growing technology in the history of mankind. Right? It took mobile and the Internet many years to hit a billion users. We're already at 300+ million users using these LLMs today. So, that's going to continue.

I think you're also finding that attackers, the adversaries, are also leveraging AI. How do you use precision AI (a combination of machine learning, deep learning, LLMs) to protect our customers from attacks that they have never seen before, to give them real-time security outcomes?

As you enable these services on the platform, they share intelligence and they get better, what I call the network effect of data.

Second, as organizations build AI-powered applications and employees access AI applications, how do you ensure you secure this usage of AI by design? That's going to be very crucial because this is going to happen across every single organization.

The third thing that we're doing is using AI copilots to help our customers use our platforms and products to the best of their capabilities, to simplify their operations, to make them more agile in the outcomes they want, and to get to a better security state, a better security posture.

Michael Krigsman: Anand, CIOs, CISOs, security professionals are overwhelmed. What should they do?

Anand Oswal: They want the organizations to make more money. They want them to save money, reduce their operational costs, and they want to be out of trouble.

The best way to do it, as this environment gets more and more complex and complicated, is to have a platform-centric approach. These point products and tools are very hard to manage. They don't give the right outcomes. They increase your costs, reduce your security outcomes, and are very complex.

Adopting a platform-centric approach ensures that no matter who your user is, no matter where they are, what device they're on, or where your applications reside, you're securing it consistently from all the threats the organization can see, whether you know about them or not.

Then the second one I would say is that the usage of AI is going to go up exponentially. Secure your enterprise's usage of AI by design. Ensure your employees can access AI applications without compromising on security.

Then as your organization builds these AI-powered applications to transform the experiences of your end customers and to have better business operations, ensure you're securing that by design across configuration risks, supply chain risks, and all the runtime threats the applications can face, because these applications are not classical applications. It's not just an application plugging into a model. It's an entire ecosystem that you bring together: components that talk to each other, that talk to the outside world, that have access to your most sensitive data. Securing your enterprise's usage of AI by design is very, very important.

Michael Krigsman: Simplicity, efficiency, and ease of use.

Anand Oswal: You got it, Michael.

Michael Krigsman: Anand Oswal, thank you so much for taking time to speak with us.

Anand Oswal: Michael, it's great talking to you. Thank you.

Published Date: Aug 05, 2024

Author: Michael Krigsman

Episode ID: 848