AI and Cybersecurity: A CTO Perspective

Join CXOTalk host Michael Krigsman as he interviews Michal Pechoucek, CTO of Gen and AI researcher, on the impact of AI on cybersecurity.

In this discussion, the two explore the evolving relationship between AI and cybersecurity. Topics range from AI and machine learning (ML) tools in cyber defense and penetration testing to A/B testing in cyber attacks, along with the challenges of AI and cybersecurity research and the maturity of AI-powered tools in the field. The conversation culminates in advice for Chief Information Security Officers (CISOs) and Chief Information Officers (CIOs), as well as thoughts on the future of AI in cybersecurity, including the potential for cognitive attacks and the spread of misinformation.

Michal Pechoucek leads the core technology, innovation, and R&D teams driving Gen's security engines, as well as the company's technology vision for human-centered digital safety and beyond. He is also responsible for the company's scientific research in artificial intelligence, machine learning, and cybersecurity. He previously served as CTO of Avast.

Before joining Avast, Mr. Pechoucek spent over twenty years as a professor at the Czech Technical University (CTU) in Prague, where he founded the Artificial Intelligence Center in 2001. He has authored more than 400 high-impact publications and contributed numerous innovative AI applications to research in computer science.

While pursuing his academic career, Mr. Pechoucek co-founded several technology start-ups, including cybersecurity firm Cognitive Security (acquired by Cisco in 2013), AgentFly Technologies, which specializes in controlling autonomous aircraft traffic, and Blindspot Solutions, which develops AI for industrial applications (acquired by Adastra Group in 2017). He directed the R&D Center for AI and Computer Security at Cisco Systems and worked as a strategist in the Cisco Security CTO office. He is also a venture partner with Evolution Equity Partners, a VC firm specializing in cybersecurity. He co-founded the prg.ai initiative, which aims to transform Prague into a world-class AI hub, sits on the boards of several AI startups, and supports the Czech AI ecosystem as an early investor.

Michal graduated from the University of Edinburgh and earned his PhD in artificial intelligence at CTU in Prague. He has also worked as a visiting professor at the University of Southern California, the University of Edinburgh, the State University of New York at Binghamton, and the University of Calgary.

He sits on the boards of several education-oriented not-for-profits, is a long-time sponsor of the People in Need Foundation and Memory of Nation, and sits on the board of Deník N, a Czech daily newspaper.

Transcript

Michael Krigsman: We're discussing AI and cybersecurity with one of the foremost AI researchers in the world. Michal Pechoucek is the CTO of Gen.

Michal Pechoucek: Yeah, thank you. I'm a special breed, in a way. I've been an AI computer scientist for a big part of my life. I've been an AI professor, went through all the postdocs, sabbaticals, getting grants, advising Ph.D. students, all these gizmos. I was always super excited about the positive impact AI can have on society, so I was one of those application-impact researchers.

Early applications of AI and deep learning in cybersecurity

Back in 2005, we started a couple of research projects with my Ph.D. students on the use of AI in the field of cybersecurity, at a time when AI was only coming up as an application area and people were using it for image analytics and video analytics. We just wanted to make a breakthrough in how AI and machine learning could be used in cybersecurity.

Believe me, in those days it was very difficult to sell AI and ML to cybersecurity specialists, unlike today. Today, AI drives the cybersecurity analytics in the majority of the systems we use.

But after a couple of startups and working with the VCs, I was invited and asked by Ondrej Vlcek, the former CTO of Avast who was then being promoted to CEO. He kept talking to me and slowly got me excited about how much fun it would be to use AI not for B2B cybersecurity in the enterprise sector, but to apply my creativity and experience to building AI-enabled cybersecurity for consumers.

He planted this bug, and here I am at Gen, formerly Avast, building systems and running the R&D departments, research labs, and threat labs so that we build best-in-class cybersecurity for consumers. Consumer cybersecurity, this is what I do.

Consumer cybersecurity and AI-enabled defenses

Michael Krigsman: Do you want to give us maybe a brief overview of the distinction between cybersecurity for the enterprise and cybersecurity for consumers?

Michal Pechoucek: This one is extremely exciting, especially these days when cybersecurity as a field is undergoing a major change. In the past – in the past 30 years, actually – attackers were writing malware targeted at the computer systems, networks, programs, and operating systems that people used. It was the duty of the enterprise, the industry, the business to make sure that every piece of software they sold, every piece of hardware they sold, every network out there was safe. Consumers were pretty much just users of this infrastructure that the enterprise sector had made safe.

With the recent change in the PC industry, it is no longer the operating systems, networks, computers, and devices that are the vulnerability; it's people who are the vulnerability in the supply chain. People are not only victims of cybersecurity attacks; they are also a conduit, pieces of the supply chain that not only consume an attack but also participate in deploying it.

Evolving cybersecurity threats and cognitive attacks

People are moving to the front. People, or human cognition, the way people think and consume the Internet, is becoming the vulnerability.

Because of this major change, there is now much more interest in consumer cybersecurity, where the expectation of the industry is to build technologies that cover people at the very point where they touch the Internet. Their cognition is what they see, what they read, the messages they receive, the emotions they post.

Consumer cybersecurity these days needs to be at this very edge of the Internet, which is a different edge than the one that was exciting 10 or 15 years ago. I'm not only talking about what I think; we have data.

Gen, as a company, is a technology company that is home to a number of technology brands in the field of cybersecurity. I originally come from Avast, which used to be the biggest European cybersecurity brand and merged last year with NortonLifeLock, which was the biggest consumer cybersecurity brand.

At Avast, because of our freemium offering, we were the first and the biggest freemium player in consumer cybersecurity. We see close to half a billion endpoints, so we see a huge part of the Internet.

This visibility gives us data showing that currently only 30% of the attacks we see on the Internet are caused by classical malware targeting devices and network infrastructure, while 70% of what we see are attacks on human cognition, such as phishing and scams.

AI and deep learning: Defending against automated attacks

Michael Krigsman: Given that, where does AI come into play to help prevent this aspect of the cybersecurity supply chain, as you described it?

Michal Pechoucek: We see attackers optimizing their costs and maximizing the effect of deploying their attacks. They were using different automation techniques and methodologies, including artificial intelligence. Different kinds: not only machine learning but also automated planning, automated reasoning, different types of AI for making the attacks as cheap and as large-scale deployable as possible.

There was lots of AI under the hood for malware writing, malware attacks, and malware deployment. To be able to respond and protect consumers efficiently, cybersecurity firms needed to deploy high-grade AI.

As soon as you stop deploying the right level of automation against automated attacks, you start losing the warfare. There is no way human analysts can defend against the automated, high-scale attacks coming from the attackers.

For this reason, AI is very well suited for use on the side of the defense. But there have been a number of problems that we've experienced as defenders.

One of those was that for each different type of attack we see on the Internet, we needed to build a new AI detector, a new AI classifier. We, the cybersecurity experts, were building the classifiers by designing features and doing all sorts of training. Soon, we started to see that this doesn't scale, because we need a lot of programmers and subject-matter experts who can help design those algorithms, those machine learning tools.
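To make the feature-engineering approach concrete, here is a minimal sketch with hypothetical features and synthetic data; it illustrates the workflow Pechoucek describes, not Avast's actual pipeline. An analyst hand-designs each feature, and a detector built this way must be rebuilt for each new attack type.

```python
# Minimal sketch of a feature-engineered malware detector.
# Features and data are hypothetical illustrations; a subject-matter
# expert would hand-design each feature, which is what fails to scale.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(sample: dict) -> list:
    """Hand-designed features an analyst might choose for one malware family."""
    return [
        float(sample["file_size"]),            # packed malware is often small
        sample["entropy"],                     # high entropy suggests packing
        float(sample["num_imports"]),          # very few imports is suspicious
        float(sample["has_valid_signature"]),
    ]

# Synthetic stand-in data: 200 labeled samples (1 = malicious).
rng = np.random.default_rng(0)
samples = [
    {
        "file_size": int(rng.integers(10_000, 5_000_000)),
        "entropy": float(rng.uniform(3.0, 8.0)),
        "num_imports": int(rng.integers(0, 300)),
        "has_valid_signature": bool(rng.random() > 0.5),
    }
    for _ in range(200)
]
labels = rng.integers(0, 2, size=200)

X = np.array([extract_features(s) for s in samples])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The scaling problem lives in `extract_features`: each new attack family may demand a new set of expert-chosen features, which is what motivates the move to deep learning described next.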

Signature-based AI and deep learning to scale malware detection

Michael Krigsman: You're talking now about signature-based.

Michal Pechoucek: Yes, I'm talking about the signature-based AI malware detection. Exactly.

Michael Krigsman: Okay. Mm-hmm.

Michal Pechoucek: To cope with this scalability problem, an explosion in the types of malware we are seeing online, we needed to deploy deep learning methods that free the programmers and cybersecurity analysts from designing the features.

There is one type of algorithm which, if well designed, can train on different types of data, large data, and the resulting classifier or detector exhibits a more general detection capability and can be used across the pipeline of a cybersecurity company. At Avast, we built these unique, general, AI-based methods that help protect users very well.

You can imagine this as similar to the way AI is now effective at building large language models. We built similar deep learning methods that were trained not on natural language but on JSON files.

The Internet is written in JSON; 70% of all the files on the Internet are JSON files, structured but of variable length. Being able to train AI on any type of JSON file was the complexity we were trying to resolve, and we resolved it successfully.
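As a rough illustration of what training on arbitrary JSON could look like, here is a minimal sketch assuming byte-level tokenization of serialized JSON, padded to a fixed length; Gen/Avast's actual architecture and training data are not public.

```python
# Sketch: a classifier over raw JSON of variable length, with no
# hand-designed features. Illustration only, not Gen/Avast's model.
import json
import torch
import torch.nn as nn

MAX_LEN = 512  # truncate/pad every serialized JSON document to this length

def encode(doc: dict) -> torch.Tensor:
    """Serialize a JSON document to bytes, then pad/truncate to MAX_LEN."""
    raw = json.dumps(doc).encode("utf-8")[:MAX_LEN]
    padded = raw + b"\x00" * (MAX_LEN - len(raw))
    return torch.tensor(list(padded), dtype=torch.long)

class JsonClassifier(nn.Module):
    """Byte-level embedding + convolution + pooling over positions."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 32)          # one vector per byte value
        self.conv = nn.Conv1d(32, 64, kernel_size=5, padding=2)
        self.head = nn.Linear(64, 2)                # benign vs. malicious

    def forward(self, x):
        h = self.embed(x).transpose(1, 2)           # (batch, 32, MAX_LEN)
        h = torch.relu(self.conv(h)).mean(dim=2)    # pool over positions
        return self.head(h)

# Tiny illustrative batch: two hypothetical JSON documents.
batch = torch.stack([
    encode({"url": "http://example.com", "script": "eval(...)"}),
    encode({"url": "https://example.org", "script": ""}),
])
model = JsonClassifier()
print(model(batch).shape)  # torch.Size([2, 2])
```

The point of the sketch is that no hand-designed features appear anywhere: the network consumes raw bytes of any JSON document, however its fields are structured.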

The challenge of building AI for diverse cybersecurity threats

One challenge in cybersecurity is coming up with AI generic enough to be effective across different types of current and future threats. The other huge challenge we had in cybersecurity is explainability.

Cybersecurity experts are like medical doctors. They know best. They don't need any AI to help them classify malware, right?

Establishing understanding between AI researchers and cybersecurity experts is a non-trivial endeavor, which is why there is a need for lots of explainability: the ability to explain AI to the malware analysts so that they accept it. We were building exciting, novel tools for explainability in cybersecurity in order to accelerate the deployment of AI in the field.

Importance of AI explainability in cybersecurity

Michael Krigsman: Are the primary challenges here lying with the ability to accumulate sufficient data for your models or in building the broader general algorithms that will operate effectively on those models?

Michal Pechoucek: It's more the generality. We have access to fantastic data on the Internet, so data is not the problem. The detectors are fine-tuned to detect particular types of malware campaigns, and generalizing beyond that has been the challenge.

As in other applications of AI, there has been a push to build one algorithm capable of doing more things. In game playing, for example, designers build algorithms that can play Go and shogi at the same time. This is the generality aspect in game playing.

A similar kind of generality is needed in building AI for cybersecurity. Today, cybersecurity is different because the vulnerability is not the operating systems but human cognition, so the attacks are different.

The attackers are writing something else. They are not writing JSON files. They are not compiling assembler code.

What the attackers are doing is writing text in natural language that is meant to be deceptive and believable, so that users are willing to open attachments, willing to share their financial data, willing to click on the link.

It's a totally different type of warfare. There, artificial intelligence is much more successful and much more impactful on the attacking side, because to craft and deploy a successful cognitive attack, you need three things.

Number one, you need lots of data about the victim. You need to collect data about where the victim goes on the Internet, what they like, what they did, what they show, what they read. Through this, the algorithms can create more personalized communication, a more personalized cognitive attack.

The second piece you need is to steal somebody's credentials. Identity theft increases the effectiveness of a cognitive attack. If you receive the cognitive attack from a friend's email address, it's much more likely that you will click or open the attachment.

The third one is high-performance AI capable of producing text that is believable, text that is easy to accept as a legitimate message and act upon accordingly. The current high-performance AI, embodied by the large language models, is the ideal tool for an explosion of cognitive attacks.

This is what I'm worried about. This is why I wake up every day and go to work because I want to contribute to the protection against AI-enabled cognitive attacks.

AI-enabled cognitive attacks, identity theft, and the future of cybersecurity

Michael Krigsman: I have been the subject of very targeted phishing attacks. None has ever been successful (to my knowledge), but bad actors have impersonated people I know in texts and in emails. How can AI and the tools you're developing protect me? These things are so believable, and I'm so used to being attacked that I research every one, and I know how to research them manually. But how can AI help with this?

Michal Pechoucek: I'm a contrarian thinker. I think differently than others.

In the past, consumer cybersecurity adopted this concept of cybersecurity under the hood. Me, as a user, I do not need to understand. I don't need to see. I just buy this product and I'm covered.

It's gone. This is history. Now we live in different times. Currently, we need to build engaging cybersecurity tools, tools that will be there for people, will be there with people, and will be helping people to be more resilient against cognitive attacks.

Assuming that I install this thing and, as a result, will never be attacked by a phish or a scam is simply a false assumption. We, as defenders, need to change perspective and try to build companion tools that will be there with people, gamifying cybersecurity for them and rewarding them with more transparency into what is going on when they read and receive a message.

AI to detect phishing attacks: Large language models and classification

When you ask how AI can help: the large language models now used for creating text can also be used successfully to detect text that is a scam, fraud, extortion, or malicious by other means. The capability to detect and classify text with malicious intent is currently enabled by large language models and by AI, which we investigate and study at Gen.
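As an illustration of the idea, one way to turn a pretrained language model into a scam-text detector is to fine-tune it for binary classification. The checkpoint name and the two-example dataset below are placeholder assumptions, not Gen's production setup.

```python
# Sketch: fine-tuning a small transformer to classify message text as
# scam or benign. Model name and toy dataset are illustrative only.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # 0 = benign, 1 = scam

texts = [
    "Your package is held at customs. Pay the fee here: hxxp://...",  # scam
    "Lunch tomorrow at noon still works for me.",                     # benign
]
labels = [1, 0]
enc = tok(texts, truncation=True, padding=True, return_tensors="pt")

class ScamDataset(torch.utils.data.Dataset):
    """Wraps the tokenized toy examples for the Trainer API."""
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ScamDataset(),
)
trainer.train()
```

In practice the training set would be large and carefully curated; the sketch only shows the shape of the pipeline: text in, scam/benign verdict out.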

Michael Krigsman: You're looking for the patterns among very large numbers of phishing attacks. And obviously, based on your work and your research, those patterns are there if you can only find them quickly enough, I assume.

Michal Pechoucek: Yeah. We are lucky because we don't need to do this work ourselves. The AI does it for us. We only need to provide good-quality training data and then let the deep neural network learn its classification.

People are always asking me, "Michal, why can't I just use ChatGPT to do this for me, just ask the question in the chat window?" My response is always that cybersecurity is a much more serious business.

We carry a lot of responsibility to our users. Whenever we help a user, we need to be crystal clear about what we are telling them. If we are advising not to click, we just need to be certain. The current setup of the large language models, which are generic, generally designed, and generally trained, just does not give you that certainty.

Our added value in the AI pipeline is two-fold. First, we just want to make sure that the chatbot, the large language model we query, is not giving some nonsense answer. Second, we do prompting: we prompt the language model with a small amount of special sample data that helps it perform a classification that is highly contextual to the situation and the way the user receives the attack.
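A minimal sketch of that prompting step might look like the following; the few-shot examples, model name, and verdict format are illustrative assumptions rather than Gen's pipeline.

```python
# Sketch of few-shot prompting: prime the model with labeled samples and
# the user's context before asking for a constrained verdict.
# Examples and model name are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_EXAMPLES = [
    ("You won a prize! Verify your bank details at hxxp://...", "SCAM"),
    ("Hi Mom, my flight lands at 6pm, see you then.", "LEGITIMATE"),
    ("Your account is locked. Reply with your password to unlock.", "SCAM"),
]

def classify(message: str, context: str) -> str:
    """Ask for a SCAM/LEGITIMATE verdict, primed with labeled samples."""
    lines = ["Classify each message as SCAM or LEGITIMATE.\n"]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {text}\nLabel: {label}\n")
    lines.append(f"Context: {context}")
    lines.append(f"Message: {message}\nLabel:")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "\n".join(lines)}],
        temperature=0,  # deterministic output, since certainty matters here
    )
    return resp.choices[0].message.content.strip()

print(classify("Your parcel fee is unpaid, click hxxp://...",
               context="received by SMS from an unknown number"))
```

The design choice Pechoucek points at is the constrained, context-primed query: instead of a free-form chat answer, the model is boxed into a verdict the product can stand behind.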

Importance of AI in Cybersecurity for Boards of Directors

Michael Krigsman: We have a question from Arsalan Khan on Twitter. He wants to know, "If AI is being used for cybersecurity, does that mean that boards of directors don't need to worry about cybersecurity?" In other words, is the AI just handling this problem and the problem is going to go away?

Michal Pechoucek: It's very similar to my answer about the threat in consumer cybersecurity. We are in a world where ordinary consumers need to worry, and the same applies to board members. You can get insurance, but it doesn't mean the insurance will always work.

The difference with boards is that they have responsibility for a business, so their budgets for investing in cybersecurity are of a different order of magnitude. For ordinary consumers who pay their daily bills, their Spotify, their Netflix, and whatever else they need on the Internet, paying extra for cybersecurity isn't a material part of the bill. I would say the difference is in the investment, but nobody, neither ordinary users nor board members, is currently relieved of the responsibility to pay attention and make the right decisions.

Michael Krigsman: These are tough problems and, I think, for board members, in some cases even more difficult because they don't have the technology background necessary to really understand this. And so they have to rely on a group of technology experts without the transparency and explainability that you described earlier.

Michal Pechoucek: Yes. Yes. I would say that the types of attacks from bad actors are currently changing faster than ever. So, I think there is a huge expectation for cybersecurity experts to really be up to speed: to understand what the new, dangerous threats are, and what new technologies people are using for attacking as much as for protecting. There is a huge burden on the experts to really be there for people who need their advice.

Bridging the gap between cybersecurity experts and boards

Michael Krigsman: Many technology experts are not sufficiently comfortable communicating with the boards, which presents a problem because the security officer, for example, wants to explain but does not know how to present it in terms that are straightforward enough for the board to understand. That's a gap that causes problems in some cases.

Michal Pechoucek: I agree with you, Michael, and there is another problem, which is the current economic environment. Budgets are stretched for everybody.

I guess, in the past, big corporations, even if they didn't understand, were kind of okay with paying extra for an extra tool that the cybersecurity guy requested. These days, I think it's going to be different.

There'll be budget fights over everything, including cybersecurity. This presents a growth opportunity for cybersecurity personnel and CISOs in big companies: to explain better to the board what it is they are buying and why they need to invest. Times are changing for everybody.

Michael Krigsman: It is always extraordinary to me the number of companies, even security companies – I mean look at LastPass – that have breaches and, after the breach, they always say, "Oh, we're going to invest more." Well, why didn't they do this before?

Michal Pechoucek: The fact that there are these centers on the Internet that store users' private data, that are worth breaching, and that bad actors are interested in attacking, I think it's the wrong design of the Internet. We should have fewer and fewer such places, and more and more private data should reside with users on their end. It would then be much more difficult to pull off a large-scale breach that steals hundreds of thousands of IDs and passwords.

I really believe the Internet needs to undergo a change in which users have much more opportunity to take responsibility for their own private data, and those who need to validate and verify can check, in a privacy-preserving manner, what it is that I keep in my wallet. My wallet needs to be secure, modern, good quality, and backed by good compute, so that whoever needs to check me checks my wallet and doesn't need to keep a record of my personal data in a database, which creates danger for the vendor holding my data.

Personal data protection: Tips for better cybersecurity hygiene

Michael Krigsman: Since you brought this topic up, can you briefly give us advice? The listeners for CXOTalk are very smart, very bright. As individuals, what can we do to protect our data, along the lines of what you were saying?

Michal Pechoucek: Very briefly: practice good cybersecurity hygiene and work with a password manager. Do not store or send passwords by text message. Use good cybersecurity tools. Understand where you are sharing private data, for what purpose, and whether it is really necessary.

Ask vendors to delete your data because, in many countries, there is legislation: if a vendor is asked to delete the data, they are obliged to.

Be cognizant of tracking. We are tracked. Without tracking, there is no personalized experience on the Internet, so we kind of need tracking. But be focused on when and why.

Delete your cookies. Do not agree with every single cookie popup that is bothering you.

This is the basic advice. But the truth is, storage of private data is very much connected with algorithmic manipulation. The more data vendors know about me, the more personalized the digital experience I receive. But a more personalized digital experience means the vendors are restricting my choices.

Algorithm manipulation and privacy concerns

When I search for an interesting article, if the Internet knows all about me, it tries to second-guess what I want and serves the content it assumes I need. So, there is a piece of manipulation.

I truly believe that people need better. People need better tools and technologies for keeping their privacy in check, but also for understanding how the recommender algorithms that drive the Internet work: how they work for me, in this situation and in that one.

I need to understand when YouTube offers me those tiles, why are they there? What is it that I did that I see this offer? What is it that I watch? What is it that I didn't watch? What is it that I posted about?

Why is the recommender acting the way it's acting? We don't know. There's no transparency. We should know.

We are accepting the recommendations from the Internet as they are, because the great technology is on the other side. The great technology is on the side of the vendors, the Internet companies. The amount of technology that is with users, in their wallets, in their browsers, in their thoughts, is actually quite limited.

Thirty years ago, there were the first bad actors and the first attackers. As a result, a big, massive cybersecurity industry became a reality.

This industry started to protect users and, as a result, users are safe. I believe something similar must happen in the fields of privacy and algorithmic manipulation, so that there is more covering people's backs when it comes to algorithmic manipulation, misinformation, and privacy handling.

AI and ML tools in cyber defense and penetration testing

Michael Krigsman: We have a couple of questions now that are popping up on Twitter, so why don't we jump there? First is from Chris Peterson who says, "AI and ML tools sound great for cyber defense. Is there analogous research in penetration testing and red team tools or better traps and honeypots for luring in threat actors?"

Michal Pechoucek: In my threat labs, we have done research and were able to demonstrate that using large language models to generate malware is possible; you can generate malware with ChatGPT.

The truth is, is it really necessary? If I look back at how malware campaigns are created, writing a piece of malware is only a small component. Scripts and algorithms for automated malware writing have been available in the malware community for the last 15 years.

Is the contribution of ChatGPT-generated malware such a game-changer for bad actors? Honestly, I don't think so. I think the added value is only limited.

However, when we talk about attacks that take the form of a cognitive attack, of scam and phishing manipulation, the story is totally different. There, the role and added value of large language models and modern AI for bad actors is just massive.

A/B testing in cyber attacks

The rate at which they can create believable, unique content is just amazing. And not only the quality of the content but also the capability to test, to A/B test, the effectiveness of the cognitive attack.

Currently, there are attackers writing phishing emails. They collect some data from the Internet to learn how effective they have been. They take some learnings. They adapt the strategy. They try something else. Through a method similar to reinforcement learning, together with large language model-based text generation, this cycle can be automated.

This is my worry. If bad actors start to really automate, generating unique, dangerous content while automatically learning how effective it has been and adapting the text generation, that to me is very dangerous. This is an area where we as defenders need to pay close attention.

Michael Krigsman: It's so interesting that you describe this A/B testing because, given who we interview at CXOTalk, we're a target, and I've seen it among the attacks aimed at me. I get these requests that are very strange, requests from what appear to be accomplished women on the Internet. And I've noticed something – and I'm married; I have no interest – because they're obviously fake and I research everything.

I notice that the attackers have been changing specific variables. They'll change, for example, a little bit about the background while all the other variables stay the same, the characteristics of the proposed connection. They'll change ethnicity. They'll change the tone a little bit but keep everything else the same.

I've suspected very strongly that this is A/B testing going on.

Michal Pechoucek: Yes, yes. This A/B testing can be automated. If there is a tracker in your email, a tracker in your browser, helping the attacker report on and understand how effective the attack has been, it's going to be automated.

AI recommendations on data collection

Michael Krigsman: We have another question now from Twitter, and this is again from Arsalan Khan who says, "Can AI make recommendations about not collecting certain kinds of data since it's prone to be attacked?" I think he's referring to recommendations to individuals about what kind of tracking to allow, for example, or not, based on recommendations from an AI.

Michal Pechoucek: This is a big challenge for one of my teams. We are trying to understand to what extent AI can help people make the right privacy decisions.

We build good-quality AI that helps the user assess, and explains, the privacy impact of using one app or another, or visiting one webpage or another. We built classifiers and detectors that give users some information about that.

The truth is that users are not currently ready to use this information to change their privacy behavior. We see that users, by and large, make a binary decision. Okay? So, "I really don't care; I use Google Search," or "I care and I use DuckDuckGo." Right?

There is actually nothing in between. I think technologists and technology firms need to build tools that allow users to set up their privacy approach in a very fine-grained way, in context: based on what they do, what they search for, and where they are physically. Let the user fine-tune the preferences, then be there with the user and adapt to their privacy preferences based on models of past behavior.

I actually think this is a missing piece that users need. It's an optimization problem: get the best personalized Internet experience on one hand and protect your privacy on the other, because these are coupled variables.
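A toy sketch of this kind of preference learning, with hypothetical context features and decisions, might look like this: learn from past allow/deny choices and suggest a default in a new context, leaving the final call to the user.

```python
# Sketch: learn a user's contextual privacy preferences from past
# allow/deny decisions. Feature names and data are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Past decisions: context -> 1 (user allowed tracking) / 0 (user denied).
history = [
    ({"site_category": "news",     "data_type": "cookies",  "on_home_wifi": True},  1),
    ({"site_category": "shopping", "data_type": "location", "on_home_wifi": True},  0),
    ({"site_category": "social",   "data_type": "contacts", "on_home_wifi": False}, 0),
    ({"site_category": "news",     "data_type": "cookies",  "on_home_wifi": False}, 1),
]

vec = DictVectorizer()
X = vec.fit_transform([ctx for ctx, _ in history])
y = [decision for _, decision in history]
model = LogisticRegression().fit(X, y)

# New situation: recommend a default, but leave the final call to the user.
new_ctx = {"site_category": "shopping", "data_type": "cookies", "on_home_wifi": True}
p_allow = model.predict_proba(vec.transform([new_ctx]))[0, 1]
print(f"suggested default: {'allow' if p_allow > 0.5 else 'deny'} "
      f"(confidence {max(p_allow, 1 - p_allow):.0%})")
```

The design choice matches what Pechoucek describes: the tool adapts to the individual's own past behavior instead of forcing the all-or-nothing decision users make today.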

Challenges in AI and cybersecurity research

Michael Krigsman: Michal, can you describe to us some of the most significant challenges, especially the technology challenges, that you face in your work and your research right now?

Michal Pechoucek: One of the really fascinating research challenges is explainability in the field of cybersecurity. For the last ten years, many AI scientists have been working on explainability, trying to deliver good explanations of AI verdicts.

In cybersecurity, this is fascinating because if you dig deep into the mind of threat researchers, threat analysts, there is a combination of deep knowledge, great intellect, fantastic reasoning, and intuition. The intuition is the piece which is very difficult to optimize.

AI scientists just want to do statistics. They want to optimize. Give me an optimization function and I'll train a classifier. This is how AI people work.

To be able to marry this statistical approach to the world with the intuition that is in cybersecurity, that's fascinating. I actually think it's transferable. It's transferable to other domains.

I would say medical doctors are very similar. When AI starts taking over in healthcare, they will need to resolve similar challenges: explaining AI in such a way that professionals are not replaced by AI but have their impact multiplied by the appropriate use of high-performance AI.

I'm lucky to be in a field where I have first-hand technical experience. This is definitely one of the challenges.

Then secondly, I'm really excited to be present at this explosion of the large language models and to see, for the first time in my lifetime, AI becoming a true consumer proposition. Until now, AI was really a B2B proposition: enterprises, firms, and corporations were using good-quality AI to deliver products to users. But this is the first time users are exposed to AI directly.

I would say one of the big challenges for me and my teams is to understand how this new AI threatens people. What are the dangers and ethical limitations of this new-age AI that can affect our users? To be there thinking about those dangers, and to try to build technology that protects users against not just current but future AI dangers, is very rewarding.

Michael Krigsman: We have a question from LinkedIn. This is from Nasheen Liu. She runs an organization that works with chief information officers. She says, "There are quite a few AI-powered cybersecurity tools out there. What's your take on the level of maturity of these tools? What's the confidence level of organizations as far as adoption goes, especially since the popularity of ChatGPT?"

Michal Pechoucek: Check the explainability. There are tools people trust because they've been well explained and users have confidence in them. Then there are the new kids on the block that have a hard time proving their value.

I'm an optimist. There are lots of great AI tools that cybersecurity experts are using. It's nontrivial to determine what is working for your use case and what's not.

Advice for Chief Information Security Officers (CISO) and Chief Information Officers (CIO)

Michael Krigsman: What advice do you have for enterprise chief information security officers? I know it's a really hard question to ask. Just broad advice. But you have such an overview.

Michal Pechoucek: Don't play catch-up; think about the future. I know that CISOs are often busy solving crises and extinguishing fires. This is pretty much their work.

Try to spend at least 50% of your time thinking about the future, about what may come as the technology becomes better developed and more powerful. Do not, as cybersecurity experts, get stuck in the current situation; let us find time to focus on the future, because this is the only path to resilience.

Michael Krigsman: Any advice for chief information officers?

Michal Pechoucek: Chief information officers need to partner well with CISOs and CTOs, keep good company, and take advantage of what the cybersecurity experts can do for them and how the technologists can help drive their future decisions. And, you know, I don't envy them. Just as investment in security will be under scrutiny, we will see investments in IT and cloud spend undergo big scrutiny in 2023, unfortunately.

Future of AI in cybersecurity: cognitive attacks and misinformation

Michael Krigsman: You're working on certain problems and challenges in cybersecurity and AI. Where do you see the results of your work ending up over the next couple of years?

Michal Pechoucek: The way I see the evolution of scam and phish, the cognitive attacks on the Internet, is from short cognitive attacks to long-term, persistent cognitive attacks; from manipulation to misinformation.

Actually, just as cybersecurity is currently shifting from attacks on people's devices to attacks on people's cognition, I think in the future it will shift from immediate cognitive attacks, which are just one click, towards manipulation. How do I change people's minds so that they are less resilient and more susceptible to an attack? How do I move people between echo chambers so that they become more vulnerable, so that they have more vulnerabilities for me as a bad actor?

I think this combination of scam and phishing attacks with misinformation and manipulation will be a big topic in AI cybersecurity in the years to come.

Michael Krigsman: You're saying that the shift takes place from AI in technology attacks (attacking a firewall, for example) to cognitive attacks such as phishing, and then to broader, longer-term attacks that manifest as misinformation, disinformation, and psychological manipulation.

Michal Pechoucek: Yes. Yes, exactly. This is where I see the job of AI cybersecurity experts becoming even more exciting; it will require a wider scope of knowledge compared to the cybersecurity experts of the past.

Michael Krigsman: It's fascinating because essentially what you're saying is that the field of misinformation and disinformation, which right now we look at as a social media problem, in fact is a cybersecurity problem.

Michal Pechoucek: It is. It is a cybersecurity problem. And it's relevant today because the most effective weapon in this future cybersecurity world is making people more resilient: helping people to be mentally fit, to be inquisitive, to be excited about checking sources, and to be resilient against future attacks. That is why those who expect the technology firms to build a manipulation firewall, something I put in my browser that doesn't show any fake news, have the wrong approach.

For one, it cannot be done. Second, it reduces my mental fitness, which is going to need to be there. I still want to be independent, autonomous, and in control of my life. Part of that is my capability to distinguish which news I believe and which news I don't. I don't want this right to be taken away from me.

Michael Krigsman: It seems like a particularly difficult challenge because what you're asking is for silos of researchers to converge. Right now you have cybersecurity researchers, and then you have folks who look at news, essentially media researchers. These are different groups of people.

Michal Pechoucek: Yes, they are. But they'll be soon sharing the same objective to make the Internet a safer place for everybody.

Michael Krigsman: Well, on that, I'm afraid we are out of time. Michal, I just want to say a huge thank you for taking the time to be with us today.

Michal Pechoucek: Thank you for having me, Michael.

Michael Krigsman: Thank you so much to Michal Pechoucek, the CTO of Gen. Now, before you go, please subscribe to our YouTube channel, hit the subscribe button at the top of our website so we can send you our great newsletter.

Published Date: Mar 10, 2023

Author: Michael Krigsman

Episode ID: 781