How do hackers think and what can you do to protect your organization? Author and CEO, Stuart McClure, explains it all on this episode of CXOTalk. 

Stuart McClure leads Cylance as CEO and visionary for the first math-based approach to threat detection, prevention, and response. Stuart is the creator and founding author of one of the most successful security books of all time, Hacking Exposed. As one of the industry’s leading authorities, he is widely recognized for his extensive and in-depth knowledge of security. Prior to Cylance, Stuart was EVP, Global CTO and General Manager at McAfee/Intel Security.

For more information on the mathematics of machine learning and security see these two videos:

Episode Outline

How has cybersecurity changed?

What was the nature of threats historically?
How is it different today?
What does this evolution mean for security today?

Interesting or unexpected attacks

Give us a few examples of strange or unexpected attacks?
What happened?
Why did it happen?
How could it have been prevented? Who should have prevented it?

Fear and loathing

What keeps you up at night?
Why does this specifically make you afraid?
How serious is the threat?
What actions do you personally take to stay safe?

Internet of things, industrial internet of things, and critical infrastructure

What are we talking about here?
To what extent is critical infrastructure at risk?
Has there been destruction so far?
What should “we” do about it? And, who is “we” – corporations, governments, citizens?

Mathematical security and machine learning

What is mathematical security? Explain in simple terms for a sophisticated audience.
Why do we need this?
What is the relationship to machine learning and AI?
How effective are these techniques?
What are the limitations of these techniques?

The future of security

What will cybersecurity look like in the next 4-5 years? What will cybersecurity threats and vulnerabilities look like?
How will AI techniques employed by cybersecurity bad actors change the blocking, detection, and response? In other words, what will mitigation look like?
Are we creating an AI arms race in cybersecurity?

Staying safe - corporate

What should companies do to manage security?
Where does the role of Chief Information Security Officer fit?
What advice do you have for boards of directors?

Staying safe - individuals

What should individuals do today to manage their own security?
What should I tell my mother, who can barely use a computer, on how to stay safe online?
Is defending against cyberattacks as hopeless as it seems?

Transcript

Michael Krigsman: We are speaking now with one of the top cybersecurity experts on the planet. I'm Michael Krigsman. I'm an industry analyst and the host of CXOTalk. Before we start, there's a subscribe form, and you can subscribe to our newsletter. You can also subscribe on YouTube. Please, please do that.

We're speaking with a guy who is the guy - the guy. He wrote the book on cybersecurity. His name is Stuart McClure. He is the CEO of Cylance. It's not his first security startup either.

Stuart, how are you? Thank you so much for being here on CXOTalk.

Stuart McClure: I'm great. I'm excited to be here. Thanks so much for having me. Today, I'm CEO and Co-Founder of a company called Cylance. We've been around for six and a half years. What we do is we prevent cyberattacks at the endpoint, at the servers, in the cloud, and anywhere that they go.

Evolution of cybersecurity

Michael Krigsman: Stuart, you've been doing this for a long time. Let's very briefly look back at cybersecurity historically. What has changed?

Stuart McClure: Well, there have been a few things that have changed. First of all, in the early days, cyberattacks were very, very simple because they were really largely kept to just a few folks around the world. And so, you know, you had to deal with maybe one virus every couple of months or one a year even. So, it was quite easy for us humans that are on the defense side to take a look at something that came through, an attack that was successful, figure out how it worked, then create a detection signature for it, and then get that signature out to everybody else that has yet to be victimized by this attack. That process was really, really quite simple.

Today, we have almost half a million different attacks that come out every single day that are brand new to the world. The sheer volume is one of the big changes that has occurred. The second is, I wouldn't say sophistication but, certainly, the advancing and the sort of collection of attacks that can be put together today can be quite complex. I think the complexity of the attacks is definitely heightened and increased, let's say, versus even ten years ago.

That's what we do. We make sure we maintain a focus on all of those types of attacks, brand new, old, you name it, and we train our computers to learn from all of that.

Michael Krigsman: You say that there are half a million attacks that come out every day. What are some of the more interesting attacks?

Stuart McClure: It's funny. I've got to tell you. You could write probably a fictional book every single day, a Grisham novel just on cyberattacks if you subscribe to enough mailing lists, watch enough blogs and read enough technical papers that come out because pretty much every day something comes out.

Today, in preparation for the talk here, I thought, oh, I'll take a look at today's list of running announcements. Sure enough, there was a really cool one. It's a banking trojan on your mobile device, so on Android.

This is simple. If you have an Android device and you are unfortunate enough to install one of a particular series of applications (they tend to be games or tools that are fraudulently posed as legitimate), you install it because you like a new calculator, chess game, or whatever it might be. In the background, it is actually sniffing and listening to all of your passcodes and all of your two-factor authentication for your banking platform. Now, that part is not new. That's been around forever and ever. We see that all the time.

What was new about it is that, to sense that it's a real phone and not an analyst doing reverse engineering on it, it senses the motion in your phone. If it doesn't sense any motion, then it knows it's in a sandbox and it will not run. Whereas, if it senses motion, it knows it's actually in your pocket, it's a legitimate phone, and it needs to listen. Using the phone's different radios and sensors to do certain things might not be brand new, but that particular technique of sensing motion is rather new.

Michael Krigsman: How do we address that? What do we do about that?

Stuart McClure: If you think about it permutationally, there are probably eight different ways. But at the core, there are really two ways. You either prevent it or you say, "Well, we can't ever prevent it, so let's just detect and respond faster."

I've always been a big, big believer that you can prevent it. So, how do we prevent this kind of a thing? Well, we prevent that app from either A) getting into the app store legitimately, B) you actually downloading and installing it, or C) before it runs, somebody like a Cylance, a BlackBerry, or something like that looking at that actual app, knowing it's objectively bad, and blocking it before it runs. That's the truest preventative way to do it.

Now, if you don't believe prevention is possible, which many do, then you want to detect and respond. You'd want to be able to allow the application to run and watch its behavior, watch its activity. Then marry that to what we know is bad activity or bad behavior. Then call that out and potentially alert and block it down the road. The downside of a detect and respond model is it's after the fact, so all of your keystrokes could be long since gone and your two-factor passcodes long since gone to the adversary and used by them to gain access to your bank account.

Adversarial AI

Michael Krigsman: What about the adversary then being aware that you're running these algorithms against the attack and thus changing the nature of the attack in various ways, obviously to try to circumvent what you're doing?

Stuart McClure: Yeah, so that's what we call adversarial AI or offensive AI, sometimes it's called. I just call it AI versus AI. We have yet to see an adversary of any sophistication leveraging AI in the wild today to defeat AI.

We know that that's coming. We certainly have anticipated it for many, many years. We actually have a team dedicated to adversarial AI research to build in a sort of preparation for that type of technique going after us.

It will happen. We know that. But for now, we haven't seen it and we are very, very ready for that and have anticipated that for quite some time.

The way that we do that is we actually try to break our own models, our own AI. By trying to break our own AI, we're actually anticipating how the adversary would try to break us as well. We do this in real time in the cloud in thousands of computers inside of Amazon AWS. By doing that, we can actually predict and prevent new forms of AI adversarial attacks.
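The break-your-own-model loop Stuart describes can be sketched in miniature. This is an illustrative toy only, not Cylance's actual system: the two features, the data, the 0.05 step size, and the use of scikit-learn's logistic regression are all invented for the example.

```python
# Toy version of "break your own model": search for a small feature change
# that flips a malicious verdict, then fold the evasion back into training.
# All data and the 0.05 step size are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])                 # 1 = malicious, 0 = benign
clf = LogisticRegression().fit(X, y)

sample = np.array([0.9, 0.8])              # convicted as malicious
evasion = sample.copy()
# Greedy search: nudge both features toward "benign" until the verdict flips.
while clf.predict([evasion])[0] == 1 and evasion.min() > 0:
    evasion = evasion - 0.05

print("evasion found at:", evasion)        # this point fools the original model

# Feed the discovered evasion back in as a labeled malicious sample and
# retrain: the feedback loop described above, in miniature.
X_hardened = np.vstack([X, evasion])
y_hardened = np.append(y, 1)
hardened = LogisticRegression().fit(X_hardened, y_hardened)
```

A real adversarial-AI team would search a much higher-dimensional feature space with gradient-based or black-box methods, but the shape of the loop (find an evasion, label it, relearn) is the same.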

Michael Krigsman: Is adversarial AI one of the key things that keeps you up at night and that we should be worried about?

Stuart McClure: In the next three to five years, probably, I believe we will absolutely start to see this in the real world being very successful at bypassing other technologies, I'm hoping not ours, but possibly ours, and gaining a foothold. Right now, we are years and years ahead of the adversary because of this technique. I would say we're at least three years ahead.

Now, that window might shrink. When it does, then we will have a challenge. But again, we're spending more research, more time, more effort to make sure we understand all of the different adversarial techniques; building that into our ever-improving learning math models will ultimately keep us ahead of the bad guys.

Michael Krigsman: Why are you, or how are you, able to stay ahead of them? Is that simply a function of resources? What I'm getting at is, let's say that you have a country, a nation-state, as they say, putting essentially unlimited resources behind their AI. At that point, do they start to win the war unless you're able to match that level of resourcing on your side?

Stuart McClure: Really, it takes three things to build a proper AI or a bypass AI model.

  • The first is the data itself. That's what you might call resources, at least the first implementation of it is the data, so the examples of what would bypass us. That has to be created somehow.
  • Now, the second thing is the security domain expertise, the ability to know what is an attack that's successful and what's not an attack that's successful and being able to label all of those elements properly.
  • Then the last is the actual learning algorithms and the platform that you use, the dynamic learning system that you've created to be able to do this very, very quickly and rapidly.

You need all three elements.

Now, a nation-state could absolutely provide the first and the third without much struggle or problem. The second, the domain expertise problem, is an age-old issue. If you go into the entire security industry today and you ask, "Well, what percentage of people," let's say adversaries in security, "actually know how to find a zero-day, create an exploit, and use it?" (just a simple example of something that's quite complex), you're probably talking about 0.1% of the hackers out there in the world that can do that kind of thing.

Similarly, in the world of defense, the folks that can actually detect a zero-day, prevent a zero-day, and move on to clean it up are just as rare; we're in the low single digits. The much more difficult problem to scale is the domain expertise. Certainly, a large country (China, Russia, what have you) with a lot of resources at hand and a lot of smart people could start to catch up, but it becomes a very difficult scaling problem because humans are not easily scalable.

Michael Krigsman: The issue, therefore, is not so much the algorithms, because you can build algorithms based on resources, but the domain expertise packaged in the form of the data.

Stuart McClure: That's exactly right. The real limitation isn't resources; it's scaling this domain expertise. Not everybody really understands the core foundational problems of cybersecurity, how attacks work, and how to mitigate or prevent them. That becomes a real challenge because it's a very complex, multidimensional field of both attack surface area and defense capabilities.

"Mathematical cybersecurity"

Michael Krigsman: We've been talking about using algorithms and data in the service of cybersecurity. Let's dive into this a little bit more. You talk about math a lot and I'm not sure whether I heard you use the term "mathematical cybersecurity," but it's a term that came to my mind certainly as you've been talking. Tell us about the role of math in all of this and why is math so important?

Stuart McClure: To explain that, I have to start where the original idea for the company and the technique really came from. It came from me doing a talk in many different places, but one of the ones that's iconic for me is in upstate New York, in Rochester, at RIT, the Rochester Institute of Technology. I did a talk and, of course, it was one of the many where I would actually show hacking: how you break into computers, systems, networks, devices, and everything.

At the end of the hour talk, I opened it up for questions and, sure enough, got one in the top row. He raised his hand. He says, "Hey, Stu. This is all great and good and I'm scared to death now but tell me. Show me your computer system tray. I want to know what products you're running to prevent these kinds of attacks on your computer."

Of course, I had just been acquired by McAfee, in this particular scenario, about a month before. And so, I looked down in the front row of the audience and, sure enough, it was the head of worldwide sales for McAfee. Now, if I lied, I would be lying to a thousand people, and I wouldn't be able to live with myself. If I told the truth, I would probably get fired.

I decided in that moment, real quickly, to tell the truth, to say, simply, "Look, I don't need to show you my system tray. I haven't used any computer security software on this computer for probably the last ten years, and it's largely because it's just not good enough to prevent the kind of attacks that are waged against me."

But I'm not 99.9% of the world. Not all of the world gets targeted like I do after Hacking Exposed and my positions in cybersecurity, so I have to be really, really careful. So, what do I do? I do very, very simple things to prevent the 99+% of the attacks that get out there and would be waged against me.

First, you just don't blindly open any attachments whatsoever. Second, you don't click on any links. Another is passwords: making sure they're complex and long. There are countless others.

Ultimately, I started to answer these questions over and over and over again. I started to ask myself, "Gosh, I can explain how I prevent cyberattacks, very advanced ones, myself, with very simple behavioral steps. Why can't we train a computer to do that as well?" That was really the idea behind it.

I said, "Okay, let's start learning." My experience and education was programming, computer science applications. I started to think, "Why couldn't we just build a very large decision tree matrix to learn what are the characteristics or features of bad and what are the characteristics and features of good on a computer and then learn from that mathematically and build an algorithm for it?" Really, a math formula for determining the line between good and bad. That was really the beginning of it.
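That "decision tree matrix" intuition can be sketched in a few lines. This is a hypothetical toy, assuming scikit-learn; the features (file size, crypto imports, registry writes, entropy) and every number below are invented for illustration, not Cylance's real feature set.

```python
# Toy sketch: learn the line between "good" and "bad" from labeled
# feature vectors with a decision tree. Features and data are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [file_size_kb, imports_crypto_api, writes_registry, entropy]
X = [
    [120, 0, 0, 3.1],   # known-good samples
    [450, 0, 1, 4.0],
    [80,  0, 0, 2.8],
    [300, 1, 1, 7.9],   # known-bad samples
    [25,  1, 1, 7.5],
    [510, 1, 0, 7.2],
]
y = [0, 0, 0, 1, 1, 1]  # 0 = good, 1 = bad

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Classify a file the model has never seen before.
verdict = clf.predict([[40, 1, 1, 7.8]])[0]
print("bad" if verdict == 1 else "good")
```

A single shallow tree on six samples is of course far too weak for real detection; the point is only the shape of the idea, which is characteristics in, learned boundary out.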

Now, my expertise in programming had long since expired by that time, so I brought in Ryan Permeh, my co-founder of the company, and he brought in a whole team of data scientists to really help start to solve this problem. We didn't believe it was even possible, but we wanted to try it. It felt like it should be doable.

That first original idea was proven successful about a year later when we launched and released our very first math model based on learned behavior and samples from the last 20 years. We were able, with seven data scientists at the time, to be 2x or 3x more accurate in detecting viruses and attacks than the largest AV cybersecurity companies out there at the time, Symantec and McAfee. An incredible leap forward with very simple mathematics and algorithms from a handful of data scientists was what really got us going and made us believe we could do this.

Michael Krigsman: Do you want to share just a flavor or a taste of the mathematical techniques that you're using, for people who have expertise in math? That's not me, but there are certainly people out there who do.

Stuart McClure: Sure. Yeah, sure. We've gone through many evolutions of our algorithms. We use many different types of techniques. Right now, we've settled on two broad groups of techniques. The first is traditional deep learning algorithms like neural networks. That's sort of our primary go-to usage. But we also use more anomaly-based algorithms, like Gaussian and Bayesian ones, for example. It just depends on the use.

We've applied AI mathematics now to, I think, over a dozen different features inside of the technology today to catch all kinds of different attacks. And so, how these algorithms work is really, really simple. You take a large set of data. You take the characteristics of all of that data. Then you feed the characteristics, along with the labels, into these learning algorithms. They'll tell you which features are most predictive of a classification.

It can tell us that this new product that has just been released is going to be bad if you open it up, just by looking at the outside, the box if you will. It's sort of like being able to guess what's in your presents at Christmas. You know instantly because of the weight, the sound when you shake it, and all kinds of characteristics that you've learned over the years: what your mom and dad usually give you, what your needs are, et cetera. You'd be able to decipher it. That's the same sort of learning algorithm that we use.
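The pipeline just described (characteristics plus labels in, predictive features out) can be sketched as follows. The Gaussian naive Bayes model is only a loose stand-in for the "Gaussian and Bayesian" techniques mentioned, and all feature names and numbers are invented for illustration.

```python
# Sketch of the workflow: extract characteristics, attach labels, fit a
# Bayesian model, and ask which characteristics separate the classes most.
# All feature names and numbers here are invented.
import numpy as np
from sklearn.naive_bayes import GaussianNB

features = ["packed_entropy", "suspicious_api_count", "vendor_signed"]
X = np.array([
    [3.2,  1, 1],   # known-good samples
    [4.1,  0, 1],
    [3.5,  2, 1],
    [7.8, 14, 0],   # known-bad samples
    [7.4, 11, 0],
    [7.9, 16, 0],
])
y = np.array([0, 0, 0, 1, 1, 1])

model = GaussianNB().fit(X, y)

# A crude "predictiveness" score: gap between the class means, in units
# of each feature's overall standard deviation.
gap = np.abs(X[y == 1].mean(0) - X[y == 0].mean(0)) / (X.std(0) + 1e-9)
ranked = sorted(zip(gap, features), reverse=True)
print([name for _, name in ranked])      # most predictive first
print(model.predict([[7.6, 13, 0]]))     # classify an unseen sample
```

The mean-gap score is just one simple way to rank features; production systems use far more robust importance measures over vastly more features.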

One of the greatest examples I give is I usually tell people, "Just look outside or look out your window and look at people walking by on the street. Now I'm going to give you a challenge. Think of three qualities of each person walking by that would give you a high probability detection that they are a man or a woman."

Now, of course, this is a controversial topic but something that is, I think, quite interesting to talk about. You could look at them and say, "Well, look, long hair tends to be predictive of women or females, but not necessarily. It's maybe only 90%. Facial hair might be highly predictive of men. Not 100%, but maybe 90%." Adam's apple, clothes, you name it, there are all kinds of qualities that you would probably come up with as you start to look through this.

Now, just take those three or four features, these characteristics, and plot them in a three-dimensional graph, or a four-dimensional graph if you have four qualities. Then apply these learning algorithms to that graph in memory and start to learn from it.

What'll happen is, you keep training each new sample that this is a woman, this is a man, this is a woman, this is a man, and you pull all these features. You'll start to learn that, yes, truly, these characteristics--hair length, Adam's apple, things like dress--are highly predictive of a man versus a woman. Now, it doesn't mean it's 100%, but if you learn enough from enough people around the world, you can probably get to 99.99%, and that's the same kind of concept.

Now, instead of three or four features of a classification, for us, we mapped over two million features. That's how advanced the machine learning and the feature extraction has become in our world.
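Handling two million features per sample is typically done with sparse vectors. One common way to do that, assumed here purely for illustration and not necessarily what Cylance does, is feature hashing, where arbitrarily many named characteristics land in a fixed-width vector:

```python
# Feature hashing: project an open-ended set of named characteristics
# into a fixed-width sparse vector, so millions of features stay cheap.
from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(n_features=2**20, input_type="string")

# Each sample is simply the list of characteristics observed in it
# (names invented for illustration).
samples = [
    ["header:pe32", "section:.text", "import:CreateRemoteThread"],
    ["header:pe32", "section:.rsrc", "import:MessageBoxA"],
]
X = hasher.transform(samples)
print(X.shape)   # a million-plus columns, but only a few stored per row
```

Because only the observed characteristics are stored per row, the feature space can grow essentially without bound while memory use stays proportional to what each sample actually exhibits.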

Michael Krigsman: You've been doing this for, you said, about six years, seven years now?

Stuart McClure: Almost seven years, yeah, it'll be in June.

Michael Krigsman: I'm assuming that you're continuously mapping around two million features. To what extent are you introducing new features on an ongoing basis versus relying on the existing set?

Stuart McClure: All the time. What we do is, when we get a new data set, let's say, that we might have missed on or might not have confidently convicted on, we will map the features and characteristics of that new data set, and we might then promote those features in our prior feature map. Even though we might have seen those same features before, we hadn't weighted them very high because they weren't highly predictive. However, in this new data set, those features are highly predictive, and that'll elevate their weighting in our models. Then it'll relearn based on that new data. We do that 24/7, all day, all week.
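That promote-and-relearn loop resembles incremental (online) learning. Here is a minimal sketch assuming scikit-learn's `SGDClassifier`; the actual Cylance pipeline is not public, and all data below is invented.

```python
# Sketch of the feedback loop: when a new sample is missed or weakly
# convicted, feed it back with its label and update the model's weights
# in place rather than retraining from scratch. Data invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Initial training pass on historical labeled samples.
X_hist = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_hist = np.array([0, 0, 1, 1])
model.partial_fit(X_hist, y_hist, classes=[0, 1])

# A newly observed malicious sample arrives; fold it in with several
# incremental passes so the relevant feature weights get promoted.
X_new = np.array([[0.55, 0.95]])
y_new = np.array([1])
for _ in range(10):
    model.partial_fit(X_new, y_new)

print(model.predict([[0.55, 0.95]]))
```

The design point is that `partial_fit` adjusts existing weights rather than rebuilding the model, which is what makes a continuous 24/7 relearning loop tractable.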

Michael Krigsman: I see. That's, of course, the key to remaining up to date with new attacks that you see crossing your radar, so to speak.

Stuart McClure: That's exactly it. You've got to stay on top of it all the time, and that's the important part of the adversarial AI, too. I have a core team of folks that all they do is they try and break our own models, like every single day. Whatever they find to break our models, well, we feed that back into the learning system and it gets better and better.

Internet of things and cybersecurity

Michael Krigsman: Okay. Let's shift gears a little bit, and let's talk about some of the applications of this. Let's begin with Internet of Things, Industrial Internet of Things, critical infrastructure. What's going on with that?

Stuart McClure: Well, as you know, devices are overwhelming our world. The easy ones are simple things that are in your pocket, the phones or the tablets that are out there. But, of course, everything is getting connected, pretty much hyperconnected at this stage.

You can look at your car. If you bought a car in the last eight years or so, you probably have some connectivity in that car. It can be as simple as a tire pressure monitoring system that uses Bluetooth, or it can be a full wi-fi system. It could be a cellular connection. You could have NFC capabilities. There are all kinds of things inside of these everyday objects, things that we use that are now connected and certainly electronic.

When they're electronic, they open themselves up and expose themselves to a potential cyberattack, either from something that's within proximity of the device or something that's far away and remote. It just depends on the capabilities of the device.

That has now pushed into everywhere. That can be into things like water treatment plants, nuclear power plants, and oil and gas rigs. You name it, pretty much everything is connected in some form or fashion. The only question is, to what degree is it connected? Is it connected just for logging and alerting, or is it connected for two-way control? Either way, an adversary could take advantage of that and go after it.

It really is one of the areas, when you ask me what keeps me up at night, besides my teenagers: it's probably massive cyberattacks as a precursor to something far worse in the physical world, like a precursor to war by attacking the electric grid. That's probably the number one. Keeping the electric grid down for weeks at a time would be an incredible precursor to something very, very bad.

Again, one of the questions I think is, "Why should we listen to you, Stuart?" Honestly, I could tell you stories that would make you not want to listen to me or at least go find a shack up in the woods somewhere off the grid because this is very, very doable and quite trivial.

Michael Krigsman: Okay, so attacks, as you were just describing on water plants, nuclear power plants, these are the things that worry you. Why is this one so serious?

Stuart McClure: Well, I think for a couple of reasons. First of all, when you can shut down physical access to things like water and food, you have a real big challenge there. The other challenge is that a lot of people just presume that all of it is protected and air-gapped. For the most part, they are air-gapped, and they follow good policy, procedures, and regulations around that.

But people make mistakes all the time and technology vendors make mistakes all the time in terms of vulnerabilities in their products as well. And so, all of these can be highly exploited if discovered by an adversary with a motivation to do harm. We've seen that countless times.

If you look at the Stuxnet virus, for example, the virus (it was actually, effectively, a worm and a virus) took over a uranium enrichment plant out in Iran and was able to actually destroy the centrifuges that were enriching uranium for their nuclear program. By doing that, the adversary was able to set back that nuclear program by probably at least a year or two. It could have done much worse. That attack is largely attributed to Israel and to America. It just gives you a great example of what can be done by a very motivated adversary, somebody that really wants to be able to affect the physical world with cyber.

Michael Krigsman: What are the protection mechanisms that need to be in place in order to address those issues?

Stuart McClure: Well, I think there are some simple ones. It's been reported that the Stuxnet virus was originally put into the plant through a contractor's USB stick. If that is true, that's quite simple: the virus could have been easily prevented had they had some sort of an AI approach to the technology or, quite simply, just not allowed USB sticks to be plugged into major systems controlling major infrastructure. Either one of those would have prevented it, quite simply.

But then, after the USB stick was plugged in and the virus started to run, there were many stages in that kill chain that could easily have been mitigated, stopped, or prevented. Unfortunately, they weren't, in large part because the adversary going after those systems knew all about what was put in there: the controls in place, the technologies in place. They knew what to get around and how to get around it quickly so that they could perform the attack quite readily.

Michael Krigsman: You say the kill chain is complex. Is it similar to an airplane crash, where there's generally a series of failures that take place? Is this a similar kind of thing?

Stuart McClure: That's exactly right. Yeah, that's exactly right. Any different level or layer of functionality or prevention could have failed for that attack or that crash to be successful. It's very, very rarely one little thing that allows it to happen. Sometimes it is, for sure, but not frequently.

Preventing and mitigating cyber-attacks

Michael Krigsman: You mention that companies don't necessarily keep up to date with things like patches and the security fixes that they need to apply. At the same time, software vendors are focused not just on security, but they're also focused, primarily focused, on the development of their products, especially for smaller companies, and so their attention is divided. You have lack of education. You have a whole range of issues that can lead up to this and it happens. Look at the cyberattacks that take place on credit reporting agencies, for example. With all of that, what the hell are we going to do? [Laughter]

Stuart McClure: Honestly, if you sat down and actually thought about it, it feels quite exasperating. That's why, when I wrote the Hacking Exposed book back in 1999, and in every edition I've published since, I try to keep it as complete as humanly possible, because I think education is our weakest link. I think the vast majority of folks that are responsible for the security of their organization don't know how the adversary actually works and how they actually get into their systems and networks. If they did, if they knew the surface area of attack, the techniques, and the paths in, they could prevent cyberattacks infinitely better, if they believed in prevention.

What can we do? I think the only hope, and I know it's self-serving, but it is absolutely the truth, is a machine learning approach to prevention. That really is the only way. Collect as much data about all these attacks as humanly possible. Build learning algorithms to learn from all of their characteristics. Then apply that to brand new attacks and see if it catches them. That's probably the only hope that I have in the industry, outside of just turning off the computer or the device. My only answer to that question used to be, "Well, just hit the power button, because that's about the only thing you can do." Even then, by the way, Michael, hitting the power button with today's technology doesn't prevent an attack.

For example, there are countless examples from the last year or two, but for the last ten years, technology on your computer has meant that, even if it's powered off, an adversary can hack it, gain access, turn on your device, and attack from inside of your own computer. It's called TPM capabilities inside of Intel chipsets, but there are also other technologies that do this. It's not just about power anymore. We really do have to get to a brand new approach to this problem.

This old approach of signature-based detection and respond is absolutely 100% broken. It just does not work and it will not prevent the unknown unknowns, the attacks that everybody gets hit with all the time. And so, if you adopt that approach, this learned approach, we have a shot, but you've got to get that technology and that capability out to everybody and that's the real challenge.

Michael Krigsman: Do you have data on the efficacy of signature-based approaches versus mathematical approaches in dealing with new and evolving threats?

Stuart McClure: Yeah, the data is very, very clear and has been for quite some time: the traditional signature-based approach is anywhere from 30% to 50% effective on brand new, unknown-unknown attacks, whereas artificial intelligence and machine learning approaches are in the 99.9% effective range. That has been well tested independently for the better part of five or six years now.
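The gap between the two approaches comes down to generalization, and a toy illustration makes the mechanism clear. Everything below is invented: the "samples" are just strings, and the `learned_detect` function is a stand-in for a trained model, not a real one.

```python
# Toy illustration of why exact signatures fail on brand-new variants
# while a learned model can generalize. Everything here is invented.
import hashlib

known_bad = {"malware_v1", "malware_v2"}
signatures = {hashlib.sha256(s.encode()).hexdigest() for s in known_bad}

def signature_detect(sample: str) -> bool:
    # Matches only byte-for-byte identical samples.
    return hashlib.sha256(sample.encode()).hexdigest() in signatures

def learned_detect(sample: str) -> bool:
    # Stand-in for a trained model: scores generalized traits rather than
    # exact identity (a real model would weigh thousands of features).
    traits = ["malware", "dropper", "keylog"]
    return any(t in sample for t in traits)

new_variant = "malware_v3"                  # never seen before
print(signature_detect(new_variant))        # False: the signature misses it
print(learned_detect(new_variant))          # True: the traits still match
```

Any variant the signature database has never hashed slips through, while anything sharing the learned traits is still caught; that asymmetry is what the 30-50% versus 99.9% figures reflect.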

Future of cybersecurity

Michael Krigsman: We're going to run out of time soon, unfortunately, so let's move on to the future of attacks. I think you alluded to that but tell us about the future of cybersecurity over the next not 10 to 20 years, but next 4 to 5 years.

Stuart McClure: I think two things. One, attacks will get more and more complex in terms of the bypasses, with attackers looking for them using AI and adversarial AI, for sure. Also, I think the surface area is expanding quite rapidly with all of the connected devices. There are going to be ways that attackers gain access into these devices that we haven't even thought of yet; certainly, the manufacturers haven't thought of them yet. So I really think it always goes back to this education part.

From the defenders and the victims, there's an education element, but also from a supplier and a provider perspective. Technology companies build all of these. This webcam that I'm looking at right now, there are countless vulnerabilities in this thing. In this monitor, in my TV at home, all of these things have countless vulnerabilities.

The manufacturers themselves either have to get educated and start to produce more secure devices or, quite honestly, the government has to go in and regulate it. I use the R-word there, and I hate to use it, but I don't know how else we're going to get manufacturers to really take it seriously and prevent vulnerabilities from being introduced into their products, because 99% of all the vulnerabilities present in any of these devices are completely preventable. We've known for decades how the adversary gets into devices. If we know about it, then software developers and program managers should be able to make sure these attacks are prevented inside these devices.

I think, really, the future is both. We've got to get manufacturers to be much more secure and aware about security. We've got to get the defenders a lot more aware and educated about how the attackers and adversaries get in and how easy it is to prevent. We've got to think about it in a preventative light because detect and respond just does not work.

Advice for government regulators and policymakers

Michael Krigsman: What advice do you have for government regulators who are looking at this and want to do something? I think everybody is pretty much well-intentioned, right? The good guys and the people working for the software vendors have good intentions. How does the government deal with something that is so profoundly technical? What do they do?

Stuart McClure: Well, I think first is to demystify how simple the solution really is. You don't need a team of Ph.D. programmers to come and explain it to you. It really is quite simple. I've been on the Hill multiple times to help explain and there are very, very simple things that can be done in the development lifecycle that can prevent 95% to 99% of all these attacks, for sure.

It doesn't take classes. It doesn't take regulation and a stick to be whipped upon us. It can be as simple as open the book and learn. That's the unfortunate part.

My recommendation to regulators is, number one, use the carrot first. Simply adopt a strong software development lifecycle approach with security in mind. Develop that for industry with industry. Then provide rewards, incentives for adopting those and proving independently that you've adopted those approaches and those frameworks.

Now, if after a period of time there is no adoption, no interest in adoption, and those incentives are not really appealing--they could be tax incentives or all kinds of things--then the threat of the stick in regulation might be the only last step that we can do. I don't know how else to do it.

You look at how the EU adopted GDPR. Eventually, they just said, "Guys, you guys aren't getting this fixed, so we're going to set this policy and guidelines. You have to follow it. If you don't, there'll be penalties and punishment." Unfortunately, I think that's what we might have to get to.

Michael Krigsman: GDPR may be a model for how governments can relate to this type of complex technology.

Stuart McClure: There's a lot of criticism of GDPR, and much of it is absolutely valid. But I think it is a great example of how a government could step in and provide at least guidelines, recommendations, and, eventually, mandates.

Michael Krigsman: Stuart, in our last five to ten minutes, what should people inside corporations do? Technologists, CISOs, CIOs, what should they be doing?

Stuart McClure: First, I hate to beat the dead horse here, but education, right? Just learn as much as you can about how the bad guys get in. You're trying to prevent bad guys from getting in, so how can you possibly do that if you don't know how they actually get in? That's first.

Yes, whatever company you're working in today, you're probably going to be regulated in some form or fashion, or at least have a compliance mandate forced upon you. You need to be aware of these compliance mandates and regulations, and you need to follow them, of course. Every single one of these regulations, mandates, or compliance requirements is there because an attacker was successful in breaching defenses.

Now you have to go back and say, "Well, wait. What if we could actually prevent the bad guy from bypassing our defenses and getting in? We can do it objectively, without signatures and without an old, traditional detect and respond approach. Would we even need regulation?" If you can prevent 99.99999% of attacks, there's no need for regulation because the likelihood of an attack occurring is so infinitely small. You're not going to build a whole system of regulation and compliance around it.

My recommendation: Education is number one. Know how the bad guys get in. Where they come from? Who really cares. They could be in a basement in Idaho or in a building in Shanghai. It really doesn't matter. How do they do it? That's the key. Education is number one.

Number two, for all of you who have to communicate cybersecurity and its risks to your superiors and to the board, make sure you take as quantitative an approach as you possibly can. What I mean by that is being able to measure your state of security and your risk in a quantitative, repeatable, independently verifiable way, and then making that the standard by which you measure yourself over and over again to show either improvement or, you know, [laughter] going down.

A quantitative approach, education as much as possible, and then being able to speak to the right audience. If you're speaking to the board, don't speak to them about bits and bytes. Obviously, speak to them about risk, risk acceptance, risk mitigation, quantitative measures, holding people accountable, and a sense of urgency. Things of that nature will save your career, quite frankly.

Also, really take a preventative approach. I can't tell you how many companies--you mentioned a couple in the credit business, and there are countless in retail, even banking--have lost their leaders, the CIOs, even the CEOs, because of data breaches, in large part because they didn't believe prevention is possible.

They didn't put a concerted financial effort into asking, "Hey, how do we actually prevent all of these attacks from occurring?" rather than just detect and respond. There are some people out there who say, "Well, we can detect attacks within five seconds." [Laughter] That's great. That's better than five years, which is what it was ten years ago. But five seconds is a lifetime to an attacker. You can do an insane amount of damage in five seconds, and you can set up all kinds of backdoors that a detect and respond approach would never be able to catch.

Prevention is key. Start with prevention, then do detect, respond, and cleanup.

Cybersecurity advice for individuals

Michael Krigsman: Stuart, as we finish up, you mentioned that you have daughters, you have a family. What do you tell them regarding security? In other words, what's the advice for just the rest of us to not be a victim?

Stuart McClure: I think the biggest one is, just don't trust anybody.

Michael Krigsman: That's so depressing.

Stuart McClure: It sounds a little hyperbolic, but it's a great place to start because you know you're going to break that rule. Okay? But, yeah, it is a little depressing.

I tell my son--he's 19, first year in college--and I've been telling him ever since he could probably listen, "Look, even if you get an email from me or a text from me, and it says Stuart McClure or it says Dad or whatever it is, if you're not expecting it, if it's not something that I would have sent, or if you look at the source and it's not really from any email that you know is mine, don't trust it."

Again, start with "Don't trust anybody," so zero trust. Then go to, "Okay, now only trust that which you are expecting, that which is traditionally within the realm of possibility and that looks legitimate." Because, remember, I could be hacked too, so how would my son know? If my computer got hacked, the attacker could then send something to my son. Unless he could really look at it and think, "Wow, I would never get an email from my dad like this. This is very weird," and pick up the phone, call me, and ask, "Hey, did you send this email to me?" he would probably get hacked thinking, "Well, I trust my dad and what he sent me."

Unfortunately, trust is probably the biggest recommendation that I can provide out there. Just simply don't trust anybody. Trust, but verify. Verify that an individual is who they say they are and that they're actually giving you what you want, for sure.

The second is: make passwords long. They don't have to be complex, just long, and unique to each system. The hardest part of that equation is the uniqueness. What that means is that if you have a Gmail password, you cannot ever reuse that password on Yahoo or on your Windows computer. You have to have a brand new, long password for each and every single system.

Now, you can employ some technologies to manage those passwords, for sure. Ultimately, though, you don't need anything. You just need to make each one long and unique, if you can remember a pattern complex enough to allow you to do that.
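
The arithmetic behind "long beats complex" is straightforward to sketch. A rough entropy comparison, where the character-set sizes and lengths are illustrative assumptions rather than anything from the interview:

```python
import math

# Rough password entropy: bits ~= length * log2(alphabet size).
def entropy_bits(length, alphabet_size):
    return length * math.log2(alphabet_size)

short_complex = entropy_bits(8, 94)    # 8 chars drawn from all printable ASCII
long_simple   = entropy_bits(20, 26)   # 20 chars, lowercase letters only

print(round(short_complex, 1))  # ~52.4 bits
print(round(long_simple, 1))    # ~94.0 bits: the long, simple one wins
```

Each extra character multiplies the search space, which is why length dominates complexity in this model; real strength also depends on avoiding dictionary words and reuse.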

Those two things--trust no one, plus long and unique passwords--are going to kill 99.9% of the attacks out there.

Michael Krigsman: My mother is about 90 years old and, the other day, she called me up. I've been kind of working with her on some of these things slowly over the years. The other day, she called me up. She said, "Somebody sent me this thing. It's got a link, and it says I need to get Adobe PDF. Should I click that?" [Laughter]

Stuart McClure: That's exactly it. The funniest story is, I used to get these calls literally every week from my parents, my aunts, uncles, brothers, sisters, you name it--every week. It was a major motivator for me to actually build this company because I thought to myself, "I just can't scale. I'm going to have nieces and nephews, and grand-nieces and grand-nephews. I can't scale." [Laughter]

By deploying this technology on all of their computers, I literally get no phone calls anymore. That's, I guess, the downside of it. They only used to call me for help, and now they don't call me anymore. It's probably a sign of some problem I need to deal with.

Ultimately, this kind of technology can actually silence all of those attacks and that's the reason behind the name of the company and the techniques we use.

Michael Krigsman: Okay. Fantastic. Well, we have been speaking with Stuart McClure, who is the CEO of Cylance, which uses machine learning techniques to develop preventive measures, essentially, for cybersecurity attacks. He is the author of the book Hacking Exposed, which is now up to version seven. It's an incredible bible. He's one of the most knowledgeable people in the world on this topic. Stuart, thank you so much for taking the time to be with us and talk with us today.

Stuart McClure: Thank you, Michael.

Michael Krigsman: Everybody, go to CXOTalk.com. Subscribe to the newsletter. Subscribe on YouTube. We'll be back next week. There are lots of videos at CXOTalk.com. Thanks so much, everybody, and I hope you have a great day. Thanks for joining us. Bye-bye.

Michael Krigsman: We are speaking now with one of the top cybersecurity experts on the planet. I'm Michael Krigsman. I'm an industry analyst and the host of CXOTalk. Before we start, there's a subscribe form, and you can subscribe to our newsletter. You can also subscribe on YouTube. Please, please do that.

We're speaking with a guy who is the guy - the guy. He wrote the book on cybersecurity. His name is Stuart McClure. He is the CEO of Cylance. It's not his first security startup either.

Stuart, how are you? Thank you so much for being here on CXOTalk.

Stuart McClure: I'm great. I'm excited to be here. Thanks so much for having me. Today, I'm CEO and Co-Founder of a company called Cylance. We've been around for six and a half years. What we do is we prevent cyberattacks at the endpoint, at the servers, in the cloud, and anywhere that they go.

Evolution of cybersecurity

Michael Krigsman: Stuart, you've been doing this for a long time. Let's very briefly look back at cybersecurity historically. What has changed?

Stuart McClure: Well, there have been a few things that have changed. First of all, in the early days, cyberattacks were very, very simple because they were largely kept to just a few folks around the world. And so, you know, you had to deal with maybe one virus every couple of months, or even one a year. It was quite easy for us humans on the defense side to take a look at something that came through--an attack that was successful--figure out how it worked, create a detection signature for it, and then get that signature out to everybody who had yet to be victimized by the attack. That process was really quite simple.

Today, we have almost half a million different attacks that come out every single day that are brand new to the world. The sheer volume is one of the big changes that has occurred. The second is, I wouldn't say sophistication but, certainly, the collection of attacks that can be put together today can be quite complex. I think the complexity of attacks is definitely heightened versus even ten years ago.

That's what we do. We make sure we maintain a focus on all of those types of attacks, brand new, old, you name it, and we train our computers to learn from all of that.

Michael Krigsman: You say that there are half a million attacks that come out every day. What are some of the more interesting attacks?

Stuart McClure: It's funny. I've got to tell you, you could probably write a fictional book every single day, a Grisham novel, just on cyberattacks, if you subscribe to enough mailing lists, watch enough blogs, and read enough technical papers, because pretty much every day something comes out.

Today, in preparation for this talk, I thought, oh, I'll take a look at today's running list of announcements. Sure enough, there was a really cool one: a banking trojan on your mobile device, on Android.

It's simple. You have an Android device, and you were unfortunate enough to install one of a particular series of applications--they tend to be games, or tools fraudulently posing as legitimate--because you liked a new calculator, a chess game, or whatever it might be. In the background, it is actually sniffing and listening for all of your passcodes and all of your two-factor authentication codes for your banking platform. Now, that part is not new. That's been around forever and ever. We see that all the time.

What was new about it is that, to sense that it's on a real phone and not with an analyst doing reverse engineering, it senses the motion of your phone. If it doesn't sense any motion, it knows it's in a sandbox and it will not run. Whereas, if it senses motion, it knows it's actually in your pocket, on a legitimate phone, and it needs to listen. Even though using different radios and sensors in your phone to do certain things might not be brand new, that particular technique of sensing motion is rather new.
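
To make the evasion check concrete, here is a minimal simulation of the idea. The sensor API is faked with plain lists of readings, and the variance threshold is an invented illustration, not the trojan's actual logic:

```python
import statistics

# Sandbox-evasion heuristic sketch: an emulator's accelerometer tends to
# return flat, constant values, while a phone in a pocket jitters.
def looks_like_real_device(readings, min_variance=0.01):
    return statistics.pvariance(readings) > min_variance

sandbox_readings = [9.81] * 50                                  # perfectly still
pocket_readings  = [9.81 + 0.3 * ((i * 7) % 5 - 2) for i in range(50)]  # jitter

print(looks_like_real_device(sandbox_readings))  # False: malware stays dormant
print(looks_like_real_device(pocket_readings))   # True: malware activates
```

Knowing the check also suggests the analyst's countermeasure: feed the sandbox synthetic, jittered sensor data so the sample believes it is on a real phone.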

Michael Krigsman: How do we address that? What do we do about that?

Stuart McClure: If you think about it permutationally, there are probably eight different ways. But at the core, there are really two: you either prevent it, or you say, "Well, we can't ever prevent it, so let's just detect and respond faster."

I've always been a big, big believer that you can prevent it. So, how do we prevent this kind of a thing? Well, we prevent that app from either A) getting into the app store in the first place, B) being downloaded and installed, or C) running at all--somebody like a Cylance, a BlackBerry, or something like that looking at that actual app, knowing it's objectively bad, and blocking it before it runs. That's the truest preventative way to do it.

Now, if you don't believe prevention is possible, which is the case for many people, then you want to detect and respond. You'd allow the application to run and watch its behavior and activity, then marry that to what we know is bad activity or bad behavior, and then call it out and potentially alert on it and block it down the road. The downside of a detect and respond model is that it's after the fact, so all of your keystrokes could be long since gone, and your two-factor passcodes long since gone to the adversary and used to gain access to your bank account.

Adversarial AI

Michael Krigsman: What about the adversary then being aware that you're running these algorithms against the attack and thus changing the nature of the attack in various ways, obviously to try to circumvent what you're doing?

Stuart McClure: Yeah, so that's what we call adversarial AI, or offensive AI as it's sometimes called. I just call it AI versus AI. We have yet to see an adversary of any sophistication leveraging AI in the wild today to defeat AI.

We know that that's coming. We certainly have anticipated it for many, many years. We actually have a team dedicated to adversarial AI research to prepare for that type of technique being used against us.

It will happen. We know that. But for now, we haven't seen it and we are very, very ready for that and have anticipated that for quite some time.

The way we do that is we actually try to break our own models, our own AI. By trying to break our own AI, we're anticipating how the adversary would try to break us as well. We do this in real time, in the cloud, on thousands of computers inside Amazon AWS. By doing that, we can actually predict and prevent new forms of adversarial AI attacks.
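
A toy version of "breaking your own model" looks like this: take a classifier and nudge a correctly flagged sample against the model's gradient until the verdict flips, then feed the evasive sample back into training. The weights, features, and perturbation budget below are all invented for illustration; real model-hardening pipelines are far more involved:

```python
import numpy as np

# Stand-in linear malware classifier: malicious if w . x + b > 0.
w = np.array([1.5, -2.0, 0.8, -0.5, 1.1, -1.2, 0.3, -0.9])
b = 0.0

def is_malicious(x):
    return float(w @ x + b) > 0.0

# A sample the model correctly flags: features aligned with "bad" weights.
x = (w > 0).astype(float)

# FGSM-style perturbation: move each feature against the gradient of the
# score (for a linear score, the gradient with respect to x is just w).
eps = 0.6
x_adv = x - eps * np.sign(w)

print(is_malicious(x))      # True: the original sample is caught
print(is_malicious(x_adv))  # False: the perturbed sample slips through
# x_adv would then be labeled malicious and fed back into the training set.
```

The defensive loop is exactly the last comment: every evasion the red team finds becomes a new labeled training example, which is the "feed it back into the learning system" step described later in the interview.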

Michael Krigsman: Is adversarial AI one of the key things that keeps you up at night and that we should be worried about?

Stuart McClure: In the next three to five years, probably, I believe we will absolutely start to see this in the real world being very successful at bypassing other technologies--I'm hoping not ours, but possibly ours--and gaining a foothold. Right now, we are years and years ahead of the adversary because of this technique. I would say we're at least three years ahead.

Now, that window might shrink. When it does, we will have a challenge. But again, we're spending more research, more time, and more effort to make sure we understand all of the different adversarial techniques. Building that into our ever-improving learning models will ultimately keep us ahead of the bad guys.

Michael Krigsman: Why, or how, are you able to stay ahead of them? Is that simply a function of resources? What I'm getting at is, let's say you have a country--a nation-state, as they say--putting essentially unlimited resources behind their AI. At that point, do they start to win the war unless you're able to match that level of resourcing on your side?

Stuart McClure: Really, it takes three things to build a proper AI or a bypass AI model.

  • The first is the data itself. That's what you might call resources, at least the first implementation of it is the data, so the examples of what would bypass us. That has to be created somehow.
  • Now, the second thing is the security domain expertise, the ability to know what is an attack that's successful and what's not an attack that's successful and being able to label all of those elements properly.
  • Then the last is the actual learning algorithms and the platform that you use, the dynamic learning system that you've created to be able to do this very, very quickly and rapidly.

You need all three elements.

Now, a nation-state could absolutely provide the first and the third without much struggle or problem. The second, the domain expertise problem, is an age-old issue. If you go into the entire security industry right now and ask, "Well, what percentage of people"--let's say adversaries in security--"actually know how to find a zero-day, exploit it, and use it?"--just a simple example of something that's quite complex--you're probably talking about 0.1% of the hackers out there in the world who can do that kind of thing.

Similarly, in the world of defense, the percentage of folks who can actually detect a zero-day, prevent a zero-day, and move on to clean it up is probably similar. We're in the low single digits. The much more difficult problem to scale is the domain expertise. While a large country--China, Russia, what have you--with a lot of resources at hand and a lot of smart people could certainly start to catch up, it becomes a very difficult scaling problem because humans are not easily scalable.

Michael Krigsman: The issue, therefore, is not so much the algorithms, because you can build algorithms given resources, but the domain expertise packaged in the form of the data.

Stuart McClure: That's exactly right. The limitation around resources, and around scaling resources, is simply this domain expertise. Not everybody really understands the core foundational problems of cybersecurity, how attacks work, and how to mitigate or prevent them. That becomes a real challenge because it's a very complex, multidimensional field of both attack surface area and defense capabilities.

"Mathematical cybersecurity"

Michael Krigsman: We've been talking about using algorithms and data in the service of cybersecurity. Let's dive into this a little bit more. You talk about math a lot and I'm not sure whether I heard you use the term "mathematical cybersecurity," but it's a term that came to my mind certainly as you've been talking. Tell us about the role of math in all of this and why is math so important?

Stuart McClure: To explain that, I have to start where the original idea for the company and the technique really came from. It came from a talk I gave in many different places, but one of the iconic ones for me was in upstate New York, in Rochester, at RIT, the Rochester Institute of Technology. I did a talk and, of course, it was one of the many where I would actually show hacking: how you break into computers, systems, networks, and devices.

At the end of the hour-long talk, I opened it up for questions and, sure enough, got one from the top row. He raised his hand and said, "Hey, Stu. This is all great and good, and I'm scared to death now, but tell me: show me your computer's system tray. I want to know what products you're running to prevent these kinds of attacks on your computer."

Of course, I had just been acquired by McAfee about a month before. I looked down at the front row of the audience and, sure enough, there was the head of worldwide sales for McAfee. Now, if I didn't tell the truth, I would be lying to a thousand people, and I wouldn't be able to live with myself. If I told the truth, I would probably get fired.

I decided in that moment, real quickly, to tell the truth and say, simply, "Look, I don't need to show you my system tray. I haven't used any computer security software on this computer for probably the last ten years, and it's largely because it's just not good enough to prevent the kinds of attacks that would be waged against me."

But I'm not like 99.9% of the world. Not all of the world gets targeted like I do after Hacking Exposed and my positions in cybersecurity, so I have to be really, really careful. So, what do I do? I do very, very simple things to prevent the 99+% of attacks out there that would be waged against me.

First, you just don't blindly open any attachments whatsoever. Second, you don't click on any links. There are countless others. Another one is passwords: making sure they're complex and long.

Ultimately, I found myself answering these questions over and over again, and I started to ask myself, "Gosh, I can explain how I prevent cyberattacks, very advanced ones, myself, with very simple behavioral steps. Why can't we train a computer to do that as well?" That was really the idea behind it.

I said, "Okay, let's start learning." My experience and education were in programming and computer science. I started to think, "Why couldn't we just build a very large decision tree to learn the characteristics or features of bad and the characteristics and features of good on a computer, then learn from that mathematically and build an algorithm for it?" Really, a math formula for determining the line between good and bad. That was really the beginning of it.
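
The "features of bad versus features of good" intuition can be sketched in a few lines of scoring code. The file characteristics below are invented labels, and real feature extraction is vastly richer, but the mechanics are the same: weight each feature by how much more often it appears in bad samples than in good ones, then score new samples by summing weights:

```python
import math
from collections import Counter

# Toy training data: each sample is a set of file characteristics.
# (Feature names are invented for illustration.)
good = [{"signed", "known_packer"}, {"signed", "gui"}, {"gui"}]
bad  = [{"obfuscated", "writes_autorun"}, {"obfuscated", "no_signature"},
        {"writes_autorun", "no_signature"}]

def feature_log_odds(good, bad, smoothing=1.0):
    """Log-odds of each feature appearing in bad vs. good samples."""
    feats = set().union(*good, *bad)
    g, b = Counter(), Counter()
    for s in good: g.update(s)
    for s in bad:  b.update(s)
    return {f: math.log((b[f] + smoothing) / (len(bad) + 2 * smoothing))
             - math.log((g[f] + smoothing) / (len(good) + 2 * smoothing))
            for f in feats}

weights = feature_log_odds(good, bad)

def score(sample):
    # Positive: the features look more like "bad" than "good".
    return sum(weights.get(f, 0.0) for f in sample)

# An unseen file that shares features with past malware scores as bad.
print(score({"obfuscated", "writes_autorun"}) > 0)  # True
```

This is essentially a naive-Bayes-style scorer rather than a decision tree, but it captures the same "learn the line between good and bad from labeled features" idea.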

Now, my expertise in programming had long since expired by that time, so I brought in Ryan Permeh, my co-founder, and he brought in a whole team of data scientists to really help solve this problem. We didn't know if it was even possible, but we wanted to try. It felt like it should be doable.

That first original idea was proven successful about a year later when we launched and released our very first math model, trained on learned behavior and samples from the previous 20 years. With seven data scientists at the time, we were able to be 2x or 3x more accurate in detecting viruses and attacks than the largest AV cybersecurity companies out there at the time, Symantec and McAfee. Just an incredible leap forward with very simple mathematics and algorithms from a handful of data scientists. That's what really got us going and made us believe we could do this.

Michael Krigsman: Do you want to share just a flavor or a taste of the mathematical techniques that you're using for people who have expertise, which is in math, which is not me, but there are people out there who certainly do?

Stuart McClure: Sure. Yeah, sure. We've gone through many evolutions of our algorithms, and we use many different types of techniques. Right now, we've settled on two broad groups. The first is traditional deep learning algorithms like neural networks. That's our primary go-to. But we also use more anomaly-based algorithms--Gaussian and Bayesian, for example. It just depends on the use.
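
As a flavor of the anomaly-based side, here is a minimal Gaussian sketch: fit a mean and standard deviation to values observed during normal operation, then flag anything far outside that distribution. The metric, threshold, and synthetic data are invented for illustration:

```python
import numpy as np

# Fit a Gaussian to a metric observed on clean machines (e.g. events/sec),
# then treat anything far from the learned "normal" as suspicious.
rng = np.random.default_rng(42)
normal_activity = rng.normal(loc=50.0, scale=5.0, size=1000)

mu, sigma = normal_activity.mean(), normal_activity.std()

def is_anomalous(x, n_sigmas=4.0):
    # More than n_sigmas standard deviations from the learned mean.
    return abs(x - mu) > n_sigmas * sigma

print(is_anomalous(52.0))   # False: typical activity
print(is_anomalous(500.0))  # True: a burst far outside normal behavior
```

Real systems model many correlated features at once (multivariate Gaussians, Bayesian networks), but the one-dimensional version shows the core idea: no signature for the attack is needed, only a model of normal.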

We've applied AI mathematics to, I think, over a dozen different features inside the technology today to catch all kinds of different attacks. How these algorithms work is really, really simple. You take a large data set. You extract the characteristics of all of that data. Then you feed the characteristics, along with the labels, into these learning algorithms, and they tell you which features are most predictive of a classification.

It can tell us that a new product that has just been released is going to be bad if you open it, just by looking at the outside--the box, if you will. It's sort of like guessing what's in your presents at Christmas. You know instantly because of the weight, from shaking it, from the sound--all kinds of characteristics you've learned over the years, knowing what your mom and dad usually give you, what your needs are, et cetera. That's the same sort of learning algorithm we use.

One of the greatest examples I give is I usually tell people, "Just look outside or look out your window and look at people walking by on the street. Now I'm going to give you a challenge. Think of three qualities of each person walking by that would give you a high probability detection that they are a man or a woman."

Now, of course, this is a controversial topic, but something that is, I think, quite interesting to talk about. You could look at them and say, "Well, long hair tends to be predictive of women, but not always--maybe only 90%. Facial hair might be highly predictive of men. Not 100%, but maybe 90%." Adam's apple, clothes, you name it--there are all kinds of qualities you would probably come up with as you start to look through this.

Now, take those three or four features, these characteristics, and plot them in a three-dimensional graph, or a four-dimensional graph if you have four qualities. Then apply these learning algorithms to that graph in memory and start to learn from it.

What happens is, you keep training on each new sample--this is a woman, this is a man--and you pull in all these features. You'll start to learn that, yes, truly, these characteristics--hair length, Adam's apple, things like dress--are highly predictive of man versus woman. It doesn't mean it's 100%, but if you learn from enough people around the world, you can probably get to 99.99%, and that's the same kind of concept.
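
The "plot the features, then learn" picture maps directly onto a nearest-centroid classifier: each labeled sample is a point in feature space, and a new point takes the label of the closest class center. All feature values below are invented for illustration:

```python
import numpy as np

# Each row is one sample described by three numeric features;
# classes A and B occupy different regions of feature space.
class_a = np.array([[0.9, 0.1, 0.8], [0.8, 0.2, 0.9], [0.95, 0.0, 0.85]])
class_b = np.array([[0.2, 0.9, 0.1], [0.1, 0.8, 0.2], [0.15, 0.95, 0.05]])

# "Training" is just averaging each class into a centroid.
centroid_a = class_a.mean(axis=0)
centroid_b = class_b.mean(axis=0)

def classify(point):
    # Label a new point by its nearest class centroid.
    da = np.linalg.norm(point - centroid_a)
    db = np.linalg.norm(point - centroid_b)
    return "A" if da < db else "B"

print(classify(np.array([0.85, 0.15, 0.9])))  # lands near class A
```

With three features this is literally the 3D plot described above; the production models mentioned later do the same thing in a space of over two million dimensions.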

Now, instead of three or four features of a classification, for us, we mapped over two million features. That's how advanced the machine learning and the feature extraction has become in our world.

Michael Krigsman: You've been doing this for, you said, about six years, seven years now?

Stuart McClure: Almost seven years, yeah, it'll be in June.

Michael Krigsman: I'm assuming that you're continuously mapping around two million features. To what extent are you introducing new features on an ongoing basis versus relying on the existing set?

Stuart McClure: All the time. When we get a new data set--say, one we missed on or didn't confidently convict on--we map the features and characteristics of that new data set, and we might then promote those features in our prior feature map. Even though we might have seen those same features before, we hadn't weighted them very highly because they weren't highly predictive. In the new data set, however, those features are highly predictive, so that elevates their weighting in our models, which then relearn based on the new data. We do that 24/7, all day, every week.
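
A drastically simplified sketch of that promote-and-relearn loop is a perceptron-style update: when a newly labeled sample is misclassified, the weights of its features are nudged toward the correct label. Feature names and weights here are invented for illustration:

```python
# Online feature reweighting, perceptron-style.
def predict(weights, features):
    return sum(weights.get(f, 0.0) for f in features) > 0

def update(weights, features, label, lr=1.0):
    """label: +1 = malicious, -1 = benign. Promote features on a miss."""
    predicted = 1 if predict(weights, features) else -1
    if predicted != label:
        for f in features:
            weights[f] = weights.get(f, 0.0) + lr * label
    return weights

# "obfuscated" was weighted low because it wasn't predictive before.
weights = {"obfuscated": -0.2}
new_sample = {"obfuscated", "motion_check"}   # new family we initially miss

print(predict(weights, new_sample))           # False: missed at first
update(weights, new_sample, +1)               # labeled malicious -> promote
print(predict(weights, new_sample))           # True: now caught
```

Production retraining uses full model refits rather than single-weight nudges, but the effect described in the interview is the same: features that prove predictive on new data gain weight, and the model relearns.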

Michael Krigsman: I see. That's, of course, the key to remaining up to date with new attacks that you see crossing your radar, so to speak.

Stuart McClure: That's exactly it. You've got to stay on top of it all the time, and that's the important part of the adversarial AI, too. I have a core team of folks whose whole job is to try to break our own models, every single day. Whatever they find that breaks our models, we feed back into the learning system, and it gets better and better.

Internet of things and cybersecurity

Michael Krigsman: Okay. Let's shift gears a little bit, and let's talk about some of the applications of this. Let's begin with Internet of Things, Industrial Internet of Things, critical infrastructure. What's going on with that?

Stuart McClure: Well, as you know, devices are overwhelming our world. The easy ones are simple things that are in your pocket, the phones or the tablets that are out there. But, of course, everything is getting connected, pretty much hyperconnected at this stage.

You can look at your car. If you bought a car in the last eight years or so, you probably have some connectivity in it. It can be as simple as a tire pressure monitoring system that uses Bluetooth, or a full wi-fi system, or a cellular connection. You could have NFC capabilities. There are all kinds of things inside these everyday objects, things that we use that are now connected and certainly electronic.

When they're electronic, they open themselves up and expose themselves to potential cyberattack--either from something within proximity of the device or from something far away and remote. It just depends on the capabilities of the device.

That has now pushed into everywhere. That can be into things like water treatment plants, nuclear power plants, and oil and gas rigs. You name it, pretty much everything is connected in some form or fashion. The only question is, to what degree is it connected? Is it connected just for logging and alerting, or is it connected for two-way control? Either way, an adversary could take advantage of that and go after it.

It really is one of the areas, when you ask me what keeps me up at night, besides my teenagers: massive cyberattacks as a precursor to something far worse in the physical world, such as a precursor to war by attacking the electric grid. That's probably the number one. Keeping the electric grid down for weeks at a time would be a precursor to something very, very bad.

Again, one of the questions I think is, "Why should we listen to you, Stuart?" Honestly, I could tell you stories that would make you not want to listen to me or at least go find a shack up in the woods somewhere off the grid because this is very, very doable and quite trivial.

Michael Krigsman: Okay, so attacks, as you were just describing on water plants, nuclear power plants, these are the things that worry you. Why is this one so serious?

Stuart McClure: Well, I think for a couple of reasons. First of all, when you can shut down physical access to things like water and food, you have a real big challenge there. The other challenge is that a lot of people just presume that all of it is protected and air-gapped. For the most part, these systems are air-gapped and follow good policies, procedures, and regulations.

But people make mistakes all the time and technology vendors make mistakes all the time in terms of vulnerabilities in their products as well. And so, all of these can be highly exploited if discovered by an adversary with a motivation to do harm. We've seen that countless times.

If you look at the Stuxnet virus, for example--it was effectively both a worm and a virus--it took over a uranium enrichment plant in Iran and was able to physically destroy the centrifuges that were enriching uranium for the country's nuclear program. By doing that, the adversary was able to set back that nuclear program by probably at least a year or two. It could have done much worse. That attack is largely attributed to Israel and the United States. It gives you a great example of what can be done by a very motivated adversary, somebody that really wants to affect the physical world with cyber.

Michael Krigsman: What are the protection mechanisms that need to be in place in order to address those issues?

Stuart McClure: Well, I think there are some simple ones. It's been reported that the Stuxnet virus was originally introduced into the plant through a contractor's USB stick. If that is true, the virus could have been easily prevented, either with an AI approach to the technology or, quite simply, by not allowing USB sticks to be plugged into major systems controlling major infrastructure. Either one of those would have prevented it.

But then, after the USB stick was plugged in and the virus started to run, there were many stages in that kill chain that could easily have been mitigated, stopped, or prevented. Unfortunately, they weren't, in large part because the adversary going after those systems knew all about what was in there: the controls in place, the technologies in place. They knew what to get around and how to get around it quickly so that they could perform the attack quite readily.

Michael Krigsman: You say the kill chain is complex. Is it similar to an airplane crash, where there's generally a series of failures that take place? Is this a similar kind of thing?

Stuart McClure: That's exactly right. Yeah, that's exactly right. Some level or layer of functionality or prevention has to fail for that attack or that crash to succeed. It's very, very rarely one little thing that allows it to happen. Sometimes it is, for sure, but not frequently.

Preventing and mitigating cyber-attacks

Michael Krigsman: You mention that companies don't necessarily keep up to date with things like patches and the security fixes they need to apply. At the same time, software vendors are focused not just on security but primarily on the development of their products, especially at smaller companies, so their attention is divided. You have a lack of education. You have a whole range of issues that can lead up to this, and it happens. Look at the cyberattacks on credit reporting agencies, for example. With all of that, what the hell are we going to do? [Laughter]

Stuart McClure: Honestly, if you sat down and actually thought about it, it feels quite exasperating. That's why, when I wrote the Hacking Exposed book back in 1999, and in every edition since, I have tried to keep it as complete as humanly possible, because I think education is our weakest link. I think the vast majority of folks responsible for the security of their organizations don't know how the adversary actually works and how they actually get into their systems and networks. If they knew the attack surface, the techniques, and the paths in, and if they believed in prevention, they could prevent cyberattacks infinitely better.

What can we do? I think the only hope, and I know it's self-serving, but it is absolutely the truth, is a machine learning approach to prevention. That really is the only way. Collect as much data about all these attacks as humanly possible. Build learning algorithms to learn from all of their characteristics. Then apply that to brand new attacks and see if it catches them. That's about the only hope I have for the industry, outside of just turning off the computer or the device. Hitting the power button used to be my only answer to that question: "Well, just hit the power button because that's about the only thing you can do." Even then, by the way, Michael, with today's technology, hitting the power button doesn't prevent an attack.
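The pipeline Stuart sketches, collect known attacks, learn their characteristics, then score samples never seen before, can be illustrated with a minimal nearest-neighbor classifier. The binary features and samples here are invented for illustration; a production system would use far richer features and models.

```python
# Minimal illustration of "learn from known attacks, score unseen ones":
# a 1-nearest-neighbor classifier over binary feature vectors.
# Features and samples are invented for illustration.

def hamming(a, b):
    """Distance between two binary feature vectors."""
    return sum(x != y for x, y in zip(a, b))

def classify(training, sample):
    """The label of the nearest known sample wins."""
    _, label = min(training, key=lambda t: hamming(t[0], sample))
    return label

# (features, label): 1 = malicious, 0 = benign
training = [
    ([1, 1, 0, 1], 1),  # packs itself, hooks APIs, persists at boot
    ([1, 0, 1, 1], 1),
    ([0, 0, 0, 0], 0),  # ordinary signed utility
    ([0, 1, 0, 0], 0),
]

# A brand-new sample sharing characteristics with known malware
# is convicted even though this exact vector was never seen before.
unseen = [1, 1, 1, 1]
print(classify(training, unseen))  # -> 1 (malicious)
```

The point is the generalization step: the model convicts the unseen sample on learned characteristics rather than on a signature of any specific known file.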

For example, there are countless examples from the last year or two, and it has been true of computer technology for the last ten years: even if your computer is powered off, an adversary can gain access, turn on the device, and hack it. These are out-of-band management capabilities inside Intel chipsets, and there are other technologies that do the same thing. It's not just about power anymore. We really do have to get to a brand new approach to this problem.

This old approach of signature-based detect and respond is absolutely, 100% broken. It just does not work, and it will not prevent the unknown unknowns, the attacks that everybody gets hit with all the time. If you adopt this learned approach instead, we have a shot, but you've got to get that technology and capability out to everybody, and that's the real challenge.

Michael Krigsman: Do you have data on the efficacy of signature-based approaches versus mathematical approaches in dealing with new and evolving threats?

Stuart McClure: Yeah, the data has been very, very clear for quite some time: a traditional signature-based approach is anywhere from 30% to 50% effective on brand new, unknown-unknown attacks, whereas artificial intelligence and machine learning approaches are in the 99.9% effective range. That has been well tested independently for the better part of five or six years now.

Future of cybersecurity

Michael Krigsman: We're going to run out of time soon, unfortunately, so let's move on to the future of attacks. I think you alluded to that but tell us about the future of cybersecurity over the next not 10 to 20 years, but next 4 to 5 years.

Stuart McClure: I think two things. One, attacks will get more and more complex in terms of bypasses, with attackers using AI and adversarial AI to find them, for sure. Two, the attack surface is expanding quite rapidly with all of the connected devices. Attackers are going to find ways into these devices that we haven't even thought of yet, and certainly the manufacturers haven't thought of yet, so I really think it always comes back to this education part.

From the defenders and the victims, there's an education element, but also from a supplier and a provider perspective. Technology companies build all of these. This webcam that I'm looking at right now, there are countless vulnerabilities in this thing. In this monitor, in my TV at home, all of these things have countless vulnerabilities.

The manufacturers themselves either have to get educated and start producing more secure devices or, quite honestly, the government has to step in and regulate. I use the R-word there, and I hate to use it, but I don't know how else we're going to get manufacturers to take this seriously and prevent vulnerabilities from being introduced into their products, because 99% of the vulnerabilities present in these devices are completely preventable. We've known for decades how the adversary gets into devices. If we know about it, then software developers and program managers should be able to make sure those protections are built in and these attacks are prevented.

I think, really, the future is both. We've got to get manufacturers being much more secure and aware about security. We've got to get the defenders a lot more aware and educated about how the attackers and adversaries get in and how easy it is to prevent. We've got to think about it in a preventative light because detect and respond just does not work.

Advice for government regulators and policymakers

Michael Krigsman: What advice do you have for government regulators who are looking at this and they want to do something? I think everybody is pretty much well-intentioned, right? The good guys and the people working for the software vendors, they have good intentions. How does the government deal with something that is so profoundly technical? What do they do?

Stuart McClure: Well, I think first is to demystify how simple the solution really is. You don't need a team of Ph.D. programmers to come and explain it to you. It really is quite simple. I've been on the Hill multiple times to help explain and there are very, very simple things that can be done in the development lifecycle that can prevent 95% to 99% of all these attacks, for sure.

It doesn't take classes. It doesn't take regulation and a stick to beat us with. It can be as simple as opening the book and learning. That's the unfortunate part.

My recommendation to regulators is, number one, use the carrot first. Simply adopt a strong software development lifecycle approach with security in mind. Develop that for industry with industry. Then provide rewards, incentives for adopting those and proving independently that you've adopted those approaches and those frameworks.

Now, if after a period of time there is no adoption, no interest in adoption, and those incentives--they could be tax incentives or all kinds of things--are not really appealing, then the threat of the stick in regulation might be the only step we have left. I don't know how else to do it.

You look at how the EU adopted GDPR. Eventually, they just said, "Guys, you aren't getting this fixed, so we're going to set this policy and these guidelines. You have to follow them. If you don't, there will be penalties and punishment." Unfortunately, I think that's what we might have to get to.

Michael Krigsman: GDPR may be a model for how governments can relate to this type of complex technology.

Stuart McClure: There's a lot of criticism of GDPR, and much of it is absolutely valid. But I think it is a great example of how a government can step in and provide at least guidelines and recommendations and, eventually, mandates.

Michael Krigsman: Stuart, in our last five to ten minutes, what should people inside corporations do? Technologists, CISOs, CIOs, what should they be doing?

Stuart McClure: First, I hate to beat a dead horse here, but education, right? Learn as much as you can about how the bad guys get in. You're trying to prevent bad guys from getting in, so how can you possibly do that if you don't know how they actually get in? That's first.

Yes, whatever company you're working in today, you're probably going to be regulated in some form or fashion, or at least have a compliance mandate forced upon you. You need to be aware of these compliance mandates and regulations, and you need to follow them, of course. Every single one of these regulations, mandates, or compliance requirements is there because the attacker was successful in breaching somebody's defenses.

Now you have to go back and say, "Well, wait. What if we could actually prevent the bad guy from bypassing our defenses and getting in?" We can do it objectively, without signatures and without the old, traditional detect-and-respond approach. Would we even need regulation? If you can prevent 99.99999% of attacks, there's no need for regulation because the likelihood of an attack succeeding is so infinitesimally small. You're not going to build a whole system of regulation and compliance around it.

My recommendation: Education is number one. Know where the bad guys go. Where they come from, who really cares. They could be in a basement in Idaho or they can be in a building in Shanghai. It really doesn't matter. How do they do it? That's key. Education is number one.

Number two, for all of you that have to communicate this to your superiors, to the board, cybersecurity and the risks therein, make sure you try to take as quantitative of approach as you possibly can. What I mean by that is being able to measure your state of security and your risk in a quantitative, repeatable, independently verifiable way, and then make that your standard by which you measure yourself over and over and over again to show either improvement or, you know, not improvement, [laughter] going down.
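The quantitative, repeatable measurement Stuart recommends can be made concrete with a toy scoring function: pick a fixed set of controls, measure the same pass rates the same way each quarter, and track one weighted number over time. The control names and weights here are invented for illustration, not an industry standard.

```python
# Toy example of a repeatable, quantitative security score.
# Control names, weights, and pass rates are invented for illustration.

CONTROLS = {
    # control: (weight, fraction of systems passing this quarter)
    "patched_within_30_days": (0.30, 0.82),
    "mfa_enabled":            (0.25, 0.95),
    "endpoints_protected":    (0.25, 0.88),
    "backups_tested":         (0.20, 0.60),
}

def security_score(controls):
    """Weighted average of control pass rates, as a 0-100 score."""
    return 100 * sum(weight * rate for weight, rate in controls.values())

# Because the inputs are measured the same way each period, the score is
# repeatable and independently verifiable, and can be tracked quarter
# over quarter to show improvement (or decline) to the board.
print(round(security_score(CONTROLS), 1))
```

The specific weights matter less than the discipline: the same measurement, repeated, produces a trend line a board can act on.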

A quantitative approach, education as much as possible, and then being able to speak to the right audience. If you're speaking to the board, don't speak to them about bits and bytes. Obviously, speak to them about risk, risk acceptance, risk mitigation, quantitative measure, holding people accountable, sense of urgency. Things of that nature will save your careers, quite frankly.

Also, really take a preventative approach. I can't tell you how many companies have lost their leaders, the CIOs, even the CEOs, because of data breaches. You mentioned a couple in the credit business; there are countless in retail and even banking. In large part, it's because they didn't believe prevention was possible.

They didn't put a concerted financial effort into asking, "Hey, how do we actually prevent all of these attacks from occurring?" rather than just detecting and responding. There are some people out there that say, "Well, we can detect attacks within five seconds." [Laughter] That's great. That's better than five years, which is what it was ten years ago. But five seconds is a lifetime to an attacker. You can do an insane amount of damage in five seconds, and you can set up all kinds of backdoors and other things that a detect-and-respond approach would never be able to catch.

Prevention is key. Start with prevention, then do detect, respond, and cleanup.

Cybersecurity advice for individuals

Michael Krigsman: Stuart, as we finish up, you mentioned that you have children, you have a family. What do you tell them regarding security? In other words, what's the advice for the rest of us to not be victims?

Stuart McClure: I think the biggest one is, just don't trust anybody.

Michael Krigsman: That's so depressing.

Stuart McClure: It sounds a little hyperbolic, but it's a great place to start because you know you're going to break that rule. Okay? But, yeah, it is a little depressing.

I tell my son; he's 19, in his first year of college, and I've been telling him ever since he could listen. I'm like, "Look, even if you get an email or a text from me, and it says Stuart McClure or it says Dad or whatever, if you're not expecting it, if it's not something I would have sent, or if you look at the source and it's not really from any email address you know is mine, don't trust it."

Again, start with "don't trust anybody," so zero trust. But then go to, "Okay, now only trust that which you are expecting, that which you know is traditionally within the realm of possibility, and that looks legitimate." Because, remember, I could be hacked too, so how would my son know? If my computer got hacked and the attacker then sent something to my son, unless he could really peer into it and think, "Wow, I would never get an email from my dad like this. This is very weird," and pick up the phone, call me, and say, "Hey, did you send this email to me?" he would probably get hacked, thinking, "Well, I trust my dad and what he sent me."

Unfortunately, trust is probably the biggest recommendation I can provide out there. Just simply don't trust anybody. Trust, but verify. Verify that individuals are who they say they are and that they're actually giving you what you want.

The second is make passwords long. They don't have to be complex, just long, and that they're unique to each system. The hardest part of that equation is the uniqueness. What that means is that if you have a Gmail password, you cannot ever reuse that password on your Yahoo or on your Windows computer. You have to have a brand new, long password for each and every single system.

Now, you can employ some technologies to manage those passwords, for sure. Ultimately, though, you don't need anything special. You just need to make each password long and unique, if you can remember a pattern complex enough to allow you to do that.
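The "long beats complex" advice can be made concrete with a back-of-the-envelope entropy calculation. The character-set sizes below are standard assumptions (95 printable ASCII characters, 26 lowercase letters), not figures from the interview, and the formula assumes each character is chosen at random.

```python
import math

def entropy_bits(charset_size, length):
    """Bits of entropy for a password of random characters from a set."""
    return length * math.log2(charset_size)

# 8 characters drawn from ~95 printable ASCII symbols: short but "complex"
short_complex = entropy_bits(95, 8)

# 20 lowercase letters: long but "simple"
long_simple = entropy_bits(26, 20)

print(round(short_complex, 1))  # ~52.6 bits
print(round(long_simple, 1))    # ~94.0 bits: length wins
```

Every added bit doubles the brute-force search space, so the long, simple password is roughly 2^41 times harder to exhaust than the short, complex one, which is the arithmetic behind Stuart's advice.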

Those two things, trust no one and long, unique passwords, will kill 99.9% of the attacks out there.

Michael Krigsman: My mother is about 90 years old, and I've been slowly working with her on some of these things over the years. The other day, she called me up and said, "Somebody sent me this thing. It's got a link, and it says I need to get Adobe PDF. Should I click that?" [Laughter]

Stuart McClure: That's exactly it. The funniest story is, I used to get these calls literally every week from my parents, aunts, uncles, brothers, sisters, you name it - every week. It was a major motivator for me to actually build this company because I thought to myself, "I just can't scale. I'm going to have nieces and nephews and grand-nieces and grand-nephews, and I can't scale." [Laughter]

By employing now this technology into all of their computers, I literally get no phone calls anymore. Now that's, I guess, the downside of it. They only used to call me for help. Now they don't call me anymore. It's probably a sign of some problem I need to deal with.

Ultimately, this kind of technology can actually silence all of those attacks and that's the reason behind the name of the company and the techniques we use.

Michael Krigsman: Okay. Fantastic. Well, we have been speaking with Stuart McClure, the CEO of Cylance, which uses machine learning techniques to develop preventive measures, essentially, against cybersecurity attacks. He is the author of the book Hacking Exposed, now in its seventh edition. It's an incredible bible, and he's one of the most knowledgeable people in the world on this topic. Stuart, thank you so much for taking the time to be with us and talk with us today.

Stuart McClure: Thank you, Michael.

Michael Krigsman: Everybody, go to CXOTalk.com. Subscribe to the newsletter. Subscribe on YouTube. We'll be back next week. There are lots of videos at CXOTalk.com. Thanks so much, everybody, and I hope you have a great day. Thanks for joining us. Bye-bye.