Is AI/ML a game-changer for security, or overhyped?


1.3k views · 3 Upvotes · 17 Comments

VP, Director of Cyber Incident Response in Finance (non-banking), 10,001+ employees
I think there's room for artificial intelligence and there's room for machine learning. Those things are super important. At my previous job, I was on an invention disclosure for an algorithm to detect heartbeats. It's easy to detect heartbeats from something that's on the network 24/7. But when you've got a mobile workforce, it gets a lot more difficult to detect those kinds of things, especially when you've got threat actors who will use a heartbeat of 24 hours plus 10 minutes, so there's only one ping every day. Or even worse, like the malware from SolarWinds: it waits two weeks before it does anything bad.
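The heartbeat idea above can be sketched as a simple periodicity check: score a host's outbound connection times by how regular the gaps between them are, since a beacon on a 24-hour clock is just as regular as one on a 10-minute clock. This is a minimal illustration of the concept only; the function name and thresholds are mine, not from any product, and a real detector would also have to handle jitter, missed check-ins, and per-destination grouping.

```python
from statistics import mean, stdev

def beacon_score(timestamps, min_events=4):
    """Coefficient of variation of the gaps between outbound
    connection timestamps (in seconds). Values near 0 mean the
    host is phoning home on a regular clock, whether the period
    is 10 minutes or 24 hours. Returns None if there are too
    few events to judge periodicity."""
    if len(timestamps) < min_events:
        return None
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mu = mean(gaps)
    if mu == 0:
        return None
    return stdev(gaps) / mu

# A daily beacon (one ping every 24 hours) is perfectly regular:
day = 86400
print(beacon_score([0, day, 2 * day, 3 * day, 4 * day]))  # 0.0

# Ordinary interactive traffic has irregular gaps:
print(beacon_score([0, 100, 5000, 86400, 90000]))  # well above 1
```

Note that the score is period-independent, which is exactly why the long-interval beacons described above evade simple "N connections per hour" rules but still show up here, given enough observation time.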

I can throw up a whole bunch of reasons why it's bad or why it's hard. Executing applications based on math and trying to have computer systems that execute programs that we think are supposed to run on those particular computer systems and platforms is great. But it is so hard to keep up with the configuration management and the class of computing systems that things are running on at enterprise scale. Move it into the cloud and I think you've just magnified the problem exponentially several times over, because it's not your computer. It's somebody else's computer that you're renting space on.
4 Replies
VP, Chief Security & Compliance Officer in Software, 1,001 - 5,000 employees

Jeff, I agree with you. I think first of all, data is just too fluid at this point. I was on a call with SOC leaders and we were talking about next generation SOAR. What is that really going to get us?

CEO in Services (non-Government), Self-employed

I definitely agree with you, Jeff, that you are renting someone else's server. I have been raising a red flag within electronics manufacturing to secure the hardware. I've been wanting to get the electronics manufacturing industry to start securing the device: whatever they're making, it should be secured in manufacturing. No disrespect to anybody in software, but there are certain things you can't do in hardware that you need software for, and hardware is more difficult to hack. If you can build it in embedded, you're ahead of the game.

CISO in Software, 51 - 200 employees

Securing the hardware before it goes out: that'll be great someday. But what's happening, especially in manufacturing, is that all these PLCs and industrial control systems are old, but now suddenly they're connected. What do we do? How do we protect them? Same thing in pharmaceuticals. We're running these million-dollar robots with a Windows XP machine hooked up to them, and we're not allowed to patch them. We're not allowed to put AV on them. It's ridiculous. We do every trick in the book to separate them from the network (put them behind another firewall, block them from the internet, etc.), but then the technician comes in to fix the machine, plugs in his USB drive, infects the whole thing, and then we're screwed again. That's happened to me countless times, to the point where, if a technician came in, we had to say, "No USB drives. If you're going to use one, you're going to use ours. We're going to get whatever data, clean, onto our drive, then you can do your thing." Otherwise, it costs us weeks to get those robots recalibrated.

VP, Chief Security & Compliance Officer in Software, 1,001 - 5,000 employees
I think for AI to work for cyber, the capability of AI to learn my environment has to be faster. I'm not moving at the speed of the actors. I am stuck having to protect a hybrid environment, which is the challenge for probably most of us. So my focus is constantly fractured, because I'm defending in the cloud. I'm trying to help continue to migrate off of antiquated platforms and systems, to the cloud. So I don't have the luxury that the actors have, of being single-focused. 

I think we need to get to the place where we are actually taking cyber threats more seriously at the board level, where they don't question the investment to complete a lifecycle migration. Then we can just be done with it, rather than spoon-feeding the migration, which keeps you from being able to defend at the same rate. When we look at the SolarWinds attack, it wasn't necessarily that they leveraged a vulnerability embedded in code. It was the fact that they were able to go undetected for so long by mimicking the kind of traffic our security tools are designed to trust. I think we have to free ourselves to move at the speed of the actors and have that singular focus to really start to win this battle. AI and machine learning are very important for that, but they have to learn my network faster. It can't take a year and a half to learn my network.
2 Replies
President and National Managing Principal in Software, 501 - 1,000 employees

If the AI isn't getting to know your network, is it bad AI, or is it that your network isn't generating enough information to create a useful model: a set of patterns of normal activity?

Field CISO in Consumer Goods, 5,001 - 10,000 employees

I wholly agree that AI needs to be faster at learning the intricacies of individual environments. Time-to-value is one of the biggest barriers to acceptance of AI as necessary within an organization. I also agree that much of the issue resides in the lack of meaningful input to feed the AI engine. Without external influence and integrations, as well as "care and feeding," many times ROI doesn't start until six months in, and true value arrives at a year or so (if ever, without the right inputs). But is that so different than a human in the same role? There is a reason skilled cyber pros are hard to come by: they have years of experience to lean on. They can use information from past events and from their peers, and they can use their intuition to correlate seemingly dissimilar data. Dare I say that AI and ML are under-hyped in their ability to help organizations cut through the noise, expedite response time, and augment the human element today, but over-hyped when it comes to being the "silver bullet" of tomorrow?

CEO in Services (non-Government), Self-employed
Cyber threats are dynamic. You're never going to know when they first come in; you're only really going to be able to see them within a certain period of time, after they've already invaded. Edge brings a lot to the table in that respect. But with respect to AI particularly, I would offer this: if you don't know where the original data came from to build the model the machine learning uses to train the AI, you might as well not even bother. The only way you can do that is to gather the data, encrypt it, and keep it under lock and key while the three to five data scientists are building out that model. Make sure there's no way they can leak anything. Allow them to address bias just as much as security threats, because a security threat and a bias can be the same thing. You can manipulate the data that you want that AI to be defending against simply through the way the model is being built.

To wit, if a model is trained to pick up a pronoun in the wrong context, that's a very simple way for an AI to start breaking down in NLP or other kinds of capabilities, whether it's email or content or something outside of a structured environment. That's just one example. In manufacturing, it's ten times that, because you have sensors and actuators and PLCs and all sorts of equipment. In that environment, even the 1 or 0 in binary can become a weapon for a hacker, because the mechanics of "open a circuit, close a circuit, one and zero" can easily be triggered by a malevolent actor to go the wrong way. There goes $100,000 worth of product that falls off the line.

But there's still something inside of me that says there is a way to do this properly. Maybe it's design for security or design for privacy, or both, at the model level. Maybe the emphasis to the board is that if we build the models correctly, we can leverage them, because depending on whose model it is and how good the data scientist is, you don't need a year and a half. You need five cases that are germane and specific to the traffic flow of the environment. If I live in insurance, I need something that's insurance-related; if I live in manufacturing, I need something that's manufacturing-related, and so on and so forth.
President and National Managing Principal in Software, 501 - 1,000 employees
You can't just throw AI at a problem, just like you can't just throw technology at a problem, without a good, in-depth understanding of the inputs, the throughputs, and the outputs. From there, you can fully articulate what your use case is, and also who's going to be there to correct it. Back in the day, we had analysts looking at statistical modeling charts, tweaking the standard deviation rules and the alert mechanisms, because in one sense it was too chatty and in another sense it wasn't chatty enough. That's what people leave out of the whole AI discussion: you've got to have people there turning the knobs, dialing it back or forward, adjusting it to their particular use case.
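The knob-turning described here is, at its simplest, the classic standard-deviation rule: flag a metric when it strays more than k sigmas from its baseline, and let an analyst tune k when the detector is too chatty or too quiet. A toy sketch, with the function name and numbers purely illustrative rather than from any specific tool:

```python
from statistics import mean, stdev

def is_anomalous(baseline, current, k=3.0):
    """Flag `current` when it sits more than k standard deviations
    from the baseline mean. k is the analyst's knob: lower it and
    the detector gets chattier, raise it and it goes quiet."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > k

logins_per_hour = [100, 102, 98, 101, 99]
print(is_anomalous(logins_per_hour, 140))        # True: far outside baseline
print(is_anomalous(logins_per_hour, 103))        # False at the default k=3
print(is_anomalous(logins_per_hour, 103, k=1))   # True at a chattier setting
```

The last two calls make the point of the comment concrete: the same observation is noise or an alert depending entirely on where a human sets the dial.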
Sr. Director of Enterprise Security in Software, 5,001 - 10,000 employees
Is it just me, or does it seem like so many of these AI/ML solutions are pitched as some sort of magic bullet, supposed to make up for the fact that your best practices are terrible? That's what I keep seeing; every new security product is designed around that. I'll take strong best practices in an organization over some magic-bullet AI that's going to fill my gaps for me. When every new security startup pitches their solution, I don't know how they do it: they can't seem to reproduce anything outside of the demo they show me. I'm not really sure how this is going to help me.
1 Reply
VP, Chief Security & Compliance Officer in Software, 1,001 - 5,000 employees

I agree with you, Joseph. At the heart of it, it's just good hygiene. But then the additional assurance is important.

Senior Information Security Manager in Software, 501 - 1,000 employees
Extremely overhyped.
CIO in Education, 1,001 - 5,000 employees
Overhyped for sure
Director of Technology Strategy in Services (non-Government), 2 - 10 employees
It's a game changer when paired with a team of experts.

It's overhyped if it's just on its own.
Director of Information Security in Energy and Utilities, 5,001 - 10,000 employees
It's really overhyped, because at the end of the day what you see still needs to make sense in the larger context, and you need to understand it. On its own, AI/ML adds only a limited amount of value.
Director of Engineering in Energy and Utilities, 10,001+ employees
To be frank, AI/ML is a good add-on to security, but the hype created is way too much! It will definitely contribute in various areas, but selling it the way it is being sold today does not give the true picture.
