Is AI the solution to vulnerability management?
No, but proper utilization of AI in this space could benefit us all.
Not the solution, but it will enhance it drastically.
The big challenge with vulnerability management, even with the relatively small number of vulnerabilities we have, is narrowing down which ones are most important. I understand that not every vulnerability has the same level of importance within the environment, and we have the rating system, etc. But when you have thousands of machines sitting in the environment, imagine having the ability to say, "This is important, but the reality of the situation is that you need to patch these 47 machines out of your fleet of 15K right now, because as I look at the environment, these are the ones that either hold the most critical data or are on an exploitation path, so this needs to be closed." Then I could say to my team, "This is where I really need you to focus." Think about what that does in terms of true risk management in the environment and the level of effort required. I am doing that manually now, and there are just not enough hours in the day.
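A minimal sketch of that kind of fleet prioritization, assuming hypothetical host fields and hand-picked weights (nothing here reflects the poster's actual tooling): score each host on data criticality, exploit-path exposure, and open critical vulnerabilities, then surface the handful to patch first.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    has_critical_data: bool      # host holds the most critical data
    on_exploit_path: bool        # reachable along a known exploitation path
    open_critical_vulns: int     # count of unpatched critical vulnerabilities

def priority(h: Host) -> float:
    """Hypothetical scoring: exploit-path exposure and data criticality dominate."""
    return (4.0 * h.on_exploit_path
            + 3.0 * h.has_critical_data
            + 0.5 * h.open_critical_vulns)

fleet = [
    Host("web-01",  has_critical_data=False, on_exploit_path=True,  open_critical_vulns=2),
    Host("db-07",   has_critical_data=True,  on_exploit_path=True,  open_critical_vulns=1),
    Host("kiosk-3", has_critical_data=False, on_exploit_path=False, open_critical_vulns=5),
]

# "Patch these machines right now": the few that score highest across the fleet.
patch_now = sorted(fleet, key=priority, reverse=True)[:2]
print([h.name for h in patch_now])
```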
The problem with vulnerability management sounds complex, but it isn't. From a data standpoint, there are only 200K known, unique vulnerabilities. The challenge is understanding which ones have known exploits, labeling an exploit as, say, remote code execution (RCE), and determining which ones are trending or being used in ransomware; that's where AI models can help. The marketing term for it is "knowledge graph," but what you're really doing is indexing everything and ranking it all, like a search engine.
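Here is a minimal sketch of that "index everything and rank it" idea, assuming made-up CVE records and hand-tuned weights (a real system would learn these from threat data). The point is only to show how a few threat signals can be combined into a single ranking.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    has_known_exploit: bool   # exploit code exists in the wild
    is_rce: bool              # labeled as remote code execution
    trending: bool            # currently discussed / used in campaigns

# Hypothetical weights; a production system would tune or learn these.
WEIGHTS = {"has_known_exploit": 5.0, "is_rce": 3.0, "trending": 2.0}

def risk_score(v: Vuln) -> float:
    """Combine the threat signals into a single ranking score."""
    return (WEIGHTS["has_known_exploit"] * v.has_known_exploit
            + WEIGHTS["is_rce"] * v.is_rce
            + WEIGHTS["trending"] * v.trending)

vulns = [
    Vuln("CVE-2024-0001", has_known_exploit=True,  is_rce=True,  trending=False),
    Vuln("CVE-2024-0002", has_known_exploit=False, is_rce=False, trending=True),
    Vuln("CVE-2024-0003", has_known_exploit=True,  is_rce=True,  trending=True),
]

# Rank like a search engine: highest-scoring vulnerabilities first.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v.cve_id, risk_score(v))
```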
That's what we have done at RiskSense: We took what Google did 20 years ago with hubs and authorities from a page-rank perspective and re-implemented that for cyber today, from a vulnerability and threat management perspective. For example, one metric is term frequency–inverse document frequency (TF–IDF), which measures how frequently or how rarely a term occurs.
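A minimal sketch of the TF–IDF metric mentioned above, using made-up exploit descriptions as the corpus: terms that appear in many documents get a low weight, while rare, distinctive terms get a high one.

```python
import math
from collections import Counter

# Toy corpus: short exploit descriptions (made up for illustration).
docs = [
    "remote code execution via deserialization",
    "local privilege escalation via kernel race",
    "remote code execution in web template engine",
]

tokenized = [d.split() for d in docs]
N = len(tokenized)

def tf_idf(term: str, doc_tokens: list[str]) -> float:
    """tf-idf = term frequency in this doc * inverse document frequency across the corpus."""
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    df = sum(1 for toks in tokenized if term in toks)   # documents containing the term
    idf = math.log(N / df) if df else 0.0               # rarer terms get a higher weight
    return tf * idf

print(tf_idf("remote", tokenized[0]))           # common term -> lower weight
print(tf_idf("deserialization", tokenized[0]))  # rare term -> higher weight
```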
So using that approach, we went back and looked at all the exploit code committed to Metasploit and PoC repositories. Rather than taking the tags at face value, we studied those exploits ourselves and labeled them using natural language processing (NLP). It used to take an analyst four days to understand an exploit and label it; today we can do it in four seconds. That's a huge win for us. We run the models on a continual basis now, and when a new exploit comes in, we label it. If we don't get sufficient accuracy, a human looks at it. It's a very typical live example.
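To make that labeling-with-a-human-fallback workflow concrete, here is a minimal sketch assuming scikit-learn is available; the training snippets, labels, and confidence threshold are all hypothetical and do not reflect RiskSense's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up exploit descriptions and labels for illustration only.
train_texts = [
    "sends crafted packet, gains remote shell on target service",
    "executes arbitrary code remotely via unauthenticated request",
    "overwrites local file permissions to gain root",
    "local user escalates privileges through setuid binary",
]
train_labels = ["RCE", "RCE", "privilege-escalation", "privilege-escalation"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

new_exploit = "unauthenticated attacker achieves remote code execution on the server"
predicted = model.predict([new_exploit])[0]
confidence = model.predict_proba([new_exploit]).max()

# Route to a human analyst when the model is not confident enough.
print(predicted if confidence > 0.7 else "needs human review", round(confidence, 2))
```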
I don't think it's the be-all-end-all solution for vulnerability management; it's more another tool in the toolbox for managing and administering your strategic vulnerability management platform.