When it comes to the vulnerability management process, where can current AI capabilities provide the most value? Have you had success with using AI-enabled tools specifically for deduplication, false positive reduction, prioritization, etc.?
To build on Lawrence’s point, one of the key benefits is that AI provides a baseline from which to start. This helps keep staff from being overwhelmed by alert fatigue and the sheer volume of identified vulnerabilities. AI gives you that initial starting point, but it remains essential to train staff to review and maintain checkpoints to ensure accuracy. Double-checking is always important, as you do not want incorrect information coming from the AI models.
AI engines within vulnerability management toolsets add definite value, particularly in deduplication. The ability of AI to prioritize vulnerabilities and understand the environment is also powerful. For years, the industry has relied on CVSS scores, which are often high but lack contextualization to specific environments, making it difficult to know what to prioritize. AI-driven contextualization enables organizations to assess risk within the context of their own environment, rather than relying solely on generic scores. However, the challenge lies in the significant upfront work required, such as asset classification and identifying critical assets, to ensure the system has the right context and data. When these elements are in place, AI-driven tools can work effectively and provide considerable value.
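To make the contextualization idea concrete, here is a minimal sketch of how a generic CVSS score might be re-weighted by environment-specific asset data. The field names, weights, and multipliers are illustrative assumptions, not any vendor's actual model:

```python
# Hypothetical sketch: contextual prioritization that weights a generic
# CVSS score by environment-specific asset data. Weights and multipliers
# below are assumptions for illustration only.

def contextual_risk(cvss: float, asset_criticality: float,
                    internet_facing: bool, compensating_controls: bool) -> float:
    """Return a 0-10 contextual risk score for one finding.

    cvss                  -- generic CVSS base score (0-10)
    asset_criticality     -- 0.0 (lab box) to 1.0 (crown-jewel system)
    internet_facing       -- exposure multiplier if reachable from outside
    compensating_controls -- discount if segmentation/WAF already mitigates
    """
    score = cvss * (0.5 + 0.5 * asset_criticality)  # weight by asset value
    if internet_facing:
        score *= 1.3   # assumed exposure multiplier
    if compensating_controls:
        score *= 0.7   # assumed mitigation discount
    return round(min(score, 10.0), 1)

# A CVSS 9.8 on an isolated, mitigated lab host can rank below a
# CVSS 7.5 on an internet-facing crown-jewel system:
print(contextual_risk(9.8, 0.1, False, True))
print(contextual_risk(7.5, 1.0, True, False))
```

The point of the sketch is the inversion: with the right asset context in place, the "lower" CVSS finding correctly outranks the "critical" one, which is exactly the reordering generic scores cannot do.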
Some modern AI-enabled VM tools claim to speed up triage by focusing attention on fewer, more important problems, using AI to combine CVSS, EPSS, exploitability, asset criticality, attack-path context, and real-world telemetry. Honestly, I haven't deployed them, but I have seen some interesting demos of their products. I have been evaluating them against three board-ready metrics:
1. Mean Time to Remediate (MTTR) for exploitable vulnerabilities, based on (AI-detected) exploited-in-the-wild or high-EPSS vulnerabilities.
2. MTTR for vulnerabilities on our crown-jewel systems (SAP S/4HANA, POS systems, tenant management systems, leasing platforms, etc.).
3. Average Exposure Window (AEW), the time between detection and remediation for the top 5% riskiest vulnerabilities.
If I can get these three metrics from the AI capabilities of VM tools, I will have the exposure-window measurements I need to gauge how quickly we are reducing the risk of ransomware and other malware stemming from missing patches, while also reducing alert fatigue.
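The three metrics above can be computed from whatever a tool exports, which is a useful sanity check before buying. This sketch assumes a hypothetical export schema (the field names are mine, not a vendor's):

```python
# Illustrative computation of the three board metrics from remediated
# findings. Field names (detected, remediated, epss, exploited_in_wild,
# crown_jewel, risk_score) are assumptions, not a specific vendor schema.
from datetime import date
from statistics import mean

findings = [
    {"detected": date(2024, 3, 1), "remediated": date(2024, 3, 9),
     "epss": 0.92, "exploited_in_wild": True, "crown_jewel": False, "risk_score": 9.1},
    {"detected": date(2024, 3, 5), "remediated": date(2024, 3, 8),
     "epss": 0.10, "exploited_in_wild": False, "crown_jewel": True, "risk_score": 7.4},
    {"detected": date(2024, 3, 2), "remediated": date(2024, 3, 20),
     "epss": 0.05, "exploited_in_wild": False, "crown_jewel": False, "risk_score": 3.2},
]

def days_open(f):
    return (f["remediated"] - f["detected"]).days

def mttr(subset):
    return mean(days_open(f) for f in subset) if subset else None

# 1. MTTR for exploitable vulnerabilities (in the wild, or EPSS >= 0.5)
exploitable = [f for f in findings if f["exploited_in_wild"] or f["epss"] >= 0.5]
# 2. MTTR for crown-jewel systems
crown = [f for f in findings if f["crown_jewel"]]
# 3. AEW for the top 5% riskiest findings (at least one finding)
top = sorted(findings, key=lambda f: f["risk_score"], reverse=True)
top_slice = top[:max(1, len(top) // 20)]

print("MTTR exploitable:", mttr(exploitable), "days")
print("MTTR crown jewels:", mttr(crown), "days")
print("AEW top-risk:", mttr(top_slice), "days")
```

If a tool cannot hand over at least these fields per finding, it cannot support these metrics regardless of how good its AI demo looks.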
While it’s not specifically AI, I want to mention the prioritization functionality we use, which is a product formerly known as Silk, now acquired by Armis. This tool does a good job of contextualizing vulnerabilities and providing additional information. It employs a proprietary prioritization engine built on AI to enhance the information provided. Although we are not using AI directly for prioritization, this solution has been very useful over the past six months or so.
We conducted a pilot using Claude to integrate with our vulnerability management tool, Tenable, aiming to facilitate communication and gain insights into remediation tasks for our team. The results were underwhelming. The tool struggled with hallucinations and lacked contextual understanding of our environment. Our engineers attempted to make it work across our manufacturing environment, OT security, and enterprise security, but the integration did not yield actionable vulnerability results. It failed to provide recommendations that an agent would actually execute. The AI did, however, perform well for threat hunting, assisting with tasks like looking up IOCs, finding hashes, and searching within our SOC. For proactive vulnerability management (such as identifying necessary patches or available mitigations), the AI sometimes hallucinated or provided inaccurate information. We are still evaluating its capabilities, but so far there has not been a measurable ROI to justify the investment; it remains a proof of concept.
We use Rapid7 and have encountered the same issue: the context is missing. Prioritization remains a challenge, and we have not seen any added value from AI. Vendors continue to assure us that improvements are forthcoming, but we have yet to experience tangible results.
We also use Tenable and have explored similar avenues to reduce noise in our vulnerability management process. So far, we have not found any AI tool that effectively addresses this challenge. The lack of context is a significant issue, as you mentioned. For example, understanding the impact of a Java vulnerability on internal networks or solutions requires human knowledge, which is difficult to incorporate into an AI model. That contextual understanding is crucial for improving the effectiveness of vulnerability management, and it remains an ongoing challenge.
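One low-tech way to capture the human knowledge described above, without waiting for a model to learn it, is an explicit rules overlay that adjusts scanner severity before triage. The rule structure and field names here are hypothetical, not Tenable's API:

```python
# Hypothetical rules overlay encoding human environment knowledge, e.g.
# "a Java vulnerability on a segmented internal network is less urgent".
# Fields (plugin, network, severity) are illustrative assumptions.

RULES = [
    # (predicate, severity adjustment, rationale)
    (lambda f: "java" in f["plugin"].lower() and f["network"] == "internal",
     -2, "Java runtime only reachable from segmented internal network"),
    (lambda f: f["network"] == "dmz",
     +1, "DMZ assets face untrusted traffic"),
]

def triage_severity(finding: dict) -> tuple[int, list[str]]:
    """Apply human-knowledge rules to a scanner severity (0-10 scale)."""
    severity, reasons = finding["severity"], []
    for predicate, delta, why in RULES:
        if predicate(finding):
            severity += delta
            reasons.append(why)
    return max(0, min(10, severity)), reasons

sev, why = triage_severity(
    {"plugin": "Oracle Java SE Multiple Vulnerabilities",
     "severity": 9, "network": "internal"})
```

Rules like this are crude compared with an AI model, but they are auditable: every downgrade carries the rationale a human wrote, which is exactly the context that is otherwise hard to get into a model.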

I appreciate the concept of data contextualization, and we are working toward that goal. However, as a CISO, I am concerned about over-reliance on these tools. The sheer volume of data points exceeds what people can physically review, raising questions about how to validate the tool’s accuracy beyond simple spot checks. If the tool provides inaccurate contextualization, it can be difficult to detect, and that uncertainty is concerning. While this is not necessarily about AI hallucinations, the challenge is managing massive volumes of data and ensuring accuracy in ways that are not always immediately apparent.
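One way to move beyond ad-hoc spot checks is to audit a random sample and report the tool's contextualization accuracy with a confidence interval. This sketch assumes a simple random sample and uses the standard 1.96 z-value for 95% confidence; the normal approximation is my simplification, not a prescribed audit method:

```python
# Structured sampling instead of ad-hoc spot checks: how many random
# audits bound the error, and what interval the observed accuracy gives.
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Spot checks needed for +/- `margin` accuracy at ~95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

def accuracy_interval(correct: int, checked: int, z: float = 1.96):
    """Normal-approximation 95% interval for observed accuracy."""
    p = correct / checked
    half = z * math.sqrt(p * (1 - p) / checked)
    return round(p - half, 3), round(p + half, 3)

# Even with 200,000 contextualized findings, roughly 384 random checks
# bound the error to +/- 5%:
print(sample_size(200_000))
print(accuracy_interval(180, 200))
```

The counterintuitive part is the payoff: the required sample barely grows with the population, so a data volume no team could review exhaustively can still be validated with a few hundred well-chosen checks.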