Maintained by NIST, the National Vulnerability Database currently lists 231,000 vulnerabilities. Last year, 25,000 new vulnerabilities were added to the database, a rate of around 2,000 every month. The technology underpinning the world's largest entities is perforated, to say the least, and security capabilities are stretched thin.
How organizations can address vulnerability management in a structured way, using processes and tools such as Continuous Threat Exposure Management (CTEM), was the subject of a debate amongst security leaders at an event in Central London, where they settled on several key points.
Attack Surface Management in a fragmented environment
There was agreement that traditional approaches to vulnerability management are underwater, a problem compounded by the sprawling nature of the modern technical environment. Across cloud, on-prem, multi-cloud, legacy, remote, supply chain, OT, IT, and more, the attack surface continues to stretch ever further.
The same could be said of security teams. Spread thinly across a plethora of vulnerability scanners, threat intel feeds, patching tools, and spreadsheets, they are only ever one small, overlooked piece of code away from the next breach. For an example, look no further than Equifax, where a version of Apache Struts left unpatched for two months allowed attackers to inflict damage to the tune of $575m and growing.
Risk-based vulnerability management – focus on efficiency
Those present agreed that the elephant in the room is that the problem has become too much to manage. Using traditional methods, people, processes, and technology simply cannot scale mitigation to the necessary pace and volume.
The answer lies in identifying the vulnerabilities that pose the most risk and giving these priority. The data support this: studies show that organizations can typically remediate only 5-20% of the vulnerabilities in their environment, while only around 5% of vulnerabilities are ever actively exploited in the wild.
Identifying which ones these might be, however, is no simple task. For this to be successful, every organization’s specific view of risk must be considered.
Achieving this requires aggregating data from across the entire environment to gather context. For example, infrastructure data, network devices, firewall configurations, CMDB records, and identity information can all be used to create a detailed digital twin.
With this data as a backdrop, threat intelligence can be overlaid to help security teams answer the questions critical to understanding risk in context. Which vulnerabilities are being actively exploited? Are these on exposed assets in my environment? Those present agreed that answering such questions brings focus, and in large environments can quickly whittle down millions of risk points to thousands.
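The filtering logic described above can be sketched in a few lines. This is a minimal illustration, not a reference to any particular product: the asset records, CVE identifiers, and the exploited-in-the-wild feed are all hypothetical stand-ins for the aggregated context and threat intelligence discussed here.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    asset: str
    internet_exposed: bool  # derived from firewall/network context

# Hypothetical inventory from scanners, plus a KEV-style feed of CVEs
# known to be actively exploited in the wild.
inventory = [
    Finding("CVE-2024-0001", "web-01", internet_exposed=True),
    Finding("CVE-2024-0002", "db-01", internet_exposed=False),
    Finding("CVE-2024-0003", "web-02", internet_exposed=True),
]
actively_exploited = {"CVE-2024-0001", "CVE-2024-0002"}

def prioritize(findings, exploited):
    """Rank findings: exploited AND exposed first, then exploited, then the rest."""
    def score(f):
        return (f.cve_id in exploited, f.internet_exposed)
    return sorted(findings, key=score, reverse=True)

ranked = prioritize(inventory, actively_exploited)
```

Even this crude two-signal sort pushes the exploited, internet-facing finding to the top of the queue; a real program would fold in many more context signals (asset criticality, compensating controls, CMDB ownership).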
Attack surface visualization is important
An important part of this exercise that elevates it beyond just ‘counting vulnerabilities’ is understanding the technical dependencies of critical business processes. Companies regulated by DORA and the operational resilience expectations of the FCA may already have visibility of this for compliance purposes. Still, for others, the complexity of this task can be daunting.
Security leaders agreed that mapping the web of related assets, systems, and processes they secure is vital for understanding how an attack may unfold. Doing so shines a light on the paths that might be used for lateral movement and where single points of failure exist, highlighting where effort should be focused. This should also include supply chains.
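Mapping assets and their dependencies as a graph makes lateral-movement paths and single points of failure mechanically discoverable. The sketch below uses an invented five-node environment purely for illustration; the asset names and edges are assumptions, not a prescribed model.

```python
# Hypothetical dependency map: an edge means "can reach / depends on".
graph = {
    "internet": ["vpn-gw", "web-01"],
    "vpn-gw": ["jump-host"],
    "web-01": ["app-01"],
    "jump-host": ["app-01"],
    "app-01": ["payments-db"],
    "payments-db": [],
}

def all_paths(g, src, dst, path=None):
    """Depth-first enumeration of simple attack paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    return [p for nxt in g.get(src, []) if nxt not in path
            for p in all_paths(g, nxt, dst, path)]

paths = all_paths(graph, "internet", "payments-db")
# A node that appears on every path is a choke point: a single point of
# failure for the business process, and the best place to focus effort.
choke_points = (set.intersection(*(set(p) for p in paths))
                - {"internet", "payments-db"})
```

In this toy environment both attack paths converge on `app-01`, so hardening or segmenting that one asset cuts every route to the critical database, which is precisely the kind of focus the leaders were describing.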
Never make assumptions
As with any cybersecurity initiative, it was agreed that success relies on a continually updated picture of how the external threat environment applies to your specific organization.
Vulnerabilities shouldn't be discounted simply because they have aged. While high-profile CVEs have a definite window of popularity with adversaries, the risk is not negated once a CVE has fallen out of favor amongst the herd. What could hurt you six months ago can still hurt you today if it remains unpatched.
Similarly, it is not uncommon for threats with low CVSS ratings to be exploited precisely because their innocuous nature keeps them off the radar of security teams. It is dangerous to assume otherwise.
Threat mitigation: the promise and the reality
Earlier, the point was made that, even with a rigorous CTEM program, patching is no panacea. In large organizations, identifying, assessing, and testing software fixes before deployment in a way that keeps pace with risk can be a herculean task in its own right.
Against this backdrop, an asset inventory with a clear view of dependencies and attack paths becomes even more important for directing the work streams that mitigate the largest portion of the risk. Security teams must also adopt alternative mitigations such as firewall rule changes and network segmentation.
Like much in cybersecurity, vulnerability management must adapt to an age of overwhelming volume. Without a clear strategy for prioritization, security teams and outmoded toolsets swim against a considerable undertow. Only with focus can they rise above the tide.