The recently published Vulnerability and Threat Trends report by Skybox Security highlights several fundamental root causes behind the rise in cybersecurity attacks that consume most enterprise security teams’ containment efforts. Let’s take a look at just a few of the eye-opening realities we can learn from the report so we can better assess what is needed to establish more effective vulnerability management programs:
- 18,341 new vulnerabilities were discovered in 2020 [1]. Keep in mind that this annual count has grown consistently year over year, so the total pool of vulnerabilities keeps mounting, and most will remain out there as an “exploitable threat vector” for many months and possibly even years. It’s like leaving one of the windows in the back of your house indefinitely open and assuming nobody will ever bother to access your house through that particular window simply because it hasn’t happened yet. The reality we must deal with is that 85% of exploited vulnerabilities are more than two years old, and the accumulation of these keeps growing each year.
- Attack vectors are broadening with new vulnerabilities being discovered across a greater number of different products, software, applications, and device types. This growing breadth of vulnerabilities now cuts across mobile devices, tablets, OT and IoT devices, different browsers, and e-business applications, giving threat actors more opportunities than ever to wage sophisticated attack campaigns. We saw this play out with the recent SolarWinds attack.
- Forty percent of new vulnerabilities are classified as “medium severity”. This may sound like it’s not a big deal because they aren’t “critical” or “high”, but the bad news is that threat actors are increasingly targeting these medium vulnerabilities because they know many organizations focus primarily on ones that are categorized as critical and high. While existing vulnerability management programs continue to focus on closing the critical and high severities they can find, threat actors go around them, methodically searching for the open window that lets them get in, or that lets them move laterally and advance an ongoing campaign once they are already inside.
- There was a 128% increase in Trojans and a 106% increase in new Ransomware samples [2]. This close correlation in rate of increase is far from a coincidence. As highlighted in the report, threat actors are waging sophisticated cyberattack campaigns using combinations of different malware types to achieve their desired state – for example the combined use of Emotet and Trickbot to provide a back-door entry for Ryuk Ransomware [3]. The large number of medium severity vulnerabilities being bypassed by enterprise teams increases the likelihood that these types of multi-stage attacks will find a way to succeed.
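To make the prioritization gap concrete, here is a minimal Python sketch (all CVE names, fields, and weights are hypothetical, not from the report) contrasting a severity-only ranking with a risk-based one that also weighs known exploits and exposure. The exploited medium severity vulnerability only rises to the top under the second approach:

```python
# Hypothetical findings: (cve_id, severity, known_exploit, reachable_by_attacker)
findings = [
    ("CVE-A", "critical", False, False),
    ("CVE-B", "high",     False, False),
    ("CVE-C", "medium",   True,  True),   # actively exploited and reachable
]

SEVERITY_ORDER = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def severity_only(finding):
    """Rank purely on the CVSS-style severity label."""
    return SEVERITY_ORDER[finding[1]]

def risk_based(finding):
    """Illustrative scoring: exploitability and exposure outweigh raw severity."""
    _, severity, known_exploit, reachable = finding
    return (known_exploit + reachable) * 10 + SEVERITY_ORDER[severity]

print(max(findings, key=severity_only)[0])  # CVE-A: the "critical" wins
print(max(findings, key=risk_based)[0])     # CVE-C: the exploited medium wins
```

The specific weights here are arbitrary; the point is only that a ranking function which ignores exploitability will systematically deprioritize exactly the medium severity findings that attackers are targeting.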
There is additional evidence cited in the report that highlights the stark reality we face – that many Vulnerability and Threat Management programs in place today are simply not good enough. They might be efficiently patching volumes of vulnerable assets, but with attack vectors continuing to grow and be exploited, it’s obvious that an ongoing cadence of scanning and basic prioritization efforts followed by patch management is a losing strategy. If the goal is to reduce the greatest amount of business risk and tighten security efficacy as much as possible, then it’s obvious that more needs to be done.
Let’s examine three critical flaws that exist within vulnerability management programs that are in place across organizations today:
Flaw #1: Vulnerability analysis with incomplete data sets
This is a major problem since the efficacy of vulnerability prioritization and remediation efforts depends entirely on the data sets that you are working with. Vulnerability assessments need to be actionable, accurate, continuous, and based on centralized, normalized, and complete sets of data. Having multiple disconnected discovery methods or relying on data from periodic scans is simply not good enough in today’s world, given the rate at which new applications and endpoints are being added and the constant change in enterprise network configurations.
First, you need to ensure that you are merging and correlating all vulnerability scan data from various sources into one normalized source. That’s not enough, however. You need to then augment this data by passively collecting vulnerability asset data from configuration, patch, and asset management systems, from endpoint security systems (EDRs, EPPs), from network security devices (firewalls, IPS/IDS, etc.), from network infrastructure devices (routers, switches, load balancers, etc.), from various cloud assets, OT systems, and any other relevant parts of your hybrid network that may be considered “unscannable”. This provides a continuous and normalized “single source of truth” that is absolutely critical to ensure that vulnerability analysis and the ensuing prioritization and remediation efforts are effective.
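As a simple illustration of what merging and normalizing might look like, here is a minimal Python sketch (asset identifiers, record fields, and source names are all hypothetical) that deduplicates findings from multiple discovery sources on an (asset, CVE) key to produce a single consolidated view:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VulnRecord:
    asset_id: str   # hypothetical normalized asset identifier
    cve_id: str
    severity: str
    source: str     # which scanner or system reported the finding

def normalize_and_merge(*sources):
    """Merge vulnerability records from multiple discovery sources,
    deduplicating on (asset, CVE) so each finding appears exactly once."""
    merged = {}
    for records in sources:
        for record in records:
            key = (record.asset_id, record.cve_id)
            # keep every reporting source for audit, but one logical finding
            merged.setdefault(key, []).append(record)
    return {key: records[0] for key, records in merged.items()}

# The same finding reported by two sources, plus one from an "unscannable" OT asset
scanner_data = [VulnRecord("srv-01", "CVE-2020-1472", "critical", "scanner")]
passive_data = [VulnRecord("srv-01", "CVE-2020-1472", "critical", "edr"),
                VulnRecord("plc-07", "CVE-2019-0001", "medium",   "cmdb")]

merged = normalize_and_merge(scanner_data, passive_data)
print(len(merged))  # 2 unique (asset, CVE) findings
```

In a real pipeline the normalization step would also have to reconcile differing asset naming schemes and severity scales across tools, which is where most of the engineering effort tends to go.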
Flaw #2: Failure to properly calculate exposure risk in conjunction with Vulnerability Prioritization
This is probably the biggest problem. Many organizations are failing to calculate their true exposure risk because they are merely correlating critical and high severity vulnerabilities with known exploits. This common and overly simplistic approach doesn’t take into account the many possible vectors and paths that today’s sophisticated cyberattack campaigns may take in finding a way to exploit vulnerable assets across your environments. Some of these attack paths may be via lateral movement within your networks. A complete and accurate exposure analysis can only be achieved by leveraging a network model that understands your unique hybrid network context and all security controls that are in place, and that provides a multi-dimensional analysis and simulation of all potential attack paths.
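A full network model is far beyond a short example, but the core idea of attack-path simulation can be sketched with a simple graph search. In this hypothetical Python sketch (asset names and the reachability map are invented for illustration), edges represent connections permitted by firewall and segmentation rules, and a breadth-first search finds which vulnerable assets an attacker entry point can actually reach, including via lateral movement:

```python
from collections import deque

# Hypothetical reachability graph: edges are connections permitted
# by firewall/segmentation rules between assets.
reachable = {
    "internet": ["web-01"],
    "web-01":   ["app-01"],   # lateral movement is permitted here
    "app-01":   ["db-01"],
    "db-01":    [],
}

# Assets carrying an exploitable vulnerability (note: any severity).
vulnerable = {"web-01": "medium", "db-01": "critical"}

def exposed_paths(entry):
    """BFS over permitted connections to find which vulnerable assets
    an attacker starting at `entry` could actually reach."""
    seen, queue, exposed = {entry}, deque([[entry]]), []
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in vulnerable and node != entry:
            exposed.append((path, vulnerable[node]))
        for nxt in reachable.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return exposed

for path, severity in exposed_paths("internet"):
    print(" -> ".join(path), f"({severity})")
```

Notice that the critical vulnerability on the database is only exposed through a chain that passes through a medium severity foothold: exactly the kind of multi-step path that severity-only prioritization misses and that a proper exposure model must simulate.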