Patch me if you can: managing your external attack surface

“The perimeter is dead” is a phrase that’s been thrown around for the last few years in our industry. It’s not that your perimeter doesn’t exist anymore, but digital sprawl is now so vast that maintaining an up-to-date asset list is a logistical nightmare.

At the same time, the IBM X-Force Threat Intelligence Index 2021 revealed that “scanning and exploiting” for vulnerabilities has now surpassed phishing as the most common initial attack vector, only increasing the pressure on organizations to keep on top of their perimeter assets.

With high-impact threats like ransomware attacks looming over every security manager and board member, external security is beginning to feel like an overwhelming game of whack-a-mole. Patch as you might, it’s impossible to keep up with the growing lists of assets and vulnerabilities and resolve everything. And, as anyone who's ever been involved in enterprise patch management knows, it's not as simple as hitting a great big update button.

It's no surprise then that we’ve seen analysts like Gartner talking about External Attack Surface Management (EASM)—the process of mapping and managing your entire external perimeter—as a crucial emerging technology¹ trend. But this topic is much bigger than just technology. Organizations need to understand the problem in the context of their own business before putting their faith (and their funds) into tooling alone.

Taking compliance and regulation out of the equation, there are a few key factors to consider when defending your organization from external threats. That’s not to say you should ignore your regulatory requirements, but it is best to initially approach EASM from a purely risk-based perspective, acknowledging that compliance and priority risk reduction are not always aligned. The approach comes down to:

  1. Knowing and understanding your attack surface
  2. Rationally approaching the large list of known issues across your estate
  3. Identifying new and emerging threats and managing the risk they pose

1. Understanding your attack surface

The chances are that when it comes to your organization’s external attack surface, you aren’t 100% sure what’s out there. Most organizations understand the value of asset cataloguing and have been taking action, but modern networks are ever-expanding, and asset management isn’t a one-off task; it should be an ongoing process.

The agile approach and autonomy afforded to technology creators means that services are constantly spinning up and changing, and it’s difficult to keep up with new and changing assets without getting in the way of development. Add third parties, tech migrations, mergers, and acquisitions into the mix, and the challenge becomes even more complex.

The good news is that technology can already solve a good chunk of this. Automated asset mapping and monitoring tools, used correctly, can help you keep an up-to-date catalogue of your external assets, which can serve as a source of truth when the next big vulnerability drops. Critically, once you have a good understanding of your external assets, you can start to reduce your overall attack surface by decommissioning anything unnecessary, like legacy systems and long-forgotten testing environments.
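
To make this concrete, here is a minimal, illustrative sketch of how such an inventory might start: pulling hostnames for a domain from public certificate transparency logs (via the crt.sh JSON endpoint) and checking which still resolve. The domain is a placeholder, and a dedicated EASM tool would combine many more sources and run continuously rather than as a one-off script.

```python
# Illustrative sketch only: seed an external asset inventory from certificate
# transparency logs and basic DNS resolution. "example.com" is a placeholder.
import json
import socket
from typing import Dict, Optional, Set

import requests  # pip install requests

def certificate_hostnames(domain: str) -> Set[str]:
    """Collect hostnames recorded in CT logs for the domain, via crt.sh."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names: Set[str] = set()
    for entry in resp.json():
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lower().lstrip("*."))
    return names

def build_inventory(domain: str) -> Dict[str, Optional[str]]:
    """Map each discovered hostname to an IP address (None if it no longer resolves)."""
    inventory: Dict[str, Optional[str]] = {}
    for host in sorted(certificate_hostnames(domain)):
        try:
            inventory[host] = socket.gethostbyname(host)
        except socket.gaierror:
            inventory[host] = None  # stale or internal-only: a decommissioning candidate
    return inventory

if __name__ == "__main__":
    print(json.dumps(build_inventory("example.com"), indent=2))
```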

The danger here though is getting caught up in this stage—it’s easy to generate lots of data, and only natural to want to take that data and feed it into your other processes, like automated vulnerability assessment (VA) scanning. The trap is that all of this can easily generate a lot of work. Scanning everything usually creates huge lists of vulnerabilities, and focusing on these is often counter-productive, as we’ll discuss further on.

The main thing to remember is that your datasets (including vulnerability lists) are a tool, not a set of tasks. They are something you can use to answer questions like the two below (a short sketch of the first follows the list):

  1. Does this new vulnerability affect us, and if so, where? 
  2. If an attacker looks to us as a target, what exactly will they see?
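
As a small, hypothetical illustration of that first question: if your inventory records the product and version running on each host, checking exposure to a new advisory becomes a lookup rather than a scramble. The inventory contents, product names, and version threshold below are invented for the example.

```python
# Illustrative sketch: answer "does this new vulnerability affect us, and where?"
# from an asset inventory. All products, versions, and hosts are invented.
from typing import Dict, List, Tuple

from packaging.version import Version  # pip install packaging

# Hypothetical inventory: hostname -> (product, version), fed by asset-mapping tooling.
INVENTORY: Dict[str, Tuple[str, str]] = {
    "vpn.example.com": ("ExampleVPN Gateway", "9.1.2"),
    "www.example.com": ("nginx", "1.20.1"),
    "legacy.example.com": ("ExampleCMS", "2.4.0"),
}

def affected_assets(product: str, fixed_in: str) -> List[str]:
    """Return hosts running `product` at a version below the fixed release."""
    return [
        host
        for host, (asset_product, asset_version) in INVENTORY.items()
        if asset_product == product and Version(asset_version) < Version(fixed_in)
    ]

# Example: a new advisory says ExampleCMS versions before 2.5.0 are vulnerable.
print(affected_assets("ExampleCMS", fixed_in="2.5.0"))  # ['legacy.example.com']
```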

2. Prioritizing outstanding issues

Even with an up-to-date map of your attack surface, it’s often just not feasible to keep up with the rate at which vulnerabilities are identified by the global security community, or the volume with which they are reported by automated tooling. It’s common for organizations to approach vulnerability management as a big list of tasks, ordering thousands of vulnerabilities by ‘severity’ and fixing from the top down. If you find yourself doing this, you’re not alone, but it is impractical, and here’s why:

Research conducted by Kenna Security shows that 77% of CVEs have no observed or published exploit code. In fact, while 22.4% of CVEs have published exploit code, only 1.8% of all CVEs are ever actually exploited in the wild.

According to NIST, more than 18,000 vulnerabilities were disclosed in 2020. So, if only 1.8% of those (roughly 330) were actually exploited in the wild, how much time was potentially spent fixing the 17,000-plus that weren’t? More importantly, how can we identify and prioritize those 330 in the first place?

This demonstrates the need for both pragmatism and context in attack surface management: considering the likely actions of an attacker, as well as understanding what actually impacts your environment. If you’re facing a wall of outstanding issues and don’t know where to start, the solution is to triage pragmatically, using criteria that consider not only the impact of an exploit but also the likelihood that it will be used. Consider asking yourself questions like the following (a simple scoring sketch follows the list):

  1. Which of these issues have public exploit code available?
  2. Of those, which are easy to identify as being present from the outside?
  3. Of those, which require the least skill to exploit?
  4. Of those, which are most popular or commonly exploited?
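
To illustrate the idea, here is a deliberately simple scoring sketch that ranks a backlog by likelihood as well as impact. The fields, weighting, and findings are assumptions made for the example; in practice the inputs would come from scanner output, exploit databases, and threat intelligence.

```python
# Illustrative triage sketch: rank findings by impact weighted by how likely an
# external attacker is to find and exploit them. All values are invented.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    impact: int                  # 1-5: business impact if exploited
    public_exploit: bool         # is exploit code publicly available?
    externally_detectable: bool  # can an attacker fingerprint it from outside?
    low_skill: bool              # exploitable without specialist skill?
    widely_targeted: bool        # popular technology/vulnerability with attackers?

def likelihood(f: Finding) -> int:
    """Crude likelihood score: one point per factor that favours the attacker."""
    return sum([f.public_exploit, f.externally_detectable, f.low_skill, f.widely_targeted])

def priority(f: Finding) -> int:
    """Risk-style priority: impact weighted by likelihood."""
    return f.impact * likelihood(f)

backlog = [
    Finding("Library CVE on internal service, no exploit", impact=4,
            public_exploit=False, externally_detectable=False,
            low_skill=False, widely_targeted=False),
    Finding("Exposed CMS with public PoC", impact=3,
            public_exploit=True, externally_detectable=True,
            low_skill=True, widely_targeted=True),
]

for finding in sorted(backlog, key=priority, reverse=True):
    # the exposed CMS outranks the nominally higher-severity internal issue
    print(priority(finding), finding.name)
```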

For this, it’s best to work with your offensive team to resolve issues together. Use their view of what a vulnerability offers an attacker in the context of an end-to-end attack, as well as how likely they are to be able to identify and exploit it. As mentioned in the fourth point above, it’s also worth considering the popularity of certain technologies and vulnerabilities. Generally speaking, the more a technology is talked about, the more likely it is to be targeted.

3. Understanding and responding to new threats

When new vulnerabilities drop, they immediately present a threat to the organizations that harbor them within their estate, so it’s important to act quickly. This is where a comprehensive, up-to-date asset list comes back in. If you can quickly review your assets, understand which (if any) are affected by a new vulnerability, and execute a response, you can significantly reduce the window of opportunity for an attacker to exploit it. Just as important is knowing when you’re not impacted. If you can confidently say “we don’t use this technology”, you will save yourself (and the board) a lot of time and effort.

If a new vulnerability does pose a threat to your estate, the course of action is no different to the triage process discussed above. The trick is to assess its risk in the same way and allow it to enter your priority list at the right place, regardless of how long the other items have been there. Organizations naturally tend to implement internal service level agreements (SLAs) for fixing vulnerabilities of a certain severity within pre-defined timelines. But because new vulnerabilities are released so often, and vary so much in priority and time to fix, it’s important to always focus on actual risk and reward, not an arbitrary SLA. Be dynamic and open to change: if a new vulnerability is discovered and needs urgent attention, it should take precedence over older issues that are less likely to be exploited. A large part of doing this successfully is becoming comfortable with a fluid approach to prioritization, rather than a rigid framework.
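
As a small illustration of the difference, the sketch below contrasts age- or SLA-driven ordering with risk-driven ordering when a high-risk issue lands. The items, dates, and scores are invented; the point is simply that the new arrival’s risk, not its age, determines where it sits in the queue.

```python
# Illustrative sketch: a new, high-risk finding should jump the queue based on
# risk, not wait its turn behind older, lower-risk items. All data is invented.
from datetime import date

queue = [
    {"name": "Month-old medium on internal tooling", "reported": date(2021, 7, 1), "risk": 6},
    {"name": "Three-week-old high, no public exploit", "reported": date(2021, 7, 12), "risk": 9},
]

# New arrival: internet-facing, proof-of-concept code public on day one.
queue.append({"name": "New RCE on exposed gateway, public PoC", "reported": date(2021, 8, 2), "risk": 20})

sla_order = sorted(queue, key=lambda v: v["reported"])             # oldest first
risk_order = sorted(queue, key=lambda v: v["risk"], reverse=True)  # riskiest first

print([v["name"] for v in sla_order])   # the new RCE waits at the back
print([v["name"] for v in risk_order])  # the new RCE goes straight to the top
```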

Digesting new vulnerabilities and understanding whether an issue impacts you is half the battle in maintaining perimeter security. But of course, in order to respond quickly to new and emerging threats, you need to be aware of them in the first place. Commercial threat intelligence feeds can be useful here, but a news aggregator or a good Twitter follow list will also go a long way. A common mistake organizations make is not creating time for this proactive OSINT work in the EASM process. Given that vulnerabilities are commonly exploited within 24 hours of a patch being released, being first to react to a new threat brings a significant advantage in the race to remediate faster than attackers can exploit.
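
As a simple illustration of that proactive monitoring, the sketch below polls an advisory or news feed and flags entries that mention technologies present in your estate. The feed URL and watchlist are placeholders; a commercial threat intelligence feed or a curated set of sources would slot into the same loop.

```python
# Illustrative sketch: flag feed entries that mention technologies you actually run.
# The feed URL and watchlist are placeholders, not real endpoints.
from typing import List, Set

import feedparser  # pip install feedparser

FEED_URL = "https://example.com/security-advisories.rss"    # placeholder feed
WATCHLIST: Set[str] = {"nginx", "examplecms", "examplevpn"}  # technologies from your inventory

def matching_entries(feed_url: str, watchlist: Set[str]) -> List[str]:
    """Return titles of feed entries that mention a technology on the watchlist."""
    feed = feedparser.parse(feed_url)
    hits = []
    for entry in feed.entries:
        text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
        if any(tech in text for tech in watchlist):
            hits.append(entry.get("title", "(untitled)"))
    return hits

for title in matching_entries(FEED_URL, WATCHLIST):
    print("Review:", title)
```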

So, what are the key things to remember?

Attack surface management can seem overwhelming, especially given the vast amount of data available, but the key is to use this data to your advantage, and start with the basics:

Keep a current and complete understanding of your perimeter, so you can protect it. Asset mapping, identification, and management are ongoing activities, so build a process that reflects this.

Develop a strategy that identifies and prioritizes the vulnerabilities that matter most. It may seem counterintuitive not to simply focus on all existing vulnerabilities, but your business is more protected when you focus time and effort on the issues most likely to cause damage. Be pragmatic about which vulnerabilities are most likely to be exploited and/or which are easiest to find.

Stay informed of new and emerging threats and respond accordingly. If you have a good inventory list, and can quickly and confidently say “this doesn’t impact us”, you’ll save a lot of time. Equally, if a high-profile, high-risk vulnerability with proof-of-concept code impacting your estate is all over Twitter, it’s often wise to push it to the top of your list. Use public data to your advantage, understand what’s popular, identify new vulnerabilities when they come out, and manage the risk accordingly.


¹ Emerging Technologies: Critical Insights for External Attack Surface Management, Gartner, published 19 March 2021, ID G00737807, by analysts Ruggero Contu, Elizabeth Kim, Mark Wah.


Published 08/2021

Owen Evans, Director