How do you detect when your detection fails?
Attack detection is all the rage, largely because of a growing realization: trying to prevent every attack isn’t the answer. We know that we cannot stop all attacks.
Building and maintaining an effective detection capability is a significant undertaking for a business of any size. Not only is the technical challenge vast, with a range of attack types and systems to defend, but you also need highly skilled people and effective processes.
In an ideal world, the work of a Security Operations Center (SOC) would be exciting all the time. (I’ll use SOC as the catch-all term for the team responsible for detecting threats.) The detection capability that you build would continue to function as your environment changes. It would remain up-to-date and consistent across offices and regions, and your users wouldn’t click on things that say, “click here”. You could focus your time on fending off the most advanced attackers.
Unfortunately, we don’t live in an ideal world, and developing a consistent detection capability is a challenge for SOCs. Why?
- The environment(s) you need to monitor change a lot.
- Threats evolve all the time.
- Analysts who have a deep understanding of offensive security and a love for attack detection are notoriously hard to attract and retain.
- Detection capabilities operating at scale are not always consistent. This is particularly true for larger organizations with a global footprint, which often see regional variations in what they can and can’t detect.
Yet despite the obvious challenges, investing in your detection capability will pay dividends. According to Verizon, a quarter of breaches remain undetected for a month. Once an attack has been detected, your SOC has an opportunity to respond; closing the gap between detection and response reduces the risk posed to the organization.
How can SOCs achieve this? Consider the following goals:
- Increasing visibility of your estate—What data do you need to detect a given attacker action? Start with a hypothesis and be focused and specific about which logs you collect.
- Putting your detection data in context—Why might an attacker execute a given action within the context of your environment(s)? Not all attacker actions are malicious when considered in isolation, which means avoiding the temptation to build an alert for everything you see in MITRE ATT&CK.
- Testing your changing detection capability—How does your detection capability perform as your environment changes? Because environments and threats change all the time, when you build capability you need to test it continuously.
“But I don’t have time to continually test everything.”
All environments undergo constant change. The larger the environment, the more changes there will be. These changes could relate to the network, the infrastructure, or even security controls, and they all have the potential to impact detection capability. For example, a log source could stop ingesting, or a network change could isolate key hosts. Even in a fantasy world where threats don’t evolve, this presents a window of opportunity for attackers. The goal of regression testing is to ensure that the detection capability you’ve built in the past continues to function consistently. This is where tooling comes in.
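To make the first failure mode concrete, a simple heartbeat check can catch a log source that has silently stopped ingesting. This is a minimal sketch: the source names, timestamps, and 24-hour threshold are all hypothetical, and in practice the “last seen” data would come from your SIEM’s ingestion metadata rather than a hard-coded dictionary.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical snapshot of when each log source last delivered events.
# In a real deployment, pull this from your SIEM's ingestion metadata.
LAST_INGEST = {
    "dc01-security-log": datetime.now(timezone.utc) - timedelta(minutes=5),
    "proxy-eu-west": datetime.now(timezone.utc) - timedelta(hours=26),
    "edr-cloud-export": datetime.now(timezone.utc) - timedelta(minutes=12),
}

# Flag any source that has been quiet longer than this (illustrative value).
MAX_SILENCE = timedelta(hours=24)

def stale_sources(last_ingest, max_silence, now=None):
    """Return the names of log sources that appear to have stopped ingesting."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        name for name, seen in last_ingest.items()
        if now - seen > max_silence
    )

if __name__ == "__main__":
    for name in stale_sources(LAST_INGEST, MAX_SILENCE):
        print(f"WARNING: no events from {name} for over {MAX_SILENCE}")
```

Run on a schedule, a check like this turns a silent detection gap into an alert of its own.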
Using attack simulation tooling, you can replicate a wide variety of attacker actions to test what you are capable of detecting and whether your logs and alerts are effective. Now you can continuously monitor and maintain your detection capability as you build it. This can be executed in multiple locations across your estate to understand what you can detect and where in your environment you can (or can’t) detect it.
Such tooling exists and, in my opinion, every organization stands to benefit from using it. During Purple Teaming exercises, for example, it’s common to identify a detection capability that doesn't operate as intended, either through a fault in its initial implementation or due to other environmental changes. Testing your detection capability with attack simulation tooling can uncover these blind spots on a continual basis, with less effort than doing so manually.
Testing your detection capability manually requires collaboration between SOCs and red teams. Automating this takes the pressure off both teams. SOC analysts can focus on the more exciting work of building capability, rather than manual maintenance, while red teams can focus on building and simulating realistic, complex attacks.
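The automated workflow described above can be sketched as a small regression harness: replay a set of attacker actions, then check whether the expected alerts fired. Everything here is a hypothetical stand-in—`simulate` and `query_alerts` represent your attack simulation tool and SIEM API, and the technique IDs and alert names are illustrative, not prescriptive.

```python
# Expected mapping from simulated technique to the alert it should trigger.
# These IDs and alert names are illustrative placeholders.
EXPECTED_DETECTIONS = {
    "T1059.001": "PowerShell execution alert",
    "T1003.001": "LSASS memory access alert",
    "T1547.001": "Registry run-key persistence alert",
}

def run_regression(simulate, query_alerts):
    """Simulate each technique and report whether its expected alert fired."""
    results = {}
    for technique, alert_name in EXPECTED_DETECTIONS.items():
        simulate(technique)              # replay a benign version of the action
        fired = query_alerts(technique)  # fetch the alerts it produced
        results[technique] = alert_name in fired
    return results

# --- stubbed tooling for demonstration only ---
def fake_simulate(technique):
    pass  # a real tool would execute the technique safely on a test host

def fake_query_alerts(technique):
    # Pretend the persistence detection has silently regressed.
    observed = {
        "T1059.001": ["PowerShell execution alert"],
        "T1003.001": ["LSASS memory access alert"],
        "T1547.001": [],
    }
    return observed[technique]

if __name__ == "__main__":
    for technique, ok in run_regression(fake_simulate, fake_query_alerts).items():
        print(f"{technique}: {'OK' if ok else 'REGRESSION'}")
```

The value is in the loop, not the stubs: scheduled runs of a harness like this surface regressions the moment an environmental change breaks a detection, rather than months later during an exercise.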
“Great! So, a new tool will solve all my problems?”
Alas, no. Tooling alone won’t carry you on the journey to creating a high-performance SOC. If set up and used correctly, attack simulation tooling can provide the data needed for SOCs to improve, but it’s how you use this data to drive long-term improvements that generates real value.
For instance, mapping your control coverage to MITRE ATT&CK, and overlaying that with threat intelligence, will give you significant insight into your coverage. Add in the technical validation from attack simulation tooling and you can put this theory into practice, gathering empirical data that confirms (or denies) how you perceive your capability, and thus your risk mitigation.
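The overlay described above reduces to simple set arithmetic once each layer is expressed as a set of technique IDs. The sets below are hypothetical examples; in practice they would be generated from your detection inventory, your simulation results, and your threat intelligence feed.

```python
# Hypothetical sets of MITRE ATT&CK technique IDs (illustrative values).
# What simulation has empirically validated you can detect:
validated_coverage = {"T1059", "T1566", "T1003", "T1021"}
# What your detection inventory claims you cover:
claimed_coverage = {"T1059", "T1566", "T1003", "T1021", "T1547", "T1071"}
# What threat intelligence says relevant actors actually use:
threat_intel_prio = {"T1566", "T1547", "T1071", "T1486"}

# Detections you believed you had, but simulation could not validate.
unvalidated = claimed_coverage - validated_coverage

# Prioritized techniques with no validated detection: the real risk gaps.
priority_gaps = threat_intel_prio - validated_coverage

print("Unvalidated detections:", sorted(unvalidated))
print("Priority gaps:", sorted(priority_gaps))
```

The interesting output is `priority_gaps`: it ranks remediation work by what attackers are actually doing, rather than by raw matrix coverage.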
Start by thinking like an attacker and looking at the whole picture. Consider the following key questions and work on sub-strategies when answering them:
- What systems and assets will an attacker target? What routes will they take to reach them?
- If we were to be phished, which kinds of payloads could reach an inbox, and of those, which could be executed?
- How will the attacker establish Command and Control? What persistence techniques are they likely to use?
- Out of the potential payloads or attack techniques, how many can we detect with usable logs and alerts for the SOC?
Being able to answer questions like these is the crux of managing the risks associated with a cyber attack. It’s about understanding the battlefield and what you have in your armory. The intricacies of your detection capability and your preventative controls should be a surprise to your attacker, not to you. They’re your home-field advantage.
The challenge, as I have mentioned, is that the picture is continuously changing. By using tooling to highlight inevitable fluctuations in detection capability, and continuously contextualizing this with threat intelligence, you can successfully prioritize actions and remediations based on real risks.
Continued investment is necessary both for maintaining the detection capability you already have and for progressively improving it. To ensure investment, it helps to provide evidence of the dynamic nature of the problem. In addition, continuing to provide such data and acting upon it will clearly demonstrate the Return on Investment (ROI) over time.
So, with some hard work, more data, and a clear focus, you can serve up a big slice of ROI pie whilst still focusing all your SOC efforts on doing the interesting work: detecting better and securing your business.