True Forensics Uncovered SE01 E04: Judgment Day

Reading time: 16 min

Published: 07/2021
Anssi Matti Helin

Incident Response Consultant

We all know from examples in history that it’s the bummock of the iceberg—the submerged part—that sinks ships. The tip (its hummock) is a warning, but it’s what you can’t see that poses the real threat.

Some cyber security incidents represent the small, perceptible layer of a much greater risk that persists whilst going either unnoticed or simply unaddressed. They are that tip of the iceberg, so to speak. But with deeper investigation, and the client’s trust, it is possible to get below the surface and reveal the murkier reality of why the organization was put in danger in the first place. Unspoken truths can surface, sometimes leading to the total transformation of a business. This is the tale of one such investigation, where we responded to one client's appeal to dive in deeper. It’s a call out to any leader who knows something isn’t working for their security at its core but feels powerless to do something about it.   

The case started like many others, escalated to us via our threat analysts. This normally means we get a wealth of evidence  up-front in the data collected by our tooling. Not this time. The attack detection agent’s roll-out had only just begun across the client’s environment when a ransomware attack struck.

The scene of the crime

The client was a mid-size B2B services provider that had merged with a larger group corporation during a recent series of acquisitions. The client had yet to integrate with the corporation's security operations center (SOC), so its security fell under the responsibility of its IT managed service provider (MSP). The business was evolving and growing; all in all, things were great. That was until a vast amount of data was encrypted.

We were called into a meeting with the organization's CEO, COO, and representatives from the MSP, who were the acting CTO and IT Operations Manager. Given the situation, and our distance from the client, the meeting was hosted remotely. I dialed in from my apartment in Helsinki, the rest of the team from theirs, and we listened as the CTO shared the story so far.

The ransomware had hit 48 hours earlier and spread quickly to several critical servers. The team spotted the indicators of compromise (IOCs) less than 12 hours after its deployment and blocked the malware from reaching the command and control (C2) server. They halted its spread, and yet had no idea where the attack had come from. Though the immediate threat had been neutralized, we didn't know how to stop a resurgence from the original source. We needed to find that hole and close it before there was a second successful attack. The CEO had many questions and was understandably concerned about this happening again. The team and I tried to unpack exactly what had taken place to enable the attacker's entry and piece together where this sat in the bigger picture, i.e., why had the attack taken place at all?

We decided the case would be handled remotely to keep the client's costs down and increase our speed, and the investigation began. We quizzed the CTO first about recent security assessments, asset inventories, remote access methods, known security issues, and so on. His answers were vague, which didn't altogether surprise us. In a situation like this, it's understandable that those responsible for an organization's security may feel their abilities are being called into question. Assigning blame is of no interest to us, however. It's never constructive. All we want is to find the best course of action to get everyone back on their feet and stop further disruption. Despite the uncertainty of the CTO's initial responses, it didn't take us long to realize that the organization's security posture was weak; the attacker would not have met much resistance during the initial stages of their attack.

The first lead

The CTO's recollections got particularly ambiguous when it came to the client's use of remote access. As seasoned practitioners, we can pinpoint attackers who have gone to great lengths to stay untraced, and as a result we notice the smallest pieces of evidence and follow them. The CTO's persistent vagueness raised our suspicions, so that was that: the client's remote access setup was where we would begin. We requested everything that was available around the remote desktop infrastructure and centralized logging server. It was time to get our hands dirty.

In any investigation, it's a beautiful thing to hear that the client has centralized logging. When critical logs from every workstation, server, and network device have been collected and consolidated in one place, we can catch up with the attacker faster. All their actions and movements on the network are "on paper", so to speak. Sadly, the attacker knew this too, and their first post-reconnaissance action was to encrypt all files on the central logging server. Inventive? No. But I had to hand it to them: they knew we would come for them and had thrown an obstacle in our path.

There was one positive aspect to this discovery. We now had hard evidence of the approximate period the attack began and what the attacker did. This was also before ransomware wiped Windows event logs as standard, so we had the chance to investigate those logs and trace them back to the attacker's entry point. It appeared they had logged on from a device whose name suggested it was a remote desktop server, and we dug into its logs. This was it. This was how they had achieved their foothold.
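
As an illustration of what that tracing step looks like in practice, here is a minimal sketch. It assumes the relevant Windows Security logs have been exported to CSV with hypothetical column names (TimeCreated, EventID, LogonType, AccountName, WorkstationName, IpAddress); it is not the tooling used in the case, only a way to show how remote interactive logons inside a reconstructed attack window can be filtered out.

```python
import csv
from collections import Counter
from datetime import datetime

# Hypothetical CSV export of the Windows Security log, with columns:
# TimeCreated, EventID, LogonType, AccountName, WorkstationName, IpAddress
LOG_EXPORT = "security_log_export.csv"

# Placeholder attack window, reconstructed from the encrypted logging server
WINDOW_START = datetime(2020, 1, 1, 0, 0)
WINDOW_END = datetime(2020, 1, 4, 0, 0)

sources = Counter()
with open(LOG_EXPORT, newline="") as f:
    for row in csv.DictReader(f):
        # 4624 = successful logon; LogonType 10 = RemoteInteractive (RDP)
        if row["EventID"] != "4624" or row["LogonType"] != "10":
            continue
        ts = datetime.fromisoformat(row["TimeCreated"])
        if WINDOW_START <= ts <= WINDOW_END:
            sources[(row["WorkstationName"], row["IpAddress"], row["AccountName"])] += 1

# The handful of source hosts seen inside the window is where the tracing starts
for (host, ip, account), count in sources.most_common(10):
    print(f"{count:>5}  {host:<20} {ip:<16} {account}")
```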

We could now prevent the attacker (and others) from reusing the same attack vector. But investigations don't just end like that. No way. You keep pulling at the strings to find out why an attack path was taken, who the attacker was, what they wanted, and if there are any other access routes that could allow them back in. Yes, we had found their entry point, but we had also found clues that, more widely, standard best practices for securing remote access were not being followed.

Cracks appear

With just a handful of exceptions, it's recommended that remote desktop be accessible only after logging into a VPN. That is, a single point of entry which can itself be hardened, audited, and monitored for suspicious activity. Via the VPN, and only via the VPN, can any user access resources. Quite simply, the client wasn't doing this, and their remote desktop infrastructure was wide open and exposed to the internet.
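
As a minimal illustration of what "only via the VPN" means in practice, the sketch below (using a hypothetical hostname) checks whether the RDP port answers from wherever it is run: from the open internet the connection should fail, and from inside the VPN it should succeed. It is a sanity check, not a substitute for a proper firewall audit.

```python
import socket

# Hypothetical addresses: the remote desktop host's public DNS name and the RDP port
RDP_HOST = "rds.example.com"
RDP_PORT = 3389

def rdp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run once from an ordinary internet connection (should print False) and once
# from inside the VPN (should print True). A True result from the open internet
# means RDP is exposed, which is exactly the situation described above.
print("RDP reachable from here:", rdp_reachable(RDP_HOST, RDP_PORT))
```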

The IT team had clearly "read the manual" (or started to), because they had implemented a remote desktop broker infrastructure via a threat management gateway. In this setup, a front-end server is placed in front of the remote desktop server (a terminal server) to broker the connections taking place. The client's gateway didn't appear to be doing much, though. We requested all possible evidence from this pivot point.

Though our initial analysis of the gateway had revealed little, further investigation led us to a torrent of logins from across the internet. Without any rate limiting, access limits by IP, or a VPN in front, users from anywhere and everywhere were trying to log in to the servers. We were seeing thousands of illegitimate login attempts every minute, which is mind-boggling when you think of what that amounts to in weeks and months. Plus, with so much traffic, event logs become practically useless. As I recall, the Windows security log on the threat management gateway had a total log retention time of 15 minutes, due to the sheer number of events coming through. I've seen some bad things in my time, and this was really bad. The organization was a walking target.
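
To put figures like that on paper, a first pass over an exported Security log can be as simple as the sketch below. It assumes a CSV export with hypothetical column names (TimeCreated, EventID, IpAddress) and simply surfaces the peak per-minute rate of failed logons and the noisiest source addresses.

```python
import csv
from collections import Counter
from datetime import datetime

# Hypothetical CSV export of the gateway's Security log with columns:
# TimeCreated, EventID, IpAddress
LOG_EXPORT = "gateway_security_log.csv"

attempts_per_minute = Counter()
attempts_per_ip = Counter()

with open(LOG_EXPORT, newline="") as f:
    for row in csv.DictReader(f):
        if row["EventID"] != "4625":      # 4625 = failed logon
            continue
        minute = datetime.fromisoformat(row["TimeCreated"]).strftime("%Y-%m-%d %H:%M")
        attempts_per_minute[minute] += 1
        attempts_per_ip[row["IpAddress"]] += 1

peak_minute, peak_count = attempts_per_minute.most_common(1)[0]
print(f"Peak failed logons in one minute: {peak_count} ({peak_minute})")
print("Noisiest source IPs:")
for ip, count in attempts_per_ip.most_common(5):
    print(f"  {count:>7}  {ip}")
```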

Thankfully, our EDR agent came to the rescue. Despite its absence on most critical infrastructure, including the threat management gateway, it was already active in some areas; we weren't flying completely blind. Tracing activity back from the attacker's log encryption to the remote desktop server, we saw that they had logged in with a user account. And after mapping out the steps of their lateral movement from that initial server, it appeared we might have been dealing with two attackers. One user had logged on and performed their reconnaissance via Cobalt Strike before using common network mapping tools. A few days passed before the ransomware was deployed. Sometimes attackers do lie in wait like this; however, there were indicators that didn't match between the two logons. This was a potential sign that initial access had been sold from one attacker to another, probably via a criminal marketplace. It's something we see used by organized criminal groups and opportunists alike: a shadow supply chain comprising the parties that have the capability, means, or opportunity (CMO) to execute a successful heist.

Still, as far as ransomware attacks go, the client's fast response meant that this one hadn't ended in disaster. Based on the investigation up to this point, we started offering remediations while focusing on the case's conclusion. They included adding a VPN appliance or server in front of their remote desktop server. Multi-factor authentication (MFA) on remote desktop use was also recommended as a quick win. (As a note for the reader, remote desktop services often harbor their own vulnerabilities, so this measure will not provide total security and should be treated as a minimum.)

No going back

Things seemed to be coming together. That was until the CEO called me at home one evening. In my experience, this tends not to be a great sign. Of course, I took the call and we spoke candidly. She seemed happy with the outcome of the investigation to date, though I sensed a “but” was coming.  

She told me this wasn't the first time an incident of this nature had occurred. It was not the worst, but it wasn't unique, and she feared it wouldn't be the last. A catalogue of failures was putting the organization at risk, not just operationally, but also by threatening the terms by which its acquisition had been agreed. Ergo, deficits in its IT setup were making it a liability to the acquirer. This could potentially lead to a retrospective reduction in its valuation and the price paid for its shares.

The previous few months had cost the client dearly in reactive response and remediation. Now, something critical came to light. The CEO was convinced that the IT MSP, which her organization was contractually locked into using, had been negligent in the discharge of its security responsibilities. And sadly, as she laid out the facts, it appeared that under the acting CTO's supervision, hazardous decisions were being made regularly. She had little power to alter this way of working due to the contract in place. She and the COO had presented their concerns to the MSP and asked it to correct the issues, but this breach was yet more evidence that nothing was changing.

Blame culture has no place in IR. No investigation has ever been solved, nor an incident closed, more quickly because "the blame game" got played. Incidents are rarely the fault of one person, and it takes a whole team, many teams in fact, to collaborate and come through strong. Unfortunately, in this case, the MSP had failed to provide reasonable protection for the client. Basic security principles had been overlooked, exposing the client's business. With no sign that the MSP had the desire or means to improve, the situation needed resolving from the outside. The client would, at some point, become the target of another successful attack if this pattern continued.

So, on that same call with the CEO, we agreed it was in the business's best interest for the investigation to be extended. Our role as IR provider was greater than simply concluding the existing course, handing over our report, and wishing them good luck. The team and I would dig further into the threat management gateway and get to the bottom of the MSP’s negligence. This would prove the material default of the agreement and the MSP's failure to correct it, thus enabling the client to terminate the contract and take back control of their security posture.  Now, any avoidance of issues was replaced with frankness. We pushed hard for clarity whenever we met with the CTO and IT Operations Manager. Tough conversations were had. And, at the CEO’s request, we obtained a full disk image of the threat management gateway. Once we pulled on this tiny bit of thread, the whole thing quickly started unravelling. 

The gateway was running Microsoft Forefront, which (some readers will remember) was effectively discontinued in 2015 after development ceased in 2012. Three years later, it was being used on the client’s network without mainstream support, and there was no evidence of extended support provision. This was reflected in the client’s asset inventory, in that there simply wasn’t one. We'd asked for access several times, only to hear nothing—which now made sense. There was no IT management policy, no asset lifecycle management, no patch monitoring, nothing. From the disk image, we could also see what the gateway was doing. As expected, it wasn’t much at all. It was certainly blocking the most obvious offenders—let’s say, 1000 remote desktop protocol (RDP) logins per minute from a single IP address. But mostly, it was quietly logging all attempts in a database that nobody had accessed in years. 
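
As an illustration only (the real Forefront log format differs), the sketch below shows the kind of one-query summary that turns such a forgotten database into evidence, assuming the connection log has been exported to a SQLite file with a hypothetical attempts table.

```python
import sqlite3

# Hypothetical SQLite export of the gateway's connection log, with a table
# attempts(timestamp TEXT, source_ip TEXT, action TEXT), where action is
# either 'allowed' or 'blocked'. The real Forefront log format will differ.
conn = sqlite3.connect("gateway_attempts.db")

query = """
SELECT substr(timestamp, 1, 10) AS day,
       action,
       COUNT(*) AS attempts
FROM attempts
GROUP BY day, action
ORDER BY day
"""

# A per-day summary of allowed vs. blocked attempts is enough to show, in one
# small table, how long the flood had been going on unaddressed.
for day, action, attempts in conn.execute(query):
    print(f"{day}  {action:<8} {attempts}")

conn.close()
```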

This was all the evidence we needed: a remote access setup without a VPN, an ineffective threat gateway, thousands upon thousands of undocumented login attempts happening daily, a piece of antiquated security software, and no asset inventory. The list was incriminating. Years of neglect had led to an almost total lack of readiness. While our findings were harrowing for the CEO to hear, they were exactly what she needed. Once presented with the overwhelming evidence, the MSP quickly agreed to terminate the contract. The client could start looking forward. A plan (that had been discouraged by the MSP) to migrate to Azure on an entirely new domain and build up a secure network from scratch was brought to life, with a robust asset lifecycle policy and plentiful remote desktop controls.

Learnings

This isn’t the story of one individual’s misjudgment, nor the failings of a provider. It would be unwise to focus on either as the moral here. Instead, the message is that technical debt is real. Security managers and C-suite management can’t rest on their laurels, do something properly once, then leave it be. In this instance, the client’s infrastructure was most likely in a healthy state years before. But perpetual neglect, and advances in attacker tradecraft, had left its defenses obsolete. IT asset management is so often the Achilles' heel of organizations that otherwise have a strong posture. You can’t always be reactive—you must be ready too.  

It also shows how far forensics can go as a tool to improve your business. For organizations whose strong security postures are the product of years, maybe decades, of work, it can add to existing strength. In cases like this, where the problems are foundational, investigations can be used to rip the plaster off and start again with a new, resilience-focused approach. 
