Style over substance: Why tech, not culture, is key to DevSecOps security
Nearly half (48%) of all organizations say that digital transformation is driving their DevOps teams to deploy faster. Rapid development needs security to keep pace, and there is increasing demand for development infrastructures to be tailor-built around the most likely risks.
With DevSecOps comes a wholly integrated approach to security within DevOps. It has established itself firmly as a way of thinking in organizations large and small, where software engineers, operations managers, and security specialists collaborate to build products quickly and securely.
As the name suggests, DevSecOps exists to make security a fundamental part of the DevOps process. Done correctly, this means security is built into every aspect, rather than being layered over development as an afterthought. For example, where observability is architected into a system, it automatically serves as a detection tool – its very nature concerns the visibility of the data flowing through the environment.
Without DevSecOps, organizations must manually align development and operations with security. This leads to bottlenecks in the development pipeline, or indeed increased risk when a security protocol is bypassed or oversimplified.
Our consultants have seen an encouraging level of DevSecOps adoption. However, whilst many of these approaches provide a strong foundation, they frequently lack processes for (securely) implementing new tech or managing the risk attributes of existing tech. DevSecOps’s popularity as a culture has come at the cost of its technical facets.
Formed largely during DevSecOps’s inception, this culture-centric mindset positions it primarily as an approach to modern development practices and processes. Client engagements have shown us the potential detriment of such a mindset – one that overlooks securely implemented tools and technology. In short, a wholly cultural approach leaves environments critical to business continuity vulnerable to attack.
Over time – with handovers, development teams owning processes without formal decision tracking, or simply because nobody knew what existed on day one – teams lose oversight of the tech underpinning their development. The streamlining that made them so much quicker starts to obscure the elements crucial to every action and product. Understanding how systems work, individually and together, and how secure each is, is the first step towards a secure ecosystem. Without a clear view of your technical framework, security becomes harder to achieve. What, then, should be the approach?
Creating a baseline DevSecOps ecosystem
Much can go wrong if a technological component of the DevSecOps ecosystem is vulnerable or has been compromised. To avoid this, teams benefit from uncovering the existing and potential vulnerabilities of the environment they develop in.
To reduce the likelihood of security issues, a robust DevSecOps model employs Continuous Integration/Continuous Deployment (CI/CD) delivery practices at its core, producing software reliably and safely at the required rate. These practices can then be integrated into a baseline model (Fig. 1), as below:
Fig 1. Baseline DevSecOps model
Using this model as the basis for assessments will aid discovery within your own DevSecOps environment – specifically, discovery of the risks associated with digital tools and systems, and their potential impact. As an abstraction of a DevSecOps environment, the structure may not match your own exactly, but it can still be adopted by working to the principles below.
The model is split into integration blocks, each containing its own set of assets. Every block is a major component within, and forming, the DevSecOps environment – for example, your code repository or staging environment. Dividing the model into blocks that integrate with one another provides a navigable, visual representation of your environment. As DevSecOps is not a one-size-fits-all solution, assets are not limited to a single block and may span several.
Just as a cultural mindset suffers by neglecting technology, your model should not neglect people and process. Team responsibilities within the environment should be mapped out just like the digital assets. Teams, or team members, may be solely responsible for a single block in the environment, or they may be responsible for multiple blocks. This information is critical to establishing effective trust boundaries.
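As a sketch, this mapping of blocks, assets, owning teams, and the trust boundaries between them can be captured in a few lines of code. The Python below is purely illustrative – every block, tool, and team name is a hypothetical example, not a prescription:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One integration block in the baseline model (e.g. CI, code repository)."""
    name: str
    assets: set = field(default_factory=set)   # tools/systems within the block
    owners: set = field(default_factory=set)   # teams responsible for the block

# Hypothetical environment: names are illustrative only.
blocks = {
    "repo": Block("Code repository", {"GitLab"}, {"platform-team"}),
    "ci": Block("Continuous Integration", {"Jenkins", "build agents"},
                {"platform-team", "ops"}),
    "staging": Block("Staging environment", {"AWS account (staging)"}, {"ops"}),
}

def trust_boundaries(blocks):
    """A trust boundary exists wherever two blocks have different owning teams."""
    names = list(blocks)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if blocks[a].owners != blocks[b].owners
    ]
```

Even a simple structure like this makes ownership explicit: any integration crossing a boundary returned by `trust_boundaries` warrants closer scrutiny than one managed end-to-end by a single team.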
The CI block used in the example below illustrates the abstract concept of the modelling process, with tools that may form part of the environment.
Fig 2. Modelling the integrations between the CI block and other tools and systems
Mapping integrations in your DevSecOps baseline model
Once you have identified the tools and systems that exist in your organization’s DevSecOps environment, you can then process map each block by asking the following questions:
- What is used to develop the asset’s codebase?
- Where is the codebase stored, and what is used to store it?
- Do any third-party dependencies exist, and with whom?
- What does the code review process consist of?
- Who are the team members involved?
- What is used to review the code?
- What is used to build the asset?
- What integrates into this building process?
- Is any static or dynamic analysis carried out on this codebase during the building process?
- What tests (functional or requirements-based) are run during the process?
- Does the build environment use agents in the building process?
- What is the deployment process of the asset?
- Are the artefacts generated during the build process stored somewhere, pre- or post-deployment?
- Does the environment use a secrets management or key management solution (also known, respectively, as a secret vault or key vault)?
- Where is the asset deployed to? The labelling may be unique to your organization’s naming convention.
- Does any monitoring take place on the asset post-deployment?
- Does any monitoring take place during the build and/or deployment phases?
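The questions above can be condensed into a per-block checklist and tracked programmatically, which makes unmapped areas immediately visible. A minimal Python sketch, where the checklist item names and example answers are assumptions for illustration:

```python
# Per-block process-mapping checklist, condensed from the questions above.
CHECKLIST = [
    "codebase_tooling",           # what is used to develop the codebase?
    "codebase_storage",           # where, and in what, is it stored?
    "third_party_dependencies",
    "code_review_process",
    "build_tooling",
    "static_dynamic_analysis",
    "secrets_management",
    "deployment_target",
    "post_deployment_monitoring",
]

def unanswered(block_answers: dict) -> list:
    """Return the checklist items with no recorded answer for a block."""
    return [item for item in CHECKLIST if not block_answers.get(item)]

# Example: a partially mapped CI block (illustrative values).
ci_block = {
    "codebase_tooling": "VS Code",
    "codebase_storage": "GitLab",
    "build_tooling": "Jenkins",
}
gaps = unanswered(ci_block)  # the items still to be mapped
</n```

Running this across every block turns the mapping exercise into a measurable task: the process is complete only when `unanswered` returns an empty list for each block.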
DevSecOps is bespoke and multifaceted by design. Therefore, it can be difficult to understand the sequence of actions required to configure a baseline model. It’s within this confusion that security gaps and misconfigurations often occur. As such, understanding what your development environment consists of – and, crucially, having a system to guide the secure implementation of tools and systems – disentangles the process and quickens your and your teams’ understanding.
When DevSecOps goes wrong
Engineering pipelines, systems, and code are pathways to an organization’s critical assets and, as such, become critical assets themselves. For any organization that develops a digital asset, a breach within the DevSecOps environment could lead either to the full compromise of the asset, or to a pivot point granting access to the internal environment.
The examples below demonstrate how it is possible for attackers to pivot through CI/CD and DevSecOps systems. They reflect our own red teaming and Attack Path Mapping (APM) activities with clients.
ATTACK SCENARIO 1
The client’s DevSecOps environment was hosted on AWS, with GitLab used as the repository and Jenkins as the CI tooling. This pipeline was used to develop a platform containing a custom cryptographic transport layer, which encrypted messages to and from a client.
The encryption key for this transport layer was stored within the secrets management system used by the pipeline. AWS Key Management Service (KMS) was the storage solution for the keys.
The client’s platform used AWS Lambdas to interact with AWS KMS for message encryption and decryption.
Goal: extract the keys stored in AWS KMS, or plant a backdoor using them. In a real-world situation, this would allow an attacker to decrypt messages at will.
Walkthrough (Starting at an assumed foothold position on a developer's machine)
- Browse the code repository to observe the structure of the environment. The compromised developer account did not have access to the sensitive repository containing the cryptographic functionality. However, it was possible for the user to fork a project connected to the Jenkins build environment. Misconfiguration in this environment resulted in a build on every merge request. This highlighted an integration misconfiguration between the tools, rather than the tools themselves, which were deemed to be secure individually.
- Fork the connected project. To use Jenkins, one must provide it with a set of instructions in the form of a build script defining how to build the code. This build script allows users to execute commands on the build agent. By placing a backdoor inside the script, the red team was able to gain access to the Jenkins build agent. A merge request was created, and the script was executed.
- Enumerate available assets and objects. Positioned on the build agent, the team could easily enumerate what was available. The visibility of other project builds revealed that this was a shared build environment. In a real attack, this would grant an attacker access to projects not available to the developer.
- Wait for the target. The goal of the engagement was to gain access to the sensitive key material, meaning it was a simple game of waiting until the correct project was built. Once this occurred, the team had access to the source code of the sensitive functionality, and could replace it with their own malicious code.
- Extract the key material. The team replaced the legitimate functionality with one that could extract the key from KMS and send it to the attacker. The goal of the engagement had been met.
Fig 3. Attack scenario exploiting an insecure code repository structure and CI tooling misconfigurations
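One defensive takeaway from this scenario is to audit who can use key material before an attacker does. The sketch below inspects a KMS key policy document – as might be fetched with boto3’s `kms.get_key_policy` and JSON-decoded – for over-broad access; the policy shown and its `Sid` values are illustrative, not taken from the engagement:

```python
def risky_statements(policy: dict) -> list:
    """Flag Allow statements granting broad use of a key,
    e.g. a wildcard principal or wildcard actions."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        aws_principal = principal.get("AWS") if isinstance(principal, dict) else principal
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        wildcard_principal = aws_principal == "*" or (
            isinstance(aws_principal, list) and "*" in aws_principal)
        wildcard_action = any(a in ("kms:*", "*") for a in actions)
        if wildcard_principal or wildcard_action:
            findings.append(stmt.get("Sid", "<no Sid>"))
    return findings

# Illustrative policy: one tightly scoped statement, one over-broad one.
policy = {
    "Statement": [
        {"Sid": "AdminOnly", "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::111122223333:role/KeyAdmin"},
         "Action": ["kms:Decrypt"], "Resource": "*"},
        {"Sid": "TooBroad", "Effect": "Allow",
         "Principal": {"AWS": "*"},
         "Action": ["kms:Decrypt"], "Resource": "*"},
    ]
}
```

A check like this would not have stopped the shared-build-agent pivot on its own, but it narrows what a foothold on the pipeline can reach.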
ATTACK SCENARIO 2
The organization, a large financial services provider, hosted its build environment on-premises. Due to its size, teams working on the environment were distinctly segregated, with a dedicated Ops team managing it in its entirety.
Separate development teams existed to build different application types. Various build environments also existed, comprising unique tooling sets. Some of the teams still shared a build environment, however, consisting of the Atlassian tool stack. As a result, the code repository was a BitBucket instance, and CI tooling was delivered through the Bamboo CI server. The BitBucket instance had anonymous read-only access enabled on its repositories.
Due to the shared building environment, different teams had access to the build queue on the Bamboo environment. Some would often cancel other teams’ build jobs to prioritize their own. In response, one team had employed their own local build agent. This provided them with full control, but consequently also made them a target. They did not perform the necessary maintenance, and the build agent machine was left outdated – placed on a desk next to the developers’ workspace. This, of course, left the machine open to access by anyone in the office.
The same team did not make use of Multi-Factor Authentication (MFA) prior to a release being pushed from the staging environment to the production environment.
The remainder of the environment did not play a role in the attack scenario and will thus remain abstracted.
The main objective was to alter the code in the financial application to embed a keylogger within the codebase. In real terms, this would allow an attacker to harvest credentials from the entire userbase. Although this scenario was limited to the use of a keylogger, any other malicious code (e.g. ransomware) could be used to achieve the same goal. The attack scenario was modelled on the NotPetya ransomware attacks.
- Discover the target application. The anonymous read on the repository allowed the team to find the critical asset. An injection point for the key logger was identified.
- Compromise the build agent. Because the build agent did not meet the same security standards as the rest of the environment, the device was compromised quickly. This could have been achieved by other methods, such as exploiting services on the outdated machine, or planting a rogue HID device. In this case, the developers were not following the organization’s stringent policies, and the build agent had been configured without a password.
- Implant the malicious code. Using credentials extracted from the build agent, the team accessed the Bamboo master and initiated the build process. Because the downstream systems were automated, initiating a build in the specific branch meant the code only needed to be compromised at the build stage.
- Compromise the production application. Once the malicious code was planted, the team simply waited for the build artefact to be pushed to the production server. Their code was then activated, allowing them to harvest credentials from the application’s entire active userbase.
Fig 4. Attack scenario exploiting an ineffectively-managed build agent and no MFA on deployments
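The failures in this scenario – no password, no patching, no MFA, no physical security – are exactly the kind of per-agent hygiene that can be codified and checked. A minimal sketch, where the policy names, field names, and thresholds are assumptions rather than any formal standard:

```python
# Illustrative baseline policies for a self-managed build agent.
POLICIES = {
    "has_password": lambda a: a.get("password_set", False),
    "patched_recently": lambda a: a.get("days_since_patch", 9999) <= 30,
    "mfa_on_release": lambda a: a.get("mfa_on_release", False),
    "physically_secured": lambda a: a.get("physically_secured", False),
}

def compliance_findings(agent: dict) -> list:
    """Return the name of every policy the agent fails."""
    return [name for name, check in POLICIES.items() if not check(agent)]

# The local build agent from the scenario above, roughly as found.
rogue_agent = {
    "password_set": False,
    "days_since_patch": 400,
    "mfa_on_release": False,
    "physically_secured": False,
}
```

Run against the rogue agent, every policy fails – which is the point: a team-owned agent should be held to the same baseline as the centrally managed environment, and a check like this makes deviations visible before an attacker finds them.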
Improving the DevSecOps model
Before an attacker has even begun their assault on the DevSecOps ecosystem, they will assess the environment for its weaknesses. The same assessment can be carried out by your team at the project design phase, making security an intentional step in response to the risk of a real attack and the techniques likely to be used.
As established earlier in this article, most organizations with large development environments have little oversight of their constituent parts. Furthermore, multiple unique development environments complicate the implementation of security measures.
When an organization understands the makeup of their DevSecOps environment, they can more effectively manage the technology in use, and secure that environment. It’s in this discovery process that the baseline model outlined previously comes into play. By mapping each component to its relevant block, it is easier to visualize how they integrate. These integrations are represented by the connecting arrows, each of which is unique: two different toolsets rarely fulfil the same purpose or offer the same functionality.
Assessing these integrations is just as important as assessing the tools themselves, as the previous attack scenarios demonstrate; the tools were not vulnerable by design, rather risks were created by their integration with the environment. This is increasingly the case as organizations move to DevSecOps in the cloud.
To make the process of mapping the environment more effective, and to shift this security process even further to the left, it needs to be accompanied by threat modelling exercises. (To shift security “to the left” is to implement security measures at the earliest possible point in the development lifecycle.) This is vital to the security of any digital asset, environment, and architectural design process. Threat modelling can also be used post-design, but the cost of fixing risks discovered at that stage is likely to be higher.
Targeted threat modelling
Targeted Threat Modelling (TTM) follows the same principles as that of a traditional threat modelling exercise, but focuses on how people, process, and technologies interface with one another. It is designed specifically for the development environment.
To understand how TTM works, and why it is an effective security solution in this context, one need only visualize the vast number of digital assets that exist in such an environment. Traditional security assessments are unsuitable for exactly this reason: deployed within a DevSecOps environment, they produce a list of hostnames or IP addresses to be scanned. Normally, once a tester sees that non-default passwords have been used, and that the operating system and installed packages (including DevOps tooling) are up to date, the “all clear” is given. Traditional testing lacks the necessary holistic oversight; by assessing components in isolation, the security implications of their integrations are lost.
Using TTM in your organization
A targeted approach to threat modelling can increase the effectiveness of security in a dedicated and bespoke environment such as DevSecOps. By assessing integrations between people, process, and technology, misalignments and misconfigurations can be highlighted. Coupling this approach with the visual representation of the environment gives stakeholders holistic oversight, in which associated risk can be identified and attributed to attack paths. Prioritizing by risk also makes it easier to spot where a security control would merely hamper the process with superfluous layers of security. Employing techniques such as TTM thus aids the process of shifting security to the left.
Fig 5. Including TTM in the security framework for DevSecOps
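The idea of attributing risk to attack paths can itself be sketched in code: treat the modelled integrations as a graph, score each edge, and enumerate routes from an assumed foothold to a critical asset. Everything below – the block names, edge set, and the 0–1 likelihood-style scores – is a hypothetical example, not a scoring methodology:

```python
# A toy integration graph: edges are integrations between blocks,
# weighted by an assumed likelihood-style risk score (0-1).
EDGES = {
    "developer-machine": [("code-repo", 0.6)],
    "code-repo": [("ci-server", 0.7)],
    "ci-server": [("build-agent", 0.8), ("artefact-store", 0.4)],
    "build-agent": [("secrets-store", 0.9)],
    "artefact-store": [("production", 0.5)],
    "secrets-store": [],
    "production": [],
}

def attack_paths(start, target, path=None, score=1.0):
    """Enumerate (path, combined score) routes from a foothold to a target."""
    path = (path or []) + [start]
    if start == target:
        return [(path, score)]
    results = []
    for nxt, edge_score in EDGES.get(start, []):
        if nxt not in path:  # avoid revisiting a block (no cycles)
            results += attack_paths(nxt, target, path, score * edge_score)
    return results

# Rank paths to the secrets store by combined risk, highest first.
ranked = sorted(attack_paths("developer-machine", "secrets-store"),
                key=lambda p: p[1], reverse=True)
```

Ranking paths this way mirrors the TTM output: the highest-scoring route is where a control pays for itself, while a low-scoring one may be exactly the place where an extra control would only add superfluous friction.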
Summary and conclusion of outcomes
DevOps, at its core, is founded on the speed-to-market conundrum facing any organization with a development function. When adding the Sec into DevOps, processes and technology should not be clamped down to hinder progress; they should be finely tuned to provide maximum security whilst allowing development at the required speed.
The approaches discussed in this article facilitate such an outcome, and have proven effective in engagements with clients. Due to their flexibility and scalability, the baseline model and TTM are approaches that can be adopted by any organization. Deployed properly, they deliver actionable results that a traditional security assessment simply cannot.