Using SAFe® to align cyber security and executive goals in an agile setting

Antti Vähä-Sipilä, Principal Security Consultant and Heli Syväoja, Business Process Development Lead
June, 2020

Reading time: 19 min

The Scaled Agile Framework (SAFe®) has become a common way for large organizations to run agile development at scale, but its treatment of security leaves practitioners with a problem: security tends to be framed as a non-functional requirement, competing (often unsuccessfully) with feature work for developer time. What organizations need is a way to make security and privacy work visible, prioritizable, and evidenced within the development process itself. This article shows how threat modelling, triage, and ticketed security work fit into SAFe's epic and feature refinement to achieve exactly that.

Security by Design

Currently in version 5.0, SAFe interprets security as a non-functional requirement [1], alongside other "qualities" such as maintainability and performance. Although non-functional requirements are the traditional means of addressing security in formal software development models, they are often difficult to meet, simply because time is never explicitly set aside for the work involved.

The reasons behind this are simple. Executive goals and security are not obvious bedfellows: the near-term profit strategy favoured by the former rarely aligns with security’s battle against hypothetical losses in a more distant future. In an agile setting, the two must be aligned for either to work.

The questions that need answering to help overcome this are: 

  • How can organizations determine a risk-based level of security, suitable for agile development? 
  • How can that be assessed? 
  • How can the work be evidenced so that it can demonstrate compliance with regulations that require it, notably the EU General Data Protection Regulation (GDPR)? 

This article will apply security and privacy practices to SAFe to show how a large-scale agile process can produce more secure software. These ideas can generally be applied to any agile product management setup.

Threat modelling as a solution

Past attempts to improve software security have often been built around acceptance gates: long checklists that must be fulfilled during certain phases of development. Much security assurance has also been based on manual security testing, rushed in before release.

The consensus today is that security should be shifted left: built in by design, proactively implemented, and accounted for from software conception. This is driven both by compliance (GDPR necessitates data protection by design and by default) and by cost; it is orders of magnitude cheaper to fix broken architecture while it is still on the whiteboard rather than cast in code.

Threat modelling is a means to achieve the above, and SAFe references it as a core security activity in its architectural and development processes. Threat modelling can be understood as a form of risk analysis and design review. Various methodologies can be applied depending on the needs of an organization and its development processes. Although security experts can simply perform freeform brainstorming, those with less experience benefit from more structure. Examples include weakness taxonomies (like Microsoft STRIDE), multi-stage approaches (such as PASTA), and checklists focused on a specific piece of functionality.
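
As a sketch of the checklist-style approach, a short, functionality-specific list can give a less experienced team its starting structure. The example below is hypothetical (a login form, with questions we invented for illustration); a real checklist would be written for the team's actual functionality.

```python
# An illustrative, functionality-specific threat modelling checklist.
# These questions are examples of our own, not a standard.
LOGIN_FORM_CHECKLIST = [
    "Can login attempts be brute-forced (no rate limiting or lockout)?",
    "Are credentials ever logged, cached, or sent over plaintext channels?",
    "Do error messages reveal whether a username exists?",
    "Can the password reset flow be abused to enumerate or hijack accounts?",
    "Are session tokens invalidated on logout and password change?",
]
```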

Threat modelling deployed within customer journeys and user experience flows typically uses a more freeform, creative process, resulting in "misuse cases" and "attacker stories": risks to the business.

Threat modelling of a technical design and its data flows leads to a more mechanical approach, resulting in specific design changes and technical security controls.

Developers and other DevOps roles (particularly site reliability engineers (SREs) and test engineers) usually have a well-developed sense of security risk and can thus form the backbone of a team's threat modelling capability. When threat modelling is not implemented, the reasons tend to have less to do with a skills shortage and more to do with managing resources.

Perceived barriers to threat modelling: 

  • Time. Threat modelling takes time, and time is the most expensive commodity in a software development team. 
  • Prioritization. Security and privacy are only two of the considerations developers must juggle, alongside architectural decisions and technical debt, performance, operational cost optimization, service design, and bug fixing. 

Thankfully, there is an elegant way to solve both challenges.  

Software development teams manage their work via tickets (or backlog items). Once a task is ticketed, it becomes a visible and explicit need that can be balanced and prioritized against all other ticketed work.

Security work has not traditionally been ticketed. Rather, it has remained a collection of security requirements and guidelines, general acceptance criteria, or indeed, a collection of vague non-functional constraints. This becomes an issue when teams face tight deadlines and feature pressure: although the guidelines existed, the explicit time allocation did not.

By making threat modelling (and all other non-automated security work) visible as development tickets, time allocation has a chance to follow. The analyses and proposed mitigations from the activity become tickets themselves. Security consequently rises in priority, taking a seat at the table when business priorities are discussed.
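
To make the idea concrete, here is a minimal sketch of security work living on the same backlog, and the same priority scale, as feature work. This is our own illustration: the ticket fields, names, and priorities are invented, and a real tracker such as Jira would model them differently.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class WorkType(Enum):
    FEATURE = "feature"                    # customer-visible functionality
    ENABLER = "enabler"                    # e.g. test automation, infrastructure
    THREAT_MODELLING = "threat modelling"  # the analysis itself, made visible
    MITIGATION = "mitigation"              # a control proposed by a threat model


@dataclass
class Ticket:
    title: str
    work_type: WorkType
    business_priority: int        # shared scale with all other ticketed work
    origin: Optional[str] = None  # e.g. the threat model that produced it


backlog = [
    Ticket("Implement checkout flow", WorkType.FEATURE, business_priority=1),
    # The analysis itself is a ticket, so its time allocation is explicit:
    Ticket("Threat model the checkout flow", WorkType.THREAT_MODELLING,
           business_priority=1),
]

# A finding from the session becomes an ordinary, prioritizable ticket:
backlog.append(Ticket("Rate-limit payment attempts", WorkType.MITIGATION,
                      business_priority=2,
                      origin="Checkout flow threat model"))

# Security work is now ranked with everything else, not kept in a side list.
backlog.sort(key=lambda t: t.business_priority)
```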

Conducting threat modelling in SAFe

For organizations using SAFe, threat modelling can be introduced at two separate points in the development cycle: epic refinement (in SAFe, the portfolio level) [2] and feature refinement (previously known as the program and team levels) [3].

Epic refinement

During epic refinement, threat modelling can be introduced as a risk analysis method to identify the possible negative business outcomes of SAFe epics [4]. The risk analysis would likely be performed against existing epics in the portfolio kanban's [5] ‘analysis’ phase.

Fig 1: Threat modelling after a triage in epic refinement, outputting security requirements. More on triage in the section 'Knowing when threat modelling is necessary' below.

The privacy impact assessment (PIA), or its GDPR-mandated doppelgänger, the data protection impact assessment (DPIA), forms part of this risk analysis. Both also help demonstrate compliance with GDPR's Privacy by Design and by Default (PbDD) [6] requirement. Privacy and security have natural interdependencies; if it transpires that the underlying business case itself is at odds with privacy regulations, changing it later at implementation time is likely to kill the project through remediation cost or delay.

It is unlikely that the business case for a product can be changed to avert a risk completely. Identified risks therefore need to be documented. The goal is to transform each risk into positive (implementable) backlog items, which tend to be features [7] that drive security functionality and material design changes. They may also turn out to be enablers [8], particularly those that SAFe calls infrastructure and compliance enablers. An automated test case, or even the underlying test automation system, is an example of such an enabler.

Documenting risks that currently have no solution 

Especially in epic refinement, it may not be immediately clear what sort of security features or enablers are required. This gap can be bridged by adding attacker stories to the backlog (older literature calls these misuse cases).

An attacker story describes an unwanted event or attack pattern, together with the statement that it must not be possible. Technically, attacker stories work as SAFe features and are typically converted into concrete functionality in later analysis. Until the design details are worked out, they act as a vehicle for communicating undesired risks.
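
As an illustration, an attacker story can be phrased much like a user story, written from the attacker's point of view and ending with the statement that the behaviour must not be possible. The fields and wording below are our own invention, not a SAFe artifact; adapt the template to your own tracker.

```python
# A hypothetical attacker story as a backlog item. The phrasing pattern
# ("As <attacker>, I can <attack>; this must not be possible") is one
# common way to write misuse cases.
attacker_story = {
    "type": "attacker story",      # technically works as a SAFe feature
    "title": "Session token replay",
    "story": ("As an attacker who has captured a session token from a "
              "victim's old device, I can replay it to act as the victim. "
              "This must not be possible."),
    # Filled in during later analysis, converting risk into positive work:
    "converts_to": [
        "Feature: expire and rotate session tokens on re-authentication",
        "Enabler: automated test for token expiry and replay rejection",
    ],
}
```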

Though it may be tempting to create a single security epic under which all security work is added, this should be avoided. Doing so would prevent the work from being suitably prioritized, because security needs would not be explicitly linked to business value. Security requirements buried under their own epic may never be seen again by anyone in the business.

This is the first point at which security and privacy work becomes explicitly visible in the development process. Security work appears as backlog items and can be used directly as evidence of the work performed, as well as the work still outstanding.

Feature refinement

In feature refinement, threat modelling is applied to features. These are the actual statements of work to be delivered by the developers. Instead of treating security and privacy aspects as non-functional requirements, the goal is to make them features and enablers in their own right. This forces product management to make explicit decisions when allocating developer time between functionality and security. Security overspending is kept in check as a result, while ticketing provides direct evidence of the security work performed.

If threat modelling has already been performed during epic refinement, the feature refinement stage may inherit attacker stories that need to be converted into positive security controls: either features or enablers. Either way, the increased visibility into technical details allows further threat modelling to happen here.

Threat modelling in feature refinement is likely to concern specific technical design and implementation issues, so the best people to perform it are the developers themselves.

When should feature-level threat modelling happen?

Performing triage and threat modelling for features can be made a Definition of Ready (DoR) [9] criterion. The feature would then not progress into the Program Increment content (that is, be committed for implementation by the team) until the threat modelling activity has been performed.

There are three options for scheduling threat modelling for a functional feature, each with its own trade-off:

Technical threat modelling can happen as part of a feature's refinement work.

Challenge: Threat modelling eats into the time quota developers use for refinement work, and complex features may not fit into the allotted time.

Threat modelling can be pushed into the same Program Increment where the feature will be implemented.  

Challenge: Here, the team takes the risk that threat modelling unearths more work, causing the Program Increment content to swell. If the feature being developed is a stretch goal (meaning that failure to deliver is an acceptable risk), this can be an option.

Threat modelling could also be performed in the previous Program Increment, with any output from the modelling pushed back onto the backlog.

Challenge: The next Program Increment can then work from a full understanding of the feature's threats, but this may push the feature's implementation a couple of months into the future. In plain Scrum, where short sprints follow one another, performing threat modelling in the previous sprint is usually the best alternative.

Fig 2: Performing threat modelling within feature refinement. The team has either enough time for threat modelling here, or the features that require threat modelling are simple enough.

Fig 3: When threat modelling seems to take a lot of time, it can be pushed into the Program Increment with all the development work. The threat modelling results may change the increment content, or they can be pushed onto the backlog for later implementation.

Even if threat modelling is performed during feature refinement, it is still important to make the threat modelling task visible on the backlog. This ensures there is evidence of the work performed and that it won’t be overlooked. 

Knowing when threat modelling is necessary 

Not every epic or feature has security relevance. The most important are those that change how personal data is used and those that affect the attack surface of the system.

What counts as personal data or attack surface may be self-evident to security practitioners, but it helps tremendously if there is a guideline, for example on detecting when a feature changes the way an attacker could interact with the system. Our consultants’ solution has been to create a triage checklist: short, memorable instructions for locating security-critical backlog items. The triage checklist should be adapted to the business, with different versions for epic and feature refinement.

Triage checklists of approximately five questions are easy to remember. The goal is not to complete each list as such, but to build an understanding of security-sensitive functionality through examples. Triage must not take a lot of time; after all, significant security work should be ticketed, and triage would usually form part of the DoR criteria. What these checklists are not is the 150-line security requirement Excel sheets familiar to (and often loathed by) developers.
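
A sketch of what such a triage could look like in practice follows. The five questions are examples of our own, not a canonical list, and should be rewritten for the business at hand.

```python
# Illustrative feature-refinement triage: any "yes" flags the backlog
# item for a proper, ticketed threat modelling session.
TRIAGE_QUESTIONS = [
    "Does the item change how personal data is collected, stored, or shared?",
    "Does it add or change an interface an attacker could reach (API, port, UI)?",
    "Does it touch authentication, authorization, or session handling?",
    "Does it introduce a new third-party dependency or integration?",
    "Does it change how secrets, keys, or credentials are handled?",
]


def needs_threat_modelling(answers: list[bool]) -> bool:
    """Triage should take minutes: any 'yes' means ticketing the real work."""
    return any(answers)
```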

Selecting the right threat modelling methodology 

The threat modelling methodology you use depends on whether its input is a business value proposal (an epic) or a functional requirement (a feature).

For epics, it may not be possible to dive deep into design details, as they may well not even be known yet. Instead, success has been achieved with an attacker storyboard, showing how attack patterns could be deployed to cause a technical impact, with consequences either for the business (security) or for the data subject (privacy). A facilitated discussion lets stakeholders devise all the necessary parts of an attacker story; understanding the attack patterns is likely to be the most challenging part.

For features, it is important to clarify technical assumptions and surface any design and implementation problems. Here, the threat modelling work would usually use a data-flow-driven methodology such as Microsoft’s STRIDE (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege). In systems processing personal data, it is common to add a TRIM discussion as a lightweight privacy checkpoint. If you are interested, our Elevation of Privacy card game is on GitHub [10].
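
As a minimal sketch of the data-flow-driven approach, a STRIDE pass simply walks every element of the data flow and asks about each threat category. The flow elements below are placeholders of our own; a real session works from the system's actual data flow diagram and records findings as backlog tickets.

```python
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

data_flow = ["browser -> API gateway",
             "API gateway -> payment service",
             "payment service -> orders database"]

for element in data_flow:
    for threat in STRIDE:
        # In a live session the team discusses each prompt; anything
        # actionable becomes a feature or enabler ticket on the backlog.
        print(f"{element}: how could {threat.lower()} happen here?")
```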

Conclusion

Some organizations may worry that developers lack the skills to perform threat modelling. However, when the activity is driven by SAFe across a number of features, familiarity usually follows quickly. The threat models (the shared understanding of existing risks) within each Agile Release Train (ART) [11] tend to be rather stable, so engineers can build on earlier threat modelling results.

Once threat modelling becomes commonplace for epics and features, business and product management can expect to find that security and privacy are no longer pure costs and sources of schedule risk. Instead, security can be balanced against tasks that create immediate business value and articulated in terms of customer value.

Key points to remember

Security and privacy work needs to be visible on backlogs. This visibility makes time allocation and relative priorities explicit, and at the same time produces evidence of the security work performed. It also gives the organization's security and privacy functions a way to follow up on security activities without extra reporting.

Don’t use non-functional requirements for security. Attackers will not care about valiant statements in your acceptance criteria. Each security requirement needs to boil down to either a hard, functional requirement or an actual enabler. If you cannot yet tell exactly how to tackle a risk, attacker stories are a great vehicle for communicating the “negative” requirements to your engineers.

Perform threat modelling (and, if needed, privacy impact assessment) using two different methodologies. Threat modelling in epic refinement makes your business case robust, and feature-level threat modelling improves your design and implementation. Combining both gives your software a teflon coating.

SAFe® and Scaled Agile Framework® are registered trademarks of Scaled Agile, Inc. 

References

[1] https://www.scaledagileframework.com/nonfunctional-requirements/

[2] https://www.scaledagileframework.com/portfolio-safe

[3] https://v46.scaledagileframework.com/program-level

[4] https://www.scaledagileframework.com/epic

[5] https://www.scaledagileframework.com/portfolio-kanban

[6] https://edpb.europa.eu/sites/edpb/files/consultation/edpb_guidelines_201904_dataprotection_by_design_and_by_default.pdf

[7] https://www.scaledagileframework.com/features-and-capabilities

[8] https://www.scaledagileframework.com/enablers

[9] https://www.scrum.org/resources/blog/walking-through-definition-ready

[10] https://github.com/F-Secure/elevation-of-privacy

[11] https://www.scaledagileframework.com/agile-release-train
