Building a Non-Functional Requirements Framework – Overview
I’m planning on documenting a framework that we built for managing non-functional requirements. This is post #1 of the series.
A pain point for our infrastructure and security teams was a lack of usable, consistent availability and security requirements for our internally developed applications. The business analysts worked with the organization to create requirements for the functionality of the application but ignored most of what infrastructure, identity management, and security would need until the end of the development process. By the time these teams got insight into the application, it was too late to wedge in new requirements. The net was that the organization was promised applications or enhancements, but because no consideration had been given to non-functional requirements, deadlines were often missed. The worst example was the pending release of a major new application that allowed manipulation of financial information, but for which no consideration had been given to authentication, authorization requirements, or database & application hosting security. Retrofitting that project added a year to the timeline.
Additionally, we had a series of outstanding audit findings related to the lack of enterprise-wide standards for securing systems. We tended to build secure and available systems because we knew what we were doing – not because we built to an objective, measurable standard. Auditors would prefer that we built to a standard that ensured a secure, available system – and of course we agreed.
When I had a few months of down time (approx. 2012-2013) I decided to see what the state of the art was in creating and maintaining non-functional requirements (NFRs). I looked at the obvious – FURPS+, ISO-9126, ISO-25010 – and a handful of university-published research papers. My biggest issue with the various existing models was that they were software specific. I felt that NFRs should apply to entire systems, not just the software running on the system.
As far as I could tell at the time, the various sources, authors, consultants and Gartner didn’t really agree on much other than that NFRs are not Functional Requirements and that you need to have some. I found that:
Many web sites have lists and examples of NFRs.
Some try to define NFRs; few succeed.
Others admit that NFRs are difficult to gather.
Few apply NFRs to systems (vs. software).
FURPS+, ISO-9126, ISO-25010 and similar didn’t treat security as a first-class citizen, nor did they address legal requirements.
What I did find, though, were a couple of sources that I thought I could use to build a set of generic non-functional requirements.
Erik Simmons and John Terzakis (Intel) each have a fair bit of good information in various presentations that are readily searchable.
Tom Gilb’s ‘Planguage’ seemed like a valuable tool, and both Simmons and Terzakis describe how to use Planguage for requirements writing.
These sources were close to being adaptable, but rather than adopt an existing framework as-is, I thought it best for us to come up with something usable by borrowing bits and pieces from various existing sources, primarily Simmons, Terzakis, and Gilb.
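For readers unfamiliar with it, Planguage expresses each requirement as a set of short, tagged, quantified statements rather than free prose. A minimal sketch of the style (the keywords follow Gilb, whose keyword set varies between publications; the tag and values here are invented for illustration):

```
Tag: Login.Responsiveness
Ambition: Interactive logins feel instantaneous to users.
Scale: Seconds from credential submission to first page render.
Meter: Timing probe against the login page, sampled hourly.
Fail: 5 seconds        <- worst acceptable level
Goal: 2 seconds        <- planned target
```

The Scale/Meter pair is what makes the requirement testable: the Scale names the unit of measure, and the Meter names how the measurement is actually taken.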
Into the Non-Functional Requirement Abyss
We agreed that Requirements are not designs and should not specify a particular technology or configuration. Requirements should specify an end result, not the path to achieve that result. We tried to keep this in mind as we worked out our framework.
Our starting point (and first disagreement…) was on the definition of non-functional requirements. Here’s what we used:
Functional Requirements describe the intended behavior of the system (or software), or what a system should do.
Non-functional Requirements describe how well the system does whatever it does and under what constraints the system must operate. NFRs describe operational characteristics, performance, availability, etc.
We decided to leverage a permutation of the common ‘S.M.A.R.T.’ framework as a guide for writing the requirements. By placing bounds on the requirements-writing process, we hoped that we’d end up with requirements that would have a chance of being valuable to the organization.
Our version of ‘S.M.A.R.T.’:
Specific: Requirements will be clear, concise, and unambiguous, use consistent terminology, and include sufficient detail that designs based on the requirements will meet operational goals.
Measurable: A test can be devised that verifies the requirement using a bounded measurement.
Attainable: The requirement is technically feasible within the constraints of current technology, and for which there is at least one design and implementation.
Realizable: The requirement can be implemented within the constraints of the organization’s budget and staffing.
Unambiguous: The requirement will have a single, non-conflicting interpretation.
Traceable: The source of a requirement will be traceable to stakeholder need. The requirement is traceable to business strategy or roadmap. The life cycle of the requirement is traceable from its conception to its current state.
Specificity and Measurability were considered important because we hoped they would keep us from writing vague requirements, or requirements for which there were no means of measuring attainment.
Attainability and Realizability were intended to prevent the implementation of requirements for which there was no solution possible, or no solution that was actually implementable in our environment with our limited capabilities.
Traceability was desired to prevent the imposition of requirements for which there was no business need (requirements for the sake of requirements, or requirements to give us an excuse to buy shiny new resume-building technology) or requirements that appeared out of nowhere or were modified outside of a formal process.
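None of these attributes can be fully checked mechanically, but the Specific and Measurable attributes lend themselves to a crude automated screen. A hypothetical sketch in Python (the word list and rules are my own invention, not part of our framework):

```python
import re

# Weasel words that usually signal a vague, untestable requirement.
# This list is illustrative, not exhaustive.
VAGUE_WORDS = {"fast", "user-friendly", "adequate", "robust", "flexible",
               "quickly", "appropriate", "reasonable"}

def smart_lint(requirement_text: str) -> list:
    """Return a list of problems; an empty list means the text passes this crude screen."""
    problems = []
    words = set(re.findall(r"[a-z\-]+", requirement_text.lower()))
    for w in sorted(words & VAGUE_WORDS):
        problems.append(f"vague term: '{w}'")
    # A requirement with no number at all is rarely measurable.
    if not re.search(r"\d", requirement_text):
        problems.append("no number found; is the requirement measurable?")
    return problems
```

So `smart_lint("The system shall be fast and user-friendly.")` flags two vague terms and the missing number, while a bounded statement like "complete a password reset within 5 seconds for 99% of attempts" passes clean.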
Because we like putting things in neat buckets, we created broad categories of NFRs for which we thought we’d have an immediate need. The various industry models have categories (Maintainability, Reliability, Portability, etc.) but our thinking at the time was that those categories didn’t work for us. So we started from scratch and ended up with the following:
Resiliency – The requirements that describe the ability of the system to continue to function during common failure modes. A resilient system continues to work after routine failures (disk, server, OS or process). Resiliency is necessary to meet availability requirements and usability requirements. A resilient system may use technologies such as redundancy, clustering, load balancing, error handling, and error recovery to function after component failure. Resiliency encompasses the concepts of availability, reliability, robustness, fault tolerance and exception handling as described by other authors.
Recoverability – The requirements that describe the ability to recover from failed states and return the system to its as-built condition. Using the example of a failed unit of hardware, a resilient system will continue to function after failure, a recoverable system will have a simple and predictable method for recovering from the hardware failure. Data backups, data replication, hot-swap hard drives, and automated operating system and application deployment tools may be technologies or techniques to recover a failed component.
Maintainability – The requirements that describe the ability to maintain the system over its operational life. Among other attributes, a maintainable system can have routine hardware upgrades and application deployments without user-affecting outages, it will have monitoring, logging and auditing sufficient for routine troubleshooting, and it will have a low operational cost. Maintainability encompasses manageability, upgradability, deployability and flexibility as described by other authors.
Scalability – The requirements that describe the ability to add and remove capacity to the system without affecting the availability of the system, while maximizing maintainability and constraining costs.
Security – The ability to maintain the confidentiality and integrity of a system and the data contained in or controlled by the system. These are requirements related to system access, system integrity, system confidentiality and system configuration.
These categories can be mapped back to FURPS+, ISO-9126, ISO-25010, ISO-27002, NIST 800-53, etc.
Note that Availability, Performance, and Reliability are not requirements categories in our model. We determined that if a system met a set of Resiliency, Recoverability and Security requirements, it would also meet an appropriate level of availability and reliability as a byproduct. Likewise, the system would be able to meet performance requirements as a byproduct of the Scalability and Maintainability requirements.
Usability, Portability and Compatibility are common requirement families in other models, but as our model was driven by short-term infrastructure and security needs, they were left out of the early phases.
Non-Functional Requirements Form & Format
Following the work done by Simmons & Terzakis (Intel), we decided to implement a modified template and a Planguage-like structured language for the NFRs. Each NFR exists as a single document.
The Non-functional requirements template and definitions that we settled on are:
Category: A text field representing the category that the requirement is classified under in the Minnesota State Model.
Context: A text field representing the requirement, unique within a category. Together, the Category and Context are equivalent to the ‘ID:’ in Planguage or ‘Ambition’ in (Simmons/Intel 2011).
Goals: Natural language description of the intent of the requirement and how it supports one or more of the general goals. The Goal is equivalent to ‘Gist:’ in Planguage or ‘Ambition’ in (Simmons/Intel 2011).
Rationale: The reason that the requirement exists. Expressed in natural language.
Requirement: The requirement to which the system will be held, written in a constrained natural language meeting the Minnesota State Non-Functional Requirements attributes.
Metric: The measurement used to determine whether the requirement has been met, and the process or device used to locate the measurement on the scale. The Metric must include ‘Minimum’, the minimum acceptable measurement, and may include ‘Target’, the measurement to which the system must be designed.
Scale: The scale of measure used to quantify the requirement.
Stakeholders: Persons who stand to gain or lose by implementation of requirements. Expressed as roles, not individuals.
Implications: Implications to the stakeholders if these requirements are not met.
Applicability: Systems or categories of systems to which requirement applies.
Status: One of Draft, Approved, Revised, or other constrained choice of statuses matching the requirements implementing process.
Author: Person responsible for authoring and maintaining requirement.
Revision: Sequential number representing approved revision of requirement.
Date: Date of the last revision of the requirement.
The NFRs have a structure and format that could be adapted to metadata-driven requirements tooling.
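As one illustration of that, the template maps naturally onto a record type. A minimal sketch in Python (the class and field names are my own; the statuses beyond Draft/Approved/Revised are invented examples):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Lifecycle states; 'Retired' is an invented example of an extra status
# an implementing process might add.
ALLOWED_STATUSES = {"Draft", "Approved", "Revised", "Retired"}

@dataclass
class NonFunctionalRequirement:
    category: str                 # e.g. Resiliency, Recoverability, Security
    context: str                  # requirement name, unique within the category
    goals: str                    # natural-language intent
    rationale: str                # why the requirement exists
    requirement: str              # constrained natural language
    metric_minimum: str           # minimum acceptable measurement
    scale: str                    # scale of measure
    stakeholders: list = field(default_factory=list)  # roles, not individuals
    implications: str = ""        # consequences if the requirement is not met
    applicability: str = ""       # systems the requirement applies to
    metric_target: Optional[str] = None  # optional design target
    status: str = "Draft"
    author: str = ""
    revision: int = 1
    last_revised: Optional[date] = None

    def __post_init__(self) -> None:
        # Enforce the constrained choice of statuses from the template.
        if self.status not in ALLOWED_STATUSES:
            raise ValueError(f"unknown status: {self.status!r}")
```

Once requirements are records rather than prose documents, the same metadata can drive validation, revision tracking, and rendering into the published document form.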
At this stage we had a handful of Non-Functional Requirements categories and a template for writing the NFRs, but no actual requirements.
Next up: an attempt to create a generic set of NFRs usable across systems.