A common practice in software design, and common to many of the RFPs that come across my desk, is to declare the specific modes of security necessary for a particular application or product, resulting in a set of prescriptive requirements (or possibly a specific set of descriptive requirements) that must be implemented in the application. This is a useful exercise for describing user stories that can be tested during and after development, but by treating each security story as a discrete feature to be implemented, it sacrifices as much in robustness as it gains in clarity.
Beyond the obvious risk of being incomplete, the challenge with this method of designing security into a product is that it subjects security to the chopping block implicit at the feature-negotiation table. If security, as a nonfunctional requirement, carries the same weight with the customer as a functional business rule, then any particular security feature runs the risk of never being approved: it would sit in the development backlog, never move into the current development cycle, and never get implemented.
I see this happen in real life. Customers always want new features, and security tends to get prioritized only as an act of compliance. This leaves the integrity of the application and its data at the mercy of a potential future compromise in favor of a current client-facing feature. Consider the conversation with a client in a grooming session, where you ask, “Would you like to increase the security of the system by funding a feature that would give your users a second factor of authentication?”
In this case, the client may decline the option to INCREASE the state of the application’s security, assuming instead that usernames and passwords are sufficient. This is the risk: the client believes the application is already at a baseline of security (their specific requirements) and is being asked to increase security beyond what was in the RFP, at additional expense.
Contrast this with simply assuming strong security best practices and baking a fully secure system into the application at sprint zero, including what might normally be considered “enhanced” features:
- Strong authentication for all users, including multi-factor authentication.
- Full auditing capabilities, allowing for the reproduction of any change event in the system.
- Full access control, requiring positive escalation of privilege at each change of state within the system.
- Resilience against commonly known attack methods, such as Cross-Site Scripting (XSS) or SQL Injection.
- Protection of Personally Identifiable Information (PII) by encrypting protected data.
- Designing traps in the application’s public surface to intercept and inhibit brute force attacks by blocking unsafe behavior.
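As one concrete illustration of the resilience point above, here is a minimal sketch of the difference between concatenating user input into SQL and using parameterized queries. It uses Python’s built-in `sqlite3` module and a hypothetical `users` table invented for this example; the same principle applies to any database driver.

```python
import sqlite3

# In-memory database with a hypothetical users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(username):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text.
    query = "SELECT username FROM users WHERE username = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(username):
    # Parameterized: the driver binds the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT username FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: the injection succeeds
print(find_user_safe(payload))    # returns no rows: the input is just a string
```

The point is that resilience here costs nothing extra at sprint zero; parameterized queries are the same number of lines as the vulnerable version, which is exactly why they belong in the baseline rather than the backlog.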
Building these features and other best practices into the stories used to describe the application in its base state (again, in sprint zero) changes the dynamics of the development conversation by moving them out of backlog negotiation and ahead of active development, ensuring the application is secure by default.
The conversation with the client then changes, because removing that functionality requires the client to assert that they want to actively DECREASE the state of the application’s security. They still hold control over the application and its development, and they can certainly decide how their development dollars ought to be spent, but we can provide leadership by reframing that conversation and recommending they maintain the strength of a secure application.
Share your experiences and concerns developing secure applications with us in the comments section below!