I've been a Developer for *cough* *cough* years, and in Application Security for *cough* years.
I've seen Security applied to Development in an after-the-fact manner where Security would tell Development "you just build it, we'll secure it". I've seen an Application Vulnerability Assessment (AVA) come in right at the end of a project, in typical waterfall fashion, and unearth fundamental flaws in the architecture of the application that required a massive undertaking to remediate.
I've seen Development and Security almost coming to blows. And the most common scenario is Security saying (or worse still, another team saying on behalf of Security) "no no no, you can't do that", and when Development ask "but how should we do it?" the response is "dunno, just not like that; it's your job to develop, you're Development, we're Security".
But it's not all doom and gloom. I've seen the Security and Development relationship work because there was good, clear upfront communication and well-thought-out planning, or because there was a great deal of upper-management buy-in, or from just dumb luck and being in the right place at the right time.
Security and Development have not always played nicely together, but there really shouldn't be any reason they can't just get along.
There is (hopefully) nothing new in this post, it's just a list of a few recommendations that help to embed Application Security into the Software Development Lifecycle (SDLC), or to help you build a Secure Development Lifecycle (SDL).
Just the facts: The format is a Recommendation followed by a Why.
Take measurements, so you can show improvement. Measure the number of Critical, High and Medium issues for each project through Static Code Analysis (SCA), Automated Web Scanning (AWS) and manual AVA. Record who has had what training and when. Measure the number of policy deviations per delivery stream. Measure time-to-fix; how long it takes a delivery stream to respond to issues being raised. Record where (in the SDLC) you find a security issue and make a rough calculation on remediation costs - ie the financial cost of people and systems that are involved and the impact cost of being pulled off what they are supposed to be involved in. This information can be used to demonstrate the value add of the SDL and can help offset the actual cost of the SDL implementation.
If you can't measure, you can't manage. I don't know who said it, but I quite like it. If you're spending money on embedding an SDL, you'll want to be able to demonstrate that it is adding value, that you're showing progress in a controlled fashion, and that you're reducing risk. As such you'll need to be able to measure specifics both before implementing controls, giving training or adding tools, and after such efforts, so you can show the value added. It is important to use the same measurements and that you measure the same thing in the same way each time; ie use the same ruler.
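As a concrete sketch of "using the same ruler", the snippet below records findings and computes the two measurements mentioned above: severity counts and time-to-fix per delivery stream. The `Finding` shape, field names and severity labels here are assumptions for illustration only, not a prescribed schema.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Finding:
    stream: str                    # delivery stream the issue belongs to
    severity: str                  # "Critical", "High" or "Medium"
    raised: date                   # when the issue was raised
    fixed: Optional[date] = None   # None while the issue is still open


def severity_counts(findings):
    """Count issues per severity -- the same ruler, applied the same way each time."""
    return Counter(f.severity for f in findings)


def mean_time_to_fix_days(findings, stream):
    """Average days from raised to fixed for one delivery stream (None if nothing closed)."""
    closed = [f for f in findings if f.stream == stream and f.fixed]
    if not closed:
        return None
    return sum((f.fixed - f.raised).days for f in closed) / len(closed)
```

Measured before and after a control, training course or tool is introduced, the deltas from these two functions are the "value added" numbers referred to above.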
It's not always a good idea to share the results of the measurement with everyone; this can lead to a loss of hearts and minds. No one likes being told that what they are currently doing is not best practice, or risky, or just plain wrong.
Thought should go into what policies you roll out and how to make them visible, understood, enforced and measurable against what is delivered. It is important to encourage Development to get involved in the policy creation; if you're involved in the creation of something, you're more likely to understand and enforce it.
Record and audit actual policy adherence or deviation. No one likes surprises, and that is what happens when continuous measurements are not performed. Your policies form the basis of your NFRs. Your NFRs can be expressed as user stories and can be included on the backlog of each and every project upon creation. You can probably take similar NFRs from other teams such as Infrastructure or a Performance team, if your organisation has such a structure.
I don't like reading reams and reams of documentation with unnecessary padding or fluff. Just the facts please. If you keep it simple and readable guess what, it's more likely people will read it and understand it, or ask questions about it.
All you need is to explain the gist of the policy, why it needs to be done, and link to any documentation to back it up. This may be a legal document, or a news article on recent hacking activity.
Install, configure, and tune your tools. Don't just buy a tool and expect it to be a silver bullet; integrate it, tune it, reduce noise through a triage and verification process, create custom rules and make it part of the SDLC.
Humans do not scale well; tools scale better. Tools don't have working hours, so they can continue to test whilst you sleep. Tools do what you tell them to do, consistently; they don't get tired, have off days, or skip steps.
Use SCA, AWS and security-centric unit tests (UT). Test early, test often, test automatically, to borrow a phrase. Having all these tools in your pipeline will allow you to measure security risk on a per-check-in basis if you want that level of granularity. Remember, just because you have tools, it doesn't mean they'll find everything or that everything they find is an actual problem; you'll need to perform some analysis and verification on any findings.
Encourage the use of some or all of these tools on the Developer workstation, as this reinforces the rule that the earlier you find an issue, the cheaper it is to remediate; and what cheaper way than to never have it committed to your Source Code Management (SCM) system in the first place.
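To make "security-centric unit tests" concrete, here is a minimal sketch. It uses Python's stdlib `html.escape` as a stand-in for whatever output-encoding helper your codebase actually uses (an assumption); the point is that an encoding regression fails the build at check-in, not at the end-of-project AVA.

```python
import html
import unittest


class OutputEncodingTests(unittest.TestCase):
    """Security-centric unit tests: fail the build if output encoding regresses.

    html.escape stands in for the project's own encoding helper.
    """

    def test_script_tag_is_neutralised(self):
        # A classic reflected-XSS payload must come out inert.
        payload = '<script>alert(1)</script>'
        encoded = html.escape(payload)
        self.assertNotIn('<script>', encoded)
        self.assertEqual(encoded, '&lt;script&gt;alert(1)&lt;/script&gt;')

    def test_attribute_breakout_is_neutralised(self):
        # Quote characters must be encoded so attackers can't escape an attribute.
        payload = '" onmouseover="alert(1)'
        encoded = html.escape(payload, quote=True)
        self.assertNotIn('"', encoded)


if __name__ == '__main__':
    unittest.main()
```

Tests like these run in seconds on the workstation and again in the pipeline, giving the per-check-in security signal described above.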
Implement Continuous Integrity (CI).
Your build pipeline allows you to commit your code at one end, and at the other end out pops an artefact deployed to an environment. You get the same reports from each step within that pipeline every time you commit code. Ensuring that what you're reporting on is actually what is being deployed helps you retain the integrity of the build process. Ensure artefacts are only deployed through this pipeline and block access to random drive-bys. You can create an effective audit trail by implementing role-based access control (RBAC) on the SCM, ensuring that is the only way to effect change for any given stream.
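One simple way to check that what you reported on is what got deployed is to compare artefact checksums. The sketch below (the function names and the idea of hashing built vs deployed artefacts are my illustration, not a specific tool) flags anything that didn't come through the pipeline:

```python
import hashlib


def sha256_of(path):
    """Hash a build artefact so the pipeline can prove exactly what it produced."""
    digest = hashlib.sha256()
    with open(path, 'rb') as artefact:
        # Read in chunks so large artefacts don't need to fit in memory.
        for chunk in iter(lambda: artefact.read(8192), b''):
            digest.update(chunk)
    return digest.hexdigest()


def verify_deployment(built_artefact_path, deployed_artefact_path):
    """Integrity check: the artefact in the environment must be byte-for-byte
    the one the pipeline built -- anything else is a 'random drive-by'."""
    return sha256_of(built_artefact_path) == sha256_of(deployed_artefact_path)
```

Run as a post-deploy step, a `False` here is your audit trail telling you someone changed something outside the pipeline.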
Stop saying No! and stop blocking.
Comments such as 'You'll have to check with Security, they won't like it', or using Security as the reason for arguing against something you have no constructive reason to argue against, are not helpful.
Find a way to say Yes and help be a business enabler, not a disabler.
Think about targeted training.
Don't train everyone on everything, and don't forget awareness training even at director level. Development need to understand the different classes of vulnerabilities and how to defend against them, or not introduce them, whereas management need to understand the impact of vulnerabilities, not the technical details.
A manager spending a day on XSS training, as an example, may not be money well spent; it's a day out of the calendar of the manager as well as the cost of the training and may be training that is never used. I'm not saying they shouldn't be aware of XSS, just that they may not require the same level of detail as a Developer would need.
Create Security Champions within each development stream. A core Application Security team should also be in place and should foster the Application Security Community.
Application Security teams don't scale well as the number of delivery streams grows along with the number of products or applications that a business creates.
The ratio of Application Security resources to Developers or development streams is a finger-in-the-air exercise.
By having a Champion in each delivery stream, you can be sure Application Security will be represented within each stream.
Hold a regular catch-up with the Application Security core team and the Champions. This helps foster a community. Make an effort to attend all of these meetings; if the Champions see indifference from the core team, then there is a danger this indifference will spread to the Champions.
Encourage both Core members and Champions to give presentations. Always follow up the evening with some beers.
Embed Security within the Architecture design process.
Architecture teams usually have a process for initiating a project through high-level design documents to detailed design documents. Piggy-back this process if possible, as trying to roll out an additional review process is challenging. Getting involved early in the life of an application architecture helps you understand the application better, thus allowing you to threat model more effectively.
Push issue detection upstream.
The earlier you find an issue, the cheaper it is to resolve. The most expensive way to resolve an issue is when it is found in production by a 'hacker'. This can involve everyone from Incident Response, Management, Development, Testing, Infrastructure and the Legal team, and it distracts everyone from their current work, which may delay the rolling out of a product that makes money.
A combination of developer training and tools exercised pre-check-in drives down the cost of each issue.
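The rough remediation-cost calculation mentioned earlier can be sketched as a simple phase multiplier. The multipliers below are purely illustrative assumptions for the sake of the example, not research data; plug in your own measured figures.

```python
# Illustrative only: relative cost multipliers by the SDLC phase in which an
# issue is found. These numbers are assumptions, not industry data -- replace
# them with the figures your own measurements produce.
COST_MULTIPLIER = {
    'design': 1,
    'implementation': 5,
    'testing': 10,
    'production': 30,
}


def remediation_cost(base_cost, phase):
    """Rough cost of fixing one issue, given where in the SDLC it was found.

    base_cost is the people-and-systems cost of a design-time fix.
    """
    return base_cost * COST_MULTIPLIER[phase]
```

Even with made-up multipliers, comparing the same issue found pre-check-in versus in production is a persuasive way to offset the cost of the SDL itself.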
Spend money on someone really good.
From experience, good people are expensive, and they are expensive for a reason. It's usually more cost-effective to spend money on someone really good who has been around the block, as it were, than to spend the same amount on two or three individuals who spend time and money researching and hoping they come to a final consensus.
Ensure that you're fixing the issues you find.
Make sure you have Security representation during release and sprint planning. It's one thing finding issues and raising these defects on the delivery stream backlogs, but it's another thing getting the issues prioritised (they are not revenue generating, but revenue protecting) against features that have the potential to generate cold hard cash.
Don't share all the results, all of the time.
Set bug bars or you're in danger of losing hearts and minds. One strategy is to start off with a class of vulnerability, one that may be the most common within a code base for instance, and concentrate on an awareness programme around it. This can take the form of a presentation, a demo, and a walkthrough of 'this is what the code looks like', 'this is how to fix it' and 'this is how to not introduce it in the first place'. Then move on to the next class of vulnerability.
Always triage results before raising defects on the backlog of a delivery stream. Raising false positives undermines the entire effort. Ensure you only communicate issues that are actually issues. Start off with the highest severity of issues and treat the rest as 'noise'. Once the highest level has subsided, then start introducing issues from the next severity down.
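The triage step above boils down to a filter: only verified findings at or above the current bug bar reach the backlog. A minimal sketch, assuming findings arrive from the tooling as dictionaries with `severity` and `verified` fields (my illustration, not any particular scanner's output format):

```python
# Severities ordered from most to least severe; index 0 is the strictest bar.
SEVERITY_ORDER = ['Critical', 'High', 'Medium', 'Low']


def triage(findings, current_bar='Critical'):
    """Pass only verified findings at or above the current bug bar to the
    delivery stream's backlog; everything else stays with the security team
    as 'noise' until the bar is lowered."""
    bar = SEVERITY_ORDER.index(current_bar)
    return [
        finding for finding in findings
        if finding['verified'] and SEVERITY_ORDER.index(finding['severity']) <= bar
    ]
```

Lowering `current_bar` from 'Critical' to 'High' once the Criticals have subsided implements the "next severity down" progression described above.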
There are a few more formal SDL frameworks around, and this isn't an attempt to question them; it's meant as a quick-win checklist and some observations based on past experience.