Showing posts with label Application Security. Show all posts

Tuesday, 19 March 2013

CTF/DTF idea

I had a brief chat earlier today about your standard CTF experience at xyzSecCon. It's always been a bit biased towards breaking in rather than defending, and towards networks/services/etc rather than just the app layer.

Some initial thoughts for a potentially interesting and new(er) format are thus:

There are teams of attackers
There are teams of defenders

Each defending team is given the same application to defend
* The application is riddled with issues, you name it, it's there

Each defending team has the same codebase from which to start

Each defending team is given a branch within the SCM repo
* the build pipeline is in place
* dependencies, project files and imports to different IDEs (eclipse, intellij) are all there
* VMs are auto provisioned and releases are deployed to those VMs
* essentially all the defenders have to think about is the code, everything else is in place

Each branch and build track has the same tool set,
* functional and non-functional unit tests including some security centric unit tests (xUnit, etc),
* functional and non-functional user tests including some security centric user tests (Selenium or casper js, etc),
* static code analysis (Sonar, Findbugs, Coverity, Fortify, etc),
* automated web scanning (acunetix, burp, skipfish, etc)
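To make the 'security centric unit tests' idea concrete, here's a minimal sketch in Python; the `escape_html` helper is an illustrative assumption, not part of any toolset named above. The point is that these tests assert untrusted input is neutralised, rather than asserting functional behaviour.

```python
import html

def escape_html(user_input):
    """Illustrative output-encoding helper; assumes an HTML body/attribute context."""
    return html.escape(user_input, quote=True)

# Security-centric unit tests: they assert untrusted input is neutralised
# before it reaches the page, not that a feature works.
def test_script_tag_is_encoded():
    assert "<script>" not in escape_html("<script>alert(1)</script>")

def test_attribute_breakout_is_encoded():
    assert '"' not in escape_html('" onmouseover="alert(1)')

test_script_tag_is_encoded()
test_attribute_breakout_is_encoded()
```

The same shape works in xUnit on the JVM; these tests run in the same pipeline stage as the functional ones.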

Each attacking team is given prior knowledge of all the flaws within said application
* well in advance if required/to make it more fun

Each attacking team can use any tool they see fit - this is about attacking the app, not the network/service/OS
* In all cases the attacking teams can add their own tools to the tool set

Each defending team must deploy at least once an hour

Each deployment goes to a new environment for that team - so if the CTF/DTF lasts over 2 (10 hour) days, then each team needs 20 different virtual envs.
* This is so that any long running attacks have time to complete against any given 'release'

The application contains your standard web functionality
* Anonymous/Authenticated browsing
* Registration
* Login/Logout
* Forgotten password
* Common issues such as Captcha/Username already registered/etc
* Register CC
* Calls out to payment providers/etc
* Shopping cart
* Checkout
* RSS feeds
* Social network integration
* etc etc, this is just a quick list

One (of many) responses to consider
* If there is a leakage of passwords - for instance - the defending team need to think about how they flick a switch so that all new logins are forced to change their password
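As a hedged sketch of what that switch might look like (the in-memory flag and function names are assumptions; a real app would persist the cut-off):

```python
import time

# Illustrative 'flick a switch' response to a password leak: record a cut-off,
# then force any account whose password predates it through a reset at login.
FORCE_RESET_AFTER = None  # epoch seconds, or None while the switch is off

def flick_switch():
    """Invalidate every password set before this moment."""
    global FORCE_RESET_AFTER
    FORCE_RESET_AFTER = time.time()

def must_reset_password(password_last_set):
    """Called during login: True if this user must change their password."""
    return FORCE_RESET_AFTER is not None and password_last_set < FORCE_RESET_AFTER
```

Accounts that reset after the switch get a fresh timestamp and pass the check on subsequent logins.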

Obviously the attacking teams will find most vulns early on, so the idea is that the defending teams reduce the number of exploitable vulns over time. Attacking teams can attack all the deployments for all the defending teams, so the scoring is recorded in a matrix

There can even be a set of dumb users; they can be socially engineered on an automagic basis - ie links in emails coming into email inboxes are automagically clicked

Anyway, I've thought about this for a total of an hour or so, but I think there is something different in this, something different to your normal run-of-the-mill CTF

It's usually easier to break stuff than it is to fix it. The initial challenge is in fixing rather than breaking.

Once the defending teams have gone through several release iterations, it will be harder to break stuff than it is to fix it. The longer challenge is in breaking rather than fixing.

Points will be awarded for the number of vulnerabilities detected; there will be a sliding scale for the level to which the vulnerability is exploited. For instance, displaying alert(1) in an attacker's browser is 5 points, but harvesting the application user base's session IDs would be worth 50 points.
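That sliding scale, combined with the scoring matrix mentioned earlier, could be sketched like this (team names, finding labels and point values are illustrative):

```python
from collections import defaultdict

# Illustrative sliding scale: a proof-of-concept alert(1) scores 5,
# harvesting the user base's session IDs scores 50.
POINTS = {"xss_poc": 5, "session_harvest": 50}

# Scoring matrix keyed by (attacking team, defending team, release number),
# since attackers can hit every deployment of every defending team.
scores = defaultdict(int)

def record(attacker, defender, release, finding):
    scores[(attacker, defender, release)] += POINTS[finding]

record("red1", "blue1", 1, "xss_poc")
record("red1", "blue1", 1, "session_harvest")
record("red2", "blue1", 3, "xss_poc")
```

Keeping the release number in the key shows whether a defending team's later releases actually closed the holes.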

Different attacking teams will adopt different strategies. Do they go after the low-hanging fruit that the defending teams will likely implement defences for in the first few releases, picking up some early points to add to their total, or do they risk wasting time developing exploits for the trickier issues that the defending teams will not fix until release 10?

Sunday, 13 January 2008

Secure Development Lifecycle

I've been a Developer for *cough* *cough* years, and in Application Security for *cough* years.

I've seen Security applied to Development in an after-the-fact manner where Security would tell Development "you just build it, we'll secure it". I've seen an Application Vulnerability Assessment (AVA) come in right at the end of a project, in typically waterfall-like fashion, and unearth fundamental flaws in the architecture of the application that required a massive undertaking to remediate. I've seen Development and Security almost coming to blows. And the most common: Security saying (or worse still, another team saying on behalf of Security) "no no no you can't do that", and when Development asks "but how should we do it?" the response is "dunno, just not like that, it's your job to develop, you're Development, we're Security".

But it's not all doom and gloom. I've seen the Security - Development relationship work because there was good, clear upfront communication and well-thought-out planning, or a great deal of upper-management buy-in, or just dumb luck and being in the right place at the right time.

Security and Development have not always played nicely together, but there really shouldn't be any reason they can't just get along.

There is (hopefully) nothing new in this post, it's just a list of a few recommendations that help to embed Application Security into the Software Development Lifecycle (SDLC), or to help you build a Secure Development Lifecycle (SDL).

Just the facts: The format is a Recommendation followed by a Why.

R1 Take measurements, so you can show improvement. Measure the number of Critical, High and Medium issues for each project through Static Code Analysis (SCA), Automated Web Scanning (AWS) and manual AVA. Record who has had what training and when. Measure the number of policy deviations per delivery stream. Measure time-to-fix; how long it takes a delivery stream to respond to issues being raised. Record where (in the SDLC) you find a security issue and make a rough calculation on remediation costs - ie the financial cost of people and systems that are involved and the impact cost of being pulled off what they are supposed to be involved in. This information can be used to demonstrate the value add of the SDL and can help offset the actual cost of the SDL implementation.

W1 If you can't measure, you can't manage. I don't know who said it, but I quite like it. If you're spending money on embedding an SDL, you'll want to be able to demonstrate it is adding value, that you're showing progress in a controlled fashion and that you're reducing risk. As such you'll need to be able to measure specifics both before implementing controls or giving training or adding tools and after such efforts so you can show the value added. It is important to use the same measurements and that you measure the same thing in the same way each time; ie use the same ruler. It's not always a good idea to share the results of the measurement with everyone; this can lead to loss of hearts and minds. No one likes being told that what they are currently doing is not best practice, or risky, or just plain wrong.
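A minimal sketch of the 'same ruler' idea: fix the severity scale up front and count every scan's findings against it, so successive measurements are comparable (field names here are assumptions):

```python
# Fixed severity 'ruler' so every measurement counts the same things
# in the same way; an unknown severity is an error, never silently dropped.
SEVERITIES = ("critical", "high", "medium")

def snapshot(findings):
    """Count findings by severity for one scan."""
    counts = {s: 0 for s in SEVERITIES}
    for finding in findings:
        severity = finding["severity"]
        if severity not in counts:
            raise ValueError("unknown severity: " + severity)
        counts[severity] += 1
    return counts

def delta(before, after):
    """Per-severity change between scans; negative numbers show improvement."""
    return {s: after[s] - before[s] for s in SEVERITIES}
```

The same snapshot/delta shape works whether the findings come from SCA, AWS or a manual AVA.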


R2 Thought should go into what policies you roll-out and how to make them visible, understood, enforced and measurable against what is delivered. It is important to encourage Development to get involved in the policy creation; if you're involved in the creation of something, you're more likely to understand and enforce it.

W2 Record and audit actual policy adherence or deviation. No one likes surprises, and that is what happens when continuous measurements are not performed. Your policies form the basis of your NFRs. Your NFRs can be expressed as user stories and can be included on the backlog of each and every project upon creation. You can probably take similar NFRs from other teams such as Infrastructure or a Performance team if your organisation has such a structure. I don't like reading reams and reams of documentation with unnecessary padding or fluff. Just the facts please. If you keep it simple and readable, guess what, it's more likely people will read it and understand it, or ask questions about it. All you need is to explain the gist of the policy, why it needs to be done, and link to any documentation to back it up. This may be a legal document, or a news article on recent hacking activity.


R3 Install, configure, and tune your tools. Don't just buy a tool and expect it to be a silver bullet; integrate it, tune it, reduce noise through a triage and verification process, create custom rules and make it part of the SDLC.

W3 Humans do not scale well, tools scale better. Tools don't have working hours, so they can continue to test whilst you sleep. Tools do what you tell them to do, consistently; they don't get tired, have off days, or skip steps. Use SCA, AWS and security-centric unit tests (UT). Test early, test often, test automatically, to borrow a phrase. Having all these tools in your pipeline will allow you to measure security risk on a per-check-in basis if you want that level of granularity. Remember, just because you have tools, it doesn't mean they'll find everything or that everything they find is an actual problem; you'll need to perform some analysis and verification on any findings. Encourage the use of some or all of these tools on the Developer workstation, as this enforces the rule that the earlier you find an issue, the cheaper it is to remediate, and what cheaper way than to never have it committed to your Source Code Management (SCM) system.
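To illustrate the workstation/pre-check-in point, a toy gate might look like this; the banned patterns are illustrative assumptions, and a real SCA tool does far more:

```python
import re

# Toy pre-check-in gate: flag known-dangerous sinks before the code ever
# reaches the SCM. These two patterns are illustrative, not a real rule set.
BANNED = [
    (re.compile(r"\beval\("), "eval() on untrusted input is an injection risk"),
    (re.compile(r"\bpickle\.loads\("), "unpickling untrusted data executes code"),
]

def check_source(text):
    """Return (line_number, message) for every banned pattern found."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in BANNED:
            if pattern.search(line):
                problems.append((lineno, message))
    return problems
```

Wired into a pre-commit hook, a non-empty result blocks the commit, so the issue never costs more than a developer's next five minutes.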


R4 Implement Continuous Integrity (CI).

W4 Your build pipeline allows you to commit your code at one end and at the other end out pops an artefact deployed to an environment. You get the same reports from each step within that pipeline every time you commit code. Ensuring that what you're reporting on is actually what is being deployed helps you retain the integrity of the build process. Ensure artefacts are only deployed through this pipeline and block access to random drive-bys. You can create an effective audit trail by implementing role-based access control (RBAC) on the SCM, ensuring it is the only way to effect change for any given stream.
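One hedged sketch of that integrity check: record the artefact's content hash when the pipeline builds it, and refuse to deploy anything that doesn't match (function names are assumptions):

```python
import hashlib

def artefact_digest(artefact_bytes):
    """Hash recorded by the pipeline at the moment the artefact is built."""
    return hashlib.sha256(artefact_bytes).hexdigest()

def verify_before_deploy(artefact_bytes, recorded_digest):
    """Deploy only what the pipeline built; a drive-by swap fails this check."""
    return artefact_digest(artefact_bytes) == recorded_digest
```

The digest ties the reports generated during the build to the exact bytes that land in the environment.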


R5 Stop saying No! and stop blocking.

W5 Comments such as 'You'll have to check with Security, they won't like it' or using Security as the reason for arguing against something you have no constructive reason to argue against is not helpful. Find a way to say Yes and help be a business enabler, not a disabler.


R6 Think about targeted training.

W6 Don't train everyone on everything and don't forget awareness training even at director level. Development need to understand the different classes of vulnerabilities and how to defend against them, or not introduce them, whereas management need to understand the impact of vulnerabilities not the technical details. A manager spending a day on XSS training, as an example, may not be money well spent; it's a day out of the calendar of the manager as well as the cost of the training and may be training that is never used. I'm not saying they shouldn't be aware of XSS, just that they may not require the same level of detail as a Developer would need.


R7 Create Security Champions within each development stream. A core Application Security team should also be in place and should foster the Application Security Community.

W7 Application Security teams don't scale well as the number of delivery streams grows along with the number of products or applications a business creates. The ratio of Application Security resources to Developers or development streams is a finger-in-the-air exercise. By having a Champion in each delivery stream, you can be sure Application Security will be represented within each stream. Hold a regular catch-up with the Application Security core team and the Champions. This helps foster a community. Make an effort to attend all of these meetings; if the Champions see indifference from the core team, then there is a danger this indifference will spread to the Champions. Encourage both Core members and Champions to give presentations. Always follow up the evening with some beers.


R8 Embed Security within the Architecture design process.

W8 Architecture teams usually have a process for initiating a project through high-level design documents to detailed design documents. Piggy-back this process if possible as trying to roll-out an additional review process is challenging. Getting involved early in the life of an application architecture helps you understand the application better thus allowing you to threat model more effectively.


R9 Push issue detection upstream.

W9 The earlier you find an issue, the cheaper it is to resolve. The most expensive way to resolve an issue is when it is found in production by a 'hacker'. This can involve everyone from Incident Response, Management, Development, Testing and Infrastructure to the Legal team, and distracts everyone from their current work, which may delay the rolling out of a product that makes money. A combination of developer training and tools exercised pre-check-in drives down the cost of each issue.
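The cost argument can be made with back-of-envelope arithmetic; the multipliers below are illustrative assumptions, not measured figures, but the shape (later = far more expensive) is the point:

```python
# Illustrative cost multipliers by the phase an issue is found in;
# the shape of the curve is the point, not the specific numbers.
PHASE_MULTIPLIER = {
    "developer-workstation": 1,
    "ci-build": 5,
    "pre-release-testing": 15,
    "production": 100,
}

def remediation_cost(base_fix_cost, phase):
    """Rough cost of fixing one issue, given where it was found."""
    return base_fix_cost * PHASE_MULTIPLIER[phase]
```

With a notional base cost of 100, a production find costs 10,000 once incident handling, legal and the opportunity cost of pulled-off staff are rolled in.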


R10 Spend money on someone really good.

W10 From experience, good people are expensive, and they are expensive for a reason. It's usually more cost effective to spend money on someone really good, who has been around the block, as it were, than to spend the same amount on two or three individuals who will spend time and money researching and hoping they come to a final consensus.


R11 Ensure that you're fixing the issues you find.

W11 Make sure you have Security representation during release and sprint planning. It's one thing finding issues and raising these defects on the delivery stream backlogs, but it's another thing getting issues that are not revenue generating, but revenue protecting, prioritised against features that have the potential to generate cold hard cash.


R12 Don't share all the results, all of the time.

W12 Set bug bars or you're in danger of losing hearts and minds. One strategy is to start off with one class of vulnerability, perhaps the most common within the code base, and concentrate on an awareness programme around it. This can take the form of a presentation and a demo: 'this is what the code looks like', 'this is how to fix it' and 'this is how to not introduce it in the first place'. Then move on to the next class of vulnerability. Always triage results before raising defects on the backlog of a delivery stream. Raising false positives undermines the entire effort. Ensure you only communicate issues that are actually issues. Start off with the highest severity of issues and treat the rest as 'noise'. Once the highest level has subsided, then start introducing issues from the next severity down.
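A sketch of that bug-bar filter (severity ordering and field names are assumptions):

```python
# Bug bar: only raise triaged, verified findings at or above the current bar;
# everything below it is treated as noise until the top tier subsides.
SEVERITY_ORDER = ("critical", "high", "medium", "low")

def apply_bug_bar(findings, bar):
    """Return the findings worth raising as defects under the current bar."""
    cutoff = SEVERITY_ORDER.index(bar)
    return [f for f in findings
            if f["verified"] and SEVERITY_ORDER.index(f["severity"]) <= cutoff]
```

Lowering the bar one notch at a time introduces the next severity down without flooding the backlog.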

There are a few more formal SDL frameworks around and this isn't an attempt to question them; it's meant as a quick win checklist and some observations based on past experience.