Monday, 21 June 2010

Passwords

Passwords are now so everyday that remembering a dozen or so long, complex passwords doesn't faze anyone. But why do passwords have to be so long and so complex? Well, you don't want the bad guys to be able to guess them or to apply brute force to 'crack' them and gain access to your private data, do you?

There are many factors that influence the strength of a password, and each of those factors used together can help protect your credentials and thus your data. If you're in doubt about how much your users know about protecting their passwords and choosing strong ones, you can always educate them.

  • Show them your password policy and explain why it is the way it is.
  • Explain the pitfalls of clicking on links in emails.
  • Explain that installing random software can be invasive and dangerous.
  • Provide links to tools that can help them generate a strong password, or to essays or research papers. Or spend a day writing your own, summarising all the facts important to your business.

This post is designed to be a guide to help you decide how to implement safe storage of passwords. It is not meant to be a complete, one-size-fits-all policy document, though it could be used as a basis for creating a password policy specific to your needs. I've also attempted to give some positive and negative impacts for each area.

Length, or "size is important"

Password length is the most important factor in password strength. Forget entropy, make your password as long as you can. Many policies enforce a minimum length and a mix of cases, numbers and special characters, and some even impose a maximum length.

The minimum length is important, but please make it longer than 8, which is extremely common. I don't understand why anyone would want to restrict the maximum length of a password. At the end of the day the resultant hash is of a fixed length, so longer user passwords won't have any negative impact on storage concerns. Provide a large character set from which to choose the password, but there is no real need to force a user to use upper/lower/numeric/special characters.

+ve longer passwords increase complexity, slowing down any brute force attack

-ve the longer the password, the more likely it is to be forgotten by the user, resulting in more password resets and potentially more helpdesk calls.
-ve a user not being able to log in at a given time could impact time sensitive sales
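To illustrate why length trumps complexity, here's a quick back-of-the-envelope sketch in Python (the character set sizes are illustrative):

```python
# Rough illustration: the brute-force search space grows exponentially
# with length, so adding characters beats adding symbol classes.
def search_space(charset_size: int, length: int) -> int:
    return charset_size ** length

# 8 characters from all 94 printable ASCII symbols vs 16 lowercase letters:
complex_short = search_space(94, 8)   # ~6.1e15 candidates
simple_long = search_space(26, 16)    # ~4.4e22 candidates
assert simple_long > complex_short    # the long, simple one wins
```

Even with a tiny alphabet, doubling the length dwarfs the gain from a richer character set.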


Lifetime, or "all change please"

You should impose a maximum lifetime of a password. You could implement expiration such that after a given period of time you prompt the user to change their password. You could do this at login, or you could send them a mail prompting them to change their password, maybe providing a link to the reset page.
It may be worth keeping a history of the passwords used. When a user is prompted to change their password according to the lifetime policy, the new password should not match any of the previous passwords used for the account. This prevents a password from a stolen database, which had expired and gone stale, from becoming valid again.

+ve any leaked password DB would only have a finite lifetime as the passwords would be stale after a given period of time
+ve informing the user to change their password may increase customer confidence and show you take security seriously.

-ve implementation of an expiration mechanism would have to be coded, which takes time and resource
-ve nagging the user may have a negative impact on user experience
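A minimal sketch of a password-history check, assuming salted SHA-256 hashes are stored per account (all the names here are illustrative, not a prescribed schema):

```python
import hashlib
import hmac

# Sketch only: assumes each historical password is stored as a
# (salt, hash) pair; your real scheme may differ.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + password.encode()).digest()

def violates_history(new_password: str, history: list) -> bool:
    """history is a list of (salt, hash) pairs for previous passwords."""
    return any(
        hmac.compare_digest(hash_password(new_password, salt), stored)
        for salt, stored in history
    )
```

On a password change, reject the new password when `violates_history` returns True; `hmac.compare_digest` keeps the comparison constant-time.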


Lockout, or "add some treacle"

It is quite common for hackers to brute force a live system. Obviously you've already implemented a set of rules that will alert you to such attempts, along with your cross-site scripting monitoring and alerting. You can take a couple of approaches here. You can lock accounts after a specified number of failed login attempts; any account that has been locked out would then require administrative intervention to re-enable it.

+ve this will quickly stop any brute force attempts

-ve calls to the helpdesk have a cost attributed to them, and businesses are in business to make money

An alternative is to add some treacle. After a given number of failed attempts, apply a pause, a delay until the next login attempt can be made.

+ve slows the brute force attempt down
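A sketch of the treacle idea, using an exponential delay once a threshold of failures is reached (the threshold, base and cap values are illustrative and would come from your policy):

```python
# Sketch of "treacle": exponentially increasing delay after repeated
# failed logins. The failure count would live in your user/session store.
def login_delay(failed_attempts: int, threshold: int = 3,
                base: float = 0.5, cap: float = 30.0) -> float:
    """Seconds to wait before the next login attempt is allowed."""
    if failed_attempts < threshold:
        return 0.0                                   # no penalty yet
    return min(cap, base * 2 ** (failed_attempts - threshold))
```

The cap stops the delay itself becoming a denial-of-service lever against legitimate users.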


Salting, or "pass the salt"

Appending a salt to the password prior to applying the hash function has the effect of lengthening the password. You should generate a different salt for each password. If you can store the salt in a separate database, then great: that will slow a brute force attempt on a leaked database, and storing salts in a different database reduces the likelihood of the hash and the salt being stolen at the same time. Adding the salt slows down the hash brute force; it may even take so long that the password has gone stale by the time it is cracked.

There is no harm in rotating salts each time the user logs in. This could be implemented asynchronously (thus not affecting the user experience), strengthening the argument for storing the salt in a different database to the hash; if the theft of the two databases is not synchronised, then the hacker doesn't have the expected salt for a given user.

You can always append an additional, optional application salt. This adds one more piece to the puzzle that the attacker must solve. Store this somewhere that the salt, the final hash and the algorithm are not stored, thus requiring the attacker to gain access to additional systems.

+ve slows down a brute force attempt
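A minimal sketch of per-password salting with an optional application salt, assuming SHA-256 (the names, and the idea of loading the application salt from a separate secret store, are illustrative):

```python
import hashlib
import os

# Illustrative only: in reality load this from a separate secret store,
# away from the salts, hashes and source code.
APP_SALT = b'load-this-from-a-separate-secret-store'

def make_credential(password: str):
    salt = os.urandom(16)   # a fresh random salt per password
    digest = hashlib.sha256(salt + APP_SALT + password.encode()).digest()
    return salt, digest     # store the salt and hash separately if you can
```

Verification simply recomputes the digest from the stored salt and compares.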


Key strengthening, or "stretch, and relax"

Employing a form of key stretching will strengthen any given password. Key stretching, by design, executes slowly; introducing a delay to the hashing process introduces the same delay in the brute forcing process.

+ve hacker needs to know how the algorithm is implemented, and for that they would need access to source code
+ve significantly slows down any brute forcing

-ve slowing down the password verification _could_ impact user experience when logging in
key = hash(password + salt)   # initialise key
loop 100,000 times
        key = hash(key + password + salt)

Why hash all 3? This reduces the chances of collisions
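The loop above, sketched in runnable Python (the round count and the use of SHA-256 are illustrative); in practice prefer a vetted KDF such as PBKDF2 from the standard library:

```python
import hashlib

def stretch(password: bytes, salt: bytes, rounds: int = 100_000) -> bytes:
    # Naive stretching loop matching the pseudocode above.
    key = hashlib.sha256(password + salt).digest()
    for _ in range(rounds):
        key = hashlib.sha256(key + password + salt).digest()
    return key

# The standard-library equivalent, which you should prefer in practice:
key = hashlib.pbkdf2_hmac('sha256', b'password', b'salt', 100_000)
```

Same inputs always give the same key, and the per-call cost is what slows the attacker down.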


Cryptographic functions, or "don't make a hash of it"

When choosing your hashing function you probably want to use SHA-2, though judging by the rate at which hash functions are broken, you'll want to ensure you can quickly and easily upgrade to the latest SHA implementation; so prepare for SHA-3.
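One way to keep that upgrade path open is to store the algorithm name alongside the salt and hash, so existing credentials can still be verified and migrated lazily. A sketch (the storage format is illustrative):

```python
import hashlib

# Record the algorithm with each credential so you can move from
# SHA-2 to SHA-3 without breaking existing accounts.
CURRENT_ALG = 'sha256'

def store_format(password: bytes, salt: bytes, alg: str = CURRENT_ALG) -> str:
    digest = hashlib.new(alg, salt + password).hexdigest()
    return f'{alg}${salt.hex()}${digest}'   # e.g. "sha256$<salt>$<hash>"

def verify(password: bytes, stored: str) -> bool:
    alg, salt_hex, digest = stored.split('$')
    return hashlib.new(alg, bytes.fromhex(salt_hex) + password).hexdigest() == digest
```

On a successful login with an old algorithm, re-hash with `CURRENT_ALG` and overwrite the stored record.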


Complexity indicator, or "a show of strength"

It is always good to give your users a visual indication of how strong their password is. You can add a graphical indicator on the registration page and on the password reset page. I would not, though, retrospectively send my users an email telling them that they must change their password because it has been found to be under strength; this would give the impression that somehow you have visibility of it.
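A very rough sketch of a strength indicator, scoring on length and character variety (the thresholds are illustrative; tune them to your policy):

```python
import string

# Toy strength score: length matters most, variety adds a little.
def strength(password: str) -> str:
    classes = sum(any(c in cls for c in password) for cls in (
        string.ascii_lowercase, string.ascii_uppercase,
        string.digits, string.punctuation))
    score = len(password) + 2 * classes
    if score < 12:
        return 'weak'
    if score < 20:
        return 'ok'
    return 'strong'
```

On the registration page, map 'weak'/'ok'/'strong' to the usual red/amber/green bar.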


I (don't) want to do whatever common people do

Common passwords have been collated for years. For example, 123456 was the #1 most common password for years. There are multiple sources from which you can download lists of the top 100, 1,000, 10,000 or 100,000 passwords. Use one of these lists to ensure your users don't use them.
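A minimal sketch of such a blocklist check, assuming you've downloaded one of those lists to a local file (the filename is a placeholder):

```python
# 'common-passwords.txt' is a placeholder for whichever list you download,
# one password per line.
def load_blocklist(path: str) -> set:
    with open(path, encoding='utf-8') as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_common(password: str, blocklist: set) -> bool:
    return password.lower() in blocklist
```

Load the set once at startup; membership checks are then effectively free on registration and password change.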


Sensible storage, or "Cover me, I need back up!"

Back up the source code (which will contain the algorithm), the per-password salt, the application salt and the resultant hash in isolation. There is no point in keeping them all separate in production if you then back them up to the same location. If you can, encrypt your backup.


Summary

There are many ways to increase the strength of passwords at rest, and to influence and educate your users in choosing strong passwords in the first place. There is no one right way and no one size fits all.
  • Educate your users/customers about secure passwords, they'll thank you for it.
  • Weigh up what is the right solution for your requirements.
  • Leave comments/suggestions/advice below (-;

Sunday, 13 January 2008

Secure Development Lifecycle

I've been a Developer for *cough* *cough* years, and in Application Security for *cough* years.

I've seen Security applied to Development in an after-the-fact manner where Security would tell Development "you just build it, we'll secure it". I've seen an Application Vulnerability Assessment (AVA) come in right at the end of a project in a typically waterfall-like fashion that unearthed fundamental flaws in the architecture of the application that required a massive undertaking to remediate. I've seen Development and Security almost coming to blows. And the most common is Security saying (or worse still, another team saying on behalf of Security) "no no no you can't do that" and when Development ask "but how should we do it?" the response is "dunno, just not like that, it's your job to develop, you're Development, we're Security".

But it's not all doom and gloom. I've seen the Security-Development relationship work because there was good, clear upfront communication and well-thought-out planning, or because there was a great deal of upper management buy-in, or from just dumb luck and being in the right place at the right time.

Security and Development have not always played nicely together, but there really shouldn't be any reason they can't just get along.

There is (hopefully) nothing new in this post, it's just a list of a few recommendations that help to embed Application Security into the Software Development Lifecycle (SDLC), or to help you build a Secure Development Lifecycle (SDL).

Just the facts: The format is a Recommendation followed by a Why.

R1 Take measurements, so you can show improvement. Measure the number of Critical, High and Medium issues for each project through Static Code Analysis (SCA), Automated Web Scanning (AWS) and manual AVA. Record who has had what training and when. Measure the number of policy deviations per delivery stream. Measure time-to-fix: how long it takes a delivery stream to respond to issues being raised. Record where (in the SDLC) you find a security issue and make a rough calculation of remediation costs, i.e. the financial cost of the people and systems involved and the impact cost of their being pulled off what they are supposed to be working on. This information can be used to demonstrate the value add of the SDL and can help offset the actual cost of the SDL implementation.

W1 If you can't measure, you can't manage. I don't know who said it, but I quite like it. If you're spending money on embedding an SDL, you'll want to be able to demonstrate it is adding value, that you're showing progress in a controlled fashion and that you're reducing risk. As such you'll need to be able to measure specifics both before implementing controls, giving training or adding tools, and after such efforts, so you can show the value added. It is important to use the same measurements and to measure the same thing in the same way each time; i.e. use the same ruler. It's not always a good idea to share the results of the measurement with everyone; this can lead to a loss of hearts and minds. No one likes being told that what they are currently doing is not best practice, or risky, or just plain wrong.


R2 Thought should go into what policies you roll-out and how to make them visible, understood, enforced and measurable against what is delivered. It is important to encourage Development to get involved in the policy creation; if you're involved in the creation of something, you're more likely to understand and enforce it.

W2 Record and audit actual policy adherence or deviation. No one likes surprises, and that is what happens when continuous measurements are not performed. Your policies form the basis of your NFRs. Your NFRs can be expressed as user stories and can be included on the backlog of each and every project upon creation. You can probably take similar NFRs from other teams such as Infrastructure or a Performance team if your organisation has such a structure. I don't like reading reams and reams of documentation with unnecessary padding or fluff. Just the facts please. If you keep it simple and readable, guess what: it's more likely people will read it and understand it, or ask questions about it. All you need is to explain the gist of the policy, why it needs to be done, and link to any documentation to back it up. This may be a legal document, or a news article on recent hacking activity.


R3 Install, configure, and tune your tools. Don't just buy a tool and expect it to be a silver bullet; integrate it, tune it, reduce noise through a triage and verification process, create custom rules and make it part of the SDLC.

W3 Humans do not scale well; tools scale better. Tools don't have working hours, so they can continue to test whilst you sleep. Tools do what you tell them to do, consistently; they don't get tired, have off days, or skip steps. Use SCA, AWS and security-centric unit tests (UT). Test early, test often, test automatically, to borrow a phrase. Having all these tools in your pipeline will allow you to measure security risk on a per-check-in basis if you want that level of granularity. Remember, just because you have tools, it doesn't mean they'll find everything or that everything they find is an actual problem; you'll need to perform some analysis and verification on any findings. Encourage the use of some or all of these tools on the Developer workstation, as this enforces the rule that the earlier you find an issue, the cheaper it is to remediate, and what cheaper way than to never have it committed to your Source Code Management (SCM) system.
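As a flavour of a security-centric unit test, here's a minimal sketch asserting that output encoding neutralises a basic XSS payload; `html.escape` stands in for your application's own encoder:

```python
import html

# Illustrative security-centric unit test: a known XSS payload must not
# survive output encoding. Swap html.escape for your own encoder.
def test_script_tag_is_escaped():
    payload = '<script>alert(1)</script>'
    encoded = html.escape(payload)
    assert '<script>' not in encoded
    assert '&lt;script&gt;' in encoded
```

Tests like this run on every check-in, so a regression in encoding is caught long before an AVA would find it.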


R4 Implement Continuous Integrity (CI).

W4 Your build pipeline allows you to commit your code at one end and at the other end out pops an artefact deployed to an environment. You get the same reports from each step within that pipeline every time you commit code. Ensuring that what you're reporting on is actually what is being deployed helps you retain the integrity of the build process. Ensure artefacts are only deployed through this pipeline and block access to random drive-bys. You can create an effective audit trail by implementing role-based access control (RBAC) on the SCM, ensuring that it is the only way to effect change for any given stream.


R5 Stop saying No! and stop blocking.

W5 Comments such as 'You'll have to check with Security, they won't like it' or using Security as the reason for arguing against something you have no constructive reason to argue against is not helpful. Find a way to say Yes and help be a business enabler, not a disabler.


R6 Think about targeted training.

W6 Don't train everyone on everything and don't forget awareness training even at director level. Development need to understand the different classes of vulnerabilities and how to defend against them, or not introduce them, whereas management need to understand the impact of vulnerabilities not the technical details. A manager spending a day on XSS training, as an example, may not be money well spent; it's a day out of the calendar of the manager as well as the cost of the training and may be training that is never used. I'm not saying they shouldn't be aware of XSS, just that they may not require the same level of detail as a Developer would need.


R7 Create Security Champions within each development stream. A core Application Security team should also be in place and should foster the Application Security Community.

W7 Application Security teams don't scale well as the number of delivery streams grows along with the number of products or applications that a business creates. The ratio of Application Security resources to Developers or development streams is a finger-in-the-air exercise. By having a Champion in each delivery stream, you can be sure Application Security will be represented within each stream. Hold a regular catch-up with the Application Security core team and the Champions; this helps foster a community. Make an effort to attend all of these meetings; if the Champions see indifference from the core team, then there is a danger this indifference will spread to the Champions. Encourage both Core members and Champions to give presentations. Always follow up the evening with some beers.


R8 Embed Security within the Architecture design process.

W8 Architecture teams usually have a process for initiating a project through high-level design documents to detailed design documents. Piggy-back this process if possible as trying to roll-out an additional review process is challenging. Getting involved early in the life of an application architecture helps you understand the application better thus allowing you to threat model more effectively.


R9 Push issue detection upstream.

W9 The earlier you find an issue, the cheaper it is to resolve. The most expensive way to resolve an issue is when it is found in production by a 'hacker'. This can involve everyone from Incident Response, Management, Development, Testing and Infrastructure to the Legal team, and distracts everyone from their current work, which may delay the rolling out of a product that makes money. A combination of developer training and tools exercised pre-check-in drives down the cost of each issue.


R10 Spend money on someone really good.

W10 From experience good people are expensive, and they are expensive for a reason. It's usually cost effective to spend money on someone really good, that has been around the block, as it were, rather than spending the same amount of money on two or three individuals who can spend time and money researching and hoping they come to a final consensus.


R11 Ensure that you're fixing the issues you find.

W11 Make sure you have Security representation during release and sprint planning. It's one thing finding issues and raising these defects on the delivery stream backlogs, but it's another thing getting the issues prioritised (which are not revenue generating, but revenue protecting) against features that have the potential to generate cold hard cash.


R12 Don't share all the results, all of the time.

W12 Set bug bars or you're in danger of losing hearts and minds. One strategy is to start off with a class of vulnerability, one that may be the most common within a code base for instance, and concentrate on an awareness programme around it. This can take the form of a presentation and a demo: 'this is what the code looks like', 'this is how to fix it' and 'this is how to not introduce it in the first place'. Then move on to the next class of vulnerability. Always triage results before raising a defect on the backlog of a delivery stream. Raising false positives undermines the entire effort. Ensure you only communicate issues that are actually issues. Start off with the highest severity of issues and treat the rest as 'noise'. Once the highest level has subsided, then start introducing issues from the next severity down.

There are a few more formal SDL frameworks around and this isn't an attempt to question them; it's meant as a quick win checklist and some observations based on past experience.