Avoiding the AppSec Blame Game - Part 1: 5 Reasons Why Application Security Fails
Several years ago, some well-meaning folks at SANS suggested that developers should be held liable for introducing vulnerabilities into their application code. Economic theory suggests that this is an efficient approach, since developers are in the best position to prevent security glitches from occurring. But blaming developers for application security problems is exactly the wrong thing to do.
In this blog, we'll look at a few ways application security fails in the software development ecosystem of many companies. In Part 2 next month, we'll discuss some simple things that you can do to change your software engineering culture to encourage security.
Let's start with a few of the most common anti-patterns that destroy progress in application security programs:
- Don't Even Try: Many organizations don't really try to write secure code. They may run scans and penetration tests, but that's not the same as building defenses into their applications. If vulnerabilities could be found and fixed accurately and inexpensively, this reactive approach might be viable. But they can't: a typical enterprise application has dozens of vulnerabilities, and on average each one costs $1,000 to detect and triage and $4,000 to patch.
- Blind Faith: Many organizations don't have their own workable definition of what "secure" means for their business, so they rely on external definitions, such as PCI DSS and other compliance regimes, scanner signatures or rules, the OWASP Top Ten, or whatever happens to be a penetration tester's latest interest. Those rules are designed for someone else's priorities or for what's easy to find, not what you should care about.
- Narrow Focus: Many security teams are very successful at finding vulnerabilities in single applications, but never bother to look across the entire application portfolio to seek out the root causes of these problems. The result is that only a small fraction of the portfolio gets assessed, and there is no visibility into common vulnerability patterns that might be strategically targeted.
- Audit Mentality: You can't make things secure by measuring their insecurity all of the time. If you're spending the majority of your application security budget on scans, pentests, and code reviews, you need to think about what actually helps development projects write secure code. It makes no sense to spend money verifying when only minimal effort was put into securing the application in the first place.
- Allowing Blame: When a vulnerability is discovered, the natural reaction is to look for a scapegoat. However, blaming developers creates a dangerous feedback loop where developers despise security, security teams exaggerate their findings to get attention, and everyone ends up blindly trusting applications.
These failings are natural and understandable. At their core, they stem from an innate desire to trust software even when there is no evidence either way about whether it's secure. When a tester discovers a vulnerability, that misplaced trust is shattered, and the knee-jerk reaction is to quickly patch the hole and restore 'secure' status. But this only gives a false sense of security.
In fact, this pattern can lead to a culture where discovering vulnerabilities is discouraged because it makes applications look insecure. Ultimately, organizations can end up using the lowest-cost security testing provider and generic scanning tools because they really don't want to find holes.
That's bad enough, but the most corrosive of all of these anti-patterns is also the simplest - allowing blame. Blame is a culture problem that destroys any hope of developers, architects and security specialists working together. Without that collaboration, reliably producing secure code is essentially impossible. Blaming a single role for a culture problem isn't fair. As Ice-T put it, "Don't hate the playa, hate the game."
Recently, I talked with a group of Java developers who candidly discussed their 'relationship' with their security team. Every one of them acknowledged the importance of security, but felt abandoned and discouraged about their company's process. They felt that security requirements were being made up and imposed upon them at the end of the process to make them look bad, and felt they had no voice in deciding whether an alleged vulnerability was serious or how to fix it.
I've taught thousands of developers how to design and build secure code, and all of them take pride in their code's security and want to learn defensive programming. I've also worked with many security specialists who are genuine in their desire to help make things more secure. Although it might seem like a good idea, putting these two groups in competition with each other destroys any chance that you'll be able to produce secure code.
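To make "defensive programming" concrete, here is a minimal sketch in Java of the kind of defense developers can build into an application up front, rather than relying on scanning to catch the flaw later: allowlist validation of untrusted input. The class and method names are illustrative assumptions, not anything from this article.

```java
import java.util.regex.Pattern;

// Illustrative sketch: reject untrusted input that doesn't match a strict
// allowlist, instead of trying to blocklist every dangerous character.
public class UsernameValidator {
    // Accept only 3-20 letters, digits, or underscores (an assumed policy).
    private static final Pattern SAFE = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    public static boolean isValid(String input) {
        return input != null && SAFE.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("alice_01"));       // true
        System.out.println(isValid("alice'; DROP--")); // false
    }
}
```

A check like this costs a few minutes during development; finding the equivalent injection flaw with a scanner or pentest after release is exactly the expensive, reactive path the anti-patterns above describe.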
Becoming an organization capable of reliably producing secure code is possible. The good news is that Microsoft and others have proven that it doesn't add significant cost to software development. It's more like fitness - it just takes dedication to living a particular way and restraint from destructive activities.
Next month, we'll talk about some effective ways to 'ruggedize' your security culture by bringing development and security teams together.
Jeff Williams is the co-founder of both OWASP and Aspect Security, a consulting company focused exclusively on application security and training services.