The Business Case for Software Security

Dr. Dobb's Journal March 2004

It's time for a change

By Herbert H. Thompson and James A. Whittaker

Herbert is Director of Security Technology at Security Innovation Inc. and James is a professor of computer science at the Florida Institute of Technology. They can be contacted at hthompson@sisecure.com and jw@se.fit.edu, respectively.

The software industry has the technology, processes, and skills to greatly reduce the number of vulnerabilities in released applications. For example, we understand the dangers of certain programming languages like C and C++ and how they can easily lead to the biggest security problem to plague modern software—the buffer overflow. As our knowledge has grown, though, so has the number of security vulnerabilities reported in released software. In the first three quarters of 2003, the CERT Coordination Center reported almost 3000 new software vulnerabilities. Software insecurity has become an epidemic.

For most vendors, the decisions of when to release an application, which programming language to use, what bug fixes can be postponed, and so forth, are business decisions—as they should be. Given the current state of software engineering, doing what it takes to weed out security problems can be expensive, and thus at odds with conducting a profitable business. It is possible to build much more secure software, but is it profitable? Is it worth it for vendors to make the investment?

The Business Case for Insecure Software

Until recently, the great cost-cutting machete has struck hard on activities that might improve security, like developer training and more rigorous testing. Marketing has ruled ship decisions; and resource allocation has favored new features, performance improvement, productivity enhancement, and compatibility rather than security and reliability. This seems reasonable given that product comparisons in the media, vendor web sites, and analysts all rate products based on such criteria. The decision to acquire a particular application or software "solution" depends heavily (and in some cases exclusively) on how it stacks up on things we know how to measure, and software vendors have historically spent money on improving these key characteristics. Software security has been conspicuously absent from that list.

There are no good measures or ways to compare security features. You could endlessly measure the performance differences in, say, Microsoft Exchange versus Lotus Notes. You could look at stress test results, side-by-side feature comparisons, and price. But what benchmark could you use to measure security that would be meaningful to consumers? The bottom line is that what consumers do not measure does not affect their purchasing decisions, and what does not affect purchasing decisions quickly draws the gaze of cost-cutting assassins.

Beyond measurement, legacy code that harbors security flaws can be expensive to replace. Several years ago, security was only a minor concern. As an industry, we did not understand the security implications of development decisions and we certainly did not code with the Internet and ubiquitous connectivity in mind. Many software companies have a large existing code base, so when it is time to develop a new application version, vendors face a tough decision: either reuse existing components as they are, or rewrite the old code with security in mind. The languages and development paradigms we typically use to build software encourage reusability. In addition, reuse is perceived as cheaper and more likely to preserve backward compatibility. Rewriting components can be costly and time consuming, and may require extensive reverse engineering to recover sometimes-lost requirements. Even then, there is no guarantee of security.

To add to the problem, software developers are usually not trained to write secure code. Up until a few years ago, there were no classes in any computer science curriculum at any university on secure coding practices or security testing. Developers are actually trained to ignore security concerns because traditional programming goals like usability, functionality, and performance usually run directly counter to security. Effectively retraining developers to think differently can be expensive. Microsoft, for example, recently spent over $100 million and stopped all product development for two months to train its development staff on security techniques.

Beyond development problems, testing for security is more time consuming than functional testing and requires testers to be more technical. Consumers have historically valued functionality and quality over "security," so resources have usually been allocated to building new features and exposing functional bugs. Many of the security bugs found by users and attackers could have been discovered by vendors before the application was released. Instead, penetrate and patch—the process of issuing patches to fix vulnerabilities in released software when they are found—has become the norm. It costs vendors to develop and deploy a patch, but it is the consumer who shoulders deployment costs.

Finally, there is the issue of liability. When a structural engineer signs off on the design of a bridge, and later the bridge collapses because of faulty calculations, the engineer and his or her company are usually liable. The same is true of other professionals or companies that offer goods and services: They can be financially liable for damage caused by their professional negligence. Software vendors are different. Security flaws in commercial software have cost consumers billions of dollars yet vendors have not been made to pay damages. The end-user licensing agreement (EULA) that we typically click through when installing software virtually absolves vendors of liability.

In years past, these issues have led vendors not to make the investment to fortify their software. For any business, though, it is the consumer that ultimately controls what vendors spend money to improve, and customer attitudes are beginning to change.

The Changing Tides

Just as business concerns have previously guided software vendors away from security, the bottom line is now influencing consumers to demand it. The result is the emergence of the security-savvy consumer. Previously, businesses considered the cost of acquisition and deployment the most important factor in buying a particular vendor's product. Now, corporations are looking at the Total Cost of Ownership (TCO), of which purchase price is typically only a small component.

Security costs consumers in ways we are not used to measuring. Gartner estimates that IT managers spend up to two hours a day managing patches. The 2003 CSI/FBI survey estimates that the cost of security breaches and denial of service attacks from external hackers tops $1.4 million for the average company. Then there are viruses and worms. SQL Slammer, for example, cost consumers an estimated $1.2 billion in lost productivity in its first five days alone. Most of these incidents are facilitated by security flaws in software.

All of these are costs to consumers of software, not vendors. Since software vendors have routinely escaped liability for the cost of their bugs, none of this has significantly impacted vendor bottom lines.

That may change. Technology analysts are now advising their clients to demand security from vendors and with good reason. One of the biggest costs to deploying an application is the recurring cost to manage patches. These are costs that are difficult if not impossible to estimate at purchase time because security is not a quality of software that is easily quantifiable.

Nevertheless, corporate customers are starting to gather security-relevant data on their vendors. Corporations are beginning to ask tough questions and extract critical information from vendors: What is your security strategy? How have you improved your development process to prevent the types of vulnerabilities found in previous releases? What is your patch deployment model? It also makes sense for corporations to independently assess their most critical applications for security. This may involve setting up small, focused security testing teams and running penetration tests on products in a controlled environment before deployment decisions are made. Considering the cost and expertise needed to perform security assessment, many corporations are outsourcing this work to security companies. Corporations are also looking at security indicators such as the number and type of vulnerabilities reported in a vendor's other products or previous releases. No matter what the assessment method, consumers are starting to care about security, and vendors are finally beginning to respond.

Where the Industry is Headed

It is clear that in competitive software markets, security will become a key discriminator. Corporate users are likely to lead the charge since they have been the most adversely affected by insecurity in software and have the most to lose. Software vendors will need to convince their customers that they consider security a high priority. What is not clear is how to measure progress, but there are signs that consumer security demands are starting to change the way that software is produced.

Large vendors such as Microsoft and IBM have made strong internal pushes for securing their software, and other vendors are following. It may be starting to work. The number of new vulnerabilities reported in software in 2003 by CERT is very close to the number reported in 2002. This may be a significant indicator of improvement since 2003 will be the first year that this number has not doubled. Also, the most recent CSI/FBI Computer Crime and Security Survey reports a 56 percent reduction in the annual losses incurred by corporations due to security breaches in 2003.

Indeed, we are beginning to see signs of a revolution. As with most revolutions, the masses are beginning to clamor. It will be interesting to see what balance of security versus usability, functionality, and price users are prepared to tolerate.

DDJ