
“Geekonomics is really about the economics of a technology … and the positive and negative impact on us.”
That’s according to David Rice, author of Geekonomics: The Real Cost of Insecure Software.
In his book, Rice discusses how software defects severely impact the U.S. economy and national security.
In an interview with the ECT News Network Podcast Series, Rice lays out some of the key points in his new book.
One of those points is that perfection is impossible. The goal is not to create perfect software, but to get software manufacturers to minimize the vulnerabilities in their products so those products are less susceptible to intrusion by hackers.
Contrary to popular opinion, hackers don’t break software. They simply seek out and leverage existing vulnerabilities.
Software defects and hacker attacks cost the U.S. economy between US$20 billion and $100 billion annually, Rice said. That figure includes all of the additional security protection consumers are forced to buy, as well as revenue lost elsewhere in the economy as money flows into managing software defects.
To reduce software vulnerabilities, he said, the software industry should institute a ratings system similar to the National Highway Traffic Safety Administration's rating system for automobile safety.
Software manufacturers have no market incentive to minimize defects, Rice said. Such an incentive would likely involve legal, regulatory, and legislative liability.
Finally, Rice posits that software defects are also a national security issue as foreign governments sponsor attacks on U.S. assets, inside and outside of government, and steal intellectual property.

Here are some excerpts from the interview:
E-Commerce Times: One of the things you focus on is software defects. How would you define a software defect?
David Rice:
When we look at software, one of the recognitions is that, like any product, perfection is impossible. So a defect or a flaw is something that the manufacturer failed to detect that affects the quality, the reliability or the security of the product itself. And when we start considering the amount of imperfection we're going to tolerate, that's really what Geekonomics focuses on.
ECT: What have you determined in your analysis would be a tolerable level of defects in software? Whenever a new version of something comes out — Windows is one obvious example — there are always defects. So how do you determine what's tolerable and what's not?
DR:
No one can really come up with an answer to that unilaterally. When we look at safety in vehicles, for example, there's no such thing as a perfectly safe vehicle. So what we use is something like the NHTSA five-star rating system. What it says is that with a five-star safety rating, you have a 10 percent chance of dying in a collision, and we have different ratings for different types of impacts, and so on. What that says is, perfect safety is impossible, so what we can do is give you a pretty good guess of how safe you're going to be, assuming safe driving practices.
So when we look at software, we have nothing like that, so it's very difficult for any one individual — even a government — to say, “this is the level of security that you should absolutely abide by.” What a rating system does instead is give consumers the ability to purchase at the level of risk they're willing to accept. When you buy a five-star rated vehicle, you're actually purchasing a very low-risk vehicle — or at least the risk of anything happening to you in that vehicle is quite low, given safe driving practices.
Software has nothing comparable. Buyers are entering the software market completely blind. A labeling scheme would actually let consumers choose the level of risk they're willing to purchase and to tolerate.
ECT: What’s the status of the labeling scheme right now? Is that something that’s in the works, or is it something that’s not being considered yet?
DR:
It’s very rudimentary at this point. What makes this situation almost asinine is that for 50 years, we’ve known how to make really good software, but the market chooses not to — mainly because it doesn’t have an incentive to do so. So, the techniques, the tools, the practices have all been there, but they really haven’t been employed consistently enough to give us the type of security we need.
So there are measures out there, managed by either [the U.S. Department of Homeland Security] or the Mitre Corporation, and there are companies like Ounce and Fortify and Veracode that all have the rudimentary elements of the ability to start labeling software so that consumers can make this choice. We have a ways to go, but we have the capability to do so; we simply need to add the fuel to get that going.
ECT: What would be the fuel? What would light a fire under software developers to get them to move in that direction?
DR:
That’s a challenging aspect of this. When we start talking about what the software market does, we have to introduce a term called “market failure.” Market failure simply acknowledges that people will not self-correct, despite large negative externalities. Basically, they’re not going to do something that makes them worse off.
What we mean by this is that manufacturers aren’t going to add the additional cost and complexity of doing additional security testing on their software if they don’t need to. So how do we correct a market failure? How do we force people to self-correct? One way of doing that is similar to the NHTSA Five-Star rating system. Auto manufacturers in the ’50s and ’60s weren’t trying to make unsafe cars — but they were. The same thing can be said of software manufacturers: They’re not trying to make bad software — but they are.