When It Comes to Security, Openness Isn't Always a Virtue - Pro: Sam Ransbotham

By Katherine Noyes
Jun 15, 2010 7:00 AM PT

Ask fans of FOSS what keeps them loyal to free and open source software, and security will likely be high on the list of advantages they cite. Ask proponents of paid, proprietary software, however, and they may well say the same thing.

Therein lies a paradox that has brought no small amount of consternation to parties on both sides for many years.

On the one hand, there's the argument that open source software is more secure because the broad availability of its code ensures that any problems will be identified and fixed sooner. On the other, there's the counterargument that this very openness makes the code more vulnerable to malicious attempts to exploit any weaknesses.

Which is correct? There's no shortage of users and data-equipped market researchers ready to support each side.

Still, the controversy continues.

In this, the first in LinuxInsider's new FOSS Face-Off series, we'll give both sides a chance to have their say. Then, in classical debate fashion, we'll also give each a chance to respond to the other's comments. From there, it will be up to you, the reader, to draw your own conclusions.

First up: Sam Ransbotham, assistant professor of information systems at Boston College's Carroll School of Management, on why openness isn't always a virtue.

LinuxInsider: How do you define security when it comes to software -- that is, what makes one piece of software more secure than another, in your estimation?

Sam Ransbotham: Security is a difficult problem with a variety of issues, such as vulnerabilities, privacy, piracy, stability and dependability, and complications, including externalities, incentive structures and market power -- most of which defy easy metrics and change rapidly.

In fact, a big part of the problem is assessing the relative security of two pieces of software considering multiple aspects simultaneously. We must then make choices and compromises without good ways to evaluate them.

As an example, the absolute number of vulnerabilities reported in released software means little without ways to weight for severity or to adjust for scope and features. Given that complete security is not possible or practical, we need much more empirical research and analysis to understand the related costs and benefits.

LinuxInsider: Some advocate security through obscurity, while others believe that open code will help problems get fixed sooner. Which side do you think is right?

Ransbotham: As with most things, neither option is a panacea. Security involves many different motivations, incentives and externalities. Even the question focuses on only part of the problem -- fixing. But there is much more involved than just fixing code. Many of the benefits of open code come in the pre-release vulnerability discovery process.

After discovery, prior studies show that open source code gets fixed faster. I also believe open source users are quicker about deploying corrections. Having the source code should help defenders get countermeasures in place after discovery but before corrections are deployed. These are all benefits of transparency.

My recent research, however, indicates that open source code can help attackers create exploits once a vulnerability is discovered. This is a cost of transparency.

My sense is that the transparency benefits far outweigh this cost. Nevertheless, it indicates to open source users that they need to be vigilant about post-discovery activities to mitigate this downside. I would like to see more research that measures and quantifies these costs and benefits -- it is a difficult problem.

LinuxInsider: It seems to be a given that a certain amount of software insecurity is a function of the broadness of its adoption. That being the case, if people seeking better security are persuaded to adopt a less-targeted system, wouldn't the logical outcome be greater vulnerability due to increased popularity? That seems to be what happened to Firefox, for example, after it became more competitive with the "less secure" Internet Explorer.

Ransbotham: While their individual perceptions of "what is valuable?" will vary, rational attackers pursue paths that yield the greatest net reward. The value of systems, effort/cost of compromise, and risk/penalty of punishment all influence this net reward -- and increased popularity could affect each component of the net reward in different ways.

As you mentioned, increased popularity would likely correspond to increased value, with more systems as potential targets. However, the key is the value of systems rather than strictly the breadth of adoption. It depends on who adopts, combined with the attacker's motivation.

Considering an extreme case of a targeted attack, additional adopters may serve to hide the "needle" in a larger "haystack." Additional adopters could also increase resources allocated for defenses or allow sharing of countermeasure costs. Overall, the net effect of something like this is difficult to determine without analysis. It makes researching these topics interesting and important.
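Ransbotham's "net reward" framing above can be made concrete with a toy calculation. This sketch is purely illustrative, with hypothetical numbers and function names not drawn from the interview; it simply expresses net reward as value minus effort minus expected punishment, the three components he lists.

```python
# Toy illustration of the attacker "net reward" framing from the interview.
# All numbers and names here are hypothetical, chosen only for intuition.

def net_reward(value, effort_cost, detection_prob, penalty):
    """Expected net reward = value - effort - (probability of being caught * penalty)."""
    return value - effort_cost - detection_prob * penalty

# A niche system: low value, but cheap to attack and rarely policed.
niche = net_reward(value=10, effort_cost=2, detection_prob=0.05, penalty=50)

# A popular system: higher value, but costlier to attack and more scrutinized.
popular = net_reward(value=100, effort_cost=40, detection_prob=0.3, penalty=50)

print(niche, popular)  # 5.5 45.0
```

The point of the toy model matches the interview: popularity can raise the value term while simultaneously raising the effort and detection terms, so the net effect on attacker incentives is ambiguous without measuring all three.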

LinuxInsider: Even Google has now dropped Windows internally, apparently, citing security reasons. What is your opinion of that move?

Ransbotham: In the interests of full disclosure, I should mention first that Google has funded some of my unrelated research. That aside, however, it certainly is an interesting development.

While security may be one part of the decision, it seems likely there are others. Remember that Google is increasingly a Microsoft competitor, such as with its Chrome operating system, search and office applications. Therefore, it may be more about strategy in general than security specifically. If so, it would not be the first time that security was used to justify a change with different underlying motives.

Regardless of the motives, though, I believe the attention to and visibility of the change will increase competitive pressures and will be good for the industry. From a security perspective, it certainly raises the stakes for Google with Chrome. The first big, and realistically inevitable, security problem in the post-Windows environment at Google will get a lot of attention. That should create additional incentives for Google to emphasize security.

LinuxInsider: What about the Obama administration and other governments that have begun to embrace open source software? Do you think that's a problem for national security?

Ransbotham: Not intrinsically. Regardless of the source, all software has security issues. I don't think increased reliance on open source software would be a problem for a specific adopter.

Going back to the discussion of net reward and increased adoption, whatever system a government adopts would boost the aggregate value of that system, particularly for attackers with political or national motivations. But in the case of open source, it may also increase government-sponsored contributions to open source and, at a minimum, increase the number of eyeballs looking at the source code.

It is also interesting to think about the effect that increased government open source adoption would have on non-government open source users -- there is the potential for negative externalities. It could be the virtual analogy to your next-door neighbor announcing publicly that they keep large amounts of money in their mattress. Your neighborhood might attract more criminal activity as a result. Similarly, all other open source users may experience more attacks as attackers find increased incentives and net reward from exploitation activities.

LinuxInsider: Overall, when it comes to security, do you think openness is a virtue or a vice?

Ransbotham: Overall openness is a virtue, but not without elements of vice. The challenge for open source communities is to maintain the benefits while mitigating the downsides.

My recent research points out one part of the vulnerability process, where attackers may benefit from seeing open source, and tries to quantify that benefit.

I think we need more research and analysis to understand better how these virtues and vices combine. Open source communities can then use that information to respond to the ever-changing threat environment.

Next in the series: "When It Comes to Security, Openness Isn't Always a Virtue - Con: Joe Brockmeier."
