
FOSS FACE-OFF

When It Comes to Security, Openness Isn’t Always a Virtue – Con: Joe Brockmeier

The question of security has plagued proponents of proprietary and open source software alike for as long as there has been a choice between the two.

Is free and open source software more secure by virtue of the many pairs of eyes that can see the code, identify vulnerabilities and fix them? Or is “security through obscurity” the safer route because it keeps the code hidden, out of the reach of those with malicious intent?

In Part 1 of this three-part debate series devoted to the topic, Boston College professor Sam Ransbotham gave his views on “the cost of transparency” from a security perspective.

Now, Joe Brockmeier, GNOME PR team lead and former openSUSE community manager for Novell, makes the case that FOSS is the more secure option.

LinuxInsider: How do you define “security” when it comes to software — that is, what makes one piece of software more secure than another, in your estimation?

Joe Brockmeier:

Good design; as few vulnerabilities as possible; and, when vulnerabilities are discovered, a project or vendor that fixes them as quickly as possible. The software should also do what it’s expected to do, and users and security experts should be able to examine it to verify how it operates and confirm that it’s doing what it should, and no more.

LinuxInsider: Some advocate security through obscurity, while others believe that open code will help problems get fixed sooner. Which side do you think is right?

Brockmeier:

When you’re talking about the source of a program, I believe security through obscurity is a joke. Internet Explorer, for example, has been found to have many exploitable vulnerabilities. Those were not discovered because IE’s code is open (it isn’t); they were discovered through other means. It’s entirely possible for motivated and talented attackers to discover vulnerabilities in software without access to the source code. But lack of access to source code hinders those who would like to discover and fix vulnerabilities as soon as possible.

Obscurity has some value when it comes to things like network design and, obviously, usernames, etc. But, just as an example, the “obscurity” of Microsoft’s code has done little to nothing to help protect its software from attack.

LinuxInsider: It seems to be a given that a certain amount of software insecurity is a function of the broadness of its adoption. That being the case, if people seeking better security are persuaded to adopt a less-targeted system, wouldn’t the logical outcome be greater vulnerability due to increased popularity? That seems to be what happened to Firefox, for example, after it became more competitive with the “less secure” Internet Explorer.

Brockmeier:

I don’t agree with this premise. Yes, more users means that a given piece of software may receive more attention from attackers. Note that I said “may” and not “will.” Firefox is hovering around 25 percent market share, depending on who is measuring, but how many successful exploits have been written to attack Firefox?

Firefox may issue a fair number of security updates to address potential vulnerabilities, but that is not the same as exploits in the wild. That’s a fairly common mistake — or deliberate tactic — used when discussing software security: equating the number of security patches with actual exploits.

LinuxInsider: Even Google has now apparently dropped Windows internally, citing security reasons. What is your opinion of that move?

Brockmeier:

Dropping Windows is not a bad idea if a company can do so, but it shouldn’t be the only component of a security strategy — and I’m sure it’s not Google’s only security measure.

LinuxInsider: What about the Obama administration and other governments that have begun to embrace open source software? Do you think that’s a problem for national security?

Brockmeier:

No, not at all. On the contrary, I think it’s a positive step. It worries me that so much of our national security is dependent on closed source software that we can’t examine.

LinuxInsider: Overall, when it comes to security, do you think openness is a virtue or a vice?

Brockmeier:

When you’re talking about source code for applications, operating systems, etc., I think openness is a virtue. Sure, an attacker has access to the source code — but so does everyone else. Anyone concerned with the security of a system can examine the code and look for flaws. More importantly, anyone can *fix* the flaws, which is not possible with closed source applications.

As an added thought, it’s worth noting that a lot of our computing and data is shifting to mobile devices and “the cloud,” with applications hosted by third parties and users dependent on those third parties not only to provide secure applications, but also to notify them when security has been breached.

Companies have not shown a tendency to be entirely forthcoming about security breaches unless they have to be. It’s not only impossible to examine the code for vulnerabilities, it’s also impossible to know exactly what is being done with your data. This should scare the hell out of people when talking about their personal data.

