FOSS: Insecure by Design?
New research from Boston College's Sam Ransbotham suggests open source software's accessible nature makes it more susceptible to security exploits, and that those exploits reach higher penetration levels. Critics point out that openness works both ways -- malicious hackers may have access to all the code, but so do those who aim to patch holes and solve security problems.
Jun 8, 2010 5:00 AM PT
Open source software is often considered more secure than its proprietary counterparts, primarily by virtue of the many sets of eyes that can find and patch vulnerabilities -- but a new report suggests otherwise.
In fact, the broad visibility of open source code serves to make it more easily exploitable, according to Sam Ransbotham, assistant professor at Boston College's Carroll School of Management.
"My theoretical development and empirical results indicate that, compared with closed source software, vulnerabilities in open source software: (a) have increased risk of exploitation, (b) diffuse sooner and with higher total penetration, and (c) increase the volume of exploitation attempts," Ransbotham wrote.
Ransbotham's paper, "An Empirical Analysis of Exploitation Attempts Based on Vulnerabilities in Open Source Software," will be presented Tuesday at the Ninth Workshop on the Economics of Information Security at Harvard University.
Attempts Three Days Sooner
Ransbotham's study involved an empirical analysis of log data from intrusion detection systems to examine the risk, diffusion and volume of exploitation attempts. The log data spans two years -- including 400 million alerts -- and was generated by 960 clients of SecureWorks, a managed security service provider.
Also included in the study were detailed vulnerability data from the National Vulnerability Database (NVD) and a manual classification as "open" or "closed" of the software products associated with each vulnerability.
According to the findings, exploitation attempts on open source software occur about three days sooner than those on proprietary software, while the overall penetration rate of those exploits is roughly 50 percent higher.
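The kind of measurement the study describes -- joining vulnerability disclosures with intrusion-detection alerts to compare time-to-first-exploit and penetration across open and closed source software -- can be illustrated with a toy sketch. All of the records, field names, and numbers below are invented for illustration; they are not drawn from Ransbotham's data, and only the 960-client figure comes from the article.

```python
from datetime import date

# Toy records standing in for NVD disclosures joined with IDS alert logs.
# Every CVE identifier and value here is illustrative, not from the study.
vulns = [
    {"cve": "CVE-A", "open_source": True,  "disclosed": date(2008, 1, 1),
     "first_alert": date(2008, 1, 4),  "clients_hit": 120},
    {"cve": "CVE-B", "open_source": False, "disclosed": date(2008, 2, 1),
     "first_alert": date(2008, 2, 8),  "clients_hit": 80},
    {"cve": "CVE-C", "open_source": True,  "disclosed": date(2008, 3, 1),
     "first_alert": date(2008, 3, 5),  "clients_hit": 150},
]
TOTAL_CLIENTS = 960  # number of monitored clients in the study

def summarize(records):
    """Mean delay to first exploit attempt (days) and mean penetration rate."""
    delays = [(r["first_alert"] - r["disclosed"]).days for r in records]
    penetration = [r["clients_hit"] / TOTAL_CLIENTS for r in records]
    return sum(delays) / len(delays), sum(penetration) / len(penetration)

for label, is_open in (("open source", True), ("closed source", False)):
    group = [r for r in vulns if r["open_source"] is is_open]
    delay, pen = summarize(group)
    print(f"{label}: mean delay {delay:.1f} days, "
          f"mean penetration {pen:.1%} of clients")
```

On real data, the comparison would of course control for software popularity, patch availability, and vulnerability severity, as an empirical study must; this sketch only shows the basic bookkeeping behind the two headline metrics.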
The availability of a security signature, meanwhile, not only increases the risk of vulnerability exploitation, but it also reduces delay and increases penetration, the study found, leading Ransbotham to suggest that "attackers learn to exploit a vulnerability by reverse engineering the associated signatures."
Highly complex vulnerabilities are also more likely to be exploited, the study found.
'All Software Is Susceptible'
The topic of Ransbotham's study "centers on the longstanding debate of security through obscurity -- that is, code is proprietary and closed to public view -- versus the advantage of many eyes on the code when it is open source and available for review," Jay Lyman, an analyst for open source with The 451 Group, told LinuxInsider.
"I'm not going to pick a side in that, as both have legitimate reasoning, pros and cons," Lyman added. "However, I will say that any code, open source or not, is potentially vulnerable to exploitation and attack."
Further, "for every additional vulnerability that is discovered in open source software, we must ask ourselves how many vulnerabilities in proprietary code have been discovered, but not reported?" Lyman pointed out. "In addition, if attackers are leveraging a defect or vulnerability in the code, they will likely have access to the code they need, whether it is proprietary or not."
In the end, "all software is created by people and is susceptible to attack," Lyman concluded. "There are a number of variables -- code complexity, quality, modularity, interdependencies, testing, QA and overall ruggedness -- that are far more important than whether code is open source or not."
'That Works Both Ways'
The results of such a study depend on "how you measure the vulnerability of a particular piece of software," Johannes Ullrich, chief technology officer at the SANS Institute, told LinuxInsider.
"It's definitely true that it's easier to find a particular flaw if you have the source code available," Ullrich explained. "But that works both ways -- for the bad guys and the good guys."
Finding flaws is also only part of the picture, he noted.
"You also have to look at how hard it is and how long it takes to fix a flaw once it's found," Ullrich pointed out. "That's typically easier in well-maintained open source software because many people are working on it."
In fact, with open source, "you can essentially do it yourself as a user of the software," he added, "whereas with closed software, you're at the mercy of the company that made it."