In the wake of Heartbleed, open source software is under scrutiny
Heartbleed, the OpenSSL vulnerability that potentially exposes data in otherwise secure transactions, has raised once again the question of security in open source software. Some maintain that open source, because of all the eyes looking at it, is likely to be more secure. Others say that with anyone able to tamper with it, of course it can’t be trusted.
The truth is that neither open source nor proprietary software is inherently more secure. A recent Coverity report, based on security scans of 750 million lines of code across 700 open source projects, found for the first time that open source code surpassed proprietary code in quality, with an error density of 0.59 defects per 1,000 lines of code for open source compared with 0.72 for the proprietary code scanned.
But that is not the whole story, said Zack Samocha, Coverity’s senior director of products. The quality of the code depends on the commitment to security, he said. “You need to be really committed and fix issues as they come.”
Proprietary code produced under the pressure of market demands can also be buggy. Users should test the products they are using to make sure they do what they are supposed to do. And if you are using open source, review the code and take part in its development, said Barrett Lyon, founder and CTO of Defense.Net.
“You should contribute,” Lyon said. “If it’s open source and it’s not secure, it’s partly your fault.”
With all of the eyes that could have looked at OpenSSL, the surprising thing is that Heartbleed escaped detection for two years. How did it happen in the first place? Like many security problems, it was the result of a trade-off with performance. Earlier versions of OpenSSL had a memory management security feature that prevented memory leaks. But Lyon, in reviewing the code, saw that in version 1.0.1 a developer disabled the feature because it hurt performance in some implementations.
“It comes down to humans making mistakes,” said Jonathan Rajewski, an assistant professor and director of the Leahy Center for Digital Investigation at Champlain College. “This is an ‘ah-ha!’ moment that we can learn from, because it will happen again.”
The "fix" that caused the problem is present in OpenSSL versions 1.0.1 through 1.0.1f. The flaw was corrected in version 1.0.1g.
One of the most serious concerns for agencies using affected versions of OpenSSL is the possibility of digital certificates being exposed, which could allow an attacker to spoof a government site, not only putting visitors to the site at risk, but damaging the agency’s reputation.
But there are a few bright spots. Although security experts preach the gospel of the timely update, this is a situation where procrastination could be a blessing. Many government administrators are conservative in their updating, a diplomatic way of saying they don't have time to keep their software up to date and leave alone anything that is working. This means that many agencies are still running OpenSSL 1.0.0 versions, which are not vulnerable.
Another bright spot is that there are tools to detect vulnerable implementations of the software. The Nessus scanner from Tenable, for example, will detect it through its remote and local checks. And although exploits of the vulnerability don't leave direct footprints, there are some telltale signs. Because exposed memory is gathered in 64-kilobyte chunks, it takes a large number of repeated requests to gather useful information, and this kind of anomalous behavior should show up in server logs.
For those affected, updating OpenSSL in their applications, revoking and replacing digital certificates and reissuing keys will be a chore. But there is yet another challenge. Because users are being advised to replace their passwords, a flood of reset requests could overwhelm help desks or automated reset systems.
“It’s going to be interesting,” Lyon said.
Posted by William Jackson on Apr 18, 2014 at 11:33 AM