Open source components not always secure

How secure are your open source-based systems?

Responsibility for secure open source software is, well, complicated.

Some believe open source is more secure than proprietary software because, as Linus’s Law says, “Given enough eyeballs, all bugs are shallow.” That is, the more widely available open source software is, the more scrutiny it receives, the more flaws are surfaced and the stronger the code becomes.

That would be true if components that make up open source code were regularly reviewed and if developers verified the security of components before incorporating them into their work.

But that’s not always the case. Like automobile assembly plants that build cars with independently manufactured airbag and brake components, software developers often assume that open source components in their supply chain are reliable, patched and up to date.

Unfortunately, assumptions like that allow for vulnerabilities like those that were behind the Heartbleed bug.

Flaws exist in open source software for a variety of reasons: components might be old, or they might not have been mature when first used. Or they might not have been audited or adequately tested. But often, once an open source component makes it into a widely used application, it is assumed to be secure, and demand for testing diminishes.

It’s not just open source code that’s vulnerable. Much proprietary software uses open source components. According to Gartner, 95 percent of all mainstream IT organizations will leverage some element of open source software – directly or indirectly – within their mission-critical IT systems in 2015.

And in an analysis of more than 5,300 enterprise applications uploaded to its platform in the fall of 2014, Veracode, a security firm that runs a cloud-based vulnerability scanning service, found that third-party components introduce an average of 24 known vulnerabilities into each web application.

To address this escalating risk in the software supply chain, industry groups such as The Open Web Application Security Project, PCI Security Standards Council and Financial Services Information Sharing and Analysis Center now require explicit policies and controls to govern the use of components, according to Veracode.

The use of open source in federal systems is also attracting scrutiny. In December, House Committee on Foreign Affairs Chairman Ed Royce (R-Calif.) and Rep. Lynn Jenkins (R-Kan.) introduced the Cyber Supply Chain and Transparency Act of 2014 (H.R. 5793) that would have required any supplier of software to the federal government to identify which third-party and open source components are used and verify that they do not include known vulnerabilities for which a less vulnerable alternative is available.

The bill also would have required the Office of Management and Budget to issue guidance on setting up an inventory of vulnerable software and replacing or repairing known or discovered vulnerabilities. Agencies would have had to annually report on the security of projects using open source components and their suppliers for reference by other agencies.

The bill is important because, as Rep. Royce said in his introductory remarks, much of the nation’s economy relies on software with open source components.

“It is precisely because of the importance of open source components to modern software development that we need to ensure integrity in the open source supply chain, so vulnerabilities are not populated throughout the hundreds of thousands of software applications that use open source components,” Royce said.

But not everyone thought the proposed bill was necessary. Trey Hodgkins, senior vice president for public sector at the IT Alliance for Public Sector, told Government Technology that he thought H.R. 5793 duplicated security measures many companies already use.

Do you know what’s in your software?

“We cannot afford to include known exploitable software in our government infrastructure,” said Wayne Jackson, CEO of Sonatype Inc., a software supply chain service provider that is the steward of the Central Repository, the largest source of Java components, as well as creator of the Apache Maven project and distributor of the Nexus open source repository manager.

Today, 90 percent of a typical application is composed of open source and third-party components, Jackson wrote in a blog post. The Central Repository logged 17.2 billion downloads in 2014 – more than 47 million components every day.

That makes the inventory of open source components critical, Jackson said, because without it, IT managers can’t know if their systems contain compromised components.

One way to check is with Sonatype’s Application Health Check, which provides a free breakdown of every component in an application and alerts IT managers to potential security and licensing problems.

“When open source is found to be defective, it’s disclosed, but if you don’t know what’s in your software, that disclosure tips off adversaries who can use it to exploit vulnerabilities,” Jackson said. And hackers get the biggest bang for the buck by going after the components that are widely used, as the OpenSSL/Heartbleed attack demonstrated.

And it’s not just enterprise business software that’s vulnerable, Jackson said. The problem affects the security of any system with digital components, from websites to cars to insulin pumps. The whole Internet of Things is vulnerable to exploits because it is based, in part, on components that have no upgrade path once deployed.

So how can agencies ensure that their systems use a software supply chain that’s been secured?

Use the best ingredients. Agencies should first make sure the components used come directly from a trusted repository. Look for software that is officially compatible with CVE (Common Vulnerabilities and Exposures), the set of standard identifiers for publicly known security vulnerabilities and exposures, said Red Hat’s Dave Egts.

On the flip side, don’t use components with known security (or other) defects, especially when newer, fixed versions are available. Although this sounds like a no-brainer, it’s not yet a mainstream best practice, Jackson said.

Make a list. IT managers should create and preserve a bill of materials, or a list of ingredients, for the components used in a given piece of software.
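A bill of materials can be as simple as a structured inventory recording each component's name, version and origin, so the list can later be checked against vulnerability disclosures. The sketch below is a minimal illustration of the idea; the component entries are invented for the example, not drawn from any real application.

```python
# Minimal bill-of-materials sketch: record name, version and origin
# for each component, then serialize the inventory as CSV.
import csv
import io

# Hypothetical component inventory (illustrative values only).
components = [
    {"name": "commons-collections", "version": "3.2.1", "source": "Central Repository"},
    {"name": "openssl", "version": "1.0.1f", "source": "vendor bundle"},
]

def write_bom(components):
    """Serialize the component inventory as CSV, one row per component."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "version", "source"])
    writer.writeheader()
    writer.writerows(components)
    return buf.getvalue()

print(write_bom(components))
```

The exact format matters less than the practice: whatever the inventory looks like, it should be preserved alongside each release so the components in production systems can be identified later.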

Scan the code. Agencies should use automated code scanners compatible with the Security Content Automation Protocol (SCAP). Open source tools like OpenSCAP are free, built into many operating systems and certified by the National Institute of Standards and Technology.

Use government-certified software. Using FIPS-certified cryptography libraries, for example, to write encryption applications eliminates the need to obtain additional FIPS certification.

Monitor security information sites. Check the NIST National Vulnerability Database for new disclosures that might affect the components in critical systems.
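Monitoring disclosures only pays off if they can be matched against an inventory like the bill of materials above. The sketch below shows the cross-check in its simplest form; the CVE identifiers are real, well-known examples (Heartbleed and Shellshock), but in practice the disclosure list would come from a feed such as the NIST National Vulnerability Database rather than a hard-coded table.

```python
# Hypothetical cross-check of a component inventory against
# vulnerability disclosures. Inventory values are illustrative.
inventory = {
    "openssl": "1.0.1f",
    "commons-collections": "3.2.1",
}

# (component, affected version, CVE identifier) -- in practice,
# pulled from a vulnerability feed, not hard-coded.
disclosures = [
    ("openssl", "1.0.1f", "CVE-2014-0160"),  # Heartbleed
    ("bash", "4.3", "CVE-2014-6271"),        # Shellshock
]

def affected(inventory, disclosures):
    """Return CVE identifiers whose component/version appears in the inventory."""
    return [cve for name, version, cve in disclosures
            if inventory.get(name) == version]

print(affected(inventory, disclosures))  # → ['CVE-2014-0160']
```

Real version matching is harder than exact string comparison, since disclosures typically cover version ranges, but the principle is the same: without the inventory, there is nothing to match the disclosure against.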

There may be no way to completely protect government’s critical systems from determined adversaries, but ensuring that the basic building blocks are secure is a good place to start. 


Reader Comments

Sun, Jan 25, 2015

Who here loves a giant plate of fud served up for the masses?

Thu, Jan 22, 2015 Patrick Masson New York

It's interesting to see the transition that has occurred over the years related to open source software security issues. In the past the primary concerns I heard about open source security were around "hacking," or exploits/intrusions that could be made because the source code was available to everyone / anyone. I seem to recall statements like, "if anyone can access the source code, they could 'hack' it." Today the security issues tend to focus on quality and the vulnerabilities that might be present due to legacy / updated code, or even development undertaken by less (or too few) skilled developers. I think the quality of the communities that manage open source projects is an important factor in reducing risk. Projects with strong leadership and formal practices have the means to review new submissions and assess their legacy code. Indeed, I would advise organizations thinking of adopting open source software to invest just as much in their analysis of the community that manages the software as the software itself. The fundamental idea that "many eyeballs" can find and fix problems is not incorrect, it's simply that too often it's "not many eyeballs." So yes, a fair question is, "how many eyeballs are looking at the software?" And if that is a fair question, the most important factor is, "how many eyeballs *can* look at the software?" Obviously open source code allows greater review than proprietary code, even if organizations do not avail themselves of that opportunity. Unfortunately, the same organizations that may fret over the lack of scrutiny in open source software projects can never scrutinize proprietary code. I would think the latter is actually a greater security risk. If you accept this perspective, you can then extend this article to include proprietary software as well, and ask... "How secure are your [software]-based systems?"

Thu, Jan 22, 2015 Jack Ring

Excellent article. Regarding "..ensuring that the basic building blocks are secure is a good place to start..." it is now possible to ensure that combinations of building blocks are also secure. Today, every new line of code introduced anywhere in a system must be considered a cyberthreat and assessed appropriately.
