What gives? Shellshock fails to shock

What a difference a few months can make. Shortly after the Heartbleed bug caused a panic in security circles, along comes something that could be even more serious, and the reaction seems to be one big yawn.

The so-called Shellshock vulnerability is in the GNU Bourne-Again Shell (Bash), which is the command-line shell used in Linux and Unix operating systems as well as Apple’s Unix-based Mac OS X. It could allow an attacker to execute shell commands and insert malware into systems.
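
For administrators who want to see the flaw for themselves, the widely circulated test for the original bug (CVE-2014-6271) exports a crafted environment variable and checks whether bash executes the command appended to a function definition. Below is a minimal sketch that wraps that test in Python; the payload string and the interpretation of the output follow the commonly published check, not any official tool.

```python
import os
import subprocess

# Minimal sketch of the widely published Shellshock check (CVE-2014-6271):
# export a variable that looks like a bash function definition with a command
# appended, then see whether bash runs that command while importing it.
crafted_env = dict(os.environ, x="() { :;}; echo vulnerable")

result = subprocess.run(
    ["bash", "-c", "echo this is a test"],
    env=crafted_env,
    capture_output=True,
    text=True,
)

if "vulnerable" in result.stdout:
    print("This bash appears vulnerable to the original Shellshock flaw.")
else:
    print("No sign of CVE-2014-6271 (later variants may still need patching).")
```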

This is not a vulnerability in concept only. Trend Micro, which has been looking for threats based on Shellshock, has already identified a slew of them and says exploits can also be delivered through services that use common communications protocols such as HTTP, SMTP, SSH and FTP.

Shellshock parallels the OpenSSL Heartbleed bug in several ways. Servers that host OpenVPN, a widely used application, are reportedly vulnerable to Shellshock just as they were to Heartbleed, and other security researchers have reported working exploits. The two flaws also come from similar development stock. Both are faults in code written early in a program’s life that went unnoticed for a long time, apparently more than 20 years in the case of Shellshock. Developers at the time simply didn’t anticipate the kinds of vulnerabilities today’s threat environment can exploit, and that has brought the rigor of open source development into question.

Patches are quickly being thrown out to cope with Shellshock, just as with Heartbleed, though security organizations have warned that initial solutions don’t completely resolve the vulnerability. And, anyway, it depends on what people do with these fixes. Months after the Heartbleed bug was trumpeted in the headlines, critical systems around the world were still at risk.

Not all vulnerabilities are equal

Then again, perhaps organizations aren’t as vulnerable to Heartbleed, Shellshock and similar code-level bugs as people think. University and industry researchers have argued in a recent paper that existing security metrics don’t capture the extent to which vulnerabilities are actually exploited.

The researchers developed several new metrics derived from actual field data and evaluated them against some 300 million intrusions reported on more than 6 million hosts. They found that none of the products in the study had more than 35 percent of its disclosed vulnerabilities exploited in the wild, and that across all the products combined only 15 percent of vulnerabilities were exploited.

“Furthermore,” the authors wrote, “the exploitation ratio and the exercised attack surface tend to decrease with newer product releases [and that] hosts that quickly upgrade to newer product versions tend to have reduced exercised attack surfaces.”

In all, they propose four new metrics that they claim, when added to existing metrics, provide a necessary measure for systems that are already deployed and working in real-world environments:

  • A count of a product’s vulnerabilities exploited in the wild.
  • The exploitation ratio, or the share of a product’s disclosed vulnerabilities that end up being exploited over time.
  • A product’s attack volume, or how frequently it’s attacked.
  • The exercised attack surface, or the portion of a product’s vulnerabilities that are attacked in a given month.

These metrics, they say, could be used as part of a quantitative assessment of cyber risks and can inform the design of future security technologies.
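
As a rough illustration of how such field-driven metrics could be computed, here is a small sketch in Python. The record layout, product names and CVE identifiers are invented for the example and are not drawn from the paper’s dataset.

```python
from collections import defaultdict

# Hypothetical field data: each record is (product, vulnerability id, month)
# taken from host intrusion reports; `disclosed` maps each product to the set
# of vulnerabilities publicly disclosed for it. All values are placeholders.
attack_records = [
    ("ProductA", "CVE-2014-0001", "2014-01"),
    ("ProductA", "CVE-2014-0001", "2014-02"),
    ("ProductA", "CVE-2014-0002", "2014-02"),
]
disclosed = {"ProductA": {"CVE-2014-0001", "CVE-2014-0002", "CVE-2014-0003"}}

def product_metrics(product):
    records = [r for r in attack_records if r[0] == product]
    exploited = {cve for _, cve, _ in records}            # vulnerabilities seen in the wild
    ratio = len(exploited) / len(disclosed[product])      # exploitation ratio
    volume = len(records)                                 # attack volume
    monthly = defaultdict(set)
    for _, cve, month in records:
        monthly[month].add(cve)
    # Exercised attack surface: share of disclosed vulnerabilities attacked per month.
    surface = {m: len(v) / len(disclosed[product]) for m, v in monthly.items()}
    return exploited, ratio, volume, surface

print(product_metrics("ProductA"))
```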

Don’t forget the hardware

Then again, what’s the use of vulnerability announcements and security metrics, all aimed at revealing software bugs and fixes, if the hardware that hosts the software is compromised?

In times past, when chips and the systems that use them were all manufactured in the United States, or by trusted allies, that wasn’t such a concern. But with the spread of globalization comes a diversification of manufacturing sources to China and other countries and increasing fears of adversaries tampering with hardware components to make it easier for them to successfully attack U.S. systems.

That’s been the impetus behind several trusted computing initiatives in the past few years. Most recently, the National Institute of Standards and Technology developed its Systems Security Engineering initiative to try to guide the building of trustworthy systems.

The National Science Foundation is now in the game through the government’s Secure, Trustworthy, Assured and Resilient Semiconductors and Systems (STARSS) program. One approach, in concert with the Semiconductor Research Corporation (SRC), is to develop tools and techniques to make sure components have the necessary assured security from the design stage through manufacturing.

Nine initial research awards were recently made for this program, which is a part of the NSF’s $75 million Secure and Trustworthy Cyberspace “game changing” program.

While all of this is pretty broad-based, the ultimate result for government agencies could be that, in just a few years, they will be able to specify in their procurements exactly what assured hardware the computing systems they buy need to contain. 

Posted by Brian Robinson on Oct 10, 2014 at 11:52 AM


Hoping higher FISMA scores mean more than compliance

The news in government cybersecurity is not all bad.

Following a slip in compliance scores for IT security requirements in fiscal 2012, scores rebounded in FY 2013. And a new emphasis on continuous monitoring and authorization of IT systems – together with a program to provide the necessary tools for the job – could mean that things will get a little better when the results are in for the fiscal year just ended.

The overall state of government cybersecurity is judged by the Federal Information Security Management Act, and the scorecard is the Office of Management and Budget’s annual report to Congress on FISMA compliance. In the report for FY 2012, released in early 2013, overall FISMA compliance slipped from 75 percent in FY 2011 to 73 percent.

In the report for FY 2013 however, overall performance jumped to 81 percent, “with significant improvements in areas such as the adoption of automated configuration management, remote access authentication and email encryption.”

I am the first to admit that FISMA compliance – or compliance with any standards – does not equate to security. But the reports provide a useful baseline and indicate that agencies are paying attention to their security and the maturity of their programs.

Patrick Howard, former chief information security officer for the Nuclear Regulatory Commission and the Department of Housing and Urban Development (and now the program manager for continuous diagnostics and mitigation (CDM) at Kratos Defense), points out that the most recent results show that agencies still are struggling to develop long-term security plans, and he expects to see this again for FY 2014. “That’s nothing new,” he said. “We’ve been seeing that for years.”

But there are some reasons to believe – or at least hope – that there will be continued improvement. The latest report cited an improvement in meeting cross-agency performance goals, including trusted Internet connections, strong authentication and continuous monitoring. And there will be a stronger emphasis on continuous monitoring in the next evaluations.

In November 2013, OMB Memo M-14-03 set a timeline for agencies to move from static reauthorization of IT systems every three years to continuous monitoring and ongoing reauthorization. Agencies were to have a strategy for information security continuous monitoring (ISCM) in place by Feb. 28, 2014, begin working with the Homeland Security Department to implement the plans and begin procuring products and services through the DHS CDM program. Agencies will be evaluated on their compliance with these requirements in their 2014 FISMA reviews.

Challenges to fully implementing these ISCM goals remain, of course. DHS has not yet established a governmentwide ISCM dashboard, as called for in the memo. And the CDM program, which provides a source for procuring tools and services through a blanket purchase agreement at the General Services Administration, still is a work in progress.

Two of the six task orders under Phase 1 of CDM have been released for industry quotes, and the remaining four are expected in fiscal 2015. Phase 2 of the CDM program still is being developed. Howard says there is a lack of awareness among many agencies about the continuous monitoring services available under CDM and that many are waiting to see what happens with the second task order before implementing these services.

I am hopeful that the increased resources and attention on continuous monitoring – both in formal programs and in the security community in general – will help continue the upward trend in FISMA scores, however. Higher scores might not mean that agency IT systems are more secure, but they couldn’t hurt.

Posted by William Jackson on Oct 03, 2014 at 12:33 PM


Why so slow to move off SHA-1?

We all know the gears of government grind slowly, but when it comes to the arcane world of government encryption standards, “slowly” can mean something else entirely. When government time meets technology time, sparks can fly.

Take SHA-1, for example. That 160-bit hash algorithm has been at the heart of vital web security protocols such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS) since shortly after it was developed by the National Security Agency in the 1990s. It has also been a core part of the FIPS standards published by the National Institute of Standards and Technology.
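
For context, the difference in digest size is easy to see with a few lines of Python; the input message below is arbitrary.

```python
import hashlib

# SHA-1 produces a 160-bit (20-byte) digest; SHA-256, from the SHA-2 family,
# produces 256 bits. The message being hashed here is arbitrary.
msg = b"an arbitrary message"
print("SHA-1  :", hashlib.sha1(msg).hexdigest())     # 40 hex characters = 160 bits
print("SHA-256:", hashlib.sha256(msg).hexdigest())   # 64 hex characters = 256 bits
```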

However, it’s been under fire for nearly a decade. In 2005, a professor in China demonstrated an attack that could be successfully launched against the SHA-1 hash function, a feat that led to a lot of soul searching within the encryption community. Less than a year later, NIST was urging agencies to begin moving away from SHA-1 and toward stronger algorithms.

At the beginning of 2011, NIST went even further and put what seemed the final kibosh on the beleaguered algorithm by stating definitively that “SHA-1 shall not be used for digital signature generation after December 31, 2013.”

But earlier this year, stories began to emerge pointing out that, despite the NIST statement, many government entities were still generating new SSL certificates using SHA-1 rather than stronger alternatives.

In a February survey, web services company Netcraft found that fully 98 percent of the SSL certificates in use on the web carried SHA-1 signatures, while less than 2 percent used the 256-bit SHA-256. The company also pointed out that a huge number of those certificates, as originally issued, would still be valid beyond the start of 2017.
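
Administrators who want to spot-check their own endpoints can inspect a server certificate’s signature algorithm directly. The sketch below uses Python’s standard ssl module plus the third-party cryptography package (an assumption; it must be installed separately), and the hostname is a placeholder to be replaced with the server being audited.

```python
import ssl

from cryptography import x509  # third-party 'cryptography' package, assumed installed

# Fetch a server's certificate and report which hash was used to sign it.
# Replace the placeholder hostname with the endpoint you want to audit.
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Signature hash:", cert.signature_hash_algorithm.name)  # e.g. 'sha1' or 'sha256'
print("Expires:       ", cert.not_valid_after)
```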

It’s not that the security provided by these certificates has so far proven porous, but a so-called collision attack could let attackers substitute certificates of their own construction for valid ones, allowing them to circumvent web browser verification checks.

Such an attack would be time-consuming and demand a lot of computing power, but the increasingly market-driven nature of the threat industry is making that less of a barrier. Researchers have projected that the cost of a SHA-1 attack will shrink rapidly over the next few years.

That’s all lending a sense of inevitability to the end of SHA-1. Companies such as Microsoft and Google said some time ago they would start winding down the use of the algorithm in their products, and now the browser makers are getting on board.

The developers of Chrome, for example, recently said they will start sunsetting the use of SHA-1 beginning with a release due in November, and on Sept. 23 those in charge of Mozilla-based browsers such as Firefox said they also will be “proactively” phasing out their support of certificates that use SHA-1 signatures.

What’s a government agency to think of this? There have certainly been confusing signals along the way. In 2012, the year after it said it wanted agencies to move away from SHA-1, NIST announced the winner in a competition to create a secure hash algorithm that could eventually be the basis of a new federal SHA-3 standard.

But at the same time, NIST downplayed the need for a new standard in the shorter term, saying SHA-2 seemed to be working just fine (though NIST recently issued a request for comments on a new FIPS 202 that would validate the use of SHA-3). Meanwhile, the current version of NIST’s secure hash standard (FIPS PUB 180-4) still lists SHA-1 as valid for use in government applications. At the rate the private sector is moving, however, that will soon be impractical.

Posted by Brian Robinson on Sep 26, 2014 at 10:21 AM


Passwords vs. biometrics

It has been a brutal season for data breaches, from the wholesale theft of customer records numbering in the billions to the exposure of naughty celebrity pictures. More significant to agencies is the case that cost US Investigations Services (USIS) a contract to perform government background checks.

It was bad enough when USIS gained attention as the contractor that vetted NSA leaker Edward Snowden and Washington Navy Yard shooter Aaron Alexis. But in the wake of an IT breach that might have exposed the files of thousands of Homeland Security employees, the Office of Personnel Management in September said “enough,” and dropped the company.

The growing pressure hackers are putting on high-value targets and the volumes of personal and other sensitive information being stolen highlight one of the basic questions of cybersecurity: How do you keep the bad guys out?

Identity management and access control are the front lines of security. The ability to accurately identify users and control what they do within your systems is what separates insiders from outsiders. It has been apparent for some time that the traditional tool for this task – the password – is inadequate for the job, and biometrics is emerging as an alternative.

Which is better? The answer is that neither is adequate for strong, practical security on its own. Each has strengths and weaknesses, and real security requires some combination of these or other technologies.

The password by itself actually is a pretty good tool. It is simple to use, easy to implement and can be reasonably strong. The problem is one of scale. For a user juggling passwords for multiple accounts and for administrators juggling many users, the system quickly becomes unwieldy, and strong security begins to break down. In addition, the steady growth in computing power erodes password security by making dictionary and brute force attacks more practical.
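
A bit of back-of-the-envelope arithmetic shows why raw computing power matters here. The guessing rate below is an assumption chosen only to illustrate the scaling, not a measurement of any particular cracking rig.

```python
# Keyspace for a password drawn from a 62-character alphabet (upper, lower,
# digits) is 62 ** length. The guess rate is a made-up figure for illustration.
GUESSES_PER_SECOND = 10e9   # hypothetical offline brute-force rate

for length in (6, 8, 10, 12):
    keyspace = 62 ** length
    days = keyspace / GUESSES_PER_SECOND / 86400
    print(f"{length}-character password: ~{days:,.1f} days to exhaust the keyspace")
```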

Biometrics – the use of physical traits such as fingerprints, irises, faces or voices to identify persons – is more complex, but is becoming more practical. It offers the promise of better security based on the premise that there is only one you.

Yet it has its drawbacks. All forms of biometrics operate on the “close enough” principle. Whereas a password must be exact to be accepted, matching a biometric trait requires a judgment about whether there is a proper match. This leaves room for mistakes, either as false positives or false negatives. The algorithms making the decision can be tuned depending on the level of security required. But higher security comes at a cost in the form of increased time or computing power to determine a match and by increasing the possibility that a legitimate biometric will be rejected. And although there is only one you, biometric systems can be susceptible to spoofing. A stolen digital template of a biometric trait could be inserted into the authentication process to authenticate the wrong user.
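
A toy example makes the “close enough” tradeoff concrete. The match scores below are invented; real systems produce scores by comparing a captured sample with an enrolled template, but the effect of moving the threshold is the same.

```python
# Invented similarity scores: higher means a closer match to the enrolled template.
genuine_scores  = [0.91, 0.88, 0.95, 0.79, 0.86]   # repeat scans of the right person
impostor_scores = [0.42, 0.63, 0.71, 0.55, 0.80]   # scans of other people

def error_rates(threshold):
    false_rejects = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    false_accepts = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return false_rejects, false_accepts

# Raising the threshold tightens security (fewer false accepts) but turns away
# more legitimate users (more false rejects).
for t in (0.70, 0.80, 0.90):
    frr, far = error_rates(t)
    print(f"threshold {t:.2f}: false reject {frr:.0%}, false accept {far:.0%}")
```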

There are other ID management technologies, of course, such as digital certificates, a form of electronic ID vouched for by a trusted party. These can be powerful, but also challenging to manage on a large scale.

The bottom line is, no matter how much these technologies improve, no single tool is likely to be good enough for really practical strong authentication, and it is unlikely that a new and perfect technology will come along any time soon. None of these technologies is a complete failure, either. By combining strengths to offset weaknesses, these common tools can be integrated into multifactor authentication that provides security that is stronger than the sum of its parts.

Government already has tools that can enable multifactor authentication: the Personal Identity Verification Card and its military counterpart, the Common Access Card. Taking full advantage of these for access control could go a long way toward improving federal cybersecurity.

Posted by William Jackson on Sep 19, 2014 at 6:56 AM