By now you no doubt have heard about Sandworm, the cyberespionage campaign against NATO and other high-value targets, attributed by researchers at iSIGHT Partners to Russian hackers.
The researchers have been monitoring the team's activities since late 2013, but its origins date back as far as 2009. Using spearphishing emails with malicious attachments, the attackers have successfully exploited a zero-day Windows vulnerability, along with other vulnerabilities, to compromise military organizations, Western European governments, energy companies, the Ukrainian government and U.S. academic organizations.
It seems to be a textbook example of an advanced persistent threat. The attackers were motivated and well resourced, and the compromises were successful, stealthy and apparently long-lived.
“Though we have not observed details on what data was exfiltrated in this campaign, the use of this zero-day vulnerability virtually guarantees that all of those entities targeted fell victim to some degree,” wrote iSIGHT’s Stephen Ward.
How do agencies defend against such a threat? When the vulnerability is unknown and the malicious code is well hidden, IT managers have to look for active footprints. They have to keep an eye on the traffic entering and leaving their systems and watch what is happening inside them. No matter how stealthy the exploit, it has to activate inside the system, and that is where to spot it and stop it.
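As a minimal sketch of that kind of inside-the-system watching, the hypothetical Python snippet below uses the third-party psutil library to flag processes holding established outbound connections to addresses outside an allow list. The allow list and the notion of "unexpected" here are illustrative assumptions, not details from the Sandworm analysis.

```python
# A minimal sketch (not a real monitoring tool): flag processes with
# established outbound connections to addresses not on an allow list.
# Requires the third-party psutil package; the ALLOWED set is a
# hypothetical placeholder for baseline or threat-intelligence data.
import psutil

ALLOWED = {"10.0.0.5", "192.168.1.10"}  # hypothetical known-good peers

def unexpected_connections():
    findings = []
    for conn in psutil.net_connections(kind="inet"):
        # raddr is empty for listening sockets; skip anything not
        # established or already on the allow list.
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.raddr.ip in ALLOWED:
            continue
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.NoSuchProcess:
            name = "exited"
        findings.append((name, conn.raddr.ip, conn.raddr.port))
    return findings

if __name__ == "__main__":
    for name, ip, port in unexpected_connections():
        print(f"Review: {name} -> {ip}:{port}")
```

A single snapshot of socket state is no substitute for real network security monitoring, but it illustrates the premise: once an exploit activates, it leaves footprints a defender can look for.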
That’s the idea behind the Cyber Kill Chain.
The Cyber Kill Chain is based on the military concept of establishing a systematic process to target, engage and defeat an adversary. It relies on the assumption that an adversary will have to carry out specific steps to attack in a given environment.
The Cyber Kill Chain, introduced by Lockheed Martin in 2011, upends the traditional wisdom that an IT defender has to be successful 100 percent of the time, while an attacker has to succeed only once. Under this concept, the attacker has to successfully complete the entire seven-step process, while the defender can defeat him at any point in the chain.
The seven links in the Cyber Kill Chain are:
- Reconnaissance: Gathering intelligence to identify a target.
- Weaponization: Packaging an exploit in a deliverable payload.
- Delivery: Delivering the weapon to the victim, through email, malicious websites, removable media, etc.
- Exploitation: Executing the exploit on the victim’s system.
- Installation: Installing malware on the target.
- Command and control: Opening a channel for remote manipulation of the target system.
- Action on objectives: Gathering, exfiltrating or altering data, manipulating systems or other activity against the target.
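One way to make the chain concrete is to pair each link with candidate defensive controls, loosely in the spirit of the course-of-action matrix in Lockheed Martin's paper. The Python sketch below does just that; the specific control names are illustrative assumptions, not an official mapping.

```python
# Illustrative pairing of kill-chain links with candidate defensive
# controls, loosely in the spirit of Lockheed Martin's course-of-action
# matrix. The control names are assumptions for illustration only.
KILL_CHAIN_CONTROLS = {
    "reconnaissance":        ["web log analysis", "honeypots"],
    "weaponization":         ["threat intelligence on exploit kits"],
    "delivery":              ["email filtering", "web proxy blocking"],
    "exploitation":          ["patching", "endpoint protection"],
    "installation":          ["application whitelisting", "host IDS"],
    "command_and_control":   ["egress filtering", "DNS monitoring"],
    "actions_on_objectives": ["data loss prevention", "audit logging"],
}

def controls_for(stage: str) -> list[str]:
    """Return the candidate controls for a given kill-chain link."""
    return KILL_CHAIN_CONTROLS.get(stage.lower().replace(" ", "_"), [])

print(controls_for("Command and control"))
# ['egress filtering', 'DNS monitoring']
```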
Breaking an attack into incremental steps rather than looking at it as a binary action – compromised or not compromised – gives the defender many points at which the attack can be identified, targeted, and eliminated or mitigated.
But it also requires an intelligence-driven approach to defense. That means having visibility into the networks and systems being defended and the ability to analyze data so that anomalies and other patterns an attack displays can be identified.
This is not necessarily easy to achieve, and defending systems against complex or sophisticated attacks will remain challenging.
But tools and services are available, and the government’s move toward continuous monitoring (or continuous diagnostics and mitigation) is a step toward enabling intelligence-driven defense. Attacks and breaches might be inevitable, but cyberdefense is not a game we have to lose.
Posted by William Jackson on Oct 17, 2014 at 10:27 AM
What a difference a few months can make. Shortly after the Heartbleed bug caused a panic in security circles, along comes something that could be even more serious, and the reaction seems to be one big yawn.
The so-called Shellshock vulnerability is in the GNU Bourne-Again Shell (Bash), which is the command-line shell used in Linux and Unix operating systems as well as Apple’s Unix-based Mac OS X. It could allow an attacker to execute shell commands and insert malware into systems.
This is not a vulnerability in concept only. Trend Micro, which has been looking for threats based on Shellshock, has already identified a slew of them and says services built on common protocols such as HTTP, SMTP, SSH and FTP can serve as attack vectors.
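For administrators who want to check their own systems, the widely circulated test for the original bug (CVE-2014-6271) is easy to reproduce. Here is a minimal Python wrapper around it, which assumes bash is on the PATH and should be run only on machines you are authorized to test.

```python
# Minimal reproduction of the widely circulated check for the original
# Shellshock bug (CVE-2014-6271). A vulnerable bash keeps executing
# past the function definition it imports from the environment
# variable, running the injected echo.
import os
import subprocess

env = dict(os.environ, TESTVAR="() { :; }; echo VULNERABLE")
result = subprocess.run(
    ["bash", "-c", "echo probe complete"],
    env=env, capture_output=True, text=True,
)
if "VULNERABLE" in result.stdout:
    print("This bash appears vulnerable to CVE-2014-6271.")
else:
    print("The injected command was not executed.")
```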
Shellshock parallels the OpenSSL Heartbleed bug in several ways. Servers that host OpenVPN, a widely used application, are reportedly vulnerable to Shellshock just as they were to Heartbleed, and other security researchers have reported working exploits. The two bugs also come from similar development stock: both are flaws in code written early in a program's life that went unnoticed for years, apparently more than 20 in the case of Shellshock. Developers at the time simply didn't anticipate the kinds of attacks today's threat environment can mount, and that has brought the rigor of open source development into question.
Patches are being rushed out to cope with Shellshock, just as they were with Heartbleed, though security organizations have warned that the initial fixes don't completely resolve the vulnerability. And in any case, the fixes help only if people apply them. Months after the Heartbleed bug was trumpeted in the headlines, critical systems around the world were still at risk.
Not all vulnerabilities are equal
Then again, perhaps organizations aren't as vulnerable to Heartbleed, Shellshock and similar code-level bugs as people think. University and industry researchers have argued in a recent paper that existing security metrics don't capture the extent to which vulnerabilities are actually exploited.
The researchers developed several new metrics derived from actual field data and evaluated them against some 300 million intrusion reports from more than 6 million hosts. They found that none of the products in their study had more than 35 percent of its disclosed vulnerabilities exploited in the wild, and that for all the products combined only 15 percent of vulnerabilities were exploited.
“Furthermore,” the authors wrote, “the exploitation ratio and the exercised attack surface tend to decrease with newer product releases [and that] hosts that quickly upgrade to newer product versions tend to have reduced exercised attack surfaces.”
In all, they propose four new metrics that they claim, when added to existing metrics, provide a necessary measure for systems that are already deployed and working in real-world environments:
- A count of a product's vulnerabilities exploited in the wild.
- The exploitation ratio: the share of a product's disclosed vulnerabilities that is exploited over time.
- A product’s attack volume, or how frequently it’s attacked.
- The exercised attack surface, or the portion of a product’s vulnerabilities that are attacked in a given month.
These metrics, they say, could be used as part of a quantitative assessment of cyber risks and could inform the design of future security technologies.
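To make the definitions concrete, here is a toy Python calculation of two of the proposed metrics, the exploitation ratio and the exercised attack surface, over invented data. The CVE identifiers and monthly attack sets are placeholders, not figures from the paper.

```python
# Toy calculation of two of the proposed metrics over invented data;
# identifiers and counts are placeholders, not figures from the paper.
disclosed = {"CVE-1", "CVE-2", "CVE-3", "CVE-4"}   # a product's known flaws
exploited = {"CVE-2"}                              # ever attacked in the wild
attacks_by_month = {
    "2014-08": {"CVE-2"},
    "2014-09": set(),
}

# Exploitation ratio: exploited share of all disclosed vulnerabilities.
exploitation_ratio = len(exploited & disclosed) / len(disclosed)
print(f"exploitation ratio: {exploitation_ratio:.0%}")        # 25%

# Exercised attack surface: share of vulnerabilities attacked per month.
for month, attacked in sorted(attacks_by_month.items()):
    surface = len(attacked & disclosed) / len(disclosed)
    print(f"{month} exercised attack surface: {surface:.0%}")  # 25%, then 0%
```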
Don’t forget the hardware
Then again, what’s the use of vulnerability announcements and security metrics, all aimed at revealing software bugs and fixes, if the hardware that hosts the software is compromised?
In times past, when chips and the systems that use them were all manufactured in the United States or by trusted allies, that wasn't much of a concern. But globalization has diversified manufacturing to China and other countries, raising fears that adversaries could tamper with hardware components to make U.S. systems easier to attack.
That's been the impetus behind several trusted computing initiatives in the past few years. Most recently, the National Institute of Standards and Technology developed its Systems Security Engineering initiative to guide the building of trustworthy systems.
The National Science Foundation is now in the game through the government's Secure, Trustworthy, Assured and Resilient Semiconductors and Systems (STARSS) program. One approach, in concert with the Semiconductor Research Corporation (SRC), is to develop tools and techniques to make sure components have the necessary assured security from the design stage through manufacturing.
Nine initial research awards were recently made for the program, which is part of the NSF's $75 million Secure and Trustworthy Cyberspace "game changing" program.
While all of this is pretty broad-based, the ultimate result for government agencies could be that, in just a few years, they will be able to specify in their procurements exactly what assured hardware the computing systems they buy need to contain.
Posted by Brian Robinson on Oct 10, 2014 at 11:52 AM
The news in government cybersecurity is not all bad.
Following a slip in compliance scores for IT security requirements in fiscal 2012, scores rebounded in FY 2013. And a new emphasis on continuous monitoring and authorization of IT systems – together with a program to provide the necessary tools for the job – could mean that things will get a little better when the results are in for the fiscal year just ended.
The overall state of government cybersecurity is judged against the requirements of the Federal Information Security Management Act, and the scorecard is the Office of Management and Budget's annual report to Congress on FISMA compliance. In the report for FY 2012, released in early 2013, overall FISMA compliance slipped from 75 percent in FY 2011 to 73 percent.
In the report for FY 2013, however, overall performance jumped to 81 percent, “with significant improvements in areas such as the adoption of automated configuration management, remote access authentication and email encryption.”
I am the first to admit that FISMA compliance – or compliance with any standards – does not equate to security. But the reports provide a useful baseline and indicate that agencies are paying attention to their security and the maturity of their programs.
Patrick Howard, former chief information security officer for the Nuclear Regulatory Commission and the Department of Housing and Urban Development (and now the program manager for continuous diagnostics and mitigation (CDM) at Kratos Defense), points out that the most recent results show that agencies still are struggling to develop long-term security plans, and he expects to see this again for FY 2014. “That’s nothing new,” he said. “We’ve been seeing that for years.”
But there are some reasons to believe – or at least hope – that there will be continued improvement. The latest report cited an improvement in meeting cross-agency performance goals, including trusted Internet connections, strong authentication and continuous monitoring. And there will be a stronger emphasis on continuous monitoring in the next evaluations.
In November 2013, OMB Memo M-14-03 set a timeline for agencies to move from static reauthorization of IT systems every three years to continuous monitoring and ongoing reauthorization. Agencies were to have a strategy for information security continuous monitoring (ISCM) in place by Feb. 28, 2014, begin cooperating with the Homeland Security Department to implement their plans and begin procuring products and services through the DHS CDM program. Agencies will be evaluated on their compliance with these requirements in their 2014 FISMA reviews.
Challenges to fully implementing these ISCM goals remain, of course. DHS has not yet established a governmentwide ISCM dashboard, as called for in the memo. And the CDM program, which provides a source for procuring tools and services through a blanket purchase agreement at the General Services Administration, still is a work in progress.
Two of the six task orders planned under Phase 1 of CDM have been released for industry quotes, and the remaining four are expected in fiscal 2015. Phase 2 of the CDM program still is being developed. Howard says many agencies are unaware of the continuous monitoring services available under CDM, and many are waiting to see what happens with the second task order before implementing those services.
I am hopeful that the increased resources and attention on continuous monitoring – both in formal programs and in the security community in general – will help continue the upward trend in FISMA scores, however. Higher scores might not mean that agency IT systems are more secure, but they couldn’t hurt.
Posted by William Jackson on Oct 03, 2014 at 12:33 PM
We all know the gears of government grind slowly, but when it comes to the arcane world of government encryption standards, “slowly” can mean something else entirely. When government time meets technology time, sparks can fly.
Take SHA-1, for example. That 160-bit hash algorithm has been at the heart of vital web security protocols such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS) since shortly after it was developed by the National Security Agency in the 1990s. It has also been a core part of the FIPS standards published by the National Institute of Standards and Technology.
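The most visible difference between SHA-1 and its stronger successors is digest size, which is easy to see with Python's standard hashlib module; the snippet below is just a demonstration, not a security test.

```python
# Compare digest sizes: SHA-1 yields 160 bits, SHA-256 yields 256.
# Uses only Python's standard hashlib module.
import hashlib

msg = b"example message"
for algo in ("sha1", "sha256"):
    digest = hashlib.new(algo, msg).hexdigest()
    print(f"{algo}: {len(digest) * 4} bits -> {digest}")
```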
However, it’s been under fire for nearly a decade. In 2005, a professor in China demonstrated a collision attack against the SHA-1 hash function, a feat that led to a lot of soul searching within the encryption community. Less than a year later, NIST was urging agencies to begin moving away from SHA-1 toward stronger algorithms.
At the beginning of 2011, NIST went even further and put what seemed the final kibosh on the beleaguered algorithm by stating definitively that “SHA-1 shall not be used for digital signature generation after December 31, 2013.”
But earlier this year, stories began to emerge pointing out that despite the NIST statement, many government entities were still generating new SSL certificates using SHA-1 rather than stronger alternatives.
In a February survey, web services company Netcraft found that fully 98 percent of all the SSL certificates used on the web carried SHA-1 signatures, and fewer than 2 percent used the 256-bit SHA-256. The company also pointed out that many of those certificates, as originally issued, would still be valid beyond the start of 2017.
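Administrators who want to know whether their own servers are affected can inspect a certificate's signature algorithm directly. Here is one hypothetical way to do it in Python, assuming the third-party cryptography package is installed; the host name is a placeholder.

```python
# Fetch a live server's certificate and report the hash algorithm in
# its signature. Requires the third-party "cryptography" package; the
# host name below is a placeholder, not a real endpoint to test.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("www.example.gov", 443))
cert = x509.load_pem_x509_certificate(pem.encode())
print(cert.signature_hash_algorithm.name)  # e.g. 'sha1' or 'sha256'
```

A result of sha1 would mean the certificate runs into the browser deprecation timelines described below.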
It’s not that the security provided by these certificates has so far proven porous. But a so-called collision attack, in which an attacker crafts a forged certificate that yields the same SHA-1 hash as a legitimately signed one, could let attackers substitute certificates of their own making for valid ones and circumvent web browsers’ verification checks.
Such an attack would be time-consuming and would need a lot of computing power, but the increasingly market-driven nature of the threat industry is making that less of a barrier. Researchers have shown how the cost of a SHA-1 attack will shrink rapidly over the next few years.
That’s all driving a sense of inevitability about the continuing use of SHA-1. Companies such as Microsoft and Google said some time ago they would start winding down the use of the algorithm in their products, and now the browser companies are getting on board.
The developers of Chrome, for example, recently said they will start sunsetting the use of SHA-1 beginning with a release due in November, and on Sept. 23 those in charge of Mozilla-based browsers such as Firefox said they also will be “proactively” phasing out their support of certificates that use SHA-1 signatures.
What’s a government agency to think of all this? There have certainly been confusing signals along the way. In 2012, the year after it said it wanted agencies to move away from SHA-1, NIST announced the winner of a competition to create a secure hash algorithm that could eventually be the basis of a new federal SHA-3 standard.
But at the same time, NIST downplayed the need for a new standard in the shorter term, saying SHA-2 seemed to be working just fine (though NIST recently issued a request for comments on a new FIPS 202 that would validate the use of SHA-3). Meanwhile, the current version of NIST’s secure hash standard (FIPS PUB 180-4) still lists SHA-1 as valid for use in government applications. At the rate the private sector is moving, however, that may soon be impractical.
Posted by Brian Robinson on Sep 26, 2014 at 10:21 AM