OpenSSL is back in the news, almost a year after it first made a splash with the now-infamous Heartbleed bug revelation. This time around, however, it looks like it could be a good thing.
Cryptography Services, backed by the Linux Foundation's Core Infrastructure Initiative (CII), is going to audit OpenSSL security. It's billed as an independent audit, even though the CII has spent the past year trying to right the OpenSSL ship by providing some of the money to give the beleaguered open source project full-time development help.
Heartbleed was a major shock to the cybersecurity ecosystem for several reasons: OpenSSL is widely used in both public and private organizations' network and system security; the coding mistake behind the bug apparently went undetected for several years before it was patched; and no one could say for certain how many systems had been affected or what data might have been compromised.
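The mistake at the heart of Heartbleed was a missing bounds check: the server echoed back as many bytes as a heartbeat request *claimed* to contain, rather than as many as it actually sent, leaking whatever sat next to the payload in memory. The following is a simplified Python simulation of that pattern, not the actual OpenSSL C code; the buffer and secret are invented for illustration.

```python
def heartbeat_echo(memory: bytes, actual_len: int, claimed_len: int) -> bytes:
    """Vulnerable pattern: trust the length field in the request."""
    # Echoes claimed_len bytes, even if the request only carried actual_len.
    return memory[:claimed_len]

def heartbeat_echo_fixed(memory: bytes, actual_len: int, claimed_len: int) -> bytes:
    """Patched pattern: drop requests whose claimed length exceeds the payload."""
    if claimed_len > actual_len:
        return b""  # silently discard the malformed heartbeat
    return memory[:claimed_len]

# Simulated process memory: a 5-byte heartbeat payload followed by a secret.
memory = b"hello" + b"SECRET_PRIVATE_KEY"

leak = heartbeat_echo(memory, actual_len=5, claimed_len=23)        # over-read
safe = heartbeat_echo_fixed(memory, actual_len=5, claimed_len=23)  # rejected
```

Here `leak` comes back containing the adjacent "secret" bytes, while the fixed version returns nothing, which is essentially the shape of the one-line patch that closed the real bug.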
The crisis created by that bug fed into a concern about open source software overall, with other threats such as the Shellshock vulnerability in the Linux and Unix operating systems and a possible SQL injection attack on the popular Drupal content management system adding to the worries.
It’s not as if any of these major open source resources can easily be replaced. OpenSSL is reckoned to be used on up to two-thirds of existing web servers; Linux and Unix also drive many servers; and Drupal has become a reliable and flexible option for website operations, including those at the White House and other government agencies.
Open source software isn’t alone in having security holes, of course, as many users of Microsoft, Apple, Adobe, Java and other proprietary software know. But open source security is seen as suffering from the very resource that’s considered its strength: an army of volunteer developers. On the one hand, that army delivers the innovation and fast turnaround of new features that open source users crave; on the other, it creates more opportunities for tampering and coding mistakes.
Admittedly, others think all those volunteer developers can also be a security strength, since they put that many more eyeballs on the code. However, the events of 2014 threw enough doubt on the security of open source software that both industry and government have been moved to act, from bills aimed at securing the software supply chain to proposals for controls on the use of third-party software components.
At first glance, the Cryptography Services audit could be the most comprehensive and important of these efforts. According to the consultants who will run it, the audit will cover a range of security concerns but will focus primarily on Transport Layer Security stacks and on protocol flow, state transitions and memory management. The audit may be the largest effort to date to review OpenSSL, the group said, and it’s “definitely the most public.” It should help spot and fix bugs such as Heartbleed before they become the kind of problem they did last year.
Preliminary results of the audit could be out by the beginning of the summer, Cryptography Services said.
It should be eagerly anticipated, as the revelation of Heartbleed, Shellshock and other bugs hasn’t necessarily brought better security. Months after the initial announcement of Heartbleed, around half of the 500,000 servers thought to be vulnerable to the bug had not been fixed. And the vulnerabilities keep on giving, with Cisco just one of the latest to say that its products had been affected.
Posted by Brian Robinson on Mar 13, 2015 at 11:43 AM
The recent revelation that the Equation Group uses disk drive firmware to plant malware in systems points to the kind of sophisticated and hard-to-tackle threat that will increasingly be a part of black hat attacks.
Kaspersky Lab, which came out with the initial report on Equation, said the group attacked the firmware of major drive makers such as Samsung, Seagate, Western Digital, Hitachi and Maxtor. Unlike with other attacks, apparently no clean-up effort can scrub the firmware. That gives a whole new context to the word “persistent.”
Technically, the attack uses the nls_933w.dll module both to reprogram the disk drive firmware with a custom payload and to provide an application programming interface that gives attackers access to hidden storage sectors on the drive. Kaspersky also published a much more detailed version of its investigation (in which it breathlessly labeled Equation “The Death Star” of the malware galaxy) and listed organizations it believed the group had infiltrated, many of them government related.
A number of sources have suggested Equation might be a very limited threat, given the effort needed to master the level of programming required to rewrite the firmware. However, that could be an optimistic assessment given the level of sophistication that other state-sponsored groups and organized crime have shown recently.
Using hard drive firmware as an avenue of attack is also not a new idea. Five years ago, university researchers were detailing how disk drive firmware could be used to embed malicious software.
Rewriting software that controls hardware is also at the heart of what’s been described as one of the hottest hacks of 2014. BadUSB is an attack that reprograms the controller chips on USB peripherals, including thumb drives, to emulate a keyboard and allow an attacker to issue commands to download files or install malware. It can also be used to redirect network traffic or install a virus to infect an operating system before it boots.
As with the disk drive firmware attack, it’s apparently hard to clean up a BadUSB infection. Reinstalling an operating system won’t necessarily work since the drive used for that may itself be compromised, and a BadUSB device may already have replaced a system’s BIOS.
Researchers have been busy detailing how BadUSB attacks could be used against organizations, some of them downright scary. Michael Toecker of Context Industrial Security recently described how the USB-to-serial converters used to connect critical legacy hardware at industrial control plants can have their firmware reprogrammed. He tested his theory on 20 different converters; 15 of the chips could not be reprogrammed, so the attack is probably a tough nut to crack. But that still left five that could be manipulated.
The Kaspersky revelations are not the first time firmware reprogramming has been mentioned in relation to the NSA. In December 2013, German magazine Der Spiegel published a lengthy investigative piece on the activities of the NSA, which had several months earlier been shown to have intercepted the mobile phone conversations of a number of state leaders, including that of German Chancellor Angela Merkel.
As a part of that investigation, the magazine detailed the contents of what it called the NSA’s Spy Catalog, a years-in-the-making collection of NSA-developed malware and surveillance hardware. That included, according to documents the magazine obtained, “spyware capable of embedding itself unnoticed into hard drives manufactured by Western Digital, Seagate and Samsung.”
It’s tempting to believe that if this catalog exists (there has been no official confirmation), it’s a rare resource available only to those with the money and technical sophistication of the NSA. Given the industrialization of malware over the past few years, however, that’s a big leap.
Posted by Brian Robinson on Feb 27, 2015 at 11:36 AM
The need for timely sharing of information about both potential and actual attacks has been considered a prime focus for government and industry cybersecurity for at least the past decade. The 9/11 Commission report first brought to light the lack of intelligence sharing among agencies, for example, and that lack was seen as extending into the cybersecurity realm.
The language used in the report, though aimed at terrorism, speaks as much to the problems surrounding cybersecurity today. The events of 9/11 showed "an enemy who is sophisticated, patient, disciplined, and lethal," and also the "fault lines within our government (and the) pervasive problems of managing and sharing information across a large and unwieldy government."
The Obama administration's most recent push to improve U.S. cybersecurity tries to ratchet up efforts to improve information sharing both within government and with the private sector. Shortly after unveiling those proposals, the administration announced the formation of a new Cyber Threat Intelligence Integration Center that's intended to be the government's focus for rapid collection and dissemination of information on cyberthreats.
How far this will go is an open question. While some have welcomed the new proposals, others wonder if the new center will just add to the organizational confusion. The National Security Agency, the Department of Homeland Security, the FBI and the military already have responsibility for collecting this kind of information and, after years of acrimony and pushback, they've managed to develop cohesion about sharing it.
Technically, the tools for sharing have also progressed, leading to a number of acronymic specs such as TAXII (the Trusted Automated eXchange of Indicator Information), STIX (the Structured Threat Information eXpression) and CybOX (the Cyber Observable eXpression). Joining them recently is the Data Aggregation Reference Architecture (DARA), a first response to the 2012 National Strategy for Information Sharing and Safeguarding.
These and other tools all perform important roles. DARA, for example, is aimed at providing a model for how various groups can pull data sets together in order to improve security while also protecting individual privacy, which has been one of the big stumbling blocks to sharing of information.
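The division of labor among these specs is roughly this: CybOX describes the thing to look for, STIX wraps that observable in context (confidence, validity window, related campaigns), and TAXII is the transport that moves the package between organizations. A toy Python sketch of that layering follows; the field names are simplified for illustration and are not the actual STIX/CybOX schema, which is defined in far more detail.

```python
import json

# Illustrative structured indicator; field names are simplified,
# not the real STIX/CybOX schema.
indicator = {
    "id": "example:indicator-1",
    "title": "Suspected C2 beacon address",
    "observable": {              # the CybOX role: what to look for
        "type": "ipv4-addr",
        "value": "203.0.113.7",  # documentation-range address, not real
    },
    "confidence": "High",        # the STIX role: context around the observable
    "valid_until": "2015-06-01T00:00:00Z",
}

# The TAXII role is transport: serialized indicators like this are
# pushed to, or polled from, a sharing hub by participating defenders.
message = json.dumps(indicator, sort_keys=True)
```

The point of the standardization is the last line: once every participant emits the same machine-readable shape, a received indicator can feed firewalls and intrusion-detection rules automatically instead of waiting for a human to read an advisory.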
But is all of this enough? If 2014 showed anything, it's that cybersecurity efforts are falling behind the speed and the level of sophistication attackers apply to the way they get threats into the cyber infrastructure. President Obama in fact mentioned the attack on Sony Pictures late last year as just the latest reason behind his new legislative proposals.
Industry looks to the government for a lead on many aspects of cybersecurity, but the fact is that government is not noted for its speed in dealing with cyber threats, or for convincing industry to share information about attacks with it. However, it is trying. The FBI, for example, released an unclassified version of its Binary Analysis Characterization and Storage System (BACSS) as an additional incentive to public/private sharing.
Now industry seems to be expanding its own efforts to improve sharing. Facebook has launched a framework for "importing information about [threats] on the Internet in arbitrary formats, storing it efficiently, and making it accessible for both real-time defensive systems and long-term analysis." Early partners already include Bitly, Dropbox, Pinterest, Tumblr, Twitter and Yahoo.
Microsoft last year also introduced Interflow, its own attempt to collaborate more closely with the cybersecurity community. That adds to a number of other international collection and sharing efforts, as well as the global infrastructures that individual security companies have established to collect information about threats.
There are still major barriers to sharing, particularly privacy and the need for encryption. How government manages to live within, and profit from, this growing sharing ecosystem while improving how fast it reacts to threats is the real question it has to address.
Posted by Brian Robinson on Feb 13, 2015 at 9:41 AM
It’s become an article of faith that you can’t accomplish real advances in cybersecurity until you get the executive suite involved, and that applies as much to government as to private industry. Well, you can’t get much higher on the ladder than the rarified atmosphere at Davos in Switzerland, where the elite’s elite gather yearly to discuss issues affecting the world’s economic health.
Cybersecurity was a main topic this year. It’s been on the agenda before, or at least has been talked about in the hallways, but after last year’s horrendous breaches and black-hat successes, the consensus was that cyber requires an urgent focus.
Among the most critical topics, security experts at Davos warned about the increasing dangers from the Internet of Things – dubbed the Internet of Threats by Kaspersky Lab’s Eugene Kaspersky – and the fact that cybercrime is becoming much more professionalized, with both criminals and terrorists now in the game.
As much as technology offers promise for economic growth, they seemed to say, it also dramatically increases the ways attackers can threaten the viability of public and commercial enterprises.
The same message can be taken from a number of wide-ranging studies recently published. Cisco’s 2015 Annual Security Report warned that attackers are, indeed, getting better at what they do and that users are becoming unwitting enablers.
For example, said Jason Brvenik, principal engineer with Cisco’s Security Business Group, online spam had been decreasing recently but showed a 250 percent increase in 2014 compared to the previous year. Attackers are also now using it for phishing attacks that directly target users.
“With the emphasis by organizations now on protecting data and IT assets, attackers are increasingly challenged in attacking those kinds of targets,” he said. “So, they are paying more attention to users who may have the (network) credentials that will let them get inside the enterprise.”
Despite the warnings, organizations are still not up to speed with what’s needed for this emerging ecosystem of cyber threats. Only 38 percent of those surveyed by Cisco said they use patching and system configuration updates to boost their defensive capabilities, which Brvenik said is considered an effective security practice. And more than half of all versions of OpenSSL – whose vulnerabilities led to the catastrophic Heartbleed bug – were found to be older than 50 months, and therefore still wide open to attack.
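Checking which OpenSSL build a system is actually running is a cheap first step toward the patching discipline the Cisco report finds lacking. As one hedged example, Python exposes the OpenSSL version its own `ssl` module was linked against, which can be used for crude triage (the Heartbleed-affected range cited here, 1.0.1 through 1.0.1f, is the commonly reported one):

```python
import ssl

# Report the OpenSSL (or compatible) library this Python links against.
version_string = ssl.OPENSSL_VERSION            # e.g. "OpenSSL 1.0.2k ..."
major, minor, patch = ssl.OPENSSL_VERSION_INFO[:3]

# Crude triage: the 1.0.1 line (before 1.0.1g) was the Heartbleed-era branch.
possibly_heartbleed_era = (major, minor, patch) == (1, 0, 1)

print(version_string, possibly_heartbleed_era)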
Here’s another frightening statistic: As organizations actually start moving their data to the cloud, they may be opening up other avenues for attackers to exploit, according to the Cloud Security Alliance. In its 2014 Cloud Adoption, Practices and Priorities Survey Report (CAPP), it found that three-quarters of respondents were aware of the need for security, but nearly as many admitted that they didn’t know the number of “shadow IT” apps within their organization.
Shadow IT is a very modern dilemma. It refers to the users within organizations who tend to use whatever technology they can find to make themselves more productive. So they reach for software that is easily downloadable and configurable – without informing the IT department. What’s more, each line of business uses what apps it deems appropriate, without coordinating with other groups in the same organization.
That leads to situations such as one described by Kamal Shah, vice president of CSA member Skyhigh Networks. One of its customers ended up with 27 different file sharing services in use among its 80,000-plus employees, many of which didn’t meet the company’s security and use policies.
“We’re seeing more and more CSOs and CIOs having cloud security discussions (with senior executives) in both the government and private sectors,” Shah said. “However, while there’s been a lot of talk recently about the increase in sophistication of threats, and cloud security is starting to get to where it needs to be, we’re only in the early innings of this.”
One of the problems could be that, according to a survey by EMC Corp., data protection is still treated as a standalone effort to ensure data is always available, rather than as a practice that’s part of overall cybersecurity measures.
“But we definitely think it should be, since we are talking about (overall) protection of data, and cybersecurity is a part of that,” said Gregg Mahdessian, EMC’s director of federal sales, Data Protection Solutions.
What’s the takeaway from all of this? As Davos and the various surveys show, there’s now no shortage of awareness of the magnitude of the problem. Executives at the highest level in both public and private organizations know that attackers are getting better at what they do, and therefore security also needs to get better. The disconnect is in understanding how technologies and tools can best be deployed to make that happen.
“The trend in days past was that the more invisible security was, the more effective it was being,” said Cisco’s Brvenik. “That led, in many cases, to organizations putting security technology in place and then forgetting it was there.”
That needs to change, he said. Now, there needs to be a high visibility into what security tools can provide and why, which will lead to a greater understanding from the executive level down to individual users about those tools and the secure processes they embody.
If the user is the new focus for attackers, an informed user will be the best defense.
Editor's note: This blog was changed Feb. 2 to correct the spelling of Gregg Mahdessian's name.
Posted by Brian Robinson on Jan 30, 2015 at 10:54 AM