
Heartbleed begets headaches in perfecting encryption

In the wake of major data breaches during the last few months, and with the ongoing scare over the OpenSSL Heartbleed bug, government security managers will likely find themselves dealing with major changes in some of the encryption tools they use to safeguard their agencies’ data.

One change is already in the works. The National Institute of Standards and Technology recently released an update to its Special Publication 800-52, which offers guidelines on how to implement transport layer security (TLS) protocols. TLS is the standard used to protect sensitive data — anything from credit card numbers to patient health information, email and social networking details — that have to move across open networks, such as the Internet.

The Internet Engineering Task Force (IETF), which oversees Internet standards, found vulnerabilities in the 1.0 version of TLS and spent several years upgrading it through versions 1.1 and 1.2. Meanwhile, NIST withdrew the initial 800-52 guidelines, which it released in 2005, because they did not address the changes introduced in TLS 1.1 and 1.2.

The latest revision of the guidelines rectifies that and, according to NIST, offers network administrators a range of recommendations on how to configure TLS options, including which algorithms to use and how long cryptographic keys should be.
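For a sense of what those recommendations look like in practice, here is a minimal sketch using Python’s standard ssl module. It illustrates the kinds of settings SP 800-52 discusses rather than quoting the guidelines themselves, and the certificate and key file names are placeholders.

import ssl

# Minimal sketch (not from SP 800-52 itself): a server context that refuses
# pre-1.2 protocol versions and prefers ephemeral key exchange with
# authenticated-encryption ciphers. File paths are placeholders.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2   # reject TLS 1.0 and 1.1 clients
context.set_ciphers("ECDHE+AESGCM")                # ephemeral keys, AEAD ciphers
context.load_cert_chain(certfile="server.crt", keyfile="server.key")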

And admins will have to deal with even bigger changes down the road. The IETF is currently considering yet another update to the TLS spec, version 1.3, which will likely remove the RSA key transport cipher suites. RSA key transport has been the decades-old basis of the “handshakes” used to set up secure network sessions, and it is now considered too vulnerable to attack.

Improving the TLS handshake is one of the prime design targets for the IETF TLS 1.3 working group. The current IETF thinking is apparently to rely instead on key agreement methods such as Diffie-Hellman or elliptic curve Diffie-Hellman, used with ephemeral keys. Both support perfect forward secrecy, which many organizations are pushing because they see it as offering much better protection for the keys used to encrypt data and communications. With perfect forward secrecy, each session gets its own short-lived keys, so compromising the keys for one session (or even the server's long-term key) does not expose the traffic of other sessions.
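To see how an ephemeral key exchange earns that property, here is a rough sketch of elliptic curve Diffie-Hellman key agreement using the Python cryptography package. It illustrates the underlying idea, not the actual TLS 1.3 handshake: each side generates a throwaway key pair, both derive the same shared secret, and once the ephemeral private keys are discarded the session key cannot be reconstructed later.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh (ephemeral) key pair used only for this session.
client_priv = ec.generate_private_key(ec.SECP256R1())
server_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the peer's public key; both
# arrive at the same shared secret without it ever crossing the network.
client_shared = client_priv.exchange(ec.ECDH(), server_priv.public_key())
server_shared = server_priv.exchange(ec.ECDH(), client_priv.public_key())
assert client_shared == server_shared

# Derive a session key from the shared secret. Once the ephemeral private
# keys are thrown away, this key cannot be recreated -- even by someone who
# later steals a server's long-term certificate key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"session key").derive(client_shared)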

However, perfect forward secrecy is not entirely perfect, as it turns out. Network sessions that were established using a key that has since leaked or been stolen would still be open to compromise, for example. So while forward secrecy would protect all future sessions, it wouldn't retroactively solve problems caused by the Heartbleed bug.

But the need for something better, and for the features TLS 1.3 will offer, is now much clearer. Heartbleed itself is still being examined, though early attempts to estimate the costs from fixing the many systems that could be affected suggest there will be major repercussions.

A more tangible indicator is what happened with the data breach at Target stores last year. The numbers involved with that breach — which was not even the year’s biggest — are staggering: $200 million for banks to reissue cards to customers whose card numbers were stolen, and $100 million for Target to upgrade its payment terminals. The company also saw a 46 percent hit to its bottom line in the fourth quarter of 2013, as it struggled to regain its customers’ confidence.

There’s one takeaway from that analysis that neither NIST, the IETF nor anybody else could have fixed, however: there were precisely zero people at Target with the title of either chief information security officer or chief security officer.

Posted by Brian Robinson on May 09, 2014 at 10:13 AM



Federating identity will slow personal information leaks

The Heartbleed vulnerability, which can leak sensitive data from supposedly secure Web connections, exposes the limits of using one-off credentials that must be authenticated separately for each transaction. Attack surfaces are greatly expanded when personally identifiable information (PII) is maintained by every agency and Web site offering online services.

“The idea that the user must have information everywhere is a bad idea,” said Andre Boysen, executive vice president for marketing at SecureKey. Having a single credential that can be authenticated by a trusted authority and accepted by multiple sites and services can reduce the attack surface by maintaining PII at a single point. It also helps relieve the burden of managing credentials and identities.

This idea of federated identity is not new. Banks, merchants and credit card companies have been using a form of it for years. Merchants no longer have to issue and manage their own credit cards. A bank vets your identity and creditworthiness, a card company ensures the credit card is valid and has not been compromised, and online merchants do not have to worry about who you are as long as a credit card company vouches for the card.

It is not a risk-free system, but the risk is managed. Credit card numbers are sometimes exposed, but the exposure is considerably less than if every merchant had to maintain PII for every customer. When a breach occurs, users have to change one credit card, not one for every merchant visited.

Why can’t government online authentication be this simple? “It is heading that way,” Boysen said.

Canada implemented a Federated Identity Management program to leverage interoperable security credentials several years ago. In the United States, the Postal Service is preparing to roll out the Federal Cloud Credential Exchange (FCCX), a federated identity management hub that will let agencies accept online credentials issued by trusted third parties.

The system is part of the National Strategy for Trusted Identities in Cyberspace. Personal information and the identity of the original issuer of the credentials will be hidden from the FCCX hub, and log-in information will not be shared or compared between agencies. But the agency will know what it needs to know: You are who you say you are.

It should be noted that SecureKey is not exactly an impartial observer in this issue. The company has contracts with both Canada and USPS to provide a cloud-based platform for authenticating digital credentials. But this does not change the fact that a federated system offers a way to improve both security and privacy at a time when attacks on online activities are growing and digital credentials are like money in the bank for criminals.

Government stands to benefit greatly from federated identity schemes. Unlike banks, agencies tend to have relatively few transactions with each individual, which raises the overhead of authenticating each user who logs on. “Basically, every transaction is a re-enrollment,” Boysen said. That is frustrating for the user and expensive for the agency, which also must manage and secure its own database of PII. Offloading authentication to a central hub eliminates the need to hold and protect that extra data.
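To make the pattern concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the shared secret, field names and function names are assumptions, and a real deployment would use public-key signatures and standards such as SAML or OpenID Connect): the identity provider vouches for a user with a signed assertion, and the agency only verifies that signature; it never sees a password or stores PII.

import hashlib
import hmac
import json

# Hypothetical trust anchor shared by the identity provider and the agency.
IDP_SECRET = b"shared-secret-between-idp-and-agency"

def idp_issue_assertion(user_id: str) -> dict:
    """Identity provider: vouch that this user has been authenticated."""
    claim = json.dumps({"sub": user_id, "authenticated": True})
    sig = hmac.new(IDP_SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def agency_accepts(assertion: dict) -> bool:
    """Relying party: trust the claim if the provider's signature checks out."""
    expected = hmac.new(IDP_SECRET, assertion["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])

token = idp_issue_assertion("citizen-42")
print(agency_accepts(token))   # True -- and the agency never handled any PII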

No scheme will provide absolute security or completely transparent authentication. But federation and an interoperable system of trust can help. With such a system in place, agencies won’t have to worry if PII is leaking from their sites, and users would be able to whittle down the number of passwords and other credentials to be replaced when something does go wrong.

Posted by William Jackson on May 02, 2014 at 11:44 AM



Takeaways from Verizon's data breach report

Given the kinds of pressures security puts on organizations, taking care of it tends to be a short-term memory thing. It’s all about what happened yesterday and what’s likely to happen tomorrow, and even a year ago can seem like another reality.

Verizon, in its 2014 Data Breach Investigations Report (DBIR), shows that the short-term focus has its downsides, and that a longer-term view can produce real dividends that might otherwise be overlooked.

Forgoing the usual trends approach of these studies — what happened last year versus the previous one — Verizon instead took a deeper dive into the decade of data it has in its vaults.

The goal was to derive “actionable” information that would be of actual use. The report analyzed data from some 50 organizations around the world, taking a look at more than 63,000 security incidents that resulted in 1,367 confirmed data breaches in 2013. Previous DBIRs had never included more than 1,000.

It’s over longer periods of time that the patterns show themselves, however. The attack methods that created the vast majority of those 2013 incidents can be reduced to just nine and, of those, three stand out as the main culprits. That also confirms the results Verizon found from analyzing previous years’ data.

Drill down even further and two classes of incidents — caused by Web app attacks and cyber espionage — are seen as the main breach threats over the past few years. Point-of-sale attacks, which had exploded through 2011, dropped off sharply in 2012 and only picked up again slightly in 2013. That’s due, Verizon analysts feel, to more small and medium-sized businesses becoming aware of them.

“There’s been so many attacks of that kind over the last several years that it’s been like overfishing in the ocean,” said Marc Spitler, Verizon’s senior risk analyst. “So many of the SMBs have been caught, and they have now put meaningful defenses in place.”

The real revelation, however, comes with a comprehensive matrix of incident classifications and the industries they’ve affected. That shows a huge difference in the kinds of hits that various sectors take.

Verizon chart showing frequency of incident by sector

It also produces surprises. While Web attacks and cyber espionage may be the strongest threats overall, as far as the public sector is concerned, ‘insider misuse’ and ‘miscellaneous error’ accounted for nearly 60 percent of all incidents in 2013. The feared Web attacks and cyber espionage each accounted for less than 1 percent of public sector incidents.

Misuse and error can be anything from insiders deliberately stealing information to inadvertent mistakes, such as sending an email to the wrong person or attaching files that shouldn’t be attached, according to Spitler.

And that points to the kind of actionable information he hopes readers of the DBIR will glean from the report. Despite all of the fervor and money being directed at government cyber security, a much more immediately effective remedy might be simply to tighten up information handling processes, procedures and general data hygiene.
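To give one concrete, entirely hypothetical example of what such a control might look like, the short Python sketch below checks outgoing text for Social Security number patterns before it leaves the agency. It is a sketch of the idea, not a recommendation of any particular product or pattern list.

import re

# Toy data-hygiene check (illustrative only): flag outgoing text that
# appears to contain Social Security numbers before it is sent.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text: str) -> bool:
    """Return True if the text looks like it includes an SSN."""
    return bool(SSN_PATTERN.search(text))

outgoing = "Per your request: John Doe, SSN 123-45-6789, cleared for access."
if contains_ssn(outgoing):
    print("Blocked: message appears to contain PII; route for review.")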

The overall message of the Verizon report is probably summed up in a very simple graphic that compares, over 10 years, the time it has taken for an attacker to compromise an asset versus the time it takes for a defender to discover the breach. The gap is widening.

chart showing widening gap between time to launch an attack and time to discover breach

“We’re definitely not happy to see that trend,” Spitler said, “and it points to the fact that we need to see better detection methods and controls, because we need to pick these things up faster and sooner to prevent data loss.”

As a member of the self-confessed geek squad inside Verizon that spends a lot of its time going over incident and breach data, Spitler says he’s “extremely excited” by the more than 40 organizations that will likely add their data to the next DBIR, which can only improve the accuracy of the company’s big data analysis of these long-term security patterns.

“With better knowledge of these events we can do a better job of defending against the breaches they cause,” he said. “And it’s the little things we really need to understand.”

Posted by Brian Robinson on Apr 25, 2014 at 10:59 AM



In the wake of Heartbleed, open source software is under scrutiny

Heartbleed, the OpenSSL vulnerability that potentially exposes data in otherwise secure transactions, has raised once again the question of security in open source software. Some maintain that open source, because of all the eyes looking at it, is likely to be more secure. Others say that with anyone able to tamper with it, of course it can’t be trusted.

The truth is that neither open source nor proprietary software is inherently more secure. A recent report from Coverity, based on scans of 750 million lines of code across some 700 open source projects, found for the first time that open source code surpassed proprietary code in quality, with a defect density of 0.59 per 1,000 lines of code for open source compared with 0.72 for the proprietary code scanned.

But that is not the whole story, said Zack Samocha, Coverity’s senior director of products. The quality of the code depends on the commitment to security, he said. “You need to be really committed and fix issues as they come.”

Proprietary code produced under pressure of market demands also can be buggy. Users should test the products they are using to make sure they do what they are supposed to do. And if you are using open source, review the code and take part in its development, said Barrett Lyon, founder and CTO of Defense.Net.

“You should contribute,” Lyon said. “If it’s open source and it’s not secure, it’s partly your fault.”

With all of the eyes that could have looked at OpenSSL, the surprising thing is that Heartbleed escaped detection for two years. How did it happen in the first place? Like many security problems, it was the result of a trade-off with performance. Earlier versions of OpenSSL had a feature called memory management security that prevented memory leaks. But Lyon, in reviewing the code, saw that in version 1.0.1 a developer disabled the feature because it hurt performance in some implementations.

“It comes down to humans making mistakes,” said Jonathan Rajewski, an assistant professor and director of the Leahy Center for Digital Investigation at Champlain College. “This is an ‘ah-ha!’ moment that we can learn from, because it will happen again.”

The “fix” that caused the problem is in OpenSSL versions 1.0.1 through 1.0.1f; it was itself fixed in version 1.0.1g.
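The coding mistake itself is well documented: the heartbeat handler trusted the payload length claimed by the client instead of checking it against the data actually received, so it could read and return adjacent memory. The Python below is a loose analogy of that logic, not the actual OpenSSL code (which is C); the fake memory buffer and function names are invented for illustration.

# Loose Python analogy of the Heartbleed flaw; MEMORY stands in for whatever
# happens to sit next to the request in the server's memory.
MEMORY = bytes(b"...private key material...session cookies..." * 10)

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # Bug: echo back claimed_len bytes without checking the claim against the
    # actual payload, so the reply can include bytes the client never sent.
    return (payload + MEMORY)[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # Fix: silently discard requests whose claimed length exceeds the payload.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

print(heartbeat_vulnerable(b"hat", 60))  # "hat" plus leaked adjacent bytes
print(heartbeat_fixed(b"hat", 60))       # b"" -- request dropped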

One of the most serious concerns for agencies using affected versions of OpenSSL is the possibility of digital certificates being exposed, which could allow an attacker to spoof a government site, not only putting visitors to the site at risk, but damaging the agency’s reputation.

But there are a few bright spots. Although security experts preach the gospel of the timely update, this is a situation where procrastination could be a blessing. Many government administrators are conservative in their updating, a diplomatic way of saying they don’t have time to keep their software up to date and tend to leave working systems alone. This means that many agencies are running OpenSSL 1.0.0 versions, which are not vulnerable.

Another bright spot is that there are tools to detect vulnerable implementations of the software. The Nessus scanner from Tenable, for example, will detect it through its remote and local checks. And although exploits of the vulnerability don’t leave direct footprints, there are some telltale signs. Because each malicious request can expose at most 64 kilobytes of memory, it takes a large number of repeated requests to gather useful information, and that kind of anomalous behavior should show up in logs.
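A rough example of that kind of check, sketched in Python: count requests per source address in a connection log and flag sources far above the norm. The log file name, its format and the threshold are assumptions for illustration, not features of Nessus or any other product.

from collections import Counter

THRESHOLD = 1000  # arbitrary example cutoff for "suspiciously chatty"

def flag_noisy_sources(log_path: str) -> dict:
    """Count requests per client address, assuming the address is the
    first whitespace-separated field on each log line."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if fields:
                counts[fields[0]] += 1
    return {ip: n for ip, n in counts.items() if n > THRESHOLD}

# Example usage against a hypothetical log file.
for ip, n in flag_noisy_sources("tls_access.log").items():
    print(f"{ip} made {n} requests -- worth a closer look")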

For those affected, updating OpenSSL in your applications, revoking and replacing digital certificates and reissuing keys will be a chore. But there is yet another challenge. Because users are being advised to replace their passwords, a flood of reset requests could overwhelm help desks or automated reset systems.

“It’s going to be interesting,” Lyon said.

Posted by William Jackson on Apr 18, 2014 at 11:33 AM