Looking for insider threats

Insider threat detection tools: Hard to find, harder to fund

While most of the emphasis in cybersecurity falls on external threats and the damage suffered when network and data defenses are breached, threats from insiders are getting more attention in the aftermath of the Snowden and WikiLeaks revelations. What to do about them is another question, since the tools organizations currently use to track incursions don’t seem up to the task.

It’s not a new phenomenon. The FBI began voicing its concern years ago about threats from privileged users of data, both in government and industry. The issue has its very own website at the FBI, and the concern within government was bolstered by a White House memo to agency heads published at the end of 2012.

Now comes a survey by the Ponemon Institute, sponsored by Raytheon, that shows where the recognition/mitigation gap lies.

Across all of the government and industry sources surveyed, for example, 88 percent recognized that the insider threat is a cause for alarm and that the abuse will increase. At the same time, however, they said they have difficulty identifying what a specific threatening action looks like.

[Chart: challenges in establishing whether an event is an insider threat. Source: Insider Threat Ponemon Survey Report]

“Respondents said they just don’t have enough contextual information from their existing tools, which also throw up too many false positives,” said Michael Crouse, Raytheon’s director of insider threat strategies. “There’s a real need for a different way to attack the problem.”

Unlike external threats, where malicious intent is assumed, the situation with insiders is more nuanced. Of those who access sensitive or confidential information that isn’t necessary for their jobs, for example, survey respondents said as many as two-thirds are simply driven by curiosity.

In government, you can probably add the frustration of people under increasing pressure to get the job done and who don’t want to spend the time working through the red tape necessary to access information they think they need. Who hasn’t asked a buddy in the office to help with that kind of thing?

Other recent studies have also made the point that insider threats stem from relatively innocent actions as often as, or even more often than, malicious ones. Verizon’s 2014 Data Breach Investigations Report, for example, showed that misuse by insiders can come from something as simple as sending an email to the wrong person or attaching files that shouldn’t be attached.

One simple move toward an answer would be for organizations to properly configure the tools they do have, something Crouse said is “the easiest and most cost-effective” thing they can do. Beyond that, agencies need complementary tools, such as endpoint monitoring that shows how users behave when they access data through an endpoint, detailing IM traffic, contextual emails and whether they are cutting and pasting information in ways they haven’t previously.
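How might such monitoring flag unusual behavior? Here is a minimal sketch of the kind of heuristic an endpoint agent could apply, assuming a hypothetical event format and a fixed per-user baseline; real tools learn each user’s baseline from history rather than hard-coding it:

```python
from collections import defaultdict

# Hypothetical endpoint events: (user, action, bytes moved).
# A real endpoint agent would log far richer detail (IM traffic,
# email context, paste destinations) than this toy format.
events = [
    ("alice", "clipboard_copy", 2_000),
    ("alice", "clipboard_copy", 1_500),
    ("bob", "clipboard_copy", 48_000_000),   # unusually large paste
    ("bob", "file_attach", 120_000_000),
]

BASELINE_BYTES = 5_000_000  # assumed per-user norm, for illustration only

def flag_anomalies(events, threshold=BASELINE_BYTES):
    """Flag users whose cumulative data movement exceeds the baseline."""
    totals = defaultdict(int)
    for user, _action, nbytes in events:
        totals[user] += nbytes
    return {user: total for user, total in totals.items() if total > threshold}

print(flag_anomalies(events))  # {'bob': 168000000}
```

Flagging deviations from a user’s own history, rather than from a one-size-fits-all threshold, is precisely the contextual information respondents said their current tools lack.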

That’s all well and good, of course, but there’s a big catch. While nearly 90 percent of those surveyed in the Ponemon report said they understood the need for enhanced security, only 40 percent had any kind of a dedicated budget to spend on tools specifically aimed at insider threats. That’s why most organizations — and certainly government agencies — have to limp along by trying to jury-rig existing, and unsuitable, cybersecurity tools to do the job.

One of the reasons for that budget shortfall, Crouse gamely admitted, is that companies like his have not done a good job explaining the ROI from money spent on these tools. What organizations don’t understand, he said, is that while the number of actual breaches from insiders is low compared to those from external threats, the impact from those breaches is substantially higher.

“I don’t think they truly understand either the monetary or mission impact from these insider breaches,” he said. “They’re just now trying to get their heads around that.”

Posted by Brian Robinson on May 23, 2014 at 9:30 AM



Would the government have told us about Heartbleed? Should it?

The government says it did not know about the Heartbleed vulnerability in OpenSSL before it was publicly disclosed. But White House Cybersecurity Coordinator Michael Daniel says that if it had known, it might not have told us.

“In the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest,” Daniel wrote in a recent White House blog post. But not always. “Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence that could thwart a terrorist attack, stop the theft of our nation’s intellectual property or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks.”

Daniel goes on to explain some of the criteria used in deciding when and when not to disclose a serious vulnerability.

Over the years, the security community has come to a consensus on how to handle disclosure of security vulnerabilities in software. The discoverer first informs the product’s vendor, giving the company time to develop a patch or workaround before reporting it publicly. This protocol is not mandatory, however. Researchers can use the threat of disclosure to pressure vendors to respond to vulnerabilities, and some companies offer a bounty for new vulnerabilities to encourage researchers to cooperate. But the value of a new vulnerability can be much greater than a bounty.

In the end, how a vulnerability is handled depends on the motives and morals of the discoverer. For criminals, a good zero-day vulnerability—one for which no fix yet exists—is money in the bank. For governments, it can be an espionage tool or a weapon. The Stuxnet worm, an offensive weapon widely believed to have been developed by the United States and Israel, exploited several zero-day vulnerabilities.

Daniel said there are practical limits on hoarding bugs. “Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest,” he wrote. “But that is not the same as arguing that we should completely forgo this tool as a way to conduct intelligence collection and better protect our country in the long-run.”

Daniel said there are no hard and fast rules for determining when to disclose, but the administration has a “disciplined, rigorous” process for deciding. The criteria include:

  • How widely used and important is the vulnerable product?
  • How serious is the vulnerability? Can it be patched, and how much harm could it do if it falls into the wrong hands?
  • Would we know if someone else was using it?
  • What is the value of the intelligence we could gather with it, and are there other ways to gather it?
  • Is someone else likely to discover it?

Heartbleed potentially leaked sensitive information protected by OpenSSL, which is very widely used to protect online commerce and other transactions. The vulnerability was critical, and although a fixed version of the software was released, replacing it will take some time.

Would we know if someone was using it? Maybe. Gathering useful information requires a high number of connections to a vulnerable server, which could be detected in activity logs. Shortly after the disclosure, Canadian police arrested a young man for allegedly using Heartbleed to steal tax data.
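A rough illustration of that detection idea, assuming a simplified connection-log format (the fields here are hypothetical): because harvesting anything useful through Heartbleed takes many repeated reads, an exploiting host tends to stand out as an abnormally chatty source address.

```python
from collections import Counter

# Hypothetical TLS connection log: (timestamp, source IP).
# Real server or proxy logs carry much more detail.
log = [
    ("2014-04-08T10:00:01", "203.0.113.7"),
    ("2014-04-08T10:00:02", "198.51.100.23"),
] + [("2014-04-08T10:00:03", "203.0.113.7")] * 500   # burst of reconnects

def suspicious_sources(log, threshold=100):
    """Flag source addresses with abnormally high connection counts."""
    counts = Counter(ip for _timestamp, ip in log)
    return [ip for ip, n in counts.items() if n > threshold]

print(suspicious_sources(log))  # ['203.0.113.7']
```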

As for the value of intelligence to be gained, who can say? Is someone else likely to discover it? Yes, given that it was in open source software available to anyone. And did someone else discover it? Yep. Researchers at Codenomicon and Google Security.

So, did the National Security Agency discover Heartbleed first, and if it did, would it have told us? According to the White House criteria, it would be a good candidate for disclosure. But we’ll probably never know.

Posted by William Jackson on May 16, 2014 at 9:03 AM



Heartbleed begets headaches in perfecting encryption

In the wake of major data breaches during the last few months, and with the ongoing scare over the OpenSSL Heartbleed bug, government security managers will likely find themselves dealing with major changes in some of the encryption tools they use to safeguard their agencies’ data.

One change is already in the works. The National Institute of Standards and Technology recently released an update to its Special Publication 800-52, which offers guidelines on how to implement transport layer security (TLS) protocols. TLS is the standard used to protect sensitive data — anything from credit card numbers to patient health information, email and social networking details — that have to move across open networks, such as the Internet.

The Internet Engineering Task Force (IETF), which oversees Internet standards, found vulnerabilities in the 1.0 version of TLS and spent several years upgrading it through versions 1.1 and 1.2. Meanwhile, NIST withdrew the initial 800-52 guidelines, which it released in 2005, because they didn’t reflect the updates in TLS 1.1 and 1.2.

The latest revision of the guidelines rectifies that, and offers network administrators a range of recommendations on how to configure TLS options, according to NIST, including which algorithms to use and the length of cryptographic keys.
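For a sense of where such settings live in practice, here is a minimal sketch using Python’s standard ssl module (Python 3.7 or later); the protocol floor and cipher string are illustrative choices, not a restatement of SP 800-52’s recommendations:

```python
import ssl

# Pin the protocol floor and prefer ephemeral-key, AEAD cipher suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 clients
ctx.set_ciphers("ECDHE+AESGCM")                # ephemeral key exchange, AEAD only
# A real server would also load its certificate chain and private key:
# ctx.load_cert_chain("server.crt", "server.key")
```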

And admins will have to deal with even bigger changes down the road. The IETF is currently considering yet another update to the TLS spec, version 1.3, which will likely remove the RSA key transport cipher suites. RSA key transport has been the decades-old basis of the “handshakes” used to set up network sessions, and it is now considered too vulnerable to attack.

Improving the TLS handshake is one of the prime design targets for the IETF TLS 1.3 working group. The current IETF thinking is apparently to use systems such as ephemeral Diffie-Hellman or elliptic curve Diffie-Hellman key exchange instead. Both support perfect forward secrecy, which many organizations are pushing because they see it as offering much better protection for the keys used to encrypt data and communications. Perfect forward secrecy ensures that a compromise of the keys protecting one session cannot lead to the compromise of others.
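To see why ephemeral key exchange provides that property, here is a toy Diffie-Hellman exchange with a deliberately small, insecure prime; real deployments use large standardized groups or elliptic curves:

```python
import secrets

p, g = 0xFFFFFFFB, 5   # tiny toy parameters -- NOT secure

def ephemeral_keypair():
    priv = secrets.randbelow(p - 2) + 1   # fresh secret for this session only
    return priv, pow(g, priv, p)          # (private, public)

a_priv, a_pub = ephemeral_keypair()   # client side
b_priv, b_pub = ephemeral_keypair()   # server side

# Each side combines its own secret with the other's public value
# and arrives at the same shared session key.
assert pow(b_pub, a_priv, p) == pow(a_pub, b_priv, p)

# The private values are discarded after the session, so a later theft
# of the server's long-term signing key reveals nothing about this
# session's shared secret.
```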

However, perfect forward secrecy is not entirely perfect, as it turns out. Any network sessions established with a leaked or stolen key would still be open to compromise, for example. So while it would protect all future sessions, it wouldn’t retroactively solve problems caused by the Heartbleed bug.

But the need for something better, and for the features TLS 1.3 will offer, is now much clearer. Heartbleed itself is still being examined, though early attempts to estimate the costs from fixing the many systems that could be affected suggest there will be major repercussions.

A more tangible indicator is what happened with the data breach at Target stores last year. The numbers involved in that breach — which was not even the year’s biggest — are staggering: $200 million for banks to reissue cards to customers whose card numbers were stolen, and $100 million for Target to upgrade its payment terminals. The company also saw a 46 percent drop in profit in the fourth quarter of 2013 as it struggled to regain its customers’ confidence.

There’s one takeaway from that analysis, however, that neither NIST, the IETF nor anybody else could have fixed: Target had precisely zero people with the title of either chief information security officer or chief security officer.

Posted by Brian Robinson on May 09, 2014 at 10:13 AM



Federating identity will slow personal information leaks

The Heartbleed vulnerability, which can leak sensitive data from supposedly secure Web connections, exposes the limits of using one-off credentials that must be authenticated separately for each transaction. Attack surfaces are greatly expanded when personally identifiable information (PII) is maintained by every agency and Web site offering online services.

“The idea that the user must have information everywhere is a bad idea,” said Andre Boysen, executive vice president for marketing at SecureKey. Having a single credential that can be authenticated by a trusted authority and accepted by multiple service providers can reduce the attack surface by maintaining PII at a single point. It also helps relieve the burden of managing credentials and identities.

This idea of federated identity is not new. Banks, merchants and credit card companies have been using a form of it for years. Merchants no longer have to issue and manage their own credit cards. A bank vets your identity and creditworthiness, a card company ensures the credit card is valid and has not been compromised, and online merchants do not have to worry about who you are as long as a credit card company vouches for the card.

It is not a risk-free system, but the risk is managed. Credit card numbers are sometimes exposed, but the exposure is considerably less than if every merchant had to maintain PII for every customer. When a breach occurs, users have to change one credit card, not one for every merchant visited.

Why can’t government online authentication be this simple? “It is heading that way,” Boysen said.

Canada implemented a Federated Identity Management program to leverage interoperable security credentials several years ago. In the United States, the Postal Service is preparing to roll out the Federal Cloud Credential Exchange (FCCX), a federated identity management hub that will let agencies accept online credentials issued by trusted third parties.

The system is part of the National Strategy for Trusted Identities in Cyberspace. Personal information and the identity of the original issuer of the credentials will be hidden from the FCCX hub, and log-in information will not be shared or compared between agencies. But the agency will know what it needs to know: You are who you say you are.
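A minimal sketch of the brokering idea, with all names, keys and the token format hypothetical: the hub verifies that a trusted identity provider vouched for the user, then mints a pseudonym scoped to a single agency, so log-ins cannot be correlated across agencies. The actual FCCX design goes further and hides the underlying identity from the hub itself, which this toy omits.

```python
import hashlib, hmac, secrets

HUB_SECRET = secrets.token_bytes(32)   # known only to the hub

def idp_assertion(idp_key, user_id):
    """The identity provider vouches for a user with an HMAC-signed assertion."""
    return user_id, hmac.new(idp_key, user_id.encode(), hashlib.sha256).digest()

def hub_broker(assertion, idp_key, agency):
    """Verify the issuer's signature, then mint a per-agency pseudonym."""
    user_id, sig = assertion
    expected = hmac.new(idp_key, user_id.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("assertion not vouched for by a trusted issuer")
    scoped = f"{agency}:{user_id}".encode()
    return hmac.new(HUB_SECRET, scoped, hashlib.sha256).hexdigest()

idp_key = secrets.token_bytes(32)
assertion = idp_assertion(idp_key, "alice@example.com")
# Different agencies receive unlinkable identifiers for the same person:
print(hub_broker(assertion, idp_key, "AgencyA") != hub_broker(assertion, idp_key, "AgencyB"))  # True
```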

It should be noted that SecureKey is not exactly an impartial observer in this issue. The company has contracts with both Canada and USPS to provide a cloud-based platform for authenticating digital credentials. But this does not change the fact that a federated system offers a way to improve both security and privacy at a time when attacks on online activities are growing and digital credentials are like money in the bank for criminals.

Government stands to benefit greatly from federated identity schemes. Unlike banks, agencies tend to have relatively few transactions with each individual, which raises the overhead of authenticating each user who logs on. “Basically, every transaction is a re-enrollment,” Boysen said. This is frustrating for the user and expensive for the agency, and each agency must also manage and secure its own database of PII. Offloading authentication to a central hub eliminates the need to hold and protect that extra data.

No scheme will provide absolute security or completely transparent authentication. But federation and an interoperable system of trust can help. With such a system in place, agencies won’t have to worry if PII is leaking from their sites, and users would be able to whittle down the number of passwords and other credentials to be replaced when something does go wrong.

Posted by William Jackson on May 02, 2014 at 11:44 AM