
Takeaways from Verizon's data breach report

Given the kinds of pressure security puts on organizations, taking care of it tends to be a short-term memory exercise. It’s all about what happened yesterday and what’s likely to happen tomorrow, and even a year ago can seem like another reality.

Verizon, in its 2014 Data Breach Investigations Report (DBIR), shows that the short-term focus has its downsides: a longer-term view can produce real dividends that might otherwise be overlooked.

Foregoing the usual trends approach of these studies — what happened last year versus the previous one — Verizon instead took a deeper dive into the decade of data it has in its vaults.

The goal was to derive “actionable” information that would be of actual use. The report analyzed data from some 50 organizations around the world, taking a look at more than 63,000 security incidents that resulted in 1,367 confirmed data breaches in 2013. Previous DBIRs had never covered more than 1,000 breaches.

It’s over longer periods of time that the patterns show themselves, however. The attack methods that created the vast majority of those 2013 incidents can be reduced to just nine and, of those, three stand out as the main culprits. That also confirms the results Verizon found from analyzing previous years’ data.

Drill down even further and two classes of incidents — those caused by Web app attacks and cyber espionage — emerge as the main breach threats over the past few years. Point-of-sale attacks, which had exploded through 2011, dropped off sharply in 2012 and picked up only slightly in 2013. That’s due, Verizon analysts feel, to more small and medium-sized businesses becoming aware of them.

“There’s been so many attacks of that kind over the last several years that it’s been like overfishing in the ocean,” said Marc Spitler, Verizon’s senior risk analyst. “So many of the SMBs have been caught, and they have now put meaningful defenses in place.”

The real revelation, however, comes with a comprehensive matrix of incident classifications and the industries they’ve affected. It shows a huge difference in the kinds of hits that various sectors take.

Verizon chart showing frequency of incident by sector

It also produces surprises. While Web attacks and cyber espionage may be the strongest threats overall, in the public sector ‘insider misuse’ and ‘miscellaneous error’ accounted for nearly 60 percent of all incidents in 2013. The feared Web attacks and cyber espionage each accounted for less than 1 percent of public sector incidents.

Misuse and error can be anything from insiders deliberately stealing information to inadvertent mistakes, such as sending an email to the wrong person or attaching files that shouldn’t be attached, according to Spitler.

And that points to the kind of actionable information he hopes readers will glean from the DBIR. Despite all of the fervor and money being directed at government cybersecurity, a much more immediately effective remedy might simply be to tighten up information-handling processes and procedures and general data hygiene.

The overall message of the Verizon report is probably summed up in a very simple graphic that compares, over 10 years, the time it has taken attackers to compromise an asset with the time it has taken defenders to discover the breach. The gap is widening.

chart showing widening gap between time to launch an attack and time to discover breach

“We’re definitely not happy to see that trend,” Spitler said, “and it points to the fact that we need to see better detection methods and controls, because we need to pick these things up faster and sooner to prevent data loss.”

As a member of the self-confessed geek squad inside Verizon that spends much of its time going over incident and breach data, Spitler said he’s “extremely excited” by the more than 40 organizations that will likely add their data to the next DBIR, which can only improve the accuracy of the company’s big data analysis of long-term security trends.

“With better knowledge of these events we can do a better job of defending against the breaches they cause,” he said. “And it’s the little things we really need to understand.”

Posted by Brian Robinson on Apr 25, 2014



In the wake of Heartbleed, open source software is under scrutiny

Heartbleed, the OpenSSL vulnerability that potentially exposes data in otherwise secure transactions, has raised once again the question of security in open source software. Some maintain that open source, because of all the eyes looking at it, is likely to be more secure. Others say that with anyone able to tamper with it, of course it can’t be trusted.

The truth is that neither open source nor proprietary software is inherently more secure. A recent report by Coverity, based on scans of 750 million lines of code from 700 open source projects, found for the first time that the quality of open source surpassed that of proprietary projects, with a defect density of 0.59 per 1,000 lines of code for open source compared with 0.72 for the proprietary code scanned.

But that is not the whole story, said Zack Samocha, Coverity’s senior director of products. The quality of the code depends on the commitment to security, he said. “You need to be really committed and fix issues as they come.”

Proprietary code produced under pressure of market demands also can be buggy. Users should test the products they are using to make sure they do what they are supposed to do. And if you are using open source, review the code and take part in its development, said Barrett Lyon, founder and CTO of Defense.Net.

“You should contribute,” Lyon said. “If it’s open source and it’s not secure, it’s partly your fault.”

With all of the eyes that could have looked at OpenSSL, the surprising thing is that Heartbleed escaped detection for two years. How did it happen in the first place? Like many security problems, it was the result of a trade-off with performance. Earlier versions of OpenSSL had a memory-management security feature that guarded against memory leaks. But Lyon, in reviewing the code, saw that in version 1.0.1 a developer disabled the feature because it hurt performance in some implementations.
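Separate from the memory-management trade-off Lyon describes, the coding error itself has been widely documented as a missing bounds check: the heartbeat handler trusted a length field supplied by the client and echoed back that many bytes of memory. What follows is a simplified, hypothetical sketch of that pattern in Python, not OpenSSL’s actual C code; the names and data are invented for illustration.

```python
# Hypothetical sketch of the Heartbleed pattern: a heartbeat reply built
# from a length field the client supplied, with no check that the length
# matches what was actually sent. Names and data are invented.

HEAP = bytearray(64 * 1024)                 # simulated process memory
SECRET = b"-----BEGIN PRIVATE KEY----- (pretend key material)"
HEAP[100:100 + len(SECRET)] = SECRET        # sensitive data sitting nearby

def handle_heartbeat(request: bytes, patched: bool = False) -> bytes:
    claimed_len = int.from_bytes(request[:2], "big")   # attacker-controlled
    payload = request[2:]
    HEAP[0:len(payload)] = payload                     # request lands in memory
    if patched and claimed_len > len(payload):
        return b""        # 1.0.1g-style fix: silently drop bogus lengths
    # BUG (when patched=False): copy claimed_len bytes starting at the payload,
    # which can run far past it into unrelated memory.
    return bytes(HEAP[0:claimed_len])

evil = (200).to_bytes(2, "big") + b"ping"   # claims 200 bytes, sends only 4
print(handle_heartbeat(evil))               # reply includes the fake key material
print(handle_heartbeat(evil, patched=True)) # patched behavior: empty reply
```

The actual patch works the same way in spirit: heartbeat messages whose stated payload length exceeds the record that was really received are silently discarded.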

“It comes down to humans making mistakes,” said Jonathan Rajewski, an assistant professor and director of the Leahy Center for Digital Investigation at Champlain College. “This is an ‘ah-ha!’ moment that we can learn from, because it will happen again.”

The change that introduced the problem is present in OpenSSL versions 1.0.1 through 1.0.1f; the flaw was fixed in version 1.0.1g.

One of the most serious concerns for agencies using affected versions of OpenSSL is the possibility of digital certificates being exposed, which could allow an attacker to spoof a government site, not only putting visitors to the site at risk, but damaging the agency’s reputation.

But there are a few bright spots. Although security experts preach the gospel of the timely update, this is a situation where procrastination could be a blessing. Many government administrators are conservative in their updating, which is a diplomatic way of saying they don’t have time to keep their software up to date and leave anything that is working alone. As a result, many agencies are running OpenSSL 1.0.0 versions, which are not vulnerable.

Another bright spot is that there are tools to detect vulnerable implementations of the software. The Nessus scanner from Tenable, for example, will detect it through its remote and local checks. And although exploits of the vulnerability don’t leave direct footprints, there are some telltale signs. Because memory is exposed in chunks of at most 64 kilobytes per request, an attacker needs a large number of repeated connections to gather useful information, and that kind of anomalous behavior should show up in logs.
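For a quick local check, the OpenSSL build a given runtime is linked against can also be inspected programmatically. The snippet below is a minimal sketch using Python’s standard ssl module; note that it trusts the reported version string, so distribution builds that backported the fix without changing the version letter could still be flagged.

```python
# Minimal sketch: report whether the OpenSSL library this Python
# interpreter is linked against falls in the affected 1.0.1-1.0.1f range.
import ssl

print(ssl.OPENSSL_VERSION)                 # e.g. "OpenSSL 1.0.1e 11 Feb 2013"

major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
# Patch letters map to numbers: 0 = 1.0.1, 1 = 1.0.1a, ... 6 = 1.0.1f, 7 = 1.0.1g
if (major, minor, fix) == (1, 0, 1) and patch < 7:
    print("Potentially vulnerable to Heartbleed -- verify and update.")
else:
    print("Reported version is outside the affected range.")
```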

For those affected, updating OpenSSL in their applications, revoking and replacing digital certificates and reissuing keys will be a chore. But there is yet another challenge: because users are being advised to replace their passwords, a flood of reset requests could overwhelm help desks or automated reset systems.
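The key-reissue step, at least, lends itself to scripting. Below is a minimal sketch that generates a replacement RSA key and certificate signing request using the third-party cryptography package; the hostname and file names are placeholders, and revoking the old certificate with the issuing CA still has to happen separately.

```python
# Sketch of re-keying after suspected exposure: generate a new RSA key and a
# CSR to submit to the certificate authority. Requires the third-party
# "cryptography" package; the hostname and file names are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

new_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.gov")]))
    .sign(new_key, hashes.SHA256())
)

with open("replacement.key", "wb") as key_file:
    key_file.write(new_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))
with open("replacement.csr", "wb") as csr_file:
    csr_file.write(csr.public_bytes(serialization.Encoding.PEM))
```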

“It’s going to be interesting,” Lyon said.

Posted by William Jackson on Apr 18, 2014



Is limiting damage the best hope for cybersecurity?

When it comes to cybersecurity, government defenses tend to be measured against broad threats such as cyberespionage and possible nation-state attacks on the country’s critical infrastructure. As recent studies show, however, that focus may be somewhat misdirected.

Symantec’s 2014 Internet Security Threat Report shows yet again why it’s the smaller, oft-used threats that likely remain the biggest problem for agencies. Those threats have grown in number, and they continue to evolve in response to better defenses.

Spear phishing, for example, was a major problem in the past but had been seen as diminishing as other threats grew and took up more of organizations’ attention. Not so, according to Symantec, which called reports of the death of spear phishing “greatly exaggerated.” In fact, while the total number of emails used per phishing campaign decreased, along with the number of targets, the total number of campaigns almost doubled in 2013.

“This ‘low and slow’ approach (campaigns also run three times longer than those in 2012) is a sign that user awareness and protection technologies have driven spear phishers to tighten their targeting and sharpen their social engineering,” Symantec said.

The even worse news? Government is among the top three targets for these kinds of attacks, the report said, with odds of 1 in 3.1 that at any given time a government employee is being subjected to a phishing attack (though, admittedly, the method used to come up with that ratio is a little fishy!).

The rest of the Symantec report is not more hopeful, and its conclusions make for scary reading:

  • More zero-day vulnerabilities were discovered in 2013 than in any other year; in fact, 2013 registered more of them than the previous two years combined.
  • Ransomware attacks, where perpetrators pretend to be local law enforcement demanding payment of fake fines, grew by 500 percent in 2013 and “turned vicious.”
  • There was explosive growth of scams and malware attacks via mobile media in 2013, though the prevalence of those is still relatively low.
  • Users continue to fall for scams on social media sites, and the fear is that this behavior will have even worse consequences as the activity migrates to mobile devices.
  • Attackers are now turning to the Internet of Things. With device manufacturers so far not paying much attention to security, the onus falls on the user, which surely has attackers salivating at the prospect. As Symantec said, there’ll be a huge increase in data because of the IoT, and “big data is big money.”

The latest illustration of the potential for attackers came with the revelation on April 7 of the so-called OpenSSL Heartbleed bug, a vulnerability that had existed in OpenSSL versions up through 1.0.1f for a couple of years but had only recently been patched.

Some high-profile sites had apparently been open to leaking information because of the bug, including the FBI’s main site. OpenSSL is a widely used SSL library, and is the basis for a lot of data encryption across the Web.

Looking ahead, Symantec makes a salient point: Even though better cooperation between law enforcement and industry is making it increasingly difficult for cyber criminals to operate, this won’t make them stop. Instead, Symantec said, e-crime is likely to move toward a new and more professional model.

That’s in line with other recent reports. As this blog recently pointed out, not only are cyber criminals becoming more professionalized, the market for the attacks tools they use is also proliferating, ramping up threats posed by a profit-based, market-driven business.

It may be tempting for those in government to throw up their hands and concede defeat. How is a ponderous and slow-turning ship like the government supposed to compete against the nimble and light-footed criminal set?

The easy answer is that it can’t. There’s no way a bureaucratic and budget-constrained organization like the government, or its agencies, can compete at that level. But it can instill a mindset that will drive government responses to cybersecurity, and even that has been missing until recently.

The champion in this case is the National Institute of Standards and Technology, a non-regulatory body that has been pushing for a risk-based framework for cybersecurity that emphasizes limiting damage from attacks rather than trying to prevent them completely.

That approach has been adopted by the Department of Homeland Security, and private industry is also increasingly taking it up. Earlier this year, the National Association of State Chief Information Officers (NASCIO) said it was adopting NIST’s framework, which “provides states with a common platform on which to base strategic security decisions, allocate resources and build defenses against both common and sophisticated attacks.”

The final leg in the stool came with the decision by the Defense Department a few weeks ago, after several years of negotiation and discussion, to adopt NIST’s risk management framework as the basis of its cyber defense. With that, there is now a common language that all levels of government and the private sector can use to define and coordinate their cybersecurity efforts.

It won’t stop cyber criminals from getting into government systems, and breaches will continue. But it lays a foundation for something that could, finally, deliver a resilient defense.

Posted by Brian Robinson on Apr 11, 2014



Making IT security a priority at VA

If a demonstration is needed that security is a process, not a product, and that it depends on management, not technology, the Veterans Affairs Department provides it.

The Government Accountability Office recently recited to a House panel a litany of weaknesses in the sprawling department’s struggling IT security program. The VA inspector general has identified development of an information security program as a “major management challenge,” and auditors have flagged inadequate security controls in financial systems as a material weakness for 12 years. GAO warnings date back to 1998, and it has reported consistent weaknesses in VA’s security control areas since 2007.

“The persistence of similar weaknesses over 16 years later indicates the need for stronger, more focused management attention and action to ensure that VA fully implements a robust security program,” Gregory Wilshusen, GAO’s director of information security issues, told a House VA oversight subcommittee on March 25.

In an effort to refocus management attention, Rep. Jackie Walorski (R-Ind.) on April 2 introduced a bill, H.R. 4370, to “improve the transparency and the governance of the information security program of the department.” The text of the bill is not yet available, but Walorski said in a statement that it would provide “a clear roadmap for immediately securing its system.”

The department’s security shortcomings have been so consistent for so long that they merit attention. The size of the department and the scope of its mission make it one of the greatest IT security challenges in government. VA operates the nation’s largest healthcare system, providing care for about 6 million veterans, administers financial benefits for millions more and manages veterans’ graves across the country.

In June last year, the House VA Oversight and Investigations Subcommittee recommended designating the VA network a “compromised environment” and said that VA should establish controls to reclaim it “from nation state sponsored organizations.”

Department CIO Stephen W. Warren, in a November 2013 letter to subcommittee Chairman Mike Coffman, responded that “VA has in place a strong, multi-layered defense to combat evolving cybersecurity threats, including monitoring by external partners and active scanning of Web applications and source code.”

But from January 2010 through October 2013, more than 29,000 possible data breaches were reported by VA. In his letter, Warren noted that “virtually all of VA’s data breaches are paper-based, equipment loss or unencrypted e-mailing of sensitive information.”

VA is addressing the equipment loss issue by encrypting laptops and desktops, which began last year in conjunction with the department’s upgrade to the Windows 7 OS. Warren reported that as of Oct. 29, 87 percent of the computers, more than 330,000 systems, were running Windows 7 and most of the rest were expected to be upgraded by the end of January 2014. He noted, however, that some pockets were likely to remain due to what he called “blocker” applications, “applications that are not compatible with Windows 7 and have not yet been replaced.”

Whether Congress will be able to significantly improve VA’s cybersecurity with new legislation remains an open question. Wilshusen, in last month’s testimony to the subcommittee, said that “many of the actions and activities specified in the bill are sound information security practices and consistent with federal guidelines. If implemented on a risk-based basis, they could prompt VA to refocus its efforts on steps needed to improve the security of its systems and information.”

But he cautioned that security should be risk-based and not based on technology requirements that could quickly become outdated.

Posted by William Jackson on Apr 04, 2014