Verizon breach report: bad news and worse news

The trouble with reports such as Verizon’s deeply detailed 2015 Data Breach Investigations Report is that they make for such interesting reading, even while they effectively depress the hell out of everybody.

The very first element in the report talks about “victim demographics,” and carries a graphic that depicts in red where incidents and breaches happened around the world. The whole of North America, Australia, Russia, most of Europe and Asia, and a good part of Latin America are a deep crimson. The only place not well colored is Africa, and that’s probably less because the continent is safer than because few of the organizations reporting breaches to Verizon actually operate there.

But then there are the interesting bits. The public sector once again seems to be the major casualty when it comes to data breaches, with over 50,000 security incidents tallied during the year, far more than other sectors reported. However, as Verizon itself points out, that’s misleading, since there are many government incident response teams participating in the survey and they handle a high volume of incidents, many of which fall under mandatory reporting regulations.

The number of confirmed data losses probably paints a more accurate picture. With over 300, the public sector had the highest number (other than the “unknown” sector), but it wasn’t that far ahead of the financial services industry. Manufacturing took the third-place slot.

Depression returns when the report looks at some of the threats and how successful they are. How long, for example, have we been told to regard all unsolicited offers online as suspicious? Social engineering has for years been attackers' best way to get inside organizations, and phishing once again tops the Verizon threat list. For the past two years, phishing has been a part of more than two-thirds of the cyber-espionage pattern Verizon tracks.

And no wonder, since the ROI for the bad guys is apparently so good. Some 23 percent of the recipients of these emails open them, according to the report, and 11 percent click on the attachments. The numbers, Verizon said, show that a campaign of just 10 emails yields a greater than 90 percent chance that at least one person will fall prey to the phishers. A test conducted for the report showed that nearly half of users open emails and click on links within the first hour of one of these phishing campaigns.
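The report doesn’t show its arithmetic, but the “at least one victim” claim is easy to sanity-check under a simplifying assumption: treat each recipient as an independent chance of success and use the 23 percent open rate as the per-recipient probability (that mapping is my assumption, not Verizon’s stated method). A quick back-of-the-envelope sketch in Python:

# Probability that at least one of n independent recipients falls for the lure:
# 1 - (1 - p)^n. Using the report's 23 percent open rate as the per-recipient
# probability is an assumption made here for illustration only.
def chance_of_at_least_one_victim(p_per_recipient, n_recipients):
    return 1 - (1 - p_per_recipient) ** n_recipients

print(chance_of_at_least_one_victim(0.23, 10))  # about 0.93, i.e., greater than 90 percent
print(chance_of_at_least_one_victim(0.11, 10))  # about 0.69 if only the click rate is used

Either way, a handful of emails is enough to make a successful phish more likely than not.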

A separate study, sponsored by KnowBe4, confirms that email spear phishing is the number one source of data breaches, with human error close behind. User education is seen as the best solution, and every government agency says it has programs meant to bring users up to speed on the dangers, but that depends on what your definition of “program” is.

The organizational approach to user education is a big part of the problem, according to KnowBe4 chief executive Stu Sjouwerman. For compliance reasons, he said, “too many companies still rely on a once-a-year ... ‘death by PowerPoint’ training approach, or just rely on their filters, do no training and see no change in behavior.”

And then there are vulnerabilities. Notice all of those notifications you get about upgrades to operating systems and apps? Many of them involve security updates to patch vulnerabilities that have been found, and it’s the same for enterprise systems. The past year seemed to surface an especially large number of vulnerabilities, including several in the widely used OpenSSL cryptographic library, among them the now-infamous Heartbleed bug.

According to exploit data analyzed for the Verizon report, fully 99.9 percent of the exploited vulnerabilities were compromised more than a year after they were publicly disclosed. The lesson? Don’t just patch in response to announced “critical” vulnerabilities, but patch often and completely. The report “demonstrates the need for all those stinking patches on all your stinking systems,” its authors said.

The Verizon report wasn’t a complete downer, though. It looked at the security problems surrounding mobile devices, for example, which have been a focus of government for some years and have been a major reason for the anemic uptake of bring your own device programs in agencies. But a forensic examination of the breach data surrounding mobile showed that less than 1 percent of smartphones used on the Verizon Wireless system — the biggest in the U.S. — were infected with malware. A minuscule number of the devices carried what Verizon called “high-grade” malicious code.

Given the detail in the report, just about every organization can get something from it, though coming up with an overall conclusion about the state of cybersecurity is tougher. For the report’s authors, however, the practical solutions are tried and true, if a bit tedious.

“Don’t sleep on basic, boring security practices,” they say. “Stop rolling your eyes. If you feel you have met minimum-security standards and continue to validate this level of information, then bully for you! It is, however, still apparent that not all organizations are getting the essentials right.”

That’s probably an understatement.

Posted by Brian Robinson on Apr 24, 2015 at 12:45 PM


DARPA’s strategy for 100-year software

An axiom of systems design is that the more complex the system, the harder it is to understand and, therefore, the harder it is to manage. When it comes to cybersecurity, that principle is what bad actors rely on to get their malware through enterprise defenses, where it can then squirrel away vital information or damage essential systems.

The complexity is partially caused by the fact that modern software simply does not have the shelf life it used to. Back in the day, software was not expected to change much over a number of years, making it relatively easy to maintain.

Those days are long gone. The pace of innovation today means there is almost constant churn in IT technologies, with the introduction of new processors and devices that require significant changes to operating systems, application software and application programming interfaces (APIs), to name just a few. Use cases for these technologies can also change quickly, which means more modifications to software and system configurations are required.

Now consider future scenarios. With distributed devices and networking driving the Internet of Things, there may be no central point of intelligence.  We may not know what changes are being made to what systems, when, or by whom. How is that a good idea?

What's needed is a new way of looking at software development, aimed at ensuring applications can continue to function as expected in this rapidly changing environment. That’s what the Defense Advanced Research Projects Agency is looking for in a program it calls BRASS, for Building Resource Adaptive Software Systems.

Without some way of ensuring long-term functionality, DARPA warns, it’s not just software running websites or home thermostats that is at risk. “The inability to seamlessly adapt to new operating conditions negatively impacts economic productivity, hampers the development of resilient and secure cyber-infrastructure, raises the long-term risk that access to important digital content will be lost as the software that generates and interprets that content becomes outdated and complicates the construction of autonomous mission-critical programs.”

A new approach to building and maintaining software for the long term will lead to “significant improvements in software resilience, correctness and maintainability,” DARPA maintains. BRASS aims to automatically discover the relationships between the computations that happen in IT ecosystems and the resources those computations need, and to develop techniques for dynamically incorporating algorithms constructed as adaptations to changes in those ecosystems.
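DARPA hasn’t said what BRASS-built software would look like, but the underlying idea of resource-adaptive code can be illustrated in miniature: a program that probes its environment at run time and falls back gracefully when an expected resource disappears or changes. A toy sketch of that pattern follows; the “fast_codec” module name is hypothetical, and none of this is DARPA’s design.

# Toy illustration of resource-adaptive behavior: probe for a preferred
# capability at run time and fall back when the ecosystem changes.
# The "fast_codec" module is a made-up optional dependency; the pattern,
# not the names, is the point.
import importlib
import json

def load_serializer():
    try:
        return importlib.import_module("fast_codec")  # hypothetical optional module
    except ImportError:
        return json  # standard-library fallback that is always available

serializer = load_serializer()
print(serializer.dumps({"status": "ok"}))

BRASS, of course, is after something far more ambitious: discovering and applying such adaptations automatically, over decades, rather than leaving developers to hand-code every fallback.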

DARPA is obviously intent on trying to wrestle this issue of software complexity to the ground. A few months ago it kicked off its Mining and Understanding Software Enclaves (MUSE) program, which is aimed at improving the reliability of mission-critical systems and reducing the vulnerabilities of these large and complex programs to cyber threats. Late last year it outlined another program called Transparent Computing, which intends to provide “high-fidelity” visibility into the interactions of software components, with the goal of better understanding how modern computer systems do what they do.

DARPA has a reputation for blue-sky thinking, which goes along with its mandate of tackling high-risk, high-reward problems. It's not backing down from that in this new line of attack on complexity, since it describes the BRASS program as a way to create software systems that would remain robust and functional “in excess of 100 years.”

When it comes to software, however, it does have a decent track record. After all, in 1973 it kicked off a program to figure out how to link different packet-switched networks. That work produced the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which the ARPANET adopted on its way to becoming the forerunner of today’s Internet ... and everyone knows what happened then.

Posted by Brian Robinson on Apr 10, 2015 at 12:19 PM


Progress toward an identity ecosystem

First, a bit of good news.

The National Institute of Standards and Technology met its March 16 deadline to produce baseline requirements for its Identity Ecosystem Framework (IDEF), the bedrock document aimed at revving up the move to more secure credentials that are interoperable across the Internet, and a big advance toward the holy grail of a single, Internet-wide sign-on for individuals.

The first version of the IDEF will be launched sometime this summer. Because it defines the overall set of interoperability standards, risk models and privacy and liability policies needed to fully describe an identity ecosystem, both government and private organizations will be able to see how their own identity efforts match up to the IDEF requirements.

The IDEF springs from the Obama administration's National Strategy for Trusted Identities in Cyberspace (NSTIC) initiative, which was launched in 2011. The intent was for the government, through NIST, to bring together the private sector, advocacy groups and government agencies to create an environment that replaces the current one, in which users must juggle many different kinds of authentication to access online services.

NIST has a rundown of the kinds of things such an identity ecosystem can be used for, and it does seem enticing when compared to today’s authentication systems. The IDEF by itself won't be enough, of course, because such an ecosystem depends on a broad level of trust among parties, and that will be a huge nut to crack.

But identity is increasingly the focus for future security platforms because, as has become obvious over the past couple of years, traditional network, data and systems protection techniques are of limited use against the focused efforts of today's more sophisticated cyber criminals. Beyond security, a strong identity solution will also act as an enabler, according to Jeremy Grant, the head of the NSTIC initiative.

“If we have easy-to-use identity solutions that enable secure and privacy-enhancing transactions, we can enable citizens to engage with government in more meaningful ways,” he wrote. “With a vibrant identity ecosystem – where citizens can use the same credential to access services at multiple sites – we can enable a wide array of new citizen-facing digital services while reducing costs and hassles for individuals and government agencies alike.”

That the trust needed to build such an ecosystem should be at the top of the list of requirements is made clear by a report from the Ponemon Institute, which looked at the use of security certificates and cryptographic keys around the world and found rampant abuse.

The survey, with over 2,300 security professionals responding, found that 58 percent of them believed their organizations needed to do better in securing certificates and keys in order to stop man-in-the-middle attacks. Over half of them didn't even know where all of their certificates and keys were located.

Over the last two years, the number of keys and certificates deployed on web servers, network appliances and cloud services grew to almost 24,000 per enterprise, the survey found. The major fears respondents listed were a “cryptopocalypse” and the misuse of mobile certificates. Attacks on keys and certificates could cost each organization at least $53 million over the next two years, Ponemon concluded, up 51 percent from its 2013 estimate.
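Getting a handle on that sprawl doesn’t require an enterprise platform on day one. Even a short script that walks a list of known endpoints and records each certificate’s subject and expiration date beats not knowing where the certificates are. Here is a minimal sketch using Python’s standard ssl module; the hostnames are placeholders, and a real inventory would cover internal services and non-HTTPS ports as well.

# Minimal certificate inventory sketch: connect to each endpoint over TLS and
# record the certificate subject and expiration date. Hostnames are placeholders.
import socket
import ssl

ENDPOINTS = ["example.com", "example.org"]  # replace with your own list of servers

context = ssl.create_default_context()
for host in ENDPOINTS:
    with socket.create_connection((host, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print(host, cert.get("subject"), "expires", cert.get("notAfter"))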

NIST has already funded four rounds of pilot programs aimed at developing the technologies needed for the identity ecosystem, for a total so far of around $30 million. The intent, according to Grant, is that by 2019 consumers “will think it's quaint” when online service providers ask them to create a new account, and that the NSTIC program office will have become “a blessed memory.”

Posted by Brian Robinson on Mar 27, 2015 at 1:32 PM


Massive OpenSSL audit hopes to squash Heartbleed-like bugs

OpenSSL is back in the news again, almost a year after it first made a splash with the revelation of the now-infamous Heartbleed bug. This time around, however, it looks like it could be a good thing.

Cryptography Services, a consultancy practice of NCC Group, is going to audit OpenSSL security under the Linux Foundation's Core Infrastructure Initiative (CII). It's billed as an independent audit, even though the CII has been instrumental over the past year in trying to right the OpenSSL ship by providing some of the money to get the beleaguered open source project full-time development help.

Heartbleed was a major shock to the cybersecurity ecosystem for several reasons: Not only is OpenSSL widely used to secure the networks and systems of both public and private organizations, but the coding mistake that created the bug apparently went undetected for roughly two years before it was discovered and patched, and no one could say for certain how many systems had been affected or what data might have been compromised.
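The class of mistake behind Heartbleed is simple to describe: the OpenSSL heartbeat handler trusted a length field supplied by the remote peer and echoed back that many bytes of memory, even when the real payload was far shorter. A simplified sketch of the pattern follows, written as illustrative Python rather than the actual OpenSSL C code.

# Simplified illustration of the Heartbleed class of bug: trusting a
# peer-supplied length and reading that many bytes out of a shared buffer.
# This is illustrative only, not the actual OpenSSL code.
process_memory = bytearray(b"HELLO" + b"secret-session-key-material")

def heartbeat_vulnerable(claimed_len):
    # Bug: no check that claimed_len matches the real payload length (5 bytes),
    # so adjacent memory is leaked back to the requester.
    return bytes(process_memory[:claimed_len])

def heartbeat_fixed(claimed_len, actual_payload_len=5):
    # Fix: bounds-check the claimed length and silently drop malformed requests.
    if claimed_len > actual_payload_len:
        return b""
    return bytes(process_memory[:claimed_len])

print(heartbeat_vulnerable(32))  # leaks the "secret" bytes beyond the payload
print(heartbeat_fixed(32))       # returns nothing for an oversized request

A one-line bounds check is all the fix amounted to, which is exactly why the bug’s long, quiet life was so unnerving.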

The crisis created by that bug fed into a broader concern about open source software, with other threats, such as the Shellshock vulnerability affecting Linux and Unix systems and a SQL injection vulnerability in the popular Drupal content management system, adding to the worries.

It’s not as if any of these major open source resources can easily be replaced. OpenSSL is reckoned to be used on up to two-thirds of existing web servers; Linux and Unix also drive many servers, and Drupal has become a reliable and flexible option for running websites, including those of the White House and other government agencies.

Open source software isn’t alone in having security holes, of course, as many users of Microsoft, Apple, Adobe, Java and other proprietary software know. But open source security is seen as suffering from the same resource that’s considered its strength, namely an army of volunteer developers. On the one hand, that leads to the innovation and fast turnaround of new features that open source users crave; on the other, it creates more opportunities for tampering and coding mistakes.

Admittedly, others think all those volunteer developers can also be a security strength, since they put that many more eyeballs on the code. However, the events of 2014 threw enough doubt on the security of open source software that both industry and government have been moved to do something to improve it, from bills aimed at securing the software supply chain to proposals for controls on the use of third-party software components.

At first glance, the Cryptography Services audit could be the most comprehensive and important of these efforts. According to the consultants who will be running it, the audit will cover a range of security concerns but will focus primarily on the Transport Layer Security stacks and on protocol flow, state transitions and memory management. The audit may be the largest effort to date to review OpenSSL, the group said, and it’s “definitely the most public.” It should help spot and fix bugs such as Heartbleed before they become the kind of problem they did last year.

Preliminary results of the audit could be out by the beginning of the summer, Cryptography Services said.

It should be eagerly anticipated, as the revelation of Heartbleed, Shellshock and other bugs hasn’t necessarily brought better security. Months after the initial announcement of Heartbleed, around half of the 500,000 servers thought to be vulnerable to the bug still had not been fixed. And the vulnerabilities keep on giving, with Cisco just one of the latest vendors to say that its products had been affected.

Posted by Brian Robinson on Mar 13, 2015 at 11:43 AM