More bad news: The bad guys are getting better

If there’s one lesson to be gained from all the security breaches and revelations of major bugs in security protocols in 2014, it’s that attackers are upping their game and finding more opportunities. That’s only reinforced by several new studies.

German security company G Data, for example, reported a huge increase in the number of new malware strains in the second half of the year -- on average, a new type was discovered every 3.75 seconds! For the year as a whole, just under six million new malware strains were seen in the wild, some 77 percent more than 2013's total.

Not all kinds of malware saw an increase. Those using backdoor vulnerabilities in software fell, for example, and worms and spyware remained relatively flat. But rootkits, while still a very small percentage of the overall number of malware, jumped more than ten-fold in the second half of the year.

Rootkits are components bundled with malware that embed the malicious part of the package deep in a system and help follow-on attacks persist by hiding the malware from the scanners and monitors now used to detect it.

Not surprisingly, malware developers are mainly targeting the ubiquitous Microsoft platforms, with malware programmed as .NET applications continuing to rise. Overall, new variants for Windows platforms made up 99.9 percent of the new malware variants.

More problems could arise with Microsoft’s withdrawal of support for Windows XP in April last year, G Data said, because systems still using this operating system are “unprotected against attacks on existing or newly discovered security holes going forward.”

Akamai Technologies' most recent State of the Internet report similarly showed that distributed denial of service attacks more than doubled in the first quarter of 2015 compared with the first quarter of 2014, and rose more than 35 percent over the final quarter of 2014.

DDoS attacks may not be such a big deal for the public sector, which receives only around two percent of the total. But Akamai noted a potentially dangerous trend in the 2015 attacks, with attacks peaking at 100 Gbps or more making up a significantly bigger share of the total. That suggests attackers have been developing better ways to maximize the impact of their work.

At the rate attacks are progressing, Akamai said, security researchers are concerned about what attackers may be able to accomplish by this time next year. Add to that the fact that employing current attack techniques “has not required much skill,” and even relatively inexperienced attackers could be capable of causing major damage as more potent tools enter the picture and attack bandwidth increases.

And what, then, to make of the recent news that the Defense Department is going to take a “no holds barred” approach with users who threaten security with sloppy cyber habits? Bad cyber hygiene “is just eating our shorts,” according to David Cotton, deputy CIO for Information Enterprise at the Pentagon.

Users will be given a very short time to comply with DOD password-security policies or to change behavior that invites phishing attacks while using third-party social media accounts. The Pentagon is also pushing vendors to come up with more timely patches for security vulnerabilities, though recent research also points to the need to make sure patches are updated on all hosts at the same time.

The DOD, along with the intelligence agencies, is considered to be better at security than most other parts of the government, so it’s a little startling to read that the Pentagon’s crackdown is aimed at giving department leadership “a consolidated view of basic network vulnerabilities.”

Isn’t this supposed to be the very first thing organizations do when assessing security needs? And if the DOD doesn’t even have this bit of the puzzle sorted out, how is it ever going to successfully defend against the threats indicated by the G Data and Akamai reports?

Perhaps it’s finally time for government organizations to give up on security that is user focused. The Cloud Security Alliance’s “Dark Cloud” project could be one way of doing that.

Posted by Brian Robinson on May 22, 2015 at 8:37 AM


Identity as a Service

The new perimeter and the rise of IDaaS

Identity management has been a major focus in security for a long time, and in government that stretches at least as far back as the implementation of HSPD-12 in 2005. The Obama administration ratcheted the effort even higher in 2012 when it released the National Strategy for Trusted Identities in Cyberspace (NSTIC).

Strong identity solutions have become even more vital following the rash of high-profile breaches of both public and private industry sites last year. An executive order from President Barack Obama duly followed late last year, requiring agencies to cut down on identity-related crimes by issuing credentials with stronger security.

And identity will become even more of an issue as agencies finally start moving more of their IT needs to the cloud. Critical data will stay behind agency firewalls in private clouds, but other services and applications will migrate to the public cloud. And “extending an organization’s identity services into the cloud is a necessary prerequisite for strategic use of on-demand computing resources,” according to the Cloud Security Alliance.

That’s easier said than done. Agencies are tightly wedded to their onsite identity and access management (IAM) systems, which generally use Active Directory (AD) and Lightweight Directory Access Protocol (LDAP) and over time have become shaped by the individual policies and specific needs of agencies. What’s needed is federated identity management for hybrid clouds that allows agencies to extend these AD/LDAP systems into the cloud.
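How that extension looks in practice varies by agency, but the on-premises half of the equation is usually an AD or LDAP lookup like the one sketched below in Python, using the ldap3 library. The hostname, service account and directory layout are placeholders rather than any real agency's configuration; a federation or synchronization layer would sit on top of a query like this.

```python
# A minimal sketch of the on-premises side of federation: query the existing
# AD/LDAP directory for the attributes a cloud identity service would sync or
# federate against. Host, credentials and DNs below are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://ad.example.agency.gov", get_info=ALL)  # placeholder host
conn = Connection(server,
                  user="CN=svc-sync,OU=Service,DC=example,DC=gov",  # placeholder account
                  password="...",
                  auto_bind=True)

# Pull the attributes a cloud IDaaS provider typically maps: the user principal
# name and group memberships.
conn.search(search_base="DC=example,DC=gov",
            search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
            attributes=["userPrincipalName", "memberOf"])

for entry in conn.entries:
    print(entry.userPrincipalName.value, entry.memberOf.values)
```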

Cue the rise of identity-as-a-service (IDaaS). It’s a generic term that, according to CSA, covers a number of services needed for an identity ecosystem, such as policy enforcement points, policy decision points and policy access points, as well as related services that provide entities with identities and reputation.
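To make those terms a little more concrete, here is a toy sketch of how a policy decision point and a policy enforcement point divide the work. It is my own simplification for illustration, not CSA's reference architecture, and the identities, resources and policy data are invented.

```python
# Toy illustration: a policy decision point (PDP) evaluates the rule, and a
# policy enforcement point (PEP) allows or blocks the request based on that
# decision. All names and policy entries are made up.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject: str    # authenticated identity, e.g. asserted by the IDaaS provider
    resource: str
    action: str

def policy_decision_point(req: AccessRequest, policy: dict) -> bool:
    """Return True (permit) if the policy grants the action on the resource."""
    allowed = policy.get(req.subject, {}).get(req.resource, set())
    return req.action in allowed

def policy_enforcement_point(req: AccessRequest, policy: dict) -> str:
    # The PEP sits in front of the application and simply enforces the PDP's answer.
    return "permit" if policy_decision_point(req, policy) else "deny"

policy = {"jdoe@example.gov": {"payroll-app": {"read"}}}
print(policy_enforcement_point(AccessRequest("jdoe@example.gov", "payroll-app", "read"), policy))
print(policy_enforcement_point(AccessRequest("jdoe@example.gov", "payroll-app", "delete"), policy))
```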

Cloud providers such as Microsoft and Amazon already offer cloud-based directories that sync with on-premises systems. But Gartner expects full-blown IDaaS to make up a quarter of the total IAM market in 2015, versus just four percent in 2011, as cloud computing comes to account for the bulk of new IT spending by 2016.

That’s driving development of new, native cloud-based identity solutions. Centrify, for example, which already has a fair number of government agencies as customers for its current “cloud savvy” identity management product, recently launched its Centrify Privilege Service, which it claims is the first purely cloud-based, privileged identity solution.

Privileged accounts in particular have become a favorite target of cyberattacks since, once gained, they allow bad guys almost unlimited freedom to roam across an organization’s systems and steal data or disrupt operations. Centrify said CPS offers a way to manage and secure privileged accounts that legacy IAM cannot do in hybrid IT environments.

However, the company still doesn’t expect it to be an easy sell, particularly in government. Though fears about the security of cloud solutions are easing, and budget pressures make the cloud an increasingly attractive answer, agencies are still doubtful about giving up key assets such as privileged accounts to the cloud.

Centrify chief marketing officer Mark Weiner said that, so far, seven or eight agencies have begun playing with CPS to see what it might do for them, “though not the largest military or intelligence agencies.”

Parallel to the growing demand for IDaaS is the use of the phrase “identity is the new perimeter” to describe the brave new world of IT. Again, it’s something that was coined years ago but, as mobile devices proliferate and the cloud becomes the primary way of delivering apps and services, the former hard edge of the network is becoming much fuzzier.

Single logons that grant users access across these soft-edged enterprises will become ubiquitous as agencies work toward business efficiency. Making sure the identities used for that access stay secure will be key.
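For a sense of what that looks like at the code level, below is a minimal sketch of a service validating a signed single sign-on token before trusting the identity it carries. It uses a shared-secret HS256 token from the PyJWT library purely for illustration; a real deployment would verify a token signed with the identity provider's private key, and the secret, audience and user here are invented.

```python
# A sketch of the check a service behind the "soft" perimeter makes: accept the
# asserted identity only if the single sign-on token's signature and audience
# verify. HS256 with a demo secret is used here only so the example runs.
import jwt  # PyJWT

SECRET = "demo-only-secret"  # placeholder; never hard-code real keys

def issue_token(user: str) -> str:
    # The identity provider signs a short assertion about the user.
    return jwt.encode({"sub": user, "aud": "payroll-app"}, SECRET, algorithm="HS256")

def accept_request(token: str) -> str:
    # The relying service trusts the identity only after the signature and
    # audience claims check out; a bad token raises an exception instead.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"], audience="payroll-app")
    return claims["sub"]

print(accept_request(issue_token("jdoe@example.gov")))
```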

Posted by Brian Robinson on May 08, 2015 at 10:14 AM


Verizon breach report

Verizon breach report: bad news and worse news

The trouble with reports such as Verizon’s deeply detailed 2015 Data Breach Investigations Report is that they make for such interesting reading, even while they effectively depress the hell out of everybody.

The very first element in the report talks about “victim demographics,” and carries a graphic that depicts in red where incidents and breaches happened around the world. The whole of North America, Australia, Russia, most of Europe and Asia, and a good part of Latin America are a deep crimson. The only place not well colored is Africa, but that’s probably due more to the fact that few of the organizations reporting breaches to Verizon actually operate there.

But then there are the interesting bits. The public sector once again seems to be the major casualty when it comes to data breaches, with over 50,000 security incidents tallied during the year, far more than other sectors reported. However, as Verizon itself points out, that’s misleading, since there are many government incident response teams participating in the survey and they handle a high volume of incidents, many of which fall under mandatory reporting regulations.

The number of confirmed data losses probably paints a more accurate picture. With over 300, the public sector had the highest number (other than the “unknown” sector), but it wasn’t that far ahead of the financial services industry. Manufacturing took the third-place slot.

Depression returns when the report looks at some of the threats and how successful they are. How long, for example, have we been told to regard all unsolicited offers online as suspicious? Social engineering has for years been attackers' best way to get inside organizations, and phishing once again tops the Verizon threat list. For the past two years, phishing has been a part of more than two-thirds of the cyber-espionage pattern Verizon tracks.

And no wonder, since the ROI for the bad guys is apparently so good. Some 23 percent of the recipients of these emails open them, according to the report, and 11 percent click on the attachments. The numbers, Verizon said, show that a campaign of just 10 emails yields a greater than 90 percent chance that at least one person will fall prey to the phishers. A test conducted for the report showed that nearly half of users open emails and click on links within the first hour of one of these phishing campaigns.
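It is easy to check Verizon's arithmetic. Assuming each recipient acts independently (my assumption, not the report's), the chance that at least one of n targets takes the bait is one minus the chance that every one of them resists, as the short sketch below shows using the open and click rates quoted above.

```python
# Back-of-the-envelope check of the phishing numbers. The 23 percent open rate
# and 11 percent click rate come from the report; the independence assumption
# behind the formula is mine.

def p_at_least_one(success_rate: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent tries succeeds."""
    return 1.0 - (1.0 - success_rate) ** attempts

open_rate = 0.23   # recipients who open the phishing email
click_rate = 0.11  # recipients who click the attachment or link

print(f"10 emails, at least one opened:  {p_at_least_one(open_rate, 10):.1%}")
print(f"10 emails, at least one clicked: {p_at_least_one(click_rate, 10):.1%}")
# With the 23 percent open rate, 10 emails already gives roughly a 93 percent
# chance that someone takes the bait -- consistent with the report's ">90 percent" claim.
```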

A separate study, sponsored by KnowBe4, confirms that email spear phishing is the number one source of data breaches, followed by human error. Education of users is seen as the best solution, and every government agency says it has programs meant to bring users up to speed on the dangers, but that depends on what your definition of “program” is.

The organizational approach to user education is a big part of the problem, according to KnowBe4 chief executive Stu Sjouwerman. For compliance reasons, he said, “too many companies still rely on a once-a-year ... ‘death by PowerPoint’ training approach, or just rely on their filters, do no training and see no change in behavior.”

And then there are vulnerabilities. Notice all of those notifications you get about upgrades to operating systems and apps? Many of them involve security upgrades to patch vulnerabilities that have been found, and it’s the same for enterprise systems. The past year seemed to surface an especially large number of vulnerabilities, including three involving the widely used OpenSSL security library alone, one of which resulted in the now infamous Heartbleed bug.

According to a study of the exploit data reported for the Verizon report, fully 99.9 percent of the exploited vulnerabilities were still being compromised more than a year after they were reported. The lesson? Don’t just patch in response to announced “critical” vulnerabilities, but patch often and completely. The report “demonstrates the need for all those stinking patches on all your stinking systems,” its authors said.
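What "patch often and completely" might look like in practice is a recurring check that no host is still exposed to a vulnerability long after it was published. The sketch below is a hypothetical illustration of that idea; the inventory data, host names and 30-day threshold are mine, not the report's.

```python
# A sketch of a "patch everything, everywhere" check: flag any host a scanner
# still reports as exposed to a CVE more than MAX_LAG_DAYS after publication.
# The inventory below is made-up data; only the Heartbleed CVE and its
# publication date are real.
from datetime import date

MAX_LAG_DAYS = 30

cve_published = {"CVE-2014-0160": date(2014, 4, 7)}   # Heartbleed

# host -> CVEs its vulnerability scanner still reports as unpatched (hypothetical)
unpatched = {"web-01": ["CVE-2014-0160"], "db-02": []}

def overdue_hosts(today: date) -> list[tuple[str, str, int]]:
    findings = []
    for host, cves in unpatched.items():
        for cve in cves:
            lag = (today - cve_published[cve]).days
            if lag > MAX_LAG_DAYS:
                findings.append((host, cve, lag))
    return findings

for host, cve, lag in overdue_hosts(date(2015, 4, 24)):
    print(f"{host}: {cve} still unpatched {lag} days after publication")
```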

The Verizon report wasn’t a complete downer, though. It looked at the security problems surrounding mobile devices, for example, which have been a focus of government for some years and have been a major reason for the anemic uptake of bring your own device programs in agencies. But a forensic examination of the breach data surrounding mobile showed that less than 1 percent of smartphones used on the Verizon Wireless system — the biggest in the U.S. — were infected with malware. A minuscule number of the devices carried what Verizon called “high-grade” malicious code.

Given the detail in the report, just about every organization can get something from it, though coming up with an overall conclusion about the state of cyber security is tougher. For the report’s authors, however, the practical solutions are tried and true, if a bit tedious.

“Don’t sleep on basic, boring security practices,” they say. “Stop rolling your eyes. If you feel you have met minimum-security standards and continue to validate this level of information, then bully for you! It is, however, still apparent that not all organizations are getting the essentials right.”

That’s probably an understatement.

Posted by Brian Robinson on Apr 24, 2015 at 12:45 PM


DARPA’s strategy for 100-year software

An axiom of systems design is that the more complex the system, the harder it is to understand and, therefore, the harder it is to manage. When it comes to cybersecurity, that principle is what bad actors rely on to get their malware through enterprise defenses -- where it can then squirrel away vital information or damage essential systems.

The complexity is partially caused by the fact that modern software simply does not have the shelf life it used to. Back in the day, software was not expected to change much over a number of years, making it relatively easy to maintain.

Those days are long gone. The pace of innovation today means there is almost constant churn in IT technologies, with the introduction of new processors and devices that require significant changes to operating systems, application software, application programming interfaces (APIs), to mention a few. Use cases for these technologies can also change quickly, which means more modifications to software and system configurations are required.

Now consider future scenarios. With distributed devices and networking driving the Internet of Things, there may be no central point of intelligence.  We may not know what changes are being made to what systems, when, or by whom. How is that a good idea?

What's needed is a new way of looking at software development, aimed at ensuring applications can continue to function as expected in this rapidly changing environment. That’s what the Defense Advanced Research Projects Agency is looking for in a program it calls BRASS, for Building Resource Adaptive Software Systems.

Without some way of ensuring long-term functionality, DARPA warns, it’s not just software running websites or home thermostats that is at risk. “The inability to seamlessly adapt to new operating conditions negatively impacts economic productivity, hampers the development of resilient and secure cyber-infrastructure, raises the long-term risk that access to important digital content will be lost as the software that generates and interprets that content becomes outdated and complicates the construction of autonomous mission-critical programs.”

A new approach to building and maintaining software for the long term will lead to “significant improvements in software resilience, correctness and maintainability,” DARPA maintains. BRASS aims to automate discovery of relationships between computations that happen in IT ecosystems as well as the resources they need and discover techniques that can be used to dynamically incorporate algorithms constructed as adaptations to these ecosystem changes.
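BRASS aims to automate that kind of adaptation, but the underlying idea can be illustrated with a very ordinary piece of hand-written code: the application declares the capability it needs, and an adapter layer picks whichever concrete implementation the current ecosystem offers. The sketch below is my own simplified illustration, not anything from the DARPA program, and the registry and backend names are invented.

```python
# Hand-rolled illustration of resource-adaptive software: the application codes
# against an abstract capability, and a registry supplies whatever concrete
# backend exists today. As backends come and go over the years, only the
# registry changes, not the application logic.
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """The capability the application depends on, independent of any one library."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str | None: ...

class InMemoryStore(KeyValueStore):
    # Stand-in for whatever storage backend today's ecosystem provides.
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

_REGISTRY = {"memory": InMemoryStore}  # hypothetical backend registry

def acquire_store(preferences: list[str]) -> KeyValueStore:
    # Adaptation point: pick the first preferred backend that actually exists.
    for name in preferences:
        if name in _REGISTRY:
            return _REGISTRY[name]()
    raise RuntimeError("no usable key-value backend found")

store = acquire_store(["fancy-future-db", "memory"])
store.put("greeting", "hello")
print(store.get("greeting"))
```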

DARPA is obviously intent on trying to wrestle this issue of software complexity to the ground. A few months ago it kicked off its Mining and Understanding Software Enclaves (MUSE) program, which is aimed at improving the reliability of mission-critical systems and reducing the vulnerabilities of these large and complex programs to cyber threats. Late last year it outlined another program called Transparent Computing, which intends to provide “high-fidelity” visibility into the interactions of software components, with the goal of better understanding how modern computer systems do what they do.

DARPA has a reputation for blue-sky thinking, which goes along with its mandate of tackling high-risk, high-reward problems. It's not backing down from that in this new line of attack on complexity, since it describes the BRASS program as a way to create software systems that would remain robust and functional “in excess of 100 years.”

When it comes to software, however, it does have a decent track record. After all, in 1973 it kicked off a program to see how it could link different packet-switched networks. That produced the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which turned the ARPANET into the forerunner of the Internet.... and everyone knows what happened then.

Posted by Brian Robinson on Apr 10, 2015 at 12:19 PM