Identity as a Service

The new perimeter and the rise of IDaaS

Identity management has long been a major focus in security, and in government that focus stretches at least as far back as the implementation of HSPD-12 in 2005. The Obama administration ratcheted the effort even higher in 2011, when it released the National Strategy for Trusted Identities in Cyberspace (NSTIC).

Strong identity solutions have become even more vital following the rash of high-profile breaches of both public- and private-sector sites last year. An executive order from President Barack Obama duly followed late last year, requiring agencies to cut down on identity-related crime by issuing credentials with stronger security.

And identity will become even more of an issue as agencies finally start moving more of their IT needs to the cloud. Critical data will stay behind agency firewalls in private clouds, but other services and applications will migrate to the public cloud. And “extending an organization’s identity services into the cloud is a necessary prerequisite for strategic use of on-demand computing resources,” according to the Cloud Security Alliance.

That’s easier said than done. Agencies are tightly wedded to their onsite identity and access management (IAM) systems, which generally rely on Active Directory (AD) and the Lightweight Directory Access Protocol (LDAP) and have, over time, been shaped by each agency’s individual policies and specific needs. What’s needed is federated identity management for hybrid clouds that lets agencies extend these AD/LDAP systems into the cloud.
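To make that concrete, here is a minimal, hypothetical sketch of what extending an on-premises directory into the cloud can look like at the code level: an application checks a user’s credentials against AD over LDAP, then issues a short-lived signed token that cloud-hosted services can verify without ever querying the directory themselves. The ldap3 and PyJWT libraries, the server name, the domain and the claim layout are all illustrative assumptions, not any agency’s or vendor’s actual implementation.

```python
# Hypothetical sketch: authenticate against an on-premises AD over LDAP,
# then issue a signed token that cloud-hosted services can trust.
# Server names, domain, claims and secrets are placeholders.
import datetime

import jwt                                      # PyJWT
from ldap3 import Server, Connection, ALL

LDAP_SERVER = "ldaps://ad.example.agency"       # placeholder domain controller
TOKEN_SIGNING_KEY = "replace-with-real-secret"  # shared with the cloud relying party


def authenticate(username: str, password: str) -> bool:
    """Bind to AD with the user's own credentials; a successful bind means they authenticated."""
    server = Server(LDAP_SERVER, get_info=ALL)
    conn = Connection(server, user=f"EXAMPLE\\{username}", password=password)
    return conn.bind()


def issue_cloud_token(username: str) -> str:
    """Issue a short-lived token that cloud applications accept instead of querying AD."""
    claims = {
        "sub": username,
        "iss": "onprem-idp.example.agency",
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=15),
    }
    return jwt.encode(claims, TOKEN_SIGNING_KEY, algorithm="HS256")


if authenticate("jdoe", "correct horse battery staple"):
    print(issue_cloud_token("jdoe"))
```

In production this pattern is usually carried by SAML assertions or OpenID Connect tokens issued by a federation service, but the division of labor is the same: the credential check stays on premises, and only a signed assertion travels to the cloud.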

Cue the rise of identity-as-a-service (IDaaS). It’s a generic term that, according to CSA, covers a number of services needed for an identity ecosystem, such as policy enforcement points, policy decision points and policy access points, as well as related services that provide entities with identity and reputation.
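Those terms describe a simple division of labor: a policy decision point evaluates a request against policy, and a policy enforcement point sitting in front of the resource acts on that decision. The sketch below is purely illustrative, with invented attributes and rules, but it shows how the pieces relate.

```python
# Illustrative-only sketch of a policy decision point (PDP) and a
# policy enforcement point (PEP); attributes and rules are invented.

POLICY = [
    # (required clearance, resource prefix)
    ("privileged", "/admin/"),
    ("standard", "/apps/"),
]


def decide(user_attrs: dict, resource: str) -> bool:
    """PDP: return True if any policy rule allows this user to reach the resource."""
    for required_clearance, prefix in POLICY:
        if resource.startswith(prefix) and user_attrs.get("clearance") == required_clearance:
            return True
    return False


def enforce(user_attrs: dict, resource: str) -> str:
    """PEP: sits in front of the resource and acts on the PDP's decision."""
    if not decide(user_attrs, resource):
        raise PermissionError(f"{user_attrs.get('id')} denied access to {resource}")
    return f"access granted to {resource}"


print(enforce({"id": "jdoe", "clearance": "standard"}, "/apps/payroll"))
```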

Cloud providers such as Microsoft and Amazon already offer cloud-based directories that sync with on-premises systems. But Gartner expects full-blown IDaaS to make up a quarter of the total IAM market in 2015, versus just 4 percent in 2011, as spending on cloud computing becomes the bulk of all IT investment by 2016.

That’s driving development of new, native cloud-based identity solutions. Centrify, for example, which already counts a fair number of government agencies as customers for its current “cloud savvy” identity management product, recently launched its Centrify Privilege Service, which it claims is the first purely cloud-based privileged identity solution.

Privileged accounts in particular have become a favorite target of cyberattacks since, once gained, they allow bad guys almost unlimited freedom to roam across an organization’s systems and steal data or disrupt operations. Centrify said CPS offers a way to manage and secure privileged accounts in hybrid IT environments that legacy IAM systems cannot match.

However, the company still doesn’t expect it to be an easy sell, particularly in government. Though fears about the security of cloud solutions are easing, and budget pressures make the cloud an increasingly attractive answer, agencies remain doubtful about handing key assets such as privileged accounts over to the cloud.

Centrify chief marketing officer Mark Weiner said that, so far, seven or eight agencies have begun playing with CPS to see what it might do for them, “though not the largest military or intelligence agencies.”

Parallel to the growing demand for IDaaS is the use of the phrase “identity is the new perimeter” to describe the brave new world of IT. Again, it’s something that was coined years ago but, as mobile devices proliferate and the cloud becomes the primary way of delivering apps and services, the former hard edge of the network is becoming much fuzzier.

Single logons that grant users access across these soft-edged enterprises will become ubiquitous as agencies work toward business efficiency. Making sure the identities used for that access stay secure will be key.

Posted by Brian Robinson on May 08, 2015 at 10:14 AM



Verizon breach report: bad news and worse news

The trouble with reports such as Verizon’s deeply detailed 2015 Data Breach Investigations Report is that they make for such interesting reading, even while they effectively depress the hell out of everybody.

The very first element in the report talks about “victim demographics,” and carries a graphic that depicts in red where incidents and breaches happened around the world. The whole of North America, Australia, Russia, most of Europe and Asia, and a good part of Latin America are a deep crimson. The only place not well colored is Africa, but that’s probably due more to the fact that few of the organizations reporting breaches to Verizon actually operate there.

But then there are the interesting bits. The public sector once again seems to be the major casualty when it comes to data breaches, with over 50,000 security incidents tallied during the year, far more than other sectors reported. However, as Verizon itself points out, that’s misleading, since there are many government incident response teams participating in the survey and they handle a high volume of incidents, many of which fall under mandatory reporting regulations.

The number of confirmed data losses probably paints a more accurate picture. With over 300, the public sector had the highest number (other than the “unknown” sector), but it wasn’t that far ahead of the financial services industry. Manufacturing took the third-place slot.

Depression returns when the report looks at some of the threats and how successful they are. How long, for example, have we been told to regard all unsolicited offers online as suspicious? Social engineering has for years been attackers' best way to get inside organizations, and phishing once again tops the Verizon threat list. For the past two years, phishing has been a part of more than two-thirds of the cyber-espionage pattern Verizon tracks.

And no wonder, since the ROI for the bad guys is apparently so good. Some 23 percent of the recipients of these emails open them, according to the report, and 11 percent click on the attachments. The numbers, Verizon said, show that a campaign of just 10 emails yields a greater than 90 percent chance that at least one person will fall prey to the phishers. A test conducted for the report showed that nearly half of users open emails and click on links within the first hour of one of these phishing campaigns.
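That “greater than 90 percent” figure follows directly from the reported open rate, assuming the 23 percent applies independently to each of the 10 recipients. A quick check of the arithmetic:

```python
# Probability that at least one of 10 recipients opens a phishing email,
# assuming the 23 percent open rate applies independently to each recipient.
open_rate = 0.23
recipients = 10

p_at_least_one_open = 1 - (1 - open_rate) ** recipients
print(f"{p_at_least_one_open:.1%}")   # roughly 93 percent, i.e. greater than 90

# The same calculation using the 11 percent attachment click rate:
p_at_least_one_click = 1 - (1 - 0.11) ** recipients
print(f"{p_at_least_one_click:.1%}")  # roughly 69 percent
```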

A separate study, sponsored by KnowBe4, confirms that email spear phishing is the number one source of data breaches, with human error next on the list. User education is seen as the best solution, and every government agency says it has programs meant to bring users up to speed on the dangers, but that depends on what your definition of “program” is.

The organizational approach to user education is a big part of the problem, according to KnowBe4 chief executive Stu Sjouwerman. For compliance reasons, he said, “too many companies still rely on a once-a-year ... ‘death by PowerPoint’ training approach, or just rely on their filters, do no training and see no change in behavior.”

And then there are vulnerabilities. Notice all of those notifications you get about upgrades to operating systems and apps? Many of them involve security upgrades to patch vulnerabilities that have been found, and it’s the same for enterprise systems. The past year surfaced an especially large number of vulnerabilities, including three in the widely used OpenSSL security library alone, among them the now infamous Heartbleed bug.

According to a study of the exploit data gathered for the Verizon report, fully 99.9 percent of the exploited vulnerabilities were still being compromised more than a year after they were publicly disclosed. The lesson? Don’t just patch in response to announced “critical” vulnerabilities, but patch often and completely. The report “demonstrates the need for all those stinking patches on all your stinking systems,” its authors said.

The Verizon report wasn’t a complete downer, though. It looked at the security problems surrounding mobile devices, for example, which have been a focus of government for some years and have been a major reason for the anemic uptake of bring your own device programs in agencies. But a forensic examination of the breach data surrounding mobile showed that less than 1 percent of smartphones used on the Verizon Wireless system — the biggest in the U.S. — were infected with malware. A minuscule number of the devices carried what Verizon called “high-grade” malicious code.

Given the detail in the report, just about every organization can get something from it, though coming up with an overall conclusion about the state of cyber security is tougher. For the report’s authors, however, the practical solutions are tried and true, if a bit tedious.

“Don’t sleep on basic, boring security practices,” they say. “Stop rolling your eyes. If you feel you have met minimum-security standards and continue to validate this level of information, then bully for you! It is, however, still apparent that not all organizations are getting the essentials right.”

That’s probably an understatement.

Posted by Brian Robinson on Apr 24, 2015 at 12:45 PM


DARPA’s strategy for 100-year software

An axiom of systems design is that the more complex the system, the harder it is to understand and, therefore, the harder it is to manage. When it comes to cybersecurity, that principle is what bad actors rely on to get their malware through enterprise defenses -- where it can then squirrel away vital information or damage essential systems.

The complexity is partially caused by the fact that modern software simply does not have the shelf life it used to. Back in the day, software was not expected to change much over a number of years, which made it relatively easy to maintain.

Those days are long gone. The pace of innovation today means there is almost constant churn in IT, as new processors and devices require significant changes to operating systems, application software and application programming interfaces (APIs), to mention just a few. Use cases for these technologies can also change quickly, which means more modifications to software and system configurations are required.

Now consider future scenarios. With distributed devices and networking driving the Internet of Things, there may be no central point of intelligence.  We may not know what changes are being made to what systems, when, or by whom. How is that a good idea?

What's needed is a new way of looking at software development, aimed at ensuring applications can continue to function as expected in this rapidly changing environment. That’s what the Defense Advanced Research Projects Agency is looking for in a program it calls BRASS, for Building Resource Adaptive Software Systems.

Without some way of ensuring long-term functionality, DARPA warns, it’s not just software running websites or home thermostats that is at risk. “The inability to seamlessly adapt to new operating conditions negatively impacts economic productivity, hampers the development of resilient and secure cyber-infrastructure, raises the long-term risk that access to important digital content will be lost as the software that generates and interprets that content becomes outdated and complicates the construction of autonomous mission-critical programs.”

A new approach to building and maintaining software for the long term will lead to “significant improvements in software resilience, correctness and maintainability,” DARPA maintains. BRASS aims to automate the discovery of relationships between the computations that happen in IT ecosystems and the resources they need, and to find techniques for dynamically incorporating algorithms built as adaptations to changes in those ecosystems.
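BRASS is a research program rather than a library, so there is no BRASS code to show, but the underlying idea of software that probes its ecosystem and adapts which algorithm it runs can be sketched in miniature. The deliberately simplified Python illustration below picks whichever JSON implementation the current environment happens to provide; it is an analogy for resource-adaptive behavior, not a depiction of how BRASS itself works.

```python
# Deliberately simplified illustration of resource-adaptive behavior:
# probe the environment at run time, then select an implementation that fits it.
import importlib


def pick_json_codec():
    """Prefer a faster third-party JSON library if the ecosystem provides one,
    otherwise fall back to the standard library."""
    for candidate in ("orjson", "ujson", "json"):
        try:
            return importlib.import_module(candidate)
        except ImportError:
            continue
    raise RuntimeError("no JSON implementation available")


codec = pick_json_codec()
print(codec.__name__, codec.dumps({"adaptive": True}))
```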

DARPA is obviously intent on trying to wrestle this issue of software complexity to the ground. A few months ago it kicked off its Mining and Understanding Software Enclaves (MUSE) program, which is aimed at improving the reliability of mission-critical systems and reducing the vulnerabilities of these large and complex programs to cyber threats. Late last year it outlined another program called Transparent Computing, which intends to provide “high-fidelity” visibility into the interactions of software components, with the goal of better understanding how modern computer systems do what they do.

DARPA has a reputation for blue-sky thinking, which goes along with its mandate of tackling high-risk, high-reward problems. It's not backing down from that in this new line of attack on complexity, since it describes the BRASS program as a way to create software systems that would remain robust and functional “in excess of 100 years.”

When it comes to software, however, it does have a decent track record. After all, in 1973 it kicked off a program to see how it could link different packet-switched networks. That work produced the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were adopted by the ARPANET, the forerunner of the Internet ... and everyone knows what happened then.

Posted by Brian Robinson on Apr 10, 2015 at 12:19 PM



Progress toward an identity ecosystem

First, a bit of good news.

The National Institute of Standards and Technology met its March 16 deadline to produce baseline requirements for its Identity Ecosystem Framework (IDEF), the bedrock document aimed at revving up a move to more secure credentials that are interoperable across the Internet and a big advance toward the holy grail of a single, Internetwide sign-on for individuals.

The first version of the IDEF will be launched sometime this summer. Because it defines the overall set of interoperability standards, risk models, and privacy and liability policies needed to fully describe an identity ecosystem, both government and private organizations will be able to see how their identity efforts measure up to the IDEF requirements.

The IDEF springs from the Obama administration's National Strategy for Trusted Identities in Cyberspace (NSTIC) initiative, which was launched in 2011. The intent was for the government, through NIST, to bring together the private sector, advocacy groups and government agencies to create an environment that replaces the current one, in which users must rely on many different kinds of authentication to access online services.

NIST has a rundown of the kinds of things such an identity ecosystem can be used for, and it does seem enticing when compared to today’s authentication systems. The IDEF by itself won't be enough, of course, because such an ecosystem depends on a broad level of trust among parties, and that will be a huge nut to crack.

But identity is increasingly the focus for future security platforms because, as has become obvious over the past couple of years, traditional network, data and systems protection techniques are of limited use against the focused efforts of today's more sophisticated cyber criminals. Beyond security, a strong identity solution will also act as an enabler, according to Jeremy Grant, the head of the NSTIC initiative.

“If we have easy-to-use identity solutions that enable secure and privacy-enhancing transactions, we can enable citizens to engage with government in more meaningful ways,” he wrote. “With a vibrant identity ecosystem – where citizens can use the same credential to access services at multiple sites – we can enable a wide array of new citizen-facing digital services while reducing costs and hassles for individuals and government agencies alike.”

That the trust needed to build that ecosystem should be at the top of the list of requirements is made clearer by a report from the Ponemon Institute, which looked at the use of security certificates and cryptographic keys around the world and found rampant abuse.

The survey, with over 2,300 security professionals responding, found that 58 percent of them believed their organizations needed to do better in securing certificates and keys in order to stop man-in-the-middle attacks. Over half of them didn't even know where all of their certificates and keys were located.

Over the last two years, the number of keys and certificates deployed on web servers, network appliances and cloud services grew to almost 24,000 per enterprise, the survey found. The major fears respondents listed were of a “cryptopocalypse” and misuse of mobile certificates. All of this could cost organizations at least $53 million over the next couple of years, Ponemon concluded, up 51 percent from 2013.

NIST has already funded four rounds of pilot programs aimed at developing the technologies needed for the identity ecosystem, for a total so far of around $30 million. The intent, according to Grant, is that by 2019 consumers “will think it's quaint” when online service providers ask them to create a new account, and that the NSTIC program office will have become “a blessed memory.”

Posted by Brian Robinson on Mar 27, 2015 at 1:32 PM