
NIST's how-to for prioritizing risk

Some of the hardest parts of a security professional’s job are identifying which elements in an enterprise infrastructure pose the greatest risk and keeping that infrastructure secure going forward. The underlying constraint in these considerations is how to do this with a less-than-infinite budget.

In many organizations, and certainly for most of government, that comes down to keeping systems up and running when at least some part of that infrastructure depends on legacy systems. Agencies can’t replace all of the aging machines and applications, so where should they invest scarce dollars to boost security, while at the same time making sure they don’t introduce problems that prevent that infrastructure from functioning properly?

That’s what the National Institute of Standards and Technology’s most recent guidance on risk assessment aims to address. Unlike other cybersecurity guidance NIST has published, however, this document includes a step-by-step process that agencies can use to identify the most critical parts of an infrastructure so they can better choose what to upgrade and where to spend their (usually scarce) dollars.

NIST itself said the new guidance builds on previous publications, such as SP 800-53 Rev. 4, SP 800-160 and SP 800-161, all of which also emphasized picking out critical parts of an infrastructure, but didn’t say how to do that.

Another relevant publication, the NIST Cybersecurity Framework -- an answer to President Barack Obama’s 2013 Executive Order 13636 on “Improving Critical Infrastructure Cybersecurity” -- includes a detailed mechanism that organizations can use to better understand how to manage security risks.

The framework has become a standard document for both public- and private-sector organizations in establishing their approach to cybersecurity. In May, the Trump White House issued an executive order on strengthening federal cybersecurity that effectively made use of the NIST framework government policy.

The new NIST guide describes what it calls a “high-level criticality analysis process model,” which steps users through the components needed to reach the end point: a detailed analysis of the criticality levels of all the programs, systems, subsystems, components and subcomponents in a particular enterprise.
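To make the idea concrete, the prioritization pass that such an analysis ends in can be sketched in a few lines. This is an illustrative sketch only, not NIST’s actual process model; the component names, the 1-5 scales and the impact-times-likelihood score are all invented for the example.

```python
# Illustrative sketch: rank components by a simple criticality score.
# Each component is scored 1-5 for mission impact and 1-5 for
# likelihood of compromise; criticality is the product of the two.
components = [
    {"name": "payroll app",       "impact": 5, "likelihood": 2},
    {"name": "public web site",   "impact": 3, "likelihood": 4},
    {"name": "legacy file share", "impact": 4, "likelihood": 5},
]

def criticality(c):
    """Higher score = more critical = patch or replace first."""
    return c["impact"] * c["likelihood"]

# Sort most-critical first to get a spending priority list.
for c in sorted(components, key=criticality, reverse=True):
    print(f'{c["name"]}: criticality {criticality(c)}')
```

A real agency analysis would weigh many more factors -- dependencies, mission threads, how subcomponents inherit criticality from the systems above them -- but the output has the same shape: a ranked list of where scarce dollars matter most.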

This kind of approach will give agencies more certainty in what they buy, and it won’t upset the business logic that supports an agency and its mission. After all, even though cybersecurity has certainly risen in the list of agency priorities, the main question most IT managers ask security product vendors is how any new tool will affect the normal running of current networks and systems.

The authors of NIST's new guidance believe their approach could eliminate the debate over the return on investment of security solutions versus the long-term resilience of systems. That’s something to be hoped for, but it may be a while before agency bosses shunt aside well-established ROI measures for something as nebulous -- for now, anyway -- as resilience.

The new NIST publication does hint at the need for more active outcomes for all of the guidance -- from NIST and others -- that’s been published over the last few years. The House, for example, recently tried to push measurable metrics onto the NIST Framework through the NIST Cybersecurity Framework, Assessment and Auditing Act of 2017, which was introduced in February.

It would be a real advance if that effort produced actual metrics that could be used because it’s been notoriously hard to do that with any kind of specific security guidance. Each organization has very different needs when it comes to the application of security, so getting a general set of metrics to measure effectiveness may not be possible.

Still, the current draft of the NIST criticality guidance, which is open for comment until Aug. 18, gets halfway there. It at least promises to give users a better idea of what they have and how best to insert new security solutions and systems. That should make for a more certain and more effective acquisition process. And, who knows, it might eventually take its place alongside the NIST Cybersecurity Framework as a solid basis for government cybersecurity efforts.

Posted by Brian Robinson on Jul 24, 2017 at 10:33 AM



WannaCry: A preview of coming attacks?

The astonishing spread of the WannaCry ransomware that exploded onto the global scene on May 12 is not the work of some genius malware developers.  Rather, it is a clear example of the confluence of two trends, one that should have been strangled a long time ago and the other an inevitable result of technological progress.

Most people, if they’ve been paying attention, have noticed the recent growth in ransomware. In its 2017 Data Breach Investigations Report, Verizon said ransomware is now the fifth most common malware, compared to just the 22nd most common in 2014.

Part of the reason for that jump is the increasingly sophisticated techniques used to create the malware and share the code. The WannaCry malware apparently uses code first developed by the Lazarus Group, a shady outfit that’s been linked to some of the biggest and most effective raids on bank and finance systems around the world. The rise of ransomware-as-a-service is apparently making sophisticated malware available to even the most technically deficient criminal.

WannaCry also took advantage of a Windows exploit called EternalBlue that was developed by the National Security Agency and that attacks weaknesses in Microsoft’s SMBv1 (Server Message Block 1.0) using a backdoor tool also created by the NSA. All Windows machines still running an older version of the operating system -- Windows XP up through Windows 7 -- were vulnerable to WannaCry.

It’s not clear just how aware security professionals are, in both the public and private sectors, of the increasingly industrial nature of malware development and exploits. Malware creators are every bit as capable as their white-hat counterparts, and the infrastructure that makes malware easily obtainable by criminals is starting to mirror that of the legitimate software industry.

The other side of this picture is users’ continued foot-dragging on baseline, no-brainer security practices such as regularly patching their systems. Microsoft, for example, issued a security update for the SMBv1 vulnerability in March, but thousands of systems were still thought to be unpatched when the WannaCry ransomware was launched.

Microsoft took the unusual step of sending out an emergency custom patch for Windows XP, Windows 8 and Windows Server 2003 machines on the first day of the attack. It also suggested that users make other changes, such as blocking legacy protocols on their networks, to counter similar attacks in the future.

One thing that’s still unclear is the potential impact of the attack on the government’s own agencies -- in this case, the NSA. It developed EternalBlue as a weapon in the fight against groups hostile to the U.S., but after the tool was stolen last year along with a stash of other NSA cyber weapons, and the code eventually published, questions arose about whether the NSA was itself secure enough to be holding such potent hacking tools.

NSA officials also apparently worried about that. In a blog post, Brad Smith, Microsoft’s president and chief legal officer, said the WannaCry incident is yet another example of why the stockpiling of such things as EternalBlue, which wasn’t revealed to industry or anyone else, is such a problem.

“This is an emerging pattern in 2017,” he wrote. “We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world.”

All governments should treat this attack as a wake-up call, he said, and they must take a different approach and apply the same rules to cyber weapons as they do to weapons in the physical world.

That’s probably good advice. Up to now, cyberattacks have been non-lethal, but WannaCry showed just what real-world damage can be caused by ransomware and other types of malware. The UK’s National Health Service was one of the first and worst hit by WannaCry, and many hospitals there had to put off essential surgeries and other procedures.

With the pace of malware innovation seemingly outpacing the efforts of both public and private entities to defend against it, we must find a new way to deal with the issues malware poses. Microsoft, for example, wants a Digital Geneva Convention to govern global cybersecurity, including a requirement that governments report vulnerabilities to software vendors rather than stockpile them.

Right now, that kind of collective response is a reach, but WannaCry has certainly shown just why it’s needed.

Posted by Brian Robinson on May 17, 2017 at 12:53 PM



The road to derived mobile credentials

The effort to provide government workers who use mobile devices with personal identity verification credentials is picking up momentum, with programs in both the civilian and military sectors starting to deliver on earlier promises.

Solutions for mobile users are long overdue. As the swing away from the desktop and onto the mobile device became obvious some years ago, government agencies found themselves without any clear direction to take when it came to security. Providing the level of security that comes with smart cards, which workers can use to authenticate their system and network access using card readers on the desktop, is not easy with mobile devices.

That spurred various programs to try to convert those smart card credentials for use on mobile devices, which is where the term “derived” comes from. It hasn’t been easy, and both the National Institute of Standards and Technology and the Defense Information Systems Agency have been working for several years to come up with answers.

NIST, for example, released guidelines for derived PIV credentials nearly two years ago in an update to Special Publication 800-157, which describes ways to implement credentials on mobile devices. More recently, the Derived PIV Credentials Project from NIST’s National Cybersecurity Center of Excellence (NCCoE) has been building on SP 800-157, producing practice guides that agencies can use to start implementing a derived credential program.

On the military side, DISA earlier this year implemented Purebred as a way for Defense Department public-key infrastructure subscribers to use their common access cards to generate derived credentials on their mobile devices. A three-year, phased program designed to overcome specific DOD issues with PKI mobile provisioning, Purebred is currently available for iOS, Android and BlackBerry phones and tablets.

How derived credentials might be created in the future is not clear, however, since the DOD a year ago said it would eliminate CACs in favor of a new, multifactor authentication system as early as 2018.

Sean Frazier, chief technical evangelist for mobile security firm MobileIron, said the NCCoE practice guides will help to accelerate agencies’ use of derived PIV credentials. It’s not just a technology problem, he said, and the guides “will also provide guidance for workflows for enrollment and credential lifecycle management.”

The practice guides work in conjunction with a reference architecture “to assist agencies in being able to get to see how to get to the top of the mountain,” Frazier said. “Otherwise, PIV-D is rather daunting.”

MobileIron, along with its technology partner Entrust Datacard, was recently chosen by NIST to provide a derived credential solution for the NCCoE program. Last year, the two companies announced their first derived credential product after a two-year development process. Frazier said at the time that civilian agencies would likely be the first users of the product, though MobileIron also recently announced its derived credential solution would integrate with Purebred.

Beyond providing better security for mobile devices, the government also hopes that derived credentials will help open up broader use of devices across all agencies.

With the influx of younger workers into government, bring-your-own-device issues have become a major thorn in the side of agency security professionals. They hope use of derived credentials will provide a level of security that can free up the use of BYOD, which most agencies now view as a desirable goal.

This article was changed May 1 to correct the name of the National Cybersecurity Center of Excellence.

Posted by Brian Robinson on Apr 28, 2017 at 7:01 AM


Jumping on the blockchain bandwagon

It’s always a bit of a push and an invitation to hype to declare “the year of” anything.  That being said, 2017 could well be the year blockchain takes off.  

Given that the cryptographic ledger technology behind the Bitcoin digital currency already has been widely touted as a new way of solving a multitude of cybersecurity issues, one could say blockchain has already been hyped. But it’s also gained firm adherents in some areas of government.
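The core idea behind that ledger technology -- each block cryptographically commits to the one before it, so tampering anywhere breaks the chain -- is simple enough to sketch. This is a toy illustration only, with invented field names and none of the consensus or proof-of-work machinery a real blockchain adds:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block whose hash commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def verify(chain):
    """Re-derive every hash; any tampering breaks a link somewhere."""
    for i, block in enumerate(chain):
        expected = block_hash({k: v for k, v in block.items() if k != "hash"})
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "record A")
add_block(chain, "record B")
print(verify(chain))           # True
chain[0]["data"] = "tampered"  # altering history...
print(verify(chain))           # False -- the chain detects it
```

That tamper-evidence, distributed across many parties who each hold a copy of the ledger, is what makes the technology attractive for records that no single party should be able to rewrite -- health records included.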

The Department of Health and Human Services is certainly one supporter. Last year HHS issued a series of public challenges for ideas about how blockchain could be used to address privacy, security and scalability challenges in managing electronic health records. It announced the 15 winners in September, with several chosen for presentations at the Office of the National Coordinator for Health IT’s Blockchain & Healthcare Workshop.

It will consolidate that interest next month when HHS’ Office of the National Coordinator for Health Information Technology sponsors the Blockchain in Healthcare Code-A-Thon, apparently the first blockchain hackathon hosted by a government entity. The Washington, D.C.-based Chamber of Digital Commerce will be a co-host, and results of the hackathon will be announced at its March 14-15 D.C. Blockchain Summit.

The fact that HHS has put its name behind a cybersecurity hackathon, where contestants compete to produce real, working solutions to problems, signals a clear intent to go ahead and use blockchain for its needs.

Other government agencies have also recognized the importance of blockchain technology. The U.S. Postal Service, for one, believes blockchain could disrupt many of the industries it serves, and so is worthy of closer study. The technology is also seen as potentially having a major transformative impact on cities.

IBM surveyed some 200 government organizations around the world about blockchain and found fully nine in 10 plan to invest in blockchain technology for projects such as financial transaction, asset and contract management. Around 14 percent -- a group IBM labels as the trailblazers in this area -- plan to have blockchain in production and “at scale” in 2017.

“These findings reveal that blockchain adoption is accelerating faster than originally anticipated,” the IBM report said, “with government executives identifying key areas and benefits to explore.”

Blockchain might be ramping up quickly, but there are still plenty of people on the sidelines, at least according to a Deloitte Consulting survey that found that around 40 percent of those industry executives it questioned still have little or no knowledge of blockchain. Of those that do, however, more than a quarter list it as one of their top five priorities for 2017, and over half believe they’ll be less competitive if they don’t adopt the technology.

Another mark of how far blockchain fever has spread comes in an article by the World Economic Forum explaining what blockchain is, its history and how it might be of advantage to the Forum’s global business, government and industry audience. The WEF believes blockchain could be the technology that “helps globalization work for everybody.” In January this year, it formally announced the formation of its Global Futures Council on Blockchain.

So activities galore, but does this translate into a real interest to adopt and exploit blockchain for real-world security problems?

It’s not a silver bullet by any means. For one thing, there are still many questions about the security of blockchains themselves, at least as they are now being used. But the technology offers too tempting an answer for a range of pressing problems for it to be held back very long, as the Department of Homeland Security indicated last year in a solicitation for the use of blockchain in identity management solutions.

So, while calling 2017 “the year of blockchain” might be pushing it, this seems to be the year when blockchain hype dissipates and it becomes a major, if still early-stage, cybersecurity tool.

Posted by Brian Robinson on Feb 13, 2017 at 1:12 PM