
New breach, same lessons

The story of recent breaches at the credit-rating agency Equifax, which may have involved the personal details of nearly 150 million people, has probably just begun, given the confusion that still surrounds events. But it’s brought the security of open source software to the fore yet again, and highlighted the ongoing struggle organizations still have with cybersecurity.

So far there’s no indication of how many U.S. government organizations may have been affected by the software bug that apparently led to the Equifax mess. However, it’s already been compared to some of the most damaging breaches in recent years, such as those at Sony and Target, and the 2015 intrusion at the Office of Personnel Management that may have exposed sensitive details of more than 20 million current and former U.S. government employees.

It also revives elements of the debate that followed the 2014 discovery of Heartbleed, a vulnerability in the OpenSSL software library. That discovery launched a back-and-forth argument about the inherent security of open source software and how much responsibility organizations should bear for the security of applications built with it.

The Equifax breach was blamed on a vulnerability in the Apache Software Foundation’s Struts version 2, an open source framework for building Java web applications. There have been a number of announcements of Struts vulnerabilities over the past few months, the most recent issued by the Center for Internet Security on Sept. 15.

Depending on the privileges associated with the application, according to CIS, an attacker could “install programs; view, change, or delete data; or create new accounts with full user rights.” It put the risk for medium and large government enterprises at high.

It was an earlier vulnerability, publicly announced Sept. 4, that’s thought to have been the one attackers exploited. However, the Apache Foundation itself said that, given that the breach at Equifax was detected as early as July 5, the more likely culprit was either an older, previously announced vulnerability on an unpatched Equifax server or a zero-day exploit of a then-unknown flaw.

The timelines here are confusing. There were detailed reports of attacks using a Struts 2 vulnerability as far back as March 2017, with attackers carrying out a series of probing attacks and injecting malware into systems.
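
For defenders sifting through the aftermath, one practical first step is checking web server logs for the OGNL expression markers (“%{” and “${”) that published exploits for the March Struts flaw reportedly injected through HTTP headers. Below is a minimal, illustrative Python sketch; the log path is a placeholder, and it assumes the logging configuration actually records the request headers in question, which many default formats don’t.

```python
import re
import sys

# OGNL expressions open with "%{" or "${"; exploit traffic for the
# March 2017 Struts flaw reportedly smuggled them in via HTTP headers.
# These patterns are illustrative, not a complete exploit signature.
SUSPICIOUS = re.compile(r"%\{|\$\{")

def scan(path):
    """Return (line number, line) pairs that contain OGNL markers."""
    hits = []
    with open(path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if SUSPICIOUS.search(line):
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    # "access.log" is a stand-in; point this at a log that records headers.
    path = sys.argv[1] if len(sys.argv) > 1 else "access.log"
    for lineno, line in scan(path):
        print(f"{lineno}: {line}")
```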

Back in the Heartbleed days, detractors claimed that open source software was inherently insecure because its developers just didn’t keep as close an eye on security issues and weren’t as systematic in finding potential holes in code as proprietary developers were. Proponents, on the other hand, said that open source was in fact inherently at least as secure as other software and that it was safe for government agencies to use.

It’s not an academic issue. Sonatype, for example, claims that some 80 to 90 percent of modern applications contain open source components, and it recently issued a report that said an “insatiable appetite for innovation” was fueling both the supply of and demand for open source components.

The Apache Foundation made the case that its developers put a huge effort into both hardening its products and fixing problems as they become known. However, it said, since vulnerability detection and exploitation is now a professional business, “it is and always will be likely that attacks will occur even before we fully disclose the attack vectors.”

In other words, it’s up to organizations that use Struts -- or any other open source product, for that matter -- to treat security just as they would for a proprietary product: assume there are flaws in the software, put security layers in place and watch for unusual access to public-facing web pages. It’s also critical that organizations track security patch releases and software updates and act on them immediately.
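
As a small illustration of acting on patch announcements, a script like the one below could flag a Java project that still declares a Struts 2 version older than the fixed releases Apache announced in September (2.3.34 and 2.5.13). It’s a hedged sketch, not a substitute for real software composition analysis: the pom.xml scan is naive, and the version floors should be rechecked against Apache’s current security bulletins.

```python
import re

# Version floors from Apache's September 2017 Struts advisories; verify
# against the current bulletins before relying on these numbers.
FIXED_FLOORS = {(2, 3): (2, 3, 34), (2, 5): (2, 5, 13)}

def parse_version(version):
    return tuple(int(part) for part in version.split("."))

def is_patched(version):
    v = parse_version(version)
    floor = FIXED_FLOORS.get(v[:2])
    # Unknown branches are flagged for manual review rather than passed.
    return floor is not None and v >= floor

def check_pom(path):
    # Naive text scan for a struts2-core dependency; a real build would
    # use `mvn dependency:tree` or a software composition analysis tool.
    text = open(path).read()
    match = re.search(
        r"<artifactId>struts2-core</artifactId>\s*"
        r"<version>([\d.]+)</version>",
        text,
    )
    if match and not is_patched(match.group(1)):
        print(f"{path}: struts2-core {match.group(1)} needs review")

check_pom("pom.xml")  # hypothetical project file
```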

Sound advice for everyone. If only everyone followed it.

Posted by Brian Robinson on Sep 26, 2017 at 12:34 PM



NIST's how-to for prioritizing risk

Some of the hardest parts of a security professional’s job are identifying which elements of an enterprise infrastructure pose the greatest risk and keeping that infrastructure secure going forward. The underlying constraint is doing both with a less-than-infinite budget.

In many organizations, and certainly for most of government, that comes down to keeping systems up and running when at least some part of that infrastructure depends on legacy systems. Agencies can’t replace all of the aging machines and applications, so where should they invest scarce dollars to boost security, while at the same time making sure they don’t introduce problems that prevent that infrastructure from functioning properly?

That’s what the National Institute of Standards and Technology’s most recent guidance on risk assessment aims to address. Unlike other cybersecurity guidance NIST has published, however, this document includes a step-by-step process agencies can use to identify the most critical parts of an infrastructure so they can better choose what to upgrade and where to spend their (usually scarce) dollars.

NIST itself said the new guidance builds on previous publications, such as SP 800-53 Rev. 4, SP 800-160 and SP 800-161, all of which also emphasized picking out critical parts of an infrastructure, but didn’t say how to do that.

Another relevant publication, the NIST Cybersecurity Framework -- an answer to President Barack Obama’s 2013 Executive Order 13636 on “Improving Critical Infrastructure Cybersecurity” -- includes a detailed mechanism that organizations can use to better understand how to manage security risks.

The framework has become a standard document for both public- and private-sector organizations in establishing their approach to cybersecurity. In May, the Trump White House issued an executive order on strengthening federal cybersecurity that effectively made use of the NIST framework government policy.

The new NIST guide describes what it calls a “high-level criticality analysis process model,” which steps users through the components needed to arrive at a detailed analysis of the criticality levels of all the programs, systems, subsystems, components and subcomponents in a particular enterprise.
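
NIST’s actual model works top-down from mission priorities to individual components, so the toy scoring below is only a loose illustration of the underlying idea of ranking what to fix first. The fields, weights and formula are invented for this sketch; an agency would substitute its own mission, threat and dependency data.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    mission_impact: int  # 1 (minor) .. 5 (severe) if this component fails
    exposure: int        # 1 .. 5; an internet-facing legacy system rates high
    dependents: int      # how many other systems rely on it

def criticality(c: Component) -> int:
    # Invented formula: impact scaled by exposure, plus a fan-out bonus.
    return c.mission_impact * c.exposure + 2 * c.dependents

inventory = [
    Component("payroll-mainframe", mission_impact=5, exposure=2, dependents=7),
    Component("public-web-portal", mission_impact=4, exposure=5, dependents=3),
    Component("test-lab-server", mission_impact=1, exposure=3, dependents=0),
]

# Highest scores are the first candidates for upgrade dollars.
for c in sorted(inventory, key=criticality, reverse=True):
    print(f"{criticality(c):3d}  {c.name}")
```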

This kind of approach will give agencies more certainty in what they buy, and it won’t upset the business logic that supports an agency and its mission. After all, even though cybersecurity has certainly risen in the list of agency priorities, the main question most IT managers ask security product vendors is how any new tool will affect the normal running of current networks and systems.

The authors of NIST's new guidance believe their approach could eliminate the debate over the return on investment of security solutions versus the long-term resilience of systems. That’s something to be hoped for, but it may be a while before agency bosses shunt aside well-established ROI measures for something still as nebulous -- for now, anyway -- as resilience.

The new NIST publication does hint at the need for more actionable outcomes from all of the guidance -- from NIST and others -- that’s been published over the last few years. The House, for example, recently tried to attach measurable metrics to the NIST framework through the NIST Cybersecurity Framework, Assessment and Auditing Act of 2017, which was introduced in February.

It would be a real advance if that effort produced actual, usable metrics, because that’s been notoriously hard to do with any kind of specific security guidance. Each organization has very different needs when it comes to applying security, so a general set of metrics for measuring effectiveness may not be possible.

Still, the current draft of the NIST criticality guidance, which is open for comment until Aug. 18, gets halfway there. It at least promises to give users a better idea of what they have and how best to insert new security solutions and systems. That should make for a more certain and more effective acquisition process. And, who knows, it might eventually take its place alongside the NIST Cybersecurity Framework as a solid basis for government cybersecurity efforts.

Posted by Brian Robinson on Jul 24, 2017 at 10:33 AM



WannaCry: A preview of coming attacks?

The astonishing spread of the WannaCry ransomware that exploded onto the global scene on May 12 is not the work of genius malware developers. Rather, it is a clear example of the confluence of two trends: one that should have been strangled a long time ago, and the other an inevitable result of technological progress.

Most people, if they’ve been paying attention, have noticed the recent growth in ransomware. In its 2017 Data Breach Investigations Report, Verizon said ransomware is now the fifth most common form of malware, up from 22nd in 2014.

Part of the reason for that jump is the increasingly sophisticated techniques used to create the malware and share the code. The WannaCry malware apparently uses code first developed by the Lazarus Group, a shady outfit that’s been linked to some of the biggest and most effective raids on bank and finance systems around the world. The rise of ransomware-as-a-service is apparently making sophisticated malware available to even the most technically deficient criminal.

WannaCry also took advantage of a Windows exploit called EternalBlue that was developed by the National Security Agency and that attacks weaknesses in Microsoft’s SMBv1 (Server Message Block 1.0) protocol using a backdoor tool also created by the NSA. Unpatched Windows machines running older versions of the operating system -- from Windows XP up through Windows 7 -- were vulnerable to WannaCry.

It’s not clear just how aware security professionals, in both the public and private sectors, are of the increasingly industrial nature of malware development and exploits. Malware creators are every bit as capable as their white-hat counterparts, and the infrastructure that makes malware easily obtainable by criminals is starting to mirror that of the legitimate software industry.

The other side of this picture is users’ continued foot-dragging on baseline, no-brainer security such as regularly patching their systems. Microsoft, for example, issued a security update for the SMBv1 vulnerability in March, but thousands of systems were thought to be still unpatched when the WannaCry ransomware was launched.

Microsoft took the unusual step of sending out an emergency custom patch for Windows XP, Windows 8 and Windows Server 2003 machines on the first day of the attack. It also suggested that users make other changes, such as blocking legacy protocols on their networks, to counter similar attacks in the future.
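
For administrators doing triage, even a crude inventory of which internal hosts still answer on SMB’s TCP port 445 is a start. The sketch below uses a made-up address range and only confirms the port is open; verifying that SMBv1 is disabled and the MS17-010 update is applied requires a real vulnerability scanner, and scans should only be run on networks you are authorized to test.

```python
import socket

# Hypothetical internal range; substitute hosts you are authorized to scan.
HOSTS = [f"10.0.0.{i}" for i in range(1, 20)]

def smb_port_open(host, timeout=0.5):
    """Return True if the host accepts a TCP connection on port 445."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    if smb_port_open(host):
        print(f"{host}: port 445 open -- check SMBv1 status and MS17-010 patch")
```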

One thing that’s still unclear is the potential impact of the attack on the government’s own agencies -- in this case, the NSA. The agency developed EternalBlue as a weapon in the fight against groups hostile to the U.S. But after the tool was stolen last year, along with a stash of other NSA cyber weapons, and the code eventually published, questions arose about whether the NSA was itself secure enough to be holding such potent hacking tools.

NSA officials also apparently worried about that. In a blog post, Brad Smith, Microsoft’s president and chief legal officer, said the WannaCry incident is yet another example of why the stockpiling of such things as EternalBlue, which wasn’t revealed to industry or anyone else, is such a problem.

“This is an emerging pattern in 2017,” he wrote. “We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world.”

All governments should treat this attack as a wake-up call, he said, and they must take a different approach and apply the same rules to cyber weapons as they do to weapons in the physical world.

That’s probably good advice. Up to now, cyberattacks have been non-lethal, but WannaCry showed just what real-world damage can be caused by ransomware and other types of malware. The UK’s National Health Service was one of the first and worst hit by WannaCry, and many hospitals there had to put off essential surgeries and other procedures.

With the pace of malware innovation seemingly outpacing the efforts of both public and private entities to defend against it, we must find a new way to deal with the issues malware poses. Microsoft, for example, wants a Digital Geneva Convention to govern global cybersecurity, which would include a requirement for governments to report vulnerabilities to software vendors rather than stockpile them.

Right now, that kind of collective response is a reach, but WannaCry has certainly shown just why it’s needed.

Posted by Brian Robinson on May 17, 2017 at 12:53 PM



The road to derived mobile credentials

The effort to provide government workers who use mobile devices with personal identity verification credentials is picking up momentum, with programs in both the civilian and military sectors starting to deliver on earlier promises.

Solutions for mobile users are long overdue. As the swing away from the desktop and onto the mobile device became obvious some years ago, government agencies found themselves without any clear direction to take when it came to security. Providing the level of security that comes with smart cards, which workers can use to authenticate their system and network access using card readers on the desktop, is not easy with mobile devices.

That spurred various programs to try to take those smart card credentials and convert them for use on mobile devices, which is where the term “derived” comes from. It’s not been easy, and both the National Institute of Standards and Technology and the Defense Information Systems Agency have been working for several years to come up with answers.

NIST, for example, released guidelines for derived PIV credentials nearly two years ago in Special Publication 800-157, which describes ways to implement the credentials on mobile devices. More recently, the Derived PIV Credentials Project from NIST’s National Cybersecurity Center of Excellence (NCCoE) has set out to build on SP 800-157 with practice guides that agencies can use to start implementing a derived credential program.
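
At its core, issuing a derived credential means generating a fresh key pair on (or for) the mobile device and certifying it on the strength of the user’s existing PIV identity. The fragment below is a toy illustration of just the key-and-request step, using the third-party Python cryptography package; the subject name is a placeholder, and a real SP 800-157 implementation adds PIV-based proof of identity, hardware-backed key storage and policy checks that this sketch omits.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Generate a device-resident key; a real deployment would keep this in
# hardware-backed storage on the device, never in plain memory or on disk.
key = ec.generate_private_key(ec.SECP256R1())

# Build a certificate signing request for the derived credential. The
# subject below is a made-up placeholder; actual naming and extensions
# follow the SP 800-157 certificate profiles.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "jane.doe.derived.example"),
    ]))
    .sign(key, hashes.SHA256())
)

print(csr.public_bytes(serialization.Encoding.PEM).decode())
```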

On the military side, DISA earlier this year implemented Purebred as a way for Defense Department public-key infrastructure subscribers to use their common access cards to generate derived credentials on their mobile devices. A three-year, phased program designed to overcome specific DOD issues with PKI mobile provisioning, Purebred is currently available for iOS, Android and BlackBerry phones and tablets.

How derived credentials might be created in the future is not clear, however, since the DOD a year ago said it would eliminate CACs in favor of a new, multifactor authentication system as early as 2018.

Sean Frazier, chief technical evangelist for mobile security firm MobileIron, said the NCCoE practice guides will help to accelerate agencies’ use of derived PIV credentials. It’s not just a technology problem, he said, and the guides “will also provide guidance for workflows for enrollment and credential lifecycle management.”

The practice guides work in conjunction with a reference architecture “to assist agencies in being able to get to see how to get to the top of the mountain,” Frazier said. “Otherwise, PIV-D is rather daunting.”

MobileIron, along with its technology partner Entrust Datacard, was recently chosen by NIST to provide a derived credential solution for the NCCoE program. Last year, the two companies announced their first derived credential product after a two-year development process. Frazier said at the time that civilian agencies would likely be the first users of the product, though MobileIron also recently announced that its solution would integrate with Purebred.

Beyond better security for mobile devices, the government is also hoping that the use of derived credentials will help open up broader use of devices across all agencies.

With the influx of younger workers into government, bring-your-own-device issues have become a major thorn in the side of agency security professionals. They hope use of derived credentials will provide a level of security that can free up the use of BYOD, which most agencies now view as a desirable goal.

This article was changed May 1 to correct the name of the National Cybersecurity Center of Excellence.

Posted by Brian Robinson on Apr 28, 2017 at 7:01 AM