Moving cybersecurity from art to science

When it comes to cybersecurity and the ability to catch threats in the early stages, before they can do much damage, where does government stand? Effective? Ineffective? Is it at least improving?

The picture over the past couple of years doesn’t look encouraging. The infamous breach at the Office of Personnel Management, other noted attacks on the Pentagon and the Internal Revenue Service and minor breaches elsewhere would seem to suggest the government is overwhelmed.

Some analyses seem to confirm that. The Government Accountability Office, for example, recently reported that the number of cyber incidents affecting federal agencies rocketed to more than 77,000 in 2015, up from just 5,503 in 2006 -- an increase of more than 1,300 percent.

Over the last several years, GAO has made around 2,500 recommendations to agencies intended to help improve their information security controls, GAO Director of Information Security Issues Gregory Wilshusen told the President’s Commission on Enhancing National Cybersecurity. As of mid-September 2016, 1,000 of those had yet to be implemented.

As GAO reports tend to do, Wilshusen then listed a raft of actions agencies should take to improve the protection of their information and systems.

One of the emerging technologies being pitched as a potential advance for security is big data analytics, which can sift the flood of data collected by various sensors and sort out patterns that might point to impending attacks. Even though many are skeptical of data analytics, particularly predictive analytics, it’s one of the more promising technologies government can use to get in front of security problems.
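
As a purely hypothetical sketch of what that pattern-sorting can look like at small scale, the Python snippet below counts failed logins per source address in some invented log events and flags statistical outliers; production analytics would run far richer models over cluster-scale data.

```python
# Toy sketch of the pattern-hunting idea, not any agency's actual pipeline:
# count failed logins per source address and flag sources far above the norm.
# The log records and the threshold are invented for illustration.
from collections import Counter
from statistics import mean, stdev

# Hypothetical sensor records: (source_ip, event_type)
events = [(f"10.0.0.{i}", "login_fail") for i in range(1, 10)]
events += [("10.0.0.99", "login_fail")] * 10
events += [("10.0.0.3", "login_ok"), ("10.0.0.7", "login_ok")]

fails = Counter(ip for ip, kind in events if kind == "login_fail")
counts = list(fails.values())
mu, sigma = mean(counts), stdev(counts)

# Flag any source more than two standard deviations above the mean
suspects = [ip for ip, n in fails.items() if sigma and (n - mu) / sigma > 2]
print("possible attack sources:", suspects)  # -> ['10.0.0.99']
```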

A MeriTalk survey showed that interest in using big data is high in government, with 81 percent of respondents saying their agencies use it in some capacity and more than 50 percent saying they already have it built into their cybersecurity strategy.

However, only 45 percent of those surveyed said they trusted big data results when it comes to cybersecurity. Nearly 90 percent of them said they had trouble drawing intelligence from the data, and a third of them admitted they still don’t have the right systems in place to gather the information they need even to start applying data analytics.

Read around the figures in the various studies, however, and things look more optimistic. At the least, the organizational resistance and executive-level inattention that have long plagued government cybersecurity finally seem to be fading.

As Rocky DeStafano, cybersecurity expert at Cloudera, which sponsored the MeriTalk survey, pointed out, at least there’s interest in improving. The positive takeaway from the survey, conducted two years after a similar one, is that a high percentage of government agencies are at least starting to use big data analytics, compared to much lower numbers back then.

And people are already reporting encouraging results, DeStafano said, such as the 90 percent of respondents who have seen some reduction in successful attacks and the 84 percent who have been able to thwart at least some kinds of attacks by using big data analytics.

“That’s the most encouraging thing to me,” he said. “This is all still in its infancy and yet it’s still very, very effective.”

Outside the federal arena, optimism also seems to be catching on in the states. A report from Deloitte and the National Association of State Chief Information Officers showed an increasing level of awareness of security issues at the executive level, with cybersecurity becoming “part of the fabric” of government operations.

Even the GAO, usually so critical of government security, had some kind words. While pointing out the faults and inconsistencies of agencies’ security efforts and that additional actions are needed, Wilshusen did tell the presidential commission that the Obama administration and agencies have acted to improve cybersecurity protections.

So it’s a start, but one that must be accelerated into much wider and more effective application. After all, when it comes to technologies like data analytics, the bad guys have not been slow to take advantage of them, either.

Both government and private industry are changing how they approach cybersecurity, DeStafano believes, and it will take patience. Unlike in the past, when security was much more a matter of intuition and guesswork, there’s now a cadre of highly skilled people identifying threats with mechanisms and techniques that can be replicated and improved for the future.

“What’s really happening is that we’re turning an art into a science, and that’s going to take time,” DeStafano said. “When we do that, we’ll be able to get a little more ahead of the game than we are today.”

Posted by Brian Robinson on Oct 21, 2016 at 1:07 PM

NIST offers cyber self-assessment tool, updates email security guidance

The National Institute of Standards and Technology has long been a national resource on cybersecurity, and its Cybersecurity Framework has been widely adopted in both government and private industry. The guidance, however, doesn’t come with many pointers to tell organizations how well they are deploying it.

Hearing the many pleas for some way of doing that, NIST has finally come out with a self-assessment tool that should give organizations a better understanding of how they are progressing with security risk management efforts. It’s asking for public comment on the current draft document.

The Baldrige Cybersecurity Excellence Builder pulls together two prized Commerce Department initiatives. The new tool incorporates elements of NIST’s Cybersecurity Framework, which was introduced in February 2014, and takes inspiration from the Baldrige Award, created in 1987 and named after the late Commerce Secretary Malcolm Baldrige.

The award begat the Baldrige Excellence Framework, which organizations can use to build performance-boosting programs. After that came the Baldrige Performance Excellence Program, managed by NIST, which also includes various self-assessment tools that can tell organizations how well they are doing.

As far as the Cybersecurity Framework goes, it’s proving to be as popular as the Baldrige program has been over the years, and there’s hope it might be as effective. Though it has its critics, the Cybersecurity Framework has so far been adopted by around 30 percent of U.S. organizations, according to Gartner, and that’s expected to rise to 50 percent by 2020.

The new assessment tool, according to NIST, guides users through a process that details their particular characteristics and strategic needs for cybersecurity and will enable them to:

  • Determine cybersecurity-related activities that are important to business strategy and the delivery of critical services
  • Prioritize investments in managing cybersecurity risk
  • Assess the effectiveness and efficiency of using cybersecurity standards, guidelines and practices
  • Assess cybersecurity results
  • Identify priorities for improvement

At the end, the assessment will put the organizations at a certain maturity level -- reactive, early, mature or role model -- and from there, each organization can build out its own action plan for upgrades and cybersecurity improvements.
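
The rubric behind those levels isn’t spelled out here, so purely as a hypothetical sketch of the shape of such scoring, here is a toy Python version; the 0-to-5 answer scale and the cutoffs are invented for illustration.

```python
# Toy illustration only: the builder's real rubric is qualitative and more
# nuanced. The 0-5 answer scale and the cutoffs below are invented.
def maturity_level(item_scores):
    """Map self-assessment answers to one of the four maturity tiers."""
    avg = sum(item_scores) / len(item_scores)
    if avg < 1.5:
        return "reactive"
    if avg < 3.0:
        return "early"
    if avg < 4.5:
        return "mature"
    return "role model"

print(maturity_level([2, 3, 1, 2, 4]))  # -> early
```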

NIST is looking for comments on the first draft of the guidelines by Dec. 15.

Email security has also long been a focus for NIST, with its Special Publication 800-45 providing basic guidance. However, the most recent version of that guidance was published in early 2007, and the universe of security threats has grown much larger since then.

A new missive on Trustworthy Email, SP 800-177, seeks to plug the holes. Billed as complementary to 800-45, it provides more up-to-date recommendations for managing digital signatures, encryption, spam and more.

Man-in-the-middle attacks, for example, have become a widespread way for bad actors to put themselves between the sender and receiver of a clear-text email and read its contents directly. The NIST publication points out that these attacks can be prevented by encrypting email end-to-end and by implementing message-based authentication and confidentiality procedures.
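
As a concrete illustration of the transport side of that advice, here is a minimal Python sketch (standard library only) that refuses to send anything over SMTP until the connection has been upgraded to TLS and the server’s certificate verified. The server name, addresses and credentials are placeholders; end-to-end protection of message contents would additionally require something like S/MIME.

```python
# Minimal sketch: require a verified TLS channel before sending mail.
# Server, port, addresses and credentials below are placeholders.
import smtplib
import ssl

context = ssl.create_default_context()  # verifies the server's certificate

with smtplib.SMTP("mail.example.gov", 587) as server:
    server.starttls(context=context)  # upgrade the session to TLS first
    server.login("sender@example.gov", "app-password")
    server.sendmail(
        "sender@example.gov",
        "recipient@example.gov",
        "Subject: status\r\n\r\nThis message was encrypted in transit.",
    )
```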

There’s nothing especially new in the NIST email guidance, but even the basic recommendations mentioned in the document are often not implemented at organizations. Trustworthy Email should be useful, if for nothing else, for bringing all the current standard methods of protecting email together into a focused resource for email and network administrators and information security managers.

Posted by Brian Robinson on Sep 29, 2016 at 9:27 AM

Lessons from the OPM breach

When the data breaches at the Office of Personnel Management were revealed in 2015, it took some time for people to come to terms with the damage that had been wrought. In the end, over 20 million government employee and contractor records were compromised and OPM executives lost their jobs. It may be years before everything gets sorted out.

The report released Sept. 7 by the Republican majority staff of the House Committee on Oversight and Government Reform claims that the loss of background investigation information and fingerprint data “will harm counterintelligence efforts for at least a generation to come.”

That’s unlikely to be the last word. The Democrats on the committee have already rejected at least parts of the report, claiming factual deficiencies and too little blame attached to federal contractors. OPM, for its part, asserts the report doesn’t reflect how much progress it has made on security since the breaches were discovered.

Nevertheless, the report is the most comprehensive official account to date of what happened at OPM, and in its details it presents what could turn out to be both a model for what not to do and a template for how to design security to prevent future breaches.

The first lesson: When you get advice from knowledgeable sources, you should really take it. As far back as 2005, the OPM inspector general warned that agency data was vulnerable to hackers. The risk was upgraded to a “significant deficiency” in 2014. Even as recently as November 2015, months after the breach was revealed, the IG was still complaining that OPM was not meeting the requirements of the Federal Information Security Management Act and that the agency’s IT security program wasn’t in compliance.

Then look at the failure to implement basic security requirements, even when the mandate for doing so had been around for a while. OPM used multifactor authentication for only a very small fraction of its staff, despite a policy from the Office of Management and Budget issued several years before the breach. OPM also allowed key IT systems to operate without a security assessment and a valid authority to operate.

There’s also a lesson to be learned about overconfidence. The Department of Homeland Security’s U.S. Computer Emergency Readiness Team notified OPM as early as March 2014 that someone was snatching data from its network. OPM then monitored that hacker for two months to get a better idea of the threat.

Fair enough, except that by focusing on that first hacker, OPM missed another who, posing as a contractor, installed malware and created a backdoor. The agency eventually tackled the threat posed by the first hacker, but the second went unnoticed, remained in the system and successfully stole data after OPM thought it had cleared its networks.

“Had OPM implemented basic, required security controls and more expeditiously deployed cutting edge security tools when they first learned hackers were targeting such sensitive data,” the House report said, “they could have significantly delayed, potentially prevented or significantly mitigated the theft.”

In fact, the agency did use tools from Cylance Inc., but only after the breach caused by the second hacker was identified. In just the six weeks following that discovery, from April 16, 2015 through the end of May, the tools “consistently detected malicious code and other threats to OPM,” the report said. Unfortunately, OPM’s security director had recommended using the Cylance tools way back in March 2014, after the discovery of the first hack.

OPM, to its credit, seems to have hustled to repair both its security and, though it may take a long time, its reputation. Acting Director Beth Cobert has laid out a series of steps the agency has taken, including imposing multifactor authentication for anyone accessing the agency network, shoring up the web-based systems used to gather information for employee background investigations, implementing the government’s continuous monitoring program and working with the Defense Department to construct a new IT infrastructure for background checks.

Notably, OPM has also brought on a “senior cybersecurity advisor” who reports directly to OPM’s director, among a number of other IT and security changes. It’s also centralized cybersecurity resources and responsibilities under a new chief information security officer.

That’s as important as any security technology OPM will use. As the House report notes, the breaches at OPM represent “a failure of culture and leadership, not technology.” The security tools that could have prevented the breaches were available, but OPM failed to recognize their importance.

Though the OPM hack itself is over, it could take years for the repercussions to subside, particularly the ongoing threat to government employees whose personal information was stolen. It could also cause lasting damage to U.S. counterintelligence efforts.

The publication of the House report, and its damning details, should lead to major reforms in how agencies tackle cybersecurity. If those reforms don’t come about after what is widely considered one of the biggest security failures ever, then you have to wonder what it will take.

Posted by Brian Robinson on Sep 09, 2016 at 11:52 AM

Derived credentials for mobile on the horizon?

All government employees are familiar with using a smart card to access buildings, desktop computers or just about anything else that requires a so-called hard token. That’s not so convenient for employees who want to access government information from their mobile devices, however, which is where derived credentials come in.

For one thing, supplying all government users of mobile devices with a card reader is a very expensive proposition. Using that reader every time employees want to use their device to get into a government network or website is also cumbersome, to say the least, and potentially life-threatening when responders want network access during an emergency.

The dangers of using a single password for mobile access have been obvious for a while, so government has been on the lookout for a better and more convenient solution. Just over two years ago, the National Institute of Standards and Technology issued the first draft of guidelines for implementing derived personal identity verification (PIV) credentials on mobile devices.

Essentially, a derived credential is a software version of the PIV credential stored on a government smart card. It’s directly related to the NIST SP 800-57 guidelines on cryptographic key management, which underpin government security policies such as Homeland Security Presidential Directive 12 and those governing public-key infrastructures (PKIs).
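
To make the mechanics concrete, here is a rough sketch of what the enrollment step could look like, using the third-party pyca/cryptography package: the device generates a fresh key pair and builds a certificate signing request for the agency’s PKI to certify. In a real derived-credential deployment the key would be generated in the device’s hardware-backed keystore and the request authorized against the user’s existing PIV card; all names below are placeholders.

```python
# Rough sketch of derived-credential enrollment using pyca/cryptography.
# Names are placeholders; real deployments use the device's hardware keystore.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 1. Generate the key pair that will back the derived credential.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Build a certificate signing request for the agency CA to certify.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "jane.doe@agency.example.gov"),
    ]))
    .sign(device_key, hashes.SHA256())
)

# 3. The PEM-encoded request would be sent to the PKI's enrollment endpoint.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```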

So far, however, derived credentials have rarely been implemented in government. That might change with the recent announcement by MobileIron, an enterprise mobility management provider, and Entrust Datacard that they are partnering on a derived credential product that could be available for government mobile users by the end of the year.

It’s the result of a long process, said Sean Frazier, chief federal technical evangelist at MobileIron. NIST had to spend time developing the final SP 800-157 guidelines on derived credentials, and at the same time, Entrust and the other organizations that run the government’s backend certificate management systems, which enable the use of PIVs, had to adapt those systems to allow for derived credentials.

MobileIron’s job was to develop the software that would manage the credentials on the device and make sure they could be seen by various applications.

“We didn’t start seeing pilots or testing for this until the NIST guidelines went into draft mode in 2014,” Frazier said, “and it’s taken the two years since then to wind our way through the final policy pronouncements and product development.”

The first users of the product will likely be civilian agencies, which have been looking for a solution that would also allow them to extend their current investments in hard-token PIV and smart cards instead of having to develop a different authentication system for mobile users.

It’s unclear so far whether the Defense Department would use this type of mobile soft token. DOD went its own way on PKI and Common Access Card implementation, and a few months ago, it said it would be eliminating CACs in favor of a new, multifactor authentication system. Meanwhile, last year DOD approved the use of derived credentials for some of its BlackBerry users.

Frazier said the new derived credential product could spell the end for legacy BlackBerry devices still in use at various civilian agencies. The devices had been pervasive in government because of the secure connectivity they brought with them. For that reason, agencies have been reluctant to get rid of them until they had other viable systems in place, Frazier said, adding that the derived credential could be a big incentive for making the switch.

“This allows [agencies] to provide the kind of seamless mobile experience they’ve always talked about wanting to deliver for their users,” he said. “It gives them that experience while tying the security to that existing hardware token, so they have a higher level of security and better usability.”

Posted by Brian Robinson on Aug 26, 2016 at 1:07 PM