Are states ill-equipped to manage cybersecurity?

While the government focus on cybersecurity and cybercrime has lately been on the federal side, much of the risk lives at the state and local level. That’s where the bad guys can find much of the personal information that makes cybercrime so lucrative, and where disruptive hacks can cause the most havoc.

At the end of 2015, the Pell Center for International Relations and Public Policy looked at cybersecurity in eight of the most populous states. While it noted that the states had begun to grapple with the issues, and several had made substantial progress, it concluded that “no state is cyber ready.”

More recently, the 2016 Deloitte-NASCIO Cybersecurity Study found that awareness of cybersecurity had finally begun to rise to the top of states’ executive branches. But the security professionals themselves were still struggling with “stubbornly persistent” issues.

Ironically, the study pointed out, the newer systems that states have been introducing to foster innovations in service delivery to better serve constituents -- technology that has been pushed as a critical need -- have only served to increase cyber risks. Securing sufficient resources -- both funding and talent -- also remained one of the top challenges.

The new-technology problem is one that could bedevil organizations for a long time. The legacy systems slated for replacement have their own problems when it comes to cybersecurity, such as old and hard-to-update operating systems, but the new technology introduces quite a bit of complexity into the equation.

It’s that complexity that long-time players in the cybersecurity field worry will continue to threaten organizations. Ron Ross, a fellow at the National Institute of Standards and Technology, thinks there are too many bases -- the software, firmware and hardware that run all of the critical infrastructure and technology we rely on today -- for cybersecurity professionals to realistically cover right now.

While many of the broad-ranging reports have some element of hope to them, at the operational level things don’t look so rosy. The state of Oregon, for instance, recently conducted an audit of 13 state agencies’ plans for information security and concluded that, overall, “planning efforts were often perfunctory, security staffing was generally insufficient, and critical security functions were not always performed.”

In particular, it said the Office of the State Chief Information Officer had “not yet provided state agencies with sufficient and appropriate information technology security standards and oversight.” It also didn’t have processes in place to ensure that agencies comply with statewide security standards or with federal security requirements.

“These weaknesses continued because the state abandoned initial security plans, did not assign security roles and responsibilities, or provide sufficient security staff,” the report said. Even while the governor and CIO have taken first steps to fix the problems, “the solutions will take time, resources and cooperation from state agencies.”

While the Oregon governor’s office and state CIO said they largely agreed with the auditor’s report, they also claimed they were on track to fix many of the problems, tackling the risks according to perceived priorities. In other words, given limited resources, not everything can be fixed at once.

Sound familiar? State governments have been under constant pressure over the past decade or more to modernize their IT systems, improve service delivery to citizens and at the same time cut costs. Along the way, something is bound to break.

The Deloitte-NASCIO report pointed to the evolving complexity of the threat environment as the main challenge for organizations going forward. States “faced with a myriad of priorities and ongoing resource constraints may be hard-pressed to allocate sufficient funding to cybersecurity initiatives, [and] competition for top talent can make it difficult to attract the professionals needed to effectively combat constantly evolving threats.”

However, it said, chief information security officers have one thing in their favor: State executives are starting to “pay more attention to the issue of cybersecurity.” That’s nice. Let’s hope it translates into actual, better cybersecurity soon.

Posted by Brian Robinson on Dec 16, 2016 at 11:19 AM

Get ready for IoT-enabled threats

The recent distributed denial of service attacks that affected large parts of the internet, along with major online outfits such as Twitter and Netflix, were an eye-opener for those who may not have been familiar with this type of threat. They were also a vindication of sorts for the government’s cybersecurity focus.

Despite the obvious dangers posed by criminals and state-sponsored advanced persistent threats (APTs) that trawl government systems for specific data, DDoS attacks are consistently seen as the biggest potential threat -- so much so that the Department of Homeland Security has been spending serious money to develop defenses against them.

That attention seems warranted. The October attack against DNS provider Dyn using the Mirai botnet has raised the stakes significantly, at least in technical terms. Up to 100,000 bots were eventually involved, and the attack volume is thought to have exceeded 1 terabit/sec.

That’s a huge number, and a DDoS attack at that level will overwhelm most defenses now in place, simply because they can’t keep up with the deluge that’s flooding them. The mean time to compromise for a vulnerable Internet of Things device -- the means of attack targeted by the Mirai botnet -- is just 10 minutes, so you can’t simply turn devices off and on again as a way of mitigating attacks.
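
Those reported figures imply the scale problem. A back-of-the-envelope division (assuming the reported totals are accurate) shows how little bandwidth each compromised device needs to contribute:

```python
# Rough arithmetic on the reported Mirai attack numbers: up to
# ~100,000 bots generating a combined flood exceeding 1 terabit/sec.
TOTAL_ATTACK_BPS = 1e12   # ~1 Tbit/s, as reported
BOT_COUNT = 100_000       # upper estimate of participating devices

per_bot_bps = TOTAL_ATTACK_BPS / BOT_COUNT
print(f"{per_bot_bps / 1e6:.0f} Mbit/s per device")  # ~10 Mbit/s
```

Roughly 10 Mbit/s per device is well within an ordinary broadband upstream connection, which is why cheap, poorly secured consumer hardware is sufficient to power attacks at this scale.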

The IoT, in other words, is a potential mother lode for cyber bad guys. It’s seen as having tremendous potential to wring value out of assets through improved supply chains and logistics operations -- as much as $1.9 trillion in added value -- which is a huge attraction for device manufacturers.

Unfortunately, security so far hasn’t kept up with demand. Two years ago, the SANS Institute detailed the vulnerabilities of digital video recorders as internet-connected devices. Revisiting the situation after the Mirai attack, it found that not much had changed.

The ways the IoT can be attacked seem to be endless. One organization has described how Philips smart streetlights can be used to spread worms that result in so-called “bricking” attacks that can shut down the lighting in large areas of a city. Think of the havoc such blackouts can cause. Others have shown that even everyday devices such as smart toasters can be hijacked.

It didn’t take long after the Mirai attack for similar threats to surface. The Linux/IRCTelnet malware (based on the Aidra botnet) apparently has the same roots as Mirai and also borrows from other botnets. It has the same ability to attack weak telnet credentials, but it can also attack systems running much newer protocols such as IPv6. There are also warnings that new attack vectors such as the Lightweight Directory Access Protocol could be used to launch terabit-scale DDoS attacks.

Just as the original Stuxnet attack was seen as the progenitor of much of the sophisticated APT malware industry that’s been built up over the past few years, it’s all but inevitable that the recent success of Mirai will stimulate similar development of DDoS threats.

To counter that, it’s critical that better and more capable tools are developed. Organizations such as DHS are ahead of the game, and after the recent attacks Congress has been stirred to action. Sen. Mark Warner (D-Va.), co-founder of the Senate Cybersecurity Caucus, asked the Federal Communications Commission, the Federal Trade Commission and the DHS’s National Cybersecurity & Communications Integration Center for information on current and future tools that will be needed to bolster IoT security.

DHS is apparently going further by developing a set of strategic principles that will set out security guidelines for connected devices, and calling for manufacturers to integrate more security into their devices. It will be interesting to see how manufacturers react to this, given the tradeoffs behind improving device security and getting devices quickly to market to meet the burgeoning IoT demand.

The chip industry is also getting involved. Much as the Trusted Platform Module has enabled widespread chip-based security for laptops and other computing devices, companies such as ARM and Microchip (teaming with Amazon) are looking to provide processor-based security for IoT devices.

All of that will take some time to make its way into the IoT mainstream, however. Meanwhile, there are practices organizations can follow now that could lessen the effects of a DDoS attack, such as building up infrastructure resilience and replacing obvious network credentials on devices. The factory-default admin/password combination, for example, was just one of the credential pairs Mirai looked for.
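
That last mitigation -- replacing obvious credentials -- is one a defender can check mechanically. The sketch below flags devices in an inventory that still use factory logins; the inventory format is hypothetical, and the credential list is a small illustrative excerpt of the kind of pairs Mirai was reported to try, not its full dictionary:

```python
# Flag networked devices whose telnet credentials still match
# well-known factory defaults of the kind the Mirai botnet tried.
# This list is a small illustrative excerpt, not Mirai's dictionary.
MIRAI_STYLE_DEFAULTS = {
    ("admin", "password"),
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
}

def vulnerable_devices(inventory):
    """inventory: iterable of (hostname, username, password) tuples."""
    return [host for host, user, pw in inventory
            if (user, pw) in MIRAI_STYLE_DEFAULTS]

devices = [
    ("dvr-01", "admin", "password"),   # factory default -- flag it
    ("cam-02", "admin", "Xk9#rTq2"),   # rotated credential -- fine
    ("dvr-03", "root", "12345"),       # factory default -- flag it
]
print(vulnerable_devices(devices))  # ['dvr-01', 'dvr-03']
```

A real audit would pull the inventory from asset-management tooling rather than a hardcoded list, but the core check is just this set membership test.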

Posted by Brian Robinson on Nov 08, 2016 at 9:57 AM

Moving cybersecurity from art to science

When it comes to cybersecurity and the ability to catch threats in the early stages, before they can do much damage, where does government stand? Effective, ineffective? Is it at least improving?

The picture over the past couple of years doesn’t look encouraging. The infamous breach at the Office of Personnel Management, other noted attacks on the Pentagon and the Internal Revenue Service, and minor breaches elsewhere would seem to suggest the government is overwhelmed.

Some analyses seem to confirm that. The Government Accountability Office, for example, recently came out with a report noting that the number of cyber incidents affecting federal agencies rocketed to over 77,000 in 2015, compared to just 5,503 in 2006. That’s more than a 1,300 percent increase.
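
The percentage checks out, give or take rounding of the 2015 figure:

```python
# Growth in reported federal cyber incidents, per the GAO figures.
incidents_2006 = 5_503
incidents_2015 = 77_000  # "over 77,000" -- rounded down here

increase_pct = (incidents_2015 - incidents_2006) / incidents_2006 * 100
# ~1,299% with these rounded inputs; the exact count behind
# "over 77,000" pushes the increase past 1,300 percent.
print(f"{increase_pct:.0f}%")
```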

Over the last several years, GAO has made around 2,500 recommendations to agencies intended to help improve their information security controls, GAO Director of Information Security Issues Gregory Wilshusen told the President’s Commission on Enhancing National Cybersecurity. As of mid-September 2016, 1,000 of those had yet to be implemented.

As is GAO’s practice, Wilshusen then listed a raft of actions agencies should take to improve the protection of their information and systems.

One of the emerging technologies that’s being pitched as a potential advance for security is big data analytics, which can look into the flood of data that’s being collected by various sensors and sort out the patterns that might point to potential security attacks. Even though many are skeptical of data analytics, particularly predictive analytics, it’s one of the more promising technologies government can use to get in front of security problems.

A MeriTalk survey showed that interest in using big data is high in government, with 81 percent of respondents saying their agencies are using it in some capacity, and over 50 percent saying they already have it built into their cybersecurity strategy.

However, only 45 percent of those surveyed said they trusted big data results when it comes to cybersecurity. Nearly 90 percent of them said they had trouble drawing intelligence from the data, and a third of them admitted they still don’t have the right systems in place to gather the information they need even to start applying data analytics.

Read around the figures in the various studies, however, and things look more optimistic. At the least, the organizational resistance and executive-level inattention that have plagued government cybersecurity finally seem to have been overcome.

As Rocky DeStefano, cybersecurity expert at Cloudera, which sponsored the MeriTalk survey, pointed out, at least there’s interest in improving. The positive takeaway from the survey, conducted two years after a similar one, is that a high percentage of government agencies are at least starting to use big data analytics, compared with much lower numbers back then.

And people are already reporting encouraging results, DeStefano said, such as 90 percent who have seen some reduction in successful attacks and 84 percent who are able to thwart at least some kinds of attacks by leveraging the results of big data analytics.

“That’s the most encouraging thing to me,” he said. “This is all still in its infancy and yet it’s still very, very effective.”

Outside of the federal arena, optimism also seems to be catching on in the states. A report from Deloitte and the National Association of State Chief Information Officers showed an increasing level of awareness of security issues at the executive level, with cybersecurity becoming “part of the fabric” of government operations.

Even the GAO, usually so critical of government security, had some kind words. While pointing out the faults and inconsistencies of agencies’ security efforts and that additional actions are needed, Wilshusen did tell the presidential commission that the Obama administration and agencies have acted to improve cybersecurity protections.

So it’s a start, but one that must be accelerated into much more effective and wider application. After all, the bad guys have not been slow to turn technologies like data analytics and other tools to their own advantage.

Both government and private industry are changing how they approach cybersecurity, DeStefano believes, and it will take patience. Unlike in the past, when security was much more a case of intuition and guesswork, there’s now a cadre of highly skilled people identifying threats with mechanisms and techniques that can be replicated and improved for the future.

“What’s really happening is that we’re turning an art into a science, and that’s going to take time,” DeStefano said. “When we do that, we’ll be able to get a little more ahead of the game than we are today.”

This blog was changed Oct. 26 to correct the spelling of Mr. DeStefano's name.

Posted by Brian Robinson on Oct 21, 2016 at 12:50 PM

NIST offers cyber self-assessment tool, updates email security guidance

The National Institute of Standards and Technology has long been a national resource on cybersecurity, and its Cybersecurity Framework has been widely adopted in both government and private industry. The guidance, however, doesn’t come with many pointers to tell organizations how well they are deploying it.

Hearing the many pleas for some way of doing that, NIST has finally come out with a self-assessment tool that should give organizations a better understanding of how they are progressing with security risk management efforts. It’s asking for public comment on the current draft document.

The Baldrige Cybersecurity Excellence Builder pulls together two prized Commerce Department initiatives. The new tool incorporates elements of NIST’s Cybersecurity Framework, which was introduced in February 2014, and takes inspiration from the Baldrige Award, created in 1987 and named after the late Commerce Secretary Malcolm Baldrige.

The award begat the Baldrige Excellence Framework, which organizations can use to build performance-boosting programs. After that came the Baldrige Performance Excellence Program, managed by NIST, that also includes various self-assessment tools that can tell organizations how well they are doing.

As far as the Cybersecurity Framework goes, it’s proving to be as popular as the Baldrige program has been over the years, and there’s hope it might be as effective. Though it has its critics, the Cybersecurity Framework has so far been adopted by around 30 percent of U.S. organizations, according to Gartner, and that’s expected to rise to 50 percent by 2020.

The new assessment tool, according to NIST, guides users through a process that details their particular characteristics and strategic needs for cybersecurity and will enable them to:

  • Determine cybersecurity-related activities that are important to business strategy and the delivery of critical services
  • Prioritize investments in managing cybersecurity risk
  • Assess the effectiveness and efficiency of using cybersecurity standards, guidelines and practices
  • Assess cybersecurity results
  • Identify priorities for improvement

At the end, the assessment will put the organizations at a certain maturity level -- reactive, early, mature or role model -- and from there, each organization can build out its own action plan for upgrades and cybersecurity improvements.
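
Folding such an assessment into internal tooling is easy to sketch. The numeric cutoffs below are invented for illustration -- the Baldrige builder itself defines the actual rubric -- but the four levels are the ones named above:

```python
# Map a normalized self-assessment score (0-100) to the four
# maturity levels named in the Baldrige Cybersecurity Excellence
# Builder. The numeric cutoffs are illustrative assumptions only.
LEVELS = [
    (25, "reactive"),
    (50, "early"),
    (75, "mature"),
    (100, "role model"),
]

def maturity_level(score):
    for cutoff, label in LEVELS:
        if score <= cutoff:
            return label
    raise ValueError("score must be between 0 and 100")

print(maturity_level(20))  # reactive
print(maturity_level(80))  # role model
```

An organization could track the label per assessment cycle to see whether its action plan is actually moving it up a level.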

NIST is looking for comments on the first draft of the guidelines by Dec. 15.

Email security has also long been a focus for NIST, with its Special Publication 800-45 providing basic guidance. However, the most recent version of that guidance was published in early 2007, and the universe of security threats has grown much larger since then.

A new missive on Trustworthy Email, SP 800-177, seeks to plug the holes. Billed as complementary to 800-45, it provides more up-to-date recommendations for managing digital signatures, encryption, spam and more.

Man-in-the-middle attacks have become widespread, for example, as a way for bad actors to put themselves between the sender and receiver of a clear-text email so they can get information directly from the email. The NIST publication points out that these attacks can be prevented by encrypting email end-to-end and by implementing message-based authentication and confidentiality procedures.
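
Message-based authentication, at its simplest, binds a message to a key shared by sender and receiver, so a man in the middle can’t silently alter the content. Here’s a minimal sketch using Python’s standard library; a real deployment would use the S/MIME, DKIM and TLS mechanisms that SP 800-177 actually covers rather than a raw shared-key MAC:

```python
import hashlib
import hmac

# Sender and receiver share a secret key out of band (illustrative only).
KEY = b"shared-secret-key"

def sign(message: bytes) -> str:
    """Return a hex MAC binding the message to the shared key."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the message was not altered in transit."""
    return hmac.compare_digest(sign(message), tag)

msg = b"Wire $500 to account 12345"
tag = sign(msg)

print(verify(msg, tag))                            # True: intact
print(verify(b"Wire $500 to account 99999", tag))  # False: tampered
```

The point of the sketch is the failure mode: an attacker who rewrites the message in transit can’t produce a valid tag without the key, so the tampering is detected on receipt.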

There’s nothing especially new in the NIST email guidance, but even the basic recommendations mentioned in the document are often not implemented at organizations. Trustworthy Email should be useful, if for nothing else, for bringing all the current standard methods of protecting email together into a focused resource for email and network administrators and information security managers.

Posted by Brian Robinson on Sep 29, 2016 at 9:27 AM