Why network resiliency is so hard to get right

The new chairman of the Joint Chiefs of Staff thinks the July hack of his organization’s unclassified email network showed a deficiency in the Pentagon’s cybersecurity investment and a worrying lack of “resiliency” in cybersecurity in general.

It was an embarrassing event for sure. The hackers, suspected to be Russian, got into the network through a phishing campaign and, once in, reportedly took advantage of encrypted outgoing traffic that was not being decrypted and examined. Gen. Joseph Dunford, who took command Oct.  1, said the hack highlighted that cyber investments to date “have not gotten us to where we need to be.”

As a goal, resiliency is a fuzzy concept. If it means keeping hackers out completely, then Dunford is right -- the Defense Department has a problem. If it means being able to limit or negate the effects of a hack once attackers get in, then he’s off the mark.

Best practice in the security industry is now to expect that even the best cyber defenses will be breached at some point. The effectiveness -- or resiliency -- of an organization’s security will ultimately be judged on how it deals with that breach and how efficiently it mitigates the effects.

In 2015, the government’s cybersecurity low point had to be the hack of the Office of Personnel Management’s systems, which compromised the personal data of millions of government workers. Attackers had apparently gained access to OPM’s networks months before the hack was discovered, giving them plenty of time to wander through the agency’s systems, steal and then exfiltrate the data.

That experience prompted plenty of heartache and soul searching.  It seemed that, even after some years of experience of increasingly sophisticated hacks, both public and private organizations were still not paying the attention they needed to their internal security, and instead fixating on defending the network’s edge.

In that sense, the Joint Chiefs email attack could be seen as a success, at least in terms of the reaction to it. Security personnel quickly detected the attack, closed down the email network and then set about investigating possible damage and systematically eradicating any malware that attackers had left behind.

In the end, the email network was down for around two weeks, with the Pentagon declaring it a learning experience and claiming confidence in the integrity of DOD networks.

Learning experiences are great, but the fact is that most government organizations are still more vulnerable than they should be. And some agencies still seem to place more faith than is warranted on networks’ peripheral defenses.

Even the best of those will prove vulnerable at some point, however. Google’s Project Zero recently reported a vulnerability in security appliances produced by FireEye, one of the leaders in the field, that allowed an attacker access to networks via a single malicious email. (FireEye quickly patched the vulnerability.)

Government assertions that agencies are raising employee awareness of email security hazards have also come into question, given that phishing remains such a successful way for hackers to obtain network access credentials. According to Verizon, a phishing campaign of just 10 emails has a 90 percent chance of claiming at least one victim.
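Verizon’s figure is easy to sanity-check. If each recipient falls for a phishing email independently with probability p, a campaign of n emails succeeds with probability 1 - (1 - p)^n. A quick back-of-the-envelope sketch in Python (the per-recipient rate here is inferred from the claim, not a number Verizon reports):

```python
# Probability that at least one of n phishing recipients is compromised,
# assuming each falls for the email independently with probability p.
def campaign_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Working backward from the claim: a 10-email campaign with a 90 percent
# success rate implies roughly a 20 percent per-recipient rate.
p = 1 - 0.1 ** (1 / 10)                    # ~0.206
print(round(p, 3))                         # inferred per-recipient rate
print(round(campaign_success(p, 10), 2))   # ~0.9, matching the claim
```

In other words, even a modest click rate makes a small campaign near-certain to succeed, which is why phishing training alone has proven such a weak defense.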

A basic problem in all of this is that assessing cybersecurity strength, like resiliency, is still much more qualitative than quantitative. You know you’ve got a good system in place if you can deter attacks or catch and mitigate them quickly when they happen. But there’s no way to know with any certainty whether that’s the case until a serious breach is attempted.

To move the needle on that, the National Institute of Standards and Technology will be holding a two-day technical workshop in January that will look at how to apply measurement science in determining the strength of the various solutions that now exist to assure identities in cyberspace.

To that end, NIST has released three white papers ahead of the workshop that look at ways to measure the strength of identity proofing and authentication, and at how metadata can be used to score confidence in authorization decisions.

Posted by Brian Robinson on Dec 18, 2015 at 12:55 PM


Securing the human endpoint

Endpoint protection has become a major focus for agency security efforts over the past few years, as mobile devices proliferate and the bring-your-own-device movement grows as a major factor in government communications, even when agencies remain leery about it. But is it the device or the employee using it that’s the greatest threat?

Organizations such as the Defense Information Systems Agency have made their concerns over endpoint security clear. Early in 2015, DISA put out a request for information on next-generation solutions, saying the endpoint had evolved “to encompass a complex hybrid environment of desktops, laptops, mobile devices, virtual endpoints, servers and infrastructure involving both public and private clouds.”

That complicated soup of devices and technologies is defeating agencies’ attempts to bolster their overall security, according to a recent report.  Federal IT managers surveyed by MeriTalk estimated that just under half of the endpoints that can access agency networks are at risk, with nearly one-third saying they had experienced endpoint breaches due to advanced persistent threats or zero-day attacks.

As DISA pointed out in its RFI, traditional signature-based defenses can’t scale to cover agencies’ sprawling endpoint infrastructures,  especially when exacerbated by the growth of virtualization.

However, even if agencies could tie down the physical security of endpoints — and the MeriTalk survey shows they are failing at that — there’s still the matter of employees and their actions. It’s no use having good endpoint security if the behavior of the user negates that.

The Ponemon Institute made that point at the beginning of 2015 in its annual look at the state of endpoint security. That study concluded fairly bluntly that negligent employees who do not comply with security policies are seen “as the greatest source of endpoint risk.”

Part of the problem is the sheer demand for endpoint device connectivity, which is overwhelming IT departments. Over two-thirds of the respondents in the Ponemon study said their IT groups couldn’t provide the needed support, while the same number acknowledged that endpoint security has become a far more important part of overall IT security.

Bookending that Ponemon report is a study published a few days ago by Ping Identity, which surveyed employees at U.S. enterprises and concluded that “the majority of enterprise employees are not connecting the dots between security best practices they are taught and behavior in their work and personal lives.”

Employees are doing some things really well to keep data secure, according to Ping, and following good security practices, such as creating unique and strong passwords. But then they reuse those passwords across personal or work accounts and share them with familiar colleagues.
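Ping’s point about reuse can be made concrete. Even when every service stores the password properly, with a unique salt and a slow hash, reuse means one breached plaintext unlocks every account that shares it. A minimal sketch with made-up example data:

```python
import hashlib
import os

def hash_pw(password: str, salt: bytes) -> bytes:
    # PBKDF2 with a per-site salt: identical passwords yield different
    # hashes, so leaked hash databases can't be compared directly.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Illustrative: the same strong password registered on two services.
salt_a, salt_b = os.urandom(16), os.urandom(16)
site_a = hash_pw("Tr0ub4dor&3", salt_a)
site_b = hash_pw("Tr0ub4dor&3", salt_b)
assert site_a != site_b  # salting hides the reuse from defenders...

# ...but an attacker who recovers the plaintext from one breach can
# simply replay it against the second site (credential stuffing).
assert hash_pw("Tr0ub4dor&3", salt_b) == site_b
```

The asymmetry is the point: proper server-side storage protects the service that was breached, but does nothing for every other account sharing the same credential.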

“No matter how good employees’ intentions are,” said Andre Durand, Ping’s CEO, “this behavior poses a real security threat.”

Now, take the enterprise infrastructure even further to include partner organizations that have network access, such as service providers or, in the case of government agencies, contractors. No matter how bulletproof the prime organization’s security, if those partners have holes in their endpoint security, attackers will find and exploit them.

That was the reason behind some of the biggest security breaches of the past two years.

All of which raises the question of what is meant by endpoint security. If organizations in 2016 bear down on securing their endpoints — which they will have to do — just what exactly is an endpoint? Is it the device, virtualized or not, or does it come down to the user? Some good endpoint security solutions have been developed, but how will they take the human into account?

That could be the biggest factor for IT security in the future.

Posted by Brian Robinson on Dec 04, 2015 at 1:26 PM


The Internet of malware-infected things

Body cameras are having a bumpy introduction. Most people on both sides of the debate seem to agree, to varying degrees, that such systems provide more transparency in incidents where law enforcement officials interact with the public, as well as value in training and evidence gathering. Technically, however, there are quite a few issues still to iron out.

Customs and Border Protection recently published findings of a feasibility study it had conducted to see how cameras could help its agents with their operations. Though the potential advantages were great, CBP said it found “significant challenges” to a rollout of the cameras because of cybersecurity, data processing and other issues.

The cameras generally lack adequate security features, the study found, and vulnerabilities could be introduced by streaming video or the interface between the cameras and non-approved devices. The cameras’ signals were also susceptible to hacking.

There also seems to be a question about whether some of the camera manufacturers understand any of this -- or even know what’s embedded in the devices they make. One recent case involved a company called Martel Frontline Camera, which makes $500 body cameras whose systems were found to be infected with the Conficker worm right out of the box.

Bear in mind that Conficker was state of the art back in 2008. These days, it’s a well-known threat that should be fairly easily caught by standard antivirus and firewall software. Any government security professionals who allow the Conficker worm into agency systems would rightly have their competency questioned.

Florida integrator iPower Technology had bought several of Martel’s cameras with which to test a cloud-based video system it was developing for government agencies and police departments. During testing and evaluation of the Martel product, the company discovered that the body cameras had been preloaded with the Win32/Conficker.B!inf worm.

iPower’s own antivirus software immediately discovered the worm but, as the company pointed out, any computer that didn’t have antivirus installed would have immediately been infected and could have spread the worm to other systems and across the network.
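iPower’s catch illustrates a check any integrator can run before plugging new hardware into a production network: hash every file on the device’s storage and compare the results against known-bad signatures. A simplified sketch, with a placeholder standing in for a real signature database:

```python
import hashlib
from pathlib import Path

# Placeholder signature set; a real scanner would load millions of
# known-bad hashes (including Conficker variants) from an AV vendor feed.
KNOWN_BAD_SHA256 = {"0" * 64}

def scan_device(mount_point: str) -> list[Path]:
    """Hash every file under the device's mount point and flag matches."""
    flagged = []
    for path in Path(mount_point).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                flagged.append(path)
    return flagged
```

A scan like this only catches malware that is already catalogued, of course -- which is exactly why a seven-year-old worm like Conficker is easy to spot, while a fresh implant would sail through.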

When iPower reached out to Martel, which has been in business for three decades, Martel technicians were incredulous, according to iPower’s owner and president. In fact, Jarrett Pavao told Threatpost, Martel didn’t even think there was software in the camera.

In iPower’s own release on its findings, Pavao made several good points.

“…as the Internet of Things continues to grow into every device we use in our businesses and home lives each day, it becomes even more important that manufacturers have stringent security protocols,” he said. “If products are being produced in offshore locations, what responsibilities lie with the manufacturer to guarantee our safety?”

Supply chain security has become a major worry for government. With so many IT components now made outside the United States, there’s a clear path for criminals and foreign states to plant malware that could infect U.S. systems and provide a way for people to steal information or commit espionage against U.S. government and private-sector organizations.

A Justice Department statement, for example, recently revealed that two Defense Information Systems Agency contractors had been fined for using unauthorized programmers to write software for Defense Department communications systems, which a separate Public Integrity investigation found actually involved Russian programmers. The code they provided — surprise — included numerous viruses.

Any company with modern security systems can easily deal with threats such as Conficker. As Pavao pointed out, however, there are many organizations that have much older, legacy systems and software that will have far more difficulty in detecting and dealing with threats. That’s a problem in many government agencies -- one that can’t be easily solved because many of those legacy systems are still running mission-critical applications.

With the deadliest threats today far more sophisticated than Conficker, however, the potential for havoc is enormous. Could future breaches stem from such non-obvious sources as police body cameras or similar devices?

Posted by Brian Robinson on Nov 20, 2015 at 1:47 PM


Is mobile security finally getting some respect?

It looks like mobile security may at last be getting some attention in government, and it’s long overdue. While other aspects of IT security have been ratcheted up over the years, for some reason mobile security has proven a much tougher nut to crack -- and has lagged in the race for attention and funding.

Mobile security has proven a pain for most agencies, particularly with the once-hyped bring-your-own-device trend, in which government employees use their personal phones and tablets to do government work. With access and data security much harder to enforce on mobile devices than on desktops, BYOD raised all kinds of concerns for organizations.

So much so, in fact, that some agencies simply tried to mitigate those concerns by banning most BYOD altogether. Well, no one expected that was going to work for the long run. And as a recent survey by mobile security firm Lookout found, many employees use their own devices no matter what the agency policy is. Fully half of the employees the company surveyed used their own devices to get government email, and nearly as many used them to download work documents.

In its Oct. 30 memo laying out a “Cybersecurity Strategy and Implementation Plan” for the civilian side of government, the Office of Management and Budget directly addressed mobile in a section on new cybersecurity shared services. Mobile devices, it said, have become as powerful and connected as desktop and laptop computers and require the same level of security attention.

But mobile security “has unique challenges that require different solutions than existing programs offer,” OMB said. “This service (or services) could address authentication, application management, device management, and encryption, and may include approved tools, best practices, and implementation support.”

Bob Stevens, vice president for federal systems at Lookout, said he’s encouraged by OMB’s statement and by the formation of a forthcoming cybersecurity shared service center. “Until now,” he said, “most legislation and mandates around cybersecurity have been looking to solve problems that existed in 2009, not the problems that plague us today.”

A few days after OMB published its memo, the National Institute of Standards and Technology chimed in with a draft guide for securing mobile devices, based on a “typical” scenario drawn up and tested by engineers at NIST’s National Cybersecurity Center of Excellence. Examples in the guide show how organizations can configure a trusted device and, equally important, how to remove device details from IT systems if those devices are lost or stolen.
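The enroll-and-revoke lifecycle the NIST draft describes can be sketched in a few lines. The inventory structure and field names below are hypothetical illustrations, not taken from the guide:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the lifecycle the NIST draft describes: enroll a
# trusted device, then purge its records when it is lost or stolen.
@dataclass
class DeviceInventory:
    devices: dict = field(default_factory=dict)      # device_id -> owner
    credentials: dict = field(default_factory=dict)  # device_id -> credential

    def enroll(self, device_id: str, owner: str, credential: str) -> None:
        """Register a configured, trusted device and its credential."""
        self.devices[device_id] = owner
        self.credentials[device_id] = credential

    def revoke(self, device_id: str) -> None:
        """Remove all traces of a lost or stolen device from IT systems."""
        self.devices.pop(device_id, None)
        self.credentials.pop(device_id, None)

inv = DeviceInventory()
inv.enroll("tablet-042", "alice", "cert:3f9a")
inv.revoke("tablet-042")
assert "tablet-042" not in inv.devices
```

The revoke step is the one the guide stresses: a lost device that still holds valid credentials, or that IT systems still trust, remains an open door long after the hardware is gone.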

Public comment on the draft, part of the Center’s new Special Publication 1800 series of Cybersecurity Practice Guides, is open through Jan. 8, 2016.

These initiatives won’t be enough by themselves, given that agencies are so far behind the curve on mobile security. But at least now they’ll have a good place to start.

Posted by Brian Robinson on Nov 06, 2015 at 10:54 AM