Heartbleed redux with Secure Shell?

Are Secure Shell (SSH) vulnerabilities going to be this year’s OpenSSL story? As with the stock market, it’s a mug’s game to predict the future, but warning flags have been raised in response to reports of problems with major security devices.

It was problems with OpenSSL, a widely used implementation of the Secure Sockets Layer encryption protocol, that led to the discovery two years ago of the Heartbleed bug, which many security professionals called one of the scariest things they had seen. It allowed anyone who could reach an affected device to compromise the private keys used to identify service providers and encrypt data traffic.

Eventually, hundreds of thousands of servers around the world were found to be vulnerable to Heartbleed, and even now no one seems sure if all the holes have been plugged.

In December 2015, Juniper Networks said it had found “unauthorized code” in its ScreenOS, the operating system that runs on its widely used NetScreen firewalls. That code would allow a knowledgeable attacker to gain administrative access to NetScreen devices over SSH and Telnet, the company said, and to decrypt VPN connections.

The company has since made several fixes to its software to close the gap, the latest addressing the Dual_EC random number generator used in the firewalls. That’s been a long time coming, since Dual_EC has reportedly contained a backdoor attributed to the National Security Agency -- one that could also be exploited by bad guys.

Now researchers have found suspicious code in Fortinet’s FortiOS firewalls, saying it was also essentially an SSH backdoor. Fortinet, however, downplayed that allegation, saying it was a “management authentication issue” that had been fixed some time ago.

Coincidentally, the National Institute of Standards and Technology recently released a new guidance document on the security of SSH key-based access, which it said is often overlooked by organizations. That would be a bad thing, as NIST also points out, because misuse of SSH keys “could lead to unauthorized access, often with high privileges.” In other words, it’s potentially handing the keys to the kingdom over to people who will gratefully accept the gift -- and then take you for all you are worth.

Backdoor keys are specifically mentioned by NIST as one of the seven categories of vulnerability in SSH, which is widely used to manage servers, routers, firewalls and other security devices, as well as to provide privileged access to servers and networks.

However, NIST pointed out, SSH public key authentication can also be used to create a backdoor by generating a new key pair and adding a new authorized key to an authorized keys file. That allows someone to get around the access management system and its monitoring and auditing capabilities.
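One practical countermeasure that follows from the NIST guidance is to periodically audit authorized_keys files against an approved inventory of keys. The Python sketch below is a minimal illustration of that idea -- not a procedure taken from the NIST document -- and the home-directory path and the approved-fingerprint list are assumptions made purely for the example.

    # Minimal sketch: flag authorized_keys entries whose fingerprints are not
    # in an approved inventory. Paths and the approved set are illustrative.
    import base64
    import glob
    import hashlib

    APPROVED_FINGERPRINTS = {
        # SHA256 fingerprints of keys issued through the access management system
        "q2hZkO0p3yFz0Yg1JH0W8jzX3w2n6k3Yl9cK2mN4pXk",  # placeholder value
    }

    def fingerprint(pubkey_line):
        """Compute the OpenSSH-style SHA256 fingerprint of a 'type key comment' line."""
        key_blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(key_blob).digest()
        return base64.b64encode(digest).decode().rstrip("=")

    for path in glob.glob("/home/*/.ssh/authorized_keys"):
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                if fingerprint(line) not in APPROVED_FINGERPRINTS:
                    print("Unapproved key in %s: %s..." % (path, line[:40]))

A real audit would also need to handle key lines that carry option prefixes, root-owned accounts and non-standard key locations, which this sketch ignores.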

Other vulnerabilities NIST cited include poor SSH implementations; improperly configured access controls; stolen, leaked, derived and unterminated keys; unintended usage of keys; theft of keys as attackers already inside a system move from server to server, harvesting credentials along the way; and the ever-present human error.

The recent firewall revelations are by no means the only reported problems with Secure Shell. In the middle of last year, researchers also discovered vulnerabilities in the OpenSSH implementation of the protocol that allowed attackers to get around authentication attempt limits and launch brute-force attacks on targeted servers.
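The usual mitigations for that class of attack are configuration hygiene rather than anything exotic: keep the per-connection attempt limit low and disable authentication methods that aren’t needed. The sketch below simply reads sshd_config and flags looser settings; the directive names are standard OpenSSH options, but the “safe” thresholds are assumptions chosen for the example.

    # Rough sketch: flag sshd_config settings that leave room for brute-force
    # password guessing. The thresholds below are illustrative choices.
    CONFIG_PATH = "/etc/ssh/sshd_config"

    EXPECTED = {
        "maxauthtries": lambda v: int(v) <= 3,
        "passwordauthentication": lambda v: v.lower() == "no",
        "kbdinteractiveauthentication": lambda v: v.lower() == "no",
    }

    settings = {}
    with open(CONFIG_PATH) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2:
                settings[parts[0].lower()] = parts[1].strip()

    for directive, looks_safe in EXPECTED.items():
        value = settings.get(directive)
        if value is None:
            print("%s: not set explicitly (OpenSSH default applies)" % directive)
        elif not looks_safe(value):
            print("%s: %s (looser than this sketch expects)" % (directive, value))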

The big problem with these kinds of vulnerabilities is not necessarily that they exist. If they are quickly noticed and patched, any likely damage is minimized. But the OpenSSL bug went unnoticed for several years, so the door to networks and systems that used that protocol was open all that time. The OpenSSH bug could have been present on versions of the FreeBSD operating system as far back as 2007.

Heartbleed redux? Not so far, it seems, but the year is yet young.

Posted by Brian Robinson on Jan 19, 2016 at 1:56 PM


Cybersecurity in 2016: Real change or more of the same?

Looking back, 2015 was a time of strain in the public sector when it came to cybersecurity, with the hack of systems at the Office of Personnel Management that exposed over 20 million government employee records, the high infection rate of state and local networks by malware and ransomware, and the overall lack of security compliance in government software. Here, then, is a start-of-the-year list of some of the issues government will face in 2016.

The Internet of Things: The rap about the IoT is that it’s still on the horizon, but many argue that it’s already here, and people are putting (betting?) money on it influencing both industrial IT and the evolution of smart cities. The White House demonstrated its support when it jumped into the fray in 2015 with a $160 million initiative. But there are concerns that the technical evolution of the IoT is getting too far ahead of the much slower development of security and privacy policies. This year should see rapid movement around those issues. We hope.

Contractors get the security eye: Big integrators and government contractors have long been aware of the need for tight cybersecurity in their work with government agencies, but the same can’t always be said of the subcontractors they hire. As hackers get more sophisticated about how to access government systems, they are finding vulnerabilities in the security of those smaller firms, with devastating results. The OPM hack, for example, was attributed to the theft of a vendor’s network access credentials. The massive breach at the retailing giant Target in 2013 was blamed on credentials stolen from an HVAC vendor. The government has begun to try to rein in lax vendor security, with the Office of Management and Budget issuing “cyber guidance” for contractors. Don’t bet on that being the last word, however.

Encryption: There’s a debate brewing over encryption. On the one hand, it’s considered essential to security, given that best practices now assume that persistent hackers will penetrate even the best defenses. In that case, the more data and communications that can be encrypted, the safer they will be. OMB seemed to be following that logic when it issued a memo in June requiring federal agencies to encrypt their websites and web services connections. But others say the argument is more nuanced than just encrypting everything, and that true security also requires a way to inspect both incoming and outgoing encrypted traffic. Email encryption has become a particular target for security critics, especially in light of the hack of Pentagon email networks, in which attackers reportedly took advantage of outgoing encrypted traffic that was not being inspected. Meanwhile, the government is having to address public demands for even more encryption, a move intelligence agencies view with suspicion, saying it could hurt their efforts against terrorism.
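That OMB requirement is easy to spot-check from outside an agency. The snippet below is a minimal sketch, using only the Python standard library, of how one might confirm that a site pushes visitors from HTTP to HTTPS and advertises HTTP Strict Transport Security; the hostname is a placeholder, not a real agency site.

    # Minimal sketch: does a site redirect plain HTTP to HTTPS and send an
    # HSTS header? The hostname below is a placeholder for illustration.
    import urllib.request

    HOST = "www.example.gov"  # placeholder

    # urlopen follows redirects, so the final URL shows where the request ended up
    resp = urllib.request.urlopen("http://%s/" % HOST, timeout=10)

    print("redirected to HTTPS:", resp.geturl().startswith("https://"))
    print("HSTS header present:",
          resp.headers.get("Strict-Transport-Security") is not None)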

The value of NIST: The National Institute of Standards and Technology has never been an outfit to blow its own horn, so its impact on cybersecurity over the past few years has often seemed less obvious than that of the Department of Homeland Security and DOD. But NIST is arguably the most ambitious agency when it comes to addressing the more technical aspects of cybersecurity, and 2016 could finally be the year the agency gets the recognition it deserves. Surveys already suggest that the majority of government agencies are now getting on board with NIST’s Cybersecurity Framework, and the private sector is also paying greater attention to that guidance. Other NIST endeavors, such as its Identity Ecosystem Framework, the first version of which was released in October, are also taking flight. The agency recently asked for proposals to develop identity solutions for state and local government.

The malware ecosystem: The biggest hurdle government may have when it comes to cybersecurity is realizing what it’s up against. Contrary to the now-clichéd idea that only state actors such as Russia and China are capable of the biggest and most penetrating attacks, the lure of filthy lucre has created a widespread and highly networked ecosystem of criminals and hackers that cranks out highly sophisticated threats. And government is not ready to deal with this kind of industrialized threat machine, according to some observers. The spectrum of tools now available to hackers is immense and growing, and the research and development that goes on in this underground industry is impressive -- turning even legitimate proxy networks into channels hackers can use to their advantage.

The year security is taken seriously? Despite 2015 providing an embarrassment of cybersecurity breach riches, there’s no sign that government overall is paying that much more attention to information security -- it still seems to fall behind many other issues that cash-strapped agencies must address. Whatever security resources are being deployed seem to be mostly aimed at reaction rather than preemption. However, there are signs that attitude may be changing. Early in 2015, the DOD said it would kick workers guilty of “poor cyber hygiene” off networks they need to do their jobs, and Congress has made noises about stricter oversight of agency cybersecurity and holding agencies accountable for failures. Time will tell if these efforts amount to anything. If the lesson of the OPM hack -- that agency executives can lose their jobs if they don’t take care of security -- doesn’t hit home, you have to wonder what will.

Posted by Brian Robinson on Jan 04, 2016 at 12:48 PM


Why network resiliency is so hard to get right

The new chairman of the Joint Chiefs of Staff thinks the July hack of his organization’s unclassified email network showed a deficiency in the Pentagon’s cybersecurity investment and a worrying lack of “resiliency” in cybersecurity in general.

It was an embarrassing event for sure. The hackers, suspected to be Russian, got into the network through a phishing campaign and, once in, reportedly took advantage of encrypted outgoing traffic that was not being decrypted and examined. Gen. Joseph Dunford, who became chairman Oct. 1, said the hack highlighted that cyber investments to date “have not gotten us to where we need to be.”

As a goal, resiliency is a fuzzy concept. If it means keeping hackers out completely, then Dunford is right – the Defense Department has a problem. If it means being able to do something once hackers get in to limit or negate the effects of the hack, then he’s off the mark.

Best practice in the security industry is now to expect that even the best cyber defenses will be breached at some point. The effectiveness – or resiliency – of an organization’s security will ultimately be judged on how it deals with that breach and how efficiently it can mitigate its effects.

In 2015, the government’s cybersecurity low point had to be the hack of the Office of Personnel Management’s systems, which compromised the personal data of millions of government workers. Attackers had apparently gained access to OPM’s networks months before the hack was discovered, giving them plenty of time to wander through the agency’s systems, steal and then exfiltrate the data.

That experience prompted plenty of heartache and soul searching. It seemed that, even after years of increasingly sophisticated hacks, both public and private organizations were still not paying the attention they needed to their internal security and were instead fixating on defending the network’s edge.

In that sense, the Joint Chiefs email attack could be seen as a success, at least in terms of the reaction to it. Security personnel quickly detected the attack, closed down the email network and then set about investigating possible damage and systematically eradicating any malware that attackers had left behind.

In the end, the email network was down for around two weeks, with the Pentagon declaring it a learning experience and claiming confidence in the integrity of DOD networks.

Learning experiences are great, but the fact is that most government organizations are still more vulnerable than they should be. And some agencies still seem to place more faith than is warranted on networks’ peripheral defenses.

Even the best of those will prove vulnerable at some point, however. Google’s Project Zero recently reported a vulnerability in security appliances produced by FireEye, one of the leaders in the field, that allowed someone access to networks via a single malicious email. (FireEye quickly patched the vulnerability.)

Government assertions that agencies are raising employee awareness of potential email security hazards have also come into question, given that phishing remains such a successful way for hackers to get network access credentials. According to Verizon, a phishing campaign of just 10 emails has a 90 percent chance of at least one recipient becoming a victim.
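That Verizon figure is easy to sanity-check with a little arithmetic. Assuming a per-recipient success rate of roughly 20 percent -- an assumption in line with reported phishing click rates, not a number taken from the Verizon report -- the chance that at least one of 10 recipients falls for the message works out to about 90 percent:

    # Back-of-the-envelope check of the "10 emails, ~90 percent" claim.
    # The per-recipient rate is an assumption made for illustration.
    p_per_recipient = 0.20
    n_emails = 10

    p_at_least_one = 1 - (1 - p_per_recipient) ** n_emails
    print("Chance at least one of %d recipients is caught: %.0f%%"
          % (n_emails, p_at_least_one * 100))  # roughly 89 percent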

A basic problem in all of this is that security, like resiliency, is still assessed in terms that are much more qualitative than quantitative. You know you’ve got a good system in place if you can deter attacks or catch and mitigate them quickly once they happen. But there’s no way to know with any certainty whether that’s the case until a serious breach is attempted.

To move the needle on that, the National Institute of Standards and Technology will be holding a two-day technical workshop in January that will look at how to apply measurement science in determining the strength of the various solutions that now exist to assure identities in cyberspace.

To that end, it has released three whitepapers ahead of the workshop that look at ways to measure the strength of identity proofing and of authentication, and at how attribute metadata can be used to score confidence in authorization decisions.

Posted by Brian Robinson on Dec 18, 2015 at 12:55 PM


Securing the human endpoint

Endpoint protection has become a major focus for agency security efforts over the past few years, as mobile devices proliferate and the bring-your-own-device movement grows as a major factor in government communications, even as agencies remain leery of it. But is it the device or the employee using it that’s the greatest threat?

Organizations such as the Defense Information Systems Agency have made their concerns over endpoint security clear. Early in 2015, DISA put out a request for information on next-generation solutions, saying the endpoint had evolved “to encompass a complex hybrid environment of desktops, laptops, mobile devices, virtual endpoints, servers and infrastructure involving both public and private clouds.”

That complicated soup of devices and technologies is defeating agencies’ attempts to bolster their overall security, according to a recent report. Federal IT managers surveyed by MeriTalk estimated that just under half of the endpoints that can access agency networks are at risk, with nearly one-third saying they had experienced endpoint breaches due to advanced persistent threats or zero-day attacks.

As DISA pointed out in its RFI, traditional signature-based defenses can’t scale to cover agencies’ sprawling endpoint infrastructures, especially when exacerbated by the growth of virtualization.

However, even if agencies could tie down the physical security of endpoints — and the MeriTalk survey shows they are failing at that — there’s still the matter of employees and their actions. It’s no use having good endpoint security if the behavior of the user negates that.

The Ponemon Institute made that point at the beginning of 2015 in its annual look at the state of endpoint security. That study concluded fairly bluntly that negligent employees who do not comply with security policies are seen “as the greatest source of endpoint risk.”

Some of the problem stems from the sheer demand for endpoint device connectivity, which is overwhelming IT departments. Over two-thirds of the respondents in the Ponemon study said their IT groups couldn’t provide the support needed, while the same number acknowledged that endpoint security has become a far more important part of overall IT security.

Bookending that Ponemon report is a study published a few days ago by Ping Identity, which surveyed employees at U.S. enterprises and concluded that “the majority of enterprise employees are not connecting the dots between security best practices they are taught and behavior in their work and personal lives.”

Employees are doing some things really well to keep data secure, according to Ping, and following good security practices, such as creating unique and strong passwords. But then they reuse those passwords across personal or work accounts and share them with familiar colleagues.

“No matter how good employees’ intentions are,” said Andre Durand, Ping’s CEO, “this behavior poses a real security threat.”

Now, take the enterprise infrastructure even further to include partner organizations that have network access, such as service providers or, in the case of government agencies, contractors. No matter how bulletproof the prime organization’s security, if those partners have holes in their endpoint security, attackers will find and exploit them.

That was the reason behind some of the biggest security breaches of the past two years.

All of which raises the question of what is meant by endpoint security. If organizations in 2016 bear down on securing their endpoints — which they will have to do — just what exactly is an endpoint? Is it the device, virtualized or not, or does it come down to the user? Some good endpoint security solutions have been developed, but how will they take the human into account?

That could be the biggest factor for IT security in the future.

Posted by Brian Robinson on Dec 04, 2015 at 1:26 PM