Still early days for federal cybersecurity?

Government gets it in the neck frequently when it comes to cybersecurity, usually along the lines of it being too dense or too slow to react when problems arise. Some of that criticism is warranted, some not, but let’s give credit where credit is due.

House lawmakers were quick to jump on the revelation that Juniper Networks, which sells its popular NetScreen firewalls to many government agencies, had found flaws in ScreenOS, the operating system that runs those firewalls. Those flaws would allow an attacker to gain remote access to a device over SSH or Telnet and then monitor and decrypt VPN traffic.

On Jan. 21, the House Committee on Oversight and Government Reform sent letters to the heads of major agencies asking them to audit their use of Juniper’s firewalls and report by Feb. 4 on how they might have been affected by the ScreenOS flaws and what corrective measures they took prior to Juniper releasing a software patch on Dec. 20.

The committee’s fast action follows the devastating breach at the Office of Personnel Management last year, which went undetected for several months. A year earlier, major problems were found in the widely used OpenSSL encryption library -- problems that may still be affecting systems around the world today.

It will be interesting to see what the House committee finds. Any agency that is on top of its security game should already have done that Juniper audit and should have no problem providing the information requested. Those that haven’t may have to scramble, and any committee report should show the extent of that.

Other elements of the government’s security status aren’t developing so quickly. Last year, the Government Accountability Office issued its regular report on the status of government cybersecurity, giving a lukewarm review of the Department of Homeland Security’s EINSTEIN program, more formally known as the National Cybersecurity Protection System (NCPS).

EINSTEIN was designed some years ago to be a central plank in the government’s overall cybersecurity posture, aimed at providing agencies with intrusion detection, intrusion prevention, analytics and information sharing technologies. If those tools were fully in place governmentwide, breaches such as those at OPM and elsewhere might have been prevented, or at least noticed and mitigated much sooner than they were.

Getting EINSTEIN in place governmentwide has been frustratingly slow, however, and according to the latest GAO report on the system, that sluggish pace continues. The DHS program is still only partially meeting its objectives, GAO said, and is deficient in all four areas examined.

With intrusion detection, for example, the system can only compare network traffic to known malware signatures, which covers maybe 80 percent of the bad stuff. The rest of the malicious activity, which includes the advanced persistent threats that do most of the damage these days, requires more sophisticated detection.
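
To see why signature matching alone falls short, consider a toy sketch -- not how NCPS itself is built -- in which a detector flags only payloads containing byte patterns it has already seen. Anything novel, or even a slightly altered variant, sails straight past it.

```python
# Toy illustration of signature-based detection and its blind spot.
# The signatures and payloads are made-up examples, not real indicators.

KNOWN_SIGNATURES = [
    b"MALWARE_DOWNLOADER_V1",  # hypothetical known-bad byte pattern
    b"EXPLOIT_KIT_BEACON",     # hypothetical known-bad byte pattern
]

def signature_match(payload: bytes) -> bool:
    """Flag traffic only if it contains a byte pattern seen before."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

if __name__ == "__main__":
    known_threat = b"GET /update HTTP/1.1\r\n\r\nMALWARE_DOWNLOADER_V1"
    novel_threat = b"GET /update HTTP/1.1\r\n\r\nMALWARE_DOWNLOADER_V2"  # tiny variation

    print(signature_match(known_threat))  # True  -- known signature, caught
    print(signature_match(novel_threat))  # False -- novel variant, slips through
```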

Likewise, EINSTEIN currently blocks only certain kinds of malicious traffic; it can’t yet stop threats hidden inside the web traffic itself. DHS says it plans to deliver that capability sometime this year.

Overall, the uptake of EINSTEIN has been spotty, because of deficiencies at the agencies or the DHS itself. All of the 23 agencies required to implement intrusion detection capabilities had routed at least some of their traffic through the NCPS sensors, the GAO said, but only five were receiving intrusion prevention services. Agencies had not taken all of the technical steps needed to implement the system, in part because the DHS had not yet provided them with the necessary guidance.

It’s all an example of the strange and often puzzling disparities in the government’s approach to security. On the one hand, at least some parts of Congress seem to understand the urgency and are prepared to pressure agencies to move faster. On the other, critical technology that was recognized as essential years ago still isn’t fully deployed.

Posted by Brian Robinson on Jan 29, 2016 at 12:01 PM


Heartbleed redux with Secure Shell?

Is the Secure Shell (SSH) vulnerability going to be this year’s OpenSSL? As with the stock market, it’s a mug’s game to predict the future, but warning flags have been raised in response to reports of problems with major security devices.

It was a flaw in OpenSSL, a widely used implementation of the Secure Sockets Layer encryption protocol, that led to the discovery two years ago of the Heartbleed bug, which many security professionals called one of the scariest things they had seen. It allowed anyone who could reach an affected device to compromise the private keys used to identify service providers and encrypt data traffic.

Eventually, hundreds of thousands of servers around the world were found to be vulnerable to Heartbleed, and even now no one seems sure if all the holes have been plugged.

In December 2015, Juniper Networks said it had found “unauthorized code” in its ScreenOS, the operating system that runs on its widely used NetScreen firewalls. That code would allow a knowledgeable attacker to gain administrative access to NetScreen devices over SSH and Telnet, the company said, and to decrypt VPN connections.

The company has since made several fixes to its software to close the gap, the latest involving the Dual_EC random number generator used in the firewalls. That fix has been a long time coming, since Dual_EC has long been reported to contain a backdoor attributed to the National Security Agency -- one that could also be exploited by bad guys.

Now researchers have found suspicious code in Fortinet’s FortiOS firewalls, saying it was also essentially an SSH backdoor. Fortinet, however, downplayed that allegation, saying it was a “management authentication issue” that had been fixed some time ago.

Coincidentally, the National Institute of Standards and Technology recently released a new guidance document on the security of SSH key-based access, which it said is often overlooked by organizations. That would be a bad thing, as NIST also points out, because misuse of SSH keys “could lead to unauthorized access, often with high privileges.” In other words, it’s potentially handing the keys to the kingdom over to people who will gratefully accept the gift -- and then take you for all you are worth.

Backdoor keys are specifically mentioned by NIST as one of the seven categories of vulnerability in SSH, which is widely used to manage servers, routers and other security devices as well as firewalls. It’s also used to provide privileged access to servers and networks.

However, NIST pointed out, SSH public key authentication can also be used to create a backdoor by generating a new key pair and adding a new authorized key to an authorized keys file. That allows someone to get around the access management system and its monitoring and auditing capabilities.

Other vulnerabilities NIST cited include: poor SSH implementation; improperly configured access controls; stolen, leaked, derived and unterminated keys; unintended usage of keys; theft of keys as attackers inside the system move from server to server and steal credentials along the way; and the always present human error.
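
The countermeasure NIST keeps coming back to for most of these categories is an accurate inventory of authorized keys, audited regularly against what is actually deployed on servers. As a minimal sketch of what such an audit might look like -- the file path and the approved-key inventory here are hypothetical placeholders, not anything from NIST’s document -- a script could simply flag any authorized_keys entry whose key material isn’t in the inventory.

```python
# Minimal sketch of an SSH key audit: flag any key in an authorized_keys
# file that isn't in an approved inventory. The file path and the
# approved-key set below are hypothetical placeholders.
from pathlib import Path

APPROVED_KEY_BLOBS = {
    "AAAAB3NzaC1yc2EAAAADAQABAAABAQDexampleapprovedkeyblob",  # hypothetical entry
}

def key_blob(line: str) -> str | None:
    """Pull the base64 key material out of an authorized_keys line."""
    for token in line.split():
        if token.startswith("AAAA"):  # SSH public key blobs all begin this way
            return token
    return None

def unapproved_keys(path: str) -> list[str]:
    """Return every authorized_keys entry whose key isn't in the inventory."""
    flagged = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        blob = key_blob(line)
        if blob and blob not in APPROVED_KEY_BLOBS:
            flagged.append(line)
    return flagged

if __name__ == "__main__":
    # Hypothetical path to a service account's authorized_keys file.
    for entry in unapproved_keys("/home/svc-web/.ssh/authorized_keys"):
        print("unapproved key:", entry)
```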

The recent firewall revelations are by no means the only reported problems with Secure Shell. In the middle of last year, researchers also discovered vulnerabilities in the OpenSSH implementation of the protocol that allowed attackers to get around authentication attempt limits and launch brute-force attacks on targeted servers.

The big problem with these kinds of vulnerabilities is not necessarily that they exist. If they are quickly noticed and patched, any likely damage is minimized. But the OpenSSL bug went unnoticed for several years, so the door to networks and systems that used that protocol was open all that time. The OpenSSH bug could have been present on versions of the FreeBSD operating system as far back as 2007.

Heartbleed redux? Not so far, it seems, but the year is yet young.

Posted by Brian Robinson on Jan 19, 2016 at 1:56 PM


Cybersecurity in 2016: Real change or more of the same?

Looking back, 2015 was a time of strain in the public sector when it came to cybersecurity, with the hack of systems at the Office of Personnel Management that exposed over 20 million government employee records, the high infection rate of state and local networks by malware and ransomware, and the overall lack of security compliance in government software. Here, then, is a start-of-the-year list of some of the issues government will face in 2016.

The Internet of Things: The rap about the IoT is that it’s still on the horizon,  but many argue that it’s already here, and people are putting (betting?) money on it influencing both industrial IT and the evolution of smart cities. The White House demonstrated its support when it jumped into the fray in 2015 with a $160 million initiative. But there are concerns that the technical evolution of the IoT is getting too far ahead of the much slower development of security and privacy policies. This year should see rapid movement around those issues. We hope.

Contractors get the security eye: Big integrators and government contractors have long been aware of the need for tight cybersecurity in their work with government agencies, but the same can’t always be said of the subcontractors they hire. As hackers get more sophisticated about how to access government systems, they are finding vulnerabilities in the security of those smaller firms, with devastating results. The OPM hack, for example, was attributed to the theft of a vendor’s network access credentials. The massive breach at retailing giant Target the previous year was blamed on credentials stolen from an HVAC vendor. The government has begun trying to rein in lax vendor security, with the Office of Management and Budget issuing “cyber guidance” for contractors. Don’t bet on that being the last word, however.

Encryption: There’s a debate brewing over encryption. On the one hand, it’s considered essential to security, given that best practices now assume that persistent hackers will penetrate even the best defenses. In that case, the more data and communications that can be encrypted, the safer they will be. OMB seemed to be following that logic when it issued a memo in June requiring federal agencies to encrypt their websites and web services connections. But others say the argument is more nuanced than just encrypting everything, and that true security also requires a way to inspect both incoming and outgoing encrypted traffic. Email encryption has become a particular target for security critics, especially in light of the hack of Pentagon email networks, in which attackers reportedly took advantage of outgoing encrypted traffic that was not being inspected. Meanwhile, the government is having to address public demands for even more encryption, a move intelligence agencies view with suspicion, saying it could hurt their efforts against terrorism.
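
As a small illustration of what the OMB website requirement amounts to in practice, the sketch below checks whether a site’s plain-HTTP address redirects visitors to an HTTPS connection. The hostname is a hypothetical placeholder, and the check is a simplification of the policy, not an official compliance test.

```python
# Minimal sketch: does a site redirect plain HTTP to HTTPS, the behavior
# the OMB encryption memo expects of federal websites and web services?
# The hostname used below is a hypothetical placeholder.
import requests

def enforces_https(hostname: str) -> bool:
    """Return True if an http:// request ends up on an https:// URL."""
    resp = requests.get(f"http://{hostname}", allow_redirects=True, timeout=10)
    return resp.url.startswith("https://")

if __name__ == "__main__":
    print(enforces_https("example.gov"))  # hypothetical hostname
```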

The value of NIST: The National Institute of Standards and Technology has never been an outfit to blow its own horn, so its impact on cybersecurity over the past few years has often seemed less obvious than that of the Department of Homeland Security and DOD. But NIST is arguably the most ambitious agency when it comes to addressing the more technical aspects of cybersecurity, and 2016 could finally be the year when the agency gets the recognition it deserves. Surveys already suggest that the majority of government agencies are now getting on board with NIST’s Cybersecurity Framework, and the private sector is also paying greater attention to that guidance. Other NIST endeavors, such as its Identity Ecosystem Framework, the first version of which was released in October, are also taking flight. The agency recently asked for proposals to develop identity solutions for state and local government.

The malware ecosystem: The biggest hurdle government may have when it comes to cybersecurity is realizing what it’s up against. Despite the now-clichéd idea that only state actors such as Russia and China are capable of the biggest and most penetrating attacks, the lure of filthy lucre has created a widespread and highly networked ecosystem of criminals and hackers that crank out highly sophisticated threats. And government is not ready to deal with this kind of industrialized threat machine, according to some observers. The spectrum of tools now available to hackers is immense and growing, and the research and development that goes on in this underground industry is impressive -- turning even legitimate proxy networks into channels hackers can use to their advantage.

The year security is taken seriously? Despite 2015 providing an embarrassment of cybersecurity breach riches, there’s no sign that government overall is paying that much more attention to information security -- it still seems to fall behind many other issues that cash-strapped agencies must address. Whatever security resources are being deployed seem to be mostly aimed at reaction rather than preemption. However, there are signs that attitude may be changing. Early in 2015, the DOD said it would kick workers guilty of “poor cyber hygiene” off networks they need to do their jobs, and Congress has made noises about stricter oversight of agency cybersecurity and holding agencies accountable for failures. Time will tell if these efforts amount to anything. If the lesson of the OPM hack -- that agency executives can lose their jobs if they don’t take care of security -- doesn’t hit home, you have to wonder what will.

Posted by Brian Robinson on Jan 04, 2016 at 12:48 PM


Why network resiliency is so hard to get right

The new chairman of the Joint Chiefs of Staff thinks the July hack of his organization’s unclassified email network showed a deficiency in the Pentagon’s cybersecurity investment and a worrying lack of “resiliency” in cybersecurity in general.

It was an embarrassing event for sure. The hackers, suspected to be Russian, got into the network through a phishing campaign and, once in, reportedly took advantage of encrypted outgoing traffic that was not being decrypted and examined. Gen. Joseph Dunford, who took over as chairman Oct. 1, said the hack highlighted that cyber investments to date “have not gotten us to where we need to be.”

As a goal, resiliency is a fuzzy concept. If it means keeping hackers out completely, then Dunford is right -- the Defense Department has a problem. If it means being able to limit or negate the effects of a hack once intruders get in, then he’s off the mark.

Best practice in the security industry is now to expect that even the best cyber defenses will be breached at some point. The effectiveness – or resiliency -- of an organization’s security will ultimately be judged on how it deals with that breach and how efficiently it can mitigate its effects.

In 2015, the government’s cybersecurity low point had to be the hack of the Office of Personnel Management’s systems, which compromised the personal data of millions of government workers. Attackers had apparently gained access to OPM’s networks months before the hack was discovered, giving them plenty of time to wander through the agency’s systems, steal and then exfiltrate the data.

That experience prompted plenty of heartache and soul searching. It seemed that, even after years of increasingly sophisticated hacks, both public and private organizations were still not paying the attention they needed to their internal security and were instead fixating on defending the network’s edge.

In that sense, the Joint Chiefs email attack could be seen as a success, at least in terms of the reaction to it. Security personnel quickly detected the attack, closed down the email network and then set about investigating possible damage and systematically eradicating any malware that attackers had left behind.

In the end, the email network was down for around two weeks, with the Pentagon declaring it a learning experience and claiming confidence in the integrity of DOD networks.

Learning experiences are great, but the fact is that most government organizations are still more vulnerable than they should be. And some agencies still seem to place more faith than is warranted in their networks’ perimeter defenses.

Even the best of those will prove vulnerable at some point, however. Google’s Project Zero recently reported a vulnerability in security appliances produced by FireEye, one of the leaders in the field, that allowed access to networks via a single malicious email. (FireEye quickly patched the vulnerability.)

Government assertions that agencies are raising employee awareness of email security hazards have also come into question, given that phishing remains such a successful way for hackers to obtain network access credentials. According to Verizon, a phishing campaign of just 10 emails has a 90 percent chance of at least one recipient becoming a victim.
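
Verizon’s figure is just the arithmetic of repeated trials: if each recipient falls for the lure independently with probability p, the chance that at least one of n recipients is caught is 1 - (1 - p)^n. A quick sketch, using an assumed per-recipient rate of 20 percent for illustration rather than Verizon’s exact number:

```python
# Chance that at least one of n phishing recipients becomes a victim,
# assuming each falls for the lure independently with probability p.
# The 20 percent per-recipient rate is an illustrative assumption.
def at_least_one_victim(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    print(f"{at_least_one_victim(0.20, 10):.0%}")  # roughly 89% with p = 0.20
```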

A basic problem in all of this is that assessing security, like resiliency, is still much more a qualitative exercise than a quantitative one. You know you’ve got a good system in place if you can deter attacks or catch and mitigate them quickly once they happen. But there’s no way to know, with any certainty, whether that’s the case until a serious breach is attempted.

To move the needle on that, the National Institute of Standards and Technology will be holding a two-day technical workshop in January that will look at how to apply measurement science in determining the strength of the various solutions that now exist to assure identities in cyberspace.

To that end, it has released three whitepapers ahead of the workshop that look at ways to measure the strength of identity proofing and authentication, and at how attribute metadata can be used to score confidence in authorization decisions.

Posted by Brian Robinson on Dec 18, 2015 at 12:55 PM