Feds want mobile security, except when they don’t

Mobile security is assumed to be critical to an agency’s overall IT security, but details on the effectiveness of such programs are scarce, making it hard to assess the overall risk from mobile devices.

A study of nearly 600 IT and security executives at major organizations, including those in the public sector, conducted by the Ponemon Institute and cybersecurity company Lookout, shows the risk from mobile devices is great and growing. In fact, a majority of respondents believe mobile is a root cause of breaches.

Some 83 percent said mobile devices are susceptible to hacking, and over two-thirds said it was certain or likely that their organization had suffered a data breach caused by employees accessing sensitive and confidential information on mobile devices.

At the same time, only 33 percent of respondents said their organization was vigilant in protecting data from unauthorized access. More startling, nearly 40 percent didn’t consider protecting that data on mobile devices to be a priority at all.

Perhaps that’s not surprising when, according to the study, most of these IT security professionals didn’t know what their employees were really accessing on their devices. Those who said they did know thought the data was mostly email and text, when, in fact, personally identifiable information, customer records and confidential and classified documents made up a large part of it.

One of the biggest problems for security pros is translating this kind of information into the hard dollar damage that executive leaders look for to put a price on breaches. Ponemon takes a stab at that figure, concluding that dealing with malware-infected mobile devices could cost the organizations in the study over $26 million.

The inconsistent thinking over the utility of mobile devices and the security problems they pose is not new. A 2014 survey by the Government Business Council found that 72 percent of federal employees said they used mobile devices for work, and over half saw mobile security as one of the major challenges to expanding mobile use. Yet fewer than one-third used any kind of mobile security app.

Despite all this apparent inattention to mobile security, things seem to be improving. Last year, the Office of Management and Budget put out a cybersecurity memo that directly addressed mobile security, and the National Institute of Standards and Technology came out with a draft guide for securing mobile devices -- both moves signaling the importance of keeping mobile devices and the data they hold secure.

What, then, to make of the recent kerfuffle over the FBI getting a court order requiring Apple to break the strong encryption on an iPhone used by one of the terrorists who gunned down government workers in San Bernardino, Calif., in December?

The merits of the FBI’s argument (or of Apple’s pushback against that order) aside, the dispute has implications for overall mobile security. If the FBI wins the debate and Apple must write iOS code that allows the FBI and other law enforcement and intelligence agencies to break into phones, that weaker security could compromise every other mobile user.

Strong encryption has been proposed as a universal solution for protecting data on mobile devices. It might not stop the most determined attacker, but it will prevent most bad actors from stealing whatever data is on a device. The Obama administration itself has pushed for encryption, and the Ponemon study found it was respondents’ preferred means of securing data.
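The study doesn’t prescribe a particular implementation, but as a rough illustration, here is a minimal sketch of the kind of authenticated encryption that can protect data at rest on a device, written in Python with the third-party cryptography package. The record names and key handling are assumptions for demonstration only; on a real device the key would live in a hardware-backed keystore.

```python
# Illustrative sketch only: authenticated encryption for data at rest.
# Requires the third-party "cryptography" package (pip install cryptography).
# Key handling is deliberately simplified -- in practice the key would come
# from a hardware-backed keystore, never sit alongside the data.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, context: bytes) -> bytes:
    """Encrypt one record with AES-256-GCM; prepend the random nonce."""
    nonce = os.urandom(12)                      # 96-bit nonce, unique per record
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, context)
    return nonce + ciphertext

def decrypt_record(key: bytes, blob: bytes, context: bytes) -> bytes:
    """Reverse encrypt_record; raises if the data or context was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, context)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # assumed to come from a keystore
    blob = encrypt_record(key, b"customer record 42", b"records-db")
    print(decrypt_record(key, blob, b"records-db"))
```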

Recently, however, Bloomberg reported on what it called a “secret meeting” at the White House around Thanksgiving last year, where senior national security officials ordered government agencies to develop encryption workarounds so that investigators could get to user data as they needed.

All of this seems to throw the issue of mobile security risk -- one of the most important government IT issues -- into doubt, once again. With malware and the attackers who use it becoming ever more sophisticated and capable, any weaknesses will be found out and exploited. For agencies and mobile users, conflicting messages over security sow doubt and confusion.

So, where to now?

This blog was changed Feb. 29 to include Lookout, Ponemon Institute's partner in the mobile risk study.

Posted by Brian Robinson on Feb 26, 2016 at 10:46 AM


Is SDx the model for IT security?

Is this the year when software-defined anything (SDx) becomes the template for federal agency IT security? It’s been knocking at the door for a while, and the spending outlook for government IT in President Barack Obama’s recent budget proposals could finally be the opening it needs.

In calling for a 35 percent increase in cybersecurity spending to $19 billion, the White House also proposed a $3.1 billion revolving fund to upgrade legacy IT throughout the government. Venting his frustration, and no doubt that of many others in the administration and Congress, Obama talked about ancient COBOL software running Social Security systems, archaic IRS systems and other old, broken machines and software at federal agencies.

That’s not a new story. Agency IT managers will readily tell you about the problems they have with trying to maintain legacy technology and the way that sucks up funds and manpower. They say they have too little time to focus on what they feel their jobs are really about, which is delivering better services to their users.

Security is just one item among many they must address, but it’s become a much more urgent one after a 2015 that saw major breaches at the Office of Personnel Management and elsewhere. That point was driven home again this year when the IRS revealed that over 100,000 attempts using stolen Social Security numbers had succeeded in generating the personal identification numbers used by taxpayers to electronically file and pay taxes.

The revolving IT Modernization Fund in the White House budget proposal would pay for projects prioritized by the extent to which they lower the overall security risk of federal IT systems. Savings achieved by shifting to more cost-effective and scalable platforms would be recycled back into the fund.

Cost-effectiveness and scalability are among the main advantages that proponents put forward for SDx architectures, along with agility in responding to security threats. As threats become more targeted, more sophisticated and more numerous, protecting networks gets more difficult. With IT staff overwhelmed by just the legacy systems they have to keep running, organizations face much greater risk of damage from those attacks.

By simplifying infrastructure management with the software overlay that software-defined networking (SDN) brings, IT and security managers get a much better way of identifying when they are being attacked and a faster and more focused way of responding.
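To make that pattern concrete, here is a hedged sketch of how an intrusion alert might be translated into a flow rule and pushed through an SDN controller’s northbound interface. The controller URL, endpoint path and rule schema below are invented for illustration; a real deployment would use its controller’s actual API (OpenDaylight, ONOS or similar).

```python
# Hypothetical sketch of an SDN-style response to an intrusion alert.
# The controller URL, endpoint and rule format are assumptions for
# illustration, not any specific vendor's API.
import requests

CONTROLLER = "https://sdn-controller.example.gov/api/flows"   # assumed endpoint

def quarantine_host(ip_address: str, api_token: str) -> None:
    """Push a drop rule for all traffic from a suspect host."""
    rule = {
        "match": {"src_ip": ip_address},
        "action": "drop",
        "priority": 1000,
        "comment": "auto-quarantine from IDS alert",
    }
    resp = requests.post(
        CONTROLLER,
        json=rule,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=5,
    )
    resp.raise_for_status()

# Usage (hypothetical): quarantine_host("10.1.2.3", token)
```

The point of the sketch is the shape of the workflow, not the specific calls: detection feeds a central policy point, which reprograms the network in software rather than waiting for someone to reconfigure boxes one by one.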

In a poll conducted early last year, ESG Research found that a significant percentage of enterprise security professionals said they would use SDN to address network security across a wide range of scenarios.

Researchers at the Idaho National Laboratory have already developed a proof of concept that uses SDx to emulate the use and security of the lab’s business systems. The effort has delivered “amazing outcomes,” they said, and demonstrates how SDx can improve security, the repeatability of processes and the consistency of results.

The future will only bring more security challenges for government as the Internet of Things takes hold. That will introduce thousands of new avenues attackers will use to try to penetrate networks. Given the kind of benefits the IoT is expected to bring to government organizations, the trick will be in securing networks without limiting the utility of IoT.

One approach that won’t work is simply throwing the solution du jour at the problem, which has been the traditional answer. Bolting on more point-to-point, single-purpose devices simply won’t scale fast enough to deal with vulnerabilities and will be too costly. Those devices are also proving more vulnerable than people thought, with Cisco joining Juniper and Fortinet in the list of manufacturers whose advanced firewalls apparently suffer from potential software problems.

Right now, the only viable solution in this brave new world of security seems to be through some kind of software-defined approach. It’s not a silver bullet by any means, and it must be part of an overall approach to security. IT and security professionals must also be convinced that it will provide for the kind of subtleties and granularity needed to weed out modern threats.

If -- and in an election year, it’s a big if -- Obama’s budget proposals make headway in Congress, SDx could prove the best way to tackle the security problems that otherwise threaten to overwhelm government.

Posted by Brian Robinson on Feb 12, 2016 at 10:46 AM


Still early days for federal cybersecurity?

Government gets it in the neck frequently when it comes to cybersecurity, usually along the lines of it being too dense or too slow to react when problems arise. Some of that criticism is warranted, some not, but let’s give credit where credit is due.

House lawmakers were quick to jump on the revelation that Juniper Networks, which sells its popular NetScreen firewalls to many government agencies, had found flaws in the operating system that runs those firewalls. The flaws would allow someone to remotely access a device over SSH or Telnet and then monitor and decrypt VPN traffic.

On Jan. 21, the House Committee on Oversight and Government Reform sent letters to the heads of major agencies asking them to audit their use of Juniper’s firewalls and report by Feb. 4 on how they might have been affected by the ScreenOS flaws and what corrective measures they took prior to Juniper releasing a software patch on Dec. 20.

The committee’s fast action follows the devastating breach at the Office of Personnel Management last year, which went undetected for several months. A year earlier, major problems were found with the widely used OpenSSL encryption library, problems that may still be affecting systems around the world today.

It will be interesting to see what the House committee finds. Any agency that is on top of its security game should already have done that Juniper audit and should have no problem providing the information requested. Those that haven’t may have to scramble, and any committee report should show the extent of that.

Other elements of the government’s security status aren’t developing so quickly. Last year, the Government Accountability Office gave its regular report on the status of government cybersecurity, offering a lukewarm review of the Department of Homeland Security’s EINSTEIN program, more formally known as the National Cybersecurity Protection System (NCPS).

EINSTEIN was designed some years ago to be a central plank in the government’s overall cybersecurity posture, providing agencies with intrusion detection, intrusion prevention, analytics and information-sharing technologies. If those tools were fully in place across agencies, breaches such as those at OPM and elsewhere might have been prevented, or at least noticed and mitigated much sooner than they were.

Getting EINSTEIN in place governmentwide has been frustratingly slow, however, and according to the latest GAO report on the system, that sluggish pace continues. The DHS program is still only partially meeting its objectives, GAO said, and is deficient in all four areas examined.

With intrusion detection, for example, it can only compare network traffic to known malware signatures, which covers maybe 80 percent of the bad stuff. The rest of the malicious activity, which includes the advanced persistent threats that do most of the damage these days, requires more sophisticated detection.
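To see why that matters, here is a toy sketch (my own illustration, not EINSTEIN code) of what signature-based detection amounts to: traffic is flagged only when it contains a byte pattern already on the list, which is exactly why novel or obfuscated activity slips through. The example signatures are assumptions.

```python
# Toy illustration of signature-based detection (not EINSTEIN itself).
# Traffic is flagged only when it contains a byte pattern already on the
# signature list; anything novel or obfuscated passes through unnoticed.
KNOWN_SIGNATURES = [             # assumed example patterns
    b"\x4d\x5a\x90\x00",         # e.g. a header sequence seen in known malware
    b"cmd.exe /c powershell",    # e.g. a command string from a known attack
]

def matches_signature(payload: bytes) -> bool:
    """Return True if the payload contains any known-bad pattern."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

def inspect(packets: list[bytes]) -> list[int]:
    """Return the indexes of packets that match a known signature."""
    return [i for i, p in enumerate(packets) if matches_signature(p)]
```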

Likewise, EINSTEIN currently prevents intrusions only for particular kinds of malicious data; it can’t block threats hidden inside the web traffic itself. DHS says it plans to deliver that capability sometime this year.

Overall, the uptake of EINSTEIN has been spotty, because of deficiencies at the agencies or at DHS itself. All 23 agencies required to implement intrusion detection capabilities had routed at least some of their traffic through the NCPS sensors, GAO said, but only five were receiving intrusion prevention services. Agencies had not taken all of the technical steps needed to implement the system, in part because DHS had not yet provided them with the necessary guidance.

It’s all an example of the strange and often puzzling disparities in the government’s approach to security. On the one hand, at least some parts of Congress seem to understand the urgency and are prepared to pressure agencies to move faster. On the other, critical technology that was recognized as essential years ago still isn’t fully deployed.

Posted by Brian Robinson on Jan 29, 2016 at 12:01 PM


Heartbleed redux with Secure Shell?

Is the Secure Shell (SSH) vulnerability going to be this year’s OpenSSL? As with the stock market, it’s a mug’s game to predict the future, but warning flags have been raised in response to reports of problems with major security devices.

It was problems with OpenSSL, a widely used implementation of the Secure Sockets Layer encryption protocol, that led to the discovery two years ago of the Heartbleed bug, which many security professionals called one of the scariest things they had seen. It allowed anyone who could reach a vulnerable device to compromise the private keys used to identify service providers and encrypt data traffic.

Eventually, hundreds of thousands of servers around the world were found to be vulnerable to Heartbleed, and even now no one seems sure if all the holes have been plugged.

In December 2015, Juniper Networks said it had found “unauthorized code” in its ScreenOS, the operating system that runs on its widely used NetScreen firewalls. That code would allow a knowledgeable attacker to gain administrative access to NetScreen devices over SSH and Telnet, the company said, and to decrypt VPN connections.

The company has since made several fixes to its software to close the gap, the latest to the Dual_EC random number generator used in the firewalls. That fix has been a long time coming, since Dual_EC has reportedly contained a backdoor inspired by the National Security Agency (one that could also be exploited by bad guys).

Now researchers have found suspicious code in Fortinet’s FortiOS firewalls, saying it was also essentially an SSH backdoor. Fortinet, however, downplayed that allegation, saying it was a “management authentication issue” that had been fixed some time ago.

Coincidentally, the National Institute of Standards and Technology recently released a new guidance document on the security of SSH key-based access, which it said is often overlooked by organizations. That would be a bad thing, as NIST also points out, because misuse of SSH keys “could lead to unauthorized access, often with high privileges.” In other words, it’s potentially handing the keys to the kingdom over to people who will gratefully accept the gift -- and then take you for all you are worth.

Backdoor keys are specifically mentioned by NIST as one of the seven categories of vulnerability in SSH, which is widely used to manage servers, routers, firewalls and other security devices. It’s also used to provide privileged access to servers and networks.

However, NIST pointed out, SSH public key authentication can also be used to create a backdoor by generating a new key pair and adding a new authorized key to an authorized keys file. That allows someone to get around the access management system and its monitoring and auditing capabilities.
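One way to guard against that, offered here as a sketch rather than anything NIST specifies, is to audit the keys actually present in authorized_keys files against an approved inventory. The file path, key types handled and inventory format below are assumptions for illustration.

```python
# Hedged sketch: audit authorized_keys entries against an approved inventory.
# File locations, key types and the inventory format are assumptions.
from pathlib import Path

APPROVED_KEYS = {
    "AAAAB3NzaC1yc2EAAAADAQAB...",   # placeholder: blobs exported from key management
}

KEY_TYPES = ("ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256", "ssh-dss")

def unauthorized_keys(authorized_keys_file: Path) -> list[str]:
    """Return entries whose key blob is not in the approved inventory."""
    rogue = []
    for line in authorized_keys_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        # Simplified parse: take the field right after a recognized key type.
        # Entries that don't parse cleanly are treated as suspect.
        blob = None
        for i, field in enumerate(parts):
            if field in KEY_TYPES and i + 1 < len(parts):
                blob = parts[i + 1]
                break
        if blob is None or blob not in APPROVED_KEYS:
            rogue.append(line)
    return rogue

# Usage with an assumed path:
# unauthorized_keys(Path("/home/alice/.ssh/authorized_keys"))
```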

Other vulnerabilities NIST cited include poor SSH implementation; improperly configured access controls; stolen, leaked, derived and unterminated keys; unintended use of keys; theft of keys as attackers move from server to server inside a system, stealing credentials along the way; and the ever-present human error.

The recent firewall revelations are by no means the only reported problems with Secure Shell. In the middle of last year, researchers also discovered vulnerabilities in the OpenSSH version of the protocol that allowed attackers to get around limits on authentication attempts and launch brute-force attacks on targeted servers.

The big problem with these kinds of vulnerabilities is not necessarily that they exist. If they are quickly noticed and patched, any likely damage is minimized. But the OpenSSL bug went unnoticed for several years, so the door to networks and systems that used that library was open all that time. The OpenSSH bug could have been present in versions of the FreeBSD operating system as far back as 2007.

Heartbleed redux? Not so far, it seems, but the year is yet young.

Posted by Brian Robinson on Jan 19, 2016 at 1:56 PM