
Outsourcing cybersecurity? Feds get behind the idea.

The recent award of a $6 billion blanket purchase agreement to 17 companies for security monitoring tools and services was a big business story and no doubt welcome news for federal contractors in this age of sequestration. It also illustrates government’s growing acceptance of the idea of security-as-a-service.

Agencies are moving from static, endpoint security tools toward a more holistic approach to cybersecurity, letting service providers handle more of the chores of continuously monitoring and assessing the security status of IT systems at the enterprise level.

It is not a wholesale shift, of course. There still are plenty of point products in use and plenty of security management being done in-house. But just a few years ago the idea of outsourcing security was controversial. Today, the Homeland Security Department is touting continuous monitoring as a service as a major step forward in protecting government systems.

The blanket purchase agreements are part of a move in government from periodic assessment and certification under the Federal Information Security Management Act to continuous monitoring. Continuous monitoring of IT systems and networks was identified last year by the Office of Management and Budget as a Cross-Agency Priority goal. DHS, which has been delegated responsibility for overseeing FISMA, established the more appropriately named Continuous Diagnostics and Mitigation program, intended as a one-stop shop for tools and services enabling monitoring.

On Aug. 12, BPAs were awarded through the General Services Administration to 17 companies to provide these tools and services. The contracts have a one-year base period with four one-year options and an estimated value of $6 billion. The goal is not only to provide a cost-effective way to acquire cybersecurity solutions, but also to create a standardized platform for automated monitoring and reporting on the state of hardware and software.

Agencies will have their own dashboards that alert them to the most critical security risks, help them prioritize mitigation efforts and provide near-real-time information on security status. Summary information will give DHS a similar view of the entire .gov domain.
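To make the idea concrete, here is a minimal sketch of the kind of automated check such a dashboard could aggregate. The baseline, package names and risk scores below are invented for illustration; the CDM program defines its own data formats.

    # Illustrative sketch only: the approved baseline and the risk
    # scores are hypothetical, not part of any CDM specification.
    APPROVED_BASELINE = {"openssl": "1.0.1e", "httpd": "2.2.25"}

    def assess_host(hostname, installed):
        """Compare installed software against the approved baseline and
        return findings sorted worst first, as a dashboard would rank them."""
        findings = []
        for package, version in installed.items():
            if package not in APPROVED_BASELINE:
                findings.append((9, hostname, "unauthorized package: " + package))
            elif version != APPROVED_BASELINE[package]:
                findings.append((6, hostname, package + " at " + version +
                                 ", baseline is " + APPROVED_BASELINE[package]))
        return sorted(findings, reverse=True)

    # One host's findings; a dashboard would merge every host's queue.
    print(assess_host("web01", {"openssl": "1.0.1c", "nmap": "6.25"}))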

This is not DHS’s first foray into security as a service. In July, the Einstein 3 intrusion detection and prevention service went into operation at the first agency. It is a managed security service provided by DHS through Internet service providers. Initially deployed in 2004, it has advanced from network traffic analysis to automated blocking of malicious traffic. The Veterans Affairs Department was scheduled to become the second agency to turn on the service in August, with others coming online as ISPs are ready to accept them.
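Einstein 3’s signatures and architecture are classified, so any example is necessarily conceptual. This sketch, with an invented indicator feed, shows only the functional difference between flagging traffic and dropping it inline:

    # Conceptual sketch of the move from detection to prevention.
    # The indicator feed and packet model are invented for illustration.
    MALICIOUS_INDICATORS = {b"evil-c2.example.net"}

    def detect(packet):
        """Einstein 1/2 style: analyze traffic and flag matches for review."""
        return any(indicator in packet for indicator in MALICIOUS_INDICATORS)

    def filter_inline(packet):
        """Einstein 3 style: sit inline and drop matching traffic
        automatically instead of merely logging it."""
        return None if detect(packet) else packet

    print(filter_inline(b"GET / Host: evil-c2.example.net"))  # None: blocked
    print(filter_inline(b"GET / Host: gcn.com"))              # passed through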

Both of these trends — the move from static evaluation to continuous monitoring and letting service providers handle enterprise level tasks — could go a long way toward improving federal cybersecurity.

For more than a decade FISMA has provided a framework for IT security, and agencies have struggled to improve their security postures while complying with the law’s requirements. Almost from its inception in 2002 there have been calls for FISMA reform to move agencies away from focusing on compliance and toward actually improving security. Despite these calls, successive Congresses mired in partisan gridlock have been unable to provide reform.

Recent developments are evidence that FISMA’s supporters might be right, however. The problem is not in the law, which has always called for risk-based security and continuous (or near continuous) monitoring of systems, but with oversight that has placed more importance on compliance than results.

Not everything has been fixed. Statutory responsibility for overseeing FISMA still lies with OMB rather than DHS. And neither Einstein 3 nor the Continuous Diagnostics and Mitigation program has been in place long enough to show results. But the administration is demonstrating practical creativity in evolving federal cybersecurity.

Posted by William Jackson on Aug 23, 2013 at 6:40 AM



Microsoft issues fix for resurrected Ping of Death

The latest round of patches from Microsoft includes a fix for an ICMPv6 vulnerability in all of the company’s operating systems that support IPv6.

The vulnerability, rated “important,” is an IPv6 version of the old Ping of Death, a denial-of-service attack whose original IPv4 form was fixed more than a decade ago. The current version was reported by Symantec’s Basil Gabriel, and no public exploits had been reported when Microsoft released the security bulletin on Aug. 13.

But it is one more reminder that, whether or not an agency is using IPv6 on its network, modern operating systems support the new Internet Protocols out of the box, and network administrators need to be aware of the traffic using them.

The fix for the ICMPv6 vulnerability came in one of eight security bulletins in Microsoft’s Aug. 13 Patch Tuesday release. Three were rated critical and five important.

ICMP, the Internet Control Message Protocol, is a utility for error reporting and diagnostics in IP networks, and it is implemented in Version 6 as well as Version 4 of the Internet Protocol. One of its functions is pinging: using an echo request packet to measure the round-trip time for a message to a specified IP address. Like many other denial-of-service attacks, a ping flood uses a high volume of these packets to overwhelm a target. But it was found in the 1990s that a single malformed ping packet larger than the size allowed in IPv4 could cause a buffer overflow when it was reassembled by the host operating system, crashing it.
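The fix in most stacks amounted to a bounds check during reassembly. Here is a minimal sketch of that check, with fragments simplified to (offset, payload) pairs; real stacks do this in kernel code on fragment queues:

    # Minimal sketch of the reassembly bounds check that stops the
    # classic Ping of Death; the fragment model is simplified.
    MAX_IP_PACKET = 65535  # largest legal IPv4 packet, header included

    def reassemble(fragments):
        """Rebuild a packet from (offset, payload) fragments, refusing any
        packet whose total size would exceed the protocol maximum."""
        buffer = bytearray(MAX_IP_PACKET)
        end = 0
        for offset, payload in fragments:
            if offset + len(payload) > MAX_IP_PACKET:
                raise ValueError("oversized packet rejected")  # the fix
            buffer[offset:offset + len(payload)] = payload
            end = max(end, offset + len(payload))
        return bytes(buffer[:end])

    # A hostile final fragment pushing the total past 65,535 bytes would
    # have overflowed a fixed-size buffer; here it is simply refused.
    try:
        reassemble([(0, b"x" * 1000), (65500, b"y" * 100)])
    except ValueError as err:
        print(err)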

This was fixed in most operating systems by 1998, but Gabriel found that at least some operating systems had the same problem reassembling oversize packets under ICMPv6. The flaw is not in ICMPv6 itself, which is a required part of IPv6 networking, but in its implementation in Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012 and Windows RT. As Microsoft describes it, “the vulnerability is caused when the TCP/IP stack does not properly allocate memory for incoming ICMPv6 packets.”

The patch corrects memory allocation while processing these packets, and the problem also can be handled by firewalls that detect and block the malformed packets. So with a properly configured firewall and an updated OS, the resurrected Ping of Death should not be a problem. It does offer a reminder, however, that IPv6 will present a host of security challenges. Some will be unique to the new protocols, and some will be recycled versions of problems already addressed in IPv4.

Until recently, the surest way to dodge challenges like this was to avoid IPv6 altogether. That tactic is quickly becoming impractical, and soon will be impossible. Current operating systems and other technologies support IPv6 out of the box, and many prefer the new protocols by default, making it difficult to opt out. With the pool of unassigned IPv4 addresses nearly depleted, future growth of the Internet will be in the IPv6 address space, and networks will have to accommodate the new traffic.
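One quick way to see that preference on a dual-stack host: the operating system’s resolver returns addresses in the order its policy prefers, and on most modern systems IPv6 comes first. A short check (the hostname is just an example):

    # Show which protocol the OS prefers when a host offers both.
    import socket

    def preferred_families(host, port=80):
        """Return address families in the order the resolver prefers them."""
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        return ["IPv6" if info[0] == socket.AF_INET6 else "IPv4" for info in infos]

    print(preferred_families("www.example.com"))  # e.g. ['IPv6', 'IPv4']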

All of this aside, federal agencies are under orders to enable IPv6 on their networks. Starting as early as possible, with a security plan in place, will make the process less risky.

Posted by William Jackson on Aug 16, 2013 at 9:10 AM



Threat-info sharing: Still broken after all these years

Comfoo, a Chinese Trojan that was used in the breach of RSA back in 2010, dates back at least to 2006 and remains in wide use more than three years after it was exposed, according to research by the Dell SecureWorks Counter Threat Unit.

Why has Comfoo been successful for so long?

“Not enough people are sharing information,” said Joe Stewart, CTU’s director of malware research. Because people hold onto threat data, rather than share it, malware owners are able to use the same tools for years.

Stewart and his partner, senior security researcher Don Jackson, suspect that the federal government already knew much of what private researchers have spent the last two years finding out about the threat landscape, but because the information was classified, the research had to be duplicated in the private sector.

It’s not that the government doesn’t want to help, Stewart said. “The government people I’m talking to say they are trying to get to the point that they can share the information, but they aren’t there yet.”

“There have been discussions” with government officials about Comfoo “that have gone nowhere fast,” because of the classified information involved, Jackson said. “If we had known the same thing they knew, a lot less damage could have been done.”

Some threat information is being shared, of course. There are industry sector Information Sharing and Analysis Centers (ISACs) that allow companies to come together and assess risks, with some government participation. And there are industry working groups targeting specific challenges. But the success of these efforts so far has been limited, said Kathleen M. Moriarty, global lead security architect at EMC Corp., the parent company of RSA.

“Organizations today rely on information-sharing processes that are so manually intensive, duplicative and inefficient that they cannot scale to meet critical computer network defense requirements,” she writes in a paper on threat intelligence sharing.

The problem with most of these efforts is not a lack of information being made available, but how to make it useful, she said: deciding “what to share with who.” Merely sharing is not enough. Threat information needs to be actionable and usable in automated responses. “You have to have a business problem you are going to solve,” she said.
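A toy example of the difference: an indicator record that carries enough context to drive an automated response. The record format and handler below are invented for illustration; real exchanges use emerging standards such as STIX and TAXII.

    # Invented record format: "actionable" means the consumer knows
    # exactly what to do with the indicator, without human triage.
    indicator = {
        "type": "domain",
        "value": "c2.badexample.net",       # hypothetical indicator
        "confidence": "high",
        "recommended_action": "block-dns",
        "context": "Comfoo command-and-control, per vendor report",
    }

    def act_on(record, blocklist):
        """Apply an indicator automatically when it is specific enough to act on."""
        if record["confidence"] == "high" and record["recommended_action"] == "block-dns":
            blocklist.add(record["value"])

    blocked = set()
    act_on(indicator, blocked)
    print(blocked)  # {'c2.badexample.net'}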

She cited examples of how sharing can be effective, among them the Messaging, Malware, Mobile Anti-Abuse Working Group, a collaborative effort of large email service providers, and the Anti-Phishing Working Group. By providing clearinghouses for actionable data, these groups help their industries and allow security vendors to take advantage of the information in their products.

But these models presuppose formal sharing already in place, which is not always the case. A lot of threat sharing is informal, back-channel and bottom-up, especially with government.

“I think all governments are interested in helping,” Moriarty said. But there are barriers of trust, policy and law.

And turf. “Turf battles are nothing new in government,” Mark Weatherford, a former Homeland Security Department cybersecurity official now with the Chertoff Group, said in a recent Black Hat panel discussion. “In Washington, power is everything, and information is power.”

There are efforts to break these barriers, such as DOD’s Defense Industrial Base (DIB) program to share classified information with contractors. “The DIB pilot worked well, except that the information is classified,” which limits how it can be shared or used, said retired Adm. William Fallon, former commander of the U.S. Central Command.

Participants in the Black Hat discussion agreed on two things. First, information sharing is improving, but the remaining challenges still put defenders at a disadvantage against the offense. Second, they all concurred with an audience member’s assertion that “pain and humiliation is a great motivator.”

It is likely to take a cyber disaster to effectively change the information sharing landscape.

Posted by William Jackson on Aug 12, 2013 at 9:07 AM



Mobile threats and other new directions from Black Hat

Mobile computing seems to be the new frontier in cybersecurity, edging out the cloud as a fruitful area for research and hacking at last week’s Black Hat Briefings. But stealthy persistent threats remain a serious concern and the emerging Internet of Things offers new challenges to privacy.

It’s getting harder to spot trends at Black Hat, however, as the annual security conference grows and evolves. It remains a premier venue for original research, but with more than 7,500 attendees and presentations offered in 11 simultaneous tracks at the U.S. Briefings July 31 and Aug. 1 in Las Vegas, it no longer is a compact community where you can keep your ear to the ground. The crowds are not only larger but also more diverse, with a growing number of corporate and government types joining the hackers and researchers (although government employees are loath to identify themselves).

That change was illustrated by the reception given NSA Director Gen. Keith Alexander, who gave the opening keynote. Although Black Hat founder Jeff Moss said in introductory remarks that tensions between the hacker/security community and government were at an all-time high in the wake of revelations about domestic NSA snooping, the general found a largely friendly crowd. Yes, there was a shouted expletive and a few taunts from the audience, but people seemed to be mostly on Alexander’s side.

“There is such a thing as professionalism,” one audience member sniffed at the heckling.

But pushback has always been a hallmark of Black Hat, and attendees are encouraged to challenge unsupported claims. This year’s audience, though, seemed unusually willing to accept on faith assertions that a more skeptical crowd would have questioned, such as the statement that “we have tremendous oversight and compliance” in surveillance programs and the claim that there has never been any NSA overreach in gathering data. Alexander might be right, but we have no way of knowing as long as the programs remain classified. The general said “trust me,” and the audience did.

That said, there still is a lot of research being presented. As mobile computing comes of age, there is growing interest in the possibilities offered by the Google Android and Apple iOS platforms. A malicious USB charging device can bypass digital signature requirements on many iPhone versions to install phony apps carrying malware, without jailbreaking the phone.

Cryptographic keys for signing Android applications can be exposed to create bots that can set up unlimited numbers of spam accounts on social networking sites. Other vulnerabilities in Android authentication can allow legitimate apps to be altered, giving an attacker system control of the phone. Automated exploits for this one already are in the wild.
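That last flaw was widely reported at Black Hat as the Android “master key” bug, which abused the fact that an APK is a ZIP archive and can carry two entries with the same name, one checked by the signature verifier and the other run by the installer. A heuristic scan for that telltale duplication (a sketch, not a full verifier):

    # Scan an APK (a ZIP archive) for duplicate entry names, the
    # telltale of the reported "master key" tampering technique.
    import sys
    import zipfile
    from collections import Counter

    def duplicate_entries(apk_path):
        """Return entry names that appear more than once in the archive."""
        counts = Counter(zipfile.ZipFile(apk_path).namelist())
        return [name for name, count in counts.items() if count > 1]

    if __name__ == "__main__":
        dupes = duplicate_entries(sys.argv[1])  # path to an .apk file
        print("suspicious duplicates:", dupes or "none")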

The BlackBerry OS 10 presents an attack surface that can allow remote entry and unauthorized escalation of privileges. And there are new mobile malware and mobile rootkits, and the LTE network itself is far from secure.

All of this takes on added significance as desktops become obsolete, laptops become passé, and everyone uses tablets and smartphones to access data and applications that are moving to the cloud.

At the same time, complex multistage threats and rootkits continue to advance, and distributed denial-of-service attacks capable of delivering multi-gigabit streams to targets are being offered as a service. In short, nothing is getting better and a lot of things are getting worse. All of this means plenty of job security for anyone who can defend a network, a server, a computer or an application.

As long as you can keep up with the bad guys, that is.

Posted by William Jackson on Aug 06, 2013 at 8:41 AM