
Syrian Electronic Army's attacks expose the Internet's weak links

The Syrian Electronic Army has been at it again. Most recently, the online presences of the New York Times and Twitter were targeted, with traffic redirected to pro-Syrian Web pages. And as the Obama administration publicly contemplates military action against the Assad regime, it is a safe bet that the hacktivists will be watching for opportunities in the .gov domain.

(UPDATE: Over the weekend, the SEA reportedly attacked a Marine Corps recruiting website, redirecting visitors to a message appealing to U.S. soldiers not to attack Syria.)

We still don’t know much about the SEA, but the attacks are — unfortunately — well known.

“This is really not new,” said Paul Ferguson, vice president of threat intelligence at Internet Identity. “It’s happening with alarming frequency.”

In this case the attackers modified Domain Name System records to redirect traffic to propaganda pages. “It didn’t cause a lot of havoc,” Ferguson said. “It could have been worse.”

But the more serious issue is that attackers are leveraging low-level exploits, in this case a phishing attack against a domain name registrar, to escalate attacks and hopscotch to third-party targets. By taking advantage of the weakest link in the chain of Internet services, attackers can move up the chain and past the defenses of more important targets. This time it was the SEA against the New York Times and Twitter. In the past it was China going after Lockheed Martin through RSA. Regardless of the attackers, the targets and the exploits used, it is happening on a regular basis, Ferguson said. “It’s a phenomenon we see more and more of.”

In the current case, it is believed that a phishing attack against an Australian domain name registrar was used to steal credentials. The credentials were then used to access and change DNS records on a server. Those records propagate through the DNS hierarchy, redirecting traffic until they expire. In this case, the time to live for the records was set at 24 hours.
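Here, roughly, is what spot-checking for that kind of tampering looks like. The sketch below is mine, not anything published by the victims, and it assumes the third-party dnspython package (version 2.x); it simply asks the public DNS what a domain currently resolves to and how long that answer will live in caches.

```python
# A minimal sketch, assuming the dnspython 2.x package (pip install dnspython).
# It reports the current records for a domain and the remaining time to live,
# which is how long a hijacked answer would linger in this resolver's cache.
import dns.resolver

def check_record(domain: str, rtype: str = "A") -> None:
    answer = dns.resolver.resolve(domain, rtype)
    print(f"{domain} {rtype} TTL={answer.rrset.ttl}s")
    for record in answer:
        print("  ->", record.to_text())

check_record("example.com")
```

A hijacked record with a 24-hour time to live can keep serving the attacker’s address for up to a day after the registrar corrects it.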

The Domain Name System was designed to work in this distributed way so that it can handle the huge volume of global Internet traffic, translating domain names to numerical IP addresses without overwhelming a small number of servers. “It’s a feature, not a flaw,” Ferguson said. “It was designed to keep the chatter in the DNS system as local as possible.”

Ferguson calls the design ingenious, but unfortunately the bad guys understand how to use it for their own purposes. Records with very short times to live are used for “fast flux” botnets, changing the addresses of command and control servers so quickly that they are difficult to identify and shut down. Records with a long time to live, by contrast, can keep a hijacked redirect alive in resolver caches, disrupting the flow of traffic to target sites long after the original records are restored.
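On the short-TTL side, a crude illustration of the fast-flux signal appears below. The threshold is arbitrary and the dnspython dependency is again an assumption; real detection also tracks how quickly the address set itself churns.

```python
# A rough heuristic sketch, not a production detector: flag domains whose
# A-record TTL is suspiciously short. Note that a cached answer reports the
# *remaining* TTL, so for the configured value you would query the domain's
# authoritative servers directly.
import dns.resolver

FLUX_TTL_THRESHOLD = 300  # seconds; an arbitrary cutoff for illustration

def looks_fluxy(domain: str) -> bool:
    answer = dns.resolver.resolve(domain, "A")
    return answer.rrset.ttl < FLUX_TTL_THRESHOLD

for d in ("example.com", "example.org"):
    print(d, "short TTL:", looks_fluxy(d))
```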

Even when the results of a given attack are not serious, the cumulative effect of such misuse of the DNS system is an erosion of trust in Internet transactions. The best defense against this is to strengthen the weak links with fundamental Internet hygiene and basic security. In a system that is globally interconnected, there is no link in the chain that can be assumed to be unimportant.

Posted by William Jackson on Aug 30, 2013 at 6:42 AM



Outsourcing cybersecurity? Feds get behind the idea.

The recent award of a $6 billion blanket purchase agreement to 17 companies for security monitoring tools and services was a big business story and no doubt welcome news for federal contractors in this age of sequestration. It also illustrates government’s growing acceptance of the idea of security-as-a-service.

Agencies are moving from static, endpoint security tools toward a more holistic approach to cybersecurity, letting service providers handle more of the chores of continuously monitoring and assessing the security status of IT systems at the enterprise level.

It is not a wholesale shift, of course. There still are plenty of point products in use and plenty of security management being done in-house. But just a few years ago the idea of outsourcing security was controversial. Today, the Homeland Security Department is touting continuous monitoring as a service as a major step forward in protecting government systems.

The blanket purchase agreements are part of a move in government from periodic assessment and certification under the Federal Information Security Management Act to continuous monitoring. Continuous monitoring of IT systems and networks was identified last year by the Office of Management and Budget as a Cross-Agency Priority goal. DHS, which has been delegated responsibility for overseeing FISMA, established the more appropriately named Continuous Diagnostics and Mitigation program, intended as a one-stop shop for tools and services enabling monitoring.

On Aug. 12, BPAs were awarded through the General Services Administration to 17 companies to provide these tools and services. The contracts have a one-year base period with four one-year options and an estimated value of $6 billion. The goal is not only to provide a cost-effective way to acquire cybersecurity solutions, but also to create a standardized platform for automated monitoring and reporting on the state of hardware and software.

Agencies will have their own dashboards that will alert them to the most critical security risks, helping them prioritize mitigation efforts and provide near-real-time information on security status. Summary information would give DHS a similar view of the entire .gov domain.
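As a purely hypothetical sketch of that roll-up model (the CDM dashboards’ internals are not public), each agency would rank its own findings while only summary counts flow up for the governmentwide view:

```python
# Hypothetical sketch of dashboard roll-up; agency names, findings and
# severity levels are invented for illustration, not taken from CDM.
from collections import Counter

agency_findings = {
    "AgencyA": ["critical", "high", "low"],
    "AgencyB": ["high", "high"],
}

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def agency_dashboard(findings):
    # Local view: worst risks first, so mitigation can be prioritized.
    return sorted(findings, key=SEVERITY_RANK.get)

def federal_summary(all_findings):
    # Only aggregate counts roll up, giving a .gov-wide picture.
    totals = Counter()
    for findings in all_findings.values():
        totals.update(findings)
    return dict(totals)

for agency, findings in agency_findings.items():
    print(agency, agency_dashboard(findings))
print("Summary:", federal_summary(agency_findings))
```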

This is not DHS’s first foray into security as a service. In July, the Einstein 3 intrusion detection and prevention service went into operation at the first agency. It is a managed security service provided by DHS through Internet service providers. Initially deployed in 2004, it has advanced from network traffic analysis to automated blocking of malicious traffic. The Veterans Affairs Department was scheduled to become the second agency to turn on the service in August, with others coming online as ISPs are ready to accept them.

Both of these trends — the move from static evaluation to continuous monitoring and letting service providers handle enterprise level tasks — could go a long way toward improving federal cybersecurity.

For more than a decade FISMA has provided a framework for IT security, and agencies have struggled to improve their security postures while complying with the law’s requirements. Almost from its inception in 2002 there have been calls for FISMA reform to move agencies away from focusing on compliance and toward actually improving security. Despite these calls, successive Congresses mired in partisan gridlock have been unable to provide reform.

Recent developments are evidence that FISMA’s supporters might be right, however. The problem is not in the law, which has always called for risk-based security and continuous (or near continuous) monitoring of systems, but with oversight that has placed more importance on compliance than results.

Not everything has been fixed. Statutory responsibility for overseeing FISMA still lies with OMB rather than DHS. And neither Einstein 3 nor the Continuous Diagnostics and Mitigation program has been in place long enough to show results. But the administration is demonstrating practical creativity in evolving federal cybersecurity.

Posted by William Jackson on Aug 23, 2013 at 6:40 AM



Microsoft issues fix for resurrected Ping of Death

The latest round of patches from Microsoft includes a fix for an ICMPv6 vulnerability in all of the company’s operating systems that support IPv6.

The vulnerability, rated “important,” is an IPv6 version of the old Ping of Death, a denial of service attack that originally was fixed more than a decade ago. The current version was reported by Symantec’s Basil Gabriel, and no public exploits of it had been reported at the time Microsoft released the security bulletin on Aug. 13. 

But it is one more reminder that, whether or not an agency has deployed IPv6 on its network, modern operating systems support the new Internet Protocols out of the box, and network administrators need to watch for traffic using them.

The ICMPv6 vulnerability was one of eight security bulletins in Microsoft’s Aug. 13 Patch Tuesday release.  Three were rated critical and five important.

ICMP, the Internet Control Message Protocol, is a utility for error reporting and diagnostics used in IP networks, and is implemented in Version 6 as well as Version 4 of the Internet Protocols. One of its functions is pinging — using an echo request packet to measure the time of a round trip for a message to a specified IP address. Like many other denial of service attacks, a ping flood uses a high volume of these packets to overwhelm a target. But it was found in the 1990s that a single malformed ping packet larger than the size allowed in IPv4 could cause a buffer overflow when it was reassembled by the host operating system, causing it to crash.
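In outline, both the bug and the fix come down to a bounds check during fragment reassembly. The sketch below is illustrative only; the real logic lives in operating system network stacks, written in C, not Python.

```python
# Illustrative sketch only, not any operating system's actual code.
# A fragment claims an offset and a length; fragments that individually
# look legal can claim positions that reassemble past the 65,535-byte
# maximum an IPv4 packet's 16-bit length field allows, overflowing a
# buffer sized to that limit.
MAX_IP_PACKET = 65535  # bytes

def reassembled_size(fragments):
    """fragments: iterable of (offset, length) pairs from fragment headers."""
    return max(offset + length for offset, length in fragments)

def safe_to_reassemble(fragments) -> bool:
    # The fix, in essence: drop (or correctly size memory for) any datagram
    # whose fragments claim a total beyond the protocol maximum.
    return reassembled_size(fragments) <= MAX_IP_PACKET

attack = [(0, 1480), (65528, 100)]   # claims 65,628 bytes on reassembly
print(safe_to_reassemble(attack))    # False: drop it instead of crashing
```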

The IPv4 flaw was fixed in most operating systems by 1998, but Gabriel found that at least some operating systems had the same problem reassembling oversize packets under ICMPv6. The flaw is not in the ICMPv6 protocol itself, which is a required part of IP networking, but in its implementation, and it affects Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012 and Windows RT. As Microsoft describes it, “the vulnerability is caused when the TCP/IP stack does not properly allocate memory for incoming ICMPv6 packets.”

The patch corrects memory allocation while processing these packets, and the problem also can be handled by firewalls that detect and block the malformed packets. So with a properly configured firewall and an updated OS, the resurrected Ping of Death should not be a problem. It does offer a reminder, however, that IPv6 will present a host of security challenges. Some will be new and unique to the new protocols, and some will be recycled versions of problems already addressed in IPv4.

Until recently, the surest way to dodge challenges like this was to avoid IPv6 altogether. That tactic is quickly becoming impractical, and soon will be impossible. Current operating systems and other technologies support IPv6 out of the box, and many prefer the new protocols by default, making it difficult to opt out. With the pool of unassigned IPv4 addresses nearly depleted, future growth of the Internet will be in the IPv6 address space, and networks will have to accommodate the new traffic.
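The default preference is easy to see from any dual-stack machine. This standard-library sketch lists the addresses a name resolves to; on a host with working IPv6, getaddrinfo typically returns the IPv6 results first, which is why applications can end up using the new protocols without anyone opting in.

```python
# A minimal sketch using only the standard library: list the address
# families a host name resolves to on this machine.
import socket

def address_families(host: str, port: int = 80) -> None:
    for family, _, _, _, sockaddr in socket.getaddrinfo(host, port):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(label, sockaddr[0])

address_families("www.google.com")
```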

All of this aside, federal agencies are under orders to enable IPv6 on their networks. Starting as early as possible, with a security plan in place, will help make the process less risky.

Posted by William Jackson on Aug 16, 2013 at 9:10 AM



Threat-info sharing: Still broken after all these years

Comfoo, a Chinese Trojan that was used in the breach of RSA back in 2010, dates back at least to 2006 and remains in wide use more than three years after it was exposed, according to research by the Dell SecureWorks Counter Threat Unit.

Why has Comfoo been successful for so long?

“Not enough people are sharing information,” said Joe Stewart, CTU’s director of malware research. Because people hold onto threat data, rather than share it, malware owners are able to use the same tools for years.

Stewart and his partner, senior security researcher Don Jackson, suspect that the federal government probably already knew much of what they have spent the last two years finding out about the threat landscape, but because the information was classified, the research had to be duplicated in the private sector.

It’s not that the government doesn’t want to help, Stewart said. “The government people I’m talking to say they are trying to get to the point that they can share the information, but they aren’t there yet.”

“There have been discussions” with government officials about Comfoo “that have gone nowhere fast,” because of the classified information involved, Jackson said. “If we had known the same thing they knew, a lot less damage could have been done.”

Some threat information is being shared, of course. There are industry sector Information Sharing and Analysis Centers — ISACs — that allow companies to come together and assess risks, with some government participation. And there are industry working groups targeting specific challenges. But the success of these efforts so far has been limited, said Kathleen M. Moriarty, global lead security architect at EMC Corp., the parent company of RSA.

“Organizations today rely on information-sharing processes that are so manually intensive, duplicative and inefficient that they cannot scale to meet critical computer network defense requirements,” she writes in a paper on threat intelligence sharing.

The problem with most of these efforts is not a lack of information being made available, but how to make it useful, she said: deciding “what to share with who.” Merely sharing is not enough. Threat information needs to be actionable and usable in automated responses. “You have to have a business problem you are going to solve,” she said.
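What “actionable” means in practice is machine-readable indicators that defenses can consume without a human translating them. The format below is invented for illustration; real exchanges use emerging standards such as STIX and TAXII.

```python
# A hypothetical sketch of automated indicator handling; the field names
# are invented, not drawn from any real sharing standard.
import json

shared_indicator = json.loads("""
{
    "type": "domain",
    "value": "bad.example",
    "confidence": "high",
    "recommended_action": "block"
}
""")

def matches(indicator: dict, observed_domains: list) -> list:
    # Return observed domains matching the indicator, so a gateway could
    # act on them, for example by blocking, without a human in the loop.
    if indicator["type"] != "domain":
        return []
    return [d for d in observed_domains if d == indicator["value"]]

print(matches(shared_indicator, ["ok.example", "bad.example"]))
```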

She cited examples of how sharing can be effective, among them the efforts of the Messaging, Malware and Mobile Anti-Abuse Working Group, a collaborative effort of large email service providers, and the Anti-Phishing Working Group. By providing clearinghouses for actionable data, these groups help their industries and allow security vendors to build that information into their products.

But these models presuppose formal sharing already in place, which is not always the case. A lot of threat sharing is informal, back-channel and bottom-up, especially with government.

“I think all governments are interested in helping,” Moriarty said. But there are barriers of trust, policy and law.

And turf. “Turf battles are nothing new in government,” Mark Weatherford, a former Homeland Security Department cybersecurity official now with the Chertoff Group, said in a recent Black Hat panel discussion. “In Washington, power is everything, and information is power.”

There are efforts to break these barriers, such as DOD’s Defense Industrial Base (DIB) program to share classified information with contractors. “The DIB pilot worked well, except that the information is classified,” which limits how it can be shared or used, said retired Adm. William Fallon, former commander of the U.S. Central Command.

Participants in the Black Hat discussion agreed on two things. First, information sharing is improving, but the remaining challenges still put defenders at a disadvantage against the offense. Second, they all concurred with an audience member’s assertion that “pain and humiliation is a great motivator.”

It is likely to take a cyber disaster to effectively change the information sharing landscape.

Posted by William Jackson on Aug 12, 2013 at 9:07 AM