
The NSA wants to be your backdoor man

Distrust of the National Security Agency has deep roots. As far back as 1976, many believed that the code-breaking agency had slipped a backdoor into the new Data Encryption Standard, the approved algorithm for government encryption. For years, the suspicions were met with stony silence. Then, 35 years later, the NSA came clean.

The agency contributed changes to the proposed design, but left no backdoors or other surprises, Richard “Dickie” George, then technical director of NSA’s information assurance directorate, told an audience at the RSA Conference in 2011. “We’re actually pretty good guys,” George said. “We wanted to make sure we were as squeaky clean as possible.”

Now some of the squeak is wearing off that clean. No one doubts that the NSA is good at breaking codes. But the latest revelations from the Snowden files seem to confirm what many have long suspected: The NSA knows that it is easier to break a code when someone gives you the keys. Documents published by the New York Times describe a Signals Intelligence program to “actively engage the U.S. and foreign IT industries to covertly influence and/or overtly leverage their commercial products’ designs.”

A goal of the program is to “insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets,” and to “influence policies, standards and specifications for commercial public-key technologies.”

In other words, to install backdoors in commercial products.

There is a lot of outrage about the disclosure, but little surprise. Few people have taken the NSA’s assertions of the sanctity of commercial products seriously. The NSA seems proud of its efforts at subverting the security of personal communications. The project is in line with the Comprehensive National Cybersecurity Initiative, the NSA said in its 2013 budget request, because it invests in corporate partnerships and cuts costs by exploiting existing sources of intelligence.

Most of us assumed that the public-private partnerships advocated in the CNCI were intended to strengthen cybersecurity and privacy. Live and learn.

To Chris Wysopal, chief technology officer at the application security company Veracode, what is surprising about the latest revelations is not so much that the NSA apparently is tampering with products. Everyone expects them to do that, he said. “What is eye-opening is that they are tampering with standards.” That would weaken all technology built to those standards, including that used by the U.S. government.

Although the NSA has expressed its desire to weaken standards, there is little evidence to date that it has managed to do so, Wysopal said. But there may be one exception. In 2007, researchers found weaknesses in Dual_EC_DRBG, a pseudorandom number generator developed by the NSA and included in a NIST cryptographic standard for government use. It was immediately suspected that the flaw could have been intentional. Intentional or not, “in this case, it was detected and not used,” Wysopal said.

Since then there have not been similar discoveries in public crypto standards. And that underlines the greatest challenge in inserting backdoors through standards. As Dickie George told his audience of crypto professionals in 2011, “I don’t think we were good enough to sneak things in that you guys wouldn’t have found.”

Still, absence of evidence is not evidence of absence. We don’t know what we still don’t know.

Posted by William Jackson on Sep 06, 2013 at 11:58 AM



Syrian Electronic Army's attacks expose the Internet's weak links

The Syrian Electronic Army has been at it again. Most recently, it targeted the online presence of the New York Times and Twitter, redirecting traffic to pro-Syrian Web pages. And as the Obama administration publicly contemplates military action against the Assad regime, it is a safe bet that the hacktivists will be watching for opportunities in the .gov domain.

(UPDATE: Over the weekend, the SEA reportedly attacked a Marine Corps recruiting website, redirecting visitors to a message appealing to U.S. soldiers not to attack Syria.)

We still don’t know much about the SEA, but the attacks are — unfortunately — well known.

“This is really not new,” said Paul Ferguson, vice president of threat intelligence at Internet Identity. “It’s happening with alarming frequency.”

In this case the attackers modified Domain Name System records to redirect traffic to propaganda pages. “It didn’t cause a lot of havoc,” Ferguson said. “It could have been worse.”

But the more serious issue is that attackers are leveraging low-level exploits, in this case a phishing attack against a domain name registrar, to escalate attacks and hopscotch to third-party targets. By taking advantage of the weakest link in the chain of Internet services, attackers can move up the chain and past the defenses of more important targets. This time it was the SEA against the New York Times and Twitter. In the past it has been China going after Lockheed Martin through RSA. Regardless of the attackers, the targets and the exploits used, it is happening on a regular basis, Ferguson said. “It’s a phenomenon we see more and more of.”

In the current case, it is believed that a phishing attack was used against an Australian domain name registrar to steal credentials. The credentials were used to access and change DNS records on a server. These records can become distributed through the DNS hierarchy, redirecting traffic until they expire. In this case, the time to live for the records was set at 24 hours.

The Domain Name System was designed to work in this distributed way so that it can handle the huge volume of global Internet traffic, translating domain names to numerical IP addresses without overwhelming a small number of servers. “It’s a feature, not a flaw,” Ferguson said. “It was designed to keep the chatter in the DNS system as local as possible.”
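For administrators who want to keep an eye on this, a record’s time to live is easy to inspect. Below is a minimal sketch, assuming the third-party dnspython package is installed and using a placeholder domain; an unexpectedly changed record or TTL can be an early sign of tampering.

```python
import dns.resolver  # third-party package: dnspython

# Look up the A records for a domain and show the TTL the zone
# operator set; cached copies of these records persist, and keep
# redirecting traffic, until this many seconds elapse.
answer = dns.resolver.resolve("example.com", "A")
print(f"TTL: {answer.rrset.ttl} seconds")
for record in answer:
    print("resolves to:", record.address)
```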

Ferguson calls the design ingenious, but unfortunately the bad guys understand how to use it for their own purposes. Records with very short times to live are used for “fast flux” botnets, changing the addresses of command and control servers so quickly that they are difficult to identify and shut down. Maliciously altered records with a long time to live linger in caches, disrupting the flow of traffic to target sites long after the legitimate records are restored.

Even when the results of a given attack are not serious, the cumulative effect of such misuse of the DNS system is an erosion of trust in Internet transactions. The best defense against this is to strengthen the weak links with fundamental Internet hygiene and basic security. In a system that is globally interconnected, there is no link in the chain that can be assumed to be unimportant.

Posted by William Jackson on Aug 30, 2013 at 6:42 AM



Outsourcing cybersecurity? Feds get behind the idea.

The recent award of a $6 billion blanket purchase agreement to 17 companies for security monitoring tools and services was a big business story and no doubt welcome news for federal contractors in this age of sequestration. It also illustrates government’s growing acceptance of the idea of security as a service.

Agencies are moving from static, endpoint security tools toward a more holistic approach to cybersecurity, letting service providers handle more of the chores of continuously monitoring and assessing the security status of IT systems at the enterprise level.

It is not a wholesale shift, of course. There still are plenty of point products being used and security management being done in-house. But just a few years ago the idea of outsourcing security was controversial. Today, the Homeland Security Department is touting continuous monitoring as a service as a part of a major step forward in protecting government systems.

The blanket purchase agreements are part of a move in government from periodic assessment and certification under the Federal Information Security Management Act to continuous monitoring. Continuous monitoring of IT systems and networks was identified last year by the Office of Management and Budget as a Cross-Agency Priority goal. DHS, which has been delegated responsibility for overseeing FISMA, established the more appropriately named Continuous Diagnostics and Mitigation program, intended as a one-stop shop for tools and services enabling monitoring.

On Aug. 12, BPAs were awarded through the General Services Administration to 17 companies to provide these tools and services. The contracts have a one-year base period with four one-year options and an estimated value of $6 billion. The goal is to not only provide a cost-effective way to acquire cybersecurity solutions, but to also create a standardized platform for automated monitoring and reporting of the state of hardware and software.

Agencies will have their own dashboards that will alert them to the most critical security risks, helping them prioritize mitigation efforts and provide near-real-time information on security status. Summary information would give DHS a similar view of the entire .gov domain.

This is not DHS’s first foray into security as a service. In July, the Einstein 3 intrusion detection and prevention service went into operation at the first agency. It is a managed security service provided by DHS through Internet service providers. Initially deployed in 2004, it has advanced from network traffic analysis to automated blocking of malicious traffic. The Veterans Affairs Department was scheduled to become the second agency to turn on the service in August, with others coming online as ISPs are ready to accept them.

Both of these trends — the move from static evaluation to continuous monitoring and letting service providers handle enterprise level tasks — could go a long way toward improving federal cybersecurity.

For more than a decade FISMA has provided a framework for IT security, and agencies have struggled to improve their security postures while complying with the law’s requirements. Almost from its inception in 2002 there have been calls for FISMA reform to move agencies away from focusing on compliance and toward actually improving security. Despite these calls, successive Congresses mired in partisan gridlock have been unable to provide reform.

Recent developments are evidence that FISMA’s supporters might be right, however. The problem is not in the law, which has always called for risk-based security and continuous (or near continuous) monitoring of systems, but in oversight that has placed more importance on compliance than results.

Not everything has been fixed. Statutory responsibility for overseeing FISMA still lies with OMB rather than DHS. And neither Einstein 3 nor the Continuous Diagnostics and Mitigation program has been in place long enough to show results. But the administration is demonstrating practical creativity in evolving federal cybersecurity.

Posted by William Jackson on Aug 23, 2013 at 6:40 AM



Microsoft issues fix for resurrected Ping of Death

The latest round of patches from Microsoft includes a fix for an ICMPv6 vulnerability in all of the company’s operating systems that support IPv6.

The vulnerability, rated “important,” is an IPv6 version of the old Ping of Death, a denial of service attack that originally was fixed more than a decade ago. The current version was reported by Symantec’s Basil Gabriel, and no public exploits of it had been reported at the time Microsoft released the security bulletin on Aug. 13. 

But it is one more reminder that, whether or not an agency is running IPv6 on its network, modern operating systems support the new Internet Protocols out of the box, and network administrators need to be aware of traffic using them.
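A simple first step is to check what the local stack is already doing. Here is a minimal sketch using only Python’s standard library, with a placeholder hostname:

```python
import socket

# Was this Python build compiled with IPv6 support?
print("IPv6 support:", socket.has_ipv6)

# Ask the resolver for every address of a host. On a dual-stack
# machine the AAAA (IPv6) answers often come back alongside, or
# ahead of, the IPv4 ones, which is why a host may start talking
# IPv6 without anyone deliberately enabling it.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```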

The ICMPv6 vulnerability was one of eight security bulletins in Microsoft’s Aug. 13 Patch Tuesday release.  Three were rated critical and five important.

ICMP, the Internet Control Message Protocol, is a utility for error reporting and diagnostics used in IP networks, and is implemented in Version 6 as well as Version 4 of the Internet Protocols. One of its functions is pinging — using an echo request packet to measure the time of a round trip for a message to a specified IP address. Like many other denial of service attacks, a ping flood uses a high volume of these packets to overwhelm a target. But it was found in the 1990s that a single malformed ping packet larger than the size allowed in IPv4 could cause a buffer overflow when it was reassembled by the host operating system, causing it to crash.
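To make the mechanics concrete, here is a minimal sketch of an ordinary, well-formed ICMPv6 echo request, assuming the third-party scapy package and a placeholder documentation address. The Ping of Death abuses this same echo mechanism by fragmenting an oversized payload that crashes a buggy reassembler.

```python
# Requires the third-party scapy package and usually root privileges.
from scapy.all import IPv6, ICMPv6EchoRequest, sr1

# Build a single, well-formed ICMPv6 echo request and wait for the
# echo reply. 2001:db8::/32 is a reserved documentation prefix.
packet = IPv6(dst="2001:db8::1") / ICMPv6EchoRequest(data=b"hello")
reply = sr1(packet, timeout=2, verbose=False)
print("reply received" if reply else "no reply")
```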

This was fixed in most operating systems by 1998, but Gabriel found that at least some operating systems had the same problem reassembling oversize packets under ICMPv6. The flaw is not in ICMPv6 itself, which is a required part of IPv6 networking, but in certain implementations; it affects Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012 and Windows RT. As Microsoft describes it, “the vulnerability is caused when the TCP/IP stack does not properly allocate memory for incoming ICMPv6 packets.”

The patch corrects memory allocation while processing these packets, and the problem also can be handled by firewalls that detect and block the malformed packets. So with a properly configured firewall and an updated OS, the resurrected Ping of Death should not be a problem. It does offer a reminder that IPv6 will present a host of security challenges, however. Some will be new and unique to the new protocols, and some will be recycled versions of problems already addressed in IPv4.

Until recently, the surest way to dodge challenges like this was to avoid IPv6 altogether. This tactic is quickly becoming an impractical — and soon impossible — solution. Current operating systems and other technologies support IPv6 out of the box, and many prefer the new protocols by default, making it difficult to opt out. With the depletion of new IPv4 addresses available for assignment, future growth in the Internet will be in the IPv6 address space, making it necessary for networks to accommodate the new traffic.

All of this aside, federal agencies are under orders to enable IPv6 on their networks. Starting as early as possible and doing it with a security plan in place will help make the process less risky.

Posted by William Jackson on Aug 16, 2013 at 9:10 AM