People love or hate peer-to-peer networking for many of the same reasons: It offloads bandwidth demands from content providers to end users; it can provide a convenient way to copy and share copyrighted material, with or without permission; and it effectively turns your computer into someone else’s server.
These factors can cut two ways, so the House of Representatives opened a can of worms when it recently banned use of the Spotify P2P music service. The decision, reported by Politico on Jan. 31, apparently is part of a broad ban on P2P technology within the chamber, but the Chief Administrative Office, which oversees the House’s IT services, isn’t saying whether there are specific security concerns with Spotify.
The move drew quick criticism from the industry. The Recording Industry Association of America, long at the forefront of the fight against unauthorized file-sharing, sent a letter to Chief Administrative Officer Daniel J. Strodel pointing out that “Spotify is a licensed, secure online music streaming service,” one of dozens of authorized services that have RIAA’s blessing. The letter calls the ban a problem that needs to be fixed.
Daniel Castro, senior analyst with the Information Technology and Innovation Foundation, called the ban “haphazard,” and said in a statement that “I have yet to see any evidence from the CAO that using this music service presents a credible security risk.”
Peer-to-peer networking is not an inherent risk in itself, Castro said in an interview, noting that there is risk in any application or network connection. “You never know who controls the connection on the other side,” he said. But, “Spotify is a reputable company.”
Spotify does properly license its music and does appear to be reputable. But there is an additional layer of uncertainty and risk that comes with P2P networking that merits caution. The US-CERT warns that “P2P applications introduce security risks that may put your information or your computer in jeopardy,” and advises that “the best way to eliminate these risks is to avoid using P2P applications.”
But peer-to-peer networking has become a fact of life that now is difficult to avoid. In fact, you might be using it without knowing it. CNN generated some heat in 2009 when it used the P2P application Octoshape Grid Delivery to deliver online video coverage of President Obama’s first inauguration. It wasn’t exactly a secret — users of CNN.com Live were prompted to click “yes” to install an Adobe Flash Player plug-in for “faster, better video.”
The network describes Octoshape as a technology “to deliver higher quality video.” But a closer look shows that it is a P2P application that can take video from any user and deliver it to any other user. So if you watched the inaugural address online, your video might have been coming from someone else’s PC rather than from CNN. And someone else might have been watching your video stream. That upset some people.
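To make the pattern concrete, here is a toy sketch of how peer-assisted delivery generally works. This is not Octoshape’s actual protocol, and every name in it is invented for illustration, but it shows why a viewer’s machine can end up acting as a server for other viewers.

```python
# Toy illustration of peer-assisted content delivery. Hypothetical names;
# NOT Octoshape's protocol, just the general pattern such plug-ins use.
import random

class PeerAssistedClient:
    def __init__(self, origin_fetch, peers):
        self.origin_fetch = origin_fetch  # function: chunk_id -> bytes, hits the CDN/origin
        self.peers = peers                # other viewers we can pull chunks from
        self.cache = {}                   # chunks we hold and may serve to others

    def get_chunk(self, chunk_id):
        # Ask a few random peers first; each peer is just another viewer like us.
        for peer in random.sample(self.peers, min(3, len(self.peers))):
            chunk = peer.serve_chunk(chunk_id)
            if chunk is not None:
                self.cache[chunk_id] = chunk
                return chunk
        # Fall back to the origin server only if no peer had the chunk.
        chunk = self.origin_fetch(chunk_id)
        self.cache[chunk_id] = chunk
        return chunk

    def serve_chunk(self, chunk_id):
        # This is the part that turns your PC into someone else's server:
        # chunks you have already received are handed out to other viewers.
        return self.cache.get(chunk_id)
```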
Peer-to-peer has come a long way since the days of rogue services such as Napster, which created an uproar in the music and movie industries because its users shared without concern for copyright. There also were serious security concerns. A study by the U.S. Patent and Trademark Office back in 2006 found that five popular P2P applications of the time not only allowed sharing of files that users had downloaded via the apps, but also allowed users to browse anything on another user’s drive and download any file.
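One lesson from that study still holds: it pays to audit exactly what a file-sharing client exposes. The snippet below is a minimal sketch, with a made-up share path, that simply walks whatever directory a client is configured to share and lists what other peers would be able to see; point it at a drive root and the problem becomes obvious.

```python
# Minimal sketch: list every file a P2P client's shared folder would expose.
# SHARE_ROOT is a made-up example path; set it to whatever directory your
# client is actually configured to share.
import os

SHARE_ROOT = os.path.expanduser("~/Shared")

exposed = []
for dirpath, dirnames, filenames in os.walk(SHARE_ROOT):
    for name in filenames:
        exposed.append(os.path.join(dirpath, name))

print(f"{len(exposed)} files would be visible to other peers under {SHARE_ROOT}")
for path in exposed[:20]:  # show a sample
    print("  " + path)
```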
One would hope that modern file-sharing schemes operating with the blessing of RIAA do not have such blatantly malicious components. Spotify’s terms of agreement prohibit: “using the Spotify service to import or copy any local files you do not have the legal right to import or copy in this way,” as well as using it for spamming, phishing or distributing malware.
But Spotify is not perfect. In 2008 it found and fixed a bug that could let intruders acquire passwords and other user information. The company might prohibit improper uses of its service, but can it stop them? That is impossible to say without looking at the software. And as Castro admitted, “I haven’t done a security audit of Spotify.”
Most assessments of peer-to-peer applications, from US-CERT to the SANS Institute, conclude that turning your computer into someone else’s server entails risk. These risks presumably can be mitigated or avoided, but without good evidence that this has been done, caution in using or allowing P2P is warranted.
Posted by William Jackson on Feb 01, 2013 at 9:39 AM
We tend to think of the Internet as part of a virtual world — cyberspace — in which battle is continuously being waged between hackers and defenders using the 1s and 0s of binary code. It’s easy to forget that the Internet relies on a physical infrastructure that can break.
As Ted Stevens, Alaska’s late Republican senator, famously pointed out, the Internet is a series of tubes. When one of them breaks, your Internet connection can go dark.
The latest State of the Internet report from Akamai noted a concerted wave of distributed denial of service attacks in the third quarter of last year, some producing traffic levels as high as 65 gigabits/sec. But it also noted four disruptions in that quarter that really did break the Internet, at least temporarily, but which probably had nothing to do with DDOS attacks.
Lebanon suffered an outage last July that took the country virtually offline for several hours and that was attributed to problems with a submarine cable in the Mediterranean between Lebanon and Cyprus on which it depends for Internet connectivity. Lebanon reportedly has plans for a second submarine cable to provide more bandwidth and back-up connectivity, but it has not yet appropriated money for it.
A month later, Jordan saw sharp drops in its Internet connectivity, the result of what reportedly was a cut in the power supply to the country’s main Internet service provider. An Internet blackout in Syria in July also amounted to a denial of service, but one apparently carried out by Syria’s own government: local Internet provider networks routed through the state-affiliated Syrian Telecommunications Establishment were removed from the global routing table. This brief outage was neither the first nor the last time the government effectively pulled the plug on the nation’s Internet.
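From the outside, that kind of blackout shows up as prefixes simply vanishing from the global table. As a rough illustration, the sketch below asks RIPE’s public RIPEstat service whether a prefix is still being announced; the endpoint exists, but the shape of the response is an assumption here, so the script just prints whatever visibility data comes back rather than relying on particular fields.

```python
# Rough sketch: query RIPEstat's routing-status endpoint to see whether a
# prefix is still visible in the global routing table. The field names below
# are assumptions; inspect the raw JSON before building anything on them.
import json
import urllib.request

PREFIX = "192.0.2.0/24"  # placeholder; substitute the network you care about

url = "https://stat.ripe.net/data/routing-status/data.json?resource=" + PREFIX
with urllib.request.urlopen(url, timeout=30) as resp:
    result = json.loads(resp.read().decode("utf-8"))

data = result.get("data", {})
print("Routing status for", PREFIX)
print(json.dumps(data.get("visibility", data), indent=2))
# When a government or a backhoe takes a network off the air, visibility
# drops toward zero peers; the prefix simply disappears from the table.
```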
The most high-profile outage last year was at Go Daddy, the Internet registrar and Web hosting company, which in September was knocked out for five hours, leaving as many as 54 million domain names unavailable. The hacktivist collective Anonymous quickly claimed credit, but Go Daddy blamed it on internal network problems that corrupted router data tables, eventually exhausting its resources. In other words, a self-inflicted denial of service.
Not every outage is an attack. Sometimes a tube breaks.
Posted by William Jackson on Jan 30, 2013 at 9:39 AM
We now are in the opening weeks of a new Congress, and several cybersecurity bills already have been introduced, aimed primarily at improving cybersecurity education and protecting critical infrastructure. It is just a matter of time before FISMA reform is again brought up.
At 11 years old, the Federal Information Security Management Act of 2002 is well into middle age for an IT law — in fact, it’s probably moving into old age — so it is due for a legislative update. When Congress does address the issue, it should move cautiously, taking the time to evaluate what is right about FISMA and what could be improved, and looking at what agencies have been doing right in securing their information systems.
Moving cautiously does not mean stalling. Any number of FISMA reform bills have been introduced in past sessions, only to die without making it to the floor. But Congress should take the time to ensure that any new law is a clear improvement over the existing one.
FISMA has always had its detractors, but it has proved to be a robust law. One of its strengths has been its ability to evolve through non-legislative means. Over the years, the agencies overseeing it have shifted focus away from static compliance and toward risk management, continuous monitoring and real-time awareness. In the past year or so, the National Institute of Standards and Technology has updated its guidelines on risk assessment (Special Publication 800-30 Rev. 1, revised in Sept. 2012), security controls (SP 800-53 Rev. 4, draft revision issued in February 2012) and continuous monitoring (SP 800-137, issued in September 2011).
In 2010, the Office of Management and Budget designated the Homeland Security Department the lead agency for establishing cybersecurity metrics, and by 2011 overall compliance had increased from 62 percent to 74 percent. DHS introduced CyberScope for automated FISMA reporting in 2010, and its reporting guidelines for fiscal 2013 continue an increased emphasis on continuous monitoring.
This does not mean that everything is all right with FISMA. A 2012 survey of federal officials by nCircle showed that IT security still is focused on compliance rather than risk, which has been a complaint against FISMA from the beginning. As has been amply demonstrated over the last decade, compliance does not equal security.
But the problem with FISMA has been in its implementation rather than its goals. Before Congress fiddles too much with the act, lawmakers should have a good idea of how that implementation has improved and what the impact has been, and what practices have actually improved security in agencies. It may be an old law, but it’s possible that FISMA needs only a tune-up rather than a major overhaul.
Posted by William Jackson on Jan 28, 2013 at 9:39 AM
It isn’t just your imagination or media hype — denial-of-service attacks were more common in 2012 than ever before. Prolexic Technologies logged a 53 percent increase in the attacks for last year over the year before, and the largest single culprit seems to be the itsoknoproblembro DDOS toolkit.
According to the security company’s most recent quarterly report on DDOS activity, the attacks are becoming not only more common but also more powerful, and the botnets that support them are more resilient. Itsoknoproblembro was used to launch high-profile distributed attacks against banking companies in late 2012 and had a role in most of the attacks analyzed by the company in the fourth quarter. A number of government agencies also were among the organizations targeted.
Prolexic provides protection against DDOS attacks, absorbing or dropping attack traffic before it reaches its targets. It regularly analyzes attack data to report on trends.
Although most of the high-profile attacks caused some disruption in services, they generally failed to take their targets offline completely. But defending against them is becoming more challenging as itsoknoproblembro and its botnets evolve. The average volume of attack traffic grew to 5.9 gigabits/sec in the last quarter of 2012, up from 5.2 gigabits/sec a year earlier, and the company recorded seven high-bandwidth attacks of 50 gigabits/sec or more.
One of the most interesting trends is the sharp increase in DDOS attacks aimed at Web applications rather than the network. Although network attacks still account for 75 percent of DDOS attacks, the number of application layer attacks is growing at a faster rate. Application layer attacks grew by 30 percent in the fourth quarter of 2012 over the same quarter a year earlier, and jumped by more than 70 percent over the previous quarter.
What does this mean for administrators defending their systems from these attacks? For one thing, application attacks are likely to be stealthier, because they rely on malformed requests to specific applications that slowly consume a server’s resources rather than on sheer volume directed against a network. While you’re watching for barbarian hordes to attack your gates, individual intruders might already be quietly chipping away inside your walls.
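Spotting those intruders means watching connection behavior rather than bandwidth graphs. As a minimal sketch, assuming an access log whose first field is the client IP and whose last field is the request duration in seconds (adjust to your own server’s format), something like this would flag clients holding an unusual number of slow requests open:

```python
# Minimal sketch: flag clients with many unusually slow requests, the typical
# signature of an application-layer attack rather than a bandwidth flood.
# Assumes client IP is the first log field and duration (seconds) the last.
import sys
from collections import Counter

SLOW_SECONDS = 30   # a request held open longer than this counts as "slow"
ALERT_COUNT = 50    # alert when a single client racks up this many slow requests

slow_by_ip = Counter()
with open(sys.argv[1]) as log:
    for line in log:
        fields = line.split()
        if len(fields) < 2:
            continue
        try:
            if float(fields[-1]) >= SLOW_SECONDS:
                slow_by_ip[fields[0]] += 1
        except ValueError:
            continue  # line without a numeric duration; skip it

for ip, count in slow_by_ip.most_common():
    if count >= ALERT_COUNT:
        print(f"possible application-layer attack from {ip}: {count} slow requests")
```

A real deployment would also look at concurrent connections and request patterns, but the principle is the same: low volume, long duration.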
It also is a reflection of the continuing cat-and-mouse game going on between attackers and defenders, with methods and vectors of attack rapidly shifting. The itsoknoproblembro kit evolved throughout the year, modifying file names and methods for executing attacks to evade detection and remediation. Defenders were able to keep up with these changes, but have not been able to get out ahead far enough to stop the attacks.
This also is reflected in the botnets being used to deliver attacks. “Some of the newer botnets have resilient command and control architectures where individual bots can become command and control servers,” researchers found. “This means that for practical reasons the individual bots themselves must ultimately be identified and removed.”
Taking down individual bots can be a daunting task in countries where there is little official cooperation. Unfortunately, that is where many of the bots reside. China was the top source of attack traffic through last year, by a commanding margin. “Prolexic expects that despite continued efforts in bot takedowns, many new botnets will emerge and there will remain a significant number of active bots for the foreseeable future,” the report concludes.
So keep watching the gates and keep an eye on your applications. This year could be a rocky one, especially if you have ticked anyone off.
Posted by William Jackson on Jan 18, 2013 at 9:39 AM