“It was only a matter of time,” the security company Mandiant said about recent phishing attacks using its report on Chinese hacking as bait.
Mandiant released its report, “APT1: Exposing One of China’s Cyber Espionage Units” on Feb. 18, focusing on the activities of a group that the company says is responsible for a cyber espionage campaign against a broad range of Western companies and governments over the last seven years. The report immediately attracted worldwide attention, not all of it benign. Within two days, two apparently unrelated phishing attacks were identified using the report as bait.
“We are currently tracking the threat actors behind the activity and have no indication that APT1 itself is associated with either variant,” Mandiant wrote in its response. “Mandiant has not been compromised.”
The first attack, reported by Symantec, appears to be aimed at Japanese targets and uses an e-mail attachment titled “Mandiant.pdf.” When opened, the attachment displays the first page of the report but also delivers malicious code exploiting a vulnerability in Adobe Reader; a patch for the vulnerability was released Feb. 20. The malware communicates with a command-and-control server hosted in Korea.
The second attack, identified by researcher Brandon Dixon, targets Chinese journalists with an attachment titled “Mandiant_APT2_Report.pdf.” When opened, it exploits another Adobe Reader vulnerability, and the malicious code connects to a domain associated with earlier attacks against human rights activists.
The attacks are another example of how phishing is being refined for specific targets. Scammers have long exploited viral Internet topics in wide-scale phishing scams, trying to lure people into clicking on malicious links purportedly related to Steve Jobs’ death, an on-court outburst by tennis star Serena Williams or photos of Osama bin Laden. Now, targeted spear-phishing attacks are aimed at journalists covering a report about hacking.
The advice is old but bears repeating: Be careful opening attachments. Hashes for the malicious PDF files are available on the blogs reporting the attacks, and the hash for the genuine report is available from Mandiant’s download site. If you’d like to read the report, download it yourself. Don’t wait for someone to e-mail it to you.
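Checking a downloaded copy against a published hash is easy to automate. A minimal sketch in Python; the filename and digest in the usage comment are placeholders, not the real values:

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare a downloaded copy against the digest the vendor publishes.
# Filename and digest below are placeholders for illustration only.
# if sha256_of("APT1_report.pdf") != "9d2c...":
#     print("Hash mismatch: do not trust this copy.")
```

If the digests differ, the file is not the one the vendor published, whether because of corruption or tampering.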
Posted on Feb 25, 2013 at 11:06 AM
I recently had to have my computer disinfected, which was frustrating. My firewall is up, I keep my antivirus up to date, I’m cautious about opening e-mail and don’t click indiscriminately on links. But something got through.
A new report from Lastline, a security company that focuses on advanced malware, offers some insight into a new technique black-hat malware writers use to escape detection: having their code do busywork in a security sandbox until it is allowed out.
It should be noted that Lastline has a dog in this fight and is offering a solution to counter this new threat. But the information is still interesting.
A sandbox is a virtual environment with its own guest operating system where intercepted incoming code can be observed. If it acts maliciously or suspiciously, it can be tossed out. Observing behavior of code in a sandbox should detect and block malware regardless of whether the code or the vulnerability it exploits is already known.
The challenge for attackers, then, is to outwit the sandbox. One approach is environmental checking: malware might look for the presence of a virtual machine, or it might query well-known registry keys or files that indicate a sandbox. Other malware authors instruct their malware to sleep for a while, waiting for the sandbox to time out.
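In practice, environmental checking can be as simple as looking for files installed by common hypervisor guest tools. A minimal illustration; the two driver paths are well-documented VirtualBox and VMware artifacts, but the list is nowhere near exhaustive:

```python
import os

# Driver files installed by common guest-additions packages. Their
# presence is a strong hint the code is running inside an analysis VM.
VM_ARTIFACTS = [
    r"C:\Windows\System32\drivers\VBoxMouse.sys",  # VirtualBox
    r"C:\Windows\System32\drivers\vmhgfs.sys",     # VMware
]

def looks_like_sandbox():
    """Return True if a well-known virtualization artifact is present.

    This is the kind of check evasive malware performs, and the kind
    sandbox vendors counter by hiding or faking these artifacts.
    """
    return any(os.path.exists(p) for p in VM_ARTIFACTS)
```

The arms race follows directly: once defenders know which artifacts malware probes for, they can conceal them, and attackers move on to subtler checks.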
Security vendors have countered by looking for behavior such as queries for registry keys and by forcing sleeping code to wake up.
The latest trick by malware writers is what Lastline calls stalling code. It delays the execution of malicious code inside a sandbox, instead performing a computation that appears legitimate, rather like an intruder avoiding notice by carrying a clipboard through an office. Once the sandbox has timed out, the evasive malware is free to execute.
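The mechanics are simple. A rough sketch of the idea (illustrative, not any actual sample's code): instead of calling sleep, which a sandbox can detect and fast-forward, the code burns the observation window on plausible-looking arithmetic:

```python
import time

def stall(seconds):
    """Run out the clock with busywork instead of sleeping.

    A sleep call is easy for a sandbox to spot and skip over; a loop of
    real computation looks like legitimate work but consumes the
    sandbox's observation window just the same.
    """
    deadline = time.monotonic() + seconds
    total = 0
    while time.monotonic() < deadline:
        total = (total * 31 + 7) % 1_000_003  # pointless but plausible math
    return total

# After the stall, an evasive sample would launch its real payload,
# betting that the sandbox stopped watching long ago.
```

This is harder to counter than a sleeping sample: there is no single API call to intercept and skip, only ordinary-looking computation.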
This is not the ultimate malware; evasive techniques can be countered by better sandboxes. Also, these techniques are no good if the vulnerabilities being exploited have been patched or if the signature of the code is known. Although signature-based detection has been shown to be an inadequate defense by itself, it still works well when it works. (We’ll look later at why it doesn’t always work.)
But it is a reminder that what the mind of one man can achieve, another can overcome. No attack and no defense is perfect, and the battle goes on.
Posted on Feb 22, 2013 at 12:20 PM
The Internet has created “a golden age for intelligence collection,” says James Lewis, a fellow at the Center for Strategic and International Studies. In fact, he writes in a new paper on conflict in cyberspace, “The primary challenge for sophisticated intelligence agencies is not the collection of data, so porous are Internet-based systems, but the ability to store, process and analyze the data they have acquired.”
This is not much of a surprise in the wake of recent reports such as that from Mandiant detailing the incursion efforts by the Chinese People’s Liberation Army, believed responsible for penetrating the systems of more than 140 companies, many of them in the United States. The Mandiant study itself builds on earlier work by other security researchers. The clear message is that the Chinese are in U.S. systems, have been for some time, and are not likely to leave any time soon.
All of which raises the question: How do we protect ourselves against these attacks? Better security awareness would help. Organizations, both government and private, need to know what resources must be protected and then focus their efforts on those. Even organizations that are not targets can become vulnerable links in a chain of complex attacks and they need to protect themselves accordingly.
But relying on technology alone is not enough, Lewis says. The stakes are too high and the systems being targeted are too complex for that.
“Any analysis of cybersecurity needs to accept the fact that cyber espionage will continue,” he writes. Improving system security can discourage amateurs and criminals looking for easy money, “but advanced services, with their resources and their combined technical means, will retain an advantage. The task of cyber espionage will become more difficult, and a sophisticated opponent will still be able to achieve success.”
Government must bring to bear its intelligence, diplomatic and political resources, treating espionage as an intellectual property and trade issue rather than a cybersecurity issue, Lewis writes. “Vigorous response is the key to managing cyber espionage.”
One roadblock to this approach has been the lack of attribution — the ability to identify the ultimate source of attacks with a high degree of confidence.
But Lewis says this is a false barrier, for two reasons. First, everybody knows China is doing this; and second, this is a matter of diplomacy, not a court of law, and proof doesn’t need to be established beyond a reasonable doubt. Diplomatic pressure and economic sanctions backed by intelligence could make it politically difficult for China to continue this behavior.
What is needed is an accepted set of international norms concerning behavior in cyberspace — the kinds of norms that helped the United States survive the Cold War. The Cold War “worked,” in that the United States and the Soviet Union were able to confront each other without nuclear war because there were more or less clearly defined roles and conventions with an understanding of what could be done and how. Currently, that is missing from cyberspace.
None of this means that firewalls and vulnerability patching are not important. They are. But while system administrators raise the technical bars, the policy wonks also will have to raise the political bars.
Posted on Feb 21, 2013 at 8:25 AM
The Federal Communications Commission was dinged in a recent audit for cutting corners while upgrading network security in response to a breach.
The Government Accountability Office said that the security of the commission’s Enhanced Secured Network was compromised because the FCC did not implement appropriate security controls and follow proper procedures in project development and deployment.
But FCC countered that the ESN was an emergency response, “designed to avoid an increase in security risks posed by delays in implementation,” and that even with cutting corners, “the FCC’s network is stronger, better, and more secure than it was before the commission started these upgrade efforts.”
The case is a good example of the conflict between the requirements of auditors who evaluate regulatory compliance and the demands on frontline administrators who must deal with real-world threats while keeping systems running. The conflict is an old one and has implications for IT security. Auditors evaluate how something is done rather than what is accomplished. They focus on process and documentation, which are important because they help ensure repeatable results and keep everyone on the same page while doing a job. Results often are hard to quantify and measure, so adherence to process can be the best way to tell whether requirements have been met.
But the guys on the front lines spend a lot of time putting out fires and patching things, with little time for paperwork. Duct tape isn’t pretty, but admins do what they have to do to keep things running. Maybe they can go back and fix it properly later — after putting out the next fire. Auditors hate this. Administrators aren’t crazy about it either and would gladly change things if they had the budget, time and resources they need.
The FCC situation began with the 2011 discovery of a breach during an upgrade of the commission’s security and monitoring systems. The ESN project was the response and it was brought in under budget and on schedule. But GAO found that impact assessments had not been done to ensure that the proper security controls were being used and that the system had not been formally reauthorized for operation as required by the Federal Information Security Management Act.
FCC acknowledged these lapses but said they were necessary at the time and that it had gone back to cover these bases after ESN was up and running.
Both sides have their points. The key to the dispute lies in a single word in GAO’s conclusion: “As a result of these and other deficiencies, FCC faces an unnecessary risk that individuals could gain unauthorized access to its sensitive systems and information.” The key word is “unnecessary.”
Did FCC create an unnecessary risk? Or did the commission accept a necessary amount of risk to get a necessary fix in place as quickly as possible?
It is impossible to say without knowing the details of the breach and the fixes, which haven’t been released. But it would be wrong to conclude that a risk is unnecessary just because it could be prevented under ideal conditions. Most people go to work each day and do the best they can with the conditions at hand, which seldom are ideal.
Posted on Feb 11, 2013 at 8:13 AM