Old hacks never die, they just attack new systems
50-year-old vulnerabilities still plague applications, and the latest security can be beaten
- By William Jackson
- Feb 10, 2010
As the world becomes increasingly dependent on information technology and digital communications, persistent vulnerabilities — some of which have been known for 50 years — continue to expose the world’s networks and applications to attacks.
“In 2009, the most notable trend is the continued use of existing attack techniques despite the security industry’s awareness of these vulnerabilities,” concluded a Global Security Report that Trustwave released at the Black Hat Federal Briefings in Washington.
Nicholas Percoco, senior vice president of Trustwave’s SpiderLabs, said enterprise administrators are overlooking basic security threats while chasing the newest vulnerabilities. Meanwhile, attackers are taking advantage of the tried-and-true vulnerabilities in addition to the latest zero-day flaws.
Gregory Schaffer, assistant secretary of cybersecurity and communications at the Homeland Security Department, said at the conference that information security should be part of basic enterprise policies, but that message has not yet been heard by top executives.
“We have moved into a space where cybersecurity is central to all business functions,” he said in his keynote address. “But some of the issues we talked about a dozen years ago we are still talking about today. We haven’t made our point to those who don’t do this for a living.”
Security experts from a dozen countries gathered at the conference to immerse themselves in the bits and bytes of the latest research by engineers, analysts and hackers who deconstruct and probe for weak points in software and hardware.
While networks and applications remain vulnerable to old exploits, the latest hardware security devices also will yield their secrets to a determined attacker.
Using an electron microscope to operate at the nanometer scale and Adobe Photoshop to plan his attack, security engineer Christopher Tarnovsky was able to reverse-engineer a family of chips from Infineon Technologies AG, including its Trusted Platform Module implementation; gain access to the chip’s data bus; and listen to unencrypted code.
The work took him six months, and the effort would cost an estimated $200,000 to perform commercially, said Tarnovsky, who runs Flylogic Engineering and specializes in analyzing semiconductor security. But in the end, “I can get any piece of information stored on the chip,” he told his Black Hat audience.
Trustwave used the conference to release its latest Global Security Report, which was based on its analysis of 218 incident response investigations and nearly 1,900 penetration tests conducted in 48 countries in 2009. As companies devote their attention to countering the latest vulnerabilities and defining policies for new threats such as social networking and cloud computing, attackers are using old vulnerabilities to compromise networks and applications — and know they won’t be detected, Percoco said. The average lapse between an initial breach and its detection was 156 days, the report states. In some cases, the lapse was close to two years.
The top 10 vulnerabilities discovered in penetration tests on external networks dated from as early as 1979 (using default or easily discovered authentication credentials) to 2008 (Domain Name System cache poisoning), with most of them dating to the 1990s. On internal networks, three of the top vulnerabilities dated from the 1970s, the earliest from 1974 (cryptographic keys stored alongside encrypted data). The most recent was 2006 (virtual network computing authentication bypass).
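The 1974-era flaw, storing cryptographic keys alongside the data they protect, is easy to see in miniature. The following Python sketch is illustrative only: it uses a toy XOR cipher (not real cryptography) and hypothetical record names, none of which come from the Trustwave report, to show why co-locating key and ciphertext gives an attacker who steals the record everything needed to decrypt it.

```python
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher: applying it twice with the same key restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def store_insecurely(plaintext: bytes) -> dict:
    """The anti-pattern: the key is saved in the same record as the ciphertext."""
    key = os.urandom(16)
    return {"ciphertext": xor_encrypt(plaintext, key), "key": key}

def attacker_reads(record: dict) -> bytes:
    """An attacker who obtains the record needs nothing else to recover the data."""
    return xor_encrypt(record["ciphertext"], record["key"])
```

The fix, then as now, is to keep keys in a separate, more tightly controlled store (or hardware module) so that stealing the data store alone yields only ciphertext.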
Applications contained the oldest vulnerabilities, with two dating from 1960 (authentication bypass and vulnerable third-party software). Wireless vulnerabilities were, as might be expected, generally more recent, with the oldest dating only to 1993 (lack of segmentation between wired and wireless networks) and most dating from the past decade.
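Authentication bypass, the application flaw the report traces back to 1960, is often just a logic error in which an unexpected condition "fails open." The Python sketch below is hypothetical (the user table and function names are illustrative, not from the report) and contrasts a fail-open login check with a fail-closed one.

```python
USERS = {"alice": "correct-horse"}  # illustrative credential store

def login_fail_open(username: str, password: str) -> bool:
    """BUG: an unknown username raises KeyError, and the fallback grants access."""
    try:
        return USERS[username] == password
    except KeyError:
        pass
    return True  # fails open: any unknown user is let in

def login_fail_closed(username: str, password: str) -> bool:
    """Correct: unknown users and wrong passwords are both rejected."""
    return USERS.get(username) == password
```

The lesson is decades old: authentication logic must deny by default, so that every unanticipated path ends in rejection rather than access.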
The overall pattern shows that blind trust in third parties is a huge liability and that fixing new issues as they arise is no substitute for fixing older ones, Percoco said.
Building security into hardware, such as chips, is often seen as a solution for securing data. The Trusted Platform Module is the Trusted Computing Group's specification for implementing cryptography in silicon. The chips can be used to support data protection, communications security, strong authentication, identity management, network access control and nonrepudiation. But Tarnovsky was able to penetrate the defenses of Infineon chips, which are used to implement TPM and to secure commercial products that require licensed use, such as Microsoft’s Xbox 360 game console.
The Trusted Computing Group, in a statement released after Tarnovsky’s presentation, emphasized the difficulty of the attack and the value of enabling the Trusted Platform Module where it is available.
“This work would be exceedingly difficult to replicate in a real-world environment,” the group said. “Turning on and using the TPM chip is one of the single most cost-effective steps for ensuring robust security in the PC.” TPM was intended to provide protection against software-based attacks. “The Trusted Computing Group has never claimed that a physical attack — given enough time, specialized equipment, know-how and money — was impossible. No form of security can ever be held to that standard. The TPM, as designed, offers a robust defense against complex software-based attacks.”
Tarnovsky said the task was not easy, and he praised the chips' physical defenses.
“I really like them a lot,” he said of Infineon. “The security is built in layers. It’s like Fort Knox in there,” with defenses such as optical sensors that can detect the light from optical microscopes.
Breaking in requires physical access to the chip and access to a focused ion beam workstation. The device, similar to an electron microscope but built around a beam of ions, can image at a much smaller scale than an optical microscope and can manipulate tiny needles less than a micron across. Using it, a hacker can inject conductors and insulators to rearrange a chip's circuits.
Tarnovsky began by buying chips in bulk for pennies apiece to experiment with and break. He stripped each layer off the chip to expose its topography, imaged the layers using optical and electron microscopes, and used Photoshop to layer the images so he could plan his attack through an intact chip. His target was the processing core, the chip’s central nervous system. Instructions in the core have to run in the clear; information in the rest of the chip is encrypted.
Once inside the core, “I can sit on the data bus and listen,” he said. “I can get any piece of information stored on the chip.”
Beyond putting data on individual chips at risk from a determined attacker, the technique has a second implication: once the manufacturer’s code is copied from a chip, it could be used to produce counterfeit chips, which could contain back doors. Because of its complexity, however, such an attack is not likely to become common soon.
But hardware hacking does not need to be expensive, said electrical engineer Joe Grand, president of Grand Idea Studio. In a presentation on hardware hacking in which he outlined the processes for examining and reverse-engineering chips and devices such as electronic parking meters, he said security is often ignored in hardware design. The tools for hacking those devices are affordable, and information is readily available, he said. The simplest attacks that have been known for decades still work.