Clear and present dangers
Four key categories of cyberthreats will likely dominate the security landscape during the next year
- By William Jackson
- Oct 06, 2008
Identifying the most serious cybersecurity threats is an inexact science. How do you measure just how bad something potentially is, and how can you be sure it will still be important tomorrow?
Lists of top threats change almost daily as vulnerabilities and exploits come and go, and others turn out to be surprisingly resilient. Who would have guessed when the Storm worm first appeared in early 2007 that it would be so persistent? And you might have thought that we learned our lesson a decade ago about e-mail messages with "I love you" in the subject line, but this social-engineering trick still works today.
However, there are a handful of techniques, with a lot of overlap and interrelationships, for exploiting systemic weaknesses in the information technology environment that can broadly define the threat landscape. They include:

BOTNETS AND ORGANIZED EXPLOITS.
The phenomenon of organizing compromised computers into a network that can be used for nefarious purposes has been around for years, but it is becoming an increasingly powerful platform responsible for a growing variety of attacks. "Botnets are very much the Swiss Army knife of online miscreants," said Zulfikar Ramzan, technical director at Symantec Security Technology and Response.

WEB SITE AND WEB APPLICATION EXPLOITS.
According to one recent study, as many as 82 percent of Web sites have at least one security weakness. This is linked to the botnet phenomenon. Some experts blame the proliferation of Structured Query Language (SQL) injection vulnerabilities for the apparent rapid growth in botnets in recent months. SQL injection is a form of attack on a database-driven Web site in which the attacker executes unauthorized SQL commands by taking advantage of insecure code on a system connected to the Internet, bypassing the firewall. One out of every three vulnerabilities reported in the second quarter of 2008 was a SQL injection, said Tom Stracener, senior security analyst at the Cenzic Intelligent Analysis Lab. "There is a tremendous focus on it in the research community," he said.

VIRTUALIZATION.
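The mechanics of SQL injection are easy to see in a few lines of code. The sketch below is purely illustrative (the table, values and input are made up); it shows how splicing user input into a query string lets an attacker rewrite the query's logic, and how a parameterized query closes the hole.

```python
import sqlite3

# Toy in-memory database; the table and values are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Vulnerable pattern: user input is spliced directly into the SQL string,
# so the final query reads WHERE password = '' OR '1'='1'.
attacker_input = "' OR '1'='1"
query = "SELECT name FROM users WHERE password = '%s'" % attacker_input
vulnerable_rows = db.execute(query).fetchall()
print(vulnerable_rows)  # the injected OR clause matches every row: [('alice',)]

# Safe pattern: a parameterized query treats the input as data, not SQL.
safe_rows = db.execute(
    "SELECT name FROM users WHERE password = ?", (attacker_input,)
).fetchall()
print(safe_rows)  # no password equals the literal input string: []
```

The same principle applies to any database API: the fix is to keep the query text and the user-supplied data in separate channels.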
This is an emerging security issue, but one that is important because of the rapid adoption of virtual machines in data centers and of virtual environments for delivering applications to users. It is not that virtualization is inherently insecure, said Kurt Roemer, chief security strategist at Citrix. In fact, it can offer some security advantages. But it is neither a panacea nor a disaster. "Virtualization is just a different delivery vehicle," he said. "It does beg you to think differently in some ways."

NETWORK INFRASTRUCTURE.
As operating systems become more secure, more attention is being paid to the network and its underlying services. The recently reported vulnerability in the protocols of the Domain Name System (DNS) is a good example. If you can control the network, you can control the nodes on it. "I don't want to sound like Chicken Little here, but it is a pretty dire situation we are in," said Paul Parisi, chief technology officer at DNSStuff.
This is not a comprehensive list, and vulnerabilities and exploits will continue to come and go, requiring day-to-day attention from IT administrators and security shops. But the list addresses matters that are going to merit concern in the coming year and are worth further investigation.

Botnets and organized exploits
Networks of compromised computers are being organized into turnkey solutions for activities such as spam delivery and phishing. Criminals can outsource an infrastructure and the management of their enterprises. The distinguishing characteristic of modern botnets is that they are all about cash flow and profitability, Ramzan said.
Size matters in botnets, but in many cases, smaller is better. Although the number of compromised computers is growing, botnets are increasingly used to deliver low and slow attacks, staying under security monitoring devices' radar to maximize their return. The new twist in botnets is not their technology but the use of social engineering.
The Storm worm has worked well in expanding botnets by delivering malware through e-mail messages with compelling, often targeted subjects. But "it's getting harder to get e-mail to work," Ramzan said. The trend now is toward the use of peer-to-peer networks for downloading the files that infect PCs. Peer-to-peer networks also are being used for command and control of botnets, making it more difficult to shut down the lines of communication that deliver malicious code and marching orders to the zombies.
The job of protecting your network from infection is being complicated by a new generation entering the workforce that has grown up with computers and takes personal mobile computing for granted. But although new workers might know how to use computers, they are not necessarily savvy about how they work or the security implications that come with them.
"The new workforce has a lot more demands from a security perspective," Ramzan said. "The enterprise boundary has become amorphous. It is becoming more difficult to manage a network."
BT America, which is expanding its Multiprotocol Label Switching network offerings in this country, has recently added botnet detection to its suite of security services. Correlation engines look for anomalies and traffic patterns culled from firewalls and other network security devices that could indicate botnet activity. Suspicious events are passed along to human analysts at security operations centers in Chantilly, Va., and El Segundo, Calif.
Host agents for detecting suspicious activity are appropriate for servers and PCs, but correlating and analyzing network data is a necessary second line of defense against botnets, said Jeff Schmidt, general manager of BT America's Managed Security Solutions Group in North America.
"We believe that correlation of events across all devices is the best way to do it," Schmidt said. Millions of reported events a month can be boiled down to a few hundred anomalies that can be analyzed to identify a handful of security incidents that managers should address.
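The boil-down Schmidt describes can be sketched in miniature. The hosts, addresses and threshold below are hypothetical, and real correlation engines weigh many signals, but the core idea is the same: aggregate normalized events from many devices, then surface only the combinations that cross an anomaly threshold, such as a workstation repeatedly contacting one outside address.

```python
from collections import Counter

# Hypothetical normalized event feed from firewalls and other devices:
# (source_host, destination, event_type). Addresses are documentation-range.
events = [
    ("pc-17", "203.0.113.9", "outbound-irc"),
    ("pc-17", "203.0.113.9", "outbound-irc"),
    ("pc-17", "203.0.113.9", "outbound-irc"),
    ("pc-42", "198.51.100.4", "dns-query"),
    ("pc-03", "192.0.2.77", "http-get"),
]

# Correlate across devices: count repeated contacts per (host, destination).
pair_counts = Counter((src, dst) for src, dst, _ in events)

# Flag pairs whose contact count crosses a (made-up) beaconing threshold,
# reducing thousands of raw events to a short list for human analysts.
THRESHOLD = 3
anomalies = [pair for pair, n in pair_counts.items() if n >= THRESHOLD]
print(anomalies)  # [('pc-17', '203.0.113.9')]
```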
Because so many channels can be used to compromise the computers that are brought into botnets and because they can be used in a variety of ways that often go undetected, technology will not solve the botnet problem, Ramzan said.
"As long as these people can make money with them, the demand for botnets will continue," he said. "We have to take a big step forward to hinder their profitability." This means increasing the risk to the criminals using them, reducing the return and driving down the demand for these automated networks.

Web site and Web application exploits
Although recent security surveys show a slight decrease overall in the number of vulnerabilities being reported, a growing percentage of those vulnerabilities are occurring in Web sites and Web applications. According to Cenzic, Web applications accounted for 73 percent of reported vulnerabilities in the second quarter of this year, up 3 percent from the previous quarter and 5 percent from late 2007.
"This quarter has been the highest on record," Stracener said. "It's part of a trend that has been going on since 2006."
Cenzic reported that 70 percent of the Web applications the company analyzed used insecure communications that opened them to possible exploits during transactions, and another 70 percent contained cross-site scripting vulnerabilities, the most common injection flaw.
These findings are in line with those of WhiteHat Security, which reported that 82 percent of Web sites analyzed had at least one security issue despite a decline in the number of overall IT vulnerabilities being reported. The company said that since 2006, "the industry has seen the Web-layer rise to be the No. 1 target for malicious online attacks."
As with botnets, the motive is money, Stracener said. Although fewer vulnerabilities exist, more exploit toolkits are being developed and commercialized for the Web for an underground criminal economy.
"The world hasn't grown more secure," he said.
The overwhelming majority of reported vulnerabilities are showing up in Web applications, which accounted for 88 percent of vulnerabilities in the Cenzic study, compared with just 7 percent for Web servers, 4 percent for browsers, and 1 percent for plug-ins and Microsoft ActiveX. Most of the flaws were accounted for by SQL injection, at 34 percent, and cross-site scripting, at 23 percent.
Cross-site scripting is an attack in which the attacker inserts a malicious script into dynamically generated Web pages; the script executes when a browser renders the page. The attacker can change user accounts, steal information and poison cookies.
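A minimal sketch shows the flaw and its standard remedy. The page template and payload below are invented for illustration: when attacker-supplied text is interpolated into a page unescaped, the browser treats it as markup; escaping it first turns the script tags into inert entities.

```python
from html import escape

# Attacker-supplied "comment" carrying a script payload (illustrative only).
payload = "<script>document.cookie</script>"

# Vulnerable: raw input is interpolated into the generated page, so a
# visitor's browser would execute the script.
unsafe_page = "<p>Latest comment: %s</p>" % payload

# Safer: escaping converts the markup characters into harmless entities.
safe_page = "<p>Latest comment: %s</p>" % escape(payload)
print(safe_page)
# <p>Latest comment: &lt;script&gt;document.cookie&lt;/script&gt;</p>
```

Modern template engines escape output by default for exactly this reason; the vulnerability typically reappears when that escaping is bypassed by hand-built strings.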
WhiteHat reported that cross-site request forgery vulnerabilities broke into its top 10 list for the first time last quarter. The company estimates that 75 percent of the world's Web sites contain one.
"On a positive note, 66 percent of all vulnerabilities identified have been remediated," WhiteHat said, although the pace of remediation leaves something to be desired. In the study, the company found the average time to patch or fix HTTP response splitting vulnerabilities was 93 days, while information leakage problems required 26 days to fix.
This leaves large windows of opportunity for exploitation and underscores the need for consistent and aggressive configuration and patch management programs. It also illustrates the lack of vulnerability assessment during the Web application development process. Stracener estimated that less than 5 percent of applications undergo assessment during development.
"It's not clear that there has been improvement in that area," he said. Applications are developed under tight time constraints that do not allow for adequate testing, and the applications often become business-critical once they go live on a Web site. "They can't stop doing business and put the code into dry dock."
Noninvasive testing in a virtual environment can help in assessing the security of online code throughout its life cycle, but it is clear that the pressure on IT administrators to efficiently patch and manage online applications is growing.

Virtualization
Virtualization is a hot topic, and like all hot topics, it comes with security baggage.
A common driver for virtualization is data center optimization, reducing space and energy requirements, with security only an afterthought. "Security usually is not built in," Roemer said. "It is bolted on at the end."
Virtual machines often are looked at as if they are free, said Dave Capuano, chief marketing officer at Fortisphere, which sells management tools for virtual environments. This can lead to virtual sprawl, with new machines being rapidly added to a network, often lying dormant and unnoticed until their resources are needed.
The result is an expanding virtual infrastructure with little thought being given to configuration control, policy enforcement or management of communications among a multitude of operating systems and applications coexisting on the same hardware.
All of that puts a premium on planning when deploying virtual images that thousands of people will use.
"You'd better be sure you've got it right the first time," Roemer said. "You'd better make sure you got the right image in place and have configuration management."
Policies also must be in place to control how virtual machines communicate with one another within their new environment because they coexist within the network perimeter and are not buffered by firewalls.
Keeping track of virtual machines can be difficult, because they often lie dormant until needed. The latest release of Fortisphere's Virtual Essentials suite of management tools includes the ability to look at dormant machines so that policy enforcement can be applied before they are brought online.
Although virtualization on the back end can add new security concerns, virtual applications and desktops for the client can provide additional security, Roemer said.
"Now you've given the end user a sandboxed application that is separate from everything else on their machine," he said. The user can't screw things up, and configuration can be managed centrally. "That was all designed in when the application was provisioned."
IT managers can use an appliance or thin client running a virtual desktop to supply a suite of applications and tools without putting the data itself on the client. This could make it simple to comply with requirements from the Office of Management and Budget for securing and controlling sensitive data on mobile devices, mandated after several high-profile data breaches involving stolen laptop computers. The only things that occur on the remote device are keystrokes, mouse clicks and screen refreshes.
"The data never hits the laptop," Roemer said. "You can even control what people can copy, paste and print locally."
"Is this right for everybody?" Roemer asked, referring to virtual computing. "No." One drawback is that the user usually has to be online to use the application, although some streamed applications can be used off-line. "And there may be some reasons a user would have to have the data locally. But that should be the exception rather than the rule," he said.

Network infrastructure
Security researcher Dan Kaminsky's discovery earlier this year of a flaw in the Domain Name System protocols highlighted the vulnerability of network infrastructure to manipulation, but it was neither the only nor the first problem that could let bad guys misdirect Internet traffic.
"Cache poisoning has been a soft underbelly" of the Internet for years, Parisi said. "Dan found a way to leverage it."
DNS is crucial because it is the system for resolving common domain names to numerical Internet addresses used to locate and route traffic to and from online devices. If users cannot be sure that a DNS request has received an accurate response, they can have no confidence in the resources they are accessing.
"On the surface, it is a very simple protocol, based on trust, but it can be very complicated when you go into it," Parisi said. This complexity, plus DNS' interrelated nature, makes correcting problems difficult.
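A toy model makes the cache-poisoning threat concrete. Real resolvers are far more elaborate (they query authoritative servers over the network and honor record lifetimes), but the names and addresses below, all invented, show the essential problem: a resolver that caches whatever answer it accepts will keep serving a forged address until the entry expires.

```python
# Toy stub resolver with a cache; names and addresses are illustrative,
# drawn from the reserved documentation address ranges.
authoritative = {"bank.example": "192.0.2.10"}   # the genuine record
cache = {}

def cached_lookup(name):
    # Answer from the cache if possible; otherwise "ask" the network
    # (simulated here by the authoritative dict) and cache the answer.
    if name not in cache:
        cache[name] = authoritative[name]
    return cache[name]

first_answer = cached_lookup("bank.example")
print(first_answer)  # 192.0.2.10 -- the genuine address, now cached

# Cache poisoning: a forged response plants an attacker-controlled address.
# Every later query for the name is silently misdirected.
cache["bank.example"] = "203.0.113.66"
poisoned_answer = cached_lookup("bank.example")
print(poisoned_answer)  # 203.0.113.66
```

Kaminsky's discovery was, in essence, a fast and reliable way to get a resolver to accept that forged response in the first place.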
Despite the potential for misuse, "there have not been a lot of original exploits" since the most recent vulnerability discovery, Parisi said. But security experts have observed a lot of poking and prodding.
Parisi described the Internet, which was not designed with security in mind, as a house of straw. "Everything we do on the Internet is based on trust," and we can no longer trust it, he said. "The Internet is broken. I don't think that's too much of an overstatement."
That does not mean that there is no hope.
"IPv6 would fix a lot of this," Parisi said, but its adoption in applications to date has been marginal.
DNSSec, which provides cryptographic protection by signing DNS records so that resolvers can verify the responses they receive, also would be a step forward. Some experts describe DNSSec as a hodgepodge, others say it is fairly effective, and still others say it is merely the best we have. Regardless of opinions, to be effective, it would have to be adopted universally. Otherwise, the overall system would remain as weak as its weakest point.
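The verify-before-trust flow at the heart of DNSSec can be sketched as follows. Note the simplification: real DNSSec uses public-key signatures published in DNSKEY and RRSIG records, anchored in a chain of trust; this sketch substitutes a keyed hash (HMAC) with an invented key purely to show why a forged answer fails validation while a genuine one passes.

```python
import hashlib
import hmac

# Stand-in for the zone's signing key. DNSSec actually uses public-key
# cryptography; the shared-key HMAC here is a deliberate simplification.
ZONE_KEY = b"example-zone-signing-key"

def sign(record: str) -> str:
    # Produce a signature the zone would publish alongside the record.
    return hmac.new(ZONE_KEY, record.encode(), hashlib.sha256).hexdigest()

def validate(record: str, signature: str) -> bool:
    # A validating resolver accepts a record only if the signature checks out.
    return hmac.compare_digest(sign(record), signature)

record = "www.example. A 192.0.2.10"      # illustrative record and address
signature = sign(record)

genuine_ok = validate(record, signature)
print(genuine_ok)   # True: the signed answer verifies

forged = "www.example. A 203.0.113.66"    # attacker swaps in another address
forged_ok = validate(forged, signature)
print(forged_ok)    # False: the forgery is rejected
```

Because the attacker cannot produce a valid signature for the forged record, cache poisoning of the kind described above no longer yields answers a validating resolver will accept.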
"If .com were to adopt DNSSec and mandate signing, online commerce would stop" because setting it up is complex, Parisi said. "There are vendors scrambling to make DNS simple, and they will charge for that."
Regardless of its complexity, the U.S. government is taking initial steps toward universal deployment by putting DNSSec on the .gov domain.
The Federal Information Security Management Act already required security protocols for IT systems rated at high- or moderate-impact levels, but there was no timeline for implementing them. In August, the Office of Management and Budget issued a new policy mandating the use of DNSSec on all government systems by the end of next year.
"The federal government will deploy DNSSec to the top-level .gov domain by January 2009," OMB said. "Signing the top-level .gov domain is a critical procedure necessary for broad deployment of DNSSec, increases the utility of DNSSec and simplifies lower-level deployment by agencies."
Agencies must have plans in place to deploy DNSSec to all of their systems by December 2009.