Despite the infamous pronouncement several years ago by the Gartner Group, intrusion detection is not dead.
Even with the growth of intrusion prevention systems and other blocking and filtering tools, intrusion detection systems (IDS) are still necessary to ensure that we know what is happening on our IT systems, Italian security researcher Stefano Zanero said Thursday at the Black Hat Federal Briefings in Arlington, Va. The only thing worse than having a system compromised is being compromised and not knowing it.
And compromise probably is inevitable, he said. Failure is always an option.
Most IDSes today are signature-based: reliable tools that identify and issue alerts on known attacks. These systems rarely miss an attack for which a signature is available, rarely produce false positives, and provide specific information about what is happening on the network. But they are inflexible and offer no protection against zero-day attacks, for which signatures do not yet exist.
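The signature approach can be sketched in a few lines of Python. The rules below are invented illustrations, not real IDS signatures, but they show why the technique is precise on known attacks and blind to anything without a matching pattern.

```python
# Minimal sketch of signature-based detection: scan payloads for known
# attack patterns. The signatures here are invented illustrations, not
# real IDS rules.
import re

SIGNATURES = {
    "sql-injection": re.compile(rb"(?i)union\s+select"),
    "path-traversal": re.compile(rb"\.\./\.\./"),
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of every known signature found in the payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

alerts = match_signatures(b"GET /?q=1 UNION SELECT password FROM users")
# A payload matching no signature -- e.g. a zero-day -- produces no alert.
```

A match is unambiguous, which is why signature systems report so few false positives; a genuinely novel attack, by definition, matches nothing.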
To provide that protection, an anomaly detection IDS is needed: a system that defines normal network activity and flags deviations from it. Such a system can in theory spot zero-day attacks, but it provides no specific information about the attack being conducted, and false positives are a fact of life.
'There will always be false positives, no matter how precise and intelligent it is,' Zanero said.
The two techniques complement each other in theory, but in practice few anomaly-based IDSes are available. Zanero, in graduate research at the Politecnico di Milano's performance evaluation laboratory, helped develop an anomaly detection prototype and learned firsthand why such systems are so rare: what he produced was good enough to earn him a Ph.D., but not good enough for deployment.
The tool produced a detection rate of 70 percent with a false positive rate of 0.03 percent. The false positive rate is an order of magnitude better than other systems, but 'not good by my measure,' Zanero said.
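A quick back-of-the-envelope calculation shows why even a 0.03 percent false positive rate falls short by Zanero's measure. The daily event volume below is an assumed figure for illustration, not a number from his talk.

```python
# Why a 0.03% false positive rate is still painful: the sheer volume of
# benign events on a busy network swamps the rate. The event count below
# is an assumed figure, not from Zanero's talk.
events_per_day = 10_000_000      # assumed: benign events on a busy network
false_positive_rate = 0.0003     # 0.03 percent, as reported

false_alarms_per_day = events_per_day * false_positive_rate
print(f"False alarms per day: {false_alarms_per_day:,.0f}")
```

At that assumed volume, analysts would face thousands of spurious alerts every day, each needing investigation.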
The system analyzes network traffic with algorithms that "learn" what is normal and spot the unusual, which is not as easy to do as it is to explain. One set of algorithms identifies clusters of similar activity, which define normal; another identifies outlying activity, which could be an attack. Selecting the right algorithms for the data being analyzed and tuning them to spot clusters and outliers is as much an art as a science.
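The learn-then-flag idea can be sketched with a drastically simplified, single-cluster version: model "normal" from benign training traffic, then flag events that fall far outside it. The single feature (payload length), the made-up training data, and the three-standard-deviation threshold are invented simplifications, not Zanero's actual algorithms.

```python
# Toy sketch of anomaly detection: learn "normal" from benign traffic,
# then flag events far from it. A real system would cluster many features
# with multiple algorithms; this collapses everything to one feature and
# one cluster for illustration.
from statistics import mean, stdev

def train(normal_events: list[float]) -> tuple[float, float]:
    """Learn what 'normal' looks like from benign traffic
    (one feature, e.g. payload length in bytes)."""
    return mean(normal_events), stdev(normal_events)

def is_anomalous(event: float, centre: float, spread: float,
                 threshold: float = 3.0) -> bool:
    """Flag events more than `threshold` standard deviations from normal."""
    return abs(event - centre) > threshold * spread

# Benign payload lengths observed during training (made-up data).
normal = [60, 64, 58, 62, 61, 59, 63, 60]
centre, spread = train(normal)

is_anomalous(61, centre, spread)    # typical traffic -> not flagged
is_anomalous(1500, centre, spread)  # wildly unusual -> flagged
```

Even this toy shows where the false positives come from: any legitimate traffic that happens to sit outside the learned cluster gets flagged, and the threshold that controls this trade-off has to be tuned by hand.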
Zanero is continuing work on the system, and although it is not yet ready for prime time, he plans to release it as a GPL plug-in for Snort, the open-source network-sniffing tool.
Given the statistical improbability of protecting against every type of attack, network administrators have to accept a certain level of risk. The key to being secure lies in knowing and planning for that risk, and that means planning for failure by designing systems that fail gracefully and can be recovered effectively.
'Plan for the worst, and hope it doesn't get any worse than that,' Zanero said.