Lab's behavioral system can catch insider threats
- By William Jackson
- Nov 17, 2011
Researchers at the Energy Department’s Oak Ridge National Laboratory are developing a tool to identify malicious insiders and stop them from sending sensitive information outside the enterprise.
The system, which is being tested in a lab environment, uses a host-based agent to “learn” a user’s behavior and to look for anomalous behavior or other signatures, said computer scientist and project leader Justin Beaver.
“It turns out there is a lot of data on each host you can leverage if you know what to look for,” Beaver said.
He said his team’s work has demonstrated that profiles of normal behavior can be built from low-level system data on a user’s computer over a relatively short time and that signatures for exfiltrating data can be recognized. The system responds to these events by seamlessly switching the malicious user to a honeypot environment where he is isolated from data but his actions can be studied.
Although the system works in the lab, it is not yet ready for an operational environment, in part because of false positives: erroneous results that can trigger a response against a legitimate user.
“In this particular operation, false positives can be dangerous,” Beaver said. The system will have to be tweaked to reduce false positives and to include a human in the loop to ensure that the work of legitimate users is not interrupted by mistakenly shunting them off the system.
The research is an internally funded project to address the problem of insider threats. The work does not defend against intrusions, but responds to a threat that is already inside the enterprise perimeter.
“A lot of defense is set up to operate at the perimeter,” Beaver said. “The unspoken assumption is that the inside is safe. That is rarely true.”
Research so far has focused on user behavior, to identify malicious humans rather than malicious code that might have been installed on a compromised machine. The logical next step would be to extend the work to malware, Beaver said. But, “right now we have a lot of user data” that can be used to define and identify suspicious behavior. Similar data on malware has not yet been collected.
Oak Ridge was the victim in April of a successful phishing attack that infected its network with what a spokesperson called a “very sophisticated” piece of malware, apparently designed to steal information from the lab’s network. E-mail and Internet access at the lab were shut down until the infection could be identified and removed.
Among the characteristic information leveraged by the system are system call sequences. Each action performed on a computer initiates a series of system calls requesting operating system services. This occurs at a low level in the operating system, out of the user’s view, and creates a characteristic pattern for each user over time. Researchers found that these normal patterns remain surprisingly consistent for individuals even as they switch between computers and jobs.
“It doesn’t seem to matter at the low system call level,” Beaver said. He said that the number of unique system calls forming a user’s “signature” usually levels off at around 200, so that a useful baseline usually can be created in a matter of hours or days. The system also looks for other patterns of activity associated with the exfiltration of data.
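The article does not publish the lab’s algorithm, but the approach it describes, building a baseline from system call sequences and flagging departures from it, resembles classic n-gram anomaly detection. A minimal sketch, with hypothetical call names and a simple unseen-bigram score standing in for whatever scoring the researchers actually use:

```python
def baseline_profile(traces, n=2):
    """Collect the set of system-call n-grams seen in a user's normal activity."""
    profile = set()
    for trace in traces:
        for i in range(len(trace) - n + 1):
            profile.add(tuple(trace[i:i + n]))
    return profile

def anomaly_score(profile, trace, n=2):
    """Fraction of n-grams in a new trace that never appeared in the baseline."""
    grams = [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in profile)
    return unseen / len(grams)

# Hypothetical training traces: short sequences of system-call names.
normal = [["open", "read", "close"], ["open", "read", "write", "close"]]
profile = baseline_profile(normal)

print(anomaly_score(profile, ["open", "read", "close"]))            # → 0.0 (familiar)
print(anomaly_score(profile, ["open", "mmap", "sendto", "close"]))  # → 1.0 (unfamiliar)
```

Because the vocabulary of unique calls plateaus quickly (around 200, per Beaver), a profile like this saturates after hours or days of observation rather than weeks.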
This analysis and detection is done by an agent on the host computer. Response is handled on a central controller, which can move the suspect user to a dynamic honeypot that duplicates the system in which he is working.
“It looks like the same data from a navigation standpoint,” Beaver said, but the actual content is false. At this time, the switch to the dynamic honeypot only works in a virtual environment, such as a cloud.
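The controller’s decision logic is not detailed in the article, but Beaver’s two stated requirements, tolerating false positives and keeping a human in the loop, suggest a gate like the following sketch. The threshold value and the analyst-confirmation hook are illustrative assumptions, not details published by the lab:

```python
def route_session(score, threshold=0.6, analyst_confirms=None):
    """Decide whether a session stays on production or is shunted to a honeypot.

    The 0.6 threshold and the analyst callback are hypothetical; they stand in
    for whatever tuning and review process an operational deployment would use.
    """
    if score < threshold:
        return "production"
    # Human in the loop: only switch if an analyst confirms the alert,
    # so a legitimate user is not cut off by a false positive.
    if analyst_confirms is not None and not analyst_confirms(score):
        return "production"
    return "honeypot"

print(route_session(0.2))                                    # → production
print(route_session(0.9, analyst_confirms=lambda s: True))   # → honeypot
print(route_session(0.9, analyst_confirms=lambda s: False))  # → production
```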
William Jackson is a freelance writer and the author of the CyberEye blog.