Using data-driven anomaly detection, researchers at Oak Ridge National Laboratory can tell if a hacker has modified a vehicle's electrical signaling.
Protecting increasingly computerized cars and other vehicles from cyberattacks is harder than you might think. In fact, according to Bobby Bridges, a research mathematician at the Oak Ridge National Laboratory in Tennessee, even detecting an attack takes some demanding data analysis.
The problem, Bridges said, lies partly in the way that the signals -- which tell the car to switch on a turn signal, engage the brakes or accelerate, for example -- travel through a controller-area network (CAN) bus. “One problem,” he said, “is that every make and model year of car maps and codes the messages differently.”
Another problem, Bridges said, is that the messages aren’t discrete, one-time events. If you turn on the headlights, there isn’t a single message to that effect sent across the CAN bus. Instead, a status message to the effect of “Headlights on” is sent across the bus repeatedly, at a regular frequency.
The bottom line, said Bridges, is that “for any car you have a ton of messages flowing by and you don't know what they are coded for. From the cyber perspective it is hard to attack, but it is also hard to defend.”
“That,” he added, “is where data science comes in.”
After the ORNL team tapped into the CAN buses on several vehicles -- a bus present on all automobiles manufactured after 1997 -- they knocked on Bridges’ door and asked him to see what he could make of the data.
The first step was building some sort of detector, Bridges said. “Can we detect if someone were to inject signals that shouldn’t be there?”
The team found that the signals controlling each device in the car are sent at such regular intervals that it was simple to detect if someone injected signals. “Imagine you want to suppress my brake lights while I'm driving,” he said. “Since the car is repeatedly sending the signal ‘brake lights on,’ a hacker would have to match the pattern and repeatedly send the message ‘brake lights off.’”
"It makes it very easy to see if somebody is injecting a signal," he explained, "because they are not going to hit the right frequency.”
The ORNL team didn’t just work with data -- it built a prototype aftermarket plug-in that could be used for intrusion detection. “We got a little Nvidia [Jetson] -- a little board that has a CPU on it and is about four times the size of a Raspberry Pi,” Bridges said.
What’s more, according to Bridges, the device doesn’t need to know the CAN bus coding of a particular vehicle to work, because it relies on detecting changes in the frequency of specific coded signals. It doesn’t, in short, need to know what those signals are intended to control.
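Because the approach only watches how often each arbitration ID appears, it can stay agnostic to what any payload actually controls. A minimal sketch of that idea follows; the frame format, helper names, and the 1.5x rate threshold are all assumptions for illustration, not ORNL's published parameters.

```python
from collections import defaultdict

def baseline_rates(frames):
    """frames: iterable of (timestamp, arbitration_id, payload) tuples.
    Returns messages-per-second for each ID, ignoring payload contents
    entirely -- no knowledge of the vehicle's CAN coding is needed."""
    times = defaultdict(list)
    for t, can_id, _payload in frames:
        times[can_id].append(t)
    return {cid: (len(ts) - 1) / (ts[-1] - ts[0])
            for cid, ts in times.items() if len(ts) > 1}

def flag_ids(frames, baseline, ratio=1.5):
    """Return IDs whose observed rate exceeds the learned baseline by
    `ratio` -- injected duplicates raise an ID's rate without the
    detector ever decoding what that ID means."""
    observed = baseline_rates(frames)
    return {cid for cid, r in observed.items()
            if cid in baseline and r > ratio * baseline[cid]}
```

The same monitor could thus be plugged into any make or model year: it learns each ID's normal rate from that vehicle's own traffic rather than from a decoding table.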
The current version of the ORNL detection system does have some significant limitations. First, while it is quite good at detecting intruding signals, it is not able to effectively block them. Still, the operator would at least know it’s time to stop the car and have it checked.
The device also doesn’t address the potential issue of sophisticated hackers taking over the entire CAN bus and substituting all the data, including the vehicle's normal operating data. While such a hack has been demonstrated in the lab, unless the goal is to completely disable the car, hackers would need to know the CAN bus coding for that particular vehicle's devices.
The ORNL team also faced another challenge in U.S. patent law. According to Bridges, the team’s patent lawyers found that others had already patented the general idea without having any idea of how to implement it.
“Apparently, you can patent something that you have no idea how to physically pull off,” he sighed. But if the lab adds a unique machine learning algorithm that can help the detection system make predictions, he said, “we can probably have something that is patentable.”