Risk management: The answer to security, or the problem?
One security expert says, ‘We need to drop the risk management banner’
- By William Jackson
- Aug 13, 2010
One of the guiding principles of information technology security is risk management, the concept that security efforts should be prioritized so that resources can be directed at the most significant threats and residual risk can be knowingly accepted and anticipated. However, not everyone shares that view.
“I think the biggest threat out there now is the concept of risk management,” said Brian Chess, chief science officer and co-founder of IT security company Fortify. “Most people think risk management is the answer. I think we’re at a point where risk management is a problem.”
The problem isn’t so much the idea, Chess said. “It’s how it is implemented.”
Without an adequate foundation of knowledge and expertise, what should be an objective evaluation becomes a subjective gamble, and risk management becomes a risky proposition, he said.
Risk management is based on the assumption that absolute security is impossible. Because some risk will always remain in any system, the focus should be on managing it rather than futilely trying to eliminate it entirely. That's all well and good, Chess said, but there are difficulties with how it typically is implemented.
- Most practitioners don’t know the attackers and their methods well enough to accurately measure the amount of risk they are accepting.
- Most are not good enough at math to understand how multiple risks across a number of parameters in their IT systems can add up or multiply to increase exposure.
Without these foundations, those who must sign off on accepted residual risk are making judgment calls with a pseudo-scientific veneer.
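The compounding effect Chess alludes to can be made concrete with a small sketch (this illustration is not from the article): under the simplifying assumption that risks are independent, the chance that at least one of them materializes grows much faster than intuition suggests.

```python
# Toy illustration (an assumption for clarity, not the article's model):
# if each of n independent components has a small probability p of being
# exploited, the overall exposure is 1 - (1 - p)**n.

def combined_exposure(probabilities):
    """Probability that at least one independent risk materializes."""
    survival = 1.0  # probability that nothing goes wrong
    for p in probabilities:
        survival *= 1.0 - p
    return 1.0 - survival

# Ten components, each carrying a seemingly acceptable 5% risk:
risks = [0.05] * 10
print(round(combined_exposure(risks), 3))  # prints 0.401 -- roughly 40% overall
```

Ten individually "acceptable" 5 percent risks thus combine into an overall exposure of about 40 percent, which is the kind of arithmetic Chess argues most practitioners never perform.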
“Humans are really bad at that,” Chess said. “We don’t seem to be able to make good risk decisions. We need to drop the risk management banner.”
He suggested replacing it with objective standards, such as those that have evolved in the physical world to provide acceptable levels of safety. Take bridges, for example. Many bridges have failed, but engineers have developed objective construction standards that have evolved in response to those failures.
“When we build a bridge, we are doing risk management,” Chess said. But designers are not left to make their own decisions about each bridge from scratch. By using objective standards, “we’ve improved bridges a lot over the last 100 years.”
The security community has a love/hate relationship with standards, he said. It loves the uniformity but hates being required to adhere to specific practices that might not be adequate. But Chess said he believes that professionally crafted, objectively applied standards that evolve over time can provide a uniformly higher level of security than what is achieved through individual guesswork. The world is not so fragmented that we cannot develop an effective set of common standards, he said.
The question remains: Who will develop these standards?
It will be either industry or government. “The big thing that the software industry fears is regulation,” Chess said. But if industry does not do an adequate job of developing and adhering to effective levels of care for security, Congress eventually will step in and do it for them.
William Jackson is a freelance writer and the author of the CyberEye blog.