Would the government have told us about Heartbleed? Should it?
The government says it did not know about the Heartbleed vulnerability in OpenSSL before it was publicly disclosed. But White House Cybersecurity Coordinator Michael Daniel says that if it had known, it might not have told us.
“In the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest,” Daniel wrote in a recent White House blog post. But not always. “Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence that could thwart a terrorist attack, stop the theft of our nation’s intellectual property or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks.”
Daniel goes on to explain some of the criteria used in deciding when and when not to disclose a serious vulnerability.
Over the years, the security community has come to a consensus on how to handle disclosure of security vulnerabilities in software. The discoverer first informs the product’s vendor, giving the company time to develop a patch or workaround before reporting it publicly. This protocol is not mandatory, however. Researchers can use the threat of disclosure to pressure vendors to respond to vulnerabilities, and some companies offer a bounty for new vulnerabilities to encourage researchers to cooperate. But the value of a new vulnerability can be much greater than a bounty.
In the end, how a vulnerability is handled depends on the motives and morals of the discoverer. For criminals, a good zero-day vulnerability—one for which no fix yet exists—is money in the bank. For governments, it can be an espionage tool or a weapon. The Stuxnet worm, an offensive weapon widely believed to have been developed by the United States and Israel, exploited several zero-day vulnerabilities.
Daniel said there are practical limits on hoarding bugs. “Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest,” he wrote. “But that is not the same as arguing that we should completely forgo this tool as a way to conduct intelligence collection and better protect our country in the long-run.”
Daniel said there are no hard and fast rules for determining when to disclose, but the administration has a “disciplined, rigorous” process for deciding. The criteria include:
- How widely used and important is the vulnerable product?
- How serious is the vulnerability? Can it be patched, and how much harm could it do if it fell into the wrong hands?
- Would we know if someone else was using it?
- What is the value of the intelligence we could gather with it, and are there other ways to gather it?
- Is someone else likely to discover it?
Heartbleed potentially leaked sensitive information protected by OpenSSL, which is very widely used to protect online commerce and other transactions. The vulnerability was critical, and although a fixed version of the software was released, replacing it will take some time.
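The leak itself came from the OpenSSL heartbeat handler trusting the length field a client supplied rather than the actual size of the payload it sent. The sketch below is a simplified, hypothetical simulation of that pattern, not OpenSSL's actual code; the memory layout and secret values are invented for illustration.

```python
# Simulated process memory: the 4-byte heartbeat payload sits next to secrets.
MEMORY = b"ping" + b"SECRET_KEY=hunter2;SESSION=abc123"

def buggy_heartbeat(claimed_len: int) -> bytes:
    # BUG (Heartbleed pattern): the attacker-supplied length is never checked
    # against the real payload size, so an oversized claim echoes back
    # whatever follows the payload in memory.
    return MEMORY[:claimed_len]

def fixed_heartbeat(claimed_len: int) -> bytes:
    payload_len = 4  # the real payload is just b"ping"
    if claimed_len > payload_len:
        return b""   # the patch silently discards malformed requests
    return MEMORY[:claimed_len]

print(buggy_heartbeat(37))  # leaks the secret bytes beyond the payload
print(fixed_heartbeat(37))  # patched behavior: request dropped
```

The fix that shipped amounts to exactly that one bounds check before copying the payload back.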
Would we know if someone was using it? Maybe. Gathering useful information requires a high number of connections to a vulnerable server, which could be detected in activity logs. Shortly after the disclosure, Canadian police arrested a young man for allegedly using Heartbleed to steal tax data.
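That detection idea can be sketched in a few lines: because useful exploitation takes many heartbeat requests per client, unusually high counts in server activity logs are a signal. The log format and threshold below are hypothetical, chosen purely for illustration.

```python
from collections import Counter

# Hypothetical activity-log lines: client IP followed by an event tag.
LOG_LINES = [
    "203.0.113.9 TLS_HEARTBEAT",
    "203.0.113.9 TLS_HEARTBEAT",
    "203.0.113.9 TLS_HEARTBEAT",
    "198.51.100.4 TLS_HEARTBEAT",
]

def suspicious_clients(lines, threshold=3):
    """Return client IPs issuing at least `threshold` heartbeat requests."""
    counts = Counter(line.split()[0] for line in lines
                     if "TLS_HEARTBEAT" in line)
    return [ip for ip, n in counts.items() if n >= threshold]

print(suspicious_clients(LOG_LINES))  # ['203.0.113.9']
```

Real detection would of course need to parse whatever log format the server emits, but the principle is the same: count heartbeat traffic per client and flag outliers.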
As for the value of intelligence to be gained, who can say? Is someone else likely to discover it? Yes, given that it was in open source software available to anyone. And did someone else discover it? Yep: researchers at Codenomicon and Google Security.
So, did the National Security Agency discover Heartbleed first, and if it did, would it have told us? By the White House's own criteria, Heartbleed would be a good candidate for disclosure. But we'll probably never know.
Posted by William Jackson on May 16, 2014 at 9:03 AM