Cybereye | Commentary: When push comes to shove, evaluations are a question of dollars and cents
- By William Jackson
- Oct 22, 2007
To paraphrase Winston Churchill: The Common Criteria is the worst security product evaluation scheme, except for all the others that have been tried.
That sums up the findings of Richard Smith, an associate professor of computer science at the University of St. Thomas in Minnesota, who published an interesting paper on the subject, 'Trends in Security Product Evaluations,' in the July issue of the journal Information Systems Security.
Smith, formerly a systems engineer working on product evaluations at Secure Computing, said that for information technology companies, having a security product evaluated under a national standards scheme is a business decision, not a technical one.
'They were struggling with the economic question of whether or not to evaluate,' he said of Secure Computing.
The question is not whether Common Criteria or other schemes, such as Europe's IT Security Evaluation Criteria, produce more secure products. They do, Smith said. The question for vendors is, 'are we getting the most bang for the buck?' If a company decides the process is not cost effective, it does not evaluate and nobody benefits from the scheme.
'Security evaluations have always been controversial,' he wrote. 'They have always been expensive and time consuming, and they have never been able to ensure the absence of security flaws.' He cited studies showing that a typical CC evaluation can take two years and cost $250,000. Increasing the Evaluation Assurance Level significantly increases both development time and evaluation cost; development time can double for a product shooting for EAL 4 rather than EAL 2.
Still, 'despite shortcomings, more evaluations are taking place every year,' he wrote. The number of product evaluations under national and international schemes nearly doubled between 2004 and 2006, from 129 to 240. But many vendors appear to feel that they are not getting an adequate return on their evaluation investments. In his study of 860 product evaluations between 1984 and 2005, he found that historically only 30 percent of participating vendors have had more than one product evaluated. Since 2002, 49 percent of participating vendors appear to have dropped out of evaluation programs (even after accounting for company disappearances due to mergers, acquisitions and business failures).
Smith offered no suggestions for government on how to make evaluation programs more attractive to vendors, but he did give some enlightening criteria for companies to consider in deciding whether or not to evaluate. First, what are the customer requirements? Must a customer use evaluated products? Do they have established protection profiles or security targets that products must meet? Despite general requirements that U.S. agencies use CC-evaluated products, these have proved reasonably easy for agencies to evade, Smith said.
Second, consider the competition. If competitors' products are not being evaluated, you might not gain any competitive advantage from evaluating yours, so why bother? At the very least, you probably don't want your product evaluated at a higher EAL than the competition's. And if your product is cheaper than the competition's, the cheapest, lowest level of evaluation might suffice, if you decide on one at all.
So, given that evaluations can improve security, but that the current process is flawed and in many cases neither the vendor nor the customer is getting the full benefit from it, what can be done to improve Common Criteria? Unfortunately, maybe very little, Smith said in an interview.
'It's not entirely clear that there will be any ideal product evaluation strategy,' he said. 'I'm not sure there is a good way to do evaluations. The current system isn't good, but it's better than nothing.'
Cold comfort for security professionals, maybe, but it's better than no comfort at all.
William Jackson is a Maryland-based freelance writer.