Under attack

Common Criteria has loads of critics, but is it getting a bum rap?

Common Criteria Evaluation Assurance Levels

Products validated under the Common Criteria Evaluation and Validation Scheme are assigned one of seven Evaluation Assurance Levels, sets of predefined assurance packages that give an indication of how rigorous the evaluation process has been.

The assurance level applies to the evaluation process rather than to the product's functionality.

EAL 1 is the lowest level and EAL 7 is the highest. Most products validated to date are certified at EAL 2, 3 and 4. Levels 1 through 4 are recognized under the Common Criteria Recognition Arrangement, meaning that all participating countries recognize the evaluation result regardless of the country in which the evaluation was done. The higher EALs are generally country-specific.

The National Information Assurance Partnership warns that 'reliance on EALs alone does not provide a method for determining the security robustness of a product. The EAL merely provides a convenient reference for the amount of analysis and testing performed on the product.'

A recent listing of 186 products validated under Common Criteria showed the following assurance levels.

NO. OF VALIDATED PRODUCTS BY ASSURANCE LEVEL

EAL 1 - 5
EAL 2 - 90
EAL 3 - 33
EAL 4 - 53
EAL 5 - 3
EAL 6 - 0
EAL 7 - 2
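The distribution above can be tallied with a short script. The per-level counts and the EAL 1-4 mutual-recognition cutoff come from the article; the percentage arithmetic is just illustrative.

```python
# Counts of validated products per Evaluation Assurance Level,
# taken from the listing of 186 products cited in the article.
counts = {1: 5, 2: 90, 3: 33, 4: 53, 5: 3, 6: 0, 7: 2}

total = sum(counts.values())  # total products in the listing

# EAL 1 through 4 are mutually recognized under the CCRA;
# higher levels are generally country-specific.
recognized = sum(n for level, n in counts.items() if level <= 4)

print(total)                               # 186
print(recognized)                          # 181
print(round(100 * recognized / total, 1))  # 97.3
```

In other words, nearly all validated products fall in the range where the evaluation result is accepted internationally, and almost none reach the levels Shapiro considers promising.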

[Photo: Jonathan Shapiro, assistant professor at Johns Hopkins University. GCN Photo by Zaid Hamid]

[Photo: Alan Paller, director of research at the SANS Institute. GCN File Photo]

Pity the poor Common Criteria Evaluation and Validation Scheme. Conceived as a way to provide independent evaluation of security products against a set of standard criteria that could be accepted by end users in many countries, it has been condemned by vendors and security experts alike.

The use of evaluated security products is mandated for government networks carrying sensitive information. But vendors say the validation process is too expensive and cumbersome. Security folks say it is a paperwork drill rather than a product evaluation. Both agree that it has not made software on government systems more secure.

'Common Criteria is just something that we do,' said Wesley Higaki, director of product certification at Symantec. 'We're just going through the motions.'

'If you're asking, is the effort worth the money, the answer is a resounding no,' said Alan Paller, director of research at the SANS Institute.

Trying to be all things to all people, Common Criteria has ended up pleasing almost no one. That is not to say that no one has a good word for it. But even the positive statements are damning in their faint praise.

'The scheme itself is sound,' said Jonathan Shapiro, an assistant professor at Johns Hopkins University's Computer Science Department. Shapiro has been one of the biggest critics of the Common Criteria. 'Personally I think that there is benefit; evaluation is certainly no worse than any other inspection process, but it is not cost-effective,' he said.

It is enough to make you feel sorry for the National Information Assurance Partnership, which oversees Common Criteria in the United States.

'Defending the program is a full-time effort. It is a difficult job,' said NIAP Director Audrey Dale of the National Security Agency. Dale acknowledges industry frustration with the scheme, and she said NIAP is trying to address vendors' concerns.

'We've been working really hard,' she said. 'We work a lot with the vendors.'

But there is a limit to the amount of change vendors can expect in Common Criteria.

'We each have our needs,' she said. The government needs security and the vendors need money. 'We're trying to meet in the middle.'

One framework

Common Criteria is an international standard that grew from the Defense Department's Orange Book standards for security products. The DOD requirements were harmonized in the 1990s with standards from Europe and Canada to create a common framework for an evaluation process that could be accepted by multiple countries. By having one framework to build to, rather than individual frameworks for each country, vendors could save time and effort in the accreditation process.

Today, more than a dozen countries have joined the Common Criteria Recognition Arrangement, each agreeing to recognize the evaluation results of any of the accredited laboratories in the program.

Common Criteria does not define the features or functionality that a product must have or require that the product itself be secure.

Instead, the development of the product is evaluated against a security target, which can be a protection profile developed by a user or a company statement of what the product is intended to do.

These are evaluated against a set of security assurance requirements to determine if the development process for the product enables it to meet its claimed security functionality. Basically, it tries to determine if the product does what it says it will do.

This approach is a strength and a weakness of Common Criteria. By not specifying functionality requirements, it is a flexible framework that can be applied across a broad spectrum of products. But it focuses on process rather than product. Knowing what a product is designed to do does not necessarily mean it can do it well or securely, critics say.

'Any software product can contain vulnerabilities, and there is nothing in the protection profile that provides any confidence or assurance to the customer that we've done a good job in that area,' Symantec's Higaki said.

Common Criteria evaluations are assigned one of seven assurance levels reflecting the requirements met for the development process of the product. Evaluation Assurance Levels reflect the degree of confidence a user can have in the results of the evaluation and the performance of the product. The lower assurance levels, EAL 1 through 4, where the vast majority of products are evaluated (see chart), do not require evaluation of the software, only of the development process and documentation.

Because of this, critics say evaluation does nothing to prove or improve the security of a product.

'You are not testing the product at all,' Paller said. 'You are testing the paperwork.'

Shapiro says that even the higher assurance levels do not provide assurance of security because they do not require a code review for bugs, but only a rigorous correspondence between the code and the specifications.

'The result is that the buyer can have confidence in the specification, but not the implementation,' Shapiro said.

Because of this, he lumps the evaluation levels into two broad categories: EAL 1 through 5, which he calls 'not good enough for commercial or military use,' and EAL 6 and 7, which is 'promising, but insufficiently explored to justify confidence.'

One of the reasons the focus is on the process rather than the code is that the evaluation process is intended for proprietary code, which developers generally keep secret.

This means that the evaluation process is opaque and cannot be scientifically replicated from one laboratory to the next, reducing the confidence a buyer places in the evaluation, Shapiro said. He said he believes that the customer should be able to independently confirm evaluation results.

'The system only works if an evidence trail, including the source code, is available,' he said.

He said Microsoft's decision to make source code available selectively to researchers and universities shows that it is possible to balance reasonable levels of confidentiality and access. But NIAP does not require this. It is this absence of a feedback loop that undermines an otherwise fundamentally sound scheme, Shapiro said.

'The evidence so far suggests that it is a waste of time and resources,' he said. 'I would be extremely happy to see evidence to the contrary, but it doesn't seem to be out there.'

New boss

Complicating vendor relationships with the program, NIAP leadership has shifted in recent years. Some companies feel that their influence on decisions about Common Criteria has been reduced because of the move.

NIAP was originally made up of NSA and the National Institute of Standards and Technology. NIST still accredits labs for NIAP under its National Voluntary Laboratory Accreditation Program, but that is the extent of its involvement in the program, said Ray Snouffer, leader of NIST's security testing and metrics group.

'NIST was probably more involved in the early days,' Snouffer said. But in 2002 the directorship passed to NSA as NIST assumed responsibility for setting information technology certification and accreditation standards under the Federal Information Security Management Act. 'The people who worked in the NIAP space on our side have transitioned to that.'

Resources were reallocated as NIST shifted its attention to developing standards and specifications for FISMA.

'We're pretty much running the whole program,' NSA's Dale said.

That shift in resources and leadership brought a shift in vision at NIAP. The original goal was to cover the needs of all of government (military, intelligence and civilian) and to make inroads into private-sector standards as well.

'Everybody needs computer security and information assurance,' Dale said. NSA would focus on government needs, and NIST would be the resource for the private sector. But with the shift, 'what happened over time was we have had to start narrowing the scope of products we could evaluate,' and the focus has become more government-centric.

That is part of the problem, Higaki said.

'NIST had a good history of working in partnership with industry,' he said. NSA is more open today than it has been in the past. For years a standard joke was that NSA stood for 'No Such Agency,' but Higaki said it is not as industry-friendly as its nominal NIAP partner.

An industry group called the CC Vendors Forum was formed to work with NIAP and government counterparts in other participating nations, but Higaki said results so far have been disappointing.

'They all need to agree that vendors can have a seat at the table or at least have a formal mechanism for our input,' he said.

Industry voice

Although Dale said the CC Development Board works closely with Higaki and the forum, Higaki said industry still is not being given the voice it wants. Don Wright, director of standards at vendor forum member Lexmark International, said there is a good working relationship with NIAP officials on the development of a protection profile for printers and other hard-copy peripheral devices. But he also acknowledged, 'There are some difficulties there.'

This disconnect between industry and NIAP has resulted in an awkward evaluation process that ensures that security products are well into their life cycles, if not obsolete, by the time they can be evaluated, vendors say.

The first document required from the vendor in the evaluation process is a security target, stating just what the vendor expects to have evaluated. This seems reasonable, but Higaki said information for the security target often is not available until later in the development process. Because of the lengthy evaluation process, buyers often are faced with the choice of implementing a security product that is evaluated but obsolete or in need of an update, or an up-to-date product that has not been evaluated.

'What I would like to see is tighter integration with activities that are normal to the development cycle,' he said.

This would mean taking the security target 'up a level of abstraction' so that it could be produced earlier in the development cycle. But Dale said this is not likely to happen.

NIAP wants a finished product, she said. 'We don't want to evaluate a product in development.' She said that with the lengthy evaluation process, 'this is a huge problem, [but] we have made some headway.' For instance, there are requirements that only changes in subsequent releases of products need be evaluated rather than undergoing a complete re-evaluation.

Dale said a lot of work is going into improving the product evaluation process, not just in NIAP but in test efforts throughout NSA.

'We're exploring all kinds of alternatives,' she said. 'We want to fix what can be fixed in NIAP, but we are looking at a lot of alternatives.'

Power of the status quo

But change is hard. One reason Common Criteria is difficult to change is the financial incentive to keep things the way they are, Paller and Shapiro say.

'The problem is, there are 20 countries in this, and some of the labs in other countries are making a fortune doing evaluations because they are easier than the U.S. labs,' Paller said.

'I don't think the labs per se are the problem,' Shapiro said. 'It's who pays, and can the results be confirmed? At the moment, vendors are negotiating with evaluators and walking down the street to a second evaluator when the first evaluator will not give them what they want. This is not hypothetical. The behavior is being observed in the wild.'

Under the scheme, everyone accepts one lab's results, and under the opaque evaluation process, results cannot be easily confirmed.

'This issue is fundamental,' Shapiro said. 'The problem here is there is no feedback mechanism from the customer to the vendor. I think the only way this can be resolved is to insist that the evidence trail must be fully public.'

This does not work well with proprietary software, he said, but open-source software offers a chance to address this issue. This would require realignment in the way the government acquires software, and that is not likely to happen soon.

In the final evaluation, is Common Criteria better than nothing?

'One view of Common Criteria is that it is a process validation vehicle,' Shapiro said. 'That is not a bad thing to have, but you have to ask if the process demanded is the one that we want. The answer is, at best, we have no idea,' because there is no feedback loop for measuring and refining the Common Criteria process.

Common Criteria is not absolutely bad, Paller said. But the cost in time and money is not worth the results produced.

'If it cost $10,000 and took a few weeks, it would be all right,' he said. 'But if it costs millions of dollars, shouldn't you be spending that money on improving the product itself? This is a stupid way to spend money.'
