FEDERAL CONTRACT LAW

Here's a lesson in how not to rate past performance

Joseph J. Petrillo

According to legend, Frederick the Great of Prussia said that learning from his own mistakes was too costly; he preferred to learn from the mistakes of others. That's a good reason to study the opinion in Seattle Security Services v. United States, a Jan. 28 decision by the Court of Federal Claims.

Its lessons are especially important because they show how not to evaluate past performance, a topic of intense interest.

The General Services Administration set out to award a single contract for armed guard services at federal offices in the Northwest. Previously, there had been two contracts, one for Washington and one for Oregon.

Because the contract would exceed $1 million, past performance had to be part of the evaluation. It was to be assessed on the basis of at least three "existing and prior contracts for similar products and services." Trouble was, GSA thought it could dispense with all other evaluation criteria except price.

According to the solicitation, GSA would go down the list of bidders, starting with the lowest-priced and working up, looking for a performance record "better than satisfactory." When it found one, it sought out a higher-priced bid with an even better past performance rating. The government compared the two to determine if the superior rating was worth the extra money. The results of that comparison determined the contract award.
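To make the mechanics concrete, here is a minimal sketch of that selection logic. It is purely illustrative, not anything taken from the solicitation: the bidder records, the "satisfactory" threshold and the tradeoff test are all hypothetical placeholders.

```python
# Illustrative sketch only: models the award logic as the column describes it.
# Bidder data, the rating scale, and the tradeoff test are assumptions.

def pick_awardee(bidders, satisfactory, worth_the_premium):
    """bidders: list of (name, price, past_performance_rating) tuples."""
    by_price = sorted(bidders, key=lambda b: b[1])

    # Walk up from the lowest price to the first record better than satisfactory.
    # A bidder with no relevant record carries only a neutral rating, so it
    # never clears this bar -- which is how new entrants were shut out.
    baseline = next((b for b in by_price if b[2] > satisfactory), None)
    if baseline is None:
        return None

    # Then look for a higher-priced bid with an even better rating and decide
    # whether the superior rating is worth the extra money.
    for challenger in by_price:
        if challenger[1] > baseline[1] and challenger[2] > baseline[2]:
            if worth_the_premium(baseline, challenger):
                return challenger
    return baseline
```

Even in this toy form, the weakness is visible: everything turns on whether the ratings can meaningfully distinguish one record from another.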

Interestingly, this methodology eliminated all companies that did not have some relevant past performance. The Federal Acquisition Regulation states that such bidders may not be evaluated favorably or unfavorably. Presumably, they received only a satisfactory rating under this solicitation. Since only bidders with a better than satisfactory grade were considered for award, new entrants were out of luck.

Thing of the past

During the review of past performance, the contracting officer used a questionnaire and asked the same five questions of each reference. The first two questions asked about the numbers of sites and guard posts. If they equaled or exceeded a specific number, the bidder got a single point (but only a single point) for each question. Otherwise, the bidder got a zero.

The third question awarded points based on the number of complaints recorded annually, from a maximum of five points for no complaints to zero for six or more. The fourth question awarded up to five points for speedy response to emergency calls. And the fifth question was, "Would you rehire this company?" A "yes" added one point to the score, and a "no" subtracted one point.

This evaluation system was a disaster waiting to happen. A precise measure, dollars, was paired with an imprecise one: average past performance ratings on a 14-point scale. Worse, the scoring system had arbitrary cutoffs. A contract with 28 sites and 42 posts was worth two points; one with 27 sites and 41 posts was worth zero. A contract with more than 100 posts was ranked the same as one with only 42.
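A hypothetical reconstruction of the score sheet makes that cliff effect easy to see. Only the 28-site and 42-post cutoffs come from the column; the point formulas for complaints and emergency response are assumptions for illustration.

```python
# Hypothetical reconstruction of the five-question score sheet; only the
# 28-site / 42-post cutoffs are taken from the column, the rest is assumed.

def score_reference(sites, posts, complaints_per_year, response_points, would_rehire):
    score = 0
    score += 1 if sites >= 28 else 0          # Q1: all-or-nothing cutoff
    score += 1 if posts >= 42 else 0          # Q2: 100 posts scores no better than 42
    score += max(0, 5 - complaints_per_year)  # Q3: assumed linear; five for none, zero for six or more
    score += min(5, max(0, response_points))  # Q4: up to five points for emergency response
    score += 1 if would_rehire else -1        # Q5: "Would you rehire this company?"
    return score

# One site and one post short of the cutoffs costs two full points:
print(score_reference(28, 42, 0, 5, True))   # 13
print(score_reference(27, 41, 0, 5, True))   # 11
```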

When the scores were added up, the points for the six vendors ranged from a low of nine to a high of 12. The winning bidder was only one-tenth of 1 percent lower in price than Seattle Security, which was the incumbent in both states and the second lowest in price. But the low bidder's past performance rating of 11 edged out Seattle Security's 10.

Seattle Security's protest to the Court of Federal Claims revealed several errors. In evaluating the company's past performance, the contracting officer completely ignored one of its two incumbent contracts, failing to consider that Seattle Security was performing both of them simultaneously, just as the winning contractor would have to.

The contracting officer didn't even use the scoring sheet for the low bidder's past performance and thus had no documentation to support that company's 11-point score.

The lessons here are legion. The government set out to evaluate past performance with an arbitrary methodology. It then used this poorly crafted evaluation as the sole non-price criterion, magnifying the problems. Then it applied the methodology inconsistently. Finally, it failed to document key parts of the evaluation.

Apart from all this, one has to question the wisdom of ignoring every other non-price criterion, and whether such subtle distinctions among past performance records can be measured with reasonable precision.

Joseph J. Petrillo is an attorney with the Washington law firm of Petrillo & Powell, PLLC. E-mail him at jp@petrillopowell.com.
