Industry faces up to NIST challenge for better biometrics

Cybereye

William Jackson

In a face-off sponsored by the National Institute of Standards and Technology, facial-recognition algorithms showed a tenfold improvement in accuracy during the past four years.

Facial-recognition software is not just getting better: 'the best-performing algorithms were more accurate than humans,' NIST concluded in its report published in March.

The tests, conducted last year, also demonstrated how government can effectively spur the development of new technology. The improvements were made by industry and academia, which supplied the algorithms for the Facial Recognition Vendor Test and the Iris Challenge Evaluation, the latest in a series of tests dating to 1993.

'This is one paradigm that works,' said Dr. Jonathan Phillips, program manager of the tests and one of the report's authors.

That paradigm is for government to set a goal for a technology, make as much information as possible available to vendors and developers, and then reap the benefits.

This series of challenges began in 1993 at the Defense Advanced Research Projects Agency, where Phillips managed a lab. DARPA conducted three tests of facial-recognition technology.

'I was very impressed with the results,' Phillips said. In less than four years from the first tests, algorithms progressed from primitive tools tested against only a few hundred images to fully automated algorithms processing sets of 3,000 images.

Accuracy also improved. At a baseline false-acceptance rate of one in 1,000, the single algorithm tested in 1993 falsely rejected 79 of every 100 genuine matches. By 1997, the false-rejection rate had fallen to 54 in 100.

NIST picked up where DARPA left off in 2002 with the first Facial Recognition Vendor Test. In this test, the algorithms had an average false-rejection rate of 20 in every 100 and a false-acceptance rate of one in 1,000. In 2005, NIST issued its Facial Recognition Grand Challenge, challenging industry and academia to improve performance by an order of magnitude.
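All of these headline numbers take the same form: a false-rejection rate measured at a fixed false-acceptance rate of one in 1,000. As a rough illustration only (Python, with synthetic scores and invented names, none of it from the NIST evaluation), that figure can be computed from an algorithm's match scores on genuine and impostor image pairs:

```python
import numpy as np

def frr_at_far(genuine_scores, impostor_scores, target_far=0.001):
    """False-rejection rate at the threshold that yields roughly the
    target false-acceptance rate (here, one in 1,000)."""
    impostor = np.sort(np.asarray(impostor_scores))[::-1]  # high to low
    # Set the threshold so that about target_far of impostor pairs
    # score at or above it -- those are the false acceptances.
    k = max(int(target_far * len(impostor)), 1)
    threshold = impostor[k - 1]
    # Genuine pairs scoring below the threshold are falsely rejected.
    return float(np.mean(np.asarray(genuine_scores) < threshold))

# Synthetic example with well-separated score distributions.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 10_000)    # same-person pair scores
impostor = rng.normal(0.3, 0.1, 100_000)  # different-person pair scores
print(frr_at_far(genuine, impostor))
```

In these terms, the Grand Challenge goal of an order-of-magnitude improvement meant driving this figure from the 2002 average of 0.20 down to 0.02 or less at the same operating point.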

'We put out a lot of data to lower the barrier of entry and to facilitate the development of new algorithms,' Phillips said. 'The challenge was the homework. The evaluation was the exam.'

NIST performed the evaluations in 2006 using 50,000 images the participants had not seen, including high-resolution still photos and still photos taken under uncontrolled lighting conditions, in addition to 3-D facial images. The algorithms examined pairs of images to determine whether they were of the same person.
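The report does not describe how the vendors' algorithms make that same-person call, but as a loose sketch of a typical verification pipeline (Python; the embed function, cosine measure and 0.6 threshold are illustrative assumptions, not details from the test), each image is reduced to a feature vector and the pair is accepted if the vectors are similar enough:

```python
import numpy as np

def same_person(embed, image_a, image_b, threshold=0.6):
    """1:1 verification: accept the pair if the two images' feature
    vectors are close enough. `embed` stands in for a vendor's
    feature extractor; the cosine threshold is illustrative, not a
    value from the NIST report."""
    a, b = embed(image_a), embed(image_b)
    cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return cosine >= threshold
```

Raising the threshold trades false acceptances for false rejections, which is why NIST reports error rates at a fixed operating point rather than a single accuracy number.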

The tests included algorithms from Neven Vision, now owned by Google; Viisage, now merged with L-1 Identity Solutions; Cognitec Systems; Toshiba America; and Sagem Morpho.

The accuracy of the algorithms varied with the quality of the images, and performance time on the trial set ranged from six to 300 hours. Four algorithms met or exceeded the goal of making no more than two false rejections per 100 tests.

About the Author

William Jackson is a freelance writer and the author of the CyberEye blog.
