Facial recognition tech that penetrates disguises (Amarjot Singh et al.)

Facial recognition penetrates disguises

Facial recognition technology is improving rapidly and is being used to authenticate travelers at five U.S. airports by taking their photos and matching those images to passport pictures. In New York, officials have used it to crack down on identity fraud by matching photos of people applying for a driver's license against a database of photos. But there have always been limitations: If anything obstructs the face -- glasses, scarves, hats -- a match can’t be made.

Now, researchers have developed a deep learning framework that can make matches of people who may be hiding their identity by wearing face-obscuring accessories.

Amarjot Singh, a co-author of the research and a Ph.D. candidate at the University of Cambridge, said there are some very accurate facial recognition programs, but “we are trying to take it one step further and recognize faces which are in disguise.”

Singh and his colleagues started by determining how people recognize each other. Neuroscience research has shown that the brain is more active when subjects look at particular parts of a face. The researchers identified the parts of the face that aid in recognition and narrowed them to 14 points around the eyes, nose and mouth.

“When you add disguises, the appearance of the face changes drastically,” Singh said.

The researchers then had someone go through pictures of people wearing disguises and identify the key points in the 14 areas they determined to be important to recognition. Humans, he said, are very good at making these approximations.

“That’s what this model is mimicking,” he said, “this expertise of humans to make a prediction about where the lips could’ve been even if you can’t see them.”
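The article describes matching faces by comparing 14 key points around the eyes, nose and mouth, with the model estimating where occluded points would be. As a rough illustration of the comparison step only (this is not the authors' code, and the landmark coordinates, normalization and threshold are all assumptions for the sketch), two sets of predicted landmarks might be compared like this:

```python
import math

NUM_POINTS = 14  # key points around the eyes, nose and mouth, per the article

def normalize(points):
    """Translate landmarks to their centroid and scale to unit size,
    so the comparison ignores where the face sits in the image and how
    large it appears."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    centered = [(x - cx, y - cy) for x, y in points]
    scale = math.sqrt(sum(x * x + y * y for x, y in centered) / len(points))
    return [(x / scale, y / scale) for x, y in centered]

def landmark_distance(face_a, face_b):
    """Mean Euclidean distance between corresponding normalized landmarks.
    Each face is a list of 14 (x, y) points; in the real system a deep
    network would predict these, estimating any occluded points."""
    a, b = normalize(face_a), normalize(face_b)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / NUM_POINTS

def same_person(face_a, face_b, threshold=0.25):
    # The threshold here is purely illustrative; a deployed system would
    # tune it on labeled data.
    return landmark_distance(face_a, face_b) < threshold
```

A disguise makes some of the 14 points invisible, which is why the model's ability to predict hidden landmark positions, as Singh describes, matters before any comparison like this can run.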

The accuracy of the researchers' framework varies depending on the picture, though. It is very good at recognizing people in simple disguises like a cap or glasses who are standing in front of a solid background with good lighting. But the accuracy drops with varied backgrounds, and it drops further with the addition of more disguises.

While the system works when someone is using a cloth mask to cover part of their face, it doesn’t work with the rigid Guy Fawkes masks commonly worn by protesters.

The system has also been trained only on Caucasian and Indian ethnicities, so it would need to be expanded to work with the broader population. The researchers are releasing their datasets so that others can work with the data and maybe even improve upon the algorithm.

Singh said the next step is to put the framework into standalone hardware (it currently runs on a server). The algorithm would also have to be adjusted so that it can run quickly enough for use by law enforcement or in security cameras.

Singh acknowledged that some have raised privacy concerns since the research was published. “The only thing you can do is make sure that the system doesn’t fall in the wrong hands,” he said.

About the Author

Matt Leonard is a reporter/producer at GCN.

Before joining GCN, Leonard worked as a local reporter for The Smithfield Times in southeastern Virginia. In his time there he wrote about town council meetings, local crime and what to do if a beaver dam floods your back yard. Over the last few years, he has spent time at The Commonwealth Times, The Denver Post and WTVR-CBS 6. He is a graduate of Virginia Commonwealth University, where he received the faculty award for print and online journalism.

Leonard can be contacted at mleonard@gcn.com or follow him on Twitter @Matt_Lnrd.



