Researchers have developed a deep learning framework that can match faces even when they're obscured.
Facial recognition technology is improving rapidly and is being used to authenticate travelers at five U.S. airports by taking their photos and matching those images to passport pictures. In New York, officials have used it to crack down on identity fraud by matching photos of driver's license applicants against a database of photos. But there have always been limitations: If anything obstructs the face -- glasses, scarves, hats -- a match can’t be made.
Now, researchers have developed a deep learning framework that can identify people who may be hiding their identity behind face-obscuring accessories.
Amarjot Singh, a co-author of the research and a Ph.D. candidate at the University of Cambridge, said there are already some very accurate facial recognition programs, but "we are trying to take it one step further and recognize faces which are in disguise.”
Singh and his colleagues started by studying how people recognize each other. Neuroscience research shows that the brain is more active when subjects look at particular parts of a face. The researchers identified the parts of the face that aid recognition and narrowed them to 14 points around the eyes, nose and mouth.
“When you add disguises, the appearances of the face changes drastically,” Singh said.
The researchers then had someone go through pictures of people wearing disguises and mark the key points in the 14 areas they had determined to be important for recognition. Humans, he said, are very good at making these approximations.
“That’s what this model is mimicking,” he said, “this expertise of humans to make a prediction about where the lips could’ve been even if you can’t see them.”
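The landmark-based idea behind the system can be illustrated with a toy sketch. This is not the authors' actual neural network; it only shows, under assumed names and a hypothetical matching threshold, how two faces might be compared once each is reduced to 14 (x, y) landmark coordinates around the eyes, nose and mouth:

```python
import math

# Toy illustration (not the published model): a face is a list of 14
# (x, y) landmark points. Two faces "match" when, after normalizing for
# position and scale, their corresponding landmarks lie close together.
# The threshold value below is hypothetical.

NUM_LANDMARKS = 14

def normalize(landmarks):
    """Center landmarks on their centroid and scale to unit spread,
    so matching ignores where the face sits in the frame and how big it is."""
    cx = sum(x for x, _ in landmarks) / len(landmarks)
    cy = sum(y for _, y in landmarks) / len(landmarks)
    centered = [(x - cx, y - cy) for x, y in landmarks]
    scale = math.sqrt(sum(x * x + y * y for x, y in centered)) or 1.0
    return [(x / scale, y / scale) for x, y in centered]

def landmark_distance(a, b):
    """Mean Euclidean distance between corresponding normalized landmarks."""
    a, b = normalize(a), normalize(b)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def is_match(a, b, threshold=0.05):
    """Declare a match when the normalized landmark sets nearly coincide."""
    return landmark_distance(a, b) < threshold
```

Because normalization removes translation and uniform scaling, the same landmark layout photographed at a different position or zoom still matches; a genuinely different layout does not. The real system's deep network predicts where occluded landmarks would be before any such comparison can happen, which is the hard part the quote above describes.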
The accuracy of the researchers' framework varies with the picture, though. It is very good at recognizing people in simple disguises, such as a cap or glasses, who are standing in front of a solid background in good lighting. But accuracy drops with varied backgrounds, and it drops further as more disguises are added.
While the system works when someone is using a cloth mask to cover part of their face, it doesn’t work with the rigid Guy Fawkes masks commonly worn by protesters.
The system has also been trained only on Caucasian and Indian ethnicities, so it would need to be expanded to work with the broader population. The researchers are releasing their datasets so that others can work with the data and maybe even improve upon the algorithm.
Singh said the next step is to put the framework into standalone hardware (it currently runs on a server). The algorithm would also have to be adjusted so that it can run quickly enough for use by law enforcement or in security cameras.
Singh acknowledged that some have raised privacy concerns since the research was published. “The only thing you can do is make sure that the system doesn’t fall in the wrong hands,” he said.