This Artificial Intelligence System Can ID Faces
Even If They Are Disguised
Head coverings and fake beards have foiled face recognition technologies, but a new system overcomes many of the challenges while raising privacy concerns.
It’s not easy these days to be just another face in a crowd.
Last month, for example, Twitter users, through a form of spontaneous crowdsourcing, identified certain demonstrators at the August 12 protest in Charlottesville, Virginia, over plans to remove a statue of Confederate Gen. Robert E. Lee. Many eyes were on the related photos and videos, which are prevalent in this era of selfies, social media, and pervasive video.
Face recognition technologies are coming close to achieving such feats in an instant, helping to identify criminals, terrorists, missing persons, and many others. Disguises or other headgear worn by individuals can challenge the artificial intelligence, but an international team of researchers has just overcome many of the problems related to what is known in the tech industry as Disguised Face Identification, or DFI.
“We analyzed numerous images and videos of crimes, as well as protests, all over the world to see which parts of the face are usually covered by individuals to disguise themselves,” project leader Amarjot Singh of the University of Cambridge Department of Engineering told Seeker. “In the majority of the videos, the individuals were either wearing glasses that covered the eyes or were wearing a scarf that covered the mouth.”
Those observations led to brainstorming by Singh and his colleagues: Devendra Patil and G. Meghana Reddy of the National Institute of Technology, India, and S.N. Omkar of the Indian Institute of Science. The research team will present its findings next week at the IEEE International Conference on Computer Vision Workshop in Venice, Italy.
The researchers developed a DFI system that targets 14 “facial key-points” — 10 of them at and around the eyes, one at the nose, and three at the lips — that are most likely to be missing when a disguise is worn.
If the system has the missing information from other images of the person, such as a mugshot or a video still, it can predict the hidden key-points and fill in the obscured areas in images showing the same individual in disguise. The technology therefore relies not only on the 14 facial key-points, but also on datasets containing multiple images of each person.
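The fill-in step described above can be pictured in a few lines of code. The sketch below is purely illustrative, not the authors’ actual implementation: the key-point names, the `fill_missing` helper, and the simple distance-based match score are all assumptions made for the example. Only the 14-point layout (10 eye points, 1 nose point, 3 lip points) comes from the article.

```python
import math

# The 14 facial key-points from the paper: 10 around the eyes,
# 1 at the nose, 3 at the lips. Names here are hypothetical.
KEYPOINT_NAMES = (
    [f"eye_{i}" for i in range(10)] + ["nose"] + [f"lip_{i}" for i in range(3)]
)

def fill_missing(observed, reference):
    """Complete a key-point set: any point hidden by a disguise
    (value None) is borrowed from a reference photo, e.g. a mugshot."""
    return {
        name: observed[name] if observed.get(name) is not None
        else reference[name]
        for name in KEYPOINT_NAMES
    }

def match_score(points_a, points_b):
    """Mean Euclidean distance between two key-point sets
    (lower means a closer match)."""
    dists = [math.dist(points_a[n], points_b[n]) for n in KEYPOINT_NAMES]
    return sum(dists) / len(dists)
```

In use, points that a scarf or pair of glasses hides are set to `None`, completed from the reference image, and the completed set is scored against candidates in the database; the lowest score suggests the likeliest identity.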
The heart of the system that identifies the key-points is called a deep convolutional network.
“Deep convolutional networks are software creations organized into interconnected layers, much like the visual cortex, the part of the brain that processes visual information,” Singh explained. He noted that these networks “learn” from datasets using an algorithm called “back-propagation” that mirrors how the brain’s own neuronal networks pertaining to vision work.
“For example,” he continued, “if a network is trained on images, the neurons in the layers at the beginning of the network learn to recognize edges or basic shapes, while neurons in higher layers can ‘see’ objects — say, a dog or a person.”
The AI network reads batches of photos of disguised individuals and learns to predict the target facial key-points. It then corrects the prediction error made for each key-point and, given a large enough database to work with, is often able to match the disguised face to a known person.
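The predict-measure-correct loop Singh describes is the essence of back-propagation. The toy below is a deliberately minimal stand-in: where the real DFI system trains a deep convolutional network on images, this sketch trains a single linear neuron on made-up numbers, but the update rule (nudge each weight in the direction that shrinks the prediction error) is the same idea. The function name and data are assumptions for illustration only.

```python
def train(samples, lr=0.1, epochs=200):
    """Fit y ~ w*x + b by repeatedly reducing the squared prediction
    error, one sample at a time (stochastic gradient descent)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            err = pred - y      # prediction error for this sample
            w -= lr * err * x   # back-propagate: adjust each parameter
            b -= lr * err       #   in the direction that shrinks the error
    return w, b

# Made-up training pairs following y = 2x + 1
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train(data)
```

After training, `w` and `b` converge close to 2 and 1. A convolutional network does the same thing at vastly larger scale, with millions of weights and pixel inputs instead of one number.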
At this stage, however, the DFI technology is not without major glitches.
“The precision of the system decreases drastically in the presence of a complex background that has images with uneven lighting conditions,” Singh acknowledged.
A photo showing a disguised person with his or her head turned near a bunch of buildings and in front of a street lamp, for example, would confuse the network. Likewise, if a person completely covers his or her head and is captured by CCTV or some other imaging device, the DFI technology would essentially have nothing to process.
Nevertheless, the new system represents a major breakthrough in DFI. Another research team previously tried to use data on unique skin tone and texture from individuals to solve related challenges, but that method was also faulty and heightened concerns about racial profiling.
The new developments in DFI and other face recognition technologies are also increasing concerns about privacy.
“Although faces are peculiar to individuals, they are also public, so technology does not, at first sight, intrude on something that is private,” says a recent article in The Economist. “And yet the ability to record, store and analyze images of faces cheaply, quickly and on a vast scale promises one day to bring about fundamental changes to notions of privacy, fairness and trust.”
The article points out that China’s government keeps a record of its citizens’ faces, and that photos of half of America’s adult population are stored in databases that can be accessed by the FBI.
Such technologies could also be a boon to companies hoping to identify potential customers. If, for instance, photographs are taken of a person browsing through a store and facial recognition later identifies the individual, he or she might be sent targeted ads for particular merchandise.
An even scarier scenario concerns identification of inherent traits and health issues. Michal Kosinski of Stanford University and his colleague Yilun Wang recently reported in a paper published in the Journal of Personality and Social Psychology that AI is more accurate than humans at detecting sexual orientation from facial images. Their algorithm’s accuracy ranged from 81 to 91 percent, while human judges were accurate only 54 to 61 percent of the time.
“In countries where homosexuality is a crime, software which promises to infer sexuality from a face is an alarming prospect,” says The Economist.
Yet another problem concerns identity theft based on biometric data, which refers to everything from fingerprints to photos.
Robert Capp, author of a Biometric Technology Today paper on the future of identity verification, noted that biometric data is “vulnerable to mimicry, spoofing, and impersonation. Fingerprints have been lifted and copied. Iris and facial data has been taken from high-resolution photographs and HD video that is good enough to fool biometric detection.”
ATMs utilizing face recognition technology are already in use in certain Asian countries, while China has been relying on the tech to identify jaywalkers. Border control agencies across the globe are also investigating similar systems, so their associated vulnerabilities and inaccuracies are no small matter.
The European Union has been preparing for related challenges with its General Data Protection Regulation (GDPR), which will take effect on May 25 next year. Under the GDPR, images of faces are considered to be “Sensitive Personal Data” that are subject to additional security requirements and restrictions.
A technology war of sorts is now underway, with some research teams creating systems that comply with GDPR while fighting AI facial recognition. For example, the new company D-ID, which stands for De-Identification, offers a system that is said to protect images from unauthorized, automated face recognition.
According to the company’s website, “Images are processed in a groundbreaking way that causes face recognition algorithms to fail to identify the subject in the image, while maintaining enough similarity to the original image for humans not to notice the difference.”
These and similar efforts are not stopping the work of Singh and his team, though. While their determinations at present are still considered to be “proof of concept,” they plan to put their system into actual practice as soon as possible.
“We are currently trying to improve the proposed AI model so that it can function in real-time with less computational power and would require smaller memory in the hardware,” he said, adding that he hopes other research teams will also work to develop still stronger systems with expanded datasets. These could include even more disguise possibilities, beyond the hats, scarves, glasses, and faux facial hair in the existing dataset.
“Overall,” he concluded, “this work will get the ball rolling toward solving the DFI task.”