March 21, 2014 -
Basic facial recognition software is getting pretty good at recognizing well-lit faces looking straight at the camera, but 3D visualization and mapping makes it better.
That’s why Facebook’s AI team has created a new facial verification system called DeepFace, which it says provides a significant improvement over previous matching software, recognizing faces correctly 97.53 percent of the time – even when only a partial face is available.
According to a report in the MIT Technology Review, the new software uses an approach known as “deep learning,” in which simulated neurons learn to recognize patterns in large batches of data.
From the MIT report, “The deep-learning part of DeepFace consists of nine layers of simple simulated neurons, with more than 120 million connections between them. To train that network, Facebook’s researchers tapped a tiny slice of data from their company’s hoard of user images—four million photos of faces belonging to almost 4,000 people.”
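To give a rough sense of what “layers of simulated neurons” with millions of connections means, here is a minimal sketch of a deep feed-forward network. The layer sizes and weights are made up for illustration and have nothing to do with DeepFace’s actual architecture; each layer is just a weighted sum of the previous layer’s outputs followed by a non-linearity.

```python
import numpy as np

# Hypothetical layer sizes -- DeepFace reportedly has nine layers and
# over 120 million connections; this toy network has three and a few
# thousand. The connection count is the sum of products of adjacent
# layer sizes.
rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 8]

weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input vector through every layer of simulated neurons."""
    for w in weights:
        x = np.maximum(0.0, x @ w)  # weighted sum, then ReLU non-linearity
    return x

face_vector = rng.standard_normal(64)  # stand-in for pixel features
output = forward(face_vector)
print(output.shape)  # (8,)
```

Training such a network means adjusting those weights so that photos of the same person produce similar outputs, which is where the four million labelled training photos come in.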
Like other facial recognition algorithms that use 3D modeling, DeepFace handles partially obscured faces by extrapolating a 3D model of the face from the available data and using it to correct the viewing angle. The normalized face is then converted into a numerical sequence and compared against numerical descriptions of faces that already have an identity attached.
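The comparison step described above can be sketched as a nearest-neighbor lookup: an unknown face’s numerical description is matched against a gallery of descriptions with known identities. The descriptors, names, and similarity threshold below are all invented for illustration; this is not Facebook’s matching code.

```python
import numpy as np

# Hypothetical gallery: numeric descriptions of faces with identities
# already attached. Real descriptors would be much longer vectors.
gallery = {
    "alice": np.array([0.9, 0.1, 0.0]),
    "bob":   np.array([0.1, 0.8, 0.3]),
}

def cosine_similarity(a, b):
    """Similarity of two descriptors, 1.0 meaning identical direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(descriptor, threshold=0.8):
    """Return the best-matching identity, or None if no match is close enough."""
    best_name, best_score = None, -1.0
    for name, ref in gallery.items():
        score = cosine_similarity(descriptor, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

unknown = np.array([0.85, 0.15, 0.05])
print(identify(unknown))  # alice
```

The threshold is the knob that trades false matches against missed matches; a deployed system would tune it on labelled data rather than pick 0.8 by hand.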
According to the report, DeepFace is currently only a research project, but the developers will present their findings at the IEEE Conference on Computer Vision and Pattern Recognition this summer.
Again, 3D facial recognition is nothing new in biometrics, but it is new for Facebook, and it means that in all those party pictures where you thought you could hide in the crowd, you may soon be asked to tag yourself.
As we reported last year, Facebook said it was considering incorporating user profile pictures into its growing facial recognition database.