3D face reconstructions made with sub-millimeter accuracy using an iPhone
Carnegie Mellon University researchers have digitally reconstructed a human face in three dimensions, with sub-millimeter accuracy, using a single iPhone X: 20 seconds of two-dimensional video followed by 30 to 40 minutes of on-phone processing.
The resulting model can be used for biometric facial recognition, medical procedures, and virtual and augmented reality applications, the researchers at CMU's Robotics Institute said. Three-dimensional face reconstruction is not a new idea, but this low-cost approach could find wider acceptance.
Machine vision has been used to create similar profiles, but those systems are far more elaborate, typically requiring an expensive combination of laser scanners, multiple cameras and structured light.
The iPhone is set to slow-motion video to achieve a high frame rate, which yields enough views to build a dense point cloud of each subject. A second person moves the camera in an arc around the subject's head, from ear to ear.
A common robotics and imaging technique, simultaneous localization and mapping (SLAM), is applied to each video clip. SLAM recovers the face's shape by triangulating points on its surface; the same triangulation also tracks the camera's position relative to the subject, keeping facial features in correct perspective.
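The triangulation step at the heart of that pipeline can be illustrated with a minimal sketch. The snippet below is not the researchers' code; it is the standard linear (DLT) method for recovering one 3D point from its observations in two camera views with known projection matrices, which is the geometric core of what SLAM does at scale:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices for the two viewpoints.
    x1, x2: 2D observations of the same surface point in each image.
    Returns the estimated 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - (P[0] @ X) = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution of A @ X = 0 is the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # convert from homogeneous coordinates
```

In a real SLAM system this is done for thousands of points per frame pair, and the camera matrices themselves are refined jointly with the points, which is how the method tracks the camera's position as well as the face's shape.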
Not every point on the face is captured, however, so the iPhone uses "classical computer vision techniques" and, to a lesser degree, deep learning algorithms to fill in the holes, according to the researchers. Processing time runs 30 to 40 minutes.
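The researchers do not specify which hole-filling method they use, but a simple classical approach gives the flavor: propagate measured values into the gaps from their neighbors. The sketch below (an assumption, not the CMU implementation) fills missing values in a depth map by repeatedly averaging each hole pixel's measured 4-neighbors:

```python
import numpy as np

def fill_holes(depth, max_iters=100):
    """Fill NaN holes in a depth map by iterative neighbor averaging.

    Each pass, any missing pixel with at least one measured 4-neighbor
    takes the mean of those measured neighbors; passes repeat until no
    holes remain or max_iters is reached.
    """
    d = depth.astype(float).copy()
    for _ in range(max_iters):
        hole = np.isnan(d)
        if not hole.any():
            break
        padded = np.pad(d, 1, constant_values=np.nan)
        valid = ~np.isnan(padded)
        vals = np.where(valid, padded, 0.0)
        # Sum of measured neighbor values, and count of measured neighbors.
        num = (vals[:-2, 1:-1] + vals[2:, 1:-1] +
               vals[1:-1, :-2] + vals[1:-1, 2:])
        cnt = (valid[:-2, 1:-1].astype(float) + valid[2:, 1:-1] +
               valid[1:-1, :-2] + valid[1:-1, 2:])
        fillable = hole & (cnt > 0)
        d[fillable] = num[fillable] / cnt[fillable]
    return d
```

Growing fills inward from hole borders this way preserves local surface continuity, which is why classical inpainting handles small occlusion gaps well; deep learning is typically reserved for larger regions where neighbor averaging would over-smooth.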
Lucid VR Inc. sells vision systems with depth perception, but its devices rely on proprietary mounted hardware with two camera lenses.
At the component level, Sony Corp. said in December 2018 that it was increasing production of its three-dimensional camera processors for smartphones.