Long-distance imaging and partial-image facial biometrics raise new possibilities
A team of researchers from the University of Science and Technology of China in Shanghai has developed technology capable of photographing subjects up to 45 kilometers (28 miles) away, even through heavy smog, MIT Technology Review reports.
The scientists, led by Zheng-Ping Li, use single-photon detectors and a custom computational imaging algorithm in a technique based on lidar (light detection and ranging) technology. Gating the photon detections in time dramatically reduces noise in the signal and makes lidar systems highly sensitive at specific distances. The team used a 1550-nanometer infrared laser with a repetition rate of 100 kilohertz and a power of only 120 milliwatts, which keeps the system eye-safe and makes solar photons easier to filter out, according to the Technology Review.
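The gating idea described above can be sketched simply: a photon's round-trip time of flight pins down the distance it traveled, so detections whose timing does not match the expected range can be discarded as noise. The following is a minimal illustration of that principle, not the team's actual algorithm; the function name, gate width, and sample values are all assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def gate_photons(arrival_times_s, target_range_m, gate_width_s=1e-7):
    """Keep only photon detections whose arrival time (measured from the
    laser pulse) matches the expected round trip to the target range.

    This is a toy range gate for illustration; the published system's
    parameters and processing are more sophisticated.
    """
    expected_tof = 2.0 * target_range_m / C  # out to the target and back
    arrival_times_s = np.asarray(arrival_times_s, dtype=float)
    mask = np.abs(arrival_times_s - expected_tof) <= gate_width_s / 2
    return arrival_times_s[mask]

# A target 45 km away returns photons roughly 0.3 milliseconds after the
# pulse; detections at other delays fall outside the gate and are rejected.
tof_45km = 2 * 45_000 / C
signal_and_noise = [tof_45km, tof_45km + 1e-9, 1e-4]
kept = gate_photons(signal_and_noise, 45_000)
```

Because only a narrow time window around the expected return is kept, background photons arriving at arbitrary delays are overwhelmingly filtered out, which is what lets the detector stay sensitive at a chosen distance.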
The system can produce two- or three-dimensional images, with the algorithm stitching together the relatively sparse photon data to fill in the picture. The Technology Review reports that the system produced images with a spatial resolution of about 60 cm (2 feet) of a building roughly 45 km away, using a telescope that yielded only noise with conventional imaging. Previous research had shown single-photon detectors could acquire images of subjects 10 km (six miles) away.
The device is about the size of a large shoebox, making it potentially viable one day for portable applications. In the future, the team expects to be able to produce images of subjects a few hundred kilometers away.
University of Bradford researchers, meanwhile, have used a convolutional neural network with a VGG feature extraction model to match 100 percent of faces from images of three-quarter and half faces, The Science Times reports.
The system was initially able to correctly match only 60 percent of images showing the bottom half of the face, and 40 percent of images showing just the subject's eyes and nose. After the model was trained on partial images, however, those more challenging photos were correctly identified close to 90 percent of the time. Matching individual facial features such as the nose, cheek, forehead, or mouth was generally unsuccessful.
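The matching step in a pipeline like the one described typically compares fixed-length feature vectors extracted from face crops. The sketch below shows that comparison using cosine similarity against a small gallery; in the Bradford work the vectors would come from a VGG feature extractor, but here plain lists stand in for those embeddings, and the function names and threshold are illustrative assumptions, not the researchers' implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe_vec, gallery, threshold=0.8):
    """Return the gallery identity whose feature vector best matches the
    probe, or None if no similarity clears the (illustrative) threshold.

    gallery: dict mapping identity -> feature vector. In a real system
    both probe and gallery vectors would be produced by a CNN such as
    VGG applied to full or partial face images.
    """
    best_id, best_sim = None, -1.0
    for identity, vec in gallery.items():
        sim = cosine_similarity(probe_vec, vec)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None

# Toy example with hand-made vectors standing in for CNN embeddings.
gallery = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
print(match_face([0.9, 0.1, 0.0], gallery))  # matches "alice"
```

Training the extractor on partial images, as the researchers did, effectively teaches it to place a half face and the corresponding full face close together in this embedding space, which is why accuracy on partial crops improved so sharply.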
Lead researcher Professor Hassan Ugail said the findings now need to be validated on a larger dataset.