Google researchers develop method to improve mobile facial recognition performance
A team of Google researchers has developed an AI model selection method that eases the trade-off hardware constraints impose on mobile object detection and facial recognition apps, which typically must sacrifice either speed or accuracy.
In a paper titled “MnasNet: Platform-Aware Neural Architecture Search for Mobile,” the researchers explain that their automated MnasNet system chooses a neural architecture from a list of options to design convolutional neural network (CNN) models for mobile devices. Speed information is explicitly incorporated into the search algorithm's reward, so it can identify the best balance of accuracy and speed for the application.
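The paper describes this as a multi-objective reward that scales a model's accuracy by how its measured latency compares to a target. A minimal sketch of such a latency-aware objective, with illustrative target and exponent values (the exact constants used by the researchers may differ):

```python
def reward(accuracy: float, latency_ms: float,
           target_ms: float = 75.0, w: float = -0.07) -> float:
    """Score a candidate model: accuracy weighted by measured latency.

    A negative exponent w penalizes models slower than the target,
    so the search trades a little accuracy for on-device speed.
    Values here are illustrative, not the paper's exact settings.
    """
    return accuracy * (latency_ms / target_ms) ** w

# A model exactly at the latency target keeps its raw accuracy score,
# while a model twice as slow scores strictly lower.
print(reward(0.75, 75.0))
print(reward(0.75, 150.0))
```

Because latency is measured on real phones rather than estimated from proxy metrics like FLOPs, the reward reflects what the model will actually cost at inference time.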
Using this approach, the team achieved speeds 1.5 times faster than MobileNetV2 and 2.4 times faster than NASNet on the ImageNet classification task, with the same top-1 accuracy. It also achieved higher accuracy and speed than MobileNet on COCO object detection, with accuracy comparable to the SSD300 model at 35 times less computational cost, according to a blog post by researcher Mingxing Tan.
The approach consists of a recurrent neural network-based (RNN-based) controller for learning and sampling architectures, a trainer to build and train models, and an inference engine for measuring model speed on mobile phones using TensorFlow Lite.
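A toy sketch of how those three components interact in a search loop. Everything here is a stand-in: the sampler, evaluation proxies, and reward constants are invented for illustration and do not reproduce the actual RNN controller, trainer, or TensorFlow Lite inference engine:

```python
import random

random.seed(0)

# Hypothetical search space of architecture knobs.
SEARCH_SPACE = {
    "kernel_size": [3, 5],
    "filters": [16, 32, 64],
    "layers": [2, 3, 4],
}

def sample_architecture():
    # Stand-in for the RNN-based controller sampling one candidate.
    return {knob: random.choice(opts) for knob, opts in SEARCH_SPACE.items()}

def evaluate(arch):
    # Toy proxies for the trainer and the on-phone inference engine:
    # larger models "score" higher accuracy but run slower.
    size = arch["filters"] * arch["layers"] * arch["kernel_size"]
    accuracy = 0.60 + 0.15 * (size / 960)   # fake validation accuracy
    latency_ms = 20 + size / 10             # fake measured latency
    return accuracy, latency_ms

def reward(accuracy, latency_ms, target_ms=75.0, w=-0.07):
    # Latency-aware objective in the spirit of the paper's reward.
    return accuracy * (latency_ms / target_ms) ** w

# Sample candidates and keep the one with the best reward.
best = max((sample_architecture() for _ in range(50)),
           key=lambda arch: reward(*evaluate(arch)))
print(best)
```

In the real system the controller is updated from the reward rather than sampling at random, so later candidates concentrate on architectures that are both accurate and fast on the target phone.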
Researchers across the industry have been working to improve on-device AI performance for mobile biometrics. Recent developments include new hardware for image analysis, and Qualcomm researchers have reported improved speech recognition accuracy with a system that combines a CNN and an RNN.