Deep Vision launches AI processor with novel data architecture for edge biometrics
Startup Deep Vision has launched an AI processor with a novel chip design suited to edge biometric applications such as smart cities and smart retail, where low energy consumption is a key requirement.
While other chips coming to market similarly target low-power applications like smart cameras and edge gateways, Deep Vision says its chip design and software tools together take a different approach, one that will enable devices to significantly improve image recognition, object tracking and other functions, with better computer-vision accuracy and lower latency than the competition. In applications such as retail banking and grocery-store biometrics, for example, cameras will be able to track more people with greater accuracy; other applications include in-cabin monitoring of passengers with facial recognition for autonomous vehicle operation.
Deep Vision’s chip is built around a data architecture capable of handling varied dataflows to minimize on-chip data movement. Keeping data close to the compute engines ensures high inference throughput, low latency and greater power efficiency, according to Ravi Annavajjhala, CEO of Deep Vision.
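To see why keeping data close to the compute engines matters, consider a back-of-the-envelope model of external-memory traffic for a matrix multiply with and without an on-chip tile buffer. This is a generic illustration of data reuse, not Deep Vision's actual architecture, and the sizes are hypothetical:

```python
# Illustrative model: count external-memory element fetches for C = A x B
# (n x n matrices). Hypothetical sizes; not Deep Vision's design.

def naive_loads(n: int) -> int:
    # Without on-chip reuse, every multiply-accumulate fetches one
    # element of A and one element of B from external memory.
    return 2 * n ** 3

def tiled_loads(n: int, t: int) -> int:
    # With a t-by-t on-chip tile buffer, each tile of A and B is fetched
    # from external memory once per tile-pair, then reused on chip.
    assert n % t == 0
    blocks = n // t
    return blocks ** 3 * 2 * t * t

n, t = 64, 8
print(naive_loads(n))      # fetches with no on-chip reuse
print(tiled_loads(n, t))   # t times fewer fetches with an 8x8 tile buffer
```

Since external-memory accesses dominate energy cost in inference workloads, cutting fetches by the tile factor is a direct power and latency win, which is the trade-off the dataflow architecture is aimed at.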
The company’s design is based on research conducted by Dr. Rehan Hameed and Dr. Wajahat Qadeer, who founded Deep Vision in 2015. The resulting approach is a patented “Polymorphic Dataflow Architecture” that prioritizes latency, in contrast to chips in the vein of Nvidia GPUs, Google TPUs and other AI-focused chips deployed in cloud data centers, which were designed for massive throughput on a single AI model. The Deep Vision ARA-1 is claimed to offer lower system power consumption (typically around 2 watts) than other designs, yet is said to run deep learning models such as ResNet-50 with 6x lower latency than Google’s Edge TPU and 4x lower latency than Intel’s Myriad X.
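The latency-versus-throughput distinction can be made concrete with a simple timing model. A throughput-oriented accelerator often batches frames before processing them, so the first frame in a batch waits for the rest to arrive; a latency-oriented design processes each frame as it lands. All numbers below are illustrative assumptions (a 30 fps camera, made-up compute times), not measured figures for any of the chips named above:

```python
# Illustrative latency model for a camera feeding an accelerator.
# Hypothetical numbers; not benchmarks of ARA-1, Edge TPU or Myriad X.

def per_frame_latency_ms(batch: int, frame_interval_ms: float,
                         batch_compute_ms: float) -> float:
    # The first frame in a batch waits for (batch - 1) more frames to
    # arrive, then the whole batch is processed together.
    fill_wait = (batch - 1) * frame_interval_ms
    return fill_wait + batch_compute_ms

FRAME_INTERVAL = 33.3  # ms between frames at ~30 fps (assumption)

streaming = per_frame_latency_ms(1, FRAME_INTERVAL, 5.0)   # latency-first
batched   = per_frame_latency_ms(8, FRAME_INTERVAL, 20.0)  # throughput-first
print(streaming, batched)
```

Even when the batched design is more efficient per frame, the batch-fill wait dominates worst-case response time, which is why edge applications like in-cabin monitoring favor latency-first designs.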
New chip designs like the ARA-1 that introduce new instruction sets do not find traction in the market if developers find them hard to program. Annavajjhala said the company has focused extensively on providing an easy-to-program environment for customers.
“Our biggest design goal was a seamless software experience,” he said, acknowledging that the adoption of any processor hinges on how easy its software is to work with.
The compiler has been built to allow seamless porting of models from all industry-standard AI frameworks, including Caffe, TensorFlow, MXNet and PyTorch, and of networks such as DeepLab V3, ResNet-50, ResNet-152, MobileNet-SSD, YOLOv3, pose estimation and U-Net.
Beyond support for standard models, Deep Vision’s software development kit (SDK) offers a bit-accurate simulator and tools for tuning power and performance to the needs of the customer’s application. Deep Vision says the SDK also provides a frictionless, low-code workflow that automates the migration from trained model to production application.
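One thing a bit-accurate simulator lets developers verify before silicon arrives is the effect of quantization, since edge accelerators typically run models in low-precision integer arithmetic. The sketch below shows the kind of check involved, a symmetric int8 quantization round trip, as a generic illustration; it is not Deep Vision's SDK API, and the weight values are made up:

```python
# Generic illustration of an int8 quantization round trip, the sort of
# numerical behavior a bit-accurate simulator lets you validate off-chip.
# Hypothetical values; not Deep Vision's SDK or toolchain.

def quantize_int8(values, scale):
    # Symmetric linear quantization: real_value ~= scale * int8_value.
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(q, scale):
    return [scale * v for v in q]

weights = [0.42, -1.3, 0.07, 0.9]          # made-up float weights
scale = max(abs(w) for w in weights) / 127  # map the largest weight to 127
q = quantize_int8(weights, scale)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, max_err)
```

Because rounding error stays within half a quantization step, a developer can bound the accuracy impact in simulation and only then commit the model to hardware, which matches the port-then-simulate workflow customers described.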
Customers were able to port models over and accurately simulate them before sample chips were even available, and once Deep Vision shipped chips, the models ran correctly without additional coding, executives said.
Deep Vision has raised $19 million and is backed by multiple investors, including Silicon Motion, Western Digital, Stanford University, Exfinity Ventures and Sinovation Ventures.