New edge AI accelerators coming from Samsung-Baidu partnership and Innodisk

Baidu and Samsung Electronics plan to begin mass production of a cloud-to-edge AI accelerator chip in early 2020, to power faster inferencing for natural language processing (NLP) and other edge systems.

The new Baidu KUNLUN is built on the company’s home-grown XPU neural processor architecture and Samsung’s 14-nanometer (nm) process technology with the Interposer-Cube (I-Cube) package solution, according to the announcement. It provides 512 gigabytes per second (GBps) of memory bandwidth and performs up to 260 tera operations per second (TOPS) at 150 watts.

The chip also accelerates ERNIE, Baidu’s pre-training model for NLP, running it three times faster than conventional GPU/FPGA-based acceleration.

I-Cube technology connects a logic chip and high bandwidth memory through an interposer to maximize density and bandwidth while minimizing package size. Samsung claims the new packaging solution improves power and signal integrity by 50 percent.

The partnership is the first cooperative foundry project between the two companies, leveraging Baidu’s advanced AI platforms and expanding Samsung’s foundry business into high-performance computing (HPC) chips designed for cloud and edge use.

“We are excited to lead the HPC industry together with Samsung Foundry,” states OuYang Jian, Distinguished Architect of Baidu. “Baidu KUNLUN is a very challenging project since it requires not only a high level of reliability and performance at the same time, but is also a compilation of the most advanced technologies in the semiconductor industry. Thanks to Samsung’s state of the art process technologies and competent foundry services, we were able to meet and surpass our goal to offer superior AI user experience.”

Innodisk, meanwhile, has announced the launch of a new AI accelerator card series to provide a vision processing unit (VPU) for AI inference in machine vision at the edge.

The company says its new AI accelerator cards provide exponential performance gains for edge AI inferencing by leveraging the deep neural network inferencing of Intel’s third-generation Movidius Myriad X VPU, making them effective for biometric facial recognition, license plate recognition, and other applications.

Depending on the edge device’s CPU, the performance boost can be up to 30-fold, Innodisk claims. This is key, according to the announcement, because embedded devices typically deployed at the edge are not well equipped for image recognition. Edge device CPUs prioritize low power consumption and thermal efficiency, but pairing them with Innodisk’s new AI accelerator adds only a modest increase in thermal footprint and power consumption; the card offloads inference from the CPU in much the same way a video card takes the graphics rendering load off the CPU in a PC.
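As a rough illustration of that offload, the sketch below shows how an application might push a network onto the Myriad X through the Intel OpenVINO Inference Engine rather than running it on the host CPU. The model file names and the pre-2022 openvino.inference_engine Python API are illustrative assumptions, not details from Innodisk’s announcement.

```python
# Minimal sketch (assumed pre-2022 OpenVINO Python API): offload inference
# from the host CPU to the Myriad X VPU on the accelerator card.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read an OpenVINO IR model; "face_model.xml/.bin" are placeholder file names.
net = ie.read_network(model="face_model.xml", weights="face_model.bin")
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, H, W]

# "MYRIAD" targets the Myriad X VPU; "CPU" would keep inference on the host.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# A zero-filled array stands in for a preprocessed camera frame.
frame = np.zeros(input_shape, dtype=np.float32)
result = exec_net.infer(inputs={input_name: frame})
print({name: out.shape for name, out in result.items()})
```

Swapping the device string between “MYRIAD” and “CPU” moves the same workload between the accelerator and the host, which is the division of labor the video-card analogy describes.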

The accelerator cards come in mPCIe and M.2 2280 form factors, with one and two Myriad X VPUs, respectively. Each supports Windows and Linux, as well as Caffe, TensorFlow, Apache MXNet, Open Neural Network Exchange (ONNX), and other deep learning frameworks through the Intel OpenVINO toolkit.
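Framework support works through a conversion step: the OpenVINO Model Optimizer turns a Caffe, TensorFlow, MXNet, or ONNX model into the toolkit’s intermediate representation (IR), which the Inference Engine then loads onto the VPU as in the sketch above. The snippet below is a minimal, hedged example of that step; the file names and output directory are placeholders, and FP16 is chosen because the Myriad X executes half-precision models.

```python
# Sketch of the conversion step: run OpenVINO's Model Optimizer to turn a
# framework model (here a placeholder ONNX export) into IR files that the
# Inference Engine can load onto the VPU.
import subprocess

subprocess.run(
    [
        "python", "mo.py",                   # Model Optimizer script; its path depends on the OpenVINO install
        "--input_model", "face_model.onnx",  # placeholder model exported to ONNX
        "--data_type", "FP16",               # Myriad X VPUs run half-precision models
        "--output_dir", "ir/",               # writes face_model.xml and face_model.bin
    ],
    check=True,
)
```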
