
BrainChip showcases Akida neural processing capabilities, opens developer environment


AI and machine learning application developer BrainChip has unveiled its latest technology at a top industry event in California and announced its Akida Development Environment (ADE) is freely available for designers to use for edge and enterprise product development.

BrainChip demonstrated the capabilities of its latest class of neuromorphic processing IP and device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California, the company announced.

“We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems,” said Louis DiNardo, CEO of BrainChip, in a prepared statement. “We believe that as a high-performance and ultra-low-power neural processor, Akida is ideally suited to be implemented at the edge and in IoT applications.”

In a session titled “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” BrainChip demonstrated how the Akida Neuromorphic System-on-Chip processes standard vision CNNs using industry-standard flows and how it distinguishes itself from traditional deep-learning accelerators through key design choices and a bio-inspired learning algorithm. As a result, Akida requires 40 to 60 percent fewer computations to process a CNN than a traditional deep-learning accelerator (DLA).
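The computation savings claimed above stem from event-based processing: where a conventional accelerator performs a multiply-accumulate (MAC) for every activation, an event-based design only propagates non-zero activations. The following is a minimal illustrative sketch of that idea, not BrainChip code; the toy activation values and layer sizes are assumptions chosen purely for illustration.

```python
# Illustrative sketch (not BrainChip's implementation): comparing the MAC
# count of a conventional accelerator against an event-based processor
# that skips zero activations (common after ReLU).

def dense_macs(activations, weights_per_neuron):
    # A conventional DLA performs one MAC per activation-weight pair,
    # regardless of whether the activation is zero.
    return len(activations) * weights_per_neuron

def event_based_macs(activations, weights_per_neuron):
    # An event-based processor only propagates non-zero activations
    # ("events"), so zero activations cost nothing downstream.
    events = [a for a in activations if a != 0]
    return len(events) * weights_per_neuron

# Toy layer output: half of the activations are zero after ReLU.
acts = [0, 3, 0, 1, 2, 0, 0, 5]
print(dense_macs(acts, weights_per_neuron=16))        # 128
print(event_based_macs(acts, weights_per_neuron=16))  # 64
```

With 50 percent activation sparsity this toy layer halves its MAC count, which is in the ballpark of the 40 to 60 percent reduction the article cites; real savings depend on the sparsity actually present in the network.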

BrainChip Senior Field Applications Engineer Chris Anastasi presented “On-Chip Learning with Akida,” capturing attendees’ hand gestures and positions with a Dynamic Vision Sensor camera and performing live learning and classification on the Akida neuromorphic platform. The demonstration showed the live learning capability of the spiking neural network (SNN) and the Akida neuromorphic chip, which require less data and power than a traditional deep neural network (DNN) counterpart.

Akida is a licensable IP technology that can be embedded into ASIC devices and will also be available as an integrated SoC for neural processing at the edge. It can be leveraged to develop solutions with industry-standard flows such as TensorFlow/Keras.

It can be deployed for surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis and Industrial Internet-of-Things (IIoT).

Akida Development Environment available without pre-approval

The Akida Development Environment (ADE) no longer requires pre-approval, BrainChip said in a separate announcement. Designers can leverage it to develop systems for edge and enterprise products on the Akida Neural Processing technology.

ADE is a machine learning framework for deep neural networks that operates on TensorFlow and Keras and includes a compiler that maps networks onto the Akida fabric and runs simulations on the Akida Execution Engine. The simulator can be run against industry-standard datasets and benchmarks in the Akida model zoo, such as Imagenet1000, Google Speech Commands, and MobileNet.

ADE is written in Python and includes three packages: the Akida Execution Engine with the Akida Simulator interface, a CNN development tool that leverages TensorFlow/Keras, and the Akida model zoo of pre-built neural network models.

“The enormous success of our early-adopters program allowed us to make ADE available to developers looking to use an Akida-based environment for their deep machine learning needs,” said Louis DiNardo, CEO of BrainChip. “This is an important milestone for BrainChip as we continue to deliver our technology to a marketplace in search of a solution to overcome the power- and training-intensive needs that deep learning networks currently require. With the ADE, designers can access the tools and resources needed to develop and deploy edge application neural networks on the Akida neural processing technology.”


