DarwinAI and Intel generate neural network with 16X faster image classification inference
Artificial intelligence startup DarwinAI has announced that its Generative Synthesis platform, used with Intel technology and optimizations, has generated neural networks with up to a 16.3 times improvement in image classification inference performance.
Intel shared the results of the optimization in a solution brief. The brief describes how Intel engineers used Intel Optimizations for TensorFlow with the Intel Math Kernel Library (Intel MKL) and Intel MKL-DNN to run image classification performance tests on ResNet50 and NASNet. On an Intel Xeon Platinum 8153 processor, DarwinAI's platform delivered a 16.3 times improvement in inference speed for ResNet50 over baseline measurements, and a 9.6 times improvement for NASNet.
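The solution brief does not publish the benchmark code, but the kind of image-classification latency test it describes can be sketched as follows. This is a minimal, hypothetical illustration: the batch size, warm-up count, iteration count, and use of randomly initialized weights are assumptions for demonstration, not Intel's actual test configuration.

```python
import time
import numpy as np
import tensorflow as tf

def benchmark_inference(model, batch_size=1, warmup=3, iters=10):
    """Return mean per-batch inference latency in seconds (illustrative only)."""
    images = np.random.rand(batch_size, 224, 224, 3).astype("float32")
    for _ in range(warmup):            # warm-up runs: graph tracing, caches
        model.predict(images, verbose=0)
    start = time.perf_counter()
    for _ in range(iters):             # timed runs
        model.predict(images, verbose=0)
    return (time.perf_counter() - start) / iters

# ResNet50 with random weights is enough to measure raw compute latency;
# the brief's tests would have compared a baseline model against the
# smaller network generated by Generative Synthesis on the same harness.
model = tf.keras.applications.ResNet50(weights=None)
latency = benchmark_inference(model)
print(f"mean latency: {latency * 1000:.1f} ms/batch")
```

A speedup figure like the 16.3x above would come from running the same harness on the baseline and generated models and taking the ratio of their mean latencies.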
DarwinAI’s patented “AI building AI” technology dramatically reduces the size and complexity of deep learning systems, and the guesswork involved in designing efficient, high-performance ones, according to the announcement. It also supports “explainable” deep learning, offering root-cause analysis of network behavior.
“The complexity of deep neural networks makes them a challenge to build, run and use, especially in edge-based scenarios such as autonomous vehicles and mobile devices where power and computational resources are limited,” observes DarwinAI CEO Sheldon Fernandez. “Our Generative Synthesis platform is a key technology in enabling AI at the edge – a fact bolstered and validated by Intel’s solution brief.”
DarwinAI is a member of Intel’s AI Builders Program, and its recognition includes a Frost & Sullivan 2019 Technology Innovation Award and a place among Hello Tomorrow 2019’s Top 500 Deep Tech Startups.