Amazon moves Alexa voice processing and facial recognition off Nvidia chips
Amazon is shifting the cloud processing behind its Alexa voice assistant and its facial recognition service Rekognition to its own custom chips, Reuters reports, a move intended to improve speed, cut costs, and reduce the company's reliance on Nvidia hardware.
The ‘Inferentia’ chips, launched in 2018, will handle the majority of the processing load for the voice assistant going forward. The company says the chip is designed to accelerate bulk machine learning workloads like text-to-speech conversion and image recognition, and that it has cut Alexa's latency by 25 percent at 30 percent lower cost.
Inferentia chips are now also being adopted for Rekognition, though Amazon provided fewer details about that implementation.
The company also recently launched chips for edge processing in Echo devices, moving some of that processing burden out of the cloud and onto the devices themselves.
Pilots of the biometric payment device began at a pair of cashier-less Amazon Go locations in Seattle's South Lake Union neighborhood earlier this year, and it is now being rolled out at Redmond's Amazon Go Grocery, Amazon Books in University Village, and Southcenter's Amazon 4-Star store.
The devices help Amazon track physical purchases the way it tracks online ones.
There are 26 Amazon Go locations in total.