MIT researchers develop highly efficient chip for on-device neural networks
MIT researchers have developed a powerful new chip for neural-network computations that is three to seven times faster than other processors, while using 94 to 95 percent less energy, potentially making it practical to run neural networks locally on mobile or IoT devices, MIT News reports.
“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says MIT electrical engineering and computer science graduate student Avishek Biswas, who led the chip development project.
“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”
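The dot product Biswas describes is simple to state: multiply each input by its corresponding weight and sum the results. A minimal sketch in plain Python (function and variable names are illustrative, not from the researchers' work):

```python
# Illustrative sketch: the dot product at the heart of neural-network
# inference. Each node's output is its inputs weighted and summed.

def dot_product(inputs, weights):
    """Multiply each input by its weight and sum the results."""
    return sum(x * w for x, w in zip(inputs, weights))

print(dot_product([1, -2, 3], [2, 0, 1]))  # 1*2 + (-2)*0 + 3*1 = 5
```

On a conventional processor, every one of those multiply-and-add steps requires fetching operands from memory; the MIT chip's goal is to perform the whole operation where the data already sits.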
The chip converts the input values of nodes into electrical voltages, which are multiplied in analog form before being converted back into digital form for storage and further processing. This allows the prototype to calculate dot products for 16 nodes at a time in a single step, without shuttling data between the memory and the processor. MIT News says this is a more faithful reproduction of what happens at a synapse in a living brain.
All the weights, which govern the relations between nodes, are either 1 or -1, allowing them to be implemented as simple switches. The trade-off is a loss in accuracy, generally within 2 to 3 percent of a conventional network's, according to the researchers.
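Restricting weights to +1 or -1 means the dot product needs no multiplications at all: each weight simply decides whether its input is added or subtracted, which is what makes a switch-based implementation possible. A hypothetical sketch of the idea (not the researchers' circuit, just the arithmetic it embodies):

```python
# Illustrative sketch: with weights constrained to +1 or -1, each
# "multiplication" collapses to an add-or-subtract choice -- the
# software analogue of the chip's simple switches.

def binary_dot_product(inputs, signs):
    """signs contains only +1 or -1; no multiply hardware needed."""
    return sum(x if s == 1 else -x for x, s in zip(inputs, signs))

print(binary_dot_product([0.5, 2.0, 1.5], [1, -1, 1]))  # 0.5 - 2.0 + 1.5 = 0.0
```

The small accuracy penalty the researchers cite comes from this coarse quantization of the weights, not from the analog computation itself.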
Biswas will present a paper describing the chip at the International Solid-State Circuits Conference, along with his thesis advisor Anantha Chandrakasan, dean of the School of Engineering at MIT and Vannevar Bush Professor of Electrical Engineering and Computer Science.
SensibleVision CEO George Brostoff examined the potential of custom processors to dramatically transform secure authentication on mobile devices in a guest post for Biometric Update in December. Since then, FWDNXT has announced the development of a low-power mobile coprocessor for image recognition and classification by deep neural networks, and ARM has announced new chips custom designed for machine learning and object detection.