New technologies support AI vision and speech recognition edge deployments
With artificial intelligence technology maturing, a major focus at Mobile World Congress (MWC) 2019 is delivering AI-based applications. Several new products have been announced to bring AI services such as biometrics and speech recognition to the network edge for a variety of use cases.
Microsoft’s Azure Kinect Developer Kit (DK) is essentially a sensor-housing device, which includes a 1-megapixel time-of-flight depth sensor, a seven-microphone array for far-field speech and sound capture, and a 12-megapixel RGB camera, along with other sensors such as an accelerometer and a gyroscope, ZDNet reports. As an Azure product, the Kinect DK accesses AI services such as facial recognition, body tracking, and other AI vision and speech services through the cloud.
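The Kinect DK itself performs no inference; vision and speech results come back from Azure over the network. As a rough illustration of that pattern, the Python sketch below builds (but does not send) a request to Azure's Face detect REST endpoint. The region, subscription key, and image URL are placeholder assumptions, not values from the announcement.

```python
import json
from urllib import request

# Placeholder endpoint -- the Azure region ("westus") is an assumption.
FACE_DETECT_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def build_detect_request(image_url: str, subscription_key: str) -> request.Request:
    """Construct (but do not send) a Face API 'detect' request for an image URL.

    Actually sending it requires a valid Azure Cognitive Services
    subscription key tied to the chosen region.
    """
    body = json.dumps({"url": image_url}).encode("utf-8")
    return request.Request(
        FACE_DETECT_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": subscription_key,
        },
        method="POST",
    )

req = build_detect_request("https://example.com/room.jpg", "<your-key>")
print(req.get_method(), req.full_url)
```

A successful call returns JSON describing each detected face; the same key-in-header pattern applies to the other Cognitive Services vision and speech endpoints.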
The Kinect DK is related only by name and concept to the motion-sensing add-on for the Xbox gaming system; instead, it is meant for business use cases such as alerting hospital staff when a patient is likely to fall, or monitoring athletic performance, Microsoft says.
Microsoft partners with D-Link on smart city technology
D-Link announced at MWC that it is collaborating with Microsoft to integrate Microsoft Vision AI into tailor-made intelligent edge solutions featuring facial and object recognition for enterprises and cities. Next-generation smart city solutions from D-Link leverage Azure Machine Learning, Azure Media Services, and Azure IoT Edge to deliver seamless machine learning, modeling, conversion, deployment and video analytics, according to the announcement.
“Infusing vision AI into devices, like IP cameras, opens up more use cases by enabling data processing in real time without high-powered machines or continuous network connections,” says Microsoft Vice President of IoT Rodney Clark. “We are excited to work with D-Link to deliver innovative edge solutions with cognitive capability that empower businesses and cities to make smarter decisions.”
Syntiant launches neural decision processors for speech in edge devices
The new Syntiant NDP100 and NDP101 microwatt-power Neural Decision Processors (NDPs) are built to run deep learning algorithms, and provide higher performance at much lower power consumption, the company says. By exploiting the inherent parallelism of deep learning and computing only at the required numerical precision, the devices achieve roughly 100 times the efficiency of stored-program architectures such as CPUs and DSPs, according to the announcement. This low power consumption makes the processors suitable for always-on gatekeeper applications.
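The "gatekeeper" role can be pictured as a tiny always-on classifier that screens the input stream and wakes a larger processor only when something of interest is likely. The sketch below is a deliberately simplified illustration of that pattern; the scores and threshold are invented, and a real NDP would run a neural network over audio features rather than compare precomputed numbers.

```python
def gatekeeper(frame_scores, threshold=0.8):
    """Return indices of frames whose keyword score passes the gate.

    Stands in for the low-power always-on stage; only these frames
    would trigger the power-hungry main application processor.
    """
    return [i for i, score in enumerate(frame_scores) if score >= threshold]

# Invented per-frame keyword scores from the always-on detector.
scores = [0.10, 0.30, 0.92, 0.50, 0.88]
print(gatekeeper(scores))  # -> [2, 4]
```

The power win comes from the asymmetry: the gate runs continuously at microwatt levels, while everything downstream stays asleep almost all of the time.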
Industry-standard machine learning frameworks such as TensorFlow are supported by the Syntiant training development kit (TDK), allowing solutions to be implemented on Syntiant NDP devices without platform-specific tuning. The company says the processors are being built into wearables, hearables, smartphones, smart speakers, remote controls, and home automation devices.
Altek launches three edge AI vision products
Digital imaging company Altek has launched new Vision AI chips, commercial AI surveillance cameras, and 3D depth-sensing modules to bring vision AI to the edge.
The new chip and surveillance camera products support human and object detection, event detection and behavior recognition, with all AI processing on the device to reduce network bandwidth needs and improve detection speed. Altek’s 3D depth-sensing solution has been integrated with CyberLink’s FaceMe facial recognition engine, and the company’s technology is also built into IP cameras for AI surveillance from Qualcomm.
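The bandwidth saving from on-device processing can be made concrete with some back-of-the-envelope arithmetic; the figures below are illustrative assumptions, not numbers from Altek.

```python
# Illustrative assumptions: a continuous 1080p H.264 upload at ~4 Mbit/s,
# versus ~60 detection events per hour, each a ~500-byte metadata message.
stream_bits_per_hour = 4_000_000 * 3600   # raw video sent to the cloud
event_bits = 500 * 8                      # one JSON-sized event message
edge_bits_per_hour = 60 * event_bits      # events sent instead of video

print(stream_bits_per_hour // edge_bits_per_hour)  # -> 60000
```

Even with generous event sizes, sending detections instead of video cuts uplink traffic by several orders of magnitude, which is why on-device inference is attractive for fleets of IP cameras.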