Deep Vision raises $35M in funding
Edge voice dev kit, funding revealed as market for NLP-focused hardware grows
 

The market for Natural Language Processing (NLP) and ambient sound solutions appears to be growing steadily, as companies announce product launches and investments. Specifically, Knowles announced a new Raspberry Pi Development Kit with voice, audio edge processing, and machine learning (ML) listening capabilities, and Deep Vision announced it has raised $35 million in a Series B funding round, with plans to further develop its edge biometrics-supporting processor. In addition, a new report by ABI Research has highlighted the benefits of deep learning-based ambient sound and NLP in both cloud and edge applications.

Knowles releases new Raspberry Pi Development Kit

The kit is designed to bring voice biometrics, audio edge processing, and ML listening capabilities to devices and systems in a variety of new industries.

The solution enables companies to streamline the design, development, and testing of voice and audio integration technologies.

The new development kit is built on Knowles’ AISonic IA8201 Audio Edge Processor OpenDSP created for ultralow-power and high-performance audio processing needs.

The processor features two Tensilica-based, audio-centric DSP cores: one for high-power compute and AI/ML applications, and the other for very low-power, always-on processing of sensor inputs.
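The split between an always-on low-power core and an on-demand compute core can be illustrated with a minimal sketch. This is an assumed illustration of the general two-stage pattern, not Knowles firmware or its API: a cheap energy check stands in for the low-power core, and a heavier routine runs only on frames that pass it.

```python
def always_on_stage(frame, threshold=0.02):
    """Low-cost energy gate, standing in for the always-on DSP core."""
    energy = sum(s * s for s in frame) / len(frame)
    return energy > threshold

def high_power_stage(frame):
    """Placeholder for heavy AI/ML inference on the compute core."""
    return {"keyword_detected": max(abs(s) for s in frame) > 0.5}

def process_stream(frames):
    results = []
    for frame in frames:
        if always_on_stage(frame):  # escalate only when the cheap gate fires
            results.append(high_power_stage(frame))
    return results

quiet = [0.001] * 160   # near-silent frame: never leaves the low-power path
loud = [0.6, -0.6] * 80  # energetic frame: escalated to the compute stage
print(process_stream([quiet, loud]))
```

The power saving in a real system comes from the same structure: the expensive path stays idle until the cheap, always-running check triggers it.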

Thanks to Knowles’ open DSP platform, the new kit provides access to a wide range of onboard audio algorithms and AI/ML libraries.

It also includes two microphone array boards to help engineers select the appropriate algorithm configurations for the end application.

Deep Vision raises $35M for biometrics applications

The AI processor chip maker recently announced that it has raised $35 million in a Series B financing round led by Tiger Global, with participation from Exfinity Venture Partners, Silicon Motion, and Western Digital.

The fresh funds will reportedly support Deep Vision’s continued development of its patented ARA-1 AI processor.

The hardware can be used as a face biometrics tool to deliver real-time video analytics. However, ARA-1 also supports NLP capabilities for several voice-controlled applications.

“To improve latency and reliability for voice and other cloud services, edge products such as drones, security cameras, robots, and smart retail applications are implementing complex and robust neural networks,” explained Linley Gwennap, principal analyst of The Linley Group.

“Within these edge AI applications, we see an increasing demand for more performance, greater accuracy, and higher resolution,” Gwennap added. “This fast-growing market provides a large opportunity for Deep Vision’s AI accelerator, which offers impressive performance and low power.”

Dedicated ambient sound and NLP chipsets on the rise

More than two billion devices will be shipped with a dedicated chipset for ambient sound or NLP by 2026, according to new data from ABI Research.

The figures come from the ‘Deep Learning-Based Ambient Sound and Language Processing: Cloud to Edge’ report, which highlights the state of deep learning-based ambient sound and NLP technologies across different industries.

According to the report, ambient sound and NLP will follow the same cloud-to-edge evolutionary path as machine vision.

“Through efficient hardware and model compression technologies, this technology now requires fewer resources and can be fully embedded in end devices,” explained Lian Jye Su, principal analyst for Artificial Intelligence and Machine Learning at ABI Research.

“At the moment, most of the implementations focus on simple tasks, such as wake word detection, scene recognition, and voice biometrics. However, moving forward, AI-enabled devices will feature more complex audio and voice processing applications,” Su added.
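One of the "simple tasks" Su mentions, voice biometrics, often reduces at inference time to comparing a stored speaker embedding against a probe embedding. The sketch below is an assumed illustration of that comparison step only; the embedding model that would produce these vectors is not shown, and the vectors and threshold are made up for demonstration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_speaker(enrolled, probe, threshold=0.8):
    """Accept the probe if its embedding is close enough to enrollment."""
    return cosine_similarity(enrolled, probe) >= threshold

enrolled = [0.9, 0.1, 0.4]        # hypothetical enrolled speaker embedding
probe_match = [0.85, 0.15, 0.38]  # same speaker, slightly different utterance
probe_other = [0.1, 0.9, 0.2]     # a different speaker
print(same_speaker(enrolled, probe_match), same_speaker(enrolled, probe_other))
```

Because this comparison is just a dot product and two norms, it is cheap enough to run fully on-device, which is the point of the cloud-to-edge shift the report describes.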

According to the technology expert, many chipset vendors — including Qualcomm — are aware of this trend, and are now actively forming partnerships to boost their capabilities.

“Through multimodal learning, edge AI systems can become smarter and more secure if they combine insights from multiple data sources,” Su said.

“With federated learning, end users can personalize voice AI in end devices, as edge AI can improve based on learning from their unique local environments,” he concluded.
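The federated learning idea Su describes can be sketched in miniature. This is a generic, assumed illustration of federated averaging, not any vendor's implementation: each device takes a local training step on data that never leaves it, and only the resulting weights are sent back and averaged.

```python
def local_update(global_weights, local_data, lr=0.1):
    """One gradient-style step on-device, nudging weights toward local data."""
    local_mean = sum(local_data) / len(local_data)
    return [w + lr * (local_mean - w) for w in global_weights]

def federated_average(updates):
    """Server averages per-device weights without ever seeing raw data."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_weights = [0.0]                               # shared starting model
device_data = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]]   # stays on each device
updates = [local_update(global_weights, d) for d in device_data]
global_weights = federated_average(updates)
print(global_weights)
```

The privacy property comes from the data flow, not the math: raw audio stays local, so each device can adapt to its own acoustic environment while still contributing to a shared model.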
