Edge voice dev kit, funding revealed as market for NLP-focused hardware grows

The market for Natural Language Processing (NLP) and ambient sound solutions appears to be growing steadily, as companies announce new product launches and investments. Specifically, Knowles announced a new Raspberry Pi Development Kit with voice, audio edge processing, and machine learning (ML) listening capabilities, and Deep Vision announced it has raised $35 million in a Series B funding round, with plans to further develop its edge biometrics-supporting processor. In addition, a new report by ABI Research has highlighted the benefits of deep learning-based ambient sound and NLP in both cloud and edge applications.

Knowles releases new Raspberry Pi Development Kit

The kit is designed to bring voice biometrics, audio edge processing, and ML listening capabilities to devices and systems in a variety of new industries.

The solution enables companies to streamline the design, development, and testing of voice and audio integration technologies.

The new development kit is built on Knowles’ AISonic IA8201 Audio Edge Processor OpenDSP, created for ultra-low-power, high-performance audio processing needs.

The processor features two Tensilica-based, audio-centric DSP cores: one for high-performance compute and AI/ML applications, and the other for very low-power, always-on processing of sensor inputs.
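
In practice, this division of labor means a tiny always-on model watches the microphone stream on the low-power core, and the high-performance core is only woken to run heavier inference. The Python sketch below illustrates that hand-off pattern in general terms; the class and function names are hypothetical placeholders and are not part of Knowles’ SDK.

```python
# Sketch of the usual split across a dual-core audio DSP: a tiny always-on
# detector runs on the low-power core, and the larger AI/ML model is only
# dispatched to the high-performance core once a wake word is heard.
# WakeWordDetector and HeavyInferenceEngine are hypothetical placeholders.
import numpy as np

class WakeWordDetector:
    """Stand-in for the small keyword-spotting model on the low-power core."""
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold

    def triggered(self, frame: np.ndarray) -> bool:
        # Placeholder score; a real detector would run a compact neural network.
        score = float(np.clip(np.abs(frame).mean() * 10, 0.0, 1.0))
        return score >= self.threshold

class HeavyInferenceEngine:
    """Stand-in for the larger speech/NLP model on the high-performance core."""
    def transcribe(self, audio: np.ndarray) -> str:
        return "<recognized command>"  # placeholder for full inference

def process_stream(frames):
    detector, engine = WakeWordDetector(), HeavyInferenceEngine()
    for frame in frames:
        if detector.triggered(frame):        # cheap, always-on path
            return engine.transcribe(frame)  # expensive path, run on demand
    return None

# Example: one second of synthetic audio at 16 kHz.
print(process_stream([np.random.randn(16000) * 0.2]))
```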

Thanks to Knowles’ open DSP platform, the new kit provides access to a wide range of onboard audio algorithms and AI/ML libraries.

It also includes two microphone array boards to help engineers select the appropriate algorithm configurations for the end application.

Deep Vision raises $35M for biometrics applications

The AI processor chip maker recently announced that it raised $35 million in a Series B financing round led by Tiger Global, with participation from Exfinity Venture Partners, Silicon Motion, and Western Digital.

The fresh funds will reportedly support Deep Vision’s renewed efforts to improve its patented ARA-1 AI processor.

The hardware can be used as a face biometrics tool to deliver real-time video analytics, and ARA-1 also supports NLP capabilities for several voice-controlled applications.

“To improve latency and reliability for voice and other cloud services, edge products such as drones, security cameras, robots, and smart retail applications are implementing complex and robust neural networks,” explained Linley Gwennap, principal analyst of The Linley Group.

“Within these edge AI applications, we see an increasing demand for more performance, greater accuracy, and higher resolution,” Gwennap added. “This fast-growing market provides a large opportunity for Deep Vision’s AI accelerator, which offers impressive performance and low power.”

Dedicated ambient sound and NLP chipsets on the rise

More than two billion devices will be shipped with a dedicated chipset for ambient sound or NLP by 2026, according to new data from ABI Research.

The figures come from the ‘Deep Learning-Based Ambient Sound and Language Processing: Cloud to Edge’ report, which highlights the state of deep learning-based ambient sound and NLP technologies across different industries.

According to the report, ambient sound and NLP will follow the same cloud-to-edge evolutionary path as machine vision.

“Through efficient hardware and model compression technologies, this technology now requires fewer resources and can be fully embedded in end devices,” explained Lian Jye Su, principal analyst for Artificial Intelligence and Machine Learning at ABI Research.

“At the moment, most of the implementations focus on simple tasks, such as wake word detection, scene recognition, and voice biometrics. However, moving forward, AI-enabled devices will feature more complex audio and voice processing applications,” Su added.

According to the technology expert, many chipset vendors — including Qualcomm — are aware of this trend, and are now actively forming partnerships to boost their capabilities.

“Through multimodal learning, edge AI systems can become smarter and more secure if they combine insights from multiple data sources,” Su said.

“With federated learning, end users can personalize voice AI in end devices, as edge AI can improve based on learning from their unique local environments,” he concluded.
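
Su’s federated-learning point can be pictured with a minimal sketch: each device fine-tunes a shared voice model on its own local audio, and only the resulting weights, never the raw recordings, are averaged back into the global model. The example below is a generic federated-averaging illustration with made-up data, not any vendor’s actual training pipeline.

```python
# Minimal federated-averaging sketch: devices train locally on private data,
# and only model weights are aggregated; raw audio never leaves the device.
import numpy as np

def local_update(global_w, features, labels, lr=0.05, epochs=5):
    """Fine-tune a simple linear model on one device's private data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = features.T @ (features @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(global_w, devices):
    """Average per-device weight updates into a new global model."""
    return np.mean([local_update(global_w, X, y) for X, y in devices], axis=0)

# Toy example: three devices, each with its own private (features, labels).
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
w_global = np.zeros(4)
for _ in range(10):
    w_global = federated_average(w_global, devices)
```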
