
AWS introduces innovative features for Alexa, Amazon Rekognition


Amazon Web Services has made the Alexa Voice Service available on low-powered devices that previously lacked the processing power for voice control; until now, the service had “a minimum requirement of at least 100 megabytes of on-device RAM and an ARM Cortex ‘A’ class microprocessor,” writes SiliconANGLE.

To cut device costs, Amazon is offloading compute-heavy tasks, such as retrieving, buffering, decoding and mixing audio, from the device to the cloud, making voice control, and potentially voice biometrics, feasible even for products as simple as light switches.

The AWS IoT Greengrass service has also been extended. With new support for Docker containers, data collection and analysis can be performed at the edge using Greengrass’s built-in analytics. Another important update is Greengrass Stream Manager, which makes it easier for companies to collect and manage data streams at the edge, so developers no longer have to build their own stream-management systems out of AWS Lambda functions.
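The idea behind Stream Manager, buffering edge data locally and exporting it in batches, can be sketched in plain Python. This is a conceptual illustration only: the real AWS IoT Greengrass Stream Manager ships its own SDK and client APIs, and the class and method names below are hypothetical.

```python
import json
from collections import deque

class EdgeStreamBuffer:
    """Hypothetical sketch of edge-side stream buffering: messages
    accumulate locally and are flushed to an uploader in fixed-size
    batches, the pattern Greengrass Stream Manager automates when
    exporting edge data to the AWS cloud."""

    def __init__(self, batch_size=3, uploader=None):
        self.batch_size = batch_size
        self.uploader = uploader or (lambda batch: None)
        self.queue = deque()
        self.batches_sent = 0

    def append(self, record: dict) -> None:
        # Serialize and enqueue; flush automatically at batch_size.
        self.queue.append(json.dumps(record))
        if len(self.queue) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        # Drain whatever is queued into one batch and hand it off.
        if not self.queue:
            return
        batch = [self.queue.popleft() for _ in range(len(self.queue))]
        self.uploader(batch)
        self.batches_sent += 1

sent = []
buf = EdgeStreamBuffer(batch_size=3, uploader=sent.append)
for i in range(7):
    buf.append({"sensor": "temp", "reading": i})
buf.flush()  # push the remaining partial batch
print(len(sent), [len(b) for b in sent])  # 3 [3, 3, 1]
```

The point of the pattern is that the device decides when data leaves the edge, rather than every Lambda invocation managing its own uploads.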

“When you know the state of your physical assets, you can solve a lot of things,” AWS Vice President of IoT Dirk Didascalou told SiliconANGLE. “You can also create a lot of new services. A lot of our customers have this need.”

To speed up work for IoT developers, AWS is also adding new connectivity and control features: Fleet Provisioning for AWS IoT Core, which simplifies onboarding devices to the AWS cloud; Configurable Endpoints for AWS IoT Core, which lets companies scale up faster by migrating devices from self-managed infrastructure to fully managed AWS IoT services; and Secure Tunneling for AWS IoT Device Management, which enables remote troubleshooting communication with IoT devices.
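To give a rough sense of how fleet provisioning works, an AWS IoT provisioning template is a JSON document that maps device-supplied parameters (such as a serial number) to cloud resources created at onboarding. The snippet below builds such a document; the schema shown is a simplified assumption modeled on AWS’s published template format, not a verbatim copy.

```python
import json

def build_provisioning_template() -> str:
    """Sketch of a fleet provisioning template body: the device
    presents a SerialNumber when it first connects, and IoT Core
    creates a Thing named after it (schema simplified here)."""
    template = {
        "Parameters": {
            "SerialNumber": {"Type": "String"},
        },
        "Resources": {
            "thing": {
                "Type": "AWS::IoT::Thing",
                "Properties": {
                    # Thing name is derived from the device's serial number.
                    "ThingName": {
                        "Fn::Join": ["", ["sensor_", {"Ref": "SerialNumber"}]]
                    },
                },
            },
        },
    }
    return json.dumps(template)

body = build_provisioning_template()
print(sorted(json.loads(body)))  # ['Parameters', 'Resources']
```

In practice the body would be registered with the cloud via the AWS SDK (boto3’s `iot` client exposes `create_provisioning_template`), which is omitted here because it requires live credentials.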

Amazon Alexa gains two new expressive capabilities in the U.S. market: news and music speaking styles that move away from its neutral tone, and emotional responses. Users can now speak with a personal assistant that can respond in either a happy/excited or a disappointed/empathetic voice, Amazon announced. Depending on the content, Alexa can adapt its tone and intonation to sound like a TV news anchor, for example, or use an excited voice when announcing a victory, to deliver a better customer experience. In Australia, the service can also be delivered with an Australian accent.

According to Amazon customer feedback, 30 percent of users said they had a better experience when Alexa expressed emotion and responded in a more natural voice. The feature is made possible by Neural Text-to-Speech (NTTS) technology, which turns written text into synthesized speech. It uses Amazon Polly, a service that has shifted toward neural-network-based text-to-speech systems to sound more natural.
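Amazon Polly exposes the newscaster style through SSML when the neural engine is used. The helper below wraps plain text in the relevant SSML tag; the boto3 synthesis call is shown only as a comment, since it needs live AWS credentials and a voice that supports the style.

```python
def newscaster_ssml(text: str) -> str:
    """Wrap plain text in Polly's news-domain SSML tag, which,
    combined with the neural engine, produces the TV-news-anchor
    speaking style described in the announcement."""
    return f'<speak><amazon:domain name="news">{text}</amazon:domain></speak>'

ssml = newscaster_ssml("AWS has announced new features for Alexa.")
print(ssml)

# A typical synthesis call (not executed here) would look like:
# import boto3
# polly = boto3.client("polly")
# audio = polly.synthesize_speech(
#     Engine="neural", VoiceId="Matthew",
#     TextType="ssml", Text=ssml, OutputFormat="mp3")
```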

Amazon Web Services customers can also now build their own machine learning image analysis thanks to a new Amazon Rekognition feature called Amazon Rekognition Custom Labels, available as of December 3, 2019. The feature identifies objects unique to a customer’s business: for example, if an auto repair shop wants to detect machine parts in inventory, the algorithm can be trained, without any machine learning expertise, to differentiate between “turbochargers” and “torque converters,” explains Anushri Mainthia, Senior Product Manager on the Amazon Rekognition team and product lead for Amazon Rekognition Custom Labels, in a blog post.

For each machine part, users upload and label 10 sample images in the console. Once the dataset is finalized, Amazon Rekognition Custom Labels takes over. When image processing is done, Amazon Rekognition object and scene detection lists all the machine parts in inventory, while Custom Labels categorizes the parts and lists their names. Amazon claims its tool “can process tens of thousands of images stored in Amazon S3 in an hour.”
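The labeled images described above are fed to training as a JSON Lines manifest, one entry per image. The helper below builds such an entry; the field names follow the SageMaker Ground Truth image-classification schema that Custom Labels consumes, but are simplified here, and the bucket path and label attribute name (`part-label`) are hypothetical.

```python
import json

def manifest_line(image_s3_uri: str, class_name: str) -> str:
    """Build one JSON Lines entry of a (simplified) Ground Truth
    image-classification manifest, the labeled-image format that
    Rekognition Custom Labels trains from."""
    entry = {
        "source-ref": image_s3_uri,          # S3 location of the image
        "part-label": 1,                     # hypothetical label attribute
        "part-label-metadata": {
            "class-name": class_name,        # e.g. "turbocharger"
            "confidence": 1.0,               # human labels get confidence 1
            "type": "groundtruth/image-classification",
            "human-annotated": "yes",
        },
    }
    return json.dumps(entry)

line = manifest_line("s3://repair-shop/parts/img01.jpg", "turbocharger")
print(json.loads(line)["part-label-metadata"]["class-name"])  # turbocharger
```

Writing one such line per labeled image yields the manifest the console produces behind the scenes when users label their 10 samples per part.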
