AWS introduces innovative features for Alexa, Amazon Rekognition
Amazon Web Services has made the Alexa Voice Service available on low-powered devices that previously did not have enough processing power to integrate voice control; until now, the service had “a minimum requirement of at least 100 megabytes of on-device RAM and an ARM Cortex ‘A’ class microprocessor,” writes SiliconANGLE.
To cut back on costs, Amazon is offloading processing-heavy tasks such as retrieving, buffering, decoding and mixing audio from the device to the cloud, making voice control, and potentially biometrics, possible even for light switches.
The AWS IoT Greengrass edge service has also been extended. With new support for Docker containers, data collection and analysis can be performed at the edge using the analytics built into AWS IoT Greengrass. Another important update is Greengrass Stream Manager, which makes it easier for companies to collect and manage data streams at the edge; developers no longer have to waste time building their own stream management systems out of AWS Lambda functions.
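As a minimal sketch of what that looks like in practice, the Stream Manager SDK for Python lets an edge application define a local stream that exports automatically to the cloud; the stream name, Kinesis stream and payload below are hypothetical placeholders.

```python
# Minimal sketch of AWS IoT Greengrass Stream Manager (Python SDK).
# Stream and Kinesis names below are hypothetical placeholders.
from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

client = StreamManagerClient()

# Define a local stream that overwrites its oldest data when full
# and exports automatically to an Amazon Kinesis data stream.
client.create_message_stream(
    MessageStreamDefinition(
        name="SensorReadings",  # hypothetical stream name
        strategy_on_full=StrategyOnFull.OverwriteOldestData,
        export_definition=ExportDefinition(
            kinesis=[
                KinesisConfig(
                    identifier="KinesisExport",
                    kinesis_stream_name="MyKinesisStream",  # hypothetical
                )
            ]
        ),
    )
)

# Append a reading collected at the edge; Stream Manager handles
# buffering and uploading in the background.
client.append_message(stream_name="SensorReadings", data=b'{"temperature": 21.5}')

client.close()
```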
“When you know the state of your physical assets, you can solve a lot of things,” AWS Vice President of IoT Dirk Didascalou told SiliconANGLE. “You can also create a lot of new services. A lot of our customers have this need.”
To speed up workloads for IoT developers, AWS is also integrating new features into its connectivity and control services. Fleet Provisioning for AWS IoT Core makes onboarding devices to the AWS cloud easier; Configurable Endpoints for AWS IoT Core let companies scale up faster by easily migrating devices from a self-managed infrastructure to fully managed AWS IoT services; and Secure Tunneling for AWS IoT Device Management enables secure remote troubleshooting communication with IoT devices.
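As a hedged illustration of the last of these, Secure Tunneling is exposed through the AWS SDK; the sketch below uses boto3’s open_tunnel call with a hypothetical device name.

```python
# Sketch: opening a secure tunnel to a remote device with boto3.
# The thing name is a hypothetical placeholder.
import boto3

client = boto3.client("iotsecuretunneling")

# Open a tunnel whose destination is the IoT thing to troubleshoot;
# "SSH" tells the local proxy which service to forward.
response = client.open_tunnel(
    Description="Remote troubleshooting session",
    DestinationConfig={
        "thingName": "factory-sensor-42",  # hypothetical device
        "services": ["SSH"],
    },
    TimeoutConfig={"maxLifetimeTimeoutMinutes": 60},
)

# The returned access tokens are handed to the source and destination
# local proxies, which relay traffic through the tunnel.
print(response["TunnelId"])
print(response["SourceAccessToken"][:16], "...")
```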
Amazon Alexa features two new expressive capabilities, available for the U.S. market, that move it away from a neutral tone: emotions and speaking styles such as news and music. Users can now talk to a personal assistant that responds in either a happy/excited or a disappointed/empathetic voice, Amazon announced. Depending on the content, Alexa can also adapt its speaking tone and intonation to that of a TV news anchor, for example, or an excited voice if a victory is involved, to deliver a better customer experience. In Australia, the service can be adapted to an Australian accent.
According to Amazon customer feedback, 30 percent of users said they had a better experience when Alexa expressed emotion and responded in a more natural voice. The feature is made possible by neural text-to-speech (NTTS) technology, which turns written text into synthesized speech. It builds on the Amazon Polly service, which has shifted toward neural-network-based text-to-speech systems to sound more natural.
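The same neural engine is available to developers through the Amazon Polly API; as a small sketch, the newscaster speaking style is selected with the amazon:domain SSML tag (the voice and text below are illustrative).

```python
# Sketch: synthesizing newscaster-style speech with Amazon Polly's
# neural engine (NTTS). Voice and text are illustrative.
import boto3

polly = boto3.client("polly")

ssml = (
    "<speak>"
    '<amazon:domain name="news">'
    "Amazon Web Services announced new features for Alexa today."
    "</amazon:domain>"
    "</speak>"
)

response = polly.synthesize_speech(
    Engine="neural",      # select the neural TTS engine
    VoiceId="Matthew",    # a voice that supports the news style
    OutputFormat="mp3",
    TextType="ssml",
    Text=ssml,
)

# AudioStream is a streaming body containing the MP3 audio.
with open("newscast.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```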
Amazon Web Services customers can also now build their own machine learning image analysis thanks to a new Amazon Rekognition feature called Amazon Rekognition Custom Labels, available as of December 3, 2019. The feature can identify objects unique to a business: if an auto repair shop wants to detect machine parts in inventory, for example, the algorithm can be trained, without any machine learning expertise, to differentiate between “turbochargers” and “torque converters,” explains Anushri Mainthia, Senior Product Manager on the Amazon Rekognition team and product lead for Amazon Rekognition Custom Labels, in a blog post.
For each machine part, users need 10 sample images to upload and label in the console. Once the dataset is finalized, Amazon Rekognition Custom Labels takes over. When the image processing stage is done, Amazon Rekognition object and scene detection will list all the machine parts in inventory, while Amazon Rekognition Custom Labels will categorize the parts and list their names. Amazon claims its tool “can process tens of thousands of images stored in Amazon S3 in an hour.”
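After a model version has been trained and started, inference is a single API call; the sketch below uses the detect_custom_labels operation from boto3, with a hypothetical project version ARN, bucket and image key.

```python
# Sketch: running inference against a trained Amazon Rekognition
# Custom Labels model. ARN, bucket, and key are hypothetical.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_custom_labels(
    ProjectVersionArn=(
        "arn:aws:rekognition:us-east-1:123456789012:project/"
        "machine-parts/version/machine-parts.2019-12-03/1575331200000"
    ),
    Image={"S3Object": {"Bucket": "parts-inventory", "Name": "shelf-04.jpg"}},
    MinConfidence=70,  # drop low-confidence detections
)

# Each custom label carries the part name and a confidence score.
for label in response["CustomLabels"]:
    print(label["Name"], round(label["Confidence"], 1))
```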