Humanoid smart robots use facial recognition, NLP to entertain event guests, astronauts, kids

CloudMinds has developed a smart robot rental program that provides intelligent cloud service humanoid robots for entertainment at trade shows, weddings, special events, conferences and offices, the company announced.

The robots come equipped with advanced natural language processing and sophisticated task capabilities, making them accessible to everyday people through smart interactions and conversations in multiple languages. They can also entertain through dancing and gestures, and deliver tailored brand experiences.

“We’re bringing the power of cloud artificial intelligence-powered robots, which learn with human input, closer to society,” said Bill Huang, founder and chief executive officer of CloudMinds. “This further strengthens our foundation in providing an even wider range of intelligent compliant service robots from CloudMinds – from wheeled to two-legged form factors. Ultimately, we’re elevating what is now the ‘new normal’ expectation of helpful, friendly robots for homes and businesses.”

According to Alberto Scherb, senior director of the CloudMinds Smart Robot rental program, the rental program will start with the Cloud Pepper robot, which is already very popular in several industry verticals across the globe, with future plans including Cloud Cleaning, Cloud Assistant, Cloud Patrol and Cloud Vending robots.

Cloud Pepper is built on SoftBank Robotics’ Pepper, considered the first personal humanoid robot, and runs cloud-based AI developed by CloudMinds. The robot gains its added intelligence from integrating CloudMinds’ Human Augmented Robotics Intelligence (HARIX) platform (or “Cloud Brain”) and an ultra-secure virtual backbone intranet (“Nerve Network”).

It uses facial recognition to identify people and can gauge human moods. NLP lets Cloud Pepper communicate in different languages, record demographics and analyze customer interactions. Suggested use cases include serving as a brand ambassador at tech or medical trade shows, fashion events and book launches, or as a receptionist or representative at airports, schools, hospitals and other public spaces. Cloud Pepper can be rented for both short- and long-term projects, at prices starting at $5,500.
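As a rough sketch of how a facial-recognition greeting flow like this can work (all names and data below are hypothetical illustrations, not CloudMinds’ actual API): a detected face is reduced to an embedding vector, matched against enrolled visitors by similarity, and a greeting is picked in the visitor’s preferred language.

```python
import math

# Hypothetical enrollment database: name -> (face embedding, preferred language).
# In a real system the embeddings would come from a face recognition model.
ENROLLED = {
    "Alice": ([0.9, 0.1, 0.2], "en"),
    "Bruno": ([0.1, 0.8, 0.3], "pt"),
}

GREETINGS = {"en": "Hello, {}!", "pt": "Olá, {}!", "unknown": "Hello, welcome!"}

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def greet(embedding, threshold=0.8):
    """Match a detected face against enrolled visitors and pick a greeting."""
    best_name, best_score = None, 0.0
    for name, (enrolled_emb, _lang) in ENROLLED.items():
        score = cosine_similarity(embedding, enrolled_emb)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        lang = ENROLLED[best_name][1]
        return GREETINGS[lang].format(best_name)
    return GREETINGS["unknown"]
```

A face that matches no enrolled visitor above the threshold simply gets the generic multilingual fallback, which is roughly the behavior a trade-show robot needs.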

Robo astronaut uses emotional intelligence, facial recognition to communicate in space

SpaceX’s Dragon capsule includes a robo astronaut called Cimon-2 to entertain the crew for the next three years, writes WTKR.

Cimon-2 is a brand-new model of the Crew Interactive Mobile Companion, built by Airbus for the German Aerospace Center. Its AI is based on IBM’s Watson technology. The robot is autonomous, has emotional intelligence and will assist with various tasks on the space station, including serving as a conversational companion. It has two facial recognition cameras for eyes and five additional cameras for autonomous navigation and video recording. With the help of psychologists, Cimon-2 was given a personality and can use different tones in conversation.

Its first version, Cimon, joined the space station in 2018.

During a 90-minute demonstration with European Space Agency astronaut Alexander Gerst, Cimon played music, gave instructions and used facial recognition to identify the person it was interacting with, as well as to make eye contact and hold a conversation. The robot’s photo and video recording capabilities raised concerns about what footage it might end up sharing.

“The new Cimon has a built-in switch that enables the data streams from all cameras and microphones to be interrupted from the ISS. The astronaut has control over Cimon at all times, which was especially important for us,” said Judith Buchheim, a researcher involved in the robot’s ethics evaluation.

After Cimon returned to Earth in August, the project was deemed a success and the team began work on the upgraded Cimon-2.

“When it was first deployed on the ISS, Cimon proved that it could understand not only the content within its given context, but also the intention behind it,” said Matthias Biniok, IBM’s Lead Watson Architect for Germany. “Cimon-2 goes one step further. With the help of the IBM Watson Tone Analyzer from the IBM cloud in Frankfurt, it is now able to evaluate the emotions of the astronauts and respond to the situation in an appropriate way if this is desired by the astronauts, or if its emotional analysis capabilities are being tested as part of an experiment. This allows Cimon-2 to transition from a scientific assistant into an empathetic companion, as required.”
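The quote describes tone analysis feeding response selection. The real system calls the IBM Watson Tone Analyzer service in the IBM cloud; the toy stand-in below (keyword lists and replies are invented for illustration, not from the Cimon project) only sketches the downstream logic of picking a reply that matches the speaker’s detected emotional state.

```python
# Toy stand-in for a tone-analysis service such as IBM Watson Tone Analyzer.
# The keyword sets and canned responses are purely illustrative.
TONE_KEYWORDS = {
    "frustrated": {"stuck", "broken", "again", "why"},
    "joy": {"great", "worked", "finally", "thanks"},
}

RESPONSES = {
    "frustrated": "Let's take this one step at a time. I can repeat the procedure.",
    "joy": "Glad to hear it! Shall we move on to the next task?",
    "neutral": "Understood. How can I help?",
}

def detect_tone(utterance):
    """Return the first tone whose keywords appear in the utterance, else 'neutral'."""
    words = set(utterance.lower().split())
    for tone, keywords in TONE_KEYWORDS.items():
        if words & keywords:
            return tone
    return "neutral"

def respond(utterance):
    """Pick a reply matched to the speaker's detected emotional tone."""
    return RESPONSES[detect_tone(utterance)]
```

A production system would replace `detect_tone` with a call to the cloud tone-analysis API and score multiple emotions with confidences, but the response-selection step works the same way.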

Ubtech showcases humanoid robot that sneezes

Chinese company Ubtech has released in South Korea a portable humanoid robot that can talk, read, sing and dance, writes Robotics & Automation News.

AlphaMini is an advanced robot based on an extensible AI platform for an enhanced human-robot interactive experience. It comes with vivid facial expressions and fast facial recognition, and can even sneeze.

“With the recent rising interests in digital transformation, interest in robots is expanding as well. The way of communication has changed,” said Kim Dong Jin, CEO of J. Mediator, UBTECH’s local partner. “We can communicate through not only verbal elements but also non-verbal elements such as gestures, postures, facial expressions, eye contact, voice and intonation.”

Ranked the most automated country in a recent report from the International Federation of Robotics, South Korea is the first market where AlphaMini has been released.

Ubtech partnered with South Korean search company Naver to integrate AlphaMini with its Clova AI voice platform for localized voice interaction. AlphaMini uses LED eyes capable of over 100 expressions and 14 flexible, lightweight servos as body joints to perform activities including dancing, kung fu and push-ups. AlphaMini can also help children become familiar with AI through facial and object recognition and Scratch programming.

Misty Robotics showcases personal robot at CES

Misty Robotics has developed a robot that can not only see and talk but also smell and touch, and the company is now inviting the world’s more than 23 million software developers to write code for it, according to a Forbes interview with Misty Robotics founder Ian Bernstein.

The first Misty robot was presented at CES 2018. This led to a Kickstarter campaign, after which Bernstein started selling the robot at prices between $2,899 and $3,199. He hopes to bring the price below $1,000. Misty II will be showcased at CES 2020.

Misty II is equipped with a long list of features including facial recognition, voice interaction, capacitive touch, spatial mapping, path planning and environmental sensors. The robot runs Microsoft Windows IoT Core and carries both a 3D and a 4K camera, two Qualcomm chips and a wide-angle fisheye lens, among other hardware. It can be programmed with JavaScript and C#, and Python support may be added in the future.

According to Bernstein, his ten-year plan is to turn Misty into a genuine personal robot that can clean and cook. Equipped with facial recognition, Misty can recognize people, and it uses simultaneous localization and mapping to navigate. Misty can even be programmed to receive guests at a party, take pictures and bring drinks.

For now, it cannot perform sentiment analysis, but developers are working on defining its personality. With 100 skills already under its belt, the plan is to develop Misty further for eldercare, as a few companies in Europe are already using it for that purpose. Bernstein’s goal is for Misty to be completely autonomous, using its sensors to choose skills on its own.
