Mobile robot takes orders from silhouetted hand gestures (not that one)
Researchers have created a new digital sense, something between vision and touch, for robots interacting with humans.
A paper published by the Association for Computing Machinery describes how even a low-resolution camera, placed inside a robot topped with a woven nylon tent, can read the hand shadows cast on the fabric as biometric commands.
Although of marginal practical use by itself, the development by Cornell University researchers points in necessary directions for robotics and biometrics.
A demonstration video shows what looks approximately like a tube-shaped cloth hamper turned on its head and perched on a battery-powered four-wheeled base. The very top of the tube is angled not unlike a small kiosk.
A gesture-recognition algorithm uses densely connected convolutional networks to translate six fuzzy silhouettes into responses and actions.
The researchers report between 87.5 percent and 96 percent accuracy under three lighting conditions.
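The defining trait of a densely connected convolutional network is that each layer receives the concatenated feature maps of every layer before it. The sketch below illustrates that connectivity pattern in miniature; the layer counts, channel sizes, and random stand-in "convolutions" are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_layer(x, out_channels):
    # Stand-in for a conv + ReLU: a random 1x1 "convolution" mixing channels.
    w = rng.standard_normal((x.shape[0], out_channels))
    return np.maximum(0, np.einsum('chw,co->ohw', x, w))

def dense_block(x, num_layers=3, growth_rate=4):
    # Dense connectivity: each layer sees the concatenation of ALL
    # previous feature maps, not just the one immediately before it.
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)
        features.append(conv_layer(inp, growth_rate))
    return np.concatenate(features, axis=0)

# A fake low-resolution, single-channel "shadow" image: 1 x 32 x 32.
shadow = rng.standard_normal((1, 32, 32))
out = dense_block(shadow)
# Channels accumulate: 1 input + 3 layers x growth rate 4 = 13.
print(out.shape)  # (13, 32, 32)
```

A real system would end such a block with a small classification head producing scores over the six gesture classes; the point here is only the feature-reuse pattern that gives the network family its name.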
Most experimental and commercial systems place touch sensors on or just under a robot's exterior. But as physicians can attest, any nerve-deadened area of a body stops functioning appropriately and, ultimately, fails.
The bigger the robot and the farther and faster it travels in any direction, the bigger the cost of giving it a useful sense of touch. A development like this might lead in new directions for roboticists.
The approach also suggests an interesting answer to the question of privacy. Systems in a business or a home collect continuous video while operating, potentially revealing information their owners want kept secret.
The researchers illustrate this in the video by placing a translucent half-dome over a home digital assistant’s camera, allowing it to operate and potentially recognize gestures or detect actions without capturing high-resolution images.
Then there are the COVID realities. For a growing segment of humanity, the less that is touched outside the home, the better.