Explainer: Gesture recognition
Gesture recognition has been defined as the mathematical interpretation of human motion by a computing device. Gestures can originate from any bodily motion or state, but commonly originate from the face or hands.
Ideally, gesture recognition enables humans to communicate with machines and interact naturally without any mechanical intermediaries. Utilizing sensors that detect body motion, gesture recognition makes it possible to control devices such as televisions, computers and video games, primarily with hand or finger movement. With this technology you can change television channels, adjust the volume and interact with others through your TV.
Recognizing gestures as input makes computers more accessible for the physically impaired and makes interaction more natural in gaming or 3D virtual world environments. Using gesture recognition, it is even possible to point a finger at the computer screen and have the cursor move accordingly. This could potentially make conventional input devices such as mice, keyboards and even touchscreens redundant.
Gesture recognition, along with facial recognition, voice recognition, eye tracking and lip movement recognition, is a component of what software and hardware designers and developers refer to as a “perceptual user interface”.
The goal of a perceptual user interface is to enhance the efficiency and ease of use of the underlying logical design of a stored program, a design discipline known as usability. In personal computing, gestures are most often used for input commands. Hand and body gestures can be captured by a handheld controller that contains accelerometers and gyroscopes to sense tilting, rotation and acceleration, or the computing device can be outfitted with a camera so that software on the device can recognize and interpret specific gestures. A wave of the hand, for instance, might terminate a program.
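To make the controller scenario concrete, here is a minimal Python sketch, assuming a hypothetical read_accelerometer() function that returns acceleration in g’s along three axes (real controllers expose comparable readings through their own APIs). Tilt is estimated from the direction of gravity, and a sudden spike in total acceleration is treated as a shake gesture; the 2.5 g threshold is an illustrative choice, not a standard value.

```python
import math

def read_accelerometer():
    # Hypothetical sensor read: a device resting flat reports roughly
    # 1 g on the z axis and near zero on x and y.
    return (0.02, -0.01, 0.98)

def tilt_angles(ax, ay, az):
    """Estimate pitch and roll (in degrees) from the direction of gravity."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def is_shake(ax, ay, az, threshold_g=2.5):
    """Flag a shake gesture when total acceleration exceeds the threshold."""
    return math.sqrt(ax**2 + ay**2 + az**2) > threshold_g

ax, ay, az = read_accelerometer()
print(tilt_angles(ax, ay, az))   # roughly (-1.2, -0.6) degrees for the sample read
print(is_shake(ax, ay, az))      # False for a device at rest
```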
Arguably, one of the most famous gesture recognition applications is the Wii Remote (popularly, the “Wiimote”), the main controller for Nintendo’s Wii gaming console. It contains an accelerometer that measures acceleration along three axes, and an extension containing a gyroscope, the Wii MotionPlus, can be added to improve the detection of rotational motion. The controller also contains an optical sensor that lets the console determine where it is pointing; a separate sensor bar fitted with infrared (IR) LEDs provides the reference points the sensor tracks.
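As a rough illustration of the pointing mechanism, the sketch below maps the two IR light clusters the controller’s camera sees to an on-screen cursor position. The camera and screen resolutions, the blob coordinates and the axis inversion are all illustrative assumptions rather than the actual hardware interface.

```python
CAM_W, CAM_H = 1024, 768         # assumed IR camera resolution
SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def cursor_from_ir(blob_a, blob_b):
    """Map the midpoint of two tracked IR blobs to a screen position.

    Moving the controller to the right shifts the sensor bar toward the
    left of the camera frame, so both axes are inverted here.
    """
    mid_x = (blob_a[0] + blob_b[0]) / 2
    mid_y = (blob_a[1] + blob_b[1]) / 2
    return ((1 - mid_x / CAM_W) * SCREEN_W,
            (1 - mid_y / CAM_H) * SCREEN_H)

# Two example blob positions in camera pixel coordinates.
print(cursor_from_ir((400, 350), (620, 360)))
```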
Microsoft is also a leader in gesture recognition technology. The firm’s Kinect line of motion-sensing input devices for its Xbox 360 and Xbox One video game consoles and Windows PCs is centered around a webcam-style add-on peripheral. The unit allows users to control and interact with their gaming console or computer through a natural user interface, using gestures rather than a game controller. The technology interprets the user’s movements from live camera input.
Systems that incorporate gesture recognition rely on algorithms, and most developers distinguish between two algorithmic approaches: 3D model-based and appearance-based. The most popular method makes use of 3D information from key body parts to obtain several important parameters, such as palm position or joint angles. In contrast, appearance-based systems interpret images or video directly.
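A minimal appearance-based sketch in Python with OpenCV is shown below: it segments the hand by skin color and guesses whether the fingers are spread by measuring how much of its convex hull the hand contour actually fills. The HSV skin range and the 0.75 solidity threshold are crude illustrative values, and treating the largest skin-colored blob as the hand is an assumption a real system would need to validate.

```python
import cv2

def open_hand_detected(frame_bgr):
    """Guess whether an open (fingers-spread) hand is visible in the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Crude skin-color range in HSV; lighting and skin tone vary widely.
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    hand = max(contours, key=cv2.contourArea)  # assume the largest blob is the hand
    hull = cv2.convexHull(hand)
    hull_area = cv2.contourArea(hull)
    if hull_area == 0:
        return False
    # Spread fingers leave gaps between the contour and its convex hull,
    # lowering the solidity ratio.
    return cv2.contourArea(hand) / hull_area < 0.75

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
if ok:
    print("open hand" if open_hand_detected(frame) else "no open hand")
cap.release()
```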
In addition to the technical challenges of implementing gesture recognition, there are also social challenges. Gestures must be simple, intuitive and universally acceptable. Further, input systems must be able to distinguish nuances in movement.