Explainer: Gesture recognition

 

Gesture recognition has been defined as the mathematical interpretation of human motion by a computing device. Gestures can originate from any bodily motion or state, but most commonly from the face or hands.

Ideally, gesture recognition enables humans to communicate with machines and interact naturally, without mechanical intermediaries. Using sensors that detect body motion, gesture recognition makes it possible to control devices such as televisions, computers and video game consoles, primarily with hand or finger movements. With this technology, you can change television channels, adjust the volume and interact with others through your TV.

Recognizing gestures as input makes computers more accessible for the physically impaired and makes interaction more natural in a gaming or 3D virtual world environment. Using gesture recognition, it is even possible to point a finger at the computer screen so that the cursor moves accordingly. This could potentially make conventional input devices such as mice, keyboards and even touch screens redundant.

Gesture recognition, along with facial recognition, voice recognition, eye tracking and lip movement recognition, is a component of what software and hardware designers and developers refer to as a “perceptual user interface”.

The goal of a perceptual user interface is to enhance the efficiency and ease of use of the underlying logical design of a stored program, a design discipline known as usability. In personal computing, gestures are most often used as input commands. Hand and body gestures can be captured by a controller that contains accelerometers and gyroscopes to sense tilting, rotation and acceleration of movement, or the computing device can be outfitted with a camera so that software on the device can recognize and interpret specific gestures. A wave of the hand, for instance, might terminate a program.
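As a minimal sketch of how a controller's accelerometer senses tilting: when the device is held still, the only acceleration it measures is gravity, and the way gravity splits across the three axes reveals the tilt. The function below is illustrative, assuming a generic 3-axis accelerometer that reports readings in units of g.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll (in degrees) from a static 3-axis
    accelerometer reading in units of g. Valid only while the device
    is not otherwise accelerating, so gravity dominates the reading."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat: gravity falls entirely on the z axis.
print(tilt_from_accel(0.0, 0.0, 1.0))  # (0.0, -0.0) — no tilt
```

A gesture recognizer would feed a stream of such readings into a classifier; gyroscopes add rotation rates that an accelerometer alone cannot distinguish from lateral movement.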

Arguably, one of the most famous gesture recognition applications is the “Wiimote”, the main controller for Nintendo’s Wii gaming console, which captures input movement from users. It contains an accelerometer that measures acceleration along three axes, and an extension containing a gyroscope can be added to improve tracking of rotational motion. The controller also contains an optical sensor that lets it determine where it is pointing; a separate sensor bar fitted with infrared (IR) LEDs provides the reference points the sensor tracks.
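The pointing principle can be sketched simply: the controller's camera sees the sensor bar's IR LEDs as bright blobs, and the position of their midpoint in the camera image maps to a cursor position on screen. This is purely illustrative and not Nintendo's actual pipeline, which also corrects for controller roll and distance; the resolutions below are assumptions.

```python
def cursor_from_ir(led1, led2, cam_res=(1024, 768), screen_res=(1920, 1080)):
    """Map the midpoint of two IR LED blobs, given as (x, y) pixel
    coordinates in the controller camera's image, to a screen cursor
    position. Illustrative sketch only."""
    mid_x = (led1[0] + led2[0]) / 2
    mid_y = (led1[1] + led2[1]) / 2
    # The sensor bar appears to move opposite to the pointing direction,
    # so the x axis is inverted when mapping camera space to the screen.
    sx = (1 - mid_x / cam_res[0]) * screen_res[0]
    sy = (mid_y / cam_res[1]) * screen_res[1]
    return sx, sy

# LEDs centered in the camera image -> cursor at the screen center.
print(cursor_from_ir((502, 384), (522, 384)))  # (960.0, 540.0)
```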

Microsoft is also a leader in gesture recognition technology. The firm’s line of motion-sensing input devices for its Xbox 360 and Xbox One video game consoles and Windows PCs is centered around a webcam-style add-on peripheral. The unit allows users to control and interact with their gaming console or computer without a game controller, through a natural user interface that uses gestures. The technology continuously interprets camera input of the user’s movement.

Systems that incorporate gesture recognition rely on recognition algorithms. Most developers distinguish between two algorithmic approaches: 3D-model-based and appearance-based. The most popular method uses 3D information about key body parts to obtain several important parameters, such as palm position or joint angles. In contrast, appearance-based systems interpret images or video directly.
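To illustrate the 3D-model-based approach, here is a minimal sketch of one of the parameters it mentions, a joint angle, computed from three 3D key points (for example shoulder, elbow and wrist). The point names are hypothetical; a real system would extract such points per frame from a depth camera or pose estimator.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by 3D key points a-b-c
    (e.g. shoulder-elbow-wrist). One per-frame parameter a skeletal
    (3D-model-based) gesture recognizer might feed to a classifier."""
    v1 = [a[i] - b[i] for i in range(3)]  # vector from b toward a
    v2 = [c[i] - b[i] for i in range(3)]  # vector from b toward c
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Fully extended arm: shoulder, elbow and wrist are collinear.
print(joint_angle((0, 0, 0), (1, 0, 0), (2, 0, 0)))  # 180.0
```

An appearance-based system would skip this geometric step entirely and classify the raw image, which trades interpretable parameters for simpler sensing hardware.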

In addition to the technical challenges of implementing gesture recognition, there are also social challenges. Gestures must be simple, intuitive and universally acceptable. Further, input systems must be able to distinguish nuances in movement.

