Faceunity updates facial analysis algorithms in AR SDK for live streamers
Faceunity, a Chinese company that specializes in augmented reality (AR) and virtual avatars, has added new features and facial analysis algorithm overhauls to the software development kit (SDK) of its video effect program, with a focus on live streaming.
The company’s AR video effect program utilizes biometric data derived from real-time face tracking, human motion tracking, portrait segmentation, and gesture recognition. The algorithm is said to establish up to 241 dense facial landmarks for the mouth and eyes, recognize 56 facial expression coefficients and 25 human body landmarks, and perform portrait, head, and hair segmentation for contour recognition, among other biometric recognition functions for human motion and gestures.
With its AR video effects, Faceunity says it can perform face beautification, portrait filters, makeup and hair styling, body reshaping, and face-driven virtual avatars, among a plethora of actions for live streamers. The product also comes packaged with FU Creator, a tool for developers to create their own AR special effects such as 2D and 3D stickers, AR masks, and beauty makeup, along with the ability to trigger special effects with facial expressions and gestures.
The SDK is available on iOS, Android, PC, Mac, and Unity, and can be integrated into products from other audio and video manufacturers, Faceunity says.
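To make the figures above concrete, the sketch below models what a per-frame face-tracking result with 241 landmarks, 56 expression coefficients, and 25 body landmarks might look like, and how an expression coefficient could trigger a special effect as the article describes. All names and structures here are illustrative assumptions, not Faceunity's actual API.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical shape of a per-frame tracking result, sized to match the
# figures reported in the article (names are illustrative, not Faceunity's).
@dataclass
class FaceTrackingResult:
    landmarks: List[Tuple[float, float]]       # 241 dense facial landmarks
    expression: List[float]                    # 56 expression coefficients (0..1)
    body_landmarks: List[Tuple[float, float]]  # 25 human body landmarks

def trigger_effect(result: FaceTrackingResult,
                   coeff_index: int,
                   threshold: float = 0.5) -> bool:
    """Fire an AR effect when one expression coefficient exceeds a threshold,
    in the spirit of the expression-triggered effects the article mentions."""
    return result.expression[coeff_index] > threshold

# Example frame: coefficient 3 (a hypothetical "mouth open" signal) is strong.
frame = FaceTrackingResult(
    landmarks=[(0.0, 0.0)] * 241,
    expression=[0.0] * 56,
    body_landmarks=[(0.0, 0.0)] * 25,
)
frame.expression[3] = 0.9
print(trigger_effect(frame, coeff_index=3))  # True
```

In a real integration the tracking result would come from the SDK's native callbacks per video frame; this sketch only shows the kind of data such a pipeline exposes to effect logic.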
German 3D sensor-maker OQmented recently opened a Silicon Valley office to bring its image-capturing technology closer to more augmented reality developers.
Article Topics
augmented reality | biometric data | biometrics | expression recognition | face biometrics | facial analysis | gesture recognition | SDK