This topic describes the capabilities of the smart retouching feature and explains how to download the Queen SDK demo, which you can use to try the smart retouching feature.
Demos
Platform | Demo | Sample project | Integration guide |
Android | Scan the QR code in DingTalk to download the Queen SDK demo for Android or iOS. | | |
iOS | Scan the QR code in DingTalk to download the Queen SDK demo for Android or iOS. | | |
Web | | | |
Windows | - | | |
macOS | - | | |
Statement of use for third-party software
Platform | Open-source component | License agreement | Modification | URL | Modified code |
Windows | Qt Core 5.14.2 | | No | | None |
Face retouching
The smart retouching feature provides various face retouching effects, such as skin whitening, skin smoothing, blemish concealing, and teeth whitening. Each effect comes with an automatic preset, and you can adjust the level of each effect to achieve a naturally beautiful look.
The face retouching effects automatically adapt to various lighting conditions and environments, which improves user experience.
The following figure shows the effects of face retouching.
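The actual effect parameters and their names are defined in the Queen SDK integration guide for each platform. The following minimal Python sketch only illustrates the idea of normalized, adjustable effect levels; the field names and default values are hypothetical and are not the Queen SDK API.

```python
from dataclasses import dataclass

def _clamp(value: float) -> float:
    """Keep an effect level inside the normalized [0, 1] range."""
    return min(max(value, 0.0), 1.0)

@dataclass
class FaceRetouchLevels:
    # Hypothetical field names; the real parameter names come from the integration guide.
    skin_whitening: float = 0.3      # default values stand in for the automatic presets
    skin_smoothing: float = 0.5
    blemish_concealing: float = 0.6
    teeth_whitening: float = 0.2

    def normalized(self) -> "FaceRetouchLevels":
        return FaceRetouchLevels(
            _clamp(self.skin_whitening),
            _clamp(self.skin_smoothing),
            _clamp(self.blemish_concealing),
            _clamp(self.teeth_whitening),
        )

# Example: strengthen skin smoothing beyond the preset, then normalize before applying.
levels = FaceRetouchLevels(skin_smoothing=0.8).normalized()
```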
Face shaping
Face shaping is performed based on highly accurate facial keypoint detection technology and an advanced intelligent vision algorithm. You can use face shaping effects to modify facial features and facial contours. For example, you can change the size of your eyes, face, and chin.
The smart retouching feature provides various face shaping effects, and you can adjust the level of each effect to suit different faces and preferences.
The following figure shows the effects of face shaping.
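As a conceptual sketch of keypoint-based shaping, the following Python example (using OpenCV and NumPy, which are independent of Queen SDK) applies a local magnification warp around a single facial key point, roughly the idea behind an eye-enlargement effect. The image file name and key point coordinates are assumptions.

```python
import cv2
import numpy as np

def enlarge_around_point(img, center, radius, strength=0.25):
    """Local magnification warp around a facial key point (e.g., an eye center).

    Conceptual sketch of keypoint-based face shaping, not the Queen SDK implementation.
    strength > 0 enlarges the region; the effect fades to zero at `radius`.
    """
    h, w = img.shape[:2]
    cx, cy = center
    # Pixel grid in float32, as required by cv2.remap.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    dist = np.sqrt(dx * dx + dy * dy)
    # Falloff is 1 at the center and 0 at the radius, so the warp blends into the rest of the face.
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)
    scale = 1.0 - strength * falloff   # sample closer to the center -> region appears larger
    map_x = cx + dx * scale
    map_y = cy + dy * scale
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

frame = cv2.imread("face.jpg")                 # assumed input frame
assert frame is not None, "face.jpg is an assumed input image"
left_eye = (220, 180)                          # assumed key point from a face detector
shaped = enlarge_around_point(frame, left_eye, radius=60, strength=0.3)
cv2.imwrite("face_shaped.jpg", shaped)
```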
Makeup
Makeup effects adapt to changes in facial expressions and movements, which delivers a consistent makeup effect in videos.
The library of makeup types and materials is continually expanded to fit more use cases.
The following figure shows the effects of makeup.
Face play
Based on proprietary algorithms and technologies, Queen SDK supports fun facial effects such as pixelation and face masks. More effects will be added in the future.
The following figure shows the effects of face play.
Filters
The smart retouching feature provides various filters. These filters are rendered in real time. You can use these filters in various scenarios.
The filter library is continuously expanded, and filter effects are continuously improved. In addition, ApsaraVideo Live is planning to provide a platform that you can use to manage your filters.
The following figure shows the effect of a filter.
Stickers
The smart retouching feature provides stickers that adapt to changes in facial expressions and movements.
The sticker library is continuously expanded to include more animated and static stickers.
The following figures show the effects of stickers.
Chroma key
Chroma key is implemented based on color gamut detection and an image segmentation algorithm. Chroma key supports blue screens and green screens of different textures and color gamuts, which helps you better control background segmentation and color spill.
Chroma key can accurately extract still or moving subjects under all kinds of lighting conditions and angles.
The following figure shows the effect of chroma key.
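The following minimal Python sketch (OpenCV and NumPy, independent of Queen SDK) shows the basic idea of keying out a green or blue screen by color range and compositing a new background. The hue thresholds are rough assumptions, and a production keyer would also suppress color spill on the subject.

```python
import cv2
import numpy as np

def chroma_key(frame_bgr, background_bgr, key="green"):
    """Minimal chroma-key sketch: key out a green or blue screen and composite a new background.

    Conceptual illustration only; Queen SDK's color gamut detection and segmentation are proprietary.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough hue ranges for green and blue screens (OpenCV hue is 0-179).
    lower, upper = ((35, 60, 60), (85, 255, 255)) if key == "green" else ((90, 60, 60), (130, 255, 255))
    screen_mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    # Invert the screen mask to keep the subject, and soften the edge to reduce hard fringes.
    alpha = cv2.GaussianBlur(255 - screen_mask, (7, 7), 0).astype(np.float32) / 255.0
    alpha = alpha[..., None]   # broadcast the matte over the 3 color channels
    background = cv2.resize(background_bgr, (frame_bgr.shape[1], frame_bgr.shape[0]))
    out = frame_bgr.astype(np.float32) * alpha + background.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)
```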
Background replacement
The background replacement effect can accurately extract still or moving subjects under all kinds of lighting conditions and angles, even if the background is complex or noisy.
The following figure shows the effect of background replacement.
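Assuming a person segmentation matte is already available from some segmentation model (Queen SDK produces its own segmentation internally), a background replacement composite can be sketched as follows; the `person_mask` input is an assumption of this illustration.

```python
import cv2
import numpy as np

def replace_background(frame_bgr, person_mask, new_background_bgr):
    """Composite the segmented subject onto a new background.

    `person_mask` is assumed to be a single-channel matte (0-255) from a person
    segmentation model; this is a conceptual sketch, not the Queen SDK pipeline.
    """
    h, w = frame_bgr.shape[:2]
    background = cv2.resize(new_background_bgr, (w, h))
    # Feather the matte slightly so hair and edges blend instead of cutting hard.
    alpha = cv2.GaussianBlur(person_mask, (9, 9), 0).astype(np.float32) / 255.0
    alpha = alpha[..., None]
    out = frame_bgr.astype(np.float32) * alpha + background.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)
```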
Gesture recognition
Gesture recognition uses proprietary algorithms to accurately detect 21 key points on each hand in real time and identify 25 common gestures. Both left-hand and right-hand recognition are supported, and up to eight hands can be recognized at the same time.
The following figure shows the effect of gesture recognition.
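The following Python sketch illustrates the kind of data involved: a hand with 21 key points in the common wrist-plus-four-points-per-finger topology, and a toy classifier that maps extended fingers to a few gestures. The indices and rules are illustrative and are not the Queen SDK gesture set.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Hand:
    """A detected hand: handedness plus 21 normalized (x, y) key points. Illustrative only."""
    handedness: str                        # "left" or "right"
    keypoints: List[Tuple[float, float]]   # 21 (x, y) pairs: wrist, then 4 points per finger

FINGER_TIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}
FINGER_PIPS = {"thumb": 3, "index": 6, "middle": 10, "ring": 14, "pinky": 18}

def extended_fingers(hand: Hand) -> List[str]:
    """Rough check: a finger counts as extended if its tip is above its middle joint.

    Assumes an upright hand with y increasing downward; real classifiers are more robust.
    """
    names = []
    for name in ("index", "middle", "ring", "pinky"):
        tip_y = hand.keypoints[FINGER_TIPS[name]][1]
        pip_y = hand.keypoints[FINGER_PIPS[name]][1]
        if tip_y < pip_y:
            names.append(name)
    return names

def classify(hand: Hand) -> str:
    """Map extended fingers to a few illustrative gestures (Queen SDK recognizes 25)."""
    fingers = extended_fingers(hand)
    if not fingers:
        return "fist"
    if fingers == ["index", "middle"]:
        return "victory"
    if len(fingers) == 4:
        return "open palm"
    return "other"
```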
Movement detection
Based on proprietary algorithms, movement detection can accurately detect 18 key points on the human body in real time and identify 13 static postures, such as standing upright, raising hands, hand heart, arms akimbo, and the superman pose, as well as 9 movements, such as rope jumping, jumping jacks, squats, push-ups, and sit-ups. The counts of these postures and movements are fed back in real time.
The following figures show the effects of movement detection.
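As an illustration of posture-based repetition counting, the following sketch counts squats with a small state machine on the hip-knee-ankle angle. The thresholds and logic are assumptions for illustration, not the Queen SDK algorithm.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, e.g., hip-knee-ankle for a squat."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0]))
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

class SquatCounter:
    """Tiny state machine that counts squat repetitions from per-frame knee angles."""

    def __init__(self, down_angle=100.0, up_angle=160.0):
        self.down_angle = down_angle   # knee angle considered "at the bottom of the squat"
        self.up_angle = up_angle       # knee angle considered "standing up again"
        self.is_down = False
        self.count = 0

    def update(self, hip, knee, ankle):
        # Call once per frame with (x, y) key points from the 18-point body model.
        angle = joint_angle(hip, knee, ankle)
        if angle < self.down_angle:
            self.is_down = True        # reached the bottom of the squat
        elif angle > self.up_angle and self.is_down:
            self.is_down = False       # back up: one full repetition
            self.count += 1
        return self.count
```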
Auto face retouching
Auto face retouching takes effect after you enable intelligent dynamic optimization. This capability uses proprietary algorithms to adapt to changes in the environment and lighting and dynamically adjust the retouching effects. Auto face retouching provides intelligent adjustment, real-time adaptation, and rapid response to retouching requirements in various environments.
The following figure shows the effect of auto face retouching.
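The following sketch only illustrates the general idea of adapting an effect level to scene brightness; it is not how intelligent dynamic optimization works internally, and the parameter names are assumptions.

```python
import cv2
import numpy as np

def adaptive_whitening_level(frame_bgr, base_level=0.3, dark_boost=0.3):
    """Adapt a hypothetical whitening level to frame brightness.

    Darker frames get a somewhat stronger level; bright frames stay near the base level.
    Conceptual sketch only, not the Queen SDK algorithm.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    brightness = float(np.mean(gray)) / 255.0        # 0 = black frame, 1 = white frame
    level = base_level + dark_boost * (1.0 - brightness)
    return min(max(level, 0.0), 1.0)
```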
Body shaping
Body shaping is performed by using proprietary algorithms. You can slim bodies, legs, arms, necks, and waists, lengthen legs, resize heads, and enlarge breasts based on different body shapes. The natural body shaping effects keep bodies well proportioned. Body shaping is suitable for various scenarios such as live streaming and panoramic photography.
The following figure shows the effect of body shaping.
Hairdressing
Queen SDK can accurately recognize hair in real time by using proprietary algorithms. You can change the hair color to achieve a hair dye effect. This capability can distinguish different hairstyles in various postures and background environments, and you can specify the hair color based on your business requirements.
The following figure shows the effects of hairdressing.
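Assuming a hair segmentation matte is available (Queen SDK segments hair internally), a basic dye effect can be sketched as a per-pixel blend toward a target color; the color value and blend strength below are illustrative.

```python
import numpy as np

def recolor_hair(frame_bgr, hair_mask, target_bgr=(30, 30, 160), strength=0.5):
    """Blend a target color into the hair region given a hair segmentation matte (0-255).

    Conceptual sketch of hair dyeing on top of a segmentation result; a production
    approach would typically preserve luminance and blend hue/saturation only.
    """
    alpha = (hair_mask.astype(np.float32) / 255.0)[..., None] * strength
    target = np.empty_like(frame_bgr, dtype=np.float32)
    target[:] = target_bgr                   # broadcast the BGR dye color over the frame
    out = frame_bgr.astype(np.float32) * (1.0 - alpha) + target * alpha
    return out.astype(np.uint8)
```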
AR writing
Queen SDK can track the trajectory of fingertip key points by using gesture recognition algorithms. The content that you write is rendered in the video based on when your hand starts and stops writing, which realizes an AR writing effect for videos. AR writing can be used in various scenarios such as live streaming, teaching, and online interaction.
The following figure shows the effect of AR writing.
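Conceptually, AR writing accumulates fingertip positions into strokes while a "writing" gesture is held and draws those strokes onto each frame. The following sketch assumes the fingertip position and pen-down state are provided by a gesture tracker; it is not the Queen SDK API.

```python
import cv2
import numpy as np

class ARWriter:
    """Accumulate fingertip positions into strokes and draw them on each video frame."""

    def __init__(self):
        self.strokes = []     # finished strokes, each a list of (x, y) points
        self.current = []     # stroke currently being written

    def update(self, fingertip_xy, pen_down):
        # `pen_down` would come from a gesture (e.g., index finger extended = writing).
        if pen_down:
            self.current.append(fingertip_xy)
        elif self.current:
            self.strokes.append(self.current)   # hand stopped writing: close the stroke
            self.current = []

    def render(self, frame_bgr, color=(0, 255, 255), thickness=3):
        # Draw finished strokes plus the stroke in progress onto the frame.
        for stroke in self.strokes + ([self.current] if self.current else []):
            if len(stroke) > 1:
                pts = np.array(stroke, dtype=np.int32).reshape(-1, 1, 2)
                cv2.polylines(frame_bgr, [pts], isClosed=False, color=color, thickness=thickness)
        return frame_bgr
```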
Animojis
Based on proprietary face recognition and expression recognition algorithms, Queen SDK can capture changes in the angle and expression of a human face and drive a specific Animoji to make the corresponding changes, which creates an entertaining effect. Fifty-one expressions, such as single-eye winks, double-eye winks, mouth movements, eye movements, and eyebrow movements, are supported.
The following figure shows the effects of Animojis.
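Conceptually, the face tracker outputs per-frame expression coefficients that drive the Animoji rig. The sketch below uses ARKit-style blend shape names purely for illustration and a hypothetical `avatar.set_blend_shape` rig method; neither is part of Queen SDK.

```python
from typing import Dict

def apply_expression(avatar, coefficients: Dict[str, float]):
    """Drive an avatar rig from per-frame expression coefficients produced by face tracking.

    `avatar.set_blend_shape(name, weight)` is a hypothetical rig API used only for this sketch.
    """
    for name, weight in coefficients.items():
        # Clamp each coefficient to [0, 1] before applying it to the rig.
        avatar.set_blend_shape(name, min(max(weight, 0.0), 1.0))

# Example per-frame tracker output: a left-eye wink with a slightly open mouth.
frame_coefficients = {"eyeBlinkLeft": 0.9, "eyeBlinkRight": 0.05, "jawOpen": 0.3, "browInnerUp": 0.1}
```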
High-fidelity background
High-fidelity background reduces the impact of retouching on the background tone and texture in common scenarios. By default, high-fidelity background is enabled for skin whitening, rosy cheeks, skin smoothing, image sharpening, and auto face retouching. You do not need to configure related parameters.
The following figure shows the effect of high-fidelity background.