@geenee/bodyprocessors

Pose Tracking

PoseProcessor estimates 3D pose keypoints: it locates the person/pose region of interest (ROI) and predicts the pose keypoints, providing smooth, stable, and accurate pose estimation.

2D pixel pose keypoints are points in the screen coordinate space. X and Y coordinates are normalized screen coordinates (scaled by the width and height of the input image), while the Z coordinate is depth within the orthographic projection space; it has the same scale as the X coordinate (normalized by image width), and 0 is at the center of the hips. These points can be used for 2D pose overlays or when using an orthographic projection. Estimation of the Z coordinate is not very accurate, and we recommend using only X and Y for 2D effects.

3D metric points are points within the 3D space of a perspective camera located at the space origin and pointed in the negative direction of the Z-axis. These points can be used for 3D avatar overlays or virtual try-on. Rigged and skinned models can be rendered on top of the pose by aligning skeleton/armature joints with the 3D keypoints. 3D and 2D points are perfectly aligned: projections of the 3D points coincide with the 2D pixel coordinates within the perspective camera.

The pose processor may also estimate an accurate and stable segmentation mask. A segmentation mask is a monochrome image where every pixel has a value in the range [0..1] denoting the probability of it being foreground. The mask is provided for a normalized rect region of the original image; it has a fixed size in pixels and should be scaled to image space. Optional temporal smoothing of the segmentation mask may be enabled. The estimated mask may be used for background substitution, effects like bokeh or focal blur, advanced occluder materials utilizing a mask, regional patchers, and other foreground/background shader effects.
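As a minimal sketch of this mapping (the keypoint shape below is an assumption for illustration, not the SDK's exact type), normalized 2D keypoints scale to pixel coordinates like this:

```typescript
// Hypothetical keypoint shape: x/y normalized by image width/height,
// z sharing the x scale (normalized by width), as described above
interface Keypoint2D { x: number; y: number; z: number; }

// Map a normalized keypoint to pixel coordinates of the input image;
// z stays in width-normalized units, only x/y are used for 2D effects
function toPixel(p: Keypoint2D, width: number, height: number) {
  return { x: p.x * width, y: p.y * height, z: p.z * width };
}

// toPixel({ x: 0.5, y: 0.5, z: 0 }, 1920, 1080) -> { x: 960, y: 540, z: 0 }
```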

PoseProcessor emits PoseResult storing results of pose tracking. They’re passed to @geenee/armature!Renderer. PoseEngine is a straightforward specialization of @geenee/armature!Engine for PoseProcessor.

A simple application utilizing PoseProcessor. In this application we add runtime switching between the front and rear cameras. Note how we disable the button to prevent a concurrent pipeline state change.

import { PoseEngine } from "@geenee/bodyprocessors";
import { CustomRenderer } from "./customrenderer";
import "./index.css";

let rear = false;
const engine = new PoseEngine();
const token = location.hostname === "localhost" ?
    "localhost_sdk_token" : "prod.url_sdk_token";

async function main() {
    const container = document.getElementById("root");
    if (!container)
        return;
    const renderer = new CustomRenderer(
        container, "crop", !rear, "model.glb");
    // Switch between front and rear cameras on click,
    // disabling the button while the pipeline restarts
    const cameraSwitch = document.getElementById(
        "camera-switch") as HTMLButtonElement | null;
    if (cameraSwitch) {
        cameraSwitch.onclick = async () => {
            cameraSwitch.disabled = true;
            rear = !rear;
            await engine.setup({ size: { width: 1920, height: 1080 }, rear });
            await engine.start();
            renderer.setMirror(!rear);
            cameraSwitch.disabled = false;
        };
    }
    await Promise.all([
        engine.addRenderer(renderer),
        engine.init({ token: token })]);
    await engine.setup({ size: { width: 1920, height: 1080 }, rear });
    await engine.start();
}
main();

Documentation of the following packages provides details on how to build more extensive applications with custom logic:

Face and Head Tracking

FaceProcessor estimates 3D face landmarks: it detects and tracks the face mesh, providing smooth, stable, and accurate results. The processor evaluates 2D pixel and 3D metric points as well as the face pose (translation + rotation + scale) aligning the reference face model.

2D pixel face landmarks are points in the screen coordinate space. X and Y coordinates are normalized screen coordinates (scaled by the width and height of the input image), while the Z coordinate is depth within the orthographic projection space. These points can be used for 2D face filters or when using an orthographic projection.

3D metric points are points within the 3D space of a perspective camera located at the space origin and pointed in the negative direction of the Z-axis. These points can be used to apply a textured face mask. 3D and 2D points are perfectly aligned: projections of the 3D points coincide with the 2D pixel coordinates within the perspective camera.

Face pose is a transformation matrix (translation + rotation + scale) aligning the reference face 3D model with the measured 3D face mesh. Applying this transformation, one can align a 3D object with the detected face. If the model's initial position is aligned with the reference face, the relative transformation will be preserved.

The face processor may also estimate an accurate and stable segmentation mask. A segmentation mask is a monochrome image where every pixel has a value in the range [0..1] denoting the probability of it being foreground. The mask is provided for a normalized rect region of the original image; it has a fixed size in pixels and should be scaled to image space. Optional temporal smoothing of the segmentation mask may be enabled. The estimated mask may be used for background substitution, effects like bokeh or focal blur, advanced occluder materials utilizing a mask, regional patchers, and other foreground/background shader effects.
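For illustration, applying the face pose matrix to a point might look like the sketch below; the flat column-major 4x4 layout is an assumption (the common WebGL convention), not a documented SDK detail:

```typescript
// Apply a 4x4 pose matrix (column-major layout - an assumption here) to
// a 3D point given in reference-face space; this aligns an object
// modeled relative to the reference face with the detected face
function applyPose(
  m: number[], p: [number, number, number]
): [number, number, number] {
  const [x, y, z] = p;
  // homogeneous coordinate; 1 for a pure affine transform
  const w = m[3] * x + m[7] * y + m[11] * z + m[15];
  return [
    (m[0] * x + m[4] * y + m[8] * z + m[12]) / w,
    (m[1] * x + m[5] * y + m[9] * z + m[13]) / w,
    (m[2] * x + m[6] * y + m[10] * z + m[14]) / w,
  ];
}
```

In practice a 3D engine does this for you: assigning the pose matrix to a scene node's transform places all of its children on the face.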

FaceProcessor emits FaceResult storing results of face tracking that are passed to @geenee/armature!Renderer. When you create and set up the application's @geenee/armature!Engine for FaceProcessor, you need to provide FaceParams to enable or disable optional results of face tracking. Disabling an unused estimation can improve application performance. FaceEngine is a straightforward specialization of @geenee/armature!Engine for FaceProcessor.

A simple application utilizing FaceProcessor. In this application we use the Snapshoter helper to take a picture on a click event. Additionally, we add an occluder using a plugin.

import { FaceEngine } from "@geenee/bodyprocessors";
import { Snapshoter } from "@geenee/armature";
import { CustomRenderer } from "./customrenderer";
import "./index.css";

const engine = new FaceEngine();
const token = location.hostname === "localhost" ?
    "localhost_sdk_token" : "prod.url_sdk_token";

async function main() {
    const container = document.getElementById("root");
    if (!container)
        return;
    const renderer = new CustomRenderer(container);
    const snapshoter = new Snapshoter(renderer);
    // Take a snapshot on click and download it as a png file
    container.onclick = async () => {
        const image = await snapshoter.snapshot();
        if (!image)
            return;
        const canvas = document.createElement("canvas");
        const context = canvas.getContext("2d");
        if (!context)
            return;
        canvas.width = image.width;
        canvas.height = image.height;
        context.putImageData(image, 0, 0);
        const link = document.createElement("a");
        link.hidden = true;
        link.href = canvas.toDataURL();
        link.download = "capture.png";
        link.click();
        link.remove();
    };
    await Promise.all([
        engine.addRenderer(renderer),
        engine.init({ token: token, transform: true })]);
    await engine.setup({ size: { width: 1920, height: 1080 } });
    await engine.start();
    document.getElementById("dots")?.remove();
}
main();

Documentation of the following packages provides details on how to build more extensive applications with custom logic:

Hand and Wrist Tracking

HandProcessor estimates 2D and 3D hand keypoints: it locates the hand region of interest (ROI) and predicts the pose keypoints, providing smooth, stable, and accurate pose estimation for the hand.

2D pixel hand keypoints are points in the screen coordinate space. X and Y coordinates are normalized screen coordinates (scaled by the width and height of the input image), while the Z coordinate is depth within the orthographic projection space; it has the same scale as the X coordinate (normalized by image width). 2D points can be used for 2D overlays, motion analysis, or when using an orthographic camera.

3D metric points are points within the 3D space of a perspective camera located at the space origin and pointed in the negative direction of the Z-axis. These points can be used for 3D model overlays or virtual try-on. Rigged and skinned models can be rendered on top of the pose by aligning skeleton/armature joints with the 3D keypoints. 3D and 2D points are perfectly aligned: projections of the 3D points coincide with the 2D pixel coordinates within the perspective camera.
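The stated alignment of 3D and 2D points follows the usual pinhole camera model. A sketch, where the focal length and principal point are illustrative assumptions rather than SDK values:

```typescript
// Sketch of the perspective relation described above: a pinhole camera at
// the origin looking along -Z. The focal length (f, in pixels) and the
// principal point (cx, cy) are illustrative assumptions, not SDK values.
function project(
  [x, y, z]: [number, number, number],
  f: number, cx: number, cy: number
): [number, number] {
  // points in front of the camera have negative z, so -z is positive depth;
  // image y grows downward while camera y grows upward, hence the minus
  return [cx + (f * x) / -z, cy - (f * y) / -z];
}

// project([0, 0, -1], 1000, 960, 540) -> [960, 540]
```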

Additionally, the hand processor detects the 2D position and direction of the wrist. Wrist detection provides three lines in the screen coordinate space. The middle line defines the 2D wrist base/center point and the unit direction vector of the wrist. Two more lines define the wrist edges by 2D screen points at the ends of the transversal section through the base point, with their associated direction vectors. Wrist detection enables virtual try-on of accessories like watches and bands.
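As a sketch of how these wrist lines could drive a 2D watch overlay (the Line2D shape and field names here are assumptions, not the SDK's types):

```typescript
// Hypothetical shape for the wrist lines described above: each line is
// a 2D base point plus a unit direction vector in screen space
interface Line2D { point: [number, number]; dir: [number, number]; }

// Derive placement of a 2D watch overlay: position at the wrist center,
// width from the distance between the two edge points, and rotation
// from the direction vector of the middle line
function wristPlacement(center: Line2D, edgeA: Line2D, edgeB: Line2D) {
  const dx = edgeB.point[0] - edgeA.point[0];
  const dy = edgeB.point[1] - edgeA.point[1];
  return {
    position: center.point,
    width: Math.hypot(dx, dy),
    angle: Math.atan2(center.dir[1], center.dir[0]),
  };
}
```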

HandProcessor emits HandResult storing results of hand tracking. They’re passed to @geenee/armature!Renderer. HandEngine is a straightforward specialization of @geenee/armature!Engine for HandProcessor.

A simple application utilizing HandProcessor. In this application we add runtime switching between the front and rear cameras. Note how we disable the button to prevent a concurrent pipeline state change.

import { HandEngine } from "@geenee/bodyprocessors";
import { CustomRenderer } from "./customrenderer";
import "./index.css";

let rear = false;
const engine = new HandEngine();
const token = location.hostname === "localhost" ?
    "localhost_sdk_token" : "prod.url_sdk_token";

async function main() {
    const container = document.getElementById("root");
    if (!container)
        return;
    const renderer = new CustomRenderer(
        container, "crop", !rear, "model.glb");
    // Switch between front and rear cameras on click,
    // disabling the button while the pipeline restarts
    const cameraSwitch = document.getElementById(
        "camera-switch") as HTMLButtonElement | null;
    if (cameraSwitch) {
        cameraSwitch.onclick = async () => {
            cameraSwitch.disabled = true;
            rear = !rear;
            await engine.setup({ size: { width: 1920, height: 1080 }, rear });
            await engine.start();
            renderer.setMirror(!rear);
            cameraSwitch.disabled = false;
        };
    }
    await Promise.all([
        engine.addRenderer(renderer),
        engine.init({ token: token })]);
    await engine.setup({ size: { width: 1920, height: 1080 }, rear });
    await engine.start();
}
main();

Documentation of the following packages provides details on how to build more extensive applications with custom logic:

Mask Tracking

MaskProcessor estimates accurate human segmentation masks. A segmentation mask is a monochrome image where every pixel has a value in the range [0..1] denoting the probability of it being foreground. The mask is provided for a normalized rect region of the original image; it has a fixed size in pixels and should be scaled to image space. Optional temporal smoothing of the segmentation mask may be enabled. The estimated mask may be used for background substitution, effects like bokeh or focal blur, advanced occluder materials utilizing a mask, regional patchers, and other foreground/background shader effects.
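As a sketch of scaling the mask's normalized rect to image space (the rect shape is a hypothetical example, not the SDK's exact type):

```typescript
// Hypothetical normalized rect: origin and size in [0..1] units of the
// original image, as described above
interface NormRect { x: number; y: number; width: number; height: number; }

// Compute the pixel-space rectangle where the fixed-size mask should be
// drawn, e.g. as the destination arguments of a canvas drawImage() call
function maskViewport(roi: NormRect, imgW: number, imgH: number) {
  return {
    x: roi.x * imgW,
    y: roi.y * imgH,
    width: roi.width * imgW,
    height: roi.height * imgH,
  };
}
```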

MaskProcessor emits MaskResult storing results of segmentation. They’re passed to @geenee/armature!Renderer. MaskEngine is a straightforward specialization of @geenee/armature!Engine for MaskProcessor.

A simple application utilizing MaskProcessor.

import { MaskEngine } from "@geenee/bodyprocessors";
import { CustomRenderer } from "./customrenderer";
import "./index.css";

let rear = false;
const engine = new MaskEngine();
const token = location.hostname === "localhost" ?
    "localhost_sdk_token" : "prod.url_sdk_token";

async function main() {
    const container = document.getElementById("root");
    if (!container)
        return;
    const renderer = new CustomRenderer(
        container, "crop", !rear);
    await Promise.all([
        engine.addRenderer(renderer),
        engine.init({ token: token })]);
    await engine.setup({ size: { width: 1920, height: 1080 }, rear });
    await engine.start();
}
main();
