
Module: @geenee/bodyprocessors

Pose Tracking

PoseProcessor estimates 3D pose keypoints: it locates the person/pose region of interest (ROI) and predicts the pose keypoints, providing smooth, stable, and accurate pose estimation. The processor evaluates two sets of points.

2D pixel pose keypoints - points in the screen coordinate space. The X and Y coordinates are normalized screen coordinates (scaled by the width and height of the input image), while the Z coordinate is depth within the orthographic projection space; it has the same scale as the X coordinate (normalized by image width), and 0 is at the center of the hips. These points can be used for 2D pose overlays or when using an orthographic projection. Estimation of the Z coordinate is not very accurate, so we recommend using only X and Y for 2D effects.

3D metric points - points within the 3D space of a perspective camera located at the space origin and pointed in the negative direction of the Z-axis. These points can be used for 3D avatar overlays or virtual try-on: rigged and skinned models can be rendered on top of the pose by aligning skeleton/armature joints with the 3D keypoints.

The 3D and 2D points are perfectly aligned: projections of the 3D points coincide with the 2D pixel coordinates within the perspective camera.
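As an illustration of these conventions, the sketch below converts a normalized 2D keypoint into pixel coordinates for a simple canvas overlay. The PosePoint shape used here (a plain { x, y, z } object) and the toPixels/drawMarker helpers are assumptions made for illustration, not part of the SDK; see the PosePoints type alias below for how keypoints are keyed.

// Assumed keypoint shape: normalized screen coordinates as described above.
interface Point2D { x: number; y: number; z: number }

// Convert a normalized 2D keypoint into pixel coordinates of the input frame.
function toPixels(point: Point2D, width: number, height: number) {
    return {
        x: point.x * width,   // X is normalized by the image width
        y: point.y * height,  // Y is normalized by the image height
    };
}

// Example: draw a marker over a keypoint on a 2D overlay canvas.
function drawMarker(ctx: CanvasRenderingContext2D, point: Point2D) {
    const { x, y } = toPixels(point, ctx.canvas.width, ctx.canvas.height);
    ctx.beginPath();
    ctx.arc(x, y, 4, 0, 2 * Math.PI);
    ctx.fill();
}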

PoseProcessor emits a PoseResult storing the results of pose tracking, which is passed to the Renderer. PoseEngine is a straightforward specialization of Engine for PoseProcessor.

A simple application utilizing PoseProcessor. In this application we add runtime switching between the front and rear cameras. Note how we disable the button to prevent a concurrent pipeline state change. The CustomRenderer imported here is application code; a rough sketch of it follows the example.

import { PoseEngine } from "@geenee/bodyprocessors";
import { CustomRenderer } from "./customrenderer";
import "./index.css";

let rear = false;
const engine = new PoseEngine();
const token = location.hostname === "localhost" ?
    "localhost_sdk_token" : "prod.url_sdk_token";

async function main() {
    const container = document.getElementById("root");
    if (!container)
        return;
    const renderer = new CustomRenderer(
        container, "crop", !rear, "model.glb");

    // Switch between the front and rear cameras at runtime.
    // The button is disabled to prevent concurrent pipeline state changes.
    const cameraSwitch = document.getElementById(
        "camera-switch") as HTMLButtonElement | null;
    if (cameraSwitch) {
        cameraSwitch.onclick = async () => {
            cameraSwitch.disabled = true;
            rear = !rear;
            await engine.setup({ size: { width: 1920, height: 1080 }, rear });
            await engine.start();
            renderer.setMirror(!rear);
            cameraSwitch.disabled = false;
        };
    }

    await Promise.all([
        engine.addRenderer(renderer),
        engine.init({ token: token })]);
    await engine.setup({ size: { width: 1920, height: 1080 }, rear });
    await engine.start();
}
main();
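The CustomRenderer imported above is application code rather than part of the SDK. A rough sketch of what it might look like is given below; it assumes that custom renderers extend Renderer from @geenee/armature and receive each PoseResult through an update hook. The base-class constructor arguments and the hook signature are assumptions inferred from the example, not the verified package API.

import { Renderer } from "@geenee/armature";
import { PoseResult } from "@geenee/bodyprocessors";

// Sketch only: the base-class API assumed here may differ from the
// actual @geenee/armature Renderer - consult the package documentation.
export class CustomRenderer extends Renderer {
    constructor(
        container: HTMLElement,
        mode?: "fit" | "crop",
        mirror?: boolean,
        protected modelUrl?: string
    ) {
        super(container, mode, mirror);
        // Load the rigged/skinned model (modelUrl) with your 3D engine here.
    }

    // Assumed per-frame hook: align the model's armature joints
    // with the 3D metric keypoints of the pose result.
    async update(result: PoseResult, stream: HTMLCanvasElement) {
        await super.update(result, stream);
        // Map result keypoints (see PosePoints) onto the model's skeleton.
    }
}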

Documentation of the following packages provides details on how to build more extensive applications with custom logic:

Face Tracking

FaceProcessor estimates 3D face landmarks: it detects and tracks the face mesh, providing smooth, stable, and accurate results. The processor evaluates 2D pixel and 3D metric points as well as the face pose (translation + rotation + scale) aligning a reference face model.

2D pixel face landmarks - points in the screen coordinate space. The X and Y coordinates are normalized screen coordinates (scaled by the width and height of the input image), while the Z coordinate is depth within the orthographic projection space. These points can be used for 2D face filters or when using an orthographic projection.

3D metric points - points within the 3D space of a perspective camera located at the space origin and pointed in the negative direction of the Z-axis. These points can be used to apply a textured face mask. The 3D and 2D points are perfectly aligned: projections of the 3D points coincide with the 2D pixel coordinates within the perspective camera.

Face pose - a transformation matrix (translation + rotation + scale) aligning the reference face 3D model with the measured 3D face mesh. Applying this transformation, one can align a 3D object with the detected face. If the object's initial position is aligned with the reference face, the relative transformation will be preserved.
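To make the face pose concrete, the minimal sketch below applies a 4x4 pose matrix to a point given in the reference model's coordinates, producing the corresponding point in the perspective camera's space. The column-major, flat-array matrix layout is an assumption made for illustration; check the FaceResult documentation for how the transform is actually exposed.

// A 4x4 transform stored as 16 numbers in column-major order (assumed layout).
type Mat4 = Float32Array | number[];

// Apply the face pose (translation + rotation + scale) to a point expressed
// in reference-face coordinates, yielding the point in camera space.
function applyPose(m: Mat4, p: [number, number, number]): [number, number, number] {
    const [x, y, z] = p;
    return [
        m[0] * x + m[4] * y + m[8]  * z + m[12],
        m[1] * x + m[5] * y + m[9]  * z + m[13],
        m[2] * x + m[6] * y + m[10] * z + m[14],
    ];
}

An object modeled relative to the reference face (for example, glasses resting on the nose bridge) keeps that relative placement once the same pose transform is applied to it.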

FaceProcessor emits a FaceResult storing the results of face tracking, which is passed to the Renderer. When you create and set up the application Engine for FaceProcessor, you need to provide FaceParams to enable or disable optional results of face tracking. Disabling an unused estimation can improve application performance. FaceEngine is a straightforward specialization of Engine for FaceProcessor.
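As a rough illustration of the enable/disable pattern only, the snippet below passes face parameters during setup. Both the placement inside setup() and the mesh flag are hypothetical placeholders rather than documented fields; consult FaceParams for the actual options.

// Hypothetical FaceParams usage: "mesh" is an invented placeholder flag,
// not a documented field - check FaceParams for the real options.
await engine.setup({
    size: { width: 1920, height: 1080 },
    mesh: false, // e.g. skip an unused estimation to improve performance
});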

A simple application utilizing FaceProcessor. In this application we use the Snapshoter helper to take a picture on a click event. Additionally, we add an occluder using a plugin.

import { FaceEngine } from "@geenee/bodyprocessors";
import { Snapshoter } from "@geenee/armature";
import { CustomRenderer } from "./customrenderer";
import "./index.css";

const engine = new FaceEngine();
const token = location.hostname === "localhost" ?
    "localhost_sdk_token" : "prod.url_sdk_token";

async function main() {
    const container = document.getElementById("root");
    if (!container)
        return;
    const renderer = new CustomRenderer(container);

    // Take a snapshot on click and download it as a PNG file.
    const snapshoter = new Snapshoter(renderer);
    container.onclick = async () => {
        const image = await snapshoter.snapshot();
        if (!image)
            return;
        // Draw the captured ImageData on a canvas to encode it as PNG.
        const canvas = document.createElement("canvas");
        const context = canvas.getContext("2d");
        if (!context)
            return;
        canvas.width = image.width;
        canvas.height = image.height;
        context.putImageData(image, 0, 0);
        const url = canvas.toDataURL();
        // Trigger the download via a temporary link element.
        const link = document.createElement("a");
        link.hidden = true;
        link.href = url;
        link.download = "capture.png";
        link.click();
        link.remove();
    };

    await Promise.all([
        engine.addRenderer(renderer),
        engine.init({ token: token, transform: true })]);
    await engine.setup({ size: { width: 1920, height: 1080 } });
    await engine.start();
    // Remove the loading indicator once the engine is running.
    document.getElementById("dots")?.remove();
}
main();

Documentation of the following packages provides details on how to build more extensive applications with custom logic:


Type Aliases

PosePoints

Ƭ PosePoints: { [key in PointName]: PosePoint }