
Module: @geenee/bodyrenderers-babylon

Renderer

@geenee/armature!Renderer is the core visualization and logic part of any application. It's attached to the @geenee/armature!Engine. Basically, renderers define two methods: load() and update(). The first initializes assets and prepares the scene (lighting, environment map). The second updates the scene according to the results of video processing; this is where all the logic happens. Renderers can be extended with plugins. A plugin performs a simple rendering task, for example adding an object that follows the head or rendering an avatar overlay.

This package provides a set of ready-made renderers and plugins to simplify the development of applications. They can be used as atomic building blocks, or as starting points whose class functionality you inherit and override / extend. By extending @geenee/armature!Renderer#load and @geenee/armature!Renderer#update of a Renderer, or a Plugin's @geenee/armature!ScenePlugin#load and @geenee/armature!ScenePlugin#update, you can add any custom logic, interactions, animations, post-processing, effects, gesture recognition, etc.
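A sketch of the extension pattern; StubPlugin below stands in for the package's base class (in a real app you would extend @geenee/armature!ScenePlugin or a ready-made plugin instead):

```typescript
// StubPlugin stands in for @geenee/armature's plugin base class;
// it is a simplification so this sketch is self-contained.
class StubPlugin {
  protected loaded = false;
  async load(): Promise<void> { this.loaded = true; }
  async update(result: unknown): Promise<void> { /* base rendering */ }
}

// Custom plugin: run extra per-frame logic, then delegate to the base
class RotatingPlugin extends StubPlugin {
  angle = 0;
  async update(result: unknown): Promise<void> {
    this.angle += 0.1; // any custom animation / interaction logic
    return super.update(result);
  }
}
```

The same shape applies to renderers: override load() for one-time setup and update() for per-frame logic, always calling super so the base behavior is preserved.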

The module utilizes the babylon.js rendering engine for visualization. Three.js renderers and plugins can be found in the @geenee/bodyrenderers-three! package.

Basics

A set of abstract classes that specialize generic renderers and plugins for @geenee/bodyprocessors!PoseProcessor and @geenee/bodyprocessors!FaceProcessor. These classes are used as base parents to simplify the API; they do not implement any logic or visualization.

Pose Tracking

PoseAlignPlugin

Universal plugin aligning a node's rig with the pose estimated by @geenee/bodyprocessors!PoseProcessor. It's the base of try-on, twin, and other plugins. You can use this class as a starting point and customize the alignment method or add features. Basically, PoseAlignPlugin evaluates positions and rotations of armature bones based on 3D pose keypoints, then applies these transforms to bones following the armature hierarchy. The plugin supports model rigs compatible with the Mixamo armature, e.g. any model from the Mixamo library or Ready Player Me avatars. This is a common armature/skeleton standard for human-like / anthropomorphic models supported by many game/render engines. The scene node must contain an armature among its children, and the armature's bones must follow the Mixamo / RPM naming convention. Models rigged and skinned manually or with the Mixamo tool can vary depending on the model's anthropomorphic topology. For example, animated characters can have disproportional body parts like a much bigger head or longer arms. In such cases PoseAlignPlugin can apply a number of fine-tuning adjustments to the basic alignment, improving model fitting or making it look more natural. PoseTuneParams describes the tuning options. As an example, turning off the adjustment of spine curvature gives better results in virtual garment try-on experiences, while for full-body avatar overlaying it can provide a more natural look. Depending on the use case and the model's topology you can tune different options and see what works better in practice. By default the plugin is fine-tuned for RPM avatars, so you can simply replace the person with the avatar model in the scene.
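As a quick sanity check before handing a model to the plugin, you can verify the armature's bone names. The helper below is hypothetical; the bone list reflects common Mixamo naming and may not be exhaustive:

```typescript
// Hypothetical check that an armature exposes Mixamo-style bone names.
// The list below is a common subset of Mixamo naming, for illustration.
const requiredBones = [
  "Hips", "Spine", "Spine1", "Spine2", "Neck", "Head",
  "LeftArm", "RightArm", "LeftForeArm", "RightForeArm",
  "LeftUpLeg", "RightUpLeg", "LeftLeg", "RightLeg",
] as const;

function missingBones(boneNames: string[]): string[] {
  // Mixamo exports often prefix bone names with "mixamorig:"
  const names = new Set(boneNames.map((n) => n.replace(/^mixamorig:?/, "")));
  return requiredBones.filter((b) => !names.has(b));
}
```

Running such a check against a loaded glTF's skeleton before instantiating the plugin makes naming-convention mistakes fail fast instead of producing a silently misaligned model.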

ClothAlignPlugin

Universal plugin aligning a node's rig with the pose estimated by @geenee/bodyprocessors!PoseProcessor. This is the best starting point for advanced virtual try-on of apparel. Basically, ClothAlignPlugin evaluates positions and rotations of armature bones based on 3D pose keypoints, then applies these transforms to bones following the armature hierarchy. To improve alignment accuracy, the detected 3D points and skeleton sizes are mutually adjusted to fit each other. The plugin supports rigs compatible with Clo3D and Marvelous Designer avatars, the most common rig standard in cloth/apparel modeling software. The controlled scene node must contain an armature among its children, and the armature's bones must follow the Clo3D naming convention and hierarchy. ClothAlignPlugin can apply a number of fine-tuning adjustments to the basic alignment, improving model fitting or making it look more natural.

PoseOutfitPlugin (Deprecated)

PoseOutfitPlugin is an extension of PoseAlignPlugin that allows you to mark body meshes of the avatar's node as occluders and optionally hide some child meshes (parts). It's a good starting point for virtual try-on applications. OutfitParams defines the available outfit options. You can download any Ready Player Me avatar whose outfit is similar to the final result, edit its outfit, and re-skin the model if necessary, then simply use this plugin to build a try-on app. Armature bones must follow the Mixamo / RPM naming convention.

Deprecated: Use OccluderMaterial or OccluderMaskPlugin directly to turn meshes of the node into occluders and hide them manually via setEnabled(); this is a more flexible approach.

OutfitSkirtPlugin

OutfitSkirtPlugin is an extension of PoseAlignPlugin that controls auxiliary skirt bones of the armature if present. Skirt bones are driven by the legs but have additional kinematic constraints to mimic the deformation of fabric more naturally. This technique provides higher-fidelity virtual try-on of apparel with a loose bottom skirt by controlling this part independently instead of making it stick tightly to the legs. Auxiliary skirt bones must be clones of the up-leg and leg bones, and the skirt part should be skinned against them instead of the legs.
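For intuition only, a toy constraint in the same spirit: the damping and clamp below are made up for illustration and are not the plugin's actual kinematics.

```typescript
// Toy skirt-bone constraint: follow the leg's swing only partially
// (damping) and never beyond a fabric limit (clamp). Both numbers
// are illustrative, not the plugin's real parameters.
function skirtAngle(legAngle: number, damping = 0.5, maxAngle = 0.6): number {
  const a = legAngle * damping;
  return Math.max(-maxAngle, Math.min(maxAngle, a));
}
```

The effect is that a fast or extreme leg motion moves the skirt less than the leg itself, which is the qualitative behavior the auxiliary bones are there to achieve.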

PoseTwinPlugin

A PoseAlignPlugin extension for digital twins mirroring the pose and residing beside the user. When rendering a twin, bones are not translated to align with keypoint coordinates; only relative rotations are preserved. After projecting the detected pose onto the twin, the twin's scene node can be further transformed relative to its initial position - the centers of the hips remain the same.
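A minimal sketch of this idea, with illustrative stand-in types (not the plugin's internal API): detected rotations are kept, bind-pose translations are restored for every bone except the hips.

```typescript
type Quat = [number, number, number, number];
type Vec3 = [number, number, number];
interface BoneTransform { rotation: Quat; translation: Vec3; }

// Keep detected rotations, restore the rig's bind translations so the
// twin's proportions stay intact; only the hips anchor it in space.
function twinTransforms(
  detected: Record<string, BoneTransform>,
  bindTranslations: Record<string, Vec3>,
): Record<string, BoneTransform> {
  const out: Record<string, BoneTransform> = {};
  for (const [bone, t] of Object.entries(detected)) {
    out[bone] = {
      rotation: t.rotation, // preserve relative rotation
      translation: bone === "hips" ? t.translation : bindTranslations[bone],
    };
  }
  return out;
}
```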

BodyPatchPlugin

The plugin patches (inpaints/erases) the foreground region of the image defined by the body segmentation mask from @geenee/bodyprocessors!PoseProcessor. It can be used in avatar virtual try-on to remove parts of a user's body that stick out (are not covered). Evaluation of the body segmentation mask must be enabled in the init() method by setting the @geenee/bodyprocessors!PoseParams#mask flag.

BodypartPatchPlugin

The plugin conditionally patches (inpaints/erases) foreground regions of the image defined by the body segmentation mask from @geenee/bodyprocessors!PoseProcessor. There are two types of regions: "patch" and "keep", both defined by corresponding sets of scene meshes. The plugin patches foreground/masked pixels that belong to "patch" regions but are not part of "keep" regions. This can be used in apparel virtual try-on to remove parts of the body that stick out of (are not covered by) the outfit. In this case, the "patch" region is defined by the outfit meshes and the "keep" region is the reference body model, which at the same time serves as an occluder. BodypartPatchPlugin is compatible with BabylonUniRenderer derivatives. Evaluation of the body segmentation mask must be enabled in @geenee/bodyprocessors!PoseProcessor#init by setting @geenee/bodyprocessors!PoseParams#mask to true.
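The "patch"/"keep" rule reduces to a per-pixel boolean expression. A minimal sketch - the real plugin evaluates this on the GPU; plain arrays stand in for mask textures here:

```typescript
// A foreground pixel is inpainted only when it is covered by the
// outfit ("patch") region and not by the reference body ("keep") region.
function patchMask(
  foreground: boolean[], // body segmentation mask
  patch: boolean[],      // outfit meshes region
  keep: boolean[],       // reference body / occluder region
): boolean[] {
  return foreground.map((fg, i) => fg && patch[i] && !keep[i]);
}
```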

Pose Tracking Example

  • Download pose tracking example for babylon.js
  • Get access tokens on your account page.
  • Replace placeholder in .npmrc file with your personal NPM token.
  • Run npm install to install all dependency packages.
  • In src/index.ts set your SDK access tokens (replace stubs).
  • Run npm run start or npm run start:https.
  • Open http(s)://localhost:3000 url in a browser.
  • That's it, your first pose tracking AR application is ready.

Preparing Models

Guides on preparing models for pose tracking:

Face Tracking

HeadTrackPlugin

The plugin attaches the provided scene node to the head. The pose of the node (translation + rotation + scale) is continuously updated according to the pose estimation by @geenee/bodyprocessors!FaceProcessor. Child nodes inherently include this transform. The node can be seen as a virtual placeholder for a real object. It's recommended to attach top-level nodes that don't include transforms relative to a parent; otherwise the head transform, which is a pose in the world frame, will be applied on top of them (treated as relative instead of absolute). Optionally, anisotropic fine-tuning of the scale can be applied, in which case the model additionally adapts to the shape of the face. If a face isn't detected by FaceProcessor, the plugin recursively hides the node.

Download reference face model: face.glb.

Simplified implementation:

async update(result: FaceResult, stream: HTMLCanvasElement) {
    if (!this.loaded)
        return;
    const { transform } = result;
    if (!transform) {
        this.node.setEnabled(false);
        return super.update(result, stream);
    }
    // Mesh transformation
    const translation = babylon.Vector3.FromArray(transform.translation);
    const uniformScale = new babylon.Vector3().setAll(transform.scale);
    const shapeScale = babylon.Vector3.FromArray(
        transform.shapeScale).scale(transform.scale);
    const rotation = babylon.Quaternion.FromArray(transform.rotation);
    // Align node with the face
    this.node.setEnabled(true);
    this.node.rotationQuaternion = rotation;
    this.node.position = translation;
    this.node.scaling = this.shapeScale ? shapeScale : uniformScale;
    // Render
    return super.update(result, stream);
}

FaceTrackPlugin

The plugin attaches the provided node to a face point. The pose of the node (translation + rotation + scale) is continuously updated according to the pose estimation by @geenee/bodyprocessors!FaceProcessor. Child nodes inherently include this transform. The node can be seen as a virtual placeholder for a real object. It's recommended to attach top-level nodes that don't include transforms relative to a parent; otherwise the head transform, which is a pose in the world frame, will be applied on top of them (treated as relative instead of absolute). Optionally, anisotropic fine-tuning of the scale can be applied, in which case the model additionally adapts to the shape of the face. If a face isn't detected by FaceProcessor, the plugin recursively hides the node.

Download reference face model: face.glb.

Simplified implementation:

async update(result: FaceResult, stream: HTMLCanvasElement) {
    if (!this.loaded)
        return;
    const { transform, metric } = result;
    if (!transform || !metric) {
        this.node.setEnabled(false);
        return super.update(result, stream);
    }
    // Mesh transformation
    const translation = babylon.Vector3.FromArray(metric[this.facePoint]);
    const uniformScale = new babylon.Vector3().setAll(transform.scale);
    const shapeScale = babylon.Vector3.FromArray(
        transform.shapeScale).scale(transform.scale);
    const rotation = babylon.Quaternion.FromArray(transform.rotation);
    // Align node with the face point
    this.node.setEnabled(true);
    this.node.rotationQuaternion = rotation;
    this.node.position = translation;
    this.node.scaling = this.shapeScale ? shapeScale : uniformScale;
    // Render
    return super.update(result, stream);
}

FaceMaskPlugin

Adds a Mesh object to the scene that reflects the detected face mesh. FaceMaskPlugin creates the Mesh and defines indices, uvs, and normals of vertices in load(), while vertex positions are updated in update() according to the current face tracking estimations. The plugin uses StandardMaterial with a diffuse texture provided as an image url.

Download face UV map: faceuv.png

Simplified implementation:

async load(scene?: babylon.Scene) {
    if (this.loaded || !scene)
        return;
    // Mask
    const material = new babylon.StandardMaterial(
        "MaskMaterial", scene);
    material.diffuseTexture = new babylon.Texture(
        this.url, scene, undefined, false);
    material.diffuseTexture.hasAlpha = true;
    material.useAlphaFromDiffuseTexture = true;
    material.sideOrientation =
        babylon.Material.CounterClockWiseSideOrientation;
    material.backFaceCulling = true;
    const data = new babylon.VertexData();
    data.positions = meshReference.flat();
    data.indices = meshTriangles.flat();
    data.uvs = meshUV.flat();
    data.normals = [];
    babylon.VertexData.ComputeNormals(
        data.positions, data.indices, data.normals);
    this.mask = new babylon.Mesh("Mask", scene);
    this.mask.material = material;
    this.mask.renderingGroupId = 1;
    data.applyToMesh(this.mask, true);
    return super.load();
}

async update(result: FaceResult, stream: HTMLCanvasElement) {
    if (!this.loaded)
        return;
    if (!this.mask)
        return super.update(result, stream);
    const { metric } = result;
    if (!metric) {
        this.mask.setEnabled(false);
        return super.update(result, stream);
    }
    // Update mesh coordinates
    this.mask.setEnabled(true);
    this.mask.updateVerticesData(
        babylon.VertexBuffer.PositionKind, metric.flat());
    // Render
    return super.update(result, stream);
}

Face Tracking Example

  • Download face tracking example for babylon.js
  • Get access tokens on your account page.
  • Replace placeholder in .npmrc file with your personal NPM token.
  • Run npm install to install all dependency packages.
  • In src/index.ts set your SDK access tokens (replace stubs).
  • Run npm run start or npm run start:https.
  • Open http(s)://localhost:3000 url in a browser.
  • That's it, your first face tracking AR application is ready.

Preparing Models

Guides on preparing models for face tracking:

Hand Tracking

WristTrackPlugin

The plugin attaches the provided scene node to the wrist. The pose of the node (translation + rotation + scale) is continuously updated according to the pose estimation by @geenee/bodyprocessors!HandProcessor. Child nodes inherently include this transform. The node can be seen as a virtual placeholder for a real object. It's recommended to attach top-level nodes that don't include transforms relative to a parent; otherwise the wrist transform, which is a pose in the world frame, will be applied on top of them (treated as relative instead of absolute). If a wrist/hand isn't detected, the plugin hides the node. One approach to accurately align meshes with the wrist pose when modeling a scene is to make them children of one node at the origin and set their relative transforms using the wrist base mesh as the reference, then instantiate WristTrackPlugin for this scene node. You can also apply relative transforms of the children of the wrist-attached parent node programmatically. It's useful to add an occluder model (the base mesh of a wrist) as a child of the node. Another possible but less scalable approach is to build all meshes relative to the origin and aligned with the base mesh of the wrist; in this case you can create a WristTrackPlugin for each mesh. This can be handy when parts are stored separately.
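The recommended scene structure (one parent node at the origin, children placed relative to the wrist base mesh) can be illustrated with stub nodes; StubNode and its position-only composition are simplifications for this sketch:

```typescript
type Vec3 = [number, number, number];

// Stub scene node: world position = parent world position + local offset.
// Real engines also compose rotation and scale; omitted for brevity.
class StubNode {
  constructor(public localOffset: Vec3, public parent?: StubNode) {}
  worldPosition(): Vec3 {
    const p: Vec3 = this.parent ? this.parent.worldPosition() : [0, 0, 0];
    return [
      p[0] + this.localOffset[0],
      p[1] + this.localOffset[1],
      p[2] + this.localOffset[2],
    ];
  }
}

// One root node tracks the wrist; a watch face is authored relative to
// the wrist base mesh and simply follows its parent.
const wristRoot = new StubNode([0, 0, 0]);
const watchFace = new StubNode([0, 0.02, 0], wristRoot);
// When the tracker updates the root, children follow automatically:
wristRoot.localOffset = [0.5, 1.0, -0.3];
```

This is why a single WristTrackPlugin on the parent node scales better than one plugin per mesh: authored offsets stay fixed while only the root transform changes per frame.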

Hand Tracking Example

  • Download hand tracking example for babylon.js
  • Get access tokens on your account page.
  • Replace placeholder in .npmrc file with your personal NPM token.
  • Run npm install to install all dependency packages.
  • In src/index.ts set your SDK access tokens (replace stubs).
  • Run npm run start or npm run start:https.
  • Open http(s)://localhost:3000 url in a browser.
  • That's it, your first hand tracking AR application is ready.

Occluders

Occluders are elements of a scene that are not rendered themselves but still participate in occlusion queries. Usually an occluder is a base mesh (average approximation) of a body representing its real counterpart in the scene. Occluders are used to mask virtual objects visible behind them (like geometries of a 3D scene behind the user's body).

OccluderMaterial

Applying OccluderMaterial to a mesh makes it an occluder.

Example of usage:

setOccluders(scene: babylon.Scene) {
    const body = scene.getMeshByName("Body");
    const head = scene.getMeshByName("Head");
    if (!body && !head)
        return;
    const material = new OccluderMaterial("Occluder", scene);
    if (body)
        body.material = material;
    if (head)
        head.material = material;
}

OccluderMaskPlugin

The plugin creates instances of OccluderMaskMaterial. The masked occluder material is a more advanced version that takes the detected body segmentation mask into account. It writes a depth value only for pixels covered by the mask, providing more realistic occlusion aligned with the body and reducing the effect of unnatural cuts in VTO. OccluderMaskMaterial is created by the plugin, which owns and updates the mask texture and sets shader uniforms (please do not construct OccluderMaskMaterial directly).

OccluderMaskMaterial

The masked occluder material is a more advanced version of OccluderMaterial that takes into account the body segmentation mask provided by the PoseProcessor. It writes a depth value only for pixels covered by the mask, providing more realistic occlusion aligned with the body and reducing the effect of unnatural cuts in VTO. This material is created by OccluderMaskPlugin; please do not construct an instance of OccluderMaskMaterial directly, as the plugin owns and updates the mask texture.

OccluderPlugin (Deprecated)

Makes the provided node an occluder. This is achieved by setting disableColorWrite=true on the materials of the node's meshes. This flag tells the rendering engine not to write to the color buffer while still writing to the depth buffer. The meshes are then effectively not rendered (fragment color write is skipped) and only occlude all other meshes of the scene (during the depth test).

Deprecated: Apply OccluderMaterial or use OccluderMaskPlugin directly to turn meshes into occluders; this is a more flexible approach.

Simplified implementation:

async load(scene?: babylon.Scene) {
    if (this.loaded || !scene)
        return;
    // Occluder material
    if (this.node instanceof babylon.AbstractMesh) {
        if (this.node.material) {
            this.node.material.disableColorWrite = true;
            this.node.material.needDepthPrePass = true;
        }
        this.node.renderingGroupId = 0;
    }
    this.node.getChildMeshes(false).forEach((mesh) => {
        if (mesh.material) {
            mesh.material.disableColorWrite = true;
            mesh.material.needDepthPrePass = true;
        }
        mesh.renderingGroupId = 0;
    });
    return super.load(scene);
}

Scene Control

LightsPlugin

The plugin controls the intensities of all light sources in the rendered scene according to the estimated brightness level. It extends @geenee/bodyrenderers-common!BrightnessPlugin, providing a callback that automatically adjusts the parameters of lights. On initialization or scene update, it remembers the props of all light sources as references corresponding to the maximum brightness level. When brightness changes, the plugin adjusts the controlled props using the new brightness value as the factor. On unload(), the intensities of all controlled lights are restored.

Classes

Interfaces

Type Aliases

BoneName

Ƭ BoneName: typeof BoneList[number]

Union type of skeleton bones
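The alias uses the standard `as const` + indexed-access pattern; a shortened reproduction (the list is abbreviated here, the real BoneList has all 23 bones):

```typescript
// An `as const` tuple plus an indexed access type yields a
// string-literal union that stays in sync with the list.
const Bones = ["hips", "spine", "head"] as const; // shortened for brevity
type Bone = typeof Bones[number]; // "hips" | "spine" | "head"

// The union rejects unknown bone names at compile time:
const b: Bone = "head";
```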


SkeletonTransforms

Ƭ SkeletonTransforms: SkeletonMap<BoneTransform>

Skeleton transformations


SkirtSkeletonMap

Ƭ SkirtSkeletonMap<T>: BoneMap<SkirtBoneName, T>

Map of additional skirt bones

Object with properties corresponding to each additional SkirtBoneList | bone of skirt.

Type parameters

Name    Description
T       Type of map values

SkirtSkeletonTransforms

Ƭ SkirtSkeletonTransforms: SkirtSkeletonMap<BoneTransform>

Skirt transformations

Variables

BoneList

Const BoneList: readonly ["hips", "spine", "spine1", "spine2", "neck", "head", "headEnd", "shoulderL", "shoulderR", "armL", "armR", "forearmL", "forearmR", "handL", "handR", "uplegL", "uplegR", "legL", "legR", "footL", "footR", "toeL", "toeR"]

List of skeleton bones

Functions

mapBones

mapBones<B, T>(mapFn, bones): { [key in "hips" | "spine" | "spine1" | "spine2" | "neck" | "head" | "headEnd" | "shoulderL" | "shoulderR" | "armL" | "armR" | "forearmL" | "forearmR" | "handL" | "handR" | "uplegL" | "uplegR" | "legL" | "legR" | "footL" | "footR" | "toeL" | "toeR"]: T }

Mapping of bones

Creates a BoneMap | mapped object where values of properties for selected bones of skeleton are evaluated by provided mapping function.

Type parameters

Name     Type                                                                                                                                                                                                                                                                                                           Description
B        extends readonly ("hips" | "spine" | "spine1" | "spine2" | "neck" | "head" | "headEnd" | "shoulderL" | "shoulderR" | "armL" | "armR" | "forearmL" | "forearmR" | "handL" | "handR" | "uplegL" | "uplegR" | "legL" | "legR" | "footL" | "footR" | "toeL" | "toeR")[]                                             Readonly list of bones
T        T                                                                                                                                                                                                                                                                                                              Type of mapped object values

Parameters

Name     Type                     Description
mapFn    (b: B[number]) => T      Function mapping a bone to a value
bones    B                        -

Returns

{ [key in "hips" | "spine" | "spine1" | "spine2" | "neck" | "head" | "headEnd" | "shoulderL" | "shoulderR" | "armL" | "armR" | "forearmL" | "forearmR" | "handL" | "handR" | "uplegL" | "uplegR" | "legL" | "legR" | "footL" | "footR" | "toeL" | "toeR"]: T }

Mapped object of selected skeleton bones
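Typical usage of mapBones; the stand-in implementation below only mirrors the documented signature so the example is self-contained (the real function is exported by this package):

```typescript
// Stand-in for this package's mapBones, matching the documented
// signature: build an object keyed by the selected bones.
function mapBones<B extends readonly string[], T>(
  mapFn: (b: B[number]) => T,
  bones: B,
): Record<B[number], T> {
  const out = {} as Record<B[number], T>;
  for (const b of bones) out[b] = mapFn(b);
  return out;
}

// Build a per-bone value map for a subset of bones:
const rotations = mapBones((b) => ({ bone: b, angle: 0 }), ["armL", "armR"] as const);
```

The `as const` on the bone list is what lets TypeScript type the result as an object with exactly the keys `armL` and `armR`.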


mapSkeleton

mapSkeleton<T>(mapFn): Object

Mapping of skeleton

Creates a SkeletonMap | mapped object where value of property for each bone of skeleton is evaluated by provided mapping function.

Type parameters

Name    Description
T       Type of mapped object values

Parameters

Name     Type                                                                                                                                                                                                                                                                                  Description
mapFn    (b: "hips" | "spine" | "spine1" | "spine2" | "neck" | "head" | "headEnd" | "shoulderL" | "shoulderR" | "armL" | "armR" | "forearmL" | "forearmR" | "handL" | "handR" | "uplegL" | "uplegR" | "legL" | "legR" | "footL" | "footR" | "toeL" | "toeR") => T                               Function mapping a bone to a value

Returns

Object

Mapped object of all skeleton bones

Name         Type
armL         T
armR         T
footL        T
footR        T
forearmL     T
forearmR     T
handL        T
handR        T
head         T
headEnd      T
hips         T
legL         T
legR         T
neck         T
shoulderL    T
shoulderR    T
spine        T
spine1       T
spine2       T
toeL         T
toeR         T
uplegL       T
uplegR       T