@geenee/armature

Intro

Hello and welcome to the Geenee SDK documentation. On this page you will find basic information about the architecture of our SDK. The Processors page describes pose, face, and hand tracking features. The Renderers documentation explains how to build AR experiences using the Geenee SDK. Or you can start building your AR app right now:

Getting Started

To develop with the Geenee SDK you’ll need access tokens that can be created on your account page. There are two types of access tokens: an NPM token and SDK tokens. The NPM token gives access to our package registry and is required to download the SDK’s npm packages. Add the @geenee:registry=https://npm.geenee.ar line to the global or a project-specific .npmrc file to set the Geenee registry as the provider of @geenee packages. Additionally, set your NPM access token by adding the //npm.geenee.ar/:_authToken="npm.geenee.ar_token" line. An example of the .npmrc file is provided with every demo app listed below.

An SDK token is used to authenticate your account and enable the SDK on the current url. An SDK token is created for the url where a web app will be hosted, e.g. yourdomain.com/ar_app. Note: validation of a url doesn’t include protocol, url params, or port; for example, for the https://www.domain.io/ar/?param1=true&param2=abc address in a browser you’ll need a token for the www.domain.io/ar url. You’ll need a separate SDK token for each web app you deploy. It is worth creating additional SDK tokens for local development, e.g. for localhost or 127.0.0.1 urls. Development access tokens do not contribute to the total number of views. The SDK token has to be provided to initialize an instance of Engine.
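
For reference, a minimal .npmrc combining the two lines above looks like this (the token value is a placeholder to be replaced with your personal NPM token):

@geenee:registry=https://npm.geenee.ar
//npm.geenee.ar/:_authToken="npm.geenee.ar_token"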

  • Download and unpack one of the examples:
    • Pose tracking example for babylon.js
    • Pose tracking example for three.js
    • Face tracking example for babylon.js
    • Face tracking example for three.js
    • Hand tracking example for babylon.js
    • Hand tracking example for three.js
  • Get access tokens on your account page.
  • Replace placeholder in .npmrc file with your personal NPM token.
  • Run npm install to install all dependency packages.
  • In src/index.ts set your SDK access tokens (replace stubs).
  • Run npm run start or npm run start:https.
  • Open http(s)://localhost:3000 url in a browser.
  • That’s it, your first AR application is ready.

Engine

The Engine is the core of any application and the organizer of its pipeline. It does all the work controlling lower-level instances while providing a simple and user-friendly interface. The Engine manages data (video) streams, processing, and rendering. It is created for a particular Processor: the Processor constructor is provided to the Engine, and the latter is responsible for initializing, setting up, and controlling the processor during the life cycle of the application. Results of processing are passed to a Renderer attached to the Engine. Renderers use the provided results to define the application’s logic and visualization.

All core components of @geenee/armature (Engine, Processor, and Renderer) are generic classes parametrized by the type of processing results emitted by the Processor. If an Engine is created for a Processor emitting ResultT data, only Renderers accepting ResultT can be attached to it. An optional type parameter of instance settings can also be defined; it controls the object’s behavior. For example, for Processors it’s usually a set of flags that enable or disable evaluation of a particular result. If a result is not required by the Renderer, its computation can be skipped, increasing performance and speed of the application.
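
As an illustration, the face tracking classes used later in this documentation fit together as shown below (a minimal sketch; the generic signatures are simplified, and only the processor/renderer pairing is taken from this page):

import { Engine } from "@geenee/armature";
import { FaceProcessor } from "@geenee/bodyprocessors";
import { FaceRenderer } from "@geenee/bodyrenderers-three";

// FaceProcessor emits FaceResult, so an Engine created for it accepts
// only Renderers parametrized by FaceResult, e.g. FaceRenderer and
// classes derived from it (like the HatRenderer example below).
const engine = new Engine(FaceProcessor);
// engine.addRenderer(...) will type-check for a FaceRenderer-derived
// instance, while a Renderer accepting a different result type
// cannot be attached.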

To build an AR experience you only need to implement a Renderer where all application logic happens. The SDK provides a set of ready-made Processors as well as a set of predefined Renderer classes that can be used as starting points. The SDK is framework-agnostic, so you can utilize any rendering engine, like three.js or babylon.js, for visualization.

Via an Engine instance you can set up, start, and pause the pipeline. Before starting the pipeline, call both initialization methods, init() and setup(); these methods set up processing and video capture respectively. To initialize an Engine instance you need an SDK access token associated with the current url where the web app is deployed. If you call setup() when the pipeline is in the playing state, the Engine will be reset() automatically, so you’ll need to call start() to resume the playing state. Do not call the Engine’s state control methods concurrently.
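
For example, reconfiguring video capture of a running pipeline requires restarting it explicitly (a minimal sketch reusing the engine instance from the example in the next section; the size values are arbitrary):

// setup() on a playing pipeline resets the engine automatically,
// so start() has to be called again to resume playback.
await engine.setup({ size: { width: 1280, height: 720 } });
await engine.start();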

AsyncEngine is an experimental extension of the basic Engine that does processing in the background. Its pipeline provides better performance and more stable frames per second. By using the async type of engine, in some cases you can achieve a smoother and faster experience in the app. AsyncEngine and Engine are compatible; you can use either of them without additional code adjustments.
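
Since the two engine types are interchangeable, switching to the async pipeline is a one-line change. The sketch below assumes AsyncEngine is imported from @geenee/armature alongside Engine:

import { AsyncEngine } from "@geenee/armature";
import { FaceProcessor } from "@geenee/bodyprocessors";

// Constructed and used exactly like the basic Engine,
// but processing runs in the background.
const engine = new AsyncEngine(FaceProcessor);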

Example

To build an app you simply need to create an Engine instance for a Processor and attach a Renderer:

import { Engine } from "@geenee/armature";
import { FaceProcessor } from "@geenee/bodyprocessors";
import { YourRenderer } from "./yourrenderer";
import "./index.css";

const engine = new Engine(FaceProcessor);
// Equivalently
// const engine = new FaceEngine();
const token = location.hostname === "localhost" ?
    "localhost_sdk_token" : "prod.url_sdk_token";

async function main() {
    const container = document.getElementById("root");
    if (!container)
        return;
    const renderer = new YourRenderer(container);
    await Promise.all([
        engine.addRenderer(renderer),
        engine.init({ token: token, transform: true })
    ]);
    await engine.setup({ size: { width: 1920, height: 1080 } });
    await engine.start();
}
main();

The SDK provides the following ready-made engine specializations:

Processors

The Processor is the core computation part of an Engeenee app. An Engine is created for a particular Processor: its constructor is provided to the Engine, and the latter is responsible for initializing, setting up, and controlling the processor during the life cycle of the application. Results of processing are passed to a Renderer attached to the Engine. Renderers use the provided results to define the application’s logic and visualization.

The SDK provides the following ready-made processors:

For more details refer to the documentation of the @geenee/bodyprocessors module.

Renderer

The Renderer is the core visualization and logical part of any application. It’s attached to the Engine. Basically, renderers define two methods: load() and update(). The first one is used to initialize all assets and prepare the scene, for example set up lighting and an environment map. The Engine will call the load() method during pipeline initialization or when the renderer is attached. The second one is used to update the scene according to the results of video processing. This is where all the logic happens. Renderer itself is a generic abstract class defining the common API.

We provide a number of helper classes derived from Renderer that can be used as starting points:

PluginRenderer

Extends Renderer implementing a render plugin system. Plugins can be attached to an instance of the PluginRenderer. Usually they perform simple tasks that can be separated from the bigger app context into atomic building blocks: for example, control an object on a scene to follow (be attached to) the user’s head, apply an image effect (smoothing, beautification), recognize gestures or poses, notify about state changes, or perform other kinds of transformations, pre/post-processing, or analysis on a 3D scene, video stream, or raw data from a Processor. A plugin is an abstraction level used to single out ready-made helpers that can be reused as atomic building blocks of an app.

Plugins are very similar to Renderers but do only one task; they also should implement the two basic methods load() and update(). PluginRenderer initializes all attached plugins by calling their load() method and providing itself as an argument for the plugin to acquire required resources, for example a canvas context or a reference to the 3D scene. Every rendering cycle PluginRenderer calls update() of all attached and successfully loaded plugins, passing the results of video processing and the current video frame.

Plugins are ordered depending on the processing or rendering stage they should step in at; this order is defined by the plugin’s ordinal number. For example, there can be a plugin that filters results of processing by some constraint, say it accepts only poses where the upper body is in the field of view and asks the user to step back for a better virtual try-on experience. This plugin should update poses before the plugin that renders virtual apparel.

CanvasRenderer

A generic Renderer utilizing the ResponsiveCanvas helper. Refer to its documentation for more details. CanvasRenderer can have several layers, and there are two basic usage patterns. The first is to use separate layers for the video and the scene and effectively render the scene on top of the video stream. The advantage of this approach is that the image and the scene can be processed independently and one can apply different effects or postprocessing; this pattern is also easier to implement. Alternatively, one can use only one canvas layer and embed the video stream into the scene as an object via a texture or a background component. This approach has a more complex implementation dependent on the particular renderer. On the other hand, rendering effects affecting the whole scene will also apply to the video stream. This can improve performance and allows advanced rendering/postprocessing techniques to be used.

CanvasParams defines parameters of the ResponsiveCanvas. The ResponsiveCanvas will be created within the provided HTMLElement container. There are three fitting modes: fit, pad, and crop. When the “fit” mode is used, the ResponsiveCanvas adjusts its size to fit into the container, leaving margin fields to keep the aspect ratio. The “pad” mode behavior is the same, but margins are filled with highly blurred parts of the input video instead of a still background. These modes provide the maximum field of view. In “crop” mode the canvas extends beyond the container to use all available area, or, equivalently, is cropped to have the same aspect ratio as the container. This mode doesn’t have margins but may reduce FoV when the ratios of the video and container don’t match. The style of the container will be augmented with overflow: hidden. Optionally, the user can mirror the canvas output, which can be useful for selfie/front camera applications.
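
For example, the Snapshoter example below constructs its renderer as new AvatarRenderer(container, "crop", true). Following the same (container, mode, mirror) pattern, a CanvasRenderer-derived renderer can be created with a different fitting mode (a sketch; the constructor arguments of your own renderer class may differ):

const container = document.getElementById("root");
if (container) {
    // "pad" keeps the maximum field of view and fills the margins with
    // a blurred copy of the video; `true` mirrors the output, which is
    // useful for selfie/front camera applications.
    const renderer = new YourRenderer(container, "pad", true);
}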

VideoRenderer

The video renderer is based on CanvasRenderer and uses two canvas layers: one for the video stream and another to render the 3D scene on top of it. This usage pattern is the easiest to implement, but more limited, as the video is not embedded into the scene and e.g. the renderer’s postprocessing effects or advanced techniques can’t be applied to the video. VideoRenderer is a good starting point for your application.

ShaderRenderer

ShaderRenderer is based on CanvasRenderer and uses two canvas layers: one for the video stream and another to render the 3D scene on top of it. The video is rendered by WebGL shaders, which allows applying complex, computationally demanding post-processing effects to the input stream: for example, simple monochrome or sepia effects, or more complex face beautification and dynamic geometry filters. Shader effects can be encapsulated in the form of plugins. Plugins are levels of abstraction allowing to single out ready-made helpers that are used as atomic building blocks.

SceneRenderer

Extends VideoRenderer to be used with a particular WebGL engine, e.g. Babylon.js or Three.js. The type of the scene is an additional parametrization of the generic. ScenePlugins written for the WebGL engine can be attached to a SceneRenderer. Usually they perform simple scene tasks that can be separated from the main context into atomic building blocks, for example control a node of a scene to follow (be attached to) the user’s head or replace its geometry with the detected face mesh (mask effect).

Example

An example of a simple Renderer creating a scene with a 3D model that follows a head. It can be used as a starting point for a virtual hat try-on application. The next example uses three.js:

import { FaceRenderer } from "@geenee/bodyrenderers-three";
import { FaceResult } from "@geenee/bodyprocessors";
import * as three from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader";
import { FBXLoader } from "three/examples/jsm/loaders/FBXLoader";

export class HatRenderer extends FaceRenderer {
    // Scene
    protected object?: three.Group;
    protected head?: three.Group;
    protected light?: three.PointLight;
    protected ambient?: three.AmbientLight;

    async load() {
        if (this.loaded)
            return;
        await this.setupScene();
        await this.setupGeometry();
        await super.load();
    }

    async setupScene() {
        // Lighting
        this.light = new three.PointLight(0x888888, 1);
        this.ambient = new three.AmbientLight(0x888888, 1);
        this.camera?.add(this.light);
        this.scene?.add(this.ambient);
    }

    async setupGeometry() {
        // Occluder
        this.head = await new FBXLoader().loadAsync("head.fbx");
        const mesh = (this.head.children[0] as three.Mesh);
        mesh.material = new three.MeshBasicMaterial();
        mesh.material.colorWrite = false;
        this.scene?.add(this.head);
        // Model
        const gltf = await new GLTFLoader().loadAsync(this.model);
        this.object = gltf.scene;
        this.scene?.add(this.object);
    }

    async update(result: FaceResult, stream: HTMLCanvasElement) {
        // Render
        const { mesh, transform, metric } = result;
        if (mesh && transform) {
            // Mesh transformation
            const translation = new three.Vector3(...transform.translation);
            const uniformScale = new three.Vector3().setScalar(transform.scale);
            const shapeScale = new three.Vector3(
                ...transform.shapeScale).multiplyScalar(transform.scale);
            const rotation = new three.Quaternion(...transform.rotation);
            // Align model with mesh
            if (this.object) {
                this.object.visible = true;
                this.object.position.copy(translation);
                this.object.scale.copy(uniformScale);
                this.object.setRotationFromQuaternion(rotation);
                this.object.updateMatrix();
            }
            if (this.head) {
                this.head.visible = true;
                this.head.position.copy(translation);
                this.head.scale.copy(shapeScale);
                this.head.setRotationFromQuaternion(rotation);
                this.head.updateMatrix();
            }
        } else {
            for (const obj of [this.object, this.head]) {
                if (!obj)
                    continue;
                obj.visible = false;
            }
        }
        return super.update(result, stream);
    }
}

Plugins

Plugins are very similar to Renderers but do only one task; they also should implement the two basic methods load() and update(). PluginRenderer initializes all attached plugins by calling their load() method and providing itself as an argument for the plugin to acquire required resources, for example a canvas context or a reference to the 3D scene. Every rendering cycle PluginRenderer calls update() of all attached and successfully loaded plugins, passing the results of video processing and the current video frame.

Plugins are ordered depending on the processing or rendering stage they should step in at; this order is defined by the plugin’s ordinal number. For example, there can be a plugin that filters results of processing by some constraint, say it accepts only poses where the upper body is in the field of view and asks the user to step back for a better virtual try-on experience. This plugin should update poses before the plugin that renders virtual apparel, as in the sketch below.
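
A minimal plugin might look like the following sketch. It assumes a Plugin base class exported by @geenee/armature and the load()/update() hooks described above; the exact base class name, hook signatures, and the way the ordinal number is set are simplified here and may differ from the real API:

import { Plugin } from "@geenee/armature";           // assumed export
import { PoseResult } from "@geenee/bodyprocessors"; // assumed result type

// Hypothetical filtering plugin: checks that the upper body is in the
// field of view before plugins that render virtual apparel run. Its
// ordinal number should place it earlier in the plugin chain.
export class UpperBodyFilterPlugin extends Plugin {
    async load(renderer: unknown) {
        // Acquire required resources (canvas context, 3D scene, ...)
        // from the PluginRenderer passed in by the SDK.
    }
    async update(result: PoseResult, stream: HTMLCanvasElement) {
        // Inspect/adjust the pose here, e.g. hide the apparel and ask
        // the user to step back when the upper body is not fully visible.
    }
}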

ScenePlugin

ScenePlugins can be attached to SceneRenderer instances. Usually they control a scene node and implement simple tasks that can be separated from the main rendering context: for example, make a scene node follow (be attached to) a person’s head, make a node an occluder, or create a face mesh node and set a texture as a mask. On load() the plugin prepares or modifies the attached node if required, and a reference to the scene object is cached to be used in update() and unload(); update() implements the main logic and updates the scene node according to the provided results.

ShaderPlugin

ShaderPlugin is a specialization of a Plugin for ShaderRenderer. Such plugins apply complex, computationally demanding post-processing effects to the input stream: for example, simple monochrome or sepia effects, or more complex face beautification and dynamic geometry filters. A ShaderPlugin shares the WebGL context with the main renderer. The basic implementation uses a ShaderProgram created for the shaders provided to the plugin’s constructor. Plugins are organized in a chain within the ShaderRenderer: the input of the next shader is the output of the previous one, the initial input is the original video image, and the output of the last plugin is rendered.
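
As an illustration, a monochrome effect could be expressed as a fragment shader handed to a ShaderPlugin. This is only a sketch: the constructor signature and the uniform/varying names expected by ShaderPlugin and ShaderProgram are assumptions and may differ from the real API:

import { ShaderPlugin } from "@geenee/armature"; // assumed export

// Hypothetical uniform/varying names; the real ShaderProgram may use others.
const fragmentSrc = `
    precision mediump float;
    varying vec2 uv;          // texture coordinates from the vertex shader
    uniform sampler2D image;  // output of the previous plugin or the video frame
    void main() {
        vec4 color = texture2D(image, uv);
        float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
        gl_FragColor = vec4(vec3(gray), color.a);
    }`;

// Attached to a ShaderRenderer, the plugin takes its place in the
// shader chain described above.
const monochrome = new ShaderPlugin(fragmentSrc);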

VideoPlugin

VideoPlugin is a specialization of a Plugin for VideoRenderer. Usually such plugins perform simple 2D drawing tasks on a video canvas (for example, the simplest face effects or adding debug information/graphics). A VideoPlugin gets access to the 2D canvas of the VideoRenderer in load() and draws on this canvas directly in update(), on top of the video frame, using the provided processing data.

Helpers

The SDK provides a number of helper classes that can be useful in AR applications:

Snapshoter

Takes a snapshot of the ResponsiveCanvas backing a CanvasRenderer. In general, the ResponsiveCanvas is multi-layer, therefore two capturing modes are available: capture all layers separately or merge them into one image. When you call the snapshot() method, the Snapshoter waits for the next render update and makes a copy of all canvas layers.

Example of usage (capture snapshot):

const container = document.getElementById("root");
const renderer = new AvatarRenderer(container, "crop", true);
const snapshoter = new Snapshoter(renderer);
container.onclick = async () => {
    const image = await snapshoter.snapshot();
    if (!image)
        return;
    const canvas = document.createElement("canvas");
    const context = canvas.getContext("2d");
    if (!context)
        return;
    canvas.width = image.width;
    canvas.height = image.height;
    context.putImageData(image, 0, 0);
    const url = canvas.toDataURL();
    const link = document.createElement("a");
    link.hidden = true;
    link.href = url;
    link.download = "capture.png";
    link.click();
    link.remove();
};

Recorder

Records a video of the ResponsiveCanvas backing a CanvasRenderer. The ResponsiveCanvas is multi-layer; every rendering update the Recorder merges all layer snapshots onto the recording canvas. The video of the snapshot series is recorded into the final video file.

Example of how to record a 10-second video and download it:

const container = document.getElementById("root");
const renderer = new AvatarRenderer(container, "crop", true);
const recorder = new Recorder(renderer);
container.onclick = async () => {
    recorder?.start();
    setTimeout(async () => {
        const blob = await recorder?.stop();
        if (!blob)
            return;
        const url = URL.createObjectURL(blob);
        const link = document.createElement("a");
        link.hidden = true;
        link.href = url;
        link.download = "capture.webm";
        link.click();
        link.remove();
        URL.revokeObjectURL(url);
    }, 10000);
};

UniRecorder

Records the selected layer of the ResponsiveCanvas backing a CanvasRenderer. This is simple and straightforward recording of a single canvas. Encoded video chunks are cached to later be merged into one blob containing the final video file.

Streamer

Streams video of the ResponsiveCanvas backing a CanvasRenderer. The ResponsiveCanvas is multi-layer; every rendering update the Streamer merges all layer snapshots onto the streaming canvas. A MediaStream instance is created for this canvas and provides access to the generated video stream.
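
A typical way to consume the produced stream is to feed it to a video element or a WebRTC connection. The sketch below hypothetically assumes the Streamer exposes the created MediaStream via a stream property; the actual accessor is not documented here, and the renderer instance comes from the examples above:

const streamer = new Streamer(renderer);
// Hypothetical accessor: the MediaStream created for the merged canvas.
const mediaStream: MediaStream = streamer.stream;

// Preview the generated stream in a regular <video> element.
const video = document.createElement("video");
video.autoplay = true;
video.muted = true;
video.srcObject = mediaStream;
document.body.appendChild(video);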
