Renderers

Renderer

Renderer is the core visualization and logic component of any application. It's attached to the Engine. Basically, a renderer defines two methods: load() and update(). The first initializes all assets and prepares the scene, for example setting up lighting and the environment map; the Engine calls load() during pipeline initialization or when the renderer is attached. The second updates the scene according to the results of video processing; this is where all the logic happens. Renderer itself is a generic abstract class defining the common API.
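
A custom renderer therefore only has to implement these two methods. Below is a minimal sketch, assuming a Renderer base class parametrized by the processing result type, with a loaded flag and base implementations; these names are assumptions and may differ from the actual API:

import { Renderer } from "@geenee/armature";
import { FaceResult } from "@geenee/bodyprocessors";

// Minimal renderer sketch: the base class package and its
// members are assumptions, not the verified API
export class MinimalRenderer extends Renderer<FaceResult> {
    async load() {
        // Initialize assets and prepare the scene here
        await super.load();
    }
    async update(result: FaceResult, stream: HTMLCanvasElement) {
        // Update the scene according to processing results here
        return super.update(result, stream);
    }
}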

We provide a number of helper classes derived from Renderer that can be used as starting points:

CanvasRenderer

A generic Renderer utilizing the ResponsiveCanvas helper; refer to its documentation for more details. CanvasRenderer can have several layers, and there are two basic usage patterns. The first uses separate layers for the video and the scene, effectively rendering the scene on top of the video stream. The advantage of this approach is that the image and the scene can be processed independently, so different effects or postprocessing can be applied to each; this pattern is also easier to implement. Alternatively, one can use a single canvas layer and embed the video stream into the scene as an object, via a texture or a background component (see the sketch below). This approach requires a more complex implementation that depends on the particular renderer. On the other hand, rendering effects affecting the whole scene will also apply to the video stream. This can improve performance and allows advanced rendering/postprocessing techniques to be used.
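
For instance, the single-layer pattern can embed the camera stream into the scene as a video texture. In plain three.js terms (shown for illustration only, independent of the CanvasRenderer API) this looks like:

import * as three from "three";

// Embed the camera stream as a scene background (single-layer pattern)
const video = document.querySelector("video") as HTMLVideoElement;
const scene = new three.Scene();
scene.background = new three.VideoTexture(video);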

CanvasParams defines the parameters of ResponsiveCanvas. The ResponsiveCanvas will be created within the provided HTMLElement container. There are three fitting modes: fit, pad, and crop. In "fit" mode, ResponsiveCanvas adjusts its size to fit into the container, leaving margin fields to keep the aspect ratio. "pad" mode behaves the same, but margins are filled with heavily blurred parts of the input video instead of a still background. These modes provide the maximum field of view. In "crop" mode, the canvas extends beyond the container to use all available area, or, equivalently, is cropped to have the same aspect ratio as the container. This mode doesn't have margins but may reduce the field of view when the aspect ratios of the video and the container don't match. The style of the container will be augmented with overflow="hidden". Optionally, the user can mirror the canvas output, which is useful for selfie/front-camera applications.
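
For illustration, a parameters object might look like the sketch below; the exact shape of CanvasParams is an assumption based on this description, not the verified API:

// Sketch of CanvasParams (field names are assumed)
interface CanvasParamsSketch {
    mode: "fit" | "pad" | "crop"; // fitting mode described above
    mirror?: boolean;             // mirror output for selfie cameras
}
const params: CanvasParamsSketch = { mode: "crop", mirror: true };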

VideoRenderer

VideoRenderer is based on CanvasRenderer and uses two canvas layers: one for the video stream and another to render the 3D scene on top of it. This usage pattern is the easiest to implement, but more limited, as the video is not embedded into the scene, so e.g. the renderer's postprocessing effects or advanced techniques can't be applied to the video. VideoRenderer is a good starting point for your application.

Example

An example of a simple Renderer creating a scene with a 3D model that follows the head. It can be used as a starting point for a virtual hat try-on application. The example uses three.js:

import { FaceRenderer } from "@geenee/bodyrenderers-three";
import { FaceResult } from "@geenee/bodyprocessors";
import * as three from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader";
import { FBXLoader } from "three/examples/jsm/loaders/FBXLoader";

export class HatRenderer extends FaceRenderer {
    // Scene
    protected object?: three.Group;
    protected head?: three.Group;
    protected light?: three.PointLight;
    protected ambient?: three.AmbientLight;
    // URL of the hat model asset (assumed here; in a real app
    // it would typically be passed in or configured)
    protected model = "hat.glb";

    async load() {
        if (this.loaded)
            return;
        await this.setupScene();
        await this.setupGeometry();
        await super.load();
    }

    async setupScene() {
        // Lighting
        this.light = new three.PointLight(0x888888, 1);
        this.ambient = new three.AmbientLight(0x888888, 1);
        this.camera?.add(this.light);
        this.scene?.add(this.ambient);
    }

    async setupGeometry() {
        // Occluder: the head mesh writes only to the depth buffer
        this.head = await new FBXLoader().loadAsync("head.fbx");
        const mesh = this.head.children[0] as three.Mesh;
        const material = new three.MeshBasicMaterial();
        material.colorWrite = false;
        mesh.material = material;
        this.scene?.add(this.head);
        // Model
        const gltf = await new GLTFLoader().loadAsync(this.model);
        this.object = gltf.scene;
        this.scene?.add(this.object);
    }

    async update(result: FaceResult, stream: HTMLCanvasElement) {
        // Render
        const { mesh, transform } = result;
        if (mesh && transform) {
            // Mesh transformation
            const translation = new three.Vector3(...transform.translation);
            const uniformScale = new three.Vector3().setScalar(transform.scale);
            const shapeScale = new three.Vector3(
                ...transform.shapeScale).multiplyScalar(transform.scale);
            const rotation = new three.Quaternion(...transform.rotation);
            // Align the model and the occluder with the face mesh
            if (this.object) {
                this.object.visible = true;
                this.object.position.copy(translation);
                this.object.scale.copy(uniformScale);
                this.object.setRotationFromQuaternion(rotation);
                this.object.updateMatrix();
            }
            if (this.head) {
                this.head.visible = true;
                this.head.position.copy(translation);
                this.head.scale.copy(shapeScale);
                this.head.setRotationFromQuaternion(rotation);
                this.head.updateMatrix();
            }
        } else {
            // No face detected: hide the model and the occluder
            for (const obj of [this.object, this.head]) {
                if (!obj)
                    continue;
                obj.visible = false;
            }
        }
        return super.update(result, stream);
    }
}
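
To run the example, the renderer has to be attached to an engine processing the camera stream. The sketch below assumes the FaceProcessor engine from @geenee/bodyprocessors and a typical bootstrap sequence; exact method names and parameters may differ:

import { FaceProcessor } from "@geenee/bodyprocessors";

async function main() {
    // Hypothetical bootstrap: method names are assumptions
    const engine = new FaceProcessor();
    const container = document.getElementById("root") as HTMLElement;
    const renderer = new HatRenderer(container);
    await engine.addRenderer(renderer);
    await engine.init();
    await engine.start();
}
main();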

SceneRenderer and Plugins

Renderers support a plugin system. SceneRenderer extends VideoRenderer for use with a particular WebGL engine, for example babylon.js or three.js; the type of the scene object is an additional parameter of the generic class. The most important feature of SceneRenderer is the plugin system. Plugins written for a particular WebGL engine can be attached to a SceneRenderer. Usually a plugin controls a scene node and implements a simple task that can be separated from the main rendering context: for example, making a scene node follow (be attached to) the person's head, making a node an occluder, or creating a face-mesh node and setting a texture as a mask.

ScenePlugin is structured much like SceneRenderer, but it implements only one task on a node. Like Renderer, it implements two basic methods: load() to set up the attached scene node and update() to control the node according to the results provided by the Processor. Plugins are a level of abstraction that allows singling out ready-made helpers used as atomic building blocks.
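
As an illustration, a plugin attaching its node to the detected head might look like the sketch below. The ScenePlugin base class and its members are assumptions based on the description above, not the verified API:

import { ScenePlugin } from "@geenee/bodyrenderers-three";
import { FaceResult } from "@geenee/bodyprocessors";
import * as three from "three";

// Hypothetical plugin: base class and signatures are assumed
export class FollowHeadPlugin extends ScenePlugin {
    constructor(protected node: three.Object3D) {
        super();
    }

    async update(result: FaceResult, stream: HTMLCanvasElement) {
        const { transform } = result;
        // Show the node on the detected head, hide it otherwise
        this.node.visible = !!transform;
        if (transform) {
            this.node.position.set(...transform.translation);
            this.node.setRotationFromQuaternion(
                new three.Quaternion(...transform.rotation));
            this.node.scale.setScalar(transform.scale);
        }
        return super.update(result, stream);
    }
}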

Another plugin type is VideoPlugin, which performs simple image transformations on the video stream, e.g. smoothing effects or gamma correction. The idea is the same, but update() works with the input image provided in a canvas element. VideoPlugin updates the image according to the provided results, modifying it in place by drawing on the canvas directly.
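
For example, a gamma-correction plugin could process the canvas in place as sketched below. The VideoPlugin base class and its update() signature are assumptions based on the description; the pixel manipulation itself uses the standard Canvas 2D API:

import { VideoPlugin } from "@geenee/armature";
import { FaceResult } from "@geenee/bodyprocessors";

// Hypothetical gamma-correction plugin: base class is assumed
export class GammaPlugin extends VideoPlugin {
    protected lut = new Uint8ClampedArray(256);

    constructor(gamma = 2.2) {
        super();
        // Precompute the gamma curve as a lookup table
        for (let v = 0; v < 256; v++)
            this.lut[v] = 255 * Math.pow(v / 255, 1 / gamma);
    }

    async update(result: FaceResult, stream: HTMLCanvasElement) {
        const ctx = stream.getContext("2d");
        if (!ctx)
            return super.update(result, stream);
        // Read pixels, apply the curve, and write them back in place
        const image = ctx.getImageData(0, 0, stream.width, stream.height);
        const data = image.data;
        for (let i = 0; i < data.length; i += 4)
            for (let c = 0; c < 3; c++)
                data[i + c] = this.lut[data[i + c]];
        ctx.putImageData(image, 0, 0);
        return super.update(result, stream);
    }
}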