
Preparing Models

Occluders

Occluders are visual elements that represent real objects in a scene and are used to hide the parts of AR objects behind them. They are not rendered themselves but participate in the depth test: they stay completely transparent while signaling that mesh fragments behind them should not be drawn. In face tracking, for example, the occluder is a generic head model representing the user's head in the 3D AR scene. In body tracking AR, especially try-on, a body occluder hides the parts of apparel that fall behind the user's real body. This basic body model can be used as an occluder in body tracking applications.

Usage:

  • Use OccluderPlugin to make any scene node an occluder.
  • Use face or body tracking plugins to control occluder's pose.
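The depth-test behavior described above can be illustrated with a tiny single-pixel model in plain TypeScript. This is a conceptual sketch, not the SDK's implementation: the occluder writes depth but no color, so a farther virtual fragment at the same pixel is rejected and the camera feed shows through. (Real engines draw occluders before other meshes, e.g. via render order.)

```typescript
// Minimal single-pixel model of how an occluder participates in the
// depth test: it updates the depth buffer but never the color buffer.
interface Fragment {
  depth: number;        // distance from camera (smaller = closer)
  color: string | null; // null = occluder, writes no color
}

function resolvePixel(fragments: Fragment[], background: string): string {
  let nearest = Infinity; // depth buffer for this pixel
  let color = background; // color buffer starts as the camera feed
  for (const f of fragments) {
    if (f.depth < nearest) {
      nearest = f.depth;                     // depth write always happens
      if (f.color !== null) color = f.color; // color write only for visible meshes
    }
  }
  return color;
}

// A head occluder at depth 1 hides a virtual earring at depth 2:
const visible = resolvePixel(
  [{ depth: 1, color: null }, { depth: 2, color: "gold" }],
  "camera-feed"
);
// visible === "camera-feed": the earring is behind the head, so the
// real camera image shows through instead of the virtual mesh.
```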

Pose tracking

This section shows how to prepare 3D models for apps utilizing body tracking. This type of tracking estimates the position and rotation of the bones of a body skeleton. The most common use case is to align the armature of a skinned 3D model with the pose of the body, as in avatar overlay and virtual apparel try-on AR experiences. You can use one of the examples as a starting point:

Basics

Our SDK supports the following armature/skeleton structure:

Hips
Spine
Spine1
Spine2
Neck
Head
HeadTop_End
LeftShoulder
LeftArm
LeftForeArm
LeftHand
...
RightShoulder
RightArm
RightForeArm
RightHand
...
LeftUpLeg
LeftLeg
LeftFoot
LeftToeBase
...
RightUpLeg
RightLeg
RightFoot
RightToeBase
...

You can download the reference armature here. Any model with this skeleton is compatible with our body tracking. Bone names must match those listed above, with one exception: bone names may carry an arbitrary prefix, for example "mixamo:LeftUpLeg" or "prefixLeftArm".
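The naming rule can be sketched as a small pre-flight check. This is a hypothetical helper, not part of the SDK; it only illustrates the "exact name, optional prefix" suffix-match rule (toe/finger chains truncated for brevity):

```typescript
// Required bone names (finger/toe chains omitted); any per-rig prefix is
// allowed, e.g. "mixamorig:LeftUpLeg" still matches "LeftUpLeg".
const REQUIRED_BONES = [
  "Hips", "Spine", "Spine1", "Spine2", "Neck", "Head", "HeadTop_End",
  "LeftShoulder", "LeftArm", "LeftForeArm", "LeftHand",
  "RightShoulder", "RightArm", "RightForeArm", "RightHand",
  "LeftUpLeg", "LeftLeg", "LeftFoot", "LeftToeBase",
  "RightUpLeg", "RightLeg", "RightFoot", "RightToeBase",
];

// Hypothetical pre-flight check: every required bone name must appear as
// a suffix of some bone in the model's armature.
function missingBones(
  armatureBones: string[],
  required: string[] = REQUIRED_BONES
): string[] {
  return required.filter(
    (name) => !armatureBones.some((bone) => bone.endsWith(name))
  );
}
```

Running the check on a Mixamo-style rig (`"mixamorig:Hips"`, `"mixamorig:Spine"`, …) reports no missing bones, while a rig without a `Neck` bone is flagged.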

This armature/skeleton is a de facto standard used by many game engines (Godot, Unreal, Unity), tools (Mixamo, Lens Studio), avatar systems (Ready Player Me), etc. For example, any Ready Player Me avatar is compatible with our body tracking out of the box. You can also use these avatars as a starting point for your character or outfit model.

Usage:

  • Use PoseAlignPlugin to align the rig of a scene node with the estimated pose. The node's armature will follow the pose and overlay on top of the real body.
  • Use PoseOutfitPlugin to mark the body meshes of an avatar's scene node as occluders and optionally hide some child meshes for virtual try-on.
  • Use PoseTwinPlugin to turn a rigged scene node into a digital twin that mirrors the pose and stands beside the user.
  • Example for babylon.js
  • Example for three.js

In general, preparing models for body tracking is the same as preparing any other 3D model, and you can use any 3D tool you prefer: Blender, Maya, Cinema 4D, etc. The main difference is that the final model must be skinned. This process requires some skill, but don't worry: it's not that difficult, and there are plenty of materials on how to do it. In the next section we'll describe several simple methods of skinning a model.

Skinning

Skinning is the process of binding the actual 3D mesh to the joint setup (armature). This means the joints you set up will influence the vertices of your model and move them accordingly. We will describe several simple approaches to skinning a model.
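The binding described above is usually linear blend skinning: each vertex moves by the weighted sum of its bones' motions. A minimal sketch with bone transforms reduced to plain translations (a real engine uses full 4x4 matrices per bone):

```typescript
// Linear blend skinning for one vertex, with bone transforms reduced to
// translations to keep the sketch short.
type Vec3 = [number, number, number];

interface BoneInfluence {
  offset: Vec3;   // how the bone's motion displaces points it fully owns
  weight: number; // influence painted onto this vertex (weights sum to 1)
}

function skinVertex(rest: Vec3, influences: BoneInfluence[]): Vec3 {
  const out: Vec3 = [rest[0], rest[1], rest[2]];
  for (const { offset, weight } of influences) {
    out[0] += weight * offset[0];
    out[1] += weight * offset[1];
    out[2] += weight * offset[2];
  }
  return out;
}

// A vertex near the elbow, influenced half by the upper and half by the
// lower arm bone, moves halfway between their displacements:
const v = skinVertex([0, 1, 0], [
  { offset: [0.2, 0, 0], weight: 0.5 },
  { offset: [0, 0, 0], weight: 0.5 },
]);
// v is [0.1, 1, 0]
```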

Automatic Weights in Blender

Automatic weighting is the most common skinning technique. It works best with models in a T-pose, and in some cases the results are not very accurate. Weight painting may be required to fine-tune the initial automatic skinning, which serves as a first approximation.

Extended version with screenshots.

  • Download any reference model:
  • Import the reference model into a scene.
  • Import your model into the scene.
  • Align the asset with the reference model as accurately as possible using translation, rotation and scale. If you're using reference body meshes and would like to keep them as occluders, make sure the meshes do not intersect. Minor intersections that can be removed with further pose alignment are fine.
  • Bake transformations into the mesh: select your model in Object mode and use Object -> Apply -> All Transforms.
  • Some tips on how to prepare mesh for better automatic weighting:
    • Automatic weighting works best if model is initially in T-pose.
    • Remove holes in the mesh - make sure all the vertices on your mesh are welded/merged together. You can use Merge By Distance tool available in Modeling mode (Mesh -> Merge -> By Distance).
    • Recalculate normals. In Modeling mode, select all vertices and use Mesh -> Normals -> Recalculate Outside
  • Now it's time to align the armature's pose with the model. It's easier to use the reference body mesh at this stage: it gives you a sense of body part sizes. Moreover, the reference mesh can be left in the scene and later used as an occluder.
  • In Pose mode rotate bones to align armature with the model.
  • Make aligned pose the rest pose:
    1. Pose -> Apply -> Apply Visual Transform to Pose.
    2. Pose -> Apply -> Apply Pose as Rest Pose.
  • If you are using a reference model with a mesh, e.g. the reference body model (not armature-only), and intend to use it as an occluder, you may need to apply the pose modifier to the mesh before setting the rest pose:
    1. In Object mode select reference model.
    2. Go to its modifiers and apply Armature.
    3. Make current pose the rest pose as described earlier.
    4. Go back to object modifiers and set Armature back.
    5. Now the base model has a new rest pose. Test it in the Pose mode.
  • When everything is aligned it's time to skin your model.
  • Make armature a parent of the model (Armature Deform Parenting):
    1. Select all child objects that will be influenced by the armature.
    2. Then lastly, select the armature object itself.
    3. Press Ctrl+P and select one of Armature Deform methods.
    4. Preferred method is With Automatic Weights.
    5. Try different methods and choose the one that gives the best results. Sometimes With Envelope Weights provides better skinning.
  • Check skinning in Pose mode. Make sure meshes deform correctly.
  • Automatic weighting may not provide a 100% accurate result. Use the Weight Paint tool to correct weights and achieve higher accuracy.
  • Remove meshes you don't need; the Armature is a required part of the model.
  • Export your model (optionally with body mesh to be used as occluder).
  • In your app use one of body pose plugins to add model to a scene.

Several iterations of skinning may be required to achieve good results. If the model behaves well in Pose mode but renders oddly in AR when used with the SDK, consider tuning PoseTuneParams as described in their documentation. Try using pre-skinned body models as references and verify that the apparel and body meshes deform identically, do not overlap in certain poses, etc. Also double-check the armature alignment inside the model: spine bones may be too close to the chest, or shoulders too far from the edges.

You can find more materials about skinning in Blender documentation:

Weight Transfer in Blender

The weight transfer technique can provide much better results than automatic weighting, especially when you already have a skinned mesh similar to your model. For example, weights from a base body model can be transferred to a t-shirt model, or you may have already skinned another model with similar apparel. Many steps are the same as for automatic weighting described in the previous section.

Extended version with screenshots.

  • Download any model you'll transfer weights from (reference):
  • Import the reference model into a scene.
  • Import your model into the scene (target).
  • Align the asset with the reference model as accurately as possible using translation, rotation and scale. If you're using reference body meshes and would like to leave them as occluders make sure meshes do not intersect. Minor intersections that can be removed with further pose alignment are fine.
  • Bake transformations into the target mesh: select your model in Object mode and use Object -> Apply -> All Transforms.
  • Some tips on how to prepare mesh for more accurate skinning:
    • Remove holes in the mesh - make sure all the vertices on your mesh are welded/merged together. You can use Merge By Distance tool available in Modeling mode (Mesh -> Merge -> By Distance).
    • Recalculate normals. In Modeling mode, select all vertices and use Mesh -> Normals -> Recalculate Outside
  • Now it's time to fit pose of reference mesh with the target model.
  • In Pose mode rotate bones to align armature with the model.
  • Apply current pose modifiers to reference models:
    1. In Object mode select each reference model.
    2. Go to its modifiers and apply Armature.
  • Make current pose the rest pose:
    1. Pose -> Apply -> Apply Visual Transform to Pose.
    2. Pose -> Apply -> Apply Pose as Rest Pose.
  • Set Armature modifier back for all reference models.
  • Transfer weights from the reference mesh to the target.
    1. Go into Weight Paint mode.
    2. Select models to transfer weights from (reference).
    3. Then lastly, select the model to transfer weights to (target).
    4. Transfer weights: Weights -> Transfer Weights.
    5. In the menu select By Name in Source Layers dropdown.
    6. Select the Vertex Mapping method. We recommend trying Nearest Face Interpolated first; other methods may provide better results depending on the geometry of the reference and target.
  • The only thing left is to add an Armature modifier to the target. After that, the current pose will influence the vertices of the target mesh.
  • Optionally you can make Armature the parent of the target model to reflect scene hierarchy.
  • Check skinning in Pose mode. Make sure meshes deform correctly.
  • Weight transfer may not provide a 100% accurate result. You can use the Weight Paint tool to correct weights and achieve higher accuracy.
  • Remove meshes you don't need; the Armature is a required part of the model.
  • Export your model (optionally with body mesh to be used as occluder).
  • In your app use one of body pose plugins to add model to a scene.
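The vertex-mapping idea behind the transfer step can be sketched in plain TypeScript. This is a simplified nearest-vertex variant, not Blender's actual algorithm: Nearest Face Interpolated additionally interpolates weights across the nearest face, but the principle is the same.

```typescript
// Simplified weight transfer: each target vertex copies the bone weights
// of the nearest reference vertex.
type Vec3 = [number, number, number];
type Weights = Record<string, number>; // bone name -> weight

function transferWeights(
  refVerts: Vec3[],
  refWeights: Weights[],
  targetVerts: Vec3[]
): Weights[] {
  return targetVerts.map((t) => {
    let best = 0;
    let bestD = Infinity;
    refVerts.forEach((r, i) => {
      // squared distance is enough for nearest-neighbor comparison
      const d = (r[0] - t[0]) ** 2 + (r[1] - t[1]) ** 2 + (r[2] - t[2]) ** 2;
      if (d < bestD) { bestD = d; best = i; }
    });
    return { ...refWeights[best] };
  });
}

// A t-shirt vertex near the reference model's spine vertex inherits the
// spine weights rather than the hips weights:
const result = transferWeights(
  [[0, 0, 0], [0, 1, 0]],          // reference body vertices
  [{ Hips: 1 }, { Spine: 1 }],     // their painted weights
  [[0, 0.9, 0]]                    // target apparel vertex
);
// result[0] is { Spine: 1 }
```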

In general, weight transfer, where applicable, provides better skinning than automatic weighting. If you used a pre-skinned body model as the reference, the target model deforms correctly in different poses after skinning, and you will use the reference as an occluder, everything should work fine in AR. If the model behaves well in Pose mode but still renders oddly in AR with the SDK, consider tuning PoseTuneParams as described in their documentation.

You can find more materials about weight transfer and painting in Blender docs:

Mixamo

Mixamo is a free and easy-to-use tool for rigging and skinning human-like models, and it provides very good results. The only drawback is that the rig itself may vary slightly depending on the model's geometry and the selection of keypoints. For example, relative bone proportions do not necessarily follow our base model (the reference armature), the position of the shoulder bones may differ, and the head bone is not connected to the neck. Models exported from Mixamo are compatible with the SDK out of the box, but we recommend tuning PoseTuneParams as described in their documentation to achieve better alignment of the rig and the detected pose.

Extended version with screenshots. If you have a separate apparel model, you need to add a body and align them to build a full-body model compatible with Mixamo. If you already have a complete 3D full-body model, you can skip the next steps:

  • Download body model that fits the asset best:
  • Import the body model into a scene.
  • Import your model into the scene.
  • Align the asset with the body model as accurately as possible using translation, rotation and scale. Make sure there are no intersections between the meshes. Minor intersections that can be removed with further pose alignment are fine.
  • Bake transformations into the model's mesh: select your model in Object mode and use Object -> Apply -> All Transforms.
  • Some tips on how to optimize mesh for more accurate rigging/skinning:
    • Remove holes in the mesh - make sure all the vertices on your mesh are welded/merged together. You can use Merge By Distance tool available in Modeling mode (Mesh -> Merge -> By Distance).
    • Recalculate normals. In Modeling mode, select all vertices and use Mesh -> Normals -> Recalculate Outside
  • Fit the body into the model by adjusting the pose. In Pose mode rotate bones to align the body with the model.
  • Apply current pose modifiers to the body model:
    1. In Object mode select each mesh of the body object.
    2. Go to its modifiers and apply Armature.
  • Now we can remove the Armature, since it no longer influences any mesh and Mixamo will re-rig the whole model, providing a new skeleton.

Rigging with Mixamo:

  • Mixamo works only with the .fbx format. Export the full-body model to .fbx.
  • Make sure the model is exported together with all textures: in the export dialog select Path Mode: Copy and enable the Embed Textures switch.
  • In Mixamo web app click Upload Character button.
  • Drag and drop the exported .fbx file.
  • Follow the step-by-step auto-rigging instructions.
  • Download rigged and skinned model.
  • Convert .fbx to .glb model:
    1. Import .fbx model in Blender.
    2. Export as .glb file.
  • In your app use one of body pose plugins to add model to a scene.

Models exported from Mixamo are compatible with the SDK out of the box, but we recommend tuning PoseTuneParams as described in their documentation to achieve better alignment of the rig and the detected pose.

Face tracking

In this section we explain how to prepare models and build scenes for face tracking applications. Face tracking estimates the face mesh and head pose, which allows attaching 3D models to the head or to a particular face point. The latter can provide better alignment for models like glasses or beards, since the model follows the selected keypoint rather than the head as a whole. You can use one of the examples as a starting point:

Basics

The general approach to preparing scenes for face tracking is to align meshes relative to a reference head or face model, then attach the corresponding scene node to the head or to a face point using one of the face tracking plugins. The mesh will then follow the user's head/face with the same alignment it had relative to the reference model.

Usage:

  • Use HeadTrackPlugin to attach a scene node to the head. Child meshes will be attached to the head the same way they are positioned around the reference models.
  • Use FaceTrackPlugin to attach a scene node to a face point. Child meshes will be attached to this point the same way they are positioned relative to that point on the reference face model.
  • Use OccluderPlugin to make a scene node an occluder. Usually, the occluder is a generic head model used to hide scene geometry behind the user's real head. In other words, it represents the user's head in the scene but is not rendered itself (it stays transparent).
  • Example for babylon.js
  • Example for three.js

In general, preparing models for face tracking is the same as preparing any other 3D model, and you can use any 3D tool you prefer: Blender, Maya, Cinema 4D, etc. The main difference is that in the final scene the model should be aligned with the reference head or face. The next sections describe how to do this.

Models Attached to a Head

Scene nodes can be attached to the user's head; they will follow its current pose (translation, orientation, and optionally scale). This attachment can be used for adding accessories such as headwear, glasses, or other 3D models and effects.

Extended version with screenshots. Download reference models:

Create a scene (Blender):

  • Create a new empty project.
  • Import or build 3D models.
  • Import the head or face model to use as a reference for alignment. You can leave the head model in the final scene and later use it as an occluder in the AR experience.
  • Create an empty node with an identity transformation (position at the origin, no rotation, unit scale). This scene node will be attached to the head using HeadTrackPlugin. The plugin will continuously update the node's transform according to the current head pose, and all child nodes will hierarchically inherit this transform. The pivot node can be seen as a virtual placeholder for the real head.
  • Make all 3D objects that will be attached to the head children of this node.
  • Align the 3D objects around the reference, assigning each a transform relative to its parent pivot node. In AR they will be aligned with the real face/head in exactly the same way: relative transforms are preserved while the parent node's transform follows the current head pose. In other words, a scene built around the reference models will be rendered around the user's head and follow its pose (position, orientation and scale).
  • Remove the reference objects (face model) from the scene. Optionally leave the head model in place to later use as an occluder via OccluderPlugin.
  • Export the final scene into .glb or embedded .gltf file.
  • Use HeadTrackPlugin to attach the pivot node to the head.
  • If the scene includes occluders, additionally initialize OccluderPlugin for the corresponding scene nodes.
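The pivot math behind these steps can be sketched with translations only (rotation and scale compose the same way through the node hierarchy; this is a conceptual sketch, not SDK code):

```typescript
// Pivot-node math with translations only: the child's offset relative to
// the pivot, authored around the reference head, is preserved while the
// pivot itself follows the tracked head pose each frame.
type Vec3 = [number, number, number];

const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];

// Authored in Blender: a hat placed 0.25 units above the reference head,
// with the pivot at the origin (identity transform).
const hatLocal: Vec3 = [0, 0.25, 0];

// At runtime a head-tracking update moves the pivot to the head position
// (a full pose would also carry rotation and scale):
function worldPosition(headPose: Vec3, childLocal: Vec3): Vec3 {
  return add(headPose, childLocal);
}

// Head detected at (0.5, 1.5, -2) => hat renders at (0.5, 1.75, -2),
// i.e. still 0.25 above the head, exactly as authored.
const hatWorld = worldPosition([0.5, 1.5, -2], hatLocal);
```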

This was a very basic guide; the flexibility of the SDK allows different approaches and usage patterns, for example:

  • You can use the scene's root node itself as the pivot. In this case all objects will be top-level nodes, and HeadTrackPlugin is created for the root scene node.
  • You can initialize HeadTrackPlugin for each object in the scene; in this case it is recommended that objects be top-level nodes with an identity transform. The easiest way is to apply Object -> Apply -> All Transforms.
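A hypothetical pre-flight check for the second pattern can verify that each object's transform really was baked. The helper name and the column-major 16-number layout are illustrative assumptions, not part of the SDK:

```typescript
// Hypothetical check: after Object -> Apply -> All Transforms, a node's
// local 4x4 matrix (column-major, 16 numbers) should be the identity.
const IDENTITY: number[] = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
];

function isIdentity(matrix: number[], epsilon = 1e-6): boolean {
  return (
    matrix.length === 16 &&
    matrix.every((v, i) => Math.abs(v - IDENTITY[i]) < epsilon)
  );
}
```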

Models Attached to a Face Point

Scene nodes can be attached to points on the face; they will follow these points plus the rotation and scale of the head. This attachment can be used for adding accessories like glasses, mustaches, or other 3D models and effects.

Extended version with screenshots. Download reference models and uv map:

Create a scene (Blender):

  • Create a new empty project.
  • Import or build 3D models.
  • Import face model to use it as a reference for alignment. Optionally import head model to later use it as an occluder in the AR experience.
  • Create an empty node. This scene node will be attached to the face point using FaceTrackPlugin. The plugin will continuously update the node's transform according to the current face mesh estimation, and child nodes will hierarchically inherit this transform. This pivot node can be seen as a virtual placeholder for a real point on the face.
  • Set the initial translation of the parent/pivot node equal to the coordinates of the face point. This will help you set up the relative transforms of child meshes so they align properly with the face.
  • In Edit Mode on the Modeling tab, select the vertex of the face model you would like the 3D object to be attached to.
  • Snap cursor to the selected vertex. Right Click -> Snap Vertices -> Cursor to Selected.
  • Snap the pivot node's position to the cursor: in Object Mode select the pivot scene node and Right Click -> Snap -> Selection to Cursor.
  • Make all 3D objects that will be attached to the selected face point children of the pivot node.
  • Align the 3D objects with the reference face model, assigning each a transform relative to its parent pivot node. In AR they will be aligned with the real face in exactly the same way: relative transforms are preserved while the parent node's transform follows the pose of the face point. In other words, a scene built around the reference model will be rendered around the user's head and follow the selected face point.
  • Remove reference face model from the scene.
  • Export the final scene into .glb or embedded .gltf file.
  • Find the index of the face point in the face map.
  • Use FaceTrackPlugin to attach the pivot node to the face point.
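The pivot snapping above can be summarized with translations only. This is a conceptual sketch with illustrative numbers, not SDK code: snapping the pivot to the face vertex makes each child's local transform an offset from that vertex, which is then replayed at the tracked face point.

```typescript
// Translations-only sketch of the face-point pivot. At authoring time the
// pivot sits on the chosen face vertex of the reference model; each
// child's local offset is its position relative to that vertex. At
// runtime the same offset is applied at the tracked face point.
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];

// Authoring: glasses positioned around the nose-bridge vertex of the
// reference face model (illustrative coordinates).
const noseBridgeRef: Vec3 = [0, 1.5, 0.25];
const glassesWorld: Vec3 = [0, 1.5, 0.5];
const glassesLocal = sub(glassesWorld, noseBridgeRef); // [0, 0, 0.25]

// Runtime: the tracked face point has moved; the relative offset is
// preserved, so the glasses stay on the nose bridge.
const trackedPoint: Vec3 = [0.5, 1.25, -1];
const rendered = add(trackedPoint, glassesLocal); // [0.5, 1.25, -0.75]
```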