Are you usually the one behind the camera, but planning to record yourself to control your characters? Even if you’re used to performing, the motion capture process is very different from standing in front of a video camera.

The Kinect captures a lot of subtle aspects of your performance, so there’s more to it than just making sure you make the appropriate gesture at the appropriate time: things like your stance and how you carry your weight do a lot to make you (or your character) “you” — and the Kinect *will* pick them up. Unless you’re actually a talented actor, it’s often best to have a different person play each of your characters.

For example, here are four different people controlling the same George Washington puppet:


They each give him a very distinct personality. (And can you tell which performer is female?) It’s pretty easy to spot who’s who — if you do enough of this stuff you’ll even start to recognize who was responsible for a particular motion track from the movement of the dots alone.

But you’ll also notice there are places where the puppet looks great and places where it falls apart. What can you do to minimize that? There are two things you need to take into account: the limitations of your character model, and the limitations of the hardware.

Be aware of the limits of your character model. In this case, Washington obviously wasn’t built for skipping, jumping and rocking out. He’s made out of an old painting, and was designed to do stiffer, Founding Father-y stuff. The way the legs attach to the pelvis and the way the coattails move both look great with less exaggerated movements, but pushed to the limits they break in distracting ways.

If you know in advance how your character is likely to move, you can plan how much the layers need to overlap and what will happen at their extremes. And if you know what works and what doesn’t with your existing rig, you can keep it in mind while performing. It’s not a bad idea to rig after recording, or at the very least to capture a test for reference.

Always record facing the camera. Yes, even if your character won’t be facing forward in the scene. Notice that most of the places where Washington falls apart are ones where the actor recording the track turned to one side or the other, or even tried to spin around. Keep in mind that even with the 3D template setup you’re still controlling 2D layers, and if your puppet has a single-piece torso like Washington’s, it’s going to squash strangely when you rotate: the Kinect sees the turn as the left and right shoulder (and hip) points moving closer together.
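To see why the squash happens, think of the tracked points as projections onto the camera plane: the apparent distance between your shoulders shrinks with the cosine of how far you’ve turned. This is just illustrative math, not anything KinectToPin itself computes:

```javascript
// Illustrative only: how much a 2D torso layer gets squashed when the
// performer turns away from the camera. The Kinect reports point positions
// projected toward the camera, so the apparent shoulder-to-shoulder
// distance shrinks with the cosine of the body's yaw angle.
function apparentShoulderWidth(trueWidth, yawDegrees) {
  var yaw = yawDegrees * Math.PI / 180;
  return trueWidth * Math.cos(yaw);
}

apparentShoulderWidth(100, 0);  // facing the camera: full width, 100
apparentShoulderWidth(100, 60); // turned 60 degrees: roughly half width
```

A torso rigged as one layer gets scaled down by that whole factor, which is why even a modest turn reads as a sudden, distracting squish.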

But that isn’t to say you can’t face another direction once you’ve recorded! The 3D template setup enables you to reposition the camera in post and rig your character relative to its new viewing angle. One of the best ways to set up a project is to use a single track of motion data with multiple models of the same character drawn from different perspectives. Check out these skeletons:


You can even switch between different camera angles of your scene using Premiere’s multicam feature via Dynamic Link, which keeps them all in sync. But movements with a lot of rotation (like spinning around) still won’t map super well because your character doesn’t have a back layer. One other potential solution is to build a multi-layer torso, with separate pieces for the left and right sides, so it won’t collapse so dramatically when the actor turns to the side.

Make sure to stay within the camera’s range. Your head, hands and feet should be visible to the Kinect at all times. Use “cam” mode to set up your shot. Be gentle if you attempt to tilt your Kinect manually — the gears in the base are quite fragile.

Try to minimize Z-axis movement. That is, don’t walk toward the camera if you can avoid it. Use a presentation remote if possible so you don’t end every recording with a capture of you walking toward your computer.

Be aware of the limitations of the hardware and its drivers. The inability to estimate occluded joints is the biggest one. This is just a technical way of saying that the Kinect can’t track points it can’t see. Microsoft’s drivers will hazard a guess as to where they are, but the OpenNI drivers KinectToPin uses do not yet support this feature. So if you put your hands behind your back, for example, it will lose track of them. It also means that it can be really hard to get good results if your actor is seated, because the Kinect can’t see their hips.

Another limitation is that KinectToPin records position but not rotation, so it’s difficult to register motions like nodding your head, or laughter that doesn’t involve your shoulders. Hand and foot rotation isn’t supported yet either; it’s currently approximated by estimation expressions on the After Effects side of things. There are some other tools in development that may soon make it possible to record hand rotation, so stay tuned.
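The basic idea behind those estimation expressions is to derive an angle from two of the position points you *do* have, for example pointing a hand layer along the forearm. This is a minimal sketch of that idea, not KinectToPin’s actual expression code, and the elbow/wrist point names are just hypothetical:

```javascript
// Sketch of rotation-from-position estimation, in the spirit of the
// After Effects expressions mentioned above (not the actual shipped code).
// Given two tracked points as [x, y] arrays, return the angle of the
// line between them in degrees. In After Effects, y increases downward.
function estimateRotation(elbow, wrist) {
  var dx = wrist[0] - elbow[0];
  var dy = wrist[1] - elbow[1];
  return Math.atan2(dy, dx) * 180 / Math.PI;
}

estimateRotation([0, 0], [10, 0]); // 0: forearm pointing screen-right
estimateRotation([0, 0], [0, 10]); // 90: forearm pointing straight down
```

Applied to a hand layer’s rotation property, this keeps the hand roughly aligned with the arm even though the Kinect never reports a wrist angle, which is why seated or subtle rotations (like a nod) are the cases it can’t help with.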