So the tutorial series for the first version of KinectToPin was nearly an hour long. And I’ll be honest: while it was exciting that it was even possible, getting motion capture data into After Effects in a usable manner was probably more trouble than it was worth.

But a lot has changed since then:

SOFTWARE SETUP

  • Stability: OpenNI, NITE, SensorKinect, SimpleOpenNI, et al. are all a lot more stable, and the tracking data they generate is better.
  • Easier installation via SimpleOpenNI bundles
  • New issue on Windows: on fresh installs, Microsoft automatically overwrites the OpenNI drivers with its own. You'll need to go into your Control Panel's Device Manager and manually select the right drivers, or nothing will work right. Fortunately, you should only have to do this once.
  • What’s still missing: We don’t have hand/foot tracking or occluded joint estimation yet on the open source side of things, but you no longer have to spend time recording six versions of the same track and hoping you get at least one where your head stays attached. Now you can record six versions just to try out different performances.

 

KINECTTOPIN

KinectToPin has a *ton* of major new features.

  • New output types: KinectToPin now outputs three types of data: a text file containing puppet pin keyframes, a text file of 3D point control keyframes, and a JSX file that automates a big chunk of the setup process if you’re rigging with point controls.
  • Access to Z data: Yes, that’s right: I said *3D* point controls. You now have access to the Z data… if you can figure out what to do with it. (The Puppet Tool is still 2D.)
  • Automatically build the rig template: Part 3 of the old tutorial? Gone. You no longer need to set up a rigging template and add a ton of expressions by hand; the JSX output does that for you, generating a comp with the point control data and control nulls ready to go (there's a rough sketch of the idea after this list).
  • Built-in smoothing: if you load your track via the JSX, smoothing expressions will be applied with default values.
  • Error correction: Tracking errors just repeat the previous frame’s position value instead of making your leg fly off the screen.
  • Multiprocessing support
  • Remote control: start and stop track recording remotely with a presentation controller.
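
For the curious, here's the rough sketch mentioned above of what a rig-setup JSX does. This is only an illustration of the idea, not KinectToPin's actual output; the comp settings, the "head_ctrl" name, the sample data, and the smooth(0.2, 5) values are all placeholders I've made up:

    // Sketch only: build a comp, add a control null for one joint, store the
    // raw 2D track in a Point Control effect, and smooth it with an expression.
    (function () {
        var comp = app.project.items.addComp("mocap rig", 1920, 1080, 1.0, 10, 29.97);

        // Hypothetical per-frame samples for one joint: [time in seconds, [x, y]].
        var samples = [[0, [960, 540]], [1 / 29.97, [962, 538]], [2 / 29.97, [965, 537]]];

        var ctrl = comp.layers.addNull();
        ctrl.name = "head_ctrl";

        // A Point Control effect holds the raw keyframed track.
        var fx = ctrl.property("ADBE Effect Parade").addProperty("ADBE Point Control");
        var point = fx.property(1); // the effect's "Point" property

        var times = [], values = [];
        for (var i = 0; i < samples.length; i++) {
            times.push(samples[i][0]);
            values.push(samples[i][1]);
        }
        point.setValuesAtTimes(times, values);

        // The null follows the smoothed track; smooth(width, samples) is After
        // Effects' built-in temporal smoothing expression method.
        ctrl.position.expression = 'effect("Point Control")("Point").smooth(0.2, 5)';
    })();

None of those calls are specific to KinectToPin; they're just the standard ExtendScript pieces a setup script like this leans on.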

 

INSIDE AFTER EFFECTS

  • Resolution independence: No more cramming everything into 640×480! You can just work full size from the start. (Note: you’ll still need to precomp shape layers and scale the precomp down in the rigging comp if you’re planning to use collapse transforms later on.)
  • Offset nulls: support for arbitrary body shapes, human and otherwise; just drag to reposition.
  • Easier head and hand animation: expressions automate the rotation while still giving you the flexibility to keyframe different positions when required (an example of the idea follows below).
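
The actual expressions in the rig aren't reproduced here, but the general trick looks something like this hedged example: aim a limb layer from one control null toward another, and let any keyframes you add act as an offset on top. The "lShoulder" and "lHand" layer names are placeholders, not the names the rig actually uses.

    // Rotation expression for an arm layer whose anchor point sits at the
    // shoulder: point it from the shoulder null toward the hand null, plus
    // whatever Rotation keyframes you set yourself ("value").
    var a = thisComp.layer("lShoulder").transform.position;
    var b = thisComp.layer("lHand").transform.position;
    value + radiansToDegrees(Math.atan2(b[1] - a[1], b[0] - a[0]))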

 

Full tutorial for the new workflow coming soon!