Kinect…back on it

Got a Kinect from Best Buy today for $99. Holiday prices I guess…good deal eh? :)

Starting to play with the Kinect again. I did this at work earlier this year when I was still on the East Coast. Back then we were testing out its capabilities to determine whether it would be a good platform for a science museum activity. We got the OpenNI and NITE (from PrimeSense…the company that actually developed the hardware in the Kinect sensor) samples working just to see how well skeleton tracking works.

Fast forward to the present…

So, I’ve just downloaded Cinder and its Kinect Cinderblock (their term for a library) and ran the samples. Pretty neat. It essentially gives you the sensor data from the Kinect and also lets you control some of its hardware, such as the motor, so you can adjust the Kinect’s tilt. Here’s their example of a visual point cloud composed from the Kinect’s depth information.
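To give a feel for what the Cinderblock exposes, here’s a minimal sketch of a Cinder app that opens the sensor, levels the tilt motor, and draws the depth image. The class and method names (Kinect, setTilt, checkNewDepthFrame, getDepthImage) follow the block’s samples as I remember them, so check the block’s headers for the exact API in your version.

```cpp
// Minimal Cinder app sketch using the Kinect Cinderblock (names assumed from its samples).
#include "cinder/app/AppBasic.h"
#include "cinder/gl/gl.h"
#include "cinder/gl/Texture.h"
#include "Kinect.h"

using namespace ci;
using namespace ci::app;

class KinectDepthApp : public AppBasic {
  public:
    void setup();
    void update();
    void draw();

    Kinect      mKinect;
    gl::Texture mDepthTexture;
};

void KinectDepthApp::setup()
{
    mKinect = Kinect( Kinect::Device() ); // open the first Kinect found
    mKinect.setTilt( 0 );                 // level the motorized tilt
}

void KinectDepthApp::update()
{
    // Grab a fresh depth frame whenever one is available
    if( mKinect.checkNewDepthFrame() )
        mDepthTexture = gl::Texture( mKinect.getDepthImage() );
}

void KinectDepthApp::draw()
{
    gl::clear( Color( 0, 0, 0 ) );
    if( mDepthTexture )
        gl::draw( mDepthTexture, getWindowBounds() );
}

CINDER_APP_BASIC( KinectDepthApp, RendererGl )
```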

PrimeSense and, more recently, Microsoft have released SDKs to help you work with the Kinect. These SDKs also come with libraries that help you interpret and make sense of the Kinect’s sensor data: they take the raw color and depth info from the cameras and, if a person is standing in front of the sensor, give you information about their body…such as the location of their head, hands, and joints. Without them, the info from the cameras is essentially just color information and depth information.
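As a rough idea of what that body-level data looks like on the OpenNI/NITE side, here’s a stripped-down sketch that asks the user tracker for a tracked person’s head position. It uses the OpenNI 1.x C++ API; error handling and the new-user/calibration callbacks that NITE needs before tracking actually starts are omitted, so treat this as an outline rather than a working sample (the NiUserTracker sample shows the full flow).

```cpp
// Sketch: reading a tracked user's head joint from OpenNI/NITE (OpenNI 1.x).
#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    context.Init();

    xn::DepthGenerator depth;
    depth.Create( context );

    xn::UserGenerator users;
    users.Create( context );
    users.GetSkeletonCap().SetSkeletonProfile( XN_SKEL_PROFILE_ALL );

    context.StartGeneratingAll();

    while( true ) {
        context.WaitAndUpdateAll();

        XnUserID ids[15];
        XnUInt16 count = 15;
        users.GetUsers( ids, count );

        for( XnUInt16 i = 0; i < count; ++i ) {
            if( !users.GetSkeletonCap().IsTracking( ids[i] ) )
                continue;
            // Joint positions come back in millimeters, in the sensor's coordinate space
            XnSkeletonJointPosition head;
            users.GetSkeletonCap().GetSkeletonJointPosition( ids[i], XN_SKEL_HEAD, head );
            printf( "user %u head: %.0f %.0f %.0f\n",
                    ids[i], head.position.X, head.position.Y, head.position.Z );
        }
    }

    context.Release();
    return 0;
}
```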

Here’s an article comparing the features of Microsoft’s SDK and PrimeSense’s SDK. The one thing I see as a major benefit of Microsoft’s is that no calibration pose is needed. From an interaction designer’s perspective, this is huge because users don’t expect a calibration step. It would be best if the technology just worked!
