While the Micro$oft Kinect has catalyzed massive interest in depth capture technology, the gathering interest, necessary work, and eventual market encompass not just this one capture device, not just this one breed of capture device, and not just capture devices. Volumetric display needs to be figured out. Media standards for point-cloud, CSV, and volumetric data need to be figured out. What kind of bitrates are ideal for spatial data streams? Gosh darn, there's just so much to roll up our sleeves for!
So I confess I bought a Kinect this weekend without any clue what to plug it into, or what platform or what type of code to start teaching myself, or - ha - what sort of application I might develop or reason for doing so! But the good folks I met tonight confirmed that (1) my impulse buy was appropriate, and (2) the spatial interaction revolution is now, and I haven't missed much so far.
On top of that, my "what platform, what language" consternation was cut short once Aiden (woot woot!) showed me Daniel Shiffman's OpenKinect library for Processing. Stop the presses, that's it. It's Processing. Processing will now be what keeps me up at night.
Who is Daniel Shiffman? I've heard of Most Pixels Ever, I've almost visited the IAC building, and I know someone who did NYU's ITP, but beyond that all I can say is this is a great man on whose shoulders I will now crouch and eventually stand.
I have to briefly mention OpenKinect.org, the reading of which was a great starting point; but as a non-coder I got lost very quickly. (Commit? Branch? Git? Fork? Wrapper? Huh??) These folks will be interesting to watch from the sidelines: whilst a handful of companies are already publishing full software suites of useful abstractions for the device, OpenKinect plans to build from the ground up a full library of code that meets the needs of a broad swath of geeks. So OpenNI and Microsoft and the Belgian (?) company are in a race against time to monetize and "productize" their proprietary libraries. Good luck, capitalists!!!