Thursday, July 28, 2011
Light Field capture, a new kind of photography
Have been reading about light field capture, which has poked into the news as a couple of products make it to market. A German company called __ has had something available for a little while, and Popular Mechanics just reported on a CA firm called Lytro. (Thanks James!)
http://www.popularmechanics.com/technology/gadgets/reviews/the-lytro-light-field-camera-how-it-works?src=rss
The most commonly marketed application at this point is "focus after capture": take a meta-picture from which any plane of focus can be selected after the fact. Computational photography.
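To make "focus after capture" concrete, here is a minimal sketch of the classic shift-and-add ("synthetic aperture") refocusing idea, written as a Processing function. It assumes the light field has already been decoded into a grid of sub-aperture views (subviews[v][u]); that array, its layout, and the refocus parameter alpha are illustrative assumptions on my part, not any vendor's actual API.

PImage refocus(PImage[][] subviews, float alpha) {
  int V = subviews.length;        // lenslet rows
  int U = subviews[0].length;     // lenslet columns
  int w = subviews[0][0].width;
  int h = subviews[0][0].height;
  float[] r = new float[w * h];
  float[] g = new float[w * h];
  float[] b = new float[w * h];
  for (int v = 0; v < V; v++) {
    for (int u = 0; u < U; u++) {
      PImage view = subviews[v][u];
      view.loadPixels();
      // Shift each view in proportion to its offset from the central
      // lens; alpha picks which depth plane lands in focus.
      int dx = round(alpha * (u - U / 2));
      int dy = round(alpha * (v - V / 2));
      for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
          int sx = constrain(x + dx, 0, w - 1);
          int sy = constrain(y + dy, 0, h - 1);
          color c = view.pixels[sy * w + sx];
          r[y * w + x] += red(c);
          g[y * w + x] += green(c);
          b[y * w + x] += blue(c);
        }
      }
    }
  }
  // Average all the shifted views into one output image.
  PImage out = createImage(w, h, RGB);
  out.loadPixels();
  int n = U * V;
  for (int i = 0; i < w * h; i++) {
    out.pixels[i] = color(r[i] / n, g[i] / n, b[i] / n);
  }
  out.updatePixels();
  return out;
}

Sweep alpha from negative to positive and the plane of focus rakes through the scene - the whole "focus after capture" trick in one averaging loop.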
What I wonder is: can this technology (which is more a cluster of methods than a tried-and-true science at this point) be combined with a time-of-flight (ToF) camera? Such speculation is for another post...
Thursday, March 3, 2011
Music videos are great and all, but the daily bread of experimental technology is the oh-so-clever demo video. This one is from Amir Hirsch, aka Tinker Heavy Industries.
Friday, February 25, 2011
Moullinex - Catalina from Moullinex on Vimeo.
Watch this fantastic music video: it's an example of KINECT + PROCESSING + A BAND.
Post with description of how it was done [here]
Thursday, February 24, 2011
Meeeeeeetup
Just got back from a "Kinect Meetup" which by the end of the meet had turned into "the OpenVOXEL Meetup," and I think some version of that name is going to stick. Big ups to SEAN KEAN for emphasizing the following core principle:
While the Micro$oft Kinect has catalyzed massive interest in depth capture technology, the gathering interest, necessary work, and eventual market encompass not just this one capture device, not just this one breed of capture device, and not just capture devices. Volumetric display needs to be figured out. Media standards for point-cloud, CSV, and volumetric data need to be figured out. What kind of bitrates are ideal for spatial data streams? Gosh darn, there's just so much to roll up the sleeves for!
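For a rough sense of the bitrate question, here is a back-of-envelope calculation as a tiny Processing sketch. The numbers (640x480 depth frames, 11 bits per pixel, 30 fps) are the commonly cited Kinect specs; treat them as assumptions.

void setup() {
  int w = 640, h = 480;    // depth frame resolution
  int bitsPerPixel = 11;   // raw depth precision
  int fps = 30;            // frame rate
  int bitsPerSecond = w * h * bitsPerPixel * fps;  // 101,376,000
  println(bitsPerSecond / 1e6f + " Mbit/s raw");   // ~101 Mbit/s uncompressed
}

That's more than raw 100 Mbit Ethernet can carry, before color is even added - which is exactly why those media standards matter.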
So I confess I bought a Kinect this weekend without any clue what to plug it into, or what platform or what type of code to start teaching myself, or - ha - what sort of application I might develop or reason for doing so! But the good folks I met tonight confirmed that (1) my impulse buy was appropriate, and (2) the spatial interaction revolution is now, and I haven't missed much so far.
On top of that, my "what platform what language" consternation was cut short once Aiden (woot woot!) showed me Daniel Shiffman's OpenKinect Library for Processing. Stop the presses, that's it. It's Processing. Processing will now be what keeps me up at night.
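For the record, the canonical "hello depth" with that library looks roughly like the sketch below. I'm reconstructing it from the early (2011-era) examples, so treat the exact method names as assumptions - they may differ in later releases.

import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);   // attach to the first Kinect found
  kinect.start();
  kinect.enableDepth(true);    // ask the library for the depth stream
}

void draw() {
  // Paint the latest depth frame; pixel brightness encodes distance.
  image(kinect.getDepthImage(), 0, 0);
}

void stop() {
  kinect.quit();               // shut the device down cleanly
  super.stop();
}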
Who is Daniel Shiffman? I've heard of MostPixelsEver, I've almost visited the IAC building, and I know someone who did NYU's ITP, but beyond that all I can say is that this is a great man on whose shoulders I will now crouch and eventually stand.
I have to briefly mention OpenKinect.org, the reading of which was a great starting point; but as a non-coder I got lost very quickly. (Commit, branch, Git, fork, wrapper - huh??) These folks will be interesting to watch from the sidelines: while a handful of companies are already publishing full suites of useful software abstractions for the device, OpenKinect plans to build, from the ground up, a full library of code that meets the needs of a broad swath of geeks. So OpenNI and Microsoft and the Belgian (?) company are in a race against time to monetize and "productize" their proprietary libraries. Good luck, capitalists!!!