IAT 320 – Frame It!

[Photos taken in the lab, 2011-02-07]

I’m sitting in my IAT 320 lab right now and it’s presentation day for our first sketch. Our group, as I’ve mentioned previously, is doing something we call Frame It. We designed the concept around the idea that capturing the right moment is only one part of basic photography; framing is the other part. We are often constrained by the fixed aspect ratio of our camera and forced to crop our pictures later. While there are many ways to compose a shot, it is often easier to hold up your hands and frame the picture before actually capturing the moment forever with your camera. Frame It is designed to question how flexible and effective it is to take snapshots using only our body movement.
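Just to illustrate the idea (this isn’t our actual implementation, which was a MaxMSP patch), here’s a little Python sketch of the core operation: treating two tracked hand positions as opposite corners of a frame and cropping the captured image to that rectangle. The file names and coordinates are made-up examples.

```python
# A minimal sketch of the Frame It idea (not our MaxMSP patch):
# two hand positions act as opposite corners of a frame, and we
# crop the captured image to that rectangle.
from PIL import Image

def frame_it(image, hand_a, hand_b):
    """Crop `image` to the rectangle spanned by two hand positions."""
    left, right = sorted((hand_a[0], hand_b[0]))
    top, bottom = sorted((hand_a[1], hand_b[1]))
    return image.crop((left, top, right, bottom))

# Hypothetical example: hands at (120, 80) and (520, 380)
# frame a 400x300 snapshot out of the full capture.
snapshot = frame_it(Image.open("capture.jpg"), (120, 80), (520, 380))
snapshot.save("framed.jpg")
```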

It was an interesting experience to see how we made the different parts of the program work together. I was really impressed with Shane’s skills in MaxMSP; he programmed the colour tracking along with the rectangle for the frame. Ken, on the other hand, was in charge of capturing the video and turning it into an image. My part was mostly the research component: precedents and the language we needed.
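For anyone curious what colour tracking looks like outside of MaxMSP, here’s a rough Python/OpenCV sketch of the same idea. This is my own approximation, not Shane’s patch; the HSV colour bounds are placeholder values for a red marker.

```python
# A rough sketch of colour tracking: threshold a colour range in HSV,
# find the largest matching blob, and draw the frame rectangle around it.
import cv2
import numpy as np

LOWER = np.array([0, 120, 70])    # assumed lower HSV bound (red-ish)
UPPER = np.array([10, 255, 255])  # assumed upper HSV bound

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Bound the largest colour blob and draw the "frame".
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Frame It", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```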

While doing my research, I encountered one very interesting TED Talk. I was seriously amazed by it. It’s a project designed, programmed, and implemented by Pranav Mistry, a PhD student in the Fluid Interfaces Group at MIT’s Media Lab. At such a young age, he has already worked at Microsoft and done so much on integrating the digital information experience with our real-world interactions. After all, it’s his biggest passion! I wonder if it could ever be mine as well?

Anyway, in this TEDIndia talk, he showed a demo of tools that may help bridge physical-world interaction with the world of data. You have to watch it! http://www.ted.com/talks/pranav_mistry_the_thrilling_potential_of_sixthsense_…

Along with Mistry’s project, I also looked at two other projects that ours can benefit from:
– The Camera Mouse (by Margrit Betke, James Gips, and Peter Fleming): The Camera Mouse was designed as an assistive technology device that translates the movements of the user into movements of the mouse pointer on the screen. It used two linked computers: the “vision computer” received and displayed live video of the user, while the “user computer” took the signals from the vision computer, scaled them to the current screen resolution, and substituted them for the cursor coordinates (see the sketch after this list).
– The Manual Input Workstation (by Golan Levin and Zachary Lieberman): Using an overhead projector, visitors’ hand gestures and finger movements are interpreted by a computer vision system. With a unique combination of analog and digital projectors, the workstation creates an unusual dynamic light display along with synthetic graphics and sounds, all produced in response to the forms and movements of the visitors’ actions. In a way, the Manual Input Workstation is an augmented-reality shadow play.
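Here’s the sketch I promised above: a hypothetical take on the Camera Mouse’s scaling step, mapping a tracked position from the vision computer’s video coordinates to the user computer’s screen coordinates. The resolutions are assumed values, not figures from the actual system.

```python
# A hypothetical sketch of the Camera Mouse scaling step: map a tracked
# feature position from video coordinates to screen (cursor) coordinates.
VIDEO_W, VIDEO_H = 640, 480      # assumed vision computer camera frame
SCREEN_W, SCREEN_H = 1920, 1080  # assumed user computer display

def to_screen(track_x, track_y):
    """Scale a tracked (x, y) position to cursor coordinates."""
    return (round(track_x * SCREEN_W / VIDEO_W),
            round(track_y * SCREEN_H / VIDEO_H))

# A feature tracked at the centre of the video lands at screen centre.
print(to_screen(320, 240))  # -> (960, 540)
```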

All three projects use natural body movement as the crucial input, which makes them good precedents for Frame It.

Here are the final presentation slides for the Frame It project.

[Presentation slides: screenshots, 2011-02-23]

And here’s the code as uploaded by Ken: http://www.sfu.ca/~kkk1/iat320/frameIt.maxpat

