IAT 320 – Sketch #1: Research Dump

Over the last few days I have been scouring the net for precedents related to our proposed sketch, and I found some really interesting ones (listed below). It is fascinating what people have done in the past; what's more, some of these projects were done over a decade ago. Seriously, was technology that advanced back then? Even now I am still having trouble playing with MaxMSP and other programming languages, but not these interactive artists. They were clearly great programmers with a big artistic vision, and they pushed through every obstacle they faced to finally put everything together.
Everything below is information copied and pasted from various corners of the web. I am now looking into the gesture language Apple has defined for the iPhone and iPad, and how they manage to use familiar motions to interact with the screen.

The closest one to ours:

The Manual Input Workstation (2004-2006: Golan Levin and Zachary Lieberman) presents a series of audiovisual vignettes which probe the expressive possibilities of hand gestures and finger movements. Interactions take place on a combination of custom interactive software, an analog overhead projector, and a digital computer video projector. The analog and digital projectors are aligned such that their projections overlap, resulting in an unusual quality of hybridized, dynamic light. During use, the visitors’ hand gestures are interpreted by a computer vision system as they pass across the glass top of the overhead projector. In response, the software generates synthetic graphics and sounds that are tightly coupled to the forms and movements of the visitors’ actions. The synthetic responses are co-projected over the organic, analog shadows, resulting in an almost magical form of augmented-reality shadow play.

You should include three forms of research:
• Research on other projects that are related to yours. Make sure the projects relate to your concept and not just the technology.
• Research on theories and philosophies related to your project. The closer the relationship to your core concept, the better. Please reference these properly.
• Research/experiments on your own experience with the system.

The Camera Mouse: Visual Tracking of Body Features to Provide Computer Access for People With Severe Disabilities (http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1021581)
– The system tracks the computer user’s movements with a video camera and translates them into the movements of the mouse pointer on the screen. Body features such as the tip of the user’s nose or finger can be tracked.
– Assistive technology devices have been developed to help people with severe disabilities use their voluntary movements to control computers.
– Chen et al. developed a system that contains an infrared transmitter, mounted onto the user’s eyeglasses, a set of infrared receiving modules that substitute the keys of a computer keyboard, and a tongue-touch panel to activate the infrared beam [6]. Helmets, electrodes, goggles, and mouthsticks are uncomfortable to wear or use.
– Most important, some users, in particular young children, dislike being touched on the face and vehemently object to any devices attached to their heads.
– Corneal reflection systems have the disadvantages that they need careful calibration, require the user to keep his or her head almost completely still, and are not inexpensive.
– Given people’s experiences with currently available assistive technology devices, our goal has been to develop a nonintrusive, comfortable, reliable, and inexpensive communication device that is easily adaptable to serve the special needs of quadriplegic people and is especially suitable for children.
– The CameraMouse system currently involves two computers that are linked together—a “vision computer” and a “user computer.”
– The vision computer receives and displays live video of the user sitting in front of the user computer. The video is taken by a camera mounted above or below the monitor of the user computer.
– The user computer runs a special driver program in the background that takes the signals received from the vision computer, scales them to (x, y) coordinates in the current screen resolution, and then substitutes them for the coordinates of the cursor.
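As I understand it, the core mapping is simple: the tracker reports the feature's position in camera-frame pixels, and the driver rescales it into screen coordinates before substituting it for the cursor. A minimal sketch of that idea (the frame/screen sizes and the function name here are my own assumptions, not the paper's actual code):

```python
# Rough sketch of a CameraMouse-style coordinate mapping (my assumptions,
# not the authors' implementation).
CAM_W, CAM_H = 640, 480          # assumed camera frame resolution
SCREEN_W, SCREEN_H = 1920, 1080  # assumed current screen resolution

def feature_to_cursor(fx, fy):
    """Scale a tracked feature position (camera pixels) to screen coordinates."""
    x = int(fx / CAM_W * SCREEN_W)
    y = int(fy / CAM_H * SCREEN_H)
    # Clamp so the substituted cursor position never leaves the visible screen.
    return (min(max(x, 0), SCREEN_W - 1), min(max(y, 0), SCREEN_H - 1))

print(feature_to_cursor(320, 240))  # feature at the frame centre maps to the screen centre
```

In practice the paper's driver presumably also smooths the signal between frames, but even this bare rescale-and-clamp shows why no special hardware is needed on the user's body.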

– To test other body features, not just facial features, the thumb was selected. Although it was tracked successfully, as shown in sequence E in Fig. 7, it has two main flaws as a tracking point. First, the camera has difficulty focusing on it: as can be seen in sequence E, the thumb takes up such a small portion of the screen that the camera's autofocus mechanism focuses on the objects in the background and not the thumb.

Virtual Body Language http://delivery.acm.org/10.1145/270000/261143/p37-tromp.pdf?key1=261143&k…
– Users learn to identify with their virtual embodiment through interaction with it, provided that they can perceive straightforward, consistent relations between their actions on the virtual embodiment and the results of those actions for the embodiment and the CVE [5].

– Some of the more important issues for virtual embodiment design include:
• Location: in shared spaces users need to know their own position and location in relation to other users and objects.
• Identity: users have to recognise who someone is from the embodiment, and be able to differentiate between agents and other users.
• Activity: embodiments should help in conveying a sense of on-going activity.
• Availability and degree of presence: it is useful to convey information about availability for interaction.
• Gesture and facial expression: gesture and facial expression play an important role in human interaction.
• History of activity: embodiments can support historical awareness of who has been present and what activities they performed.
• Manipulating one's view of other people: being able to control the view of other people's bodies in order to reduce machine load, and to create subjective views [2].
• Representations across multiple media: this has to be considered because embodiment extends not only into the graphical, but also into the audio and textual media domains.
• Autonomous and distributed body parts: people can be in several places in one CVE, or in several CVEs at the same time.

– The central finding from the ITW studies is that the embodiment does not provide enough control. This is perhaps not a surprising finding given the limited nature of the existing embodiment, but by starting from primitive embodiments, many insights have been gained through the usability studies about in what ways the embodiment does not provide enough control, and what kinds of control users would like to have.

Towards a Biological Theory of Emotional Body Language
– Edward Sapir famously remarked that blowing out a candle produces a gesture and a sound that are identical to the gesture and sound made when pronouncing the (German) consonant W.
– Theorists of motor behavior also must confront this issue and ask on what grounds motor behavior can convey meaning.
– EBL = emotional body language

Apple Multitouch Language


US Patent & Trademark Office filings for Apple multitouch –


