Archive for October, 2012

Squishy Recognition for Performance and Sensing

Wednesday, October 17th, 2012

The project I would like to pitch for our midterm builds on my previous design challenge for the Sean Curran Dance Company. I want to suggest explorations of the Disney Research Touche system for applications beyond HCI gesture detection. I wish to examine this technology in areas of human and animal performance and in conjunction with feedback systems from other technologies like computer vision or actuation. The proposal consists of three parts:

  • Building our own Touche system with Arduinos
  • Testing Touche directly with alternative applications
  • Experimenting with Touche feedback systems
The core activity of the class will be using the system to experiment and conduct many small performances.

Build a system

First, we would build a couple of systems following the Instructable on the Touche system: http://www.instructables.com/id/Singing-plant-Make-your-plant-sing-with-Arduino-/

Then we would thrash the system to determine its responsiveness, robustness, and noisiness. We would probably reimplement a lot of their gestural examples to see how it actually functions minus all the hype.
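As a rough sketch of what that characterization could look like, here is a minimal host-side script. It assumes the Arduino streams one capacitance sweep per line over serial as comma-separated integers (the exact output format depends on how the Instructable's sketch is configured), and the port name is a placeholder.

```python
# Minimal host-side sketch for characterizing the Touche sweep data.
# Assumes the Arduino streams one capacitance sweep per line over serial as
# comma-separated integers; the port name below is a placeholder.
import statistics
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # placeholder; use whatever port the Arduino shows up on
BAUD = 115200

def read_sweep(conn):
    """Read one frequency sweep and return it as a list of ints."""
    line = conn.readline().decode("ascii", errors="ignore").strip()
    try:
        return [int(v) for v in line.split(",") if v]
    except ValueError:
        return []

conn = serial.Serial(PORT, BAUD, timeout=1)
sweeps = []
while len(sweeps) < 200:            # collect ~200 sweeps for a noise estimate
    sweep = read_sweep(conn)
    if sweep:
        sweeps.append(sweep)
conn.close()

# Per-frequency-bin standard deviation across sweeps: a rough measure of how
# repeatable a "pose profile" actually is before we trust any recognition.
bins = list(zip(*sweeps))
noise = [round(statistics.pstdev(b), 1) for b in bins]
print("per-bin noise:", noise)
```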

Alternate Applications

Once we have a better, tacit understanding of how the device can work, we can try experimenting! Here are some suggestions I have thought of.

Feedback

It will be interesting to incorporate feedback into the system. This can be done directly, as with the proposed puppetry idea, where actuators would manipulate a plant to make the Touche sensor recognize a particular gesture. It can also be done indirectly, where a performative system (like a human or animal) receives the feedback from the sensor (as in sonification) and alters itself accordingly.

 

Two interesting technologies to tie in would be actuation and computer vision. The CV and Touche systems could readily augment each other, since they collect complementary data.

Constellation

Thursday, October 11th, 2012

This performance explores the interplay of weight and lightness to reimagine the construction of heavenly bodies as products of collaborative movement on earth. As dancers perform a set piece involving their interactions with each other on stage, a digital intervention captures traces of their position and saves them above the stage as astral objects with subtle movements of their own.

Stars are composed of the same material components as our bodies: carbon, oxygen, and metallic elements. The idea that mysterious elements of outer space arise from dancers’ movement on earth is something the audience can ponder while watching the performance unfold.

Modern dance embraces a dancer’s contact with the floor, liberated from ballet’s formal restrictions of ascension into space. Thus, contact with the earth that generates ascending digital forms is made more salient through a juxtaposition of process and product.

Technical Implementation
Dancers are outfitted in form-fitting costumes featuring spots of color at five points on their body: the feet/ankles, hands, and pelvis. Each dancer sports a different color.
Using computer vision, a camera tracks the movement of these color groups as the dancers move through space. When a dancer makes a swift upward movement, the acceleration of these points crosses a computational threshold and triggers the generation of digital forms: a projection mapped to the stage appears to throw the five points into the sky from the dancer's body.
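A minimal sketch of the trigger logic follows, assuming one costume color is tracked per frame. The HSV range and threshold are placeholders to be tuned for the real costumes and stage lighting, and vertical speed stands in here for the acceleration check.

```python
# Minimal sketch of the trigger: track one costume color per frame and fire
# when its upward speed crosses a threshold. The HSV range and threshold are
# placeholders to be tuned for the actual costumes and stage lighting.
import cv2
import numpy as np

LOWER = np.array([100, 120, 80])     # placeholder HSV range for one costume color
UPPER = np.array([130, 255, 255])
SPEED_THRESHOLD = 25.0               # pixels per frame of upward motion

def marker_centroid(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                  # marker not visible this frame
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    c = marker_centroid(frame)
    if c and prev:
        upward_speed = prev[1] - c[1]   # image y decreases as the point rises
        if upward_speed > SPEED_THRESHOLD:
            print("swift upward movement: spawn a constellation point at", c)
    prev = c
cap.release()
```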

The throwing action generates a digital form with physical properties, allowing it to move gently about the space as if it were a constellation in the night sky. Existing constellations can fade as new ones are generated from movements below.
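A small sketch of how such a form could behave, assuming each constellation point is modeled as a particle with gentle random drift and slowly fading brightness (all constants are illustrative):

```python
# Sketch of one constellation point as a particle: gentle random drift plus a
# slow fade so older stars dim as new ones are thrown up from the stage.
import random
from dataclasses import dataclass

@dataclass
class Star:
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0
    brightness: float = 1.0

    def step(self, dt=1 / 30):
        # tiny random nudges keep the drift gentle and non-repeating
        self.vx += random.uniform(-0.5, 0.5) * dt
        self.vy += random.uniform(-0.5, 0.5) * dt
        self.x += self.vx
        self.y += self.vy
        self.brightness = max(0.0, self.brightness - 0.02 * dt)

    @property
    def faded(self):
        return self.brightness <= 0.0

stars = [Star(100, 50), Star(240, 80)]   # spawned by the throw trigger above
for _ in range(300):                     # one step per projector frame
    for s in stars:
        s.step()
    stars = [s for s in stars if not s.faded]
```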

This framework is extensible. Sound can play when constellations are generated, becoming gradually less intense as they fade. The dancers effectively generate the set for their performance through their set phrases of movement. And by exploiting the inaccuracies of computer vision tracking, the resulting night sky appears different in every performance, no matter how consistently the movement phrases are repeated.

Joint Relationships

Wednesday, October 10th, 2012

Inspirations:
On our call, Elizabeth Giron emphasized the importance of problem solving in the choreography process.  She referred to it as a “verbal problem turned into a movement problem.”
Two components of “Force of Circumstance” inspired this proposal:

  • making movement accumulate (as Elizabeth demonstrated with her S phrase).
    • The accumulation aspect reminded me of a looper, a device usually used for music and sound design. Loopers have been adapted to video for use in dance performances (Movement Looper at MIT or Dance Loops at Utah Valley University)
  • spatial counterpoint
    • Sean Curran’s emphasis on clean lines, body shape and linearity  reminded me of an animation made for Issey Miyake’s APOC collection in 2007 (http://www.youtube.com/watch?v=x4_mK9CebB4). The animation is a loop of 3D tracking data from a walking model.  Her joints are represented by white dots on a black background, with lines occasionally joining the dots in a variety of patterns, some resembling shapes of the body and Issey’s clothing, some more abstract.

This digital intervention would combine looping with minimalist skeleton tracking.

Setup:
Kinect and Laptop with skeleton tracking application that can map at least 13 points/joints
Projector
Wireless device (worn by dancer to start and stop recording a loop)

Process:
The dancers' movements are tracked as dots using the tracking application.

The dancers can start and stop recording a loop with a wireless device. Using the laptop, lines can be drawn connecting dots within one dancer's "skeleton," or connecting the same joint on multiple dancers.
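A rough sketch of the looping and line-drawing logic follows. It assumes the skeleton tracker delivers 13 (x, y) joint positions per dancer per frame; the Kinect middleware itself is not shown, and the joint pairings are illustrative.

```python
# Sketch of the looping and line-drawing logic. A frame here is a dict of
# {dancer_id: [(x, y), ...]} with 13 joints per dancer, as delivered by some
# skeleton tracker; the joint indices and pairings are purely illustrative.

class Looper:
    """Records skeleton frames while armed, then plays them back as a loop."""
    def __init__(self):
        self.recording = False
        self.frames = []
        self.playhead = 0

    def toggle(self):                 # wired to the dancer's wireless device
        self.recording = not self.recording

    def update(self, live_frame):
        if self.recording:
            self.frames.append(live_frame)
            return live_frame
        if not self.frames:
            return live_frame
        frame = self.frames[self.playhead % len(self.frames)]
        self.playhead += 1
        return frame

def lines_within_dancer(frame, pairs):
    """Connect chosen joint pairs inside each dancer's own skeleton."""
    segments = []
    for joints in frame.values():
        segments += [(joints[a], joints[b]) for a, b in pairs]
    return segments

def lines_across_dancers(frame, joint_index):
    """Chain the same joint (e.g. right hand) from dancer to dancer."""
    points = [joints[joint_index] for joints in frame.values()]
    return list(zip(points, points[1:]))
```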

   

 

Since Sean is "a hawk for detail" and gives much consideration to line and shape, I wanted to give him and his dancers a platform to highlight his choreography. By turning the dancers' bodies into points and lines that can be reshaped and manipulated, the technology provides thousands of relationships between parts of one body and parts of many bodies. It's a new kind of exploration of body shape and movement.

Full Stage Multiplayer Theremin

Wednesday, October 10th, 2012

1. Set up a Processing application that maps sound pitch, volume, pan, and timing to motion detection (frame-to-frame deltas from a video camera will work for this).

2. Point the camera at the performance.

3. Start the Processing application.

4. Offer the resulting real-time audio as a new way to experience the show’s fast and slow bursts, follow shifts of energy locations on-stage, and types of movements by dancers.

Iteration would be required to achieve the types of tones and timings desired by the team. The present pre-alpha version of the software is for demonstration purposes only; at this stage it mostly shows that tone, pitch, and amplitude can be made a function of the total motion detected (frame differences) within different regions of the camera's view.
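A minimal sketch of the core mapping follows, assuming a single webcam and an external synthesizer (not shown) that would consume the printed parameters; the scaling constants are placeholders.

```python
# Sketch of the core mapping only: frame differences in the left and right
# halves of the camera image drive volume, pan, and pitch. A synth (not shown)
# would consume these values; the scaling constants are placeholders.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    prev_gray = gray

    h, w = diff.shape
    left = float(diff[:, : w // 2].sum())
    right = float(diff[:, w // 2:].sum())
    total = left + right + 1e-6

    volume = min(1.0, total / (255.0 * h * w * 0.05))   # more motion -> louder
    pan = (right - left) / total                        # -1 stage left .. +1 stage right
    pitch_hz = 220.0 + 660.0 * volume                   # busier stage -> higher tone

    print(f"vol={volume:.2f} pan={pan:+.2f} pitch={pitch_hz:.0f} Hz")
cap.release()
```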

Experimentation with how to “play” any given motion-to-audio mapping could promote different types of exploratory movements. In addition to providing an optional audio dimension to the movements, conceivably with enough improvement this design could provide a way for visitors with severe vision disabilities to enjoy the pacing and stage action of the performance – roughly similar in principle to the aquarium research across the hall from DWIG.


Minimally-invasive Semantic Registration for Dance

Tuesday, October 9th, 2012

For my design concerning our visit with the Sean Curran Dance Company, I propose a simple system for identifying and responding to the individual poses of the dancers. As described by Elizabeth Giron, their company focuses on experimental grammars of movement but within a highly formalistic setting. There is minimal stage design or additional props, and the focus always seems to be on the synthesis of the music and the ritualized actions of the participants. I sought to design a system for recognizing full body gestures without interfering with the dancers’ movements.

Computer Vision

The first concept to spring to mind was to use a computer vision system. In a highly controlled environment, like the standard-sized theaters in which they typically perform, several types of computer vision systems could be calibrated to perform quite well. A generic 2D system could segment the background and foreground and try to infer dance poses by matching the dancers' profiles to pre-determined models. This could function in a somewhat responsive way, but the granularity of its detections would be poor. More sophisticated setups could synthesize the input from multiple camera arrays to capture three-dimensional data, but this also significantly increases the cost of the setup, the complexity of the processing, and its sensitivity to the original calibration. Cheap devices like the Kinect could be used, which also help automate skeleton finding and pose estimation for humans. The sensing range of the Kinect, however, is quite limited, and it is designed to estimate poses for only 1-2 humans at a time.

In all of the mentioned computer vision concepts, you also run into lots of problems when one dancer occludes another from the camera's view, or when dancers intertwine or connect bodies. Moving props will also interfere with the vision. Another problem with the computer vision approach is scalability. Most systems that work well with 1-2 people (like the Kinect) will not transfer this ability to larger crowds. If the spatial dimensions of the performance area change, this also requires recalibration or recoding of the processing.
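For illustration, here is a minimal sketch of that generic 2D approach (background subtraction, then matching dancer silhouettes against pre-recorded pose profiles). The template loading and the area threshold are placeholders, and this kind of shape matching is exactly the coarse granularity complained about above.

```python
# Sketch of the generic 2D approach: background subtraction, then matching
# each dancer silhouette against pre-recorded pose profiles. Template loading
# is a placeholder; cv2.matchShapes gives only coarse pose matching.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
templates = {}   # pose name -> reference contour, e.g. captured beforehand

def classify_pose(contour):
    best, best_score = None, float("inf")
    for name, template in templates.items():
        score = cv2.matchShapes(contour, template, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best, best_score = name, score
    return best

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 5000:          # ignore small blobs / noise
            print("dancer profile matches pose:", classify_pose(c))
cap.release()
```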

 

 

 

from “Capturing Dance – Exploring movement with computer vision”

Haptic Gestural Recognition

We could also outfit our dancers in specially designed clothing that detects the kinesthetic movements of the wearer. Many ideas, like power-glove-style concepts, have been implemented in the past. However, this method ties the performative device to the user's particular outfit, and thus scales poorly and requires re-implementation for different clothing. The coverage of the sensors also determines the effectiveness of the device, so there is a trade-off between expense, sensor density, full-body coverage, and freedom of movement and dress.

Swept Frequency Capacitance

Disney Research recently released an impressive demo describing a relatively new method for identifying poses. Whereas most systems (like computer vision) first attempt to track the position of individual segments of a target object (like a body or hand) and use this tracking data to determine the current pose, Disney's new Touché system determines gestures and poses without regard to spatial positioning. Instead, it sends an array of small currents through the human body at several different frequencies. The different frequencies penetrate the body in different ways depending on the pose the body is in. Thus you can build a profile for each individual pose, and when that specific profile is observed you know the body is assuming that particular pose. The best part about this approach is that the only interface between the human and the machine is two simple electrodes taped to parts of his or her body. The small microprocessor needs to be carried by the performer, but its placement is not fixed to a specific spot on the body. The data can also be sent wirelessly from this device to the master computer.

Sato, M., Poupyrev, I., & Harrison, C. (2012). Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects. In Proceedings of CHI 2012.
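A small sketch of the profile-matching idea, assuming each sweep arrives as a vector of readings (one per frequency); the example profiles and the rejection distance are purely illustrative.

```python
# Sketch of the profile-matching idea: each pose gets a stored frequency-sweep
# profile, and a live sweep is labeled with the nearest stored profile.
# The example profiles and distance threshold are illustrative only.
import numpy as np

pose_profiles = {                      # pose name -> mean sweep (one value per frequency)
    "arms_raised": np.array([512, 530, 580, 610, 590, 540]),
    "crouch":      np.array([498, 505, 520, 700, 660, 610]),
}
REJECT_DISTANCE = 80.0                 # beyond this, report an unknown pose

def classify(sweep):
    sweep = np.asarray(sweep, dtype=float)
    best, best_dist = None, float("inf")
    for name, profile in pose_profiles.items():
        dist = np.linalg.norm(sweep - profile)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist < REJECT_DISTANCE else "unknown"

print(classify([510, 528, 575, 615, 588, 542]))   # -> "arms_raised"
```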

 

The main problem with this approach was that, due to its novelty, few people knew how to implement such a device. Luckily, a clever hacker posted a series of Instructables illustrating how to build the Touche system with an Arduino and a few additional components! http://www.instructables.com/id/Touche-for-Arduino-Advanced-touch-sensing/

Thus I propose that we build some wireless Touche systems of our own, connect them to dancers, and begin to play. Interesting points to consider will be:

  • For full body gesture detection, where are the optimal locations for attaching the electrodes? Wrist and opposing ankle?
  • How sensitive is the device to these gestures, and what fine granularity of pose and movement can be achieved?
  • In what intelligent, expressive ways can we attach the two other elements featured in the dance, light and sound, to this device?
  • What happens when two performers contact each other? Presumably this would scramble the gesture recognition, but could also lead to quite interesting results.
————————————

Also, as a bonus, the final application in the video is where you can see a glimpse into the sad, overworked lives of the creators (embedded video below, cued up to the correct time):

DM Carnival

Tuesday, October 9th, 2012

The ecosystem I am studying is the DM Program at Georgia Tech.

The system is characterized by asymmetry of interest between different types of actors. The following proposals are performative interventions that aim to amplify communication between the actor types and to foster a better collaborative atmosphere:

 

1. DM Message Cleaner

A modified robotic cleaning device not only constantly cleans offices, classrooms, and hallways in the DM program, but also delivers messages via a text-to-speech generator; actors in the DM program upload these messages anonymously through an online portal.

 

2. DM Symposium

The DM Symposium is a collaborative project of everyone in the DM program. The goal is to develop, within a year, a transdisciplinary event that draws on the core strengths of all actors in one big event, lasting three days and open to the public. The overarching theme is the merging of theory and practice.

 

3. DM Carnival

The DM Carnival is a yearly event in which all actors in the DM program switch their roles for two weeks. Roles are assigned at random by a computer. The actors have to keep an online diary of their experience for the whole two weeks (video, text, audio, etc.), which ensures that nothing is edited afterwards.


After the DM Carnival is over, the data is presented in a permanent installation at the entrance of the 3rd-floor office area to remind everyone of the different perspectives inscribed in the system. The goal of the annual tradition is to give the actors a sensibility for each other's roles. This is an entirely internal event, which contributes to the inner psychological stability and balance of the system. Additionally, the carnival is a wonderful opportunity for actors to do things the way they think they are supposed to be done.

 

 

 

Supermarket Sweep

Wednesday, October 3rd, 2012

The supermarket brings together a vast variety of products and life, prepared for human consumption according to what has become a complex system of codes and conventions. These conventions are rarely considered by the consumer unless the delivery method is slightly changed. This is heightened when trying to purchase products in different countries. For example, in Spain, fruit is weighed and measured by the consumer, who then obtains the price tag from a ticket machine. This makes someone from a country where the convention is different experience the purchase in a whole new light.

Supermarket Sweep attempts to take the environment of the supermarket and push this concept one step further. Key elements that can be changed are the products, the staff and the customers themselves. Products can be changed by turning dead produce into live produce: in the eggs section, there will be a thousand live chickens, all running around inside a fridge display cabinet. In the vegetable section the veg will still be subterranean or live on trees, potatoes in the ground and grapes still on the vine. People would have to pick what they want as if they were farming it.

The key environmental components to be explored in this study are the customers and the staff, who are substituted by actors, turning the supermarket into both a playground and a theatre. Staff declare undying love for each other on the public address system and ask customers to find colleagues for them to propose to. Customers have fights, arguments and general drama in the aisles. People perform magic tricks in different food sections… like turning the eggs into chickens. Trolley races are announced over the public address system, coming down certain aisles and around certain corners. Juggling acts with tins of food.

Performers will then gather at the exit, holding out contribution boxes and thanking customers for attending the show, all in an attempt to change the perception of the environment from that of a supermarket to anything but a supermarket.

Dark Colony

Wednesday, October 3rd, 2012

For my ecosystem I chose (big surprise) an ant hill. The target I had for my performances was to create interactions that manipulated the creatures’ environment and individual roles on a daily basis. The inspiration came partially from the film “Dark City,” where a group of mysterious others experiments on a city of humans by reshaping their lives, memories, and environment while the humans sleep.

 

http://www.youtube.com/watch?v=QJh359H57UA&feature=player_detailpage#t=2249s

The other half of inspiration came from a part in Niko Tinbergen’s book, “Curious Naturalists,” where he re-arranges the local environment of some insects to deceive their homing capabilities.

Therefore, I wanted human, digital, and ant interactions that were split between day and night. During the day, at the height of ant interaction, the digital “other” should primarily observe and sense. Then, when the ants return to their colony at night, the digital and human components re-arrange the outside world based on their earlier observations.

 

I came up with 3 performance ideas based on this concept.

  • 1) The ants’ trails around the entrance are recorded and tracked during the day. This input generates a new route for the human on her daily commute.
  • 2) The ants’ trails around the entrance are recorded and tracked during the day. The observing/tracking digital device then squirts a viscous liquid, which hardens into ant-height cylinders, over all the trails. The movements of the ants during the day are recreated as walls during the night, forcing the ants to constantly re-think new, optimal paths. This could be accomplished with a peristaltic pump: http://vimeo.com/13532728#at=0
  • 2 alt) Instead of squirting out walls, the ants are surrounded by a mesh of actuated dowels forming a grid. The dowels are raised or lowered depending on the day’s interactions. The more movement in an area, the higher the dowel. This also forms the walls mentioned before, but the daily routes do not accumulate. (A minimal sketch of this day-to-night mapping follows the list.)
  • 3) Tiny cheap robots (linked bristlebots with sensors) are scattered around the ants’ nest. They record proximity in 3 directions. High proximity is mapped semantically to high levels of ant interaction. At night, bots with low interaction re-arrange themselves, while high-interaction ones freeze.
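For ideas 2 and 2-alt, here is a minimal sketch of the day-to-night mapping, assuming the day-time tracker reports normalized ant positions; how those positions are detected and how the dowels or walls are actuated is left open.

```python
# Sketch for ideas 2 and 2-alt: accumulate day-time ant positions into a
# coarse grid, then map each cell's activity to a dowel height (or wall) for
# the night. How ant positions are detected is left to the tracking system.
import numpy as np

GRID = (20, 20)                 # cells covering the area around the nest entrance
MAX_HEIGHT_MM = 15.0

activity = np.zeros(GRID)

def record_position(x, y):
    """x, y are normalized [0, 1) coordinates from the day-time tracker."""
    i = int(y * GRID[0])
    j = int(x * GRID[1])
    activity[i, j] += 1

def night_heights():
    """More day-time traffic in a cell -> taller dowel (or wall) there at night."""
    if activity.max() == 0:
        return np.zeros(GRID)
    return MAX_HEIGHT_MM * activity / activity.max()

# e.g. feed in a day's worth of tracked positions, then actuate to night_heights()
```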

 

For feasibility reasons on a short time-scale, I decided to elaborate on the third option.

 

The Robot City

Our robot obstacles are based on the cheap, easily locomoted “bristlebots.”

A power source is connected to a vibrating motor (a motor with an offset weight), and the vibration travels into the bristles, producing movement. Here are the additions needed to create interactive, shifting buildings for the ants in this project. Three bristlebots will be tied together for semi-directable motion. Each bristlebot will be connected to a cheap proximity sensor. The bristlebot’s amount of movement will be regulated by the amount of interaction it receives during the day through close proximity to ants. Areas of high interaction will move less than those of low interaction. This will result in dramatic interruptions to whatever the ants’ optimized routes from the previous day were. The bots will be housed in small building facades to reinforce the “shifting city” concept for outside human observers.
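A small simulation sketch of that control rule (the real bots would run the equivalent logic on their microcontrollers); the interaction counts and duty values are illustrative.

```python
# Simulation sketch of the bots' control rule: proximity events accumulate
# during the day, and at night each bot's motor duty is inversely related to
# how much ant interaction it saw. Real bots would run this on-board.
class BristleBot:
    def __init__(self, name):
        self.name = name
        self.day_interactions = 0     # close-proximity ant detections

    def sense_ant_nearby(self):
        self.day_interactions += 1

    def night_duty(self, max_interactions=50):
        """High day-time interaction -> the bot freezes; low -> it wanders."""
        busy = min(1.0, self.day_interactions / max_interactions)
        return round(1.0 - busy, 2)   # fraction of time the vibration motor runs

bots = [BristleBot("a"), BristleBot("b")]
for _ in range(40):
    bots[0].sense_ant_nearby()        # bot "a" sat on a busy trail all day
print([(b.name, b.night_duty()) for b in bots])   # -> [('a', 0.2), ('b', 1.0)]
```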

Alternate

The bots could optionally be made by attaching the vibrator to a pinecone or other natural element in the ants’ world.

 

Library Interventions

Wednesday, October 3rd, 2012

Ecological System: Georgia Tech Library, “performance zone” (tables on the first floor near where there used to be a coffee shop, across the glass)

Actors/Entities:
• Students that have lots of work to do (majority here are male, average age ~21)
• Laptops, turned on, most presumably with internet connection
• Security camera pointed at the big screen+speakers presentation system

Nonentities:
• 13 tables
• Outlets all along the wall
• Outlets hanging from the ceiling in 4 places
• About 3 dozen chairs
• Vending machine drinks, snacks
• Pens, pencils, paper
• Presentation system with large screen and speakers
• 5 overhead hanging speakers

Notable Conditions:
• Consistently chilly air
• Consistently bright fluorescent tube lighting
• A looming sense of focus and/or despair

Performance Interventions:
1. The aim of this intervention is to affect relations between entities by getting strangers to sit at the same table. A simple Arduino device is attached to the underside of the centermost table, with switches wired to the edges so that nearby tables, when pulled against it, hold the switches in. When any switch is depressed, a signal notifies the performer to return to this area and put the tables back into formation.
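A minimal host-side sketch of the notification side, assuming the table-mounted Arduino forwards a single byte over a serial radio link whenever a switch closes; the port name and message byte are assumptions, not part of the actual setup.

```python
# Host-side sketch, assuming the table-mounted Arduino forwards a single byte
# over a serial radio link whenever one of the edge switches is pressed. The
# port name and message byte are assumptions, not part of the original setup.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"        # placeholder for the radio receiver's serial port
SWITCH_PRESSED = b"1"

radio = serial.Serial(PORT, 9600, timeout=1)
while True:
    data = radio.read(1)
    if data == SWITCH_PRESSED:
        print(time.strftime("%H:%M:%S"), "tables moved - go restore the formation")
```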

2. The aim of this intervention is to test the ecological system’s strength in asserting its identity as an entity with its own PRA. The performer’s task in this case is to see whether he or she will be rejected by the space, rather than accepted by the people in it. This is done by doing something that the people would not object to if it were not being done in this ecological system. The performer brings a laptop with an obviously 2-player game, classic Street Fighter 2, plus 2 USB gamepads. With the volume turned off (blaring audio would be too obvious a violation), the performer plays the game in single-player mode until A New Challenger Approaches. They play a round or a few, but when that person leaves to work, the performer continues playing, with controller 2 laid out again as bait. If/when someone asks the performer to leave, the space has asserted its identity as separate from that of its individual entities.

3. The aim of this intervention is to extend the ecological system by scattering its entities into alternative contexts. As setup, the performer creates a PHP file that recommends another specific study location on campus, selected at random, and directs the student to go study there. Ideally, some navigation information or a map would be provided as well. Below that would be links to articles from LifeHacker and Lawyerist, which both suggest that changing study/work location yields improved results. A QR code directing to this URL can then be printed and taped to the corner of each table in the room. If people get curious and check the QR code, they may be persuaded to try studying at one of the other recommended locations. The performer’s role in this case is to hang out working at the table, periodically taking pictures of the QR code and then leaving for a bit, later returning and repeating, to entice people to try it. If this proves effective, it may help those students discover a new favorite study location, and it also makes this otherwise popular section less crowded, creating a partial vacuum that may inspire more students to wander over and try it (first studying there, then being coaxed into trying some other location as well).
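The plan above calls for a PHP file; to keep these notes in one language, here is an equivalent minimal sketch in Python, with a placeholder list of locations and placeholder article links.

```python
# The plan calls for a PHP page; this is an equivalent minimal sketch in
# Python. It serves a random study-location suggestion plus links to the two
# articles. The location list and article URLs are placeholders.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

LOCATIONS = ["Clough Commons", "Student Center", "Architecture Library", "Skiles courtyard"]

class Recommendation(BaseHTTPRequestHandler):
    def do_GET(self):
        spot = random.choice(LOCATIONS)
        body = (
            f"<h1>Try studying at: {spot}</h1>"
            "<p>Changing where you work can help:</p>"
            "<ul><li><a href='#'>LifeHacker article</a></li>"
            "<li><a href='#'>Lawyerist article</a></li></ul>"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), Recommendation).serve_forever()   # the QR code points here
```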
