Minimally-invasive Semantic Registration for Dance

October 9th, 2012 By NOTAndrew Quitmeyer

For my design concerning our visit with the Sean Curran Dance Company, I propose a simple system for identifying and responding to the individual poses of the dancers. As described by Elizabeth Giron, their company focuses on experimental grammars of movement but within a highly formalistic setting. There is minimal stage design or additional props, and the focus always seems to be on the synthesis of the music and the ritualized actions of the participants. I sought to design a system for recognizing full body gestures without interfering with the dancers’ movements.

Computer Vision

The first concept to spring to mind was a computer vision system. In a highly controlled environment, like the standard-sized theaters in which the company typically performs, several different types of computer vision systems could be calibrated to perform quite well. A generic 2D system could segment foreground from background and try to infer dance poses by matching the dancers' profiles against pre-determined models. This could function in a somewhat responsive way, but the granularity of its detections would be poor. More sophisticated setups could synthesize the input from multiple camera arrays to capture three-dimensional data, but this significantly increases the cost of the setup, the complexity of the processing, and its sensitivity to the original calibration. Cheap devices like the Kinect could be used, which also help automate skeleton finding and pose estimation for humans. The Kinect's sensing range, however, is quite limited, and it is designed to estimate poses for only one or two humans at a time.

All of the computer vision concepts mentioned also run into problems when one dancer occludes another from the camera's view, or when dancers intertwine or connect bodies. Moving props will likewise interfere with the vision. A further problem with the computer vision approach is scalability: most systems that handle one or two people well (like the Kinect) will not transfer that ability to larger crowds, and if the spatial dimensions of the performance area change, the processing will need to be recalibrated or recoded.
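To make the 2D idea concrete, here is a minimal sketch (not an implementation of any particular product) of background segmentation followed by naive template matching against pre-recorded silhouettes. The pose names, array sizes, and threshold are invented for illustration.

```python
import numpy as np

def segment_dancer(frame, background, threshold=30):
    """Label pixels that differ from a static background plate.

    frame, background: 2-D uint8 grayscale arrays of the same shape.
    Returns a boolean mask that is True where foreground (a body) is.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

def match_pose(mask, templates):
    """Naively score a silhouette against pre-recorded pose templates
    by counting agreeing pixels; return the best-matching pose name."""
    scores = {name: np.sum(mask == t) for name, t in templates.items()}
    return max(scores, key=scores.get)

# Toy example: a 4x4 "stage" with a bright blob where the dancer stands.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200                       # dancer's silhouette
mask = segment_dancer(frame, background)

templates = {
    "center": frame > 0,                    # dancer mid-stage
    "stage-left": np.roll(frame, 2, 1) > 0, # same blob shifted two columns
}
print(match_pose(mask, templates))          # -> "center"
```

The weaknesses discussed above are visible even here: overlap scoring falls apart the moment two silhouettes merge, and the templates are tied to one fixed camera framing.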




from “Capturing Dance – Exploring movement with computer vision”

Haptic Gestural Recognition

We could also outfit our dancers in specially designed clothing that detects the kinesthetic movements of the wearer. Many such ideas, like power-glove-style devices, have been implemented in the past. This method, however, ties the performative device to the user's particular outfit, so it scales poorly and requires re-implementation for different clothing. The coverage of the sensors also determines the effectiveness of the device. Thus you have a trade-off among expense, sensor density, full-body coverage, and freedom of movement and dress.

Swept Frequency Capacitance

Disney Research recently released an impressive demo describing a relatively new method for identifying poses. Whereas most systems (like computer vision) first attempt to track the positions of individual segments of a target object (like a body or hand) and use this tracking data to determine the current pose, Disney's new Touché system determines gestures and poses without regard to spatial positioning. Instead it sends an array of small currents through the human body at several different frequencies. The different frequencies propagate through the body differently depending on its pose, so you can build a profile for each individual pose; when that specific profile is matched, you know the body is assuming that particular pose. The best part about this approach is that the only interface between the human and the machine is a pair of simple electrodes taped to the performer's body. A small microprocessor needs to be carried by the performer, but its placement is not fixed to a specific spot on the body, and the data can be sent wirelessly from this device to the master computer.

Sato, M., Poupyrev, I., & Harrison, C. (2012). Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects. Proceedings of CHI 2012.
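The profile-matching idea described above can be sketched as a simple nearest-neighbor classifier over frequency-response vectors. This is a hypothetical illustration, not the actual Touché pipeline: the pose names, calibration numbers, and distance threshold are all invented.

```python
import math

def distance(profile_a, profile_b):
    """Euclidean distance between two frequency-response vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile_a, profile_b)))

def classify_pose(reading, known_profiles, max_distance=5.0):
    """Return the name of the closest stored pose profile, or None if
    nothing is close enough (e.g. a scrambled or unfamiliar reading)."""
    best = min(known_profiles,
               key=lambda name: distance(reading, known_profiles[name]))
    return best if distance(reading, known_profiles[best]) <= max_distance else None

# Illustrative calibration data: amplitude measured at 8 swept frequencies,
# recorded once per rehearsed pose.
known_profiles = {
    "arms-raised": [10, 14, 22, 31, 28, 19, 12, 8],
    "deep-plie":   [12, 11, 15, 40, 44, 30, 16, 9],
}

reading = [10, 15, 21, 30, 29, 18, 12, 8]      # a noisy "arms-raised" sweep
print(classify_pose(reading, known_profiles))  # -> "arms-raised"
```

The threshold gives the system a way to say "I don't know," which matters on stage: better to hold the last cue than to fire a wrong one.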


The main problem with this approach is that, due to its novelty, few people know how to implement such a device. Luckily, a clever hacker posted a series of Instructables illustrating how to build a Touché-style system with an Arduino and a few additional components!

Thus I propose that we build some wireless Touché systems of our own, connect them to dancers, and begin to play. Interesting points to consider will be:

  • For full body gesture detection, where are the optimal locations for attaching the electrodes? Wrist and opposing ankle?
  • How sensitive is the device to these gestures? What fine granularity of pose and movement can be achieved?
  • What intelligent, expressive ways can we attach the two other elements featured in the dance, light and sound, to this device?
  • What happens when two performers contact each other? Presumably this would scramble the gesture recognition, but could also lead to quite interesting results.
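On the third question, one simple design is a mapping layer that translates recognized poses into light and sound cues. The cue names and poses below are invented for illustration; in practice these messages would go out over a protocol like OSC or DMX to real fixtures.

```python
# Hypothetical pose-to-cue table; in a real rig each entry would be an
# OSC address or DMX channel rather than a human-readable string.
CUES = {
    "arms-raised": {"light": "warm wash up", "sound": "strings swell"},
    "deep-plie":   {"light": "floor spots",  "sound": "low drone"},
}

def trigger(pose):
    """Look up and 'fire' the cues for a detected pose. Unknown or
    unclassified poses (e.g. scrambled readings when two dancers
    touch) deliberately fire nothing."""
    cue = CUES.get(pose)
    if cue is None:
        return []
    return [f"LIGHT: {cue['light']}", f"SOUND: {cue['sound']}"]

print(trigger("deep-plie"))  # fires both cues for the pose
print(trigger(None))         # scrambled reading -> no cues fire
```

Keeping this table separate from the recognizer means the choreography of light and sound can be re-mapped between pieces without touching the sensing code.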

As a bonus, the final application in the video offers a glimpse into the sad, overworked lives of the creators (embedded video below, queued up to the correct time):
