Author Archive

Squishy Recognition for Performance and Sensing

Wednesday, October 17th, 2012

The project I would like to pitch for our midterm builds on my previous design challenge for the Sean Curran Dance Company. I want to suggest explorations of the Disney Research Touché system for applications beyond HCI gesture detection. I wish to examine this technology in areas of human and animal performance, and in conjunction with feedback systems from other technologies like computer vision or actuation. The proposal consists of three parts:

  • Building our own Touche system with Arduinos
  • Testing Touche directly with alternative applications
  • Experimenting with Touche feedback systems
The core activity of the class will be using the system to experiment and conduct many small performances.

Build a system

First, we would build a couple of systems following the Instructable on the Touché-based singing plant: http://www.instructables.com/id/Singing-plant-Make-your-plant-sing-with-Arduino-/
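To get a feel for what the build produces, here is a rough, much-simplified Arduino-style sketch of the underlying idea: sweep a drive frequency, record the sensed response at each step, and stream the resulting profile to a host. The pin numbers, frequency range, and use of tone() are my own simplifications, not the Instructable's exact circuit or firmware.

```cpp
// Hedged sketch: a much-simplified swept-frequency capacitive probe.
// Assumes an analog front end (like the Instructable's LC tank plus
// diode peak detector) sits between DRIVE_PIN and SENSE_PIN; the pin
// numbers and frequency range are placeholders.
const int DRIVE_PIN = 9;     // square-wave excitation into the front end
const int SENSE_PIN = A0;    // peak-detector output
const int STEPS     = 64;    // resolution of the sweep profile

int profile[STEPS];

void setup() {
  Serial.begin(115200);
}

void loop() {
  // Sweep the drive frequency and record the sensed amplitude at each step.
  for (int i = 0; i < STEPS; i++) {
    long freq = 1000L + i * 1000L;        // 1 kHz .. 64 kHz (tone() tops out ~65 kHz)
    tone(DRIVE_PIN, freq);
    delay(3);                              // let the detector settle
    profile[i] = analogRead(SENSE_PIN);    // 0..1023 response at this frequency
  }
  noTone(DRIVE_PIN);

  // Stream the whole profile so a host program can plot or classify it.
  for (int i = 0; i < STEPS; i++) {
    Serial.print(profile[i]);
    Serial.print(i < STEPS - 1 ? ',' : '\n');
  }
}
```

As I understand it, the real front end sweeps much higher frequencies and uses an LC resonator, but the sweep-then-profile structure is the same, which is what we would be testing against.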

Then we would thrash the system to determine its responsiveness, robustness, and noisiness. We would probably reimplement many of their gestural examples to see how it actually functions, minus all the hype.

Alternate Applications

Once we have a better, tacit understanding of how the device can work, we can try experimenting! Here are some suggestions I have thought of.

Feedback

It will be interesting to incorporate feedback into the system. This can be done directly, as with the proposed puppetry idea, where actuators would manipulate a plant to make the Touché sensor recognize a particular gesture. It can also be done indirectly, where a performative system (like a human or animal) receives the feedback from the sensor (as in sonification) and alters itself accordingly.
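As a toy example of the indirect, sonification-style loop, here is a hedged Arduino sketch in which a hypothetical classifier running on the host sends a gesture index over serial and the board plays a different pitch for each class, so the performer immediately hears what the sensor thinks it is seeing. The single-character message format and the pin are assumptions.

```cpp
// Hedged sonification sketch: a hypothetical host-side classifier sends a
// gesture index ('0'..'4') over serial, and the Arduino turns it into an
// audible pitch so the performer hears the sensor's reading immediately.
const int SPEAKER_PIN = 8;                         // piezo or small speaker
const int PITCHES[]   = {262, 330, 392, 523, 659}; // one note per gesture class

void setup() {
  Serial.begin(115200);
}

void loop() {
  if (Serial.available()) {
    char c = Serial.read();
    if (c >= '0' && c <= '4') {
      tone(SPEAKER_PIN, PITCHES[c - '0'], 200);    // 200 ms blip per detection
    }
  }
}
```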

 

Two interesting technologies to tie in would be actuation and computer vision. A CV system and the Touché system could readily augment each other, since they collect complementary data.

Minimally-invasive Semantic Registration for Dance

Tuesday, October 9th, 2012

For my design concerning our visit with the Sean Curran Dance Company, I propose a simple system for identifying and responding to the individual poses of the dancers. As described by Elizabeth Giron, their company focuses on experimental grammars of movement but within a highly formalistic setting. There is minimal stage design or additional props, and the focus always seems to be on the synthesis of the music and the ritualized actions of the participants. I sought to design a system for recognizing full body gestures without interfering with the dancers’ movements.

Computer Vision

The first concept to spring to mind was to use a computer vision system. In a highly controlled environment, like the standard-sized theaters in which they typically perform, several different types of computer vision systems could be calibrated to perform quite well. A generic 2D system could segment the background and foreground and try to infer dance poses by matching the profiles of the dancers to pre-determined models. This could function in a somewhat responsive way, but the granularity of its detections would be poor. More sophisticated setups could synthesize the input from multiple camera arrays to capture three-dimensional data, but this also significantly increases the cost of the setup, the complexity of the processing, and its sensitivity to the original calibration. Cheap devices like the Kinect could be used, which also help automate the process of skeleton finding and pose estimation for humans. The sensing range of the Kinect, however, is quite limited, and it is also designed to estimate poses for only 1-2 humans at a time.

In all of the computer vision concepts mentioned, you also run into lots of problems when one dancer occludes another from the camera's view, or when dancers intertwine or connect bodies. Moving props will also interfere with the vision. Another problem with the computer vision approach is scalability. Most systems that work well with 1-2 people (like the Kinect) will not transfer this ability to larger crowds. If the spatial dimensions of the performance area change, this will also require recalibration or recoding of the processing.

 

 

 

from “Capturing Dance – Exploring movement with computer vision”
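For concreteness, here is a hedged OpenCV C++ sketch of the simplest version of this idea: background subtraction to pull out the dancers, then a crude per-blob descriptor (here just the bounding-box aspect ratio) standing in for matching against pre-determined pose models. The camera index, thresholds, and descriptor are placeholders, not a working pose recognizer.

```cpp
// Hedged sketch of the generic 2D approach: segment moving dancers from a
// static stage with background subtraction, then reduce each silhouette to
// a crude profile that could be compared against stored pose models.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
  cv::VideoCapture cap(0);                       // stage camera (placeholder index)
  if (!cap.isOpened()) return 1;

  auto subtractor = cv::createBackgroundSubtractorMOG2();
  cv::Mat frame, mask;

  while (cap.read(frame)) {
    subtractor->apply(frame, mask);              // foreground = dancers
    cv::threshold(mask, mask, 200, 255, cv::THRESH_BINARY); // drop shadow pixels

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
      if (cv::contourArea(c) < 2000) continue;   // ignore small noise blobs
      cv::Rect box = cv::boundingRect(c);
      double aspect = static_cast<double>(box.height) / box.width;
      // A real system would compare richer descriptors (Hu moments,
      // silhouette templates) against stored pose models here.
      std::cout << "dancer blob aspect ratio: " << aspect << std::endl;
    }
  }
  return 0;
}
```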

Haptic Gestural Recognition

We could also outfit our dancers in specially designed clothing that detects the kinesthetic movements of the wearer. Many ideas, like power-glove-style concepts, have been implemented in the past. This method ties the performative device to the user's particular outfit, however, and thus scales poorly and requires re-implementation for different clothing. The coverage of the sensors also determines the effectiveness of the device. Thus you have a trade-off between expense, sensor density, full-body coverage, and freedom of movement and dress.

Swept Frequency Capacitance

Disney Research recently released an impressive demo describing a relatively new method for identifying poses. Whereas most systems (like the computer vision approaches above) first attempt to track the positions of individual segments of a target object (like a body or hand) and use this tracking data to determine the current pose, Disney's new Touché system determines gestures and poses without regard to spatial positioning. Instead, it sends an array of small currents through the human body at several different frequencies. The different frequencies penetrate the body in different ways depending on the body's pose. Thus you can build a profile for each individual pose, and when a specific profile is matched you know that the body is assuming that particular pose. The best part about this approach is that the only interface between the human and the machine is two simple electrodes taped to parts of his or her body. The small microprocessor needs to be carried by the performer, but it is not fixed to a specific spot on the body. The data can also be sent wirelessly from this device to the master computer.

Sato, M., Poupyrev, I., & Harrison, C. (2012). Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects. In Proceedings of CHI 2012.
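To make the "profile per pose" idea concrete, here is a small, hedged C++ sketch of just the matching step: each stored pose is a vector of sensed amplitudes across the swept frequencies, and a live sweep is labeled by its nearest stored profile. The pose names and numbers are invented for illustration; the actual Touché system uses a more sophisticated machine-learning classifier.

```cpp
// Hedged sketch of pose-profile matching: classify a live frequency-sweep
// profile by nearest neighbor (smallest sum of squared differences) against
// stored reference profiles. All values below are invented placeholders.
#include <array>
#include <cstdio>
#include <limits>
#include <string>
#include <vector>

constexpr int kSteps = 8;                       // frequencies in the sweep
using Profile = std::array<float, kSteps>;

struct Pose { std::string name; Profile reference; };

double distance(const Profile& a, const Profile& b) {
  double d = 0;
  for (int i = 0; i < kSteps; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
  return d;
}

std::string classify(const Profile& live, const std::vector<Pose>& poses) {
  double best = std::numeric_limits<double>::max();
  std::string label = "unknown";
  for (const auto& p : poses) {
    double d = distance(live, p.reference);
    if (d < best) { best = d; label = p.name; }
  }
  return label;
}

int main() {
  std::vector<Pose> poses = {
    {"arms down",  {510, 498, 470, 455, 430, 410, 400, 395}},
    {"arms up",    {520, 515, 500, 492, 480, 470, 465, 460}},
    {"one-handed", {505, 480, 440, 400, 380, 370, 365, 360}},
  };
  Profile live = {512, 500, 468, 452, 433, 412, 402, 396}; // pretend live sweep
  std::printf("detected pose: %s\n", classify(live, poses).c_str());
  return 0;
}
```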

 

The main problem with this approach was that, due to its novelty, few people knew how to implement such a device. Luckily, a clever hacker posted a series of Instructables illustrating how to recreate the Touché system with an Arduino and a few additional components! http://www.instructables.com/id/Touche-for-Arduino-Advanced-touch-sensing/

Thus I propose that we build some wireless Touché systems of our own, connect them to dancers, and begin to play. Interesting points to consider will be:

  • For full body gesture detection, where are the optimal locations for attaching the electrodes? Wrist and opposing ankle?
  • How sensitive is the device to these gestures? What fine granularity of pose and movement can be achieved?
  • In what intelligent, expressive ways can we attach the two other elements featured in the dance, light and sound, to this device?
  • What happens when two performers contact each other? Presumably this would scramble the gesture recognition, but could also lead to quite interesting results.
————————————

Also, as a bonus, the final application in the video gives a glimpse into the sad, overworked lives of the creators (embedded video below, cued up to the correct time):

Dark Colony

Wednesday, October 3rd, 2012

For my ecosystem I chose (big surprise) an ant hill. My target for these performances was to create interactions that manipulated the creatures’ environment and individual roles on a daily basis. The inspiration came partially from the film “Dark City,” in which a group of mysterious others experiments on a city of humans by reshaping their lives, memories, and environment while the humans sleep.

 

http://www.youtube.com/watch?v=QJh359H57UA&feature=player_detailpage#t=2249s

The other half of the inspiration came from a part of Niko Tinbergen’s book “Curious Naturalists,” where he rearranges the local environment of some insects to deceive their homing capabilities.

Therefore, I wanted human, digital, and ant interactions split between day and night. During the day, at the height of ant activity, the digital “other” should primarily observe and sense. Then, when the ants return to their colony at night, the digital and human components rearrange the outside world based on their earlier observations.

 

I came up with 3 performance ideas based on this concept.

  • 1) The ants’ trails around the entrance are recorded and tracked during the day. This input generates a new route for the human on her daily commute.
  • 2) The ants’ trails around the entrance are recorded and tracked during the day. The observing/tracking digital device then squirts a viscous liquid which hardens into ant-height cylinders over all the trails. The movements of the ants during the day are recreated as walls during the night, forcing the ants to constantly rethink new, optimal paths. This could be accomplished with a peristaltic pump: http://vimeo.com/13532728#at=0
  • 2 alt) Instead of squirting out walls, the ants are surrounded by a mesh of actuated dowels forming a grid. The dowels raise or lower depending on the day’s interactions: the more movement in an area, the higher the dowel. This also forms the walls mentioned before, but the daily routes do not accumulate.
  • 3) Tiny, cheap robots (linked bristlebots with sensors) are scattered around the ants’ nest. They record proximity in three directions. High proximity is mapped semantically to high levels of ant interaction. At night, bots with low interaction rearrange themselves, while high-interaction bots freeze.

 

For feasibility reasons on a short timescale, I decided to elaborate on the third option.

 

The Robot City

Our robot obstacles are based on “bristlebots,” which provide cheap locomotion.

A power source is connected to a vibrating motor (a motor with an offset weight), and the vibration travels into the bristles, producing movement. Here are the additions needed to create interactive, shifting buildings for the ants in this project. Three bristlebots will be tied together for semi-directable motion. Each bristlebot will be connected to a cheap proximity sensor. Each bristlebot’s amount of movement will be regulated by the amount of interaction it receives during the day through close proximity to ants. Areas of high interaction will move less than those of low interaction. This will result in dramatic interruptions to whatever routes the ants optimized the previous day. The bots will be housed in small building facades to reinforce the “shifting city” concept for outside human observers.
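Here is a hedged Arduino-style sketch of the logic for one bot: tally close-proximity readings during the day, then at night drive the vibration motor with a strength inversely proportional to that tally. Pin numbers, thresholds, and the shortened day/night lengths are placeholders.

```cpp
// Hedged sketch of one bristlebot node: count close-proximity events during
// a simulated "day", then at "night" vibrate less the more interaction it saw.
const int PROX_PIN  = A0;    // cheap analog IR proximity sensor
const int MOTOR_PIN = 5;     // PWM pin driving the vibration motor
const int NEAR      = 600;   // analog reading treated as "an ant is close"
const unsigned long DAY_MS   = 60000UL;   // shortened day for bench testing
const unsigned long NIGHT_MS = 30000UL;

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  // DAY: observe only, motor off, tally nearby-ant events.
  unsigned long interactions = 0;
  analogWrite(MOTOR_PIN, 0);
  unsigned long start = millis();
  while (millis() - start < DAY_MS) {
    if (analogRead(PROX_PIN) > NEAR) interactions++;
    delay(50);
  }

  // NIGHT: high-interaction bots freeze, low-interaction bots wander.
  int strength = map(constrain(interactions, 0, 200), 0, 200, 255, 0);
  analogWrite(MOTOR_PIN, strength);
  delay(NIGHT_MS);
}
```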

Alternate

The bots could be optionally made by attaching the vibrator to a pinecone or other natural element in the ants’ world.

 

Skipping Homes | Throwing Rocks | Bucolic Building

Wednesday, September 19th, 2012

I propose a relaxing, non-teleological system for throwing rocks and creating unique construction materials. For the human, the process will be: first skip or hurl stones into a lake, stop whenever you like, then assemble the resulting uniquely shaped logs.

Setting: Quiet lake-shore strewn with rocks, pebbles, dirt, sand, leaves and twigs

1) Ubiquitous skill / play property: Throwing Rocks

Throwing rocks into water, or the slightly more advanced process of skipping stones, is a meditative, rewarding process. My design seeks to incorporate the whole of this process with minimal digital intervention. Thus the person will perform all aspects of stone skipping as if the digital device did not exist.

Scan the shore for choice rocks. Dig up rocks with your hands. Weigh, compare, and absorb the information of the stone. Fling the rock towards the water. Visually and aurally connect with your object during its brief, flaming period of life. The feedback from the rock’s performance is incorporated into your body and encourages further throwing in order to validate the newly learned information.

Throwing rocks is enjoyable because it constitutes the core function of intelligence and learning: continuous analysis of prediction.

 2) Digital Analysis

There will be one small change to the typical stone-skipping process. Before flinging, the human will attach a thin strip of reflective tape to the rock’s edge. This is the only interfering component of the system. Next to the human on the shore will be a smartphone whose camera faces out over the water. The camera has a small piece of infrared filter over the lens. An infrared flood lamp sits next to the phone, also directed over the lake. When the rock is thrown, the mirrored strip will beam pulses of light back to the camera lens. The stone’s relative position, velocity, and spinning frequency can be determined through straightforward computer vision methods. The splashes will also probably reflect the infrared radiation in a manner that can help the system collect more information about the flight and its aftermath.
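As an example of one of these measurements, here is a hedged C++ sketch that estimates the spin rate from the blinking of the reflective strip, assuming the vision stage has already reduced each frame to a single brightness value for the tracked rock (roughly one flash per revolution). The frame rate, threshold, and fake data are placeholders.

```cpp
// Hedged sketch: estimate the rock's spin rate from the flashing of the
// reflective strip, given per-frame brightness values for the tracked rock.
#include <cstdio>
#include <vector>

double spinHz(const std::vector<double>& brightness, double fps, double threshold) {
  int flashes = 0;
  for (size_t i = 1; i < brightness.size(); i++) {
    // Count rising edges: frame crosses the brightness threshold from below.
    if (brightness[i] >= threshold && brightness[i - 1] < threshold) flashes++;
  }
  double seconds = brightness.size() / fps;
  return seconds > 0 ? flashes / seconds : 0.0;
}

int main() {
  // Fake 120 fps data: a flash roughly every 12 frames => ~10 revolutions/second.
  std::vector<double> samples;
  for (int i = 0; i < 240; i++) samples.push_back(i % 12 == 0 ? 0.9 : 0.1);
  std::printf("estimated spin: %.1f Hz\n", spinHz(samples, 120.0, 0.5));
  return 0;
}
```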

3) Digital Exploitation of Ubiquitous Skill for Production: Strut Casting

Tethered to the computer vision system is a simple two-axis pivoting head that controls a spray nozzle. The head’s orientation and spray will be controlled by the information collected through the camera. The substance sprayed will be a thin line of a foaming, bonding agent that rigidly hardens within seconds or minutes. Ideally this substance would be a biodegradable version of Dow’s “Great Stuff” foaming sealant. The rigid lines would be cast directly onto the surface of the beach, forming dirty logs which physically incorporate the environment. Every stone tossed generates a new line. The user can keep throwing stones and the system will keep squirting onto the previous log, making it thicker and thicker. Whenever she wants, she can kick (or dig) the generated strut out of the way.
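A hedged Arduino-style sketch of the pivoting head: the vision host streams normalized splash coordinates over serial, and two hobby servos plus a valve aim and fire the nozzle. The message format, pins, and angle ranges are my assumptions and would need on-site calibration.

```cpp
// Hedged sketch of the two-axis spray head: the vision host sends normalized
// x,y coordinates (0..100) of the last splash as "x,y\n", and two servos aim
// the nozzle at the corresponding spot before a short squirt.
#include <Servo.h>

Servo panServo, tiltServo;
const int VALVE_PIN = 7;   // solenoid valve for the foaming agent (placeholder)

void setup() {
  panServo.attach(9);
  tiltServo.attach(10);
  pinMode(VALVE_PIN, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  if (Serial.available()) {
    int x = Serial.parseInt();     // 0..100 across the camera's view
    int y = Serial.parseInt();     // 0..100 from near shore to far water
    if (Serial.read() == '\n') {
      panServo.write(map(x, 0, 100, 20, 160));   // aim left/right
      tiltServo.write(map(y, 0, 100, 30, 90));   // aim near/far
      digitalWrite(VALVE_PIN, HIGH);             // lay down a short bead
      delay(500);
      digitalWrite(VALVE_PIN, LOW);
    }
  }
}
```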

The exact material for the rigid foaming substrate is not totally fleshed out yet, but here are some biodegradable / bioincorporative alternatives to Dow’s Great Stuff:

Plastic made from milk and vinegar (takes two days to set): http://www.instructables.com/id/Homemade-Plastic/step3/Strain/

Robot makes sandcastles: http://www.futuredude.com/stone-spray-robot-makes-sand-castles-last-forever/

 

4) Construction

At the end, the user gathers her generated logs and uses them to assemble a shelter for the night, or (if the rigid foaming substrate works out) a raft for traversing the lake.

Evolving Design

Wednesday, September 12th, 2012

Setup
The stated goal for this week’s design challenge is to think of a way to push design-time and use-time closer together in a digital device. This is based on Maceli’s “Human Actors” paper discussing meta-design. In my preliminary ponderings about this concept, some thoughts rose up. First, in other fields (like systems engineering) the processes of design and use can be thought of as a control system. Control systems are functions that take a stated goal (rotate the car 15 degrees left), produce an output in the real world (the car’s new position is 13.8 degrees left), and (sometimes) receive feedback to bring the desired and the actual closer together (the car only moved 13.8 degrees, so move an additional 1.2 degrees). Generally, the faster a control system can receive and process feedback, the better the system functions.
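To ground the analogy, here is a tiny, hedged C++ sketch of a proportional feedback loop chasing the 15-degree goal from the example above; each iteration stands in for one design/use cycle, and tighter, faster feedback means the actual converges on the desired more quickly. The gain value is arbitrary (chosen so the first cycle lands at 13.8 degrees, as in the example).

```cpp
// Hedged sketch of the control-system analogy: a bare proportional controller.
#include <cstdio>

int main() {
  double goal = 15.0;      // desired rotation (degrees left)
  double actual = 0.0;     // where the car actually ends up
  double gain = 0.92;      // fraction of the remaining error corrected each cycle

  for (int cycle = 1; cycle <= 6; cycle++) {
    double error = goal - actual;     // feedback: compare desired vs. actual
    actual += gain * error;           // act on the error (15 -> 13.8 -> ...)
    std::printf("cycle %d: actual = %.2f deg, remaining error = %.2f\n",
                cycle, actual, goal - actual);
  }
  return 0;
}
```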

Traditional design could be viewed as a very poor control system with little feedback. A designer creates with a goal in mind (a perfectly comfortable chair), and the user deals with what comes out (this chair feels alright). More robust design-use systems feature closer feedback, with user testing and use analysis enabling a more iterative design process. As we shrink the feedback time and make our designerly control system more responsive, we get closer to this design challenge of pushing design and use time together. Let’s imagine a chair made of an even more perfect version of Hiroshi Ishii’s posited “Perfect Red” (a digitally manipulable matter which allows one to perform CAD functions on the object itself), one which perfectly understood a user’s thoughts, words, and actions. A user could receive a blank “Perfect Chair,” sit in it, and command it physically, verbally, and emotionally until the user was perfectly content. This, I believe, would represent design and use being as close together as possible: the tightest possible feedback leading to beautifully responsive design.

A problem with this perfect control system is that although the chair can give us whatever we want, we don’t always know what that is. Omnipotence kills innovation. One might not ever realize the benefits of a cup-holder in one’s “Perfect Chair.” The thought of splitting off a “Perfect Ottoman” might be one of those things that doesn’t happen until you see it at a neighbor’s house. “What a great idea,” one might say when seeing a fresh new type of “Perfect Chair” in an airport lobby. Some of these people with fresh new chair ideas might start receiving commissions to come up with their designs. Soon we are back to splitting apart design and use time once more!

This is why mutations and arbitrary changes are so important in nature. This is why we have sex. Something can be optimized in its own niche, but without new or outside information, it cannot adapt. My answer to the design challenge attempts to push design time and use time closer together, but only to a point where the design can still be meditated on, played with, and innovated upon.

Evolving Design

I propose objects that are responsive to their users, and the innovations of similar objects, though only in an indirect manner. The object’s shape and functionality will change according to evolutionary principles.

Rules

  • Everyone’s device starts out the same.
  • Everyone’s device possesses a code describing its current state and configuration (“genes”). This code can feature markup describing higher level functionality and descriptions (“alleles”).
  • Every night the device “dies,” automatically reconfigures itself, and is reborn as its own child. This is like asexual reproduction but the number of resulting objects remains the same.
  • The child’s genes are taken in some part from the parent device and a smaller amount are taken randomly (“mutation”).
  • The genes passed on are determined by a fitness function which results from how the user interacts with the device.
  • Two devices can reproduce sexually by being left in close proximity overnight. Each splits the code it would normally pass on to its own child and contributes it to the other’s child, so the resulting children mix both parents’ genes. (A rough sketch of these rules appears below.)
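Here is a hedged C++ sketch of these rules: a gene vector per device, a fitness score accumulated from use, nightly asexual reproduction in which better-liked configurations drift less, and an optional crossover step for two devices left together overnight. Gene meanings, vector length, and mutation rates are placeholders.

```cpp
// Hedged sketch of the nightly reconfiguration rules described above.
#include <cstdio>
#include <random>
#include <vector>

std::mt19937 rng(42);

struct Device {
  std::vector<double> genes;   // encodes current state and configuration
  double fitness = 0.0;        // accumulated from how the user interacts with it
};

// Asexual nightly step: the child keeps the parent's genes with a small random
// mutation; a well-liked (high-fitness) configuration drifts less overnight.
Device reproduce(const Device& parent, double mutationRate) {
  std::normal_distribution<double> noise(0.0, mutationRate / (1.0 + parent.fitness));
  Device child = parent;
  for (double& g : child.genes) g += noise(rng);
  child.fitness = 0.0;
  return child;
}

// Sexual step for two devices left in proximity: each gene is taken from one
// parent or the other at random, then the result is lightly mutated.
Device crossover(const Device& a, const Device& b, double mutationRate) {
  std::bernoulli_distribution pick(0.5);
  Device child;
  child.genes.resize(a.genes.size());
  for (size_t i = 0; i < a.genes.size(); i++)
    child.genes[i] = pick(rng) ? a.genes[i] : b.genes[i];
  return reproduce(child, mutationRate);
}

int main() {
  Device chairA{{0.5, 1.2, 0.8}, 3.0};   // invented gene values and fitness
  Device chairB{{0.9, 0.7, 1.1}, 5.0};

  Device childA = reproduce(chairA, 0.05);
  Device shared = crossover(chairA, chairB, 0.05);

  std::printf("childA gene 0: %.3f, shared child gene 0: %.3f\n",
              childA.genes[0], shared.genes[0]);
  return 0;
}
```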

 

An alternate idea I had was Lamarckian Evolving Furniture. In this case the main difference would be that physical changes that happen to the device-creature are passed on through its genes to the next generation. That is, you could beat your chair into a new shape, and its “child” would show signs of your previous physical manipulations.

Hungry, Hungry Anteaters

Sunday, September 9th, 2012

The goal of this week’s design challenge is to create a “messing about” that opens people up to simultaneous social and analytical thinking. This challenge was based on Ratto’s paper, “Critical Making: Conceptual and Material Studies in Technology and Social Life,” in which he discusses the act of “making” as a lubricant for opening up shared social experience and critical thinking. It also spawned from discussions of Greer’s paper “Taking Back the Knit: Creating Communities via Needlecraft,” which promotes the idea of crafts for their socially engaging qualities.

I was attracted to Ratto’s discussion of how “making” brings with it the emotional dimension of learning which tends to be neglected in most positivistic educational methods. He states,

“The importance of affectual relations in meaning-making has also been emphasized in Knorr-Cetina’s work (1997) on the relationship between scientists and the “epistemic objects” with which they work. For us, affect serves as a way to begin to understand the importance of personal investment in linking conceptual understandings of technology’s potential and its problems to everyday experience.”

My work deals with animals, their behavior and performance. Often, I am challenged with the task of thinking of interesting things to do with the ants, or questioning why certain behaviors exist. This is a daunting task, and difficult to pursue in an entirely abstract, mental manner. The same couple of standard facts about ants tend to cycle over and over in my mind, striking me as boring or impractical.

I find most of my successes in digital-biotic design come from direct combinations of abstract research and physical play. For the design project today, I have made a game to explore the “materiality” of ants.

Preparation

First, I went and collected some local, harmless wood ants from a nearby hiking trail. Next, I put the colony under anesthesia in order to paint them with my very own magnetic insect paint. Finally, I took some plastic containers and coated them with a non-stick Teflon paint (Fluon, or “Insect-a-Slip”).

Gameplay

The goal of the project is to see what concepts, emotions, and comprehensions arise from the activity. To play, each person chooses an anteater character which represents a different modality for interacting with the insects. In all, the picker-uppers include: two types of magnetic grabbers, a sticky grabber, warm mammalian hands, and cold, accurate tweezers. There is also a bonus power-up where people can use an insect aspirator to vacuum up the ants.

The game starts by dumping the prepared ants into the arena, and then, Hungry Hungry Hippos style, everyone tries to collect as many as possible into his or her own bucket.

 

Original Design

Antmongous

Tuesday, February 7th, 2012

Here is the text from my proposition for a FLUX 2012 project. The full PDF with pictures can be viewed here: Antmungous_AQ_01

————————-

Antmongous

Proposal for FLUX 2012

Andrew Quitmeyer (GAtech phd researcher)

Digital World and Image Group | Multi-Agent Systems and Robotics Lab

Antmongous is an embodied, interactive exhibit featuring live, continuous communication between humans and a live colony of ants. The exhibit will be physically spread throughout the Flux festival but exist in harmony with the other works. It will encourage participant exploration by emergently provoking individuals or groups to follow ant-designated paths, but its design will not detract from or overshadow the exhibits situated along these routes. In essence, it becomes a collaborative scavenger hunt between humans and ants that runs in parallel with the festival and requires no outside technological distractions (e.g., smartphones) on behalf of the participant.

Any money received from the commission will be spent exclusively on materials. The artist and his supporting labs will waive any other fees.

Entomological Background

Ants, such as our Aphaenogaster cockerelli, are able to work collectively to find and recover nutrients in the environment without direct communication. Simple, distributed behaviors enable many separate individuals to display emergent, large scale behaviors such as optimal path finding. The goal of Antmongous is to situate humans within this network of the ants’ actions, and have them replicate the ants’ behaviors.

CORE DESIGN

Castleberry Hill for Ants

First, we shall replicate the networked layout of the Castleberry Hill festival location as an abstracted, 1/175 ant-scale model. Since the core area of the Flux festival takes place in a roughly 300 m x 400 m geographical area, the ant-sized model will be approximately 1.7 m x 2.3 m, or the size of a large table. Pathways such as roads, alleys, and building interiors will be featured in the model as areas accessible to the ants. Restricted areas such as rooftops or sides of buildings will be correspondingly elevated in the model and coated in Teflon paint (Fluon) to make sure humans and ants both only have access to analogous areas.

The ant colony will be loaded into the miniature model. The Queen and brood will be placed in the area of the model corresponding to the model’s location in the real world.

Track Ants

I created open-source software for analyzing the positions and movements of our ants in the laboratory environment. This same software can be used for tracking the ants in the Castleberry Hill model in real time.

Project Ants

This position data will be sent wirelessly to a mesh network of inexpensive, battery-powered XBee microcontrollers. The XBees will, in turn, switch lamps lining the sidewalks on and off, corresponding to the presence or absence of an ant in the analogous location. Thus an ant at the model’s virtual intersection of Bradbury Street and Fair Street will illuminate lamps in the actual location.

In this way the ants can be felt crawling throughout the village.
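A hedged Arduino-style sketch of one lamp node: assume the tracking computer broadcasts simple text lines of the form "<node id> <0 or 1>" through an XBee in transparent mode, and each node switches its lamp when its own id is addressed. The id, pin, and message format are assumptions, not a finalized protocol.

```cpp
// Hedged sketch of one lamp node in the XBee mesh: listen for broadcast lines
// of the form "<node id> <0 or 1>\n" and switch the local lamp when addressed.
const int MY_ID    = 7;   // which model intersection this node represents
const int LAMP_PIN = 3;   // transistor/relay driving the lamp strip

void setup() {
  pinMode(LAMP_PIN, OUTPUT);
  Serial.begin(9600);     // XBee (transparent mode) on the hardware serial port
}

void loop() {
  if (Serial.available()) {
    int id    = Serial.parseInt();   // which node the message addresses
    int state = Serial.parseInt();   // 1 = ant present, 0 = ant gone
    if (Serial.read() == '\n' && id == MY_ID) {
      digitalWrite(LAMP_PIN, state ? HIGH : LOW);
    }
  }
}
```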


Interaction / Gameplay

Setup

At the beginning of the event, food sources (mealworms and agar paste) will be placed in various locations within the ants’ model. In the corresponding geographical locations, we will also hide prize packages. The packages will be of varying sizes, and some will be weighted and supplied with many handles to ensure that multiple people are needed to transport them.

Before the ants have discovered a particular food cache, humans will be able to “feel” the passage of ants wandering about the city somewhat randomly. Trails of light will travel down the streets and alleys, relating to the foragers’ underlying search algorithms. Even if new arrivals to the festival know nothing beforehand of this particular project, the presence of other moving agents should be unmistakable.

Participants may feel the urge, of their own free will, to follow along with these light movement patterns, and end up exploring the festival in tandem with the actual ants exploring their environment.

Eventually the ants will find the caches and develop stable transportation lines leading directly to the food. In the human world this will translate into illuminated paths directly connecting the nest (the hub of the exhibit) and the hidden prizes. At this point, humans coming into contact with the Antmongous exhibit will need only to follow the illuminated paths to discover the hidden treasures.

Before ant discovery

After multi-ant discovery

Capture

There is an interesting mechanic in the ant world which ensures that foragers return captured food to the colony. Adult ants have very tiny throats and do not possess chewing teeth capable of breaking the food down to swallowable sizes. Instead, they must bring all captured food back to the nest and share it with the larvae, which grind and regurgitate the food into a form that is edible for the adult ants.

To replicate this fascinating feature of the natural world, the packages placed in the human world will need to be transported back to the nest before they unlock. The method of unlocking is yet to be determined, but several options are possible, ranging from simply having the humans running the exhibit unlock it for you to mandating interaction with the actual colony.

Unlocking Methods

A)       Human: Participants return with the prizes, and the humans running the exhibit simply unlock it for them.

B)       Ant combination: All prize boxes have a tag with a secret combination code written on it in sugar water. The tag is placed into the ants’ environment, and the numbers are revealed by the ants clustering in the sugary spots.

C) Ant return: all prize boxes have a food scented tag. This tag is placed into their environment, and when it is brought

Prizes

The prizes could be one (or a mixture) of the following ideas

A)       Shared food: The prize boxes contain mealworm burger patties and seasoned agar paste (for vegetarians) that the participants can enjoy next to the ants who are eating the same meal.

B)       Interactive craft: The prize boxes contain random craft components which have pairs of parts in Human and Ant (1/175) sizes. The humans use these parts to design a small and large sculpture which they can place to interact with the ants’ model and into the real world.

C)       Tickets for manipulation: The returning participants can add new food sources/prize locations to the ants’ model and the real world.

Estimated Budget

This is the estimated budget for the project. The primary variable in the price is the number of nodes in the XBee mesh network. Fifty nodes should get us decently high-resolution coverage of the entire event area, and the feeling of immersion and movement will reach levels of great subtlety. The number of nodes can still be increased or decreased depending on budgetary conditions.

Item (cost per unit × quantity = total)

Core
  • XBees: $22 × 50 = $1,100
  • Lamp strips: $18 × 50 = $900
  • Central Processing Computer (laptop): $0 × 1 = $0
  • Computer Vision Cameras: $350 × 1 = $350
  • Batteries: $5 × 50 = $250
  • Additional Power Supplies: $10 × 15 = $150

Human Prize Boxes
  • Box Material: $10 × 16 = $160
  • Prizes: $5 × 16 = $80
  • Illumination (LED): $4 × 16 = $64

CH Acrylic Abstract Model Ant Farm (×1)
  • Model Materials: $275 × 1 = $275
  • Cutting: $50 × 1 = $50
  • Ant Colonies: $100 × 1 = $100
  • Ant Food: $35 × 1 = $35

Miscellaneous
  • Wiring + Circuit Printing: $110 × 1 = $110
  • Custom Software and Firmware: $0 × 1 = $0
  • Artists’ Fees: $0 × 1 = $0
  • Additional Batteries: $5 × 3 = $15

Grand Total: $3,639

Real Ant Moebius Strip

Monday, January 30th, 2012

UPDATES: more recent pics at the end (video coming soon!)

The goal of this craft is to create a hanging Moebius strip for live ants to crawl upon. It is inspired by the Escher drawings of ants on Moebius strips, and also by the Moebius strip’s archetypal description: “If an ant were to crawl along the length of this strip, it would return to its starting point having traversed every part of the strip (on both sides of the original paper) without ever crossing an edge.”

We are going to make a primarily sculptural/aesthetic device to hang over our boxes of ant nests. We should be able to load ants onto the strips and watch them wander around in endless, crazy loops. To keep with the current aesthetic of our ant containers, the strips will be forged from transparent acrylic. The primary crafting experience in this project comes from building the tacit knowledge of turning and manipulating hot, molten acrylic.

Materials:

To create this project you will need:

  • 24 inch long acrylic sheets (1/8th in)
  • 3 Hard rubber (somewhat heat resistant) clamps
  • Heat gun
  • Monofilament (thin fishing string)
  • Cylindrical, heavy, heat-resistant wrapping surface (I used a 7-8 inch diameter glass ashtray)
  • Additional heat-resistant cylinder (comes in handy sometimes)
  • Something to cut the acrylic sheets (I used a laser cutter; a Dremel with the right bit or a bandsaw could also work)
  • Wet sponge. If you want to make a part of the acrylic instantly cooler and freeze into place, dab it with the sponge.

Step One: Cut

First you will need to make several strips from your original acrylic sheet. Follow the attached Adobe Illustrator template to laser cut it, or base your own cuts on it. In short, each strip should be about 24 x 3/4 inches and have several small holes in each end. Make several strips to test out the techniques on before doing the final one.

Step Two: Play

Prop the heat gun on your worktable and, using some scrap strips of acrylic held with your clamps, practice smoothly bending and twisting the plastic. Get a feel for how quickly it heats up, and at what distances from the heat gun. The goal is to avoid extremely localized heating, which can result in sharp bends or really hot areas that can make it bubble.

Step Three: Twist

Clamp one end of the strip to the thick ashtray, and hold the other clamped end of the strip in your hand. Slide the strip slowly back and forth in front of the heat gun while rotating the end in your hand 180 degrees. It is best to twist it even a little further than 180, because when we start wrapping the strip into a circle it tends to try to unwind itself.

Step Four: Wrap

Now that your strip has a nice smooth twist in it, keep the entire thing heated up and begin slowly bending it to try to make the ends touch. Do not bend it too quickly or the strip might snap! Don’t leave the heat gun pointed in one area too long or you may get a very sharp bend, which may or may not be the aesthetic you are looking for.

Step Five: Connect and Smooth

When your bend is good enough to connect both ends of the strip, clamp them both onto the ashtray in the same spot. Use the heat gun to go back around the strip and smooth it out, or make certain areas more circular and bendy. Keep your ends clamped together around the dish in the Moebius shape until it has thoroughly cooled.

Step Six: Tie it up

Once cool, use your fishing wire to tie the ends together. Lace the holes together tightly and trim the ends.

Step Six Alternate: Fuse

If you are really good with acrylic you could also try to use Weld-On to fuse the ends together, or simply heat the ends up REALLY hot and squish them together. I have tried both, but it was hard to make them look nice.

Step Seven: Hang and load

Loop another string of mono-filament through the end holes to make a long strap for hanging your moebius strip.

 

Digital Media DIY and Political Stances – Skycopters

Wednesday, November 30th, 2011

Quick production turnaround and fast access have shifted the source of journalistic video footage from centralized news organizations to distributed, crowdsourced video or small groups of independent journalists. These new modes were made salient by the waves of distributed media content flowing from the Middle East uprisings starting in late 2010 (the “Arab Spring”), and hammered in by the Occupy Wall St. protests. Protesters using mobile phones and cameras flooded social video sites like YouTube with up-to-the-minute coverage of major events and atrocities, and they were supported by censorship-evading software and proxies provided by groups of internet denizens as well as major governments like the US (though these would often also be working in parallel with the oppressive governments: http://blixblog.com/?p=396).

A combination of technical availability and cultural need has also spurred the use of live-streamed mobile news footage collected by individuals with smartphones and unlimited data plans. Though it had existed for a while, the popularity (and therefore conventionalized use) of Ustream grew with its use by a handful of citizen journalists broadcasting live happenings within the Occupy Wall St. movement at different locations around the US.

As the new wave of distributed journalists takes over the roles of traditional news, they are also absorbing and manipulating many of its conventions. One of the most popular Ustream reporters of OWS is now even poised to begin creating aerial drones which would emulate newscopters but be able to reach regions closer to the ground within a city (“Dronecam Revolution”: http://boingboing.net/2011/11/23/theother99.html).

Aerial videography was typically only used by hobbyists (RC flying cameras: http://www.youtube.com/watch?v=zDpL8aQlCDA&feature=related) or production studios (http://www.skycamusa.com/). The ongoing imperialistic wars in Iraq and Afghanistan also popularized the concept with the US’s use of aerial drones (http://www.youtube.com/watch?v=CgKN2Q5EgKU). The standardization of technologies, as well as popular demand, helped lead to results like the filming of a November 2011 protest in Warsaw.

At the same time, this technology was also being reappropriated by commercial organizations creating toys like the AR.Drone (http://ardrone.parrot.com/parrot-ar-drone/usa/ar.free-flight-for-android). The web of hackers and DIY enthusiasts saw this packaged set of hardware, however, and began a re-reappropriation with the release of an open-source AR.Drone API (https://projects.ardrone.org/projects/show/ardrone-api).

Unfortunately, true advances in new mediums tend to be slow to be realized, and new technology is typically used in facsimile of older production codes, just with larger and cheaper distribution. Perhaps further meditations will lead to truly digital new devices, like fleets of semi-autonomous newscopters that are owned and controlled by the public via the internet.

Digital Mirror Flashlight

Tuesday, October 18th, 2011

Full PDF

This is a proposal built on the initial “Digital Mirror Parade,” where kids and students would go on a historical walk, compile a 3D map of this walk, and re-project it onto the actual space.

After further discussion, it looks like we are going to focus instead on a single building or place. Here is a revised version of the idea to attempt to provide a feasible concept. This new concept consists of three parts:

A)      MAPPING: Kids map an area using RGBD 6D SLAM

B)      MANIPULATION: Let kids manipulate the 3D content

C)      INTERACTION/DISPLAY: Interact via re-projection onto the actual space

Insect Stories

Wednesday, September 28th, 2011

Stop the humanoid solipsism. Join the world of the creatures:



http://andy.dorkfort.com/art/myt/Insect%20Stories.pdf

Digital Mirror Parade

Tuesday, September 27th, 2011

For the design challenge for the Atlanta School, I propose a performance project involving hauling a Kinect down a historical path to map the space and change it:

http://andy.dorkfort.com/art/myt/Digital%20Mirror%20Parade%20(2).pdf

Vertical Segregation

Tuesday, September 27th, 2011

We divide groups of people in many different manners; I propose a new one:

http://andy.dorkfort.com/art/myt/Vertical%20Segregation.pdf

Mark Your Territory

Tuesday, September 27th, 2011

Lots of initial design went into this project. The first parts are described a bit below, and can maybe be chatted about more later.

The current design of the physical system is detailed in this PDF:

http://andy.dorkfort.com/art/myt/MarkYourTerritory_1.pdf

—————————–

The first step was to prototype the purely physical stage of the product. This means prototyping and designing a marker with several intersecting qualities related to taking ownership of a particular locale. The key features that I initially needed to design for come down to the following:

1) Visibility – Bright, eye catching, indicative of a user’s goals and actions.
2) Resilience – especially to being soaked with water/urine
3) Conductivity – To measure the power of your pee with the Arduino (a rough sensing sketch is at the end of this post)
4) Mutability – to provide a bit more engagement and feelings of accomplishment for the interactor, the device should react and change in accord with the user’s actions (peeing on it makes it different)
5) Semi-Permanence – Littering sucks, and also this increases the dynamics of claiming spaces; once your marks biodegrade you need to maintain your trips and markings of a place to retain leadership.
At first, I wanted to deal primarily with points 1 and 2 (and a little bit of 4).
My very first goal was to create a little marker that, when peed on, would reveal a secret message.
A nice discussion of possibilities for doing this was held here:
My beginning attempts focused on methods of imprinting an invisible message onto ordinary-looking paper which would reveal itself once soaked in water or a mildly acidic or basic solution (urine). This was the tricky part: there are lots of methods for making invisible ink that reveals itself when activated by heat or UV, but liquid alone proved to be challenging.
I experimented with those Crayola color-changing markers, and tried to locate that special Crayola watercolor paper used by very young children, where the colors are activated by a paintbrush with ordinary water.
The color-changing markers failed to respond to water or urine in order to reveal the secret message. I tried various other chemicals too.
Another difficulty was that the method should be mechanically reproducible (I wanted to print it), and the message was of high visual complexity (a QR code), so I needed high contrast for it to be machine readable.
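For point 3 (conductivity), here is a hedged Arduino sketch of how the "power of your pee" could be read: the two electrodes on the marker form the unknown leg of a voltage divider, and a wetter, more conductive marker pulls the analog reading higher. The pin numbers, reference resistor, and threshold are placeholders.

```cpp
// Hedged conductivity sketch: two electrodes on the marker sit in series with
// a known resistor as a voltage divider; a wetter, more conductive marker
// produces a higher analog reading.
const int SENSE_PIN = A0;     // junction of the voltage divider
const int DRIVE_PIN = 8;      // powers the divider only while sampling
const int WET_LEVEL = 300;    // reading above this counts as "marked"

void setup() {
  pinMode(DRIVE_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  digitalWrite(DRIVE_PIN, HIGH);          // energize briefly to limit electrode corrosion
  delay(10);
  int reading = analogRead(SENSE_PIN);    // higher = more conductive = wetter
  digitalWrite(DRIVE_PIN, LOW);

  Serial.println(reading);                // "power of the pee", 0..1023
  if (reading > WET_LEVEL) {
    // Here the marker could trigger its reaction (color change, logging, etc.).
  }
  delay(1000);
}
```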

Architecture Against Death

Wednesday, February 9th, 2011

This isn’t my idea, just inspiration for some things to share in class today:

http://www.reversibledestiny.org/Reversible_Destiny_-_Arakawa_and_Gins_-_We_Have_Decidede_Not_to_Die/Architecture_Against_Death.html

Watching TV in the early 21st century

Wednesday, February 2nd, 2011

A ritual my wife and I have is typically to watch a couple of tv shows in the evening. Since the days of cable television are behind us, we rely entirely on somewhat scrappy ways to get our video fix.

Instead of just plopping on the couch and shifting channels until you find something good (as in the days of old), the first step is locating a target TV show. Currently we accomplish this via a) torrenting popular TV shows and movies, b) a friend’s Netflix account, or c) recording HDTV broadcast over the airwaves (only for Jeopardy).

For option A), we have to scour the internet a bit to see if people have ripped a certain episode already, or if we will have to wait a couple more days for the internet goons to get a good torrent going of a particular episode. Then we have to wait for it to download. Depending on how popular the show is, this can vary from under a minute (like “It’s Always Sunny in Philadelphia”) to several days (like old episodes of “Pete and Pete”). Before, we would have to start the episode playing on the computer (which sent a signal to the TV in the other room), and then run over to the living room to start watching it, but now we have a fancy new TV that can access our computer like a media server. This is awesome because it brings the browsing experience back to the couch, but it also causes lots of grief when we add new files and they don’t show up on our TV for DAYS.

Using option B), a friend’s Netflix account (yeah, we are that cheap), tends to run much more smoothly, but there are still many times when a certain video that we added to the queue (still from the computer in the other room) refuses to show up on the TV’s Netflix app.

Option C) is the most dynamic and troublesome option. Even though “Jeopardy” has YEARS worth of interesting, varied television content, they lock up their information harder than any other television show. You can hardly find torrents or episodes anywhere on the web, but they do broadcast individual episodes once a night for free over the HD airwaves. So we hook up an antenna to a digital tuner that feeds into my computer. The computer is then set to record a specific channel from 7:30-8:00 Mon-Sat to catch episodes of Jeopardy. These episodes then (ideally) show up automatically on our computer’s media server, but usually lead to the same kind of frustration as when we download any new file. One thing that often leads to exciting physical drama is that, even though we don’t have to watch the episode of Jeopardy from 7:30-8:00, we have to make sure the computer is ON during these times. This has caused many scrambles through the house when we realized that it was 7:42 and the computer was OFF!

Once a TV show is working, and we have finished it, we either play a game of boggle, do some weird art project, work, read, or start the process over and search for a new piece of content to watch.

Parking Pals dolls

Tuesday, November 23rd, 2010

Illustrator File