ADDA

Screen Shot 2016-05-02 at 20.15.23


By William Primett and Vytautas Niedvaras

Project Description

ADDA is a musical performance that uses embodied technologies and muscle stimulation hardware. We’ve made a pair of sensory gloves that track the movement of the performer’s hands along with the relative position of their fingers. We use this data to detect different types of motion and gesture, and when the gloves are used in conjunction the performer is able to control the sound output. We also control a TENS muscle stimulator, which overrides the performer’s control whenever current is imposed.


Concepts

 


Screen Shot 2016-04-26 at 21.09.45

In our initial project proposal we planned to explore a number of subjects at once (fetish, control, tensions between natural and enhanced/altered human expression, and collective fears and expectations of the future).

The final piece was built through an action-reaction approach. By continuously working on and questioning the initial brief, we remoulded our first intentions into something that, we felt, keeps the original energy and feel but is stripped of any gimmicks or distractions.

With ADDA (analog to digital, digital to analog) we expose an ongoing dialogue between the physical and digital realms. By creating a system composed of embodied technology and computer-controlled muscle stimulation devices, we are able to place the performer in a setting where the digital system has direct access to the performer’s own bodily hardware.


Hardware: The Glove

IMG_0993

First prototype of the glove, using a Teensy 3.2 microcontroller, custom flex sensors by Flexpoint and an MPU6050 3-axis accelerometer/gyroscope. At this point the glove hardware was fully functional and we were getting a continuous flow of data from the flex sensors and the MPU6050.
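
For a sense of what the glove firmware does, here is a minimal sketch of the data path (not our final code): it reads the flex sensors on a few analog pins, pulls raw accelerometer/gyro values from the MPU6050 over I2C and streams everything as one CSV line per frame over USB serial. The pin choices, sensor count and sample rate are assumptions for illustration; the project itself used Kris Winer’s MPU6050 library (listed under Software Used) rather than raw register reads like this.

// Minimal Teensy sketch of the data path described above (not the final glove
// firmware). Pin choices, sensor count and the ~100 Hz rate are assumptions.
#include <Wire.h>

const int FLEX_PINS[4] = {A0, A1, A2, A3};   // assumed analog inputs
const int MPU_ADDR = 0x68;                   // default MPU6050 I2C address

void setup() {
  Serial.begin(115200);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);                          // PWR_MGMT_1 register
  Wire.write(0);                             // clear the sleep bit to wake the sensor
  Wire.endTransmission(true);
}

void loop() {
  // One 10-bit reading per flex sensor
  for (int i = 0; i < 4; i++) {
    Serial.print(analogRead(FLEX_PINS[i]));
    Serial.print(",");
  }

  // 14 bytes starting at ACCEL_XOUT_H cover accel, temperature and gyro
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 14, true);
  int16_t raw[7];
  for (int i = 0; i < 7; i++) {
    int16_t hi = Wire.read();
    int16_t lo = Wire.read();
    raw[i] = (hi << 8) | lo;
  }

  // raw[0..2] = accel XYZ, raw[3] = temperature, raw[4..6] = gyro XYZ
  Serial.print(raw[0]); Serial.print(",");
  Serial.print(raw[1]); Serial.print(",");
  Serial.print(raw[2]); Serial.print(",");
  Serial.print(raw[4]); Serial.print(",");
  Serial.print(raw[5]); Serial.print(",");
  Serial.println(raw[6]);

  delay(10);                                 // roughly 100 frames per second
}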

Even though the system was working, we had to develop it further to make it usable in a performance setting. We started by making a pair of custom gloves that would house all of the hardware components.

After some research we came across a list of tutorials (including a whole playlist of YouTube videos on sensory glove making) by Hannah Perner-Wilson. In these tutorials Hannah goes through different sewing techniques as well as selecting the right fabric and sensors.

We started by downloading, printing and cutting out the glove pattern from http://dev-blog.mimugloves.com/step-2/

We used pins to secure the fabric onto the pattern before cutting:

IMG_0999

We began running into problems shortly after that. The sewing machine we were using would tear and crease the fabric. To solve this we used tissue paper as a fabric support for sewing. This prevented damage to the fabric while sewing and made it much more stable; however, it was practically impossible to remove all of the paper after the pieces were sewn together.

IMG_0944

After some research we found a solution. Dissolvable fabric support for machine embroidery!

A test piece with dissolvable fabric support before and after washing:

IMG_0950 copy

We used this technique to sew “veins” onto the top of the glove:

Pinning the glove piece to the fabric support before sewing:

IMG_0962

The piece after sewing the veins onto the top of the glove with the dissolvable support fabric:

IMG_0996

The same piece after washing it:

IMG_0998

While making the glove we experimented with a few different materials. The first finished glove piece was made entirely of power mesh. Unfortunately it was not flexible/stretchy enough, so for the final design we ended up using spandex for the bottom part of the glove.

PowerMesh glove:

IMG_0947

IMG_0946

Sewing the top glove piece (power mesh, with veins sewn on top) and the bottom piece (spandex) together:

IMG_1003

Next we had to make a brace to hold the Teensy and the MPU6050. We made it using neoprene backed with non-stretch fabric. For additional durability, the wires connecting the Teensy, MPU6050 and flex sensors were firmly fixed to the fabric with zigzag stitching.

IMG_0992

Finished brace, with Teensy and MPU6050 in place, ready for soldering:

IMG_0965

Circuit diagram for connecting the Teensy, MPU6050 and flex sensors:

IMG_1065

Preparing the flex sensors by removing the stock connectors and soldering on wires:

IMG_0985

Each flex sensor needed two pull-up resistors (the blue wires coming from the resistors would go to ground, the orange ones to the Teensy’s analog inputs):

IMG_0990

Finished brace with all of the hardware soldered together:

IMG_0981

Nearly finished glove, fully working hardware:

IMG_0980

We added another layer of fabric over the brace for additional component protection. The finished glove:

IMG_1005

Hardware: The Zapper

 

For muscle stimulation we needed a safe and reliable device that could be controlled by an openFrameworks application.

We used a medical grade TENS (transcutaneous electrical nerve stimulation) device, an Arduino and four solid state relays. Normal relays make a clicking noise whenever their state changes, which would have interfered with the performance.

IMG_1058

Using this approach we managed to avoid interfering with the actual device. Testing the system with an oscilloscope:

To make the system more stable we soldered everything together.

IMG_1061
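
As a rough illustration of the control path, here is a minimal standalone Arduino sketch that gates the four relay channels from single-byte serial commands sent by the computer. The pin numbers, command bytes and burst length are all assumptions; the project’s software list includes the Firmata library, which would instead let the openFrameworks app drive these pins directly without custom firmware like this.

// A minimal sketch, assuming the computer sends single-byte commands over
// USB serial: '1'..'4' to close one of the four solid state relays for a
// short burst, '0' to open everything. Pins and burst length are illustrative.
const int RELAY_PINS[4] = {2, 3, 4, 5};      // assumed digital outputs
const unsigned long BURST_MS = 200;          // assumed stimulation burst length

unsigned long burstStart[4] = {0, 0, 0, 0};
bool relayOn[4] = {false, false, false, false};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 4; i++) {
    pinMode(RELAY_PINS[i], OUTPUT);
    digitalWrite(RELAY_PINS[i], LOW);        // all relays open at start
  }
}

void loop() {
  // Start a burst when a command byte arrives
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c >= '1' && c <= '4') {
      int i = c - '1';
      digitalWrite(RELAY_PINS[i], HIGH);
      relayOn[i] = true;
      burstStart[i] = millis();
    } else if (c == '0') {
      for (int i = 0; i < 4; i++) {
        digitalWrite(RELAY_PINS[i], LOW);
        relayOn[i] = false;
      }
    }
  }

  // Automatically end each burst so the TENS pulse stays short
  for (int i = 0; i < 4; i++) {
    if (relayOn[i] && millis() - burstStart[i] > BURST_MS) {
      digitalWrite(RELAY_PINS[i], LOW);
      relayOn[i] = false;
    }
  }
}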


Data and Sound Design Process

We needed to obtain clean data so that movement and gesture changes would be accurate and reliable when controlling the instruments and effects. The performer was essentially giving the illusion that the gloves were an instrument of their own, with the digital mechanisms fully abstracted from the audience.

Before analysing any data we wanted to use the accelerometer values from each hand to trigger sounds. However, we found that the raw values were very noisy to begin with, and we realised that the readings were relative to gravity. The main problem was that the values wouldn’t indicate when the hands weren’t moving, and we needed some sort of trigger state.

We presented our work in progress to Rebecca Fiebrink, one of our tutors who specialises in embodied controllers, to get pointers on working with embodied sensor data. She confirmed that our findings so far weren’t unusual and that we’d have to thoroughly analyse the data with our own eyes and ears, linking the actions that caused undesired changes in the data. We graphed and filtered all of our values in openFrameworks and found that the gyroscope values were a lot smoother in general and didn’t change when the hand was stationary. These values give rotations about three axes that are already relative to the sensor’s position in space. By combining all three rotation changes over time, we managed to get values that quite accurately represent the speed of the hand when moving. With the sensor placed on the front of the wrist, we had a strong indication of general movement.
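
The combination step looks roughly like the sketch below, assuming gx, gy and gz are the latest gyro rates from one glove; the smoothing factor and threshold are placeholders that would be tuned by eye and ear rather than our actual values.

#include <cmath>

// Running estimate of how fast the hand is moving
static float smoothedSpeed = 0.0f;

// Returns true when the hand is judged to be moving.
// gx, gy, gz: latest gyro rates for one glove (e.g. degrees per second).
bool updateHandMovement(float gx, float gy, float gz) {
    // Combine the three rotation rates into one rough speed value
    float speed = std::fabs(gx) + std::fabs(gy) + std::fabs(gz);

    // Exponential moving average to smooth out jitter between frames
    const float alpha = 0.1f;                  // placeholder smoothing factor
    smoothedSpeed = alpha * speed + (1.0f - alpha) * smoothedSpeed;

    // Unlike the raw accelerometer values, this settles near zero when the
    // hand is stationary, which gives a usable trigger state
    const float threshold = 15.0f;             // placeholder, tuned in rehearsal
    return smoothedSpeed > threshold;
}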

Our new flex sensors also gave very clean values. We tested them by feeding different hand gestures to Wekinator, which was very responsive to changing states. We wanted one hand to be able to switch through melodic presets and the other to manipulate effect parameters.
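
As a pointer to how glove data can reach Wekinator, the snippet below is a minimal openFrameworks sketch using ofxOsc with Wekinator’s default input port (6448) and address (/wek/inputs). The class and variable names are illustrative rather than taken from our code.

#include "ofxOsc.h"
#include <vector>

// Forwards one frame of glove data to Wekinator as a list of floats.
class GloveToWekinator {
public:
    void setup() {
        sender.setup("localhost", 6448);          // Wekinator's default input port
    }

    void send(const std::vector<float>& flexValues, float gyroSpeed) {
        ofxOscMessage m;
        m.setAddress("/wek/inputs");              // Wekinator's default input address
        for (float v : flexValues) {
            m.addFloatArg(v);                     // one input per flex sensor
        }
        m.addFloatArg(gyroSpeed);                 // plus the combined movement value
        sender.sendMessage(m);
    }

private:
    ofxOscSender sender;
};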

Sound

For our sound output we were interested in Laetitia Sonami‘s more abstract approaches to embodied sound, where the minimal sound design is very true to the movement of the gloves and is perceived as a unique instrument. By contrast, the Mi.Mu gloves are built commercially to suit many types of sound control, so it was worth researching some of the interactions used. One of particular interest was the twist and turn to change effect parameters, which would be responsive to our data, easy for the performer and convincing to the audience.

Wanting to focus on a polished end sound, we had to use something outside openFrameworks to actually generate the audio. We started with the ofxAbletonLive addon, which allows control of Ableton Live tracks and devices through LiveOSC messages. This was useful to begin with: we could easily set up some simple control mechanisms to launch preset MIDI loops (clips) and control filters, allowing experimentation with different base sounds. However, we weren’t convinced by the performer interaction. As mentioned, the clips were all pre-recorded; even assigning single held notes to each clip gave us no real diversity of velocity and duration, which were crucial for producing a range of sonic textures.

After this experimentation we had a stronger idea of our base set of sound interactions, but wanted stronger control over the virtual synthesizers by sending MIDI messages to Ableton Live. We worked towards a structure where the performer would be able to layer some simple drone-like textures, then add and control heavy filters that would override these simplistic textures into something more bizarre. The imposed muscle stimulation (yet to be properly practised at this point) would then disrupt the performer and take control over this mechanism. These ideas formulated from our own practices as well as the works listed above, particularly Laetitia Sonami’s feedback layering with the Spring Spyre and Giuseppe Torre’s use of dramatic filters in Twister.

General control mapping:

27174-NWKELP (1)

Twister in particular is a strong example of how embodied controllers end up having an effect on the whole body, which in turn influences the recorded data. We got in touch with Torre earlier in the year, after seeing his piece, to ask about his sound setup. He allowed us to explore his MaxMSP patch and accompanying Live set.

Torre’s patch:

Screen Shot 2016-04-29 at 16.29.24

The patch was at first very daunting to dissect, as our collective experience with MaxMSP was minimal. As a whole it takes in OSC messages from the Twister device and generates note and controller MIDI messages for Ableton Live. We started by porting our own data into the UDP receiver to get an idea of what each component was doing. The gloves could sweep through scales with the motion of one hand, which we could switch between with the rotation of the other. We kept editing components to try to gain stronger control of the instruments, which developed our understanding of the patch along with MaxMSP in general. This gradually allowed us to replace larger parts of the patch ourselves to suit our needs, and eventually got us to the point where we managed to construct a new patch from scratch to control our Live set.

ADDA patch:

Screen Shot 2016-04-29 at 16.30.52

Full patch text on GitLab

  • Two main operator instruments were separated by MIDI channels in Ableton
  • Gestures from Wekinator would initialize the synths, assigning notes and velocities based on random pitch selection within a set complementary range, with one channel mainly dealing with lower notes and the other with higher ones
  • Notes could be layered onto one another or killed with a neutral gesture (a rough sketch of this trigger/layer logic follows the list)
  • Gestures were also used to select which filters to change in the return tracks
  • Once selected, the device could be initialized and altered with the appropriate movements from the performer
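
The actual logic lives in the Max patch, but the trigger/layer/kill behaviour described above is roughly equivalent to the C++ sketch below. The gesture IDs, note ranges, layer limit and the print-based note helpers are illustrative placeholders rather than the patch’s real values.

#include <cstdio>
#include <cstdlib>
#include <utility>
#include <vector>

// Stand-ins for the MIDI messages the Max patch sends to Ableton Live.
void sendNoteOn(int channel, int note, int velocity) {
    std::printf("note on  ch %d note %d vel %d\n", channel, note, velocity);
}
void sendNoteOff(int channel, int note) {
    std::printf("note off ch %d note %d\n", channel, note);
}

std::vector<std::pair<int, int>> heldNotes;      // (channel, note) pairs currently sounding
const int MAX_LAYERS = 4;                        // assumed layering limit

// Called whenever Wekinator reports a new gesture class.
void onGesture(int gestureClass) {
    if (gestureClass == 0) {                     // neutral gesture: kill all notes
        for (auto& n : heldNotes) sendNoteOff(n.first, n.second);
        heldNotes.clear();
        return;
    }
    if (heldNotes.size() >= MAX_LAYERS) return;  // respect the layering limit

    // Channel 1 handles the lower drone, channel 2 the higher one
    int channel  = (gestureClass % 2 == 0) ? 1 : 2;
    int low      = (channel == 1) ? 36 : 60;     // placeholder complementary ranges
    int note     = low + std::rand() % 12;       // random pick within the range
    int velocity = 60 + std::rand() % 40;

    sendNoteOn(channel, note, velocity);
    heldNotes.push_back({channel, note});        // layer the new note on top
}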

Controlling the Max patch with Wekinator (via OSC), representing gesture changes combined with glove movement:

Finally we needed a third sound that would trigger on each impulse of the TENS device, complementing the ambient drones but with a percussive impact that would clearly relate to each ‘shock’. This was especially important to sonify the irregular rhythms we’d impose on the muscles, and it would present itself in three different sections. We combined a noise oscillator with an analog percussion instrument in an instrument rack, with a single parameter controlled by a counter in Max. This parameter would gradually increment across the performance (10–15 minutes), starting with a subtle click and ending with a heavily overdriven ‘bang’ resembling a harsh knock.
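
The exact counter settings live in the Max patch, but the idea is simply a single value that creeps upwards with each shock over the 10–15 minute performance, along the lines of this sketch (the step size and range are assumptions):

// Each shock nudges one value upwards, so the rack parameter moves the sound
// from a subtle click at the start towards the overdriven 'bang' at the end.
int shockCounter = 0;

int nextShockIntensity() {
    const int step = 2;                          // assumed increment per shock
    shockCounter += step;
    if (shockCounter > 127) shockCounter = 127;  // clamp to the usable parameter range
    return shockCounter;                         // value fed to the instrument rack parameter
}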

Demoing/checking sound features by changing values manually in Max:

Full Setup:

IMG_20160426_201008


Performance

Practicing with the setup:

PROGRESS PLAYLIST

The initial rehearsals were mostly an introduction to the new sound interaction design. We worked on initializing and switching between notes, adding filters and sequencing shocks.

  • In the second demo video the noise gate effect was found to sound effective but uncomfortable to hold; we re-calibrated the mapping of the gyro twists to allow easier interaction. We could then alter the range of values taken by the parameter to suit the value changes
  • During rehearsal we found certain effects, such as the ‘rate’ of the phaser/flanger gate, didn’t suit the moody tone set by the rest of the sound and made the mix generally muddy, so we kept this constant
  • Sometimes notes would just ‘stick’: they wouldn’t die and other interactions would no longer respond. We got around this by cleaning up our trigger/layer code to keep it from happening. Extra notes could be layered up to a given limit, but one gesture would kill all notes
  • After going through the main sound interaction we got a chance to try sequencing our ‘shocks’ from the TENS device. We were able to time bursts of muscle stimulation on either arm
  • We got feedback from our peers on when these shocks were most visually powerful in the performance, finding for a start that the shock pads were most effective around the tricep and shoulder area, giving the most impact across the whole body whilst avoiding excessive discomfort to the performer
  • We were also told that the stimulations would appear most unnatural when the performer’s own fluid movements were suddenly interrupted. This was also the case during our rhythmic phasing sections, where we’d impose irregular rhythms and the arms would go slightly out of sync with one another

Practicing in SIML:

Screen Shot 2016-04-29 at 18.10.44

  • We got access to the SIML to practice and set up prior to the final performance evening
  • Learning how to use the 12-point sound system opened a lot of doors for us; we were able to separate our different components across different locations in the room
  • We panned our channels between the front and back of the room. Our effect chains would pan glistening textures behind the audience whilst the percussive parts would come from the very front of the room, creating a whole-room dynamic
  • From here we did an iterative progression of rehearsals, going through the entire piece and discussing suitable tweaks up until the final performance

Basic performance structure:

IMG_20160430_114956

The progression would allow the audience to gradually gain an understanding of how the sounds were being controlled, whilst the muscle stimulation techniques became more intense.

For our night of performance we invited other people to showcase work of a similar nature. Each work portrayed the process of digitally processing human interactions to generate feedback, and certain technicalities of each piece were abstracted from the audience. This created an appropriate atmosphere for the individual pieces, which complemented one another.

http://www.gold.ac.uk/calendar/?id=9876ADDAevent 


Issues Encountered

  1. Accelerometers on their own didn’t give a proper indication of when the hands weren’t moving (more detail above); we needed to use the gyroscope values to detect movement.
  2. The ofxMidi addon didn’t compile properly, so we needed to use another program to send MIDI messages to Ableton Live.
  3. The shocks didn’t affect the movements of the arms enough to make any audible changes to the initial sound mechanic, so we needed to send a MIDI message every time a shock was sent. This turned out to create an interesting dynamic for the sound as a whole; however, there was some ambiguity about how the sounds were being triggered.
  4. With most of the devices hidden from the audience, we found it wasn’t clear to everyone what was happening when the performer was being shocked. However, this created quite an interesting effect, as sounds would play just a fraction of a second before the movement happened. It was apparent that there was an element external to the gloves that had an impact on the sound, but thorough engagement with the performance would provide a better understanding of the technicalities.
  5. Throughout the development we rapidly switched between ideas for visuals to suit the concepts behind the performance. These ranged across a number of ambitious prototypes, including a deformed human model interacting alongside the performer, simple shapes animating with the music, and presenting an alternative control sequencer live to the audience. However, we found that this would be overkill for audience stimulation and would reveal technicalities that were more effective when abstracted.
  6. In the final week of production (including the day of the performance), some flex sensors stopped working, which limited our range of gesture controls. Luckily our minimal interaction design meant we had just enough data from the gloves to control the system comfortably.

Project Evaluation

Despite going against a lot of our original plans and having to try out many ideas, we’re definitely happy with what we got to show. The performance was engaging, presenting a range of interesting sound textures with a dramatic progression that held interest through a convincing and reliable control mechanism. Designing and building our own controllers allowed them to be suited to our technical interactions and performance environment. Our iterative style of prototyping led to the production of two high-quality sensor gloves reliable enough for our performance.

The more abstract approach allowed the audience to raise questions about the performance and also guided them towards generating ideas around our initial concepts of human constraint. This ambiguity created tension between the audience and the performer.

Nevertheless, we could have avoided the stress and confusion about how the final piece would play out. We went in with some set ideas about presenting a case of artificial constraint, but stayed open to flexible experimentation, particularly with the sound design and visual elements. This allowed us to adapt to whatever best suited us throughout production; however, we weren’t in sync with each other, as our end goals were either conflicting or unclear. This was an apparent problem, and meant our representative rehearsals weren’t ready until four days before the performance. A better approach might have been to have a number of different visions that all stuck to one major notion. These could then have been prototyped and presented to an audience earlier on, so we could gain valuable feedback throughout the production process and work towards a clearer ambition.

With that said, we feel there’s solid potential with our project and plan to develop it further for future shows.


Git Repository

Gitlab Link


Software Used

  1. Teensy
  2. Firmata Library
  3. openFrameworks
  4. Wekinator
  5. MaxMSP
  6. Ableton Live
  7. MPU6050 library by Kris Winer

References

  1. Twister – Dr. Giuseppe Torre
  2. Laetitia Sonami – Lady’s Glove
  3. Imogen Heap – Mi.Mu gloves
  4. Mi.Mu gloves dev blog – dev-blog.mimugloves.com