Cycle is a series of technology-assisted performances incorporating robotics and sound. It was inspired by interrelated concepts from Graphic Notation and East Asian Calligraphy/Ink Wash Painting. In each unique recurrence, Cycle explores the theme of spontaneity and individuality emerging within a structured framework as the performers present their own interpretation of a set of instructions.
Each performance lasts approximately three minutes, give or take a minute; the performers end it at their own discretion. During the performance, a sole performer walks around the ‘Ink Stick Rotation Machine’ (ISRM) in a seemingly undefined way. The ISRM grinds an ink stick on an ink stone according to how the performer walks. The ambient sounds and vibrations generated by the constant moving contact between the ink stick and ink stone are picked up in real time by microphones at the sides of the ink stone and amplified through speakers.
Because the performer interprets a set of rules constructed by the graphic score’s composer, control over the manner of performance is removed from the composer’s authority, alluding to a spontaneous creation of the performance that is left to ‘chance’. Unlike music represented in traditional notation, different performances of one graphic score do not share the same melody, yet they still articulate similar notions expressed in the score. In Ink Wash painting, despite the rules governing posture, the way of holding the brush, and practiced strokes, the results cannot be fully controlled by the painter and remain unpredictable due to human error and the nature of ink and water – their interaction takes on a life of its own.
The audience sees and listens – nothing tangible comes out of watching the performance. Even if the audience does not understand the concepts implicated in this work, which require some background knowledge about the act of grinding an ink stick, to experience Cycle they merely have to practice being in a state of calmness and ambiguity. Just as a painter or calligrapher prepares ink by manually grinding the ink stick to still their flow of thoughts, the audience can momentarily forget about the things happening outside of the performance and just watch and listen. The performance is both a ‘performance’ and a non-religious ‘ritual’ at the same time. The feeling is like that of a non-Buddhist listening to the chants of Buddhist monks: strangely calming, yet it can become grating when one listens to an unintelligible language for too long.
For the performers, I hope that they can be in a world of their own, not minding the presence of the audience and focusing on their body walking in a circular path; yet I can imagine that they would be nervous in front of an audience, especially if they are performing for the first time. As a recurring theme in my work, ‘walking’ is a simple movement that can be both uninteresting and a distraction. It refers not only to the bodily action of moving your legs as a mode of transport but also signifies the act of repetition, which is structural, and the mundane. After walking a few times, the performer may build up a personal routine or choose to walk in a different manner each time.
After my research on Graphic Notation and East Asian Ink Wash Paintings, I drew connections between these two distinctively different genres of art and identified the overlapping characteristics that my artwork attempts to embody conceptually. I likened graphic notation to instructions that are open-ended yet specific in certain ways; hence, I decided to create a performance built on the same kind of framework.
Borrowing the motif of ink grinding, which is in itself the stage that happens before the actual painting is executed, and combining it with the imagined sound that graphic notation alludes to, I made the ISRM a framework for the performers. The performers’ actions are translated into 26 rotation speeds and merely two rotating directions on the ISRM. Within the structure of the ISRM itself, I also found it ironic to have a physically mechanical device replace the mechanical and repeated motions of ink stick grinding. I was unsure at the beginning of the exact sound that would be produced, as the amplified sound would be quite different from the tiny scratching noise that I am familiar with when grinding ink. With the addition of the sound of the motor, I thought that the result would be a nice hybrid between organic and inorganic materials.
In the late 1950s and the first half of the 1960s, many prominent international avant-garde composers such as Roman Haubenstock-Ramati, Mauricio Kagel, and Karlheinz Stockhausen, as well as experimental composers such as John Cage, Morton Feldman, and Christian Wolff started to produce graphic scores that used new forms of notation and recorded them on sheets that were very divergent from traditional music notation in size, shape, and colour. This new way to convey ideas about music alters the relationship of music/sound to the composer and musician. “In contrast to scores with traditional notation, graphic notation emphasized concepts and actions to be carried out in the performance itself, resulting in unexpected sounds and unpredictable actions that may not even include the use of musical instruments.” (Kaneda, 2014)
Here, I focus on how graphic notation evolved from John Cage’s musical practice, and then on Treatise, one of the greatest graphic scores, by Cornelius Cardew.
Influence of Zen Buddhism in Cage’s Work
In Cage’s manifesto on music, his connection with Zen becomes clear: “nothing is accomplished by writing a piece of music; nothing is accomplished by hearing a piece of music; nothing is accomplished by playing a piece of music” (Cage, 1961).
This reads as if a quote from a Zen Master: “in the last resort nothing gained.” (Yu-lan, 1952). Cage studied Zen with Daisetz Suzuki when the Zen master was lecturing at Columbia University in New York. Zen teaches that enlightenment is achieved through the profound realization that one is already an enlightened being (Department of Asian Art, 2000). Thus we see that Cage has consciously applied principles of Zen to his musical practice: he does not try to superimpose his will in the form of structure or predetermination in any form (Lieberman, 1997).
Cage created a method of composition from Zen aesthetics which was originally a synthetic method, deriving inspiration from elements of Zen art: the swift brush strokes of Sesshū Tōyō (a prominent Japanese master of ink and wash painting) and the Sumi-e (more on this in the next section) painters which leave happenstance ink blots and stray scratches in their wake, the unpredictable glaze patterns of the Japanese tea ceremony cups and the eternal quality of the rock gardens. Then, isolating the element of chance as vital to artistic creation which is to remain in harmony with the universe, he selected the oracular I Ching (Classic of Changes, an ancient Chinese book) as a means of providing random information which he translated into musical notations. (Lieberman, 1997)
Later, he moved away from the I Ching to more abstract methods of indeterminate composition: scores based on star maps, and scores entirely silent or with long spaces of silence in which the only sounds are supplied by nature or by the uncomfortable audience, in order to “let sounds be themselves rather than vehicles for man-made theories or expressions.” (Lieberman, 1997)
John Cage: Atlas Eclipticalis, 1961-62
Atlas Eclipticalis is for orchestra with more than eighty individual instrumental parts. In the 1950s, astronomers and physicists believed that the universe was random. Cage composed each part by overlaying transparent sheets of paper over the ‘Atlas Eclipticalis’ star map and copied the stars, using them as a source of randomness to give him note heads. (Lucier, 2012)
In Atlas, the players watch the conductor simply to be apprised of the passage of time. Each part has arrows that correspond to 0, 15, 30, 45, and 60 seconds on the clock face. Each part has four pages of five systems each. Horizontal space equals time; vertical space equals frequency (pitch). The players’ parts consist of notated pitches connected by lines. The sizes of the note heads determine the loudness of the sound. All of the sounds are produced in a normal manner. There are certain rules about playing notes separately, not making intermittent sounds (since stars don’t occur in repetitive patterns), and making changes in sound quality.
Cornelius Cardew: Treatise, 1963-67
After working as Stockhausen’s assistant, Cornelius Cardew began work on a massive graphic score, which he titled Treatise; the piece consists of 193 pages of highly abstract notation. Instead of trying to find a notation for sounds that he hears, Cardew expresses his ideas in this graphic form, leaving their interpretation free, in confidence that his ideas have been accurately and concisely notated (Cardew, 1971).
As John Tilbury writes in Cornelius Cardew: A Life Unfinished (2008), “The instructions were a guide which focused each individual’s creative instinct on a problem to be solved – how to interpret a particular system of notation using one’s own musical background and attitudes.”
“A Composer who hears sounds will try to find a notation for sounds. One who has ideas will find one that expresses his ideas, leaving their interpretation free, in confidence that his ideas have been accurately and concisely notated.” – Cornelius Cardew
In the Treatise Handbook which guides the performer on the articulation of the score, Cardew writes that in Treatise, “a line or dot is certainly an immediate orientation as much as the thread in the fog” and for performers to “remember that space does not correspond literally to time.” (A Young Persons Guide to Treatise, 2009)
East Asian Ink Wash Painting
The Enso, or Zen circle, is one of the most appealing themes in Zen art. The Enso itself is a universal symbol of wholeness and completion, and the cyclical nature of existence, as well as a visual manifestation of the Heart Sutra, “form is void and void is form.” (Zen Circle of Illumination)
Despite the many specific technicalities in Cage’s work, his instructions are all qualitative and open-ended, ultimately leaving it up to the performer’s or conductor’s judgement how to play the piece, as implied by Cardew’s ideas. In a sense, the individuality of each performance of the graphic score by different performers emerges. This is mirrored in Cycle by appropriating the performer’s creation of the Enso. Every painter draws a circle, but every circle is different. With the body and mind engaged in drawing the circle, the circle becomes an allegory of the individual.
The performer not only becomes both the painter and the medium in creating the circle; the performer is also a musician with indirect control of the device that grinds ink – an instrument with a naturalistic sound created by the contact between the ink stick and the ink stone. To quote Cage’s approach to what defines music, “the difference between noise and music is in the approach of the audience” (Lieberman, 1997).
The act of grinding the ink stick becomes a juxtaposition between the ritualistic and the improvised. The ink produced after each performance is also of a different quality each time, as no two performances last exactly the same time, nor can the performers replicate their performance exactly.
The speed and direction of the performer are measured by the built-in sensors of a phone carried by the performer. Communication between the phone and the computer is through OSC. Data from the phone’s orientation sensor and accelerometer is processed by a C++ program on the computer, which maps the speed and direction of the performer to those of the ISRM. The ISRM itself is made up of an Arduino Uno controlling a stepper motor, connected directly to the computer with a USB cable.
Controlling the Stepper Motor with C++
The Arduino part was fairly straightforward, as the Firmata library for the Arduino enabled serial communication with a C++ program. However, there was no stepper library in C++, so I translated the Arduino Stepper library to C++. Working through the technical details of my stepper motor with some trial and error, this is the circuit I used to test controlling the stepper motor through a C++ program.
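Not my original code, but a minimal sketch of what such a translation can look like: the stepping logic mirrors the Arduino Stepper library’s 4-wire sequence, and the injected writePin callback stands in for whatever Firmata digital-write call the host program provides (ofArduino::sendDigital in openFrameworks, for instance – an assumption, not something from the original program).

```cpp
#include <chrono>
#include <cstdlib>
#include <functional>
#include <thread>

class Stepper {
public:
    Stepper(int stepsPerRev, int p1, int p2, int p3, int p4,
            std::function<void(int, bool)> writePin)
        : stepsPerRev(stepsPerRev), pins{p1, p2, p3, p4},
          writePin(std::move(writePin)) {}

    // Speed in RPM, converted to a per-step delay as in the Arduino library.
    void setSpeed(long rpm) {
        stepDelayUs = 60L * 1000L * 1000L / stepsPerRev / rpm;
    }

    // Positive steps turn one way, negative the other.
    void step(int stepsToMove) {
        int left = std::abs(stepsToMove);
        while (left-- > 0) {
            stepNumber = (stepNumber + (stepsToMove > 0 ? 1 : stepsPerRev - 1))
                         % stepsPerRev;
            stepMotor(stepNumber % 4);
            std::this_thread::sleep_for(std::chrono::microseconds(stepDelayUs));
        }
    }

private:
    // The Arduino Stepper library's 4-phase coil sequence for 4-wire motors.
    void stepMotor(int phase) {
        static const bool seq[4][4] = {
            {1, 0, 1, 0}, {0, 1, 1, 0}, {0, 1, 0, 1}, {1, 0, 0, 1}};
        for (int i = 0; i < 4; ++i) writePin(pins[i], seq[phase][i]);
    }

    int stepsPerRev;
    int pins[4];
    std::function<void(int, bool)> writePin;
    int stepNumber = 0;
    long stepDelayUs = 1000;
};
```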
Here’s me testing the program out:
To hold the ink stone, ink stick, and the stepper into a single functional entity, I started off with a preliminary design of a 3D model in Blender, which eventually I was going to 3D print.
I got the idea for the rotation wheel and axis from the driving wheels of steam locomotives, but I was not satisfied with the motion of the rotating mechanism in the first prototype. It caused the ink stick to rotate in a rather awkward manner that did not keep it facing the same direction. I also removed the water tank, as I felt that it was visually obstructive and served no purpose beyond providing the ink stick with water, and I had not figured out a fail-safe method of channeling the water into the ink stone. I thought of using a wick to transfer water from the tank to the ink stone, but the transfer was too slow; a small hole with a pipe dripping water into the ink stone would not work either, because the drip rate changes as the water level, and thus the pressure, in the tank decreases. Leaving the ink stick touching the water for long periods would also damage it, so I scrapped the water tank from then on and decided to add water manually before every performance.
There were many difficulties in getting the holder for the ink stick to fit. I realised that it was never going to fit perfectly, as the dimensions of the ink stick itself were not uniform; one end of the stick could be slightly larger than the other, which made the holder either too loose or too tight when I tried to pass the entire length of the stick through it. I resolved this by making the holder slightly larger and adding sponge padding on the inside so that it would hold the ink stick firmly despite the slight differences in width. The ink stick was shaky when it rotated, so I increased the height of the holder to make it more stable. I also added a ledge on each side of the holder so that rubber bands could be used to push the ink stick downwards as it gets shorter during grinding.
Before arriving at the final design, there were just wheels connected to each other through the rod. The rotation did not work as expected of a locomotive wheel, and I realised it was because the wheel not connected to the motor had no driving force to ensure it spun in the right direction. Therefore, I changed the wheels to gears.
The printed parts did not fit perfectly, and that was not because of wrong measurements; there is a factor of unpredictability in the quality of 3D printing. I tried using acetone vapour to smooth the surfaces of the parts that needed to move independently of each other, but the vapour also swelled the plastic. Since the plastic became more malleable, I could easily shave the parts down with a penknife.
This process was too slow, so I ended up brushing the acetone directly onto the plastic parts and waiting a few seconds for them to soften before using the penknife. Super glue was then used to hold together the parts that were not supposed to move. The completed ISRM:
I used electret microphones connected to a mic amp breakout, then to a mixer for the performance. I had first bought an electret microphone capsule to use with the Arduino, not knowing that the Arduino was not meant for such purposes, nor the microphone for the Arduino.
So, I got another kind that could connect directly to the output, as I did not want to use a regular large microphone, which would look quite ostentatious next to the small ISRM.
Trying to amplify the sound of making ink (sound is very soft because I only had earphones at that time, and I was trying to get the phone to record sound from the earphones):
Sensor Data & Stepper Motor Controls
I initially thought of creating an Android application to send data to the C++ program via Bluetooth, but there was the issue of poor Bluetooth connectivity, especially the range and speed of communication. Hence, I switched to using OSC to communicate the data. Before finally deciding on an OSC app, oscHook, I made an HTML5 web application with Node.js to send the sensor data. It worked well except for speed issues: there was a buffer between moving the phone and receiving the corresponding data that made it rather less than ‘real-time’, and it also sent NaN values quite often, which would crash the program if there were no exception handlers.
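For illustration, here is a hedged sketch of guarding against those NaN values, written against an ofxOsc-style API; the /accel address and the argument layout are illustrative assumptions, not taken from the original program.

```cpp
#include <cmath>
#include "ofxOsc.h"

ofxOscReceiver receiver; // set up once with receiver.setup(PORT)

void pollSensorData() {
    while (receiver.hasWaitingMessages()) {
        ofxOscMessage m;
        receiver.getNextMessage(m);
        if (m.getAddress() != "/accel") continue;

        float ay = m.getArgAsFloat(1); // y-axis acceleration
        if (std::isnan(ay)) continue;  // drop bad frames instead of crashing

        // ...feed ay into the speed mapping...
    }
}
```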
To control the speed of the stepper motor, I mapped the average difference in y-axis acceleration (up and down when the phone is perpendicular to the ground) over the last X values directly to the speed of the motor. Prior to this, I looked at various ways to get the speed and direction of walking, from pedometer apps to compass apps. As different people produced different sensor values with the phone, I created a calibration system that records the mean acceleration when the performer is not moving and when the performer is moving at full speed. This ensures that the stepper can run at all speeds for all performers.
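A sketch of that mapping, with illustrative names rather than the original code: a rolling window of |Δay| is averaged, normalised between the calibrated ‘still’ and ‘full speed’ means, and quantised to the ISRM’s 26 speed levels.

```cpp
#include <algorithm>
#include <cmath>
#include <deque>

class SpeedMapper {
public:
    void calibrate(float stillMean, float fullMean) {
        restLevel = stillMean;
        fullLevel = fullMean;
    }

    // Feed each new y-axis accelerometer reading; returns a speed in 0..25.
    int update(float ay) {
        if (hasPrev) {
            diffs.push_back(std::fabs(ay - prevAy));
            if (diffs.size() > windowSize) diffs.pop_front();
        }
        prevAy = ay;
        hasPrev = true;

        float sum = 0;
        for (float d : diffs) sum += d;
        float mean = diffs.empty() ? 0 : sum / diffs.size();

        // Normalise against the per-performer calibration, clamp, quantise.
        float t = (mean - restLevel) / (fullLevel - restLevel);
        t = std::clamp(t, 0.0f, 1.0f);
        return static_cast<int>(std::round(t * 25));
    }

private:
    std::deque<float> diffs;
    const size_t windowSize = 20; // the "last X values"
    float prevAy = 0;
    bool hasPrev = false;
    float restLevel = 0.01f, fullLevel = 1.0f;
};
```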
Link to Git Repo.
Performance & Installation
Videos of performances were playing on the screen for the second day of Symbiosis. The TV was covered with white cloth on the first day. The ISRM was placed on a white paper table cover with the microphone next to it.
Instructions for Performers
Besides having to run a calibration before their performances, I asked the performers to wear “normal clothes in darker colours” to contrast with the white room walls. I decided not to specifically ask for black, as it felt too formal and intimidating. Although the performance exudes the sense of a ‘ritual’, it was not meant to be solemn or mournful, as are the cultural connotations of fully black clothes in a rather ritualistic setting.
During the performance, the performers were to heed these instructions:
- Walk around the room.
- When you stop, remain still until the sound indicates that the motor is at its lowest speed.
- End the performance about three minutes after the start.
Prior to completing the program that controls the stepper motor, I wanted to attach the phone to a belt and hide it under the performers’ clothes so that they would be walking hands-free. However, I realised that it was quite abrupt to end the performance with the performer simply standing still, as the audience had no indication of whether the performer was pausing or stopping entirely. After discovering that placing the phone parallel to the ground caused the motor (and in turn the sound) to stop in an elegant manner, I decided that the performer would hold the phone (which I covered in white paper to remove the image of a phone) in their hand and place it on the ground to signify the end of the performance.
There was a total of eight performances by three people, Yun Teng, Leah, and Haein. These are videos* of the performances by each of them on the Symbiosis opening night and their thoughts on their experience of performing:
*The lights in the room were off during the day, hence videos of the earlier performances look quite dark. If you do not hear any sound from the video, please turn up the volume.
“Being asked to perform for this piece was an interesting experience. For me, it was seeing how (even on a conceptual level, as it turned out) that my physical movement can be translated through electronics and code into the physical movement of the machine and the audio heard. Initially, although we were given simple instructions to follow and even, to some extent, encouraged to push these instructions, I was at a loss to how to interpret them, and just walked in a circular fashion. I tried to vary the pace, speed and rhythm of my walking in order to create variation, but ultimately fell back into similar rhythms of fast, slow, and fast again. It would have been interesting to perhaps push this even further if the machine was more sensitive to height changes, or arm movements – as a dancer who is used to choreography, this was a challenge for improvisation and exploration. In addition, due to the size of the room, the space was limited and hence the walking could only take place in certain patterns.” – Yun Teng
“At first, the walker was uncertain, distracted and anxious. She explored the link between sound and her unchoreographed strides and expected the connection to be instantaneous and obvious. However, it was not. There were delays and inconsistencies; the electronic and mechanic could not accurately reflect the organic. A slight panic arose from the dilemma of illustrating the artist’s concept to the audience and accepting its discrepancies as part of the performance. Slowly she started to play around with the delay, stopping suddenly to hear the spinning sound trailing on, still at high speed, and waited for it to slow down. Rather than a single-sided mechanical reaction to movement, the relationship between the walker and the machine becomes visible and reciprocal. Rather than just walking, now she also had to listen, to wait, and by doing so interact with the machine on a more complicated level. Through listening, she felt the shadow of her movements played back to her by the machine. The observation sparked contemplation on the walker’s organic presence versus the machine’s man-made existence and the latter’s distorted yet interesting reflection of the former.” – Leah
“The whole practice first was received as confusing and aimless as there was too much freedom for one to explore. It was challenging to perform the same act (walking/running) for more than two minutes. At first, I performed more than four minutes, unable to grasp the appropriate time, but it decreased as I repeated the practice. This repetitive performance was quite meditative and physically interactive with the work that caused me to wonder about the close relationship between myself and sound piece (which changes according to my walking speed). The most pleasant part of the performance was that I got to control the active aspect of the work and directly interact with it.” – Haein
The audience was very quiet, probably so that they could hear the sound, which was very soft even at its loudest. When they first came in, they did not know what to do, as there was no visible sitting area (so I directed them to sit in places that allowed the performer to roam most of the room). It was a huge contrast to the audience interaction of my previous work, as only the performer has a direct interaction with the ISRM. Even so, the ISRM was visibly moving during the performances.
Just hours before the opening night, the ISRM broke (fig. A & B). It was a mistake on my part: I was reapplying super glue (fig. B) to the base, which had somehow loosened from the previous application. In hindsight, I should have made extra parts (I did print extras of certain parts, but not all, and they were of no use when I did not bring them on site, nor were they ‘acetoned’ to fit together). I could not salvage the broken parts, and I knew that I would not be able to reprint them in time. In the end, I slightly altered my work, as the ISRM could no longer function as intended: instead of sticking the microphones to the sides of the ink stone, I stuck them to the stepper motor. The sound no longer had an organic element from the ink stick and ink stone – it was completely mechanical now.
From this project, I have learnt not to limit myself to my tools, but to explore different methods and tools before committing to one in the creation of the work. I had a misconception that 3D printing was the most efficient way; in some ways it was, because the printer was doing the hard work, not me, and I did want to try 3D printing. Still, I should not have let my lack of consideration stop me from using other materials to build the ISRM, such as the traditional way of putting together wood and gears. On the other hand, I do not regret my attempts to build an Android app (which I quickly decided was not worth my time for the simple thing I was trying to accomplish) and a Node.js web application for sending the sensor data from the phone, as these taught me something new even though I did not use them in my final work.
Fortunately, I managed to finish the design of the ISRM and print it out in time, but I feel that I should have focused more on the ISRM instead of the coding in the earlier phase of the project timeline. 3D printing takes a lot of time, as I have experienced through this project, and any botched prints needed to be printed again, as they are rarely salvageable even after hours in the printer. It is also tricky to get the settings right (i.e. infill) such that the printing time is minimised without compromising quality.
Apart from the many technical things, I also learnt how to organise a performance art piece (this was my first), and through making this artwork, many more implications and questions arose from what I created. For the performance, there were many things to keep track of, such as rehearsing with the performers beforehand, the attire of the performers, the schedule of performances, arranging the camera to film for documentation, and managing the audience. In conclusion, despite being unable to carry out the performances as I had originally planned, I am glad that I managed to put together what was left of the entire work even when the ISRM failed to work correctly, and the original intentions behind the artwork remain largely intact.
References & Bibliography
Works Cited in Background Research
A Young Persons Guide to Treatise. (12 December, 2009). Retrieved 2 November, 2015, from http://www.spiralcage.com/improvMeeting/treatise.html
Asian Brushpainter. (2012). Ink and Wash / Sumi-e Technique and Learning – The Main Aesthetic Concepts. Retrieved 2 November, 2015, from Asian Brushpainter: http://www.asianbrushpainter.com/blog/knowledgebase/the-aesthetics-of-ink-and-wash-painting/
Cage, J. (1961). Silence: Lectures and Writings. Middletown, Connecticut: Wesleyan University Press.
Cardew, C. (1971). Treatise Handbook. Ed. Peters; Cop. Henrichsen Edition Limited.
Department of Asian Art. (2000). Zen Buddhism. Retrieved 11 December, 2015, from Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art: http://www.metmuseum.org/toah/hd/zen/hd_zen.htm
Kaneda, M. (13 May, 2014). Graphic Scores: Tokyo, 1962. Retrieved 2 November, 2015, from Post: Notes on Modern & Contemporary Art Around the Globe: http://post.at.moma.org/content_items/452-graphic-scores-tokyo-1962
Lieberman, F. (24 June, 1997). Zen Buddhism And Its Relationship to Elements of Eastern And Western Arts. Retrieved 10 December, 2015, from UCSC: http://artsites.ucsc.edu/faculty/lieberman/zen.html
Lucier, A. (2012). Music 109: Notes on Experimental Music. Wesleyan University Press.
Tilbury, J. (2008). Cornelius Cardew (1936-1981): A Life Unfinished. Copula.
What Ink Stick Should You Choose For Japanese Calligraphy? (2015). Retrieved 3 December, 2015, from Japanese Calligraphy: Modern Japanese Calligraphy inspired in Buddhism and Zen: http://www.theartofcalligraphy.com/ink-stick
Williams, M. L. (1981). Chinese Painting – An Escape from the “Dusty” World. Cleveland Museum of Art.
Yu-lan, F. (1952). A History of Chinese Philosophy. Princeton, New Jersey: Princeton University Press.
Code References & Software
Wobble – By Johan & Cormac
We have created an environmental synthesizer which scans its location and interprets light and topographical information to produce sound. The device, named Wobble, is designed not only to help children understand sound and music making on an abstract level but also to give them the opportunity to modify the device for further experimentation. Wobble is built on the Linux platform running on a Raspberry Pi with an easy-to-understand breakout breadboard. Wobble is powered by a rechargeable battery and a rechargeable speaker. With all the hardware built into the device, it is totally wireless and can be used in any environment to make a wide array of synthesized sound.
The software running on the Raspberry Pi is all written in C++, utilising three libraries which form the bedrock of the program’s architecture. The first and most important of these is WiringPi, which allows access to the GPIO pins of a Raspberry Pi from C++. The second is Maximilian, a C++ audio and Digital Signal Processing (DSP) library which is the linchpin of the audio synthesis. The third and final library is a small user-made library to control the TSL2561 lux sensor from Adafruit.
We also use pulse width modulation to control the speed of the motor spinning the sensors on the device, producing a multitude of effects.
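As an illustration of the idea, here is a minimal WiringPi sketch of PWM motor control; the pin number and duty-cycle range are assumptions for the example, not Wobble’s actual wiring.

```cpp
#include <wiringPi.h>
#include <softPwm.h>

int main() {
    wiringPiSetup();
    const int motorPin = 1;          // illustrative pin
    softPwmCreate(motorPin, 0, 100); // software PWM, range 0..100 (duty %)

    // Ramp the motor up in steps to vary the sensor rotation speed.
    for (int duty = 0; duty <= 100; duty += 10) {
        softPwmWrite(motorPin, duty);
        delay(500); // hold each speed for half a second
    }
    softPwmWrite(motorPin, 0); // stop the motor
    return 0;
}
```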
Our Intended Audience
Our audience is 6 to 10 year old children who are just about to start or are already learning an instrument. Wobble is used for interactive play with music, sound and lights to get children interested in technology and music through interaction. Our intended outcome would be to build something that would engage them musically and technically. They would be able to create music in any environment they wanted on an abstract level. The older children would be able to then continue on their experimentation by modifying the open source code and easy to use breadboard setup to create their own sounds and music.
Our background research had us looking into artists who use their pieces to change the environment, like Andy Goldsworthy, Javier Riera, Jim Sanborn, and Olafur Eliasson. Biomimicry, particularly sonar and ultrasonic mapping of environments, played a large part in our research. Wobble fundamentally grew from the idea of reinterpreting an environment the way a bat does, navigating through environments quite unlike any other mammal.
We had a pretty good idea of how we wanted Wobble to sound. We wanted a device with a very unique synthesized sound, and we wanted that sound to be both interesting and fun for kids. The main way we tested our sounds was to generate pure sine waves and then add envelopes and frequency modulation. To find the sound that we wanted, we used ofxOsc in openFrameworks and oscpack on our Raspberry Pi. Through OSC messages we could communicate between our two devices and play with parameters of the sound output. Moving the mouse in our openFrameworks program mapped its position onto sound parameters inside the code running on our Raspberry Pi. Through this we could interact with and change the sound in real time.
Throughout the project we experimented with the Maximilian library to find the right sound. We started with some simple tests, such as playing specific frequencies when certain keys are pressed, with the intention of later replacing the keyboard input with our sensors. We then added more functionality from the library, such as envelopes (ADSR), filters (hi-res and lo-res), delay, and compression such as a noise gate. We also made our own effects, such as a foldback distortion and a Moog filter, to make the sound even more interesting and fun.
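A hedged sketch of the kind of Maximilian test patch described above: a sine oscillator shaped by an ADSR envelope and run through a resonant lo-res filter. The frequencies, envelope times, and the trigger flag (standing in for a key press, later a sensor) are illustrative.

```cpp
#include "maximilian.h"

maxiOsc osc;
maxiEnv env;
maxiFilter filter;
int trigger = 0; // set to 1 on "key down", back to 0 on release

void setup() {
    env.setAttack(50);    // ms
    env.setDecay(200);    // ms
    env.setSustain(0.6);
    env.setRelease(1000); // ms
}

void play(double *output) {
    double raw = osc.sinewave(220);                   // pure sine test tone
    double shaped = env.adsr(raw, trigger);           // envelope the tone
    double filtered = filter.lores(shaped, 800, 0.8); // cutoff Hz, resonance
    output[0] = filtered; // left channel
    output[1] = filtered; // right channel
}
```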
To start we created the paper prototype of the device for some real world context and to test whether our hardware would fit into it. Below is the first 3D model.
We chose to have the shape in two parts as it was both aesthetically pleasing and economical in regards to space. We wanted the top half to spin while the bottom remained still as a base. The two sketches below show some variations of the proposed mechanics. Eventually we decided on an electronics housing that sat on a lazy susan bearing, driven in a circular motion by an inverted cog and gear system (see below for a demonstration).
We then made a cardboard prototype of the housing to once again test the sizing and aesthetics of the device. Using the Pepakura suite we broke down the 3D model, created in Maya, into a 2D mesh that was printed and glued to the cardboard, then cut, folded, and glued into shape.
The finished cardboard prototype.
The prototype with the electronics testing rig.
Now that we had the basis for our housing complete, we started to assemble the 3D models for the housing and the internal drive system. Below is the final representation of the internal assembly. The small inner cog is mounted on a DC motor attached to the inner electronics housing placed in the centre.
We started 3D printing the housing with clear ABS plastic to make the final product translucent so that light can shine through from inside.
The whole printing process took 44 hours for the final product, not including the failed attempts, which easily added over 15 hours. The printing was spread over three weeks of iteration, testing, and then final printing. The cogs and electronics housing were printed in white, as we were doubtful of getting a full print from the clear plastic we had left.
Some of the pieces were too large for the printer bed, so they had to be sectioned. The pieces were then welded back together with a plastic adhesive.
Once the two bottom sections were printed we could start assembling. Below is the lazy susan bearing placed at the bottom of the device.
Below is the first iteration of the ultrasonic sensor test, done with an Arduino instead of a Raspberry Pi. The approach for the Pi, however, was the same: we knew we wanted to use the breadboard so that any user could modify their own Wobble to suit their project.
Below are all of our sensors and our motor wired into the breadboards for testing. As you can see, on the right breadboard we are using a Pi Cobbler to extend the Pi’s GPIO pins onto the breadboard.
Here is the current working model of the breadboard with the essential circuitry wired in with short wires to sit flush with the boards – a needed improvement on the above circuitry.
The first version of the center piece. This one is much smaller in diameter and has the small, not-so-powerful motor mounted on the outside of the center piece.
The video below shows the new center piece, which is much larger, with the new larger motor mounted on the inside through a hole in the center piece.
Here is Wobble with the top on.
Here are all the components set up for testing.
Here is the full setup placed inside the center piece.
Below is an image of the first test with the LED lights and the Arduino.
Links to executables
Link to video of the first test with the new motor: https://youtu.be/yrZ8t-VzUTo
Test with lights and motor: https://youtu.be/M77j2RyFHrw
Final test: https://youtu.be/xQt04OLUIRA
Our Building Process
When approaching the build of Wobble, we knew that we wanted to reach a minimum viable product, or at least a working prototype, that could showcase our skills. Because of this we leapt headfirst into building methodically, with the intent of hitting our weekly milestones.
Even though one of our main features was to have Wobble spin, we couldn’t actually test the spinning mechanism until very near the end of the build, once we had finished our final 3D printed parts and sourced all of our hardware. Knowing this, we had to take a leap of faith while waiting for this vital piece of hardware. By planning for the delay, we could keep it in mind when designing the other features of the device, making sure that the end result could be driven by the motor.
One of the first issues we encountered was getting the initial data from the sensors we were using. Both the ultrasonic and lux sensors were extensively documented for Python and the Arduino IDE, but there were no C++ libraries built for our intended use. After much research, we found two user-made libraries for the sensors, which we modified to suit our needs by extracting only the data that we were going to use and turning some of the code into classes. The real saving grace was WiringPi, the C++ GPIO access library: once installed on the Pi, we could use any pin as easily as if we were developing in Python.
A major issue then sprang up with the ultrasonic sensors. The calculated distance data was received and printed to our console for testing, but after an indeterminate amount of time – from five to twenty seconds – the console would print a number unlike any other, usually far larger than expected, and then nothing else. What was happening was that the distance-gathering function was being called so often, more than once every couple of milliseconds, that the sensor was being physically overloaded trying to measure the many signals it was receiving. To combat this, we implemented a timer that measured the seconds elapsed and an if statement that only let the sensor update its distance every 0.2 seconds. This interval is arbitrary; it can be as fast or slow as you like, up to the point where the sensor overloads itself.
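A sketch of that rate-limiting fix (class and method names are illustrative, not the original code): the sensor is only polled when at least the chosen interval has elapsed, so it is never flooded with trigger pulses.

```cpp
#include <chrono>

class RateGate {
    using Clock = std::chrono::steady_clock;

public:
    explicit RateGate(double minIntervalSec) : interval(minIntervalSec) {}

    // Returns true at most once per interval; call before polling the sensor.
    bool ready() {
        auto now = Clock::now();
        if (std::chrono::duration<double>(now - last).count() >= interval) {
            last = now;
            return true;
        }
        return false;
    }

private:
    double interval;
    Clock::time_point last{}; // defaults to the epoch, so the first call passes
};

// Usage in the polling loop:
//   RateGate gate(0.2);
//   while (running) { if (gate.ready()) distance = sensor.readDistance(); }
```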
Another issue we came across near the end of our build was that, with everything mounted together, our DC motor was much too weak to pull the weight of our components. We also found that our first centerpiece was too small to fit all the components; if we had kept this print, we would have had great trouble fitting everything inside it. So we decided to do one more print where the centerpiece had a much larger diameter, giving us much more space for all of our components.
Our project was semi-successful. It turned into a strong first prototype that demonstrates the ideas and the technology used to great success. The 3D printed housing came out fully and as we intended with very few flaws. The construction is solid and the look is very aesthetically pleasing. The sound of the motor and the bearing spinning is relatively low and doesn’t detract from the overall experience.
The sound can be manipulated very easily and intuitively even on first use. The fast response and visible sensors allow the user to understand how Wobble works at first spin. The wirelessness of Wobble is by far one of the best selling points of the device; being able to take Wobble with you wherever you go is a boon when testing it out in unique environments. We would have limited our users greatly if we had not gotten to this point of completion.
Some of our success was marred by running out of time to finish some of the hardware features, the first of which was fully integrating lighting into the device. We were trying to install programmable LED lights that would change with the audio produced by Wobble. Instead, we connected them to an Arduino – a quick implementation so that we could show the aesthetics of the lights even without the functionality.
The second feature that did not make the final prototype was allowing the user to control the speed of the motor. This was in part due to the original motor used for testing not being powerful enough, and in part due to pulse width modulation leaving the motor with too little torque at slower speeds. To combat this we sourced a physically geared-down motor that runs at a steady 10 rpm; however, we had to wire it into a power source to make it run constantly.
All in all it was a successful first try at the device and there are clear improvements to be made. We could redesign the assembly to make the device smaller and we would like to fix the issues with the lights and motor so that they are both controlled by the Pi. We could even give Wobble more life by letting it record the data it takes in from the environment to create a pseudo memory. This memory could then be mimicked by other users to hear particular environments.
Reference & Bibliography
WiringPi – http://wiringpi.com
Maximilian – https://github.com/micknoise/Maximilian
Pepakura – http://www.tamasoft.co.jp/pepakura-en/
Autodesk 123D – http://www.123dapp.com/design
Digital Signal Processing – http://musicdsp.org/archive.php
By William Primett and Vytautas Niedvaras
ADDA is a musical performance that uses embodied technologies and muscle stimulation hardware. We have made a pair of sensory gloves that track the movement of the performer’s hands along with the relative position of their fingers; we use this data to detect different types of motion and gestures. Used in conjunction, the gloves let the performer control the sound output. We also control a TENS muscle stimulator, which overrides the control of the performer when current is imposed.
With our initial project proposal we were planning to explore quite a lot of subjects at the same time (fetish, control, tensions between natural and enhanced/altered human expression, collective fears and expectations of the future).
The final piece was built through an action-reaction approach. By working on and continuously questioning the initial brief, we remoulded our initial intentions into something that, we felt, bears the starting energy and feel but is stripped of any gimmicks or distractions.
With ADDA (analog to digital, digital to analog) we are exposing an ongoing dialogue between the physical and digital realms. By creating a system composed of embodied technology and computer-controlled muscle stimulation devices, we are able to place the performer in a setting where the digital system has direct access to the performer’s human body hardware.
Hardware: The glove
First prototype of the glove, using a Teensy 3.2 microcontroller, custom flex sensors by Flexpoint, and an MPU6050 3-axis accelerometer/gyro. At this point the glove hardware was fully functional; we were getting a continuous flow of data from the flex sensors and the MPU6050.
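A minimal Teensy sketch of the kind of continuous polling described (the analog pin assignments and the serial format are illustrative, not the original firmware): the flex sensor readings are streamed over serial for the host program to read.

```cpp
// Illustrative wiring: five flex-sensor voltage dividers on analog pins.
const int flexPins[5] = {A0, A1, A2, A3, A4};

void setup() {
    Serial.begin(115200);
}

void loop() {
    // Stream one space-separated line of flex readings per frame.
    for (int i = 0; i < 5; i++) {
        Serial.print(analogRead(flexPins[i]));
        Serial.print(i < 4 ? ' ' : '\n');
    }
    delay(10); // roughly 100 Hz update rate
}
```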
Even though the system was working, we had to develop it further to be usable in a performance setting. We started by making a pair of custom gloves that would house all of the hardware components.
After some research we came across a list of tutorials (including a whole playlist of YouTube videos on sensory glove making) by Hannah Perner-Wilson. In these tutorials Hannah goes through different sewing techniques as well as selecting the right fabric and sensors.
We started by downloading, printing, and cutting the glove pattern from http://dev-blog.mimugloves.com/step-2/
We used pins to secure the fabric onto the pattern before cutting.
We began running into problems shortly after that. The sewing machine we were using would tear and crease the fabric. To solve this we used tissue paper as a fabric support for sewing. This prevented damage to the fabric and made it much more stable; however, it was practically impossible to remove all of the paper after the pieces were sewn together.
After some research we found a solution. Dissolvable fabric support for machine embroidery!
A test piece with dissolvable fabric support before and after washing:
We used this technique to sew “veins” onto the top of the glove:
Pinning a glove piece to the fabric support before sewing:
The piece after sewing the veins onto the top of the glove with dissolvable support fabric:
The same piece after washing it:
While making the glove we experimented with a few different materials. The first finished glove piece was made entirely of power mesh. Unfortunately it was not flexible/stretchy enough, so for the final design we ended up using spandex for the bottom part of the glove.
Sewing the top glove piece (power mesh, with veins sewn on top) and the bottom (spandex) together:
Next we had to make a brace that would hold the Teensy and MPU6050. We made it from neoprene backed with some non-stretch fabric. For additional durability, the wires connecting the Teensy, MPU6050, and flex sensors were firmly fixed to the fabric by zigzag stitching.
Finished brace, with Teensy and MPU6050 in place, ready for soldering:
Circuit diagram for connecting the Teensy, MPU6050, and flex sensors:
Preparing the flex sensors by removing the stock connector and soldering on wires:
Each flex sensor needed two pull-up resistors (the blue wires coming from the resistors go to ground, the orange ones to the Teensy’s analog in):
Finished brace with all of the hardware soldered together:
Nearly finished glove, fully working hardware:
Hardware: The Zapper
For muscle stimulation we needed a safe and reliable device that could be controlled by an openFrameworks application.
We used a medical-grade TENS (transcutaneous electrical nerve stimulation) device, an Arduino, and four solid-state relays. Normal relays make a clicking noise whenever their state changes, which would have interfered with the performance.
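A hedged sketch of how the Arduino side of such a system might gate the four relays (pin mapping, serial protocol, and burst length are all illustrative assumptions): the host sends a channel index over serial and the relay passes the TENS output through for a short burst.

```cpp
// Illustrative wiring: one solid-state relay per TENS channel.
const int relayPins[4] = {2, 3, 4, 5};

void setup() {
    Serial.begin(9600);
    for (int i = 0; i < 4; i++) {
        pinMode(relayPins[i], OUTPUT);
        digitalWrite(relayPins[i], LOW); // TENS channel disconnected
    }
}

void loop() {
    if (Serial.available()) {
        int ch = Serial.read() - '0';          // '0'..'3' selects a channel
        if (ch >= 0 && ch < 4) {
            digitalWrite(relayPins[ch], HIGH); // let the stimulation through
            delay(200);                        // burst length (illustrative)
            digitalWrite(relayPins[ch], LOW);
        }
    }
}
```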
Using this approach we managed to avoid interfering with the actual device. Testing the system with an oscilloscope:
To make this system more stable we have soldered everything together.
Data and Sound Design Process
We needed to obtain clean data so that movement and gesture changes would be accurate and reliable when controlling the instruments and effects; the performer was essentially giving the illusion that the gloves were an instrument of their own, with the digital mechanisms fully abstracted from the audience.
Before analysing any data we wanted to use the accelerometer values from each hand to trigger sounds. However, we found that the raw values were very noisy, and realised that the readings were relative to gravity. The main problem was that the values wouldn’t indicate when the hands weren’t moving, and we needed some sort of trigger state.
We presented our work in progress to one of our tutors who specialises in embodied controllers, Rebecca Fiebrink, to get pointers on embodied sensor data. She confirmed that our findings so far weren’t unusual and that we’d have to thoroughly analyse the data with our own eyes and ears, linking the actions that caused undesired changes in the data. We graphed and filtered all our values in openFrameworks and found the gyroscope values were a lot smoother in general, and didn’t change when the hand was stationary. The values give rotations about 3 axes that are already relative to the sensor’s position in space. By combining the rotation changes of all 3 axes over time, we managed to get values that quite accurately represent the speed of the hand when moving. With the sensor placed on the front of the wrist, we had a strong indication of general movement.
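A sketch of that movement measure (names and the smoothing constant are illustrative): the three gyroscope rates are combined into a single magnitude and smoothed, giving a value that tracks hand speed and settles to zero when the hand is still.

```cpp
#include <cmath>

float handSpeed = 0.0f;

// Call once per sensor frame with angular rates about each axis (deg/s).
float updateHandSpeed(float gx, float gy, float gz) {
    float magnitude = std::sqrt(gx * gx + gy * gy + gz * gz);
    const float smoothing = 0.9f; // simple one-pole low-pass
    handSpeed = smoothing * handSpeed + (1.0f - smoothing) * magnitude;
    return handSpeed;
}
```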
Our new flex sensors also gave very clean values. We tested them by sending different hand gestures to Wekinator, which was very responsive to changing states. We wanted one hand to be able to switch through melodic presets and the other to manipulate effect parameters.
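For illustration, a minimal ofxOsc sketch of streaming flex values into Wekinator, assuming Wekinator’s default input port (6448) and /wek/inputs address; whether the original program did it exactly this way is an assumption.

```cpp
#include <vector>
#include "ofxOsc.h"

ofxOscSender wekSender;

void setupWekinator() {
    wekSender.setup("127.0.0.1", 6448); // Wekinator's default listen port
}

// Send one frame of flex-sensor values as Wekinator inputs.
void sendFlexValues(const std::vector<float>& flex) {
    ofxOscMessage m;
    m.setAddress("/wek/inputs"); // Wekinator's default input address
    for (float v : flex) m.addFloatArg(v);
    wekSender.sendMessage(m, false);
}
```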
For our sound output we were interested in Laetitia Sonami‘s more abstract approaches to embodied sound. Minimal sound design is very genuine to the movement of the gloves and is perceived as a unique instrument. On the other hand, the consumer-built Mi.Mu gloves are designed commercially to suit many types of sound control, so it was worth researching some of the interactions used. Of particular interest was the twist-and-turn gesture for changing effect parameters, which would be responsive to our data, easy for the performer, and convincing to the audience.
Wanting to focus on a polished end sound, we had to use something outside openFrameworks to actually generate the audio. We started with the ofxAbletonLive addon, which allows control of Ableton Live tracks and devices through LiveOSC messages. This was useful to begin with: we could easily set up some simple control mechanisms to launch preset MIDI loops (clips) and control filters, allowing experimentation with different base sounds. However, we weren’t convinced by the performer interaction. As mentioned, the clips were all pre-recorded; even assigning single held notes to each clip gave us no real diversity of velocity and duration, which were crucial for producing a range of sonic textures.
After this experimentation, we had a stronger idea of our base sound interactions but wanted stronger control over the virtual synthesizers by sending MIDI messages to Ableton Live. We would work towards a structure where the performer could layer some simple drone-like textures, then add and control heavy filters that would transform these simplistic textures into something more bizarre. The imposed muscle stimulation (yet to be properly practiced at this point) would then disrupt the performer and take control over this mechanism. These ideas formulated from our own practices as well as the works listed above, particularly Laetitia Sonami’s feedback layering with Spring Spyre and Giuseppe Torre’s use of dramatic filters in Twister.
General control mapping:
Twister in particular is a strong example of how embodied controllers end up having an effect on the whole body, which in turn influences the recorded data. We got in touch with Torre earlier in the year, after seeing his piece, to ask about his sound setup. He allowed us to explore his MaxMSP patch and accompanying Live set.
At first it was very daunting to dissect, as our collective experience with MaxMSP was minimal. The patch as a whole would take in OSC messages from the Twister device and generate note and controller MIDI messages for Ableton Live. We started by porting our own data into the UDP receiver to get an idea of what each component was doing. The gloves could sweep through scales with the motion of one hand, and we could switch between scales with the rotation of the other. We kept editing components to try to gain stronger control of the instruments, which developed our understanding of the patch along with MaxMSP in general. This gradually allowed us to replace larger parts of the patch ourselves to suit our needs, eventually getting us to the point where we could construct a new patch from scratch to control our Live set.
full patch text on gitlab
- Two main operator instruments were separated by MIDI channels in Ableton
- Gestures from Wekinator would initialize the synths, assigning notes and velocities based on random path selection in a set complementary range, one channel mainly dealing with lower notes, the other with higher.
- Notes could be layered onto one another or killed with a neutral gesture
- Gestures would also be used to select filters to change in the return tracks.
- Once selected, the device could be initialized and altered with the appropriate movements from the performer
Controlling the Max patch with Wekinator (via OSC), representing gesture changes combined with glove movement:
Finally we needed a third sound that would trigger upon each impulse of the TENS device, one that complemented the ambient drones but had a percussive impact clearly relating to each ‘shock’. This was especially important to sonify the irregular rhythms we’d impose on the muscles, presented across 3 different sections. We combined a noise oscillator with an analog percussion instrument in an instrument rack, with a single parameter controlled by a counter in Max. This parameter would gradually increment across the performance (10–15 minutes), starting as a subtle click and ending as a heavily overdriven ‘bang’ resembling a harsh knock.
demoing/checking sound features by changing values manually in Max:
Practicing with setup:
The initial rehearsals were mostly an introduction to the new sound interaction design. We worked on initializing and switching between notes, adding filters, and sequencing shocks.
- In the second demo video, the noise gate effect was discovered to sound effective but be uncomfortable to hold; we re-calibrated the mapping of the gyro twists to allow easier interaction. We could then alter the range of values taken by the parameter to suit the value changes.
- During rehearsal we found certain effects, such as the ‘rate’ of the phaser/flanger gate, didn’t suit the moody tone set by the rest of the sound and made the mix generally muddy, so we kept this constant.
- Sometimes notes would just ‘stick’: they wouldn’t die, and other interactions would no longer respond. We cleaned up our trigger/layer code to keep this from happening. Extra notes could be layered up to a given limit, but one gesture would kill all notes.
- After going through the main sound interactions we got a chance to sequence our ‘shocks’ from the TENS device, timing bursts of muscle stimulation to either arm.
- We got feedback from our peers on when these shocks were most visually powerful in the performance, finding for a start that the shock pads were most effective around the tricep and shoulder area, giving the most impact across the whole body whilst avoiding excessive discomfort to the performer.
- We were also told that the stimulations appeared most unnatural when the performer’s own fluid movements were suddenly interrupted. This was also the case during our rhythmic phasing sections, where we’d impose irregular rhythms and the arms would go slightly out of sync with one another.
Practicing in SIML:
- We got access to the SIML to practice and setup prior to the final performance evening
- Learning how to use the 12-point sound system opened a lot of doors for us; we were able to separate our different components across different locations in the room
- We panned our channels between the front and back of the room. Our effect chains would pan glistening textures behind the audience whilst the percussive parts came from the very front of the room, creating a whole-room dynamic
- From here we did an iterative progression of rehearsals, going through the entire piece and discussing suitable tweaks up until the final performance
Basic performance structure:
The progression would allow the audience to gradually gain an understanding of how the sounds were being controlled, whilst muscle stimulation techniques would become more intense.
For our night of performance, we invited other people to showcase work of a similar nature: work that portrays the process of digitally processing human interactions to generate feedback, with certain technicalities of each piece abstracted from the audience. This created an appropriate atmosphere in which the individual pieces complemented one another.
- Accelerometers on their own didn’t give a proper indication of when the hands weren’t moving (more detail above); we needed to use gyroscope values to detect movement
- The ofxMidi addon didn’t compile properly, so we needed to use another program to send MIDI messages to Ableton Live.
- Shocks didn’t affect the movements of the arms enough to make audible changes through the initial sound mechanic, so we needed to send a MIDI message every time a shock was sent. This turned out to create an interesting dynamic for the sound as a whole; however, there was ambiguity as to how the sounds were being triggered
- With most of the devices hidden from the audience, we found it wasn’t clear to everyone what was happening when the performer was being shocked. However, this created quite an interesting effect, as sounds would play just a fraction of a second before movement happened. It would be apparent that there was an element external to the gloves that had impact on the sound, but thorough engagement with the performance would provide a better understanding of the technicalities.
- Throughout development, we rapidly switched between visual concepts to suit the ideas behind the performance. These ranged across a number of ambitious prototypes, including a deformed human model interacting alongside the performer, simple shapes animating with the music, and presenting an alternative control sequencer live to the audience. However, we found that this would be overkill for audience stimulation and would reveal technicalities that were more effective when abstracted
- In the last week of production (including the day of the performance), some flex sensors would stop working, limiting our range of gesture controls; luckily our minimal interaction design meant we had just enough data from the gloves to control the system comfortably.
Despite going against a lot of our original plans and having to try out many ideas, we’re definitely happy with what we got to show. The performance was engaging, presenting a range of interesting sound textures with a dramatic progression that held interest through a convincing and reliable control mechanism. Designing and building our own controllers allowed them to be suited to our technical interactions and performance environment. Our iterative style of prototyping led to the production of two high-quality sensor gloves reliable enough for our performance.
The more abstract approach taken allowed the audience to raise questions about the performance and also guided them towards generating ideas around our initial concepts of human constraint. This ambiguity created tension between the audience and performer.
Nevertheless, we could have avoided the stress and confusion about how the final piece would play out. We went in with some set ideas about presenting a case of artificial constraint but stayed open to flexible experimentation, particularly with the sound design and visual elements. This allowed us to be flexible towards what best suited us throughout production; however, we weren’t in sync with each other, as our end goals were either conflicting or unclear. This was an apparent problem, given that our representative rehearsals weren’t ready until 4 days before the performance. Maybe a better approach would have been to have a number of different visions that all stuck to a major notion. These could then have been prototyped and presented to an audience earlier on, so we could gain valuable feedback throughout the production process and work towards a clearer ambition.
With that said, we feel there’s solid potential with our project and plan to develop it further for future shows.