Cycle is a series of technology-assisted performances incorporating robotics and sound, inspired by interrelated concepts from Graphic Notation and East Asian calligraphy and ink wash painting. In each unique recurrence, Cycle explores spontaneity and individuality emerging within a structured framework as the performers present their own interpretations of a set of instructions.
Each performance lasts approximately three minutes, give or take a minute; the performers end it at their own discretion. During the performance, a sole performer walks around the ‘Ink Stick Rotation Machine’ (ISRM) in a seemingly undefined way. The ISRM grinds an ink stick on an ink stone according to how the performer walks. Ambient sounds and vibrations generated by the constant moving contact between the ink stick and ink stone are amplified through speakers in real time via microphones located at the sides of the ink stone.
Because the performer interprets a set of rules constructed by the graphic score’s composer, control over the manner of performance is removed from the composer’s authority; the performance arises spontaneously, left to ‘chance’. Unlike music represented in traditional notation, different performances of one graphic score do not share the same melody, yet they still articulate similar notions expressed in the score. In ink wash painting, despite the rules governing posture, the way of holding the brush, and practiced strokes, the results cannot be fully controlled by the painter; they remain unpredictable due to human error and the nature of ink and water, whose interaction takes on a life of its own.
The audience sees and listens – nothing tangible comes out of watching the performance. Even if the audience does not understand the concepts implicated in this work, which require some background knowledge about the act of grinding an ink stick, to experience Cycle they merely have to practice being in a state of calmness and ambiguity. Just as a painter or calligrapher prepares ink by manually grinding the ink stick to ebb the flow of their thoughts, the audience can momentarily forget about the things happening outside of the performance and just watch and listen. The performance is both a ‘performance’ and a non-religious ‘ritual’ at the same time. The feeling is like that of a non-Buddhist listening to the chants of Buddhist monks: strangely calming, yet potentially grating when one listens to an incomprehensible language for too long.
For the performers, I would hope that they could be in a world of their own, not minding the presence of the audience, and focus on their body walking in a circular path; yet I can imagine that they might be nervous in front of an audience, especially if they are performing for the first time. As a recurring theme in my work, ‘walking’ is a simple movement that can be a source of disinterest and a distraction all the same. It refers not only to the bodily action of moving your legs as a mode of transport but also signifies the act of repetition, which is structural, and the mundane. After walking a few times, the performer may build up a personal routine or choose to walk in a different manner each time.
Through my research on Graphic Notation and East Asian ink wash painting, I have drawn connections between these two distinctly different genres of art and shown their overlapping characteristics, which my artwork attempts to embody conceptually. I likened graphic notation to instructions that are rather open-ended yet specific in certain ways; hence, I decided to create a performance built around such instructions.
Borrowing the motif of ink grinding, which is in itself the stage that happens before the actual painting is executed, and combining it with the imagined sound that graphic notation alludes to, I made the ISRM a framework for the performers. The performers’ actions are translated to 26 rotation speeds and merely two rotation directions on the ISRM. Within the structure of the ISRM itself, I also found it ironic to have a physically mechanical device replace the mechanical, repeated motions of ink stick grinding. I was unsure of the exact sound that would be produced at the beginning, as the amplified sound would be quite different from the tiny scratching noise I am familiar with when grinding ink. With the addition of the sound of the motor, I thought the result would be a nice hybrid between the organic and the inorganic.
In the late 1950s and the first half of the 1960s, many prominent international avant-garde composers such as Roman Haubenstock-Ramati, Mauricio Kagel, and Karlheinz Stockhausen, as well as experimental composers such as John Cage, Morton Feldman, and Christian Wolff, started to produce graphic scores that used new forms of notation, recorded on sheets that diverged greatly from traditional music notation in size, shape, and colour. This new way of conveying ideas about music alters the relationship of music/sound to the composer and musician. “In contrast to scores with traditional notation, graphic notation emphasized concepts and actions to be carried out in the performance itself, resulting in unexpected sounds and unpredictable actions that may not even include the use of musical instruments.” (Kaneda, 2014)
Here, I focus on how graphic notation evolved from John Cage’s musical practice, and then on Treatise, one of the greatest graphic scores, by Cornelius Cardew.
Influence of Zen Buddhism in Cage’s Work
In Cage’s manifesto on music, his connection with Zen becomes clear: “nothing is accomplished by writing a piece of music; nothing is accomplished by hearing a piece of music; nothing is accomplished by playing a piece of music” (Cage, 1961).
This reads as if a quote from a Zen Master: “in the last resort nothing gained.” (Yu-lan, 1952). Cage studied Zen with Daisetz Suzuki when the Zen master was lecturing at Columbia University in New York. Zen teaches that enlightenment is achieved through the profound realization that one is already an enlightened being (Department of Asian Art, 2000). Thus we see that Cage has consciously applied principles of Zen to his musical practice: he does not try to superimpose his will in the form of structure or predetermination in any form (Lieberman, 1997).
Cage created a method of composition from Zen aesthetics which was originally a synthetic method, deriving inspiration from elements of Zen art: the swift brush strokes of Sesshū Tōyō (a prominent Japanese master of ink and wash painting) and the Sumi-e (more on this in the next section) painters which leave happenstance ink blots and stray scratches in their wake, the unpredictable glaze patterns of the Japanese tea ceremony cups and the eternal quality of the rock gardens. Then, isolating the element of chance as vital to artistic creation which is to remain in harmony with the universe, he selected the oracular I Ching (Classic of Changes, an ancient Chinese book) as a means of providing random information which he translated into musical notations. (Lieberman, 1997)
Later, he moved away from the I Ching to more abstract methods of indeterminate composition: scores based on star maps, and scores entirely silent, or with long spaces of silence, in which the only sounds are supplied by nature or by the uncomfortable audience, in order to “let sounds be themselves rather than vehicles for man-made theories or expressions.” (Lieberman, 1997)
John Cage: Atlas Eclipticalis, 1961-62
Atlas Eclipticalis is for orchestra with more than eighty individual instrumental parts. In the 1950s, astronomers and physicists believed that the universe was random. Cage composed each part by overlaying transparent sheets of paper over the ‘Atlas Eclipticalis’ star map and copied the stars, using them as a source of randomness to give him note heads. (Lucier, 2012)
In Atlas, the players watch the conductor simply to be apprised of the passage of time. Each part has arrows that correspond to 0, 15, 30, 45, and 60 seconds on the clock face. Each part has four pages with five systems each. Horizontal space equals time; vertical space equals frequency (pitch). The players’ parts consist of notated pitches connected by lines. The sizes of the note heads determine the loudness of the sound. All of the sounds are produced in a normal manner. There are certain rules about playing notes separately, not making intermittent sounds (since stars don’t occur in repetitive patterns), and making changes in sound quality.
Cornelius Cardew: Treatise, 1963-67
After working as Stockhausen’s assistant, Cornelius Cardew began work on a massive graphic score, which he titled Treatise; the piece consists of 193 pages of highly abstract notation. Instead of trying to find a notation for the sounds he heard, Cardew expressed his ideas in this form of graphic notation, leaving their interpretation free, in confidence that his ideas had been accurately and concisely notated (Cardew, 1971).
As John Tilbury writes in Cornelius Cardew: A Life Unfinished (2008), “The instructions were a guide which focused each individual’s creative instinct on a problem to be solved – how to interpret a particular system of notation using one’s own musical background and attitudes.”
“A Composer who hears sounds will try to find a notation for sounds. One who has ideas will find one that expresses his ideas, leaving their interpretation free, in confidence that his ideas have been accurately and concisely notated.” – Cornelius Cardew
In the Treatise Handbook which guides the performer on the articulation of the score, Cardew writes that in Treatise, “a line or dot is certainly an immediate orientation as much as the thread in the fog” and for performers to “remember that space does not correspond literally to time.” (A Young Persons Guide to Treatise, 2009)
East Asian Ink Wash Painting
The Enso, or Zen circle, is one of the most appealing themes in Zen art. The Enso itself is a universal symbol of wholeness and completion, and the cyclical nature of existence, as well as a visual manifestation of the Heart Sutra, “form is void and void is form.” (Zen Circle of Illumination)
Despite the many specific technicalities in Cage’s work, these are all qualitative instructions which are open-ended, ultimately leaving it to the performer’s or conductor’s judgement how to play the piece, as implied by Cardew’s ideas. In a sense, the individuality of each performance of a graphic score by different performers emerges. Cycle mirrors this by having the performer appropriate the creation of the Enso. Every painter draws a circle, but every circle is different. With the performer bodily and mindfully engaged in drawing it, the circle becomes an allegory of the individual.
The performer not only becomes both the painter and the medium in creating the circle, but is also a musician with indirect control of the device that grinds the ink – an instrument with a naturalistic sound created from the contact between the ink stick and the ink stone. To quote Cage’s approach to what defines music: “the difference between noise and music is in the approach of the audience” (Lieberman, 1997).
The act of grinding the ink stick becomes a juxtaposition of the ritualistic and the improvised. The ink produced after each performance is also of a different quality each time, as no two performances last exactly the same duration, nor can the performers replicate their performance exactly.
The ISRM is built around an Arduino Uno that controls a stepper motor; the Arduino is connected directly to the computer with a USB cable. The performer’s speed and direction are measured by the built-in sensors of a phone carried by the performer, and communication between the phone and the computer happens over OSC. Data from the phone’s orientation sensor and accelerometer is processed by a C++ program on the computer, which maps the performer’s speed and direction to those of the ISRM.
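As a rough sketch of the receiving side – assuming an openFrameworks app with the ofxOsc addon, since the write-up names neither the OSC library nor the message format, and with an invented port and address pattern – the receive loop could look like this:

```cpp
// Hedged sketch of the OSC receive side, assuming openFrameworks' ofxOsc.
// The port number and the "/sensor/accel" address are illustrative; the real
// format depends on the sending app (oscHook).
#include "ofxOsc.h"

ofxOscReceiver receiver;
float accelY = 0.0f;  // latest y-axis acceleration from the phone

void setupOsc() {
    receiver.setup(9000);  // listen on the port the phone sends to
}

void updateOsc() {
    while (receiver.hasWaitingMessages()) {
        ofxOscMessage m;
        receiver.getNextMessage(m);
        if (m.getAddress() == "/sensor/accel") {
            accelY = m.getArgAsFloat(1);  // assume args are (x, y, z)
        }
    }
}
```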
Controlling the Stepper Motor with C++
The Arduino part was pretty straightforward, as the Firmata library for the Arduino enabled serial communication with a C++ program. However, there was no stepper library in C++, so I translated the Arduino Stepper library to C++. After working through the technical details of my stepper motor with some trial and error, this is the circuit I used to test controlling the stepper motor from a C++ program.
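The core of that translation is small. Below is a minimal standard-C++ sketch of the stepping logic – essentially the Arduino Stepper library’s speed formula and four-wire phase sequence – with a writePin() stub standing in for the Firmata digital-write calls; the pin numbers are illustrative, not those of my actual circuit.

```cpp
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>

void writePin(int pin, bool level) {
    // In the real program this would be a Firmata digital-write call.
    std::printf("pin %d -> %d\n", pin, level);
}

class Stepper {
public:
    Stepper(int stepsPerRev, int p1, int p2, int p3, int p4)
        : stepsPerRev_(stepsPerRev), pins_{p1, p2, p3, p4} {}

    // Same formula as the Arduino library: delay between steps in microseconds.
    void setSpeed(long rpm) { stepDelayUs_ = 60L * 1000L * 1000L / stepsPerRev_ / rpm; }

    // Positive = one direction, negative = the other.
    void step(int stepsToMove) {
        int left = std::abs(stepsToMove);
        int dir = stepsToMove > 0 ? 1 : -1;
        while (left-- > 0) {
            stepNumber_ = (stepNumber_ + dir + stepsPerRev_) % stepsPerRev_;
            applyPhase(stepNumber_ % 4);
            std::this_thread::sleep_for(std::chrono::microseconds(stepDelayUs_));
        }
    }

private:
    // Four-wire full-step sequence, as in the Arduino Stepper library.
    void applyPhase(int phase) {
        static const bool seq[4][4] = {
            {1, 0, 1, 0}, {0, 1, 1, 0}, {0, 1, 0, 1}, {1, 0, 0, 1}};
        for (int i = 0; i < 4; ++i) writePin(pins_[i], seq[phase][i]);
    }

    int stepsPerRev_;
    int pins_[4];
    int stepNumber_ = 0;
    long stepDelayUs_ = 1000;
};

int main() {
    Stepper motor(200, 8, 9, 10, 11);  // 200 steps/rev, driver pins 8-11
    motor.setSpeed(60);                // 60 rpm
    motor.step(200);                   // one full revolution forward
    motor.step(-100);                  // half a revolution backward
}
```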
Here’s me testing the program out:
To hold the ink stone, ink stick, and stepper together as a single functional entity, I started with a preliminary design of a 3D model in Blender, which I was eventually going to 3D print.
I got the idea for the rotation wheel and axis from the driving wheels of steam locomotives, but I was not satisfied with the motion of the rotating mechanism in the first prototype: it caused the ink stick to rotate in a rather awkward manner that did not keep it facing the same direction. I also removed the water tank, as I felt it was visually obstructive and served no purpose beyond providing the ink stick with water, and I had not managed to figure out a fail-safe method of channelling the water into the ink stone. I thought of using a wick to transfer water from the tank to the ink stone, but the transfer was too slow; a small hole with a pipe dripping water into the ink stone would not work either, because the drip rate changes as the water level in the tank (and with it the pressure) decreases. It would also damage the ink stick to let it touch water for too long. Hence, I scrapped the water tank from then on and decided to manually add water before every performance.
There were many difficulties in getting the holder for the ink stick to fit. I realised that it was never going to fit perfectly, as the dimensions of the ink stick itself were not uniform; one end of the stick could be slightly larger than the other, which made the holder either too loose or too tight when I tried to pass the entire length of the stick through it. I resolved this by making the holder slightly larger and adding sponge padding on the inside so that it would hold the ink stick firmly despite slight differences in width. The ink stick was shaky when it rotated, so I increased the height of the holder to make it more stable. I also added a ledge on each side of the holder for rubber bands, which could be used to push the ink stick downwards as it gets shorter during grinding.
Before arriving at the final design, there were just wheels connected to each other only through the rod. The rotation did not work as expected of a locomotive wheel, and I realised that this was because the wheel not connected to the motor had no driving force ensuring it spun in the right direction. Therefore, I changed the wheels to gears.
The printed parts did not fit perfectly, not because of wrong measurements but because of the unpredictable quality of 3D printing. I tried using acetone vapour to smooth the surfaces of the parts that needed to move independently of each other, but the vapour also swelled the plastic. The plastic did become more malleable, though, so I could shave the parts down easily with a penknife.
This process was too slow, so I ended up brushing acetone directly onto the plastic parts and waiting a few seconds for them to soften before using the penknife. Super glue was then used to hold together parts that were not supposed to move. The completed ISRM:
I used electret microphones connected to a mic amp breakout, which was in turn connected to a mixer for the performance. I had first bought an electret microphone capsule to use with the Arduino, not knowing that the Arduino is not meant for such purposes, nor was the capsule meant for the Arduino.
So I got another kind that could connect directly to the output, as I did not want to use a regular large microphone, which would look quite ostentatious next to the small ISRM.
Trying to amplify the sound of making ink (sound is very soft because I only had earphones at that time, and I was trying to get the phone to record sound from the earphones):
Sensor Data & Stepper Motor Controls
I initially thought of creating an Android application to send data to the C++ program via Bluetooth, but there was the issue of bad Bluetooth connectivity, especially in the range and speed of communication. Hence, I switched to using OSC to communicate the data. Before finally deciding on an OSC app, oscHook, I made an HTML5 web application with Node.js to send the sensor data. It worked well except for speed issues: there was a lag between moving the phone and receiving the corresponding data that made it not quite ‘real-time’, and it also sent NaN values quite often, which would crash the program without exception handlers.
For controlling the speed of the stepper motor, I mapped the average difference in the y-axis acceleration (up and down when the phone is perpendicular to the ground) within the last X values directly to the speed of the motor. Prior to this, I looked at various ways to get the speed and direction of walking, from pedometer apps to compass apps. As different people produced different sensor values with the phone, I created a calibration system that records the mean acceleration when the performer is not moving and when the performer is moving at full speed. This ensured that the stepper would be able to run at all speeds for all performers.
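A condensed sketch of that mapping and calibration: the names, window size, and calibration levels here are illustrative (the actual values live in the repo linked below).

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <deque>

const std::size_t WINDOW = 20;   // the "last X values" in the text
std::deque<float> diffs;         // recent |delta accelY| values
float lastAccelY = 0.0f;

// Calibration levels, recorded while the performer stands still / walks at full speed.
float restLevel = 0.05f;
float fullSpeedLevel = 2.0f;

const int MAX_SPEED = 25;        // 26 speeds: 0..25

int speedFromAccelY(float accelY) {
    diffs.push_back(std::fabs(accelY - lastAccelY));
    lastAccelY = accelY;
    if (diffs.size() > WINDOW) diffs.pop_front();

    float sum = 0.0f;
    for (float d : diffs) sum += d;
    float avg = sum / diffs.size();

    // Linear map [restLevel, fullSpeedLevel] -> [0, MAX_SPEED], clamped.
    float t = (avg - restLevel) / (fullSpeedLevel - restLevel);
    t = std::min(1.0f, std::max(0.0f, t));
    return static_cast<int>(t * MAX_SPEED + 0.5f);
}

int main() {
    // Fake a few samples: still at first, then vigorous walking.
    float samples[] = {0.0f, 0.02f, -0.01f, 1.5f, -1.4f, 1.6f, -1.5f};
    for (float s : samples) std::printf("speed = %d\n", speedFromAccelY(s));
}
```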
Link to Git Repo.
Performance & Installation
Videos of performances were playing on the screen for the second day of Symbiosis. The TV was covered with white cloth on the first day. The ISRM was placed on a white paper table cover with the microphone next to it.
Instructions for Performers
Besides having to run a calibration before their performances, I asked the performers to wear “normal clothes in darker colours” to contrast with the white room walls. I decided not to specifically ask for black, as it felt too formal and intimidating: although the performance exudes the sense of a ‘ritual’, it was not meant to be solemn or grievous, as are the cultural connotations of fully black clothes in such a ritualistic setting.
During the performance, the performers were to heed these instructions:
- Walk around the room.
- When you stop, stay stopped until you hear the sound that indicates the motor is at its lowest speed.
- End the performance about three minutes after the start.
Prior to completing the program that controls the stepper motor, I had wanted to attach the phone to a belt and hide it under the performers’ clothes so that they would be walking hands-free. I realised, however, that it was quite abrupt to end the performance with the performer merely standing still, as there was no indication to the audience of whether the performer was pausing or stopping entirely. Hence, after realising that placing the phone parallel to the ground caused the motor (and in turn the sound) to stop in an elegant manner, I decided that the performer would hold the phone (which I covered in white paper to remove the image of a phone) in their hand and place it on the ground to signify the end of the performance.
There was a total of eight performances by three people, Yun Teng, Leah, and Haein. These are videos* of the performances by each of them on the Symbiosis opening night and their thoughts on their experience of performing:
*The lights in the room were off during the day, hence videos of the earlier performances look quite dark. If you do not hear any sound from the video, please turn up the volume.
“Being asked to perform for this piece was an interesting experience. For me, it was seeing how (even on a conceptual level, as it turned out) that my physical movement can be translated through electronics and code into the physical movement of the machine and the audio heard. Initially, although we were given simple instructions to follow and even, to some extent, encouraged to push these instructions, I was at a loss to how to interpret them, and just walked in a circular fashion. I tried to vary the pace, speed and rhythm of my walking in order to create variation, but ultimately fell back into similar rhythms of fast, slow, and fast again. It would have been interesting to perhaps push this even further if the machine was more sensitive to height changes, or arm movements – as a dancer who is used to choreography, this was a challenge for improvisation and exploration. In addition, due to the size of the room, the space was limited and hence the walking could only take place in certain patterns.” – Yun Teng
“At first, the walker was uncertain, distracted and anxious. She explored the link between sound and her unchoreographed strides and expected the connection to be instantaneous and obvious. However, it was not. There were delays and inconsistencies; the electronic and mechanic could not accurately reflect the organic. A slight panic arose from the dilemma of illustrating the artist’s concept to the audience and accepting its discrepancies as part of the performance. Slowly she started to play around with the delay, stopping suddenly to hear the spinning sound trailing on, still at high speed, and waited for it to slow down. Rather than a single-sided mechanical reaction to movement, the relationship between the walker and the machine becomes visible and reciprocal. Rather than just walking, now she also had to listen, to wait, and by doing so interact with the machine on a more complicated level. Through listening, she felt the shadow of her movements played back to her by the machine. The observation sparked contemplation on the walker’s organic presence versus the machine’s man-made existence and the latter’s distorted yet interesting reflection of the former.” – Leah
“The whole practice first was received as confusing and aimless as there was too much freedom for one to explore. It was challenging to perform the same act (walking/running) for more than two minutes. At first, I performed more than four minutes, unable to grasp the appropriate time, but it decreased as I repeated the practice. This repetitive performance was quite meditative and physically interactive with the work that caused me to wonder about the close relationship between myself and sound piece (which changes according to my walking speed). The most pleasant part of the performance was that I got to control the active aspect of the work and directly interact with it.” – Haein
The audience was very quiet, probably so that they could hear the sound, which was very soft even at its loudest. When they first came in, they did not know what to do, as there was no visible sitting area (so I directed them to sit at places that allowed the performer to roam most of the room). It was a huge contrast to the audience that interacted with my previous work, as only the performer gets a direct interaction with the ISRM. Even so, the ISRM was visibly moving during the performances.
Just hours before the opening night, the ISRM broke at (fig. A & B). It was a mistake on my part: I was reapplying super glue (fig. B) to the base, which had somehow loosened from the previous application. In hindsight, I should have made extra parts – I did print extras of certain parts, but not all, and they were of no use since I had not brought them on site, nor had they been ‘acetoned’ to fit together. I could not salvage the broken parts, and I knew I would not be able to reprint them in time. In the end, I slightly altered my work, as the ISRM could no longer function as intended. Instead of having the microphones stuck to the sides of the ink stone, I stuck them to the stepper motor. The sound no longer had an organic element from the ink stick and ink stone; it was completely mechanical now.
After undertaking this project, I have learnt not to limit myself to my tools but to explore different methods and tools before settling on one for the creation of the work. I had a misconception that 3D printing was the most efficient way. In some ways it was, because the printer was doing the hard work, not me, and I did want to try 3D printing. Even so, I should not have let a lack of consideration stop me from using other materials to build the ISRM, such as the traditional way of putting together wood and gears. On the other hand, I do not regret my attempts to build an Android app (which I quickly decided was not worth my time for the simple thing I was trying to accomplish) and a web application for sending the sensor data from the phone with Node.js, as these taught me something new even though I did not use them in my final work.
Fortunately, I managed to finish the design of the ISRM and print it out in time, but I feel I should have focused more on the ISRM instead of coding in the earlier phase of the project timeline. 3D printing takes a lot of time, as I have experienced through this project, and any botched prints needed to be printed again, as they are rarely salvageable even after hours in the printer. It is also tricky to get the settings (e.g. infill) right so that the printing time is minimised without compromising quality.
Apart from the many technical things, I also learnt how to organise a performance artwork (this was my first), and through making it, many more implications and questions arose from what I created. For the performance, there were many things to keep track of, such as rehearsing with the performers beforehand, the performers’ attire, the schedule of performances, arranging the camera to film for documentation, and managing the audience. In conclusion, despite being unable to carry out the performances as I had originally planned, I am glad that I managed to put together what was left of the entire work even when the ISRM failed to work correctly, and the original intentions behind the artwork remain largely intact.
References & Bibliography
Works Cited in Background Research
A Young Persons Guide to Treatise. (12 December, 2009). Retrieved 2 November, 2015, from http://www.spiralcage.com/improvMeeting/treatise.html
Asian Brushpainter. (2012). Ink and Wash / Sumi-e Technique and Learning – The Main Aesthetic Concepts. Retrieved 2 November, 2015, from Asian Brushpainter: http://www.asianbrushpainter.com/blog/knowledgebase/the-aesthetics-of-ink-and-wash-painting/
Cage, J. (1961). Silence: Lectures and Writings. Middletown, Connecticut: Wesleyan University Press.
Cardew, C. (1971). Treatise Handbook. Ed. Peters; Cop. Henrichsen Edition Limited.
Department of Asian Art. (2000). Zen Buddhism. Retrieved 11 December, 2015, from Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art: http://www.metmuseum.org/toah/hd/zen/hd_zen.htm
Kaneda, M. (13 May, 2014). Graphic Scores: Tokyo, 1962. Retrieved 2 November, 2015, from Post: Notes on Modern & Contemporary Art Around the Globe: http://post.at.moma.org/content_items/452-graphic-scores-tokyo-1962
Lieberman, F. (24 June, 1997). Zen Buddhism And Its Relationship to Elements of Eastern And Western Arts. Retrieved 10 December, 2015, from UCSC: http://artsites.ucsc.edu/faculty/lieberman/zen.html
Lucier, A. (2012). Music 109: Notes on Experimental Music. Wesleyan University Press.
Tilbury, J. (2008). Cornelius Cardew (1936-1981): A Life Unfinished. Copula.
What Ink Stick Should You Choose For Japanese Calligraphy? (2015). Retrieved 3 December, 2015, from Japanese Calligraphy: Modern Japanese Calligraphy inspired in Buddhism and Zen: http://www.theartofcalligraphy.com/ink-stick
Williams, M. L. (1981). Chinese Painting – An Escape from the “Dusty” World. Cleveland Museum of Art.
Yu-lan, F. (1952). A History of Chinese Philosophy. Princeton, New Jersey: Princeton University Press.
Code References & Software
LED Jumper – Daniel Sutton-Klein and Sebastian Smith
LED Jumper is a 2D platformer by Daniel Sutton-Klein and Sebastian Smith, inspired by “The Impossible Game”, displayed across a 32×16 LED matrix where the only control is to shout to jump. The game takes advantage of the LEDs’ bright visuals to create a neon-esque aesthetic for the player to be immersed in as they progress.
Processing + Teensyduino code: Download
We set out to make a fun and simple game for anyone to play. Since the game is controlled only by the user’s voice, our audience isn’t limited to any particular group of gamers but instead literally anyone that can shout loud enough or make enough noise.
We mainly had to focus on the hardware side of things when it came to background research, as we had little prior experience with physical computing. We added all the relevant information to a Google Docs file and researched as much as we could about the different options of compatible hardware, to ensure that we wouldn’t waste money on anything useless. Below is a snapshot of this document, which shows how we gathered the information for building our LED display.
The concept of our project, a game running on an LED array, dictated all the design that followed. We had some options for the density of LEDs on the strip: 30, 60, or 144 LEDs per metre. Given the sound-input nature of the game – we imagined people shouting “jump” at the display to avoid dying – it seemed appropriate to get the largest display we could. When prioritising the size of the display, it was cost efficient to opt for the 30 LEDs/metre strips. We considered different aspect ratios and arrangements of the LEDs (pixels could be aligned diagonally or in a traditional display grid). Platform games rely on being able to see far ahead of the player, so we decided on an aspect ratio of approximately 2:1, with a regular grid arrangement to keep it simple. The display would be approximately 30×15 LEDs (~100 × 50 cm).
The rest of the design process started with prototyping game concepts and mechanics. Even without knowledge of the technical side of transmitting the game onto the display, we knew that a 2-dimensional data structure emulating the display was the first step in prototyping, as it set up a simple and logical way to set the LEDs once the physical side and libraries were complete. After implementing this, we started working on the core game mechanics that would shape the gameplay and user experience. For jumping, we knew early on that we wanted the user to control the player by simply shouting at the screen, so we used Minim to analyse the amplitude of the audio input, convert it into decibels, and make the player jump while the audio input was above a certain threshold (a sketch of this logic follows below). Below are snapshots of the prototypes showing the first 2D data structure array and the core mechanics implementation.
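In essence, the jump trigger converts the input buffer’s amplitude to decibels and holds the jump while it stays above a threshold. Our actual code used Minim’s analysis in Processing; the C++ sketch below shows the same logic, with made-up buffer and threshold values.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

const float JUMP_THRESHOLD_DB = -30.0f;  // illustrative threshold

bool shouldJump(const std::vector<float>& buffer) {
    float sumSq = 0.0f;
    for (float s : buffer) sumSq += s * s;
    float rms = std::sqrt(sumSq / buffer.size());
    float db = 20.0f * std::log10(rms + 1e-12f);  // avoid log(0)
    return db > JUMP_THRESHOLD_DB;  // hold the jump while above threshold
}

int main() {
    std::vector<float> quiet(512, 0.001f), shout(512, 0.5f);
    std::printf("quiet: %d, shout: %d\n", shouldJump(quiet), shouldJump(shout));
}
```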
From here we built up and focused on developing a playable game in Processing while we waited for our hardware to arrive.
For level design, we created the final level as a PNG image in Photoshop and implemented a way in Processing to use the image data as the level of our game by loading the pixel information. This way we could also easily develop level mechanics by referencing the colour of a given unit and how it affects the player next to it. Below is a snapshot of the final game and the Photoshop file to show how this worked.
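The level-loading idea in miniature: read the PNG one pixel per game unit and classify tiles by colour. Our version did this in Processing with loadImage(); the C++ sketch below uses the public-domain stb_image header instead, and the colour coding is invented for illustration.

```cpp
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include <cstdio>

enum Tile { EMPTY, SOLID, SPIKE };

Tile classify(unsigned char r, unsigned char g, unsigned char b) {
    if (r == 0 && g == 0 && b == 0) return SOLID;     // black = platform
    if (r == 255 && g == 0 && b == 0) return SPIKE;   // red = hazard
    return EMPTY;                                     // anything else = air
}

int main() {
    int w, h, channels;
    unsigned char* px = stbi_load("level.png", &w, &h, &channels, 3);
    if (!px) { std::fprintf(stderr, "could not load level.png\n"); return 1; }
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const unsigned char* p = px + 3 * (y * w + x);
            Tile t = classify(p[0], p[1], p[2]);
            (void)t;  // ...store t in the 2D level array here...
        }
    stbi_image_free(px);
}
```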
Commentary of build process
With the premise of the project decided, and zero experience with LED control or development boards like Arduino, we started by learning the very basics.
Starting small, we borrowed an Arduino Duemilanove, which came with everything we needed to practice small and simple tasks with LEDs. After being able to turn a single LED on and off, we moved on to multiple LEDs flashing sequentially. Knowing that we would be having audio input in the finished project, we tried to make a VU meter (volume unit meter) with the Arduino and LEDs, but soon found out that the microphones that plug into Arduino pins have limitations that might make them problematic for monitoring voices. The other alternative was to use the microphone on a laptop, which we found out required a whole other area of expertise, serial communication, which felt beyond our capability and was advised against in web forums. Overall, our testing with the Arduino was very useful in giving us an idea of how development boards and the Arduino IDE work.
After lots of research, we planned to use the OctoWS2811 library on the Teensy to control the LEDs, as it was designed for the Teensy and came with examples which would allow us to stream the video from a laptop (with the VideoDisplay example on the Teensy, and movie2serial on Processing on a laptop) without much extra work.
When the components arrived (Teensy 3.2, WS2812B strips, 30 A PSU, and the logic level converter to change 3.3 V to 5 V), the first test was to control a single LED. After all of the connections on the breadboard were made (with lots of important connections to the LOW and HIGH rails, for example to set the direction of the logic converter), we ran the OctoWS2811 library on the Teensy and confirmed with a multimeter that the data signal to the LED was 5 V. This was our first time controlling a WS2812B LED.
After the successful test with a single LED, we moved on to the next step and hooked up a short strip of 22 pixels. Using the ‘Basic Test’ example from the OctoWS2811 library, which is meant to change the pixels to a different colour one by one, we saw serious flickering issues: pixels appearing not to update and flashing random colours. Not knowing why this was happening, we tried the test example from an alternative library to OctoWS2811, FastLED, which instantly gave better results. We took this – being able to control a strip of LEDs – as a cue to build the full 32×16 display.
The first step in building the display was to get a piece of wood the right size, which we then painted black. After considering how we could attach the strips, we started measuring out points for holes across the board, which would evenly space the strips in a precise way. We drilled those holes (of which there were a few hundred), then used zipties to secure the strips into place.
The strips have power, data and ground connections, which we soldered to wires that went through holes at the end of each strip. After everything was set up, we loaded the Basic Test for OctoWS2811 again, and despite the initial excitement of seeing all the pixels light up for the first time, we realised there were serious communication issues. The test was meant to light up all the LEDs the same colour, then change the colour of each pixel in order, one by one. Instead of the clean result we hoped for, lengths of strips seemed to not update colour, and many colours would flicker.
At this point we split all the data cables into 2 CAT5 wires instead of 1, the idea being that it would minimise cross-talk and stop interference in the data signal. This helped a bit, but the same problems were still there. With an oscilloscope, we looked at the waveform of the data signal coming out of the Teensy and saw that it wasn’t clean and the timing (which has to be VERY specific) was wrong, in comparison to the graph on the OctoWS2811 library website, which showed us exactly how the waveform should look for the WS2812B chips. Going back to FastLED, we saw a major improvement, and decided that OctoWS2811 was not reliable enough to use.
Although FastLED output better signals, it came with its own problems. It wasn’t designed for the Teensy and only had single-pin output by default. Latency and signal degradation would be expected when trying to control 512 LEDs on a single pin, not to mention it would mean changing all the wiring on the display, so we looked into the different options to output to the 8 pins that were set up. The Multi-Platform Parallel output method described on the FastLED wiki was supposed to do what we needed, but after spending a lot of time on it we just couldn’t get it to work. We then tried the method used in FastLED’s ‘Multiple Controller Examples’, which creates multiple FastLED controllers. This worked, and we were able to light up the whole display with the colours we wanted (see the colour gradient photos).
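Roughly, the multiple-controller approach means one shared framebuffer with a FastLED controller added per data pin, each owning a slice of the array. A minimal sketch (pin numbers are illustrative, not necessarily our wiring):

```cpp
#include <FastLED.h>

#define NUM_STRIPS 8
#define LEDS_PER_STRIP 64          // 2 rows of 32 per physical strip
#define NUM_LEDS (NUM_STRIPS * LEDS_PER_STRIP)

CRGB leds[NUM_LEDS];

void setup() {
    // One FastLED controller per output pin, offset into the shared array.
    FastLED.addLeds<WS2812B, 2, GRB>(leds, 0 * LEDS_PER_STRIP, LEDS_PER_STRIP);
    FastLED.addLeds<WS2812B, 14, GRB>(leds, 1 * LEDS_PER_STRIP, LEDS_PER_STRIP);
    FastLED.addLeds<WS2812B, 7, GRB>(leds, 2 * LEDS_PER_STRIP, LEDS_PER_STRIP);
    // ...and so on for the remaining five pins...
}

void loop() {
    fill_rainbow(leds, NUM_LEDS, 0);  // simple gradient across the whole display
    FastLED.show();
}
```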
The next problem was serial data communication. Our original plan was to use OctoWS2811 to control the LEDs, with the VideoDisplay + movie2serial examples dealing with streaming video from a laptop to the Teensy automatically. Now using FastLED, we would have to write our own code to do this manually. Arduino and Processing both have Serial libraries for serial communication, but after looking at the references and Googling for other people doing similar things, it still wasn’t clear how to make it work. After almost giving up a day before the deadline, a section at the bottom of FastLED’s wiki on ‘Controlling LEDs’ gave us a clue:
Serial.readBytes( (char*)leds, NUM_LEDS*3);
With some trial and error we were able to send serial data to the Teensy from Processing which, using FastLED, successfully (and beautifully) lit up our test strip of 22 LEDs on 1 pin. After that, we moved the whole project upstairs where it would be easier to work on code at the same time, but once we got up there, the whole thing seemed to stop working. We brought up the oscilloscope to see what was happening, and indeed there was no data signal. Stumped, we spent 2 hours trying to figure it out, since when we moved back downstairs it magically worked again. At one point the Teensy stopped responding completely and we feared we’d shorted it, which stumped us for a while again, until we saw that the power supply ground was plugged into the HIGH rail on the breadboard. The next day we tried the working serial communication to control a whole strip of 64 LEDs on the full display from Processing. It didn’t look correct at first, and we were worried about the size limit of the serial receive buffer, but it turned out the Processing sketch has to be stopped before restarting, or the serial data gets cut mid-stream. To make a long story short, after changing and configuring code on the Teensy and in Processing, we achieved serial communication sending data for the full array.
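The Teensy side of that link ends up very small; a sketch built around the readBytes() call quoted above, assuming leds and NUM_LEDS are declared as in the earlier FastLED sketch:

```cpp
// Block until a full frame of RGB bytes arrives from Processing, written
// straight into the CRGB array's memory, then push it to the strips.
void loop() {
    int frameBytes = NUM_LEDS * 3;  // one R, G, B byte per pixel
    if ((int)Serial.readBytes((char*)leds, frameBytes) == frameBytes) {
        FastLED.show();
    }
}
```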
Implementing the Processing serial communication code into the game was fairly straightforward, although due to the layout of the strips (where 1 strip is 2 rows and they zig-zag in direction), the first time we ran the game every second row was reversed. This was fixed with some code to alternate the direction of the LED data sent for each row, sketched below.
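The row-reversal fix in isolation, as a small remap in C++ (assuming even rows run left-to-right, matching the 32×16 layout):

```cpp
#include <cstdio>

const int WIDTH = 32, HEIGHT = 16;

// Map a game pixel (x, y) to its index in the serpentine LED chain.
int ledIndex(int x, int y) {
    if (y % 2 == 1) x = WIDTH - 1 - x;  // odd rows are wired backwards
    return y * WIDTH + x;
}

int main() {
    std::printf("%d %d\n", ledIndex(0, 0), ledIndex(0, 1));  // prints 0 and 63
}
```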
Due to us only getting the display working on the day before the deadline, we didn’t have much time to experiment further to make the game perfect. We did however run tests looking at brightness of the LEDs, as they were very bright, and found that dimming the pixels is most effective between 3/255 and ~20/255, which has potential applications for the game. We also tried putting a sheet in front of the display to diffuse the light, which could look really nice if we built a frame to stretch it around.
For what we wanted to achieve, we certainly accomplished the overall concept of our design by having a playable game work with audio input on our LED matrix. Despite our accomplishments, many of our ideas weren’t realised in the final piece for various reasons. We were not successful in implementing several specific mechanics, such as secret levels activated when the player takes a specific route, as well as noise cancellation and non-linear jumping for a smoother experience. These mechanics weren’t much of a priority, as a usual game session per person would only last a couple of minutes, meaning they wouldn’t heavily alter the gameplay experience.
After all the time we spent getting the game working on the LEDs, there was little time to experiment with different brightness settings to add contrast to our game in certain areas, or to create new colour palettes optimised for the LEDs. As you can see in the video, there were still some flickering issues with the LED display, where random segments of LEDs would turn on for the length of a frame (we confirmed this by changing the frame rate and seeing how the flicker was affected). However, given the amount of flickering we had before, it’s a miracle they weren’t worse, although so far we have been unsuccessful in finding a way to fully eliminate this problem.
Our last success: despite hearing how susceptible our components are to blowing out, and even purchasing spares in anticipation of this, everything remained intact throughout the duration of the project.
I have studied and practiced music from a young age, so music has played a significant role in my creative practice. As a music computing student, I have gained the ability to combine my passion for music with code, which has enabled me to create interesting pieces of work. For my project I chose to create a sketch that would be altered by, and synchronised with, a piece of music. I planned to use experimental visuals inspired by the nightlife music scene and, by analysing the sound of my chosen piece of music, render the sketch as reactive imagery.
When creating my Music Visualiser I wanted to ensure that I had an appropriate and specific audience in mind. I decided to centre my target audience on the nightlife scene: the DJs who could use the visualiser alongside their music performances, and the crowds of ravers who attend these nights. I wanted to use the Music Visualiser to show people the effect of synchronising sound with visuals, and how this can enhance the overall experience of listening to a piece of music. I chose to gear my project predominantly towards people who attend events specialising in electronic music, and the artists who perform there, because I feel that music from this scene responds better to the visual imagery I created than other genres do. The combination of visuals and sound, particularly in electronic and dance music, has proven to enhance feelings of euphoria and perception. I wanted to create something like this through my project, so that the audience could appreciate the numerous ways that computational art and sound can complement one another.
As mentioned, I wanted to create something that would allow me to integrate my interest in music with coding. For the project I had in mind, the music needed to act as a guide for the movement of the visuals. Additionally, I planned to create a piece of code which could then be used at various music events and live performances.
Living in London, with its vibrancy of culture and the variety of music those cultures introduce, there is always an opportunity to experience this diversity yourself. The music and nightlife scene in London is booming, and being a music computing student I take the opportunity to explore these avenues regularly. My interest lies predominantly in electronic music, so the clubs I visit tend to play this genre. What I have noticed is that visuals play a significant part in enhancing and intensifying the audience’s experience of the music. The main focus is usually on the lighting of the dance floor and the environment this creates. A clear example is the club Fabric, whose lighting shows have garnered fame for their complexity and creativity. While researching the use of visuals at Fabric, I came across an interview with Dave Parry, the sound and lighting maestro of the club, who made the point that his aim was always to “manipulate the sonic and visual aspect of a club to…take them away from everyday life and immerse them in the most beautiful music…”. I truly believe this is possible when the visuals are done correctly, and I hoped that, in the right setting, my visuals could do the same.
Additionally, there are certain DJs and clubs that incorporate some form of music visualiser, projected onto a surface, into their events. These visuals correspond to the DJ set in order to further heighten the ravers’ experience. One act in particular that I found used this to their advantage was the music artist Flying Lotus, who uses projected visuals during his sets to enhance the experience for the crowds attending. However, although Flying Lotus always has a fantastic display of visuals, I also found that many of the projected graphics are not reactive and do not correspond exactly to the music being played. This is where I wanted my project to differ, as I wanted the intended audience to be able to observe the correlation between the sound and the image.
Other examples of artists using visuals to effectively perform include Radiohead as seen below:
In order to ensure that what I created was in line with my intended outcome for the design, I made a plan so that I could follow it step by step to create my visualiser.
As I began to research Music Visualiser designs, I found that many visual and sound installations incorporate circular and geometric shapes into their imagery.
Therefore, I decided to experiment with the same concept when producing design ideas for my visualiser. Additionally, having researched the laser lighting shows used at club events, I wanted to find a way for my design to complement or mimic these types of visuals, so that the two could be combined to improve the environment created for the audience. Using these concepts, I drew a few simple sketches to give me an idea of how I was going to design my visuals. Here is one example:
I then attempted to replicate this simple geometric design in Processing; however, the more I experimented, the more the design began to change and mould into something different. I was more pleased with the end result, as the added complexity made it look aesthetically much better than I had anticipated.
I decided to add more elements to my design in order to make the overall visual more complex. Initially I was going to incorporate some sort of circular imagery; however, once again, as I began to create it, my idea for the design changed. Instead I chose to create a particle-system visual that would surround the geometric shape I had previously created.
I then combined both of them together in order to create the interesting visuals that I was hoping for…
Once I had combined both of the sketches, I found that the particle system in the background was barely visible and that the large geometric shape was overshadowing the rest of the visuals. Once I had managed to sync the movement of the visuals to the music, this became more noticeable, as it was difficult to make out how the particles were being driven by the sound. As a result, I chose to make the small particles spread further out with the sound, which created an interesting effect.
Build / Problems I Encountered
In order to build my Music Visualiser, one of the most important tasks was finding a way to synchronise the music with the image, so I began to research the Minim library. After finding what I believed to be a solution to combining the visuals and the sound, I thought the end result looked really interesting; however, I soon discovered that the sound was not being analysed the way I had hoped, as the correlation between the image and the sound wasn’t quite right. I found that the code I had implemented was using the output of the left and right channels to determine the movement of the visuals, which meant that no real analysis of the sound was taking place.
(Although the visual effects look very interesting, the way in which the image was produced was incorrect)
To correct my mistake, I chose to use Minim’s UGens (unit generators) to decide how the visuals would interact. A UGen works by generating sample values, and by using one I was able to correctly analyse the sound for my Music Visualiser. In my code I used an envelope follower, which shows the level of the wave and gives a general idea of the volume of the signal. I was then able to use the output of the envelope follower to scale the objects and graphics in my code.
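What an envelope follower does can be shown in a few lines: rectify the signal, then smooth it with separate attack and release rates, giving a slowly varying level that can scale graphics. The C++ sketch below illustrates the idea with made-up coefficients; in my sketch, Minim’s UGen did this work.

```cpp
#include <cmath>
#include <cstdio>

float envelope = 0.0f;

float follow(float sample, float attack = 0.2f, float release = 0.01f) {
    float x = std::fabs(sample);                  // rectify the signal
    float k = (x > envelope) ? attack : release;  // rise fast, fall slowly
    envelope += k * (x - envelope);
    return envelope;
}

int main() {
    float burst[] = {0.0f, 0.9f, 0.8f, 0.0f, 0.0f, 0.0f};
    for (float s : burst) std::printf("%.3f\n", follow(s));
}
```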
I would like to think that my project was quite successful, as I believe the aesthetics of the Music Visualiser have the potential to integrate well into the environment in which I planned for it to be used. My goal was to create visuals that were not only interesting but could also be used in correlation with the lighting and laser shows at many club events. I find that the design and colour scheme of the visualiser would work well with both, as the geometric shape in the centre of the sketch almost mimics what the laser lighting would be reflecting onto the audience.
By Ahmed & Eoin
- Project Description
The idea for our project was to have graphics synchronised with MP3 audio to create an engaging piece that would have a lasting effect on our intended audience. My partner and I would be able to manipulate the graphics and audio in real time for a performance. The project was built using Processing, the Minim library, and added music (MP3) files.
Controls: UP, LEFT, and DOWN keys to switch windows. Left click is used on the first window to add mini-audio visualizers.
We had decided to split the roles of who would be handling the graphics and audio of the project between us.
Download link to project: AhmedCP
- Intended Audience and Background Research.
Our intended audience is artistic enthusiasts who enjoy performances that differ from the media norm, though the project can be enjoyed by most people regardless of age, gender, or profession. The aim was for people to experience music in a new and more interactive way, so the audience can feel more involved and engaged in the performance. Our background research involved looking into many different styles of generative art and audio pieces, and how best they could complement each other in our project.
These images acted as inspiration for us to start building the foundations of the graphics we were going to create.
Using Minim played a big role in the success of our project, since we relied on it heavily. We looked through http://code.compartmental.net/minim/ to gain a better understanding of how best to utilise it. We decided to focus on AudioPlayer (so that we could loop our MP3 file in our sketch), FFT (to analyse the spectrum of our audio buffer), and BeatDetect (to recognise rhythms for better synchronisation of our graphics and audio); a sketch of the idea behind beat detection follows below.
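As a rough illustration of energy-based beat detection of the kind BeatDetect provides – the 1.3 factor and history length below are illustrative, not Minim’s internals – the idea is to flag a beat when the current buffer’s energy jumps well above the recent average:

```cpp
#include <cstdio>
#include <deque>
#include <vector>

std::deque<float> history;            // recent buffer energies
const std::size_t HISTORY_LEN = 43;   // ~1 s of 1024-sample buffers at 44.1 kHz

bool isBeat(const std::vector<float>& buffer) {
    float energy = 0.0f;
    for (float s : buffer) energy += s * s;

    bool beat = false;
    if (history.size() == HISTORY_LEN) {
        float avg = 0.0f;
        for (float e : history) avg += e;
        avg /= history.size();
        beat = energy > 1.3f * avg;   // well above the local average
    }
    history.push_back(energy);
    if (history.size() > HISTORY_LEN) history.pop_front();
    return beat;
}

int main() {
    std::vector<float> quiet(1024, 0.01f), kick(1024, 0.5f);
    for (int i = 0; i < 50; ++i) isBeat(quiet);  // fill the history
    std::printf("beat: %d\n", isBeat(kick));     // prints 1
}
```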
- Design Process
I liked the idea of having the window filled with mini-audio visualizers that move independently but share the commonality of reacting to the audio being fed into them. I began my design by sketching on paper; these were the initial designs I came up with. I then began thinking about how I wanted my mini-audio visualizers to be displayed in the window, how they would move, whether they could go off screen, and how I would handle collisions. Once I had the basic idea, I wanted to add some interactivity. I added buttons to my design so that we could use them as filters on the audio, which would in turn affect the graphics – for example, high- and low-pass filters. I also planned buttons that would change the shape of my mini-audio visualizers while keeping the same functionality.
- (Initial design for particle systems contained in window)
- (Design for controls on screen, with filter and shape manipulation buttons)
- (First designs for the mini-audio visualizers)
I ended up scrapping my idea of having buttons present on the screen to act as filters; although not that hard to implement, I felt that using them would create a disconnect with the audience during the performance. I decided that the controls should operate off screen as much as possible, so that they would not distract from the performance or inconvenience our audience.
After brainstorming my ideas and building code for my first drafts, I came to the conclusion that a mix of my ideas for the mini-audio visualizers, interactivity, and filters would be best suited to our final project. This became the final design, and we eagerly started building our code to best fit our vision.
I decided to make three windows for my project, each with different graphics, so that I could manoeuvre between them mid-performance. I thought this would add room for more creativity and also enjoyment for the audience. For the graphics, I planned a mix of both 3D and 2D.
One of the issues I had trouble with was finding the most suitable audio for the graphics I was creating. If the compatibility was low, the project would lose a lot of value, as this was a key component we had sought from the beginning. With thorough searching and comparing, I came across an MP3 that would best match our piece. It was from here that I could begin creating the graphics around it.
Also, my idea for the mini-audio visualizers was difficult to actualise. I began by pursuing a particle-system-like feature for my first window; however, the outcome was too sporadic and not as I had imagined. I then implemented classes and arrays together with mousePressed to gain better control over how and where they would appear.
I believe our project was successful in that the graphics synchronised with the audio to a satisfactory level, across three different windows with individual graphics that best complement the audio. However, I believe there was also much room for improvement. We were not able to complete all our objectives; because of this, time never allowed us to present our project to an audience as we originally intended, and we were also unable to implement the filters we had planned for at the beginning.
- Reference and bibliography
Images used (labelled for non-commercial reuse and modification):
DIGITAL WINDCHIME – Bilal and Ed
With this project our goal was to explore the creative application of piezo sensors hooked up to an Arduino board as a musical instrument. Traditionally found in children’s toys and fire alarms as cheap speakers, piezo sensors take advantage of the piezoelectric effect, in which certain crystals (often quartz) can transform mechanical energy into a voltage and vice versa. Specifically, we were interested in finding a new and interesting way of using the output of these sensors to manipulate sound.
When looking for similar projects, we found that the sensors were most commonly implemented in one of two ways: as a contact-mic pickup on stringed instruments, or as a trigger for percussive instruments. Initially we chose to experiment with the former, but struggled to find an innovative way to interact with our device. Eventually, and largely by chance, we realised that when left to hang in the air, the piezo sensors were sensitive enough to be triggered by even a gentle wind. From here we began to develop our final piece: an electronic ‘wind chime’ that would hang and generate sounds from its environment.
——-> CODE DOWNLOAD <——-
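As a hedged illustration of the basic piezo-trigger setup – the pin, threshold, and the usual 1 MΩ resistor across the sensor are the standard tutorial values, not necessarily ours; the actual project code is in the download above:

```cpp
const int PIEZO_PIN = A0;
const int THRESHOLD = 40;   // raw ADC units, 0-1023

void setup() {
    Serial.begin(9600);
}

void loop() {
    int reading = analogRead(PIEZO_PIN);
    if (reading > THRESHOLD) {
        // In the wind chime this is where a sound event would be triggered.
        Serial.println(reading);
        delay(50);          // crude debounce so one gust isn't many triggers
    }
}
```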
INTENDED TARGET AUDIENCE
Before we set out to start designing our idea, we wanted to make sure that we had a specific target audience in mind. We decided that our product would appeal to more creative and alternative-minded people. These people fit into a small bracket compared with the more mainstream, pop-cultured, socially normed crowd. We did not think that age plays a factor, as age is no boundary to how creative one is. We believe that even though this target audience is rather niche, it is densely packed with passionate people looking for new and innovative creations to stand out and fulfil their own creative flair. We created some personas to give a more vivid representation of individuals in our target audience.
Age – 21
Gender – Male
Interests – Experimenting with sounds and producing Music using alternative methods.
Location – Brighton
Sam is an aspiring producer; however, he does not want to be mainstream and use sounds that have already been regurgitated via loops on common programmes. He wants to produce sounds using alternative methods, like household objects. He believes that doing this will make him stand out and ultimately boost his profile. He is currently working on a project where he uses wool strings tied across an empty kitchen drawer to create a heavy bass-guitar sound, with the help of an Arduino and a computer programme to generate the sound. He aims to use more wacky objects to create obscure sounds.
Age – 20
Gender – Female
Interests – Interior Designing
Location – Camden
Sophie is an enthusiastic and ambitious interior designer. Her outlook on her work is somewhat unconventional: she works in a specific, alternative style, which includes using unusual finds to create ambience, such as electric plants that change colour on detecting movement. She is always in search of common household objects that have an alternative use or function. We believe an item like the one we are creating will cater to her needs, because our chime is a twist on a decorative piece in the sense that it is programmed.
Our ideal outcome would be to create an alternative twist on a wind chime. We want to use an Arduino to read piezo sensors that, when interacted with, create a gritty sound. The image below is a drawn design of what we expect the initial base outcome of our idea to look like.
BACKGROUND RESEARCH (Inc. bibliography)
Our background research included looking at many websites for inspiration and help in creating our idea.
An example of a website we used frequently was www.instructables.com. It specialises in user-created and uploaded do-it-yourself projects, which other users can comment on and rate for quality, and it features many projects using the Arduino. This helped us see what equipment was necessary for our project, and it also helped us develop an idea that people had not really attempted before.
Another website we frequently visited was https://www.arduino.cc, the home of the open-source electronics prototyping platform that allows users to create interactive electronic objects. The website was a big help whenever we were confused about how to use the board, and it is also where we chose to purchase the Arduino.
We also researched the Minim library. Prior to the start of the project we had only dipped and dabbled in the library; however, to create the sounds we wanted, we had to research it ourselves. We did this by studying the Minim website, http://code.compartmental.net/minim/, which helped us understand its functions and how to use them.
We created these initial designs to help us choose what we thought was best. They consist of the ideas we both thought were good, and this allowed us to put into perspective whether they were realistically achievable. One thing we noticed when putting together the base of the Arduino was that the piezo sensors are very sensitive: so sensitive that when you blow on them from a distance, they still react and create sound. We used this new-found knowledge to attempt to create something more unusual than the typical 'drum MIDI' built with piezos.
The image below is the Arduino that we purchased with the piezo sensors attached.
We created a video to display the stages we went through in the production of the actual Arduino build; it can be seen below.
The image below is a document of what we expect of the code. We are using two programmes: the Arduino IDE and Processing.
We had to ensure that we kept to our deadlines and stayed in step with each other. We did this by creating a table and filling it in with what we aimed to get done each week and whether we achieved it. Click the link >> TODO to see the file.
PROBLEMS WE ENCOUNTERED
Some of the problems we encountered involved the Minim library. We both had some knowledge of the library, but we were not confident implementing code to alter the notes played; we wanted to add effects such as a bitcrusher. To overcome this, we made it one of our weekly aims to study the Minim library web page and learn more about it. We also aimed to write some test code demonstrating that the effects work, to help us implement them in the actual project.
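A small standalone test in that spirit, patching Minim's BitCrush UGen onto a saw oscillator to confirm the effect works before wiring it into the main sketch; the frequency and bit depth used here are illustrative.

```
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil osc;
BitCrush crush;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  osc = new Oscil(220, 0.5f, Waves.SAW);   // a raw saw tone to mangle
  crush = new BitCrush(4, 44100);          // 4-bit resolution for a gritty sound
  osc.patch(crush).patch(out);             // oscillator -> bitcrusher -> speakers
}

void draw() {
  background(0);
}
```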
Another problem we encountered was getting an analogue signal from the Arduino to Processing over serial. After much trial and error we settled on an alternative solution: sending a message from the Arduino to Processing every time a sensor's reading breached a predetermined threshold.
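A sketch of how the Processing side of that workaround might look, assuming the Arduino prints a single character whenever a reading breaches the threshold; the port index, baud rate, trigger character and playChimeSound() helper are assumptions for illustration.

```
import processing.serial.*;

Serial port;

void setup() {
  size(200, 200);
  // Port index and baud rate are assumptions; match them to the Arduino sketch.
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  while (port.available() > 0) {
    char trigger = (char) port.read();
    if (trigger == 'T') {        // 'T' = "threshold breached" message from the Arduino
      playChimeSound();          // hypothetical: fire whichever sound we mapped
    }
  }
}

void playChimeSound() {
  println("sensor triggered");   // placeholder for the actual Minim playback
}
```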
A big problem we encountered near the start of the project was that the piezo sensors were very sensitive: you didn't even need to touch them for them to be triggered. Instead of panicking and buying new sensors, we used this as the basis for our wind-chime idea.
Personally, we believe the project was half a success. We are satisfied with the quality of the code, as the sound is exactly how we wanted it. We aimed for the juxtaposition of the surrounding nature against the technical, almost alien-sounding chime, which we believe we achieved well. The sounds produced are rather obscure and would cater to the sound-oriented creative people in our target audience.
The aspect we believe let us down a little is the appearance of the chime. We were rather tight on time and had to work with the resources available to us. It is a fully functional product that works well; however, with a bit more time we believe we could have perfected its exterior look.
One feature of the chime that we are very happy with is the LED lights. They react to the sounds created, and we believe they give the chime an extra unique feature and appeal. We used the lights to cater to our creative target audience, and we know they were a good thing to include.
The fact that the force of the wind can affect the piezo reaction was something we discovered suddenly. This almost accidental finding allowed us to create what is, in our opinion, a more creative item. As stated in our intended target audience, these people are always in search of new, innovative creations, which is what we have made.
Our biggest disappointment was the inability to send an analogue signal from the Arduino to Processing, and having to compromise by using random values when the sensors were triggered. Given more time, we would have liked either to find a way to model the effects' parameters in a more natural, less random way according to the size of the sensor reading, or to find a way to continuously send the sensors' readings as discrete values over serial and then map these values for use as the parameters.
Wobble – By Johan & Cormac
We have created an environmental synthesizer which scans its location and interprets light and topographical information to produce sound. The device, named Wobble, is designed not only to help children understand sound and music-making on an abstract level but also to give them the opportunity to modify the device for further experimentation. Wobble is built on Linux running on a Raspberry Pi, with an easy-to-use and easy-to-understand breakout breadboard. It is powered by a rechargeable battery and a rechargeable speaker. With all the hardware built into the device, it is totally wireless and can be used in any environment to make a wide array of synthesized sound.
The software running on the Raspberry Pi is written in C++, utilising three libraries that form the bedrock on which the architecture of the program is built. The first and most important of these is WiringPi, which allows C++ access to the GPIO pins of a Raspberry Pi. The second is Maximilian, a C++ audio and digital signal processing (DSP) library, which is the linchpin of the audio synthesis. The third and final library is a small user-made library to control the TSL2561 lux sensor from Adafruit.
We also use pulse-width modulation (PWM) to control the speed of the motor spinning the sensors on the device, producing a multitude of effects.
Our Intended Audience
Our audience is children aged 6 to 10 who are just about to start, or are already, learning an instrument. Wobble is used for interactive play with music, sound and lights, to get children interested in technology and music through interaction. Our intended outcome was to build something that would engage them musically and technically. They would be able to create music in any environment they wanted, on an abstract level, and older children could continue their experimentation by modifying the open-source code and easy-to-use breadboard setup to create their own sounds and music.
Our background research had us looking into artists who use their pieces to change an environment, such as Andy Goldsworthy, Javier Riera, Jim Sanborn and Olafur Eliasson. Biomimicry, particularly sonar and the ultrasonic mapping of environments, played a large part in our research. Wobble fundamentally grew from the idea of reinterpreting an environment the way a bat does, navigating through its surroundings quite unlike any other mammal.
We had a pretty good idea about how we wanted Wobble to sound: a very distinctive synthesized sound that would be both interesting and fun for kids. The main way we tested our sounds was to generate pure sine waves and then add envelopes and frequency modulation. To find the sound we wanted, we used ofxOsc for openFrameworks and oscpack on our Raspberry Pi. Through OSC messages we could communicate between the two devices and play with the parameters of the sound output: moving the mouse in our openFrameworks program mapped its position onto the sound parameters inside the code running on the Raspberry Pi, letting us interact with and change the sound in real time.
Throughout the project we experimented with the Maximilian library to find the right sound. We started with some simple tests, such as playing specific frequencies when certain keys were pressed, with a view to replacing the keyboard input with our sensors. We then added more functionality from the library, such as envelopes (ADSR), resonant high- and low-pass filters, delay and dynamics processing such as a noise gate. We also made our own effects, such as a foldback distortion and a Moog-style filter, to make the sound even more interesting and fun.
To start, we created a paper prototype of the device for some real-world context and to test whether our hardware would fit inside it. Below is the first 3D model.
We chose to have the shape in two parts, as it was both aesthetically pleasing and economical in terms of space. We wanted the top half to spin while the bottom remained still as a base. The two sketches below show some variations of the mechanics we proposed. Eventually we decided on an electronics housing that sat on a lazy-susan bearing, driven in a circular motion by an inverted cog-and-gear system (see below for a demonstration).
We then made a cardboard prototype of the housing to once again test the sizing and aesthetics of the device. Using the Pepakura suite, we broke the 3D model, created in Maya, down into a 2D mesh that was printed, glued to the cardboard, then cut, folded and glued into shape.
The finished cardboard prototype.
The prototype with the electronics testing rig.
Now that we had the basis for our housing complete, we started to assemble the 3D models for the housing and the internal drive system. Below is the final representation of the internal assembly. The small inner cog will be driven by a DC motor attached to the inner electronics housing placed in the centre.
We started 3D printing the housing in clear ABS plastic so that the final product would be translucent, allowing light to shine through from inside.
The whole printing process took 44 hours for the final product, not including the failed attempts, which could easily add up to over 15 hours. The printing was spread over three weeks of iteration, testing and then final printing. The cogs and electronics housing were printed in white, as we were doubtful of getting a full print from the clear plastic we had left.
Some of the pieces were too large for the printer bed, so they had to be sectioned. The pieces were then welded back together with a plastic adhesive.
Once the two bottom sections were printed we could start assembling. Below is the lazy susan bearing placed at the bottom of the device.
Below is the first iteration of the ultrasonic sensor we are going to use, wired to an Arduino instead of a Raspberry Pi. The approach for the Pi, however, was the same: we knew we wanted to use the breadboard so that any user could modify their own Wobble to suit their project.
Below are all of our sensors and our motor wired into the breadboards for testing. As you can see, on the right breadboard we are using a Pi Cobbler to extend the Pi's GPIO pins onto the breadboard.
Here is the current working model of the breadboard, with the essential circuitry wired in using short wires so it sits flush with the boards: a needed improvement over the circuitry above.
The first version of the centrepiece. This one is much smaller in diameter and has the small, not-so-powerful motor mounted on the outside of the centrepiece.
The video below shows the new centrepiece, which is much larger; the new, larger motor is mounted on the inside through a hole in the centrepiece.
Here is Wobble with the top on.
Here are all the components set up for testing.
Here is the full setup placed inside the centrepiece.
Below is an image of the first test with the LED lights and the Arduino.
Links to executables
Link to video of the first test with the new motor: https://youtu.be/yrZ8t-VzUTo
Test with lights and motor: https://youtu.be/M77j2RyFHrw
Final test: https://youtu.be/xQt04OLUIRA
Our Building Process
When approaching the build of Wobble, we knew that we wanted to reach a minimum viable product, or at least a working prototype, that could be used as a showcase of our skills. Because of this we leapt headfirst into building methodically, with the intent of reaching our weekly milestones.
Even though one of our main features was to have Wobble spin, we couldn't actually test the spinning mechanism until very near the end of the build, once we had finished our final 3D-printed parts and sourced all of our hardware. Knowing this, we had to take a leap of faith by waiting until the end of the process for this vital piece of hardware. By planning for this delay in testing, we were able to keep it in mind when designing the other features of the device, making sure the end result could be driven by the motor.
One of the first issues we encountered was getting the initial data from the sensors we were using. Both the ultrasonic and lux sensors were extensively documented for Python and the Arduino IDE, but not for C++ on the Pi, so there were no ready-made libraries for our intended use. After much research we found two user-made libraries, one for each sensor, which we modified to suit our needs by extracting only the data we were going to use and turning some of the code into classes. The real saving grace was WiringPi, the C++ GPIO access library: once installed on the Pi, we could use any pin as easily as if we were developing in Python.
A major issue then sprung up with the ultrasonic sensors. The calculated distance data would be received and printed in our console for testing, but after an indeterminate amount of time – from five to twenty seconds – the console would print a number unlike any other, usually far larger than expected, and then nothing else. What was happening was that the distance-gathering function was being called so often – more than once every couple of milliseconds – that the sensor was being physically overloaded trying to measure the many signals it was receiving. To combat this we implemented a timer that measured the time elapsed, and an if statement that only let the sensor update its distance every 0.2 seconds. This interval is arbitrary; it can be as fast or as slow as you like, up to the point where the sensor overloads itself.
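The gating logic, sketched here in Processing-style Java for consistency with the other code in this write-up (Wobble's real implementation is C++ with WiringPi on the Pi); UPDATE_INTERVAL and readDistance() are illustrative stand-ins.

```
final int UPDATE_INTERVAL = 200;  // milliseconds between sensor reads
int lastUpdate = 0;
float distance = 0;

void draw() {
  // Only query the sensor if enough time has passed since the last read,
  // so the hardware is never flooded with back-to-back ping requests.
  if (millis() - lastUpdate >= UPDATE_INTERVAL) {
    distance = readDistance();
    lastUpdate = millis();
  }
}

float readDistance() {
  return 0;  // placeholder: the real version times the ultrasonic echo
}
```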
Another issue we came across near the end of our build was that, when everything was mounted together, our DC motor was much too weak to pull the weight of all our components. We also found that our first centrepiece, meant to fit all the components, was too small: had we kept this print, we would have had great trouble fitting everything inside it. So we decided to do one more print in which the centrepiece had a much larger diameter, giving us much more space for all of our components.
Our project was semi-successful. It turned into a strong first prototype that demonstrates the ideas and the technology to great effect. The 3D-printed housing came out fully and as we intended, with very few flaws. The construction is solid and the look is very aesthetically pleasing. The sound of the motor and the bearing spinning is relatively low and doesn't detract from the overall experience.
The sound can be manipulated very easily and intuitively, even on first use. The fast response and visible sensors let the user understand how Wobble works at first spin. The wirelessness of Wobble is by far one of its best selling points: being able to take Wobble with you wherever you go is a boon when testing it in unique environments. We would have limited our users greatly had we not reached this point of completion.
Some of our success was marred by running out of time to finish some of our hardware features, the first being that lighting was not fully integrated into the device. We were trying to install programmable LED lights that would change with the audio being produced by Wobble. Instead, we connected them to an Arduino as a quick implementation, so that we could show the aesthetics of the lights even without the functionality.
The second feature not to make the final prototype was letting the user control the speed of the motor. This was partly because the original motor used for testing was not powerful enough, and partly because pulse-width modulation left some motors too weak when running at slower speeds. To combat this we sourced a physically geared-down motor that runs at a steady 10 rpm; however, we had to wire it into a power source to make it run constantly.
All in all, it was a successful first try at the device, and there are clear improvements to be made. We could redesign the assembly to make the device smaller, and we would like to fix the issues with the lights and motor so that both are controlled by the Pi. We could even give Wobble more life by letting it record the data it takes in from the environment to create a pseudo-memory. This memory could then be replayed by other users to hear particular environments.
Reference & Bibliography
WiringPi – http://wiringpi.com
Maximilian – https://github.com/micknoise/Maximilian
Pepakura – http://www.tamasoft.co.jp/pepakura-en/
Autodesk 123D – http://www.123dapp.com/design
Digital Signal Processing – http://musicdsp.org/archive.php
‘Oljud’, meaning ‘noise’ in Swedish, is a collaboration between Terry Clark and Gustaf Svenungsson. The aim was to create an immersive, interactive audiovisual installation using a 3D camera and a digital audio workstation to manipulate sounds, with the audio in turn affecting the visuals.
We believed at first that our audience would be someone interested in music technology or digital art, between their early teens and 30 years of age, and that this would be an installation piece anyone could try out. However, as the project progressed we found that there is some setup time required and a learning curve to the gestures/interactions, so the target audience changed: someone who would use this as part of their live setup and would spend time programming certain elements of their music for this specific set of interactions/gestures. Ideally they would be an Ableton user, comfortable exploring programming and setting up.
We learnt about what technologies were available to us, particularly wanting to use the Xbox Kinect to capture skeleton information and MIDI to transmit information to Logic. However, we knew we would need to gather more information before proceeding. Our first few weeks were productive as we researched the different technologies, libraries and processes we would need to adopt in order to produce the final piece. We found that the Xbox Kinect offered the motion-capture elements we needed, including a point cloud and human-skeleton tracking. Additionally, we found that other projects had used Ableton in conjunction with Open Sound Control (OSC), which provides the ability to communicate over a wireless network. This enabled us to send skeleton information and other triggers between two computers on the same network, meaning we could distribute the workload and overall processing power. This set the foundation of the installation, and we moved on to what data we could collect and how we wanted to display it.
The work was originally split into two halves: Terry worked on the visual part of the installation and Gustaf on the audio, as we both had previous experience in these areas and felt we would naturally learn from each other as the project progressed. Splitting the work proved hugely beneficial, as we were able to rapidly produce prototypes, create a soundtrack and refactor the master code as we went along. Although this was the divide, we found we were constantly reviewing and altering each other's work, as it gave an outsider's perspective on the way we both wrote code, and as we moved through the project we became more accustomed to understanding where our particular bugs were coming from. The flowchart below describes the class structure of the project and shows both computer and user interactions.
In order to set up our project you will need the following equipment and software/libraries installed.
- 2 Laptops
- Processing 2 for Computer 1 code (due to SimpleOpenNI)
- Processing 3 for Computer 2 code
- Xbox Kinect V1 – 1414
- Ableton Live
- LiveOSC (an Ableton hack)
- oscP5 (a Processing library for sending and receiving OSC messages)
- SimpleOpenNI (a Processing library)
- Minim (a Processing library)
- Soundflower (for routing audio into Processing)
The initial concept involved MIDI messages being sent to Logic. After some successful prototyping and discussion in class, we came across OSC, which for our purposes was better to use. This led us to abandon Logic in favour of Ableton Live: while Logic is a great studio-grade DAW, it drained a lot of resources from the computer, and its more traditional, linear workflow proved cumbersome. Ableton, on the other hand, proved more useful on account of being faster and having a built-in workflow of organising clips within scenes (i.e. a clip is a musical part and a scene is a musical section of a piece). This made it easier to abstract the structure of the messages. OSC also allowed us to be flexible about how much workload we put on each computer.
Since the Kinect data and the music software would be the two most processing-intensive tasks, we decided to split them between the two computers: one running the Kinect, one running the DAW. Beyond that, OSC allowed us to run the remaining code wherever we needed. For example, if one computer is running beat detection on the audio, the result of the analysis can easily be sent via an OSC message to the second computer.
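A minimal sketch of that example, assuming Minim's BeatDetect on one machine and oscP5 for the network hop; the address "/oljud/beat" and the IP/ports are made up for illustration.

```
import ddf.minim.*;
import ddf.minim.analysis.*;
import oscP5.*;
import netP5.*;

Minim minim;
AudioInput in;
BeatDetect beat;
OscP5 osc;
NetAddress otherComputer;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  in = minim.getLineIn();                  // audio routed in (e.g. via Soundflower)
  beat = new BeatDetect();                 // sound-energy onset detection
  osc = new OscP5(this, 12000);
  otherComputer = new NetAddress("192.168.0.2", 12001);
}

void draw() {
  beat.detect(in.mix);
  if (beat.isOnset()) {
    OscMessage m = new OscMessage("/oljud/beat");
    m.add(1);                              // simple flag; the receiver reacts visually
    osc.send(m, otherComputer);
  }
}
```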
Writing OSC messages proved a lot more intuitive, since we could mix and match messages pre-defined by the LiveOSC API with our own custom messages. Because of our schedule of trying to stay in sync and keep a prototype up, Gustaf ended up writing a first draft of the particle system with two important extra features:
Making our own mini-projects to present our ideas allowed us both to understand the direction we wanted to go in. Terry created a Kinect visual, starting by experimenting with the SimpleOpenNI library and checking out YouTube videos, blogs and books for example code, in order to learn how others tracked skeleton information from the Kinect; this formed the basis of our project.
Some of the visual references we found on YouTube are listed below:
However, the video below captured our attention, and we decided to try to recreate a particle system visually whilst also creating an interactive musical piece.
The most important factors for the visuals were that:
- It needed to be scalable. We needed as much performance as we could get, since we knew we would be pushing Processing, so we tried out a number of ways to reduce the number of particles being drawn (sketched in code after this list), such as:
- if (frameRate < x): only create every 4th particle
- if (particles.size() > x): delete the oldest particle in the ArrayList
- if (millis() % 2 == 0): allow new particles to be created
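A minimal sketch of how those three throttling checks might sit together in a Processing sketch; the thresholds (30 fps, 5000 particles) and the Particle class itself are illustrative stand-ins, not our exact code.

```
ArrayList<Particle> particles = new ArrayList<Particle>();

// Spawn one particle per point-cloud origin, applying the throttles above.
void spawnFromPointCloud(ArrayList<PVector> origins) {
  if (millis() % 2 != 0) return;                  // crude gate: create roughly half the time
  for (int i = 0; i < origins.size(); i++) {
    if (frameRate < 30 && i % 4 != 0) continue;   // under load, keep only every 4th particle
    particles.add(new Particle(origins.get(i)));
  }
  while (particles.size() > 5000) {
    particles.remove(0);                          // over budget: drop the oldest first
  }
}

class Particle {
  PVector pos, vel;
  float life = 255;
  Particle(PVector origin) {
    pos = origin.copy();
    vel = PVector.random3D();
  }
  void update() { pos.add(vel); life -= 4; }      // a faster-decaying lifespan keeps counts low
  boolean isDead() { return life <= 0; }
}
```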
However, we found that tweaking the depth at which the particles were drawn, and making their lifespan decrease faster, allowed for a faster framerate. Because we wanted to attach the particle system to a point cloud, the code was modified so that the origin point of any given particle was defined by an array of PVectors; whenever asked, our ArrayList of particles would create one particle at each vector.
The gestures and interactions came about three-quarters of the way through the project, once we had completely understood how the Kinect exposed hand, elbow and shoulder vectors. It was then a matter of finding the distance between these joints, which activated certain functionality, such as playing and changing a section or entering ‘Beat’ mode, as sketched below.
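A minimal sketch of such a joint-distance trigger, assuming SimpleOpenNI 1.96's API; the 200 mm threshold and the printed action are illustrative, not our exact gesture set.

```
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();
}

void draw() {
  context.update();
  int[] users = context.getUsers();
  for (int userId : users) {
    if (!context.isTrackingSkeleton(userId)) continue;
    PVector lHand = new PVector();
    PVector head  = new PVector();
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, lHand);
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, head);
    // If the left hand is raised close to the head, fire a section change.
    if (PVector.dist(lHand, head) < 200) {   // distances are in mm in Kinect world coordinates
      println("gesture: hand near head -> change section");
    }
  }
}
```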
We explored the possibility of using hand gestures instead of body gestures, but found that the close proximity required for the Kinect to correctly analyse the hand shape was a compromise we had to weigh up. Furthermore, the extra processing power required meant that other parts of the visuals would lag.
Thus we opted for a more obvious gesture selection as highlighted below:
Throughout the project we needed to overcome a variety of challenges. For example, when trying to implement hand gestures we found that the user needed to be in close proximity to the Kinect for it to capture individual finger movements. This then created lag within the point-cloud particle system by overloading the graphics card.
Our user testing also gave us deeper insight into our product: there was a bit of a learning curve when people tried to interact with it. This led us to change our target audience, and we now felt that the UI was perhaps not a necessary component for an artist performing live. Mapping the vector information was another challenge, as we needed to test the maximum x, y and z distances the Kinect could track; we later decided to map these relative to the middle of the person's body to allow the user freer movement.
Another slight issue, which we believe to be a fundamental problem with 3D tracking, was that the Kinect kept dropping the user, which made it difficult for the user to feel fully immersed, as attention would then turn to trying to reconnect.
While the particle system was straightforward to set up, we found that, since it was being mapped to quite a few points, not overloading the computer required some tinkering (we ended up solving this by skipping points of the point cloud). Making the particle system look appealing also required a lot of tinkering: it needed to look busy and detailed (lots of particles being drawn) while staying visually coherent and not just a mess.
The audio challenges were twofold: technical and ‘artistic’.
Figuring out what type of music the user would interact with went through numerous phases. A person who is not musically trained wouldn't be able to simply pick up and have fun with a theremin-like setup mapping x, y coordinates to pitch and volume. It turned out that giving the user any direct control over pitch demanded that they understand the musical piece as a whole; once again, this did not fit with our aim of being intuitive, fun and immersive regardless of skill level. The second challenge, once we had decided to let the user switch between sections and manipulate parameters within Ableton Live, was to have a piece of music that made sense to play around with. For example, when a musician brings effects and pedals to perform at a concert, they won't constantly sweep the parameters of those effects between min and max; there is a very specific range of sounds that makes sense for different sections of different songs. We found that trying to recreate that experience made the most sense.
The first iterations of the project had MIDI and Logic in mind. Logic, while a great-sounding piece of software, required far too much computing power to pull off what we wanted. We opted not to use MIDI, because anything more complex than noteOn/noteOff would have required reference tables. Using the LiveOSC API and its easy-to-read documentation meant we could write code that itself read meaningfully, for example:
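For illustration, a LiveOSC-style message sent from Processing with oscP5; the address string and default port follow the LiveOSC documentation as we remember it and may differ between versions.

```
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress live;

void setup() {
  osc = new OscP5(this, 12000);              // listen locally on port 12000
  live = new NetAddress("127.0.0.1", 9000);  // LiveOSC's default listening port
}

void keyPressed() {
  OscMessage m = new OscMessage("/live/play/scene");
  m.add(2);           // launch scene 2 (a "music section" of the piece)
  osc.send(m, live);
}
```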
The difficulty then was understanding what values the different parameters took: some accepted floats from 0 to 1, others integers between 0 and 127, while other, more rhythmically oriented parameters wanted a subdivision such as 1/4, 1/16, 1/32, 1/64, etc.
Having someone test the program while another person sat behind the screens proved very useful, since the user can feel that something is not responsive while you can clearly see the parameters moving up and down on your end.
We feel the project ended up a mixed success. While we set out to craft a tool that allows ‘anyone’ to have a meaningful interaction with music, it became clear to us that ‘anyone’ doesn't actually exist, and that people have radically different expectations of what they can do and how they can interact with a piece of technology such as the Kinect.
After testing it with different people, we found that they fundamentally had one of two reactions:
- People who saw it as an audiovisual piece with which you could recreate the feeling of ‘dropping the bass’ or ‘the dubstep breakdown’. These were normally the audience who could already relate to our aesthetics.
- Those that didn’t necessarily connect to our musical or visual choices directly but saw it as more of a high concept tool for purposes such as music therapy.
Because of our music backgrounds, we were more interested in the piece being purely sonically oriented. This meant we decided to focus on people who already had knowledge of electronic music and who already interact with music in some capacity. It is not intended for ‘experts’ or necessarily professionals, but for ‘hobbyists’. We think that, aesthetically and musically, we successfully completed what we set out to do: an interactive experience using technology we had no previous experience with, while furthering our knowledge of vectors and data transferral.
However, we feel we could have done a much better job of making the piece easier to use in terms of calibration and gestures. The biggest issue with the software is that, in its current state, it is complicated to use: you need to be told what the gestures are, and even then they require too much training to internalise. Given more time, we would have fine-tuned the music and the gestures, combining and removing them and making sure they flowed better from section to section. We also feel that our Kinect connection is currently too unreliable: it frequently loses tracking of the user. Another library, different programming software, or possibly the Kinect v2 might help us fix this. We also wanted to make the visuals more interesting. We did set out to make a point cloud and draw a particle system on top of it; however, we feel the visuals would need to be even more dynamic to hold the user's attention. This could have been achieved by adding more interactions with the FFT, such as further physics and colour manipulation.
Reference & Bibliography
Kinect v1 Skeleton
OSC & NetP5
ParticleSystem – part of a previous project, with further alteration through advice in class
Making Things See: 3D Vision with Kinect, Processing, Arduino, and MakerBot – Greg Borenstein