2016















Flappy Colors
Project Description
The project was an audiovisual application that combined hardware and software to generate sounds. Overall, it intended to explore how visual stimuli, colours and different forms of sound can be combined, as well as to show another approach to music, one that would be easily understood by the audience.
The project's software is a program that draws six lines in a circular motion, setting each of them to a different, random HSB colour. The hue of each line is interpreted by the program and turned into an integer between 0 and 360. All the lines' hues are then averaged, and depending on which 60-degree range the final value falls into, the program plays a specific note (if the value is larger than 330 or smaller than 30, the colour is red and the program plays C1, and so on). The program relies on chance to decide the dominating colour of the lines and then sends the instructions to the Arduino.
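A minimal C++ sketch of this mapping is shown below. Only the red band and C1 come from the description above; the other note names and the function names are illustrative assumptions.

```cpp
#include <numeric>
#include <string>
#include <vector>

// Map an averaged HSB hue (0-360) to one of six notes, one per 60-degree band.
// Red -> C1 follows the description above; the other note names are assumed.
std::string hueToNote(float hue) {
    if (hue >= 330 || hue < 30) return "C1"; // red
    if (hue < 90)  return "D1";              // yellow
    if (hue < 150) return "E1";              // green
    if (hue < 210) return "F1";              // cyan
    if (hue < 270) return "G1";              // blue
    return "A1";                             // magenta
}

// Simple arithmetic mean of the six line hues, as described in the text.
float averageHue(const std::vector<float>& hues) {
    return std::accumulate(hues.begin(), hues.end(), 0.0f) / hues.size();
}
```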
The hardware part is thus triggered by the averaged colour variable, and depending on which of the six colours is dominant, the corresponding floppy drive plays a note. Each floppy drive plays only one note, and each note is associated with one of the six main HSB hues (yellow, red, cyan, magenta, green and blue).
The notes on the floppy drives are generated using the speed of the stepper motor that moves the read/write head inside each drive. By driving the motor at a particular speed, a specific note can be produced. The end goal was to attempt to represent colours through sound, as well as to explore the variety of ways in which audio can be represented.
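The pitch of a floppy-drive note comes from how fast the step pin is pulsed. The following Arduino-style sketch (C++) illustrates the idea; the pin numbers, note frequency and track count are assumptions rather than the project's actual wiring.

```cpp
// Play a note on a floppy drive by stepping its head at the note's frequency.
// Pin numbers, frequency and track count are illustrative only.
const int STEP_PIN = 2;       // step pulse to the drive (assumed pin)
const int DIR_PIN  = 3;       // head direction (assumed pin)
const float NOTE_HZ = 220.0;  // example frequency; the drives play best in a low range

int stepCount = 0;
bool dir = false;

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
}

void loop() {
  // One step period = 1/NOTE_HZ seconds; half high, half low.
  unsigned long halfPeriodUs = (unsigned long)(1000000.0 / NOTE_HZ / 2.0);
  digitalWrite(STEP_PIN, HIGH);
  delayMicroseconds(halfPeriodUs);
  digitalWrite(STEP_PIN, LOW);
  delayMicroseconds(halfPeriodUs);

  // Reverse direction before the head reaches the end of its travel
  // (a 3.5" drive has roughly 80 tracks).
  if (++stepCount >= 75) {
    stepCount = 0;
    dir = !dir;
    digitalWrite(DIR_PIN, dir ? HIGH : LOW);
  }
}
```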
Documentation
The project started out from wanting to experiment with sound and the variety of ways that sounds and notes can be created. From the beginning, the intended audience was people without much depth of knowledge in music theory, or even computing for that matter. I wanted to give viewers an opportunity to create sounds of their liking simply through experimentation and through easily comprehended outputs. Initially these sound outputs were intended to be ticks that would differ in pitch depending on the material the dials were ticking against. I created prototypes for this in Processing to see how it would look visually, while also trying to find a way for the sound output to be independent, changing not because of the user's input but because of some other instance or obstacle.
I then tried concentrating on just one type of rotating output or input, and experimented with a single stepper motor to hear what sound it made when ticking. At this stage the code was written entirely in the Arduino IDE.
After experimenting with the rotation of the motor and the Arduino, I thought about using that circular movement and experimented with sketches that used circular motion to create data visualisations. I looked at different ways circles have been used in music and came across a spectrogram that plotted frequency as the distance of dots from the centre and amplitude as the brightness of those dots: the higher the amplitude, the brighter the point, and the lower the amplitude, the darker the point. I used that to create a sketch that drew randomised points across a circle to see how the output would look, though without any frequency analysis. However, I still wanted an audio output within the project, and one that would to some extent follow a system. As a result, intending to experiment with colour, I created circular lines drawn in different randomised colours and decided to use them as the triggers for the audio outputs. For aesthetic reasons and for simplicity I used the HSB colour model, as it displayed a more pleasing set of colours and made it easier to get the average colour by simply checking where the hue lies between 0 and 360.
From then on I wrote my program in C++ using openFrameworks and continued experimenting with the physical audio outputs. The stepper motor was not enough, and I came across SammyIAm's Moppy software on GitHub, which was apparently the beginning of a still-growing community that uses floppy drives to output specific notes and even play songs (preferably ones within the range that can be played on a drive). I used the instructions along with the software to hear what the actual output would sound like.
I experimented with the Moppy program and used its code as a reference to understand how the speed of the motor within the drives is changed to generate a specific note, and then tried producing those notes from my own program. However, I could not get the right notes, as I encountered many problems with moving the floppy drive backwards and forwards. I started by moving one drive, and whilst that worked from the Arduino IDE, I could not manage to trigger all the floppy drives in real time from the program running in openFrameworks.
As an end result, the floppy drives did not trigger correctly depending on the colours and instead either lagged or played all at once without the correct delays. Thus, from the viewer's perspective, not only were the colours generated randomly on screen, but the physical audio output seemed chaotic as well.
Comments on the build
I did not come across many major problems when dealing with the code for the circular colour lines. Some smaller and more tedious ones were getting the hue of each member of the rings class after it had been changed. Essentially, the randomness of each ring's colour was defined in setup, so when it was changed within the member functions I could not pick up the new hue. To solve this I kept a pointer to each member of the class, so I could access the current hue of each one and get their overall average.
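A small sketch of the fix, with illustrative class and member names rather than the project's own:

```cpp
// Illustrative only: class and member names are assumptions, not the project's code.
// Keeping pointers to the ring objects means the average is always read from their
// current hue rather than from values copied once in setup().
#include <vector>

class Ring {
public:
    float hue = 0;                 // updated inside the class's own update/draw
    void update() { /* randomise or drift the hue here */ }
};

float averageHue(const std::vector<Ring*>& rings) {
    float sum = 0;
    for (const Ring* r : rings) sum += r->hue;  // reads the *current* hue
    return sum / rings.size();
}
```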
Another problem I encountered when using openFrameworks and C++ was connecting to the Arduino. When previously using Processing, both the connection and its documentation were more straightforward, whereas at the time it was hard to find up-to-date openFrameworks examples for connecting to the Arduino, as they varied depending on the version of openFrameworks and its libraries. Fortunately, the newest version of openFrameworks already has an inbuilt Arduino class that can be used for the connection, provided the Arduino board has the StandardFirmata sketch from the examples folder uploaded onto it. However, even after sorting out the connection I still came across various problems when triggering the floppy drive functions from openFrameworks.
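For reference, a minimal sketch of the inbuilt ofArduino class used with a board running StandardFirmata might look like this; the serial device name and pin number are placeholders, and the usual main()/ofRunApp boilerplate is omitted.

```cpp
// Minimal sketch: openFrameworks talking to an Arduino running StandardFirmata
// via the built-in ofArduino class. Device name and pin are assumptions.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofArduino ard;
    bool ardReady = false;

    void setup() {
        ard.connect("/dev/tty.usbmodem1411", 57600);           // placeholder device name
        ofAddListener(ard.EInitialized, this, &ofApp::onArduinoReady);
    }

    void onArduinoReady(const int& version) {
        ardReady = true;
        ard.sendDigitalPinMode(2, ARD_OUTPUT);                  // e.g. a step pin
    }

    void update() {
        ard.update();                                           // must be called every frame
        if (ardReady) ard.sendDigital(2, ARD_HIGH);             // example write
    }
};
```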
Whilst the movement of the motors worked as intended from the Arduino IDE, when porting the code to C++ I could not manage to achieve the right result. The main struggle was not only triggering the drives to move the right way, but also getting them to play the right note. I did manage to achieve a similar version of the movement I had written in the Arduino IDE, but there remains much room for improvement regarding the floppy drive outputs.
Evaluation
After going through all the prototyping and exploration of different approaches to triggering sound, I was satisfied with the final concept of the project. The interface showing the randomised colours seemed clear and simple enough. However, due to the difficulties I had in getting the right notes to play on the floppy drives, it eventually came to sound like what I feared most: just randomised sound. Thus the end result did not seem like the easily understood application it should have been.
Apart from that, once the right notes are playing, I would like to build upon the application to also use the saturation of the colours to trigger a seventh note. Further on, I want to find a more systematic generation of the colours so that the notes evolve into repetition, allowing the output to feel like a track in itself.
Link to gitlab repository that contains the final project as well as documentation files:
http://gitlab.doc.gold.ac.uk/lsarm001/linasarma_creativeprojects2.git
References:
1. https://www.bram.us/2012/03/21/spectrogram-canvas-based-musical-spectrum-analysis/
2. http://www.bewitched.com/song.html
3. http://sitsshow.blogspot.co.uk/2014/01/geometry-of-music-basics-of-music-system.html?m=1
4. https://github.com/SammyIAm/Moppy

Interactive Audio Piece
The original idea of my project was to make an interactive audio piece, linking audio synthesis with the physical world to create an original way of manipulating sound, blurring the line between noise and music, analog and digital, and exploring the possibilities of digital media. Concretely, the project was to consist of a simple way of interacting that would permit a large and varied range of outputs: three strings stretched across a wooden plank, with an electric current passing through each one of them. This enables simple but sufficient possibilities of interaction. The wires' voltage changes when they are touched (the human body being conductive), and in addition to that "binary" input, the wires' resistance is variable and increases as each string is stretched; this more continuous data gives us the possibility of a "glissando" effect.
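A rough Arduino-style sketch (C++) of this kind of input handling, assuming each string forms part of a voltage divider read on an analog pin; the pin numbers and thresholds are illustrative only.

```cpp
// Each string is read on an analog pin: a touch shows up as a sudden voltage change
// (the coarse "binary" input), while stretching shifts the reading gradually
// (the continuous input used for the glissando). Pins and thresholds are assumptions.
const int STRING_PINS[3] = {A0, A1, A2};
int restLevel[3];                 // reading of each untouched, unstretched string

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; i++) restLevel[i] = analogRead(STRING_PINS[i]);
}

void loop() {
  for (int i = 0; i < 3; i++) {
    int v = analogRead(STRING_PINS[i]);
    bool touched = abs(v - restLevel[i]) > 100;   // coarse touch detection
    float bend   = (v - restLevel[i]) / 1023.0;   // continuous value for glissando
    Serial.print(touched); Serial.print(' '); Serial.println(bend);
  }
  delay(10);
}
```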
The project's idea is based on early electronic instruments such as the theremin and the fingerboard theremin. While these instruments still somewhat resemble standard instruments in sound and appearance, they already have distinctively "electronic" features; they sit at the intersection of those two worlds.
In a similar way, my project sits at the same intersection between the analog and the digital: the data from the input is in analog form, it is converted and processed as digital bits, and it finally returns as analog at the output.


To me, To you
By Tom Pinsent
Project Description
‘To me, To you’ is a collaboration between me and my computer. Inspired by a bombardment of screens, wires, beeps and boops alongside a lack of traditional, more physical forms of work within the digital art world, the electronic part of my piece is kept solely within the production. This leaves the final product independent of digital technology, un-reliant upon electricity to be experienced, although having been heavily influenced by it.
The paintings are created through a step-by-step process. It starts with a simple shape or form painted on to a canvas. My program detects this through a webcam and compares the shape to a pre-trained set of shapes, with the most similar projected back on to the canvas. I add these shapes to the painting, and the process repeats until the picture is done.
Audience and Intentions
Through my piece I wanted to convey an interplay between contemporary artistic practice and the traditional, through the counterposition of modern technology with the conventional, seemingly dated, medium of paint. Furthermore, I hoped to suggest a dialogue in the form of a collaboration between an artificial, digital artist, consisting of both hardware and software, and myself. I hoped to show viewers that contemporary art with a heavy use of technology does not have to be all "beeps" and "boops", and can in fact exist within the real world (as opposed to the ephemeral cyber-space realm), in a form that is well known and recognised throughout art history. My work might attract those with either professional or personal interests in art or computer science, as well as hopefully providing insight to members of the general public who may not have been exposed to these kinds of ideas before.
Background Research
When starting with this project, I looked at a range of artists and works that dealt with both contemporary and traditional methods of creative practice, where neither aspect holds a greater importance than the other. New approaches are combined seamlessly with the conventional, bringing both styles into a new context, therefore producing art that feels fresh yet familiar.
Names include Yannick Jacquet, Joanie Lemercier and more.
For more info, visit: http://igor.gold.ac.uk/~skata001/hiveMind/2015/10/30/relationship-between-contemporary-and-traditional-creative-practices/
I also conducted further technical research, by looking at computer vision, machine learning and neural network techniques.
Resources used:
Udacity Deep Learning course (mainly convolutional neural network section)
https://www.udacity.com/course/deep-learning--ud730
Self Organising Map tutorial
http://www.ai-junkie.com/ann/som/som1.html
I initially thought this would be useful since a Kohonen Self-Organising Map is an unsupervised machine learning algorithm widely used for grouping images (and many other types of data) by similarity.
Design Process and Commentary
I began with an initial list of things my program needed to do:
1. Scan canvas and detect shapes
2. Analyse detected shapes and composition
3. Create new shapes and forms based on how well detected forms fit a desired aesthetic
4. Display new forms on to canvas
I then began exploring methods and techniques used in computer vision, machine learning and neural networks, since these are areas that are used within similar applications, such as image classification/face detection.
1.
When scanning and detecting shapes, I wanted my setup to work like this, which would be the simplest and easiest during the actual use of the program (the painting of the canvases), and meant it would be suitable for live scenarios:
However, I found this was impractical for a few reasons. The camera should not detect anything apart from the desired shapes on the canvas, and even with cropping, the distance combined with the noise from a low-quality webcam made for very bad shape detection. At this stage I did not know how computationally intensive the detection and generation of shapes was going to be, so I thought committing to a setup and build geared for quick and easy use might have led to problems later on, towards the end of completion, and a live scenario might not have been viable. There was also the issue of getting the same perspective between camera and projector, for an accurate representation of shapes from both my point of view and my program's.
Instead, I ended up holding the webcam up to the canvas and pointing it at the desired shape. This worked well, since with the web cam closer up, there is greater detail in the image and hence more accurate contour detection. I also found that a ‘1-to-1’ representation of forms on the canvas and displayed shapes was not necessary at all, which I will come to later.
In terms of the shape detection itself, I started with ofxOpenCv. I used a variety of test images (as the webcam was unavailable at this stage) and the results were OK, but contained a lot of noise: rough, jagged edges were picked up all over the detected contours. I later used the smoothing and resampling functions of the ofPolyline class in openFrameworks, which made some improvements. In the end I switched to the ofxCv add-on, a package with greater functionality than ofxOpenCv and closer to the OpenCV library. Contours detected with this library were much better.
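A condensed sketch of this kind of ofxCv pipeline is shown below; the threshold, blob size and resample count are assumed values, and the main()/ofRunApp boilerplate is omitted.

```cpp
// Rough sketch: detect contours from a webcam with ofxCv, then smooth and resample
// the first contour with ofPolyline so later comparisons see equal-length shapes.
#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxCv::ContourFinder finder;
    ofPolyline shape;

    void setup() {
        cam.setup(640, 480);
        finder.setThreshold(128);       // binarisation threshold (assumed value)
        finder.setMinAreaRadius(20);    // ignore small noise blobs
    }

    void update() {
        cam.update();
        if (cam.isFrameNew()) {
            finder.findContours(cam);
            if (finder.size() > 0) {
                shape = finder.getPolyline(0).getSmoothed(5).getResampledByCount(100);
            }
        }
    }

    void draw() {
        cam.draw(0, 0);
        shape.draw();
    }
};
```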
2. and 3.
So I wanted to find the similarity between a painted shape and a set of test shapes, then generate shapes from there. Initially my test set was going to be a collection of images, textures and forms that fit some aesthetic of my choosing. I thought the use of generative, rule-based systems such as L-systems or cellular automata to generate my test set would be interesting, further increasing my computer's influence on the work. I assumed this data would be in the form of JPEGs, so I implemented various functions for loading and analysing data from image files. Yet as experimentation continued, I was told about the superformula, an algorithm for generating shapes, most of which resemble forms found throughout nature. The beauty of this algorithm is that such a range of shapes can be created just from the alteration of four individual variables. I also realised that my training set no longer needed to be in the form of images; I could just take vertex positions and feed them straight into my program. I edited and modified a version of the superformula in Processing, allowing me to create and save a training set.
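For reference, a minimal C++ sketch of the superformula with a = b = 1, which leaves the four parameters mentioned above; the parameter values and vertex count are illustrative.

```cpp
#include <cmath>
#include <cstdio>

struct Pt { float x, y; };

// Superformula (Gielis) with a = b = 1, so the shape depends on m, n1, n2, n3.
Pt superformula(float phi, float m, float n1, float n2, float n3) {
    float t1 = std::pow(std::fabs(std::cos(m * phi / 4.0f)), n2);
    float t2 = std::pow(std::fabs(std::sin(m * phi / 4.0f)), n3);
    float r  = std::pow(t1 + t2, -1.0f / n1);
    return { r * std::cos(phi), r * std::sin(phi) };
}

int main() {
    const float TWO_PI_F = 6.2831853f;
    const int steps = 100;                                  // vertex count of the shape
    for (int i = 0; i < steps; i++) {
        float phi = TWO_PI_F * i / steps;
        Pt p = superformula(phi, 6.0f, 1.0f, 7.0f, 8.0f);   // example parameter set
        std::printf("%f %f\n", p.x, p.y);                   // one vertex per line
    }
    return 0;
}
```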
Finding a value for the similarity, or difference, of two shapes was very tricky, since there are many factors to take into account, such as size, orientation, location… After much research and testing, I settled on a method using a turning function, as described on page 5 (section 4.5) here:
http://www.staff.science.uu.nl/~kreve101/asci/smi2001.pdf
Using this website as a reference along the way:
https://sites.google.com/site/turningfunctions/
This function maps the change of angle at each vertex along the perimeter of a shape onto a graph. It is then (relatively) easy to calculate a distance, or similarity, between two or more sets of points (for example, shape A's first angle change is compared to shape B's and saved, with the same happening for every other vertex; a mean of these differences then gives a similarity between the two shapes).
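A simplified C++ sketch of this comparison, assuming both shapes have already been resampled to the same vertex count; it ignores the rotation and start-point alignment that a full turning-function distance would include.

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Exterior (turning) angle at each vertex of a closed polygon.
std::vector<float> turningAngles(const std::vector<Pt>& poly) {
    const float PI_F = 3.14159265f;
    std::vector<float> angles;
    int n = (int)poly.size();
    for (int i = 0; i < n; i++) {
        const Pt& a = poly[(i + n - 1) % n];
        const Pt& b = poly[i];
        const Pt& c = poly[(i + 1) % n];
        float in  = std::atan2(b.y - a.y, b.x - a.x);
        float out = std::atan2(c.y - b.y, c.x - b.x);
        float d = out - in;
        while (d >  PI_F) d -= 2 * PI_F;   // wrap to (-pi, pi]
        while (d <= -PI_F) d += 2 * PI_F;
        angles.push_back(d);
    }
    return angles;
}

// Lower value = more similar; assumes both shapes have the same vertex count.
float turningDistance(const std::vector<Pt>& A, const std::vector<Pt>& B) {
    std::vector<float> ta = turningAngles(A), tb = turningAngles(B);
    float sum = 0;
    for (size_t i = 0; i < ta.size(); i++) sum += std::fabs(ta[i] - tb[i]);
    return sum / ta.size();
}
```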
So now I had a range of base training shapes, each with a similarity to an initially drawn shape. I then looked into clustering, so I could group similar similarity values together. However, after more research I found it was quite inefficient to implement most clustering algorithms on one-dimensional data (my single similarity value), and instead looked to the Jenks Natural Breaks algorithm. This is used for splitting a set of data into sub-groups with similar values (so shapes with similarities 1, 7 and 15 would be in a different group to those with 150, 120 or 110, for example). After my data had been split into sub-groups, the most similar group, i.e. the one containing the lowest values, was used to generate images.
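The grouping step can be illustrated with a much simpler stand-in than Jenks proper: sort the one-dimensional similarity values and cut at the largest gaps. This is not the Jenks algorithm, only a sketch of the same goal.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Split sorted 1-D values into k groups by cutting at the k-1 largest gaps.
std::vector<std::vector<float>> splitByLargestGaps(std::vector<float> values, int k) {
    std::sort(values.begin(), values.end());
    std::vector<std::pair<float, int>> gaps;         // (gap size, index after the gap)
    for (size_t i = 1; i < values.size(); i++)
        gaps.push_back({values[i] - values[i - 1], (int)i});
    std::sort(gaps.rbegin(), gaps.rend());           // largest gaps first
    std::vector<int> cuts;
    for (int i = 0; i < k - 1 && i < (int)gaps.size(); i++) cuts.push_back(gaps[i].second);
    std::sort(cuts.begin(), cuts.end());
    std::vector<std::vector<float>> groups;
    int start = 0;
    for (int c : cuts) {                             // slice the sorted values at each cut
        groups.emplace_back(values.begin() + start, values.begin() + c);
        start = c;
    }
    groups.emplace_back(values.begin() + start, values.end());
    return groups;
}
// e.g. similarities {1, 7, 15, 150, 120, 110} with k = 2 -> {1, 7, 15} and {110, 120, 150}
```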
However, for the actual generation this was all: the superformula shapes originally generated for the training set were just re-presented. I tried using the values fed into the superformula to generate the most similar shapes as a starting point, then slightly tweaking the values and regenerating new shapes. But I found that even after small tweaks the variation was too great and any sense of similarity was lost.
4.
I would have liked to include some sort of composition calculation too, yet I found that within the time I had this was not possible. Instead, for the paintings produced I tried different methods of composition. One was my own choice of placement of the generated shapes, the next was a layering of shapes on top of one another from most similar to least, and for the last I tried distorting the projection itself. As I was painting, I noticed how the projection distorted along with the distortion of the surface itself. This made me think about how my program had influence over the piece through the code, the software, while I was also painting over a projection: how can the hardware and its positioning affect the work?
Evaluation
Overall, I have learnt a lot from this project. With initially very little understanding of machine learning or computer vision techniques, I had to read a lot and jump over many small hurdles and problems.
I would have liked to have a more accurate measure of similarity, since currently my turning function does not work well with changes in rotation or lots of noise on the contours.
The generation could have been better too, since in the end it is almost just recycling the initial forms given to it (however, parallels can be seen between this and the age-old mystery of what true creation really is, whether it really exists, or whether we ourselves just recycle and reproduce ideas presented to us). I would also like to implement some sort of colour and composition training and generation. My code is also quite messy, with many functions included that weren't used in the final painting. This could have been avoided by better planning of exactly what functions and methods I was going to use, with a clear idea of exactly what I needed.
Furthermore, I believe the majority of interest within my project lies in the production, in the technology, leaving the traditional values that I wanted to emphasise severely lacking. I succeeded in my plan to push technology out of the limelight, yet this left the public with little understanding of the process of the work's creation. For similar projects in the future, I would need to spend at least as much time on the physical, non-electronic side of things as on the electronic if I want them to hold the same importance, as well as finding a way to communicate the process more clearly through the work itself (in ways other than a short video documenting the process).
Gitlab Link:
http://gitlab.doc.gold.ac.uk/tpins001/Creative_Projects_2

Cycle
Project Description
Cycle is a series of technology-assisted performances incorporating robotics and sound. It was inspired by the interrelated concepts of Graphic Notation and East Asian Calligraphy/Ink Wash Painting. In each unique recurrence, Cycle explores the theme of spontaneity and individuality transpiring within a structured framework, as the performers present their own interpretation of a set of instructions.
Each performance lasts approximately three minutes, give or take a minute; the performers end it at their own discretion. During the performance, a sole performer walks around the ‘Ink Stick Rotation Machine’ (ISRM) in a seemingly undefined way. The ISRM grinds an ink stick on an ink stone according to how the performer walks. Ambient sounds and vibrations generated from the constant moving contact of the ink stick and ink stone are amplified by speakers through a microphone located on the sides of the ink stone in real time.
Aesthetic Themes
Concepts
In the performer's interpretation of a set of rules constructed by the graphic score's composer, control over the manner of performance is removed from the composer's authority, which alludes to a spontaneous creation of the performance by leaving it to 'chance'. Unlike music represented in traditional notation, different performances of one graphic score do not have the same melody, yet still articulate similar notions expressed in the score. In the case of Ink Wash painting, despite the rules on posture, the way of holding the brush, and practiced strokes, the results cannot be fully controlled by the painter and remain unpredictable due to human error and the nature of ink and water; their interaction takes on a life of its own.
The audience sees and listens; nothing really comes out of watching the performance. Yet even if the audience does not understand the concepts implied in this work, which require some background knowledge about the act of grinding an ink stick, to experience Cycle they merely have to practice being in a state of calmness and ambiguity. Just as when a painter or calligrapher prepares ink by manually grinding the ink stick, it is to ebb their flow of thoughts, momentarily forget about the things happening outside the performance, and just watch and listen. The performance would be both a 'performance' and a non-religious 'ritual' at the same time. The feeling is like that of a non-Buddhist listening to the chants of Buddhist monks: strangely calming, yet it could get annoying when one listens to an unfamiliar language for too long.
For the performers, I would hope that they would be in a world of their own, not minding the presence of the audience and focusing on their body walking in a circular path, yet I can imagine that they would perhaps be nervous in front of an audience, especially if they are performing for the first time. As a recurring theme in my work, 'walking' is a simple movement that can be of disinterest and a distraction all the same. It refers not only to the bodily action of moving your legs as a mode of transport but also signifies the act of repetition, which is structural, and the mundane. After walking a few times, the performer may build up a personal routine or choose to walk in a different manner each time.
Rationale
After my research on Graphic Notation and East Asian Ink Wash Painting, I drew connections between these two distinctively different genres of art and showed their overlapping characteristics, which my artwork attempts to embody conceptually. I likened graphic notation to instructions that are rather open-ended yet specific in certain ways; hence, I decided on creating a performance built around such a set of instructions.
Borrowing the motif of ink grinding, which is in itself the stage that happens before the actual painting is executed, and combining it with the imagined sound that graphic notation alludes to, I made the ISRM a framework for the performers. The performer's actions are translated into 26 rotation speeds and merely two rotating directions on the ISRM. Within the structure of the ISRM itself, I also found it ironic to have a physically mechanical device replace the mechanical and repeated motions of ink stick grinding. I was unsure at the beginning of the exact sound that would be produced, as the amplified sound would be quite different from the tiny scratching noise that I am familiar with when grinding ink. With the addition of the sound of the motor, I thought the result would be a nice hybrid of organic and inorganic materials.
Background Research
Graphic Notation
In the late 1950s and the first half of the 1960s, many prominent international avant-garde composers such as Roman Haubenstock-Ramati, Mauricio Kagel, and Karlheinz Stockhausen, as well as experimental composers such as John Cage, Morton Feldman, and Christian Wolff started to produce graphic scores that used new forms of notation and recorded them on sheets that were very divergent from traditional music notation in size, shape, and colour. This new way to convey ideas about music alters the relationship of music/sound to the composer and musician. “In contrast to scores with traditional notation, graphic notation emphasized concepts and actions to be carried out in the performance itself, resulting in unexpected sounds and unpredictable actions that may not even include the use of musical instruments.” (Kaneda, 2014)
Here, I focus on how graphical notation evolved from John Cage’s musical practice and then on Treatise, one of the greatest graphical scores, by Cornelius Cardew.
Influence of Zen Buddhism in Cage’s Work
In Cage’s manifesto on music, his connection with Zen becomes clear: “nothing is accomplished by writing a piece of music; nothing is accomplished by hearing a piece of music; nothing is accomplished by playing a piece of music” (Cage, 1961).
This reads as if a quote from a Zen Master: “in the last resort nothing gained.” (Yu-lan, 1952). Cage studied Zen with Daisetz Suzuki when the Zen master was lecturing at Columbia University in New York. Zen teaches that enlightenment is achieved through the profound realization that one is already an enlightened being (Department of Asian Art, 2000). Thus we see that Cage has consciously applied principles of Zen to his musical practice: he does not try to superimpose his will in the form of structure or predetermination in any form (Lieberman, 1997).
Cage created a method of composition from Zen aesthetics which was originally a synthetic method, deriving inspiration from elements of Zen art: the swift brush strokes of Sesshū Tōyō (a prominent Japanese master of ink and wash painting) and the Sumi-e (more on this in the next section) painters which leave happenstance ink blots and stray scratches in their wake, the unpredictable glaze patterns of the Japanese tea ceremony cups and the eternal quality of the rock gardens. Then, isolating the element of chance as vital to artistic creation which is to remain in harmony with the universe, he selected the oracular I Ching (Classic of Changes, an ancient Chinese book) as a means of providing random information which he translated into musical notations. (Lieberman, 1997)
Later, he moved away from the I Ching to more abstract methods of indeterminate composition: scores based on star maps, and scores entirely silent, or with long spaces of silence, in which the only sounds are supplied by nature or by the uncomfortable audience, in order to "let sounds be themselves rather than vehicles for man-made theories or expressions." (Lieberman, 1997)
John Cage: Atlas Eclipticalis, 1961-62

Atlas Eclipticalis is for orchestra with more than eighty individual instrumental parts. In the 1950s, astronomers and physicists believed that the universe was random. Cage composed each part by overlaying transparent sheets of paper over the ‘Atlas Eclipticalis’ star map and copied the stars, using them as a source of randomness to give him note heads. (Lucier, 2012)
In Atlas, the players watch the conductor simply to be apprised of the passage of time. Each part has arrows that correspond to 0, 15, 30, 45, and 60 seconds on the clock face. Each part has four pages with five systems each. Horizontal space equals time. Vertical space equals frequency (pitch). The players' parts consist of notated pitches connected by lines. The sizes of the note heads determine the loudness of the sound. All of the sounds are produced in a normal manner. There are certain rules about playing notes separately, not making intermittent sounds (since stars don't occur in repetitive patterns), and making changes in sound quality.
Cornelius Cardew: Treatise, 1963-67
After working as Stockhausen's assistant, Cornelius Cardew began work on a massive graphic score, which he titled Treatise; the piece consists of 193 pages of highly abstract scores. Instead of trying to find a notation for sounds that he hears, Cardew expresses his ideas in this form of graphical notation, leaving their interpretation free, in confidence that his ideas have been accurately and concisely notated (Cardew, 1971).
As John Tilbury writes in Cornelius Cardew: A Life Unfinished (2008), ” The instructions were a guide which focused each individual’s creative instinct on a problem to be solved – how to interpret a particular system of notation using one’s own musical background and attitudes.”
“A Composer who hears sounds will try to find a notation for sounds. One who has ideas will find one that expresses his ideas, leaving their interpretation free, in confidence that his ideas have been accurately and concisely notated.” – Cornelius Cardew
In the Treatise Handbook which guides the performer on the articulation of the score, Cardew writes that in Treatise, “a line or dot is certainly an immediate orientation as much as the thread in the fog” and for performers to “remember that space does not correspond literally to time.” (A Young Persons Guide to Treatise, 2009)
East Asian Ink Wash Painting

Ink Wash painting was strongly influenced by Chinese Calligraphy, Chan and Zen Buddhism, and was done by Chinese and Japanese monks as a mental and meditative practice. Many Buddhist ideas were transferred into painting such as reduction of the subject matter to its essence, abandonment of needless details and the directness of the brushstroke associated with the immediacy of enlightenment. Today, Ink and wash painting is no longer practiced only by monks, but the aesthetics remain the same (Asian Brushpainter, 2012).
Materials & Tools
The close relationship between the materials and tools used influenced the evolution of artistic forms and techniques in Ink Wash painting. As the materials and tools used in calligraphy and painting are essentially the same, calligraphic styles and techniques can also be used in painting. The majority of painters used a brush to apply ink onto the silk or paper ground. The most important part of this instrument is the bristled end, made of soft rabbit's or wolf's fur, or stiffer pony's hair or mouse's whiskers. Artists carefully select the correct brush for their personal painting style. To create an even, linear brush stroke, the artist would use a soft brush, while stiff brushes are used to attain fluctuating and uneven lines. (Williams, 1981)
An equally important element in ink wash painting is the ink itself. It is a solid stick, a compressed mixture of vegetable soot and glue, which the calligrapher grinds with some water on a special ink stone to produce liquid ink. Until recently, Chinese and Japanese calligraphers never had liquid ink; they always had to make fresh ink. A calligrapher who wants to brush a serious artwork will always make her/his own ink, because bottled ink is always inferior to freshly ground ink (What Ink Stick Should You Choose For Japanese Calligraphy?, 2015). Therefore, the ritual of grinding fresh ink from an ink stick and water is performed every time before the painter paints. This becomes a stage where the painter prepares, physically grinding the ink while mentally attuning to starting on a piece of work.
Technicalities & Principles
There are many traditional principles and technical aspects of Ink Wash Painting, but I am interested in the traditional methods that I consider much more restricting than painting in the ‘Western’ style, yet are still visually expressive and romantic. In Ink Wash Painting, there are principles (not rules, because more experienced painters may not observe them) observed from holding the brush to how the subject matter should be expressed. The brush techniques emphasized in Chinese painting include not only line drawing but also the stylized expressions of shade and texture and the dotting methods used mainly to differentiate trees and plants and also for embellishment.

These techniques often require the painter's discretion, such as estimating the water-to-ink ratio to create varying shades of black, or the amount of pressure to place on the brush. These are achieved through practice and countless failed paintings before a painter is able to spontaneously paint without mistakes.

Once the brush is placed on the paper, the stroke has to be executed. This immediacy and quickness also mean that a painter can easily make mistakes which are not correctable. However, it is one of the aesthetic characteristics of Ink Wash painting that a painting may contain small imperfections, such as a brushstroke with a frayed end or a line that is too thin. Such imperfections not only reveal the painter's individual style but also add a personal note to the picture.
Calligraphic Influences in Zen Art
Though its origins are from India, Zen Buddhism was formalized in China then transmitted to Japan and took root there in the thirteenth century. Enthusiastically received in Japan, it became the most prominent form of Buddhism between the fourteenth and sixteenth centuries. The immigrant Chinese prelates were educated men, who introduced not only religious practices but also Chinese literature, calligraphy, philosophy, and ink painting to their Japanese disciples. (Department of Asian Art, 2000)
Zen Buddhism’s emphasis on simplicity and the importance of the natural world generated a distinctive aesthetic, such that a misshapen, worn peasant’s jar is considered more beautiful than a pristine, carefully crafted dish. While the latter pleases the senses, the former stimulates the mind and emotions to contemplate the essence of reality (Department of Asian Art, 2000). The Enso is what captures these values – it is a circle that is hand-drawn in one, and in some cases two, uninhibited brushstrokes to express a moment when the mind is free to let the body create.

The Enso, or Zen circle, is one of the most appealing themes in Zen art. The Enso itself is a universal symbol of wholeness and completion, and the cyclical nature of existence, as well as a visual manifestation of the Heart Sutra, “form is void and void is form.” (Zen Circle of Illumination)
Research Conclusion
Despite there being many specific technicalities in Cage’s work, these are all qualitative instructions which are open-ended, ultimately leaving it up to the performer’s or conductor’s judgement on how they would play the piece as implied by Cardew’s ideas. In a sense, the individuality of each performance of the graphic score by different performers emerges. This is mirrored in appropriating the creation of the Enso in Cycle by the performer. Every painter draws a circle but every circle is different. Bodily and mindfully engaged in drawing the circle, the circle becomes an allegory of the individual.
The performer not only becomes both the painter and the medium in creating the circle; the performer is also a musician with indirect control of the device that grinds ink: the instrument with a naturalistic sound created by the contact between the ink stick and the ink stone. To quote Cage's approach to what defines music, "the difference between noise and music is in the approach of the audience" (Lieberman, 1997).
The act of grinding the ink stick becomes a juxtaposition between the ritualistic and the improvised. Also, the ink produced after each performance is of a different quality each time, as no two performances last exactly the same length of time, nor can the performers replicate their performance exactly.
Design Process
Communication between the phone and the computer is through OSC. The ISRM is built around an Arduino Uno, which controls a stepper motor and is connected directly to the computer with a USB cable. The speed and direction of the performer are measured by the built-in sensors of a phone carried by the performer. Data from the phone's orientation sensor and accelerometer is processed in a C++ program on the computer, which maps the performer's speed and direction to those of the ISRM.
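A minimal sketch of receiving the phone's sensor data over OSC in openFrameworks with ofxOsc follows; the port and OSC address pattern are assumptions (oscHook's actual addresses should be checked by printing incoming messages), and the main()/ofRunApp boilerplate is omitted.

```cpp
#include <cmath>
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    float accelY = 0;

    void setup() {
        receiver.setup(9000);                                // port is an assumption
    }

    void update() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);                      // older OF versions take &m
            if (m.getAddress() == "/sensor/accelerometer") { // address is an assumption
                float y = m.getArgAsFloat(1);
                if (!std::isnan(y)) accelY = y;              // guard against NaN values
            }
        }
    }
};
```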
ISRM
Controlling the Stepper Motor with C++
The Arduino part was pretty straightforward, as the Firmata library for the Arduino enables serial communication with a C++ program. However, there was no stepper library in C++, so I translated the Arduino Stepper library to C++. Working through the technical details of my stepper motor with some trial and error, this was the circuit I used to test controlling the stepper motor from a C++ program.
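In the spirit of that translation, a condensed sketch of driving a four-wire stepper through Firmata from C++ might look like the following; the coil pins are assumptions, and speed control comes from how often step() is called.

```cpp
// Not the project's code: a sketch of stepping a 4-wire stepper over Firmata,
// writing the standard full-step coil sequence through ofArduino.
#include "ofMain.h"

class StepperOverFirmata {
public:
    ofArduino* ard = nullptr;
    int pins[4] = {8, 9, 10, 11};            // coil pins (assumed)
    int stepIndex = 0;

    // Full-step sequence for a 4-wire stepper (as in the Arduino Stepper library).
    const int sequence[4][4] = {
        {1, 0, 1, 0},
        {0, 1, 1, 0},
        {0, 1, 0, 1},
        {1, 0, 0, 1}
    };

    void setup(ofArduino& a) {
        ard = &a;
        for (int p : pins) ard->sendDigitalPinMode(p, ARD_OUTPUT);
    }

    void step(int direction) {               // direction: +1 or -1
        stepIndex = (stepIndex + direction + 4) % 4;
        for (int i = 0; i < 4; i++)
            ard->sendDigital(pins[i], sequence[stepIndex][i] ? ARD_HIGH : ARD_LOW);
    }
};
```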
Here’s me testing the program out:
3D Modelling
To hold the ink stone, ink stick, and stepper together as a single functional entity, I started off with a preliminary design of a 3D model in Blender, which I was eventually going to 3D print.

I got the idea for the rotation wheel and axle from the driving wheels of steam locomotives, but I was not satisfied with the motion of the rotating mechanism in the first prototype. It caused the ink stick to rotate in a rather awkward manner that did not keep it facing the same direction. I also removed the water tank, as I felt it was visually obstructive and had no purpose other than providing the ink stick with water, and I did not manage to figure out a fail-safe method of channelling the water onto the ink stone. I thought of using a wick to transfer water from the tank to the ink stone, but the transfer was too slow; a small hole with a pipe dripping water onto the ink stone would not work either, as the rate of dripping changes when the water level in the tank, and thus the pressure, decreases. Also, it would damage the ink stick to let it touch the water for too long, so I scrapped the water tank from then on and decided to manually add water before every performance.
There were many difficulties in getting the holder for the ink stick to fit. I realised that it was never going to fit perfectly, as the dimensions of the ink stick itself were not uniform; one end of the stick could be slightly larger than the other, which made it either too loose or too tight when I tried to pass the entire length of the stick through the holder. I resolved this by making the holder slightly larger and adding sponge padding on the inside so that it would hold the ink stick firmly despite the slight differences in width. The ink stick was shaky when it rotated, so I increased the height of the holder to make it more stable. I also added a ledge on each side of the holder for rubber bands, so that the bands could push the ink stick downwards as it gets shorter during grinding.

Before arriving at the final design, there were just wheels connected to each other only through the rod. The rotation did not work as expected of a locomotive wheel, and I realised that this was because the wheel not connected to the motor had no driving force to ensure it spun in the right direction. Therefore, I changed the wheels to gears.



The printed parts did not fit perfectly, and that was not because of wrong measurements; there is a factor of unpredictability in the quality of 3D printing. I tried using acetone vapour to smooth the surfaces of the parts that need to move independently of each other, but the vapour also swelled the plastic slightly. The plastic became more malleable, though, so I could easily shave the parts down with a penknife.








This process was too slow, so I ended up brushing the acetone directly onto the plastic parts and waiting a few seconds for them to soften before using a penknife. Super glue was then used to hold together parts that were not supposed to move. The completed ISRM:
Sound
I used electret microphones connected to a mic amp breakout, then connected to a mixer for the performance. I had originally got a bare electret microphone capsule to use with the Arduino, not knowing that the Arduino was not meant to be used for such purposes and that the capsule was not meant for the Arduino.

So I got another kind that could connect directly to the output, as I did not want to use a regular large microphone, which would look quite ostentatious next to the small ISRM.

Trying to amplify the sound of making ink (sound is very soft because I only had earphones at that time, and I was trying to get the phone to record sound from the earphones):
Sensor Data & Stepper Motor Controls
I initially thought of creating an Android application to send data to the C++ program via Bluetooth, but there was the issue of poor Bluetooth connectivity, especially the range and speed of communication. Hence, I switched to using OSC to communicate the data. Before finally deciding on an OSC app, oscHook, I made an HTML5 web application with Node.js to send the sensor data. It worked well except for speed issues: there was a lag between moving the phone and receiving the corresponding data that made it not quite 'real-time', and it also sent NaN values quite often, which would crash the program if there were no exception handlers.
For controlling the speed of the stepper motor, I mapped the average difference in y-axis acceleration (up and down when the phone is perpendicular to the ground) over the last X values directly to the speed of the motor. Prior to this, I looked at various ways to get the speed and direction of walking, from pedometer apps to compass apps. As different people produced different sensor values with the phone, I created a calibration system that records the mean acceleration when the performer is not moving and when the performer is moving at full speed. This ensured that the stepper would be able to run at all speeds for all performers.
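A sketch of that calibration and mapping follows; the window size and calibration values are assumptions, while the 26 speeds come from the description earlier.

```cpp
// Normalise the mean absolute change in y-axis acceleration over a recent window
// between the calibrated "standing still" and "full speed" levels, then map it to
// one of the 26 motor speeds.
#include <cmath>
#include <deque>

class SpeedMapper {
public:
    float restLevel = 0.05f;   // measured during calibration while standing still
    float fullLevel = 2.0f;    // measured during calibration at full walking speed
    std::deque<float> recent;  // last N acceleration deltas
    size_t window = 20;        // window size is an assumption

    void addSample(float accelDeltaY) {
        recent.push_back(std::fabs(accelDeltaY));
        if (recent.size() > window) recent.pop_front();
    }

    int motorSpeed() {         // returns one of 26 speeds, 0..25
        float mean = 0;
        for (float v : recent) mean += v;
        if (!recent.empty()) mean /= recent.size();
        float t = (mean - restLevel) / (fullLevel - restLevel);
        if (t < 0) t = 0;
        if (t > 1) t = 1;
        return (int)std::round(t * 25);
    }
};
```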
Link to Git Repo.
Performance & Installation
Performance Space

Videos of performances were playing on the screen for the second day of Symbiosis. The TV was covered with white cloth on the first day. The ISRM was placed on a white paper table cover with the microphone next to it.

Instructions for Performers
Besides having to run a calibration before their performances, I requested that the performers wear "normal clothes in darker colours" to contrast with the white room walls. I decided not to specifically ask for black, as it felt too formal and intimidating. Although the performance exudes the sense of a 'ritual', it was not meant to be solemn or grievous, as are the cultural connotations of fully black clothes in a rather ritualistic setting.
During the performance, the performers were to heed these instructions:
- Walk around the room.
- When you stop, stay still until you hear the sound that indicates the motor is at its lowest speed.
- End the performance when roughly three minutes have passed since the start.
Prior to completing the program that controls the stepper motor, I wanted to attach the phone to a belt and hide it under the performers' clothes so that they would be walking hands-free. I realised, however, that it was quite abrupt to merely end the performance with the performer standing still, as there was no indication to the audience of whether the performer was pausing or stopping entirely. Hence, after realising that placing the phone parallel to the ground caused the motor (and in turn the sound) to stop in an elegant manner, I decided that the performer would hold the phone (which I covered in white paper to remove the image of a phone) in their hand and place it on the ground to signify the end of the performance.
Evaluation
The Performances
There was a total of eight performances by three people, Yun Teng, Leah, and Haein. These are videos* of the performances by each of them on the Symbiosis opening night and their thoughts on their experience of performing:
*The lights in the room were off during the day, hence videos of the earlier performances look quite dark. If you do not hear any sound from the video, please turn up the volume.
“Being asked to perform for this piece was an interesting experience. For me, it was seeing how (even on a conceptual level, as it turned out) that my physical movement can be translated through electronics and code into the physical movement of the machine and the audio heard. Initially, although we were given simple instructions to follow and even, to some extent, encouraged to push these instructions, I was at a loss to how to interpret them, and just walked in a circular fashion. I tried to vary the pace, speed and rhythm of my walking in order to create variation, but ultimately fell back into similar rhythms of fast, slow, and fast again. It would have been interesting to perhaps push this even further if the machine was more sensitive to height changes, or arm movements – as a dancer who is used to choreography, this was a challenge for improvisation and exploration. In addition, due to the size of the room, the space was limited and hence the walking could only take place in certain patterns.” – Yun Teng
“At first, the walker was uncertain, distracted and anxious. She explored the link between sound and her unchoreographed strides and expected the connection to be instantaneous and obvious. However, it was not. There were delays and inconsistencies; the electronic and mechanic could not accurately reflect the organic. A slight panic arose from the dilemma of illustrating the artist’s concept to the audience and accepting its discrepancies as part of the performance. Slowly she started to play around with the delay, stopping suddenly to hear the spinning sound trailing on, still at high speed, and waited for it to slow down. Rather than a single-sided mechanical reaction to movement, the relationship between the walker and the machine becomes visible and reciprocal. Rather than just walking, now she also had to listen, to wait, and by doing so interact with the machine on a more complicated level. Through listening, she felt the shadow of her movements played back to her by the machine. The observation sparked contemplation on the walker’s organic presence versus the machine’s man-made existence and the latter’s distorted yet interesting reflection of the former.” – Leah
“The whole practice first was received as confusing and aimless as there was too much freedom for one to explore. It was challenging to perform the same act (walking/running) for more than two minutes. At first, I performed more than four minutes, unable to grasp the appropriate time, but it decreased as I repeated the practice. This repetitive performance was quite meditative and physically interactive with the work that caused me to wonder about the close relationship between myself and sound piece (which changes according to my walking speed). The most pleasant part of the performance was that I got to control the active aspect of the work and directly interact with it.” – Haein
The audience was very quiet, probably so that they could hear the sound that was very soft even at its loudest. When they first came in, they did not know what to do as there was no visible sitting area (so I directed them to sit at places that allowed the performer to roam most of the room). It was a huge contrast to the audience that interacted with my previous work as only the performer gets to have a direct interaction with the ISRM. Even then, the ISRM was visibly moving during the performances.
Just hours before the opening night, the ISRM broke (fig. A & B). It was a mistake on my part: I was reapplying super glue (fig. B) to the base, as it had somehow loosened from the previous application. In hindsight, I had not made enough extra parts (I did print extras of certain, though not all, parts, but they were of no use as I did not bring them on site, nor had they been 'acetoned' to fit together), I could not manage to salvage the broken parts, and I knew that I would not be able to reprint them in time. In the end, I slightly altered my work as the ISRM could no longer function as intended. Instead of having the microphones stuck to the sides of the ink stone, I stuck them to the stepper motor. The sound no longer had an organic element from the ink stick and ink stone; it was now completely mechanical.



After undertaking this project, I have learnt not to limit myself by my tools, but to explore different methods and tools before settling on one for the creation of the work. I had a misconception that 3D printing was the most efficient way. In some ways it was, because the printer was doing the hard work, not me, and I did want to try 3D printing. Despite that, I should not have let my lack of consideration of other materials, such as the traditional approach of putting together wood and gears, limit how I built the ISRM. On the other hand, I do not regret my attempts to build an Android app (which I quickly decided was not worth my time for the simple thing I was trying to accomplish) and a Node.js web application for sending the sensor data from the phone, as these are things I learnt even though I did not use them in my final work.
Fortunately, I managed to finish the design of the ISRM and print it out in time, but I feel I should have focused more on the ISRM than on coding in the earlier phase of the project timeline. 3D printing takes a lot of time, as I experienced through this project, and any botched prints need to be printed again, as they are rarely salvageable even after hours in the printer. It is also tricky to get the settings (e.g. infill) right so that printing time is minimised without compromising quality.

Apart from the many technical things, I also learnt how to organise a performance artwork (this was my first), and through making it many more implications and questions arose from what I created. For the performance, there were many things to keep track of, such as rehearsing with the performers beforehand, the attire of the performers, the schedule of performances, getting a camera to film for documentation and managing the audience. In conclusion, despite being unable to carry out the performances as I had originally planned, I am glad that I managed to put together what was left of the work even when the ISRM failed to function correctly, and that the original intentions behind the artwork remain largely intact.
References & Bibliography
Works Cited in Background Research
A Young Persons Guide to Treatise. (12 December, 2009). Retrieved 2 November, 2015, from http://www.spiralcage.com/improvMeeting/treatise.html
Asian Brushpainter. (2012). Ink and Wash / Sumi-e Technique and Learning – The Main Aesthetic Concepts. Retrieved 2 November, 2015, from Asian Brushpainter: http://www.asianbrushpainter.com/blog/knowledgebase/the-aesthetics-of-ink-and-wash-painting/
Cage, J. (1961). Silence: Lectures and Writings. Middletown, Connecticut: Wesleyan University Press.
Cardew, C. (1971). Treatise Handbook. Ed. Peters; Cop. Henrichsen Edition Limited.
Department of Asian Art. (2000). Zen Buddhism. Retrieved 11 December, 2015, from Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art: http://www.metmuseum.org/toah/hd/zen/hd_zen.htm
Kaneda, M. (13 May, 2014). Graphic Scores: Tokyo, 1962. Retrieved 2 November, 2015, from Post: Notes on Modern & Contemporary Art Around the Globe: http://post.at.moma.org/content_items/452-graphic-scores-tokyo-1962
Lieberman, F. (24 June, 1997). Zen Buddhism And Its Relationship to Elements of Eastern And Western Arts. Retrieved 10 December, 2015, from UCSC: http://artsites.ucsc.edu/faculty/lieberman/zen.html
Lucier, A. (2012). Music 109: Notes on Experimental Music. Wesleyan University Press.
Tilbury, J. (2008). Cornelius Cardew (1936-1981): A Life Unfinished. Copula.
What Ink Stick Should You Choose For Japanese Calligraphy? (2015). Retrieved 3 December, 2015, from Japanese Calligraphy: Modern Japanese Calligraphy inspired in Buddhism and Zen: http://www.theartofcalligraphy.com/ink-stick
Williams, M. L. (1981). Chinese Painting – An Escape from the “Dusty” World. Cleveland Museum of Art.
Yu-lan, F. (1952). A History of Chinese Philosophy. Princeton, New Jersey: Princeton University Press.
Code References & Software

LED Jumper – Daniel Sutton-Klein and Sebastian Smith
LED Jumper is a 2D platformer by Daniel Sutton-Klein and Sebastian Smith, inspired by "The Impossible Game", displayed across a 32×16 LED matrix where the only control is to shout to jump. The game takes advantage of the LEDs' bright visuals and creates a neon-esque aesthetic for the player to be immersed in as they progress through the game.
Processing + Teensyduino code: Download
Initial Intentions
We set out to make a fun and simple game for anyone to play. Since the game is controlled only by the user's voice, our audience isn't limited to any particular group of gamers but instead includes literally anyone who can shout loud enough or make enough noise.
For contextual research and creative inspiration we looked at examples of LED matrices used in interesting visual ways such as in music performances (Ratatat, Four Tet).
We mainly had to focus on the hardware side of things when it came to background research, as we had little prior experience with physical computing. We added all the relevant information we needed to a Google Docs file and researched as much as we could about the different options of compatible hardware, to ensure that we wouldn't waste money on anything useless. Below is a snapshot of this document, which shows how we gathered the information for building our LED display.
The concept of our project, a game running on an LED array, dictated all the design that followed. We had some options about the density of LEDs on the strip: 30, 60, or 144 LEDs per metre. Given the sound-input nature of the game (we imagined people shouting 'jump' at the display to avoid dying), it seemed appropriate to try to get the largest display we could. When prioritising the size of the display, it was cost-efficient to opt for the 30 LEDs/metre strips. We considered different aspect ratios and arrangements of the LEDs (pixels could be aligned diagonally or in a traditional display grid). Platform games rely on being able to see far ahead of the player, so we decided on an aspect ratio of approximately 2:1, with a regular grid arrangement to keep it simple. The display would be approximately 30×15 LEDs (~100 × 50 cm).
The rest of the design process started with prototyping game concepts and mechanics. Without knowledge of the technical side of transmitting the game onto the display, we still knew that a 2-dimensional data structure which emulated the display was the first step in prototyping, as it set up a simple and logical way to set the LEDs when the physical side and libraries were complete. After implementing this, we started working on the core game mechanics that would influence the gameplay and user experience of the game. For jumping, we knew early on that we wanted the user to control the player by simply shouting at the screen, so we used minim to analyse the amplitude of the audio input, convert that into decibels and then make the player jump while the audio input was above a certain threshold. Below are snapshots of the prototypes showing the first 2D data structure array and core mechanics implementation.
From here we built up and focused on developing a playable game on processing while we waited for our hardware to arrive.
With level design, we created the final level as a PNG image in Photoshop and implemented a way in processing to use the image data as the level of our game by loading the pixel information. This way we could also easily develop level mechanics by referencing the colour of a certain unit and how it affects the player when they are next to it. Below is a snapshot of the final game and the Photoshop file to show this worked.
Commentary of build process
With the premise of the project decided, and zero experience with LED control or development boards like Arduino, we started by learning the very basics.
Starting small, we borrowed an Arduino Duemilanove which came with everything we needed to practice small & simple tasks with LEDs. After being able to turn a single LED on and off, we moved to multiple LEDs which we had flashing sequentially. Knowing that we would be having audio input in the finished project, we tried to make a VU meter (volume unit meter) with the Arduino and LEDs, but soon found out that the microphones that plug into Arduino pins have limitations which might make it problematic for monitoring voices. The other alternative was to use the microphone on a laptop, which we found out required a whole other area of expertise, serial communication, which felt beyond our capability and advised against in web forums. Overall, our testing with the Arduino was very useful in the way that it gave us an idea of how development boards and the Arduino IDE works.
After lots of research, we planned to use the OctoWS2811 library on the Teensy to control the LEDs, as it was designed for the Teensy and came with examples which would allow us to stream the video from a laptop (with the VideoDisplay example on the Teensy, and movie2serial on Processing on a laptop) without much extra work.
When the components arrived (Teensy 3.2, WS2812B strips, 30A PSU, and the logic level converter to change 2.2V to 5V), the first test was to control a single LED. After all of the connections on the breadboard were made (with lots of important connections to LOW and HIGH rails, for example to set the direction of the logic converter), we ran the OctoWS2811 library on the Teensy, and confirmed with a multimeter that the data signal to the LED was 5V. This was our first time controlling a WS2812B LED.
After the successful test with a single LED, we moved onto the next step and hooked up a short strip of 22 pixels. Using the ‘Basic Test’ example from OctoWS2811 library, which is meant to change the pixels to a different colour one by one, we saw that there were serious flickering issues, pixels appearing to not update and flashing random colours. Now knowing why this was happening, we tried the test example from an alternative library to OctoWS2811, FastLED, which instantly gave better results. We took this, being able to control a strip of LEDs, as a cue to build the full 32×16 display.
The first step in building the display was to get a piece of wood the right size, which we then painted black. After considering how we could attach the strips, we started measuring out points for holes across the board, which would evenly space the strips in a precise way. We drilled those holes (of which there were a few hundred), then used zipties to secure the strips into place.
The strips have power, data and ground connections, which we soldered to wires that went through holes at the end of each strip. After everything was set up, we loaded the Basic Test for OctoWS2811 again, and despite the initial excitement of seeing all the pixels light up for the first time, we realised there were serious communication issues. The test was meant to light up all the LEDs the same colour, then change the colour of each pixel in order, one by one. Instead of the clean result we hoped for, lengths of strips seemed to not update colour, and many colours would flicker.
At this point we split all the data cables into 2 CAT5 wires instead of 1, the idea being that it would minimise cross-talk and stop interference in the data signal. This helped a bit, but the same problems were still there. With an oscilloscope, we looked at the waveform of the data signal coming out the Teensy, and saw that it wasn’t clean and the timing (which has to be VERY specific) was wrong. This was in comparison to the OctoWS2811 library website’s graph which showed us exactly how the waveform should be for the WS2812B chips. Going back to FastLED, we saw a major improvement, and decided that OctoWS2811 was not reliable enough to use.


Although FastLED outputted better signals, it came with it’s own problems. It wasn’t designed for the Teensy and only had single pin output by default. Latency and signal degradation would be expected trying to control 512 LEDs on a single pin, not to mention it would mean changing all the wiring on the display, so we looked into the different options to output to the 8 pins that were set up. The Multi-Platform Parallel output method described on the FastLED wiki was supposed to do what we needed, but after spending a lot of time on it we just couldn’t get it to work. We then tried the method used in FastLED’s ‘Multiple Controller Examples’ which created multiple FastLED objects. This worked and we were able to light up the whole display with colours we wanted (see the colour gradient photos).
The next problem was serial data communication. Our original plan was to use OctoWS2811 to control the LEDs, with the VideoDisplay + movie2serial examples which would deal with streaming video from a laptop to the Teensy automatically. Now using FastLED, we would have to write our own code to do this manually. Arduino and Processing both have Serial libraries which are used for serial communication, and after looking at the references and Googling for other people doing similar thing, it still wasn’t clear how to make it work. After almost giving up a day before the deadline, a section at the bottom of FastLED’s wiki for ‘Controlling LEDs’ gave us a clue:
Serial.readBytes( (char*)leds, NUM_LEDS*3);
With some trial and error we were able to send serial data to the Teensy from Processing which, using FastLED, successfully (and beautifully) lit up our test strip of 22 LEDs on 1 pin. After that, we moved the whole project upstairs where it would be easier to work on code at the same time, but as we got up, the whole thing seemed to stop working. We brought up the oscilloscope to see what was happening, and indeed there was no data signal. Stumped, we spend 2 hours trying to figure it out, as when we moved back downstairs it magically worked again. At one point the Teensy stopped responding completely and we feared we’d shorted it, which lost us for a while again until we saw that the power supply ground was plugged into the HIGH on the breadboard. The next day we tried the working serial communication to control a whole strip of 64 LEDs on the full display from Processing, which didn’t look correct at first, and we were worried about the size limit of the serial data receive buffer, but it turned out the Processing sketch has to be stopped before restarting or the serial data gets cut in. To make a long story short, after changing and configuring code in Teensy and Processing, we achieved serial communication sending data for the full array.
Implementing the Processing serial communication code into the game was fairly straight forward, although due to the layout of the strips (where 1 strip is 2 rows and they zig zag in direction), the first time we ran the game, each 2nd row was reversed. This was fixed with some code to alternate the direction of LEDs sent each row.
Due to us only getting the display working on the day before the deadline, we didn’t have much time to experiment further to make the game perfect. We did however run tests looking at brightness of the LEDs, as they were very bright, and found that dimming the pixels is most effective between 3/255 and ~20/255, which has potential applications for the game. We also tried putting a sheet in front of the display to diffuse the light, which could look really nice if we built a frame to stretch it around.
Evaluation
For what we wanted to achieve, we certainly accomplished the overall concept of our design by having a playable game work with audio input on our LED matrix. Despite our accomplishments, many of our ideas weren’t realised in the final piece due to various reasons. We were not successful with implementing a lot of specific mechanics to our game such as wanting to have secret levels that would be activated when the player went a specific route as well as noise cancellation and non-linear jumping for a smoother experience. These mechanics weren’t so much of a priority as a usual game session per person would only be a couple of minutes meaning that they wouldn’t heavily alter the gameplay experience.
After all the time we spent getting the game working on the LEDs, there was little time to experiment with different brightness settings to add contrast to our game in certain areas, as well as creating new colour palettes optimised for the LEDs. As you can see in the video there were still some flickering issues with the LED display where random segments of LEDs would turn on for the length of a frame which was determined by changing the frame rate and seeing how it was affected. However, with the amount of flickering we had prior to this, it’s a miracle they weren’t at all worse, although so far we have been unsuccessful in finding a way to fully eliminate this problem.
Our last success, despite hearing how susceptible our components are to blowing, and even purchasing spares in anticipation of this, everything remained intact throughout the duration of the project.
Bibliography

Circle Collage
Project Description:
My project is a creative tool that allows the user to make a collage out of 4 images by selecting the shape and size of an area they would like to copy and where it should be placed. The possibilities are open to simple swaps between images or abstract compositions made from many shapes layered on top of each other.
Background Research:
I first was interested in exploring the possibilities for manipulating pixels using Processing after seeing the images produced from Kim Asendorf’s pixel sorting program a couple years ago, and seeing the code itself to try to understand how it works.
Additionally, I discovered the webcollage page near the beginning of starting work on my project. It presents a visual snapshot of the web by collecting images from various sources such as Flickr and Instagram and blending them all together on one canvas. I liked many of the ideas it utilized, especially collecting random images from the web. I knew I wanted the images in my program to be arranged in a particular way based off who was interacting with them. I also wanted the initial images to be able to be mixed beyond almost all recognition if the user interacts long enough with the program. I realized the input wasn’t so crucial for my program but I still needed a source for my images.

I also came across the work of Jessie Thatcher while researching my project, in particular her work with collages. I liked the way they were arranged, with similar textures overlapping. I hoped to be able to create a similar effect with the images produced from my program, with more emphasis on different shapes and patterns.

This interest in patterns lead me to the images produced from the Eyescream Project from the Facebook AI Research team, which were similar to other deep dream projects . I always think they are interesting as they give us a glimpse into how computers see the world and the associations it makes between images. The images themselves are stunning.
Intended outcomes:
I wanted the program to be very intuitive to use from the onset, as well as quite simple so that a few clicks could result in something very visually pleasing. I wanted the user to gain a quick understanding of what the program is capable of by just playing with it for a little while.

I hoped the user would be rewarded for continuous clicking by generating images such as the one below which is completely different from the source above. I wanted the process of interacting with my program to yield new ways of looking at a set of images, perhaps with the user not fully aware why they select certain areas to use repeatedly.

The Build:
During making the program one of the most difficult things was making a circle out of pixels. I experimented with various methods for accessing the pixel array so that I could move only certain areas but none of my methods were working. The solution was to check the location of every pixel and see if it was larger than the radius of the circle squared, only those that are smaller are used to make the shape.
Evaluation:
I feel my program does some of the things I set out to do better than expected and some not so well. I would of liked to explore the capabilities of the OpenCV library to further create patterns with the collages. Perhaps use the control P5 library for a more fluid GUI experience.

Crowley’s Ladder by Brian and Jack
Project Summary
Crowley’s Ladder is a browser-based game made in Javascript and p5-play. It is a platformer where the player, known Al Crowley, must reach the books at the end of the stages in order to beat them. In order to do so, they must dodge continuous attacks from a fire entity tasked with protecting the books, and be quick to beat the stage, or else the lava of the room will slowly rise and reach Al. Being an Apprentice, they know little magic, and can only use a summoning spell to summon up to ten *box for now, something else* to serve as platforms to help them reach their goal.
Targeted Audience and Intended Outcomes
The game is for everyone (intended outcomes ?), and in order to create it, we had to research how to use the p5-play library for Javascript. We used the programs Seashore and Adobe Photoshop to create the graphics for the game.
Background Research
We researched into occult history and art to find inspiration for the characters, narrative and setting of the game. The games protagonist is a parody of the historical occult figure Alistair Crowley who during the late 19th and early 20th centuries was key in a large-scale revival of occult religions and literature in Europe. In his prime Alistair Crowley gained a reputation as socialite of the elite with a sinister personality, however in our game we made our imitation of him a slacker apprentice who gets into trouble with higher-level demons. Al’s mentor in the game is Mephistopheles, whose name derives from the demonic antagonist in the Faust legend in German Folklore. The games title is based from the biblical story ‘Jacobs Ladder’, however rather than being a stairway to heaven like in the original story ‘Crowley’s Ladder’ in contrast is meant to humorously imply a stairway to hell. In order to find appropriate level designs for a game set The games logo of a golden pyramid with a human eye pointing upwards is inspired by the Illuminati in pop-culture, such as their alleged imagery on the American dollar bill, and the symbolism in Alistair Crowley’s cult the Golden Dawn. The Necronomicon , or the spell book that acts as the catalyst for the story, is based on the fiction grimoire of the same name in stories by H.P Lovecraft and Sam Raimi’s ‘Evil Dead’ films.
Design Process
Our main problems when it came to building the game was figuring out how to use the p5-library, and how to incorporate it along with Javascript, since we needed functions from both languages at the same time. This was solved by looking at the source code of the library and getting an in-depth understanding of its classes and functions.
For the story we went through different phases of idea’s that developed over time into the finalized game. We worked from a storyboard that presented each sequence of the narrative in order to give the level’s context and then we adapted the storyboard into frames that depict the prologue of the videogame.

In this first design we thought a casual aesthetic for the character would be a humorous contrast with the hellish setting, however this design didn’t really translate very well into a pixelated animation of a walk or jump cycle.

This design worked better because the animations were more straightforward and was more fitting for the setting.

sequence.
Initially we had nine frames for the prologue but frame 5 was taken out because it was unnecessary for the story.

Production Planning
Week 1:
Decide what programming language to use, and what extra libraries might be needed.
Week 2:
Started thinking of ideas for the game.
Week 3:
With game theme and genre in mind, started learning about the library to create basic outlines of the game.
Week 4:
Created basic background, got keyboard input and gravity mechanics working.
Week 5:
Created platforms, started working in level victory logic and implemented box creation mechanic.
Week 6:
Created menu buttons, started working on menu button input and flow; fixed some collision detection bugs.
Week 7:
Created character sprites, animated player movements, and started designing next levels.
Week 8:
Implemented more levels, started working on lava movement and enemy programming.
Week 9:
Finished implementing lava and enemy attack programming; created and implemented lava animation; did some playtesting and debugging.
Week 10:
Final polish, addition of introduction images and logo in the title screen.
Evaluation
During the production of our game, our main concern was to learn how to use Javascript and use the p5 library with it to create a simple platform game. Although we were unable to build all the features we originally planned, the main mechanic of movement and platform creation was implemented, which allowed us to make enough to complete the game to an acceptable level.
Links
Game link: http://igor.gold.ac.uk/~broch014/crowleysladder/index.html
Source code: http://igor.gold.ac.uk/~broch014/crowleysladder.zip
Bibliography
http://p5play.molleindustria.org/

Duppy Tree
Project description
/
Duppy Tree is an installation based on the concept of the Iroko bottle tree. The Iroko bottle tree can be found in West Africa and the Caribbean and its purpose is to keep bad spirits or bad vibes away. With what we have learned throughout the year, we were intrigued to see how these skills could alter and enhance the bottle tree concept, essentially creating an immersive experience for the user. We decided that we wanted to play with light so that as the user walks closer to the tree, the light gets brighter and as the user walks further away from the tree, the light gets dimmer. To achieve this, we used an ultrasonic proximity sensor via openFramework with the Arduino.
Audience
Although this installation appeals to artists and Interior designers, we believe that the tools we used for our project peak the interest of professionals amongst a vast range of fields ranging from: musicians, artists , interior designers, education purposes and set designers.
In this instance; the concept is an interpretation of the Iroko spirit bottle tree which is present in West African and Caribbean cultures. Therefore upon our artistic interpretation of the tree, it is neither a part of the past, on time or coterminous with European avant-garde modernist art, which follows the trajectory of a succession of styles (such as Fauvism, Cubism, Expressionism, Surrealism, Abstract Expressionism, etc.). This opens the possibility for a new artistic language.
Design Process/Prototyping
Timeline:
- In the first week, Alabo attended “Sound System Culture London Exhibition” and shared various ideas that linked to sound and culture. Next, upon playing around with different signal paths and add-ons on OpenFrameworks and Max-msp, we thought that it would be interesting to use sound alongside the light and proximity sensor so that now, as the user walks closer to the tree, the light gets brighter and a sound is triggered.
- The following week, we started doing more research and decided that the best option for this project was a Raspberry Pi because we wanted it to be a stand-alone object. The Arduino was a good option but we thought the Pi was a better option due to its portability.
- For the project we agreed to equally split the costs, retaining a maximum budget of £500 for equipment. However, we managed to spend less. So we pretty much had the structure of Alabo focusing predominantly on sound, whereas Eddie was focusing on the audio reactive visuals.
- At this point our idea was to create an interactive speaker
- By week 4 we pretty much had the majority of our equipment, which enabled us to experiment with the Raspberry Pi. The Raspberry Pi was quite a challenging prospect. Ideally, we wanted to spend at most 3 weeks on getting the sound to work with proximity. If we managed to get something working, then we wanted to go out and get user feedback. This ultimately would have allowed us to know if we had a lot of work to do or not. We went beyond 3 weeks struggling with getting the raspberry Pi setup.
- The first thing we struggled with was getting the raspberry Pi to work on a Raspbain image. This OS allows openFramework to run on it. So we had 2 Raspberry Pi’s, Raspberry Pi Number 1 and 2. Now we both struggled on installing the image onto an SD card. Eddie managed to install it onto number 2 using a tutorial and it worked really well. Alabo had issues with the SD card he had, but one of us at least managed to install something, (after a while his SD card also had a Raspbian image.) The next step was to install OpenFrameworks onto the Pi which proved to be a big challenge, as we have never done anything like this before. Both of us started looking through the OpenFrameworks website and learned that we must install Linux arm6 onto the Pi so we tried. For some reason everything we tried kept failing so Eddie decided to add a blog post on the openFrameworks website and we got a response from Arturo. He suggested installing the nightly build and openFrameworks started compiling projects. We even added a camera and did simple codes just to check if it worked or not and it did.
- By this time Alabo had bought the LiFX lights and had begun trying to get them to communicate with OpenFrameworks but to no avail, they proved to be difficult to work with between home and university, due to needing to sign into a Wifi connection to access the lights.
- So the next step was to link the Pi with the ultrasonic sensor we bought using a breadboard, two resistors, and various wires. We checked online for many ways to set up which was a struggle to find. We had to ensure that the ultrasonic sensor outputs 3.3 volts or else the Raspberry Pi dies. The hardest and most challenging part was to compile a project that executes the sensor so that we could detect the distance. This was a real struggle. We tried and searched for various solutions, however, we couldn’t find solutions so we spent weeks on trying to get the sensor working with the Pi but ended up having to give up on it and deciding to use the Arduino instead.
- Alabo then purchased the Philips hue lights, to prototype on max-msp and TouchOSC leading us to see that this idea to merge sound with light was easily achievable with other platforms.
Touch OSC Test
- Trying to get the ultrasonic sensor to work proved a struggle, and after searching through forums and various tutorials we came across Cormac and Johan’s technical research which helped us set up the ultrasonic sensor with openFrameworks.
Philips Hue Light Test
Gitlab Repository:
http://gitlab.doc.gold.ac.uk/cjoyc002/CreativeProjects2FinalProject
Problems with our Build
The main problems we encountered with our OpenFrameworks code were to do with sending a request to communicate with the LiFX lights. Despite trying different addons, getting assistance from our lecturers and searching through the forums, we still couldn’t find a solution. Upon discovering that python could also send a request to the lights, we decided to approach the task by using python and communicating between the two programs with OSC messages.
However, still unsatisfied with the approach, Alabo bought the Philips hue lights due to there being a lot more online support and even an ofx addon that enabled you to change parameters from within OpenFrameworks. Now comfortable with the Philips Hue lights and aware of the similarities between the Philips Hue and LiFX’s API, Alabo was able to implement the same principles with the LiFX in OpenFrameworks with success. The solution lied in having to add a Syscommand.cpp file to the project, which allows you to then call the cURL request to the light.
Now, the only problem left was that the lights were not responding to the proximity sensor due to the fact that the sensor was sending data too quick for the network to respond. To produce the results we were after, we had to increase the delay number in the Arduino sketch and add it after ‘Serial.write’
Evaluation
Reflecting back on the project we realised that what we were actually trying to create wasn’t one product, but several that could be interconnected products like the Apple HomeKit. The sound speaker idea was a good idea however we just fell into many potholes approaching the deadline. If we did manage to get the sensor working on time then we would have had the trouble of playing the music wirelessly and working with the Raspberry Pi was a tedious process . So the decision to switch to Arduino saved the project because there wasn’t any time. In regards to functionality, we noticed several improvements that could be made. Firstly, our results for the prototype were very restricted and linear. To improve the project, we would like to use more proximity sensors and chain them in a circle around the tree to get readings from all angles. To conclude, in the future, we can look into developing our own Voice activation feature and creating an Intelligent Assistant similar to the likes of: Siri, Amazon Echo, Google voice control and Lifx Jasper (we can even create a way to implement these ready-made systems as well).
References:
Raspberry Pi ultrasonic sensor – https://ninedof.wordpress.com/2013/07/16/rpi-hc-sr04-ultrasonic-sensor-mini-project/
Raspberry Pi Motion Sensor Tutorial – https://www.pubnub.com/blog/2015-06-16-building-a-raspberry-pi-motion-sensor-with-realtime-alerts/
Lifx API – http://api.developer.lifx.com/docs/set-state
Philips hue max msp OSC – https://www.youtube.com/watch?v=0D3tL4Wv9aQ
Arduino Ultrasonic Proximity Sensor HC-Sr04 – http://www.instructables.com/id/Arduino-HC-SR04-Ultrasonic-Sensor/
cURL Request in OpenFrameworks Code – https://github.com/wouterverweirder/devineClassOf2016/blob/master/src/SysCommand.h

CONVARTSATION
By Mateusz Janusz and Becky Johnson
PROJECT DESCRIPTION:
Fractals and recursion are one of the most exciting and fascinating part of mathematics and computer science. The fact that even those who find no pleasure whatsoever in mathematics can still find interest in their visuality and complexity shows that. Whether it is the naturally emerging fractals that occur in everyday life or the computer generated fractals such as the Mandelbrot Set or the Menger Sponge, fractals and recursion is the centre of focus of our project.
Our project is an art tool that takes the voice of the user and turns it into a piece of artwork using fractals and recursion. Using the analysis package of the Minim library, sound analysis is conducted on the audio input to alter the generated fractals. There is a choice of 5 fractal designs to be selected at any point of running the program, where the frequency and amplitude of the user’s voice alters the position, size and colour of the designs.
DOWNLOAD LINK FOR PROJECT
igor.doc.gold.ac.uk/~rjohn053/CreativeProjects/Convartsation.zip
DOCUMENTATION ON INTENDED AUDIENCE
Our idea caters to an audience who do not necessarily have an artistic ability on canvas or through digital technology. This lack of artistic ability on canvas may narrow down to those of a younger age, and the lack of digital artistic ability may focus on a senior generation who may not have a knowledge in digital mediums to create artwork.
DOCUMENTATION ON INTENDED OUTCOME
The outcome we wanted to achieve was for our user to be able to create a unique piece of artwork featuring fractals and recursion by the use of their voice. By being able to control the volume and pitch of their own voice, the user can have an element of decision over the development of the artwork. We wanted the user with this control to be able to see the effect their voice was having on the design by the change in colour, size and detail of the fractal, and the position of the design on the sketch based on these components of their voice.
The enthusiasm to achieve this outcome was fuelled by both of us not having an artistic ability on canvas, and the opportunity to create something that looked intricate and personally unique just from the use of a voice was an exciting outcome to try to achieve.
BACKGROUND RESEARCH
Firstly we researched into sound, as we felt like it was important when starting the project to make sure that we understood the fundamentals of this.
In the beginning, we were advised to use the Minim library[1] as it would be able to analyse the components of the audio input we wanted to be able. The main feature of our program involved taking the audio input from the mic of a laptop and analysing the amplitude and frequency to create a useful value, so checking that Minim would do what we needed was extremely important to us at this stage. As well as Minim, we also looked at Processing’s Sound library[2] just in case we felt like this would cater more to what we wanted. Overall, we felt that Minim seemed to be more advanced from the documentation, and decided to take the advice we were given at the start.
The main visual aspect of our project involved fractals and recursion so we extensively researched into these two topics. Firstly looking at the aspects of recursion and fractals in computer science and mathematics, and then looking at famous fractals to try and understand their algorithms to better our knowledge of recursion.
The famous fractals that we researched included the Barnsley Fern, the Mandelbrot set, the Menger Sponge, and the Sierpinski Triangle. This research into the algorithms was so that later on, once we were more comfortable using FFT and recursion, we could try to implement these fractals and manipulate them with the values obtained from analysing the audio input with FFT.
PERSONAS
Name: Ken Headford
Age: 67
Location: Kent
Occupation: Retired
Ken is now a retired senior citizen and has taken up various hobbies in his retirement, such as oil painting, golfing, bird watching, and walking with a local group.
Ken has not got much experience with digital technology. He and his wife own a shared desktop computer, and he uses it to check his emails, and to research interests and hobbies of his. He tries to avoid using the computer as much as possible, as he finds that he does not understand various components of searching online, how the computer works, or how to fix the Internet connection when it has disconnected.
Ken enjoys artwork, but has never branched into the digital side of creating art. He loves oil painting, however he finds it frustrating for the amount of time it takes to paint certain pieces.
Name: Emily Rowe
Gender: Female
Age: 9
Location: Gloucestershire
Occupation: Primary School Student
Emily is a primary school student who enjoys going to the park with her friends and family, playing video games with her siblings, and her favourite subjects in school are Art and Maths.
Emily has showed an early interest in technology, from playing video games with her siblings on consoles such as a Game Cube, Playstation 3, and on the computer. Emily also has shown an interest in Art, but does not think her drawings are very good and is impatient because she wants to make good pieces of artwork.
She dislikes using pencils and pens, as she and her siblings are either very forgetful or messy and they lose or break all of them. Their Mother does not buy them paint anymore because it is expensive and from past experiences furniture has been ruined because of it.
WIRE FRAMES
After becoming enthusiastic about the idea for our project and researching the topics associated with it, we started to focus on the user centred design and functionality of the project.
We brainstormed how to change and alter between drawing different fractals. A couple of ideas that we had initially was creating a counter that would change the current fractal mode depending on the time passed, or so that the values of the amplitude and frequency would change the current fractal mode. However we felt that both options didn’t give the user the freedom to control the visuality of the sketch as much as our intended outcome was.
We also thought about the personas that we had created, and felt that the most simplistic user interface would be the most desired for them due to age or lack of technical knowledge. We felt that the focus for the user should be on what he or she would create, not on the appearance of the project’s user interface.
WEEKLY AIMS
THE BUILD
The first part of our build was getting the audio input from the speaker of a laptop and to be able to work with the values it produced. One of our problems was that the documentation for the Minim library did not seem very simple, and the examples for how to do this were very complex and did not conclude exactly what we needed. From this, we decided to look else where to research how to do this.
We found a video[3] https://vimeo.com/7586074 that described exactly what we needed to do in order to take the input from a mic and perform FFT on the input. We transcribed this video and this served as an extremely simple template for which we could test design ideas using FFT to influence the designs.
The template worked by creating and initialising the objects we needed for analysis, a Minim, AudioInput, and FFT object. The AudioInput object uses the “getLineIn method of the Minim object to get an AudioInput where the audio buffer is 256 samples long. Later on in the build of the project, we increased the sample size to 1024. This size needs to be a power of two because the FFT object needs a buffer size of this size.
After calling the forward function on the FFT object to mix the left and right channels of the Audio Input, a for loop is created to loop through the number of bands in the FFT object. The “getBand(i)” function returns the value of the current frequency bands amplitude. With this, we could start to use the values of this input in coercion with fractal images.
In creating the project, a lot of issues we had stemmed from trying to use recursion as it was very easy to create a base case that did not escape infinite recursion and would therefore crash Processing. Once we had grasped the logic of creating and manipulating a variable to definitely change to or past a certain value, this limited the amount of crashes we had.
At first, it was very difficult to incorporate recursion using the values from FFT. Our very first sketch successfully using the values of the audio input with FFT created recursive ellipses that rotated based on the value of the frequency. When trying to use the values of our FFT as our exit conditions in recursion, this became sufficiently more difficult. Our FFT values that were returned ranged tremendously, especially when multiplying the values by a large coefficient to be able to use the values in the fractal designs.
This was solved by trial and error of measuring each coefficient as a multiplier against these values. We tested our voices by ranging the amplitude and frequency and recorded the results to investigate the range of values in order to plot our exit conditions.
Creating this simple design was the starting point however to be able to create more complex fractal designs with a knowledge on creating exit conditions using the FFT for the recursion.
Our next design was adapted from a recursive tree example by Daniel Shiffman[4] , where the coordinates of the mouse alter the number of branches as well as their length. Instead we wanted the amplitude and frequency of the audio input to alter the size, length and number of branches. So instead, we created a condition for the branches to “start drawing” if the value of the amplitude of the left channel of the stereo input was over a certain amount, and if so, rotate these branches by the amplitude of the current frequency band. Both of these values were multiplied by larger coefficients to transform the values to a more useful value.
At this point, we had a conflict with the current design we had and the initial outcome we had wanted for the project. What we had wanted was for the values of the audio input to create a growing, slowly emerging fractal image. Instead, the fractal appeared with a “stamping” effect on the sketch, where it appeared as a singular fractal shape and differed by changing the length of branches and colour from the FFT values. We tried to fix this problem by using an example of a generative design[5] that imitated the development that we wanted.
After trying to fix this problem for a long time and because of the time constraints we had on the project, we decided to try for a simpler technique of making the fractal stamp in different positions in the sketch instead of starting in the same place each time.
A solution we found to this was by translating the entire design by the FFT values, it’s x coordinate by the sine of the current amplitude of the current frequency band, and it’s y coordinate by the cosine of the current FFT frequency band. This technique we implemented for our other fractal designs using varying coefficients. We decided at this point that although it was not the outcome we had wanted for the project, it was best to move on and to try create more of these recursive fractal sketches due to the time constraints we had.
Once we had created this branching fractal design with the movement we were happy with, we decided to make a prototype of the sketch at this point with the user design we had designed based upon our wireframes.
Once happy with this prototype, we decided it was best to continue working on our own fractal designs and implementations of famous fractals that we had created in our background research. Once we were happy with each, we would continue to add the designs to the prototype with the same functionality as the first.
Some of those which we had researched in the beginning were too difficult for us to incorporate using FFT analysis with. For example, we tried to use the values of the amplitude and frequency to alter the appearance of the Mandelbrot set. However, for us it was too difficult to manipulate the formula for plotting which numbers diverged to infinity or bounded, and we encountered a lot of problems with infinite recursion as a result. The values in the Mandelbrot set coupled with the values from the FFT resulted in being far too complicated for us to create something mildly visually satisfying so we moved onto other fractals that we had researched.
These included the Sierpinski Triangle and the Barnsley Fern. Through our research, the algorithms for these two fractals had been the easiest for us to understand. The Sierpinski Triangle is an example of a finite subdivision rule, where the shape is broken down into smaller pieces of itself. The Barnsley Fern is an example of an iterated function system, meaning it is composed of several contractive copies of itself that draws the points of itself closer and closer together for however many iterations.
One problem that we faced at this point when trying to implement all of our designs together was that our “button menu” that we had designed was being drawn over by the fractals. At this point, the code for the project was incredible messy, as we underestimated how difficult it would be to implement all of these drawing modes together instantly.
It was an incredibly tedious problem where simple things such as different colour modes being active, such as using RGB for the button menu and using HSB for colouring the fractals. This as well as different stroke, fill, pushMatrix, and popMatrix function calls, interfered with each other. This made the solution a case of reading carefully through the project and neatening up the code until there was no more interference between functions.
After adding all of our drawing modes together and completing the functionality of the user centred design, it was a case of trial and error to initiate the drawing to start above a specific frequency value for each fractal. This did not take a long time in the build, and was the finishing touches to our project.
EVALUATION OF PROJECT
In terms of what our intended outcome was for this project, there are a few things that we think were successful and a few that did match up to our original hopes.
For instance, we had wanted our fractals to “grow” so that the users could see exactly how their voice was altering the position and movement of the design. This issue we had had in the beginning of the build of our project, and due to time constraints and after many days of trying to achieve this effect, we realised it was not practical to continue trying in this instance. Instead, we tried to make up for this problem by keeping this “stamping” effect, but using the values of the FFT on the user’s voice to change the position of where the fractal stamped. We felt that although this was not want we had initially wanted, it still gave the user freedom to partially control the design and that was the main desired outcome of our project.
We felt that we had successfully achieved correlating the values of the amplitude and frequency to noticeable colour changes that the user would be see and control the colour from this. For example, the higher pitch or louder the voice, the stronger the hue and saturation of the colours, resulting in the reds, pinks, and bright blues.
We feel that the program does not give the user the extreme control that we had hoped for though. For example, the user does not know exactly what his or her voice will create in each drawing mode, and in the beginning that was one of the aspects that we were really enthusiastic about. We understand that this lack of control stems from our previous problem of not being able to design the fractals to grow with the person’s voice, and that this desired outcome outweighed our knowledge in programming so it was unrealistic for us to expect to be able to create it so easily. However, we are happy with the results that can be obtained from using the program because we feel that it can create unique and complex pieces of artwork, and that was one of our desired outcomes.
For both of us, our knowledge of sound analysis and Minim started as null, and after completing this project we feel our comfortability in this area has increased immensely. Although it is not perfect and in-sync with our original desired outcomes, we feel we have achieved a large percentage of the overall outcome we wanted.
CONVARTSATION ART GALLERY
BIBLIOGRAPHY
[1] Documentation for Minim library – http://code.compartmental.net/minim/
[2] Processing’s Sound library – https://processing.org/reference/libraries/sound/
[3] How To Use FFT and Minim Tutorial Video – https://vimeo.com/7586074
[4] Daniel Shiffman, Recursive Tree – https://processing.org/examples/tree.html
[5] Generative Design – https://github.com/generative-design/Code-Package-Processing-3.x/blob/master/01_P/P_2_2_4_01/P_2_2_4_01.pde
RESEARCH REFERENCES:
Fractals:
[] Nature Of Code, fractal and recursion research -http://natureofcode.com/book/chapter-8-fractals/ [] FractalFoundation – http://fractalfoundation.org/resources/what-are-fractals/Recursion:
[] Introduction to Programming in Java – http://introcs.cs.princeton.edu/java/23recursion/ [] Khan Academy, Recursive Algorithms – https://www.khanacademy.org/computing/computer-science/algorithms/recursive-algorithms/a/recursion [] Wikapedia, Recursion – https://en.wikipedia.org/wiki/Recursion_(computer_science)Minim:
[] Audio Visual Programming -http://sweb.cityu.edu.hk/sm1204/2012A/page20/index.html [] Minim documentation – http://code.compartmental.net/minim/javadoc/ddf/minim/analysis/FFT.html [] Processing Tutorials by Daniel Shiffman – https://www.youtube.com/user/shiffman [] Generative Drawing – http://www.generative-gestaltung.de/P_2_2_4_01

DIGITAL WINDCHIME- Bilal and Ed
PROJECT DESCRIPTION
With this project our goal was to explore the creative application of a piezo sensors hooked up to an Arduino board as a musical instrument. Traditionally found in children’s toys and fire alarms as cheap amplifiers piezo sensors take advantage of the piezoelectric effect, in which certain crystals (often quartz) can transform mechanical energy into an voltage and vice versa. Specifically were interested in finding a new and interesting way of using the output of these sensors to manipulate sound.
When looking for similar projects we found that the sensors were most commonly implemented in one of two ways: as a more traditional contact mic as a pickup on stringed instruments, or as a trigger for percussive instruments. Initially we chose to experiment with the former, but struggled to find an innovative way to interact with our device. Eventually, and largely by chance, we realised that when left to hang in the air, the piezo sensors were sensitive enough to be triggered by even gentle wind. From here we began to develop our final piece, an electronic ‘wind-chime’ that would hang and generate sounds from its environment.
——-> CODE DOWNLOAD <——-
INTENDED TARGET AUDIENCE
Before we set out to actually start with designs of our idea we wanted to make sure that we had a specific target audience in mind. We decided that our product will appeal to the more creative and alternative minded people. These people fit into a small bracket when compared to the more mainstream, pop cultured, socially normed people. We did not think that age plays a factor due to the fact age is not a boundary to how creative one is. We believe that even though this specific target audience is rather niche, it is actually densely passionate full of people who are looking for new, and innovative creations to stand out and fulfil their own creative flair. We created some personas to help give a more vivid representation of individuals in our target audience.
SAM
Age – 21
Gender – Male
Interests – Experimenting with sounds and producing Music using alternative methods.
Location – Brighton
Sam is an aspiring producer, however he does not want to be mainstream and use sounds that have already been regurgitated via loops on common programmes. He wants to produce sounds using alternative methods like household objects. He believes that doing this will make him stand out and ultimately boost his profile. He is currently working on a project where he can use wool strings tied across an empty kitchen draw to create a heavy bass guitar sound with the help of an arduino and a computer programme to generate the sound. He aims to create more sounds to use with wacky objects to create obscure sounds.
SOPHIE
Age – 20
Gender – Female
Interests – Interior Designing
Location – Camden
Sophie is an enthusiastic and ambitious interior designer. Her outlook on her work is somewhat different to the conventional work. She works in a specific, alternative style which includes using unusual finds to create the ambience. This includes objects such as electric plants which change colour with movement detection. She is always in search of common household objects that have an alternative use or function. We believe an item like the one we are creating will cater to her needs because our chime is a twist on a decorative piece in the sense that it is programmed.
INTENDED OUTCOME
Our ideal outcome would be to create an alternative twist on a wind chime. We want to use an arduino to control piezo sensors that when interacted with, create a gritty sound. The image below is a drawn design of what we expect the initial base outcome of our idea to look like.
BACKGROUND RESEARCH (Inc. bibliography)
Some background research we took included looking at many websites for inspiration and aid on creating our idea.
An example of a website website we used frequently was www.instructables.com.It is a website specialising in user-created and uploaded do-it-yourself projects, which other users can comment on and rate for quality. It consists of many projects using the arduino. This helped us to see what equipment is necessary for our project and it also helped us create an idea that people had not really attempted before.
Another website we frequently visited was https://www.arduino.cc It is an open-source electronic prototyping platform allowing to create interactive electronic objects. The website was a big help when we were confused of how to use it. This is the website that we also chose to purchase the arduino.
Some more research we looked into was at the minim library. Prior to the start of the project we had only dipped and dabbed into the library, however to create the sounds we wanted we had to research into it ourselves. We did this through looking at the minim website: http://code.compartmental.net/minim/ which helped us with understanding its functions and how to use it.
WIREFRAMING/ PROTOTYPING
We created these initial designs to help us choose what we thought was best. It consists of the ideas we both thought were good and this allowed us to put into perspective whether this was a realistic achievement. One thing we noticed when putting together the base of the Arduino was, that the piezo sensors are very sensitive, so sensitive that when you blow onto them from a distance, they still react and create the sound. We used this new found knowledge to attempt to create something more unusual than the typical ‘drum midi’ created using the piezo.
The image below is the Arduino that we purchased with the piezo sensors attached.
We created this prototype as an example of how we want the design to look like.
DESIGN PROCESS
We created a video so we could display the stages we went through in the production of the actual arduino. This can be seen in the video below.
The image below is a document of what we expect of the code. We are using two programmes, Arduino IDE and Processing.
PLANNING
We had to ensure that we kept to our deadlines and were on time with each other. We did this by creating a table and filling it in with what we aim for the week to be done and if we achieved this. Click the link >> TODO to see the file.
PROBLEMS WE ENCOUNTERED
Some problems that we encountered included using the minim library. We both had substantial knowledge of the library but we were not confident with implementing code to alter the notes played. We wanted to add effects such as bitcrusher. To overcome this we ensured that one of our aims for the week was to look at the minim library web page and learn more about it. We also made it our aim to create some code displaying that the effects work and therefore aid us with implementing the code to the actual project.
Another problem we encountered was getting an analogue signal from the Arduino IDE to processing over serial. After much trial and error we decided to find an alternate solution, instead sending a message from the Arduino to processing every time a sensor’s reading breached a predetermined threshold.
A big problem we encountered near the start of the project was that the piezo sensors were very sensitive and you didn’t need to even touch for it to be triggered. Instead of panicking and go buy new sensors, we used this as the basis for our wind-chime idea.
EVALUATION
Personally we believe that the project was a half success. We are satisfied with the quality of the code as the sound is exactly how we wanted it. We aimed for the juxtaposition of the nature around, versus the technical, almost alien sounding chime, which I believe we achieved well. The sounds produced are rather obscure and would cater to the sound orientated creative people in our target audience.
The aspect we believe let us down a bit is the appearance of the chime. We were rather tight on time and had to deal with the resources available to us. It is a fully functional product which really works well, however with a bit more time we believe we could perfect it’s exterior look.
One feature of the chime that we are very happy with is the LED lights. They react to the sounds create and we believe it gives the chime that extra unique feature and appeal. We aimed to use the lights so that it could cater to our creative target audience and we know this was a good thing to include.
The fact that the force of the wind can effect the piezo reaction was something we suddenly found out. Finding this fact, which was almost an accident allowed to create a more, in our opinion, creative item. As stated in our intended target audience, these people are always in search for new, innovative creations, which we have created.
Our biggest disappointment was the inability to send an analogue signal from the Arduino to processing, and having to compromise by using random values when the sensors were triggered. Given more time we would have like to either find a way to model the effects parameters in a more natural, less random way according to the size of the sensor reading; or, find a way to continuously send the sensors readings as discrete values down the serial and then map these values to use as the parameters.

Virtual simulation of tactile sound
submitted link to my project on Igor: download
Project description:
The initial idea of the project was to create a compiled simulation that initialises three senses simultaneously; hearing, sight, and touch. The simulation would present the user with a sample, while the graphics represented the FFT analysis used to create different stimulants as the audio sample progressed. As the user sees these graphics at play, they would place their fingers inside the holes of a machine. These contain sensors that vibrate with the FFT analysis, and each finger experience a different vibration as each finger of their hand represents a different frequency.
Audience: The intended audience for this project are music-enthusiasts interested in visual art that represents sound with tactile elements.
Ideal scenario & story board:
Story board shows a user interacting with the machine. The user places their hands inside the holes where the solenoids are placed. The user then experiences a visual simulation on a project board of what the machine is doing. Finally as the sample plays, the solenoids vibrate causing movement on the finger tips.
Design Process:
The design of the project came from a conceptual art project called “amazing tangible media” by MIT. To achieve the effects used by MIT, I needed to understand the basics of FFT in process to get graphical variables to move based on a frequency spectrum. For this, I used processing as the references and code examples were comprehendible to understand. My first target was to research on finding quantised frequency amplitude from a sound/decibel, and how to split quantised frequencies using FFT.
This is the first sketch version of my project demonstrating my understanding of FFT:
Link to my first sketch: download
This version also contained a library called ControlP5, which I used to experiment on manipulation variables in real time such as the speed of the bars, height, and sensitivity. At the early stages, the design was not a concern as I was more focused with the functionality of the concept.
After the first sketch, I began creating hand-drawn designs of what I imagined the machine would physically look like.
first design:
second design
These designs helped me plan a better simulation with the features intended for tactile sound, as well as what the simulation should emulate. While designing these, I thought it would be too typical if the solenoids were represented by bands/bars in an FFT spectrum. So for the simulation, I came up with the idea of having the solenoids represented by small clip-art speakers called “finger units”. This gives better control over how each frequency is manipulated and analysed, for example with beat detection.
sketch of final simulation:
Problems that occurred:
I was unable to find a solution to get the MoogFilter class to operate as intended on a sample. When I tried to implement the class, it would not apply a low-pass filter to the sample. No matter how many code examples I analysed, I could not get it to work with my code structure. Due to this, the idea of using filters had to be scrapped.
There was also a problem aligning the finger units. I could not find a way to iterate over the finger units to create a diamond alignment. As a last resort, some of the finger units were hard-coded (outside the for-loop) at specific coordinates on screen to get the shape I wanted them aligned in.
Another issue I had with the finger units was making them respond to the mouse hovering over them by turning red. The idea of the finger units was for them to represent different frequencies in the sample, with the movement occurring at the user’s fingertips. However, I wanted more control over the finger units, such as being able to stop one individually in real time. Doing so can result in more combinations for the user to experience if a particular solenoid is deactivated and reactivated during the sample, as well as more conditionals for the graphics to respond to. This was difficult to implement because the if-conditionals for the mouse hovering over them would not respond, and the state of a finger unit would not change to a highlighted image; they were either all activated on mouse click or nothing functioned at all. The solution was to manually find the radius of the finger units and create two boolean checks: one for when the mouse hovers over them (highlighted) and one for when they are clicked in that area (deactivated).
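A minimal Processing sketch of that radius-based hover and click logic; the FingerUnit class name, position and radius here are placeholders, not the submitted code:

FingerUnit unit;

void setup() {
  size(400, 400);
  unit = new FingerUnit(200, 200, 40);
}

void draw() {
  background(0);
  unit.update();
  unit.display();
}

void mousePressed() {
  // boolean check 2: a click while hovering toggles the unit on/off (deactivated)
  if (unit.hovered) unit.active = !unit.active;
}

class FingerUnit {
  float x, y, r;
  boolean active = true;
  boolean hovered = false;

  FingerUnit(float x, float y, float r) {
    this.x = x;
    this.y = y;
    this.r = r;
  }

  void update() {
    // boolean check 1: is the mouse within the unit's radius? (highlighted)
    hovered = dist(mouseX, mouseY, x, y) < r;
  }

  void display() {
    if (!active) fill(120);            // deactivated: grey
    else if (hovered) fill(255, 0, 0); // highlighted: red
    else fill(255);
    ellipse(x, y, r * 2, r * 2);
  }
}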
Production notes from my computer notepad:
- LED lights.
- a human ear.
- One single solenoid moving up and down.
- a sphere (maybe mine bomb).
- orbiting spheres.
- draw the full spectrum alongside the linear averages
- since linear averages group equal numbers of adjacent frequency bands
- we can simply precalculate how many pixels wide each average’s rectangle should be.
functionality: solenoids stop at mouse click.
Evaluation:
I think the overall concept of the work was proficient in combining the audio aspects of the project with visual stimuli that incorporated the conceptual functions of touch. I feel, though, that the practical work could have gone further in taking advantage of manipulating graphics through changes in variable values. Some of the targets in my to-do list, such as making the finger units respond to different frequencies, were not completed.
References:
MIT amazing technology- tangible media: https://www.youtube.com/watch?v=lvtfD_rJ2hE
Inspiration for unusual designs of the machine: http://www.demilked.com/unusual-cool-speakers/

Meet the Watsons
“Meet the Watsons” is a set of four projection-mapped sculptures that discuss and deconstruct a Twitter account live in front of an audience, looking to make you question what you put online, who can access it, and the possible tone of your communications. I wanted to expand upon a subject that I looked at previously with another piece, ‘You probably live in Horsham’ [1], specifically exploring the security and availability of our metadata [2] and what can be derived from looking at it in detail. Asking, in a very humanised manner: do you know what you put online? If so, do you think it should be public?
Accessibility was a key thought with this piece, as I wanted the Watsons to be accessible to all, including the hugely differing demographics of Twitter users. Therefore, I had to find an outlet for my findings that was all-inclusive. For this I turned to creating physical representations of humans as a way of voicing my own concern; they provide a non-threatening, almost homely reminder to the user, guiding them through each step of what could be gathered by looking at the tweets. It also removes both the reading and language barrier (as the Watsons can speak other languages) seen in “You probably live in Horsham”. In addition, the human face helps users interact more freely with the piece, connecting somewhat to the overly cute and perfect personas of the Watsons, something specifically considered in order to create something relatable and kind. I used what I would describe as a representation of corporate America to help form the personality of the Watsons, choosing thick American accents and a typical white middle-class family as the template, something I find to be very typical of the capitalist sector. This cute image, however, is juxtaposed by the thick, dark, harrowing shadows cast behind the piece, which help represent the true, much more sinister side behind the ‘cute’ image.
The Watsons comprise six parts: a set of four sculptures, a projection keystone application, a language analysis system, text-to-speech, a scripting engine and a front-end system allowing the user to interact with the piece. All of these were carefully created to make the interaction between the user and the piece seamless and incredibly simple, something essential in a gallery situation. However, it was key that the simplicity should never compromise the detail of the information. Therefore, early on I decided not to use text analysis APIs such as the IBM Watson platform, which I have used before, and instead create a completely bespoke system designed specifically to analyse tweets in the given context. This was a long, laborious process but yielded successful results using a mix of my own technology, a few research papers from institutions including Stanford, and a few PHP wrappers by Ian Barber, a Developer Advocate for Google+.
Above: The process of creating the Watsons, left: inner structure, middle: rough shape, end: finished sculpture.
Sculptures
For the sculptures, I chose porcelain as the material as this much finer clay has surprisingly strong properties but also leaves a bright, flawless white matt finish once fired, making the material perfect for projecting as it would disperse the bright light perfectly. All four statues were created in ~12 hours non-stop as leaving gaps between the creation of the pieces could lead to inconsistent drying. This could be disastrous as differing levels of moisture in the clay could result in the pieces exploding once fired in the kiln. Thankfully this seemed to work as once fired the pieces came out without any issues whatsoever.
The level of detail in the sculptures varies according to the amount of animation projected onto them. For example, the boy seen above has both his eyes and mouth projected simultaneously alongside a full face painting. As this could potentially alter the face shape quite significantly, his eyes and mouth have been smoothed, allowing the projection to dictate the shadows and highlights of his face and giving me more freedom in the look of the projection. The baby, however, has minimal animation, so both its eyes and mouth are fully formed.
I had three main inspirations when forming the look of the Watsons. One was Julian Opie’s large portrait sculptures [8] (left image), which have an amazing smoothed aesthetic, rather suited to projection in fact. Another is probably one of my earliest memories of projection mapping, which unusually was displayed during a Lady Gaga concert (2011): the projection-mapped face dubbed “Mother GOAT” (right image) was a fantastic example of how projection mapping, when done correctly, can be incredibly realistic, and was likely the precursor to my idea of the Watsons. It also helped me form the faces of the Watsons so the projection mapping was as natural-looking as possible. I believe “Mother GOAT” was created by Nick Knight; however, the exact details of its creator cannot be found.
Technology
As the technology behind the Watsons is quite substantial I will explain it in the order as executed by the piece when a user interacts with the system.
1. Front end system
To initiate the interaction with the Watsons, the user goes to the site joe.ac on their phone, which redirects them to a portal allowing their name, Twitter username and pronoun to be inputted. When submitted, their phone becomes a ticket, sending their username for analysis and placing them in a queue. Once analysed (after ~20 seconds) and at the front of the queue, the Watsons begin talking to them, with their phone acting as a live transcript showing exactly what the Watsons are saying. This system is programmed in Javascript, using AJAX for communication with the database and PHP to handle the caching of the script etc. The following happens during the analysis stage.
2. The analysis engine
When the user engages the front-end system, the analysis engine follows a simple but process-heavy structure (note: it typically takes around 1 second for a tweet to be analysed in its entirety). This process is completely programmed and executed in PHP:
- Firstly, take the username of the Twitter account (fed to the process by the front-end system and input by the viewer) and get the 10 most recent tweets using a PHP wrapper of the Twitter Fabric API [3].
- These tweets are analysed for embedded geotags or specific place names; this data is extracted and the frequency of locations studied to guess where the user likely lives.
- After this, the tweets are fed through a process that looks at usernames mentioned in the tweets, the user description and the tweet content to try to find a link to a university or school. It looks for specific keywords and attempts to extract the name of the potential university alongside the user’s role, e.g. student or lecturer.
- Next, the full tweet strings have their sentiment analysed to determine whether they are positive, negative or neutral. This gives a general understanding of how the content is viewed by the user and potentially reveals relations with specific users. The system includes James Hennessey’s [4] implementation of the naive Bayes algorithm to help detect sentiment. Before the algorithm is run, a list of noise words (e.g. and, a, of, me) is removed from the phrase to reduce the chance of skew.
- After the sentiment is analysed, keywords are extracted from the tweets by removing stop words (e.g. afterwards, or, and) compiled from a list by DarrenN on GitHub [5]. Once removed, we are left with the most important keywords within the tweets, allowing us to use the sentiment determined previously to gather an understanding of whether the user likes something or someone. E.g. the user hates ‘Southern Rail’ and ‘Strike’, giving us a good idea of what the user means within this tweet.
- Now that we have the keywords and sentiment of the tweet, we parse the entire unchanged string through a POS (part-of-speech) tagger. This system allows us to determine which words are the most important by labelling them with their grammatical categories, e.g. noun or verb, so we can very accurately determine which words are the most relevant in the tweet. This is matched against the keywords gathered in the previous step and, alongside the sentiment, is sent to the scripting engine.
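The analysis itself is written in PHP; purely as an illustration of the stop-word removal and sentiment idea described above, here is a rough Java sketch in which the word lists are tiny invented placeholders and a simple lexicon count stands in for the naive Bayes classifier used in the real system:

import java.util.*;

public class TweetKeywords {
  // tiny placeholder lists; the real system used DarrenN's stop-word list
  // and James Hennessey's naive Bayes implementation
  static Set<String> STOP = new HashSet<String>(Arrays.asList("and", "a", "of", "me", "or", "the"));
  static Set<String> NEGATIVE = new HashSet<String>(Arrays.asList("hate", "hates", "awful"));
  static Set<String> POSITIVE = new HashSet<String>(Arrays.asList("love", "great", "happy"));

  public static void main(String[] args) {
    String tweet = "the user hates Southern Rail and the strike";
    List<String> keywords = new ArrayList<String>();
    int score = 0;
    for (String w : tweet.toLowerCase().split("\\s+")) {
      if (STOP.contains(w)) continue;      // drop noise/stop words first
      keywords.add(w);                     // whatever remains is treated as a keyword
      if (NEGATIVE.contains(w)) score--;   // crude lexicon count standing in for naive Bayes
      if (POSITIVE.contains(w)) score++;
    }
    String sentiment = score < 0 ? "negative" : (score > 0 ? "positive" : "neutral");
    System.out.println(keywords + " -> " + sentiment);
  }
}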
3. The scripting engine
After a full analysis of the user’s profile has been undertaken, the data gathered is sent to the scripting engine. The script is generated dynamically; however, it follows a basic structure where applicable. This system is also programmed in PHP:
- The characters greet the audience, explaining who they are and what they do; this greeting is selected from ~455 different combinations.
- The Watsons begin to talk about where the user lives, pulling live geo-relevant data about the area around that location to provide contextual relevance (e.g. ‘oh he lives in Horsham, we often walk the dog in Horsham Park’). This is an attempt to make the user feel less at ease, hopefully making them listen more intently. The location data is gathered during the analysis engine stage, and the live contextually relevant data is kindly provided by GeckoLandmarks, who gave me an incredibly generous free licence.
- They then explore the links found to any universities.
- They then move on to the specific tweets analysed, looking for negatively or extremely positively skewed responses.
- After completing this, they thank the user for taking part before saying goodbye.
The entire conversation’s duration varies depending on the quality of the content; however, it typically lasts around one and a half minutes. The compiled script is then saved temporarily in a JSON format, and the mapping program is notified that the user can be accepted if they are at the front of the queue.
{"success":1,"script":[{"character":"Sarah","text":"Hey+Joe+it%27s+great+to+see+you","eyes":"l","plain_text":"Hey Joe it's great to see you","pause_count":0}]}
Above: an example of the JSON formatted script.
4. The Watsons projection application
Mapping system:
The program, which is written entirely in Processing and Java, has a very specific mapping system based on the now-defunct “Keystone” framework by David Bouchard from 2013 [9]. I took this framework, altered it to best suit the animations and multiple projections within my application, and used it to help map the faces created in Photoshop onto the sculptures. The textures were created by hand in Photoshop using photos of the finished porcelain sculptures. I took photos of all four at the same height and angle, then used Photoshop with photos of friends and family and royalty-free images of celebrities to compile a face that perfectly fit the features of each piece. This image would then be broken down into a face, eyes, teeth and mouth, which could be layered on top of each other and animated to make them move.
Animation:
After the application has loaded and had its textures mapped to the sculptures, the program enters a searching mode where it looks for completed scripts and then begins reading them. Once a user has been found, the program tells the user’s phone that the session has begun, and the script is synchronised between the two programs. The program then sends each line’s encoded string to the speech analysis server held at joemcalister.com.
“Hey+Joe+it%27s+great+to+see+you”
Above: an example of a URL-encoded string
The text-to-speech service is provided by Amazon’s IVONA service [6] and is streamed from their datacenter in Ireland to my server in Germany. Chunked encoding [7] is used during this process to reduce the latency between the audio being generated and played back to milliseconds, allowing the speech to flow smoothly. However, this means no duration is logged in the HTTP request; therefore, it is not as simple to determine when speech has ended. To solve this I created a class in Processing that uses Minim and an FFT to monitor the amplitude of the incoming streamed audio, looking for drops to 0.0 and logging them alongside the pause count found within the JSON-encoded script. This allows the program to accurately determine when the speech has ended and the next line of the script should be read out.
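A rough sketch of that amplitude-watching idea, assuming Processing with Minim; the file name, silence threshold and pause count are placeholders, and a simple buffer level check stands in for the FFT-based monitoring used in the production class:

import ddf.minim.*;

Minim minim;
AudioPlayer speech;
int silentFrames = 0;
int pausesHeard = 0;
int pausesExpected = 1;   // pause_count from the JSON script (placeholder)

void setup() {
  size(200, 200);
  minim = new Minim(this);
  speech = minim.loadFile("speech-stream.mp3");   // placeholder for the streamed IVONA audio
  speech.play();
}

void draw() {
  // treat a run of near-zero amplitude frames as a pause in the speech
  if (speech.mix.level() < 0.001) silentFrames++;
  else silentFrames = 0;

  if (silentFrames == 30) {                       // roughly half a second of silence at 60fps
    pausesHeard++;
    if (pausesHeard > pausesExpected) {
      println("line finished, move to the next script line");
    }
  }
}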
The application simply reads out the entire script using this process and once at the end a termination signal is sent to the user’s phone sending a goodbye message and the search process restarts looking for the next user.
Conclusion
I’m incredibly happy with the response the Watsons received during the Symbiosis exhibition; nearly 300 people interacted with the piece over the two days, and many more photos and selfies with the Watsons were taken. In particular, I was pleased with the shock at the accuracy and regularity of the information quoted by the Watsons, with many people since remarking that they will make their Twitter account private from now on. As well as this, despite the huge overwhelming crowd that surrounded the Watsons, the mapping application never faltered. It only failed interaction-wise three times: once because a Twitter rate limit was exceeded (after 30 requests in less than 15 minutes!), a second time when Eduroam disconnected, and a third when IGOR (the departmental server on which the Watsons were hosted) had a small timeout issue. None of these were due to my software, showing that the fail-safes built into the program worked perfectly. This was a big achievement for me, as I had to largely predict the demand the Watsons would see, and my estimations were greatly under the final figure.
Therefore, after talking to many people experiencing the piece, I believe my core message of realising the intimacy of what we put online was heard, and more importantly heard in a pleasant and understanding way, with fewer shock tactics used and instead careful, considerate questions being proposed.
Images and video
Above: a video of the Watsons interacting with my Twitter account
Source code (the server and processing components are separated for convenience):
Click here
Sources
1 You probably live in Horsham: https://joemcalister.com/you-probably-live-in-horsham
2 Metadata: https://en.wikipedia.org/wiki/Metadata
3 Twitter’s Fabric API: https://get.fabric.io
4 James Hennessey: http://jwhennessey.com
5 DarrenN stopwords list: https://gist.github.com/DarrenN/802249
6 IVONA: https://www.ivona.com
7 Chunked encoding: https://en.wikipedia.org/wiki/Chunked_transfer_encoding
8 Julian Opie’s Sculptures: http://koreajoongangdaily.joins.com/news/article/Article.aspx?aid=2985071
9 David Bouchard’s Keystone: http://keystonep5.sourceforge.net

Oljud {Noise}
Oljud {Noise}
Project Overview
‘Oljud’, meaning ‘Noise’ in Swedish, is a collaboration between Terry Clark and Gustaf Svenungsson. The aim was to create an immersive, interactive audiovisual installation using a 3D camera and a Digital Audio Workstation to manipulate sounds, and then for the audio to affect the visuals.
Intended Audience
We believed at first that our audience would be someone interested in music technology or digital art, between their early teens and 30 years of age, and that this would be an installation piece that anyone could try out. However, as the project progressed we found that there is a bit of setup time required and a learning curve with the gestures/interactions, and thus the target audience changed to someone who would use this as part of their live setup and would spend time programming certain elements of their music for this specific set of interactions/gestures. Ideally they would be an Ableton user and comfortable exploring programming and setting up.
Research
Overview
We learnt what technologies were available to us, particularly wanting to use the Xbox Kinect to capture skeleton information and MIDI to transmit information to Logic. However, we knew that we would need to gather more information before we proceeded. Our first few weeks were productive as we began researching the different kinds of technologies, libraries and processes we would need to adopt in order to produce the final piece. We found that the Xbox Kinect offered the necessary motion capture elements we needed, including a point cloud and human body skeleton tracking information. Additionally, we found that other projects had also used Ableton in conjunction with Open Sound Control (OSC), which provided the ability to communicate over a wireless network. This enabled us to send skeleton information and other triggers between two computers on the same network, which meant that we could distribute the workload and overall processing power. This set the foundation of the installation, and we moved further into what data we could collect and how we wanted to display it.

The work was originally split into two halves: Terry worked on the visual part of the installation and Gustaf on the audio, as we both had previous experience in these areas and felt that we would naturally learn from each other as the project progressed. Splitting the work proved to be of huge benefit, as we were able to rapidly produce prototypes, create a soundtrack and refactor the master code as we went along. Although this was the divide, we found that we were constantly looking at and altering each other’s work, as it gave an outsider’s perspective on the way we both wrote code, and as we moved through the project we became more accustomed to understanding where our particular bugs were coming from. The flowchart below describes the class structure of the project and shows both computer and user interactions.

In order to set up our project you will need the following equipment and software/libraries installed.
- 2 Laptops
- Processing 2 for Computer 1 code (due to SimpleOpenNI)
- Processing 3 for Computer 2 code
- Xbox Kinect V1 – 1414
- Ableton Live
- LiveOsc (An Ableton Hack)
- oscP5 (processing library for sending and receiving OSC messages)
- SimpleOpenNI (a Processing library)
- Minim (A processing Library)
- Soundflower (for routing audio into Processing)
Audio
The initial concept involved MIDI messages being sent to Logic. After some successful prototyping and discussion in class we came across OSC, which for our purposes was better to use. This led to abandoning Logic in favour of Ableton Live because, while Logic is a great studio-oriented DAW, it drained a lot of resources from the computer and its more traditional, linear workflow proved cumbersome. Ableton, on the other hand, proved more useful on account of being faster and having a built-in workflow of organising clips within scenes (i.e. a clip is a music part and a scene is a music section of a piece). This made it easier to abstract the structure of the messages. OSC allowed us to be flexible about how much workload we would put on each computer.
Since the Kinect data and running the music software would be the two most processing-intensive tasks, it was decided that those would be split between the two computers: one running the Kinect, one running the DAW. Apart from that, OSC allowed us to run the other code wherever we needed. For example, if one computer is running beat detection on the audio, the result of the analysis can easily be sent via an OSC message to the second computer.
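As a minimal sketch of that kind of message passing, using the oscP5 library listed above (the address pattern, IP address and port below are placeholders, not our actual setup):

import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress computer2;

void setup() {
  oscP5 = new OscP5(this, 12000);                    // listen on port 12000
  computer2 = new NetAddress("192.168.0.2", 12001);  // placeholder IP/port of the other laptop
}

void draw() { }

// e.g. called when the beat detector on this machine fires
void sendBeat(float strength) {
  OscMessage msg = new OscMessage("/oljud/beat");    // placeholder address pattern
  msg.add(strength);
  oscP5.send(msg, computer2);
}

// receive messages sent from the other computer
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/oljud/beat")) {
    float strength = msg.get(0).floatValue();
    println("beat received: " + strength);
  }
}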
Writing OSC messages proved a lot more intuitive, since we could mix and match messages that had been pre-defined by the LiveOSC API with our own custom messages. Because of our schedule of trying to stay in sync and have a prototype running, Gustaf ended up writing a first draft of the particle system with two important extra features.
Visuals
Making our own mini projects to present our ideas allowed us both to understand the direction we wanted to go in. Terry created a Kinect visual and started by experimenting with the SimpleOpenNI library, along with checking out YouTube videos, blogs and books to find example code in order to learn more about how others tracked skeleton information from the Kinect, and this formed the basis of our project.

Some of the visual references we found on YouTube are listed below:
However, the video below captured our attention, and we decided to try to recreate a particle system visually whilst also creating an interactive musical piece.
The most important factors for the visuals were that:
- It needed to be scalable. We needed as much performance as we could get, since we knew we would be pushing Processing, so we tried a number of ways to reduce the number of particles being drawn, such as:
- if (frameRate < x) only create every 4th particle
- if (particles.size() > x) start deleting the last particle in the ArrayList (i.e. the oldest)
- if (millis() % 2 == 0) allow new particles to be created
However, we found that tweaking the depth at which the particles were drawn and making the lifespan decrease faster allowed for a higher framerate on screen. Because we wanted to attach the particle system to a point cloud, the code was modified so that the origin point of any given particle was defined by an array of PVectors, and our ArrayList of particles would, whenever asked, create one particle at each vector, as sketched below.
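A rough Processing sketch of that throttling and PVector-origin approach; the class name, particle cap and origin points here are placeholders rather than the project code:

ArrayList<Particle> particles = new ArrayList<Particle>();
PVector[] origins;   // stand-in for points skipped out of the Kinect point cloud

void setup() {
  size(640, 480, P3D);
  origins = new PVector[50];
  for (int i = 0; i < origins.length; i++) {
    origins[i] = new PVector(random(width), random(height), random(-200, 0));
  }
}

void draw() {
  background(0);
  // throttle particle creation when the sketch starts to struggle
  if (frameRate > 30 && frameCount % 2 == 0) {
    for (PVector o : origins) particles.add(new Particle(o));
  }
  // cap the list by removing the oldest particles first
  while (particles.size() > 2000) particles.remove(0);

  for (int i = particles.size() - 1; i >= 0; i--) {
    Particle p = particles.get(i);
    p.update();
    p.display();
    if (p.life <= 0) particles.remove(i);
  }
}

class Particle {
  PVector pos, vel;
  float life = 255;
  Particle(PVector origin) {
    pos = origin.copy();
    vel = PVector.random3D();
  }
  void update() {
    pos.add(vel);
    life -= 8;   // a shorter lifespan keeps the frame rate up
  }
  void display() {
    stroke(255, life);
    point(pos.x, pos.y, pos.z);
  }
}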
Gestures/Interactions
The gestures and interactions came about three-quarters of the way through the project, once we had completely understood how the Kinect provided hand, elbow and shoulder vectors. It was then about finding the distance between these joints, which activated certain functionality such as playing and changing the section, and then entering ‘Beat’ mode.
We explored the possibility of having hand gestures instead of body gestures, but found that the close proximity needed for the Kinect to correctly analyse the hand shape was a compromise we needed to decide upon. Furthermore, the extra processing required meant that other parts of the visuals would lag.
Thus we opted for a more obvious gesture selection as highlighted below:


Challenges
Throughout the project we needed to overcome a variety of challenges. For example, when trying to implement hand gestures we found that the user needed to be in close proximity to the Kinect in order for it to capture individual finger movements. This then created an issue of lag within the point cloud particle system due to overloading the graphics card.
Our user testing also gave us a deeper insight into our product and revealed that there was a bit of a learning curve when people tried to interact with it. This led us to change our target audience, and we now felt that maybe the UI was not a necessary component for an artist performing live. Mapping the vector information was another challenge, as we needed to test the maximum x, y and z distances that the Kinect could reach; we later decided to map this to the middle of the person in order to allow freer movement from the user.
Another slight issue, which we believe to be a fundamental problem with 3D tracking, was that the Kinect kept dropping the user. This made it difficult for the user to feel fully immersed, as their attention would then be on trying to reconnect.
While the particle system was straightforward to set up, we found that since it was being mapped to quite a few points, not overloading the computer required some tinkering (we ended up solving this by skipping points of the point cloud). Making the particle system look appealing also required a lot of tinkering: it needed to look busy and detailed (lots of particles being drawn) while remaining visually coherent and not just a mess.
The audio challenges were twofold: technical and “artistic”.
Figuring out what type of music the user would interact with went through numerous phases. A person who is not musically trained wouldn’t just be able to pick up and have fun with a theremin-like setup mapping x, y coordinates to pitch and volume. It turned out that giving the user any direct control over pitch demanded that they understood the music piece as a whole; once again, this did not fit with our aim of being intuitive, fun and immersive for people regardless of skill level. The second challenging aspect, once we had decided to let the user switch between sections and manipulate parameters within Ableton Live, was to have a piece of music that made sense to play around with. For example, when a musician brings his effects and pedals to perform at a concert, he won’t constantly alter the parameters of those effects between their minimum and maximum. There will be a very specific range of sounds that makes sense for different sections of different songs. We found trying to recreate that experience made the most sense.
The first iterations of the project had MIDI and Logic in mind. Logic, while a great-sounding piece of software, required far too much computing power to pull off what we wanted. We opted not to use MIDI on account of anything more complex than noteOn/noteOff requiring reference tables. Using the LiveOSC API and its easy-to-read documentation meant we could write code that itself read meaningfully, i.e.:
OscMessage msg = new OscMessage("/live/track/device/pan");
msg.add(theValue);
oscP5.send(msg, new NetAddress(targetIP, targetPort));
The difficulty then was with understanding what values different parameters took. Some accepted floats from 0–1, others integers between 0–127, while other, more rhythmically oriented parameters wanted a subdivision such as 1/4, 1/16, 1/32, 1/64, etc.
Having someone test the program while another person sat behind the screens proved to be very useful since the user can feel something is not responsive while you can clearly see the parameters moving up and down on your end.
Evaluation
We feel the project ended up as a mixed success. While we set out to craft a tool that allows “anyone” to have a meaningful interaction with music, it became clear to us that “anyone” doesn’t actually exist and that people have radically different expectations on what they can do and how they can interact with a piece of technology such as the Kinect.
After testing it with different people we found that people fundamentally had two different reactions:
- People who saw it as an audiovisual piece with which you could recreate the feeling of “dropping the bass” or “the dub-step breakdown”. These were normally the audience who could already relate to our aesthetics.
- Those that didn’t necessarily connect to our musical or visual choices directly but saw it as more of a high concept tool for purposes such as music therapy.
Because of our music backgrounds we were more interested in the piece being purely sonically oriented. This meant we decided to focus more on people who already had knowledge of electronic music and who already interact with music in some capacity. It’s not intended for “experts” or necessarily professionals, but for “hobbyists”. We think that aesthetically and musically we successfully completed what we set out to do, which was to have an interactive experience using technology with which we had no previous experience, while furthering our knowledge of vectors and data transferral.
However, we feel we could have done a much better job of making the piece easier to use in terms of calibration and gestures. The biggest issue with the software is that, in its current state, it is complicated to use. You need to be told what the gestures are, and even then they require too much training to internalise. We would have liked to spend more time fine-tuning the music and the gestures: combining, removing and making sure the gestures flowed better from section to section. We also feel that our Kinect connection is currently too unreliable, as it frequently loses tracking of the user. Another library, different programming software and possibly the Kinect v2 may help us fix this. We also wanted to make something more interesting with the visuals. We did set out to make a point cloud and draw a particle system on top of it; however, we feel that the visuals would need to be even more dynamic to hold the user’s attention. This could have been done by inserting more interactions with the FFT, such as further physics and colour manipulation.
References
Kinect v1 Skeleton
OSC & NetP5
PointCloud
LiveOSC
ParticleSystem = Part of Previous project, further alteration through advice in class
Making Things See: 3D vision with Kinect, Processing, Arduino and MakerBot

UN-REACTABLE
UN-REACTABLE
By George Sullivan and Leon Fedden
Abstract
Our project is an interactive installation using gesture and expression to explore soundscapes. We wanted to design an immersive environment where sound is manipulated through the user’s presence and the kinetic movement of ‘nodes’. We have created a soundscape for the user to adapt and explore in intuitive ways which, we hope, sounds and looks pleasing.
By using AR computer vision techniques we have built a system that gives us a positional data stream for each node present on our table. Using this data we have created different ways of interacting with Reason [1] to manipulate the sounds and textures present in our sonic landscape.
We built our own box in which to house the equipment and covered it with a transparent lid. Placing ‘nodes’ on top with AR code graphics facing down into the table, we are able to track the X and Y position of each node, as well as its angle relative to the camera, giving us a constant stream of positional data to work with. In our code we have used multiple libraries to make this possible, and rely on several different input streams being passed through our code and into Reason to create and shape the sounds heard around the room.
Target Audience & Intended Outcomes
Our original aim was to create an installation which offered someone of any musical background a platform to experiment with sound design. To make this interesting we needed our system to be responsive to the user’s gestures and thus inspire them to interact intuitively with the space. This offers an experience that is not exclusive but instead opens sound design to those with or without any previous knowledge. The control of each sound must be obvious when heard but implemented in creative ways, so we created different methods of interaction for different nodes and sounds, making an interesting piece which encourages experimentation. Although we initially aimed to create synthesis techniques with coherent actions, after research and experimentation with our system we decided that, in order to create an interesting piece which sounded rewarding, we would have to give the user control over higher-level audio processes rather than the gritty DSP side.
We spent a lot of time considering the control methods used in our project. After researching embodiment we were inspired by the idea of creating an interface which provided a seamless connection between the computer system, the user, and the environment surrounding them. Consequently, we wanted our project to conceal all technology within the box we built, and allow our users to interact with physical objects within the space around it. However, to succeed in creating this connection, not only was a tangible interface necessary, but also thoughtful and natural output from our program.
We researched other projects to see what methods were successful in similar pieces. Most notable was the Polyphonic Playground [2], which George visited at the start of term. After talking about it, we decided that although it was an impressive exhibit, the output sound from the system was not coherent with the interaction from the audience, especially when so many people were using it. The way the sounds worked together was where we thought it fell short, and so in order for our project to be sonically impressive we decided that we would need to put a lot of thought into the sound materials used. Another problem we saw with it was that the user’s interaction was limited essentially to multiple switches. Consequently we decided that our nodes should exhibit differences in interaction to remain interesting.
It is, of course, worth noting Reactable [3], a recent project that is not too dissimilar to our idea. Believe it or not, we were unaware of this when we initially started thinking about our design, but since discovering its parallels with our ideas, it has absolutely shaped some of our design decisions. A recurring criticism of Reactable is that it has a feature set which can be difficult to learn, especially without a background in music. Understandably, Reactable has a different audience, budget and time constraints; however, it highlighted a few important considerations for our build.
Leon went to a guest lecture in the Whitehead building by Dr. Nicholas Ward [4]. The lecture was about how movement should be considered in the design of musical instruments, and it partially shaped how we mapped our nodes to our sound output. After researching further into some of his work, we felt it really reinforced what George took from the Polyphonic Playground installation. For example, in a paper written for a NIME (New Interfaces for Musical Expression) conference, Ward explains the design process for his own ‘musical interface’ (The Twister) and discusses the importance of gesture design and sensible mapping, which was becoming a recurring theme in our ideas: “The number of useful gestures discovered represents the starting point for the subsequent development of a musical gesture vocabulary” [5]
Design Process and Build Commentary
We had our users, or target audience, integrated into the plan from around the time of the project proposal; it was then we elected to build a system that was high level enough for anyone – interested in sound design or not – to be able to make interesting sounds. After some conversations with one another we settled on a physical and digital form for the project. Having a shared vision, we set out to draw the components, how we envisaged users using them, and how the system would be manipulated to achieve our desired output.
Having the ideas on paper really helped to ensure that there were no discrepancies in our expectations of the project. The next task was to reductively identify the key components of the project. Once we had a list of components we subsequently allocated time to build mini-projects to create each item on the list.
This was the fun bit; we were still green to openFrameworks and C++ (and still are in many ways) and were opened up to the rather large ecosystem of addons and libraries that we might want to use. Aside from building the components as lego bricks that plug together into a bigger, cohesive model, the mini-projects served a second function: they were an objective method of rating each library’s effectiveness in providing the horsepower for some of the more complex computations. We also spent some time making classes or simple models which verified the sanity of our plans, checking and predicting approximate system dynamics, for example by making a ‘table-top’ sketch controlled via mouse and keyboard that outputs MIDI much like the final model would.
Next is a roughly chronologically ordered list of mini-projects and what they achieved. The names reflect the titles of the projects submitted in GitLab:
- ofxArucoExample: This was one of the first things we did, as the whole project hinged on being able to track objects of some description in the real, physical world. Initially there was a fruitless folly of a foray into ofxARToolkit. The library the addon wrapped at the time was most unfortunately not maintained and compilation was futile. However, the omnipresent Arturo was soon on deck to help with a little library he wrote called ofxAruco. This project was (or is) his example, which we used to ensure marker detection worked to a decent standard and was worth proceeding with. It is worth mentioning that, as of very recently, ARToolkit is back in development with a new API, and subsequently a new addon GitHub page [6] has been made – something worth keeping an eye on perhaps.
- MarkerTracking: This was the class that took the example and wrapped it up into a “deceptively simple” (Theodoros Papatheodorou) class that kept track of the AR markers, their position, rotation and whether they could be seen or not. Weeks of struggling with an inaccurate reading of rotation, and with quaternions and matrices, led to us waving the white flag and bringing out the gaffer tape with a cheap, hacky solution – sometimes it is easy to get caught up in the finer details and forget the overarching picture, so we pressed onwards.
- NodeExample1: A small project to model the tabletop and markers – essentially circles that could be dragged around using the mouse. This was then turned into a class to interface with the forthcoming midi-library explorations.
- MidiAttempt1: Here we worked through an example to explore ofxMidi [7] and get familiar with the API. The documentation was straightforward and the examples provided gave us enough to implement our own basic midi messages.
- MidiToReason: We then took that knowledge of ofxMidi and spent time working out how to route MIDI signals on a Mac from our program to Reason. As basic note-on and note-off messages had been covered, we now wanted to create our own software interface to allow control over parameters inside of Reason. This was where the documentation fell short, and we spent a long time figuring out how (and verifying that it was possible) to send these controller change messages. After researching how MIDI itself works, and digging deeper into ofxMidi, we were able to create our own MIDI channels and send controller data from our software (see the sketch after this list).
- About ten assorted ofxMaxim projects: A lot of these projects initially stemmed or were from Leon’s tutorials for his technical research. They all fall under the bracket of fairly low-level sound design, so they have been grouped together here. The real takeaway from these was that Maximilian was too low level for our users and means. The files are: saw wave with pitch, amplitude & ring modulation, stereo output, frequency modulation, and finally a combination of these processes.
- PS3EyeGrabber: This project took the openFrameworks add-on’s [8] functionality and wrapped it into a class that allowed for straightforward interfacing with the MarkerTracking class API. Essentially it copies the openFrameworks base class VideoGrabber API so the same functions can be called in the same place.
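The controller-change sending itself was done with ofxMidi in openFrameworks; purely as an illustration of the idea, here is a hedged Processing sketch using The MidiBus library, where the bus name, channel and CC number are placeholders rather than our actual routing:

import themidibus.*;

MidiBus midi;

void setup() {
  size(400, 400);
  // "IAC Bus 1" is a placeholder for whatever virtual MIDI bus is routed into Reason
  midi = new MidiBus(this, -1, "IAC Bus 1");
}

void draw() {
  // map a node's x position (here just the mouse) onto a controller value 0-127
  int value = int(map(mouseX, 0, width, 0, 127));
  midi.sendControllerChange(0, 74, value);   // channel 0, CC 74 as a placeholder parameter
}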
These mini-projects were key to refining our ideas. Often the usage of an add-on unearthed new possibilities and pathways for the project’s journey; one of the major realisations during this project was that Dr Mick Grierson’s Maximilian library [9] was too low level for what we had in mind sonically, and by extension for our target audience. It was here we decided to take the ofxMidi route into a Digital Audio Workstation (DAW). We went with Reason because Leon had the most experience with it.
After we felt we had reasonable examples of projects that covered most components necessary for the final build we began to piece them together. From here on we iteratively refined the final code base to improve performance, remove bugs and to add new features like selecting modes, function pointer arrays to simplify modes (each mode could then have its own function in an efficient manner) and other changes.
The second stage of our design process was the physical build of the box and nodes. For this we worked alongside Henry Clements, a 2nd year Design student at Goldsmiths. We brainstormed initial design ideas, focussing on building something that allowed us to optimise space for interaction and not limit the processes we wished to get out of our system. We also wanted to create something that was easily transportable and could be set up or taken apart easily, as well as allowing space for the webcam, computer/Raspberry Pi, and any other wires or parts required in the future. Together we built a prototype design and created some initial blueprints.
Once we settled on a final design, we experimented with the range of the webcam to figure out how large our build would need to be. It was important to provide enough space for the user, and furthermore a large range of pixel data. We also took into account the size of the nodes, and the amount we would be using. In conclusion we decided to build the box to be 90 x 90 x 90 cm, giving us enough depth to get a large surface to play with and space inside for the webcam (and anything else that needed to be hidden), while retaining a design that was still portable.
You can see in the images above: blueprint sketches for each piece of our box (centre & left), and a screen grab from the CAD software we used to get the wood cut using a laser cutter (right). We booked into a woodwork studio to have it cut, before measuring up all the dimensions for the lid.
To finish off, we added a stylish gold tint to our box and were ready to go. After spending a little time calibrating the camera and AR codes, we pieced all our code together and began designing sounds and interaction methods.
An Evaluation
Regarding problems and, in turn, our solutions, we were lucky not to have any major show-stopping issues over the duration of this project. It would not be true to say it went without a hitch, however; we have compiled a list of reflections that we have learnt from. Note that largely this list is for the reader who is not our senior; we are sure they are beyond such mistakes. All others, have a good laugh at our errors if you don’t fancy learning from them!
- Sometimes the code in the library just doesn’t work. Or, more frustratingly, it does work but it doesn’t give the expected values. Even more frustratingly, it might feel like one is a maths class away from actually being able to re-write the damn thing. The issue in question was obtaining the z-axis rotational value of any given marker. The code as standard returned approximately twenty to three hundred and forty degrees when a full rotation happened. We tried to get help on the forums and by talking to anyone who’d listen at university – start there – but in the end we had to settle for a (relatively) ugly hack, mapping the returned range to the desired range. In our opinion it is better to keep moving rather than get caught up in the minor details.
- Midi was also an issue for us – we had a lot of trouble routing Midi through Mac OSX to our desired DAW. Here we can only recommend ‘Google-fu’, and if you’re working in C++, reading the header files! That eventually got us through.
- Sometimes libraries don’t compile, and are broken or outdated beyond repair or beyond changing a few pre-processor directives. Don’t be afraid to throw in the towel and find an alternative or, better yet, depending on the scale, write the solution yourselves.
- If working with hardware, paint or glue specifically, ensure you test a corner before committing to the whole sheet. We eagerly tinted our acrylic sheet gold before realising the tint was too dark for effective AR tracking. The solution was subsequently spending hours removing glue residue, and the sheet has never got back to one hundred percent of the transparency it used to have.
- A major, really stupid mistake I – Leon – made borders on arrogance. Do yourself a favour and never make a major change of operating systems mid-way through an important computing project; having to change development ecosystems can really stifle progress whilst you wrap your head around new ways of doing things. I in particular spent time talking to different tutors and friends to weigh up whether I wanted a new Mac computer or a Linux system. Whilst I don’t regret the change ultimately – I have enjoyed the rather arcane art of some of the more masochistic Linux operating systems – I regret the timing and wish I had waited until the end of term.
- A more abstract error on our part was perhaps too much ambition. A personal view of ours is that it’s always worth being ambitious, because the more optimistically you begin a project the larger the results usually are, but perhaps we expected we would be able to deliver a completely professional project on such a small budget and in the time we set for ourselves. Time management of course played a part, and there is always room for improvement on that end, but I never thought we were being lazy. The solution to this is not to stifle ambition, but perhaps to be realistic in terms of the expected final project and remember you are only human!
Conclusion & Future Proposals
We are pleased with what we have achieved within this project. As we are using Reason, we can change and alter the source sounds appropriately. We managed to implement data mapping not just using X and Y positions, but also from each node’s speed, the distance between nodes, how many are within a short distance of each other, and rotation. It is possible to add new AR codes into our system provided you can identify them (use the drawData function in markerDetection.cpp). We believe that if it were an exhibition piece, it would be important to be able to tailor the output for a particular occasion, hence why we designed our software to be flexible. The video above is a short performance to demonstrate what is possible using our system. On a final note, we would like to add that it is difficult to get a feel for what our installation is capable of without spending some time experimenting with it.
GitLab Repository http://gitlab.doc.gold.ac.uk/lfedd001/CP2_Final_Project_Interactive_Synth.git
Speaking here as myself, Leon, I can talk about what I took from the project and, more pertinently, what it has inspired me to go on to do. After being introduced to the world of AR, I have taken an interest in the mechanics behind it and its applications.
Largely I would like to partake in more adventures in the blend of the digital and real world – mixed reality. HoloLens [10] from Microsoft is a great real-world example. Unfortunately, however, there are two issues here: firstly, it’s Microsoft, and I have little interest in working in that ecosystem; secondly, the level of polish of their project is likely beyond my scope of possible achievement. However, I like the idea of three-dimensional data representation and visualisation in the real world and would be interested in exploring avenues along those lines.
References
- Propellerheads.se. (2016). Create more music, record and produce with Reason | Propellerhead. [online] Available at: https://www.propellerheads.se/reason [Accessed 21 Apr. 2016].
- Studiopsk.com. (2016). Polyphonic Playground. [online] Available at: http://www.studiopsk.com/polyphonicplayground.html [Accessed 21 Apr. 2016].
- Technology, R. (2016). Reactable. [online] – Music Knowledge Technology. Available at: http://reactable.com/ [Accessed 21 Apr. 2016].
- DMARC | Digital Media and Arts Research Centre. (2016). Dr. Nicholas Ward. [online] Available at: http://www.dmarc.ie/people/academic-staff/nicholas-ward/ [Accessed 21 Apr. 2016].
- Ward, N. and Torre, G. (2014). Constraining Movement as a Basis for DMI Design and Performance. [online] NIME. Available at: http://www.nime.org/proceedings/2014/nime2014_404.pdf [Accessed 18 Apr. 2016].
- GitHub. (2016). naus3a/ofxArtool5. [online] Available at: https://github.com/naus3a/ofxArtool5 [Accessed 21 Apr. 2016].
- GitHub. (2016). danomatika/ofxMidi. [online] Available at: https://github.com/danomatika/ofxMidi [Accessed 21 Apr. 2016].
- GitHub. (2016). bakercp/ofxPS3EyeGrabber. [online] Available at: https://github.com/bakercp/ofxPS3EyeGrabber [Accessed 21 Apr. 2016].
- GitHub. (2016). micknoise/Maximilian. [online] Available at: https://github.com/micknoise/Maximilian [Accessed 21 Apr. 2016].
- Microsoft HoloLens. (2016). Microsoft HoloLens. [online] Available at: https://www.microsoft.com/microsoft-hololens/en-us [Accessed 21 Apr. 2016].