The project was an audiovisual application that combined hardware and software to generate sounds. Overall, it intended to explore how visual stimuli, colours and different forms of sound can be combined, as well as to show another approach to music, one that would be easily understood by the audience.
The project's software is a program that draws six lines in a circular motion, setting each of them to a different, random HSB colour. The hue of each line is interpreted by the program and turned into an integer between 0 and 360. All the lines' hues are then averaged and, depending on which of the 60-degree ranges the final value falls into, the program plays a specific note (if larger than 330 or smaller than 30, the colour is red and the program plays C1, and so on). The program relies on chance to decide the dominant colour of the lines and then sends the instructions to the Arduino.
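For illustration, the hue-averaging and note-selection logic amounts to something like the following (a plain C++ sketch; only the red → C1 pairing is stated above, so the other colour-to-note pairings here are assumptions):

```cpp
#include <numeric>
#include <string>
#include <vector>

// Average the hues of the six rings; each hue is a value in [0, 360).
float averageHue(const std::vector<float>& hues) {
    return std::accumulate(hues.begin(), hues.end(), 0.0f) / hues.size();
}

// Map the averaged hue into one of six 60-degree bands, each tied to a note.
// Red wraps around 0/360, so it covers hues above 330 or below 30 (played as C1,
// per the text); the remaining colour-to-note pairings are illustrative only.
std::string noteForHue(float hue) {
    if (hue >= 330 || hue < 30)  return "C1"; // red
    if (hue < 90)                return "D1"; // yellow
    if (hue < 150)               return "E1"; // green
    if (hue < 210)               return "F1"; // cyan
    if (hue < 270)               return "G1"; // blue
    return "A1";                              // magenta
}
```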
The hardware part is thus triggered by the averaged colour variable, and depending on which of the six colours is dominant the corresponding floppy drive plays a note. Each floppy drive plays only a certain note, and each note is associated with one of the six main HSB colour hues (yellow, red, cyan, magenta, green and blue).
The notes on the floppy drives are played using the speed of the stepper motor that moves the read/write head inside each drive. By changing the speed of the motor to a certain value, a specific note can be generated. The end goal was to attempt to represent colours through sound, as well as to explore the variety of ways in which audio can be represented.
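The underlying principle, as used by Moppy-style floppy music projects, is that stepping the drive's head at the frequency of the target note makes the drive produce that pitch; a rough sketch of the timing calculation (names and structure are assumptions, not the project's actual code):

```cpp
#include <cstdint>

// Stepping the drive's head at a note's frequency makes the drive "sing" that pitch.
// Microseconds between steps for a given pitch: one period of the note is
// 1/f seconds = 1e6/f microseconds.
uint32_t stepPeriodMicros(float noteFrequencyHz) {
    return static_cast<uint32_t>(1000000.0f / noteFrequencyHz);
}

// Example: A4 (440 Hz) gives a period of ~2272 microseconds; C1 (~32.7 Hz)
// gives ~30578 microseconds, i.e. a much slower, lower-pitched tick.
```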
The project started out wanting to experiment with sound and the variety of ways that sounds and notes can be created. The intended audience, from the beginning, was people without a deep knowledge of music theory, or even computing for that matter. I wanted to give viewers an opportunity to create sounds of their liking simply through experimentation and through easily comprehended outputs. From the beginning these sound outputs were intended to be ticks whose notes would differ depending on the material the dials were ticking against. I created prototypes for this in Processing to see how it would look visually, and I was also trying to find a way for the sound output to be independent, changing not because of the user's input but because of some other instance or obstacle.
I then concentrated on just one type of rotating input or output and experimented with a single stepper motor to hear what sound it made when ticking. At this stage the code was written just in the Arduino IDE.
After experimenting with the rotation of the motor and the Arduino, I thought about using that circular movement and experimented with sketches that used circular motion to create data visualisations. I looked at different ways circles have been used in music and came across a spectrogram that plotted frequencies as the distance of dots from the centre and amplitude as the brightness of those dots: the higher the amplitude the brighter the point, the lower the amplitude the darker the point. I used that to create a sketch that drew randomised points across a circle to see how the output would look, though without any frequency analysis. However, I still wanted an audio output within the project, and one that would to some extent follow a system. As a result, intending to experiment with colour, I created circular lines drawn in different randomised colours and decided to use them as the triggers for the audio output. For aesthetic reasons and simplicity I used the HSB colour model, as it displayed a more pleasing set of colours and made it easier to get the average colour by simply detecting where the hue lies between 0 and 360.
From then on I wrote my program in C++ using openFrameworks and continued experimenting with the physical audio outputs. The stepper motor was not enough, and I came across Sam1Am's Moppy software on GitHub, which apparently started a still-growing community that uses floppy drives to output specific notes and thus even play songs (preferably ones within the range that can be played on a drive). I used the instructions along with the software to hear what the actual output would sound like.
I experimented with the Moppy program and used its code as a reference to understand how the speed of the motor within the drives is changed to generate a specific note, and then tried producing those notes within my own program. However, I could not get the right notes, as I encountered many problems with moving the floppy drive backwards and forwards. I started with moving one, and whilst that worked from the Arduino IDE I could not manage to trigger all the floppy drives in real time from the program running in openFrameworks.
In the end the floppy drives did not trigger correctly in response to the colours, and instead either lagged or played all at once without the correct delays. Thus from the viewer's perspective not only were the colours generated randomly on the screen, but the physical audio output seemed chaotic as well.
Comments on the build
I did not come across many major problems when dealing with the code for the circular colour lines. Some smaller, more tedious ones involved getting the hue of each member of the rings class after it had been changed. Essentially, the randomness of each ring's colour was defined in setup, so when it got changed within the member functions I could not pick up the new hue. To solve this I used a pointer to each member of the class to access its updated hue and compute the overall average.
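A minimal sketch of that workaround, assuming a hypothetical Ring class whose hue changes at runtime (class and member names are illustrative):

```cpp
#include <vector>

// Hypothetical ring class: each ring updates its own hue inside its member functions.
class Ring {
public:
    float hue = 0;           // 0-360, changed at runtime by the ring itself
    void update();           // re-randomises hue, draws the ring, etc.
};

// Keeping pointers to the rings means the average always reads the *current* hue,
// not the value captured when the rings were created in setup().
float averageHue(const std::vector<Ring*>& rings) {
    float sum = 0;
    for (const Ring* r : rings) sum += r->hue;
    return rings.empty() ? 0 : sum / rings.size();
}
```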
Another problem I encountered when using openFrameworks and C++ was connecting to the Arduino. When I had previously used Processing, the connection and its documentation were more straightforward, whereas at the time it was hard to find up-to-date openFrameworks examples for connecting to the Arduino, as they varied depending on the version of openFrameworks and its libraries. Fortunately, the newest version of openFrameworks has a built-in Arduino class that can be used for the connection, provided the Arduino board has the StandardFirmata sketch (from the examples folder) uploaded onto it. However, even after sorting out the connection I still came across various problems when triggering the floppy-drive function from openFrameworks.
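For reference, the built-in ofArduino class is used roughly as follows, assuming StandardFirmata is on the board; the serial port and pin numbers are placeholders, not the project's actual values:

```cpp
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofArduino ard;
    bool arduinoReady = false;

    void setup() override {
        ard.connect("/dev/tty.usbmodem1411", 57600);              // placeholder port
        ofAddListener(ard.EInitialized, this, &ofApp::onArduinoReady);
    }

    // Called once Firmata reports the board is initialised.
    void onArduinoReady(const int& version) {
        ard.sendDigitalPinMode(2, ARD_OUTPUT);                     // e.g. floppy step pin
        ard.sendDigitalPinMode(3, ARD_OUTPUT);                     // e.g. floppy direction pin
        arduinoReady = true;
    }

    void update() override {
        ard.update();                                              // keep Firmata messages flowing
        if (arduinoReady) {
            ard.sendDigital(2, ARD_HIGH);                          // toggle a pin as a test
        }
    }
};
```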
Whilst the movement of the motors worked as intended from the Arduino IDE, when porting it to C++ I couldn't manage to achieve the right result. The main struggle was not only triggering them to move the right way, but also getting them to play the right note. I did manage to achieve a version similar to the movement I had written in the Arduino IDE; however, there remains much room for improvement in the floppy drives' output.
After going through all the prototyping and exploration of different approaches and triggers of sound, I was satisfied with the end concept of the project. The interface that showed the randomised colours seemed clear and simple enough. However, due to the difficulties I had getting the right notes playing on the floppy drives, it eventually came to sound like what I feared most – just randomised sound. Thus the end result was not the easily understood application it should have been.
Apart from that, once the right notes are playing, I would like to build upon the application to also use the saturation of the colours to trigger a seventh note. Further on, I want to find a more systematic generation of the colours so that the notes evolve into repetition, allowing the output to feel like a track in itself.
Link to gitlab repository that contains the final project as well as documentation files:
To me, To you
By Tom Pinsent
‘To me, To you’ is a collaboration between me and my computer. Inspired by a bombardment of screens, wires, beeps and boops alongside a lack of traditional, more physical forms of work within the digital art world, the electronic part of my piece is kept solely within the production. This leaves the final product independent of digital technology, un-reliant upon electricity to be experienced, although having been heavily influenced by it.
The paintings are created through a step-by-step process. It starts with a simple shape or form painted on to a canvas. My program detects this through a webcam and compares the shape to a pre-trained set of shapes, with the most similar projected back on to the canvas. I add these shapes to the painting, and the process repeats until the picture is done.
Audience and Intentions
Through my piece I wanted to convey an interplay between contemporary artistic practice and the traditional, through the counterposition of modern technology with the conventional, seemingly dated, medium of paint. Furthermore, I hoped to suggest a dialogue that is the collaboration between an artificial, digital artist, consisting of both hardware and software, and myself. I hoped to show viewers that contemporary art with a heavy use of technology does not have to be all "beeps" and "boops", and can in fact exist within the real world (as opposed to the ephemeral cyber-space realm), in a form that is well known and recognised throughout art history. My work might attract those with either professional or personal interests within art or computer science fields, as well as hopefully providing insight to members of the general public who may not have been exposed to these kinds of ideas before.
When starting with this project, I looked at a range of artists and works that dealt with both contemporary and traditional methods of creative practice, where neither aspect holds a greater importance than the other. New approaches are combined seamlessly with the conventional, bringing both styles into a new context, therefore producing art that feels fresh yet familiar.
Names include Yannick Jacquet, Joanie Lemercier and more.
I also conducted further technical research, by looking at computer vision, machine learning and neural network techniques.
Udacity Deep Learning course (mainly convolutional neural network section)
Self Organising Map tutorial
I initially thought this would be useful since a Kohonen Self Organising Map is an unsupervised machine learning algorithm widely used for grouping together images (and many other types of data) through similarities.
Design Process and Commentary
I began with an initial list of things my program needed to do:
1. Scan canvas and detect shapes
2. Analyse detected shapes and composition
3. Create new shapes and forms based on how well detected forms fit a desired aesthetic
4. Display new forms on to canvas
I then began exploring methods and techniques used in computer vision, machine learning and neural networks, since these are areas that are used within similar applications, such as image classification/face detection.
When scanning and detecting shapes, I wanted my setup to work like this, which would be the simplest and easiest during the actual use of the program (the painting of the canvases), and meant it would be suitable for live scenarios:
However, I found this was impractical for a few reasons. The camera should not detect anything apart from the desired shapes on the canvas – even with cropping, the distance combined with the noise from a low-quality webcam made for very bad shape detection. At this stage I did not know how computationally intensive the detection and generation of shapes was going to be, so I thought committing to a setup and build geared for quick and easy use might have led to problems later on, towards the end of completion, and a live scenario might not have been viable. There was also the issue of getting the same perspective between camera and projector, for an accurate representation of shapes from both my point of view and my program's.
Instead, I ended up holding the webcam up to the canvas and pointing it at the desired shape. This worked well, since with the web cam closer up, there is greater detail in the image and hence more accurate contour detection. I also found that a ‘1-to-1’ representation of forms on the canvas and displayed shapes was not necessary at all, which I will come to later.
In terms of the shape detection itself, I started with ofxOpenCv. I used a variety of test images (as the webcam was unavailable at this stage) and results were OK, but contained a lot of noise – rough, jagged edges were picked up all over the detected contours. I later made use of the smoothing and resampling functions within the ofPolyline class in openFrameworks, which made some improvements. In the end I switched to the ofxCv add-on, a package with greater functionality than ofxOpenCv, closer to the OpenCV library. Contours detected with this library were much better.
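Roughly how the ofxCv contour finder and the ofPolyline smoothing/resampling fit together (a sketch with placeholder thresholds and counts, not the exact values used in the piece):

```cpp
#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxCv::ContourFinder contourFinder;
    std::vector<ofPolyline> shapes;

    void setup() override {
        cam.setup(640, 480);
        contourFinder.setThreshold(128);        // placeholder binarisation threshold
        contourFinder.setMinAreaRadius(10);     // ignore tiny, noisy blobs
        contourFinder.setMaxAreaRadius(300);
    }

    void update() override {
        cam.update();
        if (!cam.isFrameNew()) return;
        contourFinder.findContours(cam);
        shapes.clear();
        for (std::size_t i = 0; i < contourFinder.size(); i++) {
            ofPolyline p = contourFinder.getPolyline(i);
            p = p.getSmoothed(5);               // reduce jagged edges
            p = p.getResampledByCount(64);      // fixed vertex count for later comparison
            shapes.push_back(p);
        }
    }
};
```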
2. and 3.
So I wanted to find the similarity between a painted shape and a set of test shapes, then generate shapes from there. Initially my test set was going to be a collection of images, textures and forms that fit some sort of aesthetic of my choosing. I thought the use of generative algorithms, such as L-systems, cellular automata or other rule-based systems, to generate my test set would be interesting, further increasing the influence of my computer on the work. I assumed this data would be in the form of .jpgs, so I implemented various functions for loading and analysing data from image files. Yet as experimentation continued, I was informed about the superformula, an algorithm for the generation of shapes, most of which resemble forms found throughout nature. The beauty of this algorithm is that such a range of shapes can be created just from the alteration of four individual variables. I also realised that my training set no longer needed to be in the form of images; I could just take vertex positions and feed them straight into my program. I edited and modified a version of the superformula in Processing, allowing me to create and save a training set.
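The superformula itself is compact; a plain C++ sketch of generating a shape's vertices directly (assuming a = b = 1, so four parameters – m, n1, n2 and n3 – control the form):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Radius at angle phi for the given superformula parameters (with a = b = 1).
float superformula(float phi, float m, float n1, float n2, float n3) {
    float t1 = std::pow(std::fabs(std::cos(m * phi / 4.0f)), n2);
    float t2 = std::pow(std::fabs(std::sin(m * phi / 4.0f)), n3);
    return std::pow(t1 + t2, -1.0f / n1);
}

// Sample the shape as a list of (x, y) vertices around the full circle,
// giving vertex positions that can be fed straight into the program.
std::vector<std::pair<float, float>> superShape(float m, float n1, float n2, float n3,
                                                int samples = 128, float scale = 100.0f) {
    std::vector<std::pair<float, float>> verts;
    for (int i = 0; i < samples; i++) {
        float phi = 2.0f * 3.14159265f * i / samples;
        float r = superformula(phi, m, n1, n2, n3) * scale;
        verts.push_back({ r * std::cos(phi), r * std::sin(phi) });
    }
    return verts;
}
```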
Finding a value for the similarity, or difference, of two shapes was very tricky, since there are many factors to take into account, such as size, orientation and location. After much research and testing, I settled on a method using a turning function, as described on page 5 (section 4.5) here:
Using this website as a reference along the way:
This function allows you to map the change of angle at the vertices along the perimeter of a shape onto a graph. It is then (relatively) easy to calculate a distance, or similarity, between two or more sets of points (for example, shape A's first angle change is compared to that of shape B and saved, with the same happening for every other vertex; a mean of these similarities is found at the end, and therefore a similarity between the two or more shapes).
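A sketch of that per-vertex comparison (plain C++; it assumes both shapes have already been resampled to the same vertex count, and the helper names are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Signed change of walking direction at vertex i of a closed polygon,
// wrapped into (-pi, pi].
float turnAt(const std::vector<Pt>& s, std::size_t i) {
    const float PI = 3.14159265f;
    const Pt& prev = s[(i + s.size() - 1) % s.size()];
    const Pt& cur  = s[i];
    const Pt& next = s[(i + 1) % s.size()];
    float d = std::atan2(next.y - cur.y, next.x - cur.x)
            - std::atan2(cur.y - prev.y, cur.x - prev.x);
    while (d >   PI) d -= 2 * PI;
    while (d <= -PI) d += 2 * PI;
    return d;
}

// Mean absolute difference of turning angles, vertex by vertex
// (shapes a and b must have the same number of vertices; 0 means identical turning).
float turningDistance(const std::vector<Pt>& a, const std::vector<Pt>& b) {
    float sum = 0;
    for (std::size_t i = 0; i < a.size(); i++) {
        sum += std::fabs(turnAt(a, i) - turnAt(b, i));
    }
    return a.empty() ? 0 : sum / a.size();
}
```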
So now I have a range of base training shapes, each with a similarity to an initially drawn shape. I then looked into clustering, so I could group together similar similarity values. However, after more research I found it was quite inefficient to implement most clustering algorithms on 1-dimensional data (my single similarity value), and instead looked to the Jenks natural breaks algorithm. This is used for splitting a set of data into sub-groups with similar values (so shapes with similarities 1, 7 and 15 would be in a different group to those with 150, 120 or 110, for example). After my data had been split into sub-groups, the most similar group, i.e. the one containing the lowest values, was used to generate images.
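Full Jenks natural breaks is a dynamic-programming routine; as a much-simplified stand-in (not the algorithm actually used here), splitting the sorted similarity values at their largest gaps conveys the same grouping idea:

```cpp
#include <algorithm>
#include <vector>

// Simplified stand-in for 1D grouping (not Jenks itself): sort the similarity values
// and split them at the (k - 1) largest gaps, so close values stay in the same group.
// Assumes k >= 1.
std::vector<std::vector<float>> splitAtLargestGaps(std::vector<float> values, int k) {
    std::sort(values.begin(), values.end());

    // Candidate cut positions are the gaps between consecutive sorted values.
    std::vector<std::size_t> cuts;
    for (std::size_t i = 1; i < values.size(); i++) cuts.push_back(i);
    std::sort(cuts.begin(), cuts.end(), [&](std::size_t a, std::size_t b) {
        return (values[a] - values[a - 1]) > (values[b] - values[b - 1]);
    });
    cuts.resize(std::min<std::size_t>(k - 1, cuts.size()));
    std::sort(cuts.begin(), cuts.end());

    // Cut the sorted values into groups; the first group holds the lowest
    // (i.e. most similar) values, the group later used for generation.
    std::vector<std::vector<float>> groups;
    std::size_t start = 0;
    for (std::size_t cut : cuts) {
        groups.emplace_back(values.begin() + start, values.begin() + cut);
        start = cut;
    }
    groups.emplace_back(values.begin() + start, values.end());
    return groups;
}
```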
However, for the actual generation this was all – the initially generated superformula shapes in the training set were just re-presented. I tried using the values fed into the superformula to generate the most similar shapes as a starting point, then slightly tweaking the values and regenerating new shapes. But I found that even after small tweaks the variation was too great and any sense of similarity was lost.
I would have liked to include some sort of composition calculation too, yet I found that within the time I had this was not possible. Instead, for the paintings produced I tried different methods of composition. One was my own choice of placement of generated shapes, the next was a layering of shapes on top of one another from most similar to least, and for the last I tried distorting the projection itself. As I was painting, I noticed how the projection distorted along with the distortion of the surface itself. This made me think about how my program had influence over the piece through the code, the software, but I am also painting over a projection. How can the hardware and its positioning affect the work?
Overall, I have learnt a lot from this project. With initially very little understanding of machine learning or computer vision techniques, I had to read about and jump over many small hurdles and problems.
I would have liked to have a more accurate measure of similarity, since currently my turning function does not work well with changes in rotation or lots of noise on the contours.
The generation could have been better too, since in the end it is almost just recycling the initial forms given to it (however, parallels can be seen between this and the age-old mystery of what true creation really is, if it really exists, or if we ourselves just recycle and reproduce ideas presented to us). I would also like to implement some sort of colour and composition training and generation. My code is also quite messy, with many functions included that weren't used in the final painting. This could have been avoided by better planning of exactly what functions and methods I was going to use, with a clear idea of exactly what I needed.
Furthermore, I believe the majority of interest within my project lies in the production, in the technology, leaving the traditional values that I wanted to place emphasis on severely lacking. I succeeded in my plan to push technology out of the limelight, yet it left the public with little understanding of the process of its creation. For similar projects in the future, I would need to spend at least as much time on the physical, non-electronic side of things as I did on the electronic if I would like them to hold the same importance, as well as finding a way to communicate the process more clearly through the work itself (in ways other than a short video documenting the process).
The Space that Brought Us Here
‘The Space that Brought Us Here’ is a piece that challenges audiences’ perception of their surroundings in a more intimate way, through physical engagement with the piece itself. Screens are used to show sections of the same space. The viewer is then able to reorientate the screens through movement, creating new-found compositions.
The idea behind ‘The Space that Brought Us Here’ was to challenge a viewer’s understanding of the space around them. As someone walks around a space they form an understanding of how they are operating within their surroundings and how other people are also interacting with the space. Individuals form their own perspectives and compositions of the space. The piece had six tablets suspended on steel wire within a wooden frame. A viewer was able to move the tablets up and down the wire and rotate them on it. The tablets took video from their rear cameras and displayed it on the screens. However, as all the tablets were connected to the same local area network, they could share the video with one another. A viewer could tap on a tablet’s screen to change the video shown to that of one of the other five tablets. This resulted in mismatched video, creating confusion but also intrigue about how that perspective had come about.
Intended outcomes & background research
The piece was based on a lot of background research I had done, particularly into the work of Olafur Eliasson. I was intrigued by his discussion of public space in ‘New York City Waterfalls’, where the waterfalls acted as an intervention in public space, allowing people to evaluate the space around them. Following on from this, Eliasson’s work on the glass façade of the Harpa building also demonstrates this through its “three-dimensional quasi-brick structure” that creates “fivefold symmetry”. The sections create shifts in their appearance and colour according to the people in the building and the environment; looking through the façade distorts the view, creating new-found compositions. I also looked at Jeppe Hein’s work ‘Please Touch The Art’, where he created a mirror labyrinth out of a spiral of mirrored stainless-steel planks set against views of lower Manhattan. The posts are arranged in various arcs that distort the surrounding park and city. I found this piece intriguing for how the mirrors create distortions, offering extra perceptions of the space.
I wanted to take this further to explore how individuals could form different understandings of a space, in terms of perspective, and how other viewers have an impact on the space that is viewed. Just prior to this I had worked with perspective and perception in Focal Grid, which split the same view into multiple different focus points. Interaction from a tablet with virtual buttons controlled which focus points were shown. From the exhibition of that piece it was clear that the interaction was not always obvious and the viewer didn’t feel connected with the scene that was shown.
I wanted to make the viewer more a part of the piece through physical engagement with it – reorienting the tablets and being able to appear in the video shown. This removes conventional boundaries around how the piece can be interacted with. There is a sense of playful fun in moving different objects independently of one another, like a jigsaw, allowing a viewer to see different possibilities for how the objects can be positioned.
I wanted the video shown on the tablets to be familiar to the viewer from the outset, in order to augment and intervene in a space they knew and so challenge their understanding of it. I achieved this by using live streamed video of the space from the tablets, out of a desire to create engagement with the viewer’s surroundings.
I decided on the Amazon Kindle Fire for my project due to its relatively low cost and the fact that it runs Android, which openFrameworks can compile for. I started initial tests by trying to set up the openFrameworks Android examples on a Kindle Fire. This took a lot longer than expected. I found that using Android Studio was a better way of compiling onto the tablets.
I worked on some rotation tests with the accelerometer in the tablet to help me with the rotations of the video that would be displayed. I had issues getting the values required to track continuous rotation of the tablet because it lacks a gyroscope, but this still meant I could affect the rotation of the displayed image based on whether the tablet had been rotated left or right.
Following on from this, I was not able to get the openFrameworks Android camera examples compiling onto an Android device: the program crashed very soon after being deployed to the device. I tried this with two devices running newer and older versions of Android. The solution was to use an IP camera app running in the background of the tablet, which would then stream the video to my openFrameworks app. I used IP Webcam. To get the video into my oF app I used the addon ofxIpVideoGrabber, which takes a list of IP cameras via an XML file and displays them on screen.
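In outline, the addon is used something like this (the XML layout, method names and stream URL follow the addon's example and are assumptions that may differ between addon versions):

```cpp
#include "ofMain.h"
#include "ofxIpVideoGrabber.h"
#include "ofxXmlSettings.h"

class ofApp : public ofBaseApp {
public:
    std::vector<std::shared_ptr<ofxIpVideoGrabber>> grabbers;

    void setup() override {
        ofxXmlSettings xml;
        xml.loadFile("cams.xml");                                 // e.g. <cam uri="http://..."/> entries
        int n = xml.getNumTags("cam");
        for (int i = 0; i < n; i++) {
            auto g = std::make_shared<ofxIpVideoGrabber>();
            g->setURI(xml.getAttribute("cam", "uri", "http://192.168.0.10:8080/video", i));
            g->connect();                                         // start pulling the MJPEG stream
            grabbers.push_back(g);
        }
    }

    void update() override {
        for (auto& g : grabbers) g->update();                     // fetch the latest frame
    }

    void draw() override {
        if (!grabbers.empty()) grabbers[0]->draw(0, 0, ofGetWidth(), ofGetHeight());
    }
};
```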
This solution also allowed me to share the video between all of the tablets, which worked very well. I got some good feedback from the people I showed it to.
I needed to build a frame that would allow the tablets to be suspended within it, and also allow them to move up and down and rotate on the wire, all while keeping the tablets fixed within the frame. It became clear after talking with Nicky Donald, one of the technical support team, that the frame and the attachments for the tablets would be easiest to custom make, as we were not able to find anything commercially available for this. As a result, the frame would be made out of wood and steel wire would be used to attach the tablets to the frame.
The attachments would need to be 3D printed to allow them to grip on to the wire and rotate the tablet at that position. I set about looking for 3D designs on thingiverse.com to find the basis of the parts. From this I started modifying the designs, going through tens of iterations of design and print tests.
I was unable to find a suitable case for the Kindle Fires that I was using in my project, so I concluded it would be best to print cases for the tablets. This came with the advantage of being able to have the attachment plate printed onto the case, saving having to glue it on and avoiding a messy look. However, there were issues with printing the cases; it was very difficult to perfect. The cases took around seven hours each to print, meaning it took a while to make modifications to the design and get them all printed at a high enough quality. I also had issues with the rafts and supports that the 3D printer uses to support the print as it is being built. The supports were not consistent in the print due to the complexity of the design, which resulted in weaker prints. I started using a program called MeshMixer, which allowed me to adjust my cases in real-world measurements to fine-tune them, and was also used to make the cases optimal for 3D printing. However, this still did not solve the issues with the supports, and the 3D printers that I had access to didn’t always print correctly.
I had issues with the main rotation part snapping when the cases were assembled. This damaged some of the cases, meaning modifications had to be made to them, so some were not able to rotate on the steel wire.
I had initially been considering a square wooden frame for my piece. However, thinking about how the piece would sit in the gallery space, I opted for a portrait frame. This was for several reasons, including creating the look of a window or portal that would frame the tablets and allow for this change of perspective, and to match the aspect ratio of the tablets within the frame.
The first frame that I built wouldn’t have been strong enough to support the tension from the steel wire. Nicky Donald advised me to use structural wood intended for building to support the tension from the wire. As a result, I purchased more wood and added metal braces as well as wooden mitred braces for each corner. This resulted in a very strong frame that was capable of supporting the tensioned wire.
For the steel wire, I used wire grippers and Gripples, which allow steel wire to be gripped yet adjusted for tensioning. The wire grippers were put at the top of the frame and the Gripples at the bottom. Pliers were used to pull the wire through the Gripple to tension the wire.
I was really pleased to see people’s intrigue at the opening night. This intrigue initially came from the video being very often mismatched, which drew viewers in to interact further with the piece. Mainly viewers rotated and moved the tablets up and down the wire, though usually quite tentatively, as most seemed to think the tablets looked a bit precarious. One interaction I had not been expecting was people twisting the tablets left or right on the wire to see around them. This surprised some people, as it became very clear then that the tablet they were moving didn’t always respond to their movements. This was because the feed shown on the tablet came from another tablet within the frame. This discovery led to a lot of viewers interacting with the exhibit further. However, for some the mismatched video wasn’t enough to make them physically engage with the piece.
A couple of tablets were a bit precarious on the frame, and two of them fell off. This did cause issues with viewers wanting to interact with the piece, due to these problems with the 3D-printed attachments.
‘The Space that Brought Us Here’ was a successful piece. I felt it lived up to the original concept of challenging a viewer’s perception of their surroundings. I had ideas of using the rotation to control the rotation of the image and using an image buffer to create a delay. Although I did not explore these ideas fully, it became clear that they would add too much complexity to my piece and confusion to the physical interactivity. My solution of using an IP camera to capture the video, instead of using the cameras locally, allowed me to share the video between all of the tablets. This created a very subtle shift in perspective, but one obvious enough from just looking that the perspective had been altered. I feel this was better than adding a time dimension or unexpected rotation of the video, as I wanted to make the intentions of the piece clear. However, due to the latency of the video stream, there was sometimes a small, noticeable delay when someone walked in front of the tablets. This was useful to illustrate how the tablets were creating a different perception of the surroundings, and allowed a viewer to think about the various operations and alternative compositions occurring within the space.
The issues with 3D printing did hamper the viewers’ ability to interact with the piece, due to my difficulties with the complexity of the printing. It also resulted in some of the attachments looking quite rough. If I were to do that component again I would be hesitant to 3D print the parts, given the complexities involved and breakages that are hard to prevent. Using proper tablet cases and metal or moulded plastic for the rotation and vertical movement would be better. However, considering I haven’t seen anyone else suspend tablets with the ability to move them vertically and rotate them on steel wire, I think my solution was good. More time should have been put towards the mechanics of the structure for this to work better.
Overall I am happy with my piece. I negotiated a lot of challenges with the 3D printing as well as issues with compiling software for Android with openFrameworks. I feel that my piece was striking and allowed viewers to explore the space differently through the intervention of the frame.
References and bibliography
Cycle is a series of technology-assisted performances, incorporating the use of robotics and sound. It was inspired by the interrelating concepts from Graphic Notation and East Asian Calligraphy/Ink Wash Paintings. In each unique recurrence, Cycle explores the theme of spontaneity and individuality transpired within a structured framework as the performers present their own interpretation of a set of instructions.
Each performance lasts approximately three minutes, give or take a minute; the performers end it at their own discretion. During the performance, a sole performer walks around the ‘Ink Stick Rotation Machine’ (ISRM) in a seemingly undefined way. The ISRM grinds an ink stick on an ink stone according to how the performer walks. Ambient sounds and vibrations generated from the constant moving contact of the ink stick and ink stone are amplified by speakers through a microphone located on the sides of the ink stone in real time.
In the performer’s interpretation of a set of rules constructed by the graphic score’s composer, control over the manner of performance is removed from the composer’s authority, which alludes to a spontaneous creation of the performance by leaving it to ‘chance’. Unlike music represented in traditional notation, different performances of one graphic score do not have the same melody, yet still articulate similar notions expressed in the score. In the case of ink wash painting, despite the rules of posture, the way of holding the brush, and practised strokes, the results cannot be fully controlled by the painter and remain unpredictable due to human error and the nature of ink and water – their interaction takes on a life of its own.
The audience sees and listens – nothing really comes out of watching the performance. Yet even if the audience does not understand the concepts implicated in this work, which require some background knowledge about the act of grinding an ink stick, to experience Cycle they merely have to practise being in a state of calmness and ambiguity. Just as when a painter or calligrapher prepares ink by manually grinding the ink stick, the point is to ebb the flow of thoughts, momentarily forget about the things happening outside of the performance, and just watch and listen. The performance is both a ‘performance’ and a non-religious ‘ritual’ at the same time. The feeling is like being a non-Buddhist listening to the chants of Buddhist monks: strangely calming, yet it could get annoying when one listens to an incomprehensible language for too long.
For the performers, I would hope that they would be in a world of their own, not minding the presence of the audience and focusing on their body walking in a circular path; yet I can imagine that they would perhaps be nervous in front of an audience, especially if they are performing for the first time. As a recurring theme in my work, ‘walking’ is a simple movement that can be of disinterest and a distraction all the same. It not only refers to the bodily action of moving your legs as a mode of transport but also signifies the act of repetition, which is structural, and the mundane. After walking a few times, the performer may build up a personal routine or choose to walk in a different manner each time.
After my research on Graphic Notation and East Asian Ink Wash Paintings, I drew connections between these two distinctly different genres of art and identified the overlapping characteristics that my artwork attempts to embody conceptually. I likened graphic notation to instructions that are open-ended yet specific in certain ways, and hence decided on creating a performance built around such instructions.
Borrowing the motif of ink grinding, which is in itself the stage that happens before the actual painting is executed, and combining it with the imagined sound that graphic notation alludes to, I made the ISRM a framework for the performers. The performer’s actions are translated into 26 rotation speeds and just two rotation directions on the ISRM. Within the structure of the ISRM itself, I also found it ironic to have a physically mechanical device replace the mechanical and repeated motions of ink-stick grinding. I was unsure of the exact sound that would be produced at the beginning, as the sound that is amplified would be quite different from the tiny scratching noise that I am familiar with when grinding ink. With the addition of the sound of the motor, I thought the result would be a nice hybrid between the organic and the inorganic.
In the late 1950s and the first half of the 1960s, many prominent international avant-garde composers such as Roman Haubenstock-Ramati, Mauricio Kagel, and Karlheinz Stockhausen, as well as experimental composers such as John Cage, Morton Feldman, and Christian Wolff started to produce graphic scores that used new forms of notation and recorded them on sheets that were very divergent from traditional music notation in size, shape, and colour. This new way to convey ideas about music alters the relationship of music/sound to the composer and musician. “In contrast to scores with traditional notation, graphic notation emphasized concepts and actions to be carried out in the performance itself, resulting in unexpected sounds and unpredictable actions that may not even include the use of musical instruments.” (Kaneda, 2014)
Here, I focus on how graphical notation evolved from John Cage’s musical practice and then on Treatise, one of the greatest graphical scores, by Cornelius Cardew.
Influence of Zen Buddhism in Cage’s Work
In Cage’s manifesto on music, his connection with Zen becomes clear: “nothing is accomplished by writing a piece of music; nothing is accomplished by hearing a piece of music; nothing is accomplished by playing a piece of music” (Cage, 1961).
This reads as if a quote from a Zen Master: “in the last resort nothing gained.” (Yu-lan, 1952). Cage studied Zen with Daisetz Suzuki when the Zen master was lecturing at Columbia University in New York. Zen teaches that enlightenment is achieved through the profound realization that one is already an enlightened being (Department of Asian Art, 2000). Thus we see that Cage has consciously applied principles of Zen to his musical practice: he does not try to superimpose his will in the form of structure or predetermination in any form (Lieberman, 1997).
Cage created a method of composition from Zen aesthetics which was originally a synthetic method, deriving inspiration from elements of Zen art: the swift brush strokes of Sesshū Tōyō (a prominent Japanese master of ink and wash painting) and the Sumi-e (more on this in the next section) painters which leave happenstance ink blots and stray scratches in their wake, the unpredictable glaze patterns of the Japanese tea ceremony cups and the eternal quality of the rock gardens. Then, isolating the element of chance as vital to artistic creation which is to remain in harmony with the universe, he selected the oracular I Ching (Classic of Changes, an ancient Chinese book) as a means of providing random information which he translated into musical notations. (Lieberman, 1997)
Later, he moved away from the I Ching to more abstract methods of indeterminate composition: scores based on star maps, and scores entirely silent, or with long stretches of silence, in which the only sounds are supplied by nature or by the uncomfortable audience, in order to “let sounds be themselves rather than vehicles for man-made theories or expressions.” (Lieberman, 1997)
John Cage: Atlas Eclipticalis, 1961-62
Atlas Eclipticalis is for orchestra with more than eighty individual instrumental parts. In the 1950s, astronomers and physicists believed that the universe was random. Cage composed each part by overlaying transparent sheets of paper over the ‘Atlas Eclipticalis’ star map and copied the stars, using them as a source of randomness to give him note heads. (Lucier, 2012)
In Atlas, the players watch the conductor simply to be apprised of the passage of time. Each part has arrows that correspond to 0, 15, 30, 45, and 60 seconds on the clock face. Each part has four pages, which have five systems each. Horizontal space equals time; vertical space equals frequency (pitch). The players’ parts consist of notated pitches connected by lines. The sizes of the note heads determine the loudness of the sound. All of the sounds are produced in a normal manner. There are certain rules about playing notes separately, not making intermittent sounds (since stars don’t occur in repetitive patterns), and making changes in sound quality.
Cornelius Cardew: Treatise, 1963-67
After working as Stockhausen’s assistant, Cornelius Cardew began work on a massive graphic score, which he titled Treatise; the piece consists of 193 pages of highly abstract score. Instead of trying to find a notation for sounds that he hears, Cardew expresses his ideas in this form of graphical notation, leaving their interpretation free, in confidence that his ideas have been accurately and concisely notated (Cardew, 1971).
As John Tilbury writes in Cornelius Cardew: A Life Unfinished (2008), “The instructions were a guide which focused each individual’s creative instinct on a problem to be solved – how to interpret a particular system of notation using one’s own musical background and attitudes.”
“A Composer who hears sounds will try to find a notation for sounds. One who has ideas will find one that expresses his ideas, leaving their interpretation free, in confidence that his ideas have been accurately and concisely notated.” – Cornelius Cardew
In the Treatise Handbook which guides the performer on the articulation of the score, Cardew writes that in Treatise, “a line or dot is certainly an immediate orientation as much as the thread in the fog” and for performers to “remember that space does not correspond literally to time.” (A Young Persons Guide to Treatise, 2009)
East Asian Ink Wash Painting
The Enso, or Zen circle, is one of the most appealing themes in Zen art. The Enso itself is a universal symbol of wholeness and completion, and the cyclical nature of existence, as well as a visual manifestation of the Heart Sutra, “form is void and void is form.” (Zen Circle of Illumination)
Despite there being many specific technicalities in Cage’s work, these are all qualitative, open-ended instructions, ultimately leaving it up to the performer’s or conductor’s judgement how they play the piece, as implied by Cardew’s ideas. In a sense, the individuality of each performance of the graphic score by different performers emerges. This is mirrored in Cycle’s appropriation of the creation of the Enso by the performer. Every painter draws a circle, but every circle is different. Bodily and mindfully engaged in drawing the circle, the circle becomes an allegory of the individual.
The performer not only becomes both the painter and the medium in creating the circle; the performer is also a musician with indirect control of the device that grinds the ink – an instrument with a naturalistic sound created from the contact between the ink stick and the ink stone. To quote Cage’s approach to what defines music, “the difference between noise and music is in the approach of the audience” (Lieberman, 1997).
The act of grinding the ink stick becomes the juxtaposition between the ritualistic and the improvised. Also, the ink produced after each performance is of a different quality each time, as no two performances last exactly the same length of time, nor can the performers replicate their performance exactly.
Communication between the phone and the computer is through OSC. The ISRM is made up of an Arduino Uno, which controls a stepper motor and is directly connected to the computer with a USB cable. The speed and direction of the performer are measured by the built-in sensors of a phone carried by the performer. Data from the phone’s orientation sensor and accelerometer is processed in a C++ program on the computer, which maps the speed and direction of the performer to that of the ISRM.
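A sketch of the receiving end of that OSC link (using ofxOsc; the port, OSC address and argument order are assumptions rather than oscHook's actual defaults):

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    float accelY = 0;   // latest y-axis acceleration from the phone

    void setup() override {
        receiver.setup(9000);                          // placeholder listening port
    }

    void update() override {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);                // older oF versions use getNextMessage(&m)
            if (m.getAddress() == "/sensor/accelerometer") {   // assumed address
                accelY = m.getArgAsFloat(1);           // assumed argument order: x, y, z
            }
        }
        // accelY is then smoothed, calibrated and mapped to the ISRM's stepper speed.
    }
};
```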
Controlling the Stepper Motor with C++
The Arduino part was pretty straightforward, as there was the Firmata library for the Arduino that enabled serial communication with a C++ program. However, there was no stepper library in C++, so I translated the Arduino Stepper library to C++. Working through the technical details of my stepper motor with some trial and error, this was the circuit that I used to test controlling the stepper motor from a C++ program.
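A condensed sketch of the idea: the Arduino Stepper library's four-wire sequence and RPM-to-delay calculation, re-expressed in C++ and driven over Firmata (pin numbers are placeholders and the timing is simplified compared to the original library):

```cpp
#include "ofMain.h"

class StepperOverFirmata {
public:
    StepperOverFirmata(ofArduino& a, int stepsPerRev, int p1, int p2, int p3, int p4)
        : ard(a), stepsPerRevolution(stepsPerRev), pins{ p1, p2, p3, p4 } {}

    // Same relationship as the Arduino library: microseconds per step from RPM.
    void setSpeed(float rpm) {
        stepDelayMicros = 60.0f * 1000.0f * 1000.0f / stepsPerRevolution / rpm;
    }

    // Advance one step in the 4-phase full-step sequence (direction = +1 or -1).
    void stepOnce(int direction) {
        phase = (phase + direction + 4) % 4;
        static const int sequence[4][4] = {
            { 1, 0, 1, 0 },   // coil pattern for each phase
            { 0, 1, 1, 0 },
            { 0, 1, 0, 1 },
            { 1, 0, 0, 1 }
        };
        for (int i = 0; i < 4; i++) {
            ard.sendDigital(pins[i], sequence[phase][i] ? ARD_HIGH : ARD_LOW);
        }
        ofSleepMillis(static_cast<int>(stepDelayMicros / 1000.0f));  // coarse timing
    }

private:
    ofArduino& ard;
    int stepsPerRevolution;
    int pins[4];
    int phase = 0;
    float stepDelayMicros = 0;
};
```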
Here’s me testing the program out:
To hold the ink stone, ink stick, and the stepper into a single functional entity, I started off with a preliminary design of a 3D model in Blender, which eventually I was going to 3D print.
I got the idea for the rotation wheel and axis from the driving wheels of steam locomotives, but I was not satisfied with the motion of the rotating mechanism in the first prototype. It caused the ink stick to rotate in a rather awkward manner that did not keep it facing the same direction. I also removed the water tank, as I felt it was visually obstructive and had no purpose beyond providing the ink stick with water, and I did not manage to figure out a fail-safe method of channelling the water onto the ink stone. I thought of using a wick to transfer water from the tank to the ink stone, but the transfer was too slow, or a small hole with a pipe dripping water onto the ink stone, but the rate of dripping would change as the water in the tank decreased due to the drop in pressure. Also, it would damage the ink stick to let it touch the water for too long, hence I scrapped the water tank from then on and decided to manually add water before every performance.
There were many difficulties trying to get the holder for the ink stick to fit. I realised that it was never going to fit perfectly, as the dimensions of the ink stick itself were not uniform; one end of the stick could be slightly larger than the other, which made it either too loose or too tight when I tried to pass the entire length of the stick through the holder. I resolved this by making the holder slightly larger and adding sponge padding on the inside so that it would hold the ink stick firmly despite slight differences in width. The ink stick was shaky when it rotated, so I increased the height of the holder to make it more stable. I also added a ledge on each side of the holder for rubber bands, so that the bands could push the ink stick downwards as it gets shorter during grinding.
Before arriving at the final design, there were just wheels that were only connected to each other through the rod. The rotation did not work as expected of a locomotive wheel, and I realised it was because the wheel not connected to the motor had no driving force ensuring it spun in the right direction. Therefore, I changed the wheels to gears.
The printed parts did not fit perfectly, and that was not because of wrong measurements; there is a factor of unpredictability in the quality of 3D printing. I tried using acetone vapour on the parts that need to move independently of each other to smooth their surfaces, but the acetone vapour also swelled the plastic. The plastic did become more malleable, though, so I could easily shave the parts down with a penknife.
This process was too slow, so I ended up brushing the acetone directly onto the plastic parts and waiting a few seconds for them to soften before using a penknife. Super glue was then used to hold together parts that were not supposed to move. The completed ISRM:
I used electret microphones that were connected to a mic amp breakout, then connected to a mixer for the performance. I had first got an electret microphone capsule to use with the Arduino, but I did not realise that the Arduino was not meant to be used for such purposes, nor was the microphone meant for the Arduino.
So I got another kind which could connect directly to the output, as I did not want to use a regular large microphone, which would have looked quite ostentatious next to the small ISRM.
Trying to amplify the sound of making ink (sound is very soft because I only had earphones at that time, and I was trying to get the phone to record sound from the earphones):
Sensor Data & Stepper Motor Controls
I initially thought of creating an Android application to send data to the C++ program via Bluetooth, but there was the issue of poor Bluetooth connectivity, especially the range and speed of communication. Hence, I switched to using OSC to communicate the data. Before finally deciding on an OSC app, oscHook, I made an HTML5 web application with Node.js to send the sensor data. It worked well except for speed issues, as there was a lag between moving the phone and receiving the corresponding data that made it rather less than ‘real-time’, and it also sent NaN values quite often, which would crash the program if there were no exception handlers.
For controlling the speed of the stepper motor, I mapped the average difference of the acceleration on the y-axis (up and down when the phone is perpendicular to the ground) within the last X values directly to the speed of the motor. Prior to this, I looked at various ways to get the speed and direction of walking, from pedometer apps to compass apps. As different people produced different sensor values with the phone, I created a calibration system that adjusted the mean acceleration values for when the performer was not moving and when the performer was moving at full speed. This ensured that the stepper would be able to run at all speeds for all performers.
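A sketch of that calibration-and-mapping step (plain C++; the names, example thresholds and rolling-window approach are assumptions, not the project's exact values):

```cpp
#include <algorithm>
#include <cmath>
#include <deque>

struct Calibration {
    float still     = 0.05f;  // mean |delta accel| measured while the performer stands still
    float fullSpeed = 2.0f;   // mean |delta accel| measured while walking at full speed
};

// Rolling mean of absolute acceleration differences over the most recent readings.
float meanAccelChange(const std::deque<float>& recentAccelY) {
    if (recentAccelY.size() < 2) return 0;
    float sum = 0;
    for (std::size_t i = 1; i < recentAccelY.size(); i++) {
        sum += std::fabs(recentAccelY[i] - recentAccelY[i - 1]);
    }
    return sum / (recentAccelY.size() - 1);
}

// Map the calibrated activity level onto the motor's speed range (e.g. 26 discrete speeds).
int motorSpeedFor(float activity, const Calibration& cal, int minSpeed = 0, int maxSpeed = 25) {
    float t = (activity - cal.still) / (cal.fullSpeed - cal.still);
    t = std::max(0.0f, std::min(1.0f, t));                    // clamp to [0, 1]
    return minSpeed + static_cast<int>(t * (maxSpeed - minSpeed) + 0.5f);
}
```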
Link to Git Repo.
Performance & Installation
Videos of performances were playing on the screen for the second day of Symbiosis. The TV was covered with white cloth on the first day. The ISRM was placed on a white paper table cover with the microphone next to it.
Instructions for Performers
Besides having to run a calibration before their performances, I asked the performers to wear “normal clothes in darker colours” to contrast with the white room walls. I decided not to specifically ask for black, as it felt too formal and intimidating. Although the performance exudes the sense of a ‘ritual’, it was not meant to be solemn or grievous, as are the cultural connotations of fully black clothes in a rather ritualistic setting.
During the performance, the performers were to heed these instructions:
- Walk around the room.
- When you stop, remain still until you hear the sound indicating that the motor is at its lowest speed.
- End the performance around three minutes after the start.
Prior to completing the program that controls the stepper motor, I wanted to attach the phone to a belt and hide it under the performers’ clothes so that they would be walking hands-free. I realised that it was quite abrupt to merely end the performance with the performer standing still, as the audience had no indication of whether the performer was pausing or stopping entirely. Hence, after realising that placing the phone parallel to the ground caused the motor (and in turn the sound) to stop in an elegant manner, I decided that the performer would hold the phone (which I covered in white paper to remove the image of a phone) in their hand and place it on the ground to signify the end of the performance.
There was a total of eight performances by three people, Yun Teng, Leah, and Haein. These are videos* of the performances by each of them on the Symbiosis opening night and their thoughts on their experience of performing:
*The lights in the room were off during the day, hence videos of the earlier performances look quite dark. If you do not hear any sound from the video, please turn up the volume.
“Being asked to perform for this piece was an interesting experience. For me, it was seeing how (even on a conceptual level, as it turned out) that my physical movement can be translated through electronics and code into the physical movement of the machine and the audio heard. Initially, although we were given simple instructions to follow and even, to some extent, encouraged to push these instructions, I was at a loss to how to interpret them, and just walked in a circular fashion. I tried to vary the pace, speed and rhythm of my walking in order to create variation, but ultimately fell back into similar rhythms of fast, slow, and fast again. It would have been interesting to perhaps push this even further if the machine was more sensitive to height changes, or arm movements – as a dancer who is used to choreography, this was a challenge for improvisation and exploration. In addition, due to the size of the room, the space was limited and hence the walking could only take place in certain patterns.” – Yun Teng
“At first, the walker was uncertain, distracted and anxious. She explored the link between sound and her unchoreographed strides and expected the connection to be instantaneous and obvious. However, it was not. There were delays and inconsistencies; the electronic and mechanic could not accurately reflect the organic. A slight panic arose from the dilemma of illustrating the artist’s concept to the audience and accepting its discrepancies as part of the performance. Slowly she started to play around with the delay, stopping suddenly to hear the spinning sound trailing on, still at high speed, and waited for it to slow down. Rather than a single-sided mechanical reaction to movement, the relationship between the walker and the machine becomes visible and reciprocal. Rather than just walking, now she also had to listen, to wait, and by doing so interact with the machine on a more complicated level. Through listening, she felt the shadow of her movements played back to her by the machine. The observation sparked contemplation on the walker’s organic presence versus the machine’s man-made existence and the latter’s distorted yet interesting reflection of the former.” – Leah
“The whole practice first was received as confusing and aimless as there was too much freedom for one to explore. It was challenging to perform the same act (walking/running) for more than two minutes. At first, I performed more than four minutes, unable to grasp the appropriate time, but it decreased as I repeated the practice. This repetitive performance was quite meditative and physically interactive with the work that caused me to wonder about the close relationship between myself and sound piece (which changes according to my walking speed). The most pleasant part of the performance was that I got to control the active aspect of the work and directly interact with it.” – Haein
The audience was very quiet, probably so that they could hear the sound that was very soft even at its loudest. When they first came in, they did not know what to do as there was no visible sitting area (so I directed them to sit at places that allowed the performer to roam most of the room). It was a huge contrast to the audience that interacted with my previous work as only the performer gets to have a direct interaction with the ISRM. Even then, the ISRM was visibly moving during the performances.
Just hours before the opening night, the ISRM broke at (fig. A & B). It was a mistake on my part, as I was reapplying super glue (fig. B) to the base, which had somehow loosened from the previous application. In hindsight, I should have made extra parts (I did print extras of certain parts, but not all, and they were of no use since I did not bring them on site, nor were they ‘acetoned’ to fit together); I could not manage to salvage the broken parts, and I knew that I would not be able to reprint them in time. In the end, I slightly altered my work, as the ISRM could no longer function as intended. Instead of having the microphones stuck to the sides of the ink stone, I stuck them to the stepper motor. The sound no longer had an organic element from the ink stick and ink stone; it was completely mechanical now.
After undertaking this project, I have learnt not to limit myself by my tools, but to explore different methods and tools before narrowing down how I create the work. I had a misconception that 3D printing was the most efficient way; in some ways it was, because the printer was doing the hard work, not me, and I did want to try 3D printing. Despite that, I should not have limited myself by my lack of consideration of other materials for building the ISRM, such as the traditional way of putting together wood and gears. On the other hand, I do not regret my attempts to build an Android app (which I quickly decided was not worth my time for the simple thing I was trying to accomplish) and a web application for sending the sensor data from the phone with Node.js, as these were new things I learnt even though I did not use them in my final work.
Fortunately, I managed to finish the design of the ISRM and print it out in time, but I felt that I should have focused more on the ISRM instead of coding in the earlier phase of the project timeline. 3D printing takes a lot of time, as I have experienced through this project, and any botched prints needed to be printed again as they are rarely salvageable even after being in print for hours. It is also tricky to get the settings right (i.e. infill) such that the printing time is minimised without compromising the quality.
Apart from the many technical things, I also learnt how to organise a performance piece (this was my first), and through making this artwork there are many more implications and questions that arise from what I created. For the performance, there were many things to keep track of, such as rehearsing with the performers beforehand, the attire of the performers, the schedule of performances, getting the camera to film for documentation and managing the audience. In conclusion, despite being unable to carry out the performances as I had originally planned, I am glad that I managed to put together what was left of the entire work even when the ISRM failed to work correctly, and the original intentions behind the artwork are still largely intact.
References & Bibliography
Works Cited in Background Research
A Young Persons Guide to Treatise. (12 December, 2009). Retrieved 2 November, 2015, from http://www.spiralcage.com/improvMeeting/treatise.html
Asian Brushpainter. (2012). Ink and Wash / Sumi-e Technique and Learning – The Main Aesthetic Concepts. Retrieved 2 November, 2015, from Asian Brushpainter: http://www.asianbrushpainter.com/blog/knowledgebase/the-aesthetics-of-ink-and-wash-painting/
Cage, J. (1961). Silence: Lectures and Writings. Middletown, Connecticut: Wesleyan University Press.
Cardew, C. (1971). Treatise Handbook. Ed. Peters; Cop. Henrichsen Edition Limited.
Department of Asian Art. (2000). Zen Buddhism. Retrieved 11 December, 2015, from Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art: http://www.metmuseum.org/toah/hd/zen/hd_zen.htm
Kaneda, M. (13 May, 2014). Graphic Scores: Tokyo, 1962. Retrieved 2 November, 2015, from Post: Notes on Modern & Contemporary Art Around the Globe: http://post.at.moma.org/content_items/452-graphic-scores-tokyo-1962
Lieberman, F. (24 June, 1997). Zen Buddhism And Its Relationship to Elements of Eastern And Western Arts. Retrieved 10 December, 2015, from UCSC: http://artsites.ucsc.edu/faculty/lieberman/zen.html
Lucier, A. (2012). Music 109: Notes on Experimental Music. Wesleyan University Press.
Tilbury, J. (2008). Cornelius Cardew (1936-1981): A Life Unfinished. Copula.
What Ink Stick Should You Choose For Japanese Calligraphy? (2015). Retrieved 3 December, 2015, from Japanese Calligraphy: Modern Japanese Calligraphy inspired in Buddhism and Zen: http://www.theartofcalligraphy.com/ink-stick
Williams, M. L. (1981). Chinese Painting – An Escape from the “Dusty” World. Cleveland Museum of Art.
Yu-lan, F. (1952). A History of Chinese Philosophy. Princeton, New Jersey: Princeton University Press.
Code References & Software
Making Some Noise
Initially my project hoped to define and implement a physical, meaningful link between visual and auditory processes within the computer, working specifically from the perspective that the user would use visual input and manipulation to produce audio output.
Two potential avenues were considered:
1. the VGA port: processing of the VGA signal (a precursor to modern graphical output), with the signal being fed back into the computer and used to drive audio processing.
2. the main graphics pipeline itself (through monitoring processes happening on the chip, or following the journey of pixels being processed).
The idea for graphical input was to use the keyboard and mouse to control a set of base shader functions.
Ultimately, approximately halfway into the project, these avenues were abandoned, not through lack of potential but through lack of technical skill on my part and the time constraints of the project. The new focus was moving away from a physical connection between processes towards the functional representation of signals (as a multitude of sine waves). With this switch it was necessary to shift from a display based on direct input to one which would allow the user to explore the audio landscape of photographs or pre-rendered images.
Research process and outcomes:
Initial focus was on trying to understand how a digital signal is processed to produce a visual stimulus. Working from the top down, I discovered that the visual output is stored as a bitmap in the frame buffer, and the bit pattern per pixel is read out to the graphics display at the refresh rate. So, how did these pixel values come to be generated, and could I monitor the process to produce audio?
This question led me to the Graphics Pipeline where I was confronted with images such as this:
Clearly a pixel goes through numerous stages and memory locations before reaching the frame buffer, and, importantly, these locations are on the graphics card.
A Brief History of the Graphics Card:
In the past, graphics operations were performed by a Video Graphics Array (VGA) controller: a memory controller and display generator connected to some DRAM. Over time, these controllers came to incorporate functions needed specifically for graphics, such as hardware for triangle setup and rasterisation, texture mapping and shading, and they became more programmable, with programmable processors replacing fixed-function logic. Now, GPUs have become massively parallel processors containing many cores, typically organised into multi-threaded multiprocessors.
Looking into these, it became apparent that most companies kept the inner workings of their processors to themselves. But there was one exception. The documentation for the GPU located on the Raspberry Pi (the Broadcom VideoCore IV 3D) has been released by Broadcom, leading to another diagram.
And even another one representing a QPU (Quad Processing Unit) pipeline:
These diagrams illustrate well the minute scale at which pixel fragments are combined to form a pixel. After reading the relevant portions of the documentation, and the experiences of a couple of people who had worked with GPGPU on the Raspberry Pi, I accepted that this was too complicated for my current project. If I were to go back to it, I would perhaps start by looking at the texture and memory lookup unit, but not only is this technically complicated, resolving these processes into a form which would allow for meaningful audio output would be challenging to say the least.
Moving on from this, I looked at the VGA port of my laptop. This is a 15-pin output with pins for red, green, blue, V-sync and H-sync to describe the image. Attempting to run these through an Arduino proved fruitless, as I was unable to use the xrandr utility to force a signal through the VGA port.
It was here that I switched to looking into an FFT implementation to analyse an image and use that analysis to produce sound. Initially interested in 2D FFT implementations for calculating spatial frequencies, I eventually settled for one-dimensional transforms in ofxMaxim after being unable to get the specific libraries offering a 2D feature working (these being FFTW and ofxFft).
The program is split into two parts. The 'sample' class runs in the main thread, setting up the audio context and FFT, loading the images, and selecting and manipulating the image. The 'noisemaker' class runs in a separate thread and deals with calculating the audio samples and organising the bins by magnitude when necessary.
The image above shows the program running. The portion of the image being sampled is the highlighted central square whilst the full image is covered by a translucent mask. The fft analysis of the central square is represented by the white peaks towards the bottom of the image.
The central square (moved with the mouse) contains 32768 pixels. This was chosen as the largest power of 2 that could be processed by the FFT; a power of 2 is desirable as it speeds up the Fourier transform considerably. One downside to a sample of this size is the time it takes for the audio to be calculated. Using the standard wave equation A*sin(TWO_PI * f * t + phi) for around 16000 bins per audio sample means that, unless you keep the mouse fixed in one spot for a long time, the current image portion being sampled will only account for a fraction of the sound being played. One bonus of this is that some interesting effects can be created by moving the mouse around the image.
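To give a sense of the cost involved, here is a hedged sketch of summing one sine per bin to build a single audio sample. This is not the actual noisemaker code; binMags, sampleRate, the bin-frequency mapping and the normalisation are my assumptions.

    #include <vector>
    #include <cmath>
    // TWO_PI comes from openFrameworks' ofConstants.h

    float synthesiseSample(const std::vector<float>& binMags, float t, float sampleRate) {
        float out = 0.0f;
        int numBins = (int)binMags.size();
        for (int i = 0; i < numBins; ++i) {
            float f = i * sampleRate / (2.0f * numBins);   // approximate centre frequency of bin i
            out += binMags[i] * sinf(TWO_PI * f * t);      // A * sin(TWO_PI * f * t + phi), phase dropped here
        }
        return out / numBins;                              // crude normalisation to avoid clipping
    }

Doing something like this for thousands of bins, per sample, is what makes the audio so slow to catch up with the mouse.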
Basic image resizing functions are provided by using the ‘UP’ and ‘Down’ arrow keys while pressing ‘r’ will reset the image to its original state.
There are functions to filter the bins used for a particular image-sample sound by magnitude. Pressing 'm' enters this mode, with the 'UP' and 'DOWN' arrows increasing or decreasing the number of bins to include. This was initially intended to provide a way of breaking up the noise with something different, and to offer an audio-to-image mode by generating the image based on this bin selection. However, it came with unforeseen problems. Firstly, it is necessary to wait for the noisemaker to finish its current cycle, and this takes a long time, breaking any feeling of engagement or exploration for the user. Secondly, because fewer frequencies are included there is less dampening from the phase-shifting of sine waves, and considering that the ones which are present are of the highest magnitude, there is a large increase in volume as soon as the magnitude filter begins; the volume decreases as more bins are included.
Unfortunately I was unable to resolve this issue nor provide further image or sound manipulation functionality.
Evaluation and Conclusion:
Overall this was a poor project. The final outcome is limited and it is devoid of any deep understanding of the chosen area of study. The sound/image link is not particularly successful considering the lag and it is not fun enough to justify the arbitrary audio/visual connection.
Practical areas for improvement include increased functionality: for example, the ability to save samples and alter multiple images, a range of image filters and sample playback effects, and smoother handling of levels and memory management.
However, the real downfall of this project was the planning. It would have been far better to start from a position of understanding and build from there (perhaps where the technical research left off) rather than attempting to learn three completely new topics. Ultimately the difficulty of the initial research bogged me down and I was unable to make any practical gains or headway.
On a positive note I will be more prepared in future and this project has given me new ideas regarding the handling of signals and data and the potential for making meaningful connections between the human experience of the digital and the way computers are used to abstract and analyse this experience.
Bibliography and References:
‘Computer Organization And Design: the hardware/software interface’ by David A. Patterson and John L. Hennessy
VideoCore IV 3D documentation: https://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf
Walkthrough for GPGPU on Raspberry Pi: https://rpiplayground.wordpress.com/2014/05/03/hacking-the-gpu-for-fun-and-profit-pt-1/
A project making use of Pi GPU before documentation was released: https://github.com/hermanhermitage/videocoreiv
A linux graphics overview: http://moi.vonos.net/linux/graphics-stack/
A VGA project converting audio to Video: http://crackedraytube.com/pdfs/hacking_a_vga_monitor.pdf
Figure 1: https://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf page 13
Figure 2: https://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf page 17
Spider! was supposed to be a program in which you would be able to interact with a realistic spider, using the Leap Motion and Google Cardboard. Unfortunately, this wasn't to be.
Research and intended outcome
Spider! was intended as a therapeutic tool for one of the first stages of treating arachnophobia. Using the Leap Motion in conjunction with Google Cardboard, the user would be able to interact with a spider by picking it up or scaring it into fleeing or attacking.
A simple representation of what we were going for.
Egle's first port of call was to study the Strandbeest mechanism, which moves like so:
We started looking at how spider legs work and how they move with each other.
A spider's legs individually move as the diagram displays; a spider moves four legs at a time (two on either side) to keep its balance, as the next diagram shows.
Image edited from www.mirrorservice.org
The audience we were aiming for is people who are interested in overcoming their fear of spiders by desensitising themselves to the look of them, but who don't want to be near a real spider. It would be the first step towards getting rid of this fear; the second step would be to be around an actual spider.
The idea was to have all the information in openFrameworks sent to Unity, so that it could easily move the 3D models and visualise the product.
The project unfortunately ended up a failure, as Egle and Jamie were bombarded with problems at every turn.
Our original intent was to make a virtual environment where a person could interact with a 3D spider that would respond naturally to the user's movements. Unfortunately, we ended up making a simple informative app about spiders.
Our working time dwindled from the beginning, with Jamie having to work in his free time and Egle having to go back and visit family; because of this, communication between us was strained.
It all started when we decided to use the Leap Motion, a piece of hardware that tracks hand and finger positions via infra-red and displays them on screen, and the Google Cardboard, a cheap VR headset made of cardboard, as it is so aptly named. Setting up the environment was a disaster: following the software development kit instructions in the Leap developer walkthrough was confusing, and it did not give us all the files we needed. With Simon's help we managed to set up an openFrameworks environment that works with the Leap Motion, using the example in the ofxLeapMotion addon. Soon it was time to integrate the Google Cardboard environment, but that proved too hard, as we would have had to communicate via a Java environment. That is when Jamie came up with the idea of using Unity, as the setup there was basically drag and drop. This caused some concern, because the point of the project was for it to be done in openFrameworks; the solution was that, while everything would be visualised in Unity, all the data would come from openFrameworks. There was some confusion about how to make openFrameworks communicate with Unity. Jamie got an OSC example working, sending on keyPressed from the openFrameworks side and receiving in Unity, so the OSC link itself worked. The problem Jamie then had was getting the position information from the Leap to send over, mostly because of bad access errors; there seemed to be no way to access that information manually, and therefore we were unable to use it to create the interaction.
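For reference, the openFrameworks side of that OSC link could look something like the sketch below, using the ofxOsc addon. The port, address pattern and hand-position helper are illustrative assumptions, not our exact code.

    #include "ofxOsc.h"

    ofxOscSender sender;

    void setupOsc() {
        sender.setup("127.0.0.1", 12345);          // Unity listens on this host and port
    }

    void sendHandPosition(float x, float y, float z) {
        ofxOscMessage m;
        m.setAddress("/leap/hand/position");       // address pattern Unity matches against
        m.addFloatArg(x);
        m.addFloatArg(y);
        m.addFloatArg(z);
        sender.sendMessage(m, false);              // false = do not wrap the message in a bundle
    }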
It was at this point that both Jamie and Egle became disheartened with the project and started to pull their attention in other directions. From Jamie's experience, he became down and unable to work on something that seemed too big to tackle; try as he might, he could not find it in him to carry on. Meanwhile, Egle was spending a lot of time learning a new 3D modelling program, Maya, spending most nights watching tutorial videos and not sleeping, which in turn made her unable to work at a hundred percent.
This exhaustion and other university-related stresses unfortunately led to Egle having second thoughts about the course and having priorities elsewhere.
Upon reflection, I believe that the project was too ambitious for our skill levels and the time we had available. While the project may have failed, we have learned a lot about ofxAddon libraries, Unity assets, Maya and, of course, ourselves. If I were to do this project again I would probably try to do it without the external devices, which would greatly simplify the project and make our lives easier. External devices have a capacity to fail that is unmatched by any other add-on we could have used; this was the main problem with our project, because the external devices, such as the Leap Motion and Google Cardboard, do not have any documentation available, which made our lives much more difficult than they could have been. A knock-on effect was that the project became a monster we did not want to face. It also made us realise the importance of getting the lecturers' help, as they really made our lives easier and helped us through the difficult times. The main thing I would have done differently would have been the very subject of the project: we would have made something simpler, something easier to build and something with more documentation. I feel that if we had done this it would have made our lives easier, it would have made the project simpler and it would have helped us make a program worthy of Creative Projects.
Duppy Tree is an installation based on the concept of the Iroko bottle tree. The Iroko bottle tree can be found in West Africa and the Caribbean, and its purpose is to keep bad spirits or bad vibes away. With what we have learned throughout the year, we were intrigued to see how these skills could alter and enhance the bottle tree concept, essentially creating an immersive experience for the user. We decided that we wanted to play with light, so that as the user walks closer to the tree the light gets brighter, and as the user walks further away from the tree the light gets dimmer. To achieve this, we used an ultrasonic proximity sensor via openFrameworks with the Arduino.
Although this installation appeals to artists and interior designers, we believe that the tools we used for our project pique the interest of professionals across a wide range of fields: musicians, artists, interior designers, educators and set designers.
In this instance, the concept is an interpretation of the Iroko spirit bottle tree, which is present in West African and Caribbean cultures. Our artistic interpretation of the tree is therefore neither a part of the past nor coterminous with European avant-garde modernist art, which follows the trajectory of a succession of styles (such as Fauvism, Cubism, Expressionism, Surrealism, Abstract Expressionism, etc.). This opens the possibility for a new artistic language.
- In the first week, Alabo attended “Sound System Culture London Exhibition” and shared various ideas that linked to sound and culture. Next, upon playing around with different signal paths and add-ons on OpenFrameworks and Max-msp, we thought that it would be interesting to use sound alongside the light and proximity sensor so that now, as the user walks closer to the tree, the light gets brighter and a sound is triggered.
- The following week, we started doing more research and decided that the best option for this project was a Raspberry Pi because we wanted it to be a stand-alone object. The Arduino was a good option but we thought the Pi was a better option due to its portability.
- For the project we agreed to equally split the costs, retaining a maximum budget of £500 for equipment. However, we managed to spend less. So we pretty much had the structure of Alabo focusing predominantly on sound, whereas Eddie was focusing on the audio reactive visuals.
- At this point our idea was to create an interactive speaker
- By week 4 we pretty much had the majority of our equipment, which enabled us to experiment with the Raspberry Pi. The Raspberry Pi was quite a challenging prospect. Ideally, we wanted to spend at most 3 weeks on getting the sound to work with proximity. If we managed to get something working, then we wanted to go out and get user feedback; this would have told us how much work we still had to do. In the end we went beyond 3 weeks just struggling to get the Raspberry Pi set up.
- The first thing we struggled with was getting the Raspberry Pi to run a Raspbian image, the OS that allows openFrameworks to run on it. We had two Raspberry Pis, number 1 and number 2, and we both struggled to install the image onto an SD card. Eddie managed to install it onto number 2 using a tutorial and it worked really well. Alabo had issues with the SD card he had, but at least one of us managed to install something (after a while his SD card also had a Raspbian image). The next step was to install openFrameworks onto the Pi, which proved to be a big challenge, as we had never done anything like this before. We both started looking through the openFrameworks website and learned that we needed the Linux armv6 release for the Pi, so we tried it. For some reason everything we tried kept failing, so Eddie posted on the openFrameworks forum and we got a response from Arturo. He suggested installing the nightly build, and openFrameworks started compiling projects. We even added a camera and wrote simple code just to check whether it worked, and it did.
- By this time Alabo had bought the LiFX lights and had begun trying to get them to communicate with openFrameworks, but to no avail; they proved difficult to work with between home and university, due to needing to sign into a Wi-Fi connection to access the lights.
- So the next step was to link the Pi with the ultrasonic sensor we bought, using a breadboard, two resistors and various wires. We checked online for ways to set this up, which were a struggle to find. We had to ensure that the ultrasonic sensor output no more than 3.3 volts, or else the Raspberry Pi would die. The most challenging part was compiling a project that read the sensor so that we could detect the distance. We tried and searched for various solutions, but couldn't find any, and after spending weeks trying to get the sensor working with the Pi we ended up giving up on it and deciding to use the Arduino instead.
- Alabo then purchased the Philips Hue lights to prototype with Max/MSP and TouchOSC, which led us to see that this idea of merging sound with light was easily achievable on other platforms.
Touch OSC Test
- Trying to get the ultrasonic sensor to work proved a struggle, and after searching through forums and various tutorials we came across Cormac and Johan’s technical research which helped us set up the ultrasonic sensor with openFrameworks.
Philips Hue Light Test
Problems with our Build
The main problems we encountered with our OpenFrameworks code were to do with sending a request to communicate with the LiFX lights. Despite trying different addons, getting assistance from our lecturers and searching through the forums, we still couldn’t find a solution. Upon discovering that python could also send a request to the lights, we decided to approach the task by using python and communicating between the two programs with OSC messages.
However, still unsatisfied with the approach, Alabo bought the Philips Hue lights, as there was a lot more online support and even an ofx addon that let you change parameters from within openFrameworks. Now comfortable with the Philips Hue lights and aware of the similarities between the Philips Hue and LiFX APIs, Alabo was able to apply the same principles to the LiFX in openFrameworks with success. The solution lay in adding a SysCommand.cpp file to the project, which allows you to call a cURL request to the light.
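As a rough sketch of the idea (not the exact project code; the token, selector and helper name are placeholders), the request can be fired by shelling out to cURL against the LiFX set-state endpoint:

    #include <cstdlib>
    #include <string>

    // Set the brightness of all LiFX lights via the HTTP API by shelling out to cURL.
    void setLifxBrightness(float brightness) {          // brightness expected in the range 0.0 - 1.0
        std::string cmd =
            "curl -X PUT \"https://api.lifx.com/v1/lights/all/state\" "
            "-H \"Authorization: Bearer YOUR_LIFX_TOKEN\" "
            "-d \"brightness=" + std::to_string(brightness) + "\"";
        std::system(cmd.c_str());                       // blocks until cURL returns
    }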
Now the only problem left was that the lights were not responding to the proximity sensor, because the sensor was sending data too quickly for the network to respond. To produce the results we were after, we had to increase the delay value in the Arduino sketch, adding it after 'Serial.write'.
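A hedged sketch of what the Arduino side of this looks like; the pin numbers, the centimetre conversion and the 200 ms delay are illustrative, not the exact values we used.

    const int trigPin = 9;
    const int echoPin = 10;

    void setup() {
      Serial.begin(9600);
      pinMode(trigPin, OUTPUT);
      pinMode(echoPin, INPUT);
    }

    void loop() {
      digitalWrite(trigPin, LOW);  delayMicroseconds(2);
      digitalWrite(trigPin, HIGH); delayMicroseconds(10);   // 10 us pulse triggers the HC-SR04
      digitalWrite(trigPin, LOW);
      long duration = pulseIn(echoPin, HIGH);               // echo time in microseconds
      long distanceCm = min(duration / 58, 255L);           // rough cm conversion, clamped to one byte
      Serial.write((byte)distanceCm);
      delay(200);   // throttle readings so the lights and network can keep up
    }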
Reflecting back on the project, we realised that what we were actually trying to create wasn't one product, but several interconnected products, like the Apple HomeKit. The sound-speaker idea was a good one, but we fell into many potholes approaching the deadline. Even if we had managed to get the sensor working on time, we would have had the trouble of playing the music wirelessly, and working with the Raspberry Pi was a tedious process, so the decision to switch to the Arduino saved the project when there wasn't any time left. In terms of functionality, we noticed several improvements that could be made. Firstly, our results for the prototype were very restricted and linear; to improve the project, we would like to use more proximity sensors and chain them in a circle around the tree to get readings from all angles. To conclude, in the future we could look into developing our own voice-activation feature and creating an intelligent assistant similar to the likes of Siri, Amazon Echo, Google voice control and LiFX Jasper (we could even find a way to integrate these ready-made systems).
Raspberry Pi ultrasonic sensor – https://ninedof.wordpress.com/2013/07/16/rpi-hc-sr04-ultrasonic-sensor-mini-project/
Raspberry Pi Motion Sensor Tutorial – https://www.pubnub.com/blog/2015-06-16-building-a-raspberry-pi-motion-sensor-with-realtime-alerts/
Lifx API – http://api.developer.lifx.com/docs/set-state
Philips hue max msp OSC – https://www.youtube.com/watch?v=0D3tL4Wv9aQ
Arduino Ultrasonic Proximity Sensor HC-Sr04 – http://www.instructables.com/id/Arduino-HC-SR04-Ultrasonic-Sensor/
cURL Request in OpenFrameworks Code – https://github.com/wouterverweirder/devineClassOf2016/blob/master/src/SysCommand.h
by Akira Fiorentino and Uyen Tran Hong Le
XPLOR is an immersive exploration game / visual experience, viewed from a top-down perspective, where the player moves in 2D but the environment is in 3D.
The player controls a virtual fish-like life-form, guiding it around in an abstract environment populated by large, pulsating blob-like creatures.
The goal is to stay alive by consuming food whilst avoiding being absorbed into the giant creatures. Eating the food increases the score. The player has 3 lives; once they're all lost, it triggers a game over, which displays the score and prompts the player to restart.
Our first focus was to develop a game that employs generative randomness, used in the creatures. They are symmetrical shapes built from a single equation called the Superformula, devised by Johan Gielis in 1997. In the game, every time the creatures go off the screen, their appearance (colours, shading colours, resolution) and behaviour (amount of oscillation, speed and direction of rotation) are randomised.
Randomness also applies to the positions of the food objects. Altogether, this creates a sense of unpredictability, capturing the player's attention.
The second focus was on visual aesthetics: we wanted to explore ways to create a mesmerising art game with a futuristic and mystical style. This is shown through the alluring shading of the creatures, the glowing food spheres and a dark background that subtly changes colour, contrasting with the tiny particles to create a nice gleaming effect.
We believe our program can appeal to both tech-savvy people, who will take interest in the super formula and its implementation, as well as people who aren’t familiar with programming concepts, who will simply enjoy the visual experience. In particular, our program could easily appeal to a very young audience (ages 2 to 10), given the simplicity of the controls, and the colorful visual appeal.
Our main sources of inspiration are flOw and Spore, in terms of game mechanics, aesthetic style and the top-down view with 2D gameplay in a 3D environment.
The generative randomness is inspired by the idea of things changing when the player isn't looking (Creepy Watson).
(More on this can be found on our Creative research)
Understanding the Superformula:
The Superformula is a geometric equation that can create many organic and natural shapes with 6 parameters: a, b, m, n1, n2, n3. It generalises the circle equation behind Pythagoras' theorem by replacing the squared exponents with exponents 'n'. Below are our notes on understanding the formula at a low level:
(More on background research: Technical Research)
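For reference, here is a hedged sketch of the 2D form of the formula as we understand it; the function and variable names are illustrative, not taken from our project code.

    #include <cmath>

    // r(phi) for the Superformula; a point on the curve is (r * cos(phi), r * sin(phi)).
    float superformula(float phi, float a, float b, float m, float n1, float n2, float n3) {
        float t1 = powf(fabsf(cosf(m * phi / 4.0f) / a), n2);
        float t2 = powf(fabsf(sinf(m * phi / 4.0f) / b), n3);
        return powf(t1 + t2, -1.0f / n1);
    }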
We started off by working on the creatures, using Superformula3D by Kamen Dimitrov. In his code, the shape is a plain white mesh and does not move, therefore we needed to customise this. Having the GUI allowed us to adjust the values of the 6 parameters, making it much easier to understand how the formula and the mesh work together to create beautiful symmetrical shapes:
- Making the movement and polishing the appearance:
We wanted to create something similar to the pulsating movement in Cindermedusae, which we found very natural and mesmerising. To do this, we added sine and cosine oscillations to each of the 6 parameters, which are then used for every vertex of the mesh. We then used booleans as buttons to turn each parameter's oscillation on and off, allowing us to see how oscillating each parameter would affect the movement of the shape as a whole:
(Note: n1value is not used because it defines number of angles (m))
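As a hedged illustration of that approach (all names here are assumptions, not our actual variables), the oscillated parameters are recomputed once per frame before the mesh is rebuilt:

    bool  oscillateM = true,  oscillateN2 = true;
    float baseM = 6, ampM = 2, speedM = 0.8;
    float baseN2 = 1, ampN2 = 0.5, speedN2 = 1.3;
    float m, n2;

    void updateParameters() {
        float t = ofGetElapsedTimef();
        if (oscillateM)  m  = baseM  + ampM  * sinf(t * speedM);    // gentle pulsing around the base value
        if (oscillateN2) n2 = baseN2 + ampN2 * cosf(t * speedN2);
        // ... and likewise for the remaining parameters, each with its own toggle and speed.
    }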
Once we were happy with the movement, the next step was to turn one creature into a family of creatures, each with its own unique appearance and behaviour.
First, for each creature to have a unique behaviour, we randomised the oscillations of the 6 parameters (amount and speed of oscillation), direction and speed of rotation, number of vertices (or resolution) and size.
Second, we added the shading using ofLight (sets point light, ambient, diffuse and specular colour) and ofMaterial (sets the brightness, saturation and shininess) to create a glossy and polished look. All of these, together with the mesh colour, are randomised with ofRandom(). Doing this has made the creatures look more realistic and captivating!
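A minimal sketch of the kind of randomised setup described above; the colour and shininess ranges are illustrative, not the values used in the game.

    ofLight light;
    ofMaterial material;

    light.setPointLight();
    light.setDiffuseColor(ofColor(ofRandom(255), ofRandom(255), ofRandom(255)));
    light.setSpecularColor(ofColor(ofRandom(255), ofRandom(255), ofRandom(255)));
    light.setAmbientColor(ofColor(ofRandom(60)));     // keep the ambient contribution dark

    material.setShininess(ofRandom(32, 128));         // how tight the highlights are
    material.setSpecularColor(ofColor(255, 255, 255));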
THE FISH-LIKE LIFE FORM:
Initially, we wanted to create an object that looks and moves like a realistic fish. In the first version, we created the fish using many ofConePrimitive objects added together, which we were not satisfied with:
We realised it would require a lot of work for beginners like us to create a realistic 3D object, let alone animate it. At this point, it was suggested that we look into Karl Sims' swimming behaviour in Evolving Virtual Creatures. This behaviour is achieved by 'turning off gravity and adding the viscous water resistance effect,' as explained by Karl in his paper. However, we did not wish to employ the same method, but rather used it as an inspiration for our life-form.
Since the life-form is controlled by the player, it needs to follow the mouse position. We therefore looked into the Processing example follow3: a simple 2D snake that does not move on its own but only rotates itself towards the mouse position.
The video below is our recreation of follow3 in 3D:
This, combined with Karl Sims' swimming behaviour, was exactly what we needed. Finally, we managed to create a 3D life-form which follows the mouse and moves with the Karl Sims-inspired behaviour.
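The core of the follow3 idea translated into 3D is small; below is a hedged sketch (the segment length and names are illustrative).

    #include <vector>

    // Drag a joint so it sits a fixed distance behind the point it follows.
    void dragSegment(ofVec3f& joint, const ofVec3f& target, float segmentLength) {
        ofVec3f dir = target - joint;
        dir.normalize();
        joint = target - dir * segmentLength;
    }

    // joints[0] chases the mouse target; every other joint chases the one in front of it.
    void updateBody(std::vector<ofVec3f>& joints, const ofVec3f& mouseTarget) {
        dragSegment(joints[0], mouseTarget, 12.0f);
        for (size_t i = 1; i < joints.size(); ++i) {
            dragSegment(joints[i], joints[i - 1], 12.0f);
        }
    }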
We wanted the food to have a glowing, bloom-like effect to make it stand out against the dark shader background, making it easier for the player to spot the food. We stumbled upon a beautiful glowing mesh made by the digital artist Reza Ali, which looks very close to what we wanted:
The final food object (below) is made up of many nodes, drawn in an orientation of a 3D sphere, where each node is texture mapped with a blurry white dot image to create the bloom effect:
THE ABSTRACT ENVIRONMENT:
This is created with little shiny particles floating freely around, on top of a dark shader background that smoothly changes colours as the player moves to create the illusion of movement:
- The background: The background simply takes the RGB values of the background colour and assigns increments or decrements to them. Once one of the values reaches the minimum or the maximum, it is given a new random increment or decrement accordingly (a hedged sketch of this drift follows the list below). The background was subsequently changed from using the entire RGB colour space to a much darker tone, to add contrast with the glowing particles.
- Particles: Adapted from openFrameworks’ billboardExample, the particles are little floating nodes that move randomly using noise. Together with the smooth 360 camera rotation, they helped to create a fluid and floaty environment.
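Both effects are small; below is a hedged sketch of the background colour drift and a noise-driven particle drift of the kind described above. Ranges, step sizes and names are illustrative, not the project's code.

    float r = 20, g = 20, b = 40;                 // current background colour (kept dark)
    float dr = 0.1, dg = -0.05, db = 0.08;        // per-frame increments

    void updateBackground() {
        r += dr;  g += dg;  b += db;
        if (r < 10) dr =  ofRandom(0.02, 0.2);    // hit the minimum: pick a new upward drift
        if (r > 60) dr = -ofRandom(0.02, 0.2);    // hit the maximum: pick a new downward drift
        if (g < 10) dg =  ofRandom(0.02, 0.2);
        if (g > 60) dg = -ofRandom(0.02, 0.2);
        if (b < 10) db =  ofRandom(0.02, 0.2);
        if (b > 60) db = -ofRandom(0.02, 0.2);
        ofBackground(r, g, b);
    }

    void driftParticle(ofVec3f& pos) {
        float t = ofGetElapsedTimef();
        pos.x += ofSignedNoise(pos.x * 0.01f, t)          * 0.5f;   // smooth wandering from Perlin noise
        pos.y += ofSignedNoise(pos.y * 0.01f, t + 100.0f) * 0.5f;
        pos.z += ofSignedNoise(pos.z * 0.01f, t + 200.0f) * 0.5f;
    }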
We loved the look and feel of the Superformula creatures and thought that they could be the focus of the game's visual aesthetics. Looking at them move, we felt there was an element of something rough and strong, set out by the restrained forms of the skeletal mesh, yet at the same time they are so free, expressive and magical in the way they look and move. Altogether, they hold a harmonious balance between the two elements, and this helped us decide on a futuristic and mystical visual style for the game.
MUSIC AND SOUND EFFECTS:
To accompany the chosen art style, we chose an ambient background track and ambient musical notes as the sound effects. These further enhanced the gameplay experience by adding a thrilling and adventurous atmosphere to the game.
Problems & How We Solved Them
- Adding movement and constraining sizes: We thought that creating movement would follow the same approach as adding colours (that is, looping through each vertex and adding oscillation). However, this was the wrong approach, as it changed the positions of the vertices and so distorted the mesh. To solve this, we added the oscillation straight to the 6 parameters instead of to each vertex, with booleans to turn each parameter on and off in turn to see how oscillating each parameter would affect the movement of the shape as a whole. This way we are actually driving the Superformula parameters, not distorting the pre-made shape.
Doing this also allowed us to see exactly which parameters make the creatures big (since the values are random); we noticed these were n3value and n4value, so we reduced the amplitude and vertical shift for these 2 parameters to prevent the creatures from getting too big.
- Speed vs resolution trade-off: Setting a high resolution makes the creatures look better but significantly slows down the game and causes lag, so we had to reduce it a bit to ensure a smooth gameplay experience.
- Randomising mesh colour: Unfortunately, we were never able to randomise the colour of the creatures themselves when they go off screen. So at the moment, the 3 creatures change their shapes, movements and lighting colours, but their mesh colour stays the same.
- Lighting limit: In the game, each of the 3 creatures has a specular light and a point light. We wanted to have around 5-7 creatures to make the game more exciting, however, this caused the light on the life-form to crash, and we also got this error:
[error] ofLight: setup(): couldn't get active GL light, maximum number of 8 reached
According to this page, OpenGL's fixed-function pipeline allows only 8 lights per scene. Also, removing the life-form's light significantly diminished its appearance and did not match the environment of the game. Therefore, to keep the player's light, we could only have 3 creatures.
with lighting (creates nice scales)
- Moving vertices and bloom effect: To start off with, we used the same approach that created the creatures: create a sphere mesh, loop through all the vertices and add oscillation to each vertex. However, we could not get all the vertices to move (due to how the sphere mesh was made), and we also struggled to find a way to create the bloom effect.
At this point we found the openFrameworks example called pointsAsTexture, which was exactly what we needed, so we decided to use this example and tweak it in our own way.
- Initially, before we had the glowing particles, we wanted to make the background more appealing, as well as give the player a more obvious sense of direction when moving. So we thought of making massive gradient circles, which could act as 'zones' the player could move into, each of a fixed colour, as opposed to the changing background. Our vision for this was strong, and we spent a long time working on it. Initially we used many ofCircles drawn from the inside to the outside with decreasing opacity. However, this was extremely laggy, due to the numerous sine and cosine calculations that need to happen every frame. This led us to learn about OpenGL: we created the circles using the triangle-circle method, which was much more efficient and allowed for even further possibilities in modifying the colour and look in interesting ways. However, this seemed to cause problems with the Superformula shapes, which no longer appeared 3D but turned 'flat'.
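For reference, a hedged sketch of a triangle-based gradient circle of the kind described above; the vertex count, radius and colours are illustrative, not what we used.

    // One fan of triangles from an opaque centre to a transparent rim gives a cheap radial gradient.
    ofMesh circle;
    circle.setMode(OF_PRIMITIVE_TRIANGLE_FAN);
    circle.addVertex(ofPoint(0, 0, 0));
    circle.addColor(ofFloatColor(1.0, 1.0, 1.0, 0.5));      // centre colour
    int steps = 64;
    float radius = 300;
    for (int i = 0; i <= steps; ++i) {
        float angle = TWO_PI * i / steps;
        circle.addVertex(ofPoint(radius * cos(angle), radius * sin(angle), 0));
        circle.addColor(ofFloatColor(1.0, 1.0, 1.0, 0.0));  // fully transparent rim
    }
    ofEnableAlphaBlending();
    circle.draw();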
This led us to decide to let go of those shaders, and we didn’t think they were absolutely necessary for the look of the game anyway.
We have managed to create a basic exploration game that works well without any bugs or lagging. Most of our goals were completed within schedule, with only the parallax background and random creatures’ mesh colours left out.
Our 2 focuses were also successfully implemented, especially the visual aesthetics, as we have worked hard to make it look as good as possible within our abilities.
On the other hand, we had to drop a lot of ideas, such as AI, genetic algorithms, voice input and the parallax background, as they were beyond the scope of our abilities. Also, we were unable to carry out user testing with kids, as we do not know any, so we only tested with adults. This would have been very interesting, as we are quite convinced of the game's possible appeal to this audience.
- flOw: http://www.jenovachen.com/flowingames/flowing.htm
- Spore: https://youtu.be/WoP5thatpr4
- MURMUR by Reza Ali: https://www.instagram.com/p/BAueZPpneiT/?taken-by=syedrezaali
- Evolving Virtual Creatures by Karl Sims: http://www.karlsims.com/evolved-virtual-creatures.html
- https://youtu.be/JBgG_VSP7f8 (swimming behaviour at 0:26)
- Cindermedusae by Marcin Ignac: http://marcinignac.com/projects/cindermedusae/
- Creepy Watson: https://www.youtube.com/watch?v=13YlEPwOfmk
- Supershapes by Paul Bourke: http://paulbourke.net/geometry/supershape/
openFrameworks documentation and tutorials:
- ofEasyCam: http://openframeworks.cc/documentation/3d/ofEasyCam/
- ofMesh: http://openframeworks.cc/ofBook/chapters/generativemesh.html#basicsworkingwithofmesh
- ofShader: http://openframeworks.cc/ofBook/chapters/shaders.html
(All free for commercial use)
- Superformula3D by Kamen Dimitrov: http://www.kamend.com/2013/12/superformula-3d/
- openFrameworks and Processing examples:
- follow3 (Processing > Examples > Interaction)
- ELIXIA font: https://uispace.net/1069-elixia-font
- Sound effects by Akira
Gitlab Repo & Extras
All Youtube videos above can be found in this playlist.
Also, take a look at our Work Diary for weekly documentation and progress 🙂
Growth of Dependency
‘Growth of Dependency’ by Sapphira Abayomi.
Growing up, I found my passion for art through painting and very palpable materials. As my art practice developed, I transitioned towards experimenting with different digital mediums. More recently, however, my work has been driven by the notion of integrating the untouchable and the tangible. 'Growth of Dependency' is a painting which uses a generative system inspired by organic movement to explore the relationship between the virtual and the tangible. The synthesis between both elements is what completes the piece. 'Growth of Dependency' is representative of a commensal symbiotic relationship, whereby the physical brushstrokes do not need the virtual element to exist. However, the same cannot be said for the virtual growth forms: those growths can only begin their life once they have a physical element that can act as their host. 'Growth of Dependency' is a collaboration between myself and the generative system, in which the system uses my outputs as its way to thrive.
From my personal experience, I have encountered a naïve understanding among my friends and family of what computational art is; they don't quite grasp it, but I also put that down to their lack of exposure to this art form. I often get asked what exactly it is that I am studying, and after announcing the long-winded title of my course the reply is often along the lines of "oh, so you do a lot of graphic design" or "do you create screen savers". Even after fully explaining what the course is, people still cannot fully place a finger on what form the art would take. However, that is exactly the beauty of computational art: it can manifest itself in any form you can imagine. What I want to achieve with this piece is to show that computational art does not have to be mechanical in nature; it can take any form the artist can dream of. For example, there are numerous computational art pieces that are performance-, sculpture- or installation-based, and the list goes on. What you can do with fine art you can achieve with computational art, and more. I want the audience to look at 'Growth of Dependency', be amazed by its beauty, and be intrigued that this work of art was made through a collaboration between a computer system and an artist. I want people to see how digital art can fit into the contemporary art world.
My intended outcome for 'Growth of Dependency' was for it to develop into a series of static physical paintings. The series was inspired by the notion of mixing the virtual and the tangible to produce visuals inspired by organic growths, which would then be captured in a physical form. I had intended for the generative system to act as my artistic brain, with the system deciding exactly what was to be painted, the composition, colour etc., and myself acting as the vessel by which the physical painting was produced, almost as if I were the computer's body. I started by running some basic preliminary test programs which set up this type of relationship between myself and the generative system. After testing out this kind of dynamic, I discovered how unsatisfying and mundane it was for me. I really felt like a robot, just completing what I was told to do. The picture below is the outcome of this test.
This type of relationship wasn't stimulating enough for me. I needed to find a way for the system and myself to have a more equal, collaborative role in the final outcome, and this is where the current idea developed from. The way I decided to achieve this new dynamic was to have the system create its growth patterns according to my brushstrokes on the canvas, i.e. when I paint thick brushstrokes on the canvas, the generative system starts its growth process originating from those brushstrokes.
Generative art is a frequently recurring topic within the digital art community, but what exactly does this term refer to? The two determining elements of generative art are, firstly, that the artwork must make use of some type of system, generally a machine or computational system, and secondly, that the system has a certain degree of autonomy. A definition that I find quite clear is Philip Galanter's, who explains that "Generative art refers to any art practice where the artist uses a system, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is set into motion with some degree of autonomy contributing to or resulting in a completed work of art".
A generative artist has control over some elements of the work, but they do not have complete creative control. The autonomous system holds a portion of the creative licence in how the outcome of the work takes form. This is the beauty of generative art. The algorithms that artists create use principles of cause and effect and numerous parameter spaces to determine the various aesthetic outcomes. I say 'various outcomes' because the outcome is always unique, due to the computer having an element of control; therefore a variety of different outcomes is created.
A generative artist can be understood as a musical conductor and the generative system is their orchestra. They have an influence and an input into the work but they don’t have control over the outcome of what is generated after their input has been made. The orchestra or generative system decides that outcome.
In my art practice, I want to look at using a generative system as a part of the process to create the final pieces, rather than the system itself being the final piece shown. Ideally I would like to take the various outcomes produced by the generative system and then based on them, create a tangible version: a painting or sculpture. Consequently the final product will be a material piece.
I have done some research on artists who have implemented this style of working with generative art within their practice. Marius Watz, for example, is a digital artist who works a lot with generative systems. Throughout his artistic practice he began to move away from using the screen as the default output, exploring ways to manifest his generative systems as tangible objects. His core interest is the intersection between organic and mechanical forms, which he represents by trying to create organic movement out of code, which is completely inorganic, often using geometric structures to achieve this. The relationship his art has with the computer is very inter-reliant: his art relies heavily on the unpredictability created by the computer, and he is very interested in finding the interactions between the algorithms he creates and the parameter spaces he can use to build a system with this unpredictability. To be able to have software that creates things humans could not fascinates him and drives his practice. Although computers play such an important role in his process, Watz grew tired of using them to represent the entirety of the pieces. He explains that with generative art the screen often becomes your material; the outcomes produced are displayed on screens of all different shapes and sizes. However, using screens to display work creates problems of its own, and the quality of the displayed work is reduced by an incredible margin. This exact problem is what sparked Watz's practice to manifest itself in tangible material; he shifted away from screens and tried to find ways to represent his work in a physical space.
Watz started exploring many different palpable ways to represent the intangible outputs of the software: 3D printing, light fixtures, laser drawings and even wall drawings. What Watz began to notice by materialising these productions was that the relationship between the art and the viewer changed as well. This relationship often became a lot more intense with some of the physical objects, which he especially noticed with some of his first 3D-printed pieces. As virtual objects there was nothing special about the 3D models, but when they were materialised there was a transformation: they acquired more depth and allowed a sort of interaction, with the viewer examining all the details and angles. In some cases I feel the virtual representation can alienate the audience; a material representation starts a very different dialogue between the work and the viewer, which Watz has discovered in his own practice.
A further example is the Generative Jigsaw, a collaboration between a company called Nervous System and the artist Jonathan McCabe. The final results were a series of unique jigsaw puzzles. Nervous System and McCabe shared an interest in the different patterns that occur in nature, and this translated seamlessly into the project, which was heavily influenced by complex growth patterns and fossilised cephalopod shells.
I think this project is particularly interesting because it uses two separate generative systems to produce the two different aspects of the project. One system was used to generate the puzzle piece pattern, the second system for the artwork that fits on top of the puzzles.
To produce the system for the puzzle pieces, Nervous System reverse-engineered crystal growth processes, which they then translated into code. By changing the parameter space, they created a myriad of pattern variations.
For the artwork, McCabe used a generative system that married three processes. The first process is a modified idea based on Alan Turing's 1952 paper "The Chemical Basis of Morphogenesis". Morphogenesis refers to the appearance of an organism's body plan, for example the patterns on a cheetah or zebra. The paper discussed the theory that the spontaneous pattern formations found on organisms are the result of the diffusion and reaction of various substances, which in turn either inhibit or activate one another; moreover, the rate at which the substances move through the tissues also varies. The second process McCabe used is a fluid flow, which combines coloured dots and strips together, forming sharp edges; there is also a dispersal process that yields patterns of movement and colour in the fluid. The last process essentially averages the values of colour and movement around a circle at each instant, and the slowly changing colour field is recorded at each instant. At this point the selection of the most visually appealing results begins. The final products of the two generative systems are then put together to create the one-of-a-kind Generative Jigsaw.
Much like Nervous System, I am very interested in implementing elements of nature in the work; as I previously stated, there will be an overall organic makeup to the series.
Design and Build Process
The above picture shows when I started testing a simple generative system made in Processing to try to develop my idea of the relationship between myself and the computer. By exploring this, I discovered that I did not want the computer to completely control the outcome of the painting; I wanted there to be a mixture of input from myself and the computer in the final painting.
I wanted to develop a way for the generative system to take what I had painted and generate its own painting based on that. I started testing the infrared (IR) camera on the Kinect to see whether I could find a way for the generative system to tell the difference between what I had painted and what it was projecting. I searched long and hard for an IR-reflective or IR-absorbent material to achieve this, but that was considerably hard to find; I struggled to find a paint or fabric which fit my criteria. Eventually, in my research I discovered that titanium dioxide is reflective in the IR spectrum, and that it is an ingredient in titanium white. I decided to test different kinds of paints which I already had (as seen in the above picture), and to my surprise black and white worked very well with the IR lens. I really liked the idea of the paint palette being only black and white, with colour introduced only by the projection. However, as the project developed I no longer had a use for the IR camera to detect which marks were my brushstrokes.
My next step was to get OpenCV up and running and to work with the background subtraction and contour finding that I would need to detect when there had been a change to the canvas (the above picture). Once I had this working, I started working on the polylines that I would later need to calculate where the generative growths would appear from.
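A minimal sketch of that step with ofxOpenCv follows; the threshold, the camera source and the blob-size limits are illustrative, not the values used in the project, and allocation of the images and capture of the initial background frame are omitted.

    ofxCvColorImage     colorImg;
    ofxCvGrayscaleImage grayImg, background, diff;
    ofxCvContourFinder  contourFinder;

    // in update(), once a new camera frame has arrived:
    colorImg.setFromPixels(camera.getPixels());
    grayImg = colorImg;                                 // convert to greyscale
    diff.absDiff(background, grayImg);                  // difference against the stored background
    diff.threshold(40);                                 // keep only strong changes (new brushstrokes)
    contourFinder.findContours(diff, 50, 640 * 480 / 3, 10, false);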
I now needed to work out how to calculate the normals of the polylines and rotate the growths according to the normal angle. I have often found it a lot easier to work in Processing and develop what I need in a Processing sketch first, which is what I did with the normals. However, this was a mistake on my part: with some of the structural differences in openFrameworks, porting the code became quite a nightmare. I think at this stage I was just confusing myself by trying to port the code; in the end all the elements I needed were there, and it was only a few syntax errors that were causing the last few issues.
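Once a contour is available as an ofPolyline, reading a normal and turning it into a rotation is short. A hedged sketch (which point along the stroke is sampled, and the blob index, are illustrative):

    ofPolyline stroke(contourFinder.blobs[0].pts);          // brushstroke contour as a polyline
    float idx   = stroke.getIndexAtPercent(0.5f);           // sample halfway along the stroke
    ofVec3f n   = stroke.getNormalAtIndexInterpolated(idx);
    float angle = ofRadToDeg(atan2(n.y, n.x));              // angle to rotate the growth to before drawing it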
From the beginning I knew that I wanted the growths to have an organic movement to them. I wanted them to look like they were really growing organically. To do this I started playing around with noise to try and achieve that type of movement.
The above screenshot was taken when testing the growths' starting points from the circle trail created by the mouse movement. These trails are a placeholder for the future paintbrush strokes.
A test of the aesthetic without the mouse trail.
I now started to combine the growth aesthetic with the OpenCV code, and ran into some issues here. For the growths to be produced, the background could not refresh itself; however, for the OpenCV to detect when a new brushstroke had been created, it needed to constantly update the background, causing the growths to turn into floating dots. The solution was to organise the code into modes, so that the different elements, such as background subtraction, polylines and growths, would happen at the appropriate time and not all at once.
Making the modes made a great difference to the appearance of the program: I could finally draw different brushstrokes on a whiteboard and have the growths generate according to them. Now that all the elements were slowly coming together, I wanted to sort out the actual colour aesthetic of the growths. A cohesive colour palette is really important to me when I am actually painting, and there was no reason it should be any different for this type of painting. However, picking colours that merge well together is very different on a computer than on a mixing plate; the colour spaces are perceptually different. The solution was to take an image which had a cohesive colour palette and have the generative system base its own colours on that image. I also liked this because it allowed me to change the colour palette very easily for different paintings if this ever developed into a series.
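A hedged sketch of that palette lookup; the file name is a placeholder, not a file from the project.

    ofImage palette;
    palette.load("palette.jpg");                               // any image with a colour scheme I like

    // Each new growth takes its colour from a random pixel of the palette image.
    ofColor pick = palette.getColor((int)ofRandom(palette.getWidth()),
                                    (int)ofRandom(palette.getHeight()));
    ofSetColor(pick);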
My next step was to try to calibrate the camera and the projector so that what was painted matched up perfectly with what was going to be projected. I started by testing it on a big whiteboard, which was the easiest option for testing without making any permanent marks on my canvas. This stage of the process was something that I greatly overlooked; as always, you learn from experience, and with no previous experience of a project like this I did not realise how much time I needed for this calibration stage. At this stage, I admit to being very naive in my expectations of how the projection and painting would combine as one piece. The research I was doing suggested that I could use projection mapping to calibrate everything. However, I also had another problem: the Kinect itself didn't seem to be calibrated properly, and it was causing the polylines to be displaced by quite a bit. This problem was rapidly eating into my exhibition set-up time, and the painting process was held back because of it. I had to think of a plan B, a plan that was sure to result in a finished piece, because time was running out and at this rate I wouldn't have a piece in the exhibition. My plan B was to create a diptych. I got an identical canvas to the one I already had; on one canvas would be the physical painting, and on the second canvas would be the output produced by the generative system based on the painting. Plan B solved many of the issues I was having, because it meant I could stay clear of running any OpenCV during the exhibition, and everything could be completely finished and prerecorded beforehand, with very little worry about the piece during the exhibition.
Meanwhile, I had been sketching different compositions for the physical painting. I knew that I wanted it to be very shape- and line-based; however, this is a very different style of painting from what I'm used to producing, so I wasn't sure how successful the style would be. I was drawing inspiration from both Kandinsky and Paul Klee and their use of lines and shapes.
This is how the painting turned out in the end. As I painted it, I recorded the computer's output so that I could create the projection for the second canvas, which you can see in the photo below. I had a couple of issues during the painting process: the generative system could only pick up objects of a certain size, so during the recording process I had to change some of the things I had already painted. Eventually I got the footage that I wanted, which I then turned into a looping video. (You can see this video in the executables section below.) My reasoning for this was that it was imperative for me that the audience get a glimpse of the process the computer output goes through. Once the growth lines appear, they start to obscure the identifiable relationship between the original painting and the generative process, and the beginning of the video allows the audience to really grasp the relationship between the two canvases. Another reason is that I really wanted to encapsulate the growth process in the projection; I didn't want to project a static image. The organic movement was one of the key elements I wanted to capture in this project, and even with plan B I still had to stay true to what I really wanted to achieve.
In the above picture you can see how the diptych was set up in the gallery space. In my original planning I wanted to box off my section of the gallery with black material. I believed that this would add to the piece because it would isolate the viewer and really draw their focus into what was happening in the diptych, minus any other distractions happening in the room. It would also further emulate the boxed appearance of the canvases. However, due to a lack of material I was unable to achieve this presentation aspect. This, however, was not detrimental to the piece.
A video of the computer's output, which was projected onto the second canvas.
A video of how the diptych was set up in the gallery.
Overall, I believe that the project was very successful. There were some bumps in the development, and there were times when the problems I ran into clouded my vision for the project. Despite all of that, I still achieved the essential elements of what I wanted to produce – organic growth and a painting which explored the relationship between the physical and the virtual. Even though I had to change my original plan of creating one painting with the projection occurring on top of it, I believe that my plan B actually worked better for this exhibition, because it allowed the audience to really see the difference between my input and the computer's output, and the growth process of the generative system. I would still like to develop the project further and test it with the original one-canvas plan, but for now I am more than happy with the overall outcome of the piece.
Artist Project / Tree Drawings. (2007). Cabinet, [online] (Issue 28: Bones, Winter 2007/08). Available at: http://cabinetmagazine.org/issues/28/knowles.php [Accessed 20 Dec. 2015].
Ball, P. (2012). Turning Patterns. [online] Chemistry World. Available at: http://www.rsc.org/chemistryworld/2012/05/turing-patterns [Accessed 3 Oct. 2015].
Bit forms Gallery, (2011). Recorded Delivery. [online] Available at: http://www.bitforms.com/exhibitions/tim-knowles-recorded-delivery/press-release [Accessed 21 Dec. 2015].
Bohnacker, H., Gross, B., Laub, J., Lazzeroni, C. and Frohling, M. (n.d.). Generative Design.
Budds, D. (2015). This Code-Generated Architecture Can Only Exist On Paper. [Blog] Co.Design. Available at: http://www.fastcodesign.com/3053214/this-code-generated-architecture-can-only-exist-on-paper [Accessed 23 Dec. 2015].
Doan, A. (2008). TREE DRAWINGS! Tim Knowles Arbor Interpretations. [Blog] Inhabitat. Available at: http://inhabitat.com/tree-drawings-arbor-interpretations-by-tim-knowles/ [Accessed 21 Dec. 2015].
Docs.opencv.org. (2016). Basic Thresholding Operations — OpenCV 2.4 documentation. [online] Available at: http://docs.opencv.org/2.4/doc/tutorials/imgproc/threshold/threshold.html [Accessed 29 Feb 2016].
Docs.opencv.org. (2016). OpenCV: Background Subtraction. [online] Available at: http://docs.opencv.org/master/db/d5c/tutorial_py_bg_subtraction.html#gsc.tab=0 [Accessed 29 Feb 2016].
Docs.opencv.org. (2016). OpenCV: How to Use Background Subtraction Methods. [online] Available at: http://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html#gsc.tab=0 [Accessed 29 Feb 2016].
Docs.opencv.org. (2016). OpenCV: Image Thresholding. [online] Available at: http://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html#gsc.tab=0 [Accessed 29 Feb 2016].
Dorin, A., McCabe, J., McCormack, J., Monro, G. and Whitelaw, M. (2012). A framework for understanding generative art. Digital Creativity, 23(3-4), pp.239-259.
DW, (2012). How old is Generative art?. [Blog] Graphic Dimensions. Available at: https://graphicdimensions.wordpress.com/2012/12/18/how-old-is-generative-art/ [Accessed 23 Dec. 2015].
Filippetti, J. (2012). generative jigsaw puzzles by nervous system. [online] designboom | architecture & design magazine. Available at: http://www.designboom.com/design/generative-jigsaw-puzzles-by-nervous-system/ [Accessed 13 Oct. 2015].
Galanter, P. (2004). Generative art is as old as art.
Galanter, P. (2012). Generative art after computers.. 1st ed. [ebook] XV Generative art conference. Available at: http://www.generativeart.com/GA2012/phil.pdf [Accessed 22 Dec. 2015].
Galanter, P. (n.d.). What is Generative Art? Complexity Theory as a Context for Art Theory. 1st ed. [ebook] Available at: http://www.philipgalanter.com/downloads/ga2003_paper.pdf [Accessed 13 Nov. 2015].
Galleries around the world are finally “starting to take digital art seriously”. (2015). [Blog] Dezeen. Available at: http://www.dezeen.com/2015/02/17/movie-galleries-around-world-starting-to-take-digital-art-seriously/ [Accessed 20 Dec. 2015].
Gupta, K. (2013). Color Tracking in OpenFramework. [online] KUNAL GUPTA. Available at: https://dirtydebiandevil.wordpress.com/2013/01/21/color-tracking-in-openframework/ [Accessed 4 Mar 2016].
Kaganskiy, J. (2010). Losing Control: Generative Art and the Art of Letting Go. [Blog] The Creators Project. Available at: http://thecreatorsproject.vice.com/blog/losing-control-generative-art-and-the-art-of-letting-go [Accessed 19 Dec. 2015].
Knowles, T. (n.d.). Tree Drawings. [Generative art, Pen on canvas].
Kurosu, M. (2015). Human-Computer Interaction: Interaction Technologies. Cham: Springer International Publishing.
Levin, P. (2015). Interactive Art & Computational Design.
Lieberman, Z. (2004). ofBook – Foreword. [online] Openframeworks.cc. Available at: http://openframeworks.cc/ofBook/chapters/foreword.html [Accessed 16 Feb. 2016].
McCormack, J. (2013). Representation and Mimesis in Generative Art: Creating Fifty Sisters. 1st ed. [ebook] Italy: xCoAx. Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.348.5670&rep=rep1&type=pdf [Accessed 18 Dec. 2015].
Mallick, S. (2015). OpenCV Threshold ( Python , C++ ) | Learn OpenCV. [online] Learnopencv.com. Available at: http://www.learnopencv.com/opencv-threshold-python-cpp/ [Accessed 4 Mar 2016].
Mccarthy, C. (2015). THE CODE-GENERATED ARCHITECTURAL DRAWINGS OF MIGUEL NÓBREGA. [Blog] Kill Screen. Available at: https://killscreen.com/articles/the-code-generated-architectural-drawings-of-miguel-nobrega/ [Accessed 23 Dec. 2015].
McCormack, J., Bown, O., Dorin, A., McCabe, J., Monro, G. and Whitelaw, M. (2014). Ten Questions Concerning Generative Computer Art. Leonardo, 47(2), pp.135-141.
McDonald, K. (2012). kylemcdonald/ofxCv. [online] GitHub. Available at: https://github.com/kylemcdonald/ofxCv/wiki/Intermediate-Computer-Vision-with-openFrameworks [Accessed 20 Feb 2016].
Mufson, B. (2015). MC Escher Meets Mondrian in Code-Generated Marker Drawings. [Blog] The creators project. Available at: http://thecreatorsproject.vice.com/blog/mc-escher-meets-mondrian-in-code-generated-marker-drawings [Accessed 23 Dec. 2015].
Nóbrega, M. (2015). Possible structures, Potential structures, Plausible spaces.. [Generative art, computer and Plotter machine] Superficie. Online gallery.
Nokami.com, (2016). Generative Art. [online] Available at: http://www.nokami.com/generative-art/ [Accessed 19 Dec. 2015].
Pearson, M. (2011). Generative art. Shelter Island, NY: Manning.
Reis, R. (2001). Careers in art and graphic design. Hauppauge, NY: Barron’s Educational Series.
Research.philips.com, (n.d.). Philips Research – Technologies – Generative Art: protoquadro. [online] Available at: http://www.research.philips.com/technologies/protoquadro/ [Accessed 28 Dec. 2015].
Sahin, O. (2012). Generative & Algorithmic Art LEA Call for Papers. Leonardo Electronic Almanac. [online] Available at: http://www.leoalmanac.org/generative-algorithmic-art-lea-call-for-papers/ [Accessed 21 Dec. 2015].
Scholz, A. (2012). Generative Jigsaw Puzzles – Nervous System toys with a classic enigma. [online] CreativeApplications.Net. Available at: http://www.creativeapplications.net/processing/generative-jigsaw-puzzles-nervous-system/ [Accessed 13 Oct. 2015].
Solaas, L., Watz, M. and Whitelaw, M. (2016). GENERATIVE PRACTICE. THE STATE OF THE ART.
Visnjic, F. (2015). Possible, Plausible, Potential – Drawings of architecture generated by code. [online] CreativeApplications.Net. Available at: http://www.creativeapplications.net/processing/possible-plausible-potential-drawings-of-architecture-generated-by-code/ [Accessed 23 Dec. 2015].
Wang, H. (2014). Painting with Code. How generative art reveals new possibilities in visual expression and design. [Blog] IDEO labs. Available at: https://labs.ideo.com/2014/06/04/painting-with-code/ [Accessed 19 Dec. 2015].
Wassilykandinsky.net. (2016). Wassily Kandinsky – “Composition VIII”, 1923. [online] Available at: http://www.wassilykandinsky.net/work-50.php [Accessed 20 Mar. 2016].
Watz, M. (2010). Closed systems: Generative art and Software Abstraction. 1st ed. [ebook] Available at: http://mariuswatz.com/wp-content/uploads/2012/03/201005-Marius-Watz-Closed-Systems.pdf [Accessed 17 Nov. 2015].
Watz, M. (n.d.). MARIUS WATZ: CODE, ART & AESTHETIC.
www.wikiart.org. (2016). Paul Klee, Late Works.. [online] Available at: http://www.wikiart.org/en/paul-klee [Accessed 20 Mar. 2016].
Lux – Dat & Kevin
We wanted to build a creative tool which encouraged users to explore visual aspects of light painting by adding their own creative flair to a blank digital environment.
We decided to follow through with an idea that surfaced as a result of our creative research, albeit with some slight modifications. Our final project idea was to create a digital environment which acted as a canvas for the user to explore aspects of lighting artwork.
The user can navigate through the 3D world with the following controls:
- WASD keys: general movement in the world
- Q/E keys: move up and down the Y axis
- F key: move quicker when used with the WASD and Q/E keys
- LMB: draw
- TAB key: switch between drawing mode and edit mode when you want to interact with the GUI
- SPACEBAR: move the selected light sphere to the player's position
Lux was created in openFrameworks and written in C++.
For the research part of the project, the brief outlined that we should concentrate on ‘aspects relating to design and aesthetics’ and ‘avoid looking…at technical aspects [like] libraries [and] frameworks’. Owing to this, we decided to focus entirely on lighting as an outlet for art, paying particular attention to light painting and light graffiti due to the visual properties they possess.
After in-depth research into the history of lighting artwork and its process, we highlighted two aspects which we felt could be improved on. The first was how dated the technique was. If we wanted to create a light painting piece right now, we would still be using the same procedure used in 1886 by the founding fathers of light painting, Georges Demeny and Étienne-Jules Marey. The process is still a ‘photographic technique in which exposures are made by moving a…light source while taking a long exposure photograph’. The second problem we found was the possible restrictions light painting holds for someone who wants to explore the art form for the first time. We personally felt that this restriction lay in the cost of the equipment that needed to be used. This was backed up by articles we read online which expressed the importance of using a DSLR camera owing to the full control you have over the shutter speed (the duration of the exposure), the aperture (the opening of the lens that light passes through), and the ISO (which controls the sensor's sensitivity to light). We felt that this was a costly investment for anyone whose motive was solely to experiment with the capabilities of light painting.
These problems and our interest in the aesthetics of lighting artwork spurred us into wanting to create a tool which allowed people to explore visual aspects of light painting. With this in mind, we decided that our application should offer a strong foundation for people who wanted to explore light painting as a creative outlet but might lack the knowledge or equipment to do so – this was our desired target audience. In accepting this as our target demographic, we understood that difficulties could arise, as there was no specific age group or gender we wanted to focus on. This meant that when it came to designing the application, there had to be a middle ground: a fourteen-year-old male would have to hold the same interest as a twenty-four-year-old female when using the program.
We were adamant about producing this project in openFrameworks without any additional physical components. By creating it purely through openFrameworks and C++, we felt that we could share it with anyone who had a computer and was interested in exploring the visual properties of lighting artwork in a digital world.
When it came down to our initial prototyping, we were very keen on the theme of light versus darkness. We felt that this would suit the nature of light photography, as the bright colours used always strike through, creating a huge contrast with the dark environment they sit in.
Continuing with this thought process, we began thinking of potential objects we could add into our environment to really hone in on this theme. The first image which popped into our heads was a tree since it connotes the idea of growth and therefore positivity and light. However, a tree also has negative connotations when it is bare, as it alludes to the idea of death and darkness.
We looked into fractals and used Daniel Shiffman’s tutorial to get the foundation of the tree object and then developed it to fit our project needs.
However, after some testing we ultimately had to scrap this idea, as the recursion technique used to draw the trees meant that our application became extremely slow when adding more than one fractal tree – the recursive calculation was run inside the draw loop, so the whole tree was recomputed every single frame.
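As a rough illustration of the pattern we had – a Shiffman-style recursive branch called from draw() – the sketch below shows the idea; it assumes a recent openFrameworks and uses placeholder lengths and angles rather than our exact parameters:

```cpp
// A minimal, hypothetical sketch of a recursive fractal tree,
// assuming openFrameworks; values are placeholders, not our exact ones.
#include "ofMain.h"

void branch(float len) {
    ofDrawLine(0, 0, 0, -len);       // draw the current segment
    ofTranslate(0, -len);            // move to its tip
    if (len > 4) {                   // stop once the branches get tiny
        ofPushMatrix();
        ofRotateDeg(25);
        branch(len * 0.67f);         // right child
        ofPopMatrix();
        ofPushMatrix();
        ofRotateDeg(-25);
        branch(len * 0.67f);         // left child
        ofPopMatrix();
    }
}

// Calling branch() for every tree inside ofApp::draw() (wrapped in
// ofPushMatrix()/ofPopMatrix()) re-evaluates the entire recursion each
// frame, which is what slowed the application down; baking the segments
// into an ofMesh or ofPolyline once would avoid the repeated work.
```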
Nevertheless, we still intended to keep with this idea of light and darkness but to channel it in a different aspect. We opted instead to have multiple light sources which would light up the dark environment and the objects surrounding it.
This was also the stage where we decided to allow the user to move freely wherever they desired in the 3D world as opposed to a flat terrain with boundaries. We believed that by doing this, we would meet our personal criteria of creating a digital environment which had no restrictions for the user when drawing.
Moving on to the light drawing prototype, we had an idea of what we wanted the output to look like but we did not physically create any sketches. The reasoning behind this was because we found it difficult to draw the line strokes the way we wanted and there were already a lot of resources online that we could base our final design on – Yellowtail by Golan Levin and inkSpace by Zach Lieberman being some of these. Both of these applications had implemented a very natural brush stroke which enhanced the sense of control that the user had when drawing.
We split the build into two sections and aimed to complete the tasks in this order:
- The build of the environment and mouse picking
- The functionality of the drawing method
The 3D Environment and mouse picking
This part of the build was the most time-consuming. We did expect this, however, as neither of us had ever created a 3D environment before. Initially, we based our environment on an ofxBullet example, as we had already looked into this addon during our technical research. The benefit of using this addon was that all the collisions and mouse picking could have been easily implemented. However, the limitation was that you were restricted to using only ofxBullet objects; unfortunately, these were very simple shapes and would not have given us the flexibility we required for our light drawings. We spent some time implementing the ofxBullet world before realising our mistake, but we did take something useful away from it. We discovered that ofxBullet adds a texture to its shapes, which means that the usual implementation of lighting would not generate the results you would expect. From learning this, we were able to solve our own issues with lighting later on in the project.
After our blunder with the ofxBullet world, we began to look at building an environment from scratch again. The first week of the environment build was tricky, as we were unsure of the correct terminology to search for to find the information that would help us. Fortunately, we stumbled upon an article by Marco Alamia which explained in depth the different spaces required to map a 3D world onto a 2D screen. After many more articles and tutorials, we began to understand the logistics of creating a 3D environment, but we were still unable to successfully translate the theory into code. At this stage, we were becoming increasingly worried that we would fall behind our personal schedule.
Fortunately, we were able to progress after some help from Simon, who constructed a 3D environment while showing us mouse picking. This was extremely helpful, as we had a copy of code which we could use as a reference to understand the online tutorials we had previously read. For instance, we were aware that we had to create a frustum bounded by the near and far planes, but we did not understand how to put this into code. After looking at Simon's code, we realised that this was a simple fix, as ofCamera deals with all of this behind the scenes. There were many other instances where we overcomplicated the code when there was no need to. We felt that this was a recurring problem we faced throughout the project which affected our progression.
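For example, the frustum we had been trying to build by hand is effectively described by a couple of ofCamera calls. The following is a minimal sketch of that, assuming openFrameworks defaults; the clip distances and positions are illustrative rather than the values we actually used:

```cpp
// A minimal sketch of letting ofCamera manage the viewing frustum,
// assuming openFrameworks; the numbers here are illustrative only.
#include "ofMain.h"

ofCamera cam;

void setupCamera() {
    cam.setNearClip(0.1f);      // near plane of the frustum
    cam.setFarClip(5000.0f);    // far plane of the frustum
    cam.setFov(60.0f);          // vertical field of view in degrees
    cam.setPosition(0, 100, 600);
    cam.lookAt(glm::vec3(0, 0, 0));
}

void drawScene() {
    cam.begin();                // everything between begin()/end() is
    // ... draw the 3D world    // projected through the camera's frustum
    cam.end();
}
```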
After successfully building the 3D environment, we moved on to populating our world with shapes like spheres and boxes. We did this because we felt the world was not as encouraging as we would have liked for the user to start drawing. This was also the stage where we attempted to implement the mouse-picking code which Simon gave us. Instead of just copying Simon's code without understanding the ray, we attempted our own implementation of mouse picking and were able to successfully re-create the ray from scratch without using the built-in ofCamera and ofNode functions like screenToWorld(). This was the approach we took:
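A simplified sketch of that ray construction is shown below – it illustrates the idea rather than reproducing the exact listing, and assumes glm-style matrices, which ofCamera exposes:

```cpp
// A simplified, illustrative reconstruction of a mouse-picking ray,
// assuming glm-style matrices from ofCamera; not the exact original listing.
#include "ofMain.h"

// Returns a normalised ray direction in world space for a mouse position;
// the ray origin is the camera's world position.
glm::vec3 mouseRayDirection(float mouseX, float mouseY, const ofCamera& cam) {
    // 1. Window pixels -> normalised device coordinates (-1..1, y flipped).
    float ndcX = (2.0f * mouseX) / ofGetWidth() - 1.0f;
    float ndcY = 1.0f - (2.0f * mouseY) / ofGetHeight();

    // 2. NDC -> clip space, with the ray pointing into the screen (-z).
    glm::vec4 rayClip(ndcX, ndcY, -1.0f, 1.0f);

    // 3. Clip -> eye space via the inverse projection matrix.
    glm::vec4 rayEye = glm::inverse(cam.getProjectionMatrix()) * rayClip;
    rayEye = glm::vec4(rayEye.x, rayEye.y, -1.0f, 0.0f);   // a direction, not a point

    // 4. Eye -> world space via the inverse view (model-view) matrix.
    glm::vec3 rayWorld = glm::vec3(glm::inverse(cam.getModelViewMatrix()) * rayEye);
    return glm::normalize(rayWorld);
}
```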
Unfortunately, we were unsure of how to use the ray for mouse picking. We felt that the problem lay in the fact that we were comparing our world coordinates with our normalised ray values, which were incompatible with each other. We spent a lot of time trying to work out what we were doing wrong, but in the end we had to move on from mouse picking and go with a simpler interaction.
The drawing methods and objects in our world
After the completion of the environment, we started to tackle the drawing methods in our application. We found a drawing tutorial on the openFrameworks page to base our work on and followed its steps to create interesting graphics which could emulate the lighting visuals we wanted.
The tutorials we followed used ofPolyline objects to draw points onto the screen; we fed the world coordinates into the ofPolyline, which was how we were able to draw accurately in our 3D world.
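A minimal sketch of that idea, assuming openFrameworks and a hypothetical brushWorldPos supplied by whatever the brush position happens to be, looks like this:

```cpp
// A minimal sketch of building a light stroke with ofPolyline,
// assuming openFrameworks; brushWorldPos is wherever the brush currently is.
#include "ofMain.h"

ofPolyline stroke;

void addStrokePoint(const glm::vec3& brushWorldPos) {
    stroke.addVertex(brushWorldPos);   // store the point in world coordinates
}

void drawStroke(ofCamera& cam) {
    cam.begin();
    ofSetColor(255, 240, 180);         // a warm, light-like colour
    stroke.draw();                     // drawn inside the camera, so the
    cam.end();                         // world coordinates line up correctly
}
```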
We realised that this on its own was quite bland, so we implemented a GUI using ofxGui so the user could select different brushes to draw with. We also added lighting which the user could change to make the environment more vibrant and exciting.
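A minimal sketch of that kind of panel, assuming ofxGui and using illustrative parameter names rather than the exact ones in the application:

```cpp
// A minimal sketch of an ofxGui panel for brush and lighting controls,
// assuming openFrameworks; parameter names are illustrative.
#include "ofMain.h"
#include "ofxGui.h"

ofxPanel gui;
ofParameter<int> brushType;
ofParameter<ofColor> lightColour;

void setupGui() {
    gui.setup("Lux controls");
    gui.add(brushType.set("brush type", 0, 0, 2));            // e.g. line / cluster / bubble
    gui.add(lightColour.set("light colour", ofColor::white)); // colour of the scene lights
}
```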
Our original goal was to create a tool which allowed users to explore visual aspects of light painting. Keeping this in mind, we feel that we were able to meet this brief to a certain degree. For instance, our light cluster stroke certainly emulates a more modern take on light painting whereas our standard line stroke takes our users back to the beginning where a more primitive approach was used to reflect the first light painting piece by Demeny and Marey. However, we were unable to truly represent light painting in the way we wanted to. From the very beginning of the project, we were motivated to create a digital environment because we felt that it could offer a different perspective to light painting that the normal creation process would not be able to provide. The bubble stroke was one way of attempting this as we wanted to add a different and interesting perspective to the creative process.
One positive we can take from this is that our knowledge in working in a 3D environment has improved greatly. We now understand the process in creating a 3D environment from scratch, the differences between world coordinates and screen coordinates and the importance of understanding the distinction between the two, and also the reasoning for transformation matrices. Moreover, even though our attempt at mouse picking was unsuccessful, we were able to code the ray from scratch which highlights our understanding and development in this field.
Overall, we believe that the project as a light painting creative tool was partly successful. It does represent the visual aspect of light painting through the light cluster stroke and we have been able to produce a digital environment where the user can roam freely which were our original goals. However, we understand that we could not represent the full capabilities that light painting has to offer. We could have given a better representation of different aspects which could have then provided greater control for the user to create new and unique visuals. What made our progress stagnant was the amount of time we spent understanding and implementing perspective projection and mouse picking. These were challenging concepts to become familiar with but we feel that now, if a similar task was given to us in the future, we would be able to overcome the difficulties we faced this time around as we have a greater understanding of its implementation. This would then allow us to spend more time focusing on the visuals of our light drawing.
Images used from external sources:
Articles/Tutorials used for research: