To me, To you

By Tom Pinsent

Project Description

‘To me, To you’ is a collaboration between me and my computer. Inspired by a bombardment of screens, wires, beeps and boops, alongside a lack of traditional, more physical forms of work within the digital art world, I have kept the electronic part of my piece solely within the production. This leaves the final product independent of digital technology, not reliant upon electricity to be experienced, although heavily influenced by it.

The paintings are created through a step-by-step process. It starts with a simple shape or form painted onto a canvas. My program detects this through a webcam and compares the shape to a pre-trained set of shapes; the most similar is projected back onto the canvas. I add these shapes to the painting, and the process repeats until the picture is done.

Audience and Intentions

Through my piece I wanted to convey an interplay between contemporary artistic practice and the traditional, through the counterposition of modern technology with the conventional, seemingly dated, medium of paint. Furthermore, I hoped to suggest a dialogue in the collaboration between an artificial, digital artist, consisting of both hardware and software, and myself. I hoped to show viewers that contemporary art with a heavy use of technology does not have to be all “beeps” and “boops”, and can in fact exist within the real world (as opposed to the ephemeral realm of cyberspace), in a form that is well known and recognised throughout art history. My work might attract those with professional or personal interests in art or computer science, and I hope it also provides insight to members of the general public who may not have been exposed to these kinds of ideas before.

Background Research

When starting this project, I looked at a range of artists and works that deal with both contemporary and traditional methods of creative practice, where neither aspect holds greater importance than the other. New approaches are combined seamlessly with the conventional, bringing both styles into a new context and producing art that feels fresh yet familiar.

Artists whose work I looked at include Yannick Jacquet and Joanie Lemercier, among others.

For more info, visit: http://igor.gold.ac.uk/~skata001/hiveMind/2015/10/30/relationship-between-contemporary-and-traditional-creative-practices/

I also conducted further technical research, looking at computer vision, machine learning and neural network techniques.

Resources used:

Udacity Deep Learning course (mainly convolutional neural network section)

https://www.udacity.com/course/deep-learning--ud730

Self Organising Map tutorial

http://www.ai-junkie.com/ann/som/som1.html

I initially thought this would be useful, since a Kohonen Self Organising Map is an unsupervised machine learning algorithm widely used for grouping together images (and many other types of data) by similarity.
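For a feel of how a SOM groups data, here is a minimal sketch of a single training step (illustrative only, not code from the project): the best-matching node, and more weakly its grid neighbours, are pulled towards each input sample.

// Minimal sketch of one Kohonen SOM training step (illustrative only).
// Each node holds a weight vector of the same dimension as the inputs.
#include <cmath>
#include <cstddef>
#include <vector>

struct Node {
    std::vector<float> w; // weight vector
    float x, y;           // position in the 2D map grid
};

// squared Euclidean distance between two vectors
float dist2(const std::vector<float>& a, const std::vector<float>& b) {
    float d = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// One training step: find the best-matching unit, then pull it and its
// grid neighbours towards the sample, weighted by a Gaussian falloff.
void trainStep(std::vector<Node>& nodes, const std::vector<float>& sample,
               float learningRate, float radius) {
    const Node* bmu = &nodes[0];
    for (const Node& n : nodes)
        if (dist2(n.w, sample) < dist2(bmu->w, sample)) bmu = &n;
    for (Node& n : nodes) {
        float g = (n.x - bmu->x) * (n.x - bmu->x) +
                  (n.y - bmu->y) * (n.y - bmu->y);
        float influence = std::exp(-g / (2 * radius * radius));
        for (std::size_t i = 0; i < n.w.size(); ++i)
            n.w[i] += learningRate * influence * (sample[i] - n.w[i]);
    }
}

Repeated over many samples, with the learning rate and radius shrinking over time, similar inputs end up mapped to nearby nodes, which is what makes the technique attractive for grouping images.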

Design Process and Commentary

I began with an initial list of things my program needed to do:

1. Scan canvas and detect shapes

2. Analyse detected shapes and composition

3. Create new shapes and forms based on how well detected forms fit a desired aesthetic

4. Display new forms onto canvas

I then began exploring methods and techniques used in computer vision, machine learning and neural networks, since these areas are used within similar applications, such as image classification and face detection.

1.

When scanning and detecting shapes, I wanted my setup to work as in the sketch below, which would have been the simplest and easiest arrangement during the actual use of the program (the painting of the canvases), and would have made it suitable for live scenarios:

[setup sketch]

However, I found this was impractical for a few reasons. The camera should not detect anything apart from the desired shapes on the canvas, and even with cropping, the distance combined with the noise from a low-quality webcam made for very poor shape detection. At this stage I did not know how computationally intensive the detection and generation of shapes was going to be, so I thought committing to a setup and build geared towards quick and easy use might have led to problems later on, towards the end of completion, and a live scenario might not have been viable. There was also the issue of matching perspective between camera and projector, for accurate representation of shapes from both my point of view and my program’s.

Instead, I ended up holding the webcam up to the canvas and pointing it at the desired shape. This worked well: with the webcam closer up, there is greater detail in the image and hence more accurate contour detection. I also found that a ‘1-to-1’ representation between forms on the canvas and displayed shapes was not necessary at all, which I will come to later.

In terms of the shape detection itself, I started with ofxOpenCv. I used a variety of test images (as the webcam was unavailable at this stage) and the results were OK, but contained a lot of noise: rough, jagged edges were picked up all over the detected contours. I later made use of the smoothing and resampling functions within the ofPolyline class in openFrameworks, which brought some improvement. In the end I switched to the ofxCv add-on, a package with greater functionality than ofxOpenCv that stays closer to the underlying OpenCV library. Contours detected with this library were much better.
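In outline, the detection step looks something like the sketch below: a minimal openFrameworks app using ofxCv’s ContourFinder, then smoothing and resampling the result with ofPolyline. The threshold and area values are placeholders, not the exact numbers used for the paintings.

// Sketch of the contour detection step using ofxCv.
// Parameter values here are placeholders.
#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxCv::ContourFinder contourFinder;
    ofPolyline shape;

    void setup() {
        cam.setup(640, 480);
        contourFinder.setThreshold(127);    // binarisation threshold
        contourFinder.setMinAreaRadius(20); // ignore small noise blobs
        contourFinder.setMaxAreaRadius(300);
    }

    void update() {
        cam.update();
        if (cam.isFrameNew()) {
            contourFinder.findContours(cam);
            if (contourFinder.size() > 0) {
                // smooth the first detected contour, then resample it so
                // every shape has the same number of vertices
                shape = contourFinder.getPolyline(0)
                            .getSmoothed(5)
                            .getResampledByCount(100);
            }
        }
    }

    void draw() {
        cam.draw(0, 0);
        shape.draw();
    }
};

Resampling every contour to the same vertex count matters later, when shapes are compared vertex by vertex.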

2. and 3.

So I wanted to find the similarity between a painted shape and a set of test shapes, then generate new shapes from there. Initially my test set was going to be a collection of images, textures and forms that fit some sort of aesthetic of my choosing. I thought the use of generative, rule-based systems such as L-systems or cellular automata to create my test set would be interesting, further increasing my computer’s influence on the work. I assumed this data would be in the form of .jpgs, so I implemented various functions for loading and analysing data from image files. Yet as experimentation continued, I was told about the superformula, an algorithm for the generation of shapes, most of which resemble forms found throughout nature. The beauty of this algorithm is that such a range of shapes can be created just from the alteration of four individual variables. I also realised that my training set no longer needed to be in the form of images; I could just take vertex positions and feed them straight into my program. I edited and modified a version of the superformula in Processing, allowing me to create and save a training set.
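The superformula itself is compact. Here is a minimal sketch of the generation step, producing vertex positions directly rather than image files; with the a and b parameters of Gielis’ formulation fixed at 1, a shape is described by just the four variables m, n1, n2 and n3.

// Sketch of superformula shape generation (a and b fixed at 1).
#include <cmath>
#include <vector>

const float PI = 3.14159265358979f;

struct Point { float x, y; };

// radius at angle phi for parameters m, n1, n2, n3
float superformula(float phi, float m, float n1, float n2, float n3) {
    float t1 = std::pow(std::fabs(std::cos(m * phi / 4.0f)), n2);
    float t2 = std::pow(std::fabs(std::sin(m * phi / 4.0f)), n3);
    return std::pow(t1 + t2, -1.0f / n1);
}

// sample the outline as a fixed number of vertices, ready to be fed
// straight into the comparison stage (no image files needed)
std::vector<Point> makeShape(float m, float n1, float n2, float n3,
                             int numVertices, float scale) {
    std::vector<Point> verts;
    for (int i = 0; i < numVertices; ++i) {
        float phi = 2.0f * PI * i / numVertices;
        float r = superformula(phi, m, n1, n2, n3) * scale;
        verts.push_back({ r * std::cos(phi), r * std::sin(phi) });
    }
    return verts;
}

Roughly speaking, m controls the rotational symmetry (the number of lobes), while n1, n2 and n3 control how pinched or rounded the outline is.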

Finding a value for the similarity, or difference, of two shapes was very tricky, since there are many factors to take into account, such as size, orientation, location… After much research and testing, I settled on a method using a turning function, as described on page 5 (section 4.5) here:

http://www.staff.science.uu.nl/~kreve101/asci/smi2001.pdf

Using this website as a reference along the way:

https://sites.google.com/site/turningfunctions/

This function maps the change of angle at each vertex along the perimeter of a shape onto a graph. It is then (relatively) easy to calculate a distance, or similarity, between two or more such shapes: for example, shape A’s first angle change is compared to shape B’s and the difference saved, with the same happening for every other vertex; the mean of these differences then gives a similarity between the two shapes.
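A minimal sketch of this comparison, assuming both contours have already been resampled to the same number of vertices (as in the detection step above):

// Sketch of the vertex-by-vertex turning comparison described above.
#include <cmath>
#include <cstddef>
#include <vector>

const float PI = 3.14159265358979f;

struct Point { float x, y; };

// signed change of direction at vertex i of a closed shape
float turnAngle(const std::vector<Point>& s, std::size_t i) {
    std::size_t n = s.size();
    const Point& a = s[(i + n - 1) % n];
    const Point& b = s[i];
    const Point& c = s[(i + 1) % n];
    float in  = std::atan2(b.y - a.y, b.x - a.x);
    float out = std::atan2(c.y - b.y, c.x - b.x);
    float d = out - in;
    while (d >  PI) d -= 2 * PI; // wrap into (-pi, pi]
    while (d < -PI) d += 2 * PI;
    return d;
}

// mean difference between the two shapes' angle changes: 0 means
// identical turning behaviour, larger values mean less similar
float turningDistance(const std::vector<Point>& a,
                      const std::vector<Point>& b) {
    float sum = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += std::fabs(turnAngle(a, i) - turnAngle(b, i));
    return sum / a.size();
}

Note that pairing vertices by index like this is sensitive to where each contour starts and to rotation, which is the limitation I come back to in the evaluation.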

So now I had a range of base training shapes, each with a similarity to an initially drawn shape. I then looked into clustering, so I could group together similar similarity values. However, after more research I found it was quite inefficient to implement most clustering algorithms on one-dimensional data (my single similarity value), and instead looked to the Jenks Natural Breaks algorithm. This is used for splitting a set of data into sub-groups of similar values (so shapes with similarities 1, 7 and 15 would be in a different group to those with 150, 120 or 110, for example). After my data had been split into sub-groups, the most similar group, i.e. the one containing the lowest values, was used to generate images.
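The grouping step can be sketched as a simple dynamic-programming version of Jenks Natural Breaks over the one-dimensional similarity values. This is an illustrative reconstruction rather than my exact implementation; the returned labels refer to the values in sorted order, and it assumes the number of groups k is at most the number of values.

// Sketch of Jenks Natural Breaks on 1D data: split sorted values into
// k groups, minimising the total within-group sum of squared deviations.
#include <algorithm>
#include <limits>
#include <vector>

// sum of squared deviations from the mean for v[i..j] inclusive
double ssd(const std::vector<double>& v, int i, int j) {
    double mean = 0;
    for (int t = i; t <= j; ++t) mean += v[t];
    mean /= (j - i + 1);
    double s = 0;
    for (int t = i; t <= j; ++t) s += (v[t] - mean) * (v[t] - mean);
    return s;
}

// returns a group index (0..k-1) for each value, in sorted order
std::vector<int> jenks(std::vector<double> v, int k) {
    std::sort(v.begin(), v.end());
    int n = (int)v.size();
    const double INF = std::numeric_limits<double>::infinity();
    // cost[i][j]: best total SSD for the first i values in j groups
    std::vector<std::vector<double>> cost(n + 1, std::vector<double>(k + 1, INF));
    std::vector<std::vector<int>> split(n + 1, std::vector<int>(k + 1, 0));
    cost[0][0] = 0;
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= std::min(i, k); ++j)
            for (int m = j; m <= i; ++m) { // last group is v[m-1..i-1]
                double c = cost[m - 1][j - 1] + ssd(v, m - 1, i - 1);
                if (c < cost[i][j]) { cost[i][j] = c; split[i][j] = m; }
            }
    // walk back through the stored split points to label each value
    std::vector<int> labels(n);
    for (int i = n, j = k; j > 0; --j) {
        int m = split[i][j];
        for (int t = m - 1; t < i; ++t) labels[t] = j - 1;
        i = m - 1;
    }
    return labels;
}

On values like 1, 7, 15, 110, 120, 150 with k = 2, this puts the first three in one group and the last three in the other; the group containing the lowest values is the one then used for generation.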

However, this was as far as the actual generation went: the superformula shapes initially generated for the training set were simply re-presented. I tried to use the values fed into the superformula for the most similar shapes as a starting point, slightly tweaking the values and regenerating new shapes. But I found that even after small tweaks the variation was too great, and any detectable similarity was lost.
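The tweak itself was nothing more elaborate than jittering the four parameters, along the lines of the sketch below (the tweak range is a placeholder):

// Sketch of the attempted regeneration step: take the superformula
// parameters of a well-matching training shape and jitter them slightly.
#include <random>

struct SuperParams { float m, n1, n2, n3; };

SuperParams mutate(SuperParams p, float amount, std::mt19937& rng) {
    std::uniform_real_distribution<float> jitter(-amount, amount);
    p.m  += jitter(rng);
    p.n1 += jitter(rng);
    p.n2 += jitter(rng);
    p.n3 += jitter(rng);
    return p;
}

A small change to m in particular alters the rotational symmetry of the outline, which may help explain why even modest tweaks produced shapes with little visible similarity to the original.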

4.

[pic1]

I would have liked to include some sort of composition calculation too, yet I found this was not possible in the time I had. Instead, for the paintings produced, I tried different methods of composition. One was my own choice of placement of the generated shapes; the next was a layering of shapes on top of one another, from most similar to least; and for the last I tried distorting the projection itself. As I was painting, I noticed how the projection distorted along with the distortion of the surface itself. This made me think: my program had influence over the piece through its code, its software, but I was also painting over a projection. How can the hardware, and its positioning, affect the work?

Evaluation

Overall, I have learnt a lot from this project. With initially very little understanding of machine learning or computer vision techniques, I had to read a great deal and jump over many small hurdles and problems.

I would have liked a more accurate measure of similarity, since my turning function currently does not cope well with changes in rotation or with lots of noise on the contours.

The generation could have been better too, since in the end it almost just recycles the initial forms given to it (however, parallels can be seen between this and the age-old mystery of what true creation really is, whether it really exists, or whether we ourselves just recycle and reproduce ideas presented to us). I would also like to implement some sort of colour and composition training and generation. My code is also quite messy, with many functions included that weren’t used in the final painting. This could have been avoided by better planning of exactly what functions and methods I was going to use, with a clear idea of exactly what I needed.

Furthermore, I believe the majority of interest within my project lies in the production, in the technology, leaving the traditional values that I wanted to emphasise severely lacking. I succeeded in my plan to push technology out of the limelight, yet this left the public with little understanding of the process of the work’s creation. For similar projects in the future, I would need to spend at least as much time on the physical, non-electronic side of things as on the electronic if the two are to hold the same importance, as well as finding a way to communicate the process more clearly through the work itself (in ways other than a short video documenting the process).

Gitlab Link:

http://gitlab.doc.gold.ac.uk/tpins001/Creative_Projects_2
