AR with L-Systems
Throughout our project we researched Augmented Reality (AR), and our resulting final product was an A5 zine which uses a marker to augment the page through the generation of objects. A 'natural' scene is generated covering the marker, consisting of an oscillating plane and procedurally generated 'trees'. We used two addons for openFrameworks, ofxAruco(1) for AR and ofxRules(2) for L-System generation, which depend on ofxOpenCv, ofxCv and ofxXmlSettings; these recognise the marker and place the generated objects within the right dimensions of the user's scene, allowing us to realise an individual and changing AR scene for each marker. We intended the project to consist of procedurally generated themed scenes that would vary for each page, showing the book's pages coming to life. We also wanted to include interactions for each scene by bringing in outside influence from the user, but we instead decided on a 'regeneration' button which updates the plane with randomly generated values and begins the steps of the L-System growth again. The project seeks to have users experience a procedurally generated scene through the pages of a booklet, giving a different augmented experience on each page.
Film documenting user interaction with the final Zine produced
Intended Audience / Background Research
The project's intended audience wasn't any particular group; rather, we wanted to convey an experience in a new form of media to anyone who wants media that is not constrained to the pages of a book. This led to some initial research into markerless AR from Disney Research(3), as markerless tracking would have allowed direct physical interaction and potentially the creation of the marker itself. The audience isn't targeted based on the environments presented, but rather on the concept of transforming two-dimensional paper media into three-dimensional environments. This was influenced by the work of painter Austin Lee, who brought his prints to life through the use of AR(4). The intended audience therefore varies across all ages: young children can experience the project as an interactive book to enjoy, while adults may seek to understand and experience augmented media. We hope that a project of this kind can be both informative and enjoyable at any age, and we believe any user can easily understand and use the project without any knowledge of the technologies involved, allowing them to experience a whole new form of media with almost no prior knowledge or input needed.
Initially the project was intended to augment and procedurally generate objects within the user's own surroundings, using a camera to change their current location into a procedurally generated one, thereby allowing the user to fully relate to the scene being presented. As the design changed to exist upon the page of a book, the original experience stayed true, but to a lesser degree. The original concept was also to use markerless AR in order to camouflage the markers that initiate the scenes; due to limitations of the addons we used, we had to change the design to include more visible markers.
Example of AR prototype and markers provided by ofxAruco
As the design process moved forward we initially intended to include multiple markers within each scene to invoke a more immersive experience, but after experimenting with multiple markers we felt that having one distinctive marker allowed us to focus on one themed object within each scene. ofxRules comes with inbuilt xml files which step through the generation of L-Systems, and as such we were able to easily include dynamic and animated L-Systems. We intended to use physics and created programs to do so, but as we began to bring in the L-Systems we realised that the physics did not integrate easily with the ofxRules addon. We therefore showcased the L-Systems as living things by allowing them to recursively generate within each scene, accomplishing a similarly 'natural' state to the physics implementation through a change in the design concept.
Click to play gif of L-System growing
Originally, the concept was to allow lines representing the two-dimensional version of the scene on the paper media to interact with the augmented version. This would have allowed textured shaders to be used differently within each augmented scene, with multiple shaders present on each page. Instead we found that, to create an oscillating plane coordinated with the generated texture colours, a shader could be used to alter the y values across the plane, with the input colours decided through noise and sinusoidal functions as well as being influenced by the recognised marker.
Click to play gif of oscillating plane created through textures and shaders
We decided to create a zine to give us a single, physical, viable product to work towards. The three markers, and the designs featured, produce a surreal digitally rendered scene that exists alongside the printed version of itself. Mockups of the zine were created which included several markers and different designs; however, we decided that a single printed marker alongside the texture, coupled with the AR, worked best.
Following unsuccessful efforts to implement ofxARToolkitPlus (another AR addon for openFrameworks), we tried the application using older versions of openFrameworks, but this led to other complications, so we decided to use ofxAruco. ofxAruco is easily implemented: it calibrates the webcam and configures the marker information using the provided 'intrinsics.int' and 'boardConfiguration.yml' files. From this point, using the AR and placing objects within the scene was accomplished relatively easily, with slight adjustments to the scale of the objects created.
We decided to augment the pages of the book with digitally rendered scenery which took influences from nature both in its production (L-Systems use recursive algorithms found in nature) and in its aesthetic. The xml files provided by ofxRules allowed the recursive drawing of trees to be easily implemented, so we wanted to create a 'land' for them to sit on that had a similarly 'natural' animation. The shader examples provided by openFrameworks include one that displaces a plane primitive using a vertex shader(5), but it uses a grayscale image as the texture. We adjusted this example to work with colour textures by using the green channel to update the y position of each vertex, creating a smooth, undulating surface whose height physically relates to the darkness of the colour generated. The oscillation comes from 3D noise generated across the pixels of the texture, which translates into 3D movement of the plane.
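Our build does this in openFrameworks with a GLSL vertex shader, but the idea is compact enough to sketch as a plain Processing grid: sample the texture's green channel and raise each vertex of the plane by it. The grid step, height scale and texture file name below are illustrative assumptions, not our production values.

PImage tex;

void setup() {
  size(600, 600, P3D);
  tex = loadImage("texture.png"); // placeholder name for any colour texture
  noStroke();
}

void draw() {
  background(0);
  translate(width/2, height/2, -200);
  rotateX(PI/3);
  int step = 10;          // grid resolution (assumption)
  float heightScale = 60; // how far the green channel can push a vertex
  for (int y = 0; y < tex.height - step; y += step) {
    beginShape(TRIANGLE_STRIP);
    for (int x = 0; x < tex.width; x += step) {
      // green channel -> vertical displacement, so darker areas sit lower
      float h1 = green(tex.get(x, y)) / 255.0 * heightScale;
      float h2 = green(tex.get(x, y + step)) / 255.0 * heightScale;
      fill(tex.get(x, y));
      vertex(x - tex.width/2, -h1, y - tex.height/2);
      fill(tex.get(x, y + step));
      vertex(x - tex.width/2, -h2, y + step - tex.height/2);
    }
    endShape();
  }
}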
We wanted the patterns to be shaped partly by user input as well as randomly generated. The inputs for the generated RGB colours use variables that return 3D noise and positive, scaled sine values, which result in an interesting texture that updates in real time; the inputs – frequency, phase, noise scale and brightness – can be altered by the user through a 'random' function.
Along with regenerating the plane, we wanted to allow the user to regenerate the L-System trees themselves. ofxRules comes with many easy-to-use functions which allow the clearing and updating of the L-Systems. Therefore, when the user presses the 'a' key to update the parameters of the plane, the existing tree is cleared and another begins to recursively grow, differently from the previous one.
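Putting those two paragraphs together, a minimal Processing-style sketch of the texture generator and the 'random' regeneration hook might look like the following (the app itself is openFrameworks using ofNoise, and the random() ranges here are assumptions rather than our tuned values):

float freq, phase, noiseScale, brightness;

void setup() {
  size(400, 400);
  randomise();
}

void randomise() {
  freq       = random(0.01, 0.1);
  phase      = random(TWO_PI);
  noiseScale = random(0.005, 0.05);
  brightness = random(150, 255);
}

void draw() {
  float t = millis() / 1000.0; // the third noise dimension animates the texture
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      float n = noise(x * noiseScale, y * noiseScale, t);  // 3D noise
      float s = 0.5 + 0.5 * sin(freq * x + phase);         // positive, scaled sine
      pixels[y * width + x] = color(brightness * n, brightness * s, brightness * n * s);
    }
  }
  updatePixels();
}

void keyPressed() {
  if (key == 'a') randomise(); // in the app, 'a' also clears and regrows the tree
}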
Our project began to deviate from our original proposal following feedback on the feasibility of our plan. The minimum viable product was to be a piece of computational art that augments the user's scene, which we adjusted to suit a more realistic outcome for the time we had left. In this sense we succeeded, as we created an AR scene, but we failed to include any information from the user's scene other than the correct dimensions of the space, used to generate a realistic augmented scene. As a result of several issues, which we will address, our project only settled on a single outcome at a later date rather than from the beginning, as desired.
There have been many changes since our proposal, but ultimately our outcome is still an exploration into AR and procedural generation. We lost time trying to get addons to work with the current version of openFrameworks when we should have been working towards an outcome that built on the work of the first weeks and was adjusted continually. Our later decision to create a physical outcome, a zine, allowed us to focus on a more interactive and interesting application. Although we had planned to implement some of our own generative work, based around the superformula(7) we had explored for our proposal, we decided to use ofxRules, as its animation and use of natural algorithms was seamless and relatively easy to implement. We recognise that our project is not as successful as it could have been, judging from how far we deviated from our proposal and plan, which was regrettably not as realistic as it could have been considering what we wanted to achieve and the time we had to achieve it in.
Any AR project is best when it is most accessible, which means our app would be best suited to mobile, as many AR projects are. Although the immersiveness of the outcome has not reached the level we had intended, the overall experience of augmented media was accomplished to a degree. The original concept of showcasing augmented media, procedural generation and user interaction is still present within our project. We're confident that, to some degree, all three of these initial concepts exist within our project, although we feel frustrated and disappointed that our initial project idea hasn't reached the level of detail we would have liked. Through all the hurdles we faced, we were still able to deliver on the concepts we initially intended to convey.
Link to GitLab Repository
References and Bibliography
1. ofxAruco: https://github.com/arturoc/ofxAruco
Further information on ofxAruco library: http://www.uco.es/investiga/grupos/ava/node/26
2. L-Systems Library: http://www.neilmendoza.com/ofxrules/
Oscillating plane with AR tutorial: http://www.creativeapplications.net/processing/augmented-reality-with-processing-tutorial-processing/
Shader tutorials: http://openframeworks.cc/ofBook/chapters/shaders.html
Shader and Mesh tutorials:
LED Jumper – Daniel Sutton-Klein and Sebastian Smith
LED Jumper is a 2D platformer by Daniel Sutton-Klein and Sebastian Smith, inspired by "The Impossible Game" and displayed across a 32×16 LED matrix where the only control is to shout to jump. The game takes advantage of the LEDs' bright visuals and creates a neon-esque aesthetic for the player to be immersed in as they progress through the game.
Processing + Teensyduino code: Download
We set out to make a fun and simple game for anyone to play. Since the game is controlled only by the user’s voice, our audience isn’t limited to any particular group of gamers but instead literally anyone that can shout loud enough or make enough noise.
We mainly had to focus on the hardware side of things when it came to background research, as we had little prior experience with physical computing. We added all the relevant information we needed to a Google Docs file and researched as much as we could about the different options of compatible hardware, to ensure that we wouldn't waste money on anything useless. Below is a snapshot of this document, which shows how we gathered the information for building our LED display.
The concept of our project, a game running on an LED array, dictated all the design that followed. We had some options for the density of LEDs on the strip: 30, 60, or 144 LEDs per metre. Given the sound-input nature of the game (we imagined people shouting "jump" at the display to avoid dying), it seemed appropriate to try to get the largest display we could. When prioritising the size of the display, it was cost-efficient to opt for the 30 LEDs/metre strips. We considered different aspect ratios and arrangements of the LEDs (pixels could be aligned diagonally or in a traditional display grid). Platform games rely on being able to see far ahead of the player, so we decided on approximately 2:1 for the aspect ratio, with a regular grid arrangement to keep it simple. The display would be approximately 30×15 LEDs (~100 × 50 cm).
The rest of the design process started with prototyping game concepts and mechanics. Even without knowledge of the technical side of transmitting the game onto the display, we knew that a two-dimensional data structure emulating the display was the first step in prototyping, as it set up a simple and logical way to set the LEDs once the physical side and libraries were complete. After implementing this, we started working on the core game mechanics that would shape the gameplay and user experience. For jumping, we knew early on that we wanted the user to control the player by simply shouting at the screen, so we used Minim to analyse the amplitude of the audio input, convert it into decibels, and make the player jump while the audio input was above a certain threshold. Below are snapshots of the prototypes showing the first 2D data structure array and the core mechanics implementation.
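In essence, the jump trigger reduces to the sketch below: take Minim's input level, convert it to decibels and compare against a threshold. The -20 dB figure is an illustrative assumption; the real threshold was tuned by ear.

import ddf.minim.*;

Minim minim;
AudioInput mic;
float thresholdDb = -20; // illustrative; tune to the room and microphone
boolean jumping = false;

void setup() {
  size(320, 160);
  minim = new Minim(this);
  mic = minim.getLineIn(Minim.MONO);
}

void draw() {
  // mix.level() gives the current amplitude in 0..1; convert it to decibels
  float db = 20 * (float) Math.log10(max(mic.mix.level(), 0.000001));
  jumping = db > thresholdDb;
  background(jumping ? color(0, 200, 0) : color(20));
}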
From here we built up and focused on developing a playable game on processing while we waited for our hardware to arrive.
For level design, we created the final level as a PNG image in Photoshop and implemented a way in Processing to use the image data as the level of our game by loading the pixel information. This way we could also easily develop level mechanics by referencing the colour of a given unit and how it affects the player when they are next to it. Below is a snapshot of the final game and the Photoshop file to show how this worked.
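A minimal sketch of that pixel-to-level step follows; the colour keys and tile codes are stand-ins rather than the exact ones from our Photoshop file.

int[][] level; // tile codes indexed as [column][row]

void loadLevel(String filename) {
  PImage img = loadImage(filename);
  img.loadPixels();
  level = new int[img.width][img.height];
  for (int x = 0; x < img.width; x++) {
    for (int y = 0; y < img.height; y++) {
      color c = img.pixels[y * img.width + x];
      if (c == color(0))              level[x][y] = 1; // black: solid block
      else if (c == color(255, 0, 0)) level[x][y] = 2; // red: kills the player
      else                            level[x][y] = 0; // anything else: empty air
    }
  }
}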
Commentary on the build process
With the premise of the project decided, and zero experience with LED control or development boards like Arduino, we started by learning the very basics.
Starting small, we borrowed an Arduino Duemilanove which came with everything we needed to practise small and simple tasks with LEDs. After being able to turn a single LED on and off, we moved to multiple LEDs which we had flashing sequentially. Knowing that we would be having audio input in the finished project, we tried to make a VU (volume unit) meter with the Arduino and LEDs, but soon found out that the microphones that plug into Arduino pins have limitations which might make them problematic for monitoring voices. The alternative was to use the microphone on a laptop, which we found required a whole other area of expertise, serial communication, which felt beyond our capability and was advised against in web forums. Overall, our testing with the Arduino was very useful in that it gave us an idea of how development boards and the Arduino IDE work.
After lots of research, we planned to use the OctoWS2811 library on the Teensy to control the LEDs, as it was designed for the Teensy and came with examples which would allow us to stream the video from a laptop (with the VideoDisplay example on the Teensy, and movie2serial on Processing on a laptop) without much extra work.
When the components arrived (Teensy 3.2, WS2812B strips, 30 A PSU, and the logic level converter to shift the Teensy's 3.3 V data signal up to 5 V), the first test was to control a single LED. After all of the connections on the breadboard were made (with lots of important connections to the LOW and HIGH rails, for example to set the direction of the logic converter), we ran the OctoWS2811 library on the Teensy, and confirmed with a multimeter that the data signal to the LED was 5 V. This was our first time controlling a WS2812B LED.
After the successful test with a single LED, we moved on to the next step and hooked up a short strip of 22 pixels. Using the 'Basic Test' example from the OctoWS2811 library, which is meant to change the pixels to a different colour one by one, we saw serious flickering issues, with pixels appearing not to update and flashing random colours. Not knowing why this was happening, we tried the test example from an alternative library, FastLED, which instantly gave better results. Being able to control a strip of LEDs, we took this as our cue to build the full 32×16 display.
The first step in building the display was to get a piece of wood the right size, which we then painted black. After considering how we could attach the strips, we started measuring out points for holes across the board, which would evenly space the strips in a precise way. We drilled those holes (of which there were a few hundred), then used zipties to secure the strips into place.
The strips have power, data and ground connections, which we soldered to wires that went through holes at the end of each strip. After everything was set up, we loaded the Basic Test for OctoWS2811 again, and despite the initial excitement of seeing all the pixels light up for the first time, we realised there were serious communication issues. The test was meant to light up all the LEDs the same colour, then change the colour of each pixel in order, one by one. Instead of the clean result we hoped for, lengths of strips seemed to not update colour, and many colours would flicker.
At this point we split all the data cables into 2 CAT5 wires instead of 1, the idea being that it would minimise cross-talk and stop interference in the data signal. This helped a bit, but the same problems were still there. With an oscilloscope, we looked at the waveform of the data signal coming out of the Teensy, and saw that it wasn't clean and that the timing (which has to be VERY specific) was wrong, compared with the graph on the OctoWS2811 library website showing exactly how the waveform should look for the WS2812B chips. Going back to FastLED, we saw a major improvement, and decided that OctoWS2811 was not reliable enough to use.
Although FastLED output better signals, it came with its own problems. It wasn't designed for the Teensy and only has single-pin output by default. Latency and signal degradation would be expected when trying to control 512 LEDs on a single pin, not to mention it would mean changing all the wiring on the display, so we looked into the different options for outputting to the 8 pins that were already set up. The Multi-Platform Parallel output method described on the FastLED wiki was supposed to do what we needed, but after spending a lot of time on it we just couldn't get it to work. We then tried the method used in FastLED's 'Multiple Controller Examples', which creates multiple FastLED controllers. This worked, and we were able to light up the whole display with the colours we wanted (see the colour gradient photos).
The next problem was serial data communication. Our original plan was to use OctoWS2811 to control the LEDs, with the VideoDisplay + movie2serial examples handling the streaming of video from a laptop to the Teensy automatically. Now using FastLED, we would have to write our own code to do this manually. Arduino and Processing both have Serial libraries for serial communication, but after looking at the references and Googling for other people doing similar things, it still wasn't clear how to make it work. After almost giving up a day before the deadline, a section at the bottom of FastLED's wiki on 'Controlling LEDs' gave us a clue:
Serial.readBytes( (char*)leds, NUM_LEDS*3);
With some trial and error we were able to send serial data to the Teensy from Processing which, using FastLED, successfully (and beautifully) lit up our test strip of 22 LEDs on one pin. After that, we moved the whole project upstairs where it would be easier to work on code at the same time, but once there, the whole thing seemed to stop working. We brought up the oscilloscope to see what was happening, and indeed there was no data signal. Stumped, we spent two hours trying to figure it out, as when we moved back downstairs it magically worked again. At one point the Teensy stopped responding completely and we feared we'd shorted it, which held us up for a while, until we saw that the power supply ground was plugged into the HIGH rail on the breadboard. The next day we used the working serial communication to control a whole strip of 64 LEDs on the full display from Processing. It didn't look correct at first, and we worried about the size limit of the serial receive buffer, but it turned out the Processing sketch has to be stopped before restarting or the serial data gets cut mid-frame. To make a long story short, after changing and configuring code on the Teensy and in Processing, we achieved serial communication sending data for the full array.
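For reference, the Processing side of that link reduces to packing one byte per colour channel and writing the whole frame, which the Serial.readBytes() line above then reads straight into the Teensy's CRGB array. The port index, baud rate and RGB byte order (chosen to match FastLED's in-memory CRGB layout) are assumptions in this sketch.

import processing.serial.*;

final int NUM_LEDS = 512;
Serial teensy;
byte[] frame = new byte[NUM_LEDS * 3];

void setup() {
  // port index and baud rate are machine-specific assumptions
  teensy = new Serial(this, Serial.list()[0], 115200);
}

void sendFrame(color[] px) {
  for (int i = 0; i < NUM_LEDS; i++) {
    frame[i * 3 + 0] = (byte) ((px[i] >> 16) & 0xFF); // red
    frame[i * 3 + 1] = (byte) ((px[i] >> 8) & 0xFF);  // green
    frame[i * 3 + 2] = (byte) (px[i] & 0xFF);         // blue
  }
  teensy.write(frame); // one full frame per write
}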
Implementing the Processing serial communication code into the game was fairly straightforward, although due to the layout of the strips (one strip covers two rows, and they zig-zag in direction), the first time we ran the game every second row was reversed. This was fixed with some code to alternate the direction of the LEDs sent for each row.
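The remap itself is tiny; a sketch, assuming even rows run left to right as ours did:

final int W = 32; // display width in LEDs

// Convert a game pixel (x, y) to its index in the outgoing LED frame.
int ledIndex(int x, int y) {
  if (y % 2 == 0) return y * W + x;           // even rows: left to right
  else            return y * W + (W - 1 - x); // odd rows are reversed
}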
Due to us only getting the display working on the day before the deadline, we didn’t have much time to experiment further to make the game perfect. We did however run tests looking at brightness of the LEDs, as they were very bright, and found that dimming the pixels is most effective between 3/255 and ~20/255, which has potential applications for the game. We also tried putting a sheet in front of the display to diffuse the light, which could look really nice if we built a frame to stretch it around.
For what we wanted to achieve, we certainly accomplished the overall concept of our design by having a playable game work with audio input on our LED matrix. Despite this, many of our ideas weren't realised in the final piece for various reasons. We did not manage to implement several specific mechanics, such as secret levels activated when the player takes a specific route, noise cancellation, and non-linear jumping for a smoother experience. These mechanics weren't a high priority, as a typical game session lasts only a couple of minutes per person, meaning they wouldn't heavily alter the gameplay experience.
After all the time we spent getting the game working on the LEDs, there was little time to experiment with different brightness settings to add contrast in certain areas, or to create new colour palettes optimised for the LEDs. As you can see in the video, there were still some flickering issues with the display, where random segments of LEDs would turn on for the length of a frame; we confirmed this was frame-related by changing the frame rate and seeing how the flickering was affected. Given the amount of flickering we had before, however, it's a miracle it wasn't worse, although so far we have been unsuccessful in finding a way to fully eliminate this problem.
One last success: despite hearing how susceptible our components were to blowing, and even purchasing spares in anticipation of this, everything remained intact throughout the duration of the project.
“2059” – By Jason & Nish
2059 is a graphical interactive fiction game based on a WWIII scenario. Touching on the seven deadly sins, it has the player confide in a mandatory computing system that has been installed in citizens' homes for surveillance and observation purposes. With an enticing and illusive interface, the game serves as a companion and questions one's consciousness, encouraging the player to increase their self-awareness.
The player navigates an imaginary world with the aid of extremely simple and abstract graphics, in an attempt to find themselves within the real world.
Intended Audience & Outcome
Our target audience is retro gamers above the age of 16, so we aimed to create a relevant but nostalgic game with a simple user interface for ease of play.
Our primary focus was developing a narrative that concerns the issues of today's society, prompting the player to question where we are going as a race and what we can do to change direction.
Through this, we have created a plot that touches on advancements in technology and their corrosive effect on humanity's unity, posing the question of how imminent another world war is. With mentions of individuals known for their benevolence, such as the Dalai Lama and Maharishi Mahesh Yogi, the game attempts to combine fantasy with reality.
Our second focus was the visual aesthetic. In an attempt to pay homage to the history and development of gaming, we developed a deliberately unsophisticated programme that imitates a natural-language parser. Accompanied by elementary graphics, we decided to incorporate a predominantly monotone colour palette with two-dimensional graphics, utilising primitive shapes and generating atmosphere through imperfection and decorative glitches.
With the combination of both the modern plot and the outdated interface, this retro game incorporates several attributes that should appeal to an eclectic array of gamers that are looking for something a little different.
One of the pronounced limitations of Processing is its 'single-state' nature: expecting Processing alone to handle several branching event paths would be naïve and unnecessarily difficult. We were therefore presented with several options for a data format to hold the script text and parse it into Processing, where the graphical side and main structure of '2059' are governed: XML, JSON and CSV. XML and JSON are the two most common formats for storing and exchanging structured data, although we found that XML has several advantages over JSON for our purposes. One of the biggest differences between the two is XML's ability to communicate mixed content, i.e. strings that contain structured mark-up utilising parent and child tags.
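As an illustration (this is a made-up fragment, not our actual script), a node can hold its prose and its branching options as child tags, which Processing's built-in XML class walks directly:

String sample =
  "<node id=\"3\">" +
  "  <text>You wake in a grey room.</text>" +
  "  <option next=\"4\">Stand up</option>" +
  "  <option next=\"7\">Stay down</option>" +
  "</node>";

void setup() {
  XML node = parseXML(sample);
  println(node.getChild("text").getContent()); // the node's prose
  for (XML opt : node.getChildren("option")) {
    println(opt.getContent() + " -> node " + opt.getString("next"));
  }
}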
Our main sources of inspiration, alongside some of the original and most renowned text adventure games such as the “Zork” trilogy and “The Hitchhiker’s Guide to the Galaxy”, were films that discussed humanity vs. technology.
One was "Ghost in the Shell", a 1995 Japanese animation in which a cyborg federal agent and her partner track down "The Puppet Master", who illegally hacks into the minds of cyborg-human hybrids, leading her to ponder her own genetic makeup and what life would be like if she had more human traits.
Another was "The War Game", a banned 1965 television drama depicting the prelude to, and the immediate weeks after, a Soviet vs US nuclear war.
The Charlie Brooker production "Black Mirror" was another inspiration. A British science-fiction television anthology series, its speculative plots possess a dark twist with satirical themes, examining modern society, particularly the unanticipated consequences of new technologies.
To view the full narrative script click here
Whilst in the midst of developing the script and graphics, we created the basic structure for '2059', then tested it with another script and various animation images of an unrelated subject.
Setting ourselves the task of completing a daily diary entry throughout the development process, in order to analyse our own psyche and get into character for the script, was integral to our design process. We tried to grasp the art of words and their formatting to effectively convey the storyline, whilst minimising the amount of XML branching and the options of direction for the player. To achieve this, we composed several drafts of the script and asked peers for their thoughts.
Reverting to early computer graphics, and pulling reference from early gaming consoles such as the 'Magnavox Odyssey' released in 1972, the collective decision was to ascribe the aesthetic to this era. We created several rough sketches, visually commenting on the most prominent fragments of the plot. Embracing this concept, we decided to construct sketches that harness an imperfect and glitch-oriented appearance.
Rough Processing Sketches
High Resolution Aesthetics Test
Primitive Aesthetic Test
With the intention of creating a minimal and intuitive interface, we had to test ways of displaying the text. Attributes such as the fixed-pitch font 'Larabie' have been integral to the final feel of the game, as early computers and their terminals often had extremely limited graphical capabilities: hardware implementation was simplified by using a text mode where the screen layout was addressed as a regular grid of tiles, each of which could be set to display a character by indexing into the hardware's character map.
With the initial idea of using functions such as spotLight() and directionalLight() for a high-resolution finish on each sphere, we soon had to re-evaluate this decision, as recent versions of Processing removed support for the fixed-function pipeline in 3.0 beta 5. This means that immediate GL calls cannot be used, making it necessary to write our own GLSL shaders to implement the rendering on the graphics processing unit (GPU). This was possible, but given the time limits and our coding abilities, we decided to find further inspiration and redesign the graphics with a two-dimensional aesthetic. We believe the initial idea would have suited our audience better, as they have most likely become accustomed to today's high-definition graphics.
Over thirty sketches were designed with plans for implementation; however, we underestimated how long each Processing sketch would take to create and haven't been able to finish them all. This prevented us from including a fluent, narrated array of graphics adding an extra dimension to the journey. Having to compromise on this idea and utilise what we had managed to master changed our intended outcome slightly; with more time, this is something we would have built on.
There was one large issue we continuously came across: text input handled through Processing. There are sections of the game where the player inserts data that is supposed to be retained and fed back into the XML script, such as their name and age. This worked sometimes and not others, and as it is an important part of the game, we would have liked to explore a more systematic and Processing-friendly way to implement this feature, preventing the hit-and-miss behaviour it currently has.
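The more systematic approach we have in mind would accumulate characters inside keyPressed() into a buffer rather than polling key state; a sketch, where the answers map and question key are hypothetical names:

import java.util.HashMap;

String typed = "";
String currentQuestion = "name"; // hypothetical key for the current prompt
HashMap<String, String> answers = new HashMap<String, String>();

void keyPressed() {
  if (key == BACKSPACE) {
    if (typed.length() > 0) typed = typed.substring(0, typed.length() - 1);
  } else if (key == ENTER || key == RETURN) {
    answers.put(currentQuestion, typed); // retained, to be fed back into the script
    typed = "";
  } else if (key != CODED) {
    typed += key; // printable characters accumulate in the buffer
  }
}

void draw() {
  background(0);
  text(typed + "_", 20, 40); // echo the buffer like a terminal prompt
}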
Creating a function that allows us to access the player's webcam has also been rather difficult. When attempting to access the webcam, the player has to refer back to the console to choose which camera to connect to, typing it into the window and pressing 'enter'. This is not ideal. We would have liked to create a condition that prints the console's information and options into the actual game, as without this, the coding tools themselves become part of the game. Alternatively, we could write a piece of code that enables the programme to choose for the player.
We were also planning on implementing a location API with a minimal data visualisation piece, as there is one part of the script where the player is asked to give their location. We thought it would have allowed the player to get even more lost in the concept of games through this personalised feature, however we found it difficult to find the correct API and therefore left it out.
With the generative process split into logic/technicality and creativity, I believe we divided the tasks productively, using the time that we had effectively; however, towards the end we struggled to meet our weekly targets, as programming bugs and constraints of Processing that we were not aware of caused us to fall behind. Click here to find a week-by-week account of our progress.
Overall, we are pleased with ‘2059’s’ outcome.
The programmed structure serves its purpose well. The XML child and parent inheritance enabled the branching of the script to be as simple as possible, and Processing interacts with the XML adequately, presenting minimal issues. If we had more time we would have implemented a 'backward' button that returns the player to the previous section of text, using a structure in Processing that remembers where you are.
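The 'backward' button itself would only need a history stack of visited node ids; a minimal sketch:

import java.util.ArrayDeque;

ArrayDeque<Integer> history = new ArrayDeque<Integer>();
int currentNode = 0; // id of the node currently on screen

void goTo(int nodeId) {
  history.push(currentNode); // remember where we came from
  currentNode = nodeId;
}

void goBack() {
  if (!history.isEmpty()) currentNode = history.pop();
}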
As the narrative was our primary focus, we feel we have scripted an entertaining piece that complements our objective: to question one's morality and serve as an eye-opener in terms of self-awareness. We would have liked to create a slightly more immersive text adventure, taking full advantage of the freedom of exploration through imagination and giving the player a wider choice of navigation through the WWIII environment. This would also have worked in favour of the seven deadly sins theme, as we could have presented more options and therefore gauged a more representative tally of each sin, giving the player a summary of their psyche through a simple diagram at the end of the game, which was one of our main objectives. The XML and Processing work has been structured to handle this kind of recording; however, we have not been able to implement it.
We have worked extremely well together and taught one another a fair amount. Our graphical interactive fiction concept required a fairly balanced input in order to create a well-rounded game, which was our every intention. It has been a positive and effective learning experience for both of us, and we may continue to finish '2059' so that we can present something executed all the way through, with the intended outcome fully met.
References and Bibliography
https://pixabay.com/en/guy-man-dark-room-curtains-drapes-690510/ (environment setting)
http://www.ifarchive.org/ (interactive fiction)
http://neogaf.com/forum/showthread.php?t=463257 (interactive fiction)
http://funprogramming.org/121-Using-a-webcam-in-Processing.html (webcam access)
https://forum.processing.org/one/topic/webcam-record-save-and-replay.html (webcam with record function)
https://processing.org/discourse/beta/num_1226730484.html (Programming text adventure games)
https://www.flickr.com/photos/chesterbr/9176534752 (Odyssey gaming)
Spider! was supposed to be a program in which you would be able to interact with a realistic spider, using the Leap Motion and Google Cardboard. Unfortunately, this wasn't to be.
Research and intended outcome
Spider! was supposed to be a therapeutic tool used as one of the first stages of treating arachnophobia. Using the Leap Motion in conjunction with Google Cardboard, the user would be able to interact with a spider by picking it up, or by scaring it into fleeing or attacking.
A simple representation of what we were going for
Egle's first port of call was to study the Strandbeest movement, which moves like so
We started looking at how spider legs work and how they move with each other.
A spider's legs individually move as the diagram displays; a spider moves four legs at a time (two on either side) to keep balance, as the next diagram shows.
Image edited from www.mirrorservice.org
The audience we were aiming for is people who are interested in overcoming their fear of spiders by desensitising themselves to the look of them, but who don't want to actually be near a real spider. The app would be the first step towards getting rid of this fear; the second step would be to be around an actual spider.
The idea was to have all the information in openFrameworks sent to Unity, so that it could easily move the 3D models and visualise the product.
The project unfortunately ended up a failure, as Egle and Jamie were bombarded with problems at every turn.
Our original intent was to make a virtual environment where a person can interact with a 3D spider that responds naturally to the user's movement. Unfortunately, we ended up making a simple informative app about spiders.
Our work time dwindled from the beginning, with Jamie having to work in his free time and Egle having to go back and visit family; because of this, communication between us was strained.
It all started when we decided to use the Leap Motion, a piece of tech that captures hand and finger position data via infra-red and displays it on screen, and the Google Cardboard, a cheap VR headset made of cardboard, as it is so aptly named. Setting up the environment was a disaster: following the software development kit instructions in the Leap developer walkthrough was confusing and didn't give us all the files needed. With Simon's help we managed to set up an openFrameworks environment that works with the Leap Motion, using the example in the ofxLeapMotion addon.

Soon it was time to implement the Google Cardboard environment alongside it, but that proved too hard, as we would have had to communicate via a Java environment. That's when Jamie came up with the idea of using Unity, since setup in Unity is basically drag and drop. This caused some concern, because the point of the project was that it was supposed to be done in openFrameworks; the solution was that, whilst everything would be visualised in Unity, all the data would come from openFrameworks. There was then some confusion about how to make openFrameworks communicate with Unity. Jamie got an OSC example working between Unity and openFrameworks on keyPressed, so the OSC link itself worked; the problem was getting the position information from the Leap to send over, mostly because of bad access errors. There seemed to be no way to access that information manually, and therefore we were unable to use it to create interaction.
It's at this point that both Jamie and Egle became disheartened with the project and their attention started to pull in other directions. Jamie became down and unable to work on something that seemed too big to tackle; try as he might, he couldn't bring himself to carry on. Meanwhile, Egle was spending a lot of time learning a new 3D modelling program, Maya, spending most nights watching tutorial videos and not sleeping, which in turn left her unable to work at a hundred percent.
This exhaustion, and other uni-related stresses, unfortunately led to Egle having second thoughts about the course and having priorities elsewhere.
Upon reflection, I believe the project was too ambitious for our skill levels and the time we had available. Whilst the project may have failed, we learned a lot about ofxAddon libraries, Unity assets, Maya and, of course, ourselves. If I were to do this project again I would try to do it without the external devices, which would greatly simplify the project and make our lives easier, because external devices have a capacity to fail unmatched by any add-on we could have used. This was the main problem with our project: the external devices, the Leap Motion and Google Cardboard, had little documentation available, which made our lives much more difficult than they could have been. A knock-on effect was that the project became a monster we didn't want to face. It also made us realise the importance of getting the lecturers' help, as they really made our lives easier and helped us through the difficult times. The main thing I would have done differently is the very subject of the project: we would have made something simpler, easier to build and with more documentation. If we had done this, it would have made our lives easier, the project simpler, and helped us make a program worthy of Creative Projects.
Knight for the Dawn: Demonstrations of an RPG-Making Engine
Justin King & Saskia Burczak
PROJECT GITLAB REPOSITORY:
WHAT IS OUR PROGRAM?
Our final product (working title ‘Knight for the Dawn’, the name of our game concept) is a demonstration of our capabilities to implement the mechanics required to create a typical RPG-style game, including key states such as an interactive overworld, a turn-based battle system, choice-influenced conversations with other characters and a working inventory of usable items. We provided a number of assets and completed game states to be viewed as examples of how our code can be utilised when story ideas and the corresponding assets are applied. The style we pursued in our demonstrations exudes a retro feel, paying homage to the inspiration we drew from games previously explored in the background research phase of this project.
An additional aspect of our final product was the production of a fully functional library that allows a handheld controller to be used with programs coded in Visual Studio, which we put forward as something people may find useful to apply to their own projects.
We have come to view our product as base code that could potentially be used as a game-creating engine of sorts, resembling RPG Maker (http://www.rpgmakerweb.com/) in its ability to produce simple games sporting the defining elements of the genre.
The research we conducted into the JRPG genre before finalising our ideas introduced us to a basic formula that most JRPGs seem to adhere to. It involves how each game state serves its own purpose to contribute towards forming a final product, as follows:
Overworld: Create a diverse world that acts as a stage for the adventure unfolding before the main cast. The players must first become immersed in the world if they are to become interested in anything else unfolding within it.
Interaction: Unveil the story as you meet and gain information from in-game characters. Here, the characters are given their chance to appeal to the player through an expression of their personalities, giving themselves depth and therefore a reason for the player to become invested in them.
Battle: Have your characters test their strength against the forces opposing them, setting the true challenges of the adventure. Along with the story, battles are the driving force of the game; they give the player a reason to spend enough time in-game to level up and pursue quests that offer them help against their foes.
Based on this, we knew exactly how we had to divide up our code and had a clear vision of what its final purpose would be.
Throughout the course of our research into the genre, we discovered that JRPGs have an incredibly wide target audience. This possibly stems from how they frequently combine elements from other genres such as dating sim mechanics, combat and rich stories into one package; this way, they can catch the interest of players whose personal genre preferences besides JRPGs may actually be entirely dissimilar.
Due to their typical 80-hour duration, JRPGs must show strengths in many areas in order to keep the player engaged for such a long period of time. Appealing artwork, memorable characters and a good story are important areas in which a JRPG must excel. Our intended outcome was therefore to provide visually appealing assets for a game that boasts an accessible user interface to suit both newcomers and those who have had prior experience with similar game mechanics.
Forming the basic storyline of the game went hand in hand with the design choices we made regarding the game's overall appearance. We chose to pursue a style similar to that of early JRPGs, such as the first instalments of the Final Fantasy franchise, in order to instil a familiar, nostalgic feel; through a game balancing a medieval aesthetic with subtle fantasy elements, the player is greeted with surroundings that they can easily associate with a sense of approaching adventure.
Deciding on each separate story event allowed us to establish exactly which textures, sprites and sounds would be needed to finalise the visual interface presented to the player.
> A document containing an outline of the story’s key settings, characters and events.
Drawing from the inspiration we found in classic RPGs, all assets were produced in the basic pixel artwork program mtPaint, thus allowing for smooth tiling and a consistent art style throughout the course of the game. The resulting artwork was created in an effort to merge a range of bright background colours with endearing sprites, in hopes of formulating an attractive interface for the players to interact with.
The game features two main playable characters, although one of them only becomes active during the conversation state and in battle. While later stages of the game might have seen further characters joining the protagonists on their journey, the starting male/female duo are the ones who must appeal to the player in order to keep them interested in the game. We hoped to have them appear as two opposites who must work together to create a whole, reinforcing the party dynamic so often seen in JRPGs. Their names, Runa and Sol, are intended to reinforce this idea of opposites while remaining short, sweet and memorable to the player.
> Examples of Runa’s overworld sprites, showing frames of some of her walk cycles.
As the driving force in the game, the player takes control of the female character Runa, who uses a sword in combat and whose attacks are primarily of a physical nature. Chances to express the characters' personalities present themselves through their interactions with other characters and, in a more subtle manner, in their behaviour and the move sets they use in battle. For this reason the main character should appeal to the player as the sword-wielding 'hero' of the main two, relying on her brute strength to vanquish enemies. Sol, on the other hand, retains a mostly supportive role, with an arsenal of moves such as healing and other 'magic' in the form of elemental spells available for the player to use as they wish.
To highlight the differing nature of their roles, the colour palettes used to create discernible differences between their uniform armour are intended to complement each character. Runa, who was written as the more outspoken and physical of the two, is dressed in warm shades of red, while Sol's less explosive personality and calmer conduct in and out of combat is matched with cooler shades of purple and blue.
> Runa’s battle sprites vs. Sol’s battle sprites, in active and KO states. Their final versions are animated to add further dynamism to the battle instances.
Using a program called Tiled, we could draw our overworld environments using the images we had prepared.
> A couple of overworld map examples, created to act as templates for drawing them in Tiled. The top picture is a draft for the first dungeon, the labyrinth-like castle cellars, which was intended to act as a tutorial stage where the player kills low-level enemies to gain experience with the battle system. A large room sits near the centre of the map, intended to act as an arena for a boss battle to take place in.
The second is a design for a town area where shops and NPCs would be available to interact with.
An example of a world map drawn in Tiled, on which village environments and portals to dungeons can then be placed.
> An airship sprite that can be used for the sake of travelling over wider distances. Its subtle steampunk appearance helps to add a fantastical touch to the game.
> A multitude of NPC assets including standard villagers, an airship captain, a princess, and guards. Each is a familiar aspect of a typical western fantasy setting.
> A folder containing the kinds of assets our program is able to use, including 64×64 background tiles and separate frames for sprite animations.
PROBLEMS WE FACED AND HOW WE SOLVED THEM/PROTOTYPING
Due to the complexity and volume of the code, we frequently encountered errors that significantly slowed our work process due to our recurring inability to run the program as long as they remained unsolved. For example, one of the main offenders was the ‘vector overflow’ error, which could often be a crippling issue due to our program’s heavy reliance on vectors.
We initially kept code for the three main game states separate, having them act as prototypes that would not interfere with one another should one of them throw an error. This way, it was also easier to locate the cause of the problem as we had far less code to search through. Once we had established which versions of our codes were the most effective, we could then combine them into one working program.
In our initial version of the overworld map, the program broke when the player moved their sprite too close to the edge of the space; this disrupted how the program drew each background tile, giving them random placements. We also noticed problems with the program's frame rate, which dropped significantly when it entered the map screen. To remedy these issues, we made sure to load only the area of the map appearing on screen, instead of burdening the program with loading the entire map. The for loop loading the map tiles now starts at the top left of the game window and ends at the bottom right, capping the number of tiles the program has to search through to decide what is visible. Using this method, the number of tiles being searched dropped massively, from 10,000 to a mere 1,600.
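Our engine is written in C++ in Visual Studio, but the windowing logic is language-independent; here it is sketched in Processing-style Java, with the map and tile sizes as illustrative assumptions:

final int TILE = 64;                // pixel size of one tile (assumption)
final int MAP_W = 100, MAP_H = 100; // a 100x100 map: 10,000 tiles in total
int[][] map = new int[MAP_W][MAP_H];

// Draw only the tiles that intersect the camera rectangle; for a view of
// roughly 40x40 tiles that is ~1,600 lookups instead of 10,000.
void drawVisibleTiles(int camX, int camY) {
  int x0 = max(0, camX / TILE);
  int y0 = max(0, camY / TILE);
  int x1 = min(MAP_W, (camX + width) / TILE + 1);
  int y1 = min(MAP_H, (camY + height) / TILE + 1);
  for (int y = y0; y < y1; y++) {
    for (int x = x0; x < x1; x++) {
      drawTile(map[x][y], x * TILE - camX, y * TILE - camY);
    }
  }
}

void drawTile(int id, int sx, int sy) {
  rect(sx, sy, TILE, TILE); // stand-in for the real tile renderer
}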
Character interaction also presented some issues as we sought out a method that would best suit the choice-based conversations we hoped to include. The earliest test we conducted to progress toward our required outcome involved a pointer-based system that, depending on the choices made, displayed the corresponding dialogue ‘node’ in the console window based on its id. While this method was effective for handling a single conversation, we realised that for a dialogue-rich game it would not do to have every line of text written within the code. Instead, a system that involved reading text from external files would be needed to keep our code efficient and concise.
> Our first test into coding a conversation with a ‘tree’ structure.
> An example of a script for a ‘tree structure’ conversation, with each node and the answer leading to it having been colour-coded.
We therefore explored a variety of options, such as basic text (.txt) file reading, JSON and XML. We faced problems with the XML plug-in, which threw errors regarding its inability to parse correctly. We then turned to our final option, JSON, which proved easier to use because its plug-in works similarly to the XML one we had already become accustomed to during previous tests.
While the issues we faced unfortunately restricted the time that remained for us to fully implement the game’s story, we nevertheless met a number of the goals that we set ourselves in regard to the minimum viable product.
Since starting our development of the project, we recognised that our original concept was an overly ambitious idea that would prove very difficult for us to create in the given time with a team of only two people. Therefore, we began to prioritise the production of efficient code above all else, ensuring that we successfully completed a program that could execute each of the key mechanics required for the creation of an old-fashioned JRPG.
Each of the three main game states – the overworld, the battle instance and the character interaction – are complete, leaving us with the key code required to create a game such as the one we had envisioned. Given more time, we would take the opportunity to embellish our existing product with the story we had planned for the demo.
The controller input is fully functional, operating through a library written especially for use with the game. While we had initially hoped to produce a game utilising the player's motion as a means of control, we instead turned to the alternative option we had reserved in case we failed to implement our original idea. Another difficulty we faced while completing our code was the lack of coherent documentation relevant to the ideas we wanted to execute. Finding little to no references on using a handheld controller with Visual Studio, we were driven to code the library almost entirely from scratch. We made an effort to make the library as accessible as possible to anyone desiring to use it, clearly documenting our code so it can be easily read and understood.
We not only managed to make our code responsive to each separate button, but also successfully activated the controller's 'rumble' function, which could be used to add a more dynamic aspect to various areas of any game it may be applied to.
> This video demonstrates the code handling the random encounters and the resulting battle state.
Using our code, one could potentially create their very own game by inserting their own assets, music and text files in JSON format, and here we have provided an example of the results one could achieve using assets of our own. We aim to one day utilise our product to complete the vision we had in mind, and hope that the code we provided may also prove useful to those wishing to pursue a similar goal of creating an enjoyable game shaped around their own aesthetic, characters and story.
By Ahmed & Eoin
- Project Description
The idea for our project was to have graphics synchronised with MP3 audio to create an engaging piece that would have a lasting effect on our intended audience. My partner and I are able to manipulate the graphics and audio in real time for a performance. The project was built using Processing, the Minim library and added music (MP3) files.
Controls: the UP, LEFT and DOWN keys switch windows. A left-click feature is used on the first window to add mini audio visualizers.
We decided to split the roles between us, dividing who would handle the graphics and who the audio of the project.
Download link to project: AhmedCP
- Intended Audience and Background Research.
Our intended audience is artistic enthusiasts who enjoy performances that are different from the media norm. However, this project can be enjoyed by most people regardless of age, gender or profession. We wanted people to experience music in a new and more interactive way, so the audience can feel more involved and engaged in the performance. Our background research involved looking into many different styles of generative art and audio pieces, and how best they can complement each other in our project.
These images acted as inspiration for us to start building the foundations for the graphics we were going to create.
Minim played a big role in the success of our project, since we relied on it heavily. We looked through http://code.compartmental.net/minim/ so that we could gain a better understanding of how best to utilise it. We decided to focus on AudioPlayer (so that we could loop our MP3 file in our sketch), FFT (to analyse the spectrum of our audio buffer) and BeatDetect (to recognise rhythms for better synchronisation of our graphics and audio).
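A minimal sketch of those three Minim pieces working together (the MP3 file name is a placeholder):

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer song;
FFT fft;
BeatDetect beat;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  song = minim.loadFile("track.mp3"); // placeholder file name
  song.loop();
  fft = new FFT(song.bufferSize(), song.sampleRate());
  beat = new BeatDetect();
}

void draw() {
  background(0);
  fft.forward(song.mix);  // spectrum of the current buffer
  beat.detect(song.mix);  // rhythm detection on the same buffer
  if (beat.isOnset()) fill(255, 0, 0); else fill(80);
  rect(0, 0, width, 20);  // bar flashes on detected beats
  stroke(255);
  for (int i = 0; i < fft.specSize(); i++) {
    line(i, height, i, height - fft.getBand(i) * 4); // spectrum bars
  }
}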
- Design Process
I liked the idea of having the window filled with mini audio visualizers that move independently but share the commonality of reacting to the audio being fed into them. I began my design by sketching on paper; these were the initial designs I came up with. I then began thinking about how I wanted my mini audio visualizers to be displayed in my window: how they move, whether they can go off screen, and how I would plan for collisions. Once I had the basic idea, I wanted to add some interactivity. I added buttons to my design so that we could use them as filters on the audio, which would in turn affect the graphics, for example high- and low-pass filters, as well as buttons that would change the shape of my mini audio visualizers while keeping the same functionality.
- (Initial design for particle systems contained in window)
- (Design for controls on screen, with filter and shape manipulation buttons)
- (First designs for the mini-audio visualizers)
I ended up scrapping my idea of having buttons present on the screen to act as filters; although not that hard to implement, I felt that using them would create a disconnect with my audience during the performance. I decided that my controls should operate as much as possible off screen, so that they do not distract from the performance or inconvenience our audience.
After brainstorming my ideas and building code for my first drafts, I came to the conclusion that a mix of my ideas for the mini-audio visualizers, interactivity and filters would be best suited for our final project. This became the final design for our project, and we eagerly started building our code to best fit our vision.
I decided to make three windows for my project, each with different graphics, so that I could manoeuvre between them mid-performance. I thought this would add room for more creativity and also enjoyment for the audience. For the graphics, I planned for a mix of both 3D and 2D.
One of the issues I had trouble with was finding the most suitable audio for the graphics I was creating. If there was low compatibility, the project would lose a lot of value, as this was a key component we sought from the beginning. With thorough searching and comparing, I came across an MP3 that would best match our piece. It was from here that I could begin creating the graphics around it.
Also, my idea for the mini-audio visualizers was difficult to actualise. I began by pursuing a particle-system-like feature for my first window; however, the outcome was too sporadic and not as I had imagined. I then implemented classes and arrays together with mousePressed() to gain better control over how and where they would appear.
I believe our project was successful in that the graphics synchronised with the audio to a satisfactory level, with three different windows whose individual graphics best complement the audio. However, I believe there was also much room for improvement: we were not able to complete all our objectives, time never allowed us to present our project to an audience as we originally intended, and we were not able to implement the filters we planned for at the beginning.
- Reference and bibliography
Images used (labelled for non-commercial reuse and modification):
By Mateusz Janusz and Becky Johnson
Fractals and recursion are among the most exciting and fascinating parts of mathematics and computer science; the fact that even those who find no pleasure whatsoever in mathematics can still find interest in their visuality and complexity attests to that. Whether it is the naturally emerging fractals that occur in everyday life or computer-generated fractals such as the Mandelbrot set or the Menger sponge, fractals and recursion are the centre of focus of our project.
Our project is an art tool that takes the voice of the user and turns it into a piece of artwork using fractals and recursion. Using the analysis package of the Minim library, sound analysis is conducted on the audio input to alter the generated fractals. There is a choice of 5 fractal designs that can be selected at any point while the program is running, where the frequency and amplitude of the user’s voice alter the position, size and colour of the designs.
DOWNLOAD LINK FOR PROJECT
DOCUMENTATION ON INTENDED AUDIENCE
Our idea caters to an audience who do not necessarily have an artistic ability on canvas or through digital technology. The lack of artistic ability on canvas may apply to those of a younger age, while the lack of digital artistic ability may apply to a senior generation who may not have knowledge of digital mediums for creating artwork.
DOCUMENTATION ON INTENDED OUTCOME
The outcome we wanted to achieve was for our user to be able to create a unique piece of artwork featuring fractals and recursion by the use of their voice. By being able to control the volume and pitch of their own voice, the user can have an element of decision over the development of the artwork. We wanted the user with this control to be able to see the effect their voice was having on the design by the change in colour, size and detail of the fractal, and the position of the design on the sketch based on these components of their voice.
The enthusiasm to achieve this outcome was fuelled by both of us not having an artistic ability on canvas, and the opportunity to create something that looked intricate and personally unique just from the use of a voice was an exciting outcome to try to achieve.
Firstly, we researched sound, as we felt it was important when starting the project to make sure we understood its fundamentals.
In the beginning, we were advised to use the Minim library as it would be able to analyse the components of the audio input in the way we wanted. The main feature of our program involved taking the audio input from the mic of a laptop and analysing the amplitude and frequency to create useful values, so checking that Minim would do what we needed was extremely important to us at this stage. As well as Minim, we also looked at Processing’s Sound library, just in case we felt it would cater more to what we wanted. Overall, we felt from the documentation that Minim was more advanced, and decided to take the advice we were given at the start.
The main visual aspect of our project involved fractals and recursion so we extensively researched into these two topics. Firstly looking at the aspects of recursion and fractals in computer science and mathematics, and then looking at famous fractals to try and understand their algorithms to better our knowledge of recursion.
The famous fractals that we researched included the Barnsley Fern, the Mandelbrot set, the Menger Sponge, and the Sierpinski Triangle. This research into the algorithms was so that later on, once we were more comfortable using FFT and recursion, we could try to implement these fractals and manipulate them with the values obtained from analysing the audio input with FFT.
Name: Ken Headford
Ken is now a retired senior citizen and has taken up various hobbies in his retirement, such as oil painting, golfing, bird watching, and walking with a local group.
Ken does not have much experience with digital technology. He and his wife own a shared desktop computer, and he uses it to check his emails and to research his interests and hobbies. He tries to avoid using the computer as much as possible, as he finds he does not understand various components of searching online, how the computer works, or how to fix the Internet connection when it has disconnected.
Ken enjoys artwork, but has never branched into the digital side of creating art. He loves oil painting, however he finds it frustrating for the amount of time it takes to paint certain pieces.
Name: Emily Rowe
Occupation: Primary School Student
Emily is a primary school student who enjoys going to the park with her friends and family, playing video games with her siblings, and her favourite subjects in school are Art and Maths.
Emily has shown an early interest in technology, from playing video games with her siblings on consoles such as a GameCube and a PlayStation 3, and on the computer. Emily has also shown an interest in art, but does not think her drawings are very good and is impatient because she wants to make good pieces of artwork.
She dislikes using pencils and pens, as she and her siblings are either very forgetful or messy and they lose or break all of them. Their Mother does not buy them paint anymore because it is expensive and from past experiences furniture has been ruined because of it.
After becoming enthusiastic about the idea for our project and researching the topics associated with it, we started to focus on the user centred design and functionality of the project.
We brainstormed how to change and alter between drawing different fractals. A couple of ideas we had initially were creating a counter that would change the current fractal mode depending on the time passed, or having the values of the amplitude and frequency change the current fractal mode. However, we felt that neither option gave the user as much freedom to control the visuality of the sketch as our intended outcome required.
We also thought about the personas that we had created, and felt that the most simplistic user interface would be the most desired for them due to age or lack of technical knowledge. We felt that the focus for the user should be on what he or she would create, not on the appearance of the project’s user interface.
The first part of our build was getting the audio input from the mic of a laptop and being able to work with the values it produced. One of our problems was that the documentation for the Minim library did not seem very simple, and the examples for how to do this were very complex and did not cover exactly what we needed. From this, we decided to look elsewhere to research how to do this.
We found a video https://vimeo.com/7586074 that described exactly what we needed to do in order to take the input from a mic and perform FFT on the input. We transcribed this video and this served as an extremely simple template for which we could test design ideas using FFT to influence the designs.
The template worked by creating and initialising the objects we needed for analysis: a Minim, an AudioInput, and an FFT object. The AudioInput object uses the getLineIn() method of the Minim object to get an AudioInput whose audio buffer is 256 samples long. Later on in the build of the project, we increased the sample size to 1024. This size needs to be a power of two because the FFT object requires a buffer of such a size.
After calling the forward function of the FFT object on the mix of the Audio Input’s left and right channels, a for loop iterates through the number of bands in the FFT object. The getBand(i) function returns the amplitude of the current frequency band. With this, we could start to use the values of this input in conjunction with fractal images.
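To make the numbers concrete, the relationship between the buffer size and the frequency bands can be sketched in a few lines (plain C++ purely for illustration, since this project itself was written in Processing with Minim; the 44100 Hz sample rate is an assumption):

```cpp
#include <cstdio>

int main() {
    const int   bufferSize = 1024;            // must be a power of two for the FFT
    const float sampleRate = 44100.0f;        // assumed laptop mic sample rate
    const int   numBands   = bufferSize / 2;  // usable bands up to the Nyquist frequency

    // Band i covers frequencies around i * sampleRate / bufferSize, which is
    // roughly what getBand(i) indexes into after calling fft.forward(in.mix).
    for (int i = 0; i < 8 && i < numBands; ++i) {
        printf("band %d ~ %.1f Hz\n", i, i * sampleRate / bufferSize);
    }
    return 0;
}
```

Quadrupling the buffer from 256 to 1024 samples makes each band four times narrower, which is why the larger buffer gives finer frequency resolution.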
In creating the project, a lot of the issues we had stemmed from trying to use recursion, as it was very easy to create a base case that did not escape infinite recursion and would therefore crash Processing. Once we had grasped the logic of creating and manipulating a variable so that it would definitely reach or pass a certain value, the number of crashes was limited.
At first, it was very difficult to incorporate recursion using the values from FFT. Our very first sketch to successfully use the values of the audio input with FFT created recursive ellipses that rotated based on the value of the frequency. When trying to use the values of our FFT as exit conditions in the recursion, things became significantly more difficult. The FFT values returned ranged tremendously, especially when multiplied by a large coefficient to make them usable in the fractal designs.
This was solved by trial and error, measuring each coefficient as a multiplier against these values. We tested our voices by varying the amplitude and frequency and recorded the results to investigate the range of values, in order to plot our exit conditions.
Creating this simple design was, however, the starting point for being able to create more complex fractal designs, with a knowledge of creating exit conditions for the recursion using the FFT.
Our next design was adapted from a recursive tree example by Daniel Shiffman, where the coordinates of the mouse alter the number of branches as well as their length. Instead, we wanted the amplitude and frequency of the audio input to alter the size, length and number of branches. So we created a condition for the branches to “start drawing” if the amplitude of the left channel of the stereo input was over a certain amount, and if so, to rotate these branches by the amplitude of the current frequency band. Both of these values were multiplied by larger coefficients to transform them into more useful values.
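A minimal sketch of this kind of adaptation is shown below, written in openFrameworks-style C++ for consistency with the other projects documented here (the original was a Processing sketch; amplitudeThreshold and angleScale are illustrative names, not the project’s):

```cpp
#include "ofMain.h"

// Classic Shiffman-style recursive branch: each level shrinks the length,
// and the rotation angle is driven from outside by an FFT band amplitude.
void branch(float len, float angle) {
    if (len < 4) return;          // exit condition independent of the FFT,
                                  // so the recursion always terminates
    ofDrawLine(0, 0, 0, -len);
    ofTranslate(0, -len);

    ofPushMatrix();
    ofRotateDeg(angle);
    branch(len * 0.67f, angle);
    ofPopMatrix();

    ofPushMatrix();
    ofRotateDeg(-angle);
    branch(len * 0.67f, angle);
    ofPopMatrix();
}

// Called from draw(): only start drawing when the input is loud enough,
// then let the current band's amplitude (times a hand-tuned coefficient)
// set the branching angle.
void drawTree(float leftLevel, float bandAmplitude) {
    const float amplitudeThreshold = 0.05f;   // illustrative values
    const float angleScale         = 200.0f;
    if (leftLevel < amplitudeThreshold) return;
    ofPushMatrix();
    ofTranslate(ofGetWidth() / 2, ofGetHeight());
    branch(120, bandAmplitude * angleScale);
    ofPopMatrix();
}
```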
At this point, we had a conflict between the current design and the initial outcome we had wanted for the project. What we had wanted was for the values of the audio input to create a growing, slowly emerging fractal image. Instead, the fractal appeared with a “stamping” effect on the sketch, appearing as a singular fractal shape that differed only in branch length and colour based on the FFT values. We tried to fix this problem by using an example of a generative design that imitated the development we wanted.
After trying to fix this problem for a long time and because of the time constraints we had on the project, we decided to try for a simpler technique of making the fractal stamp in different positions in the sketch instead of starting in the same place each time.
A solution we found was to translate the entire design by the FFT values: its x coordinate by the sine of the amplitude of the current frequency band, and its y coordinate by the cosine of it. We implemented this technique for our other fractal designs using varying coefficients. We decided at this point that although it was not the outcome we had wanted for the project, it was best to move on and try to create more of these recursive fractal sketches, given the time constraints we had.
Once we had created this branching fractal design with movement we were happy with, we made a prototype of the sketch with the user interface we had designed based upon our wireframes.
Once happy with this prototype, we decided it was best to continue working on our own fractal designs and on implementations of the famous fractals from our background research. Once we were happy with each, we would add it to the prototype with the same functionality as the first.
Some of those which we had researched in the beginning were too difficult for us to incorporate with FFT analysis. For example, we tried to use the values of the amplitude and frequency to alter the appearance of the Mandelbrot set. However, it was too difficult for us to manipulate the formula for plotting which numbers diverged to infinity or stayed bounded, and we encountered a lot of problems with infinite recursion as a result. The values in the Mandelbrot set, coupled with the values from the FFT, proved far too complicated for us to create something even mildly visually satisfying, so we moved on to other fractals that we had researched.
These included the Sierpinski Triangle and the Barnsley Fern. Through our research, the algorithms for these two fractals had been the easiest for us to understand. The Sierpinski Triangle is an example of a finite subdivision rule, where the shape is broken down into smaller pieces of itself. The Barnsley Fern is an example of an iterated function system, meaning it is composed of several contractive copies of itself that draws the points of itself closer and closer together for however many iterations.
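To illustrate what an iterated function system boils down to, here is a minimal Barnsley Fern sketch using the standard published coefficients (plain C++; the project itself drew its fractals in Processing):

```cpp
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Each iteration picks one of four affine maps at random (with fixed
// probabilities) and moves the current point; after a few thousand
// iterations the points contract onto the fern shape.
std::vector<Pt> barnsleyFern(int iterations) {
    std::vector<Pt> pts;
    double x = 0, y = 0;
    for (int i = 0; i < iterations; ++i) {
        double r = rand() / (double)RAND_MAX, nx, ny;
        if      (r < 0.01) { nx = 0.0;                  ny = 0.16 * y; }                     // stem
        else if (r < 0.86) { nx =  0.85 * x + 0.04 * y; ny = -0.04 * x + 0.85 * y + 1.6;  }  // main frond
        else if (r < 0.93) { nx =  0.20 * x - 0.26 * y; ny =  0.23 * x + 0.22 * y + 1.6;  }  // left leaflet
        else               { nx = -0.15 * x + 0.28 * y; ny =  0.26 * x + 0.24 * y + 0.44; }  // right leaflet
        x = nx; y = ny;
        pts.push_back({x, y});   // plot these points, scaled, to see the fern
    }
    return pts;
}
```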
One problem we faced at this point, when trying to implement all of our designs together, was that the “button menu” we had designed was being drawn over by the fractals. At this point, the code for the project was incredibly messy, as we had underestimated how difficult it would be to implement all of these drawing modes together.
It was an incredibly tedious problem: simple things such as different colour modes being active (RGB for the button menu and HSB for colouring the fractals), as well as different stroke, fill, pushMatrix, and popMatrix function calls, interfered with each other. The solution was a case of reading carefully through the project and neatening up the code until there was no more interference between functions.
After adding all of our drawing modes together and completing the functionality of the user-centred design, it was a case of trial and error to make the drawing start above a specific frequency value for each fractal. This did not take long in the build, and formed the finishing touches to our project.
EVALUATION OF PROJECT
In terms of what our intended outcome was for this project, there are a few things that we think were successful and a few that did not match up to our original hopes.
For instance, we had wanted our fractals to “grow” so that users could see exactly how their voice was altering the position and movement of the design. We had had this issue from the beginning of the build and, due to time constraints and after many days of trying to achieve the effect, we realised it was not practical to keep trying in this instance. Instead, we tried to make up for the problem by keeping the “stamping” effect, but using the FFT values of the user’s voice to change the position where the fractal stamped. Although this was not what we had initially wanted, we felt it still gave the user freedom to partially control the design, and that was the main desired outcome of our project.
We felt we successfully correlated the values of the amplitude and frequency to noticeable colour changes that the user could see and control. For example, the higher-pitched or louder the voice, the stronger the hue and saturation of the colours, resulting in reds, pinks, and bright blues.
We feel that the program does not give the user the extreme control that we had hoped for though. For example, the user does not know exactly what his or her voice will create in each drawing mode, and in the beginning that was one of the aspects that we were really enthusiastic about. We understand that this lack of control stems from our previous problem of not being able to design the fractals to grow with the person’s voice, and that this desired outcome outweighed our knowledge in programming so it was unrealistic for us to expect to be able to create it so easily. However, we are happy with the results that can be obtained from using the program because we feel that it can create unique and complex pieces of artwork, and that was one of our desired outcomes.
For both of us, our knowledge of sound analysis and Minim started at zero, and after completing this project we feel our confidence in this area has increased immensely. Although the result is not perfect or fully in sync with our original desired outcomes, we feel we have achieved a large percentage of the overall outcome we wanted.
CONVARTSATION ART GALLERY
- Documentation for Minim library – http://code.compartmental.net/minim/
- Processing’s Sound library – https://processing.org/reference/libraries/sound/
- How To Use FFT and Minim Tutorial Video – https://vimeo.com/7586074
- Daniel Shiffman, Recursive Tree – https://processing.org/examples/tree.html
- Generative Design – https://github.com/generative-design/Code-Package-Processing-3.x/blob/master/01_P/P_2_2_4_01/P_2_2_4_01.pde
- Fractals: Nature of Code, fractal and recursion research – http://natureofcode.com/book/chapter-8-fractals/
- Fractals: Fractal Foundation – http://fractalfoundation.org/resources/what-are-fractals/
- Recursion: Introduction to Programming in Java – http://introcs.cs.princeton.edu/java/23recursion/
- Recursion: Khan Academy, Recursive Algorithms – https://www.khanacademy.org/computing/computer-science/algorithms/recursive-algorithms/a/recursion
- Recursion: Wikipedia, Recursion – https://en.wikipedia.org/wiki/Recursion_(computer_science)
- Minim: Audio Visual Programming – http://sweb.cityu.edu.hk/sm1204/2012A/page20/index.html
- Minim: FFT documentation – http://code.compartmental.net/minim/javadoc/ddf/minim/analysis/FFT.html
- Minim: Processing Tutorials by Daniel Shiffman – https://www.youtube.com/user/shiffman
- Generative Drawing – http://www.generative-gestaltung.de/P_2_2_4_01
N.B. A few of the clips have been sped up severalfold to fit into a video of somewhat bearable length.
A couple of results:
(The architecture can be both mapped onto the building or displayed separately without the original background)
ditto is a piece of software that uses computer vision to attempt to break down images into components we understand as humans and then rebuild other images out of the extracted components of the original image, or rebuild itself out of its own pieces.
The software is built to work especially well in the domain of architecture, buildings, and interiors, largely due to straight lines being a staple part of their form.
On some levels it aims to elicit consideration of some metaphysical, and perhaps even epistemological, questions around the idea of what it means to be a piece of software with such powers in modern computer vision. It attempts to create an abstract, fragmented environment in which one building is built out of parts of another.
Audience, Research, and Intended Outcomes
Original audience writeup:
Whilst the project will be in the form of an app itself, it will aim to produce tangible pieces of ‘artwork’ based on the original image. This will, on one level, allude to the occasional erroneous nature of computer vision in unprecedented ways (comedic, dark?) depending on how the algorithm divides the image and what it deems a similar image to replace it with each time.
Conversely, it will also aim to exert a force for considering a ‘dumb’ piece of software’s autonomy and humanistic decision-making, along with its capacity for reasoning and human-esque visual analysis. Eliciting thoughts in the audience of how authentic the software’s ‘vision’ is and to what extent it is on par with a human’s, the project will also encourage users to question to what extent the software is autonomous.
The audience will not necessarily be people with explicitly expressed interest in computer vision and the project will aim to make an impression on those who are maybe unaware of the competence modern Computer Vision holds.
Whilst the original intended audience and intended outcomes still hold, the project developed on another level by an exploration of how the ‘mind’ of the software is working and thinking when trying to find segments of best fit, and attempts to explore questions regarding this. (can we even talk about software having a mind or a thought process? What does it mean to be capable of thought to a human-esque level and what distinction is there between us as humans and the inevitable eventual genuinely autonomous software/AI? How do we map this to our current judicial system?)
W/r/t how this is achieved or explored in the final piece, the final imagery output is also complemented by the live video visualisation. Inspired directly by discussions with Vytas, working in an element of location-based sound, visualising in fragments the ambient background noise with which the photos were taken, will hopefully also be part of the visualisation.
We attempted to meet the initial desire to reproduce this fragmented mapping of one building into the domain of another with the tangible photos reconstructed at the end, once ditto is sure it has found the best match for each segment, given the pool of segments from the other photo.
Following Vytas’ input to map sound: Fragments of the ambient soundscape of the pool of pieces being used to rebuild are sampled with their attributes being a function of how accurately the segments compare (A less accurate comparison yields a slightly ‘off’ sound fragment):
The project undoubtedly draws massive inspiration from Bitnik’s h3333333k and Darknet Shopper – h3333333k aesthetically and in the way software can modify imagery, and Darknet Shopper in its raising of metaphysical questions such as what it means to ‘be’ a piece of software.
Design Process (Creative)
The project’s creative idea was conceived with Matt, and we initially had the idea of simply passing each segment of the input image to a huge database of unrelated, unknown images (viz. Google Images’ similar-search API). This idea was largely inspired by Bitnik’s Darknet Shopper, and I felt there were potentially some interesting ethical questions to be raised regarding the liability and legality of the software in the contents of its returned images – potentially sometimes profane, offensive, jestful, just plain weird, etc.
Via technical aspirations, and predicated on limitations in the availability of a public Google Images API, the idea developed into attempting to model one style of architecture from another using two provided images, algorithmically and image-agnostically (à la h3333333k).
Matt’s creative direction led him to pursue the exploration of the project through different means. Namely this was via a combination of Harris Corner detection and ofxDelaunay within the same image.
Matt’s post with ofxDelaunay: http://igor.gold.ac.uk/~skata001/creativeprojectspace/2016/03/25/ditto-delaunay/
Whilst both implementations fall under the same creative project guise, they both differ vastly in concept.
It’s satisfying on some primitive level to watch the software run and think about it anthropomorphically – visualising it as a person thinking hard about which pieces have most resemblance when the program takes a few milliseconds longer to make a decision about a segment.
Design process (Technical)
There are two prominent top-level components of the app: the input image segmentation, and the segment comparison and replacement.
Initially tasked with the segmentation problem, I was struck a priori by the potential usefulness of a convolutional neural network; with the foresight that it might be useful on the comparison side of things to group and label segments semantically, this and the Bag-of-Words approach sounded hopeful.
The semantic classification would allow easy comparison via a simple Dictionary-esque lookup, and the neural net could be trained with photos I had taken in burst mode.
Unfortunately, a supervised machine-learning technique like the one mentioned prior is by definition constrained by its understanding of its training set. For this reason, it proved not to be ideal for a robust, cross-domain image segmentation procedure, as segments were likely to be far too arbitrary to be classified into a vocabulary with the scope to describe every conceivable type of segment.
See my blog post on CNN research:
It’s axiomatically true that a large number of architectural epochs featured a strong use of straight lines. As this holds for a large amount of architecture (viz. the work of Tekuto, used for the project initially), locating straight lines with Hough Lines [2a] appeared to be a promising first step for feature extraction based on walls, edges, windows, etc.
See my initial post on Hough Lines:
Hough Lines worked fantastically, using the probabilistic transform for finding lines, but what I really needed was to find the extremities of these lines. Harris corner detection did its job in finding corners, but it also picked up a lot of extraneous points, such as the corners of flowers in the background. This led me to try the non-probabilistic version of the algorithm, which extends the line segments as rays; I could then identify where these ‘rays’ intersected, theoretically giving me the edges of valid parts of the building. This iteration yielded fantastic results; it was truly great as a general-purpose approach for finding the edges of bona fide walls and features of interest.
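For reference, both variants are single OpenCV calls; the sketch below shows the two side by side (parameter values here are illustrative, not the project’s):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

void findLines(const cv::Mat& input) {
    cv::Mat gray, edges;
    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);   // edge map feeding the Hough transforms

    // Probabilistic transform: returns finite segments as (x1, y1, x2, y2).
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(edges, segments, 1, CV_PI / 180, 80, 30, 10);

    // Non-probabilistic transform: returns (rho, theta) pairs describing
    // infinite lines -- the 'rays' whose intersections bound the features.
    std::vector<cv::Vec2f> rays;
    cv::HoughLines(edges, rays, 1, CV_PI / 180, 100);
}
```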
See my post on rethinking HoughLines:
Something that proved extremely useful for rapid prototyping was following Theo’s suggestion to build a command-line branch. This allowed a lot of the parameters to be made ‘virtual’ and allowed me to write a series of shell scripts for prototyping at great speeds (see http://gitlab.doc.gold.ac.uk/joldf001/creative-project/blob/command-line/readme.md)
Repo and Executables
http://gitlab.doc.gold.ac.uk/joldf001/creative-project (`command-line` branch)
http://gitlab.doc.gold.ac.uk/joldf001/creative-project/tree/delaunay-segs (Matt’s delaunay variation)
As mentioned prior, Hough Lines worked great for locating the lines of the buildings (and interiors too). The program then works out the intersections of the Hough Lines’ vector of rays. On reflection, with candour, I rather naively thought I could implement the (comparatively) advanced Sweep Line [3a] algorithm from the field of computational geometry. My poverty of knowledge w/r/t this subject meant it was not a trivial task – I did, however, end up implementing an attempt in commit ‘d283a113’ based on this [3b] implementation.
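The brute-force pairwise pass that the final implementation performs can be sketched as follows (plain C++ with a minimal point type; not the project’s code):

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct P { float x, y; };

// Intersection of the two infinite lines through (a1, a2) and (b1, b2);
// returns false if the lines are (near-)parallel.
bool intersect(P a1, P a2, P b1, P b2, P& out) {
    float d = (a1.x - a2.x) * (b1.y - b2.y) - (a1.y - a2.y) * (b1.x - b2.x);
    if (std::fabs(d) < 1e-6f) return false;
    float t = ((a1.x - b1.x) * (b1.y - b2.y) - (a1.y - b1.y) * (b1.x - b2.x)) / d;
    out = { a1.x + t * (a2.x - a1.x), a1.y + t * (a2.y - a1.y) };
    return true;
}

// O(n^2): test every pair of rays once.
std::vector<P> allIntersections(const std::vector<std::pair<P, P>>& rays) {
    std::vector<P> pts;
    for (size_t i = 0; i < rays.size(); ++i)
        for (size_t j = i + 1; j < rays.size(); ++j) {
            P p;
            if (intersect(rays[i].first, rays[i].second,
                          rays[j].first, rays[j].second, p))
                pts.push_back(p);
        }
    return pts;
}
```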
My final implementation finds all intersections of the line segments on screen and creates a number of bounding boxes between these points of intersection, within reasonable distances, so as to extract only what would be considered bona fide features. This marks the bounding box for the image segments, with the corners of the features being the corners of each image segment:
The images inside these bounding boxes, whilst containing the desired segments themselves, do have extraneous surrounding pixels due to the requisite rectangular output format:
The next step is to then use another computer vision technique to render the obsolete surrounding parts of each segment transparent. I implemented a version of this myself at first (see commit ‘e5c083ed’):
This worked fine but it really wasn’t too tidy. In the end, I decided to compute each segment’s contours with an OpenCV method, and then find the largest-area contour in the returned vector, which would give me (with some non-negligible accuracy) the position of the desired segment. By working in OpenCV-style matrices, I created a mask out of the area of the desired contour and, at a high level, ‘subtracted’ this from the input image matrix, leaving the desired segments themselves (git checkout ‘23c98d0e’). See below the same segment with the much cleaner background removal:
Before and after for two buildings’ segments:
Some more segmentation results:
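The contour-and-mask background removal described above can be sketched with standard OpenCV calls (this is a hedged reconstruction, not the project’s exact code):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat extractSegment(const cv::Mat& segment) {
    cv::Mat gray, binary;
    cv::cvtColor(segment, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Find all contours, then keep the one with the largest area --
    // taken here to be the desired feature.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return segment.clone();
    int best = 0;
    for (int i = 1; i < (int)contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best]))
            best = i;

    // Fill the winning contour into a mask and copy only the pixels under
    // it; everything outside the contour is left blank.
    cv::Mat mask = cv::Mat::zeros(segment.size(), CV_8UC1);
    cv::drawContours(mask, contours, best, cv::Scalar(255), cv::FILLED);
    cv::Mat result = cv::Mat::zeros(segment.size(), segment.type());
    segment.copyTo(result, mask);
    return result;
}
```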
Comparison & Replacement
Intelligent comparison of complex imagery remains a challenging problem. Thankfully, each segment I had extracted came with a minimal representation (viz. the contour with the largest area). Being able to quantify the contours in these terms meant I had a uniform measurement for comparison. This contour data is used to compute the segment’s Hu invariants (which are independent of scale, rotation, etc.), and then a metric of how different these primitive representations are can be computed easily:
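One standard way to obtain such a metric is OpenCV’s matchShapes, which compares two contours’ Hu invariants directly; whether ditto calls this or computes the moments itself, the idea is the same:

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Lower return value = more similar shapes. The comparison is built on Hu
// invariants, so it is independent of scale, rotation, and translation.
double contourDistance(const std::vector<cv::Point>& a,
                       const std::vector<cv::Point>& b) {
    return cv::matchShapes(a, b, cv::CONTOURS_MATCH_I1, 0.0);
}
```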
This simple one-dimensional output is then used to find the best replacement w/r/t its contour, which is then reinserted at the original top-left position, cached from earlier in the program’s lifespan. Operations are also performed on the segment to find the best size whilst maintaining the original aspect ratio.
I split this whole comparison subroutine off into a separate thread, to be able to visualise the process in real time in the main thread in a non-blocking manner.
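A minimal sketch of that split, using std::thread (openFrameworks’ ofThread would serve equally well; names here are illustrative):

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> comparisonDone{false};

void compareAllSegments() {
    // ... the heavy Hu-invariant comparisons happen here ...
    comparisonDone = true;   // main thread polls this flag each frame
}

void startComparison() {
    std::thread worker(compareAllSegments);
    worker.detach();   // the main (drawing) thread carries on unblocked
}
```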
As touched on prior, the sound element of the visualisation is an extension of the concept of mapping one piece of architecture to the domain of another. Adding in the sound dimension is an attempt to further emulate the environment of the original image in the second image. (In the sound YouTube example above, ditto attempts to rebuild the Shard out of its own segments, but it is not allowed to put a segment back in its original place – the program can either be run recursively like this or be used with separate images.)
Each instance of the sampled ambient sound has its pitch and speed set as a pure function of the segment’s binary contour comparison, mapped between preset bounds.
We knew the project would be challenging, and I feel it has been largely successful in most meaningful capacities. Looking at the end results, it’s important for me to remember that the ostensibly trivial task of re-creating images out of other images – whilst perceptually child’s play for humans – is something historically challenging for computers, and therefore expecting human-esque levels of image recognition and replacement is too fanciful.
I’m compelled to opine that it’s somewhat regrettable that the line intersection algorithm works in O(n^2) time, rather than the more elegant O((n + k) log n) version. On the other hand, whilst the code in and of itself is a big part of the creative process, the code is a means to an end here: the potential performance hit is irrelevant (and would be merely negligible even if it were part of each frame’s routine), and worrying about efficiency with such pedantry is rendered nonsensical.
I feel that the two projects nicely complement each other in their shared creative endeavour, successfully exploring two different means of achieving shared ends. Given Matt’s differing creative direction, it would have resulted in an unnecessarily convoluted project had we tried to keep everything under the same build, as both have their own conceptual implications and differing technical approaches.
Whilst the results are predictably abstract and not quite perfect, I feel the software does a fantastic job of taking in a wide range of imagery of all types and rebuilding a piece with some resemblance. I’m largely pleased with the end technical result, especially considering computer vision was a whole new thing to me prior to this project. To what extent the piece elicits the kinds of questions I was initially concerned with is obviously not for me to decide, but I do feel that the added live visualisation does a lot, for me personally, in alluding to the themes that initially inspired the project.
References / Footnotes
[1] CNN blog post – http://james-oldfield.github.io/creative-projects-blog/2016/01/14/convolutional-neural-networks.html
[2a] Hough Lines – http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html?highlight=houghlines#houghlines
[3a] Sweep Line algorithm – https://en.wikipedia.org/wiki/Sweep_line_algorithm
I do not hold the rights to any of the raw images used above, minus the two photos of Peckham’s Burger King.
Photos taken from:
http://static1.squarespace.com/static/51a61546e4b040da1a8874d0/t/53b88c36e4b0d5e7513de5f0/1404603477874/
http://www.jannickfjeldsoe.com/blog/
by Akira Fiorentino and Uyen Tran Hong Le
XPLOR is an immersive exploration game / visual experience, viewed from a top-down perspective, where the player moves in 2D but the environment is in 3D.
The player controls a virtual fish-like life-form, guiding it around in an abstract environment populated by large, pulsating blob-like creatures.
The goal is to stay alive by consuming the food whilst avoiding being absorbed into the giant creatures. Eating food increases the score. The player has 3 lives; once they are all lost, a game over is triggered, which displays the score and prompts the player to restart.
Our first focus is to develop a game that employs generative randomness, used in the creatures. They are symmetrical shapes made from a single equation called the Superformula, discovered by Johan Gielis in 1997. In the game, every time the creatures go off screen, their appearance (colours, shading colours, resolution) and behaviour (amount of oscillation, speed and direction of rotation) are randomised.
Randomness also applies to the positions of the food objects. Altogether, this creates a sense of unpredictability, capturing the player’s attention.
The second focus is on visual aesthetics: we wanted to explore ways to create a mesmerising art game with a futuristic and mystical style. This is shown through the alluring shading of the creatures, the glowy food spheres, and a dark background that subtly changes colour, contrasting with the tiny particles to create a nice gleaming effect.
We believe our program can appeal to both tech-savvy people, who will take interest in the super formula and its implementation, as well as people who aren’t familiar with programming concepts, who will simply enjoy the visual experience. In particular, our program could easily appeal to a very young audience (ages 2 to 10), given the simplicity of the controls, and the colorful visual appeal.
Our main sources of inspiration are flOw and Spore, in terms of the game mechanics, aesthetic style, and top-down view with 2D gameplay in a 3D environment.
The generative randomness is inspired by the idea of things changing when the player isn’t looking (Creepy Watson).
(More on this can be found on our Creative research)
Understanding the Superformula:
The Superformula is a geometric equation that can create many organic and natural shapes with 6 parameters: a, b, m, n1, n2, n3. It is built from the Pythagorean theorem with exponent ‘n’ in place of the squared exponent. Below are our notes on understanding the formula at a low level:
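For reference, the Superformula in its standard polar form is:

$$ r(\varphi) = \left( \left| \frac{\cos\left(\frac{m\varphi}{4}\right)}{a} \right|^{n_2} + \left| \frac{\sin\left(\frac{m\varphi}{4}\right)}{b} \right|^{n_3} \right)^{-\frac{1}{n_1}} $$

Here m sets the rotational symmetry (the number of ‘corners’), a and b scale the two axes, and n1, n2 and n3 pinch or bloat the shape between those corners; sweeping φ around the circle (and a second angle, for the 3D version) produces the mesh vertices.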
(More on background research: Technical Research)
We started off by working on the creatures, using Superformula3D by Kamen Dimitrov. In his code, the shape is a plain white mesh and does not move, so we needed to customise it. Having the GUI allowed us to adjust the values of the 6 parameters, making it much easier to understand how the formula and the mesh work together to create beautiful symmetrical shapes:
- Making the movement and polishing the appearance:
We wanted to create something similar to the pulsating movement in Cindermedusae, which we found very natural and mesmerising. To do this, we added sine and cosine oscillations to each of the 6 parameters. We then used booleans as buttons to turn each parameter on and off, allowing us to see how adding oscillation to each parameter would affect the movement of the shape as a whole:
(Note: n1value is not used because it defines number of angles (m))
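A minimal sketch of this per-parameter oscillation (openFrameworks-style C++; names are illustrative, not the project’s):

```cpp
#include "ofMain.h"

// Returns the parameter's base value plus a sine oscillation; each parameter
// gets its own amount/speed, and a boolean toggles it on and off.
float oscillate(float base, float amount, float speed, bool enabled) {
    if (!enabled) return base;
    return base + sin(ofGetElapsedTimef() * speed) * amount;
}

// In update(), for example:
//   n2 = oscillate(n2Base, n2Amount, n2Speed, oscillateN2);
//   n3 = oscillate(n3Base, n3Amount, n3Speed, oscillateN3);
// Randomising each creature's amounts and speeds gives each one its own pulse.
```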
Once we were happy and satisfied with the movement, the next step was to turn one creature into a family of creatures, each having its own unique appearance and behaviour.
First, for each creature to have a unique behaviour, we randomised the oscillations of the 6 parameters (amount and speed of oscillation), direction and speed of rotation, number of vertices (or resolution) and size.
Second, we added the shading using ofLight (sets point light, ambient, diffuse and specular colour) and ofMaterial (sets the brightness, saturation and shininess) to create a glossy and polished look. All of these, together with the mesh colour, are randomised with ofRandom(). Doing this has made the creatures look more realistic and captivating!
THE FISH-LIKE LIFE FORM:
Initially, we wanted to create an object that looks and moves like a realistic fish. In the first version, we created the fish using many ofConePrimitive objects added together, which we were not satisfied with:
We realised it would require a lot of work for beginners like us to create a realistic 3D object, let alone its movement. At this point, it was suggested that we look into Karl Sims‘ swimming behaviour in Evolving Virtual Creatures. This behaviour is achieved by ‘turning off gravity and adding the viscous water resistance effect,’ as explained by Karl in his paper. However, we did not wish to employ the same method, but rather used it as inspiration for our life-form.
Since the life-form is controlled by the player, it needs to follow the mouse position. We therefore looked into Processing example follow3. It is a simple 2D snake that does not move but only rotates itself towards the mouse position.
The video below is our recreation of follow3 in 3D:
This, combined with Karl Sims’ swimming behaviour, was exactly what we needed. Finally, we managed to create a 3D life-form which follows the mouse and moves with Karl Sims-inspired behaviour.
We wanted the food to have a glowy / bloomy effect to make them stand out against the dark shader background, and thus making it easier for the player to look for the food. We stumbled upon a beautiful glowy mesh made by a digital artist called Reza Ali, which looks very close to what we wanted:
The final food object (below) is made up of many nodes, drawn in an orientation of a 3D sphere, where each node is texture mapped with a blurry white dot image to create the bloom effect:
THE ABSTRACT ENVIRONMENT:
This is created with little shiny particles floating freely around, on top of a dark shader background that smoothly changes colours as the player moves to create the illusion of movement:
- The background: the background simply takes the RGB values of the background colour and assigns increments or decrements to them; once one of the values reaches its minimum or maximum, it is given a new random increment or decrement accordingly (sketched in code just after this list). The background was subsequently changed from using the entire RGB colour space to a much darker tone, to add contrast with the glowing particles.
- Particles: adapted from openFrameworks’ billboardExample, the particles are little floating nodes that move randomly using noise. Together with the smooth 360° camera rotation, they help to create a fluid and floaty environment.
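Sketched out, the background colour random walk looks something like this (openFrameworks-style C++; the bounds and step sizes are illustrative, chosen to stay within a dark band):

```cpp
#include "ofMain.h"

float bg[3]   = { 20, 20, 40 };           // dark starting tone
float step[3] = { 0.10f, 0.15f, 0.12f };  // per-channel drift per frame

void updateBackground() {
    for (int i = 0; i < 3; ++i) {
        bg[i] += step[i];
        if (bg[i] < 10 || bg[i] > 60) {        // hit an edge of the dark band
            bg[i]   = ofClamp(bg[i], 10, 60);
            step[i] = ofRandom(-0.2f, 0.2f);   // pick a new random drift
        }
    }
    ofBackground(bg[0], bg[1], bg[2]);
}
```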
We loved the look and feel of the Superformula creatures, and we thought that they could be the focus of the game’s visual aesthetics. Looking at them moving, we felt there was an element of something rough and strong, set out by the restrained forms of the skeletal mesh, yet at the same time something so free, expressive and magical in the way they look and move. Altogether, they strike a harmonious balance between the two elements, and this helped us decide on the visual style of the game as futuristic and mystical.
MUSIC AND SOUND EFFECTS:
To accompany the chosen art style, we chose ambient background music and ambient musical notes as the sound effects. These further enhanced the gameplay experience by adding a thrilling and adventurous atmosphere to the game.
Problems & How We Solved Them
- Adding movement and constraining sizes: we thought that creating movement would take the same approach as adding colours (that is, looping through each vertex and adding oscillation). However, this was the wrong approach, as it changed the positions of the vertices and thus distorted the mesh. To solve this, we added oscillation directly to the 6 parameters instead of to each vertex, with booleans to turn each parameter on and off in turn, to see how oscillating each parameter would affect the movement of the shape as a whole. This way, we were actually using the superformula parameters, not distorting the pre-made shape.
Doing this also allowed us to see exactly which parameters make the creatures big (since the values are random); we noticed these were n3value and n4value, so we reduced the amplitude and vertical shift for these 2 parameters to prevent the creatures from getting too big.
- Speed vs resolution tradeoff: setting a high resolution would make the creatures look better, but it significantly slowed down the game and caused lag, so we had to reduce it a little to ensure a smooth gameplay experience.
- Randomising mesh colour: Unfortunately, we were never able to randomise the colour of the creatures themselves when they go off screen. So at the moment, the 3 creatures change their shapes, movements and lighting colours, but their mesh colour stays the same.
- Lighting limit: in the game, each of the 3 creatures has a specular light and a point light. We wanted around 5–7 creatures to make the game more exciting; however, this caused the light on the life-form to stop working, and we also got this error:
[error] ofLight: setup(): couldn't get active GL light, maximum number of 8 reached
According to this page, OpenGL’s fixed-function pipeline allows only 8 lights per scene. Removing the life-form’s light significantly diminishes its appearance and does not match the environment of the game, so to keep the player’s light we could only have 3 creatures.
with lighting (creates nice scales)
- Moving vertices and bloom effect: To start off with, we used the same approach that created the creatures: create a sphere mesh, loop through all the vertices and add oscillation to each vertex. However we could not get all the vertices to move (due to how the sphere mesh was made) and also struggled to find a way to create the bloom effect.
At this point we found the openFrameworks example called pointsAsTexture, which was exactly what we needed, so we decided to use this example and tweak it in our own way.
- Initially, before we had the glowing particles, we wanted to make the background more appealing, as well as give the player a more obvious sense of direction when moving. So we thought of making massive gradient circles, which could act as ‘zones’ the player can move into, of a fixed colour as opposed to the changing background. Our vision for this was strong, and we spent a long time working on it. Initially we used many ofCircles drawn from the inside to the outside with decreasing opacity; however, this was extremely laggy, due to the numerous sine and cosine calculations that needed to happen every frame. This led us to learn about openGL: we created the circles using the triangle-circle method, which was much more efficient and allowed even further possibilities for modifying the colour and look in interesting ways. However, this seemed to cause problems with the superformula shapes, which no longer appeared 3D but turned ‘flat’.
This led us to decide to let go of those shaders, and we didn’t think they were absolutely necessary for the look of the game anyway.
We have managed to create a basic exploration game that works well without any bugs or lagging. Most of our goals were completed within schedule, with only the parallax background and random creatures’ mesh colours left out.
Our 2 focuses were also successfully implemented, especially the visual aesthetics, as we have worked hard to make it look as good as possible within our abilities.
On the other hand, we had to remove lots of ideas, such as AI, genetic algorithms, voice input and a parallax background, as they were beyond the scope of our abilities. Also, we were unable to carry out user testing on children as we do not know any, so we only tested on adults. This would have been very interesting, however, as we are quite convinced of the game’s possible appeal to this audience.
- flOw: http://www.jenovachen.com/flowingames/flowing.htm
- Spore: https://youtu.be/WoP5thatpr4
- MURMUR by Reza Ali: https://www.instagram.com/p/BAueZPpneiT/?taken-by=syedrezaali
- Evolving Virtual Creatures by Karl Sims: http://www.karlsims.com/evolved-virtual-creatures.html
- https://youtu.be/JBgG_VSP7f8 (swimming behaviour at 0:26)
- Cindermedusae by Marcin Ignac: http://marcinignac.com/projects/cindermedusae/
- Creepy Watson: https://www.youtube.com/watch?v=13YlEPwOfmk
- Supershapes by Paul Bourke: http://paulbourke.net/geometry/supershape/
openFrameworks documentation and tutorials:
- ofEasyCam: http://openframeworks.cc/documentation/3d/ofEasyCam/
- ofMesh: http://openframeworks.cc/ofBook/chapters/generativemesh.html#basicsworkingwithofmesh
- ofShader: http://openframeworks.cc/ofBook/chapters/shaders.html
(All free for commercial use)
- Superformula3D by Kamen Dimitrov: http://www.kamend.com/2013/12/superformula-3d/
- openFrameworks and Processing examples:
- follow3 (Processing > Examples > Interaction)
- ELIXIA font: https://uispace.net/1069-elixia-font
- Sound effects by Akira
Gitlab Repo & Extras
All Youtube videos above can be found in this playlist.
Also, take a look at our Work Diary for weekly documentation and progress 🙂
Lux – Dat & Kevin
We wanted to build a creative tool which encouraged users to explore visual aspects of light painting by adding their own creative flair to a blank digital environment.
We decided to follow through with an idea that surfaced as a result of our creative research albeit with some slight modifications. Our final project idea was to create a digital environment which acted as a canvas for the user to explore aspects of lighting artwork.
The user can navigate through the 3D world with the following controls:
- WASD keys: general movement in world
- QE keys: Move up and down the Y axis
- F key: move quicker when used with WASD & QE keys
- LMB: for drawing
- TAB key: this is used to switch from drawing mode to edit mode and vice versa, when wanting to interact with the GUI interface
- SPACEBAR: move selected light sphere to player position
Lux was created in openFrameworks and written in C++.
For the research part of the project, the brief outlined that we should concentrate on ‘aspects relating to design and aesthetics’ and ‘avoid looking…at technical aspects [like] libraries [and] frameworks’. Owing to this, we decided to focus entirely on lighting as an outlet for art, paying particular attention to light painting and light graffiti due to the visual properties they possess.
After in-depth research into the history of lighting artwork and its process, we highlighted two aspects which we felt could be improved on. The first was how dated the technique was: if we wanted to create a light painting piece right now, we would still be using the same procedure used in 1886 by the founding fathers of light painting, Georges Demeny and Étienne-Jules Marey. The process is still a ‘photographic technique in which exposures are made by moving a…light source while taking a long exposure photograph’. The second problem we found was the possible restrictions light painting holds for someone who wants to explore the art form for the first time. We personally felt that this restriction lay in the cost of the equipment needed. This was backed up by articles we read online, which expressed the importance of using a DSLR camera owing to the full control you have over the shutter speed (the duration of exposure), the aperture (the opening of the lens which light passes through), and the ISO speed (which controls how much light enters). We felt this was a costly investment for anyone whose motive was solely to experiment with the capabilities of light painting.
These problems, and our interest in the aesthetics of lighting artwork, spurred us into wanting to create a tool which allowed people to explore the visual aspects of light painting. With this in mind, we decided that our application should offer a strong foundation for people who wanted to explore light painting as a creative outlet but might lack the knowledge or equipment to do so – this was our desired target audience. In accepting this as our target demographic, we understood that difficulties could arise, as there was no specific age group or gender we wanted to focus on. This meant that when it came to designing the application, there had to be a middle ground: a fourteen-year-old male would have to hold the same interest as a twenty-four-year-old female when using the program.
We were adamant about producing this project in openFrameworks without any additional physical components. By creating it purely through openFrameworks and C++, we felt we could share it with anyone who had a computer and was interested in exploring the visual properties of lighting artwork in a digital world.
When it came down to our initial prototyping, we were very keen on the theme of light versus darkness. We felt that this would be suitable to the nature of light photography as the brightness of colours used always strike through, creating a huge contrast to the dark environment it is in.
Continuing with this thought process, we began thinking of potential objects we could add to our environment to really home in on this theme. The first image which popped into our heads was a tree, since it connotes the idea of growth, and therefore positivity and light. However, a tree also has negative connotations when it is bare, as it alludes to the idea of death and darkness.
We looked into fractals and used Daniel Shiffman’s tutorial to get the foundation of the tree object and then developed it to fit our project needs.
However, after some testing we ultimately had to scrap this idea, as the recursion technique used to draw the trees meant our application became extremely slow when adding more than one fractal tree – the recursive calculation looped continuously because it was run inside draw().
Nevertheless, we still intended to keep with this idea of light and darkness but to channel it in a different aspect. We opted instead to have multiple light sources which would light up the dark environment and the objects surrounding it.
This was also the stage where we decided to allow the user to move freely wherever they desired in the 3D world as opposed to a flat terrain with boundaries. We believed that by doing this, we would meet our personal criteria of creating a digital environment which had no restrictions for the user when drawing.
Moving on to the light drawing prototype, we had an idea of what we wanted the output to look like, but we did not physically create any sketches. The reasoning behind this was that we found it difficult to draw the line strokes the way we wanted, and there were already a lot of resources online that we could base our final design on – Yellowtail by Golan Levin and inkSpace by Zach Lieberman being some of these. Both of these applications had implemented a very natural brush stroke which enhanced the sense of control the user had when drawing.
We split the build into two sections and aimed to complete the tasks in this order:
- The build of the environment and mouse picking
- The functionality of the drawing method
The 3D Environment and mouse picking
This part of the build was the most time-consuming. We did expect this, however, as neither of us had ever created a 3D environment before. Initially, we based our environment on an ofxBullet example, as we had already researched this as part of our technical research. The benefit of using this addon was that all the collisions and mouse picking could have been easily implemented. However, the limitation was that you were restricted to using only ofxBullet objects; unfortunately, these were very simple shapes and would not have given us the flexibility we required for our light drawings. We spent some time implementing the ofxBullet world before realising our mistake, but we did take away something useful: we discovered that ofxBullet adds a texture to its shapes, which means the usual implementation of lighting does not generate the results you would expect. From learning this, we were able to solve our own issues with lighting later on in the project.
After our blunder with the ofxBullet world, we began looking at building an environment from scratch again. The first week of this environment build was tricky, as we were unsure of the correct terminology to search for to find the information that would help us. Fortunately, we stumbled upon an article by Marco Alamia which explained in depth the different spaces required to map a 3D world onto a 2D screen. After many more articles and tutorials, we began to understand the logistics of creating a 3D environment, but we were still unable to successfully turn theory into code. At this stage, we were becoming increasingly worried that we would fall behind our personal schedule.
Fortunately, we were able to progress after some help from Simon, who constructed a 3D environment while showing us mouse picking. This was extremely helpful, as we had a copy of code we could use as a reference to understand the online tutorials we had previously read. For instance, we were aware that we had to create a frustum containing the near and far planes, but we did not understand how to put this into code. After looking at Simon’s code, we realised this was a simple fix, as ofCamera deals with all of it behind the scenes. There were many other instances where we overcomplicated the code when there was no need to; we felt this was a recurring problem we faced throughout the project, and it affected our progression.
After successfully building the 3D environment, we moved on to populating our world with shapes like spheres and boxes, because we felt the world was not as encouraging for the user to start drawing in as we would have liked. This was also the stage where we attempted to implement the mouse picking code which Simon gave us. Instead of just copying Simon’s code without understanding the ray, we attempted our own implementation of mouse picking, and were able to successfully re-create the ray from scratch without using the ofCamera and ofNode built-in functions like screenToWorld(). This was our code:
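(The original listing was included as an image. Below is a reconstruction sketch of the general approach described – unprojecting the mouse position through the inverse of the camera’s view-projection matrix, using the glm types openFrameworks ships with – and not the project’s exact code.)

```cpp
#include <glm/glm.hpp>

// Builds a world-space picking ray direction from a mouse position, without
// ofCamera::screenToWorld(). 'view' and 'proj' are the camera's matrices.
glm::vec3 rayDirection(float mouseX, float mouseY, float w, float h,
                       const glm::mat4& view, const glm::mat4& proj) {
    // Mouse position to normalised device coordinates in [-1, 1]
    float nx = 2.0f * mouseX / w - 1.0f;
    float ny = 1.0f - 2.0f * mouseY / h;   // screen y runs downwards

    glm::mat4 invVP = glm::inverse(proj * view);

    // Unproject one point on the near plane and one on the far plane
    glm::vec4 nearPt = invVP * glm::vec4(nx, ny, -1.0f, 1.0f);
    glm::vec4 farPt  = invVP * glm::vec4(nx, ny,  1.0f, 1.0f);
    nearPt /= nearPt.w;
    farPt  /= farPt.w;

    // The ray runs from the near-plane point towards the far-plane point
    return glm::normalize(glm::vec3(farPt - nearPt));
}
```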
Unfortunately, we were unsure of how to use the ray for mouse picking. We felt the problem lay in the fact that we were comparing our world coordinates directly with the normalised ray values, which are not compatible: a normalised direction only becomes a world-space point once it is scaled and offset from the camera position. We spent a lot of time trying to work out what we were doing wrong, but in the end we had to move on from mouse picking and go with a simpler interaction.
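In hindsight, one common way to use such a ray is to test it analytically against each object. The sketch below intersects the ray (origin at the camera, normalised direction) with a sphere; the centre and radius are illustrative object properties, not code from our project:

```cpp
#include "ofMain.h"
#include <cmath>

// Returns true on a hit and writes the distance t along the ray.
bool raySphereIntersect(const glm::vec3& origin, const glm::vec3& dir,
                        const glm::vec3& centre, float radius, float& t) {
    glm::vec3 oc = origin - centre;
    float b = glm::dot(oc, dir);            // dir must be normalised
    float c = glm::dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;          // the ray misses the sphere
    t = -b - std::sqrt(disc);               // distance to the nearest hit
    return t >= 0.0f;                       // hit must lie in front of the camera
}
```

The picked point is then recovered as origin + t * dir, which is a genuine world-space coordinate and can be compared with object positions directly.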
The drawing methods and objects in our world
After completing the environment, we started to tackle the drawing methods in our application. We found a drawing tutorial on the openFrameworks page to base our work on and followed its steps to create interesting graphics which could imitate the light visuals we wanted.
The tutorials we followed used ofPolyline objects to draw points onto the screen; by feeding world coordinates into the ofPolyline's vertices, we were able to draw accurately in our 3D world.
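A minimal sketch of this approach is below: vertices are stored in world coordinates and the polyline is drawn inside the camera block, so the stroke lives in the 3D scene rather than on the 2D screen. The call to screenToWorld() here is only a stand-in for however the world point is obtained:

```cpp
#include "ofMain.h"

ofPolyline stroke;
ofEasyCam cam;

void addStrokePoint(int x, int y) {  // e.g. called from mouseDragged()
    // Project the mouse position into the scene at a fixed depth.
    glm::vec3 world = cam.screenToWorld(glm::vec3(x, y, 0.5f));
    stroke.addVertex(world);
}

void drawStroke() {
    cam.begin();
    ofSetColor(ofColor::white);
    stroke.draw();                   // rendered in world space
    cam.end();
}
```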
We realised that this on its own was quite bland, so we implemented a GUI using ofxGui so the user could select different brushes to draw with. We also added lighting which the user could change to make the environment more vibrant and exciting.
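A sketch of the kind of panel described is below; the parameter names and ranges are our own illustrative choices rather than the project's exact values:

```cpp
#include "ofMain.h"
#include "ofxGui.h"

ofxPanel gui;
ofParameter<int> brush;
ofParameter<ofColor> lightColour;

void setupGui() {
    gui.setup("controls");
    gui.add(brush.set("brush", 0, 0, 2));   // e.g. 0 = line, 1 = cluster, 2 = bubble
    gui.add(lightColour.set("light colour", ofColor::white,
                            ofColor(0, 0), ofColor(255, 255)));
}

void drawGui() {
    gui.draw();                             // drawn last, outside the camera block
}
```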
Our original goal was to create a tool which allowed users to explore the visual aspects of light painting. Keeping this in mind, we feel we met this brief to a certain degree. For instance, our light cluster stroke certainly emulates a more modern take on light painting, whereas our standard line stroke takes users back to the beginning, echoing the more primitive approach of the first light painting piece by Demeny and Marey. However, we were unable to represent light painting exactly the way we wanted to. From the very beginning of the project we were motivated to create a digital environment because we felt it could offer a perspective on light painting that the normal creation process could not provide. The bubble stroke was one attempt at this, adding a different and interesting perspective to the creative process.
One positive we can take from this is that our knowledge of working in a 3D environment has improved greatly. We now understand the process of creating a 3D environment from scratch, the difference between world coordinates and screen coordinates and why the distinction matters, and the purpose of transformation matrices. Moreover, even though our attempt at mouse picking was unsuccessful, we were able to code the ray from scratch, which highlights our understanding and development in this field.
Overall, we believe that the project was partly successful as a light painting creative tool. It represents the visual aspect of light painting through the light cluster stroke, and we produced a digital environment where the user can roam freely, which were our original goals. However, we recognise that we could not represent the full capabilities light painting has to offer: a better representation of its different aspects could have given the user greater control to create new and unique visuals. What stalled our progress was the amount of time we spent understanding and implementing perspective projection and mouse picking. These were challenging concepts to become familiar with, but we feel that if a similar task were given to us in the future, we would be able to overcome the difficulties we faced this time around, as we now have a much better understanding of the implementation. That would leave more time to focus on the visuals of the light drawing itself.
Images used from external sources:
Articles/Tutorials used for research: