The Space that Brought Us Here
‘The Space that Brought Us Here’ is a piece that challenges the audience's perception of their surroundings in an intimate way, through physical engagement with the piece itself. Screens are used to show sections of the same space. The viewer is then able to reorientate the screens, through movement, creating new-found compositions.
The idea behind ‘The Space that Brought Us Here’ was to challenge a viewer's understanding of the space around them. As someone walks around a space they form an understanding of how they are operating within their surroundings and how other people are also interacting with the space. Individuals form their own perspectives and compositions of the space. The piece had six tablets suspended on steel wire within a wooden frame. A viewer was able to move the tablets up and down the wires and rotate them in place. The tablets took video from their rear cameras and displayed it on their screens. However, as all the tablets were connected to the same local area network, they could share the video with one another. A viewer could tap on the screen of a tablet to change the video shown to that of one of the other five tablets. This resulted in mismatched video, creating confusion but also intrigue about how that perspective had come about.
Intended outcomes & background research
The piece was based on a lot of background research I had done, particularly into the work of Olafur Eliasson. I was intrigued by his treatment of public space in ‘New York City Waterfalls’, where the waterfalls acted as an intervention in public space, prompting people to evaluate their surroundings. Eliasson's glass façade for the Harpa building demonstrates this too, with its “three-dimensional quasi-brick structure” that creates “fivefold symmetry”. The sections shift in appearance and colour according to the people in the building and the environment, and looking through the façade distorts the view, creating new-found compositions. I also looked at Jeppe Hein's ‘Please Touch The Art’, a mirror labyrinth made from a spiral of mirrored stainless steel planks set against views of lower Manhattan. The planks are arranged in various arcs that distort the surrounding park and city. I found this piece intriguing for how the mirrors create distortions that offer extra perceptions of the space.
I wanted to take this further to explore how individuals could form different understandings of a space, in terms of perspective, and how other viewers have an impact on the space that is viewed. Just prior to this I had worked with perspective and perception in Focal Grid, which split the same view into multiple different focus points. The focus points shown were controlled from a tablet with virtual buttons. From the exhibition of that piece it was clear that the interaction was not always obvious, and the viewer didn't feel connected with the scene that was shown.
I wanted to make the viewer more a part of the piece through physical engagement: reorientating the tablets and being able to appear in the video shown. This removes conventional boundaries around how the piece can be interacted with. There is a sense of playful fun in moving different objects independently of one another, like a jigsaw, allowing a viewer to see different possibilities for how the objects can be positioned.
I wanted the video shown on the tablets to be familiar to the viewer from the outset, in order to augment and intervene in a space they knew, and so challenge their understanding of it. I achieved this by using live-streamed video of the space from the tablets themselves, creating engagement with the viewer's own surroundings.
I decided on the Amazon Kindle Fire for my project due to its relatively low cost and because it runs Android, which openFrameworks can compile onto. I started initial tests by trying to set up the openFrameworks Android examples on a Kindle Fire. This took a lot longer than expected; I found that using Android Studio was a better way of compiling onto the tablets.
I worked on some rotation tests with the accelerometer in the tablet to help me with the rotation of the video that would be displayed. I had issues getting the values required for continuous rotation tracking because the tablet lacks a gyroscope, but it still meant that I could change the rotation of the displayed image based on whether the tablet had been tilted left or right.
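Roughly, the tilt handling looked like the sketch below. This is a minimal sketch assuming openFrameworks' built-in ofxAccelerometer on Android; the smoothing factor and angle mapping are illustrative choices, not my exact code:

#include "ofMain.h"
#include "ofxAccelerometer.h"

class ofApp : public ofBaseApp {
    float angle = 0;
public:
    void setup() {
        ofxAccelerometer.setup(); // start polling the device accelerometer
    }
    void update() {
        auto g = ofxAccelerometer.getForce();
        // No gyroscope, so derive a left/right roll angle from gravity alone.
        float target = ofRadToDeg(atan2(g.x, g.y));
        angle = ofLerp(angle, target, 0.1f); // smooth out sensor noise
    }
    void draw() {
        ofPushMatrix();
        ofTranslate(ofGetWidth() / 2, ofGetHeight() / 2);
        ofRotateDeg(-angle); // counter-rotate so the video stays level
        // video.draw(...) would be called here, centred on the origin
        ofPopMatrix();
    }
};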
Following on from this, I was not able to get the openFrameworks Android camera examples compiling onto an Android device: the program crashed very soon after being deployed. I tried this with two devices running newer and older versions of Android. The solution was to use an IP camera app, IP Webcam, running in the background on each tablet, which streamed the video to my openFrameworks app. To get the video into my oF app I used the addon ofxIpVideoGrabber, which took a list of IP cameras from an XML file and displayed them on screen.
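The sketch below shows the shape of this setup, assuming ofxIpVideoGrabber's IPVideoGrabber class together with ofxXmlSettings; the XML tag names, the file name, and the tap-to-switch handling are illustrative, and the addon's exact method names may differ between versions:

#include "ofMain.h"
#include "ofxXmlSettings.h"
#include "IPVideoGrabber.h"

class ofApp : public ofBaseApp {
    std::vector<std::string> uris;      // one MJPEG stream per tablet
    ofx::Video::IPVideoGrabber grabber;
    int current = 0;
public:
    void setup() {
        ofxXmlSettings xml;
        xml.loadFile("streams.xml");    // hypothetical list of the six streams
        for (int i = 0; i < xml.getNumTags("stream"); i++) {
            uris.push_back(xml.getValue("stream", "", i));
        }
        grabber.setURI(uris[current]);
        grabber.connect();
    }
    void update() { grabber.update(); }
    void draw()   { grabber.draw(0, 0, ofGetWidth(), ofGetHeight()); }
    void touchDown(int x, int y, int id) {
        // A tap switches this tablet to the next tablet's feed.
        current = (current + 1) % (int)uris.size();
        grabber.disconnect();
        grabber.setURI(uris[current]);
        grabber.connect();
    }
};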
This solution also allowed me to share the video between all of the tablets, which worked very well. I got some good feedback from people I showed it to.
I needed to build a frame that would allow the tablets to be suspended within it, and also allow them to move up and down and rotate on the wire, all while keeping the tablets fixed within the frame. It became clear after talking with Nicky Donald, one of the technical support team, that the frame and the attachments for the tablets would be easiest to custom-make, as we were not able to find anything commercially available for this. As a result, the frame would be made out of wood, with steel wire used to attach the tablets to the frame.
The attachments would need to be 3D printed to allow them to grip onto the wire and rotate the tablet in that position. I set about looking for 3D designs on thingiverse.com to find the basis of the parts. From there I started modifying the designs, going through tens of iterations of design and print tests.
I was unable to find a suitable case for the Kindle Fires I was using, so I concluded that it would be best to print cases for the tablets. This came with the advantage of having the attachment plate printed onto the case, saving me from gluing it on, which would have looked messy. However, the printing of the cases was very difficult to perfect. Each case took around seven hours to print, so it took a long time to make modifications to the design and get everything printed at a high enough quality. I also had issues with the rafts and supports the 3D printer uses to hold up the print as it is built: the supports were not consistent due to the complexity of the design, which resulted in weaker prints. I started using a program called MeshMixer, which allowed me to adjust my cases in real-world measurements to fine-tune them, and to make the cases optimal for 3D printing. However, this still did not solve the issues with the supports, and the 3D printers I had access to didn't always print correctly.
I had issues with the main rotation part snapping when the cases were assembled. This damaged some of the cases, meaning modifications had to be made to them, and as a result some were not able to rotate on the steel wire.
I had initially been considering a square wooden frame for my piece. However, thinking about how the piece would sit in the gallery space, I opted for a portrait frame instead. This was for several reasons, including creating the look of a window or portal that would frame the tablets and support this change of perspective, and matching the aspect ratio of the tablets within the frame.
The first frame that I built wouldn't have been strong enough to support the tension from the steel wire. Nicky Donald advised me to use structural building timber to support the tension from the wire. As a result, I purchased more wood and added metal braces as well as wooden mitred braces at each corner. This resulted in a very strong frame that was capable of supporting the tensioned wire.
For the steel wire, I used wire grippers and Gripples, which grip the steel wire while still allowing it to be adjusted for tensioning. The wire grippers were placed at the top of the frame and the Gripples at the bottom. Pliers were used to pull the wire through the Gripples to tension it.
I was really pleased to see people's intrigue on the opening night. This initially came from the video being very often mismatched, which drew viewers in to interact further with the piece. Mainly viewers rotated the tablets and moved them up and down the wire, though usually quite tentatively, as most seemed to think the tablets looked a bit precarious. One interaction I had not been expecting was people twisting the tablets left or right on the wire to see around the space. This surprised some, as it became clear that the tablet they were moving didn't always respond to their movements, because the feed shown on it came from another tablet within the frame. This discovery led a lot of viewers to interact with the exhibition further. However, for some the mismatched video wasn't enough to make them physically engage with the piece.
A couple of the tablets sat a bit precariously on the frame, and two of them fell off. These failures of the 3D-printed attachments did put some viewers off interacting with the piece.
The Space that Brought Us Here was a successful piece. I feel it lived up to the original concept of challenging a viewer's perception of their surroundings. I had ideas of using the tablet's rotation to control the rotation of the image, and of using an image buffer to create a delay. I did not explore these ideas fully, as it became clear they would add too much complexity to the piece and confusion to the physical interactivity. My solution of using an IP camera to capture the video, instead of using the cameras locally, gave me the ability to share the video between all of the tablets. This created a very subtle shift in perspective, but one obvious enough at a glance that the perspective had clearly been altered. I feel this was better than adding a time dimension or unexpected rotation of the video, as I wanted to make the intentions of the piece clear. However, due to the latency of the video stream, there was sometimes a small delay that was noticeable when someone walked in front of the tablets. This was useful for illustrating how the tablets were creating a different perception of the surroundings, and it allowed a viewer to think about the various operations and alternative compositions occurring within the space.
The issues with 3D printing did hamper viewers' ability to interact with the piece, because of my difficulties with the complexity of the printing. It also resulted in some of the attachments looking quite rough. If I were to make those components again, I would be hesitant to 3D print them, given the complexity involved and breakages that are hard to prevent. Using proper cases for the tablets, and metal or moulded plastic for the rotation and vertical movement, would be better. That said, considering I haven't seen anyone else suspend tablets with the ability to move them vertically and rotate them on steel wire, I think my solution was good. More time should have been put towards the mechanics of the structure for this to work better.
Overall I am happy with my piece. I negotiated a lot of challenges with the 3D printing, as well as issues with compiling software for Android with openFrameworks. I feel that the piece was striking and allowed viewers to explore the space differently through the intervention of the frame.
References and bibliography
LED Jumper – Daniel Sutton-Klein and Sebastian Smith
LED Jumper is a 2D platformer by Daniel Sutton-Klein and Sebastian Smith, inspired by “The Impossible Game”, displayed across a 32×16 LED matrix where the only control is to shout to jump. The game takes advantage of the LEDs' bright visuals to create a neon-esque aesthetic for the player to be immersed in as they progress through the game.
We set out to make a fun and simple game for anyone to play. Since the game is controlled only by the user's voice, our audience isn't limited to any particular group of gamers, but is instead literally anyone who can shout loudly enough or make enough noise.
We mainly had to focus on the hardware side of things when it came to background research, as we had little experience with physical computing beforehand. We added all the relevant information to a Google Docs file and researched as much as we could about the different options for compatible hardware, to ensure we wouldn't waste money on anything useless. Below is a snapshot of this document, which shows how we gathered the information for building our LED display.
The concept of our project, a game running on an LED array, dictated all the design that followed. We had some options for the density of LEDs on the strip: 30, 60, or 144 LEDs per metre. Thinking about the sound-input nature of the game, where we imagined people shouting "jump" at the display to avoid dying, it seemed appropriate to get the largest display we could. Prioritising the size of the display, it was cost-efficient to opt for the 30 LEDs/metre strips. We considered different aspect ratios and arrangements of the LEDs (pixels could be aligned diagonally or in a traditional display grid). Platform games rely on being able to see far ahead of the player, so we decided on approximately 2:1 for the aspect ratio, with a regular grid arrangement to keep it simple. The display would be approximately 30×15 LEDs (~100 × 50 cm).
The rest of the design process started with prototyping game concepts and mechanics. Without knowledge of the technical side of transmitting the game onto the display, we knew that a two-dimensional data structure emulating the display was the first step in prototyping, as it set up a simple and logical way to set the LEDs once the physical side and libraries were complete. After implementing this, we started working on the core game mechanics that would shape the gameplay and user experience. For jumping, we knew early on that we wanted the user to control the player simply by shouting at the screen, so we used Minim to analyse the amplitude of the audio input, convert it into decibels, and make the player jump while the input was above a certain threshold. Below are snapshots of the prototypes showing the first 2D data structure array and the core mechanics implementation.
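The jump trigger boils down to a few lines. Here is a minimal sketch of the idea in plain C++ rather than the Processing/Minim code we actually used; the threshold value is illustrative:

#include <cmath>
#include <cstddef>

const float JUMP_THRESHOLD_DB = -20.0f;   // illustrative; tuned by ear in practice

// Convert a buffer of audio samples (-1..1) to an RMS level in decibels.
float bufferLevelDb(const float* samples, std::size_t n) {
    float sum = 0;
    for (std::size_t i = 0; i < n; i++) sum += samples[i] * samples[i];
    float rms = std::sqrt(sum / n);
    return 20.0f * std::log10(rms + 1e-9f); // small offset avoids log(0)
}

// Called once per frame: the player jumps while the input stays above threshold.
bool shouldJump(const float* samples, std::size_t n) {
    return bufferLevelDb(samples, n) > JUMP_THRESHOLD_DB;
}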
From here we built up and focused on developing a playable game in Processing while we waited for our hardware to arrive.
For level design, we created the final level as a PNG image in Photoshop and implemented a way in Processing to use the image data as the level of our game by loading the pixel information. This way we could also easily develop level mechanics by referencing the colour of a given unit and defining how it affects the player when they are next to it. Below is a snapshot of the final game and the Photoshop file to show how this worked.
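The loading step itself is simple. Here is a sketch of the idea in openFrameworks C++ rather than our actual Processing code; the tile colours and the file name are illustrative, not the palette we used:

#include "ofMain.h"

enum Tile { EMPTY, GROUND, SPIKE };

// Read a level PNG (one pixel per game unit) into a 2D grid of tile types.
std::vector<std::vector<Tile>> loadLevel(const std::string& path) {
    ofImage img;
    img.load(path); // e.g. "level.png"
    std::vector<std::vector<Tile>> level(img.getHeight(),
                                         std::vector<Tile>(img.getWidth(), EMPTY));
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            ofColor c = img.getColor(x, y);
            if (c == ofColor::black)    level[y][x] = GROUND;
            else if (c == ofColor::red) level[y][x] = SPIKE;
            // any other colour stays EMPTY
        }
    }
    return level;
}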
Commentary on the build process
With the premise of the project decided, and zero experience with LED control or development boards like Arduino, we started by learning the very basics.
Starting small, we borrowed an Arduino Duemilanove which came with everything we needed to practise small and simple tasks with LEDs. After managing to turn a single LED on and off, we moved to multiple LEDs flashing sequentially. Knowing that we would have audio input in the finished project, we tried to make a VU meter (volume unit meter) with the Arduino and LEDs, but soon found out that the microphones that plug into Arduino pins have limitations which might make them problematic for monitoring voices. The alternative was to use the microphone on a laptop, which we found required a whole other area of expertise, serial communication, which felt beyond our capability and was advised against in web forums. Overall, our testing with the Arduino was very useful in giving us an idea of how development boards and the Arduino IDE work.
After lots of research, we planned to use the OctoWS2811 library on the Teensy to control the LEDs, as it was designed for the Teensy and came with examples which would allow us to stream the video from a laptop (with the VideoDisplay example on the Teensy, and movie2serial on Processing on a laptop) without much extra work.
When the components arrived (Teensy 3.2, WS2812B strips, a 30A PSU, and the logic level converter to shift the Teensy's 3.3V logic to 5V), the first test was to control a single LED. After all of the connections on the breadboard were made (with lots of important connections to LOW and HIGH rails, for example to set the direction of the logic converter), we ran the OctoWS2811 library on the Teensy and confirmed with a multimeter that the data signal to the LED was 5V. This was our first time controlling a WS2812B LED.
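A minimal single-LED test in the spirit of that first try, using the OctoWS2811 API; the colour and LED count here are illustrative:

#include <OctoWS2811.h>

const int ledsPerStrip = 1;
DMAMEM int displayMemory[ledsPerStrip * 6];
int drawingMemory[ledsPerStrip * 6];
const int config = WS2811_GRB | WS2811_800kHz;

OctoWS2811 leds(ledsPerStrip, displayMemory, drawingMemory, config);

void setup() {
    leds.begin();
    leds.setPixel(0, 0xFF0000); // first pixel, red
    leds.show();
}

void loop() {}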
After the successful test with a single LED, we moved on to the next step and hooked up a short strip of 22 pixels. Using the ‘Basic Test’ example from the OctoWS2811 library, which is meant to change the pixels to a different colour one by one, we saw serious flickering issues: pixels appearing not to update, and flashing random colours. Not knowing why this was happening, we tried the test example from an alternative library to OctoWS2811, FastLED, which instantly gave better results. We took this, being able to control a strip of LEDs, as a cue to build the full 32×16 display.
The first step in building the display was to get a piece of wood the right size, which we then painted black. After considering how we could attach the strips, we started measuring out points for holes across the board, which would evenly space the strips in a precise way. We drilled those holes (of which there were a few hundred), then used zipties to secure the strips into place.
The strips have power, data and ground connections, which we soldered to wires that passed through holes at the end of each strip. After everything was set up, we loaded the Basic Test for OctoWS2811 again, and despite the initial excitement of seeing all the pixels light up for the first time, we realised there were serious communication issues. The test was meant to light up all the LEDs the same colour, then change the colour of each pixel in order, one by one. Instead of the clean result we hoped for, lengths of strip seemed not to update colour, and many colours would flicker.
At this point we split all the data cables into two CAT5 wires instead of one, the idea being that it would minimise cross-talk and stop interference in the data signal. This helped a bit, but the same problems remained. With an oscilloscope, we looked at the waveform of the data signal coming out of the Teensy, and saw that it wasn't clean and the timing (which has to be VERY specific) was wrong, compared with the graph on the OctoWS2811 library website showing exactly how the waveform should look for the WS2812B chips. Going back to FastLED, we saw a major improvement, and decided that OctoWS2811 was not reliable enough to use.
Although FastLED outputted better signals, it came with its own problems. It wasn't designed for the Teensy and only had single-pin output by default. Latency and signal degradation would be expected when trying to control 512 LEDs on a single pin, not to mention it would mean changing all the wiring on the display, so we looked into the different options for outputting to the eight pins that were already set up. The Multi-Platform Parallel Output method described on the FastLED wiki was supposed to do what we needed, but after spending a lot of time on it we just couldn't get it to work. We then tried the method used in FastLED's ‘Multiple Controller Examples’, which creates multiple FastLED controller objects. This worked, and we were able to light up the whole display with the colours we wanted (see the colour gradient photos).
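A sketch of the multiple-controller approach: one shared CRGB array, one controller per pin, each taking a 64-LED slice. The pins listed are OctoWS2811's default Teensy 3.x pins, which is an assumption based on the display having been wired for that library:

#include <FastLED.h>

#define LEDS_PER_PIN 64
#define NUM_PINS 8
CRGB leds[LEDS_PER_PIN * NUM_PINS]; // 512 LEDs total

void setup() {
    FastLED.addLeds<WS2812B, 2,  GRB>(leds, 0 * LEDS_PER_PIN, LEDS_PER_PIN);
    FastLED.addLeds<WS2812B, 14, GRB>(leds, 1 * LEDS_PER_PIN, LEDS_PER_PIN);
    FastLED.addLeds<WS2812B, 7,  GRB>(leds, 2 * LEDS_PER_PIN, LEDS_PER_PIN);
    FastLED.addLeds<WS2812B, 8,  GRB>(leds, 3 * LEDS_PER_PIN, LEDS_PER_PIN);
    FastLED.addLeds<WS2812B, 6,  GRB>(leds, 4 * LEDS_PER_PIN, LEDS_PER_PIN);
    FastLED.addLeds<WS2812B, 20, GRB>(leds, 5 * LEDS_PER_PIN, LEDS_PER_PIN);
    FastLED.addLeds<WS2812B, 21, GRB>(leds, 6 * LEDS_PER_PIN, LEDS_PER_PIN);
    FastLED.addLeds<WS2812B, 5,  GRB>(leds, 7 * LEDS_PER_PIN, LEDS_PER_PIN);
}

void loop() {
    fill_rainbow(leds, LEDS_PER_PIN * NUM_PINS, 0); // simple gradient test
    FastLED.show();
}

Because every controller writes from the same array, the rest of the code can treat the display as one contiguous run of 512 pixels.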
The next problem was serial data communication. Our original plan was to use OctoWS2811 to control the LEDs, with the VideoDisplay and movie2serial examples handling the streaming of video from a laptop to the Teensy automatically. Now using FastLED, we would have to write our own code to do this manually. Arduino and Processing both have Serial libraries for serial communication, but after looking at the references and Googling for other people doing similar things, it still wasn't clear how to make it work. After almost giving up a day before the deadline, a section at the bottom of FastLED's wiki on ‘Controlling LEDs’ gave us a clue:
Serial.readBytes( (char*)leds, NUM_LEDS*3);
With some trial and error we were able to send serial data to the Teensy from Processing which, using FastLED, successfully (and beautifully) lit up our test strip of 22 LEDs on one pin. After that, we moved the whole project upstairs where it would be easier to work on code at the same time, but once we got up there the whole thing seemed to stop working. We brought up the oscilloscope to see what was happening, and indeed there was no data signal. Stumped, we spent two hours trying to figure it out, as when we moved back downstairs it magically worked again. At one point the Teensy stopped responding completely and we feared we'd shorted it, which lost us for a while again, until we saw that the power supply ground was plugged into the HIGH rail on the breadboard. The next day we tried the working serial communication to control a whole strip of 64 LEDs on the full display from Processing. It didn't look correct at first, and we were worried about the size limit of the serial receive buffer, but it turned out the Processing sketch has to be stopped before restarting or the serial data gets cut off. To make a long story short, after changing and configuring code on the Teensy and in Processing, we achieved serial communication sending data for the full array.
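The Teensy-side receive loop we ended up with was built around that readBytes() line. The sketch below shows the shape of it; the '*' frame-sync byte is an assumption for illustration, not necessarily what our final code used, and a single pin is shown for brevity:

#include <FastLED.h>

#define NUM_LEDS 512
CRGB leds[NUM_LEDS];

void setup() {
    Serial.begin(115200); // Teensy USB serial ignores the baud rate, but it's conventional
    FastLED.addLeds<WS2812B, 2, GRB>(leds, NUM_LEDS);
}

void loop() {
    if (Serial.available() && Serial.read() == '*') {
        // Read one full frame (3 bytes per LED) straight into the CRGB array.
        Serial.readBytes((char*)leds, NUM_LEDS * 3);
        FastLED.show();
    }
}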
Implementing the Processing serial communication code into the game was fairly straightforward, although due to the layout of the strips (where one strip covers two rows and they zig-zag in direction), the first time we ran the game every second row was reversed. This was fixed with some code to alternate the direction of the LEDs sent for each row.
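The fix amounts to a serpentine index mapping. A sketch, assuming even rows run left-to-right and odd rows run right-to-left (the actual direction depends on how the strips were wired):

// Map game-grid coordinates (x, y) to an LED index on the serpentine display.
int gridToLedIndex(int x, int y, int width) {
    if (y % 2 == 0) return y * width + x;            // even rows: left-to-right
    return y * width + (width - 1 - x);              // odd rows: right-to-left
}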
Because we only got the display working the day before the deadline, we didn't have much time to experiment further to perfect the game. We did, however, run tests on the brightness of the LEDs, as they were very bright, and found that dimming the pixels is most effective between 3/255 and ~20/255, which has potential applications for the game. We also tried putting a sheet in front of the display to diffuse the light, which could look really nice if we built a frame to stretch it around.
For what we wanted to achieve, we certainly accomplished the overall concept of our design by having a playable game work with audio input on our LED matrix. Despite this, many of our ideas weren't realised in the final piece for various reasons. We were not successful in implementing several specific mechanics, such as secret levels activated when the player took a specific route, noise cancellation, and non-linear jumping for a smoother experience. These mechanics weren't a high priority, as a typical game session lasts only a couple of minutes per person, meaning they wouldn't heavily alter the gameplay experience.
After all the time we spent getting the game working on the LEDs, there was little time left to experiment with different brightness settings to add contrast to certain areas of the game, or to create new colour palettes optimised for the LEDs. As you can see in the video, there were still some flickering issues with the display, where random segments of LEDs would turn on for the length of a frame (we confirmed this by changing the frame rate and seeing how the flicker was affected). Given the amount of flickering we had before, it's a miracle they weren't worse, although so far we have been unsuccessful in finding a way to fully eliminate the problem.
Our last success: despite hearing how susceptible our components were to blowing, and even purchasing spares in anticipation, everything remained intact throughout the duration of the project.
Wobble – By Johan & Cormac
We have created an environmental synthesizer which scans its location and interprets light and topographical information to produce sound. The device, named Wobble, is designed not only to help children understand sound and music-making on an abstract level, but also to give them the opportunity to modify the device for further experimentation. Wobble is built on Linux running on a Raspberry Pi, with an easy-to-understand breakout breadboard. It is powered by a rechargeable battery and a rechargeable speaker; with all the hardware built into the device it is totally wireless and can be used in any environment to make a wide array of synthesized sound.
The software running on the Raspberry Pi is all written in C++, utilising three libraries which form the bedrock on which the architecture of the program is built. The first and most important of these is WiringPi, which allows access to the GPIO pins of a Raspberry Pi from C++. The second is Maximilian, a C++ audio and digital signal processing (DSP) library which is the linchpin of the audio synthesis. The third is a small user-made library to control the TSL2561 lux sensor from Adafruit.
We also use pulse width modulation to control the speed of the motor spinning the sensors on the device, producing a multitude of effects.
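A minimal sketch of this kind of motor speed control using WiringPi's software PWM; the pin number and duty cycle are illustrative:

#include <wiringPi.h>
#include <softPwm.h>

int main() {
    wiringPiSetup();
    const int motorPin = 1;          // hypothetical GPIO pin driving the motor circuit
    softPwmCreate(motorPin, 0, 100); // duty-cycle range 0..100
    softPwmWrite(motorPin, 60);      // spin at roughly 60% speed
    while (true) delay(1000);        // keep the PWM thread alive
    return 0;
}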
Our Intended Audience
Our audience is six- to ten-year-old children who are just about to start, or are already, learning an instrument. Wobble is used for interactive play with music, sound and lights, to get children interested in technology and music through interaction. Our intended outcome was to build something that would engage them musically and technically. They would be able to create music in any environment they wanted, on an abstract level. Older children could then continue their experimentation by modifying the open-source code and the easy-to-use breadboard setup to create their own sounds and music.
Our background research had us looking into artists who use their pieces to change the environment, such as Andy Goldsworthy, Javier Riera, Jim Sanborn and Olafur Eliasson. Biomimicry, particularly sonar and ultrasonic mapping of environments, also played a large part in our research. Wobble fundamentally grew from the idea of reinterpreting an environment like a bat does, navigating through its surroundings quite unlike any other mammal.
We had a pretty good idea of how we wanted Wobble to sound. We wanted a device with a very distinctive synthesized sound, one that would be both interesting and fun for kids. The main way we tested our sounds was to generate pure sine waves and then add envelopes and frequency modulation. To find the sound we wanted, we used ofxOsc in openFrameworks and oscpack on our Raspberry Pi. Through OSC messages we could communicate between the two devices and play with the parameters of the sound output: moving the mouse in our openFrameworks program mapped the mouse position onto the sound parameters inside the code running on the Raspberry Pi. Through this we could interact with and change the sound in real time.
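The laptop side of this is only a few lines with ofxOsc. A minimal sketch; the Pi's address, the port, the OSC address, and the parameter mappings are all illustrative:

#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
    ofxOscSender sender;
public:
    void setup() {
        sender.setup("192.168.0.20", 9000); // hypothetical Pi address and port
    }
    void mouseMoved(int x, int y) {
        ofxOscMessage m;
        m.setAddress("/wobble/params"); // illustrative OSC address
        m.addFloatArg(ofMap(x, 0, ofGetWidth(),  50, 1000)); // e.g. pitch in Hz
        m.addFloatArg(ofMap(y, 0, ofGetHeight(), 0, 1));     // e.g. filter amount
        sender.sendMessage(m);
    }
};

On the Pi, an oscpack listener unpacks the two floats and writes them into the variables the synthesis loop reads.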
Throughout the project we experimented with the Maximilian library to find the right sound. We started with some simple tests, such as playing specific frequencies when certain keys were pressed, with the intention of later replacing the keyboard input with our sensors. We then added more functionality from the library, such as envelopes (ADSR), filters (hi-res and lo-res), delay, and dynamics such as a noise gate. We also made our own effects, such as foldback distortion and a Moog-style filter, to make the sound even more interesting and fun.
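A sketch of the kind of Maximilian signal chain we experimented with, using the library's setup()/play() convention; the sensor stand-in values and the mappings from sensor to parameter are illustrative:

#include "maximilian.h"

maxiOsc osc;
maxiEnv env;
maxiFilter filter;

double distance = 50;  // stand-in for a live ultrasonic reading (cm)
double lux = 30;       // stand-in for the TSL2561 light level
int trigger = 1;       // envelope gate

void setup() {
    env.setAttack(10);    // milliseconds
    env.setDecay(200);
    env.setSustain(0.5);
    env.setRelease(500);
}

void play(double *output) {
    double sig = osc.sawn(220 + distance * 4);     // pitch from distance
    sig = env.adsr(sig, trigger);                  // shape with the envelope
    sig = filter.lores(sig, 200 + lux * 20, 0.8);  // low-pass, cutoff from light
    output[0] = output[1] = sig;                   // stereo out
}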
To start, we created a paper prototype of the device for some real-world context and to test whether our hardware would fit into it. Below is the first 3D model.
We chose to have the shape in two parts, as it was both aesthetically pleasing and economical in regards to space. We wanted the top half to spin while the bottom remained still as a base. The two sketches below show some of the mechanical variations proposed. Eventually we decided on an electronics housing that sat on a lazy Susan bearing, driven in a circular motion by an inverted cog-and-gear system (see below for a demonstration).
We then made a cardboard prototype of the housing to once again test the sizing and aesthetics of the device. Using the Pepakura suite, we broke down the 3D model, created in Maya, into a 2D mesh that was printed, glued to the cardboard, then cut, folded and glued into shape.
The finished cardboard prototype.
The prototype with the electronics testing rig.
Now that we had the basis for our housing complete, we started to assemble the 3D models for the housing and the internal drive system. Below is the final representation of the internal assembly. The small inner cog will be driven by a DC motor attached to the inner electronics housing placed in the centre.
We started 3D printing the housing with clear ABS plastic to make the final product translucent, so that light can shine through from inside.
The whole printing process took 44 hours for the final product, not including the failed attempts, which could easily add up to over 15 hours. The printing was spread over three weeks of iteration and testing, followed by the final printing. The cogs and electronics housing were printed in white, as we were doubtful of getting a full print from the clear plastic we had left.
Some of the pieces were too large for the printer bed, so they had to be sectioned. The pieces were then welded back together with a plastic adhesive.
Once the two bottom sections were printed we could start assembling. Below is the lazy Susan bearing placed at the bottom of the device.
Below is the first iteration of the ultrasonic sensor we were going to use, wired to an Arduino rather than a Raspberry Pi. The approach for the Pi, however, was the same: we knew we wanted to use the breadboard so that any user could modify their own Wobble to suit their project.
Below are all of our sensors and our motor wired into the breadboards for testing. As you can see on the right breadboard, we are using a Pi Cobbler to extend the Pi's GPIO pins onto the breadboard.
Here is the current working model of the breadboard, with the essential circuitry wired in using short wires so they sit flush with the boards: a much-needed improvement over the circuitry above.
The first version of the centre piece. This one is much smaller in diameter and has the small, not-so-powerful motor mounted on the outside of the centre piece.
The video below shows the new centre piece, which is much larger; the new, larger motor is mounted on the inside through a hole in the centre piece.
Here is Wobble with the top on.
Here are all the components set up for testing.
Here is the full setup placed inside the centre piece.
Below is an image of the first test with the LED lights and the Arduino.
Links to executables
Link to video of the first test with the new motor: https://youtu.be/yrZ8t-VzUTo
Test with lights and motor: https://youtu.be/M77j2RyFHrw
Final test: https://youtu.be/xQt04OLUIRA
Our Building Process
When approaching the build of Wobble, we knew that we wanted to reach a minimum viable product, or at least a working prototype, that could be used as a showcase of our skills. Because of this we leapt headfirst into building methodically, with the intent of hitting our weekly milestones.
Even though one of our main features was to have Wobble spin, we couldn't actually test the spinning mechanism until very near the end of the build, once we had finished our final 3D-printed parts and sourced all of our hardware. Knowing this, we had to take a leap of faith in waiting until the end of the process for this vital piece of hardware. By planning for this delay in testing, we were able to keep it in mind when designing the other features of the device, making sure the end result could still be driven by the motor.
One of the first issues we encountered was getting the initial data from the sensors we were using. Both the ultrasonic and lux sensors were extensively documented for Python and the Arduino IDE, but not for C++ on the Pi, so there were no libraries built for our intended use. After much research, we found two user-made libraries for the sensors, which we modified to suit our needs by extracting only the data we were going to use and reworking some of the code into classes. The real saving grace here was WiringPi, the C++ GPIO access library: once it was installed on the Pi, we could use any pin as easily as if we were developing in Python.
A major issue then sprang up with the ultrasonic sensors. The calculated distance data was being received and printed to our console for testing, but after an indeterminate amount of time, from five to twenty seconds, the console would throw a number unlike any other, usually far larger than expected, and then nothing else would be printed. What was happening was that the distance-gathering function was being called so often, more than once every couple of milliseconds, that the sensor was being physically overloaded trying to measure the many signals it was receiving. To combat this, we implemented a timer that measured the elapsed time and an if statement that only let the sensor update its distance every 0.2 seconds. This interval is arbitrary; it can be as fast or slow as you like, up to the point where the sensor overloads itself.
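A minimal sketch of that throttle, assuming WiringPi's millis() timer; triggerUltrasonicRead() is a hypothetical stand-in for the sensor library's distance call:

#include <wiringPi.h>

// Stand-in for the library's distance call (provided elsewhere).
void triggerUltrasonicRead();

unsigned int lastReadMs = 0;
const unsigned int READ_INTERVAL_MS = 200; // tune up to the overload point

// Called from the main loop as often as you like; the sensor itself
// only fires once per interval.
void updateDistance() {
    unsigned int now = millis(); // WiringPi's millisecond timer
    if (now - lastReadMs >= READ_INTERVAL_MS) {
        lastReadMs = now;
        triggerUltrasonicRead();
    }
}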
Another issue we came across near the end of our build was that, when everything was mounted together, our DC motor turned out to be much too weak to pull the weight of our components. We also found that our first centre piece, which was meant to fit all the components, was too small; if we had kept this print we would have had great trouble fitting everything inside it. So we decided to do one more print where the centre piece had a much larger diameter, giving us much more space for all of our components.
Our project was semi-successful. It turned into a strong first prototype that demonstrates the ideas and the technology used to great effect. The 3D-printed housing came out fully and as we intended, with very few flaws. The construction is solid and the look is very aesthetically pleasing. The sound of the motor and the bearing spinning is relatively low and doesn't detract from the overall experience.
The sound can be manipulated very easily and intuitively even on first use. The fast response and visible sensors allow the user to understand how Wobble works at first spin. The wirelessness of Wobble is by far one of the best selling points of the device; being able to take Wobble with you wherever you go is a boon when testing it out in unique environments. We would have limited our users greatly if we had not gotten to this point of completion.
Some of our success was marred by running out of time to finish some of our hardware features, the first of which was not fully integrating lighting into the device. We were trying to install programmable LED lights that would change with the audio being produced by Wobble. Instead, we connected them to an Arduino. This was a quick implementation so that we could show the aesthetics of the lights even without the functionality.
The second feature not to make the final prototype was letting the user control the speed of the motor. This was partly because the original motor used for testing wasn't powerful enough, and partly because pulse width modulation left some motors without enough power when running at slower speeds. To combat this we sourced a physically geared-down motor that runs at a steady 10 rpm; however, we've had to wire it straight into a power source to make it run constantly.
All in all it was a successful first try at the device and there are clear improvements to be made. We could redesign the assembly to make the device smaller and we would like to fix the issues with the lights and motor so that they are both controlled by the Pi. We could even give Wobble more life by letting it record the data it takes in from the environment to create a pseudo memory. This memory could then be mimicked by other users to hear particular environments.
Reference & Bibliography
WiringPi – http://wiringpi.com
Maximilian – https://github.com/micknoise/Maximilian
Pepakura – http://www.tamasoft.co.jp/pepakura-en/
Autodesk 123D – http://www.123dapp.com/design
Digital Signal Processing – http://musicdsp.org/archive.php