Second Prototype: Development

Posted in Programming

The second prototype builds on our first, taking a couple of small steps in the right direction for the final application. At this point we’re still focused on working out the basic functionality of the app and how users will interact with it, while the design for the application is being finalised.

The main difference here is the addition of a navigation controller to the app. The navigation controller allows us to switch between different views. In this case, we have the initial view of the zoomable Magna Carta image, and the app needs to switch to a different view containing the details about each clause when it is tapped.

We also added a couple more hot spots onto the image to test and implement the class made to control the coloured overlays.

Below is a screenshot of the Overlay class. It takes three pieces of data at this point (but more can be added if needed). For it to be initialised, we need a name, a colour (which later will be used to separate categories) and a frame (which is the size and location of the overlay rectangle).

The class for the overlays

In the View Controller, the overlays are stored in an array, which seems to be the easiest and tidiest way to hold all the initialisation data. This approach is easy to read and will also make it simpler to place the overlays on the image later.

The array used to store the overlay data.

The overlays are then placed on top of the image using a for loop that iterates through the ‘overlays’ array. Each one is placed by adding a subview on top of the Magna Carta image in the Scroll View Controller, and a Tap Gesture Recogniser is then added to each overlay so that it can act as a button.
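The prototype itself is written in Swift (shown in the screenshots), but the shape of the class, the array and the placement loop can be sketched in plain Java. Every name, colour and coordinate below is a hypothetical placeholder, and the actual subview/gesture-recogniser work is reduced to a print:

```java
import java.util.ArrayList;
import java.util.List;

public class OverlayDemo {

    // An overlay needs a name, a colour (for later category grouping)
    // and a frame: the position and size of its rectangle on the image.
    static class Overlay {
        final String name;
        final String colour;
        final int x, y, width, height;

        Overlay(String name, String colour, int x, int y, int width, int height) {
            this.name = name;
            this.colour = colour;
            this.x = x;
            this.y = y;
            this.width = width;
            this.height = height;
        }
    }

    // All initialisation data lives in one array, so adding another
    // clause overlay later is a one-line change.
    static List<Overlay> makeOverlays() {
        List<Overlay> overlays = new ArrayList<>();
        overlays.add(new Overlay("Clause 1", "red", 120, 80, 200, 40));
        overlays.add(new Overlay("Clause 39", "blue", 150, 640, 260, 40));
        return overlays;
    }

    public static void main(String[] args) {
        // The placement loop: in the app each iteration would add a
        // subview over the Magna Carta image and attach a tap gesture
        // recogniser; here we just report what would be placed.
        for (Overlay o : makeOverlays()) {
            System.out.println("Placing " + o.name + " (" + o.colour + ") at ("
                    + o.x + ", " + o.y + ") size " + o.width + "x" + o.height);
        }
    }
}
```

Keeping the data in one array means the placement loop never has to change when overlays are added or removed.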

When one of the overlays is tapped, the app transitions to a new view, which will eventually hold the details of each clause. At the moment the prototype transitions to the same page for every overlay because, again, we wanted to show the client our progress so they could get a better understanding of the type of functionality we’re aiming for. Much like the overlay class, a ‘clause detail’ class will need to be made to store the information for each of the different clauses and create a unique link for each one.
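A rough sketch of what that ‘clause detail’ class could look like, again in plain Java rather than the Swift the app will actually use; the names and placeholder text are entirely hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class ClauseDetailDemo {

    // One record per clause: whatever the detail view needs to show.
    static class ClauseDetail {
        final String title;
        final String summary;

        ClauseDetail(String title, String summary) {
            this.title = title;
            this.summary = summary;
        }
    }

    // Keyed by the overlay's name, so a tap can look up the right detail
    // instead of every overlay leading to the same page.
    static Map<String, ClauseDetail> buildDetails() {
        Map<String, ClauseDetail> details = new HashMap<>();
        details.put("Clause 1", new ClauseDetail("Clause 1", "Placeholder summary text."));
        return details;
    }

    // What the tap handler would do: resolve the tapped overlay's name to
    // its detail record before transitioning to the detail view.
    static ClauseDetail detailFor(String overlayName) {
        return buildDetails().get(overlayName);
    }

    public static void main(String[] args) {
        System.out.println(detailFor("Clause 1").title);
    }
}
```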

Below is a Gfycat clip of the prototype in action, showing the zoom and scroll functionality and the transition to the detail page for the clauses.

https://gfycat.com/LateFalseAracari

In the future we want to add more functionality, which we will be working on as we can. One of the main things we’re interested in is adding some sort of filter for the clause overlays to separate them by categories/themes (where the colour coding will come in handy). One of our main concerns at this point is that too many overlays could overpower the Magna Carta image and ruin its aesthetic. One possibility would be to highlight only the most important or interesting clauses, so that the app isn’t flooded with information that would make it less user-friendly.

First Prototype

Posted in Programming

Producing a prototype for the client to see is an essential part of client-to-agency development. In recent weeks we have been developing our ideas with the clients to create a section of the application that fits their needs and requirements.

Prototype-Mock, Prototype-Mock-2, Prototype-Mock-3

From our creative ideas, the discussions with the client and the requirements of a heritage application, we have established an understanding of an application that would fulfil a specific need for the client/Cathedral. The idea was discussed in the previous post’s feedback.

From our MoSCoW analysis (the application requirements) we have produced a prototype with working hot spots on the image.

The idea for this prototype was to get a basic understanding of how the app will function and look, both for our own sake and for that of the client. As a starting point we used an image found online, as we don’t yet have the official high-resolution image we’d need for the final app.

The image was put into Photoshop so we could find the exact pixel coordinates needed to draw the hot-spot boxes over the clauses in Swift. At this point we don’t actually know where each clause starts and ends, so we made random selections as examples to show how it looks.
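One detail worth noting: frames measured in Photoshop on the placeholder image will need rescaling once the official high-resolution scan arrives. A small sketch of that conversion, with assumed (not real) image sizes:

```java
public class HotSpotScaling {

    // Scale a coordinate measured on the placeholder image to the
    // equivalent position on an image of a different resolution.
    static int scale(int coord, int fromSize, int toSize) {
        return Math.round(coord * (float) toSize / fromSize);
    }

    public static void main(String[] args) {
        int placeholderWidth = 1000; // assumed working-image width
        int officialWidth = 4000;    // assumed hi-res scan width

        // A hot-spot box measured at x = 120, width = 200 on the placeholder:
        int x = scale(120, placeholderWidth, officialWidth);
        int w = scale(200, placeholderWidth, officialWidth);
        System.out.println("x=" + x + " width=" + w); // x=480 width=800
    }
}
```

With this in place, the hot spots measured now would not have to be re-measured by hand when the image is swapped.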

For this prototype we only made one hot spot, as there was little point in making more while we don’t have the correct image or the actual locations of the clauses.


Below is the code, commented to help the project move along so that the team members who were programming the application would understand what each part of the code does. The main focus for the first prototype was to work out the gestures and basic functionality of the app.

Code for User Interface

The code above defines which image is going to be used for the user interface. This code is important as it’s the basic structure of how the application will be used: the client wants the high-res image of the Magna Carta as the core navigational structure.

Code for Double Tap Recognition

This is the double-tap gesture, one of the gestures that will be used in the application. The benefit of using it is that everyone who owns or uses an iPad is familiar with it, so to conform to the style of Apple products it makes sense to include this gesture within the app.

Code for maximum zoom

As we are using gestures to zoom into the image, it is important to define how far the image can be zoomed; otherwise the user would be able to zoom into the tiniest pixel, which isn’t useful. We have therefore capped it at 5× zoom.

Code for alert

The whole concept of the app is to be able to tap on the clauses of the Magna Carta, so it was crucial for our first prototype to include an element of this. This part of the code is the start of that: the function is called when an overlay is tapped. In later versions it will pull up the information for the selected clause, but as an initial prototype this code simply triggers an alert.

Code for Gestures

It was also important to define how much each double tap scales in (zooms). At the moment we have set the factor to 1.5, so each double tap zooms in by an appropriate amount. As you can see, defining the ‘newZoomScale’ and setting a maximum achieves this.
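Putting the two zoom rules together (the 1.5× double-tap step and the 5× cap described above), the calculation can be sketched as follows. ‘newZoomScale’ mirrors the name in our Swift code; everything else here is illustrative, written in plain Java:

```java
public class ZoomStep {

    static final double ZOOM_STEP = 1.5; // each double tap multiplies zoom by this
    static final double MAX_ZOOM = 5.0;  // the cap defined for the scroll view

    // The new zoom after a double tap: step up, but never past the maximum.
    static double newZoomScale(double currentZoom) {
        return Math.min(currentZoom * ZOOM_STEP, MAX_ZOOM);
    }

    public static void main(String[] args) {
        double zoom = 1.0;
        for (int tap = 1; tap <= 5; tap++) {
            zoom = newZoomScale(zoom);
            System.out.println("After tap " + tap + ": " + zoom);
        }
        // Zoom grows 1.5, 2.25, 3.375, then the 5.0625 step is clamped to
        // 5.0 and stays there on every further tap.
    }
}
```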

View the whole commented code here.

Design Iterations Installation Project

Posted in Installation, University

Poster Project & Development

Iteration 1

Iteration 2

Iteration 3

Evaluation of Ambient Sound Installation

Posted in Installation, University

The design brief asked…

Create a piece of interactive information design for a shared public space, which is intended to elucidate/explain an idea or concept you perceive as key to our 21st-century media experience.

The goal of the installation is to present the idea of the ambience of an environment and the changes in it. The final installation is a combination of theories involving colour, sound, ambience, audiences and design principles. The idea of ambience was presented using a Processing sketch that took in sound through a microphone and presented it back to an audience on a display. The installation area and the hardware available had a major effect on the final outcome, as they dictated some design factors. These were resolved to produce an information graphic which used people’s presence within the environment as a form of interaction.

The final piece transformed several times throughout its design iterations.

  • Initially it mainly focused on presenting music back to an audience using visual screen design.
  • The introduction of audience interaction using reacTIVision and Minim created an activity of participation around feelings, but didn’t present a media concept specifically.
  • Lastly, ambience and the theory of passive participation allowed an audience to take part: knowingly or unknowingly, they are actively participating in media constantly.

From testing in the foyer, the concept works well to show the change in activity within the area. Secondly, it adds some nice atmospheric sounds, describing the current environment’s sound level in comparison to another place through sound. I think it could work well as a tool for classrooms to control sound levels, or as a scale of sound measurement within busy areas.

One aspect of the final installation that didn’t work was not being able to use the large screen mounted high up. This created a problem in terms of the audience, as the piece was meant to be seen only by an active audience looking up towards the screen. The screen used in testing is in the direct line of view, meaning even a passive audience would see it, possibly defeating the object of one of the concepts.

As an infographic it works well to describe the environment changing as it becomes busy or quiet, with a design that doesn’t require too much processing by an audience passing by. The design also integrated well with the architecture of the environment, as its bright colours worked well with those of the foyer.

To improve, more consideration could be given to the scale measurements, as they might need some adjustment and a slightly more precise system of calculation to give a better reading. As an installation it could also be improved with more user interaction, perhaps through use of the camera in some aspect. The motion detection attempted to do this; however, it was hard for the audience to understand, and a more fun and inventive way of interacting with the ambience of movement was needed.

Next time I would have liked to put it in a few different places to see what kinds of reactions audiences have. An example could be placing it in the background of an event, to see whether it encourages louder and more varied noise within an otherwise ambient environment.

Final Installation

Posted in Installation, Processing, University

 

Download the full working document here:

https://drive.google.com/file/d/0ByLXYHEOlMgWSFkzamhIVThhakU/view?usp=sharing

http://www.openprocessing.org/sketch/182418

Testing Development

Posted in Installation, Processing, University

A quick re-test of the screen once again threw up some possible issues and problems to resolve. As discussed in the previous post, the decision was taken to combine the visualizer and the ambient audio sketch. This combination was tested on the screen to see what the results were like in the environment.

As you can see from the video, there was quite a lot of erratic behaviour in some elements. This was due partly to the sensitivity of the microphone, but also to the background music being played by Costa Coffee. A possible fix could be adding a variable to take away some sensitivity, which could be adjusted for different locations.

By simply adding a variable then…

This small implementation of a variable across all elements could allow an install team to adjust the sound level for different environments or places to cancel out unwanted noise. This would mean it could be calibrated for each install location if it were placed elsewhere. Even though the piece was designed to pick up all the noise, which most of the elements do, some require dampening down or they will start to behave erratically.
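A minimal sketch of what such a variable could look like, written here as plain Java rather than a Processing sketch; the offset value and names are hypothetical:

```java
public class SensitivityDamping {

    // Set once per install location, after listening to the background
    // level (e.g. the coffee-shop music) that should be cancelled out.
    static double sensitivityOffset = 0.15;

    // Microphone levels arrive as floats between 0.0 and 1.0; subtract
    // the offset before any element uses the value, flooring at silence.
    static double damped(double rawLevel) {
        return Math.max(0.0, rawLevel - sensitivityOffset);
    }

    public static void main(String[] args) {
        System.out.println(damped(0.10)); // below the offset: treated as silence
        System.out.println(damped(0.60)); // loud input still comes through, reduced
    }
}
```

Because the offset is a single variable, calibrating for a new location means changing one number rather than touching every element.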

Update: Final Installation Sketch

Testing Outcome

Posted in Installation, Processing, University

Live testing within the installation area has given me some food for thought regarding the final design of the installation. After some issues with hardware, design and motion detection, a synthesised version of the ambient detection and representation sketch is presented below. As you can see, it is relatively simple in design, with two simple, refined moving features affected by incoming microphone audio.

The visualizer design that was pushed out as a result of changing screens could be used to introduce some more animated parts. Its design was compromised by the change of screen, but it had some really nice features that could be utilized: the emotive time-based colour schemes and a complex system of music analysis displaying changes across different audio frequencies. It would need some reconfiguration, but it could add more depth to the current layout, as there is lots of empty space that could be utilized.


Needs some redesign.

Motion ambience has been left out, based on its recognition system and the hardware’s ability to distinguish motion at varying distances. With the camera not mounted in a high position, there was too much influence from interaction in close proximity, while about 4 metres back very little was being detected, creating a representation that did not match its desired effect/purpose. Some exploration of the Kinect could have helped, but it was decided that it was not a fundamental part of the sketch and could be taken out.

Combining the visualizer and the ambient audio can be seen in the video below. Some of the graphics from the ambient audio sketch were not implemented originally, as it was considered they might not be needed.

It was good to see all the elements working together, but the design still looked messy and, as before, leaving some elements without a purpose left them undefined. Simply re-applying the infographic designs gave elements in the design a purpose, as it gave them a descriptive value.

Update: Testing Development

Live Testing

Posted in Installation, Processing, University

Testing the installation in the foyer brought about slightly different results to what I was expecting. In the previous post I discussed how the design had to be changed because the large screen was out of use, which substantially affected the design. This also had an effect on the setup of the equipment. I had a setup in mind for how the microphone for the audio input and the camera for the motion detection would be situated. As you can see from the diagram below, the equipment was to be placed in high positions to sample data from a larger area. The microphone in particular was placed high up to gain better coverage of the whole environment; otherwise people closer to it would have a disproportionate effect.

[Diagram: planned equipment setup]

In the area it was difficult to get the microphone and camera into high positions, as we had limited equipment such as ladders and cables. The compromise was to face the microphone and camera towards the main walking area, which would capture the most action and best emulate the changing environment. This was not my original plan, but it was the only way I could ensure the changing ambience was similar to that of the environment.

First Day Testing

The videos below show the first simple visualizer working on the screen. At the time I was working on implementing the idea of motion ambience within the sketch. The first tests gave a good insight into how the colours and sizes of elements changed on a large display, meaning I could go away and adjust these. In terms of ambience, it occurred to me that the environment was a lot noisier than expected; also, the ambient music that Costa Coffee was playing started to interfere. Lastly, it was suggested that some idea of measurement could be implemented, such as labels for each sound level, to give the graphics slightly more of a descriptive element. These problems are solved in the next test situation.

Update: Information Design

 

Second Testing

Implementing the motion ambience was relatively simple, adding another dimension of interactivity to the installation. My thought was that the louder the environment got, the greater the amount of movement would be. Testing proved important at this point, as it revealed some unforeseen issues with the motion-detection aspect of OpenCV.

As you can see from the video below, the motion detection was controlling the white bar, which increased in size the more movement it detected. The video shows the sound increasing while the motion wasn’t, even though there were lots of people in the foyer; theoretically the two should have been at similar levels. From moving around in front of the sketch it became apparent that people closer to the camera were having a greater effect on the motion value. This meant very little movement was registered in contrast to the audio. This was unexpected and needs to be looked at in the next development.

 

In summary, from testing in the environment I have found:

  • The motion-detection setup didn’t work as expected; it was mainly influenced by the distance of the motion from the camera, not the amount of movement itself.
  • Implementing the scale in the design explained the purpose of the installation far better, without making it too forceful and full of text.
  • Adding sound as a representation of a similar noise level to the one being experienced was effective, as it grabbed people’s attention while producing a good signification of their surroundings.

 

Hardware Malfunctions/Development

Posted in Installation, Processing, University

Hardware malfunctions are something that can occur while setting up an installation, normally due to technology. These malfunctions are something a designer has to deal with quickly and effectively to solve the situation or problem. Currently, the design of the visualizer’s elements for the ambient audio installation has been based around the use of a 2500 x 400 display. The key creative feature of its shape was being utilized within the design, and lots of elements were designed around the use of this screen.

Unfortunately the screen is currently out of order due to a technical fault and will not be available for use. This is going to affect the design of the visualizer, as a normal 1280 x 720 screen now has to be used.

The 2500 x 400 (left) vs. 1280 x 720 (right)

 


This affects the design: just by changing the size of the canvas you can see what happens to some of the elements.

Before (1500 x 400):

After (1280 x 720):

When the canvas is made taller, the design of the elements doesn’t take advantage of the whole screen. Secondly, the key feature of the frequency bar was designed perfectly for the original type of screen; now it doesn’t make such an impact on the design, and its purpose is slightly lost. Lastly, the grouping of the elements is spread further apart, making them seem singular and unconnected. In terms of gestalt, this proves that the application of the principles clearly worked well within the original design: the principle of proximity is now less applicable, and its absence is felt in the lack of organisation.

Going forward, I have looked at options for development, as the current design needs some work or re-thinking now that it has been compromised. A possible path for development is a more information-based design, which could include the use of a scale and/or perhaps some more delicate design elements.

 

Adding Ambient Sound

Posted in Installation, Processing, Uncategorized, University

Adding sound back into the environment being monitored has been one of the intentions from the start of the project. It began with the idea of replaying music in relation to time and emotion, but development took music out of the equation and introduced the idea of ambient noise in the same way.

Introducing more noise into the environment could cause some problems in relation to looping: the sound of the ambient playback could be picked up by the microphone, creating a looping cycle of noise. The volume of the sound playback needs to be considered so that it doesn’t affect the environment itself.

Feedback loop: occurs when the sound from the speakers makes it back into the microphone and is re-amplified and sent through the speakers again

The sounds played back are going to represent a similar level of sound in a different location. The sound levels of other environments or products need to be accurate to describe those of the adjacent environment. Another consideration is deciding the upper and lower boundaries of noise. This could be done by mapping the high noise of the foyer to a high-noise environment, and vice versa: e.g. when it is extremely noisy for the foyer, industrial noises are played, and at low noise levels, calm and tranquil noises are played.

The table below maps common noise sources to decibel levels.

Noise Source | Decibel Level | Decibel Effect
Boeing 737 or DC-9 aircraft at one nautical mile (6080 ft) before landing (97 dB); power mower (96 dB) | 90 | 4 times as loud as 70 dB. Likely damage in 8 hr exposure.
Garbage disposal, dishwasher, average factory, freight train (at 15 meters); car wash at 20 ft (89 dB); food blender (88 dB) | 80 | 2 times as loud as 70 dB. Possible damage in 8 hr exposure.
Passenger car at 65 mph at 25 ft (77 dB); living room music (76 dB); radio or TV audio, vacuum cleaner (70 dB) | 70 | Arbitrary base of comparison. Upper 70s are annoyingly loud to some people.
Conversation in restaurant or office, background music, air conditioning unit at 100 ft | 60 | Half as loud as 70 dB. Fairly quiet.
Quiet suburb, conversation at home; large electrical transformers at 100 ft | 50 | One-fourth as loud as 70 dB.
Library, bird calls (44 dB); lowest limit of urban ambient sound | 40 | One-eighth as loud as 70 dB.
Quiet rural area | 30 | One-sixteenth as loud as 70 dB. Very quiet.

Using this within the design of the graphics, combined with infographic design thinking, added great meaning to the visualizer’s purpose. The addition of audio connected to the visuals furthers the ambient feeling of the piece and creates a sense of realisation for audiences.

Here is how the scale was tested in relation to the different sound brackets.

Playback of audio has up to this point been selected by an int (whole-value) variable, which can specify a file; however, the input audio level is a float with several digits (e.g. 0.88493) and needs boundaries set. Minim can also play several sounds through a single AudioPlayer, which we don’t want, as too many sounds would be playing at the same time. We therefore need a system of selection which waits for a file to stop playing before selecting and playing the next.

 

By using a series of if statements, variables and booleans, we can build a setup which will:

  1. Start with file 1.
  2. Wait for the file to finish.
  3. Check the ambient sound.
  4. Decide what range of sound that falls within.
  5. Play a file in relation to that sound.
  6. Return to step 2.

This will allow a different sound to be selected each time, based on the input sound level. The sounds will relate to the noise level in the environment and describe it. Some further consideration needs to be given to the possibility of a feedback loop and how it can be counteracted.
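The six steps above could be sketched roughly as follows, in plain Java rather than a Processing/Minim sketch. The file names and thresholds are hypothetical, and real playback (and the clearing of the ‘playing’ flag when a file ends) would be handled by Minim’s AudioPlayer:

```java
public class AmbientPlayback {

    // One file per sound bracket, quietest to loudest (hypothetical names).
    static final String[] FILES = {
        "tranquil.wav",    // bracket 0: low noise
        "suburb.wav",      // bracket 1
        "restaurant.wav",  // bracket 2
        "industrial.wav"   // bracket 3: extremely noisy
    };

    // Step 4: decide which range the ambient level (0.0 to 1.0) falls within,
    // turning the multi-digit float into a whole-number file index.
    static int bracketFor(double level) {
        if (level < 0.25) return 0;
        if (level < 0.5)  return 1;
        if (level < 0.75) return 2;
        return 3;
    }

    // True while a file is still playing; cleared when playback finishes.
    static boolean playing = false;

    // Steps 2-5: only when nothing is playing, check the level and pick
    // the matching file; otherwise wait (return nothing).
    static String selectNext(double ambientLevel) {
        if (playing) return null; // still waiting for the current file
        playing = true;
        return FILES[bracketFor(ambientLevel)];
    }

    public static void main(String[] args) {
        System.out.println(selectNext(0.88)); // industrial.wav
        System.out.println(selectNext(0.10)); // null: previous file still playing
        playing = false;                      // file finished
        System.out.println(selectNext(0.10)); // tranquil.wav
    }
}
```

The boolean gate is what stops several sounds from playing at once through a single player: a new file is only ever chosen after the previous one has finished.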

 

Update: Live Test

http://electronics.howstuffworks.com/gadgets/audio-music/question263.htm

Temple University Department of Civil/Environmental Engineering (www.temple.edu/departments/CETP/environ10.html), and Federal Agency Review of Selected Airport Noise Analysis Issues, Federal Interagency Committee on Noise (August 1992). Source of the information is attributed to Outdoor Noise and the Metropolitan Environment, M.C. Branch et al., Department of City Planning, City of Los Angeles, 1970.