Visualizer Design

As part of the installation, some graphics need to be displayed that respond to changes in the music. The design for this is based on an understanding of Gestalt principles, as discussed in a previous post. Gestalt has been one of the bases for my research into the effect that graphics, motion and sound have upon an audience. Its principles can be, and have been, applied to many parts of successful design and creative work. Understanding the psychological effectiveness of these principles, and how the brain processes them, has allowed many artists to use them effectively.

It only seems fitting, then, that the graphics and motion animation are also influenced by and designed around the principles of Gestalt. The screen hardware chosen for the piece will also shape the design, since its unusual size creates a challenge. The screen itself is an abstract shape, which integrates Gestalt principles further within the installation, making the piece more challenging for the audience to interpret while reinforcing the idea that ambience is all around us, even if we sometimes don't notice it.

The concept is to represent the changing ambience of an environment, with the screen acting as a visual representation of it. The position of the screen is key: it must not directly affect the flow of people passing through, as we want the environment to stay relatively active and fluctuating so that the change in audio is reactive and varied. If large numbers of people spend too much time watching, the ambience could diminish and the message would not be perceived.

In the previous iteration, where I explored the design of a music visualizer, I tried to use shapes as a form of expression and manipulation. In that post I used a section of code which was limited to certain aspects of the sound. As the incoming sound will be unpredictable, I need a more versatile way to present a full array of sounds. In the post about frequencies I managed to create bars that responded to sections of the spectrum, but this was just an even division of the range. To develop this, I wanted to specify the frequency ranges and target each one with different animations and shapes.

In the code and video below, each bar is driven by a specified frequency range:

import ddf.minim.*;
import ddf.minim.analysis.*;
Minim minim;
AudioInput in;
FFT fft;
int sampleRate = 44100;
int bufferSize = 512; // this is actually called timeSize
int fft_base_freq = 86; // size of the smallest octave to use (in Hz), so averages are calculated on a minimum octave width of 86 Hz
int fft_band_per_oct = 1; // how many bands to split each octave into? in this case split each octave into 1 band
int numZones = 0;
int ambience = 50;
int yPos = 270;

void setup() {
  size(displayWidth,displayHeight);
  smooth();
  minim = new Minim(this);
  in = minim.getLineIn(Minim.STEREO, bufferSize);
  fft = new FFT(in.bufferSize(), in.sampleRate());
  fft.logAverages(fft_base_freq, fft_band_per_oct); // results in 9 bands
  fft.window(FFT.HAMMING);
  numZones = fft.avgSize(); // avgSize() returns the number of averages currently being calculated
  noStroke();
}

void draw() {
  background(0);
  fft.forward(in.mix); // perform forward FFT on the line input's mix buffer
  int highZone = numZones - 1;
  for (int i = 0; i < numZones; i++) { // 9 bands / zones / averages

    float average = fft.getAvg(i); // return the value of the requested average band, ie. returns averages[i]
    float avg = 0;
    int lowFreq;

    if ( i == 0 ) {
      lowFreq = 0;
    } 
    else {
      lowFreq = (int)((sampleRate/2) / (float)Math.pow(2, numZones - i)); // 0, 86, 172, 344, 689, 1378, 2756, 5512, 11025
    }
    int hiFreq = (int)((sampleRate/2) / (float)Math.pow(2, highZone - i)); // 86, 172, 344, 689, 1378, 2756, 5512, 11025, 22050
    int lowBound = fft.freqToIndex(lowFreq);
    int hiBound = fft.freqToIndex(hiFreq);
    for (int j = lowBound; j <= hiBound; j++) { // j is 0 - 256
      float spectrum = fft.getBand(j); // return the amplitude of the requested frequency band, ie. returns spectrum[offset]
      // println("Spectrum " + j + " : " +  spectrum); // j is 0 - 256
      avg += spectrum; // avg += spectrum[j];
      // println("avg: " + avg);
    }
    avg /= (hiBound - lowBound + 1);
    average = avg; // averages[i] = avg;
    // ***** 0 Hz - 86 Hz ***** //
    if (i == 0) {  // if the frequency band is equal to 0 ie. between 0 Hz and 86 Hz
      if (average > 0) {
        rect(0, height, width/9, -100-average*ambience);
      }
    }
    // ***** 86 Hz - 172 Hz ***** //
    if (i == 1) { 
      if (average > 0) {
        rect(width/9*1, height, width/9, -100-average*ambience);
      }
    }
    // ***** 172 Hz - 344 Hz ***** //
    if (i == 2) { 
      if (average > 0) {
        rect(width/9*2, height, width/9, -100-average*ambience);
      }
    }
    // ***** 344 Hz - 689 Hz ***** //
    if (i == 3) { 
      if (average > 0) {
        rect(width/9*3, height, width/9, -100-average*ambience);
      }
    }
    // ***** 689 Hz - 1378 Hz ***** //
    if (i == 4) { 
      if (average > 0) {
        rect(width/9*4, height, width/9, -100-average*ambience);
      }
    }
    // ***** 1378 Hz - 2756 Hz ***** //
    if (i == 5) { 
      if (average > 0) {
        rect(width/9*5, height, width/9, -100-average*ambience);
      }
    }
    // ***** 2756 Hz - 5512 Hz ***** //
    if (i == 6) { 
      if (average > 0) {
        rect(width/9*6, height, width/9, -100-average*ambience);
      }
    }
    // ***** 5512 Hz - 11025 Hz ***** //
    if (i == 7) { 
      if (average > 0) {
        rect(width/9*7, height, width/9, -100-average*ambience);
      }
    }
    // ***** 11025 Hz - 22050 Hz ***** //
    if (i == 8) { 
      if (average > 0) {
        rect(width/9*8, height, width/9, -100-average*ambience);
      }
    }
}
}

void stop() {
  in.close(); // always close Minim audio classes when you are finished with them
  minim.stop(); // always stop Minim before exiting
  super.stop();
}
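
Since the nine if-blocks above all draw the same rectangle, just shifted along by the band index, they could equally be collapsed into a single call inside the loop. I've kept them separate so each band can later be given its own shape and animation, but a minimal equivalent, for reference, would be:

    // inside the for loop, equivalent to the nine if-blocks above:
    if (average > 0) {
      rect(width/9*i, height, width/9, -100 - average*ambience); // one bar per band, shifted by index
    }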

 

You can't really see a substantial difference in the graphics, as I'm just using the varying average of each of the nine sections of audio frequencies to control the height of a rectangle. However, taking this forward, the same approach could be applied to a range of shapes, particle systems or images. Having an average for each section gives me a specific variable for each band, so I can change that section's visual appearance directly.
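
A rough sketch of that idea, assuming it is pasted into the sketch above (it reuses the existing fft and ambience globals; the 0.2 smoothing factor and the circle/bar mapping are just assumptions for illustration): each band's average is eased into its own variable, which can then drive a different visual property per band.

float[] bandLevels; // one smoothed level per frequency band

void setupBands() {
  bandLevels = new float[fft.avgSize()]; // call once from setup(), after fft.logAverages()
}

void updateBands() {
  for (int i = 0; i < bandLevels.length; i++) {
    // ease each band towards its current average so the motion is calmer
    bandLevels[i] += (fft.getAvg(i) - bandLevels[i]) * 0.2;
  }
}

void drawBands() {
  for (int i = 0; i < bandLevels.length; i++) {
    float x = width / (float) bandLevels.length * (i + 0.5);
    if (i < 3) {
      // lower bands drawn as circles...
      ellipse(x, height/2, bandLevels[i] * ambience, bandLevels[i] * ambience);
    } else {
      // ...upper bands as thin bars, purely as an example mapping
      rect(x, height/2, 10, -bandLevels[i] * ambience);
    }
  }
}

updateBands() and drawBands() would be called from draw() after fft.forward(in.mix), and setupBands() from setup().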

Taking my earlier visualizer design and applying it to these elements was relatively simple and allowed me to experiment with different positioning of shapes. In the video below you can see that, at the start, the shapes were very erratic and disorganised; this mainly came from a weaker understanding of abstract design. I went on to research motion design further, to gain a better understanding of how I could apply Gestalt principles to the animation as well.

Update: Motion & sound

After working with the design more, I applied the principles of Gestalt, focusing on symmetry, similarity and proximity. Applying these principles allowed the piece to become far more ordered and less of an abstract mess.
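
As a small sketch of what symmetry looks like in the code (the mirrored pair of circles below is an illustrative assumption, not the final composition), each band-driven shape can simply be drawn twice, reflected about the centre of the screen, so the layout stays balanced however the audio jumps around:

// mirror a band-driven circle about the vertical centre line of the screen
void drawMirrored(float bandValue, float offset) {
  float d = 20 + bandValue * ambience;          // same scale rule for every copy = similarity
  ellipse(width/2 - offset, height/2, d, d);    // left copy
  ellipse(width/2 + offset, height/2, d, d);    // reflected right copy = symmetry
}

Called as, say, drawMirrored(fft.getAvg(0), 200) from draw(), the pair expands and shrinks together.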

As you can see from this next development, the different visual elements are starting to come together and take meaning from the music itself.

  • The lower bass sections of the music are represented by large circles which deform and move in slower, bigger jolts.
  • The mid ranges of the music are represented by re-appearing shapes, because their frequent activity would make static objects far too vigorous in motion.
  • The upper ranges of the frequencies are portrayed by the rotating orbits, which are designed to show the quick, vigorous movements of the upper range (a rough code sketch of this mapping follows below).
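
A sketch of that mapping, assuming it is called from draw() after fft.forward(in.mix) in the sketch above; the grouping into low (bands 0–2), mid (3–5) and high (6–8), the thresholds and the sizes are all assumptions for illustration:

void drawRanges() {
  float low = 0, mid = 0, high = 0;
  for (int i = 0; i < 3; i++) low  += fft.getAvg(i);
  for (int i = 3; i < 6; i++) mid  += fft.getAvg(i);
  for (int i = 6; i < 9; i++) high += fft.getAvg(i);

  // bass: one large circle that deforms in slow, heavy jolts
  ellipse(width/2, height/2, 150 + low * ambience, 150 + low * ambience);

  // mids: shapes that only re-appear when the level crosses a threshold,
  // so their frequent activity doesn't turn into constant jitter (threshold tuned by ear)
  if (mid > 2) {
    rect(random(width), random(height), 20, 20);
  }

  // highs: a small orbiting point whose speed follows the upper range
  float angle = frameCount * 0.05 * (1 + high);
  ellipse(width/2 + cos(angle) * 200, height/2 + sin(angle) * 200, 10, 10);
}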

From this point I want to add certain parameters, like time, as variables of interaction. The idea of using time came from a study of emotion and time, which made me want to understand whether time has an effect on the moods of an audience. This in turn would affect their attention and responsiveness to the graphics.
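
One rough way to fold time in, purely as an assumption to illustrate the idea (the hour-based curve below is invented, not taken from the study): weight the visual response by the time of day and let that weight scale whatever the bands drive.

// illustrative time-of-day weighting for the visual response
float timeWeight() {
  float h = hour() + minute() / 60.0;   // current time, 0.0 - 24.0
  // strongest response around midday, gentler early in the morning and late at night
  return 0.5 + 0.5 * sin(PI * h / 24.0);
}

In draw(), the band averages (or the ambience value) could simply be multiplied by timeWeight().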
