EL 1 Portfolio

Secondary Experiment

Soundscape Composition: Initial plan

  • Find space with multiple sound sources
  • Make an omnidirectional stereo recording
  • Use directional microphone to isolate specific sound sources
  • Manipulate the directional recordings to introduce musicality, e.g. a custom impulse response in a convolution reverb (record an impulse response in the space?)


Prototype experimentation: Session 1

I managed to acquire some metal from Goldsmiths University, choosing long, thick pieces with the aim of a lower and wider resonant frequency range. We started by grinding the sharp edges off the pieces to make them safe, then drilled holes as a means to hang them. The heaviest piece was too thick to drill through, so we welded two hooks onto it.

I chose to use a larger transducer for more volume, the downside being that its suction cup was too large for the beams, so I had to tape the speakers on. I don’t think this was the most effective method. The struggle with this way of producing sound is balancing the strength of the connection (between the speaker and the material) against the damping of the resonance caused by the attachment. I’m considering trying a G-clamp next, as they are metal and hopefully won’t dampen the sound.

My original plan was to have two beams working in stereo, but after having trouble syncing the two transducers, and a lack of volume, I decided to stick to one beam. I found the material influenced the sound a lot, and the input sound of an E-bow on a metal guitar string suited the timbre. In the interest of time I decided to play a recording I had made a few weeks earlier with about four harmonies. Luckily I had many different versions featuring other instruments such as synthesisers, granulated vocal recordings and electric guitar. This gave me a better understanding of how to compose the arrangement, how certain instruments resonated with the material, and what worked well when combined.

A drawback I discovered while using this method is that it’s very hard to achieve a decent volume. To solve this I decided to record the sound using a Zoom H5n held very close to the beam. I found that the sound changed drastically depending on where I placed the mic; I could change the amount of low end by moving it. Towards the end of the session I found this to be the best position; it gave me the most ‘material’ in the sound:

Through this method I could also experiment with swinging the beam to bring a more dynamic spatialisation to the recording.

I realised that the input files had too much reverb applied, and the identity of the material was getting lost. Removing all of the reverb resulted in a very empty sound, which was not what I set out for, so I gradually added small amounts of reverb until I found a middle ground.

Retrospectively, I realised I misunderstood the order of these manipulations. My first mistake was having all four harmonies play at the same time through one beam; I feel this loses a lot of the detail I could have harvested from the material had it been layered. My second mistake was applying the reverb first. No reverb sounded empty to me because I was listening to the whole composition through the material, giving me the expectation of a finished piece. I believe the better method would be to record each harmony separately through the material; I can then combine these recordings in the DAW and recreate the digital landscape’s structure. These recordings can then have reverb added as a last measure to contextualise them in the mountainside.

Revised order of process:

  • Record harmonies separately with no reverb
  • Create the soundscape in the DAW
  • Place these recordings in the space, with gain adjustment for distance
  • Add a convolution reverb to contextualise the recordings in a realistic space
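As a rough illustration of the convolution-reverb step above, here is a minimal Python sketch (the function name, toy click signal and stand-in impulse response are my own illustrative choices; a DAW plugin does far more, but the core operation is the same convolve-and-blend):

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.5):
    """Convolve a dry signal with an impulse response, then blend
    dry and wet (wet_mix = 0 gives a fully dry output)."""
    wet = np.convolve(dry, impulse_response)            # full convolution
    dry_padded = np.pad(dry, (0, len(wet) - len(dry)))  # match lengths
    out = (1 - wet_mix) * dry_padded + wet_mix * wet
    peak = np.max(np.abs(out))                          # normalise to avoid clipping
    return out / peak if peak > 0 else out

# Toy example: a single click through a short decaying "room"
dry = np.zeros(100)
dry[0] = 1.0
ir = np.exp(-np.linspace(0, 5, 50))  # stand-in for a recorded impulse response
wet_signal = convolution_reverb(dry, ir, wet_mix=0.7)
```

Recording a real impulse response in the mountainside space would simply replace the stand-in `ir` array here.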

I spent the last portion of the session crudely putting together a soundscape made up of a few woodland recordings from the BBC sound library, a quick chain recording I made in the performance lab, and a wind track. I then layered a few of the recordings from this experiment to see how they fit together. I believe it is a proof of concept; however, the harmonic metal moans need to be separated and contextualised in the digital space.

Rough draft for soundscape + Experimentation recordings


SOUND RESEARCH: CRISTOBAL TAPIA DE VEER (UTOPIA)

I recently watched the 2013 drama Utopia, a show about conspiracy, engineered reality and humanity turned mechanical. The whole show has a very uncomfortable undertone, as if nothing is what it seems and everything that happens is part of some kind of plan. Some characters are convinced from an early age that they can do whatever they want because they are but a tiny blip in history; they become mechanical in their actions, with no thought of morality.

This theme of engineered reality and this feeling of constant discomfort is potently expressed in the soundtrack created by Cristobal Tapia De Veer. He achieves this through manipulating samples of organic sounds such as birds or human breathing.

In the pursuit of texture, Cristobal avoids samples that he describes as too clean, pure or sterile. Instead he looks for samples from vinyl, then ‘disrespects’ or ‘tortures’ them with compression, bit crushers and the like. He describes it as making them ‘textural, grainy, dirty, flawed, alive.’ He then slows the sample down, making the ultra-fast-moving waves perceivable as rhythm. Cristobal samples a very small part of the audio file and loops it, giving it infinite sustain; he describes hidden rhythms ‘in the DNA’ of a sound. Using these dirty, rhythmic, alive sounds, Cristobal creates haunting scores that feel unsettling and alive, but not quite organic. A Frankenstein’s monster of a score.
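To make the slow-down and micro-loop ideas concrete, here is a hedged Python sketch (the function names, the 220 Hz test tone and the slice sizes are my own illustrative choices, not Cristobal’s actual process):

```python
import numpy as np

def slow_down(signal, factor):
    """Naive slowdown by linear interpolation: stretches the waveform
    so fast wave motion becomes audible as rhythm (pitch drops too)."""
    positions = np.arange(0, len(signal) - 1, 1.0 / factor)
    return np.interp(positions, np.arange(len(signal)), signal)

def micro_loop(signal, start, length, repeats):
    """Loop a tiny slice of a sample to give it 'infinite' sustain."""
    return np.tile(signal[start:start + length], repeats)

# Toy example: one second of a 220 Hz tone at 44.1 kHz, slowed 8x,
# then a 2048-sample slice looped 16 times for sustain
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
slowed = slow_down(tone, factor=8)
sustained = micro_loop(slowed, start=0, length=2048, repeats=16)
```

Looping a slice this small is what exposes the “hidden rhythm” of the waveform itself rather than the original event.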

Cristobal Tapia De Veer created a soundtrack that perfectly complemented the show in context and tone. I hope to bring this kind of ingenuity to my practice, considering how the story and tone of a piece of media can be reflected in the tools and auditory material you use: manipulating samples to mould them into something new, finding texture in sound by disrespecting and torturing it.

Using Cristobal’s techniques listed above, I created a piece of music in the style of Utopia.

I started with a sample of a male owl from the BBC sound library; I chose an owl because their call tends to hold a note. I then increased the gain, decreased the bit resolution and selected a small part of the audio file. I played a melody on the lower notes of the keyboard. This gave the notes a gritty texture, and the note wavered slightly, giving it a natural but eerie sound.
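The bit-resolution reduction mentioned above amounts to a simple quantiser. A minimal sketch (my own names and a toy tone; a real bit-crusher plugin typically also reduces the sample rate):

```python
import numpy as np

def bit_crush(signal, bits):
    """Quantise samples in [-1, 1] to 2**bits levels, introducing
    the gritty, stepped texture of reduced bit resolution."""
    half_levels = 2 ** (bits - 1)
    return np.round(signal * half_levels) / half_levels

# Toy example: a smooth tone crushed to 4-bit resolution
t = np.arange(1000) / 1000
tone = np.sin(2 * np.pi * 5 * t)
crushed = bit_crush(tone, bits=4)
```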

I then took a sample of myself singing a note with an ă pronunciation and repeated the process I used for the owl sample, but played it only slightly under the original pitch. I liked how it sounded almost right and natural, but not quite. I played the main melody with this instrument, supported by a sample of myself singing a higher note with an ŏ pronunciation; chorus and reverb were then added to this sample and it was placed low in the mix.

I then recorded myself breathing out, reversed the sample and vocoded it, which resulted in a rhythmic, almost human sound. I also recorded a sample of my neighbour’s builders hammering something; I think it sounds like a clock that doesn’t keep time, which adds to the uneasy tone I’m aiming for. For the introduction I layered three samples of birds chirping, vocoded them and played only the vocoded part. On top of this I added a very quiet melody using the instrument made from my voice. I used two reversed samples of myself saying something to transition into the main melodies. I recorded an egg shaker, doubled the speed of the recording and added a tremolo to it; this made a very dry percussive sound with movement, almost like an insect moving its wings. I used a recording of myself coughing very lightly to contrast the unnatural vocoded inward breath; this is intended to throw the listener into a rhythm that is very suddenly interrupted by the chorus coming in a beat early. In the chorus, I used a sample of Native American people shouting and screaming; it had a slight rhythmic quality and was melodically chaotic.

If I were to do this again I would increase the tempo to make the piece more intense, and I would plan out the layers of the piece (background, foreground, etc.) more clearly.

My piece:


Clarifying ‘Immersion’

I use the term ‘immersion’ a lot in my research and writing in this project, and I would like to clarify exactly what I mean. The dictionary definition of immersion is deep mental involvement in something; this is very broad and doesn’t suit my project. When I refer to immersion I mean diegetic immersion: maintaining audiovisual correspondence between the musical source and what is on the screen. Perhaps it is useful to define the opposite of this immersion, so I know what to avoid: the audience becoming aware, or being reminded, of the fact that they are watching a film, and that everything they see and hear is designed to influence them. Once aware of this, I believe the viewer is taken out of the experience.

My practice is focused on the relationship between score and story, musical and diegetic. I would like to investigate methods of building musical scores out of diegetic sound, and possibly maintaining diegetic congruency throughout the score. This is not to say that a traditional non-diegetic score is not immersive; it can be extremely immersive. It is even possible that the reason it can be so immersive is that it sits on a different plane to the story (Chion), giving the story space to be experienced. Perhaps the emotional influence overpowers the ‘self-outing’ (revealing the processes/construction of film) nature of the non-diegetic score. Furthermore, it could be that the ritual of sitting down in front of a screen already presupposes that the viewer has accepted the non-real nature of film.

Regardless of these possibilities, I will be investigating this avenue of composition for film, because the research I have conducted on cinematic sound designers leads me to the conclusion that a diegetic score should be more immersive than a non-diegetic one. As the audience, we generally identify with the protagonist: we see through their eyes and hear what they hear. Our experience of the story is channelled through a person (with obvious exceptions, e.g. dramatic irony, or films lacking people). It therefore makes sense to me that if we want total narrative and experiential immersion, the audience should only hear what the protagonist could conceivably hear. Anything outside of that draws attention to the medium.

This may be a futile experiment in the long run; it is possible that these techniques make no difference to the immersion of a film. I still think it is a worthwhile investigation that may lead to some interesting results, uncover some useful techniques, and give me a more in-depth understanding of the construction of film sound and scores.