The plan is to attach particular sounds to a point in the environment (e.g. a tree or plant). I'll do this through quadraphonic mixing, then export as stereo, allowing front-to-back movement of sound. To keep track of the plants' placement in the environment I will take a video. I have my method of mixing; I now need to determine the sounds I will use.
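To make the mixing idea concrete, here is a rough sketch (Python with numpy; the panning function, corner weights and fold-down gains are placeholder assumptions, not my actual session): a mono sound is placed on a quad square with equal-power panning, then folded down to stereo with the rear channels attenuated, so front-to-back movement survives as a change in level.

```python
import numpy as np

def quad_pan(mono, x, y):
    """Equal-power pan a mono signal across a quad square.
    x runs left (0) to right (1); y runs back (0) to front (1)."""
    fl = mono * np.sqrt((1 - x) * y)         # front-left
    fr = mono * np.sqrt(x * y)               # front-right
    rl = mono * np.sqrt((1 - x) * (1 - y))   # rear-left
    rr = mono * np.sqrt(x * (1 - y))         # rear-right
    return fl, fr, rl, rr

def fold_to_stereo(fl, fr, rl, rr, rear_gain=0.7):
    """Naive stereo fold-down: the rear channels are attenuated, so
    a sound moving back in the quad image drops in level."""
    left = fl + rear_gain * rl
    right = fr + rear_gain * rr
    return np.stack([left, right], axis=-1)

# one second of a 220 Hz tone placed towards the front-left
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.3 * np.sin(2 * np.pi * 220 * t)
stereo = fold_to_stereo(*quad_pan(tone, x=0.3, y=0.8))
```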
I will be trying to reflect the idea of mechanical/natural harmony. I'm going to build collections of mechanical and natural sounds separately, then marry the two.
Natural sounds:
Vocals- I think I will use Soundgrain to sustain the audio, making for a smoother, more permanent auditory environment; by this I mean I hope it will strengthen the geographical permanence of the sound.
Wind- Hopefully this will be picked up in the sound walk recording; however, if it isn't prominent enough, I'm prepared to record wind separately and mix it in.
Plants rustling- I think this will be picked up in the sound recording, and I might interact with the environment to produce these sounds (running a hand through leaves). This interaction could be interesting to explore, although I'm not sure what the meaning or purpose would be (maybe to reflect the human/nature relationship). It reminds me of a sound piece by TomuTomu on YouTube called 'Plant sounds'.
‘The plant’s “state” is represented through sound, primarily frequency. The plant responds to changes in its environment such as humidity, temperature, light, and touch. The micro-voltage fluctuations of the plant are detected by Ag/AgCl electrodes. The voltage is amplified and sent into an Arduino to be digitized. The numeric values are sent to oscillators, resulting in the soundscape.’
I'm not going to pretend I understand how this works; I have no idea. However, I will research it at a later date. It would be interesting to recreate this piece and use the audio in my project, but I don't think I have the time. I will, however, use the audio as a reference for my 'artificially created' sounds.
Mechanical Sounds:
Vocoding- I think a sharp, mid-to-high-pitched synth would reflect the mechanical side of this project. To geographically anchor this sound I will vocode parts of the environment, possibly the wind and maybe the rustle of leaves.
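To make the vocoding idea concrete, here is a minimal channel-vocoder sketch (Python with numpy/scipy; the band count, filter settings and the synthetic 'wind' and sawtooth stand-ins are my own assumptions, not my actual sounds): the synth carrier is split into bands, and each band is shaped by the level of the same band in the environmental recording.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vocode(modulator, carrier, sr, n_bands=16, lo=200.0, hi=8000.0):
    """Tiny channel vocoder: the carrier (synth) is shaped by the
    band-wise envelope of the modulator (e.g. a wind recording)."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    env_lp = butter(2, 50.0, btype='low', fs=sr, output='sos')   # envelope smoother
    out = np.zeros_like(carrier)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band = butter(4, [f1, f2], btype='band', fs=sr, output='sos')
        mod_band = sosfilt(band, modulator)
        car_band = sosfilt(band, carrier)
        env = sosfilt(env_lp, np.abs(mod_band))    # follow the modulator's level
        out += car_band * env
    return out / (np.max(np.abs(out)) + 1e-9)

# stand-in signals: pulsing noise for the wind, a bright sawtooth carrier
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
wind = np.random.randn(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t))
saw = 2 * ((440 * t) % 1.0) - 1.0
result = vocode(wind, saw, sr)
```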
Rattling guitar strings- Placing an EBow very close to a guitar string causes the string to rattle against the plastic casing of the bow; harmonically layering this sound results in a harsh yet clean and crisp sound. It also has a very metallic quality, probably due to the metal guitar strings.
Rubbing metal together- Personally I love this sound: very high pitched and smooth, evoking the feeling of movement with minimal friction. The problem with this sound is that its length is restricted by the length of the metal; however, I plan to solve this by placing the audio in a sampler and using the alternate looping setting, alternating forward and reverse playback (roughly as sketched below).
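Here is what I mean by the alternate loop, as a sketch in Python/numpy rather than the sampler itself (the decaying 'metallic' test tone and the pass count are placeholders for the real recording): the short sound is simply played forwards and backwards repeatedly to extend it.

```python
import numpy as np

def alternate_loop(sample, n_passes=8):
    """Extend a short recording by alternating forward and reversed
    playback, the same idea as a sampler's 'alternate' loop mode."""
    passes = [sample if i % 2 == 0 else sample[::-1] for i in range(n_passes)]
    return np.concatenate(passes)

# a short decaying "metallic" tone stands in for the real recording
sr = 44100
t = np.linspace(0, 0.5, int(0.5 * sr), endpoint=False)
metal = 0.4 * np.sin(2 * np.pi * 3100 * t) * np.exp(-3 * t)
extended = alternate_loop(metal, n_passes=12)   # roughly 6 s from a 0.5 s source
```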
As a group we brainstormed ideas for our collaborative project. The future seemed to be the prevailing theme, specifically using the future to represent or reflect the past or present. We settled on a title taken from a quote in the bar scene of the film 'The Shining', as the past and present seem to melt together in this scene. The quote is 'I've been away, but now I'm back'. To me this title evokes nostalgia, change or development of self, a strong sense of environment, and a sense of harmony or balance.
My Track:
I think I would like to create a soundscape, possibly built from a sound walk. My intention is that using a recording of a sound walk will ground the listener in a familiar reality; everyone knows the sound of walking through nature. By using this familiar base, I can hopefully build a subtle soundscape around it, invoking the themes I named above (nostalgia, change, etc.). I also want to use an auditory palette that reflects my idea of the future: clean, crisp, a harmony between the mechanical and the organic. I might also experiment with sampling songs, specifically songs that hold a strong place in my memory; a particular album places me at a specific moment in my past every time I hear it. This might be a fun avenue to explore, but I'm not quite sure how I can utilise it yet.
If I were to do this exercise again I would plan out specific parameters. I automatically set left to right as time, but halfway through making the notation I thought of making up and down represent the stereo field, and so some notes don't sit accurately within the field. I would like to go back and label the parameters to make the notation easier to understand. My favourite part of this notation method was the blending of elements: I could quite easily express the link between the low synthesiser and the bass drum by weaving the notation together. The circular symbol on the left is an attempt at notating percussion; I feel this would be a useful tool once I become familiar with the method.
Soundgrain is a piece of software with which you can automate granular sound synthesis. I'm sure there are many ways to use it, but the first thing I wanted to do was explore the human voice. I imported an audio file of me singing a sustained falsetto note and started experimenting. The first thing I noticed was the fluidity of the sound: it would glide smoothly between high and low frequencies. I think this is the strongest aspect of the software. Typically, as a producer or sound artist, I find myself using a keyboard as the controller for my synthesis, which limits you to specific notes (with minor deviations using a mod wheel). Whilst using this software I found myself relying solely on my ear to find the right frequencies and to find harmony between them, which was a new and welcome method of sound creation.
Another benefit of the software is the ability to select very small parts of the audio file and loop them indefinitely, creating a constant drone with the timbre of a human voice. I see this as a very natural, familiar sound being used in a way that doesn't make sense in its original context, whilst retaining its organic properties: an impossible organic sound. With this technique you could create electroacoustic compositions that retain an intrinsic human, organic quality.
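As a rough illustration of the micro-loop drone idea (Python/numpy, with a synthetic tone standing in for my vocal recording; the grain length, fade time and start point are arbitrary assumptions, not Soundgrain's own method): a tiny slice of the file is faded at both ends and tiled into a constant tone.

```python
import numpy as np

def micro_loop_drone(voice, sr, start_s, length_ms=40, seconds=10, fade_ms=5):
    """Repeat a tiny slice of a vocal recording with short fades at each
    end, turning a sung note into a steady drone without clicks."""
    n = int(sr * length_ms / 1000)
    start = int(sr * start_s)
    grain = voice[start:start + n].copy()
    fade = int(sr * fade_ms / 1000)
    ramp = np.linspace(0, 1, fade)
    grain[:fade] *= ramp          # fade in
    grain[-fade:] *= ramp[::-1]   # fade out
    reps = int(seconds * sr / n) + 1
    return np.tile(grain, reps)[:seconds * sr]

# a synthetic falsetto-ish tone stands in for the real recording
sr = 44100
t = np.linspace(0, 3, 3 * sr, endpoint=False)
voice = 0.3 * np.sin(2 * np.pi * 523 * t) * (1 + 0.05 * np.sin(2 * np.pi * 5 * t))
drone = micro_loop_drone(voice, sr, start_s=1.0)
```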
The result of the experimentation:
Percussion is made from breaths, hitting my fist on my chest, a bass drum, a snare, a very short snippet of someone speaking, and an offset tremolo applied to a recording of wind.
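For the wind tremolo, this is my reading of the effect as a sketch (Python/numpy; the LFO rate, depth and phase offset are assumptions, and noise stands in for the wind recording): the signal is amplitude-modulated by a slow sine that starts part-way through its cycle rather than at full volume.

```python
import numpy as np

def tremolo(signal, sr, rate_hz=6.0, depth=0.8, phase_offset=np.pi / 2):
    """Amplitude-modulate a signal with a slow sine LFO; the phase offset
    shifts where in the cycle the modulation begins."""
    t = np.arange(signal.size) / sr
    lfo = 1 - depth * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t + phase_offset))
    return signal * lfo

sr = 44100
wind = 0.1 * np.random.randn(2 * sr)     # stand-in for the wind recording
pulsed = tremolo(wind, sr)
```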
The chords are played by a Prophet V soft synth and two EBow guitar tracks. The low synth is an audio track of the Soundgrain synthesis with a side-chained noise gate applied.
My main role in this project has been to organise workloads and help guide collaboration; I'm also combining people's work and mixing it. So far I have found that mixing the pieces has not been too hard, as everyone has created unique pieces and they each serve their own purpose. They seem to work well together: Szymon's piece creates an eerie stirring and builds tension, Daniel's piece accelerates the tension and chaos into the crescendo, and Sephora's piece creates a calm, otherworldly sound-space that somehow stays within the style of Szymon's piece, giving the project a full-circle feeling. Toby has created sound effects that give the piece texture, subtle sounds that ease us into the madness. Raul and Jack wrote the script; Raul also recorded the dialogue and sourced a voice actor.
My contribution to the piece is the crescendo and the hard cut. Appearance and audience being a strong theme, I experimented with shaping, moulding and manipulating a crowd of voices. I started by bringing in the sound of a crowd very low in the mix at the point where the protagonist says 'our society is a spectacle', then gradually increased its volume. I used a bit crusher on the crowd and gradually decreased the resolution, distorting the audio. I also placed a noise gate on the crowd and side-chained it with the dialogue as the input: I started with the threshold very low so the crowd sounded normal, then gradually increased the threshold leading up to the crescendo. This had the effect of very gradually shaping the crowd's voices to the singular protagonist's voice, so the crowd noise increases in intensity and concentrates into the protagonist.
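A simplified sketch of the side-chained gate with the rising threshold (Python with numpy/scipy; the envelope follower, ramp values, knee width and the stand-in crowd/dialogue signals are all assumptions, not the actual plug-in settings I used): the dialogue's level is used as the key, and the threshold ramps up so that by the end the crowd only comes through when the protagonist speaks.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def sidechain_gate(crowd, dialogue, sr, thresh_start=-0.05, thresh_end=0.25, knee=0.05):
    """Gate the crowd using the dialogue's level as the key signal.
    The threshold ramps up across the clip: at the start the gate is
    always open, by the end the crowd passes only when the dialogue is loud."""
    env_lp = butter(2, 20.0, btype='low', fs=sr, output='sos')
    key = sosfilt(env_lp, np.abs(dialogue))               # dialogue envelope
    threshold = np.linspace(thresh_start, thresh_end, crowd.size)
    gate = np.clip((key - threshold) / knee, 0.0, 1.0)    # soft-knee open/close
    return crowd * gate

# stand-in signals: noise for the crowd, a pulsing tone for the dialogue
sr = 44100
t = np.linspace(0, 5, 5 * sr, endpoint=False)
crowd = 0.1 * np.random.randn(t.size)
dialogue = 0.5 * np.sin(2 * np.pi * 200 * t) * (np.sin(2 * np.pi * 1.5 * t) > 0)
shaped = sidechain_gate(crowd, dialogue, sr)
```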
We have decided to express our idea through an interview, as these are commonplace in radio and the format gives us room for creativity and manipulation.
Now that we had our idea we could distribute the work, which was divided into dialogue editing, SFX, music and background, and broadcasting. We also had two people write a script and create a timeline of events, which allowed everyone else to begin working on sections.
We will express our piece through an interview with an Instagram influencer, as this seems like the epitome of spectacle. The interview will then turn sour at one point, the soundscape will begin to turn uncomfortable and jarring, and the interviewee will begin an impassioned monologue similar to Howard Beale's in Network (1976). When we reach a crescendo we will hard cut to a relaxing auditory space, and an extract from 'The Society of the Spectacle' will be read out.
I've found that the concept for the piece has been pretty much unanimously accepted, which I find surprising. However, I think it's been hard to coordinate collaboration as everyone is at home, and the piece is forming as everyone makes their parts. As a result the piece has been an amalgamation of people's works, with different tempos and keys. This does have some positives, such as giving a sense of chaos (a theme we are trying to convey), but I think it would've been nice to have people collaborate on sections knowing how they would fit together; perhaps this would be easier in person and with more time.
We started by suggesting themes and feelings we would like our piece to express. The most prevalent themes were uncertainty, society, inauthenticity and social media. Raul suggested using Guy Debord's book 'The Society of the Spectacle'; this fit perfectly with our themes, so we moved forward with it as our subject. 'The Society of the Spectacle' was written in 1967 as a criticism of the encroaching rise of consumerism in Paris. Debord believed that because capitalist production resulted in an abundance of things, the human need for survival had been met and was subsequently replaced by a need for goods. Humans have developed a 'need for more', even when our needs are met. Debord believed this need for more came from appearances: he believed that 'All having must now derive its immediate prestige and its ultimate purpose from appearances.' Image and appearances have become important to individuals above all else, and we are sold products with the promise that they will improve how we are perceived by other people.
My experience of the second lockdown has been primarily uncomfortable; to be specific, my exposure to things that make me feel comfortable (being with loved ones) has been restricted and replaced by artificial versions, i.e. text messages, phone calls, video calls. An important aspect of this experience is that the effect it has had on my mind, however impactful, has been very subtle. Although these circumstances have impacted me greatly, I didn't notice the impact until very late into the experience. Personally, I have never experienced something like this; it's a strange feeling. It reminds me of the boiling frog effect, where a frog will not notice a gradual increase in the temperature of the water it is immersed in, even if it is being boiled alive. A subtle transition from comfort and mental well-being to an uncomfortable, cold and artificial experience: this is the experience or feeling I will try to invoke using sound.
To do this I have to first determine what makes a sound comfortable, and what makes it uncomfortable.
Warm and cold sound- To me, warm sounds are characterised by low and mid frequencies; they have a full sound, perhaps because they resonate with your body. Cold sounds are characterised by higher frequencies: thin sounds that cut through a soundscape. This is not to say that higher frequencies can't be used to invoke a comfortable feeling; this is just how I see it generally.
Harmony- I feel comfort is greatly affected by harmony. Harmony is an incredibly deep area of research, but for my project I think I only need to talk about a small part of it. Emotion and tension can be expressed through the use of cadence: our brains like patterns, and within a musical phrase there is more often than not a resolution, where the chords return 'home' to the root. I think the most comfortable cadence is a perfect authentic cadence, going from the fifth to the first chord, where both chords are in root position and the first chord ending the phrase has the tonic as its highest note. I will try to explore this concept to invoke comfort, but also tension by straying from this cadence.
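As a small worked example (assuming C major, with MIDI note numbers and a throwaway sine renderer in Python/numpy; the voicing is my own choice, not part of the piece), a perfect authentic cadence could look like this, with both chords in root position and the tonic on top of the final chord:

```python
import numpy as np

# V -> I in C major; both chords in root position, soprano ends on the tonic.
V = [55, 62, 67, 71]   # G3 D4 G4 B4  (G major, the dominant)
I = [48, 64, 67, 72]   # C3 E4 G4 C5  (C major, tonic as the highest note)

def chord(notes, sr=44100, secs=1.0):
    """Render a chord as a sum of sine waves, just to hear the cadence."""
    t = np.linspace(0, secs, int(sr * secs), endpoint=False)
    freqs = [440 * 2 ** ((n - 69) / 12) for n in notes]
    return sum(0.2 * np.sin(2 * np.pi * f * t) for f in freqs)

cadence = np.concatenate([chord(V), chord(I)])
```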
Resolution- I think a good way to invoke discomfort is using bit depth. People like what is real and tend to dislike poor representations, an example being the uncanny valley. I feel reducing the resolution of a sound will result in an uneasy feeling; I got this idea from Cristobal Tapia de Veer's work on Utopia. This will also reinforce the presence of technology in the piece, and could remind people of connection failures in phone calls.
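A sketch of what I mean by reducing resolution (Python/numpy; the bit-depth ramp and the test tone are placeholders): the amplitude grid is coarsened gradually, so the sound slides from clean to audibly broken.

```python
import numpy as np

def degrade(signal, start_bits=16, end_bits=4):
    """Gradually coarsen the amplitude grid across the clip, moving the
    sound from a clean signal to a crude, stepped version of itself."""
    bits = np.linspace(start_bits, end_bits, signal.size)
    levels = 2.0 ** bits
    return np.round(signal * levels) / levels

sr = 44100
t = np.linspace(0, 4, 4 * sr, endpoint=False)
tone = 0.4 * np.sin(2 * np.pi * 330 * t)     # stand-in for any source
crushed = degrade(tone)
```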
Stereo field- I believe I could use the stereo field as a representation of exposure. Mixing across the full stereo field will make the audience feel surrounded and immersed, while restricting the sounds to one area will make the audience feel separated from them. To try and recreate my experience, I would like to play with restricting the sounds so that they seem to come only from a phone.
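As a sketch of the 'phone' idea (Python with numpy/scipy; the 300-3400 Hz band, the pan position and the stand-in voice signal are assumptions): the sound is band-limited to roughly telephone bandwidth and parked at one narrow point in the stereo field.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def phone_like(signal, sr, pan=0.8):
    """Band-limit to roughly telephone bandwidth and place the result at
    one narrow point in the stereo field (pan 0 = left, 1 = right)."""
    sos = butter(4, [300, 3400], btype='band', fs=sr, output='sos')
    narrow = sosfilt(sos, signal)
    left = narrow * np.sqrt(1 - pan)      # equal-power pan, mostly right
    right = narrow * np.sqrt(pan)
    return np.stack([left, right], axis=-1)

sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
voice = 0.3 * np.sin(2 * np.pi * 220 * t) * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))
phoney = phone_like(voice, sr)
```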
Sounds, instruments and techniques-
Synthesisers- I think the important aspects would be a low cut-off, smooth waveforms and the use of portamento, for a warm, comfortable feeling (see the sketch after this list).
Human voices- This is an incredibly familiar sound for people, and I believe a very comfortable one, due to us being social creatures. I have some recordings of people talking about what makes them comfortable that I can use.
EBow- I have an electronic bow that I have been experimenting with for a few years. When placed near a guitar string it creates a very smooth, rich note, and when these notes are layered in harmony the result is like the auditory equivalent of being submerged in warm water.
Human voice- I believe the human voice is a very comforting and familiar sound. I think I can use it as a malleable tool, first to induce comfort, then distorted to express a feeling of unease. Hopefully this will reflect the feeling of comfort being fragmented.
Field recordings- I recorded a couple of really useful sounds from our sound walk on the Thames. The first is a recording of me agitating a love lock attached to a railing; I used a contact microphone to emphasise the tactility of the sound. I was particularly pleased with the quality of frustration the recording had: it sounded as though something was restricted, unable to move more than a couple of inches. The second recording also used a contact microphone, this time attached to a suspension wire on a bridge while I rubbed a key along the ridged wire. This had a very abrasive sound, and a feeling of rising tension as the key got closer to the microphone.
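Picking up the synthesiser point above, here is a minimal sketch of the kind of line I mean (Python/numpy; the glide time, note lengths and the 'cutoff-like' harmonic weighting are placeholder values, not a real synth patch): a smooth waveform whose pitch slides between notes instead of stepping.

```python
import numpy as np

def glide_synth(midi_notes, sr=44100, note_secs=1.0, glide_secs=0.15, cutoff_like=0.3):
    """Monophonic tone that slides between pitches (portamento). A soft
    second harmonic, scaled by 'cutoff_like', stands in for a low filter cutoff."""
    freq = np.array([], dtype=float)
    prev = 440 * 2 ** ((midi_notes[0] - 69) / 12)
    for n in midi_notes:
        target = 440 * 2 ** ((n - 69) / 12)
        glide = np.linspace(prev, target, int(sr * glide_secs))   # slide to the new pitch
        hold = np.full(int(sr * (note_secs - glide_secs)), target)
        freq = np.concatenate([freq, glide, hold])
        prev = target
    phase = 2 * np.pi * np.cumsum(freq) / sr        # integrate frequency into phase
    return 0.3 * (np.sin(phase) + cutoff_like * np.sin(2 * phase))

pad = glide_synth([48, 55, 52, 48])   # C3 -> G3 -> E3 -> C3
```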
Reflecting on the work:
In conclusion, I'm pleased with the outcome of the piece overall. I think it successfully expresses a transition from comfort to a distorted, inadequate simulation of reality. Unfortunately December proved to be a very challenging time for me, and I became very unproductive as a result. I wish I had taken more time to make the piece longer; I feel it would've been beneficial to linger in the comfortable section for longer to make the transition more impactful. I also wish I had used ambient room sound: I think it would've been useful to create a specific soundscape to give a sense of space, so that I could then manipulate that space into something else.
This BBC radio segment was inspired by an Indian folktale about perspective. The story concerns six old men who were all born blind; they are very curious about the world and get their information from passing travellers. They have a particular curiosity about elephants: they are told wondrous stories about how elephants can trample forests, carry huge burdens and frighten people with their trumpet calls. In contrast to this image, they also know that the Rajah's daughter rode an elephant when she travelled across her father's land. They argue day and night about whether an elephant is a giant dangerous creature capable of killing men, or a graceful gentle giant. The men arrange to go and touch the elephant at the Rajah's palace. When they get there, they each touch a different part of the elephant and declare the animal to be a different thing. One, touching the trunk, declares it to be a giant snake-like creature; another touches the side of the elephant and declares it to be smooth and solid like a wall. They have a noisy argument which awakens the Rajah from a nap, and he asks, "How can each of you be so certain you are right?" He then says, "The elephant is a very large animal; each man touched only one part. Perhaps if you put the parts together, you will see the truth."
The BBC recreated this story: a group of blind people describe what they believe an elephant to be, with their descriptions based on very little evidence. They are then given the opportunity to touch an elephant and to talk about their misconceptions.
I think this piece is really interesting as it highlights elements of the blind experience that sighted people would not normally know about; it is also a very effective lesson in perspective. Listening to the blind people feeling each part of the elephant expresses the fragmented nature of their experience: having to perceive the animal in small parts and attempt to stitch it together in their minds for a complete 'picture'. To equate the piece to radio itself, the blind people are listeners of radio, and sighted people are viewers of visual media, i.e. films. There is a certain creative licence afforded to listeners: without a definitive visual to cement the subject in context, the mind can create its own visual accompaniment and the subject is not so rigidly defined.
Kate Hopkins is an accomplished sound editor based in Bristol who has worked across genres of TV and film. As Bristol is home to the BBC's Natural History Unit, she has specialised in sound for natural history films. Hopkins has been awarded two Emmys for outstanding editing for non-fiction programming, for her work on Planet Earth 'Pole to Pole' (2007) and Frozen Planet 'Ends of the Earth' (2012), and she has also been awarded three BAFTAs. Kate has a long list of projects she has worked on, including Disney's 'Monkey Kingdom' (2015), National Geographic's 'Great Migrations' (2010) and Apple's 'The Elephant Queen' (2018). More recently she has worked on David Attenborough's 'A Life on Our Planet' (2020) and Apple's 'Earth at Night in Colour' (2020).
Kate began her career as a receptionist for a small post-production company in Bristol. Her boss, Nigel, she says, was a picture editor, and while fulfilling her receptionist duties (making phone calls, answering emails and 'making endless cups of tea') she would also assist him in the cutting room, organising trims from 16mm and 35mm film. Hopkins believes that the small size of the company allowed her to experience many different fields, working alongside Aardman Animations and syncing up picture and sound for dramas and natural history films. After three and a half years at the company, Kate received her union card, allowing her to transition to freelance work.
At this time, there were a lot of 35mm dramas being made in Bristol for large American companies such as Universal. Kate moved from Nigel's practice to Universal and started working on these dramas as a freelance assistant editor, eventually moving into the role of assistant sound editor. Hopkins credits this moment as the birth of her passion for sound design; in her own words, she 'realised what impact a sound edit could have, if you put different atmospheres, if you put different effects in, they move on a story and add drama.' Kate names some of her influences as the Coen brothers' films, The Godfather, Apocalypse Now and Raging Bull. She says 'these are all films which had very distinctive sound tracks'. 'Raging Bull', she adds, 'was particularly good because all of the punches used a combination of real punches, but also had lion roars and stuff put in. So it sort of illustrated how much you could do with sound design with it still feeling real. It was just adding power and drama.'
Kate began to learn about signature sounds used in dramas; for example, if you wanted to portray poverty you would use distant dog barks and the sound of a baby crying, with sudden silence being an excellent way to create drama. Eventually Kate had to transition from film to digital as a freelancer, which meant no training; she remembers the process as very much 'learning on the job'. At this time some of Kate's former colleagues were setting up a production company named 'Wounded Buffalo', and she began collaborating with them on natural history films such as 'Natural World'. Continuing with the theme of learning on the job, Kate recalls accepting a job in Idaho (US) on a 90-minute documentary about wool; the studio used Pro Tools, which at the time she had never been exposed to. Hopkins read the manual on the plane journey over, and thankfully produced the 90-minute soundtrack.
Hopkins then began work on 'Blue Planet', which she says was a fantastic job, as the show being underwater allowed for many creative opportunities in the sound design. Hopkins recalls enjoying designing sound to emphasise the movement of organisms, and also the feeling of depth created by changing ambience. She believes this work pushed sound editing further forward in the production process: sound became more important and her work became more of a collaboration between sound design and music. From 'Blue Planet', Kate went on to work on 'Frozen Planet', 'Blue Planet II', 'Life' and the 'Dynasties' series. She was then asked to work on the Disneynature series, mixing feature films for large theatres with her colleague Tim Owens. During this time Dolby Atmos started to arise, which Kate believes helps sound designers achieve their goal of placing the audience in the environment on screen. One thing Kate stresses the importance of in her work is using accurate sound: she says you can layer up as many sounds as you want, but it will result in a muddy mix if the right sounds aren't used.
Hopkins describes working on horror films as 'an absolute joy' due to the creative freedom: you can use whatever sounds you like as long as the result is scary. This contrasts heavily with the practices of natural history sound, where she is required to meticulously find specific animal calls and sounds. Kate fondly recalls working on a low-budget horror film called 'Hardware' (1990), but states something that I think might stick with me: 'there's nothing like a low budget film to put pressure on a sound editor.' This really highlights the impact and importance of sound design; in the absence of high-budget visuals it's up to the sound to immerse the viewer.
Hopkins describes how she starts the process of working on a film or TV programme, specifically in natural history. She stresses the importance of recording wild sound on site, and says this can often get overlooked as directors can be focused purely on visuals. Kate remarks on a moment in her career when a project was allocated a small budget for sound editors to go out on location, which she describes as a rarity. She talks about being driven around the Maasai Mara in a jeep, recording lions very close up and wildebeest hooves, among other things.
Kate describes the collaboration process between sound editors and composers: finding the balance between effects and music. This involves agreeing which scenes are going to be music-driven and which are effects-driven, to avoid overlap and unnecessary work. They also have to work around each other so as not to overcrowd a mix; for example, if there are a lot of bass-heavy effects, the composers have to avoid bass in the music.
It was at this point in the lecture that Kate showed a Pro Tools session for a clip about meerkats from the 'Dynasties' series. Watching this I was struck by the number of audio files and the way they were organised. I find the idea of sculpting a believable soundscape very interesting, specifically the subtlety of the mix: doing just enough to immerse the viewer, but little enough so as not to distract from the scene.