SPECIALISATION PROJECT

Mixing

As the deadline for the hand-in of this project is approaching, I have decided to focus on mixing from now on. I am missing certain elements in the sound design, namely the dragging of the coffin, the donkey footsteps and the sound of the character rubbing the bloody fur of the donkey as it dies.

Firstly, I created a bus channel for atmosphere reverb and applied a convolution reverb to it with a ‘soccer field’ preset, which I felt was the most appropriate for the mountainside. I then adjusted the levels to make it sound realistic. I also made the output of this channel 5.1 and placed the sounds slightly towards the rear, to give the atmosphere a sense of space.
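To make the routing concrete, here is a rough Python sketch of what a convolution-reverb send/return is doing under the hood. This isn't taken from my session: the file names, send/return levels and the scipy-based approach are placeholder assumptions, and the 5.1 rear placement is a separate panning step that isn't shown.

    # Rough sketch of a convolution reverb send/return bus (placeholder file names).
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    dry, sr = sf.read("atmos_mountainside.wav")   # source track (placeholder)
    ir, _ = sf.read("ir_open_field.wav")          # impulse response of a large open space
    if dry.ndim > 1:                              # fold to mono to keep the example simple
        dry = dry.mean(axis=1)
    if ir.ndim > 1:
        ir = ir.mean(axis=1)

    send_level = 0.5     # how much of the track feeds the reverb bus
    return_level = 0.7   # level of the wet return in the mix

    wet = fftconvolve(dry * send_level, ir)[:len(dry)]   # the 'reverb bus'
    mix = dry + return_level * wet                       # dry track plus reverb return
    mix /= np.max(np.abs(mix)) + 1e-9                    # crude peak normalisation
    sf.write("atmos_with_reverb.wav", mix, sr)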

The first time I mixed the foley, I mixed it far too loud. I think I was just trying to make sure it was in sync with the picture, but the result was that the footsteps sounded as though they were right next to the camera. I fixed the levels and identified the different locations so I could make bus channels with the correct reverb for each, then sent the foley tracks through the appropriate channels.

I had fun mixing 01:01:51:16 to 01:02:16:05 as it allowed the most room for creativity. The scene follows our main character directly after his donkey, which he had been using to transport the coffin, dies, leaving him to carry it himself. The imagery of him dragging the coffin behind him, and particularly carrying it on his back, is reminiscent of the myth of Sisyphus: an endless, tremendously difficult task that has no value to anyone other than him. The character remains focused against barrages from the outside world as he slowly slips into exhaustion. I wanted to reflect this in the sound; in particular I wanted to highlight the breath as a means to cope, similar to how people focus on the breath in meditation.

I did this by sending all the environmental tracks (the footsteps, the coffin creaking, the wind and the atmos) to an aux channel, where I could manipulate all of them as one. I used reverb as a means to express exhaustion, giving the effect of hallucination and dissociation from reality. I automated this to start as a realistic reverb for the space and gradually increased the wet output towards a more abstract effect. The second and most important thing I did was apply a compressor to the aux track and side-chain it with the breath track as the key input. I also automated the mix of the compression from 0 to 100%, which gave the effect of the environment gradually becoming more blurred, with this blurred soundscape dipping in volume when the breath comes in, mirroring the idea of focus.

I think this worked well, but I did have a couple of problems. The environmental sound was too loud and too compressed at the start (where it's supposed to be realistic); I fixed this through volume automation, and the automation of the compressor's mix helped. The main problem I encountered was that the breath was quite a weak key input, so the finished effect didn't have the contrast I was hoping for. If I were to try this again I would research side-chaining to see whether there is a solution to this, or I would possibly just automate the volume myself.
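For reference, the side-chain idea can be sketched in Python roughly as below. This is only a toy illustration of the principle (an envelope follower on the breath keys gain reduction on the environment bus), not the compressor I actually used; the file names, time constants and reduction depth are invented.

    # Toy side-chain ducking: the breath track keys gain reduction on the environment bus.
    import numpy as np
    import soundfile as sf

    env_bus, sr = sf.read("environment_bus.wav")   # summed footsteps/coffin/wind/atmos (placeholder)
    breath, _ = sf.read("breath.wav")              # key input (placeholder)
    if env_bus.ndim > 1:
        env_bus = env_bus.mean(axis=1)
    if breath.ndim > 1:
        breath = breath.mean(axis=1)
    n = min(len(env_bus), len(breath))
    env_bus, breath = env_bus[:n], breath[:n]

    # Envelope follower on the key signal: one-pole smoothing of the rectified breath.
    attack, release = 0.005, 0.25                  # seconds
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))
    env = np.zeros(n)
    level = 0.0
    for i, x in enumerate(np.abs(breath)):
        coeff = a_att if x > level else a_rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level

    # Louder breath -> more gain reduction, scaled by a crude stand-in for the 0-100% mix automation.
    depth_db = 12.0
    mix_amount = np.linspace(0.0, 1.0, n)
    gain_db = -depth_db * (env / (env.max() + 1e-9)) * mix_amount
    sf.write("environment_ducked.wav", env_bus * 10.0 ** (gain_db / 20.0), sr)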

I think that overall this project has been a successful investigation into sound design; I have worked with many elements of sound for film, worked through problems, and found methods to achieve synchresis, experimenting with reverb to place sounds in a space. If I were to redo the process I would prepare the session slightly differently: I would section the session into locations as well as different elements, and I would create separate bus channels for the reverb at the start of the process.

SPATIALISATION

Today I attempted mixing in 5.1 in the performance lab. I did this as an exercise to grasp the process of moving sounds around a 5.1 formation, rather than as an actual mixing session. I found the process very similar to mixing in quadraphonic, although I find quadraphony more balanced in terms of focus: quadraphonic mixing feels like working on an equal plane, with no focus on direction. I found 5.1 to have a clear direction; it felt like working with a frontal plane and a lesser plane behind you. The three monitors at the front seem the most prominent, which I think is because of the visual context attached to the format. I found it very easy to move the sounds around the room and automate movement with the latch function, which was mostly useful in the slow-motion scenes where I could be more creative with the sound design. I would also like to use the spatialisation in a realistic manner, to create a deep and textured soundscape.
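To get my head around the geometry, it helps me to think of a 5.1 panner as, roughly, pairwise constant-power panning between the two loudspeakers nearest the source direction. The Python sketch below is my own simplification rather than how any particular console or panner actually works, and it ignores the centre-percentage, divergence and LFE controls.

    # Sketch: gains for a mono source at a given azimuth in a 5-speaker layout (LFE ignored),
    # using pairwise constant-power panning between the two nearest speakers.
    import math

    SPEAKERS = {"L": -30.0, "C": 0.0, "R": 30.0, "Ls": -110.0, "Rs": 110.0}  # typical ITU angles

    def pan_5_0(azimuth_deg):
        """Return a gain per speaker for a source at azimuth_deg (degrees, -180 to 180)."""
        names = sorted(SPEAKERS, key=lambda k: SPEAKERS[k])
        angles = [SPEAKERS[k] for k in names]
        gains = {k: 0.0 for k in names}
        for i in range(len(names)):
            a0 = angles[i]
            a1 = angles[(i + 1) % len(names)]
            span = (a1 - a0) % 360.0                 # arc between this adjacent speaker pair
            offset = (azimuth_deg - a0) % 360.0
            if offset <= span:                       # source sits between these two speakers
                frac = offset / span
                gains[names[i]] = math.cos(frac * math.pi / 2)              # constant-power law
                gains[names[(i + 1) % len(names)]] = math.sin(frac * math.pi / 2)
                break
        return gains

    print(pan_5_0(0.0))     # virtually all energy in C
    print(pan_5_0(-70.0))   # shared roughly 0.71 / 0.71 between L and Ls

Automating movement is then just this calculation repeated as the azimuth changes over time.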

This was a useful exercise as an introduction to the format. I now know what I need to do in preparation for my mixing sessions in the composition studio. I need to finish gathering and arranging atmospheres, then create the FX I need and mix them in mono. Once all of the tracks I need are mixed and finished, I will go into the composition studio and mix them in 5.1. I will also research good practice for mixing in 5.1: perhaps how to structure the project, sends and busses, and some standard practices and priorities for 5.1 mixing.

SPECIALISATION PROJECT

Identification of Foley Sounds

1

  • 2 laboured footsteps on gravel
  • 1 normal footsteps on gravel
  • Donkey footsteps on gravel
  • 3 rustles of clothing
  • Coffin creaking

2

  • Roar of fire starting
  • Footsteps of man on left on gravel
  • Items on donkey shifting

3

  • 2 rustling clothing
  • Canteen
  • Sipping
  • Campfire

4

  • Rustle of clothes
  • Sipping
  • Campfire

5

  • Rustle of clothes

6

  • Campfire

7

  • Moving rope on wood
  • Rustle of clothes
  • Footsteps

8

  • Donkey movements
  • Footsteps on gravel
  • Chains and items shuffling on donkey
  • Clothes rustling

9

  • Campfire
  • Clothes rustling

10

  • Donkey footsteps on gravel
  • Man’s footsteps on gravel
  • Items shuffling on donkey
  • Coffin creaking

11

  • All of the above (10)
  • Clothes shuffling

12

  • All of the above (10)
  • Footsteps through tall grass

13

  • Footsteps on stone
  • Clothes rustling
  • Birds (quiet)

14

  • All of above (13)
  • Birds (louder)

15

  • Footsteps on gravel
  • Donkey footsteps on gravel
  • Clothes rustling
  • Items on donkey moving

16

  • All of the above (15)
  • More prominent clothes rustling
  • Coffin creaking

17

  • Donkey chewing
  • Clothes rustling
  • Coffin creaking

18

  • Footsteps on gravel
  • Donkey footsteps on gravel
  • Coffin creaking
  • Clothes rustling
  • Glass candle holder
  • Rope
  • Bag

19

  • All of the above (18) but quieter

20

  • All of the above (18)
  • Very prominent coffin creaks

21

  • See 18, but quieter

22

  • Campfire
  • Tree cracking

23

  • See 22
  • Clothes rustling?

24

  • Dogs’ footsteps on gravel (running)

25

  • Running footsteps on gravel
  • Rustling clothes (aggressive)
  • Swinging stick?

26

  • Clothes rustling
  • Throwing stick against gravel/coffin?

27

  • Items on donkey shifting gently
  • Rubbing blood on fur
  • Dying donkey sounds

28

  • Items/clothes rustling

29

  • Very slight rustle of clothing

30

  • More prominent rustling of clothing

SPECIALISATION PROJECT

Artist Influences and Musical Response

I will be recording and arranging sound design for a short film called ‘3WW’. The film is a music video for the band Alt-J; I will be removing the music and working with the visuals only. I believe that this gives me an opportunity to take inspiration and influence from both the visual director’s intentions and the musical/lyrical content.

‘There was a wayward lad
Stepped out one morning
The ground to be his bed
The sky his awning
Neon, neon, neon
A blue neon lamp in a midnight country field

Can’t surround so you lean on, lean on
So much your heart’s become fond of this

Oh, these three worn words
Oh, let me whisper like the rubbing hands
Of tourists in Verona
I just want to love you in my own language

Well, that smell of sex
Good like burning wood
The wayward lad laid clean
To two busty girls from Hornsea
Who left a note in black ink

Girls from above say “Hi” (hi)
The road erodes at five feet per year
Around England’s east coastline
Was this your first time?
Love is just a button we press
Last night by the campfire

Oh, these three worn words
Oh, that we whisper like the rubbing hands
Of tourists in Verona
I just want to love you in my own language’

– Lyrics of ‘3WW’ by Alt-J

Alt-J approached Alex Takacs (also known as Young Replicant) to direct a short film for their song ‘3WW’, a song about love and loss; they also requested that Takacs take influence from a Ted Hughes book of poems, ‘Birthday Letters’. At first Takacs thought that the poems and the lyrics had little in common content-wise, but after spending some time with both he recognised a shared sense of ‘dark sensuality and morbidity’.

Ted Hughes was a very celebrated English poet; in 1956 he married another celebrated poet, Sylvia Plath. They had an infamously intense relationship until she took her own life in 1963, when she was just 30 years old. In the years following this event, Plath became an object of investigation for poetry enthusiasts and a very important figure for feminist literary theorists. Because of this, Hughes often became a target of criticism in light of the events surrounding Plath’s suicide, the two having been separated at the time while she was still devoted to him. For Hughes, this criticism was a strange experience, strangers prodding into an intense love; he equated it to a pack of dogs ravaging her grave and digging up her body, ‘Pulling her remains, with their lips lifted like dog’s lips into new positions.’

This poem is about a small pocket of something beautiful being attacked from all sides, and the person inside it trying to protect it, struggling against this barrage but ultimately realising it to be a Sisyphean task.

In response, Takacs created a visual story of a tragic romance between two characters, Ramon and Julina (a reference to Shakespeare). The film follows the funeral procession for Julina after her untimely death; the coffin is then carried by Ramon alone through the mountains, where he builds campfires, witnesses lightning strike a tree very close to him, and sees the coffin attacked by a pack of wild dogs.

3WW- Alt-J

Although the brunt of this project will be foley work and atmos, I want to create some melodic elements in the sound design to respond to the poem and reflect the emotions that the poem and film evoke; I believe these to be isolation, determination, remorse and nostalgia. As a reference I’m looking at ‘Romantic Works’ by Keaton Henson, specifically ‘Petrichor’, as a palette. I feel that this will be a small part of the project; I would like to keep the music subtle, as a supplement to the sound design. By providing a realistic audio environment I would like the listener to be immersed in the otherworldly scenes, and I’m hoping that this immersion will make it easier to sneak in emotional musicality without it becoming the focus of the piece.

https://thehundreds.uk/blogs/content/meet-alex-takacs-aka-young-replicant-director-behind-favorite-flying-lotus-lorde-music-videos?country=GB

WITTGENSTEIN

Language Games

Wittgenstein was an Austrian philosopher who had revolutionary ideas about human communication. In ‘Tractatus Logico-Philosophicus’ Wittgenstein investigates how humans communicate ideas to one another, suggesting that we use language to trigger images in each other’s minds. This idea was sparked by a Paris court case in which the judge ordered a visual recreation of the events, in this case a car crash, to gain a better understanding of the situation. Wittgenstein asserts that we use words to make pictures of facts; in conversation we are exchanging pictures of scenes. However, most of us find it difficult to conjure a picture in someone else’s mind that is accurate to our own, and this breeds miscommunication. Another danger is that we read too much into other people’s explanations, conjuring an inaccurate image in our minds. This book was Wittgenstein’s effort to make people speak with more forethought and to control our interpretations.

‘Whereof one cannot speak, thereof one must be silent’ – Ludwig Wittgenstein

Later in his life, Wittgenstein furthered his analysis of language with his second book, ‘Philosophical Investigations’. In this he suggested that language wasn’t just something to conjure images; it was a tool that we use to play games, or rather ‘patterns of intention’. As children we learn by engaging in games: activities with sets of rules that lay out parameters for us to move in, and these parameters allow us to interact with each other effectively. Wittgenstein saw language as a game with parameters. However, within language there are many different games; one example might be a ‘stating facts’ game, another might be a more emotional type of game such as a ‘help and reassurance’ game. When one person enters a conversation engaging in one game and the other person is playing a different game, the wires get crossed and we misunderstand each other. Someone might say “You never help me”, trying to say “help me, I need reassurance”, while the receiver could take it as a statement of fact and reply “I do help you, here are some examples”. The key to good communication is working out what games people are playing. We use language not only to understand each other, but to understand ourselves; it is reassuring to have to hand a word that describes your mental state, a word that is universally understood by your peers.

‘Language is a public tool for the understanding of private life’ – Ludwig Wittgenstein

This suggests that the media we consume is an important means to self-knowledge: reading books, watching films and listening to discourse give us tools to understand who we are. This is why the voice and language in media are so important to us; the human ear is predisposed to decoding these games, and through them we seek to understand ourselves.

WHY HUMANS CAN TALK

Humans use the same basic biological apparatus to make noises as chimps: lungs, throat, voice box, tongue and lips. So why are we the only ones that articulate words, talk on the phone and sing songs? Through evolution humans have developed a longer throat and smaller mouth better suited to shaping sounds; furthermore, we have developed a flexibility in our mouths unique to us that allows us to make a wide range of very specific sounds.

When we talk we produce small, controlled bursts of air that are pushed through our larynx, or voice box. The larynx is made of cartilage and muscle, with two folds of mucous membrane stretched across the top: these are the vocal cords. When air is pushed through, the folds vibrate, producing sound. We raise the pitch of our voices by tightening the cords and loosen them to make a lower sound.
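As a very rough illustration of why tension raises pitch (my own simplification, modelling a vocal fold as a vibrating string rather than the fleshy membrane it really is), the fundamental frequency rises with tension and falls with length and mass:

    f_0 \approx \frac{1}{2L}\sqrt{\frac{T}{\mu}}

where L is the vibrating length, T the tension and μ the mass per unit length; pulling the cords tighter increases T and therefore raises f_0.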

Producing sentences is a very complex process; it involves collaboration between the throat, tongue and lips to emit specific consonant and vowel sounds.

‘Speech is the most complex motor activity that a person acquires except from maybe violinists or acrobats etc. It takes about 10 years for children to get to the level of adults. ‘

Dr. Philip Lieberman

If you look back at human evolution, after we diverged from an early ape ancestor the shape of our vocal tract changed: our mouths got smaller, we developed more flexible tongues and our necks got longer. Our larynx was pulled down into the throat; the extension was a way to make room for all of this vocal equipment. This very important development in the human body came at a price. When humans eat, the food must pass by the larynx to get to the oesophagus, and this can sometimes go wrong, which is why people can choke to death.

One of the main differences between humans and chimps when it comes to the production of language is breath control: humans can control their breath to a very high degree, whereas chimps can only produce short bursts of air.

https://www.npr.org/templates/story/story.php?storyId=129083762&t=1634830350797

MICHEL CHION

The Acousmetre

The acousmetre refers to a sound that is heard without its source being seen. Hearing is the only sense that is omnidirectional; sight, however, is the most important sense to humans, the most complicated, and the one we rely on to decipher signs and elaborate language. This is where the acousmetre derives its power: given the importance of sight to humans, when we hear something but do not see it, the brain is put in an uncomfortable position.

Chion elaborates his theory by comparing it with the work of Freud. Chion writes about the relationship between mother and infant: throughout the raising of a child they are constantly playing a game of the seen and unseen, when breastfeeding, when playing hide and seek, when being held while sleeping, and so on. For Freud, this was a rather disturbing/traumatic thing for the child to undergo. Chion recognised that cinema became the perfect place for the acousmetre phenomenon to take place. Pythagorean scholars would listen to their master speak from behind a curtain for five years before being allowed to see him; they wanted to avoid the visual context impeding the content of the speech. Chion took this idea and applied it to cinema, putting forward the idea that when we see the embodiment of the voice we are hearing in cinema, it takes away from the power of the acousmetre.

Chion defines three distinct forms of the acousmetre. The first is a person you talk to on the phone, having never seen their face. The second is the visualised acousmetre, where you can put a face to the invisible voice. The third is the complete acousmetre: a voice of something not yet seen, but liable to appear in the visual field at some point in the future. Chion suggests that an acousmetre that has already been visualised is a comforting and reassuring presence, whereas one who never shows his face is less so. To clarify the power of the acousmetre in cinema, Chion compares it to the acousmetre in theatre. The offstage voice in theatre can be located by sound; you can hear it coming from a specific point to the left or right of the stage. For Chion, this disrupts the acousmetre and diminishes its power. In contrast, cinema does not employ a stage, so the acousmetre is neither inside nor outside, further disembodying the voice. Chion raises two questions: what is there to fear from the acousmetre, and what are its powers?

The powers of the acousmetre are omnipresence, omniscience and omnipotence. A perfect example of the acousmetre is ‘2001: A Space Odyssey’: HAL the computer inhabits the entire spaceship, which is incongruent with the human experience of sound and therefore unsettling.

Chion notes that this voice without a face might take us back to before we were born, when the voice was everything and it was everywhere. In the womb we can hear very early in our development, and even in the first few months of life babies lack the ability to define the visual space, as the eyes need time to develop clear vision. So it could be argued that the first few months of our lives are a complete acousmetre: we constantly hear our parents’ voices, but it is only later that we put a visual context to them.

SOUND STUDIES AND AURAL CULTURES 12/10/21

Week 3: information, language and connection

Process of work for the next few weeks:

  • Primary research- blog
  • Primary statements
  • Primary recordings
  • Introduction 1st draft of script
  • Secondary research- blog
  • Conclusion 1st draft of script
  • Recording of audio paper
  • Production of Audio paper
  • Citation check- pdf
  • Formatting & Post production of script audio paper

How can we structure information to provide understanding?

Be clear with your hypothesis or subject, your supporting arguments/evidence and your conclusion (what it means for sound design as a whole and what it means to you). I also think that the production of the audio paper affects this: support claims with sound examples and make transitions smooth and easy to follow.

This is an investigation into a subject, an issue or culture.

Who is your audience?

People who are interested in what interests me: sound designers, film enthusiasts, classmates. Share what interests you with the people who will benefit from that information.

What references will be important to explain, and what references are specific to your audience and/or subject?

I’ll have to explain what makes my ‘issue’ an issue, and why my investigation is taking place.

Ideas:

Investigation into the human voice’s impact on the brain, and the development of its different uses (too broad at the moment).

Michel Chion: the human voice takes precedence in an audio mix. Why? (Good)

VOCOCENTRISM

SOUND FOR SCREEN 12/10/21

Watch ‘Girlhood’ by Céline Sciamma

Acousmetre- ‘a kind of voice-character specific to cinema that derives mysterious powers from being heard not seen. The disembodied voice seems to come from everywhere and therefore to have no clearly defined limits to its power. Acousmetre depends for its effects on delaying the fusion of sound and image to the extreme, by supplying the sound and withholding the image of the sound’s true source until nearly the very end of the film. Only then, when the audience has used its imagination to the fullest is the… ’

– Schaeffer Acousmatic

Synchresis- The forging between something one sees and something one hears. It is the mental fusion between a sound and a visual when these occur at exactly the same time. For a single face on the screen there are dozens of allowable voices, just as for a shot of a hammer hundreds of sounds will do. The sound of an axe chopping wood, played exactly in sync with a bat hitting a baseball, will ‘read’ as a particularly forceful hit rather than a mistake by the filmmakers.

– Michel Chion, Audio-Vision

SHAPE MEANING THROUGH SOUND

The privilege of the voice in cinema: it takes priority over all other sonic elements in audiovisual media. There are voices, and then everything else. In every audio mix, the presence of a human voice instantly sets up a hierarchy of perception. The level and presence of the voice has to be artificially enhanced over the other sounds, in order to compensate for the absence of the landmarks that, in live binaural conditions, allow us to isolate the voice from the ambience.

Today’s task: recording atmosphere for a ‘We Need to Talk About Kevin’ scene.

CREATIVE PROJECTS #2: REFLECTION

Overall I think this project has been successful as an investigation into scoring with diegetic sound, although due to time constraints and inexperience the outcome isn’t as effective as I would have hoped.

I found the search for a scene probably the hardest part of this project. I imagine that if you are involved in the production of the film you have more control over the audio: you can manipulate the individual sections of audio, and I presume you can also create a situation in which this scoring technique would be effective. Using a scene from an existing film means that the audio is cemented; you can’t separate the dialogue from the room sound, and so on. I think this results in a significant creative restriction: it reduces the number of things you can manipulate and makes you rely on diegetic sound not made for the scene (in my case at least).

I thought that I would be able to think of more techniques to achieve my goal, but I hit a wall. After using the vocoder and noise gates I couldn’t think of any more techniques that would work within the context of the scene and my audio resources.

Vocoding-

I found that the vocoder worked well for my intentions; it allowed me to build melodies and scores from the diegetic sound. However, I found it difficult to create a diverse palette from the plugin: all of the synths had a similar sound, and I felt unable to create many layers without muddying the mix. Another problem, discussed briefly in the last post, was struggling to get definition with complex audio inputs. The ‘plates and chatter’ audio was dense with sounds, especially the high-pitched clinking of the plates. I think the vocoder really struggled to pick these sounds up, and the resulting synth sound was more of a rough approximation, making it very difficult to hear the connection between the audio and the synth.
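To understand why the plates got smeared, it helps to look at what a channel vocoder does. The toy Python sketch below is not the plugin I used; the band count, filter choices and file names are my own assumptions. The modulator is split into bands, each band’s envelope is followed, and the envelopes are imposed on the same bands of a synth carrier.

    # Toy channel vocoder: band-pass the modulator (diegetic audio), follow each band's
    # envelope, and impose those envelopes on the same bands of a synth carrier.
    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    mod, sr = sf.read("plates_and_chatter.wav")     # modulator (placeholder name)
    if mod.ndim > 1:
        mod = mod.mean(axis=1)
    t = np.arange(len(mod)) / sr
    carrier = 0.5 * np.sign(np.sin(2 * np.pi * 110.0 * t))   # simple square-wave carrier at 110 Hz

    edges = np.geomspace(80.0, 8000.0, 17)          # 16 log-spaced analysis bands
    out = np.zeros_like(mod)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band_mod = sosfilt(band_sos, mod)
        band_car = sosfilt(band_sos, carrier)
        env_sos = butter(2, 30.0, btype="lowpass", fs=sr, output="sos")
        env = sosfilt(env_sos, np.abs(band_mod))    # crude envelope follower: rectify and low-pass
        out += band_car * env

    out /= np.max(np.abs(out)) + 1e-9
    sf.write("vocoded.wav", out, sr)

With only sixteen fairly wide bands, the fast, high clinks above a few kilohertz all end up sharing one or two envelopes, which is roughly why those transients lose definition in the resulting synth.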

Noise gating-

I only really found one use for this, which was the breathing. I wanted to diversify the palette with some organic sound and assign a sound to the protagonist to make the audio more personal, anchoring the score to the protagonist and highlighting his emotional turmoil. I think this worked pretty well; it brought a melodic element to his laboured breathing, and the bit crusher added a gritty distortion. The only problem I had was the distinction between the inward and outward breaths. I feel the effect would have been much stronger if I had found a way to make the vocal harmonies resemble an inward breath, which I think would have contained the sound within the character.
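For context, the gating itself is simple; a bare-bones version looks something like the Python sketch below (my own illustration, with arbitrary threshold and timing values and a placeholder file name): the gate opens when the smoothed level crosses a threshold and closes when it falls back below.

    # Bare-bones noise gate: open above a threshold, close below it, smoothed to avoid clicks.
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("breathing.wav")            # placeholder file name
    if audio.ndim > 1:
        audio = audio.mean(axis=1)

    threshold = 0.05                                # linear amplitude; anything quieter is treated as noise
    attack, release = 0.002, 0.08                   # seconds
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))

    gate = np.zeros_like(audio)
    g = 0.0
    for i, x in enumerate(np.abs(audio)):
        target = 1.0 if x > threshold else 0.0      # desired gate state: open or closed
        coeff = a_att if target > g else a_rel      # fast opening, slower closing
        g = coeff * g + (1.0 - coeff) * target
        gate[i] = g

    sf.write("breathing_gated.wav", audio * gate, sr)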

Conclusion-

If I were to attempt this project again I would like to be involved in the production of the film, gaining more control over the individual sounds. I would put more research into vocoders and how to retain information from the original audio input. I would also consider creating the sound for a longer scene, and not restrict myself to instruments created solely through the diegetic sound. I think the most effective form of this technique would be to ease into the score using the diegetic sound and, once a slow build is achieved, start adding elements of a traditional score using conventional instruments.