WHY HUMANS CAN TALK

Humans use the same basic biological apparatus to make noises as chimps: lungs, throat, voice box, tongue and lips. So why are we the only ones that articulate words, talk on the phone and sing songs? Through evolution humans have developed a longer throat and a smaller mouth better suited to shaping sounds; furthermore, we have developed a flexibility in our mouths unique to us that allows us to make a wide range of very specific sounds.

When we talk we produce small, controlled bursts of air that are pushed through our larynx, or voice box. The larynx is made of cartilage and muscle, and two folds of mucous membrane, the vocal cords, are stretched across the top. When air is pushed through, the folds vibrate and produce sound. We raise the pitch of our voices by tightening the cords and lower it by loosening them.

Producing sentences is a very complex process: it involves a collaboration between the throat, tongue and lips to emit specific consonant and vowel sounds.

‘Speech is the most complex motor activity that a person acquires, except for maybe violinists or acrobats, etc. It takes about 10 years for children to get to the level of adults.’

Dr Philip Lieberman

If you look back at human evolution, after we diverged from an early ape ancestor the shape of our vocal tract changed. Our mouths got smaller, we developed more flexible tongues and our necks got longer. Our larynx was pulled down into the throat, an extension that made room for all of this vocal equipment. This very important development in the human body came at a price: when humans eat, food must pass over the larynx on its way to the oesophagus, and when that routing goes wrong we can choke to death.

One of the main differences between humans and chimps when it comes to the production of language is breath control: humans can control their breath to a very high degree, whereas chimps can only produce short bursts of air.

https://www.npr.org/templates/story/story.php?storyId=129083762&t=1634830350797

MICHEL CHION

The Acousmetre

Acousmetre refers to a sound whose source remains unseen. Hearing is the only sense that is omnidirectional; however, sight is the most important sense to humans: it is the most complicated, and it is what we rely on to decipher signs and elaborate language. This is where the acousmetre derives its power. Given the importance of sight to humans, when we hear something but cannot see it, the brain is put in an uncomfortable position.

Chion elaborates his theory by comparing it with the work of Freud. He discusses the relationship between mother and child: during upbringing they constantly play a game of the seen and unseen, when breastfeeding, when playing hide and seek, when being held while sleeping, and so on. For Freud, this was a rather disturbing, even traumatic thing for the child to undergo. Chion recognised that cinema became the perfect place for this acousmetre phenomenon to take place. Pythagorean scholars would listen to their master speak from behind a curtain for five years before being allowed to see him; they wanted to avoid the visual context impeding the content of the speech. Chion took this idea and applied it to cinema, putting forward the idea that when we see the embodiment of the voice we are hearing, it takes away from the power of the acousmetre.

Chion defines three distinct forms of the acousmetre. The first is a person you talk to on the phone, having never seen their face. The second is the already visualised acousmetre: you can put a face to the invisible voice. The third is the complete acousmetre, a voice whose source is not yet seen but is liable to appear in the visual field at some point in the future. Chion suggests that the acousmetre that has already been visualised is a comforting and reassuring presence, whereas the one who never shows his face is less so. To clarify the power of the acousmetre in cinema, Chion compares it to the acousmetre in theatre. The offstage voice in theatre can be located by sound; you can hear it coming from a specific point to the left or right of the stage. For Chion, this disrupts the acousmetre and diminishes its power. In contrast, cinema does not employ a stage, so the acousmetre is neither inside nor outside the frame, further disembodying the voice. Chion raises two questions: what is there to fear from the acousmetre, and what are its powers?

The powers of the acousmetre are as follows: omnipresence, omniscience and omnipotence. A perfect example of the acousmetre is 2001: A Space Odyssey. HAL the computer inhabits the entire spaceship; this is incongruent with the human experience of sound and therefore unsettling.

Chion notes that this voice without a face might take us back to before we were born, when the voice was everything and it was everywhere. In the womb we can hear very early in our development. Even in the first few months of life, babies lack the ability to define visual space; the eyes need time to develop clear vision. And so it could be argued that the first few months of our lives are a complete acousmetre: we constantly hear our parents’ voices, but it is only later that we put a visual context to them.

SOUND STUDIES AND AURAL CULTURES 12/10/21

Week 3: information, language and connection

Process of work for the next few weeks:

  • Primary research- blog
  • Primary statements
  • Primary recordings
  • Introduction 1st draft of script
  • Secondary research- blog
  • Conclusion 1st draft of script
  • Recording of audio paper
  • Production of Audio paper
  • Citation check- pdf
  • Formatting & Post production of script audio paper

How can we structure information to provide understanding?

Be clear with the hypothesis or subject, the supporting arguments/evidence and the conclusion (what it means for sound design as a whole and what it means to you). I also think the production of the audio paper affects this: support claims with sound examples and make transitions smooth and easy to follow.

This is an investigation into a subject, an issue or culture.

Who is your audience?

People who are interested in what interests me: sound designers, film enthusiasts, classmates. Share what interests you with the people who will benefit from that information.

What references will be important to explain, and what references are specific to your audience and/or subject?

I’ll have to explain what makes my ‘issue’ an issue, and why my investigation is taking place.

Ideas:

Investigation into the human voice’s impact on the brain, and the development of different uses. (too broad atm).

Michel Chion: the human voice takes precedence in an audio mix. Why? (Good)

VOCOCENTRISM

SOUND FOR SCREEN 12/10/21

Watch ‘Girlhood’ by Céline Sciamma

Acousmetre- ‘a kind of voice-character specific to cinema that derives mysterious powers from being heard not seen. The disembodied voice seems to come from everywhere and therefore to have no clearly defined limits to its power. Acousmetre depends for its effects on delaying the fusion of sound and image to the extreme, by supplying the sound and withholding the image of the sound’s true source until nearly the very end of the film. Only then, when the audience has used its imagination to the fullest is the…’

– Schaeffer Acousmatic

Synchresis- The forging between something one sees and something one hears. It is the mental fusion between a sound and a visual when these occur at exactly the same time. For a single face on the screen there are dozens of allowable voices, just as for a shot of a hammer hundreds of sounds will do. The sound of an axe chopping wood, played exactly in sync with a shot of a bat hitting a baseball, will ‘read’ as a particularly forceful hit rather than a mistake by the filmmakers.

– Michel Chion, Audio-Vision

SHAPE MEANING THROUGH SOUND

The privilege of voice in cinema- it takes priority over all other sonic elements in audiovisual media. There are voices, and then everything else. In every audio mix, the presence of a human voice instantly sets up a hierarchy of perception. The level and presence of the voice has to be artificially enhanced over the other sounds in order to compensate for the absence of the spatial landmarks that, in live binaural conditions, allow us to isolate the voice from the ambience.

Today’s task- Recording atmosphere for the ‘We Need to Talk About Kevin’ scene.

WE NEED TO TALK ABOUT KEVIN RESCORE #1

I will be rescoring the opening scene of the 2011 film ‘We Need to Talk About Kevin’. The film opens with a dream sequence: what seems to be a very busy festival in which people are chucking tomato pulp over each other. The reality of the scene is later revealed to the audience; the protagonist is asleep in her living room, and someone has thrown red paint at her house. This situation gives sound designers a lot of room to be creative, the two scenes being intrinsically linked and the dreamscape allowing non-figurative use of sound.

My first thought when considering the rescore is that I would like to ground the dream soundscape in reality by only using (and manipulating) sounds that would be heard in the context of the second scene (reality). I will start the process by identifying the sounds I can use as a base for instrumentation:

  • Wind
  • Birds
  • Television
  • Phone
  • Voices
  • Cars
  • Her breath

I will then manipulate these sounds to evoke the emotions of the dream sequence, which I think is a kind of euphoric escape, and a feeling of support from other people which perhaps contrasts with reality.

The next step is to identify the different sections of the scene, highlighting changes in atmosphere and the introduction or emphasis of certain sounds (a small timecode-conversion sketch for working with these cues follows the key below):

Dreamscape-

  • 00:00:45:05 Small room with open door, curtains waving. (might want to introduce these sounds before the visual to ease the viewer into the story) A
  • 00:01:19:20 Outside area, very crowded. Lots of movement, liquid being thrown. A/SFX
  • 00:01:44:14 Change of camera angle, close up, approx. 13 people in shot. SFX
  • 00:01:55:22 Close up of liquid being thrown SFX
  • 00:02:07:18 Close up of 2 people falling into 1 foot high liquid SFX
  • 00:02:11:13 Close up of people kicking liquid SFX
  • 00:02:16:05 Close up of person writhing around in liquid SFX
  • 00:02:18:04 Protagonist is lifted above crowd, liquid still flying (She’s smiling and laughing) A/SFX
  • 00:02:31:12 Someone is hit in the face with tomatoes SFX
  • 00:02:43:08 Protagonist is pulled back to crowd level A
  • 00:02:49:19 Protagonist is laid on floor and people throw tomatoes on her SFX
  • 00:02:53:07 Close up of protagonist’s face while tomatoes are thrown SFX
  • 00:02:59:02 Close up of tomato pulp on floor SFX

Reality-

  • 00:03:06:16 Small living room, window closed. A
  • 00:03:32:05 Close up of protagonist’s face waking up SFX
  • 00:03:43:14 Close up of window SFX
  • 00:03:57:22 Protagonist’s feet hit the floor FOLEY
  • 00:04:07:10 Protagonist’s foot stubs the table leg and pills hit the ground FOLEY
  • 00:04:12:11 Protagonist puts on slippers FOLEY
  • 00:04:15:14 Protagonist pulls door handle off door FOLEY
  • 00:04:18:06 Protagonist opens door FOLEY
  • 00:04:22:07 Scene change to outside, Protagonist walks down steps, walks a few paces and turns around FOLEY
  • 00:04:37:04 Protagonist walks back up steps FOLEY
  • 00:04:48:23 Door closes, end scene. FOLEY

Key:

A- Atmosphere

SFX- Sound effects

FOLEY- Foley
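To drop these cues into my DAW session as markers, the HH:MM:SS:FF timecodes need converting to seconds. Below is a minimal Python sketch of that arithmetic; the 24 fps frame rate and the handful of example cues are illustrative assumptions rather than the film's actual delivery spec.

```python
# Minimal sketch: convert HH:MM:SS:FF cue points into seconds for DAW markers.
# The 24 fps frame rate is an assumption about the working timeline.
FPS = 24

def timecode_to_seconds(tc: str, fps: int = FPS) -> float:
    """Convert 'HH:MM:SS:FF' to seconds at the given frame rate."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return hours * 3600 + minutes * 60 + seconds + frames / fps

# A few cues from the dreamscape breakdown above, tagged with their layer key.
cues = [
    ("00:00:45:05", "A",     "Small room, open door, curtains waving"),
    ("00:01:19:20", "A/SFX", "Crowded outside area, liquid being thrown"),
    ("00:02:18:04", "A/SFX", "Protagonist lifted above the crowd"),
]

for tc, layer, description in cues:
    print(f"{timecode_to_seconds(tc):8.2f}s  [{layer}]  {description}")
```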

CREATIVE PROJECTS #2: REFLECTION

Overall I think this project has been successful as an investigation into scoring with diegetic sound, although due to time constraints and inexperience the outcome isn’t as effective as I would have hoped.

I found the search for a scene probably the hardest part of this project. I imagine that if you are involved in the production of the film you have more control over the audio and can manipulate its individual elements; I presume you can also create a situation in which this scoring technique would be effective. Using a scene from a finished film means the audio is cemented: you can’t separate the dialogue from the room sound, etc. I think this results in a significant creative restriction. It reduces the number of things you can manipulate and makes you rely on diegetic sound not made for the scene (in my case at least).

I thought that I would be able to think of more techniques to achieve my goal but I sort of reached a wall. After using the vocoder and noise gates I couldn’t think of any more techniques that would work with the context of the scene and my audio resources.

Vocoding-

I found that the vocoder worked well for my intentions; it allowed me to build melodies and scores from the diegetic sound. However, I found it difficult to create a diverse palette from the plugin. All of the synths had a similar sound, and I felt unable to create many layers without muddying the mix. Another problem, discussed briefly in the last post, was struggling to get definition with complex audio inputs. The ‘plates and chatter’ audio was dense with sounds, especially the high-pitched clinking of the plates. I think the vocoder really struggled to pick these up, so the resulting synth was more a rough approximation of the source, and it’s very difficult to hear the connection between the audio and the synth.

Noise gating-

I only really found one use for this, which was the breathing. I wanted to diversify the palette with some organic sound and assign a sound to the protagonist to make the audio more personal, anchoring the score to the protagonist and highlighting his emotional turmoil. I think this worked pretty well: it brought a melodic element to his laboured breathing, and the bit crusher added a gritty distortion. The only problem I had was the distinction between the inward and outward breaths. The effect would have been a lot stronger if I had found a way to make the vocal harmonies resemble an inward breath; I think this would have contained the sound within the character.

Conclusion-

If I were to attempt this project again I would like to be involved in the production of the film, gaining more control over the individual sounds. I would put more research into vocoders and how to retain information from the original audio input. I would also consider creating the sound for a longer scene, and not restricting myself to instruments created only through the diegetic sound. I think the most effective form of this technique would be to ease into the score using the diegetic sound and, once a slow build is achieved, start adding elements of a traditional score using normal instruments.

CREATIVE PROJECTS #2: PROCESS

I have isolated the different elements of the sound design in the scene, and using the BBC sound effects library I have recreated the soundscape.

Air-conditioning unit-

I began building the score with the air conditioning: I used a vocoder to create a bass synth out of the audio. I put this sound quite low in the mix alongside the original aircon sound, the intention being that the synth would be unnoticeable as an instrument. I’m trying to use a technique I discussed in the tinnitus retraining therapy section of my essay, desensitising an audience to a constant sound by surrounding it with other sounds. With this technique I have established a melody from the outset of the scene without the audience’s knowledge. I kept the note constant so as not to draw attention to it until the emotionally substantive part of the scene. My hope is that the audience’s emotional involvement with the scene, coupled with the subtlety of the change, will maintain the musical suspension of disbelief.
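For reference, this is roughly what the vocoder is doing under the hood. The sketch below is not the plugin I used in the session; it is a bare-bones channel vocoder in Python, with the aircon recording as the modulator and a low sawtooth drone standing in for the bass-synth carrier. The file name, carrier pitch and band layout are all assumptions for illustration.

```python
# Bare-bones channel vocoder sketch (NumPy/SciPy), not the plugin used in the
# session. The aircon recording is the modulator; a low sawtooth drone stands
# in for the bass-synth carrier. File name, carrier pitch (A1) and the
# 16-band layout are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt, hilbert

sr, aircon = wavfile.read("aircon.wav")            # hypothetical source file
aircon = aircon.astype(np.float64)
if aircon.ndim > 1:                                # fold stereo to mono
    aircon = aircon.mean(axis=1)
aircon /= np.max(np.abs(aircon)) + 1e-9            # normalise the modulator

t = np.arange(len(aircon)) / sr
f0 = 55.0                                          # assumed carrier pitch (A1)
carrier = 2.0 * ((t * f0) % 1.0) - 1.0             # naive sawtooth drone

band_edges = np.geomspace(80, 4000, 17)            # 16 log-spaced analysis bands
out = np.zeros_like(aircon)
for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    mod_band = sosfilt(sos, aircon)                # modulator energy in this band
    envelope = np.abs(hilbert(mod_band))           # follow that energy over time
    out += sosfilt(sos, carrier) * envelope        # imprint it on the carrier band

out /= np.max(np.abs(out)) + 1e-9
wavfile.write("aircon_bass_synth.wav", sr, (out * 0.3 * 32767).astype(np.int16))
```

In principle the drone inherits the slow swells of the air conditioning, which is the ‘unnoticeable instrument’ effect described above.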

Plates/Chatter-

I found a sound effect of plates and chatter that seemed to fit the diner scene, although it doesn’t sound exactly the same as the original audio. I used this audio to create a mid-frequency synth, which carries most of the melodic information. I found it especially difficult to create a synthesiser that felt connected to this audio; although I know the synth is driven by the audio, it could just be any synth to my ears. I think this might be because there is a lot of information in the audio, or it could be that I don’t know how to vocode this type of audio. Either way, I think this didn’t work how I wanted it to.

Breathing-

I recorded myself breathing to match the character’s movements. I then recorded four vocal harmonies to add to the higher end of the mix and contained all of these tracks in a stack/bus. I then noise-gated the bus and side-chained the gate’s input to the breathing audio; this anchors the vocals to the breath. I also added a few plugins to the stack, such as a chorus and a vocal doubler, and as the scene draws closer to the climax I automated a decreasing bit resolution.
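As a rough illustration of that routing, the sketch below gates a vocal-harmony bus with the breathing recording as the side-chain key, so the harmonies only sound while the breath does. The file names, threshold and attack/release times are placeholder assumptions, not the actual session settings, and mono 16-bit files are assumed.

```python
# Side-chained noise gate sketch: the breathing track opens and closes a gate
# on the vocal-harmony bus. Threshold and envelope times are placeholders.
import numpy as np
from scipy.io import wavfile

sr, breath = wavfile.read("breathing.wav")            # hypothetical side-chain key
_, vocals = wavfile.read("vocal_harmony_bus.wav")     # hypothetical gated bus
breath = breath.astype(np.float64) / 32768.0          # assumes mono 16-bit files
vocals = vocals.astype(np.float64) / 32768.0
n = min(len(breath), len(vocals))
breath, vocals = breath[:n], vocals[:n]

threshold = 0.05                                       # gate opens above this key level
attack, release = 0.005, 0.120                         # seconds: fast open, slower close
a_att = np.exp(-1.0 / (attack * sr))
a_rel = np.exp(-1.0 / (release * sr))

gain = np.zeros(n)
env = 0.0
for i in range(n):
    target = 1.0 if abs(breath[i]) > threshold else 0.0
    coeff = a_att if target > env else a_rel           # pick attack or release slope
    env = coeff * env + (1.0 - coeff) * target         # smooth the gate's gain
    gain[i] = env

gated = vocals * gain                                  # harmonies now follow the breath
wavfile.write("gated_harmonies.wav", sr, (gated * 32767).astype(np.int16))
```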

CREATIVE PROJECTS #2: DIEGETIC SCORE

I need to find a film scene suitable for my project; I will first outline the necessary qualities of the scene:

- No existing score (this would get in the way of my score, defeating the object of the project)

- A lot of diegetic sound (this will give me a lot to work with in terms of side-chaining)

- Ample emotional context (I need something to express through the score)

I have decided on the ending diner scene from Lynne Ramsay’s ‘You Were Never Really Here’; it fits all of the criteria. Something I realise I need to do is isolate the different elements of the soundscape so I can manipulate them melodically. The elements that make up the scene are dialogue, chatter, plates clinking and air conditioning.

‘You Were Never Really Here’ is renowned for its sound design; it won the British Independent Film Award for Best Sound and a Best Score award from the Boston Online Film Critics Association. There are particular moments in the sound design where the diegetic sound blends with the score, which is the exact thing I am trying to achieve. This ending scene is devoid of score; it only has the radio playing in the background and accentuated diegetic sound. I’m going to try to use this to my advantage, vocoding the diegetic sound to create a score.

CREATIVE PROJECTS #2: RESEARCH- DIEGESIS

Diegesis is defined as the narrative construct a story takes place in: a world or universe with its own set of rules. Diegetic sound is sound that happens inside this world; the characters are able to perceive it and it follows the rules of the world, e.g. talking, footsteps, a radio. Non-diegetic sound refers to sound that happens outside this world, e.g. narration or the score.

If the characters of a story can only hear diegetic sound and we want the audience to relate to the characters, it could be argued that we as sound designers should strive to use diegetic sound as much as possible. Using diegetic sound immerses the audience in the characters’ position. We face a problem if we want to do this in traditional cinema: a score is non-diegetic. How do we bring this non-diegetic emotional expression into the diegetic world?

This scene from Birdman achieves this feat excellently. The drums start as non-diegetic, a musical accompaniment outside of the story’s world. However, by bringing the drummer into the world, the filmmaker brings the audience and the characters together: we hear what they hear, and so we understand their perception of the scene to a deeper extent, placing us in the story.

I think it would be very difficult to create a score that is completely diegetic, and so instead I’m going to create one that is shaped and informed by the diegetic world. It remains non-diegetic, but it has the quality of the diegetic sound.

CREATIVE PROJECTS #2: RESEARCH- HOWARD ASHMAN

MUSICAL THEATRE’S INFLUENCE ON SOUND DESIGN AND MUSIC IN FILM:

Keeping the musical framing of the story in synchronisation with the character/subject organically. 

Howard Ashman was an American playwright, artistic director and lyricist with a firm rooting in musical theatre. He wrote and directed such works as ‘God Bless You, Mr. Rosewater’, ‘Smile’ and ‘Little Shop of Horrors’. In 1986 (at the age of 36) Ashman began a collaboration with the Disney Company, turning his creative efforts to film. Disney gave him the option of three projects to work on: two live-action projects and an animated musical. Ashman leapt at animation and began work on ‘The Little Mermaid’.

Ashman put a great emphasis on telling stories through music; he believed it was “central to what Disney is”. He presented a case to the staff at Disney that animation and musical storytelling were made for each other; he saw “a very very strong connection between these two media”. Howard understood that whenever you’re creating something where the songs are in a context bigger than themselves, you’re creating musical theatre. They started taking the keynotes/high points of their story, where the characters can’t help but let their emotions run out, and turning these points into pieces of musical theatre: “these songs aren’t just bolstered on, they’re the tentpoles that hold the movie up”.

The animators at Disney commended Ashman’s ability to transition seamlessly from spoken word to song; he did this by bringing the musical vamp in under the dialogue preceding the song. Using this technique they could go smoothly from a contextual scene to a musical, emotional scene, keeping the audience’s suspension of disbelief afloat and keeping them invested in the story. This gradual transition is intended to express organically how a character’s emotional intensity changes over time, keeping the musical framing of the story in synchronisation with the character/subject.

Usage:

This brings me to how I could apply Ashman’s principles of seamless transition to song, and his musical theatre influence, to sound design for film. I would like to explore the effectiveness of creating smooth transitions from diegetic sound in contextual scenes to more musical sound design for the emotional high points of a film. How do I create seamless transitions? One idea I would like to experiment with is using locational/contextual elements to build the score (or at least its beginning): for example, vocoding diegetic sound to begin a score, or using the rhythm of someone’s speech or movement to build a drum score, even using diegetic sound as the samples for the beat (a rough sketch of this follows below). I want to tie the musical scoring of a scene as tightly as I can to the visual context.
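As a first pass at the ‘rhythm of speech as a beat’ idea, the sketch below uses librosa’s onset detector to find the rhythmic accents in a diegetic recording and drops a drum one-shot on each onset. The file names are placeholders and the result would obviously need musical editing; it just shows the mechanism.

```python
# Sketch: derive a drum pattern from the rhythm of a diegetic recording.
# librosa finds the onsets; a one-shot drum sample is placed on each of them.
import numpy as np
import librosa
import soundfile as sf

speech, sr = librosa.load("diegetic_speech.wav", sr=None, mono=True)   # hypothetical clip
onset_times = librosa.onset.onset_detect(y=speech, sr=sr, units="time")

drum, _ = librosa.load("drum_hit.wav", sr=sr, mono=True)               # hypothetical one-shot
track = np.zeros(len(speech) + len(drum))
for onset in onset_times:                      # drop a hit at every detected onset
    start = int(onset * sr)
    track[start:start + len(drum)] += drum

track /= np.max(np.abs(track)) + 1e-9
sf.write("speech_rhythm_drums.wav", track, sr)
```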

Sources:

Waking Sleeping Beauty Bonus Features – Howard Ashman: https://www.youtube.com/watch?v=9PggMaREbs0

The Problem with Tarzan:

The life and work of Howard Ashman:

https://www.howardashman.com/howards-life-time

Disney Collaboration, 1986-1991:

https://www.howardashman.com/timeline-3/2018/2/6/disney-collaboration-1986-1991