RECORDING SESSION

First Vocal Recordings, Foley and Environmental Sound

I get a lot of unwanted sound from the road outside my window, and a lot of unwanted reverb from the reflective surfaces in my room, so as a solution I hung a duvet and some blankets on a clothes horse. This turned out to be really effective and I got a very clean result. I did get a little low-frequency bleed through the floor, but I fixed it with an EQ and a noise gate. If I were to redo this process I would place the microphone stand on the bed or something similarly elevated. I spent a lot of the time editing out bad takes and finding a good pacing for the speech. I mixed the voice fairly loud, at around -4 dB, because I wanted it to be very clear and to set a reference level so I could judge how the other elements fit around it.
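As a rough illustration (not my actual DAW chain), the gating step can be sketched in Python with numpy. The threshold, frame size and test signals here are made-up stand-ins for my real settings:

```python
import numpy as np

# -4 dBFS (my vocal mix level) as a linear gain, for reference:
# 10 ** (-4 / 20) is roughly 0.63
VOCAL_GAIN = 10 ** (-4 / 20)

def noise_gate(signal, threshold=0.02, frame=512):
    """Mute frames whose RMS falls below the threshold.

    A crude stand-in for the gate I used in my DAW: real gates add
    attack/release smoothing, this just zeroes quiet frames outright.
    """
    out = signal.copy()
    for start in range(0, len(signal), frame):
        chunk = signal[start:start + frame]
        rms = np.sqrt(np.mean(chunk ** 2))
        if rms < threshold:
            out[start:start + frame] = 0.0
    return out

# Quiet 60 Hz rumble (like the floor bleed) followed by a loud "vocal" tone.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
bleed = 0.005 * np.sin(2 * np.pi * 60 * t)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
gated = noise_gate(np.concatenate([bleed, voice]))
```

The gate removes the rumble between phrases, while the EQ (not shown) dealt with the low-frequency content inside the phrases themselves.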

Quotes

The first elements I added were the quotes. I had a lot of trouble finding a way to rip the audio straight from the internet, so I ended up sending the audio to my monitors and recording it. This did result in some room sound, but I didn't really have much of a choice; I might keep trying to find a way to download the MP3 files.

Crowd and Playground

The second element I added was the sound of a crowd, which sits in the section where I mention the busy street. Ideally I would have liked to use a field recording of my own for this, but with time restrictions and the difficulty of finding somewhere that matched what I needed, I opted for audio from a sound library. I had similar reasoning for using sound libraries for the sound of a playground, but I also didn't want to go around playgrounds with a recorder.

In Utero

Lastly for this session, I tried to emulate the sound of being inside a womb. This was the most interesting and entertaining element to record. First I made two base layers of low-frequency drones. For the first I used a sample of a bass drum I had recorded and applied an enormous amount of reverb to it, using only the wet signal. I also removed the initial transient of the bass drum, faded it in, removed a lot of the high frequencies with an EQ, and automated a gradual increase in volume to keep the level constant (because the reverb fades out). For the second layer I used a field recording of my own: the sound of wind and birds in my garden. I removed the higher frequencies, added a lot of reverb (again only the wet signal) and pitched it down a little. This layer gave some texture to the soundscape, and a sense of reality, as it is environmental sound. Next I used a bucket and a rubber-textured cloth to create the sounds of bodily fluids. These recordings were then processed with compression, reverb, EQ and pitch correction. I recorded two takes and panned them left and right, making use of the stereo field to aid the sense of immersion. I also recorded some audio of me singing 'That's Life' by Frank Sinatra and some general talking, which I treated the same way but with less reverb. I am waiting on a recording of a female friend talking; I want there to be multiple voices for the section where I say the foetus can distinguish between voices.
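The wet-only reverb and EQ moves can be approximated numerically. This is a toy sketch (a one-pole filter and a feedback-delay tail), not the plugins I actually used, and the cutoff/delay/feedback values are invented for illustration:

```python
import numpy as np

def lowpass(x, sr, cutoff=200.0):
    """One-pole low-pass: keeps the muffled, through-the-body feel."""
    dt = 1.0 / sr
    rc = 1.0 / (2 * np.pi * cutoff)
    alpha = dt / (rc + dt)
    y = np.zeros_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

def wet_reverb(x, sr, delay_s=0.05, feedback=0.7, taps=20):
    """Crude feedback-delay 'reverb' tail, returning ONLY the wet signal
    (no dry copy mixed in), like the wet-only setting I used."""
    d = int(sr * delay_s)
    out = np.zeros(len(x) + d * taps)
    for n in range(1, taps + 1):
        out[d * n : d * n + len(x)] += x * feedback ** n
    return out

# Hypothetical usage on a loaded recording:
# drone = wet_reverb(lowpass(recording, 44100), 44100)
```

Because the output contains no dry signal, the source transient disappears entirely, which is why I also had to automate volume to keep the decaying tail at a constant level.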

Used to make the sounds of bodily fluids

ACOUSMETRE IN CINEMA

2001: a Space Odyssey and Psycho

In the 1968 science fiction epic '2001: A Space Odyssey', HAL 9000 is a supercomputer aboard the spaceship Discovery; it controls all of the ship's systems and is designed to support and aid the ship's crew. Kubrick presents HAL predominantly as a voice: the computer is visually represented as small red lights present in all compartments of the ship. However, Kubrick doesn't consistently reference this visual representation when HAL speaks, associating its identity primarily with the audio. The computer is a perfect representation of the acousmetre, as this voice permeates every place on the ship: it is omnipresent. As a result it is also all-knowing, and with control of the ship's systems it is omnipotent. These acousmatic powers give HAL 9000's soft, friendly, human voice a very sinister underlying quality.

As the film goes on HAL begins to malfunction in subtle ways, and the crew decide that it must be shut down to prevent any further serious mistakes. However, HAL realises this and decides to try to kill the astronauts in order to save himself. Herein lies the engagement with the acousmetre: the deacousmatization of HAL. Through struggle, the crew manage to reach HAL's 'brain', a room of computer modules which a crew member is able to remove in order to shut down the supercomputer. As the crew member removes the modules, HAL begins to lose his consciousness, and with it his control. The interesting part of this for me is the way dialogue engages with the acousmetre during deacousmatization. HAL begins to apologise and plead for his life, expressing fear, which humanises this previously godlike entity. The voice, the engagement with language and the visual contextualisation of HAL are used to express the dynamic of power, and in turn dramatically change the audience's perception of the entity.

Excerpt from Michel Chion’s ‘The Voice in Cinema’

Alfred Hitchcock's 1960 film 'Psycho' contains a very interesting example of an acousmetre. The film follows a young woman whose car breaks down; she ends up staying in a motel run by Norman Bates, a man who has a dysfunctional relationship with the mother he looks after. The young woman overhears a dispute between Norman and his mother; this is the first exposure to the acousmetre, as the audience is privy to the mother's voice but not her appearance. Hitchcock has established an acousmatic character engaged with the action. Later in the film the young woman is savagely murdered in the famous shower scene by a visually obscured character. Norman cleans the scene in order to protect his mother from justice, then goes upstairs to find a place for her to hide. As Norman carries his mother while she talks, Hitchcock films the scene from above and far away, offering the audience a form of deacousmatization. This turns out to be a misdirection, as it is revealed that the mother is dead and Norman 'plays her' from time to time.

Bibliography:

Chion, M. and Gorbman, C., 2008. The voice in cinema. New York: Columbia University Press.

Chion, M., Gorbman, C. and Murch, W., 1994. Audio-vision: sound on screen. New York: Columbia University Press.

Liquid Architecture. 2018. Michel Chion: The Voice in Cinema, or the Acousmêtre and Me (Liquid Architecture). [online] Available at: <https://www.youtube.com/watch?v=SRik_1-eSdE&t=456s> [Accessed 25 November 2021].

Önen, U., 2008. Acousmetre: the disembodied voice in cinema. Master's thesis, Bilkent University Dept. of Communication and Design. [Accessed 5 December 2021].

EXAMPLES OF THE ACOUSMETRE IN HISTORY

Pythagorean learning, Freud and Acousmatics in Religion

The Ancient Greek mathematician and philosopher Pythagoras adopted an unusual method of teaching. He was concerned that his appearance would distract his students from the content of his speech, so to circumvent this he would teach from behind a curtain, and he would not reveal his appearance to his students until they had been learning for a substantial number of years.

A similar practice was adopted by Freud during psychoanalysis. He would ask his patients to lie down on a Victorian day bed and look up at the ceiling; the patient mustn't make eye contact with the psychoanalyst, and vice versa. This was an effort to induce something called 'free association'. It created an environment that was both clinical and intimate, encouraging the patients to freely express their thoughts. Perhaps people listen more intently when there is no visual source, and express themselves more freely when there is no visual receptor.

There are many instances of people hearing voices without a source in religious texts. In the Bible, Adam hears the voice of God telling him how to behave in the Garden of Eden, and Moses hears the voice of God telling him to lead the Hebrews out of Egypt. There are always descriptions of someone 'hearing' a voice telling them important information. My interpretation is that people are more likely to heed the word of something unseen. I see this as an example of acousmatics in early civilisation.

https://www.randallsessler.com/blog/behindthecurtain

https://www.ncbi.nlm.nih.gov/books/NBK540472/

DEVELOPMENT OF HEARING AND SIGHT IN A FOETUS

First Sounds

  • At about 18-20 weeks the physical ears start to protrude from the head.
  • At 20 weeks the neurosensory section of the auditory system starts to develop.
  • At around 25 weeks the auditory system is functional, and they can hear low frequencies from the outside world.
  • Late in the pregnancy a foetus can differentiate between voices.

Once the auditory system is functional, the hair cells in the cochlea, the axons of the auditory nerve and the neurons of the temporal lobe auditory cortex are tuned to receive stimuli. Between 25 weeks' gestation and 5-6 months of age these systems calibrate to receive certain frequencies and intensities. The auditory system requires external environmental sound to refine and develop; the two main sounds that do this are speech and music. The auditory environment a foetus is in during this time can determine its ability to hear: if the foetus is continuously exposed to loud environments, this can interfere with the development of its auditory system.

Our auditory perception of the world can be shaped and influenced by our first exposure to it. There is a time in which we cannot see but can hear voices; to me this is important when considering the acousmetre. The acousmetre is described as being everywhere, neither on screen nor outside it. I think that a foetus's experience in the womb mirrors this, and perhaps the cinematic experience of the acousmetre subconsciously reminds us of a vulnerable time in our sensory development.

Bibliography:

https://www.sciencedirect.com/science/article/abs/pii/S1527336908001347

https://www.medicalnewstoday.com/articles/324464#fetal-hearing-at-each-stage-of-development

AUDIO PAPER PRODUCTION TECHNIQUE

‘Sound Matters Podcast’

I'm listening to the 'Sound Matters' podcast by Tim Hinman as an example of how to create an engaging audio paper. The presenter starts the podcast by playing a field recording that relates very closely to the theme of the episode, listening to nature. In the recording he is lying in a snow-covered clearing, in a forest somewhere in Sweden, talking to the recorder and describing his surroundings. There are long pauses in his speech, I imagine to let the surrounding sound be heard, but he also takes the opportunity to talk in these gaps. The result is that the line between the field recording and the episode is blurred; he's in two places at once, which I find very interesting and engaging.

Another technique he uses is to put a very subtle melodic drone low in the mix. I think he has done this to help keep the edits smooth, so there isn't silence between cuts. In the transition between the field recording and the next piece of audio (animal calls) he monologues for a while and brings the volume of the drone up underneath the vocals, and the synths become more textured and complex. This keeps the ear entertained, so we aren't just listening to him talk; it's very effective. He then fades this melodic synth out as the next section begins, which works as a very effective transition, almost telling the audience to listen to what comes next.

Hinman introduces his guest by presenting his work first, then has a kind of one-way conversation with him: he has interviewed him and isolated his responses, and he describes the guest's life events and work whilst inserting snippets of the interview to give more detail. As his guest (Bernie Krause) describes the origin and development of sound on earth, Hinman supports it with sound design recreating these sounds (or an interpretation of them at least). I appreciate the way he has blended the vocals with the sound design in the line "now wait for it… life on earth is just about to begi (booming sound)". He cuts his own dialogue off with the sound design, which keeps the listener really engaged, keeping them on their toes; it's almost as though the sound design is leading and Hinman is talking 'around' it.

At one point Krause makes a comment about the human range of hearing. Rather than just asking the listener to understand the words being spoken, Hinman plays a note that sweeps from the lowest point of the frequency spectrum to the highest. This makes it so much easier for the audience to understand the facts being stated; you are far more likely to understand and retain information if you learn it through (or it is supported by) first-hand experience. A rather unusual technique I noticed is admitting fallibility: at one point the presenter struggles to say a word that Krause coined, 'anthropophony'. This humanises the host and makes the audience feel as though they are learning along with him, rather than being told facts.
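A bottom-to-top sweep like Hinman's can be approximated with an exponential sine sweep. This is my own sketch, assuming the textbook 20 Hz to 20 kHz hearing range rather than whatever was actually used in the episode:

```python
import numpy as np

def log_sweep(sr=44100, dur=5.0, f0=20.0, f1=20000.0):
    """Exponential sine sweep covering the nominal human hearing range.

    The instantaneous frequency rises from f0 to f1 exponentially,
    which matches how we perceive pitch (equal time per octave).
    """
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    k = np.log(f1 / f0)
    phase = 2 * np.pi * f0 * dur / k * (np.exp(t / dur * k) - 1.0)
    return np.sin(phase)
```

The result could be written to disk with, for example, `scipy.io.wavfile.write("sweep.wav", 44100, log_sweep().astype(np.float32))`.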

The goal of an audio paper is to convey information and build understanding in the audience. I believe that a lot of these techniques will help me to do that; I think giving audio examples of the topic is the most effective way. I also think it's very important to integrate the dialogue and sound design seamlessly.

WITTGENSTEIN

Language Games

Wittgenstein was an Austrian philosopher who had revolutionary ideas about human communication. In 'Tractatus Logico-Philosophicus' Wittgenstein investigates how humans communicate ideas to one another, suggesting that we use language to trigger images in each other's minds. This idea was sparked by a Paris court case in which the judge ordered a visual recreation of the events, in this case a car crash, to gain a better understanding of the situation. Wittgenstein asserts that we use words to make pictures of facts: in conversation we are exchanging pictures of scenes. However, most of us find it difficult to conjure a picture in someone else's mind that is accurate to our own, and this breeds miscommunication. Another danger is that we read too much into other people's explanations, conjuring an inaccurate image in our minds. This book was Wittgenstein's effort to make people speak with more forethought, and to control our interpretations.

‘Whereof one cannot speak, thereof one must be silent’ – Ludwig Wittgenstein

Later in his life, Wittgenstein furthered his analysis of language with his second book, 'Philosophical Investigations'. In this, he suggested that language wasn't just something to conjure images; it was a tool that we use to play games, or rather 'patterns of intention'. As children we learn by engaging in games: activities with a set of rules that lay out parameters for us to move within. These parameters allow us to interact with each other effectively. Wittgenstein saw language as a game with parameters. However, within language there are many different games: one example might be a 'stating facts' game, another might be a more emotional type of game such as a 'help and reassurance' game. When one person enters a conversation engaging in one game, and another person is playing a different game, the wires get crossed and we misunderstand each other. Someone might say "You never help me", meaning "help me, I need reassurance", while the receiver could take it as an accusation and reply "I do help you, here are some examples". The key to good communication is working out what games people are playing. We use language not only to understand each other, but to understand ourselves. It is reassuring to have to hand a word that describes your mental state, a word that is universally understood by your peers.

‘Language is a public tool for the understanding of private life’ – Ludwig Wittgenstein

This suggests that the media we consume is an important means to self-knowledge: reading books, watching films and listening to discourse gives us tools to understand who we are. This is why the voice and language in media are so important to us; the human ear is predisposed to decoding these games, and through them we seek to understand ourselves.

WHY HUMANS CAN TALK

Humans use the same basic biological apparatus to make noises as chimps: lungs, throat, voice box, tongue and lips. So why are we the only ones that articulate words, talk on the phone and sing songs? Through evolution humans have developed a longer throat and a smaller mouth better suited to shaping sounds; furthermore, we have developed a flexibility in our mouths unique to us that allows us to make a wide range of very specific sounds.

When we talk we produce small, controlled bursts of air that are pushed through our larynx, or voice box. The larynx is made of cartilage and muscle, with two folds of mucous membrane stretched across the top: these are the vocal cords. When air is pushed through, the folds vibrate, producing sound. We change the pitch of our voices by tightening the cords to raise it and loosening them to lower it.

Producing sentences is a very complex process, involving a collaboration between the throat, tongue and lips to emit specific consonant and vowel sounds.

'Speech is the most complex motor activity that a person acquires except from maybe violinists or acrobats etc. It takes about 10 years for children to get to the level of adults.'

Dr Philip Lieberman

If you look back at human evolution, after we diverged from an early ape ancestor the shape of our vocal tract changed. Our mouths got smaller, we developed more flexible tongues and our necks got longer. Our larynx was pulled down into the throat; the extension made room for all of this vocal equipment. This very important development in the human body came at a price: when humans eat, the food must pass by the larynx to get to the oesophagus, and this can sometimes go wrong, which is why people can choke to death.

One of the main differences between humans and chimps when it comes to the production of language is breath control: humans can control their breath to a very high degree, whereas chimps can only produce short bursts of air.

https://www.npr.org/templates/story/story.php?storyId=129083762&t=1634830350797

MICHEL CHION

The Acousmetre

The acousmetre refers to a sound whose source remains unseen. Hearing is the only sense that is omnidirectional, yet sight is the most important to humans: it is the most complex, and it is what we rely on to decipher signs and elaborate language. This is where the acousmetre derives its power; given the importance of sight to humans, when we hear something but cannot see it, the brain is put in an uncomfortable position.

Chion elaborates his theory by comparing it with the work of Freud. He talks about the relationship between mother and child: while being raised, they are constantly playing a game of the seen and unseen, when breastfeeding, when playing hide and seek, when being held while sleeping, and so on. For Freud, this was a rather disturbing, even traumatic, thing for the child to undergo. Chion recognised that the cinema became the perfect place for this acousmatic phenomenon to take place. Pythagorean scholars would listen to their master speak from behind a curtain for five years before being able to see him; they wanted to avoid the visual context impeding the content of the speech. Chion took this idea and applied it to cinema, putting forward the idea that when we see the embodiment of the voice we are hearing, it takes away from the power of the acousmetre.

Chion defines three distinct forms of the acousmetre. The first is a person you talk to on the phone having never seen their face. The second is the visualised acousmetre: you can put a face to the invisible voice. The third is the complete acousmetre, a voice of something not yet seen but liable to appear in the visual field at some point in the future. Chion suggests that the acousmetre that has already been visualised is a comforting and reassuring presence, whereas he who never shows his face is less so. To clarify the power of the acousmetre in cinema, Chion compares it to the acousmetre in theatre. The offstage voice in theatre can be located by sound: you can hear it coming from a specific point to the left or right of the stage. For Chion, this disrupts the acousmetre and diminishes its power. In contrast, cinema does not employ a stage, so the acousmetre is neither inside nor outside the frame, further disembodying the voice. Chion raises two questions: what is there to fear from the acousmetre, and what are its powers?

The powers of the acousmetre are omnipresence, omniscience and omnipotence. A perfect example is HAL in '2001: A Space Odyssey': the computer's voice inhabits the entire spaceship, which is incongruent with the human experience of sound and therefore unsettling.

Chion notes that this voice without a face might take us back to before we were born, when the voice was everything and it was everywhere. In the womb, we can hear very early in our development. Even in the first few months of life, babies lack the ability to define the visual space; the eyes need time to develop to acquire clear vision. So it could be argued that the first few months of our lives are a complete acousmetre: we constantly hear our parents' voices, but only later do we put a visual context to them.

SOUND STUDIES AND AURAL CULTURES 12/10/21

Week 3: information, language and connection

Process of work for the next few weeks:

  • Primary research- blog
  • Primary statements
  • Primary recordings
  • Introduction 1st draft of script
  • Secondary research- blog
  • Conclusion 1st draft of script
  • Recording of audio paper
  • Production of Audio paper
  • Citation check- pdf
  • Formatting & Post production of script audio paper

How can we structure information to provide understanding?

Be clear with the hypothesis or subject, supporting arguments/evidence, and conclusion (what it means for sound design as a whole and what it means to you). I also think the production of the audio paper affects this: support claims with sound examples and make transitions smooth and easy to follow.

This is an investigation into a subject, an issue or culture.

Who is your audience?

People who are interested in what interests me: sound designers, film enthusiasts, classmates. Share what interests you with the people who will benefit from that information.

What references will be important to explain, and what references are specific to your audience and/or subject?

I'll have to explain what makes my 'issue' an issue, and why my investigation is taking place.

Ideas:

Investigation into the human voice's impact on the brain, and the development of its different uses. (Too broad at the moment.)

Michel Chion: the human voice takes precedence in an audio mix. Why? (Good)

VOCOCENTRISM