SOUND IN HORROR GAMES

Five Nights at Freddy’s is a horror game in which the player is trapped in the security office of a pizzeria while animatronic creatures attempt to kill them. The feeling of horror is built on two concepts: absence of action and absence of information.

The absence of information refers to the fact that the animatronic creatures move when the player is not looking at them, and so the player must rely on clues as to their whereabouts. The player has access to some obvious visual clues, such as the CCTV system, but much of the information is subtly delivered through the game’s audio. The audio is a more complicated source of information because some of it helps you and some of it deceives you, which adds to the unsettling, nightmare-like sense of not knowing.

Within the game’s sound there is advantageous audio: the audio that helps the player with their goal of survival. These sounds do not reveal themselves straight away; when the player begins the game they present themselves as chaotic. As more time is spent in the game, the player gains an understanding of the auditory patterns and their listening transitions from causal to semantic, deducing clues from the patterns. An example of this is the sound of pots and pans. At first it seems like an element of ambience, put in the game to induce panic in the player. However, as the player interacts with the game more, they will find that this sound becomes louder when they look at the kitchen camera (whose video feed is broken; it is audio only). Coupled with the observation that one of the animatronics is missing, the player can deduce that it is in the kitchen. These unsettling sounds have a purpose in the gameplay.

The most interesting part of the sound design for this game is the detrimental audio: the audio that is there to distract from the truth. One element of this is the choice to have very poor transmission of audio from the cameras; the audio is heavily filtered and manipulated as a means to muddy the information. There are also sounds implemented simply to get in the way, to demand attention at a time of intense audio sensitivity, such as the mechanical sound of the cameras turning or the intermittent static. All of this is further lightly obscured by the fan in the office, a constant sound of a similar frequency range to the audio from the cameras. These sound design choices are a very clever means to disorient and frustrate the player in a way similar to a nightmare: like the feeling of running and going nowhere, small things make success so much harder. Paired with a life or death scenario, the result is intense frustration.

I think that sound has a unique capability in horror games specifically. In my opinion, audio hallucination is much less obvious to the subject than visual hallucination, yet it holds a great deal of weight in our sense of orientation in reality. We understand a lot of our environment through listening, and in a game context, where this audio can not only be manipulated but also interacted with (giving it a greater weight in reality), sound designers can achieve a uniquely disorientating experience.

MA COLLABORATION

Mechanical Bird Sound First Draft

The first thing I did when constructing this sound was identify the different samples I could use. The sound was divided into three elements: natural, mechanical cog and motor. I started by finding audio of a bird flapping its wings; this is what I would use as a blueprint to lay the mechanical sounds on top of. I tried to match the real sound source’s size to the fictional object’s size. The first mechanical sound I added was the cog pulling the wing down, for which I used layered camera shutters, because a shutter is a small, high-velocity mechanism. The second element was a motor, as I imagined a motor would be necessary for a mechanical system; the only motor that would fit inside a mechanical bird would be a watch motor. Layered together, these created a textured mechanical system to place under the natural sounds. In the interest of time I created a six-flap loop; when I revisit this sound I will find more diverse samples and create a longer loop to make it more believable.
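For reference, the rough shape of that layering and looping process could be sketched in Python as below. The filenames are placeholders for my actual samples, the mix levels are only starting points rather than the settings used in the draft, and all files are assumed to share one sample rate.

# A rough sketch of the layering described above, not the actual session:
# three hypothetical, pre-trimmed single-flap samples are summed and the
# result is tiled into a six-flap loop.
import numpy as np
import soundfile as sf

def load_mono(path):
    """Load a file and collapse it to mono."""
    audio, sr = sf.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)
    return audio, sr

wing, sr = load_mono("wing_flap.wav")         # natural layer (the blueprint)
shutter, _ = load_mono("camera_shutter.wav")  # cog pulling the wing down
motor, _ = load_mono("watch_motor.wav")       # small motor underneath

# Pad every layer to the length of the longest one so they can be summed.
length = max(len(wing), len(shutter), len(motor))
layers = [np.pad(x, (0, length - len(x))) for x in (wing, shutter, motor)]

# Simple mix with the natural layer on top and the mechanical layers tucked under.
flap = layers[0] + 0.5 * layers[1] + 0.3 * layers[2]
flap /= np.max(np.abs(flap))          # normalise the peak to avoid clipping

loop = np.tile(flap, 6)               # the six-flap loop mentioned above
sf.write("mechanical_bird_loop.wav", loop, sr)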

The natural sound posed the hardest problem when designing this sound. I found that recordings of real birds had a Doppler effect, as the birds were always flying past the microphone; this was unusable for me, as I needed a steady flapping. The only alternative was foley recordings from sound libraries, the drawback being that they all had a very artificial quality to them. Another problem was determining the amount of low frequency that should be present in the flapping. From the perspective of a person, the air displacement wouldn’t be that great, so there wouldn’t be a lot of low frequency in the sound. However, from the perspective of the bird there would be much more low frequency. The problem I face is that the amount of low frequency in the wing flap is an auditory signifier of the size of the bird: how do you express the size of the bird accurately?
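One way I could test this would be to render the same loop with progressively more low end removed and compare the versions by ear. A possible sketch of that experiment follows; the cutoff frequencies are placeholder choices, not decisions.

# Render the flap loop several times with different amounts of low end
# removed, to audition how low-frequency content changes the perceived
# size of the bird. Filenames and cutoffs are placeholders.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("mechanical_bird_loop.wav")

for cutoff in (40, 120, 300, 600):   # Hz; a higher cutoff suggests a smaller bird
    sos = butter(2, cutoff, btype="highpass", fs=sr, output="sos")
    filtered = sosfilt(sos, audio, axis=0)
    sf.write(f"flap_loop_hp{cutoff}.wav", filtered, sr)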

MA COLLABORATION

Martin Stig Anderson Lecture

Martin Stig Anderson begins his lecture by reading a couple of reviews people had written about his work on the 2010 game ‘Limbo’. The reviews are completely polarised as to whether there is any music in the game at all. Music can be much more than rhythm, harmony and melody; here, ambient and environmental noises take centre stage.

Anderson lays the foundation of his practice with the definition of music as ‘organised sound’, going on to talk about Pierre Schaeffer’s musique concrète. There must, however, be a distinction between soundscape composition and musique concrète; although they are essentially the same practice, there is an important varying factor. In the practice of soundscape composition, ‘the original sounds must stay recognisable and the listener’s contextual and symbolic associations should be invoked’. – Barry Truax

Soundscape composition exists on a spectrum with musique concrète, running from contextual immersion through an abstracted middle ground to absolute abstraction. Musique concrète is the movement from the abstracted middle ground to absolute abstraction, and soundscape composition the movement from contextual immersion to the abstracted middle ground.

A technique Anderson uses to merge orchestral elements into the diegesis of the game is spectral interpolation; for Limbo he did this with an orchestral recording and a recording of a foundry. ‘It retains the nuances of the orchestral but it’s not really there at the same time.’ This was for one of many rotating rooms in the game. The second was a room of circular saws, for which he used a recording of a bowed cymbal, again interpolating the orchestral sound with this recording. The result is a diegetic sound with complete contextual immersion that also contains harmonic/melodic qualities. Anderson uses source filtering and convolution techniques to merge the two audio files.
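As a way of getting my head around the technique, the sketch below approximates one form of spectral interpolation: the STFT magnitudes of an orchestral recording and a foundry recording are blended and resynthesised. This is only an illustration of the principle, not Anderson’s actual method or tools; the filenames, blend amount and window size are placeholders.

# Blend the STFT magnitudes of two recordings and resynthesise.
# Not Anderson's toolchain, just an approximation of the idea.
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

orch, sr = sf.read("orchestra.wav")
foundry, sr2 = sf.read("foundry.wav")
assert sr == sr2, "both recordings are assumed to share a sample rate"

# Mono, trimmed to a common length so the two STFTs line up frame for frame.
to_mono = lambda x: x.mean(axis=1) if x.ndim > 1 else x
n = min(len(orch), len(foundry))
orch, foundry = to_mono(orch)[:n], to_mono(foundry)[:n]

nperseg = 2048
_, _, O = stft(orch, fs=sr, nperseg=nperseg)
_, _, F = stft(foundry, fs=sr, nperseg=nperseg)

mix = 0.5                             # 0 = pure orchestra, 1 = pure foundry
mag = (1 - mix) * np.abs(O) + mix * np.abs(F)
phase = np.angle(F)                   # keep the foundry's phase so it stays "in the room"
_, out = istft(mag * np.exp(1j * phase), fs=sr, nperseg=nperseg)

sf.write("rotating_room_hybrid.wav", out / np.max(np.abs(out)), sr)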

ELEMENT 2

Possible elements for manipulation

 Ability to manipulate vs keeping original quality of sound. 

Ability to manipulate as melody = conceivable range of frequency created by sound object. 

When analysing sounds for use as melody, we must first analyse their conceivable range of frequency. A pencil on paper has a range of conceivable frequency beyond which it is no longer believable to the audience.
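One way to probe that range in practice would be to pitch-shift a pencil-on-paper recording by increasing amounts and listen for the point where it stops reading as a pencil. A possible sketch of that test follows; the filename and step sizes are placeholders for whatever material is actually used.

# Pitch-shift a recording by a range of semitone amounts and render each
# version, to find by ear where believability breaks down.
import librosa
import soundfile as sf

pencil, sr = librosa.load("pencil_on_paper.wav", sr=None)

for steps in (-12, -7, -3, 3, 7, 12):    # semitones down and up
    shifted = librosa.effects.pitch_shift(pencil, sr=sr, n_steps=steps)
    sf.write(f"pencil_shift_{steps:+d}st.wav", shifted, sr)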

It might also be useful to understand the natural envelope of the sounds; if one were to put a slow attack on a slamming door, it would defeat the natural sound of the action. I think only through trial and error will I find the breaking point between manipulation and mimetic continuity. I don’t want to restrict myself to the world of mimetic sound, but I do think a certain level of continuity must be maintained in order to keep the audience immersed. Perhaps drastic manipulations that preserve immersion might be made possible through careful structuring of gradually increasing manipulation. Howard Ashman’s musical-theatre-inspired philosophy must be kept in mind: gradual, seamless transitions between differing levels of emotional intensity.
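The door-slam example could be tried directly: impose an artificially slow attack on a slam recording and hear how quickly the mimetic quality collapses. A rough sketch of that experiment, with a hypothetical filename and attack times, is below.

# Apply increasingly slow linear attacks to a door-slam recording and
# render each version, to hear where the envelope stops sounding like a slam.
import numpy as np
import soundfile as sf

slam, sr = sf.read("door_slam.wav")

for attack_s in (0.01, 0.1, 0.5, 1.0):
    fade_len = min(int(attack_s * sr), len(slam))
    env = np.ones(len(slam))
    env[:fade_len] = np.linspace(0.0, 1.0, fade_len)   # slow linear attack
    shaped = slam * env[:, None] if slam.ndim > 1 else slam * env
    sf.write(f"door_slam_attack_{attack_s}s.wav", shaped, sr)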

This issue is best defined, I believe, by Martin Stig Anderson in one of his lectures in which he displays a graph explaining the spectrum from soundscape composition to acousmatic music. 

Soundscape composition ———————————— Acousmatic music

Contextual immersion —— Abstracted middle ground —— Complete abstraction

Close audiovisual correspondence —— Remote audiovisual correspondence

VISITING PRACTITIONERS

Makoto Oshiro

Makoto Oshiro is a Tokyo-based audiovisual and experimental music artist who builds his own electronic instruments and creates installation pieces and performances.

Self made instruments

Makoto begins his lecture with a recording from one of his self-made instruments: a high-pitched, high-tempo glitching note that sounded like an artificial synthesis of the sound of extending a roll of Sellotape. The second part of the recording was slower in tempo and slightly lower in pitch, but the slower tempo of the beats revealed a spatialisation of the sound; the instrument was moving and populating the entire stereo field. The third movement of the recording was denser in texture, with multiple organic-sounding rhythms colliding to create fleeting, evolving patterns and beats. This was created using a self-made instrument that utilises electromagnetic relays. These are usually used to let a low-voltage circuit switch a higher-voltage one; for example, the headlights in a car are switched through such relays. The clicking sound is caused by a small metal plate being pulled into the mechanism by electromagnetism. The frequency of clicks is controlled by a 555 timer IC, a very common chip in electronics. A volume knob controls the frequency of the on-and-off saw wave; this brings a human element to the instruments, the ability to feel the frequency as a means to compose. Makoto creates what he calls ‘acoustic oscillators’. Although Makoto’s focus is the creation of music through electronic instruments, he tries to make the sound generators themselves acoustic. The environment is always a component of the composition.

Makoto is updating these instruments by using an Arduino alternative to drive the relays, and he has also added an eight-step sequencer for more creative expression. In the video he presents, you can see him change the pitch of the oscillators by placing rocks of varying size. I think this is a very interesting means of creating sound works: these instruments are not the only material to compose with; they also turn every other object into a modulator, involving the real world in something that is usually confined to the world of electronics.
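To make sense of the control logic for myself, the sketch below is a rough software analogue of the idea: a click train whose rate is stepped by an eight-step sequencer. It is not Oshiro’s hardware (the relays, the 555 timer and the room are what make the real instruments interesting); the step pattern and durations are made-up values.

# Generate a click train whose rate changes on each step of a hypothetical
# eight-step sequence, one step per half second, and write it to a file.
import numpy as np
import soundfile as sf

sr = 44100
step_dur = 0.5                              # seconds per sequencer step
steps_hz = [4, 8, 16, 8, 32, 16, 8, 4]      # hypothetical 8-step click-rate pattern

out = []
for rate in steps_hz:
    step = np.zeros(int(step_dur * sr))
    period = int(sr / rate)
    step[::period] = 1.0                    # one click per period
    out.append(step)

sf.write("relay_sequence_sketch.wav", np.concatenate(out), sr)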

Installations

Oshiro presents a video of his 2017 audiovisual installation ‘mono-poly’. Makoto was interested in audiovisual works but did not want to arbitrarily make audio that matched the visual; he wanted the two to be intrinsically linked. He started to investigate how visual signals work, and how to use a speaker as an object rather than a sound projector. An earlier piece, ‘Kinema’ (2009), was created by combining two sine waves with a difference of 1 Hz, producing what is called ‘beating’. These waves were fed through two speakers, one with a 2 Hz beat and one with a 1 Hz beat, resulting in a visual beating and a slightly audible beating. Makoto then created a piece called ‘Braids & knots’ (2010), in which he tried to attach objects to the speakers to provide a more drastic visual element to the work. He found success in attaching a string to the centre of a subwoofer and stretching it across a room; a representation of the wave would move through the string. He attached LEDs to this string but did not manage to achieve the level of visualisation he desired.
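The beating he describes is easy to reproduce: two sine waves 1 Hz apart sum to a tone whose loudness swells once per second (sin a + sin b = 2 sin((a+b)/2) cos((a−b)/2)). The sketch below generates such a pair; the frequencies are arbitrary choices, not the ones used in ‘Kinema’.

# Two sines 1 Hz apart produce a 1 Hz amplitude beat.
import numpy as np
import soundfile as sf

sr = 44100
t = np.arange(10 * sr) / sr                       # ten seconds
tone = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 101 * t)
tone *= 0.5 / np.max(np.abs(tone))                # leave some headroom

sf.write("one_hz_beat.wav", tone, sr)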

He developed this idea further with ‘Strings’ (2014), in which Makoto achieved a much better visualisation of the beats. Strings were attached to a subwoofer in the corner of a room and extended to the opposite corner. A strobe light placed on the floor was modulated by an Arduino to illuminate the strings at different frequencies; the result was an ethereal representation of the audio.

‘mono-poly’ was the final development of this series. Makoto found a way to make the LED strobe produce multiple frequencies at once, which allowed for more dynamic change in the visualisation of the string.

I find Makoto Oshiro’s work fascinating, especially the visualisation of sound. His use of electronics is alien to me but still very interesting as a means of creation. I admire the simplicity of his acoustic oscillators and, above all, his ability to involve physical space in his works.

ELEMENT 2

‘Analysing Examples’

‘La Grande Bellezza’

Diegetic anchoring of underscore:

The Great Beauty – The film opens with the sun shining on multiple beautiful landmarks in Rome; tourists take their pictures, locals smoke their cigarettes, all of it underscored by a ringing church bell. The bells are soon joined by an all-female choir chanting harmonies that sound somewhere between apathetic and melancholic. As the opening continues with shots of tourists admiring the beauty of Rome, a solo singer takes the lead with a melancholic, passionate melody. This stark contrast between the seemingly eternal beauty of Rome and the melancholic singing is the crux of the film: a surrounding of decadence and beauty while feeling unfulfilled, unable to reach ‘the great beauty’ of the human condition. With this musical contrast, Paolo Sorrentino sets up the concept of the film, and by making the source of the music a diegetic element, the representation of melancholic humanity (the choir) becomes part of the city itself; the solo vocalist stares out at the beauty as if in dialogue with it.

Delicatessen (1991)

Underscore as a narrative drive:

Delicatessen – Delicatessen is a post-apocalyptic black comedy about a butcher who has gained power and influence by selling human meat in a world where protein is scarce. The film explores the dynamic between the butcher, his assistant (who is next in line for the slaughter), the butcher’s daughter and the people living in his building. There is a scene in which directors Jean-Pierre Jeunet and Marc Caro use sound as a narrative device to demonstrate the dynamics of the building.

The scene opens as the butcher is getting intimate with a woman on a very noisy metal spring bed; the resulting creaking echoes through the pipes of the building, permeating the residents’ rooms. First a cellist plays her scales to the tempo of the creaks, then a man painting his ceiling matches the tempo, followed by a person inflating a bike tyre, a woman beating a rug and lastly a man working a punching machine. As the creaks rise in tempo, the actions of the residents become comically fast. Sound acts as a narrative tool to drive the story, both creating a comical situation and demonstrating the power dynamics of the household.