MA COLLABORATION

Integrating sounds

Now that we have received a video of the game we are able to start integrating the sounds, and this has turned out to be an interesting process. Due to the circumstances of our collaboration, everyone in the group has been creating the different elements of the game separately, working from the diagrams and notes provided by the MA group. Although not the best method of collaboration, this proved useful given the depth of sound required for the game: we were able to split the workload and cover a wider spectrum of the game. The drawback to this compromise was a lack of coherency. There is a general coherency in sound, as I believe there is a standardised ‘mini-game sound’ quality, but I feel that given more time there could have been more. Three tracks are used in the game, provided by Pedro, Max and myself. Pedro’s track reflects the style of the mini-games: light-hearted, synthesiser-heavy, playful music. Max contributed a track that provides an appropriate musical backing to the dreamscape of the second level, with synthesised game-like elements. I feel as though my contribution is the only track that doesn’t have a game aesthetic; I went for a merger between industry and nostalgia, an accurate reflection of the theme of the game, but an aesthetic anomaly. This difficulty was a result of our disjointed collaboration; to remedy it I would have composed with instruments that had a game-type timbre and integrated that into the convolution.

Working from the video posed another level of difficulty, as we had recorded and organised our foley as if it would be input into FMOD or Unity, by which I mean as single steps and actions. This meant that when I came to add these sounds to the video I had to import and arrange individual steps and clicks, a tedious and time-consuming process. If I were to undergo this process again from the start, with the understanding I have now having seen the full game, I would try to merge game sound with office sound to reinforce the narrative of the game. This would mean either gradually integrating game sounds into the foley of the office, or integrating office sounds into the dreamscape, to connect these two worlds.

We have received the game project file. While I work on this video to give an overall idea of our sound collaboration, I am going to try to get someone else to integrate some of our sounds into part of the game using Unity or FMOD. It would be good to integrate our sounds in this way, as I believe they would work together better in an immersive and interactive context.

MA COLLABORATION

Merging audio using convolution reverb with custom impulse response

Using Logic’s Space Designer I was able to load a custom impulse response as a means of merging audio files, a technique mentioned by Martin Stig Andersen in his lecture.

I wanted to integrate music into the game in an immersive way, so as to emotionally influence the player while maintaining suspension of disbelief. By entwining the melody with something in the game I was able to diegetically anchor the music. The audio I chose to use for this was the sound of a foundry:

Foundry Audio

I chose this because I believed it shared a contextual and timbral quality with the kaleidoscopic room from the third level, this level being a good summary of the theme and feel of the game: disorientating, curious and otherworldly.

The first experiment I conducted was applying a convolution reverb to the music, with the sound of the foundry as its impulse response. The result was a strange distortion of the music into a kind of soundscape. It lost its form and became a fluid representation of the composition that was influenced by the foundry. I would ideally like this influence to be more prominent, but I couldn’t find any way to make that the case, even after a lot of tweaking.

Music through room convolution
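
Outside Logic, the process above can be sketched offline as a rough illustration of what a convolution reverb with a custom impulse response is doing. This is not what Space Designer does internally; it simply assumes two mono WAV files at the same sample rate (the file names music.wav and foundry_ir.wav are placeholders), convolves the music with the foundry recording acting as the impulse response, and mixes the wet result back with the dry signal:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Placeholder file names: the music stem and the foundry recording used as the IR.
music, sr = sf.read("music.wav")        # dry source
ir, sr_ir = sf.read("foundry_ir.wav")   # foundry recording acting as the impulse response
assert sr == sr_ir, "both files should share a sample rate"

# Collapse to mono for simplicity.
if music.ndim > 1:
    music = music.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

# Normalise the IR so its overall energy doesn't dictate the output level.
ir = ir / (np.sqrt(np.sum(ir ** 2)) + 1e-12)

# The convolution itself: every sample of the music excites the whole foundry recording.
wet = fftconvolve(music, ir)

# Wet/dry mix, roughly the equivalent of the plugin's wet and dry faders.
dry = np.pad(music, (0, len(wet) - len(music)))
mix = 0.6 * wet / (np.max(np.abs(wet)) + 1e-12) + 0.4 * dry

sf.write("music_through_foundry.wav", mix / (np.max(np.abs(mix)) + 1e-12), sr)
```

In this offline version the degree of abstraction is mostly governed by the wet/dry balance and by how long and dense the impulse response is; I assume the corresponding controls in Space Designer are the wet/dry faders and the impulse response length and envelope settings.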

I wanted to try the process the opposite way to see if the effect was more successful. I found it hard to understand what each element of modulation in the convolution reverb did, but through general experimentation I managed to gain a basic understanding. The most important thing I didn’t manage to work out was how to control the degree of abstraction away from the original form of the audio. This experiment had a lot less form than the previous one: the music lost its melodic variation and became one tone, almost like a summary of the composition layered on the foundry.

Foundry through Music Convolution
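
It is worth noting that raw convolution is commutative, so swapping which file acts as the impulse response does not change the pure wet signal; the differences heard between the two experiments presumably come from how the plugin conditions the impulse response (length, envelopes, normalisation) and from which dry signal is mixed back in. A quick check, using noise as stand-ins for the two recordings:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
music = rng.standard_normal(44100)    # stand-in for one second of the music stem
foundry = rng.standard_normal(44100)  # stand-in for one second of the foundry recording

# Convolution is commutative: 'music through foundry' and 'foundry through music'
# produce the same raw wet signal.
print(np.allclose(fftconvolve(music, foundry), fftconvolve(foundry, music)))  # True
```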

I found a compromise by using the ‘music through room convolution’ and layering the original foundry audio with a convolution reverb that had the same impulse response (although the wet and dry levels were adjusted to be less drastic). The idea of this was to have an ‘accurate’ reverb under the original recording that could blend with the musical reverb, providing an extra tether to reality.

Music through room convolution with layered foundry
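
Continuing the offline sketch from earlier (same placeholder file names, mono files assumed), the compromise can be thought of as two buses summed together: the music heavily coloured by the foundry impulse response, and the original foundry recording passed through the same impulse response with a much gentler wet/dry balance:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def convolve_mix(source, ir, wet=0.5):
    """Convolve source with ir and return a wet/dry mix (wet bus peak-normalised)."""
    wet_sig = fftconvolve(source, ir)
    wet_sig = wet_sig / (np.max(np.abs(wet_sig)) + 1e-12)
    dry_sig = np.pad(source, (0, len(wet_sig) - len(source)))
    return wet * wet_sig + (1.0 - wet) * dry_sig

# Placeholder file names, assumed to be mono and at the same sample rate.
music, sr = sf.read("music.wav")
foundry, _ = sf.read("foundry_ir.wav")
ir = foundry / (np.sqrt(np.sum(foundry ** 2)) + 1e-12)

# Bus 1: the music heavily coloured by the foundry (the 'music through room convolution').
music_bus = convolve_mix(music, ir, wet=0.8)

# Bus 2: the original foundry recording with a light touch of the same convolution,
# the 'accurate' reverb that tethers the result back to reality.
foundry_bus = convolve_mix(foundry, ir, wet=0.2)

# Sum the two buses, padding to a common length, and normalise the result.
n = max(len(music_bus), len(foundry_bus))
mix = (np.pad(music_bus, (0, n - len(music_bus)))
       + np.pad(foundry_bus, (0, n - len(foundry_bus))))
sf.write("layered_soundscape.wav", mix / (np.max(np.abs(mix)) + 1e-12), sr)
```

The exact wet levels here are only illustrative; the point is that the foundry bus keeps most of its dry signal, which is what provides the tether to reality described above.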

This experiment was a success: it is a diegetically anchored soundscape composition. For this method to be more successful (by which I mean retaining more of the diegetic form), I would have to understand the convolution merging technique better and possibly compose the music with the exact same timbre and similar frequencies as the diegetic sound. The composition reflects the game’s themes of nostalgia and self-reflection without taking the player out of the experience; it fits the disorientating world of the game.