I gave a talk at GDC 2018 as part of an AI Summit session called 'Beyond Procedural Horizons'. In it, I talk about how we combined data sonification with concepts taken from the musical approach known as Serialism to build the game's soundscape.
Transcript
Serialism & Sonification in Mini Metro
Or: how we avoided using looping music tracks entirely by using sequential sets of data to generate music and sound.
Serialism
In music, there is a technique called Serialism that uses sequential sets of data (known as series), applied to different axes of sound (pitch, volume, rhythm, note duration, etc.), working together to create music.
In Mini Metro, we apply this concept by using internal data from the game and externally authored data in tandem to generate the music.
You might have noticed that the game has a clock: the game is broken up into time increments represented as hours, days, and weeks (though of course faster than real time). Before diving in, it’s important to know that we derive our music tempo from the duration of...
1 In-Game Hour = 0.8 secs = 1 beat @ 75 bpm = our master pulse.
We use this as our standard unit of measurement for when to trigger sounds. In other words, most of the sounds in the game are triggered periodically, using durations that are fractions of 0.8 seconds.
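To make the timing concrete, here is a minimal sketch of that master pulse, assuming a simple quantized scheduler. The constant names and the next_trigger_time helper are illustrative, not the game's actual code.

```python
# A minimal sketch of the master pulse, assuming a quantized scheduler.
IN_GAME_HOUR_SECS = 0.8                  # 1 in-game hour = 0.8 real seconds
MASTER_BPM = 60.0 / IN_GAME_HOUR_SECS    # = 75 beats per minute

def next_trigger_time(now: float, fraction: float) -> float:
    """Quantize a sound trigger to the next fraction of the master pulse.

    `fraction` is a fraction of the 0.8 s pulse: 0.5 gives an
    eighth-note grid, 0.25 a sixteenth-note grid, and so on.
    """
    interval = IN_GAME_HOUR_SECS * fraction
    # Snap forward to the next grid point so every sound stays locked
    # to the shared game clock.
    return (now // interval + 1.0) * interval
```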
In Mini Metro, the primary mode of authorship lies in drawing and modifying metro lines. They’re also the means by which everything is connected. They serve as the foundation upon which the soundscape of pitches and rhythms is designed. Each line is represented by a unique stream of music generated using data from different sources.
The simplest way to describe this system is that each metro line is represented by a musical sequence of pulses, triggered at a constant rate and a constant pitch. That rate and pitch hold steady until they are shifted by a change in gameplay. Each metro station represents one pulse in the sequence. Each pulse has some unique properties, such as volume, timbre, and panning, and these are calculated using game data. Still other properties are inherited from lower levels of abstraction, namely unique loadouts for each game level. Some levels tell the pulses to fade in gradually, others might tell the pulses to trigger using a swing groove instead of a constant rate, and the levels are differentiated in other ways as well.
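As a rough model of that structure (assumed names throughout, not Mini Metro's actual implementation), a line can be sketched as a pulse sequence whose per-pulse properties come from game data and whose higher-level behavior comes from a per-level loadout:

```python
# A sketch of one metro line as a repeating pulse sequence.
from dataclasses import dataclass, field

@dataclass
class Pulse:
    volume: float    # e.g. scaled by how busy the station is
    timbre: int      # e.g. chosen from the station's shape
    pan: float       # e.g. derived from the station's on-screen position

@dataclass
class LevelLoadout:
    fade_in: bool = False   # some levels fade pulses in gradually
    swing: float = 0.0      # 0 = straight grid, > 0 = swung groove

@dataclass
class MetroLine:
    pitch: float            # constant until gameplay shifts it
    rate: float             # seconds between pulses, also constant
    loadout: LevelLoadout
    pulses: list = field(default_factory=list)  # one Pulse per station

    def add_station(self, busyness: float, shape: int, x_norm: float):
        # Game data flows into the pulse's unique properties.
        self.pulses.append(Pulse(volume=busyness, timbre=shape,
                                 pan=x_norm * 2.0 - 1.0))
```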
[1, 2, 3, 4, 6]
All of this musical generation is done using sets of data. And referring back to Serialism, the data is quite often sequential, in series. These numbers actually represent multiples of time fragments. Or, to put it more simply, rhythms. In the case of rhythms and pitches, the data is authored, so we have more control over what kind of mood we’d like to evoke. This data is cycled through in different ways during gameplay to generate music.
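As a small sketch of that cycling, assuming the numbers above are multiples of a base time fragment (the sixteenth-of-a-pulse fragment size here is a hypothetical choice):

```python
# Cycling an authored series to produce a stream of rhythms.
from itertools import cycle

rhythm_series = [1, 2, 3, 4, 6]   # the authored series above
base_fragment = 0.8 / 4           # hypothetically, a sixteenth of the pulse

def rhythm_stream():
    """Yield successive note durations by cycling the authored series."""
    for multiple in cycle(rhythm_series):
        yield multiple * base_fragment

# One pass through the cycle yields 0.2, 0.4, 0.6, 0.8, 1.2 seconds, then
# repeats; gameplay can cycle the same series in other orders to vary it.
```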
So we’ve got some authored data generating musical sequences, but what about using game data? Ideally we could sonify it to give the player some useful feedback about what is happening.
Combinations of game and authored data are found throughout Mini Metro’s audio system. Lots of game data and authored data are fed into the system, working in tandem. Authored data is often used to steer things in a musical direction, while game data is used to more closely marry things to gameplay. In some cases, authored data even made its way into other areas of the game. Certain game behaviors, like passenger spawning, were retrofitted to fire using rhythm assignments specified by the sound system.
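A hedged sketch of that retrofit might look like the following, where the sound system hands a gameplay event an authored rhythm pattern and game data decides whether the event actually fires on an allowed step. RhythmAssignment and maybe_spawn_passenger are invented names for illustration:

```python
# Authored rhythm gating a gameplay event.
import random

class RhythmAssignment:
    """An authored pattern of allowed steps on the musical grid."""
    def __init__(self, pattern: list):
        self.pattern = pattern    # e.g. [True, False, False, True]

    def allows(self, step: int) -> bool:
        return self.pattern[step % len(self.pattern)]

def maybe_spawn_passenger(step: int, demand: float,
                          assignment: RhythmAssignment) -> bool:
    # Game data (demand) decides *whether* a passenger appears;
    # the authored rhythm decides *when* it is allowed to land.
    return assignment.allows(step) and random.random() < demand
```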
You might ask why we went through the trouble of doing things this way. Well, it is really fun. But beyond that, there are a variety of answers, and I could go into a lot of depth about it. I think the most important reasons are:
Immediacy
Immediate feedback is often reserved for sound effects and not music, and immediate feedback in music can feel forced if not handled correctly. This type of system allows us to bring this idea of immediacy into the music in a way that feels natural.
The granularity of the system allows the soundscape to respond to the game state instantaneously and evolve with the metro system as it grows, and a holistic system handles all of the gradation for you. When your metro is smaller and less busy, the sound is smaller and less busy. As your metro grows and gets more complex, so does the music and sound to reflect that. When you accelerate the simulation, the music and sound of your metro accelerates. When something new is introduced into your system, you’re not only notified up front, but also regularly over time as its sonic representation becomes a part of the ambient tapestry.
Embodiment
And this all (hopefully) ties into a sense of embodiment. Because all of these game objects have sounds that trigger in a musical way, share a rhythmic language that is cognizant of the game clock, and use game data to tie them further to what is actually happening in the game, things start to feel communal and unified.
It’s an ideal more than a guarantee, but if executed well, I think you can start to approach something akin to a holistic experience for the player.