While at Berklee, I took a Game Audio class with Michael Sweet. Earlier this year, he asked me to share an experience from my career for his recently published book, ‘Writing Interactive Music for Video Games’.
We spent months prototyping a music system for a series of rain levels in “The Floor is Jelly”. The system played an individual note for each drop as it hit a surface. These drops generated harmonies that changed as the player moved. We even made an in-game editor. Unfortunately, playing a sample for each drop was too CPU intensive. Our system trashed the frame rate. The issue crept up on us because we had prototyped the idea with only sparse rain. When we found we couldn’t scale the notes up along with the droplets, we had to scrap the whole system.
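The per-droplet idea can be sketched roughly like this (a minimal, hypothetical Python model; the names, scale, and numbers are illustrative assumptions, not the game’s actual code). It also shows why the voice count, and with it the CPU cost, grows linearly with rain density:

```python
# Hypothetical sketch of the note-per-droplet system: each landing drop
# triggers a sample voice, pitched by a harmony that shifts with the
# player's position. All names here are illustrative assumptions.

C_MAJOR_PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within an octave

def pick_pitch(player_x, world_width, scale=C_MAJOR_PENTATONIC):
    """Map the player's horizontal position to a scale degree,
    so the harmonies change as the player moves."""
    degree = int(player_x / world_width * len(scale)) % len(scale)
    return 60 + scale[degree]  # MIDI note number around middle C

def voices_needed(drops_per_second, sample_length_s):
    """Each drop holds a sample voice for the sample's duration, so the
    number of simultaneous voices grows linearly with rain density."""
    return drops_per_second * sample_length_s

sparse = voices_needed(10, 2.0)   # sparse prototype rain: 20 voices, fine
dense = voices_needed(500, 2.0)   # a full rain level: 1000 voices at once
```

With sparse prototype rain the voice count stays trivial; at full density it explodes, which is the problem that only surfaced once we turned the rain up.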
Instead, we created short loops of rain-like music that change as you progress through the world. The rain falls densely enough that synchronizing with each droplet was unnecessary.
For ‘Cannon Brawl’, we prototyped another system. The music comprised four bars: two for the blue team on the left side, panned to the left, and two for the red team, panned to the right. The concept seemed sound: two marching bands in a never-ending call and response, with the intensity of each team’s band fluctuating with the game state. While this idea seemed great on paper, “trading twos” turned out to be annoying.
The final solution involved a longer piece of looping music, made up of many stems. Certain instruments represent each team and are often panned to its respective side. When either team gains a level, the appropriate stem gets added to the mix. The concepts of our first prototype informed the simpler solution we reached.
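The stem logic can be sketched like this (a minimal, hypothetical Python model; the stem names, pan values, and level thresholds are made up for illustration):

```python
# Hypothetical sketch of the stem-based mix: one long loop split into
# stems, with team-flavored stems joining the mix as that team levels up.
# The stem list and thresholds are illustrative assumptions.

STEMS = {
    "base":       {"pan":  0.0, "min_level": 0},  # always in the mix
    "blue_brass": {"pan": -0.8, "min_level": 1},  # panned left
    "blue_drums": {"pan": -0.8, "min_level": 2},
    "red_brass":  {"pan":  0.8, "min_level": 1},  # panned right
    "red_drums":  {"pan":  0.8, "min_level": 2},
}

def active_stems(blue_level, red_level):
    """Return the stems that should be audible for the current game state."""
    audible = []
    for name, info in STEMS.items():
        team_level = (blue_level if name.startswith("blue")
                      else red_level if name.startswith("red")
                      else 0)
        if team_level >= info["min_level"]:
            audible.append(name)
    return audible
```

Because every stem loops in sync with the same long piece, adding one to the mix never interrupts the music; the game state only decides which layers are audible.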
In both cases, our 'what if’ prototypes ran into roadblocks that usefully shaped the final solution.