I spoke in San Francisco about the trajectory of chipmusic and some of the techniques that make it a unique sandbox.
Transcript
Chipmusic.
Past, Present, Future
Introduction
This talk turned out to be something completely different than what I had originally intended. I hope it at the very least gives you something to think about.
Chipwhat? Chipmunk?
So what is chipmusic, exactly? Some people call it 8-bit music, or chiptunes, or retro video game music; all of those names bring to mind essentially the same thing: music born from the technical limitations of early videogame hardware. Any subjective differences in belief about what constitutes chipmusic are not the point of this talk, so I won’t say much more about that.
Why so popular?
The relatively slow rate at which sound technology in games improved through the 80s, and even the early 90s, is largely responsible for the well-established aesthetic of music we continue to hear in so many games, even today.
While the Super Nintendo brought newer technologies like sampled sound to the forefront in game music, other limitations such as memory availability and voice limits still persisted. And while the introduction of disc consoles like the Sega CD brought with it CD Audio, music limited to a handful of voices has continued on in handhelds well into the 21st century as a matter of necessity.
As a result of being limited in some very consistent ways for many years, methods for creating this particular style of music have been established and passed on, ensuring that the essence of chipmusic can live on, even when the limitations have been lifted.
Self-Imposed Limitations
As we’ve moved past the necessity to create chipmusic, we’ve moved into the realm of self-imposed limitation as a creative possibility. I’m not trying to convince everybody to go home and start making bleeps and bloops, but as general artistic advice, working within a self-inflicted framework has profound consequences.
Knowing the rules of the game means less time spent figuring out those rules, and more time creating. It also means working more closely with fewer parameters, which means a new side of your creativity can be given the chance to shine through.
The Rabbit Hole
For even the most seasoned composer, the nuances of making chipmusic are easy to miss. It took me years to even realize that it wasn’t all played back at one constant volume level.
I’ve heard some contemporary chipmusic soundtracks done by accomplished film composers that are compositionally very good, yet manage to be aesthetically flatter than chipmusic from the 80s. I think this stems from a lack of understanding of how to make the most of the limitations. The general aesthetic is incredibly easy to replicate, but there’s more to it than that. The limited capabilities of sound hardware actually allowed composers to spend less time on recording, mixing and instrumentation, and more time on the second-to-second details of composition and production. Because the modern music environment of software like Logic Pro or Ableton is so far removed from what chipmusic was originally made with, it can be easy to gloss over some of the most crucial elements of the sound.
But it’s not like the game composers of the 80s had it all figured out, either. It took time for the process of working within these limitations to refine itself, which depended not only on the ability of the composer, but also on their access to composing tools. It was commonplace for music to be written outside of the system, and then transcribed into game data by a programmer, who may or may not have had any musical abilities of their own.
1983 > 1990 > 2010
I’m going to play three clips, one from the beginning of the NES console cycle in 1983, one from 1990 during its prime, and one from 2010, written without worrying about sharing memory with code and graphics. All three songs are made using the same hardware limitations. This set of limitations has remained unchanged for 30 years, but people have naturally gotten better at exploiting the rules.
It’s possible to create interesting, multi-dimensional work using limitations that may seem draconian at first. When you find that you can do a lot with very little, then imagine what you might be able to do when you have all of your fully-fledged tools at your disposal.
Circumstantial Techniques
Limitations often force us to think outside the box, to solve problems in ways we normally wouldn’t. In this way, limitations can help us to focus. This is easy to forget when you’re loading up your DAW, with the 96-track template you’ve already made, and the hundreds of sample libraries and plug-ins you’ve bought. I know I’ve been guilty of this from time to time.
Some previously uncommon or unusual music techniques have come about as a way of trying to do more with less.
C64 Arpeggios
Probably the most classic example of this is the C64 arpeggio. The Commodore 64 had some features that made its sound more sophisticated than the NES’s, things like filters and full pulse width modulation. However, the C64 sound chip, or the SID chip as it’s known, had only 3 channels to the NES’ five.
If you want to have bass, drums, melody, and backing chords fit into 3 channels of sound, you need to get pretty creative.
As a way to fake the sound of playing chords, some clever folks started adding very quick arpeggios to their songs that would repeatedly play a series of chord tones. The technique is so effective that it is easy to be duped into hearing the arpeggios as polyphonic chords.
I’m sure many of you know what this sounds like but for the sake of thoroughness, here is a clip.
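To make the idea concrete, here is a minimal sketch of how a C64-style arpeggio flattens a chord into one voice. The tick rate, note length, and function names are illustrative assumptions, not taken from any real SID player routine; the key point is that one channel hops between chord tones at the machine's frame rate (roughly 50 Hz on a PAL C64), fast enough that the ear blurs them into a chord.

```python
FRAME_RATE = 50      # ticks per second (approximate PAL vblank rate)
NOTE_LENGTH = 0.5    # seconds each "chord" is held (illustrative)

def arpeggiate(chord_midi_notes, seconds=NOTE_LENGTH, frame_rate=FRAME_RATE):
    """Flatten a chord into the per-frame note sequence one channel plays."""
    ticks = int(seconds * frame_rate)
    return [chord_midi_notes[t % len(chord_midi_notes)] for t in range(ticks)]

def midi_to_freq(note):
    """Equal-tempered frequency for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# A C major triad (C4, E4, G4) squeezed into a single voice:
frames = arpeggiate([60, 64, 67])
# The channel plays 60, 64, 67, 60, 64, 67, ... 50 times per second.
```

Played back this fast, the cycling sequence reads to the ear as one sustained chord rather than three separate notes.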
Filling in the Cracks
When you only have a handful of voices, every moment of silence in the music is an opportunity to flesh out the production of your sound, or introduce a new supporting element.
As a way to mimic delay and reverb, many composers used the available spaces in between the notes to generate facsimiles of those effects, suggesting the sound of a delay or reverb unit without ever having one.
It’s not unheard of to use a single audio channel to convey more than one instrument part at the same time. And to prove it, here’s an example. I’ve conveniently chosen one of my own songs to demo for this. I’m going to play a 30 second piece of NES music I wrote for (Bit.Trip) Runner2, and then I’ll solo a single channel, so you can hear all of the different parts that are happening.
Duty Cycle Modulation
Modulation of the pulse width had already been a prominent feature of synthesizers, but on the NES, you were limited to only 4 widths. This limitation, along with the lack of access to many other timbres, made for some fairly interesting cases where composers would cycle through these widths, or use them in tandem to create a certain sound. This video does a pretty good job of explaining it; check it out:
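Independent of the video, the four settings themselves are easy to sketch. The NES pulse channels offered duty cycles of 12.5%, 25%, 50%, and 75% (the last being a 25% pulse inverted, so it sounds identical). The snippet below builds one period of each pulse shape and cycles the duty per frame, the basic move behind duty cycle modulation; the sample counts and frame handling here are illustrative assumptions, not the actual hardware's timing.

```python
DUTIES = [0.125, 0.25, 0.50, 0.75]   # the NES's only four pulse widths

def pulse_period(duty, samples_per_period=16):
    """One period of a pulse wave as +1/-1 samples at the given duty."""
    high = int(duty * samples_per_period)
    return [1.0] * high + [-1.0] * (samples_per_period - high)

def duty_sweep(frames):
    """Per-frame duty index, cycling through the four settings in order."""
    return [f % len(DUTIES) for f in range(frames)]

wave = pulse_period(0.25)   # 4 samples high, 12 samples low
sweep = duty_sweep(8)       # which duty setting each of 8 frames uses
```

Changing the duty mid-note shifts the harmonic content without touching pitch or volume, which is why a duty sweep can make a single square-wave voice sound like it is morphing between instruments.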
Challenge Yourself to Limit Yourself.
These are all cool little tricks and techniques that people have picked up over time, but I hope the takeaway is this: the things you uncover while working within limitations often have widespread application, and at the very least, the lessons learned along the way do. So why not give yourself a chance to put a different aspect of your creativity under the microscope, and learn something new in the process?