Square Enix released some battle footage recently from their upcoming game Final Fantasy XV.
A friend and I were talking about how far we’ve come from the early days of gaming in terms of visual and audio quality. When I say “the early days”, I’m talking about the 8-bit days, when Nintendo, Sega, and Atari were battling it out for your living room/bedroom television. It almost seems like the dark ages now, doesn’t it? But many memorable tunes came out of those 8-bit games: Castlevania’s easily recognizable Vampire Killer, composed by Satoe Terashima (and Kinuyo Yamashita, though she worked on other songs in the game), Batman’s Streets of Desolation, composed by Naoki Kodaka, and, of course, the Super Mario Bros. Overworld Theme, composed by Koji Kondo. All three of those songs have been remixed and/or covered time and time again since their creation over 25 years ago, and for good reason; they’re catchy, well-crafted songs (hear that, mom? Video games aren’t just noise!). But what processes did composers have to go through to create these songs on that hardware? For the purposes of this article, I will focus primarily on the Nintendo Entertainment System, or NES for short. So let’s take a look…
8-bit era
Comparatively speaking, the hardware in the consoles of this time period was very simple and didn’t offer much in terms of processing power, RAM, or fancy sound chips. If you wanted sound in your games, you had to program it into the game. Yes, that’s correct: program it yourself, like a software developer, except focused only on the music, because the console makers provided no tools to help you. There were various ways to do this depending on what resources your studio had available. For the Nintendo Entertainment System (or Famicom in Japan), most (read: pretty much all) of the Japanese composers used Family BASIC to program the music sequences they wanted. Family BASIC is the consumer programming tool Nintendo made for people to write their own games for the Famicom using a dialect of the BASIC programming language. However, it was only available in Japan and did not work with the American or PAL versions of the NES. Even if you wanted to work with it, the instructions were in Japanese. Have fun with that!
So what was a non-Japanese developer to do? This is where people got creative. If you were a programmer, you would generally use a 6502 assembler to write the music routines and data you needed. This was the preferred way, since the popular systems of the era, such as the Famicom/NES, the Apple IIe, and the Atari 2600, all used a 6502 microprocessor or a derivative of it, which meant code worked very similarly across all of them. If you were a technical kind of person, you’d write your own tracker and sequence the music that way. Alberto Gonzalez, who composed music for The Smurfs and Asterix, did just that:
Compact Editor was a simple music sequencer, based on tracks, blocks, and instrument definitions, inspired by some Amiga computer trackers like NoiseTracker. A complementary PC program named ‘The Sourcer’ was used to transform the binary data created with Compact Editor into raw source code, as a text language that I could understand (basic notes, lengths, etc). This way I could then edit the songs into its fullest detail.
Compact Editor is the name he gave to the tracker he built himself to sequence music. David Wise, best known for the music to Donkey Kong Country but who also composed and did sound design for Battletoads, Marble Madness, and Wizards & Warriors, had this to say about how he programmed music:
Video games were still in their infancy, and learning that the sound chip on the NES – the Nintendo Entertainment System – was somewhat compromised, compared to a Roland D-50, certainly made things challenging. But I like a challenge! … There was no MIDI, instead, notes were entered data style into a PC. I typed in hex numbers for pitch and length and a few commands for looping subroutines. And this method of writing video game music continued right through to the end of the SNES development.
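To picture what “hex numbers for pitch and length” might look like in practice, here is a toy decoder in Python. The byte format here (pitch/length pairs, a 0xFF loop command, a 0x00 terminator) is invented purely for illustration and is not Wise’s actual format:

```python
# Toy decoder for a hex-entered music track, loosely in the spirit of the
# workflow David Wise describes. The format is hypothetical: pairs of
# (pitch, length) bytes, with 0xFF <count> meaning "play the phrase so far
# <count> times total" and 0x00 marking the end of the track.

def decode_track(data):
    """Expand a raw byte list into (pitch, length) note events."""
    notes = []
    i = 0
    while i < len(data):
        b = data[i]
        if b == 0x00:                  # end-of-track marker
            break
        if b == 0xFF:                  # loop command
            count = data[i + 1]
            notes = notes * count
            i += 2
            continue
        pitch, length = data[i], data[i + 1]
        notes.append((pitch, length))
        i += 2
    return notes

# A four-note phrase played twice: raw hex pitches, lengths in frames.
track = [0x40, 0x08, 0x44, 0x08, 0x47, 0x10, 0x4C, 0x10, 0xFF, 0x02, 0x00]
events = decode_track(track)
print(len(events))   # 8 events: the 4-note phrase repeated twice
```

Typing streams of bytes like `track` by hand, with no playback until the data was assembled and run, is exactly why composing this way was so laborious.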
The stock sound channels on the NES weren’t very plentiful by today’s standards. You were given two pulse (square wave) channels with four duty-cycle settings each, one triangle wave channel, a noise channel (usually used for percussion sounds), and a delta-modulation channel for playing low-quality digital samples, for a total of five sound channels. However, cartridges could carry extra chips to help with processing and, on the Japanese Famicom, to add sound channels. Some of the more popular games, such as the Mega Man series, Contra, The Legend of Zelda, and Gradius II, to name a few, took advantage of such chips to their success.
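To get a feel for those channel shapes, here is a minimal Python sketch. The sample generation is hypothetical (this is not NES code), but the four pulse duty settings of 12.5%, 25%, 50%, and 75% are the documented values for the NES’s 2A03 chip:

```python
# Sketch of the NES's two main tone shapes using plain sample lists.
# pulse_wave models one cycle of a pulse channel at a given duty cycle;
# triangle_wave models one cycle of the triangle channel.

def pulse_wave(period, duty):
    """One cycle of a pulse wave: +1 for the 'on' fraction, -1 otherwise."""
    on = int(period * duty)
    return [1.0] * on + [-1.0] * (period - on)

def triangle_wave(period):
    """One cycle of a triangle wave: a linear ramp up, then back down."""
    half = period // 2
    up = [-1.0 + 2.0 * i / half for i in range(half)]
    return up + [-v for v in up]

cycle = pulse_wave(32, 0.25)            # the NES's 25% duty setting
print(sum(1 for s in cycle if s > 0))   # 8 of 32 samples are 'on'
```

Changing the duty cycle doesn’t change the pitch, only the timbre, which is why the two pulse channels could be made to sound like different “instruments”.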
Sure, you could use the sound driver that Nintendo provided, but it wasn’t very good (to put it nicely). Composers like Tommy Tallarico, who scored Color A Dinosaur, used a tool made by Dr. Stephen Clarke-Willson to convert MIDI to ASCII (plain text). He then edited the sounds and saved them to a custom NES cartridge. Tallarico further commented:
There were all sorts of tweaking tricks one could do to get it to sound ‘not horrible’. Composer/programmers could write their own tools and incorporate things like vibrato and pitch bend. I mean, let’s face it, the most memorable and popular songs in the entire history of the video game industry were created and performed on the NES! Of course I’m talking about the music from Mario Bros. Well, when it came to the NES I was no Koji Kondo!! Especially considering I had a beat up piece of crap audio driver and a day to learn it and compose for it.
The Sega Master System boasted better hardware than the NES, but a significantly smaller game library and virtually none of the same games, due to Nintendo’s licensing practices. Additionally, it used a different microprocessor, the Zilog Z80 rather than anything 6502-based, and offered only four sound channels to work with: three for tones and one for noise. A sound board expansion was released for the Sega Master System, but you had to purchase it and install it yourself. Compare this to the first-party and third-party chips already on the cartridge in NES games, and you can see why many studios decided on the NES as their choice.
And there you have the basics of programming music on the NES during the 8-bit era. Certainly a challenge for anyone in that field. As with anything, though, the more you use it, the more proficient you become. Join me next time when I take a look at the 16-bit era, exploring both the Sega Genesis and the Super Nintendo Entertainment System.
Other geeky notes:
– PWM, or pulse width modulation, is a technique for producing square wave outputs with a specified ratio of ON time to total period, known as the duty cycle. The ON time, measured in either seconds or clock counts, is the time that the output is active. The period is the total time of the output waveform before it repeats itself, measured in the same units.
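As a tiny illustration of that definition, a hypothetical Python helper (just arithmetic, nothing hardware-specific) computing the duty ratio from ON time and period in clock counts:

```python
# The duty ratio is ON time divided by total period, in matching units.

def duty_ratio(on_counts, period_counts):
    """Fraction of the period during which the output is active."""
    return on_counts / period_counts

# An output active for 25 of every 100 clock counts is a 25% duty cycle.
print(duty_ratio(25, 100))   # 0.25
```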
– MIDI stands for Musical Instrument Digital Interface and is a technical standard that describes a protocol, digital interface and connectors. It allows a wide variety of electronic musical instruments, computers, and other related devices to connect and communicate with one another.
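For a concrete taste of the protocol, here is a small Python sketch decoding a Note On message. The three-byte status/key/velocity layout follows the MIDI 1.0 specification; the function itself is only an illustration:

```python
# Decoding a MIDI Note On message: status byte (upper nibble 0x9, lower
# nibble = channel), then key number, then velocity.

def parse_note_on(msg):
    """Return (channel, key, velocity) for a Note On message, else None."""
    status, key, velocity = msg
    if status & 0xF0 != 0x90:     # upper nibble 0x9 identifies Note On
        return None
    return (status & 0x0F, key, velocity)

print(parse_note_on([0x90, 60, 100]))   # channel 0, middle C, velocity 100
```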
– Square and triangle waves are not sine waves themselves; rather, any periodic waveform, including these, can be built by summing sine waves (its Fourier series). A sine wave, or sinusoid, is the mathematical curve describing a smooth, repetitive oscillation, which sounds pure to the human ear (like a tuning fork, or running a wet finger along the rim of a wine glass).
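To make that last note concrete, here is a short Python sketch that builds a square wave by summing its odd sine harmonics (the standard Fourier series result), showing how pure sinusoids combine into the buzzier square shape:

```python
import math

def square_approx(t, harmonics):
    """Fourier-series approximation of a square wave at time t (period 1)."""
    total = 0.0
    for k in range(1, harmonics + 1):
        n = 2 * k - 1                      # odd harmonics only: 1, 3, 5, ...
        total += math.sin(2 * math.pi * n * t) / n
    return 4.0 / math.pi * total

# With many harmonics, the sum approaches +1 over the first half-cycle.
print(round(square_approx(0.25, 500), 2))   # 1.0
```

With only a handful of harmonics the result is audibly “rounder”; as more are added it converges toward the hard-edged square wave the NES pulse channels produce.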