Welcome to Part 2 of 2 of The Making of Game Music. In my previous article, I went over how game music was made primarily on the Nintendo Entertainment System. Here, we will explore the Sega Genesis (known as the Sega Mega Drive in other parts of the world) and the Super Nintendo Entertainment System. It is important to note that when I say MIDI, I mean a file created with a MIDI synthesizer or device. MIDI is an interface protocol and nothing more; sounds themselves aren’t “MIDI sounds.”

As a small recap, NES and Sega Master System games were programmed primarily in assembly, music included, with composers in later years working from a MIDI synthesizer; some games shipped custom sounds and samples in the ROMs themselves (see Batman on the NES; not so much on the Master System, as it didn’t support samples). When the 16-bit game system era hit, Nintendo and Sega took completely different routes with their hardware. Sega put a 6-channel Yamaha FM chip in the Genesis, giving developers FM synthesis and one sample channel (finally) to play with, in addition to the 4-channel PSG chip kept mostly for backward compatibility with the 8-bit Sega Master System. Nintendo, meanwhile, gave their system a Sony SPC700 coprocessor. What this essentially means is that the Sega Genesis had two sound chips to play with, whereas the SNES had sampled instruments driven by a dedicated processor with its own memory and instruction set.

On paper, the Sega Genesis had the superior sound setup, especially when you remember the soundtracks to games like Streets of Rage, where Yuzo Koshiro went nuts with his Roland keyboard samples, and how anything Tommy Tallarico made on the Genesis was basically golden (Earthworm Jim 1 and 2, Aladdin, and Cool Spot, to name a few). This was a massive feat at the time considering he was using G.E.M.S., the Genesis Editor for Music and Sound effects. (As a side note, G.E.M.S. was written by Jon Miller, who is Mark Miller’s brother, and Mark Miller made the music for ToeJam & Earl.) Tommy had this to say regarding the tool:

  “The tool plugged into the Genesis like a cartridge and connected to your PC, which enabled me to tweak the audio chip sounds on the Genesis in real time and use the Genesis to record MIDI right onto my sequencer. So it was like I was using the Genesis just like I would use a synthesizer or sampler. Without that tool I would have been screwed!”

If he really wanted to, he could go through every single note in each MIDI file of the game to tweak the right sound, value, length, volume, priority, and so on. This is what made Aladdin on the Genesis sound exquisite in the opinion of many. Exhausting process? You bet. Worth it? Abso-freaking-lutely!

Of course, you also had people who said the Genesis sounded like wet farts or ukulele speed metal, depending on who you talked to and what games they played (here’s looking at you, John Madden games and Mutant League Football). Then there were badass-sounding games (Thunder Force IV and Road Rash, for starters). That’s what happens when you give a person access to raw power and no filter; they don’t really know what to do with it at first. There’s no denying that, as time progressed, the music quality got better as composers learned how to wield that raw power, even though the Genesis never supported 16-bit sound samples (its sample channel was limited to 8-bit).
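Before we leave the Genesis: to give a flavor of what driving that hardware directly meant, here’s a minimal sketch in C of programming the 4-channel PSG from the 68000 side, assuming the commonly documented port address. This is a homebrew-style illustration; GEMS and commercial drivers did far more than this.

    #include <stdint.h>

    /* The PSG appears to the 68000 as a write-only byte port. */
    #define PSG_PORT (*(volatile uint8_t *)0xC00011)

    /* Set a tone channel (0-2) to a frequency in Hz (NTSC clock assumed). */
    static void psg_tone(uint8_t channel, uint32_t hz)
    {
        /* SN76489 10-bit counter value: clock / (32 * frequency). */
        uint16_t n = (uint16_t)(3579545UL / (32UL * hz)) & 0x3FF;

        PSG_PORT = 0x80 | (channel << 5) | (n & 0x0F); /* latch + low 4 bits */
        PSG_PORT = (uint8_t)((n >> 4) & 0x3F);         /* data: high 6 bits  */
    }

    /* Set channel attenuation: 0 is full volume, 15 is silence. */
    static void psg_volume(uint8_t channel, uint8_t attenuation)
    {
        PSG_PORT = 0x90 | (channel << 5) | (attenuation & 0x0F);
    }

Calling psg_tone(0, 440) followed by psg_volume(0, 0) would hold an A at full blast; the FM side of the chip takes a lot more register work than this.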

Over in Nintendoland, things were different. The SNES was entirely sample-based. You have the SPC700 coordinating the audio and an 8-channel DSP, where each channel can play a 16-bit sample with separate left and right stereo volume. On top of that, you get voice panning, ADSR envelope control (usually found in high-end synthesizers), echo with filtering (via a programmable 8-tap Finite Impulse Response filter), and the ability to use noise as a sound source (useful for certain sound effects such as wind). That’s the beauty of the SNES sound chip: you could allocate as much or as little sample RAM as necessary to create the rich, broad sound for your game.
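To make that concrete, here’s a minimal sketch of what keying on a voice looks like from the sound CPU’s side, written in C for readability (real drivers were SPC700 assembly). The register numbers follow the commonly documented S-DSP map; the specific values are illustrative.

    #include <stdint.h>

    /* The DSP is reached through two ports in the SPC700's address space:
     * write a register number to $F2, then its value to $F3. */
    #define DSP_ADDR (*(volatile uint8_t *)0xF2)
    #define DSP_DATA (*(volatile uint8_t *)0xF3)

    static void dsp_write(uint8_t reg, uint8_t val)
    {
        DSP_ADDR = reg;
        DSP_DATA = val;
    }

    static void key_on_voice0(void)
    {
        dsp_write(0x00, 0x7F); /* V0VOLL: left volume                     */
        dsp_write(0x01, 0x7F); /* V0VOLR: right volume                    */
        dsp_write(0x02, 0x00); /* V0PITCHL: pitch, low byte               */
        dsp_write(0x03, 0x10); /* V0PITCHH: $1000 = nominal 32 kHz rate   */
        dsp_write(0x04, 0x00); /* V0SRCN: entry 0 in the sample directory */
        dsp_write(0x05, 0x8F); /* V0ADSR1: ADSR on, fastest attack        */
        dsp_write(0x06, 0xE0); /* V0ADSR2: high sustain level             */
        dsp_write(0x4C, 0x01); /* KON: key on voice 0                     */
    }

Echo, the FIR filter coefficients, and the noise source are all configured the same way through other registers in that map.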

As part of the general plan for the SNES, Nintendo decided to make it easy to interface special co-processor chips to the console rather than include an expensive CPU that would quickly become obsolete given the nature of technology. The DSP chips came in four different versions and were included inside the cartridge itself. You could usually tell which games included a DSP chip by the presence of sixteen additional pins at the edge of the game cartridge, though not always (Pilotwings had a DSP and no extra pins, as did a few other games). These chips would help the SNES calculate and draw polygons, enhance AI, assist Mode 7 effects, enhance sound, and so on. This mattered for sound because the DSP versions were totally incompatible with one another, each having been rewritten from the ground up. For example, vibrato used a square waveform in DSPs 1 through 3, but in DSP 4 it became a triangle waveform.

BEGIN GEEKY TALK

 Composers had 64 KB of sound RAM to work with, separate from the 128 KB of main RAM. The samples were quite lossy (4-bit, to be exact) and played at a rate of about 32 kHz. They are stored in RAM in a compressed format called Bit Rate Reduction (BRR): 9-byte blocks, each holding a 1-byte header followed by sixteen 4-bit samples packed into 8 bytes. So, 16-bit source samples get a 9/32 compression ratio, but 8-bit samples must be inflated to 16-bit before compression, giving them only a 9/16 ratio. And not all of that 64 KB went to samples, either; the sound driver, song data, and echo buffer all lived in the same RAM. Talk about limitations… Since everything is loaded into the sound chip’s RAM, you can save the state of the APU to a .SPC file, which is why SPC files ripped from SNES games are exactly 66,048 bytes long. You’re probably looking at that number and thinking, “Wait a minute… that’s not the right number for 64 KB of RAM…” You’re correct: the extra 512 bytes hold the file header plus the CPU and DSP register state on top of the 65,536 bytes of RAM.
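 Here’s a minimal sketch in C of decoding one of those 9-byte blocks, assuming the commonly documented S-DSP prediction filters (an illustration of the format, not any particular game’s code):

    #include <stdint.h>

    /* Decode one 9-byte BRR block into sixteen 16-bit samples.
     * p1/p2 are the two previously decoded samples; the header's
     * low two bits (end/loop flags) are ignored here. */
    void decode_brr_block(const uint8_t block[9], int16_t out[16],
                          int16_t *p1, int16_t *p2)
    {
        int shift  = block[0] >> 4;       /* "range": left shift per nibble */
        int filter = (block[0] >> 2) & 3; /* prediction filter selector     */

        for (int i = 0; i < 16; i++) {
            /* Two 4-bit samples per data byte, high nibble first. */
            int nib = (i & 1) ? (block[1 + i / 2] & 0x0F)
                              : (block[1 + i / 2] >> 4);
            if (nib > 7) nib -= 16;       /* sign-extend the 4-bit value */

            int s = (nib << shift) >> 1;  /* apply the range shift */

            /* Add the prediction from earlier samples (coefficients
             * approximated here with integer math). */
            switch (filter) {
            case 1: s += *p1 * 15 / 16;                  break;
            case 2: s += *p1 * 61 / 32 - *p2 * 15 / 16;  break;
            case 3: s += *p1 * 115 / 64 - *p2 * 13 / 16; break;
            }

            if (s >  32767) s =  32767;   /* clamp to 16-bit range */
            if (s < -32768) s = -32768;

            *p2 = *p1;
            *p1 = (int16_t)s;
            out[i] = (int16_t)s;
        }
    }

 Because filters 1 through 3 predict each new sample from the previous ones, sample loop points have to land on a block boundary, one more constraint for the sample artist.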

END GEEKY TALK

So how does one create music on the SNES? Since it’s sample-based, it’s pretty straightforward: use any tool you want to create the samples! But you must bear in mind that the memory has to be shared among sample data, song data, sound effects, and the actual program instructions that play music, play sound effects, and send and receive data from the main CPU (these are the infamous limitations so many composers complained about). A Gaussian filter is forced on the output to hide aliasing artifacts from the compressed samples, and since the system is sample-based, there’s probably going to be a lot of echo and reverb layered on top. Nevertheless, you got great game music such as Final Fantasy 3 (Final Fantasy 6 if you’re THAT kind of person), ActRaiser, and Chrono Trigger.
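To put that sharing into perspective with some back-of-the-envelope math of my own: at 32 kHz, one second of audio is 32,000 samples, which is 2,000 BRR blocks of 9 bytes each, or roughly 17.6 KB. Even if you handed samples the entire 64 KB, you’d fit well under four seconds of unique sample data, and the driver, song data, sound effects, and echo buffer all take their cut first.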

Speaking of Squaresoft (before they merged with Enix to become Squeenix), they took sampling and DSP usage to insane levels; Secret of Mana relied heavily on the DSP. Part of the reason Squaresoft games sounded so fantastic is that, rather than letting their games trigger sound effects by writing to one of the four communication ports between the main CPU and the APU, Square decided this was too simple and was having none of that easy stuff. All of their games used a rather complicated protocol instead that no one else used. Would you expect anything less of them?
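For contrast, here’s a sketch of the simple approach Square skipped. The main CPU and the APU share four byte-wide I/O ports ($2140-$2143 on the main CPU side, $F4-$F7 on the SPC700 side), and a typical driver just posts a command byte and waits for the sound CPU to echo it back. The handshake below is the commonly documented pattern, not Square’s protocol:

    #include <stdint.h>

    /* APU I/O port 0 as seen by the main CPU (illustrative MMIO access). */
    #define APUIO0 (*(volatile uint8_t *)0x2140)

    /* Post a command byte and spin until the sound driver acknowledges
     * it by echoing the same value back through the port. */
    static void apu_send_command(uint8_t cmd)
    {
        APUIO0 = cmd;
        while (APUIO0 != cmd) {
            /* wait for the SPC700 side to pick it up */
        }
    }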

This ends the second part of my two-part article on The Making of Game Music. I quite enjoyed researching all of this, as I wasn’t even aware of the serious limitations imposed on composers back in the early days of the consoles and their wars, yet they all made incredible music for the games we so loved.
