Thursday, September 29, 2011

Writing Software to Play Music to Write Software To

These are the shorter tracks from a composition I wrote and recorded in October 2003 - the inner movements of a 3-hour set. (The full piece is on last.fm here.) The outer movements were each around an hour. They're all excerpts of a continuous "musical clock" composition which would generate tones based on the computer's timestamp - hence the performance name "Mod 12". The generative algorithms are loosely based on the 12s in dates and times (12 months, 2 * 12 hours in a day, 5 * 12 minutes in an hour, etc.) and on the tone rows of 12-tone music (the self-similar structures that Charles Wuorinen describes in "Simple Composition"). I tweaked the chromaticism and used the values 0 through 11 as overtones instead of pitches in the chromatic scale. (Just grab the time in seconds, modulo 12, and throw it against the frequency modulator bits in the soundcard.) I think the original inspiration for this system came from an argument I had with my freshman-year college roommate about the supposed impossibility of combining "minimalism" and "serialism", which led to me sketching some things out by hand (in Cakewalk) and then later getting into generative sound programming based on matrices of pitch ratios.
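For the curious, here's a rough sketch of that timestamp-to-overtone mapping in plain C. It's not the original code (which is long gone); the base frequency and the three-voice layout are just made up for illustration.

/* Conceptual sketch of the "musical clock" mapping - illustration only. */
#include <stdio.h>
#include <time.h>

#define BASE_HZ 55.0  /* assumed fundamental; the real value is lost to history */

int main(void)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    /* Take the current time modulo 12 at a few different scales... */
    int sec_step  = t->tm_sec  % 12;  /* 0..11, steps every second */
    int min_step  = t->tm_min  % 12;  /* 0..11, steps every minute */
    int hour_step = t->tm_hour % 12;  /* 0..11, steps every hour   */

    /* ...and read each value as an overtone number rather than a chromatic
       pitch class: overtone n sounds at (n + 1) times the fundamental. */
    printf("seconds voice: %7.2f Hz\n", BASE_HZ * (sec_step + 1));
    printf("minutes voice: %7.2f Hz\n", BASE_HZ * (min_step + 1));
    printf("hours voice:   %7.2f Hz\n", BASE_HZ * (hour_step + 1));
    return 0;
}

The point is just the layering: the seconds voice moves constantly while the hours voice only shifts once an hour.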

The culmination of this was a 5-year (off and on) project in programming for the OPL chip on old Sound Blaster cards. I had kept a Windows 98 machine around with the correct soundcard so that I could keep running the programs - something to do when I got bored with writing sequenced stuff in Buzz*. In a way, all of the SuperCollider stuff I do now comes from this, since it's all generative and built on basic waveform synths. There's a video element to these too, but I've never been able to figure out a way to put the two of them together for recording (save plugging an old Toshiba laptop with S-Video into a VCR and making VHS tapes of the stuff, 10 years ago). I did a handful of public performances around Chicago and Milwaukee back then too.

Early incarnations of this program (circa 1997) actually used the internal speaker (the beeper) right off the motherboard - some MP3s are on last.fm here.
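Driving the beeper basically means programming the PC's 8253/8254 timer chip directly. The sketch below is just the general idea (the real circa-1997 code is lost), assuming a DOS-era compiler that provides outp() and inp() in <conio.h>.

/* Square-wave tone on the PC speaker via timer channel 2 - sketch only. */
#include <conio.h>

#define PIT_HZ 1193182L  /* base clock of the 8253/8254 timer */

void speaker_tone(unsigned long hz)
{
    unsigned divisor = (unsigned)(PIT_HZ / hz);

    outp(0x43, 0xB6);                   /* channel 2, square wave, lo/hi byte */
    outp(0x42, divisor & 0xFF);         /* low byte of the divisor  */
    outp(0x42, (divisor >> 8) & 0xFF);  /* high byte of the divisor */
    outp(0x61, inp(0x61) | 0x03);       /* gate timer 2 into the speaker */
}

void speaker_off(void)
{
    outp(0x61, inp(0x61) & ~0x03);      /* disconnect the speaker again */
}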

I got into this kind of programming right around the time that the SB cards with the onboard synth chips were going out in favor of cards that just did straight digital PCM sound encoding, and the built-in MIDI synths all went for fakey sample-table-based instruments (that whole corny mid-'90s "Virtual Reality" aesthetic, which I'm sure will become retro-trendy in about 3 years). I used to look in thrift stores (or at the curbside) for machines with the right number of audio jacks on the sound cards and buy them just to pull the cards.

VMware doesn't emulate the FM synth chips - I'm going to look into other software emulators like DOSBox and see if they work; if so, I'll post some videos of the whole thing. The source code has since been lost, but I have some compiled binaries up here along with the SB drivers, if anyone wants to take a swipe at getting these up and running. Here's the guide to programming for the AdLib/OPL2 chip - it's probably the exact same document I was using 12 years ago.
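For anyone who wants to poke at this under DOSBox, the core of what that guide covers fits in a few lines: write a register index to port 0x388, write the value to 0x389, and pad each write with status-port reads to cover the chip's delays. The sketch below isn't the original (lost) code - it just keys a single test tone on channel 0, with the modulator's frequency-multiple bits taken from the clock mod 12, in the spirit of the pieces above.

/* Minimal OPL2 test tone in the style of the AdLib programming guide.
   Assumes a DOS compiler with outp()/inp() and the synth at port 0x388. */
#include <conio.h>
#include <time.h>

#define OPL_ADDR 0x388
#define OPL_DATA 0x389

static void opl_write(unsigned char reg, unsigned char val)
{
    int i;
    outp(OPL_ADDR, reg);
    for (i = 0; i < 6; i++) inp(OPL_ADDR);   /* settle after the address write */
    outp(OPL_DATA, val);
    for (i = 0; i < 35; i++) inp(OPL_ADDR);  /* settle after the data write */
}

int main(void)
{
    /* The "musical clock" trick: pick the modulator's frequency multiple
       from the current time, mod 12. */
    unsigned char mult = (unsigned char)(time(NULL) % 12);

    opl_write(0x20, mult & 0x0F);  /* modulator: frequency multiple        */
    opl_write(0x40, 0x10);         /* modulator: output level              */
    opl_write(0x60, 0xF0);         /* modulator: fast attack, slow decay   */
    opl_write(0x80, 0x77);         /* modulator: sustain / release         */
    opl_write(0x23, 0x01);         /* carrier: frequency multiple = 1      */
    opl_write(0x43, 0x00);         /* carrier: full volume                 */
    opl_write(0x63, 0xF0);         /* carrier: fast attack, slow decay     */
    opl_write(0x83, 0x77);         /* carrier: sustain / release           */
    opl_write(0xA0, 0x98);         /* channel 0: F-number, low byte        */
    opl_write(0xB0, 0x31);         /* channel 0: key on, block 4, F-num hi */
    return 0;
}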

 * Writing generative music in general was something I turned toward more and more as I hit my early 20's and started working for a living. As a teenager I spent most of my extra brain cycles on writing fiction, but I turned to music, and especially high-level, hands-off generative music, as a way to keep doing something creative and engaging in times when the stresses and disruptions of work kept me from dwelling on the minutiae of the people whose lives I had invented. Prose has always been a hard thing for me to start and stop, but tweaking a few lines of code and holding the architecture** in my mind, while letting the computer fill in the details of a kind of sound world, has always been easy.

 ** And maybe this has something to do with how I've always had really vivid dreams of ornate architectural spaces.
