Thursday, September 29, 2011

Writing Software to Play Music to Write Software To

Writing Software to Play Music to Write Software To by Backtrace

These are the shorter tracks from a composition I wrote and recorded in October 2003 - the inner movements of a three-hour set. (The full piece is on last.fm here.) The outer movements were each around an hour. They're all excerpts of a continuous "musical clock" composition which would generate tones based on the computer's timestamp - hence the performance name "Mod 12". The generative algorithms are loosely based on the 12's in dates and times (12 months, 2*12 hours in a day, 5*12 minutes in an hour, etc.) and on the tone rows of 12-tone music (the self-similar structures that Charles Wuorinen describes in "Simple Composition"). I tweaked the chromaticism and used the values 0 through 11 for overtones instead of pitches in the chromatic scale. (Just grab the time in seconds, modulo 12, and throw it against the frequency modulator bits in the soundcard.)

I think the original inspiration for this system came from an argument I had with my freshman-year college roommate about the supposed impossibility of combining "minimalism" and "serialism", which led to me sketching some things out by hand (in Cakewalk) and then later getting into generative sound programming based on matrices of pitch ratios.
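A rough SuperCollider sketch of the clock idea (the original wrote register values straight to the OPL2 chip rather than building a SynthDef - the fundamental and the second-to-overtone mapping here are just illustrative):

(
s.waitForBoot {
    SynthDef(\clockTone, { |fund = 60, partial = 1, amp = 0.1|
        var env = EnvGen.kr(Env.perc(0.01, 0.9), doneAction: 2);
        Out.ar(0, (SinOsc.ar(fund * partial) * amp * env) ! 2);
    }).add;

    fork {
        s.sync;                                 // wait until the SynthDef has landed
        loop {
            // read the wall-clock second, take it mod 12, use it as an overtone number
            var secs = Date.getDate.second;
            Synth(\clockTone, [\fund, 60, \partial, (secs % 12) + 1]);
            1.wait;
        }
    };
};
)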

The culmination of this was a 5-year (off and on) project in programming for the OPL chip on old Sound Blaster cards. I had kept a Windows 98 machine around with the correct soundcard so that I could run them - something to do when I got bored with writing sequenced stuff in Buzz*. In a way all of the SuperCollider stuff I do now comes from this, since it's all generative and uses basic waveform synths. There's a video element to these, but I've never been able to figure out a way to put the two of them together for recording (save plugging an old Toshiba laptop with S-Video into a VCR and making VHS tapes of the stuff, 10 years ago). I did a handful of public performances around Chicago and Milwaukee back then too.

Early incarnations of this program (circa 1997) actually used the internal speaker (the beeper) right off the motherboard - some mp3's are on last.fm here.

I got into this kind of programming right around the time that the SB cards with the onboard synth chips were going out in favor of cards that just did straight digital PCM sound encoding, and the built-in midi synths all went for fakey sample-table based instruments (that whole corny mid-90's "Virtual Reality" aesthetic, which I'm sure will become retro-trendy in about 3 years). I used to look at thrift stores (or curbside) for machines with the right number of audio jacks on the sound cards and buy them just to pull the cards.

VMware emulators don't support the FM synth chips - I'm going to look into other software emulators like DOSBox and see if they work; if so, I'll post some videos of the whole thing. The source code has since been lost, but I have some compiled binaries up here, along with the SB drivers, if anyone wants to take a crack at getting these up and running. Here's the guide to programming for the AdLib/OPL2 chip - it's probably the exact same document I was using 12 years ago.

 * Writing generative music in general was something I turned towards more and more as I hit my early 20's and started working for a living. As a teenager I spent most of my extra brain cycles on writing fiction, but I turned to music, and especially high-level, hands-off generative music, as a way to keep doing something creative and engaging in times when the stresses and disruptions of work kept me from dwelling on the minutiae of people whose lives I had invented. Prose has always been a hard thing for me to start and stop, but tweaking a few lines of code, holding the architecture** in my mind while letting the computer fill in the details of a kind of sound world, has always been easy.

 ** And maybe this has something to do with how I've always had really vivid dreams of ornate architectural spaces.

Wednesday, September 28, 2011

"Rugs Not Drugs"

Spent most of the day going in circles with some scope / environment stuff in SuperCollider and haven't really had the time to make sense of the docs on how to do inheritance with a Proto.  The Apollo 11 sonification idea has come back to the foreground - I was listening to some old drone pieces like Whiteout Drunk, composed back when I didn't know how to do rhythm in SuperCollider. That kind of one-dimensional restriction was good for me, especially for the narrow emotional range I was willing to engage with in my music back then.

The drone server has been finicky - it keeps crashing because, being hosted on a virtual server, its CPU allocation isn't constant, and at some point the scsynth process sucks up 100% of the available CPU and jackd crashes.  I'd like to write a piece which is the sonification of its own system resource usage, but that would probably require having a separate sclang process to monitor peakCPU (again, figuring out how to do it in a SynthDef is the kind of restriction I like; as far as I know, though, SynthDefs are constant in the number of cycles they use once instantiated).
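A minimal sketch of what that sclang-side version might look like - poll the server's reported CPU load and feed it into a running synth. The synth and the mapping here are just placeholders:

(
s.waitForBoot {
    SynthDef(\cpuDrone, { |freq = 80, depth = 0|
        // crude FM drone whose modulation depth will track the CPU load
        var sig = SinOsc.ar(freq, SinOsc.ar(freq * 2, 0, depth * 2pi));
        Out.ar(0, (sig * 0.1) ! 2);
    }).add;

    fork {
        s.sync;
        ~cpuDrone = Synth(\cpuDrone);
        loop {
            // peakCPU / avgCPU are filled in from the server's status replies
            ~cpuDrone.set(\depth, s.peakCPU / 100);
            1.wait;
        }
    };
};
)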

I'd like to turn the Rugs Not Drugs pieces into something longer (originally I was imagining them as 90 minutes and 4 hours) but my interest in that world seems to wax and wane on a monthly cycle.  I had originally come up with those heavily phase-modulated synths back in June, while working on drones that could fit into a Twitter post.

~synthFactory.("big-drone2",{|freq=120, n=#[1,2,3,4,5,6], p=#[0.5,0.66,1,2,3,4],x=3,of=4, ra=3,tt=1|

  	  var a=p *.x n,
    	f=a*freq,
    	b=SinOsc,
    	c=FSinOsc,
    	r=b.ar(1/f,c.ar(tt,of/a,of/a,0).tanh,c.ar(1/f.reverse,0,x,x)),
    	t = c.ar(tt,pi/2,1/2,1/2);
  	b.ar(PinkNoise.ar(f.log2,f) ,Blip.ar(PinkNoise.ar(r.sum**(f.log2 / 8192.log2),f),x,pi*b.ar(1/f,r,r)),((b.ar((r+r.sum).abs.sqrt/ra,r,2,2)**3)/f)*b.ar(a/(p**t) ,b.ar(a+(r.sum)**t,r,pi*b.ar(1/a,r.sum,b.ar(1/a,r,a))),r/x,0))},{[-1,1]/(1..6)},x);

The drone created by this synth is a matrix of static pitches whose amplitudes and phase-modulation amounts change via the r modulator, which is really the heart of the piece.  In 20110904 this synth is just instantiated with different starting values and left to run for anywhere from 5 seconds to 6 minutes.
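Something along these lines - the parameter values below are made up, and it assumes ~synthFactory registers a gated SynthDef named "big-drone2" the same way the 20110905 code further down does:

(
Routine {
    ~d = Synth("big-drone2");
    ~d.setn(\p, [0.5, 2/3, 1, 2, 3, 4]);
    ~d.set(\freq, 90, \x, 4, \tt, 1/2);
    ~d.set(\amp, 1, \gate, 1);
    (3 * 60).wait;          // let it run for a few minutes
    ~d.set(\gate, 0);       // release
}.play;
)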


20110904 by Backtrace

I was inspired by the swaying motifs in Feldman's late orchestral pieces like "Coptic Light" - and by the way that the phrases in his music just hang in the air, each as its own little blob of sound, without really doing anything or going anywhere.  This was what I was hearing in my head as I spent an hour cleaning cat hair out of my hallway runner with a lint brush.



The entire code for 20110905 is here:
(
  ~allServers = [Server.local, Server.internal];
  ~allServers.collect({|serv|

    ~synthFactory.("ikat_yellow", {|a=#[1,1,1,1], t=#[128,129], xMod=1, xAdd=1, xRate=1, base=2, aExp=1|
      var x, p, h,
      e=1.exp,
      b=SinOsc;
      a=a*.x a*2;
      x=(b.ar((a**aExp) * xRate, 0, xAdd, xMod)**e).tan;
      h=a *.x t;
      p=BrownNoise.ar(x.abs.log / base.log, t *.x a);
      b.ar(p, b.ar(p, 0, ((pi**(e/x)).sin)**x)*pi, 6/h);
    }, {[-1,1]/(1..4)}, serv);

    ~synthFactory.("ikat_yellow_lower", {|a=#[1,1,1,1], t=#[128,129], xMod=1, xAdd=1, xRate=1, base=2, aExp=1|
      var x, p, h,
      e=1.exp,
      b=SinOsc;
      a=a*.x a*2;
      x=(b.ar((a**aExp) * xRate, 0, xAdd, xMod)**e).tan;
      h=a *.x t;
      p=BrownNoise.ar(x.abs.log / base.log, h);
      b.ar(p, b.ar(p, 0, ((pi**(e/x)).sin)**x)*pi, 6/h);
    }, {[-1,1]/(1..4)}, serv);
  });
)

(
 Routine {

   ~f.set(\gate,0);
   ~g.set(\gate,0);

  ~f=Synth("ikat_yellow");
  ~f.setn(\a,[1,3/8,441/512,8/7]);
  ~f.setn(\t,[128,1.5**12]);
  ~f.set(\xRate,1/(3**8));
  ~f.set(\aExp,0);
  ~f.set(\xAdd,1.exp);
  ~f.set(\xMod,1.exp);
  ~f.set(\base,9);
  ~f.set(\amp,1);
  ~f.set(\gate,1);

  ~g=Synth("ikat_yellow_lower");
  ~g.setn(\a,[21/16,4/3,3/7,1]);
  ~g.setn(\t,[128 * (441/512) * (8/7),128]);
  ~g.set(\xRate,1/(3**8));
  ~g.set(\aExp,2);
 
  ~g.set(\xAdd,1);
  ~g.set(\xMod,14/49);
  ~g.set(\base,9);
  ~g.set(\amp,1.25);
  ~g.set(\gate,1);


  Routine {
    5.do({|x|
      var z = [8,7,9,7,8].wrapAt(x),
      w = [6,4,5,4,6].wrapAt(x);
      x.postln;
      [~f,~g].collect({|d|d.set(\xRate,(1/3**z))});
      (3**w).wait;
    })
  }.play;

  ((3**[6,4,5,4,6]).sum).wait;
  ~f.set(\gate,0);
  ~g.set(\gate,0);

}.play;
)


This piece instantiates two drones and just lets them run, tweaking one modulator parameter to divide the piece into blocks of [729, 81, 243, 81, 729] seconds (powers of 3, matching the 3**w waits in the routine).  The x modulator is the key here - it creates the grumbling, ripping, and pinging sounds as it modulates the amount of noise in each pitch layer of the drone.  The septimal scale is always mostly yellow to me (like the rug which inspired the piece).  I'm also really starting to like the 1:8/7 harmony - the inspiration for both of these pieces was just a drone on a stack of pitches at 8/7 ratios to each other.
20110905 by Backtrace


Monday, September 26, 2011

Otomata + Monome * Supercollider

Last week I got a Monome and I've been playing with some Otomata stuff using a SuperCollider implementation by Corey Kereliuk.
The first composition I put together was a simple, cheerful minimalist piece, and one of the first things I've done using equal temperament (midi notes) in about 5 years.


20110918 Otomata by Backtrace

I started tweaking the code so that I could add more instruments (the above example has a percussive instrument and a sustained pulse-wave instrument) and have the cellular automata trigger a callback function when they hit a wall, instead of just triggering a synth. I could then load that callback function with whatever synths I wanted. I also started polling the instantiated Otomata object itself for global data (like the x,y positions of all the automata at a given moment) so I could use that as musical data. You can hear chord changes in this piece - I had the program count the number of ticks the sequencer routine had been running and store that in a global variable, which I then used to cycle through a set of different scales.
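Roughly, the callback idea looks like this (the names here are illustrative - they're not from Corey's code - and \pluck stands in for whatever SynthDef you want to trigger):

(
~tickCount = 0;
~scales = [Scale.minor, Scale.dorian, Scale.mixolydian];

// bumped by the sequencer routine once per tick
~onTick = { ~tickCount = ~tickCount + 1 };

// called when an automaton hits a wall; note is a scale degree
~wallCallback = { |note|
    var scale = ~scales.wrapAt(~tickCount div: 64);    // cycle scales every 64 ticks
    var freq = (60 + scale.degreeToKey(note)).midicps;
    Synth(\pluck, [\freq, freq]);
    note;    // pass the note through so callbacks can be chained
};
)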

After 4 days straight of playing with this stuff, I think you can sense the burnout setting in a little with this piece (at least that's how I felt about it - not that I feel burned out creatively on Otomata, but that this is where my brain goes around 3 AM after playing with sounds all day):

20110922 Otomata by Backtrace

There are multiple instruments being triggered, and some effect parameters (filter sweeps) are being tweaked by the global state of the Otomata board. After recording this piece I decided it was time to clean up the code and try to get some useful, standalone application out of it.

I modified the original Otomata.sc class into a Proto (so I could tweak it at runtime, without having to recompile SC). I'm working on a Proto version of the Automaton class as well.


Musical information (scale, synths, starting pitch) has been decoupled from the sequencer logic - scale and synth are now controlled via the synthFunc callback function and can be switched dynamically while the Otomata is running. See how the synthFunc variable is added to the Automaton class in Automaton.sc, and examples of its use in otomata.scd.

There are methods to add and remove automata from a running otomata: ~removeOldest, ~removeNewest, and ~removeNth.

Global metadata about the otomata gives additional musical parameters across all of the automata - see ~dSum, ~xySum, ~xSum, and ~ySum. The ~age attribute can be accessed to change values over time. I'd like to add similar attributes to each automaton, like age, number of collisions, number of wall hits, and "dizziness" (number of right-angle turns over time).

In the example code in otomata.scd I show how to use SuperCollider's function composition operator <> to attach multiple sound callbacks to one automaton. If you have two functions f(x) and g(x), then h = f <> g creates the function h(x) = f(g(x)) - whatever g returns is passed to f as its argument. If you're chaining synth callbacks, each function should return the same "note" value that it takes.
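A quick illustration of the chaining (the two SynthDef names are placeholders for your own instruments):

(
~perc  = { |note| Synth(\percHit,   [\freq, note.midicps]); note };
~pulse = { |note| Synth(\pulseWave, [\freq, note.midicps]); note };

// ~both.(60) runs ~pulse first, then hands its return value to ~perc
~both = ~perc <> ~pulse;
~both.(60);
)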

Thanks to Corey for posting his code, and to Batuhan Bozkurt for designing the Otomata.

The code posted here is really meant as an example of how to use callback functions in SC and a couple of other techniques - but feel free to use it if you'd like. In the near future I'll post something that's a little more conceptually coherent.

Wednesday, September 14, 2011

HOWTO: Stream mp3's with icecast on Ubuntu / Rackspace Cloud

In this post I'll explain how to set up a streaming audio server in the cloud. We'll be using Rackspace Cloud (Slicehost) running Ubuntu Lucid in the examples. I'll explain the steps in brief first, and then in detail, indicating places where you may need to back up and do extra work, depending on how your system is configured. This howto is meant for people who have experience configuring Linux servers, but are a little lost in the particulars of getting an Icecast server up and running (I couldn't find a good step-by-step guide when I did this). You should be familiar with ssh, installing packages with apt-get, and using make to compile from source.

Streaming mp3's, the basic concept: you put mp3's on your cloud server, you set up a streaming audio server, and people listen using a web client that points to your cloud server. This requires 2 applications - Icecast2 and Ices. This was the first point of confusion for me. Icecast2 is a web server that clients (like iTunes, or a browser) connect to in order to get the streaming audio signal. However, the conversion of binary files to streamable audio data is accomplished by a different app, Ices. Ices and Icecast2 don't have to run on the same server - you can configure them so that the mp3's come from one machine (like one in your home or office) and then get streamed up to your Icecast server, which in turn streams out to multiple listeners on the internet. For this example, both Ices and Icecast2 will be running on the same virtual machine.

Setting up Icecast2 is straightforward enough - I used the instructions in this tutorial on howtoforge.com. Ices requires a little bit more work, though. The problem with the howtoforge.com example is that by default Ices only works with files encoded in the Ogg Vorbis format (.ogg). In order to get Ices to play mp3's, we need to grab the right libraries and build it from source.

If you have an ubuntuforums.org account you can check the thread here for instructions. If you don't have an ubuntuforums account, I'll explain here (since getting an account on that forum just to read an archived thread took like 20 validation steps).

You'll need to install the following packages:
build-essential
libshout3-dev
liblame-dev
libmp3lame-dev

libmp3lame-dev isn't part of the standard Ubuntu distro so you'll need to edit your /etc/apt/sources.list file to include packages from the Ubuntu Multiverse.
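The multiverse lines in sources.list look something like this (the mirror URL will vary depending on where your server is; these are just examples for Lucid):

deb http://archive.ubuntu.com/ubuntu lucid multiverse
deb http://archive.ubuntu.com/ubuntu lucid-updates multiverse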

I also already had libxml and jackd installed on this system - so install these packages as well. (TODO: figure out if these packages or any of their dependencies (more likely) are actually needed for Icecast / Ices)
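With multiverse enabled, the install step is something like this (package names as listed above):

sudo apt-get update
sudo apt-get install build-essential libshout3-dev liblame-dev libmp3lame-dev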

To install Ices from source:
Download the ices0 source package from icecast.org and extract it to your home directory on your cloud server. Then cd to the source directory and run the following commands:
./configure
make
sudo make install

Now you'll need to set up an Ices configuration file to tell Ices where to find your mp3's. This thread gives a good example of the format (the last code sample posted). You can also check the ices.conf.dist file in /usr/local/etc/. The file should be called ices.conf and live in /etc/ices/. I use a playlist file called "playlist.txt" in the same directory as the ices.conf. The playlist.txt file is just a plaintext file with the full path of each mp3 I want to play on its own line (be sure to trim any whitespace from the ends of the lines - I had problems with Ices not finding files due to trailing whitespace).
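A playlist.txt ends up looking like this (the paths are placeholders for wherever your mp3's actually live):

/home/demo/music/track01.mp3
/home/demo/music/track02.mp3
/home/demo/music/track03.mp3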

You may need to tweak your icecast2 file in /etc/init.d - make sure it has the line
ICES="/usr/local/bin/ices -c /etc/ices/ices.conf"
so that it loads the right ices.conf file when the daemon starts.
You can now start icecast by running /etc/init.d/icecast2 start - if you want uptime insurance, you can set up a monit process to restart icecast if it crashes. Just add the following code to your monitrc file (assumes you have icecast running on port 8000):

check process icecast2 with pidfile /var/run/icecast
start program = "/usr/bin/sudo /etc/init.d/icecast2 start"
stop program = "/usr/bin/sudo /etc/init.d/icecast2 stop"
if failed port 8000 protocol HTTP
request /
with timeout 60 seconds
then start

if failed port 8000 protocol HTTP
request /
with timeout 60 seconds
then alert