Showing posts with label supercollider.

Thursday, January 16, 2014

Source Code for the First Piece I Ever Wrote in SuperCollider

This is presented more as an example of how not to write music in SuperCollider - it dates from my first week or two of trying things out.

The piece is a drone piece based on an instrument that plays a set of overtones of a base frequency, and slowly bends those frequencies by a microtonal increment over a period of several minutes - creating cascading overtones at higher frequencies as the sound waves cross in space and cancel each other out.
The piece starts with a base drone built on 20 Hz and later fades in complementary drones at (20*21/16) Hz and (20*4/3) Hz - a septimal major third and a perfect fourth.
The piece ends after 30 minutes, when the pitch bend of the first drone's cycle returns to a unison.
This pattern of slow pitch bending in sine clusters was something I experimented with for several years. I still write a piece in this style every now and then for fun. I've learned a lot about sound design since then, so it's always good to go back and apply those learnings. Orbital Slingshot, composed in late 2011, is the most recent piece I've released in this style.
Commentary follows the source code.

(
SynthDef("sine-phase-additive", {
arg baseFreq = 32, baseFreqLFORate = 1/3600, baseFreqLFOMult = 0.01,
    baseFreqLFOPhase = 0, baseFreqLFOAdd = 2, pan = 0, amp = 0.03,
    partials = #[1,2,4,8,16,32,3,6,9,12,18,24,25,27,36,40,48,54,56,60,64];

var base;

//a very slow sine LFO scales baseFreq: the multiplier sits at baseFreqLFOAdd
//(2 by default, an octave up) and wobbles by +/- baseFreqLFOMult
base = baseFreq * SinOsc.kr(baseFreqLFORate, baseFreqLFOPhase, baseFreqLFOMult, baseFreqLFOAdd);

//one sine per partial, all at the same amplitude, mixed down to one pan position
Out.ar(0, Mix.ar(Pan2.ar(FSinOsc.ar(base * partials, 0, amp), pan)));
}).load(s);
)

s=Server.local;
s.boot;




s.sendMsg("/s_new", "sine-phase-additive", 1005);
s.sendMsg("/n_set", 1005, "baseFreq",20*(4/3));
s.sendMsg("/n_set", 1005, "pan", 0);
s.sendMsg("/n_set", 1005, "amp", 0.013);
s.sendMsg("/n_set", 1005, "baseFreqLFOPhase", 0);
s.sendMsg("/n_set", 1005, "baseFreqLFOMult", 0.0125);
s.sendMsg("/n_set", 1005, "baseFreqLFORate",1/300);
s.sendMsg("/n_set", 1005, "pan", 0.3);

s.sendMsg("/s_new", "sine-phase-additive", 1001);
s.sendMsg("/n_set", 1001, "baseFreq",20*(21/16));
s.sendMsg("/n_set", 1001, "pan", 0);
s.sendMsg("/n_set", 1001, "amp", 0.012);
s.sendMsg("/n_set", 1001, "baseFreqLFOPhase", pi);
s.sendMsg("/n_set", 1001, "baseFreqLFOMult", 0.0325);
s.sendMsg("/n_set", 1001, "baseFreqLFORate",1/120);
s.sendMsg("/n_set", 1001, "pan", -0.3);


(





s.sendMsg("/s_new", "sine-phase-additive", 1000);
s.sendMsg("/n_set", 1000, "baseFreq", 20);
s.sendMsg("/n_set", 1000, "pan", 0);
s.sendMsg("/n_set", 1000, "amp", 0.030);
s.sendMsg("/n_set", 1000, "baseFreqLFOPhase",0);
s.sendMsg("/n_set", 1000, "baseFreqLFOMult", 0.00325);
s.sendMsg("/n_set", 1000, "baseFreqLFORate",1/1200);
s.sendMsg("/n_set", 1000, "pan", 1.0);





//static drone
s.sendMsg("/s_new", "sine-phase-additive", 1003);
s.sendMsg("/n_set", 1003, "baseFreq", 20);
s.sendMsg("/n_set", 1003, "amp", 0.040);
s.sendMsg("/n_set", 1003, "baseFreqLFOMult", 0.0);
s.sendMsg("/n_set", 1003, "baseFreqLFOPhase",0);
s.sendMsg("/n_set", 1003, "pan", 0.0);


//base  hourly cycle drone
s.sendMsg("/s_new", "sine-phase-additive", 1002);
s.sendMsg("/n_set", 1002, "baseFreq", 20);
s.sendMsg("/n_set", 1002, "amp", 0.030);
s.sendMsg("/n_set", 1002, "baseFreqLFOMult", 0.00125);
s.sendMsg("/n_set", 1002, "baseFreqLFOPhase",0);
s.sendMsg("/n_set", 1002, "baseFreqLFORate", 1/1800);
s.sendMsg("/n_set", 1002, "pan", -1.0);





SystemClock.sched(480.0, {

s.sendMsg("/s_new", "sine-phase-additive", 1001);
s.sendMsg("/n_set", 1001, "baseFreq",20*(21/16));
s.sendMsg("/n_set", 1001, "pan", 0);
s.sendMsg("/n_set", 1001, "amp", 0.012);
s.sendMsg("/n_set", 1001, "baseFreqLFOPhase", pi);
s.sendMsg("/n_set", 1001, "baseFreqLFOMult", 0.0325);
s.sendMsg("/n_set", 1001, "baseFreqLFORate",1/120);
s.sendMsg("/n_set", 1001, "pan", -0.3);


 });

SystemClock.sched(1200.0, {

s.sendMsg("/s_new", "sine-phase-additive", 1005);

s.sendMsg("/n_set", 1005, "baseFreq",20*(4/3));
s.sendMsg("/n_set", 1005, "pan", 0);
s.sendMsg("/n_set", 1005, "amp", 0.013);
s.sendMsg("/n_set", 1005, "baseFreqLFOPhase", 0);
s.sendMsg("/n_set", 1005, "baseFreqLFOMult", 0.0125);
s.sendMsg("/n_set", 1005, "baseFreqLFORate",1/300);
s.sendMsg("/n_set", 1005, "pan", 0.3);



 });

SystemClock.sched(1800.0, {

s.sendMsg("/n_free",1000);
s.sendMsg("/n_free",1001);
s.sendMsg("/n_free",1002);
s.sendMsg("/n_free",1003);
s.sendMsg("/n_free",1005);
 });


)

At this point I wasn't using either the Synth or Routine abstractions. I was creating each synth by sending OSC messages directly to the server, setting its attributes one message at a time, and manually allocating a node ID for each synth. My timing scheme was to schedule events on the system clock at the outset, delayed until their play time - so the command that ends the piece after 1800 seconds (by freeing the synth nodes, not by sending them any kind of envelope) is already queued when the piece starts.
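For comparison, here's a minimal sketch (not part of the original piece) of the same setup using those abstractions - a Synth object instead of raw OSC messages, and a Routine instead of a pre-scheduled clock event:

(
var drone = Synth("sine-phase-additive", [
\baseFreq, 20, \amp, 0.03, \pan, 1.0,
\baseFreqLFOMult, 0.00325, \baseFreqLFORate, 1/1200
]);
Routine {
1800.wait;   //let the piece run for 30 minutes
drone.free;  //still no envelope - just freeing the node
}.play;
)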
Among the things I hadn't learned about synth design yet: envelopes, gates and amplitude balancing. Amplitude balancing with additive synthesis can be a little tricky. I'm starting with a base frequency (20 Hz in this case) and sending it an array of partials = #[1,2,4,8,16,32,3,6,9,12,18,24,25,27,36,40,48,54,56,60,64], which is all well and good (note that there are no multiples of 5 until you get into the highest octave). Two things to note about amplitude for your partials: 1) sound waves at the same amplitude carry more energy at higher frequencies (more wavefronts are hitting your ears over the same span of time), so they sound louder. 2) When you're building an array of integer partials, there will always be more of them in the higher octaves. To use the standard harmonic series as an example: in the first octave there are only overtones at 1 and 2 times the fundamental frequency, but 3 octaves up you have 8 overtones, at 9 through 16 times the fundamental. The higher you go, the more tones are likely to fit into your tuning scheme, even if you eliminate overtones with factors above some limit. For these reasons, you should compensate by scaling amplitude with something like amp = 1/(f**k), where f is the frequency and k is a constant - usually between 1 (a pink-noise-like rolloff) and 2 (brown-noise-like).
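For illustration, here's a minimal sketch of that compensation applied to the same partial list - the synth name and defaults are mine, not from the original piece:

(
SynthDef("sine-additive-balanced", { arg baseFreq = 20, amp = 0.5, k = 1;
var partials = #[1,2,4,8,16,32,3,6,9,12,18,24,25,27,36,40,48,54,56,60,64];
//scale each partial's amplitude by 1/(n**k), n being the partial number
//(proportional to frequency): k = 1 is a pink-noise-like rolloff, k = 2 brown-like
var amps = amp / (partials ** k);
Out.ar(0, Pan2.ar(Mix.ar(FSinOsc.ar(baseFreq * partials, 0, amps)), 0));
}).load(s);
)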

Wednesday, October 23, 2013

"Horizontal Movements"


I composed these 2 pieces almost 2 years ago, and though they are very autobiographical, I never really talked about the inspiration for them. So I've added a description on SoundCloud.
Like a lot of my music, these 2 pieces are autobiographical, at least in the sense that they were composed during, and immediately in response to, some significant events in my life - in this case my involvement in the Occupy Wall Street protests. They were composed in late December 2011 through early January 2012, while I was taking some time off from activism to recuperate after the police chaos and setbacks of late November and December. The "Frozen Zone" of the titles refers both to the blockade that the NYPD set up in the Financial District in the middle of the night as they evicted protesters from Liberty Square, and more metaphorically to all systemic injustice.

Although the music doesn't convey any overt political messages, the act of writing it was kind of a meditation on the idea of leaderless organizing (the anarchist political philosophy of "Horizontalidad / Horizontalism"). I think of the sounds as being "horizontal" in the sense that there's no hierarchy working to organize them. The musical parts aren't divided into "instruments" or "voices" - there's just a single algorithm generating clusters of sine waves. Those individual sine waves, which only differ in pitch and amplitude, converge temporarily to create harmonies, and then dissipate.

Sunday, June 23, 2013

New Album: Orbital Slingshot

A 70-minute drone piece, composed with SuperCollider. You can stream it here, or purchase a download on Bandcamp. This is part of a series of studies that may someday lead up to a full-length (195 hours) sonification of the Apollo 11 mission.

Monday, July 9, 2012

Load Balancing your Audio Rendering on a Multi-Core Processor

This block of code will define 4 servers to render audio and assign the list of them to the variable ~z, to implement round-robin load-balancing of the audio rendering. SuperCollider isn't multicore-aware when rendering audio (at least in version 3.4), so to split the rendering across multiple cores, you have to assign one scsynth server per core (and then presumably OSX will assign each of these scsynth processes to its own core - check your Activity Monitor to see).


//instantiate 4 servers for a 4 core machine
Server.internal.boot;
Server.local.boot; 
~x = Server.new("two",NetAddr("127.0.0.1", 9989));
~x.boot; ~x.makeWindow;
~xx = Server.new("three",NetAddr("127.0.0.1", 9988));
~xx.boot; ~xx.makeWindow;

//you could also say Server.all here instead of the list of servers.
~z = [Server.internal, Server.local,~x,~xx];

//a placeholder synth
~z.collect({|z|SynthDef("my-synth",{nil}).send(z)})

//here's a simple player routine 
~r = Routine {
  inf.do({|x|
      //instantiate a synth, and set it to a random Server in ~z
      var sn = Synth("my-synth",nil,~z.choose);
      //set some arguments on the synth
      sn.set(\foo,x);
      1.wait;
  })
};
~r.play;
/*
a different example - hit each server sequentially.
this may not be balanced if each synth you instantiate takes different server resources
for example, if every other beat plays a long note, then the long notes will pile up on half of the servers.
*/
~r = Routine {
  inf.do({|x|
      //assign synths sequentially for each step on the routine
      var sn = Synth("my-synth",nil,~z.wrapAt(x));
      //set some arguments on the synth
      sn.set(\foo,x);
      1.wait;
  })
};
~r.play;

/*
keep hitting the same server until it starts to max out, then move down the list to the next server
*/
~r = Routine {
  inf.do({|x|
    var playing = false;
      ~z.size.do({|n|
        //signal starts to break up when peak cpu gets above 80 - but this can depend on your hardware
        /* 1000 Synths is the default max per Server - polling doesn't happen instantaneously so set this a little low */
        (~z.wrapAt(n).peakCPU < 80).and(~z.wrapAt(n).numSynths < 900).and(playing==false).if({
          //add our synth to nth Server in the list
          var sn = Synth("my-synth",nil,~z.wrapAt(n));
          //set some arguments on the synth
          sn.set(\foo,x);
          //break the loop
          playing=true;
        });
      });
      x.postln;
      //if all the Servers in ~z are over 80% capacity, the music carries on without playing the note 
      (0.1).wait;
  })
};
~r.play;

Saturday, July 7, 2012

Using Git with SuperCollider

SuperColliderCompositions GitHub repository

I've set up a GitHub repository of the source code of some of the music I've been writing with SuperCollider.  I'll be posting walkthroughs of some of the code as those pieces get played on the Backtrace livestream.

I've been writing music with SC for about 5 years now, and my coding habits have evolved very little in that time.  They're also curiously out of sync with how I write code for web-based projects.  If I'm writing PHP or Ruby I set up a project in TextMate, where code is split up into many files and folders based on what aspect of the application they address (templates, database queries, etc).   I always have a corresponding git repository, even if it's just a local repo to store my revision history.   I use git to deploy code to servers (just do a git pull into the web directory).  But with Supercollider, I started out in 2007 writing compositions in a single text file (synthdefs at the top, followed by sequences) and I still write code that way today.

I think part of the reason I developed this habit is the lack of dev environments in SC.  I have the SC bundle for TextMate now, so I can write code the same way I do for other projects - with a tree of multiple directories and code split into different files (some for synthdefs, some for sequences, some for utility functions).  But more than the lack of editor support, I have totally different motivations for writing when I'm doing code in SC.  When I'm doing a web project, I have some idea of what the completed code will be before I start writing - I'm given a set of requirements (or I have my own vision for the project) before I start, and I code to that vision.   With music in SC I rarely start out with an idea of what the final piece will sound like - the act of coding is a kind of improvisation or iterative process.  I write some code, play it, and when it's interesting enough, I record some to disk, then rename the file (as you can see in the git repo I posted, I have dozens of files named by the date when I was working on them).  Since SC is such a concise language, even complex pieces rarely involve more than 200 or 300 lines of code, so they're easy enough to manage as single text files.  I've thought of moving to a different client language like Ruby, which would allow better integration with the web and the rest of my dev process, but so much of my code relies on super concise language constructs like array adverbs, and in Ruby you just can't quite say something like

(64 * (4..10).nthPrime *.x (2**(0..5)))

for example, to generate an array of frequencies based on the prime-numbered overtones (11 through 31) of the fundamental pitch 64 Hz, spread across 6 octaves.  (Incidentally, with SC you could write entire La Monte Young compositions in fewer characters than their own titles.)
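Working through that expression step by step:

(4..10).nthPrime        // [ 11, 13, 17, 19, 23, 29, 31 ]
64 * (4..10).nthPrime   // [ 704, 832, 1088, 1216, 1472, 1856, 1984 ]
2**(0..5)               // [ 1, 2, 4, 8, 16, 32 ]
// *.x is the cross-product adverb: every frequency times every octave
// multiplier, flattened into one 42-element array:
// [ 704, 1408, 2816, 5632, 11264, 22528, 832, 1664, ... ]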

There's also the matter of when a piece is "done".  When building web apps there are obvious points to commit and deploy - some functional component works, so you do a commit, and when you have enough features committed, you deploy those commits to a server.  With music, when do you "commit changes"?  It's more intuitive than objective - if I like what I'm hearing as I write, I'll commit the code.  But if I'm using a lot of environment variables or higher-order functions, that code may not actually represent a blueprint of the sounds being played with the same 1:1 correspondence that a traditional score (or MIDI file) gives you.  What I need is a kind of "commit snapshot" function that would commit the code and the contents of all of the environment variables at that moment - and that's really more of a "save content to database" action in my mind than a "commit code to repository" action.
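A crude version of that snapshot idea, just to make it concrete (the path is a placeholder): dump the current environment next to the code as a compile string.

//hedged sketch: capture the environment variables alongside a commit
f = File("~/snapshots/20120707-env.scd".standardizePath, "w");
f.write(currentEnvironment.asCompileString);
f.close;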

What's the SC equivalent of "deploying"?  One of the reasons why I've set up the Backtrace Livestream is to have a space for the "deploy to production" moment in my workflow (also, one of these days I should unpack just how much my own thinking about music is structured by software dev practices).  Most of the music I've been composing in the last 5 years is open-ended algorithmic stuff, so even when I do create recordings, they're not a realization of a composition as much as a document of a moment.  And still, many of them are hours long, and while disk space is cheap, at that scale (~100 MB per recording) the cost just to stash them on a virtual server (let alone in SoundCloud) does add up.  Having whatever I'm working on running in a live channel maps more closely to how I intend the music to be heard.  It also lets me not have to worry about the whole "how do you perform deterministic algorithmic digital music" question - the sounds are available to anyone with internet access, and there's no beginning or end to them.

In a sense, everything I've written in SC has been variations on a single theme.  I'm not sure how doing commits to a single file with a revision history will work with my process.  Often I'll keep code for the last 5 or 10 revisions of a piece open and switch out certain components (a harmonizer function or a sequencer function) to see how they sound - maybe some days I'm more inspired to tweak the balance of sounds in the spectrum, and other days I'm more inspired to come up with interesting rhythms.  If I move to git, I might actually start totally reconsidering how I think of my code - seeing most of the musical details as data instead of code.  Unfortunately SC doesn't have any native capabilities for saving to databases, though there is a JSON quark, I think, so I could set up a bridge with something like MongoDB or Redis and a simple Ruby app.  Which is a whole other topic.

Friday, July 6, 2012

Stream Supercollider on an Ubuntu Cloud Server using Darkice, Jackd and Icecast

The scenario - you have some kind of generative music app that you've written in Supercollider, or you want to have a Supercollider server in a stable place that's accessible to the whole internet, perhaps to have it collect data from or send data to other apps.  A cloud server is just a Linux machine in the cloud, and Supercollider runs on Linux, so why not run Supercollider in the cloud?

TL;DR (and caveat) - I never really had a solid deployment with this setup, and I suspect this is probably easier to do with Overtone.  Overtone is a Supercollider client/server implementation built in Clojure.  Clojure is a Lisp dialect built on top of the JVM, so you get the whole ecosystem of tried and true Java apps wrapped in a fun client language.

If you have a supercollider stack running on a webserver, please comment - the deployment which I'm describing here is not optimal and I'd like to hear how others have done it.

Getting this set up involves a lot of moving parts, and the process could probably be improved.  The deal is you get sclang running, which lets you build synths and control them, and then you need to take the signal generated by scsynth and pipe it to your Icecast server - all without actually having a soundcard on your virtual machine.  This is where the recursively-named JACK Audio Connection Kit comes into play.  Jack does a lot of things - one of them is to act as a virtual patch cable that lets you send audio signal between running apps without the use of a soundcard.

So the whole chain is Sclang -> Scsynth -> Jackd -> Darkice -> Icecast2, plus Monit to make sure the whole thing stays running.


Sclang runs your supercollider code on the server.  (You could also control supercollider from a separate client on a home computer.)  Scsynth generates the audio signal.  Jackd patches that signal into your streaming encoder.  Darkice is the streaming source client, which encodes the audio signal for the web and feeds it to Icecast2, the application that takes the audio stream and serves it up in a web page, or to an app like iTunes.
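To make that concrete, here's a rough sketch of the server-side pieces - the ports, password and mount point are placeholders, not a tested deployment:

# start jack with the dummy (no soundcard) driver, then scsynth
jackd -d dummy -r 44100 &
scsynth -u 57110 &

# darkice.cfg - read from jack, encode, feed icecast2
[general]
duration = 0          # stream with no time limit
bufferSecs = 5

[input]
device = jack         # take input from the JACK ports
sampleRate = 44100
bitsPerSample = 16
channel = 2

[icecast2-0]
bitrateMode = cbr
format = vorbis
bitrate = 128
server = 127.0.0.1
port = 8000
password = hackme
mountPoint = stream.ogg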

Here are some pages that I found helpful in getting things set up:

Setting up Darkice

Jackd and darkice, with jackd running dummy soundcards

Running darkice as a background process

What is Darkice


One of the problems I had with this setup is that when you're running under constant CPU load, you're much more likely to crash on a virtual server than on a desktop computer or other dedicated hardware. The CPU allocation on a virtual server isn't really constant, and spikes in usage can push scsynth past its allocation, at which point the scsynth process will crash.  You can use something like Supervisor or Monit to restart the process after a crash.

This is a blog post from howtonode.org about using monit and upstart to restart a process after it crashes.

I'm not currently running this stack on backtrace.us - if I have time to get a stable deployment going, I'll post an update with more notes on configuration.

Sunday, July 1, 2012

Streaming Audio with Supercollider: Ustream vs Icecast

Live Audio Streaming of Supercollider on Backtrace

So I've been messing around again with long-playing generative music on Supercollider.  Sequencers that play incremental variations on things on an infinite loop, with some external data thrown in here or there to tweak certain parameters.  Here's an example:

I started writing this kind of music back around 1998, while living in Florida, on a 25 MHz 486 PC that I had purchased at a thrift store for $50 - using the internal speaker, no less.  It was a few months before I figured out how to program for the SoundBlaster in QBasic.

I had done a few installation shows or performances with this kind of music, but getting it on the internet as a continuous stream has been a goal for about 10 years.  With my involvement in Occupy I came into contact with a lot of livestreaming - I'd actually used some of the services in 2009 while working at a digital marketing agency, but that association kind of made me forget that this was also something ordinary users could set up for themselves very easily.

So this week I set up a Ustream channel for myself and started streaming with Supercollider off of my Mac Mini.  You can hear some of the archived recordings on that page.

Ustream has been kind of a mixed bag.  On the one hand, getting up and running is dead simple - you can use a web-based broadcast app (no need to install software) and use Soundflower to grab sound from your desktop.  You can't do any video this way, only audio.   However, getting a screencast going is pretty simple with their desktop capture software.  It's also very easy to make lengthy recordings that can be played back on your channel homepage - just hit the record button and they will record and archive your stream.  Not sure what the restrictions are on how many hours you can have archived and how long they last.

Ease of use is about all I can say positive about Ustream for this (admittedly non-standard) use case.  The commercials are hugely disruptive and seem to come about every 10 minutes.  Commercials are also played on the archived streams.  In order to stream without commercials, I'd have to get a Pro broadcaster account, which would cost several hundred dollars per month (it's paid per viewer-minute, so more viewers = more cost).  Totally out of the question.  The sound quality is also unacceptably poor - an mp3 stream compressed at 96kbps might be fine for speech, but doesn't work for music.

Ustream is working on an audio-only mode, which could provide a better experience for people who want to stream music from their desktops.

I've had an Icecast server running on backtrace.us for something like a year now.  Finally got it up and running on a proper subdomain too, instead of having the port 8000 showing up in the url.  (I did this through reverse proxying with nginx; a sketch of the conf is below.)  I've been experimenting with running a supercollider server on a headless Linux virtual machine in the cloud, but overall that's been a mixed bag too.  It's cool to have a dedicated computer that can render audio forever without tying up your home computer (right now if I want to watch a DVD, the music streaming stops, but I'll probably separate the music streaming out to another computer eventually).   And datacenters are generally more reliable than home computers in terms of uptime as well.  The downside is that the scsynth and sclang processes are finicky on a virtual machine - they crash a lot, probably due to cpu spikes from other processes running on the physical machines that my virtual shares.  Plus there's all the moving parts of syncing SC with Jack and Ices in order to actually get your audio data broadcasting over the web.
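Here's a minimal sketch of that nginx reverse-proxy block (the subdomain is a placeholder):

# proxy a subdomain to the icecast2 server listening on port 8000
server {
    listen 80;
    server_name stream.backtrace.us;   # placeholder subdomain
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}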

What I did realize this week from playing around with the Ustream studio software was that I could use my home computer as a source for the streaming server on backtrace.us.  Just like a person uses their phone or laptop camera to get a video stream without having hundreds of viewers connect directly to their laptop, I could do the same with supercollider and sound.  I had been using Nicecast mostly just to archive recordings of Supercollider locally, or to listen around the house, but I had never tried setting it up as a source for my streaming server.  Doing this was a snap - just go to the server tab in Nicecast and enter the url for the streaming server, plus the password for "source" users (this is defined in your /etc/icecast2/icecast.xml).  No need to mess around with compiling a version of Ices on your linux box that will play mp3s, or with Jackd - just set Nicecast to broadcasting and have it grab the signal from Supercollider.   If you're rendering on multiple servers (to spread the load out over multiple cores), don't forget to have Nicecast capture all sub-processes from your source (this is in the "Advanced" settings when you do Source: Application).  You can use inline effects on your stream to improve sound quality (like compression or EQ), and you can broadcast at whatever bitrate you want.  Nicecast isn't free, but it's pretty reasonably priced for a lifetime license.

TL;DR Summary:


Streaming Audio with Ustream:


Pros:

  • Easy to set up
  • Free
  • Simple recording and playback of streams online - good for sharing
  • You can control Supercollider while it's playing
  • Screencasting is an option if you want to do livecoding performances
Cons:
  • Too many commercials
  • Low quality audio
  • Premium Service is very expensive
  • Requires a dedicated home computer

Running Supercollider with Icecast in the Cloud:


Pros:
  • Doesn't tie up your home computer
  • High-quality audio
Cons:
  • High-maintenance, unreliable deployment
  • Requires considerable knowledge
  • Have to pay for hosting
  • Can't control Supercollider while it's running

Streaming with Nicecast locally, and Icecast in the Cloud (Best Option)


Pros:
  • High-quality audio
  • Nicecast is easy to set up; an Icecast2 server is not difficult if you know Linux packages
  • Total control over the stream - no commercials, no 3rd-party services
  • You can control Supercollider locally while it's playing
Cons: 
  • Requires a dedicated home computer and reliable internet connection
  • Have to pay for hosting / bandwidth for the Cloud server
  • Have to upload and host archived recordings
  • Have to pay for Nicecast
You can check out the stream here in your browser.

Or here in your mp3 player.

Next up is a walkthrough of the code used to generate the music here.


Monday, September 26, 2011

Otomata + Monome * Supercollider

Last week I got a Monome and I've been playing with some Otomata stuff using a Supercollider implementation by Corey Kereliuk.
The first composition I put together was a simple, cheerful minimalist piece, and one of the first things I've done using equal temperament (midi notes) in about 5 years.


20110918 Otomata by Backtrace

I started tweaking the code so that I could add more instruments (the above example has a percussive instrument and a sustained pulse-wave instrument) and have the cellular automata trigger a callback function when they hit a wall, instead of just triggering a synth. I could then load that callback function with whatever synths I wanted. I also started polling the instantiated Otomata object itself for global data (like the x,y positions of all the automata at a given moment) so I could use that as musical data. You can hear chord changes in this piece - I had the program count the number of ticks the sequencer routine had been running and store it in a global variable, which I then used to cycle through a set of different scales (sketched below).
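That tick-counting trick reduces to something like this sketch (the names and the scale set are mine, not from the original code):

//count sequencer ticks in a global and use the count to cycle scales
~ticks = 0;
~scales = [ [0,2,4,5,7,9,11], [0,2,3,5,7,8,10] ];   //e.g. major, natural minor
~currentScale = { ~scales.wrapAt(~ticks div: 64) }; //change scale every 64 ticks
//inside the sequencer routine, once per step: ~ticks = ~ticks + 1;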

After 4 days straight of playing with this stuff, I think you can sense the burnout setting in a little with this piece (at least that's how I felt about it - not that I feel burned out creatively on Otomata, but that this is where my brain goes around 3 AM after playing with sounds all day):

20110922 Otomata by Backtrace

There are multiple instruments being triggered, and some effect parameters (filter sweeps) tweaked by the global state of the Otomata board. After recording this piece I decided it was time to clean up the code and try to get a useful, standalone application out of it.

I modified the original Otomata.sc class into a Proto (so I could tweak it at runtime, without having to recompile SC). I'm working on a Proto version of the Automaton class as well.


Musical information (scale, synths, starting pitch) was decoupled from the sequencer logic - scale and synth are now controlled via the synthFunc callback function and can be switched dynamically while the Otomata is running. See how the synthFunc variable is added to the Automaton class in Automaton.sc, and examples of its use in otomata.scd.

Methods to add and remove automata from a running otomata - ~removeOldest, ~removeNewest, and ~removeNth.

Global metadata about the otomata gives additional musical parameters across all of the automata - see ~dSum, ~xySum, ~xSum, and ~ySum. The ~age attribute can be accessed to change values over time. I'd like to add similar attributes to each automaton: age, # of collisions, # of wall hits, "dizziness" (# of right-angle turns over time).

In the example code in otomata.scd I show how to use supercollider's function composition operator <> to attach multiple sound callbacks to one automaton. If you have 2 functions f(x) and g(x), then h = f <> g creates the function h(x) = f(g(x)) - so whatever g returns is passed on to f. If you're chaining synth callbacks, each function should return the same "note" value that it takes.
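A quick sketch of that chaining (the synth names are hypothetical) - each callback plays something and passes the note along:

//each callback returns the note it receives, so f <> g works as a chain
~bell = {|note| Synth("bell", [\freq, note.midicps]); note };
~pad  = {|note| Synth("pad",  [\freq, note.midicps]); note };
~synthFunc = ~bell <> ~pad;  //equivalent to {|note| ~bell.(~pad.(note))}
~synthFunc.(60);             //both synths fire for middle C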

Thanks to Corey for posting his code, and to Batuhan Bozkurt for designing the Otomata.

The code posted here is really meant as an example of how to use callback functions in SC and a couple of other techniques - but feel free to use it if you'd like. In the near future I'll post something that's a little more conceptually coherent.

Sunday, May 29, 2011

Pendulum Studies


The period of one complete cycle of the dance is 60 seconds. The length of the longest pendulum has been adjusted so that it executes 51 oscillations in this 60 second period. The length of each successive shorter pendulum is carefully adjusted so that it executes one additional oscillation in this period. Thus, the 15th pendulum (shortest) undergoes 65 oscillations.


Inspired by this youtube video, some Supercollider instruments based on periodic fluctuations in amplitude over a series of overtones. The first one mimics the motion in the video - 15 sine waves whose amplitudes pulse at rates from 51 to 65 cycles per minute.




(
//t is the timing of the pendulum swinging
var t = (51..65)/60;
//p is the pitch, 15 overtones of 60 hz
var p = 60 *(1..15);
//modulate the amplitude of each pitch p by t
{Mix.ar(Pan2.ar(FSinOsc.ar(p,0,LFPulse.kr(t,0,0.5,1,0) * 0.04),SinOsc.kr(t,0,1,-0.5)))}.play;
)


Pendulum-15-60 by Backtrace


(
var t = (201..328)/256;
var p = 64 *(1..128);
{Mix.ar(Pan2.ar(FSinOsc.ar(p,0, (1/((p+64)*p.log2)) * LFPulse.kr(t,0,0.5,1,0) *48),SinOsc.kr(t,0,1,-0.5)))}.play;
)

You can always add more oscillators to this series:
Pendulum-128-256 by Backtrace

Saturday, March 26, 2011

A Tweetable Drone

Make a drone in Supercollider that can be posted on Twitter with few enough characters to include the #supercollider hashtag.

{Mix.ar(Pan2.ar(Blip.ar([64,63]*.x(3/2**(0..5))*.x 
Blip.kr(0.0001/(99..96),3,1/2**(13..18),1),9,0.03),[-1,1]/.x(1..9)))}.play


This is how many years it takes for the cycle to repeat
(1/(0.0001 /(99..96)).product) / (3600 *24 *365.25)


The lowest note is 63 Hz and the highest is 63 * (3/2**5) * 9 = 4305.65625 Hz.

The fundamental frequencies are 64 and 63 Hz. 1/64 is the difference between an overtone major third and a major third in Pythagorean tuning (5/4, or 80/64, vs 9/8 * 9/8, or 81/64). 1:64 is also the ratio between a pitch and another pitch 6 octaves above it, so in a sense you can think of the 1 Hz beating rhythm as the pitch, 6 octaves down.

This code defines 2 sine waves at 64 and 63 Hz. I'm using the Blip UGen because it has fewer characters than SinOsc, and because we can use it to add more overtones to our sound without adding more characters to the code:

{Mix.ar(Pan2.ar(Blip.ar([64,63],1,0.3)))}.play;


Fundamental tone by Backtrace


Add overtones up to the 8th overtone:


{Mix.ar(Pan2.ar(Blip.ar([64,63],9,0.3)))}.play

Fundamental with overtones by Backtrace


Now, just the sine waves, but harmonized. The expression (3/2**(0..5)) produces an array with these values
[ 1, 1.5, 2.25, 3.375, 5.0625, 7.59375 ]

Which are the first 6 steps around the circle of 5ths, or if this were written in C major, you would start at C and go up a perfect 5th to G, then D, A, E and B natural, all pitches within the key of C major.

{Mix.ar(Pan2.ar(Blip.ar([64,63]*.x(3/2**(0..5)),1,0.1)))}.play;

Harmonized in 5ths by Backtrace



Now the same chord, with the overtones. This is the complete set of pitches in the piece, but due to phase cancellation all of these pitches are never heard at once.
{Mix.ar(Pan2.ar(Blip.ar([64,63]*.x(3/2**(0..5)),9,0.1)))}.play;

All pitches by Backtrace


This is the shape of the modulation on the pitches - using Blip.kr. You can hear how the pitch takes different-sized dips, large and small. In the final piece, the modulation is much slower and more subtle.


{Mix.ar(Pan2.ar(Blip.ar([64] *.x Blip.kr(1/4,3,1,2),1,0.03)))}.play;


Modulator shape by Backtrace


Here, multichannel expansion produces 4 pairs of [64,63] modulating in patterns that are increasingly out of step with each other. The multiple modulators are created using the expression (99..96), which produces an array with values [99,98,97,96]; this signal is then multiplied by the frequencies [64,63] via the *.x operator.

{Mix.ar(Pan2.ar(Blip.ar([64,63] *.x Blip.kr(1/(99..96),3,1/2,1),1,0.03)))}.play;

Multiple modulators by Backtrace


Now let's hear the same, with overtones.
{Mix.ar(Pan2.ar(Blip.ar([64,63] *.x Blip.kr(1/(99..96),3,1/2,1),9,0.03)))}.play;

Overtones with large modulators by Backtrace


Now, let's slow the modulation down 10,000 times and make the actual pitch bending much, much smaller - a range from 1/(2**13) to 1/(2**18). These changes in pitch are far too small to hear: the largest is 1/8192 multiplied by the highest pitch, 4305.65625 Hz, which comes to about 0.53 Hz. What you hear instead is the sound of pitches up and down the overtone series fading in and out via phase cancellation.

{Mix.ar(Pan2.ar(Blip.ar([64,63]*.x Blip.kr(0.0001/(99..96),3,1/2**(13..18),1),9,0.05)))}.play;

Overtones with small modulators by Backtrace


Finally, we add our harmony, in stacked perfect 5ths.
{Mix.ar(Pan2.ar(Blip.ar([64,63]*.x(3/2**(0..5))*.x Blip.kr(0.0001/(99..96),3,1/2**(13..18),1),9,0.05)))}.play;



The last step is to revisit our Pan2 UGen and use multichannel expansion, ranges and operator adverbs to spread the Blip UGens across 18 pan positions, from full left to full right:
[-1,1]/.x(1..9)
produces:

[ -1, -0.5, -0.33333333333333, -0.25, -0.2, -0.16666666666667, -0.14285714285714, -0.125, -0.11111111111111,
1, 0.5, 0.33333333333333, 0.25, 0.2, 0.16666666666667, 0.14285714285714, 0.125, 0.11111111111111 ]


{Mix.ar(Pan2.ar(Blip.ar([64,63]*.x(3/2**(0..5))*.x
Blip.kr(0.0001/(99..96),3,1/2**(13..18),1),9,0.03),[-1,1]/.x(1..9)))}.play

At 127 characters, we still have room to add the #supercollider hashtag:
{Mix.ar(Pan2.ar(Blip.ar([64,63]*.x(3/2**(0..5))*.x 
Blip.kr(0.0001/(99..96),3,1/2**(13..18),1),9,0.03),[-1,1]/.x(1..9)))}.play #supercollider

Tweetable drone by Backtrace

Monday, February 21, 2011

A Short Study In Recursive Functions


(
//define our basic sine synth, with pink noise amplitude correction: amplitude = 1/frequency
z = Server.internal;
SynthDef("just-sine",{|freq=100,atk=0.01, rel=10, sus=(-4), amp=0.005,pan =0,gate=0, mr=1, ma=1, mm=1|
Out.ar(0,Mix.ar(Pan2.ar(FSinOsc.ar( [1,1] *.x freq,0,amp/(freq+192)),pan)) * EnvGen.ar(Env.perc(atk,rel, 1, sus),gate, doneAction:2) );}).load(z);
)


(
//recursive function to bring frequency f below 2**14 or quit after 20 tries
~pitchCorrect = {|f,s=1|
(f < (2**14)).or(s>20).if ({f;},{ ~pitchCorrect.(f / (2**((f.log2.floor)/2).ceil),s+1);});

};
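//e.g. ~pitchCorrect.(2**20) divides by 2**10 and returns 1024, since 1024 < 2**14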

//express i as an array of binary digits (a redo of the .asBinaryDigits method, for integers > 8 bits)
~count = {|i|
({|x|((1)*(i/(2**x)%2).floor)}!i.log2.ceil).asInt[0..(i.log2/2).floor.asInt];
};


~r = Routine {
~bounce = {|p,n,s,max|
var c = ~count.(p), pc= (2**(c.sum)).asInt, ps = (c.size+1).log2.ceil;
p.postln;
pc.factors.collect({|pp,j|
var m = Synth("just-sine"), pf=p.factors.permute(p).wrapAt(j).log2.ceil;
m.set(\freq,~pitchCorrect.(pf * 40 *(n**((1+c.sum).log2.ceil%2))*(((ps+1)/ps)**(pc.factors.size+n)))+j);
m.set(\pan, 1/p.factors.size * (-1**j));
m.set(\rel,(p.factors.size+n+s).sqrt * (1/~count.(p).size.log10),\atk,(n+s)/2048,\sus,-2 * p.factors.size.log2,\amp,32 * (1/(n+1)),\gate,1);
});
(1/(n+s)).wait;

(max<0).if({^max});

(n>s).if({~bounce.(p+1,1,~count.(p+1).sum,max-1) },{~bounce.(p,n+1,s,max)});
};

//start at 1024, go for 512 steps
~bounce.(1024,1,2,512);
};
~r.play;
)

z.scope


201102211 by Backtrace

Wednesday, January 19, 2011

Problem 3 - The Largest Prime Factor of a Composite Number

The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?

This one had me stumped for a while, but mostly because of some type-casting issues with Supercollider. sclang comes with an Integer.factors method that works great for 32-bit integers - up to a point.
So while
13195.factors 

correctly returns
[5, 7, 13, 29]


If you ask
600851475143.factors

it curiously returns
[ 2147483647 ]

which is, probably not coincidentally, the Mersenne prime (2**31)-1, and probably has something to do with sclang overflowing the bit register for a 32-bit integer.

So after a few hours of fighting with sclang's weird type-casting issues, I came up with the following recursive function:

//function to find distinct factors of num, not complete factorization
//factors is an empty Array that we can add to

~f = {|num, f=2,factors=([])|
/*
break if we've already gotten to the square root of num
the final value of num is the last factor
because it was the last even divisor of num/f
*/
(f>num.sqrt).if({factors.add(num);factors.postln;^factors});

((num%f)==0).and(factors.detect({|c|f%c==0})==nil).if({
~f.(num/f, f+1,factors.add(f));
},{
~f.(num, f+1,factors);
});
};


//number in question has to be cast as a Float for 64-bit precision
~f.(600851475143.0);


It's pretty fast, though not tail-recursive (and the 2 function calls seem like bad form; there's probably also a better way to do recursive functions than with the pseudo-global ~variable stuff). It doesn't do a complete factorization, just finds the unique factors of the number. The trick to gaining speed is that once you find a factor of the composite num, you divide num by that factor, so the number you continue to factor is a little smaller - even if you're just testing every integer sequentially for divisibility, your divisor catches up with num's square root pretty quickly. It probably helps that in this case the test number has no repeating factors and most of its factors are pretty large. One way to fold the two recursive calls into a single call site is sketched below.
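A version along those lines - a sketch, with the same behavior as the original:

//single recursive call site; factors defaults to a fresh List on the first call
~f = {|num, f=2, factors|
factors = factors ? List[];
(f > num.sqrt).if({
factors.add(num);   //whatever is left of num is the last factor
},{
((num % f) == 0).and(factors.detect({|c| f % c == 0}).isNil).if({
factors.add(f);
num = num / f;  //shrink num so the search ends sooner
});
~f.(num, f + 1, factors);
});
factors;
};

~f.(600851475143.0);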

Problem 2 - The Even Fibonacci Numbers

Problem 2:
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.



I solved this using a basic recursive function - it's a fairly brute-force solution.

~f = {|a=1,b=2,e=2|
var c = a+b;
(c<4000000).if ({
//add any even term to e
e = e+(c%2==0).if({c},{0});
~f.(b,c,e);
}, {
e;
});
};
~f.();


Here's the code for a sonification that generates a spooky little overture. It uses the sine tone generator from Problem 1:


~f = {|a=1,b=2,e=0|
var c = a+b;
(c<4000000).if ({
//add any even term to e
e = e+ (c%2==0).if({c},{0});
e.log.postln;
(e>0).if ({
Routine {
var s1,s2;
//put the wait at the front - rhythms accumulate as a duration of silence before the note
c.log.wait;
s1 = Synth("sine");
s1.set(\freq,64 * e.log,\atk,0.01,\rel,64 - c.log,\amp,0.1,\gate,1);
}.play;
});
~f.(b,c,e);
}, {
e;
});
};
~f.();


One thing the sonification of this solution makes really apparent is how the ratio of adjacent members of the Fibonacci series approaches a constant value (the golden ratio). The function pre-generates all of the notes before they are played, each one waiting its turn to play as determined by the value of c.log.wait at the beginning of the Routine. I usually like a lot of symmetry in my music, which means I usually do things in integers or ratios of integers - yet in this sequence, the notes all sound at very regular intervals even though their start times are not based on an integer. The pitch is determined by the logarithm of the variable e, the running sum of even-valued terms of the sequence - which increases every 3rd term.
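You can check that convergence numerically - the naive recursive fib here is mine, fine for small n:

~fib = {|n| (n < 2).if({ n }, { ~fib.(n - 1) + ~fib.(n - 2) }) };
//ratios of successive terms: [ 1.6, 1.625, 1.615..., 1.619..., ... ]
//closing in on the golden ratio, 1.6180339...
(5..12).collect({|n| ~fib.(n + 1) / ~fib.(n) });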

For a relatively small number of pitches this generates a pretty rich sound - there's more for me to explore here.

Tuesday, January 18, 2011

Some Sonifications of Problem 1.

First - boot the server and create a really basic sine wave synth:

z = Server.internal;

SynthDef("sine",{ |freq=100, atk=1, rel=1,sus=1, amp=0.005,pan =0,gate=0 |
Out.ar(0,Mix.ar(Pan2.ar(SinOsc.ar( freq * [1,1],0,amp),pan)) * EnvGen.ar(Env.perc(atk,rel,sus),gate, doneAction:2) );
}).load(z);


The first sonification is really basic - just run through the sequence of numbers returned by the series, and if the value f is greater than 0, play a note at that pitch (times 8, so that the lowest notes are audible). The amplitude is scaled back over a pink noise curve (1/f) so that the low freqs are nice and full and the high freqs are not piercing.


(
Routine {
//generate our numbers
({|i|i}!1000).collect({|i|((i%3==0).or(i%5==0)).if({i},{0})}).collect({|f,i|
var s;
f.postln;
(f != 0).if({
//allocate a sinewave synth
s = Synth("sine");
//frequency
s.set(\freq,f*8);
//scale back the amplitude
s.set(\amp,0.001,\atk,0.001,\rel, 1/3 ,\sus,-32,\gate,1, \amp, 1/(f+1) * 1/16);
},{});
(1/12).wait;
}).sum;
}.play;

)

You can hear the 3/5 rhythm really clearly in this example:


1.0 by Backtrace

In the second sonification, I leave the notes on a long release so that many of them play simultaneously, and I lock the pitches to a single octave. As the notes build up, you can hear the same 3/5 rhythm, only this time as a difference tone between frequencies.


(
Routine {
({|i|i}!1000).collect({|i|((i%3==0).or(i%5==0)).if({i},{0})}).collect({|f,i|
var s, freq;
freq = f*(512/ (2**f.log2.ceil));
f.postln;
(f != 0).if({
s = Synth("sine");
s.set(\freq,freq );

s.set(\amp,0.001,\atk,0.001,\rel,(1000-i)/8,\sus,-32,\gate,1, \amp,(1/ freq )*1/12);
},{});
(1/12).wait;
}).sum;
}.play;
)


1.1 by Backtrace

Problem 1 - Multiples of 3 and 5

Problem 1:
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.

This is essentially FizzBuzz revisited - in Supercollider you can find the solution with one line of code:

({|i|i}!1000).collect({|i|((i%3==0).or(i%5==0)).if({i},{0})}).sum;

({|i|i}!1000)
Generates the integers 0 to 999

.collect({|i|((i%3==0).or(i%5==0)).if({i},{0})})
.collect is an iterator method that maps an anonymous function over a Collection and returns a new collection - in this case we test whether i modulo 3 or i modulo 5 is 0; if it is, we return the value of i. Otherwise, we return 0 (which adds nothing when we sum to get the final answer).
In Supercollider, Booleans are objects like everything else, so you can write a conditional as (expression).if({true},{false}) - where if is a method on the Boolean that takes 2 functions as arguments, one for the true condition and one for the false.
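For example:

(10 % 5 == 0).if({ "divisible" }, { "not divisible" }); //returns "divisible"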

.sum
Finally, we call the .sum method on the collection to return the sum of the collection's elements.

In the next post I'll post some sonifications of this algorithm.