Thursday, October 11, 2012

Apache Benchmark Tips

Apache Benchmark (ab) is a great tool for doing quick load testing on websites. Here's a patched version that will let you run Apache Benchmark against different pages on your site in the same batch (not just the homepage). Found via Rainbow Chard.
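
For reference, stock ab runs all of its requests against a single URL - something like:

# 500 requests, 10 at a time, against one page
ab -n 500 -c 10 http://www.example.com/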

Tuesday, July 24, 2012

Working with Node.js

Been working with node.js for the past week.  Going to start with some simple things like a Rock, Paper, Scissors game.  I'd like to try some music, using node to send OSC.  Here's an example (not mine) of using node.js with a monome:

Thursday, July 12, 2012

I Put an Album on Bandcamp

"Pythagorean Clouds" on Bandcamp

Because sometimes I do actually record "albums".  Used to a lot more, back when I was doing more beat-oriented stuff with Buzz.  I might put some of those older things on Bandcamp too.  Since Bandcamp caps the song length for new users at 26 minutes (with some snark about jam bands and "Serious Ambient Composers" (ahem) on top of that), there's not that much that I've recorded in the last 5 years that I can post.

Anyway, "Pythagorean Clouds" is a collection of shorter pieces I wrote last summer, inspired by seeing a Pendulum Video I had seen on BoingBoing.  That led to developing a new "instrument" based on stacking up tones with amplitude oscillators and playing cyclical sounds, as opposed to the very linear stuff I had been doing involving factors of integers.  There are some pieces in there too involving modulated sine waves that were inspired by trying to compose 140-character drones that I could fit into a Tweet.

I might post some other older stuff on Bandcamp, or some excerpts of newer stuff.

Monday, July 9, 2012

Using Git for Things Besides Code

Does anybody out there use Git for other work / projects besides writing code?
I use it to keep backups of prose text that I write (journals, stories, essays, etc, though not this blog).
There are projects out there that use Git for blogging, like How to Node - just fork the blog, commit your article, and send a pull request.

There's also Octopress - which combines Git with Ruby to make a blogging engine.  Since Backtrace.us runs on Sinatra, and I'm starting to use a lot of custom css and html in this blog, I'd probably be better off moving my blog off of Blogger and onto a self-hosted system like Octopress.  (Then again, I've been on Blogger for almost 12 years, and the convenience is nice.)

Is anyone using Git for revisions with binaries?  Keep a repository of PSDs, PDFs, or Word docs?  When I set up a repo for work projects I usually include an "assets" directory where I put things like wireframes, comps, and specs (PDFs and PSDs mostly) and then commit them as revisions happen to the project docs, instead of creating the anti-pattern of tracking revisions thru filenames - nothing drives me crazy like going into a project directory and seeing files with names like "Wireframes-0612-version2-final-final.pdf"
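
The workflow is plain git - one filename, with the revision history living in the repo instead of in the name:

git add assets/wireframes.pdf
git commit -m "wireframes: round 2 revisions from client feedback"
git log --oneline -- assets/wireframes.pdf   # every revision of that one file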

Anybody do any other crazy things?  Commit and rollback your Redis cache? Track revisions to audio / video files you're editing?  Leave a comment if you do...

Load Balancing your Audio Rendering on a Multi-Core Processor

This block of code defines 4 servers to render audio and assigns the list of them to the variable ~z, to implement round-robin load balancing of the audio rendering.  SuperCollider isn't multicore-aware when rendering audio (at least in version 3.4), so to split the rendering across multiple cores, you have to run one server per core (and then presumably OSX will assign each of these scsynth processes to its own core - check your Activity Monitor to see).


//instantiate 4 servers for a 4 core machine
Server.internal.boot;
Server.local.boot; 
~x = Server.new("two",NetAddr("127.0.0.1", 9989));
~x.boot; ~x.makeWindow;
~xx = Server.new("three",NetAddr("127.0.0.1", 9988));
~xx.boot; ~xx.makeWindow;

//you could also say Server.all here instead of the list of servers.
~z = [Server.internal, Server.local,~x,~xx];

//a placeholder synth - it makes no sound and never frees itself
~z.collect({|z| SynthDef("my-synth", {nil}).send(z) });

//here's a simple player routine 
~r = Routine {
  inf.do({|x|
      //instantiate a synth, and set it to a random Server in ~z
      var sn = Synth("my-synth",nil,~z.choose);
      //set some arguments on the synth
      sn.set(\foo,x);
      1.wait;
  })
};
~r.play;
/*
a different example - hit each server sequentially.
this may not be balanced if each synth you instantiate takes different server resources
for example, if every other beat plays a long note, then the long notes will pile up on half of the servers.
*/
~r = Routine {
  inf.do({|x|
      //assign synths sequentially for each step on the routine
      var sn = Synth("my-synth",nil,~z.wrapAt(x));
      //set some arguments on the synth
      sn.set(\foo,x);
      1.wait;
  })
};
~r.play;

/*
keep hitting the same server until it starts to max out, then move down the list to the next server
*/
~r = Routine {
  inf.do({|x|
    var playing = false;
      ~z.size.do({|n|
        //signal starts to break up when peak cpu gets above 80 - but this can depend on your hardware
        /* 1024 nodes is the default max per Server - polling doesn't happen instantaneously, so set this a little lower */
        (~z.wrapAt(n).peakCPU < 80).and(~z.wrapAt(n).numSynths < 900).and(playing==false).if({
          //add our synth to nth Server in the list
          var sn = Synth("my-synth",nil,~z.wrapAt(n));
          //set some arguments on the synth
          sn.set(\foo,x);
          //flag this step as played (the inner loop still runs to completion)
          playing=true;
        });
      });
      x.postln;
      //if all the Servers in ~z are over 80% capacity, the music carries on without playing the note 
      (0.1).wait;
  })
};
~r.play;

Sunday, July 8, 2012

Output HTML Documentation of Your Code with TextMate

Just a tip - if you want to publish code examples from TextMate and include the syntax highlighting of your TextMate theme, you can run the command:

Bundles -> TextMate -> Create HTML From Document

There's also a command in there to create a css stylesheet from your theme.

/* define synths and send them to the servers (this assumes the ~z server list from the load-balancing post above) */
~z.collect({|z|
  SynthDef("pulser", {|freq=100, m=#[1,1,1,1], atk=1, fatk=1, rel=1, sus=1, w=0.5,
      amp=0.005, pan=0, gate=0, sw=8, r=0.5, out=0, mrate=1|
    //a gated filter sweep follows the pulse oscillators
    var cutoff = (LFPulse.ar(mrate, 0, 0.5, 0, w) * freq
      * EnvGen.ar(Env.asr(fatk, sw, rel * 2, 'sine'), gate, doneAction: 2)).clip(32, 2**14);
    Out.ar(out,
      Mix.ar(Pan2.ar(
        RLPF.ar(Pulse.ar(m * freq, w, 0.5 * 5 * amp / (freq + 256)), cutoff, r, 1),
        pan))
      * EnvGen.ar(Env.asr(atk, 1, rel, 'sine'), gate, doneAction: 0));
  }).send(z);
});

~z.collect({|z|
  SynthDef("sine-cluster", {|freq=100, atk=1, rel=1, sus=1, amp=0.005, pan=0, gate=0|
    Out.ar(0,
      Mix.ar(Pan2.ar(FSinOsc.ar([1, 1] * freq, 0, 7 * amp / (freq + 256)), pan))
      * EnvGen.ar(Env.perc(atk, rel, 1, sus), gate, doneAction: 2));
  }).send(z);
});

/* Define a function for playing our synths (note: ~scale, ~pitchTweak, ~t, and ~tempo are environment variables assumed to be defined elsewhere) */
~sineClusterplayer = {|f=64, scaleDegree=1, overtones=([1,3,7,13,16]), articulation=1, cycle=1, amp=1, step=1|
  //step thru a collection of overtones
  overtones.collect({|overtone,num|
    //play a tight cluster of 8 sines, slightly detuned from the overtone
    8.do({|clusterTone|
        var sn = Synth("sine-cluster",nil,~z.wrapAt(clusterTone+num)),freq,
          accent=(((~scale.wrapAt(scaleDegree+cycle)%~scale.size)+1)/~scale.size);
        freq = f * scaleDegree * overtone;
        freq = freq * (freq > 128).if({~pitchTweak.(clusterTone,cycle,num)},{1});//just a little detune if we're not too deep in the bass
        sn.set(\freq,freq);
        sn.set(\atk,(1/64) + ((num*overtone)/64) );//higher overtones have longer attack
        sn.set(\amp,1.5 * amp);
        sn.set(\rel,(48 * ~t * ~tempo) / ~scale.size.clip(2,32).log2);
        sn.set(\sus,-1.125 * articulation * accent * (scaleDegree%overtone).clip(2,~scale.size).asInt.factors[0].log2);
        sn.set(\pan,([1,-1] /.x (2..6)).sort.wrapAt(clusterTone+step+num));
        sn.set(\gate,1);
    });
  });
};

Saturday, July 7, 2012

Using Git with SuperCollider

SuperColliderCompositions GitHub repository

I've set up a GitHub repository of the source code of some of the music I've been writing with SuperCollider.  I'll be posting walkthroughs of some of the code as those pieces get played on the Backtrace livestream.

I've been writing music with SC for about 5 years now, and my coding habits have evolved very little in that time.  They're also curiously out of sync with how I write code for web-based projects.  If I'm writing PHP or Ruby I set up a project in TextMate, where code is split up into many files and folders based on what aspect of the application they address (templates, database queries, etc).  I always have a corresponding git repository, even if it's just a local repo to store my revision history.  I use git to deploy code to servers (just do a git pull into the web directory).  But with SuperCollider, I started out in 2007 writing compositions in a single text file (synthdefs at the top, followed by sequences) and I still write code that way today.

I think part of the reason I developed this habit is the lack of dev environments in SC.  I have the SC bundle for TextMate now, so I can write code the same way I do for other projects - with a tree of multiple directories and code split into different files (some for synthdefs, some for sequences, some for utility functions).

But more than the lack of editor support, I have totally different motivations for writing when I'm doing code in SC.  When I'm doing a web project, I have some idea of what the completed code will be before I start writing - I'm given a set of requirements (or I have my own vision for the project) before I start, and I code to that vision.  With music in SC I rarely start out with an idea of what the final piece will sound like - the act of coding is a kind of improvisation or iterative process.  I write some code, play it, and when it's interesting enough, I record some to disk, then rename the file (as you can see in the git repo I posted, I have dozens of files that are named by the date when I was working on them).  Since SC is such a concise language, even complex pieces rarely involve more than 200 or 300 lines of code, so they're easy enough to manage as single text files.  I've thought of moving to a different client language like Ruby, which would allow better integration with the web and the rest of my dev process, but so much of my code relies on super concise language constructs like array adverbs, and in Ruby you just can't quite say something like

(64 * (4..10).nthPrime *.x (2**(0..5)))

for example, to generate an array of frequencies based on the prime-numbered overtones (11 through 31) of the fundamental pitch 64 Hz, spread across 6 octaves.  (Incidentally, with SC you could write entire La Monte Young compositions in fewer characters than their own titles.)

There's also the matter of when a piece is "done".  When building web apps there are obvious points to commit and deploy - some functional component works, so you do a commit, and when you have enough features committed, you deploy those commits to a server.  With music, when do you "commit changes"?  It's more intuitive than objective - if I like what I'm hearing as I write, I'll commit the code, but if I'm using a lot of environment variables or higher-order functions, that code may not actually represent a blueprint of the sounds being played with the same 1:1 correspondence that a traditional score (or MIDI file) gives you.  What I need is a kind of "commit snapshot" function that would commit the code and the contents of all of the environment variables at that moment - and that's really more of a "save content to database" action in my mind than a "commit code to repository" action.
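
A rough sketch of what that snapshot might look like (the ~snapshot function and ~dir repo path are made up for illustration, and writeArchive can't serialize environments that contain open functions, so this only captures plain data):

// hypothetical "commit snapshot" - archive the environment, then commit everything
~snapshot = {
    var stamp = Date.getDate.stamp;
    currentEnvironment.writeArchive(~dir ++ "/env-" ++ stamp ++ ".scarchive");
    ("cd " ++ ~dir ++ " && git add -A && git commit -m 'snapshot " ++ stamp ++ "'").unixCmd;
};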

What's the SC equivalent of "deploying"?  One of the reasons why I've set up the Backtrace Livestream is to have a space for the "deploy to production" moment in my workflow (also, one of these days I should unpack just how much my own thinking about music is structured by software dev practices).  Most of the music I've been composing in the last 5 years is open-ended algorithmic stuff, so even when I do create recordings, they're not a realization of a composition as much as a document of a moment.  And even then, many of them are hours long, and while disk space is cheap, at that scale (~100 MB per recording) the cost just to stash them on a virtual server (let alone in SoundCloud) does add up.  Having whatever I'm working on running in a live channel maps more closely to how I intend the music to be heard.  It also lets me not have to worry about the whole "how do you perform deterministic algorithmic digital music" question - the sounds are available to anyone with internet access, and there's no beginning or end to them.

In a sense, everything I've written in SC has been variations on a single theme.  I'm not sure how doing commits to a single file with a revision history will work with my process.  Often, I'll keep code for the last 5 or 10 revisions of a piece open and switch out certain components (a harmonizer function or a sequencer function) to see how they sound - maybe some days I'm more inspired to tweak the balance of sounds in the spectrum, and other days I'm more inspired to come up with interesting rhythms.  If I move to git, I might totally reconsider how I think of my code - seeing most of the musical details as data instead of code.  Unfortunately SC doesn't have any native capabilities for saving to databases, though I think there is a JSON quark, so I could set up a bridge with something like MongoDB or Redis and a simple Ruby app.  Which is a whole other topic.

Friday, July 6, 2012

Stream SuperCollider on an Ubuntu Cloud Server using Darkice, Jackd and Icecast

The scenario - you have some kind of generative music app that you've written in SuperCollider, or you want to have a SuperCollider server in a stable place that's accessible to the whole internet, perhaps to have it collect data from or send data to other apps.  A cloud server is just a Linux machine in the cloud, and SuperCollider runs on Linux, so why not run SuperCollider in the cloud?

TL;DR (and caveat) - I never really had a solid deployment with this setup, and I suspect this is probably easier to do with Overtone.  Overtone is a SuperCollider client built in Clojure.  Clojure is a Lisp dialect that runs on the JVM, so you get the whole ecosystem of tried and true Java tools wrapped in a fun client language.

If you have a SuperCollider stack running on a webserver, please comment - the deployment I'm describing here is not optimal and I'd like to hear how others have done it.

Getting this set up involved a lot of moving parts, and the process could probably be improved.  The deal is you get sclang running, which lets you build synths and control them, and then you need to take the signal generated by scsynth and pipe it to your Icecast server - but do it all without actually having a soundcard on your virtual machine.  This is where the recursively-named JACK Audio Connection Kit comes into play.  Jack does a lot of things - one of them is to act as a virtual patch cable that will let you send audio signal between running apps without the use of a soundcard.

So, the whole setup is Sclang -> Scsynth -> Jackd -> Darkice -> Icecast2, plus Monit to make sure the whole thing is running, which is a lot of moving parts.


Sclang runs your SuperCollider code on the server (you could also control SuperCollider from a separate client on a home computer).  Scsynth generates the audio signal.  Jackd patches that signal into your streaming source.  Darkice is the streaming source, which encodes the audio signal for the web, and Icecast2 is the server that takes the encoded stream and serves it up in a web page, or to an app like iTunes.
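
Roughly, the JACK and Darkice ends look like this (a sketch - the sample rate, mount point, and password are placeholders):

# start JACK with the dummy backend (no soundcard needed), then boot scsynth against it
jackd -d dummy -r 44100 &

# /etc/darkice.cfg (abridged)
[general]
duration      = 0           # stream forever

[input]
device        = jack_auto   # grab audio from JACK and auto-connect the ports
sampleRate    = 44100
bitsPerSample = 16
channel       = 2

[icecast2-0]
bitrateMode   = cbr
format        = vorbis
bitrate       = 128
server        = localhost
port          = 8000
password      = YOUR_SOURCE_PASSWORD
mountPoint    = stream.ogg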

Here are some pages that I found helpful in getting things set up:

Setting up Darkice

Jackd and darkice, with jackd running dummy soundcards

Running darkice as a background process

What is Darkice


One of the problems I had with this setup is that if you're running under constant CPU load, you're much more likely to crash on a virtual server than on a desktop computer or other dedicated hardware.  The CPU allocation on a virtual server isn't really constant, and spikes in usage can push scsynth past its allocation, at which point the scsynth process crashes.  You can use something like Supervisor or Monit to restart the process after a crash.

This is a blog post from howtonode.org about using monit and upstart to restart a process after it crashes.
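
The Monit side is just a few lines (a sketch - the pidfile and init script here are hypothetical, you'd point them at however you actually launch the process):

check process scsynth with pidfile /var/run/scsynth.pid
  start program = "/etc/init.d/scsynth start"
  stop program  = "/etc/init.d/scsynth stop"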

I'm not currently running this stack on backtrace.us - If I have time to get a stable deployment going, I'll post an update with more notes on configuration.

Sunday, July 1, 2012

Streaming Audio with SuperCollider: Ustream vs Icecast

Live Audio Streaming of SuperCollider on Backtrace

So I've been messing around again with long-playing generative music on SuperCollider.  Sequencers that play incremental variations on things on an infinite loop, with some external data thrown in here or there to tweak certain parameters.
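
A minimal sketch of that kind of sequencer (illustrative only - the chord and the mutation rule here are made up for this post):

(
// loop forever, nudging one chord tone every 16 steps
~degrees = [0, 3, 7, 10];
~r = Routine {
    inf.do({ |i|
        if(i % 16 == 0, { ~degrees[~degrees.size.rand] = ~degrees.choose + [-1, 1].choose });
        (midinote: 48 + ~degrees.choose, sustain: 2).play;
        0.5.wait;
    })
};
~r.play;
)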

I started writing this kind of music back around 1998, while living in Florida, on a 25 MHz 486 PC that I had purchased at a thrift store for $50, using the internal speaker no less.  It was a few months before I figured out how to program for the SoundBlaster in QBasic.

I had done a few installation shows or performances with this kind of music, but getting it on the internet as a continuous stream has been a goal for about 10 years.  Through my involvement in Occupy I came into contact with a lot of livestreaming - I'd actually used some of the services in 2009 while working at a digital marketing agency, but that association kind of made me forget that this was also something that ordinary users could set up for themselves very easily.

So this week I set up a Ustream channel for myself and started streaming with SuperCollider off of my Mac Mini.  You can hear some of the archived recordings on that page.

Ustream has been kind of a mixed bag.  On the one hand, getting up and running is dead simple - you can use a web-based broadcast app (no need to install software) and use Soundflower to grab sound from your desktop.  You can't do any video this way, only audio.   However, getting a screencast going is pretty simple with their desktop capture software.  It's also very easy to make lengthy recordings that can be played back on your channel homepage - just hit the record button and they will record and archive your stream.  Not sure what the restrictions are on how many hours you can have archived and how long they last.

Ease of use is about the only positive I can report for Ustream in this (admittedly non-standard) use case.  The commercials are hugely disruptive and seem to come about every 10 minutes.  Commercials are also played on the archived streams.  In order to stream without commercials, I'd have to get a Pro broadcaster account, which would cost several hundred dollars per month (it's paid per viewer minute, so more viewers = more cost).  Totally out of the question.  The sound quality is also unacceptably poor - an mp3 stream compressed at 96kbps might be fine for speech, but doesn't work for music.

Ustream is working on an audio-only mode, which could provide a better experience for people who want to stream music from their desktops.

I've had an Icecast server running on backtrace.us for something like a year now.  Finally got it up and running on a proper subdomain too, instead of having port 8000 show up in the url.  (I did this thru reverse proxying with nginx.  I'll put up an example of the conf file in a later post.)  I've been experimenting with running a SuperCollider server on a headless Linux virtual machine in the cloud, but overall that's been a mixed bag too.  It's cool to have a dedicated computer that can render audio forever without having to tie up your home computer (right now if I want to watch a DVD, the music streaming stops, but I'll probably separate out the music streaming to another computer eventually).  And datacenters are generally more reliable than home computers in terms of uptime as well.  The downside is that the scsynth and sclang processes are finicky on a virtual machine - they crash a lot, probably due to cpu spikes from other processes running on the physical machines that my virtual machine shares.  Plus there's all the moving parts of syncing SC with Jack and ices in order to actually get your audio data broadcasting over the web.

What I did realize this week from playing around with the Ustream studio software was that I could use my home computer as a source for the streaming server on backtrace.us.  Just like how a person uses their phone / laptop camera to get a video stream, but doesn't have hundreds of viewers connect directly to their laptop, I could do the same with SuperCollider and sound.  I had been using Nicecast mostly just to archive recordings of SuperCollider locally, or listen around the house, but I had never tried setting it up as a source for my streaming server.  Doing this was a snap - just go to the server tab in Nicecast and enter the url for the streaming server, plus the password for "source" users (this is defined in your /etc/icecast2/icecast.xml).  No need to mess around with compiling a version of Ices on your linux box that will play mp3's, or messing with Jackd - just set Nicecast to broadcasting and have it grab the signal from SuperCollider.  If you're rendering on multiple servers (to spread the load out over multiple cores), don't forget to have Nicecast capture all sub-processes from your source (this is in the "Advanced" settings when you do Source: Application).  You can use inline effects on your stream to improve sound quality (like compression or EQ), and you can broadcast at whatever bitrate you want.  Nicecast isn't free, but it's pretty reasonably priced for a lifetime license.

TL;DR Summary:


Streaming Audio with Ustream:


Pros:

  • Easy to set up
  • Free
  • Simple recording and playback of streams online - good for sharing
  • You can control Supercollider while it's playing
  • Screencasting is an option if you want to do livecoding performances
Cons:
  • Too many commercials
  • Low quality audio
  • Premium Service is very expensive
  • Requires a dedicated home computer

Running Supercollider with Icecast in the Cloud:


Pros:
  • Doesn't tie up your home computer
  • High-quality audio
Cons:
  • High-maintenance, unreliable deployment
  • Requires considerable knowledge
  • Have to pay for hosting
  • Can't control Supercollider while it's running

Streaming with Nicecast locally, and Icecast in the Cloud (Best Option)


Pros:
  • High-quality audio
  • Nicecast is easy to set up.  Icecast2 server is not difficult if you know Linux packages
  • Total control over the stream - no commercials, no 3rd-party services
  • You can control Supercollider locally while it's playing
Cons: 
  • Requires a dedicated home computer and reliable internet connection
  • Have to pay for hosting / bandwidth for the Cloud server
  • Have to upload and host archived recordings
  • Have to pay for Nicecast
You can check out the stream here in your browser.

Or here in your mp3 player.

Next up is a walkthrough of the code used to generate the music here.