Supercollider 6 – Envelopes

audio, supercollider

Day Six of 30 Days of Supercollider.

Envelopes control how a single aspect of a sound changes over time. Traditionally, this meant the volume of a sound. When you strike a bell, for example, there’s an initial fast peak of volume, which then slowly fades over a potentially long period, as the bell continues to resonate. If you graphed out that volume level, it might look like this:

Of course different bells will have different curves. A meditation bell sound seems to go on and on, whereas a cowbell fades out pretty quickly.

A flute would have a totally different curve. It will probably reach peak volume more slowly than a bell. Then it will maintain a steady volume for as long as the flutist continues to hold that note and fade out rather quickly, though the flutist could make that fade longer if they wanted to. That might look more like this:

These curves are known as envelopes and they are usually broken down into standard sections:

  • Attack. How long it takes for the sound to ramp up to its initial full peak volume.
  • Decay. Often that peak volume subsides a bit before it gets to the next phase.
  • Sustain. For sounds that can be held for a period of time, this is the volume and length of time they are held at.
  • Release. When the sound generation is stopped, how long does it take for the volume to get to zero?

Here are all those parts labeled:

Not all envelopes have all those parts. You can break down envelopes into sustaining and non-sustaining envelopes. Most bells, for example, do not have a sustain section. You strike them, they peak and then they release. The same with most drum sounds. So this kind of envelope is often called a percussive envelope. It just has an attack and a release. Since it starts its release after it reaches the peak, there’s no real decay either.

Sustaining envelopes may or may not have a decay section. So usually you’ll see “adsr” and “asr” envelopes.
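Jumping ahead a little to the syntax we'll cover below, plotting one envelope of each kind makes the difference easy to see (the parameter values here are arbitrary):

```supercollider
Env.perc(0.01, 1).plot;           // percussive: attack, then release
Env.asr(0.01, 0.5, 1).plot;       // attack, sustain, release
Env.adsr(0.01, 0.3, 0.5, 1).plot; // attack, decay, sustain, release
```

The percussive plot has no flat section; the two sustaining ones hold at the sustain level.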

In Supercollider there is an Env class that you can use to construct all these kinds of envelopes and more. For example there is Env.perc, Env.asr and Env.adsr, as well as others. To create envelopes, you need a number of volume parameters and a number of time parameters.


  • The start volume – where the volume starts – usually 0, but doesn’t have to be.
  • The peak volume at the end of the attack.
  • The sustain volume – where the volume goes down to after the decay. If there is no decay, this is the same as the peak.
  • The release volume – where the volume ends. Again, usually 0, but doesn’t have to be.

Depending on which type of envelope you are using, you may not need all of these.


  • Attack time.
  • Decay time.
  • Sustain time.
  • Release time.

As with volumes, not all envelopes use all of these. Also, most of Supercollider’s envelopes do not have a parameter for sustain time. The end of the sustain period is usually triggered by something else, such as the release of a key, or some other signal. We’ll see examples of this later.

Percussive Envelope

But let’s get started actually creating an envelope and applying it to a sound. I’m going to start with a function that has two vars, sig for the signal, and env for the envelope. And I’ll create a pulse (square) wave for the signal.

(
f = {
    var sig, env;
    sig =;
};
)

If you evaluate the part in parentheses, it will store that function in the global variable, f. Then you can evaluate the next line that plays f. You’ll hear a noise that will just go on forever. Press Cmd/Control + period to stop it. Next we’ll create the envelope. We’ll create a percussive envelope with Env.perc. This will need to be wrapped in an EnvGen class, which is an envelope generator. Finally, multiply the two together, the result of which gets returned by the function.

(
f = {
    var sig, env;
    sig =;
    env =;
    sig = sig * env;
};
)

When you play this, it sounds a bit more bell-like.

But before we do anything more to the envelope itself, open up the Node Tree window. If you’ve played this function a few times, you’ll see a bunch of items hanging around. These are sounds that are still technically playing, but the envelope brought their sound down to zero so you can’t hear them.

To fix this, add a doneAction to the EnvGen. Setting this to 2 will cause the sound to be removed when the envelope completes.

(
f = {
    var sig, env;
    sig =;
    env =, doneAction: 2);
    sig = sig * env;
};
)

Now we can start playing with the parameters to Env.perc to change the envelope. As mentioned, this kind of envelope just goes quickly to a peak (attack) and then slowly fades out (release). The arguments to the function are:

attackTime = 0.01, releaseTime = 1, mul = 1, curve = -4

Try changing attackTime and releaseTime to get an idea how that changes the sound. Another neat trick is to plot the envelope, which you can do just by adding .plot to the end of the call. Like so:

Env.perc(0.1, 0.3).plot;

Which gives you this:

Notice that the two lines are curved. You might guess that you can change that curve with the curve parameter. And you’d be right. A curve of 0 creates linear changes to the volume, and straight lines in the plot.

Negative numbers curve one way. The default curve for this envelope is -4, so you have seen how that looks. Positive 4 looks like this:

Higher or lower numbers change the shape of the curve. Try different curves to see and hear the changes they make.
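Here's one way to compare a few, keeping the same attack and release times and changing only the curve (values arbitrary):

```supercollider
Env.perc(0.1, 0.3, curve: 0).plot;  // linear: straight lines
Env.perc(0.1, 0.3, curve: -4).plot; // the default curve
Env.perc(0.1, 0.3, curve: 4).plot;  // bends the opposite way
```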


Sustaining Envelopes

A percussive envelope is non-sustaining. It plays and it finishes on its own. Sustaining envelopes will play indefinitely until something ends the sustain. Let’s start with the simplest of these, the ASR envelope, which is created with Env.asr(attackTime = 0.01, sustainLevel = 1, releaseTime = 1, curve = -4).

Note that there is only a sustain level, not a sustain time. Let’s just replace the percussive envelope with a default ASR one, and see what happens.

(
f = {
    var sig, env;
    sig =;
    env =, doneAction: 2);
    sig = sig * env;
};
)

Play that and it pretty much sounds like there is no envelope at all. It goes quickly to full level and just stays there. We need some way of telling the envelope to move on to the next step. This is known as a gate and is an argument to the EnvGen. The gate is a value that is evaluated as true if it is a positive number, and false if it is zero or negative. If the gate is positive, the note will play to its sustain level and stay there. When the gate goes to zero or below, the sustain ends and the release phase begins.

In order to change the gate at run time, first you’ll need to add a gate argument to the function. Then you need to save a reference to the object created when you call play on the function. Then on that object you can call set to change the gate. Here’s what that all looks like:

(
f = {
    arg gate = 1;
    var sig, env;
    sig =;
    env =, gate: gate, doneAction: 2);
    sig = sig * env;
};
)

a =;
a.set(\gate, 0);

Evaluate the function, then evaluate the play line to start the sound playing. Finally evaluate the final line to end the sustain, and you’ll hear the release.

Often, these actions would be triggered by a midi controller key press. Pressing the key would trigger the sound to start, and it would play as long as the key was held down. Releasing the key would trigger the code to set the gate to zero, which would then let the note release.
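As a rough sketch of that flow (assuming a connected MIDI device, and that f is the gated function from the example above; the \on and \off keys are arbitrary names):

```supercollider
MIDIIn.connectAll;

MIDIdef.noteOn(\on, { |vel, note|
    ~note =;          // key pressed: start the gated synth
});

MIDIdef.noteOff(\off, { |vel, note|
    ~note.set(\gate, 0);  // key released: end the sustain
});
```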

Alternately, you can trigger the gate programmatically. One way is to use some other UGen whose output moves between zero and a positive value over time. Here’s an example of this.

(
f = {
    var sig, env;
    sig =;
    env =, gate:;
    sig = sig * env;
};
)

Here, the gate argument is set directly to a low frequency pulse UGen running at a frequency of 0.5, meaning it will complete a full cycle every two seconds. So the gate will be positive for one second, playing and sustaining the note. Then it will drop to zero, ending the sustain and letting the note release. Then back to positive after a second. You can change the width value of the LFPulse to change how long the note plays, while still maintaining the rate at which notes occur.
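For example, a variation of the block above (the width value is arbitrary) that makes each note shorter while still playing one note every two seconds:

```supercollider
(
f = {
    var sig, env;
    sig =;
    // width: 0.25 keeps the gate open for only a quarter of each cycle
    env =, gate:, width: 0.25));
    sig = sig * env;
};
)
</imports>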

You should also try to get the Env.adsr envelope working. This works almost exactly the same as the ASR envelope, but has a decay phase and a decayTime parameter to control the length of that.
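Here's a sketch of what that might look like – the same gated setup, with Env.adsr swapped in (parameter values are arbitrary):

```supercollider
(
f = {
    arg gate = 1;
    var sig, env;
    sig =;
    // attackTime, decayTime, sustainLevel, releaseTime
    env =, 0.3, 0.5, 1), gate: gate, doneAction: 2);
    sig = sig * env;
};
)

a =;
a.set(\gate, 0); // end the sustain, triggering the release
```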

Beyond all that…

Earlier I said that envelopes have been traditionally used for volume or amplitude, but they are really just functions that return a stream of values over time, so they can be used to control any parameter of pretty much anything. It’s common to use envelopes on filters for example, to change how the filter is applied to the sound over time.
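As a sketch of that idea (all values arbitrary), here an EnvGen sweeps the cutoff frequency of a resonant low-pass filter (RLPF) rather than the volume:

```supercollider
(
f = {
    var sig, cutoff;
    sig =;
    // envelope sweeps the cutoff up toward 4000 hz, then back down to 200 hz
    cutoff = 200 + (, 2), doneAction: 2) * 3800);
    sig =, cutoff, 0.2);
    sig * 0.3;
};
)
```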

There are also other envelopes to explore. Or you can use to create a completely custom envelope. You can pass in an array of levels, an array of times, and an array of curves for each stage of the envelope, and set which stage is the release. And you can make as many stages as you want and even set up looping envelopes. All very powerful.
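For example (a sketch with arbitrary levels, times, and curves): four levels, three segments, and releaseNode: 2, so the envelope holds at 0.3 until the gate closes:

```supercollider
(
f = {
    arg gate = 1;
    var sig, env;
    sig =;
    env =
            levels: [0, 1, 0.3, 0],
            times: [0.02, 0.3, 1],
            curve: \sin,
            releaseNode: 2
        ), gate: gate, doneAction: 2);
    sig * env;
};
)

a =;
a.set(\gate, 0);
```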

Supercollider 5 – Unit Generators

audio, supercollider

Day Five of 30 Days of Supercollider.

I could write hundreds of pages about UGens. Other people have. I’ll let you read their stuff instead and just give some of the basics.

Unit Generators, or UGens for short, are one of the key building blocks for creating sound in Supercollider. If I understand it correctly, UGens create the signals that are used within Synths to describe sounds that get created in the server. Even if you create a UGen without a Synth, a default Synth is used behind the scenes to wrap that UGen and create the sound. That’s what the function play message does.

A sound could be a single UGen and Synth, or it could be made of multiple UGens hooked up to each other, with envelopes and filters and all kinds of other things in there shaping the final sound. For this post, we’ll just look at a few basic UGens wrapped in functions. Nothing complex.

UGens are defined in classes. And class names start with capital letters. So you’ll see things like SinOsc and LFTri and Saw. UGens have three key methods that can be called, ar, kr, ir. Which one you use is determined by what you’re using it for. Mostly you’ll be using ar, which stands for “audio rate” or kr which is “control rate”. ir is for “initial rate”, but we won’t be getting into that here.

You use audio rate when you are generating actual sound data. By default, this creates signals at 44,100 samples per second. Control rate generates samples at a much slower rate. It is used for many different things, but one of the more common use cases is to control some aspect of an audio rate UGen. So yeah, you can have one UGen nested inside another UGen. The inner one will usually (but not always) use kr to control some aspect of the outer one, which is actually making sound with ar.

But let’s create a sound. Type in this code and evaluate it:

{ }.play;

This creates a sine wave oscillator UGen that generates an audio rate signal at 400 hz – or 400 cps (cycles per second) if you prefer. You should hear a sound, assuming your server is booted and sound is on, etc. Note that the sound will probably only come out of the left speaker / headphone. That’ll be the case for all the sounds in this post. We’ll cover stereo later.

Change the 400 to something between 20 and 20,000 and you can hear other tones.

But there are other arguments. In full, it’s, phase, mul, add)

We already saw freq. The phase argument shifts that sine wave one way or the other. This is useful for creating two of the same wave slightly offset from each other; otherwise it’s not too useful on a sine wave. mul controls the amplitude of the sine wave (multiplies it). Its default is 1.0. You can think of this as amplitude, or the volume level of the resulting sound. Since you’ll often have multiple sounds playing together, and their volumes add up, you often want to set this down around 0.2 or 0.3 or even lower, so that the sum of all your sounds stays at 1 or below. Finally, add adds some amount to the wave, defaulting to 0. This is often more useful in kr than ar, as we’ll see soon.

You can take the defaults for phase and add and just change freq and mul, so you’ll often see something like:, mul: 0.5)

Try some of these other UGens:

{ }.play; // a square wave
{ }.play; // a triangle wave
{ }.play;   // a sawtooth wave

Most of the arguments to the ar methods for these UGens are similar to SinOsc, but there will be some differences.

If you ever want to know what your sound looks like, use plot instead of play:

{ }.plot;   // a sawtooth wave

We’ll probably get into some other UGens later in this series. But I just want to show a few examples of kr with a UGen within a UGen.

First, let’s make a SinOsc UGen using kr with a freq of 1 and a mul of 100, and plot it.

{, mul: 100) }.plot;

This gives you the following:

Not too useful. Because our frequency is 1, it’s going to take a full second to complete a full sine wave. And the plot is only showing 0.01 seconds. We can fix that by telling plot to plot 1 second:

{, mul: 100) }.plot(1);

Note that the amplitude now goes from -100 to +100, every second.

So the cool thing about UGens is that you can do math with them just like they were single values, even though they are in fact objects that produce a stream of values. So we can create a sawtooth wave and just add the above sine wave to its frequency argument.

{ +, mul: 100)) }.play;

You should now hear a siren type sound. The sawtooth wave has a base frequency of 400 hz, but that’s going to go up and down by 100 (from 300 to 500) once per second. Try playing with those numbers and getting different values. This is known as frequency modulation (yup, just like FM radio), because you are modulating the frequency.

You can do the same thing with amplitude modulation (AM radio) by using the sine wave to change the mul value of the sawtooth wave. But we probably want it to go from 0 to 1 over and over. The math for this is that we want to set mul of sine wave to 0.5 (which makes it go from -0.5 to +0.5) and then add 0.5 to that, to make it go from 0 to 1. That looks like this:

{, mul: 0.5, add: 0.5) }.plot(1);

That math can become a pain though. A shortcut is to leave it all out and call range at the end, passing in the min and max values you want the wave to hit.

{, 1) }.plot(1);

Now you use that as the mul argument to the sawtooth wave.

{, mul:, 1)) }.play;

Again, try different numbers here, but avoid going much over 1 (or -1) for that final mul value. If you’re not sure what you’re doing, it’s always safe to plot a wave before subjecting your ears to it.

The last thing I’ll show is additive synthesis. This is simply adding two UGens together. Seriously, it’s that simple.

{, mul: 0.5) +, mul: 0.5) }.play;

Here I’m using a 400 hz sine wave and a 770 hz sawtooth wave. I set mul to 0.5 on both so the result wouldn’t be too loud.

Plotting this at plot(0.05) gives us:

Quite a complex wave, for a complex sound.

Supercollider 4 – Variables, etc.

audio, supercollider

Day Four of 30 Days of Supercollider

Variables in Supercollider, not surprisingly, are rather special, compared to many other languages. I can count four rather distinct types of things that will hold a value:

  • Regular vars
  • Arguments
  • Single-letter variables
  • Environmental variables

Regular Variables

Let’s start with regular variables. These aren’t much different than variables you’d find in most other languages. You declare them with the var keyword and the name of the variable, which should really be more than one character long. The rules for legal identifier names are about what you’d expect, as far as I know. Identifiers should also start with a lowercase letter, as names starting with capital letters indicate a class name.

You can optionally decide to assign the var a value when you create it, or you can do it later. Unassigned vars have a value of nil. Vars are not typed, so you can reassign them with data of another type if you want.

var foo;
foo.postln; // nil

foo = "hello";
foo.postln; // hello

foo = 42;
foo.postln; // 42

var bar = "hello";
bar.postln; // hello

You must use the var keyword before assigning a value to a variable. i.e. you can’t do something like the following. It will throw an error that foo does not exist.

foo = 99;

Vars have scope, as you might expect. A var inside a function is scoped to that function and will have a different value than a var of the same name outside the function, as the following shows:

(
var foo = "apple";
var func = {
	var foo = "banana";
	postln("in function: " + foo); // banana
};
func.value;
postln("outside function: " + foo); // apple
)

Also, vars declared in one region are scoped to that region and will not be available in other regions. Example:

(
var name = "keith";
name.postln; // keith
)

(
name.postln; // error, name is not defined.
)

Finally, vars must be declared before any statements are executed in a given function or region. This will fail:

"hi".postln;
var foo = "hi";

But if you switch the order of the two lines, it will be fine.


Arguments

We already looked at arguments when we covered functions. As far as I know, they have all the same rules as regular vars, but they need to be declared before vars or any other code in a function. Oddly, you can declare args outside of functions and they seem to work pretty much as regular variables. So most likely they are pretty much the same under the hood.

arg age = 90;
age.postln; // 90

Single-Letter Variables

Earlier I said that regular variables should be more than one character. The reason for that is that single-letter variables are known as global variables. Global variables a to z already exist and can be used without the var keyword. And as their name suggests, they are available across regions, functions, any scope.

(
f = {
	a = "hello world";
};
)

(
a.postln; // nil
f.value;
a.postln; // hello world
)

Evaluating the first region will assign a function to global variable f. Inside that function, global variable a is assigned a value.

Evaluating the second region calls postln on global variable a, which should not have a value yet, so it shows nil. It then calls value on the function stored in f. Although that function was declared in another region, it is still available here because f is global.

After the function is run, we postln the variable a again. Now it has the value assigned to it in the function.

This globality even works across files. If you open one file and write to a single-letter global variable, you can open a new file and read from it.

Now we’ve all been taught that global variables are bad. But in most cases, when you’re coding in Supercollider, you’re not doing hard computer science. You’re just being creative. So I think it’s OK to relax a bit. Since you’ll often be defining functions in one region and using them in another region, global vars become kind of necessary in many cases.

Of course, if you are making a plugin or some kind of reusable code library, avoiding globals is still a very smart idea.

One more vital warning here. You should avoid using the global variable s in your own code. This has been assigned as the current server. So you can do things like s.boot, s.reboot, s.stop. There’s nothing special about s other than it’s already being used. If you really, really think you need to use s, then at least reassign the server to some other variable.

z = s;
s = "foo";


Environmental Variables

Environmental variables are similar to global variables in functionality, but can be even more useful because you aren’t limited to a single character. Your variable name can actually be useful.

Environmental… ok, I’m just going to call them env vars. Env vars always begin with a tilde and do not need the var keyword. Otherwise they work pretty much like vars and global vars. You can use them in any scope. This is the same example we saw before, redone to use env vars.

(
~magic = {
	~message = "hello world";
};
)

(
~message.postln; // nil
~magic.value;
~message.postln; // hello world
)

Technically though, env vars are different than global vars. They are scoped to the current environment. And really, the code ~foo = "hello"; is an alias for currentEnvironment.put(\foo, "hello"); I’m not going to go too deep into environments, but they are basically namespaces. You can create new environments, push them and pop them off a stack of other environments, etc. But until you’re actually doing things at that level, env vars are probably safe to consider essentially global. If there’s an edge case, I’m sure someone will bring it up.
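To see that equivalence for yourself (the name \greeting is arbitrary):

```supercollider
~greeting = "hello";                        // shorthand for...
currentEnvironment.put(\greeting, "hello"); // ...this

currentEnvironment.at(\greeting).postln; // hello
~greeting.postln;                        // hello
```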

Supercollider 3 – More Function Stuff

audio, supercollider

Day Three of 30 Days of Supercollider

This will be a short one.

There is some more weirdness with functions in SC that I didn’t think of yesterday. This one is actually a pretty cool language feature. Just something you don’t see in most languages. It has to do with the way methods are called, or I guess I should say the way messages are sent to objects.

Yesterday I was using syntax like this to play a unit generator:

{ }.play;

And that’s fine. But there’s an alternate syntax that does the same thing:

play({ });

In other words, you can send the play message to a function instance, or you can call the play method, passing in a function to play. Both are equivalent and probably one converts to the other in the back end.

Similarly, you can send a value message to a function, or pass a function to the value method.

{ 42 }.value;

value({ 42 });

This goes way beyond functions though. In Supercollider, if you want to print something to the Post window, you use postln. You can do it like this:

postln("hello world");

Or you can send the postln message to whatever you want to print:

"hello world".postln;
This can be really useful for debugging, because in addition to posting the value to the Post window, it will return the value that it just posted, so tacking on postln to something is completely transparent to the logic of your code. For example…

f = {
	a = rand(10);
	b = rand(10);
	a * b;
};

This function chooses two random numbers and returns their product. But say instead of rand you were calling some function that returned an important value, and your code is not doing what you expect, so you want to trace out the values of a and b. In many languages, you’d need to add more lines for the postln calls:

f = {
	a = rand(10);
	postln(a);
	b = rand(10);
	postln(b);
	a * b;
};

But in Supercollider, you can just do this:

f = {
	a = rand(10).postln;
	b = rand(10).postln;
	a * b;
};

The values get posted and the code continues to work as expected, with no side effects caused by the post.

Some other examples:

All the array functions.

reverse([1, 2, 3, 4, 5]);

// or...

[1, 2, 3, 4, 5].reverse;



scramble([1, 2, 3, 4, 5]);

// or

[1, 2, 3, 4, 5].scramble;
This can look a bit confusing at first when applied to floating point numbers:

3.14159.round(0.01);

But you get used to it. The hardest part is that when you’re learning and looking at other people’s examples, some will use one form of the syntax, and some will use the other form, sometimes even mixing them. So it’s best to get used to both ways.

30 Days of Supercollider Series


This will be an index of the articles I post about Supercollider.

Warning: I don’t know a lot about Supercollider yet. This will be a journal of my discoveries as much as anything else. I probably know less about music in general. Trying to learn something before my last trip around the sun. Anyway, this shouldn’t be taken as a step-by-step tutorial on learning Supercollider. Just a random collection of stuff.

The Days:

  1. The IDE
  2. Functions
  3. More Function Stuff
  4. Variables, etc.
  5. Unit Generators
  6. Envelopes

In the off chance you might be interested in the actual sounds I’m creating, you can find them here:

Supercollider 02 – Functions

audio, supercollider

Day Two of 30 Days of Supercollider

A word of warning about this series as a whole: this should not be taken as a comprehensive, step-by-step tutorial on how to use Supercollider. There are better resources out there for that. This will be a loose collection of deep, or not-so-deep, dives into different topics. A lot of it is just documenting stuff for myself. Teaching is the best way to learn.

One of the first things you’ll learn about in Supercollider is functions, because the most common way demonstrated to play sounds at first is to wrap them in a function. But it took me quite a while to wrap my head around functions in Supercollider. Like how code is evaluated in the IDE, Supercollider goes way off the beaten track with functions.

Functions are defined by code inside curly brackets. The last value in the function is the return value. Here is an empty function:

{ }

If you evaluate that line, you’ll see -> a Function in the Post Window, telling you that it is a function.

Fairly often you’ll want to assign a function to a variable. You can do that like this:

f = {};

I’m just going to use the single letter f throughout this post. You can use other names but there are some rules around all that which I’m not going to get into here. Another day. For now just use f, or another single letter.

Say you want a function that returns a value, like 42, you can do this:

f = { 42; };

// or...

(
f = {
    42;
};
)

Note the parentheses around the second version. They’re not necessary, but as described in the first post in this series, it makes it so you can evaluate the entire function by putting your cursor in that region and hitting Ctrl-Enter/Cmd-Enter.

By the way, semicolons are not always required at the end of lines, but more often than not, if you leave one off you’ll wind up with an error that will be really tough to debug. It will just run two lines together and try to parse them like that. You can get away with it if it’s the last line of code in a block or you’re only evaluating a single line. Otherwise, best to use them.
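For example, evaluating this region fails, because without the semicolon the parser tries to read both lines as one statement (a sketch; the exact error message will vary):

```supercollider
(
"first line".postln  // <- missing semicolon causes a parse error
"second line".postln;
)
```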

Now, what do you do with functions? You call them. So you’d probably guess to do something like this:

f();
But that will give you an error:

ERROR: syntax error, unexpected ';', expecting BEGINCLOSEDFUNC or '{'
  in interpreted text
  line 1 char 4:

ERROR: Command line parse failed
-> nil

Sometimes you’ll see advice to do something like this:

f.();
And sure enough, that works, outputting what you’d expect:

-> 42

Now that just looks like some funky syntax decision, but what’s actually going on is a lot deeper. I eventually came to the realization that functions in Supercollider are not really functions like in other languages that are directly callable. I find it easier to think of them as special objects that have a few methods that can be called. You probably shouldn’t talk about it in those terms because nobody else does, but if you’re coming from another language, that may help you make sense of them.

Supercollider docs actually say:

A Function is an expression which defines operations to be performed when it is sent the value message.

So, yeah… value. That starts to look more normal:

f.value();
And that works! It turns out that f.() is really just an alias for f.value(). Better, better. Also, most of the time, you don’t need the parentheses. This works too:

f.value;
You will need to use parentheses when you pass arguments to functions though. So let’s cover arguments next.

Arguments are defined at the top of a function, before any other code. Use the key word arg followed by the argument name.

f = {
    arg foo;
    foo * 2;
};

When you evaluate it, you can now call it with value, the argument inside parentheses:

f.value(8);
As expected, this will print -> 16 in the Post Window.

Multiple arguments can be added to the same arg line with commas:

f = {
    arg foo, bar;
    foo * bar;
};

And now you can call it, passing in two args:

f.value(8, 3);

And this should print -> 24.

Arguments can also use default values, just set them in the arg line:

f = {
    arg foo = 10, bar = 3;
    foo * bar;
};

Now you can call this with 0, 1 or both arguments.

f.value();     // 30 - using both defaults
f.value(7);    // 21 - using only the second default
f.value(7, 2); // 14 - using no defaults

Like some other languages, such as Python, you can also use named arguments, or a mix of named and unnamed.

f.value(7, 3);           // unnamed
f.value(7, bar: 3);      // unnamed and named
f.value(foo: 7, bar: 3); // both named

With named args you can order them however you want and even skip arguments, assuming they have defaults. But once you use one named argument, all the rest must be named.

f.value(bar: 3, foo: 7); // opposite order
f.value(bar: 3);        // skip the first arg
f.value(foo: 7, 3);     // illegal! will throw an error

Lastly, there’s an alternate way to specify arguments. Rather than the arg key word, you can enclose the arguments line in a pair of pipe | characters.

f = {
    | foo = 10, bar = 3 |
    foo * bar;
};

That’s just a matter of preference. However you want to do it is up to you.

One last thing I want to go over on functions: functions that play sound. This one confused me for quite a while. Without going into too much detail just yet, there are various classes called Unit Generators that are mostly used to generate sound. SinOsc is a common one. It generates a sine wave oscillator – a really basic sound. Normally you call the ar method of that class to generate a sound of a particular frequency, such as to generate a 400hz tone.

But that line of code does not play the sound. You’ll most often see something like this:

{; }.play;

You can type that in and evaluate it and hear the sound. We’ll go more into unit generators later.

But I could not wrap my head around this one for a while. So creates the unit generator. I’d expect that you’d play it like so:;

But that gives you an error that the play message is not understood. But you have a function… the last line of the function returns that generator, and then you call play on that generator. How is that different from just calling it directly? I finally understood it though. It turns out that like value, play is a special message that you can send to a function. The details get a bit deep, but the bottom line is that when you call play on a function, it tries to evaluate the return value of that function as something that it can send to the server and play as a sound. Just calling play on the generator itself doesn’t work because play is a message you send to a function, not a generator. It’s a bit more complex than that, and I understand a good bit of what’s actually going on there, but I’m not going to try to explain it in this post. Enough for one day.

30 Days of Supercollider – Day 1 – The IDE

audio, supercollider

Day One of 30 Days of Supercollider

Years ago when Flash was on its way out, I started looking more and more into JavaScript and HTML’s Canvas as a replacement. I started a series on my earlier blog called 30 days of JavaScript. It was popular and moreover I learned a lot, needing to learn some new aspect of the language and graphics api each day.

Now that I’m taking a deep dive into Supercollider, I decided to try that same trick again. So, I plan to make 30 posts in the next 30 days (bear with me if I miss a few days here and there) tackling some aspect of Supercollider.

Caveat: these won’t be super in-depth most likely. Some of them will seem very basic and naive to anyone who knows this stuff more than I do. I can’t even guarantee that everything I write will be correct. But, as a wise person once said (paraphrased), the best way to learn the correct way to do something is to post the wrong way; someone will instantly show up to correct you. 🙂

Supercollider IDE

So let’s dive right in. Supercollider is actually a suite of several bits of technology that come together to form an environment for programming, playing, and recording synthesized sounds and music.

The parts are:

  • The Supercollider language, called sclang. There will probably be many posts on the language itself, as it’s quite different from most languages I’ve worked in, and I suspect it will be the same for other programmers out there.
  • The Supercollider interpreter. This is what reads the sclang code that you write, interprets it and sends it to …
  • The Supercollider server. Known as scsynth. This receives the interpreted commands from the interpreter and translates them into sound and music, as well as other things that aren’t necessarily audible, like timings, routings, etc.
  • The Supercollider IDE. This is scide, and it’s what we’ll be covering lightly today.

Here’s what the IDE looks like currently on my Linux laptop:

Pretty standard stuff here. On the left is a place to write your code and on the right you have some docks – the Help Browser and Post Window. The Post Window shows the output of the commands you run: successes, errors, and other messages. You can also log your own messages here, which is about as much debugging capability as you’ll get.

The editor is decent. It has color coding with different available color schemes. I found and installed a gruvbox theme, which makes me feel at home. It has pretty good code completion and hints for method parameters. You can do Ctrl-D or Cmd-D on a keyword and see the documentation for that item in the Help Browser.

On the bottom right is a status bar that shows what’s going on with the interpreter and server. When everything is green there, you know the server and interpreter are running and ready to convert your code into music… or something noisy anyway.

There are also a number of helper panels you can open up to visualize what’s actually going on with your compositions.

Seen here are the Node Tree, which shows the active objects, the Server Meter, showing the input/output levels across channels, the Stethoscope showing a wave form for any channels you choose, and the Frequency Analyzer. The last is particularly useful for seeing visually what different filters are doing to your sound.

One thing that took a lot of getting used to is the way that Supercollider interprets the code you write. In every other language I’ve ever worked in, you write code in a file, save it, and do something that builds that code – either interpreting or compiling it, possibly importing and/or linking other code files in with it – and then execute the result.

This is not even remotely how Supercollider works. Part of the reason for this is that Supercollider was originally conceived of as a tool for musical performance. So you wouldn’t be just sitting down and spending a long time creating this perfect program and then running it. Instead, you’d code a little bit, run that, add a bit here, run that. Stop that bit, change it a little and re-run it. Then code a few more pieces and add them to the mix, maybe removing some of the earlier bits as you go.

So generally, the way things work is you’re evaluating one specific block of code at a time. This can be a single line by default, or you can select multiple lines, or even a portion of a line and evaluate that. Whatever you evaluate gets instantly interpreted and sent to the server and if that code creates a sound, you’ll hear that sound.

The most common shortcut you’ll use is Ctrl-Enter or Cmd-Enter on Mac. This evaluates what’s under the cursor. If nothing is selected, it will evaluate that whole line. If a part of a line or multiple lines are selected, it will evaluate the entire selection.
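For example (just a throwaway line to try the shortcut on – any single statement works), put the cursor on a line like one of these and hit Ctrl-Enter:

```
// Each evaluation's result shows up in the Post Window.
"hello from sclang".postln;
rrand(1, 100).postln; // a random integer between 1 and 100
```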

But say you have some code like this:

var freq;
freq = 300;
{ SinOsc.ar(freq, 0, 0.2) }.play;

In order for anything meaningful to happen, you need to select all three lines and then evaluate them. And that’s a very minor example. As you can imagine, you might have a chunk of code that is dozens of lines long that needs to all be evaluated together. For this, we have regions. Creating a region just means putting a pair of parentheses around the code you want to be evaluated as one large unit. Like so:

(
var freq;
freq = 300;
{ SinOsc.ar(freq, 0, 0.2) }.play;
)

Now you can put your cursor anywhere inside the parentheses, or even on one of the lines with a parenthesis, and hit your shortcut, and the whole thing will be evaluated and sent to the server. Also, if you are in a region but only want to evaluate a single line of code, you can hit Shift-Enter and only that current line will be evaluated.

There’s also a menu item to evaluate the whole file, but there’s no shortcut by default for that. Coming from other “normal” programming languages, that seemed absurd to me and I immediately set up a shortcut for that. Eventually I figured out that you almost never want to evaluate a whole file and removed that shortcut.

The other important (very important) shortcut is Ctrl-. or Cmd-. (Control or Command + the period key). This stops all sounds from playing. You’ll work that into muscle memory quickly, especially after having a few random and unexpectedly loud noises blasting in your headphones.

Linux Audio

Just a note for you Linux nerds like me. The first week or so using Supercollider, I had to do it on my Macbook Air because the Supercollider server would not start on Linux. I knew I could fix it, but wanted to focus on learning a bit more about Supercollider itself before delving into Linux audio configuration. Eventually though, I put in the effort to figure it out.

The problem is that on Linux, Supercollider needs to use the Jack audio system, but most Linux systems right now use PulseAudio. Both of these interface with your sound card using ALSA. Jack is apparently superior and is used by most serious audio software on Linux, but for some reason it’s not the default.

I was able to get Jack started by installing QjackCtl, which gives you a nice little panel to turn Jack on and off. That got Supercollider working just fine, but it killed everything else that was running via PulseAudio on my computer. Once I turned Jack off, PulseAudio and the rest of my apps worked, but they were mutually exclusive. I finally found a solution using the Cadence app.

This was a little fiddly and took a couple of reboots, possibly because Jack was still running in the background via QjackCtl. But once it started working it was fine. I just open up the Cadence app and start Jack. Now all my computer’s audio is routed through Jack and everything works as expected, including Supercollider. If I turn Jack off, everything reverts to using PulseAudio instantaneously. So I’m very happy with that. Since I’m messing with Supercollider on a regular basis, I tend to just leave Jack running all the time now.

[Update] – I already had a comment on Jack vs PulseAudio saying that PipeWire should resolve a lot of this. I checked, and I do have PipeWire installed on my system, but it doesn’t seem to be in use. I’ll be digging into this more and will update with any fun findings.


So there’s Day One. Not too exciting, but I’ll be prepping a list of other topics and as I go I will create an index to all the articles. Hopefully some of them will be useful, if not to you, at least to me.

Let there be sound


And there was sound.


As 2023 is now half over (wtf?), it’s time to look back on my end of year post, where I made some plans.

  1. Create my own interpreted language.
  2. I want to finally do something with music.
  3. I’d love to do another side project creating graphics for something.
  4. Of course, I’ll finish the Coding Curves project.

Not sure if number 1 will happen this year. Or number 2. I did wrap up Coding Curves! So that’s good.


But what about that music thing? My goal was to learn enough about music to actually release a song. Releasing meaning posting on this blog. Song meaning a sequence of sounds that is vaguely musical.

As mentioned in the post, I was dabbling with MilkyTracker at the time. It was fun, but wasn’t quite getting me where I wanted. Then I took a deep dive into LMMS. Also very powerful and fun, but was still a bit frustrating to me. Both these tools felt like they were leading me down a path to create a very specific type of music with predefined conventions. And fiddling with hundreds of settings in scores of UIs just wasn’t cutting it for me.


So I took a break for a while. Then recently I started looking into Supercollider again. I’d dipped my toes into it once or twice in the past. I found a few different resources, but then ran across this series. This is actually a college course on Supercollider and it is really good. The teacher is also writing a book on SC which will be out later this year.

I also just took delivery of The Supercollider Book by MIT Press.

This book was published in 2011, but Supercollider is pretty stable, and there are online resources, so if there are any discrepancies, I can hopefully figure them out quickly. This isn’t a cover-to-cover tutorial type book anyway, but more a series of long articles on different subjects. Good for a deep dive on a particular aspect of SC.

Where I’m at

So, here is a … “song” that I created.

I put “song” in quotes because it’s really just a stochastic series of notes. But I think it sounds rather pleasant, even if it’s way too unstructured. It’s a single synth definition, creating three different synths with different settings, playing random notes from a pentatonic scale with random but quantized timings. This isn’t straight-up copied from a tutorial. I got some basics, researched some other stuff, played around, and came up with some ideas of my own. Very fun.
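This isn’t my actual code, but a much-simplified sketch of the idea – one synth definition, a pentatonic scale, random notes with quantized durations – might look something like this (the synth name, envelope, and settings here are made up for illustration):

```
(
// A simple synth definition: a sine blip with a percussive envelope.
SynthDef(\blip, { |freq = 440, amp = 0.2|
    var env = EnvGen.kr(Env.perc(0.01, 1.5), doneAction: 2);
    var sig = SinOsc.ar(freq) * env * amp;
    Out.ar(0, sig ! 2);
}).add;
)

(
// Play random notes from a pentatonic scale with quantized timings.
Pbind(
    \instrument, \blip,
    \scale, Scale.minorPentatonic,
    \degree, Pwhite(0, 9, inf),        // random scale degrees
    \dur, Prand([0.25, 0.5, 1], inf),  // random but quantized durations
    \amp, 0.15
).play;
)
```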

I’m not counting this as my goal of releasing a song though. This is the equivalent of drawing 1000 random lines and calling it art. OK, maybe. But maybe not. But maybe.

But I’m currently very excited about this. Supercollider is just straight up writing code in a code editor, executing that code, and having music come out. That works with how my brain has been used to working with graphics over the last 25 years. I’m not great at tools like Illustrator or Photoshop, etc. But give me a graphics api and I can code cool images for days.

Actually, my forays into Supercollider remind me a whole lot of my early days in Flash and ActionScript. I had no idea what I was doing at first, but I was able to draw a circle. Then I was able to make that circle move with code. Then I got it to bounce off the edges of the screen. I added gravity, and dampening, and bouncing coefficients. Then I got drag and drop working and even made it so you could throw the ball. I wrote all that up in my Gravity Tutorial back in the late 90s. It predates “bit-101” by a couple of years. That article right there was the start of everything I’ve done since in technology.

With sound and music, I feel like I’m just about at that point where I have the circle bouncing off the sides of the screen. There’s so much more to learn and I feel like I can just dive into different topics and add little bits of knowledge to what I already know and make cooler and more interesting soundscapes.

The 2000’s were great, discovering all the different things I could do with Flash. Graphics, animation, physics, math, fractals, art, etc. In the 2010’s, HTML and JavaScript started taking over. You could now do most of what you could do in Flash right on the web, with no plugins, either with 2D canvas, or WebGL. Even SVG got pretty powerful. I got on the canvas bandwagon and did that for several years. But a whole lot of what I was doing was rehashing ideas I came up with in Flash the previous decade. About 6 years ago I started coding in Go with Cairographics and I’ve really been enjoying that. But again, doing a lot of the same themes over again. Yeah, I’ve found plenty of new things and have leveled up a lot of different techniques, but it’s not the same as that first bunch of years of pure discovery.

For years I’ve felt like I’m constantly looking around for some new graphical technique that will excite me. Now I’m thinking that switching from graphics to audio might be exactly the kind of move I’ve been looking for. There’s SO MUCH to learn. There’s so much stuff that I just don’t have a clue about. It’s daunting, but very exhilarating. Because I feel I CAN learn it. And it’s all brand new.

There’s also the allure of creating the graphics AND accompanying music for an animated piece. I’ve done a collaboration or two with a friend who actually knows how to make music, and I have another collab I’m working on with another friend, and I’ve created some music with Garageband to go with one animation. That worked out better than it might sound, but to write the visuals and the audio myself, in code. That’s the holy grail. So, we’ll see where this goes…