# Resource management in chucklib

This tutorial is about using PR/BP's resource management hooks to create a relatively complex process that can be invoked and run with the same simple commands as in the previous tutorial.

What you should learn in this tutorial:

• The four places to put code to create and destroy permanent and runtime resources.
• An advanced synthdef construction technique, SynthDef.wrap().

### Terminology

Let's agree on some terms up front:

• Permanent resource: an object that is needed from the time the PR is instantiated into a BP (e.g., PR(\name) => BP(\name)) until the time the BP is freed. Ideally this is a resource that does not use much CPU. It may or may not be expensive to create: file reading, for instance, or filling wavetable buffers can take an unknown amount of time to complete, so those operations aren't good candidates for runtime resources.
• Runtime resource: an object that needs to be created when the BP starts playing back and which should be removed when it stops. CPU-intensive resources are better written as runtime resources so that they can use CPU only when they're actually active. There's no sense in running a costly effect synth when the effect's input is silent!

### Constructor/destructor hooks

A hook is just a place where the user can insert code into the normal flow of execution. chucklib has a lot of hooks, since I wanted it to be very flexible.

For resource management, think of the hooks in pairs:

Permanent resource hooks:

• ~prep: constructor, runs upon PR => BP
• ~freeCleanup: destructor, runs upon BP().free

Runtime resource hooks:

• ~asPattern: constructor. There are two ways to do it:

1. Create the resources inside the ~asPattern function, before building the pattern. The resources will be created as soon as the play or reset methods are called; depending on the quantization factor, the pattern may begin several beats later.

2. Create the resources as the first event in the pattern. The resources will be created exactly at pattern onset. Sound-making resources should usually be handled this way.

• ~stopCleanup: destructor, runs when the pattern actually stops (which may be later than receipt of the .stop message, depending on quantization).

You can think of them in a hierarchy, with ~prep and ~freeCleanup as the parents and ~asPattern/~stopCleanup as the children. Parents can have many children, but any given child will have only one parental pair (contemporary living arrangements excepted!). Likewise, ~prep and ~freeCleanup should only run once for the lifetime of the BP, while ~asPattern and ~stopCleanup should run as many times as the process is played and stopped.
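Schematically, the paired hooks look like this (a bare skeleton with placeholder comments, not working code; \someProcess is just an example name):

```supercollider
// skeleton only: each hook body is a placeholder
PR(\abstractProcess).v.clone({
   ~prep = {          // permanent constructor: runs once, upon PR => BP
      // load buffers, build synthdefs, create the MixerChannel...
   };
   ~freeCleanup = {   // permanent destructor: runs once, upon BP(...).free
      // free the buffers, synthdefs and channel created in ~prep
   };
   ~asPattern = {     // runtime constructor: runs on each play/reset
      // create runtime resources here (or as the pattern's first event),
      // then return the pattern to play
   };
   ~stopCleanup = {   // runtime destructor: runs each time the pattern stops
      // release runtime resources and clear their variables
   };
}) => PR(\someProcess);
```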

Important: a BP by default reuses the same stream when it is played again after being stopped. That allows the process to pick up where it left off. If you are using runtime resources, however, they will be created only when the pattern resets. Therefore, you should set the ~alwaysReset flag to true, to make sure the runtime constructor executes every time you play.

Also important: If you create runtime resources, make sure you release them in ~stopCleanup. If you don't, and the resources get created again on the next play, you might have duplicate objects or synths to which you no longer have a reference. So, you might have synths that you can never free! That's bad.

I usually write code into the runtime constructor so that if the object still exists, I won't recreate it. A simple way is to save the resource in an environment variable when you create it (which you have to do if you want to free it later), then set that variable to nil when you free the resource. At creation time, check to see if the variable is nil. If so, you have to create the resource; otherwise, don't. The example will illustrate.
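As a generic sketch of that guard (the ~mysynth variable and \someDef synthdef name are hypothetical, purely for illustration):

```supercollider
// in the runtime constructor: create only if the reference is empty
~mysynth.isNil.if({
   ~mysynth = Synth(\someDef);   // keep the reference so we can free it later
});

// in the runtime destructor: free, then clear the reference
~mysynth.notNil.if({
   ~mysynth.free;
   ~mysynth = nil;   // the next play will see nil and recreate the resource
});
```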

### Example: Overlapping effects

To show how it works in practice, let's make a process that will play a sound file buffer onto a MixerChannel repeatedly, and which will also randomly select filter synths and overlap them in a serial effect chain (cascading the effects rather than adding them).

What resources do we need?

• A buffer with a loaded soundfile
• A MixerChannel
• A buffer player synthdef
• Effect synthdefs

In addition, we need a synth to play back the buffer. That's better handled as a runtime resource, though, because the process should make noise only when it's playing.

A couple of other design decisions before jumping straight to the coding:

Is the process pattern going to have any parameters you might want to change while the process is running (see Tutorial 1)? I decided on three: fade in/out time for an effect synth, hold time for an effect, and a delta factor to control the degree of overlap (1.0 = no overlap; 0.0 = complete overlap, i.e., two effects start at the same time).

What do we need to know about the effects?

• What to call it
• What UGens to use (how to process the sound)
• How to randomize parameters for each event (the parameters and desired random ranges may be different for each effect)

So we need a data structure that will hold these entities in a single object. For ad hoc structures like this, I like Event because it has a very simple shortcut syntax. Note that "args:" is a { function } returning an [array], so that the random parameters will be reevaluated on each call.

(name: \effectname,
   func: { UGens.ar(...) },
   args: { [\parm1, rrand(...), \parm2, rrand(...)] })

All the effects will share a common input, envelope and output phase. Rather than repeat the same code for each effect, we can use a synthdef wrapping technique to embed the function along with its arguments into a wrapper synthdef. I'll explain it in more detail when we get to it in the code.

Time to look at some code!

PR(\abstractProcess).v.clone({
   ~event = (eventKey: \singleSynthPlayer);
   ~soundfile = "sounds/a11wlk01.wav";

As in Tutorial 1, cloning abstractProcess builds some useful general methods into the process. Most important is the ability to chuck patterns in while the process is playing.

We also have to specify the event type; singleSynthPlayer is a very stripped-down version of the default event (see the streams-patterns-events help for details). The correct syntax is as shown here: assign ~event to an event containing a single key/value pair, eventKey: \nameOfEventPrototype. The name will be looked up in the ProtoEvent() collection.

~soundfile is there because I don't believe in hardcoding anything unless it increases the complexity to a ridiculous degree. Putting the file path in a variable means I could load a different file just as easily:

// just an example of overriding a default parameter; this isn't in the PR code!
PR(\fxOverlap).chuck(BP(\overlap), nil, (soundfile: "sounds/some-other-file.aiff")); 

Next comes an array of effect specifications, in the format explained above.

      // these will be compiled into synthdefs in ~prep
   ~effects = [
      (name: \ringmod1, func: { |sig, freq|
         SinOsc.ar(freq, 0, sig)
      }, args: { [\freq, exprand(100, 2000)] }),

      (name: \ringmod2, func: { |sig, freq, lofreq, hifreq|
         SinOsc.ar(LFNoise1.kr(freq).range(lofreq, hifreq), 0, sig)
      }, args: { [\freq, rrand(0.1, 1.6), \lofreq, exprand(100, 2000),
         \hifreq, exprand(100, 2000)] }),

      (name: \comb1, func: { |sig, delay, dur|
         CombN.ar(sig, delay + 0.01, delay, dur)
      }, args: { [\delay, exprand(50, 500).reciprocal, \dur, rrand(0.05, 1.0)] }),

      (name: \comb2, func: { |sig, freq, lodelay, hidelay, dur|
         CombL.ar(sig, hidelay + 0.01, LFNoise1.kr(freq).exprange(lodelay, hidelay), dur)
      }, args: { var d1 = exprand(50, 500).reciprocal, d2 = exprand(50, 500).reciprocal;
         [\freq, rrand(0.2, 14.0), \lodelay, min(d1, d2), \hidelay, max(d1, d2),
         \dur, rrand(0.05, 1.0)] }),

      (name: \echo, func: { |sig, delay, dur|
         CombN.ar(sig, delay + 0.01, delay, dur)
      }, args: { [\delay, rrand(0.4, 1.0), \dur, rrand(4.0, 12.0)] })
   ];

The ~prep method starts to do the heavy lifting. First we load the buffer and create a synthdef to play it.

   ~prep = {
      ~buffer = Buffer.read(s, ~soundfile);
      SynthDef(\bufPlayer1, { |outbus, bufnum, gate = 1|
         var basefreq = BufDur.ir(bufnum).reciprocal * 0.94,
         trig = Impulse.kr(basefreq + LFNoise1.kr(1, basefreq * 0.5, basefreq * 0.9)),
         env = EnvGen.kr(Env(#[1, 1, 0], #[1, 0.2], -4, 1), gate, doneAction: 2),
         sig = PlayBuf.ar(1, bufnum, BufRateScale.ir(bufnum), trig,
            TRand.kr(0, BufFrames.ir(bufnum) * 0.5, trig));
         Out.ar(outbus, Pan2.ar(sig * env, LFNoise1.kr(1)))
      }).send(Server.default);
      ~bufplayer = \bufPlayer1;

For each effect definition, we need a separate synthdef. All these synthdefs have some elements in common: an In UGen to read the signal from the audio bus, and an envelope to fade the effect in and out smoothly. Rather than copying and pasting these elements into every synthdef, it's easier to conceive of the SynthDef object as a wrapper for the function specified in the ~effects array.

The SynthDef.wrap call does exactly this: SynthDef.wrap(function, rates, prependArgs). Any of the function's arguments that are not filled in by prependArgs will be promoted to synthdef inputs, addressable in the usual way ([\argName, value]). We don't have to worry about the rates input (in fact, it's nil in the provided code). prependArgs is important, though: it lets us pass values or signals from the enclosing synthdef into the inner function. In this case, the dry signal gets passed in.
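As a stripped-down illustration outside the tutorial's code (the \wrapDemo name and the ring-modulator function are my own toy example): the wrapper fills the inner function's sig argument through prependArgs, and the leftover freq argument becomes a control of the finished synthdef.

```supercollider
(
var ringmod = { |sig, freq = 440|
   // 'sig' is supplied by prependArgs below; 'freq' is left unfilled,
   // so it is promoted to a control of the enclosing synthdef
   SinOsc.ar(freq, 0, sig)
};
SynthDef(\wrapDemo, { |out|
   var dry = PinkNoise.ar(0.1);
   var wet = SynthDef.wrap(ringmod, nil, [dry]);   // pass the dry signal in
   Out.ar(out, wet ! 2)
}).send(Server.default);
)
// later: Synth(\wrapDemo, [\freq, 700]);
```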

      // build effects by wrapping each effect function in a fade in/out envelope
      ~effects.do({ |fxdef|
         SynthDef(fxdef.name, { |gate = 1, fadetime = 4, holdtime = 4, outbus|
            var sig = In.ar(outbus, 2),
            wetenv = EnvGen.kr(Env(#[0, 1, 1, 0], [fadetime, holdtime, fadetime], \lin),
               gate, doneAction: 2),
            wetsig;
            wetsig = SynthDef.wrap(fxdef[\func], nil, [sig]);
            sig = XFade2.ar(sig, Limiter.ar(wetsig, 0.9), wetenv.madd(2, -1));
            ReplaceOut.ar(outbus, sig);
         }).send(Server.default);
      });

In the case of the first effect listed above, \ringmod1, this code will produce a synthdef just like the following. The advantage of this structure is that I have to write the wrapper only once. Further, if I want to change the wrapper, I need to change it in only one place and every effect synthdef will pick it up.

// gate, fadetime, holdtime and outbus come from the wrapper;
// freq comes from the inner function
SynthDef(\ringmod1, { |gate = 1, fadetime = 4, holdtime = 4, outbus, freq|
   var sig = In.ar(outbus, 2),
   wetenv = EnvGen.kr(Env(#[0, 1, 1, 0], [fadetime, holdtime, fadetime], \lin),
      gate, doneAction: 2),
   wetsig;
   wetsig = SinOsc.ar(freq, 0, sig);   // this comes from the effect function
   sig = XFade2.ar(sig, Limiter.ar(wetsig, 0.9), wetenv.madd(2, -1));
   ReplaceOut.ar(outbus, sig);
}).send(Server.default);

A handful of patterns define synth parameters that will be evaluated on each iteration. These will be referenced later using BPStream.

      ~fade = Pwhite(2.0, 8.0, inf);
      ~hold = Pwhite(5.0, 16.0, inf);
      ~dfactor = Pwhite(0.4, 0.7, inf);

      ~chan = MixerChannel(\bufPlayer, inChannels: 2, outChannels: 2);
   };

Now here's a line that's easy to overlook, but which is critical: the process's pattern must kick off the source (the buffer player synth) every time the process is played. If the ~alwaysReset flag is false, the pattern will resume instead of restarting from the beginning, and the synth will not be recreated on subsequent play requests.

This flag should be true for any process that creates resources at the beginning of play and releases them upon stop (in ~stopCleanup).

   ~alwaysReset = true;

Now, the pattern. It comes in two sections: the first creates the buffer player synth; the second periodically spawns a new effect. Pseq joins the two into a single event sequence.

   ~asPattern = {
Pseq([

Pfuncn executes the function once (as many times as the number argument, actually, but here it's 1). I use makeBundle to make sure the sound of the synth appears at exactly the right time. (I'll cover the handling of latency in a later tutorial. In the meantime, if you're not familiar with the purpose of messaging latency, read the server timing help file in the SuperCollider distribution.)

The function has to return an event, specifying the length of time to wait for the next event in the delta key (it's given in beats). play: 0 suppresses the default event's action, so that this event functions as a rest.

Note that before doing anything, we make sure that ~bufsynth is empty. If it already contains a value and we go ahead with the new synth creation, we might lose the reference to an existing synth and be unable to free it later. It then becomes an allocated resource that can never be released: in effect, a memory leak.

      Pfuncn({
         ~bufsynth.isNil.if({  // does my runtime resource exist?
            ~bufsynth = ~chan.play(~bufplayer,
               [\bufnum, ~buffer.bufnum]);
         });
         (play: 0, delta: rrand(2.0, 5.0))   // dummy event
      }, 1),

Pbind serves as the looping mechanism, choosing the effect, envelope times, and event delta. After we get the new event out of Pbind, .collect lets us insert the effect-specific parameters into the event.

Note the formula for delta. The synth's envelope is defined in three segments (fade in, sustain, fade out) so the total duration of the synth will be holdtime + (fadetime*2). This value gets multiplied by the next value from the dfactor stream—smaller values mean more overlap, larger values, less.
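For instance, with numbers chosen purely for illustration:

```supercollider
// suppose one iteration draws fadetime = 3, holdtime = 6, dfactor = 0.5:
6 + (3 * 2)           // synth duration: 12 beats
(6 + (3 * 2)) * 0.5   // delta: 6 beats, so the next effect begins
                      // halfway through this one (50% overlap)
```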

      Pbind(
         \chan, ~chan,
         \isFx, true,
         \fxdef, Pfunc({ ~effects.choose }),
         \instrument, Pfunc({ |ev| ev[\fxdef].name }),
         \fadetime, BPStream(\fade),
         \holdtime, BPStream(\hold),
         \delta, Pfunc({ |ev| ev[\holdtime] + (ev[\fadetime] * 2) })
            * BPStream(\dfactor)
      ).collect({ |ev|
         ev[\fxdef].args.pairsDo({ |key, value|
            ev.put(key, value)
         });
         ev
      })
   ], 1)
};

Sit tight, not much left! When the process is stopped, we have to terminate the source synth. set(\gate, 0) is the standard way to release a sustaining envelope. It's also necessary to clear the ~bufsynth variable so the synth can be recreated on the next play. The " !? { } " structure ensures that the cleanup will take place only if a bufsynth is currently active. If not (if the variable is nil), the cleanup should be skipped.
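If the !? operator is new to you: it evaluates the function only when the receiver is not nil, passing the receiver in as the function's argument.

```supercollider
nil !? { |x| "never runs".postln };   // receiver is nil: function skipped, returns nil
5 !? { |x| x * 2 };                   // receiver not nil: function runs, returns 10
```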

   ~stopCleanup = {
      ~bufsynth !? { ~bufsynth.set(\gate, 0); ~bufsynth = nil; };
   };

And, in ~freeCleanup, we have to drop the mixer channel and the source buffer at minimum. I go one step further and release the effect synthdefs from the server. (I have a shortcut for this in my library: \aSymbol.free sends the message [\d_free, \aSymbol] to all the registered servers.)

   ~freeCleanup = {
      [~chan, ~buffer].free;
      ~effects.do({ |fxdef| fxdef.name.free });
   };
}) => PR(\fxOverlap);

That's it! The benefit, of course, is that playback is extremely simple.

PR(\fxOverlap) => BP(\test);
BP(\test).play;

Pwhite(0.1, 0.4, inf) =>.dfactor BP(\test);
Pwhite(0.4, 0.8, inf) =>.dfactor BP(\test);

BP(\test).stop;
BP(\test).free;

So, to recap:

• Permanent resources should persist for the life of the BP. Create them in ~prep; release them in ~freeCleanup.
• Runtime resources are allocated when play starts and released upon stop. Create them in ~asPattern (before building the pattern) if exact timing is not important, or as the first event in the pattern if timing is critical. Release them in ~stopCleanup.

Keeping all of a process's resources in the same PR has a couple of significant benefits for live performance:

• No variable name conflicts if you want to create multiple copies of the same process.
• Resources are reliably allocated and deallocated with simple commands (no forgetting to create a resource because you overlooked it in the code file).
• Code is bound into a package and can easily be brought into another musical context.