Part 2: Notes On Time


In this section, we're going to discuss a couple of notes on timing, and see how we can ensure that all our notes arrive on time.

When dealing with musical applications, we need to pay special attention to timing. Far more so than with visual media like animation, humans are sensitive to tiny timing discrepancies in sound, and can detect changes at millisecond resolution.

This means that when developing music applications, we need to make sure that we can accurately schedule sounds when we want them to happen, and have the computer reliably play back those sounds when we expect them to.

Background


A basic p5.js program (called a "sketch") generally has a setup function that runs once, and a draw loop that repeats at a certain frame rate.

var xpos = 0;

// Runs only once, when the page loads
function setup() {
  createCanvas(windowWidth, windowHeight); // Canvas size fills its container
}

// Runs after setup, and repeats 60 times a second by default
function draw() {
  background(255, 0, 0); // Paint the background red (RGB color)
  ellipse(xpos, height/2, 50, 50); // Draw an ellipse with width = height = 50

  // Increase the x-position on each loop
  xpos = xpos + 1;
  // Reset the x-position if it reaches the edge
  if (xpos > width) {
    xpos = 0;
  }
}

The setup and draw functions are special p5.js functions, typically used for program initialization and animation respectively. In this example, the setup function simply creates a blank canvas, and the draw function draws a circle to the screen, updating its x-coordinate on every cycle (according to the frameRate).

The output of this program looks like this:

A typical p5.js program, known in the p5 community as a "sketch".

To control how frequently the draw loop repeats, we can set its frame rate using the frameRate() function. Keeping our eyes on the goal of making music, this means we can use the draw loop as a sort of regular clock/timer. For example, if we wanted to make a metronome app that beeps at 60 beats-per-minute (BPM), we might do the following:

var synth;
var bpm = 60; // Metronome ticks at 60 beats-per-minute

function setup() {
  createCanvas(100, 100);
  synth = new p5.PolySynth();
  frameRate(bpm / 60); // Frames per second = beats-per-minute / 60 seconds
}

// Now repeats once per beat (once per second at 60 BPM)
function draw() {
  background(255); // Paint the background white
  synth.play(440); // Play a short note at 440Hz
}

However, there’s a problem with this approach: the draw loop was not designed for timing-critical applications. The frame rate we set may not be strictly adhered to - the loop may run slower if, for example, it has a large amount of graphics to render, or your browser is running many active tabs.
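To see this jitter for yourself, you can log the time between consecutive draw calls - a minimal sketch using p5's millis() (the lastTime variable is our own):

var lastTime = 0;

function setup() {
  createCanvas(100, 100);
}

function draw() {
  background(255);
  var now = millis();
  // At the default 60fps we expect roughly 16.7ms between frames, but the
  // actual interval jitters - especially when the tab is busy or unfocused
  console.log("Time since last frame: " + (now - lastTime).toFixed(1) + "ms");
  lastTime = now;
}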

The Solution


To handle the need for precise audio scheduling, the p5.sound library offers an alternative scheduling mechanism called the SoundLoop. Using the SoundLoop, we can build a metronome as before, with just a slight change in code:

var synth;
var sloop;
var loopInterval = 1; // Loop interval of 1 second corresponds to 60 BPM

function setup() {
  noCanvas();
  synth = new p5.PolySynth();
  // Create a SoundLoop which calls mySoundLoop every loopInterval seconds
  sloop = new p5.SoundLoop(mySoundLoop, loopInterval);
  sloop.start();
}

function mySoundLoop(cycleStartTime) {
  // Play a note at 440Hz, velocity of 1 (full volume),
  // The note is scheduled to begin at the start of each cycle, 
  // and is held for a duration of 0.5s
  synth.play(440, 1.0, cycleStartTime, 0.5);
}

The SoundLoop introduces a couple of new concepts which we will explain later in this tutorial. For now, notice that there is no more draw loop - the draw loop is optional if we are only using sound. In many cases, though, it is useful to have both the SoundLoop and the draw loop, to handle audio and visuals respectively.

To convince ourselves that the SoundLoop is better for scheduling audio, let's compare the two approaches by listening to both metronomes together in the same sketch:

Comparing the timing accuracy of the draw loop versus the SoundLoop. The loops don't necessarily start in sync; what's important is to observe the consistency of their individual cycles. Scroll away or switch focus to a different tab to see an obvious difference in behavior!

Notice that when you first start the sketch, the two tones are in sync, but after some time they start to drift apart perceptibly, especially if you scroll away or switch to a different tab (the browser allocates fewer resources to unfocused tabs, so the draw loop is neglected). The difference may be more or less obvious depending on your processor speed, but in general the SoundLoop will always be more reliable for scheduling sounds than the draw loop. This is because the SoundLoop is built on the Web Audio clock, which you can find out more about in this article by Chris Wilson.
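If you're curious, p5.sound exposes the underlying audio context, so you can peek at this clock yourself - a tiny sketch (the mousePressed logging is purely for illustration):

function setup() {
  createCanvas(100, 100);
}

function mousePressed() {
  // The Web Audio clock is a high-precision timer (in seconds) that starts
  // when the audio context is created, and runs independently of frame rate
  console.log("Audio clock time: " + getAudioContext().currentTime);
}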

Best Practices for Musical Timing


So now, we have some inkling of how to build p5.js sketches with accurate timing information. But how do we go from our humble metronome to that supercalifabuloustic-interactive-musical-journey you have in your head?

Perhaps the best way to answer this is to look at some examples and see how we might handle them.

Instantaneous Reaction

One of the most common use cases for audio in programs is as instantaneous feedback for user interactions. Whether you’re working on a 2018 revamp of Space Invaders which goes ZZZZOOP on every laser fired, or a simple navigation menu whose buttons BOOP, you’re going to want to set up your sounds to be triggered instantaneously based on certain events.

In this basic type of scenario, there is no need for the SoundLoop or any advanced scheduling. All we need to do is instantiate our sound object in setup, and play that sound in the event handler.
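As a minimal sketch of this pattern (here using a p5.PolySynth and a mousePressed handler purely for illustration):

var synth;

function setup() {
  createCanvas(100, 100);
  synth = new p5.PolySynth(); // Instantiate the sound object once, in setup
}

function mousePressed() {
  // Play immediately in response to the event:
  // note C4, full velocity, no delay, held for 0.2s
  synth.play('C4', 1, 0, 0.2);
}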

We will explore such a scenario in the context of creating a virtual piano application. For our virtual piano, we will use a PolySynth object to produce sounds, and the keyPressed and keyReleased events to start and stop the sounds:

var synth;
var keyOrder = "ASDFGHJKL";
var keyStates = [0,0,0,0,0,0,0,0,0];
var baseNote = 60; // MIDI note 60 is Middle C (C4)
var velocity = 1; // Between 0-1

function setup() {
  createCanvas(windowWidth, windowHeight);
  synth = new p5.PolySynth();
}

function keyPressed() {
  var keyIndex = keyOrder.indexOf(key);
  // Check if a valid note key was pressed
  if (keyIndex >= 0) {
    // Update key state
    keyStates[keyIndex] = 1;
    // Play synth
    var midiNoteNumber = baseNote + keyIndex; // 0-127
    var freq = midiToFreq(midiNoteNumber);
    synth.noteAttack(freq, velocity, 0);
  }
}

function keyReleased() {
  var keyIndex = keyOrder.indexOf(key);
  // Check if a valid note key was released
  if (keyIndex >= 0) {
    // Update key state
    keyStates[keyIndex] = 0;
    // Stop synth
    var midiNoteNumber = baseNote + keyIndex; // 0-127
    var freq = midiToFreq(midiNoteNumber);
    synth.noteRelease(freq, 0);
  }
}

And the result of that would be something like this:

A simple virtual piano program. Type "ASDFGHJKL" to play!

Playing A Sequence of Notes (Static Scheduling)

Another situation, directly relevant to interactive and algorithmic music composition, is playing a sequence of notes.

For example, let's say you want your sketch to play Mary Had A Little Lamb. First, you need to figure out how to represent your sequence of notes. There are a great many ways to do this, but perhaps one of the most intuitive representations is to just list out all the notes along with their velocity and timing information, like so:

var song = [ 
  // Note pitch, velocity (between 0-1), start time (s), note duration (s)
  {pitch:'E4', velocity:1, time:0, duration:1},
  {pitch:'D4', velocity:1, time:1, duration:1},
  {pitch:'C4', velocity:1, time:2, duration:1},
  {pitch:'D4', velocity:1, time:3, duration:1},
  {pitch:'E4', velocity:1, time:4, duration:1},
  {pitch:'E4', velocity:1, time:5, duration:1},
  {pitch:'E4', velocity:1, time:6, duration:1},
  // Rest indicated by offset in start time
  {pitch:'D4', velocity:1, time:8, duration:1},
  {pitch:'D4', velocity:1, time:9, duration:1},
  {pitch:'E4', velocity:1, time:10, duration:1},
  {pitch:'D4', velocity:1, time:11, duration:1},
  // Chord indicated by simultaneous note start times
  {pitch:'C4', velocity:1, time:12, duration:2},
  {pitch:'E4', velocity:1, time:12, duration:2},
  {pitch:'G4', velocity:1, time:12, duration:2}
];

Then, you could use a for-loop to schedule all the notes to occur at their respective timestamps:

for (var i=0; i<song.length; i++) {
  var note = song[i];
  synth.play(note.pitch, note.velocity, note.time, note.duration);
}

This works, but an important thing to realize is that once you’ve scheduled a sound to occur, you can’t stop it, alter it, or interact with it anymore. This program runs through the entire loop almost immediately, locking all current and future notes into the scheduler so there's no going back.

Imagine a case where you have composed a groovy ‘60s dance hit in your sketch, and you send it to your friend Tracy. Tracy opens up your sketch and turns it on, enjoying the music until she is rudely interrupted by the Fun Police, who remind her that this is a library and you can't play music here. The volume buttons on Tracy’s laptop are broken, and the strange flavor of Linux she uses doesn’t have a sound control panel. She tries to pause the music, but to her dismay she realizes that this isn’t possible because all the notes have already been scheduled to occur the moment she started the sketch. Ashamed, she hangs her head and tells them that she’s sorry, but you can't stop the beat.

While your users may never face this exact situation, in many cases you might realize that you don’t actually want to schedule all notes right away - and for interactive music, we need to be able to alter, add, and remove notes on the fly in response to user interactions. To add this interactivity, we turn to our trusty ol’ SoundLoop.

Playing A Sequence of Notes (Dynamic Scheduling)

In this section, we look at an alternative and more powerful way to play our sequence of notes, using the SoundLoop introduced briefly earlier. Let's begin by looking a little more closely at the SoundLoop and how it works!

The SoundLoop provides users with a way to access the Web Audio clock for accurate audio scheduling. Recall the example from before:

var synth;
var sloop;
var loopInterval = 1; // Loop interval of 1 second corresponds to 60 BPM

function setup() {
  noCanvas();
  synth = new p5.PolySynth();
  // Create a SoundLoop which calls mySoundLoop every loopInterval seconds
  sloop = new p5.SoundLoop(mySoundLoop, loopInterval);
  sloop.start();
}

function mySoundLoop(cycleStartTime) {
  // Play a note at 440Hz, velocity of 1 (full volume),
  // The note is scheduled to begin at the start of each cycle, 
  // and is held for a duration of 0.5s
  synth.play(440, 1.0, cycleStartTime, 0.5);
}

Some important things to take note of:

  1. We need to instantiate a SoundLoop object.
  2. The SoundLoop constructor takes in two arguments: the callback function, and the loop interval.
  3. The callback function is called regularly according to the chosen loop interval, and there is a cycleStartTime parameter passed into each callback, which gives us the look-ahead time we can use to do precise scheduling.

The concept of cycleStartTime deserves a bit more explanation, because it is tricky to understand at first. The cycleStartTime gives us the ideal time that the current cycle of the SoundLoop is supposed to occur at.

For example, we want our SoundLoop to repeat at 1-second intervals, so if we start at time 0 we want the SoundLoop to repeat at time 1, 2, 3, 4, and so on. In practice, the SoundLoop code may be executed at a slightly different time, perhaps at times 1.07, 1.98, 3.11, 4.01. Therefore, whenever we schedule audio within a SoundLoop, we should always schedule against the cycleStartTime instead of the actual time that the code is running at. This ensures that even though the program execution time may deviate slightly from the plan, our scheduled audio will always be correct because we are using the ideal cycleStartTime.
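To make this concrete, you can log the look-ahead inside the callback - a small variation on the metronome's mySoundLoop (the console logging is our own addition):

function mySoundLoop(cycleStartTime) {
  // The callback does not fire exactly on the ideal grid; cycleStartTime
  // is the offset (in seconds from now) to the ideal cycle start
  console.log("Offset to ideal cycle start: " + cycleStartTime.toFixed(4) + "s");
  // Scheduling against cycleStartTime (rather than 0, meaning "right now")
  // is what keeps the notes on the ideal grid
  synth.play(440, 1.0, cycleStartTime, 0.5);
}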

Okay, enough chat! How does all of this look in practice?

Using the SoundLoop to play our sequence of notes, things get a bit more complicated than before. Instead of using a for-loop to loop through the array, we use the SoundLoop as though it were a while-loop. We can change the interval between loop iterations to match the duration of each note, and try something like this:


var noteIndex = 0;

function soundLoop(cycleStartTime) {
  var note = song[noteIndex];
  synth.play(note.pitch, note.velocity, cycleStartTime, note.duration);
  this.interval = note.duration; // Hold off the next cycle until this note is done

  noteIndex++;
  if (noteIndex >= song.length) {
    this.stop(); // Stop the SoundLoop if we've reached the end of the song
  }
}

But if you look closely (or try running the example yourself), you will realize that this doesn't work: each cycle plays only one note, so notes that we intended to occur in unison (like the final chord) get staggered across multiple time steps instead. (Rests disappear too, since the interval only accounts for note durations.)

Unfortunately, there is no quick and easy solution to this, but one approach which turns out to be useful in many cases is to:

  1. Split each note into two separate events - a note-on event when the note begins, and a note-off event when it ends - much like MIDI does.
  2. Order all the events by time, and store each event's timing as the time elapsed since the previous event.

Using this new representation, we can rewrite the sequence of notes for Mary Had A Little Lamb:

var song = [ 
  // pitch, velocity (between 0-1), time since previous event (beats), type (1:ON or 0:OFF)
  {pitch:'E4', velocity:1, timeSincePrevEvent:0, type:1},
  {pitch:'E4', velocity:1, timeSincePrevEvent:1, type:0},
  {pitch:'D4', velocity:1, timeSincePrevEvent:0, type:1},
  {pitch:'D4', velocity:1, timeSincePrevEvent:1, type:0},
  {pitch:'C4', velocity:1, timeSincePrevEvent:0, type:1},
  {pitch:'C4', velocity:1, timeSincePrevEvent:1, type:0},
  // ...
  // Omitted for brevity
  // ...
  // Chord indicated by multiple notes being ON at the same time
  {pitch:'C4', velocity:1, timeSincePrevEvent:0, type:1},
  {pitch:'E4', velocity:1, timeSincePrevEvent:0, type:1},
  {pitch:'G4', velocity:1, timeSincePrevEvent:0, type:1},
  {pitch:'C4', velocity:1, timeSincePrevEvent:2, type:0},
  {pitch:'E4', velocity:1, timeSincePrevEvent:0, type:0},
  {pitch:'G4', velocity:1, timeSincePrevEvent:0, type:0},
];

And now we can use the same idea of looping through each element of the array with the SoundLoop, being careful to treat note-on and note-off events differently, and setting the interval according to the next event's timeSincePrevEvent.

var eventIndex = 0;

function soundLoop(cycleStartTime) {
  var event = song[eventIndex];
  if (event.type == 1) {
    synth.noteAttack(event.pitch, event.velocity, cycleStartTime);
  } else {
    synth.noteRelease(event.pitch, cycleStartTime);
  }
  // Prepare for next event
  eventIndex++;
  if (eventIndex >= song.length) {
    this.stop();
  } else {
    var nextEvent = song[eventIndex];
    // This cycle lasts until the next event is due
    this.interval = nextEvent.timeSincePrevEvent;
  }
}

Whew. That was a bit of effort just to play a couple of notes, wasn't it? But the important thing to realize is that this is a far more powerful way of playing notes than scheduling everything at once. When we use the SoundLoop this way, scheduling one moment at a time, we have the power to interact with the music and change it as it is being produced - and that is where all the fun begins.

Playing a sequence of notes using a SoundLoop, which allows us to start, stop, and even alter playback (e.g. changing tempo) as it happens.
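For instance, changing the tempo mid-playback becomes a matter of scaling the interval - a variation on the loop above (the bpm variable and the mousePressed mapping are our own additions):

var bpm = 60; // Our own tempo variable; it can change while the song plays

function soundLoop(cycleStartTime) {
  var event = song[eventIndex];
  if (event.type == 1) {
    synth.noteAttack(event.pitch, event.velocity, cycleStartTime);
  } else {
    synth.noteRelease(event.pitch, cycleStartTime);
  }
  eventIndex++;
  if (eventIndex >= song.length) {
    this.stop();
  } else {
    // Scale beats to seconds using the *current* tempo, so any edit
    // to bpm takes effect from the very next event onwards
    this.interval = song[eventIndex].timeSincePrevEvent * (60 / bpm);
  }
}

function mousePressed() {
  bpm = map(mouseX, 0, width, 30, 180); // Click position sets a new tempo
}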

Synchronizing Audio and Visuals

The last topic we will look at in this tutorial is how to synchronize audio and visuals.

One of the really exciting things about working with p5.js is that you have a wealth of features to play with. Even if the focus of your sketch is on musical content, it often augments the experience of your users when you add a visual component to complement the sounds you create.

A pretty fun visualization is to represent each note your algorithm creates, adding a new component to the screen every time a note is struck.

The first way we might think of doing this is to simply add drawing functions into the SoundLoop, but that tends not to work out well, because animations generally run at a different frame rate than the SoundLoop. A better way is to keep the audio and visual components separate, using the SoundLoop for sound and the draw function for visuals. To synchronize the two loops, we use a shared program state which both loops read and write to pass information back and forth.
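As a bare-bones sketch of this shared-state idea (the noteCount variable is our own, purely for illustration):

var synth;
var sloop;
var noteCount = 0; // Shared state: written by the SoundLoop, read by draw

function setup() {
  createCanvas(200, 200);
  synth = new p5.PolySynth();
  sloop = new p5.SoundLoop(soundLoop, 0.5);
  sloop.start();
}

function soundLoop(cycleStartTime) {
  synth.play('A4', 1, cycleStartTime, 0.2);
  noteCount++; // Update the shared state whenever a note is scheduled
}

function draw() {
  background(255);
  textAlign(CENTER, CENTER);
  text("Notes played: " + noteCount, width / 2, height / 2); // Visuals read the state
}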

In the following example, we have a ParticleSystem class (borrowed from the p5.js ParticleSystem example) which keeps track of individual Particles, each representing a note.

In the SoundLoop, each time we play a new note we also add a new particle to the system:

// Assumes globals defined elsewhere in the sketch: pentatonic_scale (an array
// of pitch classes), baseOctave, heightLevel (derived from the mouse height),
// numOctaves, and system (a ParticleSystem instance)
function soundLoop(cycleStartTime) {
  // Pick a random pitch class; the octave depends on the mouse height
  var pitchClass = random(pentatonic_scale);
  var octave = baseOctave + heightLevel;
  var currentNote = pitchClass + str(octave);
  
  // Play sound
  var velocity = 1; // Between 0-1
  var duration = this.interval;
  synth.play(currentNote, velocity, cycleStartTime, duration);

  // Add a particle to visualize the note
  var pitchClassIndex = pentatonic_scale.indexOf(pitchClass);
  var xpos = width / (pentatonic_scale.length * 2) + pitchClassIndex * width / pentatonic_scale.length;
  var ypos = height - heightLevel * height / numOctaves;
  system.addParticle(xpos, ypos);
}

In the draw function, all we need is a single call to run the system (which takes care of updating the particle positions and drawing the particles):

function draw() {
  // ... Omitted

  // Update particle system
  system.run();
  
  // ... Omitted
}

The result is this neat little interactive-music demo:

A simple interactive program demonstrating audio and visuals occurring in sync. Notes are produced randomly, but you can move your cursor around the sketch to transpose up and down octaves.

Extra Credit: Step Sequencer

Using all of the concepts that we have learned, we now have what it takes to build something a little more advanced: A step sequencer. Step sequencers are a relatively modern kind of musical instrument used by electronic musicians to easily create beats and melodic patterns.
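At its core, a step sequencer boils down to an on/off grid that a SoundLoop steps through, one column per cycle. Here's a rough sketch of that idea (all names are our own):

var pitches = ['C4', 'E4', 'G4', 'B4'];
var numSteps = 8;
var grid = []; // grid[row][step] is true if that row's note plays on that step
var currentStep = 0;
var synth;
var sloop;

function setup() {
  noCanvas();
  synth = new p5.PolySynth();
  for (var row = 0; row < pitches.length; row++) {
    grid.push(new Array(numSteps).fill(false));
  }
  grid[0][0] = true; // Set a couple of steps by hand (a real sequencer
  grid[2][4] = true; // would toggle these from mouse clicks on the tiles)
  sloop = new p5.SoundLoop(soundLoop, 0.25); // Step every 0.25s
  sloop.start();
}

function soundLoop(cycleStartTime) {
  // Play every note that is switched on in the current column
  for (var row = 0; row < pitches.length; row++) {
    if (grid[row][currentStep]) {
      synth.play(pitches[row], 1, cycleStartTime, 0.2);
    }
  }
  currentStep = (currentStep + 1) % numSteps; // Advance and wrap around
}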

The code behind this example builds upon the same ideas we have already seen, so we won't go into it in detail here - but feel free to check out the full code and remix it into your own creations!

A step sequencer. Click on the tiles to set/unset notes, and hit PLAY to get the groove going!

What's Next?


We've reached the end of the tutorial, and the ball is now in your court!

Brian Eno once said, "I wanted to hear music that had not yet happened, by putting together things that suggested a new thing which did not yet exist." Start from one of the examples, or start from scratch, and go wild with your own musical creations!

P.S. If you haven't already read Part 1 of this tutorial, that's a great place to go next!