Music DNA

See the experiment

I had what I thought was a fairly simple idea: what if I took a song and mapped its frequencies in a dial? The whole song would equate to 360° and each slice of the song’s “pie” would be coloured up according to the active frequencies at that point in the song. That was the rough idea, and given I had a 10 hour flight to San Francisco looming, I thought I would see what I could get done.

Audio Analysis

This is actually super easy to do with the Web Audio API. It has an FFT analyser built right in, and given the Web Audio API acts like a node graph, it’s simple to set up:

// Create the context
var audioContext = new AudioContext();

// Now create our nodes
var analyser = audioContext.createAnalyser();
var sourceNode = audioContext.createBufferSource();

// Then hook them together
sourceNode.connect(analyser);
analyser.connect(audioContext.destination);

Now all you do is tell the sound to play and, during the rendering, you ask for the current data from the analyser:

// The analyser writes into a Uint8Array sized to its bin count.
var arrayBuffer = new Uint8Array(analyser.frequencyBinCount);

// Get the frequency data out of the analyser
// and shove it in arrayBuffer.
analyser.getByteFrequencyData(arrayBuffer);

One thing I didn’t realise at first was that a bufferSourceNode (which I call sourceNode in my code) is a sort-of “one-shot node”, which you specifically can’t re-use. It’s kind of a view onto the audio data (which you can, of course, re-use), but the general idea is that you make one, use it, and fuggedabowdit. Anyway, just thought I’d mention that.
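
To make that concrete, here’s a rough sketch of how the pieces might hang together, assuming a file object from a drop event plus the audioContext, analyser and arrayBuffer from above. It isn’t the experiment’s actual code, just the general shape:

var reader = new FileReader();

reader.onload = function (evt) {

  // Decode the raw file data into an AudioBuffer, which is reusable.
  audioContext.decodeAudioData(evt.target.result, function (buffer) {

    // Source nodes are one-shot, so make a fresh one for each play.
    var sourceNode = audioContext.createBufferSource();
    sourceNode.buffer = buffer;
    sourceNode.connect(analyser);
    sourceNode.start(0);

    requestAnimationFrame(render);
  });
};

reader.readAsArrayBuffer(file);

function render() {

  // Grab this frame's frequencies...
  analyser.getByteFrequencyData(arrayBuffer);

  // ...draw this slice of the dial, then go again.
  requestAnimationFrame(render);
}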

Rendering the dial

This is where the majority of my time went. The idea behind a lot of my creative coding is that I don’t really know what I’m actually trying to make, I’ll just try some stuff and see where it takes me. I have to have something that I think is a solid idea before I get going, of course, and up to this point I knew roughly what I wanted to make and I’d focused on having the data ready.

During the coding I actually thought to take screenshots (which I normally forget) just so I could show you a little bit of the process. Here’s what the rendering looked like at various points:

So white!

Basically at this point the rendering worked and that was about it. My overriding concern was to make sure that the rough shape and size was correct. But it clearly needed, uhhh, more love.

Randomized colouring.

I figured out pretty quickly that I wanted colour rather than an all-white output. I tried a couple of things here:

  • Completely random colouring. Just pick a random hue and go crazy! Wooo! Looked awful.
  • Hue-based colouring, both single cycle and multi-cycle. For the single cycle I just mapped the angle of where I’m rendering to the hue, which is easy because both a circle and the HSL hue are in the range of 0 to 360 (there’s a sketch after this list). For the multi-cycle I just wrapped around a few times (which is the picture above) and it looked hideous.
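
Roughly speaking, the mapping is as simple as this sketch suggests (the saturation and lightness values here are made up, not the experiment’s numbers):

// The whole song is 360°, so playback progress (0 to 1)
// maps straight onto an angle around the dial.
function angleForProgress(progress) {
  return progress * 360;
}

// Single cycle: the hue simply tracks the angle, since both the
// dial and the HSL hue run from 0 to 360. A multi-cycle version
// would wrap, e.g. (angle * cycles) % 360.
function colourForAngle(angle) {
  return 'hsl(' + Math.round(angle) + ', 80%, 60%)';
}
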
Single hue cycle, getting nearer...

Eventually I settled it down to a single cycle, which you can see above. I also added a couple of other things:

  • A threshold. Part of the problem of the early versions was that the data kind of overwhelmed the visualization. There was simply too much getting rendered. By adding a threshold to the whole thing only a few key frequencies per slice actually make the cut.
  • Some contrast. I decided that another issue was that I was seeing a lot of small dots, and it needed some contrast. Entirely at random (if (Math.random() > 0.999) ...) I promote some dots to being bigger splodges (there’s a sketch below). A side effect of this is that as more data gets past the threshold we get more splodges. That means the louder areas are glowier and niiiiice!
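
Those two tweaks boil down to something like the sketch below; the threshold and dot sizes are illustrative values rather than the real numbers:

// Values from getByteFrequencyData sit in the range 0–255.
var THRESHOLD = 190;

function drawDot(ctx, x, y, frequencyValue, colour) {

  // Only the strongest frequencies in each slice make the cut.
  if (frequencyValue < THRESHOLD)
    return;

  // Very occasionally promote a dot to a bigger splodge. Louder
  // slices push more dots past the threshold, so they also pick
  // up more splodges and end up glowier.
  var radius = (Math.random() > 0.999) ? 6 : 1.5;

  ctx.fillStyle = colour;
  ctx.beginPath();
  ctx.arc(x, y, radius, 0, Math.PI * 2);
  ctx.fill();
}
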
More colours, but too much noise.

And then…

The final look.

Finally after some serious number fishing (sport of Kings and Queens) I managed to settle on something that I thought looked pretty.

An accessibility concern

I got called out by Sven Schwedas about the fact that I had failed to offer a <form> for people who couldn’t use the default drag-and-drop functionality.

Sven Schwedas called me out
Got that one wrong, Lewis. Well done.

My initial response was, in honesty, frustration, not because I don’t care about accessibility (far from it), but because my main aim had been to create a fun little doodle. Fixing it was trivial but, actually, I was wrong not to include accessibility in my thinking from the start.

I’ve said that accessibility (like security and performance) needs to be an underpinning attitude to development, so I should’ve applied it to my own work. Hopefully I’ll get that right next time.

Next steps

There’s been a lot of fantastic feedback from the people who have played with Music DNA. I already felt like I wanted to do more with it, but the feedback has entirely validated those ideas. So what’s next?

  • High Resolution Save. The idea of getting a print quality output from the visualization is super appealing to me. I have no idea if it’s going to be possible, because of the sheer memory weight of creating large canvas elements, but I intend to find out!
  • Multi-track Visualizations. I was asked if it could be extended to cover multiple tracks. I think that’s a super neat idea; it would be awesome to get a dial (or collection of dials) for a whole album. Realistically, though, that’s more likely once we have…
  • Offline mode. It’s nice to be able to listen to the track and watch the visualization appear, but a nice addition would be to analyze a file and spit out the graphic much more quickly. For that we need to process the audio as fast as possible, which is exactly what the offline audio context is for. Unfortunately the offline audio context in Chrome has bugs around JavaScript nodes (boooo!) so I’m toying with other approaches (there’s a rough sketch after this list).
  • Normalized analysis. I had to pick values for the audio gain and so on that favour professionally mastered pop and electronic music. The downside is that it doesn’t work as well for quiet or loud music, and I think there’s something I can do to have the analyser adapt to the volume of the audio. We’ll see.
  • Integration with [insert music service here]. I would love this, but I doubt it’ll be possible. Well, Soundcloud seems like the most likely, but I would personally be super excited if there was some way I could get integrated with Google Music, Spotify or another streaming service.
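
For the offline mode, the rough shape would be something like the sketch below, assuming the track has already been decoded into an AudioBuffer called audioBuffer; the awkward part, which is where those JavaScript-node bugs bite, is sampling the analyser as the offline render progresses:

// Render the whole track as fast as the machine allows,
// rather than in real time.
var offlineContext = new OfflineAudioContext(
    audioBuffer.numberOfChannels,
    audioBuffer.length,
    audioBuffer.sampleRate);

var offlineSource = offlineContext.createBufferSource();
offlineSource.buffer = audioBuffer;
offlineSource.connect(offlineContext.destination);
offlineSource.start(0);

offlineContext.oncomplete = function (evt) {
  // evt.renderedBuffer now holds the fully rendered track.
};

offlineContext.startRendering();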

I’ve filed bugs for all of them. Onwards!