And We're Back!
Well it's been quite some time since I've been able to put out a new demo. Pro tip: don't change jobs twice in 6 months if you value your spare time!
I didn't have one. Not really. Like most of my demos, I started out with only a loose notion of what I wanted, or waited for something to spark an idea. In this case I was watching through Robert Hodgin's EYEO talk and the demos he showed. I ended up wanting to recreate the Cornell box he had, though in WebGL rather than OpenGL, but beyond that I didn't have much else. But then microphone access landed in Chrome Canary…
1,2... 1,2... Is this on?
So the theory goes that you can now access the microphone (though it's behind a flag in chrome://flags) in Chrome Canary, and pass it through to the Web Audio API, much as we've been able to do with the audio tag for a while. I decided that a) everyone loves particles and b) particles would be a good match for the 1,024 frequency buckets I knew that the Web Audio Analyser was going to give me. One particle per bucket.
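For anyone curious about the hookup, here's a minimal sketch of routing the microphone into an analyser node. I've written it against the modern, promise-based getUserMedia API rather than the prefixed, flag-guarded version Canary had at the time, so treat it as illustrative rather than the demo's actual code.

```js
// Minimal sketch: microphone → Web Audio analyser (modern API, illustrative).
var audioContext = new AudioContext();
var analyser = audioContext.createAnalyser();
analyser.fftSize = 2048;   // frequencyBinCount will be 1,024 — one per particle

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function (stream) {
    // Wrap the mic stream as a source node and feed it into the analyser.
    // It's deliberately not connected to the destination, so the mic
    // isn't played back out of the speakers.
    var source = audioContext.createMediaStreamSource(stream);
    source.connect(analyser);
  });
```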
What else? Well, I tried having them bounce around depending on how active their bucket was. I tried having them swirl in a pattern proportional to the same value. It was all rather patchy, not least because it's just plain difficult to get visually interesting data out of the Web Audio API. The reason, at least in part, is that although the analyser's buckets span frequencies all the way up to around 22kHz, roughly the upper limit of human hearing (and less than that for most adults), there's really not a lot of interest above, say, 10kHz. All of a sudden our 1,024 buckets just became 512 buckets, and for the really interesting stuff it's even fewer. The Web Audio API is fantastic at giving you the data, and quickly, but it does require some adjusting before it matches up with what human hearing perceives. Thanks to Chris Wilson from Google for helping me get my head around all that!
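To make the bucket point concrete, pulling the data each frame looks roughly like this, assuming the analyser from the sketch above. The half-way cutoff and the updateParticle call are illustrative stand-ins, not the demo's own code.

```js
// Illustrative per-frame sampling of the analyser's frequency buckets.
function sampleFrequencies (analyser) {
  var frequencyData = new Uint8Array(analyser.frequencyBinCount); // 1,024 values
  analyser.getByteFrequencyData(frequencyData);

  // Most of the visually useful energy sits in the lower buckets, so only
  // a portion of the array ends up doing anything interesting on screen.
  var usefulBuckets = frequencyData.length / 2;
  for (var i = 0; i < usefulBuckets; i++) {
    updateParticle(i, frequencyData[i]); // hypothetical per-particle update
  }
}
```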
After a lot of trial and error, and because I didn't really give a monkey's whether the data I was getting out was accurate, I munged and tweaked it until I got something that was fun.
It still wasn't quite right, but at least all the particles were reacting. I decided that physics makes everything better, so I shamelessly reused some of the physics from Photo Particles. For this I added a central "black hole" sphere whose pull would be proportional to the mean of all the frequency buckets. Now each particle would be attracted to the central sphere according to m1 * m2 / radius², where m1 would actually be the pull of the central sphere (not its mass, but whatevs) and m2 would be that particle's frequency bucket value.
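As a rough sketch of that attraction (the variable names are mine, not the demo's): the sphere's pull stands in for m1, the particle's bucket value for m2, and the force falls off with the squared distance.

```js
// Illustrative attraction step for one particle towards the central sphere.
function attract (particle, sphere, pull, bucketValue) {
  var dx = sphere.x - particle.x,
      dy = sphere.y - particle.y,
      dz = sphere.z - particle.z,
      radiusSq = Math.max(dx * dx + dy * dy + dz * dz, 0.001),
      radius = Math.sqrt(radiusSq);

  // force ∝ m1 * m2 / radius², with m1 = pull and m2 = bucket value
  var force = (pull * bucketValue) / radiusSq;

  particle.vx += (dx / radius) * force;
  particle.vy += (dy / radius) * force;
  particle.vz += (dz / radius) * force;
}
```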
Net effect: looks neat, especially when the sphere is moving in a pattern.
Look ma, no engine!
Part of my job at Google is to advocate for developers, particularly with respect to WebGL and other GPU-dependent technologies. There were a few things I really wanted to achieve with this demo:
- A pretty demo
- Reflections / lighting (at least a pseudo lighting effect)
- Direct control via WebGL
- Establish what the workflow is (and should be) for debugging WebGL content
On the pretty side, mainly just a lot of number tweaking and additive blending. It tends to go a long way. I do owe thanks to Brandon Jones for the insight that you can use depthMask to not write to a depth buffer, while at the same time use DEPTH_TEST to read from it. This made the particle effect possible because they would, with that combination, observe the sphere's depth and be occluded by it, but they themselves would not occlude anything, particularly (har har!) each other. This is a common problem with particle effects where the particles are semi-transparent and end up cutting into each other, so it was good to get that cleared up.
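In WebGL terms, the state setup for that trick looks something like the following. Assume gl is the WebGL context and drawParticles is a stand-in for the actual particle draw.

```js
// Particles read the depth buffer (so the sphere occludes them) but never
// write to it (so they never cut into each other).
gl.enable(gl.DEPTH_TEST);
gl.depthMask(false);

// Additive blending for the glowy look.
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE);

drawParticles();

// Re-enable depth writes before drawing opaque geometry again.
gl.depthMask(true);
```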
The reflections and lighting were challenging. What I ended up doing there was alternating inside the requestAnimationFrame callback. On one frame I would render the room, sphere and particles. On the other I would use four cameras, three positioned on the walls and one on the floor, all of which look into the scene. They would each render the particles from their own point of view into a texture. I then used those textures, along with the wall textures, when drawing the room to give the lighting effect.
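The alternation itself is nothing fancy. Something along these lines, where lightCameras, lightFramebuffers and the render calls are stand-ins for the demo's own objects:

```js
var lightFrame = false;

function tick () {
  requestAnimationFrame(tick);

  if (lightFrame) {
    // Render the particles from each wall/floor camera into its own texture.
    lightCameras.forEach(function (camera, i) {
      gl.bindFramebuffer(gl.FRAMEBUFFER, lightFramebuffers[i]);
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
      renderParticles(camera);
    });
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  } else {
    // Render the room, sphere and particles, sampling the light textures
    // when texturing the walls and floor.
    renderScene(mainCamera);
  }

  lightFrame = !lightFrame;
}

requestAnimationFrame(tick);
```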
The frustrating part of WebGL today is that each of those walls is a separate draw call, because you can only draw to one texture at a time. That's four textures, so four separate (read: expensive) draw calls. With Multiple Render Targets, something that has been in OpenGL for a while and has just landed in the OpenGL ES 3.0 spec, one could attach the four textures to a single shader and have it write to them all in one go. After all, the scene itself wasn't changing between draws.
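For the curious, the plumbing would look roughly like this in the OpenGL ES 3.0 / WebGL 2 style of MRT, with the shader then declaring one output per attachment. This wasn't available to the demo, so it's purely a sketch of where things are headed, and the names are placeholders.

```js
// Hypothetical MRT setup: four textures on one framebuffer, one draw.
gl.bindFramebuffer(gl.FRAMEBUFFER, lightFramebuffer);

lightTextures.forEach(function (texture, i) {
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i,
                          gl.TEXTURE_2D, texture, 0);
});

gl.drawBuffers([
  gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1,
  gl.COLOR_ATTACHMENT2, gl.COLOR_ATTACHMENT3
]);

renderParticles(); // one draw call, all four textures written by the shader
```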
Because I knew I wanted to do some quite ambitious stuff with the rendering (at least for me; I'm something of a graphics hobbyist), and because I wanted to test my workflow when dealing with WebGL directly, I decided to shelve the use of any engines. The bottom line with doing it directly is that, yes, you get a ton more control over what happens and when, but you also lose a lot of productivity and gain a lot of frustration. WebGL can be very time-consuming to write well, and sometimes almost impossible to debug. That said, as a control freak I did enjoy directly managing the output, and once I had a (very) basic framework in place some things were very quick to set up.
Finally, from a debugging standpoint, I can't recommend Ben Vanik's WebGL Inspector enough; it saved my bacon on a few occasions. In general I did find debugging the demo challenging, especially when things weren't playing nicely. After all, you can't set a breakpoint in a shader!
I'm pretty pleased with how the demo turned out. There's more I would have liked to do with it, but it just comes down to the law of diminishing returns. I did add the ability to use an audio file for those who don't have Canary, or would rather not sing at their computer. I won't judge you, but it can be liberating warbling in an office full of people trying to concentrate. That one's for free.
Going forward I will be doing more lab work, and I won't be out of the loop for as long this time. I love it at Google, so I plan to make use of my earlier pro tip!