It should "just work"

The browser is a black box, yet we often have to optimize for implementation specifics. From vendor prefixes, to just plain weird code, developers all have to perform coding gymnastics to get their projects to play nicely on the myriad of browsers we have to support. Is that fair?

  • Last updated: 26 Jun 2013
  • Est. Read Time: 7 mins
  • Tagged: #opinion

Starter for Ten

I’ve had a couple of conversations on Twitter and G+ in the past couple of days that have got me thinking. The first was with Bruce Lawson, advocate at large for Opera:

My response was terse, only because Twitter is limited to 140 characters, and I can rarely condense my thoughts well enough:

I even went so far as to include an analogy, which means I’ve probably lost my marbles:

But I actually think Bruce is right, to some degree. Why on earth should I care about how the browser implements the specifications? Surely the whole point of offering me the abstractions of HTML, CSS, JavaScript and APIs is exactly so that I can live without having to build this stuff myself? I’d do a miserable job of most of it at any rate.

Then there was the canvas thing…

Recently, I’ve been building a little mobile web app at work (more on that in a future post, I suspect) and I ran into a performance bottleneck with the <canvas> element. I’m in an extremely privileged position of being able to wander over to Chrome engineers and ask what’s up. Having spent a bit of time looking at a trace (which is scary but badass for profiling) the solution arrived: “Oh, the canvas is smaller than 256 pixels. Yeah, that’ll be using the software path.” A quick hack later and performance was roughly 10x better, back under my self-imposed 16ms frame budget. I shared this snippet of info and Ricardo Cabello called it:
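To illustrate the kind of hack involved, here’s a minimal sketch. Assume, purely hypothetically, that the accelerated path kicks in at 256px: you could pad the canvas backing store up to that size while remembering the dimensions you actually intend to draw at. The threshold constant and helper name are my own; the real cutoff is an unspecified implementation detail that could change in any release.

```javascript
// Hypothetical threshold; the real software/hardware cutoff is an
// implementation detail and may change between browser versions.
var MIN_GPU_SIZE = 256;

// Pad the canvas backing store up to the assumed threshold, returning
// the size we actually intend to draw at so callers can clip to it.
function padCanvasForGpu(canvas) {
  var drawWidth = canvas.width;
  var drawHeight = canvas.height;
  canvas.width = Math.max(drawWidth, MIN_GPU_SIZE);
  canvas.height = Math.max(drawHeight, MIN_GPU_SIZE);
  return { drawWidth: drawWidth, drawHeight: drawHeight };
}
```

In a real page you’d then confine your drawing to the returned `drawWidth`/`drawHeight` region — which is exactly the sort of contortion this post is complaining about.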

The implicit objection seems to be: don’t offer an abstraction if it’s not optimal for my use-case.


I kind of see it vs. I kind of don’t

Since those two remarks cropped up, both from very experienced developers for whom I have plenty of time, I’ve been pondering it a little. What should our expectations be? How does this play out in a cross-browser web?

Firstly, I believe we should expect compliance with published specifications. That’s a pretty straightforward one, especially nowadays. But I do not believe we should impose expectations on how specifications are implemented. Of course we all want the fastest possible performance – and certainly healthy competition between browser vendors should help to push that – but as Alex Russell said to me recently: “the browser has the right to deprioritise your code at any point.” Because your code isn’t the only thing the browser has going on at any given time, it has to deal with things as gracefully as possible, which sometimes (sadly) means a less-than-optimal experience in some cases.

The clear solution here would be to specify the implementation, right? Then we’d all know where we stand. Except the reason implementation isn’t specified is that it just makes no sense: no two browsers are alike, so how would you even do that? It would also preclude the alternative approaches that drive innovation. Like I said, a general-purpose engine has heuristics for how it executes code that go beyond what we’d like it to do. And if the end result is correct and on-spec, that is, I think, the most we should expect.

Sort of.

On the other side, I don’t see why anyone should have to memorise a magic pixel value to get a hardware-accelerated canvas, especially when it’s what they want and it’s achievable under certain circumstances. That’s frustrating for anyone, and a value like 256px could change at any time, should the engineer(s) responsible decide to change it.

Also, a magic value like that sounds an awful lot like a rule, and not at all like profiling your application, which should ring an alarm bell for every developer. I should really do a presentation on tools, not rules; that would totally help here.

The interesting aside in this case is that there may be times when a hardware-accelerated canvas in Chrome performs worse than its software counterpart, so, to repeat myself ad nauseam: always profile your code.
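To make “profile your code” concrete, here’s a minimal sketch of measuring frame times rather than trusting a rule of thumb. The 16ms budget comes from targeting 60fps (1000ms / 60 ≈ 16.7ms per frame); the stats helper is my own illustration, not a standard API.

```javascript
// Targeting 60fps leaves a budget of roughly 1000 / 60 ≈ 16.7ms per frame.
var FRAME_BUDGET_MS = 1000 / 60;

// Given a series of frame timestamps (e.g. collected from
// requestAnimationFrame callbacks), summarise how the app performed
// and how often it blew the frame budget.
function frameStats(timestamps) {
  var durations = [];
  for (var i = 1; i < timestamps.length; i++) {
    durations.push(timestamps[i] - timestamps[i - 1]);
  }
  var overBudget = durations.filter(function (d) {
    return d > FRAME_BUDGET_MS;
  }).length;
  var total = durations.reduce(function (a, b) { return a + b; }, 0);
  return {
    frames: durations.length,
    averageMs: durations.length ? total / durations.length : 0,
    overBudget: overBudget
  };
}
```

In a real page you’d push the timestamp passed to each `requestAnimationFrame` callback into an array during a representative run, then inspect the stats afterwards — and cross-check against the browser’s own profiler, which is where tools beat rules.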

It Works vs. It’s Awesome

The genius of the web is that everyone can create and publish content. If you have access to a computer the chances are you can, in the simplest case, use a text editor to create a file and host it somewhere on the Internet easily. Of course that’s the thinnest end of the wedge and, as our industry is maturing, we’re getting more advanced tooling, more capable browsers and, in turn, more advanced sites and applications.


We are also facing new challenges (to us at any rate; some are quite old in other industries) that come with those capabilities, particularly with respect to performance. We have a black box of functionality in the browser, one that abstracts us from the dirty details, and yet here we are trying to get things downloaded and running at 60fps on a host of devices and browsers.

It’s a challenge for browser vendors to implement abstractions well; it’s a challenge for developers to use those abstractions well.

Good Code vs. Targeted Code

Ultimately it’s a compromise. We should expect to write good, clean code that runs well, without any browser-specific tweaks.

But I genuinely believe that if we want to get the very best out of the web platform, to move from good to great and get the very best performance, we will need to understand as much as we can about how browsers execute our code. And sometimes, perhaps frustratingly, that’s going to mean catering for specific implementations.