@oelmekki
Created September 3, 2013 08:11

Why progressive enhancement matters

A recent story popped up on Hacker News and informed us that progressive enhancement is dead.

The main point of that article was that it's kind of ridiculous in 2013 to build a whole app keeping in mind that users may deactivate javascript. I couldn't agree more, and neither could most HN commenters.

But this is not the only reason why progressive enhancement matters. And I would like to add that, in 2013, when we have so many apps that heavily rely on javascript, it matters more than ever.

Meet your execution environment

As Tom mentioned, the browser is now more of an execution environment than a document viewer. Well, to me, it means that developers have no control over the execution environment of their code.

With server side code, if it works for you, it works for everyone. With client side code, you'll never know. Does your user have a browser extension that interacts with your page? Is their system stable? Is their network connection stable? You can't ask your users to have an environment as carefully crafted as your servers.

What if an error occurs?

If an error occurs in a callback function, clicking that <a href="#"> link again and again will simply do nothing. If you're a developer, you'll instinctively reload the page. If you're a regular user, you'll get frustrated and yell: "it does not work!".

What this should make us conclude is that the more heavily your app relies on javascript, the better it should be at error handling: you don't want your app to freeze, with no way for the user to get through.

Errors will occur

That's the whole story of locally run apps. You run your code on systems you have no way to be sure are stable, and you can't even test that. That's why native apps often embed error reporting features, often built directly into the OS.

Of course, a javascript runtime is way more predictable than an OS runtime, with all its libs and concurrently running apps. But thinking you could just ignore errors as a minority case is wrong. To realize this, just add something like this to your js codebase:

window.onerror = function( error, url, lineno ){
  $.post( my_exception_url, { error: { message: error, url: url, lineno: lineno, page_url: window.location.href } } );
};

window.onerror is pretty standard in browsers and will even work on older IEs. Here, I use it to post exception data to the server side, so I can integrate it into my server side exception management system. Do it and let it run a while. You'll be surprised.

Progressive enhancement to the rescue

Now, how do we solve this? A common idiom is to detect errors and display a message asking the user to reload the page, often providing a link to do so.
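A minimal sketch of that idiom, kept testable outside a browser (the buildReloadNotice function and its markup are my assumptions, not the article's code):

```javascript
// Build the notice asking the user to reload; escaping is omitted
// for brevity since pageUrl comes from window.location itself.
function buildReloadNotice(pageUrl) {
  return 'Something went wrong. <a href="' + pageUrl + '">Reload the page</a>';
}

// Browser wiring, roughly:
// window.onerror = function () {
//   var notice = document.createElement("div");
//   notice.innerHTML = buildReloadNotice(window.location.href);
//   document.body.appendChild(notice);
// };
```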

That's good, but we can do better.

Many features we build with javascript are just about comfort. For example, if I implement deletion of items in a list through ajax, that's cool, but it could work just as well by following a link.

Now, if instead of an <a href="#" class="delete"> link, I have an <a href="/delete_path" class="delete"> link, it's easy for me to ensure failsafe execution even after an error occurred, by disabling the event callback. The best part is that the user won't even realize something went wrong.
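A sketch of that wrapping, under my own naming (safeHandler is an assumption; window.crashed follows the pattern described later in this article, and "app" just stands in for window so the snippet also runs outside a browser):

```javascript
var app = (typeof window !== "undefined" ? window : globalThis);
app.crashed = false;

function safeHandler(fn) {
  // Wrap an event callback: once an error has been recorded, the
  // wrapper declines to act, so the browser follows the link's
  // real href as a plain request instead.
  return function (event) {
    if (app.crashed) return true; // let default navigation happen
    if (event && event.preventDefault) event.preventDefault();
    return fn.call(this, event);
  };
}

// Browser usage (jQuery-style, matching the article's own snippet):
// $('.delete').on('click', safeHandler(function () { /* ajax delete */ }));
```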

Here, progressive enhancement lets us be certain the feature is usable, no matter how broken the user's execution environment is.

Even if you build a client side feature that can't be emulated on the server side, it's still a good idea to add a fallback through progressive enhancement. You can't do what the user requested, because an error occurred and you can't perform the action on the server side, but:

  1. you can warn the user there was a problem and ask them to perform the action again
  2. your javascript runtime has been reloaded, since you've loaded another page

Graceful degradation is a good thing too

In the glorious past of nojs theories, progressive enhancement came along with the concept of graceful degradation. The idea was that you can use bleeding edge browser features, provided you offer other means to achieve the same goal for browsers that don't support them.

This too can be redefined and modernized, in the context of error handling. The progressive enhancement I mentioned is a good thing for links, but links are visible, whether javascript runs as expected or an error occurred.

What about those forms which hide their submit buttons on DOMready to perform actions through ajax?

You can apply a similar concept: if an error occurs, we show them again. The difference with progressive enhancement is that it's not something you do before the error occurs, but after: you've got to have destructor methods that you call when an error occurs and that revert your interface to a usable state.
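Such a destructor could look like this minimal sketch (AjaxFormController and its method names are my assumptions): setup hides the submit button while ajax handles submission, and the destructor reverts exactly that, leaving a plain working HTML form behind.

```javascript
function AjaxFormController(submitButton) {
  this.button = submitButton;
}

AjaxFormController.prototype.setup = function () {
  // Hide the submit button; ajax takes over submission from here.
  this.button.style.display = "none";
};

AjaxFormController.prototype.destructor = function () {
  // Revert what setup() did: plain form submission works again.
  this.button.style.display = "";
};
```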

Big javascript apps are great and are the future of web development. But they should not leave the user in a frozen state. Progressive enhancement and graceful degradation are the future of that future.

Implementation example

You may wonder: right, but how can I disable callbacks / revert the interface upon error?

I solved this with three patterns:

  1. all features are built as "classes", with a destructor method. A destructor typically reverts what the constructor method did

  2. every event callback is wrapped in a function that checks if window.crashed is set (and does not execute if it is)

  3. the window.onerror callback sets the window.crashed variable and calls destructors on controllers (the aforementioned classes)
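The three patterns together can be sketched like this (the controllers registry and the guard / onGlobalError names are my assumptions; the real implementation lives in the Darwin framework mentioned below; "app" stands in for window so the snippet also runs outside a browser):

```javascript
var app = (typeof window !== "undefined" ? window : globalThis);
app.crashed = false;
var controllers = []; // pattern 1: "classes" with a destructor method

function guard(fn) {
  // Pattern 2: callbacks become no-ops once app.crashed is set.
  return function () {
    if (app.crashed) return;
    return fn.apply(this, arguments);
  };
}

function onGlobalError() {
  // Pattern 3: record the crash and revert the interface by
  // calling every controller's destructor.
  app.crashed = true;
  controllers.forEach(function (controller) { controller.destructor(); });
}

// In the browser: window.onerror = onGlobalError;
```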

If you're interested in concrete implementation and don't mind reading coffeescript, you can see an example in my Darwin framework (actual js code is in app/assets/javascripts/darwin, special files of interest are controller.js and bottom of loader.js).
