In the initial planning phase of the Agents project, we decided to rebuild along the principles of Progressive Enhancement.

## What is Progressive Enhancement?

"Progressive Enhancement" takes the stance that the various aspects of a website - the content, the presentation, and the behaviour is to be developed separately on top of one another in layers with the content of a site as the foundation. Should one of these aspects fail to function, they should fall away and still make it possible for content to be available to users.

There can be many reasons for these failures: broken or outdated browsers, unsupported hardware, disabled or blocked assets, interrupted network connectivity, or just good old-fashioned bugs in code.

In a typical site built on Progressive Enhancement principles, there should be a basic, "un-enhanced" version available as a fallback should the enhanced features fail. A fallback can take several forms and range in sophistication:

  • providing a simple search form for a mapping app when the Geolocation API isn't available (see the sketch after this list)
  • redirecting to a login form page when the enhanced modal popup login fails to open
  • loading a .png file when SVG isn't supported
  • using a default browser font due to lack of @font-face support
  • specifying a standard hex value for a background colour when RGBa isn't supported
  • rendering a plain HTML site if CSS fails to download
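
As an illustration of the first item above, here is a minimal sketch of a geolocation enhancement layered over a plain search form. The helper function is hypothetical and not taken from the Contiki codebase:

```js
// Sketch only: the page ships with a plain "where are you?" search form that
// works on its own; the map view is layered on top when the Geolocation API
// is available.
if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    function (position) {
      // Enhanced path: centre a map on the user's position.
      showMapAt(position.coords.latitude, position.coords.longitude); // hypothetical helper
    },
    function () {
      // Lookup failed or permission denied: the plain form is still there, so do nothing.
    }
  );
}
// If the API is missing, or this script never runs at all, users simply use the form.
```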

rounded buttons => shaded buttons => boxy buttons => buttons

The key word here is support. If a particular browser's support for a feature doesn't cut the mustard, there should be a discussion about fallback options: how the fallback should manifest, and whether the feature is needed for all users in the first place. Regardless of the technologies used, what matters most is that the core content is available to the user.
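
That "cut the mustard" framing can be made literal with a feature-detection gate, in the spirit of the BBC's well-known test. The sketch below is illustrative only; the specific checks and the bundle path are assumptions, not the project's actual test:

```js
// Illustrative "cuts the mustard" gate: only browsers that support a baseline
// of modern APIs download and run the enhancement layer; everyone else gets
// the core HTML experience.
if ('querySelector' in document && 'addEventListener' in window) {
  var script = document.createElement('script');
  script.src = '/assets/enhancements.js'; // hypothetical bundle path
  script.async = true;
  document.head.appendChild(script);
}
// Browsers that fail the test never download the extra JavaScript at all.
```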

The oft-used analogy to describe PE is the "escalator and the stairs". Think of a full-featured website as an escalator: when it fails, it falls back to functioning as a set of stairs. An elevator achieves the same goal as an escalator, but a broken-down elevator is useless.

escalator > staircase, escalator > box in a wall

## The Brief

Contiki pride themselves on the high quality visual designs they use on their sites. As developers, we get to take advantage of as many technologies as possible to achieve that goal. A typical page on the site uses CSS stylesheets, high quality images, extra fonts, SVG, and JavaScript. Even on a fast connection, a full uncached page can take time to download, render and initialise.

We were to refresh and improve the Agents section of contiki.com. Not much maintenance had been performed on that part of the site for a few years, and it was due a spring clean. One of the main requirements, beyond redesigning the UI, was to make it fast.

We would be using before-and-after metrics to track site performance throughout the build, so the speed index for the Agents site needed to be much faster than it currently was. The target audience arguably fit the typical 'power user' persona: these users were not browsing the site at their leisure, they were focused on finding trips for other people as quickly as possible.

The search page was to take the lion's share of project time. Our brief was to deliver a page that loaded much faster than it currently did. We were also tasked with using JavaScript to update the page content with new results, avoiding the speed penalty of a full page refresh.

## The Build

At first, we debated whether to make the search page a fully fledged JavaScript-powered single page app. After all, the "typical" reasons for developing something with PE didn't apply here. We knew, for example, that the entire site would be locked behind an authentication layer, so visibility to search engine indexers and other SEO benefits were not a concern. Furthermore, we had a precise understanding of our user base. Even the briefest examination of the visitor logs showed that the earliest browser we would definitely need to cater for was Internet Explorer 8, and that, without exception, everyone who visited the site had JS enabled.

We had painful prior experience of developing a similar search page twice over for Contiki's main site: first, a JavaScript version that handled the smooth loading of content and interactions; second, a less enhanced version that used traditional HTML forms to achieve the same goals while providing some much-needed search engine crawlability and bookmark sharing. Both versions provided value, but together they required twice the development time.

What swung it for us was that developing with PE meant we would have a fully operational site at all times.

As keen as we were to get started on the JS version of the search page, it would be several sprints before we would have a Single Page App robust enough to provide business value. We had a definite deadline for this project, and the last thing we wanted was for that date to fly past with nothing but half-developed features that couldn't complete basic tasks. We wanted the site's search facility to be up and running as early as possible - and so, HTML first, JS enhancement second.
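
What "HTML first, JS enhancement second" can look like in practice is sketched below. The markup hooks and the use of fetch are assumptions made for brevity (a build catering for IE8 would have used XHR or a library); the point is that the form is ordinary HTML that works on its own, and JavaScript only intercepts it when the browser can handle the enhancement:

```js
// Sketch only: the search form submits and reloads the page by itself.
// When JavaScript is present and working, it intercepts the submit and swaps
// in the new results without a full refresh.
var form = document.querySelector('#trip-search');       // hypothetical form id
var results = document.querySelector('#search-results'); // hypothetical results container

if (form && results && window.fetch && window.URLSearchParams) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    var query = new URLSearchParams(new FormData(form)).toString();
    fetch(form.action + '?' + query, { headers: { 'Accept': 'text/html' } })
      .then(function (response) { return response.text(); })
      .then(function (html) { results.innerHTML = html; })
      .catch(function () { form.submit(); }); // enhancement failed: fall back to a full reload
  });
}
```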

image showing various iterations of the planned search page

The approach of working in several passes across the site, adding enhancements with each pass, was not new to the project. The designers (based 9500 kilometres away in London) were taking the same approach in their designing and prototyping, working iteratively from low fidelity through to high fidelity - first in basic pen-and-paper sketches, then in a wireframing tool. Each stage was followed by a round of user testing, poking at and questioning design decisions to make sure that the basic problems were being solved before too much costly time was spent mocking up polished, higher-fidelity designs.

This approach worked well for us in the Cape Town office as well. We were already developing a lo-fi version of the site using presenter classes in combination with live data pulled directly from the Contiki API (read more about this in Patrick's post on presenter classes). We had been given a reasonable mockup of the site in wireframe format and, confident that we understood the thinking behind the designs, we began to style the site to match the lo-fi wireframes.

Free of the distracting elements of typography, colours and the other trappings of web design, we were able to focus better on the HTML structure and layout of the site. Most of the site - the home and tour pages, contact and login forms, FAQ and other secondary pages - was constructed in a short time. As a result, user testing was now possible against the site on our development server. (Read more about our front end styling process in Gav's post on style guides.)

Because users were interacting with the “real” site, the feedback became much more focussed. Many potential issues with the layout and the information architecture were avoided at a time when it was still cheap to react. This was something that wouldn't have been as simple had we developed a higher fidelity page earlier on. Refactoring or repurposing CSS and JS is always a headache - by only developing the smallest unit of benefit to the user, we were able to adapt to changes and progress far more effectively.

This was already providing serious benefit: all the other aspects of the site had been completely rebuilt in HTML/wireframe form in around three weeks. It made sense to continue this approach with the search page, and within two more sprints we had built a basic version of it using native browser functionality and semantic HTML.

As the project progressed, the designers provided us with a high-fidelity style guide, and we soon finished a first pass of styling the site to match it. We were ready to show it to the Product Owner in our weekly sprint review meeting.

Built on top of an already quick API, the lightweight page was very fast. The JS dependencies we did need - for datepickers and slider widgets - were downloaded asynchronously, with appropriate fallbacks in place to use while they loaded. The page was always completely usable, which gave a much faster perceived load time.
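
A minimal sketch of that pattern follows; the file name and init function are hypothetical, not the project's actual code. The native input keeps working while the widget script is in flight, and keeps working if it never arrives:

```js
// Sketch: the date field is a plain <input> the user can type into straight away.
// The datepicker script is requested asynchronously and only upgrades the field
// once it has actually arrived.
var dateField = document.querySelector('#departure-date'); // hypothetical field

if (dateField) {
  var widgetScript = document.createElement('script');
  widgetScript.src = '/assets/datepicker.js'; // hypothetical widget bundle
  widgetScript.async = true;
  widgetScript.onload = function () {
    window.initDatepicker(dateField); // hypothetical init function exposed by the bundle
  };
  document.head.appendChild(widgetScript);
}
// If the script is slow, blocked, or broken, the plain input keeps working.
```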

One piece of unexpected feedback was that it came as a surprise to the stakeholders that the page was not yet performing the planned AJAX search. The page was rendering so quickly that it wasn't immediately obvious that it was doing a traditional full page reload. This was... interesting news.


The next sprint involved a fairly substantial change of plan. We had originally been due to start reproducing the search page as a JS single page app. But to what benefit? We already had an effective, speedy search page that was hitting all the key requirements of the project. The Project Manager was happy with the progress so far - what was the point of spending more time on something he felt had already been well achieved, when there were so many other priority features we could be working on instead?

In short, we had used Progressive Enhancement techniques to build a site that hit the emergent business requirements of the project far earlier in the timeline than originally planned. The project managers were granted the luxury of stepping back and re-prioritising upcoming features based on what they had learned from using a semi-prototyped site, possibly even introducing new features that hadn't been considered before.

Textbook Lean Startup.

## Summing up

As a principle, working with Progressive Enhancement provided more value than “catering to users with JS turned off”.

It gave us the ability to surface core features quickly, providing the site with a solid foundation, and to pivot on these “early wins” to spend more time on the further enhancements the project wanted. Features due to be worked on after launch, in a “Phase 2”, were brought forward thanks to the PM's increased confidence.

It granted us a level of certainty that as many browsers as possible were always supported. It was reassuring to know that core functionality would definitely work in far more browsers than the typical suite of 20 or so. We knew that we weren't reinventing the wheel for everyone, that we could afford to spend less time testing core features, and that we had reduced the chance of introducing regression errors.

It provided us with a fast feedback cycle for performance testing. Having a “live” environment meant we could run performance audits regularly, both to confirm our choices and to provide metrics to stakeholders. It also made the site faster for the user, by allowing it to be usable as early as possible.

It brought a level of modular design thinking to our process. Developing the site in layers meant that we kept our concerns of content, presentation and behaviour separate. Forcing ourselves to work in this way resulted in a site that can easily adapt to change: if, in future, a feature is no longer relevant, it will be far easier to remove it from the site and replace it with something new.
