#Understanding Cross-browser Scripting

Cross-browser scripting was invented around the turn of the century and is needed more today than ever. Unfortunately, it is also massively misunderstood, both by library developers and their users.

##What Cross-Browser Scripting is Not

Before getting into what cross-browser scripting is, let's look at what it is not. Cross-browser scripting does not imply that scripts will work in every browser and configuration known to man. Certainly a script that works in every conceivable environment would be considered cross-browser, but such an expectation is neither realistic nor a requirement.

Despite marketing claims, popular libraries such as jQuery and Lodash are neither cross-browser nor cross-platform. It's critical to understand that they are multi-browser and multi-platform, working in a handful of environments deemed worthy by their authors at the time of each version release. They use the same failed strategy as the old turn-of-the-century scripts that worked in IE 4/NN 4 (often only in their default configurations) and blew up in virtually every other environment.

Thanks to DOM standardization, today's libraries have little challenge in dealing with new browser releases, but still insist on failing in unpredictable ways in older ones (e.g. IE 8). Though such libraries often use feature detection and testing to determine what to do, the results for the user aren't that much improved from the days of sniffing browser versions.
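
To make the distinction concrete, here is a minimal sketch (the function name is illustrative, not taken from any library) of the difference between sniffing a browser version and detecting the feature that will actually be used:

```js
// Browser sniffing: an indirect inference from the UA string that says
// nothing about whether the needed feature will actually work
var looksModern = !/MSIE [1-8]\./.test( navigator.userAgent );

// Feature detection: a direct inference from the feature itself
function canComputeStyle( doc ) {
    return !!( doc.defaultView && doc.defaultView.getComputedStyle );
}
```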

##User Experience Hierarchy of Needs

With apologies to Maslow, the user experience can be characterized by a hierarchy of needs:

  • User
  • Application (or document enhancements)
  • Framework (widgets and plugins)
  • Libraries
  • Browser

Starting from the library, each level has a go/no go decision to make that depends on the levels below it. For our purposes, we will assume that the browser always goes, and ignore the levels below it (e.g. operating system, hardware, electricity, etc.).

###Libraries

Libraries must rely on the presence and reliability of browser features (e.g. DOM methods) to make their go/no go decision. Unfortunately, multi-browser scripts make no such decisions. They use feature detection to determine which way to implement their interfaces, but ignore cases where the interfaces will be unusable. To make things worse, they periodically drop feature detection entirely based on an increasingly narrow range of "supported" browsers.

Take the jQuery 1.x getStyles function for example:

if ( window.getComputedStyle ) {
	getStyles = function( elem ) {

		// Support: IE<=11+, Firefox<=30+ (#15098, #14150)
		// IE throws on elements created in popups
		// FF meanwhile throws on frame elements through "defaultView.getComputedStyle"
		var view = elem.ownerDocument.defaultView;

		if ( !view || !view.opener ) {
			view = window;
		}

		return view.getComputedStyle( elem );
	};
  
	...

} else if ( documentElement.currentStyle ) {
	getStyles = function( elem ) {
		return elem.currentStyle;
	};

	...

}

This is an internal function and will be defined only in browsers that feature either the document.defaultView.getComputedStyle method (referred to as window.getComputedStyle in the detection) or the currentStyle element property. So far, so good, as getStyles will be undefined in any environment lacking both.

Unfortunately, jQuery methods that rely on getStyles take it for granted that it exists. Take the css method for example:

css: function( name, value ) {
    return access( this, function( elem, name, value ) {
        var styles, len,
            map = {},
            i = 0;

        if ( jQuery.isArray( name ) ) {
            styles = getStyles( elem );
            len = name.length;

            for ( ; i < len; i++ ) {
                map[ name[ i ] ] = jQuery.css( elem, name[ i ], false, styles );
            }

            return map;
        }

        return value !== undefined ?
            jQuery.style( elem, name, value ) :
            jQuery.css( elem, name );
    }, name, value, arguments.length > 1 );
}

In environments where getStyles is unworkable, the css method will throw an exception when an Array object is passed as the first argument. Practically speaking, this matters little, as one would have to go back to the turn of the century to find a browser that lacks both document.defaultView.getComputedStyle and the currentStyle element property.
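
For reference, the failing use case looks like this (the selector is hypothetical, shown only to illustrate the call):

```js
// Passing an array of property names routes through getStyles internally;
// where getStyles was never defined, this line throws an exception
jQuery( '#example' ).css( [ 'width', 'height' ] );
```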

As a side note, the function is clearly running into security issues with regard to frames (or alternate windows) and takes the very odd and ill-advised step of using the window containing jQuery as a substitute for the call to getComputedStyle. The unrelated "support" comment at the top of getStyles:

Support: IE<=11+, Firefox<=30+

...seems to indicate that the related code will never be removed, but certainly that's a misprint. More likely it means that once IE 11 is deemed unworthy in a future jQuery version (Firefox 30 is already unsupported), the attempt at a workaround will be removed.

Such observations are sprinkled throughout jQuery to be sniffed out by the developers in the future. Of course, they are no proof that IE 11- and FF 30- are the only browsers with such security measures. The indirect inferences made from these notes to remove feature detection based on browser version numbers are quite similar to those made by the browser sniffing scripts of old.

Now consider the getStyles function in jQuery 3.x:

var getStyles = function( elem ) {

    // Support: IE <=11 only, Firefox <=30 (#15098, #14150)
    // IE throws on elements created in popups
    // FF meanwhile throws on frame elements through "defaultView.getComputedStyle"
    var view = elem.ownerDocument.defaultView;

    if ( !view || !view.opener ) {
        view = window;
    }

    return view.getComputedStyle( elem );
};

The feature detection was dropped because jQuery 3.x (as well as 2.x) claims to support only IE 9+, and IE 9 has document.defaultView.getComputedStyle. Now IE 8, which was still in use at the time of the 2.x release and is still in use today (e.g. in banks, hospitals and other slow-to-upgrade corporate settings), will throw an exception for the use case described above. How did the library signal this shift? By bumping its version number.

They did fix the "support" misprint, though, so expect the attempted workaround to vanish as soon as IE 11 is deemed irrelevant by the jQuery developers.

###Framework

How is a framework (e.g. widgets and plugins) that relies on the described use case of the css method supposed to make its go/no go decision? By sniffing out the library version number? Clearly such a strategy would suffer many of the same drawbacks as browser sniffing. As there are hundreds of methods using similar strategies in jQuery, a framework relying on the library version number to determine the usability of each would be making a lot of indirect inferences and would require constant maintenance to keep up with library releases.
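
For illustration, such a sniff might look like this (jQuery.fn.jquery is the library's version string; the inference drawn from it is the problem):

```js
// An indirect inference: the version number hints at which internal
// forks are guarded, but says nothing about whether the css use case
// will actually work in the current environment
if ( parseFloat( jQuery.fn.jquery ) < 2 ) {
    // assume the legacy currentStyle fork is still in place... maybe
}
```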

As the framework has no feasible way to know which library methods will work, it has no choice but to create all of its interfaces and let the chips fall as they may.

###Application

How is the application that relies on framework widgets to know whether they are usable? Clearly there's no way at all as the library two floors down has blown any chance of the framework communicating such critical information to the user.

An application has a duty to let users know from the outset whether it is going to work from start to finish. Some applications may simply blow up immediately in unsupported environments, but more often they present some semblance of an interface and leave it to the user to explore until they hit a dead end. This is why many applications (and enhanced documents) on the public Web come with a disclaimer that they may not work in their entirety when sniffed-out browser versions are deemed insufficient. Back to the egg. :(

###User

The user would likely prefer not to rely on vague warnings derived from browser sniffing, but that's all they get when multi-browser scripts form the foundation of applications.

A strategy that straightens all of this out was first introduced in My Library and is also employed by the Jessie function repository: the use of a dynamic API.

##The Dynamic API

Let's look at how the previous example could have been better served by the use of a dynamic API. As noted, it started out well enough in jQuery 1.x with a getStyles function that was left undefined in browsers lacking both of the required browser features. It went off the rails with the unconditional creation of the css method. So clearly the first step is to avoid creating the css method when there is no getStyles function for it to use.
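
A minimal sketch of that first step (this is not jQuery's actual source, just the general shape of a dynamic API):

```js
// The public method is created only when its internal dependency exists,
// so its absence becomes a direct, testable signal for the level above
if ( getStyles ) {
    jQuery.fn.css = function( name, value ) {
        // ... implementation that relies on getStyles ...
    };
}
```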

Again, this is just one example, and the same strategy would have to be applied to all such library methods. Also, ignore the fact that getStyles is required by just one use case, as that's due to another poor design decision that is beyond the scope of this document.

Now the frameworks that rely on the css method (or other such methods) can make an informed go/no go decision based on a direct test. Either the required methods are there or they are not. All of the feature detection and testing that determines whether they are present is hidden away in the library (as opposed to being exposed in fairly useless "support" flags).
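
For example, a hypothetical framework widget (the name is made up) might guard its own creation like this:

```js
// Create the widget only when the library methods it depends on exist
if ( window.jQuery && jQuery.fn.css && jQuery.fn.on ) {
    jQuery.fn.colorPanel = function( options ) {
        // ... widget implementation built on css and on ...
        return this;
    };
}
```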

Each widget or plugin that makes up the framework can make an informed decision on whether to create its interfaces, and all of the details behind the decision are hidden from the application. The application needs only to check that the required framework components (e.g. constructors) exist, and all of the library dependencies that determine their existence are hidden away in the framework.

The application tells the user whether it will work by presenting all, parts or none of its interfaces. In the case where none of it will work, it presents the usual line about not working in their chosen browser (preferably without an insulting tone). The difference when using a dynamic API is that the application is certain of its viability based on direct inferences and can therefore abstain from presenting interfaces that will lead to user frustration.
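
Continuing the hypothetical example, the application's check is just as direct (the element IDs are made up):

```js
if ( window.jQuery && jQuery.fn.colorPanel ) {
    jQuery( '#settings' ).colorPanel();
} else if ( document.getElementById( 'settings-notice' ) ) {
    // Leave the unenhanced document in place and say so up front,
    // rather than presenting an interface that will dead-end later
    document.getElementById( 'settings-notice' ).style.display = 'block';
}
```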

##Moving Forward

As noted, these are not new concepts. Most library developers I've talked to over the years claim to understand the benefits of using a dynamic API, but it seems that none ever adopt one. If they are worried that users of their libraries will misunderstand the benefits and not want to detect library features, then the explanation is simple.

###Library Users Don't Need to Know What's Good for Them

If library users don't want to detect required library features - which is a hell of a lot simpler than detecting all of the required DOM features and far more reliable than trying to sniff out browser versions - then they can simply abstain from doing so. The libraries are going to blow up in unsupported browsers anyway.

The difference is in how they blow up. When using a dynamic API, the exception would be along the lines of:

Uncaught TypeError: jQuery.css is not a function(…)

Simple and to the point, the exception indicates exactly where the library user shirked their duty. Virtually any developer can see what that means and, if inclined, add the library feature detection to their framework component or application.

On the other hand, if not using a dynamic API, exceptions in unsupported environments emanate from deep within the library. In the given example using jQuery 1.x, they would look something like this:

Uncaught TypeError: getStyles is not a function(…)

And nobody outside of jQuery has likely ever heard of getStyles. Additionally, jQuery is often obfuscated in a usually pointless attempt to make it seem smaller (see HTTP compression), in which case the exception would look more like this:

Uncaught TypeError: _g is not a function(…)

If using jQuery 2.x or 3.x in IE 8- (or compatibility views), the message would be just as perplexing:

Object doesn't support this property or method

What would the inevitable StackOverflow "solution" be for such mystery messages? Downgrade jQuery. But in the case of jQuery 1.x, there's nowhere to go, which underscores the point that the design was inappropriate from the beginning. It should be clear that it never made logical sense to conditionally create getStyles and then pretend that it will always exist, nor does it make sense to create it unconditionally and pretend that document.defaultView.getComputedStyle is a given.

###What about Polyfills?

The polyfill pattern is one of the worst ideas for Web development this side of the CSS reset. Polyfills modify globally available objects in a ham-fisted attempt to make a few more browsers appear worthy to multi-browser libraries. Library users abdicate responsibility for browser issues to library developers, who in turn abdicate responsibility for a subset of older browsers to polyfill developers.

In most cases, polyfills add unneeded bloat and complexity to "stacks" of scripts that are typically too bloated and complex to begin with. Furthermore, the ones that try to augment host objects are repeating another fundamental mistake that dates back decades. And besides, unless they manage to prop up every required feature in every browser and configuration, their introduction will not result in cross-browser applications.
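
For reference, the typical shape is something like this (the example conditionally augments a host object's prototype, which is the practice referred to above):

```js
// A common polyfill shape: adding a method to a host object's prototype
// so that calling code can pretend the standard method was always there
if ( window.Element && !Element.prototype.matches ) {
    Element.prototype.matches = Element.prototype.msMatchesSelector ||
        Element.prototype.webkitMatchesSelector;
}
```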

###On Library Versioning

On a related note, libraries should never use their major version numbers to imply browser support limitations. Legacy feature detection and testing forks should be filtered out by the build configuration (as in Jessie), not dropped by the developers when releasing new versions (and unless a DOM feature has been universally implemented and stable for several years, it should not be left completely unguarded). Major version numbers should only imply changes in behavior in whatever browsers are capable of running the scripts.
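
As a rough sketch of that idea (the flag is hypothetical and not Jessie's actual configuration), a build-time constant can decide whether a legacy fork ships at all, while whatever remains stays guarded and detectable:

```js
// BUILD_CURRENT_STYLE is a hypothetical constant substituted at build
// time; a minifier with dead-code removal drops the branch entirely
// when it is false, without anyone editing the source on a version bump
var getStyles;

if ( window.getComputedStyle ) {
    getStyles = function( elem ) {
        return elem.ownerDocument.defaultView.getComputedStyle( elem );
    };
} else if ( BUILD_CURRENT_STYLE && document.documentElement.currentStyle ) {
    getStyles = function( elem ) {
        return elem.currentStyle;
    };
}

// Whatever ends up in the build, the code above this layer still tests
// getStyles itself, never a version number
```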

The strategy of maintaining multiple jQuery versions, each with its own range of supported browsers, is not only unworkable for cross-browser scripting, but also creates additional maintenance headaches for its developers. When a lapse in logic is detected in jQuery 3.x, somebody may have to go back and patch 2.x and 1.x as well. And as that isn't likely to happen every time, it creates a sort of DLL Hell for the Web.

##Conclusion

As a result of such poor designs and gaps in logic, the Web is an absolute mess that excludes and confounds visitors on the whims of library developers. Ironically, libraries are often credited with moving the Web forward, but it won't budge until library developers stop pretending and start making use of cross-browser strategies that work. ;)
