Having had some of the first words on "modern" host object detection and testing, feel like it's time to try to issue a final word. At least I hope it is the last word as I've seen a lot of misinformation spread over the last several years. The Web is great for that. :)
The original observations and concepts came about from discussions on comp.lang.javascript (CLJ) and were written up by Peter Michaux almost a decade ago.
The first rule to remember is that - with regard to detection - we don't know anything about host objects. How are they implemented? Why do they behave as they do? We can never really know as - unlike the native and built-in objects of javascript (JS) implementations - there are no ECMA specifications for host objects. They are described in those documents as implementation-dependent. No other standard information is available and it is often difficult to find any reliable information about their internal machinations.
History is all we have to go by and history says that for the first twenty years browser developers used `typeof` results to give clues about whether host objects exist and how they are to be used (there's since been one major exception, which I'll get to later on). For example, a `typeof` operation on some properties of ActiveX objects (as seen in IE) results in `unknown`. It was only speculated at the time the above article was published, but later confirmed, that this was a reference to the base ActiveX interface known as `IUnknown`. Without delving into the ActiveX standard, that tells us little, but history tells us this about these properties:
- They are object references.
- They throw exceptions on any reference other than a call or `typeof` operation (illustrated below).
- They've been responsible for a lot of confusion and resulting myths.
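To make the second point concrete, here is a minimal illustration, assuming an older IE with ActiveX enabled (the ProgID is just a representative example):

```js
// Older IE only; assumes ActiveX is enabled by the user/administrator.
var xhr = new ActiveXObject('Msxml2.XMLHTTP');

typeof xhr.open;            // 'unknown' in such environments
xhr.open('GET', '/', true); // calling the property works

// But virtually any other reference throws, e.g.:
// var f = xhr.open; // throws
// xhr.open + '';    // throws
```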
The second one is critical to understanding the difference between the `isHostMethod` and `isHostObjectProperty` functions, variations of which can be found in the above-referenced article, My Library and Jessie (among other libraries*). It comes down to what we plan to do with the object once it has been confirmed to exist.

If our code will call the object then we must use `isHostMethod`. On the other hand, if it will reference the object in another way then we must not use `isHostMethod`. Simple enough as `isHostMethod` allows for the `unknown` type (as well as `object` and `function`). In contrast, the `isHostObjectProperty` function does not allow for the `unknown` type as host objects of that type will throw an exception when used as anything but a method.
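For reference, here is a sketch consistent with that description; the exact renditions vary between Michaux's article, My Library and Jessie:

```js
// For properties that will be *called*: allows 'function', truthy
// 'object' and the 'unknown' type reported for some ActiveX properties.
function isHostMethod(o, p) {
  var t = typeof o[p];
  return t == 'function' || (!!(t == 'object' && o[p])) || t == 'unknown';
}

// For properties that will be referenced in any *other* way:
// deliberately excludes 'unknown' as such properties throw on any
// use other than a call.
function isHostObjectProperty(o, p) {
  var t = typeof o[p];
  return !!((t == 'object' && o[p]) || t == 'function');
}
```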
Though this logic is based on the ActiveX standard, there have never been any guarantees that it will work 100% of the time in ECMAScript host environments (e.g. browsers). What can we do if we stumble across a host object with an unexpected `typeof` result? That's what *feature testing* is for. In a perfect world we'd feature test every host object, but that's no more practical than detecting every host object. Eventually, features become standards and are implemented in every browser that could reasonably be expected to be encountered in the wild (e.g. `document.getElementById`), so we use feature detection and testing selectively and with only history as our guide.
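As an example of testing (as opposed to mere detection), here is a one-off sketch that checks a known quirk of older IE versions, which matched NAME attributes with `document.getElementById`; it assumes the BODY has been parsed and reuses the sketches above:

```js
var gebiReliable = (function(doc) {
  // Detect first.
  if (!isHostMethod(doc, 'getElementById') ||
      !isHostMethod(doc, 'createElement') ||
      !isHostObjectProperty(doc, 'body')) {
    return false;
  }

  // Then test: a conforming implementation will not match this
  // element by its NAME attribute.
  var name = 'x' + new Date().getTime();
  var el = doc.createElement('a');
  el.name = name;
  doc.body.appendChild(el);
  var matched = doc.getElementById(name);
  doc.body.removeChild(el);
  return !matched;
})(document);
```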
A recent example that has led to a lot of backsliding and ill-advised revisionism is `window.ActiveXObject`, which (as of IE11) has an `undefined` `typeof` result. One thoroughly regressive suggestion has been to stop using `typeof` operations and switch to `in` as a "better" way to detect host objects. For a few different reasons, this is complete nonsense; however, examples can be found all over oft-referenced wikis like MDN and on that ministry of JS misinformation called StackOverflow. The latter often contains examples that appear to "work" for a particular time and context, which are voted on to determine the "best" solutions and then pasted all over the Web as if they were gospel. Should go without saying that most of the voters on that site are there seeking answers. But I digress.
The `in` operator doesn't tell us anything about the property, other than that it exists either on the object itself or on its prototype chain (if it has one). It has been wrongly speculated that it is somehow "safer" than using `typeof`, based on conflating host objects with built-in and native objects and referencing the ECMAScript specifications to infer that the `unknown` exceptions are due to the internal `[[Get]]` call on `typeof` operations. Recall that those specifications say nothing helpful about host objects.
Again, it comes down to what our code will do with the host object once it has been inferred to exist. Assume for a moment that the ECMA standards apply. What exactly are we going to do with a detected object that will not result in a `[[Get]]` call?
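A contrived *native* example (an ES5 accessor) shows why this buys nothing: `in` avoids [[Get]], but virtually any subsequent use of the property performs one anyway:

```js
// Native object whose [[Get]] for 'p' always throws:
var o = {
  get p() { throw new Error('[[Get]]'); }
};

'p' in o;       // true - no [[Get]] performed
// typeof o.p;  // throws - the accessor runs
// var v = o.p; // throws - so does any actual use
```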
For the `window.ActiveXObject` example, we may be able to tell that the property exists with an `in` operation. But MS could just as easily have created a `null` property as opposed to an object reference with an `undefined` (or "mickeymouse" or "billgates" or whatever) `typeof` result. This was speculated on during the aforementioned CLJ discussions and the fact that the latter case eventually came true is irrelevant as we must feature test this object in any event. Again, we've found it, now what are we going to do with it?
There are three ways to feature test `window.ActiveXObject`. The most common is to try to construct an ActiveX object inside of a `try-catch` statement. Exception handling is required as the constructor throws an exception when called in a browser where ActiveX has been disabled by the user or an administrator (again, history is our only guide). Have heard the non-argument that using `try-catch` for detection represents a performance issue and so we should regress by at least a decade and use the inadequate `in` operator to detect this object. Again, what exactly are we going to *do* with the constructor after the initial one-off detection?
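A typical rendition of the `try-catch` approach; the ProgID is a representative example only:

```js
// One-off detection: try to construct a representative ActiveX class.
// Returns a factory on success, null when the class is unsupported or
// ActiveX has been disabled by the user or an administrator.
function activeXFactory(progId) {
  try {
    new ActiveXObject(progId); // throws if unusable
  } catch (e) {
    return null;
  }
  return function() {
    return new ActiveXObject(progId);
  };
}

// Usage:
// var createXhr = activeXFactory('Msxml2.XMLHTTP');
// if (createXhr) { var xhr = createXhr(); }
```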
If truly worried about performance in a one-off operation, Michael Haufe has pointed out that there are two other ways to detect support for a specific ActiveX class (feature detection and testing are always best when most specific). One is by using `document.createElement` to construct an `OBJECT` element, setting its `CLASSID` attribute and detecting representative methods. Similarly, we can simply include a representative `OBJECT` element in our markup and detect its methods. Once again, history is the only guide.
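A sketch of the first alternative; the class ID and representative method name are placeholders, and some controls may not be instantiated until the element is inserted into the document:

```js
// Detect a specific ActiveX class via an OBJECT element, reusing the
// isHostMethod sketch from above.
function objectElementSupports(classId, method) {
  var el = document.createElement('object');
  el.setAttribute('classid', classId);
  // Some controls require the element to be in the document tree:
  document.body.appendChild(el);
  var result = isHostMethod(el, method);
  document.body.removeChild(el);
  return result;
}
```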
Proponents of using the obviously insufficient `in` operation for host object detection have mentioned at least one other object in at least one other browser that fails the `isHostMethod`/`isHostObjectProperty` test (as if that negates any of the above). Don't recall the details, but clearly we'll know one when we stumble across it as it will return some unexpected `typeof` result. In an imperfect world, we can't feature test every known host object, but history (and common sense) guide us in what we choose to test and what we leave to less specific detection.
Common sense says we test any host object that is not yet standardized (e.g. in draft specifications) or that is less than universally implemented in browsers we can reasonably expect to host our scripts. As discussed further in the above-referenced article, bad assumptions about host objects lead to breakdowns in progressive enhancement, which can lead to scripts going down unexpected paths and leaving documents in an unexpected (and potentially unusable) state.
Reliable progressive enhancement is required for writing cross-browser scripts. We choose which features to detect and/or test based on history and we vary our schemes based on specific contexts. Which browsers and configurations are expected to work? What non-browser environments (if any) are to be supported? The best tests are those that are as specific as possible. Inferences should have as direct a relationship as possible to the problem being solved. For this reason, we should never make inferences based on generic flags or functions provided by shared libraries (or copied from "best" answers on StackOverflow). It only serves to obfuscate when critical detection and testing logic is separated from the code it serves to protect. Furthermore, crowd-sourced solutions are always subject to change.
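As one illustration of keeping detection next to the code it protects (a sketch, assuming the `isHostMethod` rendition above):

```js
// Load-time forking: the wrapper exists only when the required host
// methods are inferred to be callable; calling code tests the wrapper.
var addListener = (function() {
  if (isHostMethod(document, 'addEventListener')) {
    return function(el, type, fn) {
      el.addEventListener(type, fn, false);
    };
  }
  if (isHostMethod(document, 'attachEvent')) {
    return function(el, type, fn) {
      el.attachEvent('on' + type, fn);
    };
  }
  return null; // no usable inference; callers degrade gracefully
})();

// if (addListener) { addListener(link, 'click', enhance); }
```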
As Richard Cornford noted around the time of the related CLJ discussions: a repository of alternate (context-based) function renditions is an ideal way to organize, reuse and share cross-browser functions. This is in contrast to server-side environments like NodeJS, where a simple module loader is generally sufficient. Unfortunately, the popular trend is to try to support both types of environments using common script loaders (e.g. RequireJS) to provide modules dynamically. This is often referred to as cross-platform (or cross-environment) scripting and the strategy is simply not conducive to cross-browser scripting. Many such scripts (e.g. jQuery) are misleadingly marketed as "cross-browser".
Many of today's "popular" libraries and frameworks maintain multiple versions to support various ranges of browsers, environments and configurations. Use the "wrong" version and our script simply blows up on the user, who is then blamed for choosing the "wrong" browser. As there is never enough time to keep track of exactly what will happen when such code goes down a wrong path, developers simply tack on a vague warning (often based on UA sniffing), a technique determined to be misguided and unsupportable back around the turn of the century.
Web development is truly stuck in an endless loop. In an age where browsers stream updates, it's never been clearer that library developers need to wake up and help break us out of it. Am writing this on a MacBook Pro from around six years ago and both Chrome and Firefox implore me to upgrade (and then deny they can do it). Furthermore, every other site on the Web issues similar exhortations (undoubtedly based on sniffing UA strings) and breaks down in undocumented ways for no good reason. This is highly ironic as many sites exist only to take money off visitors (directly or through lead generation); apparently many developers of such sites feel the need to heed a "higher" calling to try to "move the Web forward". It's pure folly and delusion and often simply futile.
A not insignificant segment of the world's users would kill to use such a relatively modern PC and many would prefer to use newer browsers if they could; others are simply oblivious to the details and just want to send us money. Some are in emerging nations, others stuck in government jobs or in industries where upgrades lag well behind the norm. Not supporting these users is one thing, but insulting, misleading and serving them unpredictable and unusable documents goes beyond the pale. There's no good reason for it as the knowledge required has been easily found for almost a decade (and much has been available since the turn of the century). If not convinced, go back to the top and read this document again. Will find that a more productive loop. ;)
* Be aware that most other libraries botch the implementation by combining what should be a minimum of two functions (three in Michaux's article) into one.