@stuartsan
Created December 13, 2016 00:57

A/B testing in single-page apps

I'm curious if anyone has experience implementing A/B testing in the context of large single-page apps. Some thoughts and questions:

  • There are a handful of well-known A/B testing products, like Optimizely, VWO, etc., that seem to me to be a reasonable fit for relatively static web pages. Generally they let you (or non-developers) define a code variation in a WYSIWYG editor, whose output is a JS snippet that modifies the DOM to match the desired visual changes. You then load their third-party JS library into your app, and at runtime that library injects the variant snippets, modifying your DOM according to which variation a given user should see. When a conversion happens, you send an event to their servers, and they collect the experiment data and eventually present it to you.

  • In the context of SPAs, that sounds not so great to me. Often some library, e.g. React, is managing the DOM and assuming that you don't horse around with it. Then here you have injected, evaled scripts that horse around with the DOM continually. You could probably hack around this, but it seems like a real tire fire to manage, IMO.

  • There are ways to use these third-party products that make more sense with an SPA. For example, define the code variations in your code base, but use Optimizely or whatever to define the experiments and bucket your users, then show each user a variant based on that assignment. In other words, decouple the product from the definition of the variation code itself, to the extent that's possible. But it still entails a frustrating out-of-band workflow, especially in a large codebase with multiple developers and standard review, deployment, and QA procedures: some code controlling your UI lives entirely outside version control in a third-party system; there are security implications (anyone with access to the third-party product could inject arbitrary code into your app); and how do you test/QA the variations?

  • I'm curious whether people have used third-party or in-house systems to implement A/B testing in large single-page apps, and how well it worked. The specifics I wonder about: How do you define code variations? What system chooses which variation to show? How do you manage a bunch of experiments at once, and clean up after experiments that are done? If you deliver your JS as a bundle, what are the implications (e.g., do you put all the variations in the bundle, or not)? Does your implementation impact testing/deployment/QA?
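One in-house answer to "what system chooses which variation to show" is deterministic, client-side bucketing. This is only a minimal sketch of that idea, not any vendor's actual implementation; the `Experiment` shape, `assignVariant`, and the FNV-1a hash choice are all my own assumptions here:

```typescript
// Hypothetical sketch of deterministic bucketing, assuming you only need
// stable assignment (same user + experiment always yields the same variant)
// and not weighted traffic allocation.

interface Experiment {
  id: string;
  variants: string[]; // e.g. ["control", "treatment"]
}

// FNV-1a hash: cheap, dependency-free, and stable across sessions,
// so no server round-trip or stored assignment is needed.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

function assignVariant(userId: string, experiment: Experiment): string {
  // Hash the (experiment, user) pair so one user can land in different
  // buckets across different experiments.
  const bucket = fnv1a(`${experiment.id}:${userId}`) % experiment.variants.length;
  return experiment.variants[bucket];
}

const checkoutCopy: Experiment = {
  id: "checkout-copy-v2",
  variants: ["control", "treatment"],
};
assignVariant("user-123", checkoutCopy); // stable for this user + experiment
```

Because assignment is a pure function of user id and experiment id, the same logic can run on the server for SSR or in QA scripts, which sidesteps some of the out-of-band workflow problems above.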
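On the "all variations in the bundle" question, one pattern is a registry keyed by experiment id, so every variant ships in the bundle and cleanup means deleting one entry and inlining the winner. Again a sketch under my own assumptions: `experimentRegistry`, `getAssignedVariant`, and `renderExperiment` are hypothetical names, and the renderers are plain functions standing in for React components:

```typescript
// Hypothetical sketch: variants live in version control, in the bundle,
// keyed by experiment id. The registry is the single place to audit for
// dead experiments when it's time to clean up.

type VariantRenderer = () => string; // stand-in for a React component

const experimentRegistry: Record<string, Record<string, VariantRenderer>> = {
  "checkout-copy-v2": {
    control: () => "Buy now",
    treatment: () => "Complete your purchase",
  },
};

// In a real app this would call whatever bucketing service you use
// (in-house or a vendor SDK); stubbed here to always return control.
function getAssignedVariant(experimentId: string): string {
  return "control";
}

function renderExperiment(experimentId: string): string {
  const variants = experimentRegistry[experimentId];
  // Unknown or already-cleaned-up experiment: render nothing special,
  // so stale assignments can't break the UI.
  if (!variants) return "";
  const assigned = getAssignedVariant(experimentId);
  // Fall back to control if the assignment names a removed variant.
  const render = variants[assigned] ?? variants["control"];
  return render();
}

renderExperiment("checkout-copy-v2"); // → "Buy now" with the stubbed assignment
```

The cost is that losing variants ship to everyone until cleanup; the benefit is that variations go through the same review, deployment, and QA pipeline as the rest of the code.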
