Last active Aug 29, 2015
Continuous Integration & Delivery pt.3

Testing Analytics

Last week we discussed setting up an integration testing server that we can post to in order to kick off a suite of tests. Now that we are storing all of our suite runs and individual tests in a Postgres database, we can do some interesting things like track trends over time. At Bleacher Report we like to use a tool named Librato to store our metrics, create sweet graphs, and display pretty dashboards. One of the metrics that we record on every test run is our PageSpeed Insights score.

PageSpeed Insights

If you haven't checked it out, PageSpeed Insights is a tool provided by Google Developers that analyzes your web/mobile page and gives you an overall rating. You can use the website to get a score manually, but we hooked into their API in order to get our score on each page visit and submit it to Librato. Each staging environment is recorded separately, so that if any of them return measurements that are off we can attribute it to a server issue.


Any server that shows an extremely high rating is probably only loading a 500 error page, and one that shows an extremely low rating is probably running some new, untested JS/CSS code on that server.

Example of how we submit a metric using cukebot:


require_relative 'lib/pagespeed'

Given(/^I navigate to "(.*?)"$/) do |path|
  visit path
  pagespeed =
  ps = pagespeed.get_results
  score = ps["score"]
  puts "Page Speed Score is: #{score}"
  # Derive a metric name from the host, e.g. "" => "staging1_speed"
  metric = host.gsub(/http\:\/\//i, "").gsub(/\.com\//, "") + "_speed"
  begin
    pagespeed.submit(metric, score)
  rescue
    puts "Could not send metric"
  end
end


require 'net/https'
require 'json'
require 'uri'
require 'librato/metrics'

class PageSpeed
  # strategy is 'desktop' or 'mobile'
  def initialize(domain, strategy = 'desktop', key = ENV['PAGESPEED_API_TOKEN'])
    @domain = domain
    @strategy = strategy
    @key = key
    @url = "" + \
      URI.encode(@domain) + \
      "&strategy=#{@strategy}&key=#{@key}"
  end

  def get_results
    uri = URI.parse(@url)
    http =, uri.port)
    http.use_ssl = true
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE
    request =
    response = http.request(request)
    JSON.parse(response.body)
  end

  def submit(name, value)
    # Account email assumed to live in LIBRATO_EMAIL
    Librato::Metrics.authenticate ENV['LIBRATO_EMAIL'], ENV['LIBRATO_TOKEN']
    Librato::Metrics.submit name.to_sym => { :type => :gauge, :value => value, :source => 'cukebot' }
  end
end

Google's PageSpeed Insights API returns relatively fast, but as you start recording more metrics on each visit (for example, fetching results for both desktop and mobile), we suggest building a separate service that runs the desired performance test as a post, or at least running it in its own thread, so it doesn't tie up the tests or cause long-running runs. Which brings us to our next topic.
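The run-it-in-its-own-thread suggestion can be sketched with a small stdlib-only helper; the `measure_async` name and the commented usage are illustrative, not from our codebase:

```ruby
# Wrap any slow measurement in a background thread so the scenario can
# continue; failures are logged to stderr instead of failing the test run.
def measure_async
  Thread.new do
    begin
      yield
    rescue => e
      warn "Could not send metric: #{e.message}"
    end
  end
end

# Hypothetical usage in the step definition, reusing the PageSpeed class:
#   @ps_thread = measure_async { pagespeed.submit(metric, pagespeed.get_results["score"]) }
# and in an After hook, wait for it before teardown if needed:
#   @ps_thread.join if @ps_thread
```

The thread is fire-and-forget by default; the optional join keeps Capybara teardown from racing an in-flight submission.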

Tracking Run Time

In Sauce Labs you are able to quickly spot a test that takes a long time to run, but when you are running hundreds of tests in parallel all of the time, it's hard to keep track of the ones that normally take a long time to run versus the ones that have recently started taking an abnormally long time. This is why our cukebot service is so important to us.

Now that each test run is stored in our database, we can grab the run time length that Sauce stores and save it with the rest of the details from that test. We can then submit that metric to Librato and track it over time in an instrument. Once again, if all of our tests take substantially longer to run on a specific environment, we can use that data to investigate issues with that server.

To do this we will take advantage of Cucumber's Before/After hooks to grab the time it took for the test to run in Sauce, or track it ourselves, and submit it to Librato. We would use an at_exit handler to record the total time of the suite and submit that as well.

Test Pass/Fail Analytics

Another thing we would like to measure, to see trends over time, is the pass/fail percentage for each individual test on each separate staging environment, as well as the pass/fail percentage of our entire suite. This allows us to notify ops of any servers that need to get "beefed up" if we run into a lot of timeout issues on a particular setup. It also allows us to quickly decide whether we should proceed with a deploy when tests that normally pass over 90% of the time are currently failing.

The easiest way to achieve this is to use the Cucumber After hook to query the Postgres DB for the total passed runs of the current test on the current environment in the last X days, divide that by the total runs on that environment in the same period to generate a percentage, store it, and then track it over time to analyze trends.


Running our integration tests continuously used to be our biggest challenge. Now that we have finally arrived at the party, we have noticed that there are many more things to be done in automation. As people strive for a better quality of product, our standards for what we choose to ship keep rising. One tool we have been experimenting with, and would like to add to our arsenal of automation, has shown us great things so far and has caught a lot of traffic-related issues we would have missed otherwise.

Adding tools like these will allow you to look at a dashboard after each build and give your team a level of confidence that the code is ready to be released into the wild. Most of this has been done, but some of it is right around the corner from completion. If you believe we can enhance this process in any way, I would greatly appreciate any constructive criticism via my Twitter handle @feelobot. As Sauce says, "Automate all the Things!"
