@rick
Created October 25, 2014 16:41
# Public: provide debugging information for tests which are known to fail intermittently.
#
# issue_link - URL of the GitHub issue documenting this intermittent test failure
# args       - Hash of debugging information (names => values) to output on a failure
# block      - block which intermittently fails
#
# Example
#
#   fails_intermittently('https://github.com/github/github/issues/27807',
#     '@repo' => @repo, 'shas' => shas, 'expected' => expected) do
#     assert_equal expected, shas
#   end
#
# Re-raises any MiniTest::Assertion (or StandardError) raised in the block.
#
# Returns the value of the yielded block when no test assertion fails.
def fails_intermittently(issue_link, args = {}, &block)
  raise ArgumentError, "provide a GitHub issue link" unless issue_link
  raise ArgumentError, "a block is required" unless block_given?

  yield
rescue MiniTest::Assertion, StandardError => boom # we have a test failure!
  STDERR.puts "\n\nIntermittent test failure! See: #{issue_link}"

  if args.empty?
    STDERR.puts "No further debugging information available."
  else
    STDERR.puts "Debugging information:\n"

    args.keys.sort.each do |key|
      STDERR.puts "#{key} => #{args[key].inspect}"
    end
  end

  raise boom
end
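
A minimal usage sketch, assuming the helper is mixed into a Minitest test case; the test class, module name, and fixture data below are hypothetical illustrations, not part of the gist:

  class RepositoryShasTest < Minitest::Test
    include IntermittentFailureHelper # hypothetical module wrapping fails_intermittently

    def test_expected_shas
      expected = %w[abc123 def456] # hypothetical fixture data
      shas     = %w[abc123 def456] # stand-in for the value actually under test

      fails_intermittently('https://github.com/github/github/issues/27807',
                           'shas' => shas, 'expected' => expected) do
        assert_equal expected, shas
      end
    end
  end

If the block's assertion fails, the debugging hash is dumped to STDERR alongside the issue link before the assertion error is re-raised, so the test still fails as usual.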
kevpl commented Sep 11, 2015

ooh, I like this as a notification to the person reading the test of what's going on here. Was there any work done around this to also report statistics on how often these flaky tests fail, so you can get a handle on which flaky tests are having a lot of impact compared to others?

rick (author) commented Sep 16, 2015

There was not, at least not in that sense (during the remainder of my tenure there), though we were doing some database spelunking (akin to what we have available in our Postgres databases, I think). For here I would think we could use the Failed Build Analyzer plugin (already installed in at least some of our instances) to do that, maybe. I definitely always wanted more telemetry on this stuff.
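
For what it's worth, one way the helper could feed that kind of statistic (a sketch only; the log path, record shape, and aggregation step are assumptions, not anything from the gist or from GitHub's suite) is to append a record per failure from the rescue branch and tally the records after the build:

  require 'json'
  require 'time'

  # Hypothetical log location; a CI post-build step could aggregate it.
  FLAKY_LOG = ENV.fetch('FLAKY_LOG', 'tmp/intermittent_failures.jsonl')

  # Would be called from the rescue branch of fails_intermittently, just before re-raising.
  def record_intermittent_failure(issue_link)
    File.open(FLAKY_LOG, 'a') do |f|
      f.puts({ issue: issue_link, at: Time.now.utc.iso8601 }.to_json)
    end
  rescue SystemCallError
    # Telemetry must never turn a flaky failure into a hard error.
  end

  # Aggregation afterwards, e.g. in a rake task:
  #   counts = File.readlines(FLAKY_LOG).map { |l| JSON.parse(l)['issue'] }.tally
  #   counts.sort_by { |_, n| -n }.each { |issue, n| puts "#{n}x #{issue}" }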
