# Public: provide debugging information for tests which are known to fail intermittently
#
# issue_link - URL of the GitHub issue documenting this intermittent test failure
# args       - Hash of debugging information (names => values) to output on a failure
# block      - block which intermittently fails
#
# Example
#
#   fails_intermittently('https://github.com/github/github/issues/27807',
#       '@repo' => @repo, 'shas' => shas, 'expected' => expected) do
#     assert_equal expected, shas
#   end
#
# Re-raises any MiniTest::Assertion or StandardError raised by the block.
#
# Returns the value of the yielded block when no test assertion fails.
def fails_intermittently(issue_link, args = {}, &block)
  raise ArgumentError, "provide a GitHub issue link" unless issue_link
  raise ArgumentError, "a block is required" unless block_given?

  yield
rescue MiniTest::Assertion, StandardError => boom # we have a test failure!
  STDERR.puts "\n\nIntermittent test failure! See: #{issue_link}"
  if args.empty?
    STDERR.puts "No further debugging information available."
  else
    STDERR.puts "Debugging information:\n"
    args.keys.sort.each do |key|
      STDERR.puts "#{key} => #{args[key].inspect}"
    end
  end
  raise boom
end
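For readers who want to see the helper's behavior without a full Minitest run, here is a cut-down, standalone sketch. It is a simplified copy of the helper above (rescuing only StandardError, since Minitest is not loaded here); the issue URL and the data passed in are placeholders, not real failure data.

```ruby
# Simplified stand-in for the helper above, runnable without Minitest.
def fails_intermittently(issue_link, args = {})
  raise ArgumentError, "provide a GitHub issue link" unless issue_link
  raise ArgumentError, "a block is required" unless block_given?

  yield
rescue StandardError => boom
  warn "\n\nIntermittent test failure! See: #{issue_link}"
  args.keys.sort.each { |key| warn "#{key} => #{args[key].inspect}" }
  raise boom
end

# Passing block: the helper is transparent and returns the block's value.
value = fails_intermittently("https://github.com/github/github/issues/27807") { 1 + 1 }
puts value

# Failing block: debugging context goes to stderr, then the error is re-raised,
# so the test (or caller) still fails exactly as it would without the wrapper.
begin
  fails_intermittently("https://github.com/github/github/issues/27807",
                       "shas" => ["abc123"], "expected" => ["def456"]) do
    raise "flaky assertion stand-in"
  end
rescue RuntimeError => e
  puts "re-raised: #{e.message}"
end
```

The key design point the sketch illustrates: the helper never swallows the failure. It only adds context on the way out, so CI still reports the test as failing.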
ooh, I like this as a notification to the person reading the test of what's going on here. Was there any work done around this to also report statistics on how often these flaky tests fail, so you can get a handle on which flaky tests are having a lot of impact compared to others?

There was not, not in that sense (at least during the remainder of my tenure there), though we were doing some database spelunking (akin to what we have available in our Postgres databases, I think). Here, I would think we could use the Failed Build Analyzer plugin (already installed in at least some of our instances) to do that, maybe. I definitely always wanted more telemetry on this stuff.