Code Review Checklist

This checklist is consolidated from Tim Hockin's "How To Be A Bad-Ass Code Reviewer" talk (KubeCon Contributor Summit, Nov 2019).

Out of scope: API review, KEP review

  • Pre-work
    • Do I have enough time for this review?
    • Read linked issues
    • Read PR description
    • Read over past discussions
    • Does this change require domain specific knowledge? Do I have it?
  • Motivation: Bug Fix
    • Is the bug cited?
    • Is the bug real and worth fixing?
    • Does this actually fix it?
    • Does this prove that it fixes it (tests, benchmarks)?
    • Is this the "right" fix?
  • Motivation: Feature
    • Is the KEP cited (and implementable)?
    • Do we really want the feature?
    • How does it interact with other features?
    • Is the community (SIGs) OK with the design? Have the relevant SIGs reviewed it?
  • Style
    • Is this the simplest way to express the solution? Can it be broken down further, or prefactored (preparatory refactoring landed as separate PRs)?
    • Is the PR a complete, functional unit?
    • Are symbols named clearly?
    • Are the changes being made in appropriate places (types, functions, files, packages)?
    • (Go) Do exported symbols need to be exported? Is it the right API? (See the sketch below.)
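
A quick illustration of the exported-symbol question, with invented names (a sketch, not a prescription): in Go, every capitalized identifier becomes part of the package's public API, so helpers that only serve the package should stay unexported.

```go
package cache

import "strings"

// store is package-internal state.
var store = map[string]string{}

// Get is the intended public surface of the package.
func Get(key string) (string, bool) {
	v, ok := store[normalize(key)]
	return v, ok
}

// normalize is an implementation detail. Exporting it (Normalize)
// would commit the package to supporting it as API indefinitely;
// keep it unexported unless callers genuinely need it.
func normalize(key string) string {
	return strings.ToLower(strings.TrimSpace(key))
}
```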
  • Substance
    • Are there comments explaining design choices?
    • Are there comments explaining non-obvious code?
    • Are errors handled appropriately (return, log, or retry)? Is context added? (See the error-wrapping sketch after this list.)
    • Are failures handled gracefully? Is it clear from the code and comments? How will the user know something failed?
    • Logs (structured-logging sketch after this list)
      • Are logs useful and/or actionable?
      • Are logs spammy?
      • Is there enough data or context to determine what actually happened?
    • Metrics
    • UX (config, APIs, CLIs, flags, etc.)
      • Do they feel natural?
      • Do they have good defaults?
      • Are they consistent with other instances?
      • Would they impede or confuse other planned work?
      • Do they follow the Principle of Least Surprise?
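
On the error-handling item above: one common Go pattern (a sketch; loadConfig and the config path are hypothetical) is to wrap errors with %w, so callers get added context without losing the original cause.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// loadConfig is a hypothetical helper; the point is the error handling.
func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// Add context and preserve the cause with %w, so callers can
		// still match the underlying error with errors.Is / errors.As.
		return nil, fmt.Errorf("loading config from %q: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadConfig("/etc/app/config.yaml"); err != nil {
		if errors.Is(err, os.ErrNotExist) {
			fmt.Println("config missing, using defaults")
			return
		}
		fmt.Println("fatal:", err) // the message carries the full chain
		os.Exit(1)
	}
}
```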
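On the log items: Kubernetes code typically logs through k8s.io/klog/v2. A minimal sketch of log lines that carry enough context to be actionable (the pod and node names are invented):

```go
package main

import (
	"errors"
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	pod, node := "web-7d4b9", "node-3" // hypothetical values

	// Structured key/value logging: greppable, and carries the context
	// (which pod, which node) needed to determine what actually happened.
	klog.InfoS("Scheduled pod", "pod", pod, "node", node)

	err := errors.New("connection refused")
	// ErrorS keeps the error separate from the message and context.
	klog.ErrorS(err, "Failed to sync pod", "pod", pod, "node", node)
}
```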
  • Testing
    • Are there tests? (Consider adding a missing test as a "prefactor".) See the table-driven sketch after this list.
    • Are there tests for the expected use cases?
    • Are there tests for the pathological use cases?
    • Are there tests for the error cases?
    • Avoid:
      • Twiddling global variables or flags
      • Use of time.Sleep (a channel-based alternative is sketched after this list)
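
On the testing items: the idiomatic Go shape for covering expected, pathological, and error cases in one place is a table-driven test. A minimal sketch around a hypothetical parsePort function:

```go
package main

import (
	"fmt"
	"strconv"
	"testing"
)

// parsePort is a hypothetical function under test.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d out of range", p)
	}
	return p, nil
}

func TestParsePort(t *testing.T) {
	tests := []struct {
		name    string
		in      string
		want    int
		wantErr bool
	}{
		{"expected case", "8080", 8080, false},
		{"pathological: boundary", "65535", 65535, false},
		{"pathological: zero", "0", 0, true},
		{"error: not a number", "http", 0, true},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			got, err := parsePort(tc.in)
			if (err != nil) != tc.wantErr {
				t.Fatalf("parsePort(%q) error = %v, wantErr %v", tc.in, err, tc.wantErr)
			}
			if err == nil && got != tc.want {
				t.Errorf("parsePort(%q) = %d, want %d", tc.in, got, tc.want)
			}
		})
	}
}
```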
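And on the time.Sleep point: rather than sleeping and hoping asynchronous work has finished, synchronize on a channel (or poll with a deadline). A sketch with invented names:

```go
package main

import (
	"testing"
	"time"
)

// startWorker is hypothetical; it signals done when it finishes.
func startWorker(done chan<- struct{}) {
	go func() {
		// ... do work ...
		close(done)
	}()
}

func TestWorker(t *testing.T) {
	done := make(chan struct{})
	startWorker(done)

	// Flaky: time.Sleep(100 * time.Millisecond) — too short on a loaded
	// CI machine, wasted time everywhere else.

	// Deterministic: block until the worker signals, with a timeout so
	// a hang fails fast instead of stalling the whole suite.
	select {
	case <-done:
	case <-time.After(5 * time.Second):
		t.Fatal("worker did not finish within 5s")
	}
}
```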
  • Post-work
    • Are the commits organized / squashed?
    • Am I critiquing the code, not the coder?
    • Have I indicated progress? (remaining work, expected timeline)