@ZapDos7
Created February 19, 2024 10:53
Code Reviews

Tips & Methodologies

What’s a code review?

A code review (also referred to as peer code review) is a process where one or more developers (usually two) analyze a teammate’s code, identifying bugs, logic errors, and overlooked edge cases.

We perform these before merging code onto the main development branches of code repositories.

This code may:

  • introduce new features
  • fix existing bugs
  • improve the existing code base
  • update libraries and introduce the accompanying changes to the codebase, etc.

What makes a good code review?

Effective code reviews are a win for everybody, because:

  • The author will understand your feedback more clearly, resulting in fewer iterations of the code changes.
  • You will be providing the author with tips and points to consider which will affect their approach to the codebase as a whole, so there will be fewer “mistakes” in the next one.
  • The work will be merged more quickly, unblocking others who may depend on it.
  • At the very least you will have a quality discussion about an implementation choice.

Steps

  1. Create a code review checklist

    • A code review checklist is a predetermined set of questions and rules your team will follow during the code review process, giving you the benefit of a structured approach to necessary quality checks before you approve code into the codebase.
    • This checklist may include:
      • Is the code correct? Is the change necessary?
      • Why is it done this way?
      • Is there any missing functionality?
      • Are there any poorly implemented functions?
      • Could they add any related functions the user would like?
      • Are the naming conventions maintained?
      • Are there any redundant comments in the code?
      • Are there any errors?
      • Is the code readable?
      • Is it tested? Is there a need to test more cases?
      • Does the code expose the system to a cyber attack?
      • Is the code maintainable?
      • Is the code well documented?
      • How will it impact other parts of the system?
      • Is the code tied to another system or an outdated program?
      • Is the code following the project’s/company’s coding guidelines/best practices?
      • Does the code use encapsulation and modularization to achieve separation of concerns?
      • Does the code use reusable components, functions, and services?
      • Should the changes be divided into smaller, more focused chunks?
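One lightweight way to keep such a checklist actionable is to encode it as data, so reviewers can record answers and see what is still open before approving. A hypothetical sketch (the question subset and function names are illustrative, not part of any tool):

```python
# Hypothetical sketch: a review checklist encoded as data, so a reviewer
# can record answers and list what is still open before approving.
CHECKLIST = [
    "Is the code correct and is the change necessary?",
    "Is the code readable?",
    "Is it tested, and are more cases needed?",
    "Does it follow the project's coding guidelines?",
]

def open_items(answers: dict[str, bool]) -> list[str]:
    """Return checklist questions not yet answered 'yes'."""
    return [q for q in CHECKLIST if not answers.get(q, False)]

# A review in progress: the first two questions are settled.
answers = {CHECKLIST[0]: True, CHECKLIST[1]: True}
print(open_items(answers))  # the two unanswered questions
```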
  2. Introduce code review metrics

    • You can’t improve code quality without measuring it. Objective metrics help determine the efficiency of your reviews, analyze the impact of changes on the process, and predict the number of hours required to complete a project.
    • Review Metrics Examples:
      • Inspection rate: The speed at which your team reviews a specific amount of code, calculated by dividing lines of code (LoC) by number of inspection hours. If it takes a long time to review the code, there may be readability issues that need to be addressed.
      • Defect rate: The frequency with which you identify a defect, calculated by dividing the defect count by hours spent on inspection. This metric helps determine the effectiveness of your testing procedures; for example, if your developers are slow to find defects, you may need better testing tools.
      • Defect density: The number of defects you identify in a specific amount of code, calculated by dividing the defect count by thousands of lines of code (kLOC). Defect density helps identify which components are more prone to defects than others, allowing you to allocate more resources toward the vulnerable components. For example, if one of your web applications has significantly more defects than others in the same project, you may need to assign more experienced developers to work on it.
      • Reaction time: This metric helps drive collaboration on projects with multiple developers. Simply chart how long it takes a reviewer to respond to a comment addressed to them. Shorter reaction times generally mean a more collaborative, responsive team.
      • Unreviewed PRs: Leaders refer to unreviewed PRs to see how long code waits for a peer review after it’s submitted. Shorter times point to an effective pipeline where code reviews happen on schedule.
      • Thoroughly reviewed PRs: This metric gauges the depth of each review. Tracking thoroughly reviewed PRs ensures that no one rubber stamps their reviews.
      • Iterated PRs: You can see how often code reviews result in fixed bugs or improved quality through iterated PRs. A high number of iterated PRs means your code reviews result in measurable improvements.
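The rate metrics above follow directly from their definitions. A minimal sketch with made-up numbers (1,200 LoC reviewed over 3 hours, 6 defects found):

```python
# Minimal sketch of the review-rate metrics above, using made-up numbers.

def inspection_rate(loc: int, inspection_hours: float) -> float:
    """Lines of code reviewed per hour of inspection."""
    return loc / inspection_hours

def defect_rate(defects: int, inspection_hours: float) -> float:
    """Defects found per hour of inspection."""
    return defects / inspection_hours

def defect_density(defects: int, loc: int) -> float:
    """Defects found per thousand lines of code (kLOC)."""
    return defects / (loc / 1000)

# Example: 1,200 LoC reviewed over 3 hours, 6 defects found.
print(inspection_rate(1200, 3))   # 400.0 LoC/hour
print(defect_rate(6, 3))          # 2.0 defects/hour
print(defect_density(6, 1200))    # 5.0 defects/kLOC
```

Tracking these over several reviews is what makes them useful; a single data point says little about the team.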
  3. Reviewer-Coder Discussions

    • The people involved in the code review (original coder and reviewer(s)) need to communicate with each other effectively in order for the process to be optimized.
    • Tips:
      • Ensure your feedback justifies your stance - when reviewing code, don’t simply suggest what needs to be fixed or improved upon – explain why the developer should make that change. Articulate your coding choices to explain your reasoning.
      • Select between async reviews (using comments) and synchronous ones (ad hoc: in person, via a call, etc.)
      • Share knowledge between the coder & the reviewer(s) - both sides might have different ways to solve the same problem; it is rewarding for them both to exchange their views & experiences.
      • Ask open-ended questions instead of making strong or opinionated statements
      • Take your time
      • Review no more than 400 lines of code at a time, for no longer than 60 minutes per session.
      • Be empathetic
      • Build and Test: This ensures stability. Running automated builds and tests first cuts down on errors and protects the codeline. Make sure that the code does what is expected.
      • Participate: everyone should review & be reviewed
      • Try pair programming
      • Review thoroughly
      • Understand each changed line
      • Review temporary code as strictly as production code.
      • Aim for actionable feedback and probing questions: Rather than commenting on code, ask the author why they formatted the code in a certain way or their intent behind a decision. You’re aiming for a dialogue here.
      • Replace “you” with “we”: When calling out a problem, format your response in a “We like to do X because Y” statement. Avoid comments like: “You didn’t follow our style guidelines here.” Reminding devs that you’re all working for the same team keeps morale high.
      • Lean on principles, not opinions: Aim for objective feedback based on coding frameworks or principles. This also provides a learning culture for the author to better understand the “why” behind a certain bit of feedback.
      • Focus on the aspects that will bring the most value: Don’t focus on every opportunity to improve code during a review. Perfection is great, but given the time and scope of a project, focus on the areas that will make the biggest impact.
  4. Automation

    • A variety of tools can automate parts of the code review process, facilitating, speeding up, and validating the checks that take place.
    • e.g.:
      • Static code analysis tools: Static analysis scans parse source code for errors and security issues by checking it against coding rules. Using one before your review can help you concentrate on harder-to-find problems. Running static analyzers over the code minimizes the number of issues that reach the peer review phase. (e.g.: SonarQube, Coverity, etc)
      • Plug-ins for corrections: Plug-ins for formatting, debugging, and suggesting best practices can help during your review. During the review, these plug-ins point out issues you might’ve missed. (e.g. intellij-java-google-style Code Formatting Plugin)
      • Code review comment trackers: Collaborative review apps and comment tracking tools outline who interacted with code and changed it. When your review goes through more than one phase, these tools can help organize the process. (e.g. GitLab comments)
      • AI-assisted review tools
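As a toy illustration of what a static check does (this is not a real analyzer; the function name and thresholds are made up), here is a sketch using Python's built-in ast module to flag syntax errors and overly long functions before a human review:

```python
import ast

def static_check(source: str, max_func_lines: int = 50) -> list[str]:
    """Toy static check: report syntax errors and overly long functions."""
    issues = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        # Code that does not parse should never reach a human reviewer.
        return [f"syntax error at line {exc.lineno}: {exc.msg}"]
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > max_func_lines:
                issues.append(f"function '{node.name}' is {length} lines long")
    return issues

print(static_check("def ok():\n    return 1\n"))  # []
```

Real analyzers such as SonarQube apply hundreds of rules like these; the point is that they filter out mechanical issues so reviewers can focus on design and correctness.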

Sources

Steps - Personal notes

  1. Check the goal/task of the change
  2. Start with the README and Swagger/documentation to get a feel for the code
  3. Review the actual functionality: check out the code and execute it