@EdOverflow
Last active May 22, 2020 13:05

As an ex-triager, what advice would you give other triagers?

I would like to preface this answer with an observation of mine. Please keep in mind that I have no conclusive evidence to back this assertion; it is purely an observation.

Based on my involvement and what I have heard from fellow triagers, I believe that triagers undergo an unintentional form of "exposure therapy" the more incoming reports they evaluate. In other words, triagers become desensitised to the impact and significance of a report because of previously reviewed reports with significant impact. This phenomenon is better known in psychology as the Negative Contrast Effect.

When a triager reviews an insanely impactful bug — say, remote code execution on google.com — they subconsciously set a very high bar for what they deem to be a critical issue. The triager then develops a natural tendency to downplay future reports.

This is why I advocate for Google's approach of cycling through triagers weekly as a potential solution. The triager can then come back with a fresh mind, able to evaluate bug reports without this bias (source: I learnt about this from my many visits to the Google VRP team in Zürich).

If this solution is not feasible, then remember to take regular breaks on your own terms. This includes breaks during work using an approach such as the Pomodoro Technique.

Without regular breaks, triaging becomes an arduous and repetitive process. It is not easy to assume benevolence first when reviewing a report if you are burned out.

A supplementary remark to any members of bug bounty programs reading this: I encourage you to explore solutions for developing more objective rating systems to aid your triagers. Other fields that rely on human judgement to assign scores have researched contrast biases. This topic was explored in a paper titled "Reducing assimilation and contrast effects on selection interview ratings using behaviorally anchored rating scales", published in the International Journal of Selection and Assessment. The paper focuses on developing performance-rating methods that combat contrast bias, and it may be a good starting point for you.

In addition to taking breaks, make the triaging process easier for yourself. Write templates where applicable, but do not overuse them. Hackers often complain on social media that triagers are robotic, partly as a result of the template language. I, for one, view the relationship with the team behind a bug bounty program as the main reason to keep working with that program, more than any other factor (e.g. money). I prefer to feel like I am working with a human being and not a robot.
