@lrvick
Created December 12, 2018 03:03

Random Red Team

Summary

This document details the practice of intentionally introducing security vulnerabilities into projects to test code review processes and to foster a healthy, expected culture of distrust and heightened security scrutiny during code review, regardless of the author's social standing or experience level.

Motivation

In modern organizations it is commonplace for code to be reviewed for suboptimal patterns, poor commenting, etc. It is far less common for code to be carefully scrutinized for security, particularly under tough deadlines or when the code comes from well-trusted senior engineers.

Likewise, third-party package inclusions, such as new NPM dependencies, are often not audited at all.

This culture of trust creates a non-intuitive danger for contributors, since any of them could be coerced by a sophisticated adversary such as a government (see Australia's Assistance and Access Bill 2018).

If a culture of high security scrutiny during code review is established, then coercion and supply-chain dependency attacks become less desirable and less worth the risk for an adversary, which in turn puts contributors at less risk.

This tactic might also help prevent subtle Heartbleed-style accidents.

Design

In short, we seek to gamify good security practices with a reward system.

A typical example of this is encouraging screen locking by using unlocked machines to social-engineer donut deliveries from the victim via impersonation. Another is encouraging badge checks by introducing badgeless "secret shoppers" who carry rewards for those who challenge them.

This approach extends this idea to code review.

The scheme is as follows:

  1. One engineer is picked at random from a pool of participants every "sprint" or similar time period to be a "bad actor".

  2. During this time period, in addition to regular duties, the engineer has a free pass to try to sneak some type of vulnerability past code review: one that would let them control a private key, execute code, or otherwise give an outside adversary a significant advantage in defeating system security or privacy. (A hypothetical sketch of such a change follows this list.)

  3. Security and Release engineering teams are always informed of the current "bad actor" and know not to actually release any code that engineer is involved in.

  • Organizers can play a role in teaching typical red team tactics but cannot have any direct participation in an attack.
  4. The organization puts up a bounty that is awarded to anyone who successfully spots the vulnerability, OR a higher one to the "bad actor" who successfully gets a peer to approve an exploit.

  5. The "bad actor" and the Security/Release engineering teams are all responsible for calling out the introduced vulnerability before it can get past a dev environment.

Drawbacks

  • Engineers are constantly suspicious of their peers

    • Counter: They should be! Anyone could be compromised at any time.
  • Engineers may spend more time thinking about security rather than features

    • Counter: Accept that higher security slows down development in exchange for quality.
  • Engineers have an incentive to leave known security vulnerabilities in place until their own turn as "bad actor"

    • Counter: Provide rewards for security issue discovery outside of the game.
  • Engineers have the ability to collude and split winnings

    • Counter: Terminate dishonest employees with extreme prejudice.

Unresolved Questions

How far should this be allowed to go? Are phishing, exploiting unattended machines, and planting keyloggers fair game?

Future Work

Deploy in real-world organizations and share results :)

@Melvillian

> Security and Release engineering teams are always informed of the current "bad actor" and know not to actually release any code that engineer is involved in.

Can we tweak this so that an engineer's code contributions aren't completely shut down during their sprint on the red team? For instance, they could alert the release team and sec team of which commit(s) contain vulnerabilities, and then it's on both the red team engineer AND release to make sure those commits get "fixed". The downside is that if things get hectic and release, sec, and the red team engineer all forget to remove the vulns, then we'll have bad code in production. You could also view this as a plus, because it will make the release and sec teams more aware of what's going into production, and if a vuln does get in then we get practice fixing a vuln in production, which will happen whenever we find a real one.
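
One way to make that workable is a guard in the release pipeline. The sketch below is purely illustrative (the FLAGGED_COMMITS list, the branch name, and where the list lives are all assumptions): the red team engineer privately reports the vulnerable commit SHAs, sec/release keep them in a list only they can read, and the release job fails while any flagged SHA is still reachable from the release branch:

```typescript
// Release guard: block the release if any commit flagged by the current
// "bad actor" is still reachable from the release branch. An entry is
// removed from the list once the vulnerability it introduced is fixed.
import { execSync } from "child_process";

const FLAGGED_COMMITS: string[] = [
  // SHAs reported privately by the current "bad actor" (placeholder value)
  "deadbeefcafe0123456789abcdef0123456789ab",
];

function commitsReachableFrom(branch: string): Set<string> {
  const out = execSync(`git rev-list ${branch}`, { encoding: "utf8" });
  return new Set(out.trim().split("\n"));
}

const history = commitsReachableFrom("origin/release");
const unfixed = FLAGGED_COMMITS.filter((sha) => history.has(sha));

if (unfixed.length > 0) {
  console.error(`Release blocked; flagged commits not yet fixed: ${unfixed.join(", ")}`);
  process.exit(1);
}
```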

I think we should make it clearer what is and isn't a vulnerability. This will save everyone time: the red team engineer won't waste effort on code changes that aren't really vulns (like adding a console.log statement somewhere), and the sec team won't be annoyed as much by questions from engineers about what is or isn't a vuln. Here are my two lists (an illustrative sketch follows them); please take what you agree with and update the gist:

An acceptable vulnerability is:

  • leaks private info to unauthenticated users
  • reduces the security of cryptographic material like keys, passwords, and 2FA
  • gives a user greater access to resources than they should normally have
  • makes a security-related feature or code complex enough that it could lead to misuse and loss of security
  • reduces the service uptime or opens up a DoS attack against one of our services

A non-vulnerability is:

  • introducing code that shouldn't be there but is harmless (e.g. console.log statements)
  • introducing code that a reviewer asked you to change but you did not change
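
To illustrate the third acceptable-vulnerability bullet (all routes, names, and the toy auth here are invented for the sketch), here is an IDOR-style change where an ownership check was quietly dropped:

```typescript
// Insecure direct object reference: any authenticated user can read any
// document by id because the ownership check is missing.
import express from "express";

interface Doc { id: string; ownerId: string; body: string; }

const documents = new Map<string, Doc>([
  ["1", { id: "1", ownerId: "alice", body: "alice's notes" }],
  ["2", { id: "2", ownerId: "bob", body: "bob's notes" }],
]);

const app = express();

// Toy auth purely so the example runs end to end: trust a header.
app.use((req, _res, next) => { (req as any).userId = req.header("x-user"); next(); });

app.get("/api/documents/:id", (req, res) => {
  const doc = documents.get(req.params.id);
  if (!doc) { res.sendStatus(404); return; }

  // A reviewer should insist on the missing check, e.g.:
  //   if (doc.ownerId !== (req as any).userId) { res.sendStatus(403); return; }
  res.json(doc);
});

app.listen(3000);
```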
