
@cablej cablej/blog.md Secret
Last active Feb 1, 2019


Getting Started in Bug Bounties

Stanford is taking an exciting step in security today by launching a bug bounty program. For the first time, Stanford students and faculty will be allowed to probe Stanford websites for vulnerabilities -- and will be paid for doing so.

Participation in the Stanford Bug Bounty Program is restricted to current students and faculty. For all details, visit bounty.stanford.edu.

This post serves as a primer for anyone who wants to give hacking Stanford a shot. We are all hackers. By laying the groundwork with all information needed to get started, we hope to make this a learning experience for all.

Introduction

It seems that with every day, data breaches become more commonplace. Chances are, your personal information has already been compromised by hackers and your passwords are known, with little recourse but to continue trusting companies with your data. It's clear that something isn't working with security: attacks repeat themselves, and companies keep making the same mistakes. Security starts, first and foremost, with education. Not everyone has to be a security professional, but if we can empower the next generation of computer scientists to be more diligent and aware when it comes to security, we can significantly improve the state of security. The Stanford bug bounty program is an incredible opportunity to learn more about security by practicing on real-world targets and software that we are all already familiar with. Beyond Stanford, there are thousands of companies with bug bounty programs asking to be hacked. There is no better, and safer, time than now to begin learning security.

While prior experience helps, this post assumes no knowledge of computer science. It is truly intended for anyone to pick up and start learning. Web security is a surprisingly approachable field, and we welcome your feedback on this experiment.

The Basics

When you visit a website, your computer sends a message to the website's server requesting the contents of the website. The server responds in a format called HTML, which your browser renders to display the webpage. In this case, your request to the server is known as a "GET" request, because you are requesting to get information from the server. This is in contrast to other methods such as a "POST" request, which takes actions or modifies information. The request to get information from example.com might look like this:

GET / HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/537.36
Accept-Language: en-US,en;q=0.9
Cookie: session=a1b2c3d4

In this request, we first see the GET method being used, followed by the path, in this case a single slash /. The single slash is known as the root path, and specifies that we are requesting the front page of the website. At the end of the first line is the protocol being used, which will always be HTTP for our purposes.

We then see the specification that the host is www.example.com, followed by a number of other parameters. These parameters are known as headers, and pass additional metadata, such as a user's browser information, language, and cookies.

The response to this request looks similar:

HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=UTF-8
Content-Length: 1256

<!doctype html>
<html>
  <head><title>Example Domain</title></head>
  <body>...</body>
</html>

In the first line, we see the response code. The server responded with 200 OK, indicating a successful request. Other common response codes include 404 Not Found, 403 Forbidden, 500 Internal Server Error, and 302 Found for redirections.

The server then responds with headers of its own, such as information on its software, the format of its response, and the response size. In this case, the content type is HTML, which the server returns to be displayed by the browser.

An important mechanism for storing information is cookies. Cookies let the server specify information for the browser to store. For instance, when a user provides their username and password to log in, it would be inefficient for the user to have to send that information with every request to authenticate themselves. Instead, the server has the browser store a cookie corresponding to the session: a random string of characters that the user can then use to authenticate themselves. Let's take a look at how a login request might work.

POST /login HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 31

username=alice&password=hunter2

Instead of a GET request, the user sends a POST request to log in, as the user wants to take an action. At the end of the request, the username and password are included. This section is known as the body of the request, and includes parameters separated by the & character. These parameters can also go in the path of the URL, such as in a GET request, where we might see GET /search?q=dogs.
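This key=value&key=value format can be parsed mechanically. As an illustration, here is a short Python sketch using the standard library (the parameter values are just examples):

```python
from urllib.parse import parse_qs, urlparse

# A GET request carries parameters in the URL's query string.
url = "https://example.com/search?q=dogs&page=2"
print(parse_qs(urlparse(url).query))   # {'q': ['dogs'], 'page': ['2']}

# A POST body uses the same key=value&key=value format.
body = "username=alice&password=hunter2"
print(parse_qs(body))                  # {'username': ['alice'], 'password': ['hunter2']}
```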

The server then responds indicating whether the login was successful. In the case of a success, the server instructs the browser to set a cookie storing the session:

HTTP/1.1 200 OK
Set-Cookie: session=8f3b1c9a2e4d
Content-Type: text/html

In the next request to the server, the browser includes the cookie as a header, which the server validates against a database of known sessions. Now, the user can take actions as usual, with the server knowing that the user is logged in.
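The server's side of this exchange can be sketched in a few lines of Python. This is a minimal, hypothetical in-memory version (real servers store sessions in a database), with made-up function names:

```python
import secrets

# Hypothetical in-memory session store mapping session tokens to users.
sessions = {}

def log_in(username: str) -> str:
    """Issue a random session token after a successful login."""
    token = secrets.token_hex(16)
    sessions[token] = username
    return token  # sent to the browser in a Set-Cookie header

def user_for_cookie(token: str):
    """Look up which user, if any, a session cookie belongs to."""
    return sessions.get(token)

token = log_in("alice")
assert user_for_cookie(token) == "alice"      # valid session cookie
assert user_for_cookie("made-up-token") is None  # unknown cookie is rejected
```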

Trying it out

The most commonly used software in web security is called Burp Suite. Burp Suite acts as a proxy between your browser and websites, allowing you to inspect and modify requests and server responses. The feature-rich free version can be downloaded at portswigger.net, after which you may follow their instructions to configure Burp Suite with your browser.

(A helpful tip for Burp Suite is to ensure that Intercept is off in the Proxy->Intercept tab and to then select the HTTP history tab.)

After configuring, try visiting any website in your web browser. Under the HTTP history tab in Burp Suite, the requests will be displayed as they are sent. By clicking on a request in Burp Suite, you can see the individual requests and responses, which follow the format as described above.

In the setup process for Burp Suite, you had to install a certificate from Burp Suite to visit HTTPS websites. You are likely familiar with the distinction between HTTP and HTTPS, with a padlock icon indicating that a website is secure. HTTP requests are sent in plaintext, meaning that anyone who can view your traffic has access to all information sent, such as your passwords and account information. Nowadays, almost every website uses SSL/TLS encryption (HTTPS), which encrypts your requests with a key residing on the server. That way, only you and the website you're visiting have access to the contents of the information being transmitted. Normally, Burp Suite would not be able to display the plaintext contents of a message, as it is encrypted. Installing Burp Suite's certificate enables it to decrypt each request, display it, and allow you to modify its contents. Worry not: even though all requests are displayed in plaintext in Burp Suite, they are still encrypted and secure from any snooping attackers on your network.

Try visiting some websites and taking a look at requests and responses as they come through Burp Suite. What types of request methods are you able to see, and what types of actions usually trigger them? It's surprising how many HTTP requests are made by just visiting one website. Every resource required by the website, whether an image, stylesheet, or JavaScript file, must be requested separately and takes time to load. This is why, when developing websites, it is important to be cognizant of the number of requests being made and the size of included files.

The true power of Burp Suite comes from the ability to modify requests and responses. By right clicking a request, you may send it to the Burp Repeater (Command-R on Mac and Control-R on Windows). Then, you are free to edit any contents of the request and send it again to the server to see how the server responds.

Your First Vulnerability

At this point, we're ready to take a look at a first vulnerability type. Known as an Insecure Direct Object Reference (IDOR, for short), this vulnerability category is remarkably simple and can lead to extremely dangerous outcomes. It occurs when a website fails to properly validate that you have access to a resource that you're requesting. For instance, let's say that you're assigned a user id of 1445 on a website. The website responds with your user information when requesting example.com/api/v1/users/1445. By changing the user ID in the request from 1445 to 1444, the server fails to validate that this account belongs to you, and instead returns sensitive information belonging to another user.

Insecure Direct Object References are all too common, even though they are easily preventable by properly validating that users have access to information when requested. When browsing websites, any time you see a numerical id, it is always worthwhile to see if changing the id returns another user's information (given that you have permission to test the website, of course!).
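The fix described above, checking ownership before returning a record, can be sketched in a few lines of Python. The data and function names here are hypothetical:

```python
# Hypothetical user store keyed by numeric ID.
users = {
    1444: {"owner": "bob",   "email": "bob@example.com"},
    1445: {"owner": "alice", "email": "alice@example.com"},
}

def get_user_insecure(requested_id):
    # Vulnerable: returns any record with no ownership check (an IDOR).
    return users.get(requested_id)

def get_user(requested_id, logged_in_as):
    # Fixed: validate that the requester actually owns the record.
    record = users.get(requested_id)
    if record is None or record["owner"] != logged_in_as:
        return None
    return record

assert get_user_insecure(1444)["email"] == "bob@example.com"     # alice can read bob's data
assert get_user(1444, logged_in_as="alice") is None              # access properly denied
assert get_user(1445, logged_in_as="alice")["email"] == "alice@example.com"
```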

These vulnerabilities can occur in any type of request, with any type of identifier. For instance, it might be that in a POST request to update your account, your user id is present in the POST body. Or maybe you are assigned an alphanumeric id for a credit card associated with your account, and guessing another user's id reveals their credit card number. I have seen IDORs in everything from billing receipts to updating passwords, which can have an enormous impact on a company's security.

HackerOne, a bug bounty platform that hosts bug bounty programs for other companies, has a "hacktivity" stream where vulnerability reports to bug bounty programs can be published. Try searching HackerOne's hacktivity for IDORs. They are everywhere, and lead to some large bounties.

Vulnerabilities don't have to be complicated to be impactful. Sometimes, the most critical flaws stem from a trivial oversight, such as an Insecure Direct Object Reference.

Cross-Site Scripting: The Crowd Favorite

The next vulnerability, Cross-Site Scripting (XSS), is the most common flaw found in bug bounty programs. Whereas Insecure Direct Object References are known as a server-side vulnerability, arising due to a flaw in a website's server exposing information, Cross-Site Scripting is instead a client-side vulnerability, targeting an individual user's browser. In the case of Cross-Site Scripting, an attacker may inject malicious JavaScript into the HTML response of a website. JavaScript is the universal language running on the frontend of all websites (the frontend is the content that is rendered on a user's browser, as opposed to the backend consisting of the server and application logic). Websites may use JavaScript to make interactive pages, special animations, or new requests to a server once a page has been loaded. The JavaScript running on a website has access to all content on the web page, and sometimes a user's cookies (more on this later).

As a result, if an attacker can control the JavaScript running on a web page, they can compromise a user's account and all sensitive information associated with the account. How does this happen? Let's say that a website has a search page, which searches for a term inputted by the user. The page to search might look like https://example.com/search?q=stanford. After visiting the page, the website states "x results found for 'stanford'". The building blocks of HTML consist of tags separated by angle brackets (< and >). What if we were able to inject our own HTML tags into the response of the website? For instance, <h1>Hello</h1> displays a header with large text. If the server doesn't properly filter the query parameter, any HTML code sent in the parameter will be rendered in the page. Visiting https://example.com/search?q=<h1>Hello</h1> results in a large "Hello" being displayed, so we know that the server is vulnerable to Cross-Site Scripting. We can take this a step further by injecting a <script> HTML tag, which instructs the browser to execute the contents of the tag as JavaScript code. <script>alert(1)</script>, for instance, executes a JavaScript alert when the page loads. This is the most common XSS payload, a term for malicious input which causes JavaScript to execute.
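The vulnerable search page boils down to string concatenation. A minimal sketch, with a made-up page template:

```python
def render_results_insecure(query: str) -> str:
    # Vulnerable: the user-controlled query is pasted straight into HTML.
    return f"<p>0 results found for '{query}'</p>"

page = render_results_insecure("<script>alert(1)</script>")
# The payload survives intact, so the browser would execute it.
assert "<script>alert(1)</script>" in page
```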

Once we have JavaScript execution, what steps must we take to compromise the user's account? We mentioned earlier that cookies are used to authenticate a user to the server. If we can access the cookies in JavaScript, we could send them to our own server and gain access to the user's account. (Un)fortunately, websites can add a flag to their cookies known as HTTP-Only, which prevents them from being accessible by JavaScript. Despite making it more difficult for an attacker to compromise a user's account, this hardly reduces the impact of Cross-Site Scripting attacks. For instance, as we have access to the complete contents of the web page, we could simply send a user's sensitive information on the page back to our own server. Further, we can take actions on behalf of the user, such as changing their email address or even their password (to prevent this, many websites require a user's current password to change their email or password). Thus, even with certain protections in place, it is difficult to stop an XSS attack once a vulnerability is identified.

Cross-Site Scripting attacks are also easy to prevent. Any time text inputted by the user is displayed on the website, the text should be HTML encoded: a process of converting unsafe HTML characters into harmless text. For instance, the < character is converted to &#x3c;, which your browser knows to display as an angle bracket when rendering the page. This has the effect of displaying text exactly as it was inputted, without the risk of injecting HTML or JavaScript.
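In Python, for example, the standard library's html.escape performs this kind of encoding, and fixing the search page sketch is a one-line change:

```python
from html import escape

def render_results(query: str) -> str:
    # Safe: HTML-encode user input before placing it in the page.
    return f"<p>0 results found for '{escape(query)}'</p>"

page = render_results("<script>alert(1)</script>")
assert "<script>" not in page                           # no live tag remains
assert "&lt;script&gt;alert(1)&lt;/script&gt;" in page  # displayed as plain text
```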

XSS attacks come in multiple flavors. The attack explored above is known as a reflected XSS attack, where text present in a URL parameter is rendered as HTML and can lead to JavaScript execution. Many browsers include an XSS auditor that attempts to detect when HTML code in a parameter is improperly encoded and blocks the page from rendering. Even so, bypasses to the XSS auditor are often found, and some browsers, like Firefox, do not have one.

A more dangerous category of Cross-Site Scripting is stored XSS. Instead of being injected with every request, stored XSS occurs when malicious content is submitted once and stored by the server, such as in a user's name. This is especially concerning in the context of XSS worms, which can spread across users by modifying a user's profile. A humorous example occurred a few years ago on Twitter, where a user found that tweets followed by a heart emoji would be rendered as HTML. The user posted a script that would retweet itself, which quickly amassed tens of thousands of retweets.

(Screenshot: the self-retweeting XSS worm tweet, June 2014)

SQL Injection

One of the most severe and commonly exploited vulnerabilities is SQL Injection. The cause of countless data breaches, SQL Injection occurs when an attacker can execute arbitrary operations against a server’s database.

SQL stands for Structured Query Language, and is the most common language used to interface with databases. Common databases that use SQL, for instance, are MySQL and PostgreSQL.

In SQL, the fundamental unit of storage is the table. Much like a spreadsheet, a table consists of any number of rows, each having values for pre-specified columns. For example, a Users table may have columns username, email, and password, representing the basic information for each user.

Where does SQL go wrong? Let’s say that we want to implement login functionality to check that a user’s password is correct. We build our query as follows:

SELECT * FROM Users WHERE username='' AND password=''

SELECT is SQL’s way of retrieving all rows that match a certain condition. In this case, we will put the user-submitted username and password between the quotes to see if their password is correct. If it is, the database will return 1 row, as both the username and password match a valid row in the database. If not, the database will return nothing, indicating that the password is incorrect.

Although this query is well-formatted and would work for innocent users, this is susceptible to an unfortunate attack. Characters that a user inserts in between the quotes are interpreted as a part of the SQL query. Thus, providing a password of ' OR 'a'='a transforms our query into the following:

SELECT * FROM Users WHERE username='' AND password='' OR 'a'='a'

In this case, we abuse SQL's boolean operators to force the database to return all rows where 'a' is equal to 'a'. Of course, this is always true, so all rows are returned, meaning our query allows us to log in. In practice, this technique can be used to exfiltrate or modify all information in a database, giving an attacker mass amounts of user information, which can be critically damaging to a company's security and its users' privacy.

The fix? Don’t let an attacker inject their own commands into your SQL queries. The standard defense is parameterized queries (also called prepared statements), where user input is passed to the database separately from the query itself and can never be interpreted as SQL. At minimum, use a library function that escapes quotes in user input, so that quotes are interpreted as text and nothing more.
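With Python's sqlite3 module, for example, a parameterized login query defeats the attack above. A sketch using a throwaway in-memory database:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Users (username TEXT, password TEXT)")
db.execute("INSERT INTO Users VALUES ('alice', 's3cret')")

def login(username: str, password: str) -> bool:
    # The ? placeholders ensure input is treated as data, never as SQL.
    row = db.execute(
        "SELECT * FROM Users WHERE username=? AND password=?",
        (username, password),
    ).fetchone()
    return row is not None

assert login("alice", "s3cret") is True
assert login("alice", "' OR 'a'='a") is False   # the classic injection no longer works
```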

Cross-Site Request Forgery

Another client-side vulnerability is known as Cross-Site Request Forgery (CSRF). With Cross-Site Request Forgery, a malicious website can take actions on behalf of a logged in user’s account. This flaw stems from the fact that on the web, one website can submit a form (e.g. a POST request) to another website. Crucially, these requests automatically include the user’s cookies, meaning that a website can make requests that look like they originated from a logged-in user.

An attack scenario might look like this:

  1. A user logs onto their account on example.com. Example.com makes the mistake of not implementing CSRF protection.
  2. The user visits attacker.com. The website launches a POST request to example.com to change the user’s password.
  3. Example.com receives an authenticated request to change the user’s password. Given that the request is indistinguishable from a legitimate request, the website proceeds with the password change.
  4. The attacker is now able to log into the user’s account.

Cross-Site Request Forgery may be present on any action that modifies a user’s account state. In particular, places to look for Cross-Site Request Forgery are sensitive actions that may allow account compromise, such as changing passwords or email addresses.

There are a few methods to prevent CSRF attacks. The most common technique is to include a CSRF-preventing token in all requests. This unique token is specific to each user and known only by the website and the user. Thus, the website can validate that a request is legitimate if the correct CSRF token is present. (The CSRF token goes by many names, such as XSRF token, requestVerificationToken, and more.)
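Issuing and checking such a token can be sketched with Python's standard library. The store and function names here are hypothetical; real applications tie the token to the user's session:

```python
import hmac
import secrets

# Hypothetical per-user token store.
csrf_tokens = {}

def issue_csrf_token(user: str) -> str:
    token = secrets.token_hex(16)
    csrf_tokens[user] = token
    return token  # embedded in forms served to this user

def check_csrf_token(user: str, submitted: str) -> bool:
    expected = csrf_tokens.get(user)
    # compare_digest avoids timing side channels when comparing secrets.
    return expected is not None and hmac.compare_digest(expected, submitted)

token = issue_csrf_token("alice")
assert check_csrf_token("alice", token) is True    # legitimate form submission
assert check_csrf_token("alice", "forged") is False  # attacker cannot guess the token
```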

Another way to prevent CSRF attacks, though generally less recommended, is to validate the Origin header of a request. This header tells the server what website a request came from, and is enforced by the browser to be correct. As a result, a website can verify that the Origin header matches the correct domain to enforce CSRF protection.

If you see neither of these protection mechanisms, it is always worthwhile to try creating an HTML page that submits a form. For instance, the following would submit a POST request to example.com with a body of newPassword=password:

<form action="https://example.com" method="POST">
    <input type="hidden" name="newPassword" value="password" />
    <input type="submit" value="Submit request" />
</form>

Conclusion

I hope that this provided a suitable introduction to the world of web security, and you now feel comfortable experimenting yourself. Burp Suite (or even the developer tools on a browser) is all that’s needed to start discovering critical vulnerabilities in bug bounty programs.

This post is by no means comprehensive, and there is much more to be explored in web security. With that said, below are additional resources to get you started.

Further Reading

Security is a broad field, and even within web security there exists tons of research devoted to specific vulnerability categories. As a more in-depth introduction to web vulnerabilities and bug bounty programs, I recommend Pete Yaworski’s Web Hacking 101. His book explores the most common vulnerability types and examines them in the context of real bug bounty reports.

Pete has generously provided Stanford students with free access to his ebook. Visit here to download the ebook PDF, after logging in with your Stanford account.

HackerOne offers Hacker101 as a resource with videos overviewing vulnerability types and mock websites to practice on. In particular, their newcomers video playlist delves deeper into basic vulnerabilities and is a good place to start.

For a quicker overview, the OWASP Top 10 is a project devoted to categorizing the 10 most common web vulnerabilities. OWASP researches in detail the impact of these vulnerabilities and successful prevention strategies.

Lastly, there’s no better way to learn about web exploitation than reading others’ discoveries in real websites. Compiled here is a list of some of the best bug bounty writeups that offer real-world insight.
