EdOverflow
I swear there was an XSS somewhere around here...
include:
- .well-known
$ curl -s "https://crawler.ninja/files/security-txt-values.txt" | grep -i "hiring: http" | sed 's/^#//g' | awk '{print $2}'
https://www.tumblr.com/jobs
https://kariera.shoptet.cz/
https://g.co/SecurityPrivacyEngJobs
https://www.shopify.com/careers
https://solarwinds.jobs
https://www.chcidozootu.cz/it-devel/
https://careerssearch.bbc.co.uk/jobs/search
https://www.npmjs.com/jobs
https://grab.careers/

As an ex-triager, what advice would you give other triagers?

I would like to preface this answer with an observation of mine. Please keep in mind that I do not have any conclusive evidence to back this assertion; it is purely an observation.

Based on my involvement and what I have heard from fellow triagers, I believe that triagers experience an unintentional form of “exposure therapy” the more they evaluate incoming reports. In other words, triagers become desensitised to the impact and significance of a report because of previously reviewed, high-impact reports. This phenomenon is better known in psychology as the negative contrast effect.

When a triager reviews an extremely impactful bug, say remote code execution on google.com, they subconsciously set a very high bar for what they deem to be a critical issue. The triager develops a natural tendency to downplay future reports.

This is why I advocate for Google’s approach of cycling through triagers weekly as a potential solution. The triager mi

Logical bugs require that you understand the app workflow as much as you can, and that can take days or even weeks. How do you stay motivated during that time and keep going even though you're not finding bugs?

It is true that logic flaws require a comprehensive understanding of the target application and service. Part of the reason I can deal with the concern of not finding bugs is rooted in my mentality and approach to bug bounty hunting in general.

Anyone who has worked closely with me will be able to attest that I have a tendency to come and go when it comes to bug bounty hunting. One week I am hunting and then I am on “holiday” for a few months. This is to ensure I do not burn out and it gives me the freedom to ponder on issues rather than get all wrapped up in a program.

Someone once referred to my approach as the “Veni, vidi, vici” of bug bounty. Although I am no Julius Caesar (and I hope on you

As an ex-triager, what advice would you give to everyone?

Don’t write an essay; get to the point. In other words, address the Five Ws in your opening paragraph. Do not waffle on about the issue, your life, your pet cats... oh, and did I tell you about Mike’s pet frog?

From personal experience, triagers typically have to triage around 180 reports a week (this may be more now ... I am looking at you, still). Do you think triagers want to hear what Wikipedia has to say on XSS?

Without breaching the terms of the bug bounty program’s policy, focus more on the exploitability of the issue by illustrating it in your proof of concept rather than emphasising the type of vulnerability you are reporting. Let the impact do the talking, not the bug class. If you end up disagreeing with the final bounty amount, pointing back to your description of the exploitability allows for civil discourse. You do not end up arguing hypotheticals with the program.

What was the w

How do you store all of your bug bounty assets (domains, IP addresses, etc.) in a file and automatically check them whenever a 0-day vulnerability comes onto the market?

Here is a simple approach that might work for you. Perform reconnaissance as you would typically do and collect hosts and targets. Next, find an application running the target software or set up a local instance. Gather strings that would easily allow you to discern the piece of software from other applications (e.g. with GitLab this may be _gitlab_session). With that small list of keywords in hand, fingerprint all hosts by requesting the index page using a tool such as meg by @TomNomNom and then grepping for the strings. Make sure to store your findings in a structured fashion that allows you to query applications running that software in the future. I primarily use text files in folders for this purpose, but I know of others who prefer to store everything in a database.
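
A rough sketch of that workflow, assuming a hosts.txt produced by your reconnaissance and the _gitlab_session keyword from above; the file and folder names are purely illustrative:

$ meg / hosts.txt                  # fetch "/" for every host; responses land in ./out/<host>/<hash>
$ grep -rl "_gitlab_session" out/ | cut -d/ -f2 | sort -u >> fingerprints/gitlab.txt   # hosts whose response contains the GitLab keyword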

Many times I find myself removing ~98% of the LOC from JS assets because they are bundled dependencies. Analyzing them from Chrome's debugger works but doesn't seem to be the best approach to me. What's your approach when hunting on JS-heavy apps?

I start by praying that the target has Webpack source maps enabled. If that fails, get to know the type of JavaScript application you are targeting. This means I like to know what JavaScript framework a target is running and familiarise myself with the target’s coding practices. This is where my motto “Learn to build it, then break it” comes from. I set up a very bare-bones application locally using my JavaScript framework of choice and familiarise myself with the technology. While this won’t necessarily always result in more findings, it helps in getting past that first step of feeling lost when faced with a wall of code.
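
For the source map check, a quick sketch (the bundle URL here is hypothetical; the real path will differ per target). A sourceMappingURL comment in the bundle, or a 200 response for the .js.map file, suggests the maps are exposed:

$ curl -s https://target.example/static/js/main.js | grep -o 'sourceMappingURL=.*'
$ curl -s -o /dev/null -w '%{http_code}\n' https://target.example/static/js/main.js.map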

I also recommend searching for keywords that are more likely to be application-specific, such as “auth”.
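
A similar sketch for that keyword search, assuming the target's beautified bundles are saved under a local js/ directory; the keyword list is illustrative, not exhaustive:

$ for kw in auth token secret internal; do grep -rni --include='*.js' "$kw" js/ | head -n 5; done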

Another sneaky trick is to look

Do you collaborate a lot with others? And who do you collaborate with the most?

Back in 2017, during H1-702 in Las Vegas, NahamSec gave me some profound advice. It was something along the lines of “collaboration is key in the bug bounty industry”. Ben was right. Looking back, collaborating with others has been instrumental. Some of my most successful bug bounty hunting sessions have come from working with others.

For the past year, I have been running a collaboration program primarily with students from ETH Zürich. The goal of this program is to foster diversity and bring new brains to this industry. I help the members improve their bug bounty skills, and in return, I get to bounce ideas off them. It is a very symbiotic process that has resulted in some surprising findings. Most notably, last year, one of the members found a critical vulnerability on Google.

What’s the best report you’ve seen and what makes a really good report?

My favourite reports are those that demonstrate the hacker understands the application and has an interest in collaborating with the company. It is that extra bit of effort that is noticeable in a report and makes triaging fun. An excellent example of this is Frans Rosén’s reports. It is quite apparent that Frans has a comprehensive understanding of the target and that the team could consult him whenever necessary.