My name is Peter Adkins, I live in Canada and go by Darkarnium on Bugcrowd, Twitter and the usual suspects.
I’ve been interested in hacking for as long as I can remember, spending quite a lot of time reading zines and printed articles given to me by a friend when I was much younger. Although most of the articles I could grok at the time were heavily related to phreaking rather than hacking, I was immediately drawn to the idea of coercing systems into doing things they weren’t originally intended to do. I especially loved stories about Captain Crunch and co., boxes, toll fraud in old telephone systems and war dialing, and I recommend looking some up if you’ve not had the chance to read them before; they’re a great read :)
That said, until around 2011 infosec was always an interest but never something I actively pursued past reading about vulnerabilities and exploit mitigations while trying to keep systems patched.
I started getting more actively involved in bug hunting after accidentally finding a trivial command injection bug in a switch while deploying a network for a customer at my day job. After this I started spending quite a bit of my spare time disassembling and messing with embedded devices. I finally started working on bug bounties at the start of 2015 after learning about Bugcrowd from their “Bug Bounty List”.
This is a hard question! I work on bounties in my spare time as I work full-time outside of infosec, so finding the right balance is difficult. Although I try to work on an ‘evening on, evening off’ system, sometimes the scope of a given program is too interesting to want to take a break.
I try to spend 10-15 hours a week hunting for bugs - but not always on bounties. That said, quite a lot of this lately has been outside of web, focused instead on improving my dismal binary analysis and reverse engineering skills.
On average I only report one or two bugs a month. That said, some months it’s closer to 10 and others it’s zero, so it varies wildly.
I was lucky enough to find a P1 in the first public Bugcrowd program I participated in, just a few days after I signed up for Bugcrowd. The experience with the initial Bugcrowd triage time, feedback from the company (@Simple) and the speed of the patch really drew me in and made me start looking for more bugs. So I have to give a massive shout out to them both for that!
Providing concrete examples is difficult due to a large number of bounty programs unfortunately having ‘no disclosure’ policies. Of the bugs that are from companies that do allow disclosure, the first bug I found is probably still my favourite - and likely because it was my first reported bug and the vendor was fantastic to deal with.
The bug itself was related to bad user input sanitization, which led to authenticated users with access to perform low-privileged operations (such as ‘show clock’) being able to access the Linux subsystem in the Cisco Nexus platform. Totally trivial, but the first time I got the ‘rush’ of finding a new bug!
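To illustrate the general class of bug (a hypothetical sketch of restricted-CLI input validation - this is my own illustration, not the actual Cisco NX-OS code): a naive prefix check on the command lets shell metacharacters through, while an exact allow-list match does not.

```python
# Hypothetical sketch of the bug class - not the actual Cisco code.
ALLOWED = {"show clock", "show version"}


def is_allowed_naive(cmd):
    # Bad: only checks the prefix, so "show clock; sh" slips through
    # to whatever shell eventually executes the command.
    return cmd.startswith("show ")


def is_allowed_strict(cmd):
    # Better: exact match against an explicit allow-list.
    return cmd.strip() in ALLOWED
```

The naive version happily accepts `"show clock; sh"`, which is exactly the kind of crack that drops you from a restricted CLI into the underlying OS.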
I’d say that the ‘breakthrough’ really happened when I attended a local security meetup in my city and I started talking to a few amazingly talented security researchers. It was equal parts motivating and depressing realizing exactly how little I understood about so many areas of InfoSec, which really made me want to learn more.
The worst problem I’ve encountered is probably tied between burnout, and feeling overwhelmed reading write-ups from complicated bug chains and going “How on earth did they find this?”
I try to keep up to date with a number of mailing lists on SecLists (Full Disclosure, OSS-Sec, and Bugtraq), follow a number of great security researchers on Twitter, IRC and Slack, and try to participate in at least one CTF every couple of months.
Where possible, I feel like working with other researchers is beneficial. Whether CTFs or security research outside of bounty programs, it’s always interesting to see how others work and to bounce ideas off one another. Finding ‘new’ ways to use existing tools by seeing how others approach a problem is a great way to learn.
Unfortunately, with private bounty programs this seems quite hard to co-ordinate. I’d love it if the larger bounty providers offered some mechanism for researchers to indicate that they’d be open to working together on private programs - even if it’s just sharing notes!
As for working with others, I don’t want to drop any names here but y’all know who you are :)
If it’s a web program with a specific target, I’ll almost always start by firing up an SSL MITM tool and walking through as much of the application manually as I can - in order to map out endpoints, get an idea of the technologies in use, and look for areas that might have some rough edges.
If it’s an open scope (*.) then I’ll usually kick off a DNS brute-force and start looking for Google dorks while that’s running. Based on the output from the DNS brute-force, I’ll then look for ‘interesting’ targets like development hosts, monitoring systems or orchestration tools.
If that doesn’t yield any interesting targets, I’ll usually then look to see whether the organization has an ASN or any netblocks assigned and then perform the same process again on those (if in scope).
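The open-scope recon step above can be sketched in a few lines of Python. This is a minimal illustration rather than my actual tooling - the function names, the ‘interesting’ keyword list, and the injectable `resolve` callable are all assumptions for the sake of the example:

```python
import socket


def brute_subdomains(domain, words, resolve=None):
    """Try each candidate subdomain and keep the ones that resolve.

    `resolve` defaults to a real DNS lookup, but can be swapped out
    (e.g. to use a custom resolver, or for testing without network).
    """
    if resolve is None:
        def resolve(name):
            try:
                return socket.gethostbyname(name)
            except socket.gaierror:
                return None
    found = {}
    for word in words:
        name = f"{word}.{domain}"
        addr = resolve(name)
        if addr is not None:
            found[name] = addr
    return found


# Flag 'interesting' hosts - dev, monitoring, orchestration - by keyword.
INTERESTING = ("dev", "staging", "jenkins", "grafana", "nagios", "git")


def interesting_hosts(found):
    return sorted(h for h in found if any(k in h for k in INTERESTING))
```

In practice the wordlist would be thousands of entries and the results fed straight into the next round of scanning, but the shape of the loop is the same.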
If it’s a binary or embedded target, finding an update package and getting a root shell on the device is always the first port of call. From there I’ll usually look for any services that have non-localhost sockets bound, pull the binaries and drop them into IDA.
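Once on the device, one way to spot non-localhost listeners is to read `/proc/net/tcp` directly - handy when a stripped-down embedded target lacks netstat. A rough sketch of parsing that format (IPv4 only; the function is my own illustration, not from any particular tool):

```python
def external_listeners(proc_net_tcp):
    """Parse /proc/net/tcp contents and return (address, port) pairs
    that are listening on non-loopback addresses. State 0A == LISTEN."""
    results = []
    for line in proc_net_tcp.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 4 or fields[3] != "0A":
            continue
        addr_hex, port_hex = fields[1].split(":")  # hex "ADDR:PORT"
        # /proc/net/tcp stores IPv4 addresses little-endian
        octets = [int(addr_hex[i:i + 2], 16) for i in (6, 4, 2, 0)]
        addr = ".".join(str(o) for o in octets)
        if not addr.startswith("127."):
            results.append((addr, int(port_hex, 16)))
    return results
```

Anything bound to `0.0.0.0` (or a real interface address) on an embedded box is a candidate for pulling the owning binary into IDA.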
I tend to primarily focus on information disclosure bugs, and server side injection.
When looking for bugs in web targets I use OWASP ZAP for almost all of my HTTP(S) capture and replay, and traffic mangling. I tend to do a lot of manual testing, so the feature set inside of ZAP has been more than enough for me in the past, and its websocket support doesn’t suck!
Although ZAP can be a bit of a strange beast sometimes, I much prefer it to Burp in my workflow. Then again, I might just be bitter because PortSwigger never replied to my request for an evaluation licence ;)
For infrastructure and service ‘recon’ I tend to use gobuster, subbrute, nmap, curl and a healthy dose of Python to glue everything together. Nothing too exciting here. Oh, and Visual Studio Code for writing PoCs and notes (without trying to sound TOO much like an advertisement: it’s free, fast, and has less ‘suck’ than Atom).
In terms of automation, I’ve spent some time working on some Python tools to help with recon and taking notes but I haven’t opened any of that up. That said, I did release some code to help with distributed data collection using AWS’ SQS which you can find on GitHub if you are interested in that sort of thing - albeit without security specific ‘plugins’ included, sorry :)
When looking at binary targets, it’s the usual suspects: IDA, binwalk and pwndbg. That said, yrp’s ‘rappel’ is damned amazing, and Vagrant is incredibly handy for creation of ephemeral VMs to house everything.
As mentioned above, I tend to do a lot of manual testing so I’ll usually start by walking the application and keeping a running list of areas that are likely to yield bugs - such as file uploads forms, remote file fetch utilities, image conversion processes, debugging and troubleshooting tools, etc.
Once I’ve got a list of potential targets, I’ll usually start throwing everything and the kitchen sink at them and try to spot patterns in the way a given endpoint responds when met with ‘unexpected’ data. Any endpoints that respond with interesting-looking errors go straight to the top of the list of things to look at more deeply; the rest might get a look in later if I have some spare time.
When I have a long enough list to keep me occupied for a few hours, I’ll dedicate a bit of time to each identified target and try to work out how it may have been implemented - based on application flow, error messages and additional metadata (like HTTP headers, cookies, etc). Once I’ve got a ‘decent’ map of the target cobbled together, I’ll start looking for places to inject malformed language-, protocol- or template-specific data to see how the service handles it. This is the point at which cracks tend to start to appear and bugs start to fall out, but it also tends to be the most tedious and time-consuming part of the process.
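The ‘spot the odd response’ part of that process can be automated in a crude way: send a handful of malformed payloads and group the responses by status code and rough body length, then flag the payloads whose responses don’t look like anyone else’s. A hypothetical sketch - the payload list, bucket size and injectable `send` callable are all assumptions for illustration:

```python
# A few classic 'unexpected data' probes: quotes, template syntax,
# command substitution, path traversal, null byte.
PAYLOADS = ["'", '"', "{{7*7}}", "$(id)", "../../etc/passwd", "%00"]


def probe_endpoint(send, payloads=PAYLOADS, bucket=256):
    """Send each payload via `send(payload) -> (status, body)` and group
    responses by status code and rough body length. Buckets containing a
    single payload are the outliers worth a much closer look."""
    buckets = {}
    for p in payloads:
        status, body = send(p)
        key = (status, len(body) // bucket)
        buckets.setdefault(key, []).append(p)
    return [ps[0] for ps in buckets.values() if len(ps) == 1]
```

If five payloads get the same bland 200 and one gets a 500 with a stack trace, that one payload floats straight to the top of the list.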
I’ll also try and keep as many notes as possible during this process, as even relatively trivial pieces of information found in this manner - that aren’t considered vulnerabilities themselves - can be of great assistance later on when working on more in-depth bugs. There’s nothing worse than racking your brain trying to remember where you saw a particular bit of data a week later because you didn’t write it down.
Finally, and on the more ‘opportunistic’ side I’ve had a bit of success with directory brute forcing and infrastructure / service specific word-lists. Sometimes you’ll get lucky and find administrative panels - like Tomcat manager - installed on in-scope targets with ‘bad’ credentials; usually leading to easy bugs with just a few lines of code.
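The ‘few lines of code’ for that kind of opportunistic check really can be few. A minimal sketch, assuming a `fetch(url) -> status_code` callable (swap in a real HTTP client of your choice; the names here are illustrative, not my actual tooling):

```python
import urllib.error
import urllib.request


def http_status(url):
    """Default fetch: return the HTTP status for a GET of `url`."""
    try:
        return urllib.request.urlopen(url, timeout=5).status
    except urllib.error.HTTPError as e:
        return e.code


def dir_brute(base_url, wordlist, fetch=http_status):
    """Probe each path under base_url; anything that isn't a 404 is
    worth recording for a manual look later."""
    hits = {}
    for word in wordlist:
        url = f"{base_url.rstrip('/')}/{word}"
        status = fetch(url)
        if status != 404:
            hits[url] = status
    return hits
```

Run that with a service-specific wordlist (e.g. `manager/html` for Tomcat) and the odd forgotten admin panel falls out all on its own.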
I try to be optimistic about bugs in older targets. Just because the target list hasn’t changed in two years doesn’t mean the code hasn’t! Even with a strict internal peer-review process and automated testing, mistakes happen and bugs get introduced.
Keeping an eye on long-running bounty programs and signing up for ‘marketing’ newsletters can also be a great way of finding out when new features are rolled out, which are more likely to contain undiscovered bugs in my experience.
In any case, just because a target has been active for a long time doesn’t mean there aren’t any bugs to be had :)
I’ve found having a systems operations and deployment background has helped a great deal in thinking about how a given system may have been built and deployed. Finding bugs related to caveats or gotchas with SDKs, libraries and service configuration is a lot easier if you’ve already encountered them yourself.
I listen to quite a bit of melodic death metal, deathcore, thrash and prog. I’m really digging Shadow Of Intent, Revocation, Hollow World, Black Therapy and Warbringer lately.
That said, when I need something different I’ll usually throw on some Neurofunk or DnB.
Play guitar, write code, and more recently a WHOLE lot of Overwatch. I like to travel as much as I can, but not nearly as often as I’d like :)
A pretty damned positive one! Bug hunting has provided a number of opportunities to collaborate with, and learn from, some amazing people in the community. It’s also forced me to keep learning about things I wouldn’t have dreamed about digging into and ‘pulling apart’ previously. That said, I’d be lying if I said the financial side of things wasn’t also a great motivator.
Aside from the technical and financial side of things, it’s a great way to meet new people in the community to have a pint and a chat with!
Some variation of “if you’re unable to explain it simply, you may not understand it as well as you think”. Trying to keep this in the back of my mind helps greatly while hunting for and reporting bugs as it forces me to read things more carefully and stop to properly digest information before proceeding.
Another strong contender is “take a break”. If you feel that you’re not getting anywhere with a problem, go get some sleep / fresh air / caffeine and try again later :)
Binary exploitation and reverse engineering are riiiiight up there.
Strawberry jam, OR cheese.
I’ve not had any overly negative experiences, but I have had a few ‘long running’ bugs (> 6 months). Although it sucks when a company stops responding to a bug report, it’s easy enough to ‘vote with your packets’ and just work on other programs instead.
There are far too many to choose from, and given that most of them are scary good at computers I’ll spare myself the embarrassment! :)
Definitely some sort of ‘matchmaking’ or collaboration system for those interested in sharing notes or working together on private bounties. Even if it’s just a mechanism allowing researchers to opt in to sharing with others on private programs they’re interested in collaborating on.