If you’re into recon, you’ve probably heard of amass. It’s a powerful tool for mapping attack surfaces during bug bounty hunting or penetration testing. Here’s why I love it:
It’s close to an all-in-one recon tool.
It aggregates data from multiple resources (DNS, ASN, Whois, etc.).
Its capabilities can be extended with API keys.
It stores all data in a SQLite database, making information management and querying easier than relying on text files.
Instead of repeating what’s already in the official tutorial, I’ll take you through how I use Amass in my bug bounty recon workflow.
Global Configuration
Once you install amass, the first step is setting up its configuration files. For me, these live in:
~/.config/amass/
Then, inside that directory, I did the following:
1. Create the Required Files:
datasources.yaml: Stores API keys.
config.yaml: Default configuration file.
2. Set Up Datasources
Run the following to see which sources need configuring:
amass enum -list
Any source marked with a star requires an API key. Register for as many free resources as possible to maximize Amass’s capabilities. For example, my datasources.yaml file contains:
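Something along these lines (the exact YAML schema varies between Amass versions, and every key below is a placeholder, not a real credential):

```yaml
# placeholder keys -- substitute your own
datasources:
  - name: Shodan
    creds:
      account:
        apikey: your-shodan-key
  - name: SecurityTrails
    creds:
      account:
        apikey: your-securitytrails-key
```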
For each program I test, I also keep a project-specific config.yaml that defines the engagement's scope. This keeps Amass focused on the program's defined scope and prevents unnecessary noise. You can point amass to this project-specific configuration file with the -config flag.
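For illustration, a project-level config.yaml might look something like this (field names follow the v4 YAML config and may differ in your version; the domains are hypothetical):

```yaml
# hypothetical program scope
scope:
  domains:
    - example.com
  blacklist:
    - outofscope.example.com
options:
  datasources: ~/.config/amass/datasources.yaml
```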
Root Domain Discovery
If you’re starting with ASNs, IP ranges, or an organization’s name, the intel command helps find root domains to target:
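For example (the org name, ASN, and output directory below are placeholders):

```shell
# any of these starting points works; swap in your program's details
amass intel -org "Example Corp" -dir ./example-corp
amass intel -asn 13374 -dir ./example-corp
```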
Feed the root domains it finds into amass enum, which will map out subdomains for further analysis.
By default, amass enum runs passively, meaning it relies solely on third-party sources for its information. You can use the -active flag to tell it to interact directly with the target for (potentially) more results and increased accuracy:
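Here's a sketch of both modes (the domain and directory are placeholders; -p limits which ports the active checks touch):

```shell
# passive (default): only third-party data sources are queried
amass enum -d example.com -dir ./example-corp

# active: amass also talks to the target directly
amass enum -active -d example.com -p 80,443 -dir ./example-corp
```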
This will likely take longer to run than the passive option. I recommend starting with passive then circling back to active once you’ve exhausted your exploitation efforts.
It’s also worth noting that amass has capabilities for performing subdomain brute-forcing. One useful option being a hashcat-like masking option. I’ll leave that for you to explore in the official tutorial.
Parsing Gathered Domains
amass organizes all data in a SQLite database stored in the directory specified by the -dir flag. Using sqlite3, you can query and manage the data:
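In recent Amass releases the assets live in an assets table with a JSON content column, but the layout has changed between versions, so run .schema against your own database first. A sketch of dumping every discovered FQDN for use in other tools:

```shell
# pull every discovered FQDN out of the (assumed) assets table
sqlite3 ./example-corp/amass.sqlite \
  "SELECT DISTINCT json_extract(content, '$.name') FROM assets WHERE type = 'FQDN';" \
  > domains.txt
```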
As you can see, using SQLite as the database makes it easy to pull exactly what you need and plug it into other tools.
Why SQLite Beats Legacy Features
Older versions of Amass supported amass db for queries and amass viz for visualization. While those features were neat, I prefer direct database queries. They give you more control and are easy to script for repeated workflows.
For example, you could write a script to export all gathered domains and IPs into separate files for further analysis.
Also, the viz feature doesn’t add much value in my opinion. For me, visualizing massive amounts of data would be more overwhelming than useful.
I’d much rather pull only what stands out to me and throw it in a mind-mapping tool (like xmind). That way, I can work in a less cluttered environment.
Conclusion
amass is a game-changer for recon workflows. While setting up API keys may cost some time (and occasionally money), it’s an investment that will give you an edge in bug bounty programs.
I highly encourage you to experiment with this tool, tweak configurations, and build scripts to fit your needs.
Hello again! If you read my last post on AP and Client discovery with Airodump-ng, then get ready to take the skills you learned to the next level! We’re not just going to be observers anymore. We’re going to hack wireless networks by cracking their passwords!
This method involves using airodump-ng to capture the necessary traffic we need to use the infamous wireless cracking tool, aircrack-ng to crack passwords using a wordlist. And the best part is, this can be done completely offline!
Let’s get started!
Wireless Encryption
Before we get started with hacking WiFi, there are a few things we have to understand first.
Wireless Access Points (APs) support a variety of different encryption standards. Those in existence are WEP, WPA, WPA2, and WPA3.
WEP is the weakest and can be cracked quickly without a wordlist. WPA, WPA2, and WPA3 are successive improvements on one another. We're not going to go into the details, but just know that each newer standard offers stronger encryption and other security improvements.
The encryption standard we're going to focus on is WPA2. It has been around since 2004, and its successor, WPA3, has yet to see widespread adoption (at the time of this writing). If you did some snooping after reading my post about airodump-ng, you might have noticed that the most popular encryption type was WPA2. That's because it strikes the best balance between strong encryption and broad device support. Your home network is probably encrypted with WPA2, and many businesses use it too.
We’re also going to be focusing on WPA2/PSK, not the Enterprise variety. PSK stands for Pre-shared Key and if you’ve ever logged into a WiFi network with a plain-text password, you’ve used PSK encryption. Enterprise requires an additional server, called a RADIUS server, and users are authenticated with an account. For this reason, it is used for businesses because it offers centralized control.
WPA2/PSK is the most common encryption method for a wireless network at the time of this writing. So, you’ll likely have plenty of options to experiment with.
The 4-way Handshake
The 4-way handshake occurs every time a client associates itself with a wireless access point (a user logs-in to the WiFi). Basically, the purpose of this 4-way handshake is to generate the encryption keys needed for an authorized client to communicate with the AP.
We’re not going to go super in-depth on the protocol behind this but this diagram from a Medium article sums the process up nicely:
I recommend checking that article out for a pretty decent explanation. This youtube video is also pretty good at explaining the 4-way handshake.
All you need to know is that when we capture this 4-way handshake, aircrack-ng is going to use each guess and necessary parameters from the handshake to reconstruct the encryption keys (PTK and/or GTK). Then, it will hash the encryption keys to form the MICs (Message Integrity Check). If the MIC matches the original MIC from the captured handshake, the guess is correct.
If you didn’t get all that don’t worry! Seeing this in action might help to clear things up. Even if it doesn’t, it took me a while to understand this process in-depth.
Capturing the Handshake
Ok! Onto the fun stuff! The first step in this process is to get that 4-way handshake. To do that we need a couple of things:
a running instance of airodump-ng
a client to associate with the network (a WiFi network you can log in to)
First, let’s get our network card into monitor mode:
$ sudo airmon-ng start wlx9cefd5fee020
Now, our network card is ready to start capturing information. Fire up airodump-ng:
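Assuming airmon-ng renamed the interface to wlan0mon (it did for me, which is why you'll see that name in the later commands):

```shell
# start capturing on the monitor-mode interface
sudo airodump-ng wlan0mon
```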
You can stop here and wait for airodump-ng to capture a handshake from any network within range, or target a specific one. aircrack-ng can recognize multiple handshakes and let you choose from the list you gathered, but I'm going to target a specific network I already have access to:
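Something like this, locking onto the AP's BSSID and channel and writing the capture to a file (the channel here is a stand-in for whatever airodump-ng showed for your network):

```shell
# -c: AP channel, --bssid: target AP, -w: capture file prefix
sudo airodump-ng -c 6 --bssid D8:38:FC:FC:EB:A9 -w ~/Desktop/mywifi wlan0mon
```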
This process should be pretty familiar to you by now. One difference is that we captured a WPA handshake. airodump-ng handily tells us this in the top right.
Remember, to capture the handshake we need a client to associate with the AP. In my case, since I already knew the password, I just re-authenticated with my phone.
If you don’t know the password, you have a couple of options:
Wait until a client re-associates
Force a client to re-associate
The second option is illegal on networks you don't have permission to test, but I'll show you how it's done for educational purposes only. Note: your traffic will be logged by the AP and there is a chance you could get caught, so do it at your own risk.
To force re-authentication on clients, we can kick them off the network with aireplay-ng:
$ sudo aireplay-ng -0 100 -a D8:38:FC:FC:EB:A9 wlan0mon
This command will send 100 deauthentication frames (-0 100) to everyone on my network (-a <BSSID>). In other words, I would be DoS-ing my own network if I ran this.
To make this a little stealthier, we can target a single client with -c <client MAC> and spoof our MAC address with -h <spoofed MAC>:
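For example (both the client and spoofed MAC addresses below are made-up placeholders):

```shell
# -c: single client to deauth, -h: spoofed source MAC
sudo aireplay-ng -0 5 -a D8:38:FC:FC:EB:A9 \
  -c 3C:2E:FF:12:34:56 -h 00:11:22:33:44:55 wlan0mon
```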
Whichever route you go, you should now have captured a handshake and written the captured data to a file. I saved mine to ~/Desktop/mywifi. Let’s open Wireshark to see what the handshake looks like:
The protocol used for the handshake is EAPOL, so I filtered my results to display only the handshake. As expected, there are four of them.
Cracking the password
The moment we’ve all been waiting for…Now we get to crack the password!
At this point, we've gathered everything we need to start our attack. If you haven't done so already, you can go ahead and take your network card out of monitor mode and shut down airodump-ng:
$ sudo airmon-ng stop wlan0mon
To crack my network’s password, I’m going to use the infamous rockyou.txt wordlist. This wordlist is commonly used in CTFs and other hacking challenges involving password cracking because of its popularity. It even comes preinstalled in some hacking Linux distributions like Kali Linux. So, if your password shows up in rockyou, I suggest you change it…
To crack the password, all you need to give aircrack-ng is the wordlist and the capture file like so:
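In my case that looked like this (airodump-ng numbers its capture files, so mine ended up on disk as mywifi-01.cap; adjust the path to match yours):

```shell
# -w: wordlist, followed by the capture file containing the handshake
aircrack-ng -w rockyou.txt ~/Desktop/mywifi-01.cap
```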
As you can see, it automatically identified the handshake and, since there was only one in the file, got straight to cracking. It found the password, appletree, in a matter of seconds. Embarrassing…
We can also see that it was able to calculate the other parameters of the 4-way handshake including the PMK (Master Key), the PTK, and the MIC (EAPOL HMAC).
And that’s it! Easy peasy, lemon squeezy! Of course, your results will depend on the wordlist you use. If your password is not in the wordlist, you won’t be able to crack it. For proof of concept, you might want to make a small test wordlist with your WiFi password included (assuming you’re attacking a network you have access to).
Conclusion
You now know how to crack WiFi passwords running WPA2/PSK encryption! Give yourself a pat on the back hacker!
Of course, it goes without saying that if you crack the password of a network and then use that password to get unauthorized access to the network, that is where you cross the line into illegal territory. So please, don’t do that. 🙂
By all means, try this out 100% passively. See what information you can gather with airodump-ng just by leaving it running in your home for an hour or so. You might be surprised at what you’ve gathered. Then try cracking passwords for fun!
It’s surprising how many people will opt for insecure passwords. I mean, my own apartment complex’s network opted for one of the weakest passwords in the book… Come on! It’s 2023! No matter how secure a system is, people will always be the weakest link. I changed the BSSID and the ESSID of the network in the examples to avoid legal trouble but their WiFi might as well be public…
Anyways…Hope you enjoyed this article. I certainly enjoyed writing it! Stay tuned for more and practice responsibly!
We’re back from our slight detour to swing back into web app testing! Don’t worry though, I haven’t given up on wireless stuff. More content for that coming soon!
In this post, I’m going to walk through a demo that makes use of my favorite Burpsuite extension: Autorize. Autorize is a plugin that makes testing for access control vulnerabilities easy. You can just let it run in the background as you peruse the web app you’re testing and it will tell you what can be bypassed automagically!
So, let’s jump right into the demo.
Installing Autorize
Autorize doesn’t come with Burp by default. To use its magical powers, we’re going to have to install it ourselves. Luckily, Burpsuite makes this super easy for us by hosting it on the BApp Store. You can find the BApp Store under the Extensions tab. Then a simple search for “Autorize” will pull it right up for you:
Then just click the orange “Install” button to install it. My button says “Reinstall” because I already have it all set up.
If you have a greyed out button, you’ll need to install and configure Jython because Burpsuite is a Java tool and Autorize is written in Python. To do that, you can download the Jython standalone here. Then, just go to the “Extensions settings” under the “Extensions” tab and add the location of your Jython standalone JAR file like so:
And now you should be able to run Autorize!
Demo Time
For demonstration purposes, I’m going to be testing an instance of OWASP Juiceshop running locally. If you haven’t heard of it, Juiceshop is a web app built with modern web technologies designed to be intentionally vulnerable so that you can practice what you’ve learned. It also has a ton of guided walk-throughs so I highly recommend trying it out if you want to get into web penetration testing.
Ok, let’s get started. To run Autorize, we’re going to have to be running Burpsuite and have it proxy our web traffic (no surprise there). For Autorize to be able to auto-magically test for Authorization bugs, it needs whatever headers the web app uses for authentication.
I went ahead and created two accounts for testing: sambamsam1@gmail.com and sambamsam2@gmail.com. To start, I’ll log in with sambamsam1 and grab the header I need to copy over to Autorize:
OWASP Juice Shop uses Bearer Tokens to authenticate, so I copied the entire Authorization header. Then, over in the new Autorize tab, you can just paste it in:
And that’s all you need to set up Autorize. Pretty simple really. I like to add some filters to the list to make things easier for me. One of the filters I set up on every run is the “Scope items only” filter. This is essential if you’re testing on a Bug Bounty Program.
Additionally, I like to add a “URL contains” filter (you can select them from the drop down then add content if needed), if I’m targeting a specific domain or endpoint. If I’m testing an API that makes excessive use of OPTIONS requests, there’s an option to filter those out too.
Once you’ve got that all setup, logout and log back in as another user (in my case, sambamsam2) and you’re good to go! Click the “Autorize is off” button at the top and it will turn bright blue to indicate that everything is running.
Now, all you have to do is browse the application like a user normally would. In this example, I added an item to my shopping cart as my sambamsam2 user.
Popping back over to the Autorize tab, you’ll see a ton of requests pile up:
The two columns with all the colors tell us whether authentication was bypassed with the temporary header (our other user’s token) or without any authentication at all. A lot of these requests aren’t of any interest to us. However, request 4 stood out to me because it had a simple numeric ID associated with it and appeared to be accessible to our other user.
We can right-click the request and send it to the Repeater (or Ctrl+R) for a closer look.
In the Repeater tab, we can verify that the request is successfully being sent with the authorization token of our logged out user:
And it looks like any logged-in user can access other users’ shopping carts just by adding the correct ID to the URL. This vulnerability is known as an Insecure Direct Object Reference (IDOR) and can vary in severity depending on the predictability of the ID and the confidentiality of the data. In our case, the items in a user’s shopping cart aren’t as revealing as a user’s address or social security number, but a numeric ID in the single digits can be easily guessed, making this a bug of medium severity.
I used jwt_tool in my terminal window to verify that the JWT in the request belonged to my other user, sambamsam1. jwt_tool is great for conveniently dissecting JWTs, but it can also be used for exploiting the technology behind them. Maybe I’ll have to do a tutorial on that…
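For reference, decoding a token is jwt_tool's most basic usage; the token below is a truncated placeholder, not the one from the demo:

```shell
# prints the decoded header and payload claims of the token
python3 jwt_tool.py eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.<payload>.<signature>
```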
Conclusion
As you can hopefully see, Autorize makes testing for authorization vulnerabilities extremely easy. With Autorize, all you have to do is casually browse the application while it runs in the background.
I used to have to go back and copy headers to every request I thought was interesting and resend the request every time I wanted to verify an IDOR or some other similar vulnerability. Now I can let it run while I explore or do other testing. Just don’t forget to check back every once in a while or you’ll have tons of requests to dig through.
Autorize helped me find my first paid bug, which used a privilege escalation to perform IDORs. I found a JWT that seemed to belong to an admin user and used Autorize to see what it allowed me to have access to.
Hope you add this tool to your arsenal and it helps you as much as I do! Stay tuned for more hacking tutorials coming soon!
In today’s post, I’ll introduce you to a tool that should be a part of every bug hunter’s toolkit, wfuzz! I use wfuzz in every bug bounty program I participate in. If you don’t know what wfuzz is, it’s a web application fuzzer. And if you haven’t heard of a web application fuzzer, they’re a type of tool that automates web requests.
As you’ll see in this post, fuzzers are a simple yet extremely powerful tool and by the end of this reading you’ll be able to confidently use wfuzz, one of the best in the game!
Practical Examples
Instead of teaching you the syntax, I’m going to run you through a series of hypothetical, real-world examples so you can use it in your next pentest or bug hunt.
For simplicity, I’m going to use a popular repository of wordlists called SecLists. These wordlists were put together by a few well-known security professionals including Daniel Miessler and Jason Haddix. I highly recommend you check out the repo and get familiar with what’s in some of the lists.
Directory Discovery
Discovering hidden files or directories of a web application can be a gold mine for security testers. wfuzz makes this a snap by fuzzing the target url:
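For instance (the wordlist and target here are placeholders; any directory list from SecLists works):

```shell
# FUZZ marks where each wordlist entry gets substituted into the URL
wfuzz -c -w SecLists/Discovery/Web-Content/raft-medium-directories.txt \
  --sc 200 https://example.com/FUZZ
```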
Now all that is shown are responses with status code 200. We can also use --sl/--sw/--sh/--ss <number> to show only responses with a specific number of lines, words, or characters, or that match a specific regex, respectively.
Alternatively, you can swap the s prefix for h (--hc/--hl/--hw/--hh/--hs) to hide matches rather than show them.
This one-liner can lead to some big finds, including configuration files, admin logins, and much more depending on the wordlist.
Wordlist Contexts
A little bit of recon with a tool like wappalyzer can go a long way. Knowing what Content Management System (think WordPress or Drupal), API endpoint locations, etc. can result in more findings.
For example, if I knew my target was running on top of an Nginx server, I might use a wordlist catered towards default files and directories:
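Something along these lines (the SecLists path is from memory, so browse the repo for the server-specific lists; the target is a placeholder):

```shell
# server-specific wordlist: default Nginx files and directories
wfuzz -c -w SecLists/Discovery/Web-Content/nginx.txt \
  --sc 200,301,403 https://example.com/FUZZ
```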
Using a wordlist that fits the situation makes it more likely that you’ll find something interesting. Don’t just blindly use random wordlists; there’s no one wordlist to rule them all!
Subdomain Bruteforcing
Finding subdomains is key to expanding your attack surface. More often than not, the most interesting subdomains aren’t found passively.
wfuzz can fuzz anywhere the FUZZ keyword is located in a request, including in headers.
To bruteforce subdomains with wfuzz, fuzz in the Host header:
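A sketch (the wordlist and domain are placeholders):

```shell
# each candidate subdomain is substituted into the Host header
wfuzz -c -w SecLists/Discovery/DNS/subdomains-top1million-5000.txt \
  -H "Host: FUZZ.example.com" --hc 404 https://example.com/
```

In practice, many servers return a 200 catch-all for unknown Hosts, so you'll often hide the baseline response instead, e.g. --hh with the character count of the default page.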
While wfuzz is capable, sometimes using a tool that is specifically built for this (like gobuster or amass) might be easier and yield better results.
IDOR
wfuzz offers a variety of payloads that can be used to fuzz with, including a list of numbers.
If I stumbled upon an id parameter in JSON that used a guessable four-digit number, I might fuzz it like so to see if I can access any accounts I don’t own:
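For example (the endpoint is hypothetical; you'd also include whatever session header or cookie the app expects):

```shell
# range payload generates the numbers 1000-9999; -d supplies the POST body
wfuzz -c -z range,1000-9999 -X POST \
  -H "Content-Type: application/json" \
  -d '{"id": FUZZ}' --sc 200 https://example.com/api/account
```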
The -z parameter is used to specify a payload type and the payload options. You can see the full list of payloads with wfuzz -e payloads.
You might also notice that I changed the method from the default GET to POST with the -X option and added a body to my request with -d. This syntax should be familiar to you if you’ve ever used curl before. Another reason why I love wfuzz!
Injection
If you’ve ever tested for injection vulnerabilities (like SQLi or command injection) then it is likely that your attempts have been blocked by a WAF (Web Application Firewall). WAFs are all fine and dandy but they’re not perfect.
It can be useful to fuzz common payloads to see if any might slip through the cracks.
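Something like this (the wordlist path is approximate, since SecLists ships several XSS lists under Fuzzing/, and the target is a placeholder):

```shell
# hide 403s so only payloads the WAF let through are shown
wfuzz -c -w SecLists/Fuzzing/XSS/XSS-Jhaddix.txt \
  --hc 403 "https://example.com/search?q=FUZZ"
```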
In this example, I used a Cross-site Scripting (XSS) wordlist to see if any common payloads weren’t blocked. This likely will not lead to a XSS bug right away, but might clue me in on what keywords or encoding methods might allow me to build a working payload.
If you’re into finding XSS bugs, fuzzing with Portswigger’s XSS Cheat Sheet can help you see what HTML tags and events are permitted so you can get an idea of how to build your own payload.
Password Spraying
Database leaks for a specific target can be a great asset when testing login pages, but with thousands of accounts to choose from, it can be hard to find valid credentials. Luckily, wfuzz can do this much faster than we can.
Password spraying is the act of guessing the password of many different accounts (not just one like in brute-force attacks). To achieve this, you need to tell wfuzz to fuzz in two separate locations each with its own payload like so:
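A sketch of what that looks like (the login endpoint and body parameters are placeholders; swap in whatever the real form uses):

```shell
# FUZZ draws from users.txt, FUZ2Z from passwords.txt; -m zip pairs them 1-to-1
wfuzz -c -z file,users.txt -z file,passwords.txt -m zip -X POST \
  -d "username=FUZZ&password=FUZ2Z" --hc 401 https://example.com/login
```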
For fuzzing multiple locations with different payloads, you need to supply FUZ<payload #>Z. In this example FUZZ is associated with users.txt and FUZ2Z with passwords.txt.
I also supplied an iterator type of zip with the -m argument. The zip iterator matches the payloads to each other 1-to-1, so it is perfect for password spraying. If you’ve used Burp’s Intruder before, then the zip iterator is just like Intruder’s Pitchfork attack type.
We can list the other iterator types with wfuzz -e iterators, just like we did for payloads.
Wrapping it all up
wfuzz does one thing (and only one thing) well: spit out a bunch of requests really fast. It’s up to you to make it an effective hacking tool.
That means choosing the right wordlist for the job. There are tons of wordlists out there for different jobs. Don’t just limit yourself to SecLists. It’s a great collection and popular for a reason, but its popularity means that a lot of other hunters are going to be using it too, so you’ll likely stumble across the same bugs they do.
So, Google around and find wordlists that work for you. Or, better yet, make your own as you hunt!
Also don’t go fuzzing around unless the program allows it. Sometimes there’s a request delay requirement and other times companies won’t permit the use of automated tools at all. Read the policy carefully! Otherwise, you may end up overwhelming the company’s servers.
Now, you have most of what you need to unleash the power of wfuzz and hunt more efficiently! Happy hunting!
If there’s one tool every penetration tester should know, it’s nmap. In this post, I’m going to teach you how to use it practically, as one might in a real-world testing scenario.
Don’t let the title fool you. Although we’re going to cover the basics here, we’re going to take a deeper dive into how to use this tool effectively. So, I’ll assume you have at least a basic understanding of the Internet and the protocol stack.
Overview
nmap is an open-source command line tool used to discover hosts connected to a network and expose what services might be listening on those hosts. It is extremely popular because it can map an entire network automatically, while also being flexible.
I’m going to expose you to the more technical features of the tool while teaching you the basics so that you can use it flexibly and confidently. So, let’s jump into it!
Host Discovery
The first step in testing a network is figuring out what hosts (computers connected to a network) are up and running. After all, computers need to be powered on and connected to the same network we’re on for us to be able to attack them. We also don’t want to blindly spray exploits at all addresses; that’s just noisy and a waste of time.
To get started with host discovery in nmap, it’s as simple as running it and giving it a range of IPs (or a single hostname/IP):
$ nmap 192.168.186.0/24
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 13:51 PDT
Nmap scan report for 192.168.186.2
Host is up (0.00070s latency).
Not shown: 999 closed tcp ports (conn-refused)
PORT   STATE SERVICE
53/tcp open  domain
Nmap scan report for 192.168.186.71
Host is up (0.0021s latency).
Not shown: 997 closed tcp ports (conn-refused)
PORT    STATE SERVICE
22/tcp  open  ssh
53/tcp  open  domain
389/tcp open  ldap
Nmap scan report for 192.168.186.93
Host is up (0.0019s latency).
Not shown: 999 closed tcp ports (conn-refused)
PORT   STATE SERVICE
22/tcp open  ssh
Nmap scan report for ms2 (192.168.186.131)
Host is up (0.0023s latency).
Not shown: 977 closed tcp ports (conn-refused)
PORT     STATE SERVICE
21/tcp   open  ftp
22/tcp   open  ssh
23/tcp   open  telnet
25/tcp   open  smtp
53/tcp   open  domain
80/tcp   open  http
111/tcp  open  rpcbind
139/tcp  open  netbios-ssn
445/tcp  open  microsoft-ds
512/tcp  open  exec
513/tcp  open  login
514/tcp  open  shell
1099/tcp open  rmiregistry
1524/tcp open  ingreslock
2049/tcp open  nfs
2121/tcp open  ccproxy-ftp
3306/tcp open  mysql
5432/tcp open  postgresql
5900/tcp open  vnc
6000/tcp open  X11
6667/tcp open  irc
8009/tcp open  ajp13
8180/tcp open  unknown
Nmap done: 256 IP addresses (4 hosts up) scanned in 14.55 seconds
I used the range 192.168.186.0/24 which is the equivalent of 192.168.186.0-255, or the subnet of my virtual network. As you can see, it produced quite a bit of output. I just wanted to see the hosts online and nmap went ahead and did a TCP-connect port scan on the top 1000 common ports as well.
This is nice and all but it clutters up my terminal with more than I needed. We can use the -sn option to tell nmap not to do a port scan:
$ nmap -sn 192.168.186.0/24
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 13:57 PDT
Nmap scan report for 192.168.186.2
Host is up (0.0032s latency).
Nmap scan report for 192.168.186.71
Host is up (0.0019s latency).
Nmap scan report for 192.168.186.93
Host is up (0.0013s latency).
Nmap scan report for ms2 (192.168.186.131)
Host is up (0.0042s latency).
Nmap done: 256 IP addresses (4 hosts up) scanned in 2.32 seconds
Much better. If you notice at the bottom of the scan, this was performed much faster (2.32 seconds vs 14.55 seconds in the first scan). This might seem marginal at first but imagine scanning a much larger network (like a business network).
By default, nmap uses a variety of methods to determine if a host is online. Let’s check this out by scanning a single host:
$ sudo nmap -sn -vv ms2
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 14:01 PDT
Initiating ARP Ping Scan at 14:01
Scanning ms2 (192.168.186.131) [1 port]
Completed ARP Ping Scan at 14:01, 0.04s elapsed (1 total hosts)
Nmap scan report for ms2 (192.168.186.131)
Host is up, received arp-response (0.0013s latency).
MAC Address: 00:0C:29:7B:A1:58 (VMware)
Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
           Raw packets sent: 1 (28B) | Rcvd: 1 (28B)
I used the -vv option to make nmap spit out very verbose output so it can tell us what it’s doing step by step. ms2 is a hostname I configured my system to recognize as 192.168.186.131 (my metasploitable instance). As I mentioned earlier, you can supply nmap with a human-readable hostname and it will automatically go out and resolve the IP address for you.
Reading the output, we can see that nmap kicked off an ARP scan and, once the ARP scan finished with a response, deemed the host up without needing further probes. That’s because nmap will use a variety of methods to determine whether a host is online, including:
ARP scan (local network only)
ICMP echo request
sending a TCP SYN packet to port 443
sending a TCP ACK packet to port 80
sending an ICMP timestamp request
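For context, nmap's documented default host-discovery probes for a privileged user are equivalent to requesting them explicitly (the subnet is the one from the earlier examples):

```shell
# ICMP echo, TCP SYN to 443, TCP ACK to 80, ICMP timestamp request
sudo nmap -sn -PE -PS443 -PA80 -PP 192.168.186.0/24
```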
If these methods fail, the host is considered offline. Those of you with keen eyes will notice that I ran nmap with sudo. That’s because nmap relies on packet crafting to do some of its scans, which requires root-level privileges. Let’s see what happens when we run it without sudo…
$ nmap -vv -sn ms2
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 14:00 PDT
Initiating Ping Scan at 14:00
Scanning ms2 (192.168.186.131) [2 ports]
Completed Ping Scan at 14:00, 0.00s elapsed (1 total hosts)
Nmap scan report for ms2 (192.168.186.131)
Host is up, received syn-ack (0.0020s latency).
Nmap done: 1 IP address (1 host up) scanned in 0.00 seconds
This time, nmap performed a ping scan instead of an ARP scan and performed a TCP scan as well. The TCP scan received a SYN-ACK, which is how nmap was able to tell the system was online. It’s important to take this information into account, as it could lead to some false negatives while testing (what if the host blocked ICMP traffic and the scanned ports?).
So, I would encourage you to use sudo to get the most out of nmap. You’ll definitely need it when I talk about performing different types of host discovery in the next few sections.
ARP Scanning
The first method of host discovery I’m going to talk about is ARP scanning. ARP or Address Resolution Protocol resolves IP addresses into MAC addresses. This protocol sits low on the stack, which means it has a low overhead (less data per packet) giving two key advantages when using it for scanning:
Faster
Stealthier (may slip past firewalls/IDS’s)
The only caveat is that it only works on local networks (can’t scan over the internet). So, know what context you’re scanning in!
We can perform an ARP scan with nmap by telling it not to do ping scans with the -Pn option:
$ sudo nmap -sn -PRn 192.168.186.0/24
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 14:59 PDT
Nmap scan report for 192.168.186.1
Host is up (0.0013s latency).
MAC Address: 00:50:56:C0:00:08 (VMware)
Nmap scan report for 192.168.186.2
Host is up (0.00045s latency).
MAC Address: 00:50:56:FC:7E:8B (VMware)
Nmap scan report for 192.168.186.71
Host is up (0.0047s latency).
MAC Address: 00:0C:29:91:6A:82 (VMware)
Nmap scan report for 192.168.186.73
Host is up (0.00098s latency).
MAC Address: 00:0C:29:39:19:E4 (VMware)
Nmap scan report for ms2 (192.168.186.131)
Host is up (0.0016s latency).
MAC Address: 00:0C:29:7B:A1:58 (VMware)
Nmap scan report for 192.168.186.254
Host is up (0.00078s latency).
MAC Address: 00:50:56:EA:7A:B5 (VMware)
Nmap scan report for 192.168.186.93
Host is up.
Nmap done: 256 IP addresses (7 hosts up) scanned in 2.02 seconds
You’ll notice that I also used the R option with -P. This is because I wanted to eliminate false positives by having nmap perform Reverse DNS resolution on the hosts it finds, which it can only do with live hosts.
Doing it without results in a mess:
$ sudo nmap -sn -Pn 192.168.186.80-85
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-17 15:03 PDT
Nmap scan report for 192.168.186.80
Host is up.
Nmap scan report for 192.168.186.81
Host is up.
Nmap scan report for 192.168.186.82
Host is up.
Nmap scan report for 192.168.186.83
Host is up.
Nmap scan report for 192.168.186.84
Host is up.
Nmap scan report for 192.168.186.85
Host is up.
Nmap done: 6 IP addresses (6 hosts up) scanned in 0.01 seconds
Here, I purposefully chose a range I knew didn’t have any connected hosts, yet nmap still said they were up…
Using the R option felt like a cheat, so you might be better off using a dedicated tool like arp-scan for ARP scanning.
Ping Scanning
Higher up on the protocol stack is ping probing using the ICMP protocol. When most people think of pinging, they think of using the ping command to send ICMP echo requests.
nmap can use ICMP echo requests, as well as other methods of the protocol using the -P switch:
-PE: echo request
-PP: timestamp query
-PM: address mask query
If a host replies, it’s online. I scanned my network using the --disable-arp-ping flag to tell nmap only to do ping scanning:
$ sudo nmap -sn -PE --disable-arp-ping 192.168.186.0/24
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-20 14:43 PDT
Nmap scan report for 192.168.186.2
Host is up (0.0023s latency).
MAC Address: 00:50:56:FC:7E:8B (VMware)
Nmap scan report for 192.168.186.71
Host is up (0.0011s latency).
MAC Address: 00:0C:29:91:6A:82 (VMware)
Nmap scan report for 192.168.186.72
Host is up (0.0031s latency).
MAC Address: 00:0C:29:19:6C:BF (VMware)
Nmap scan report for 192.168.186.73
Host is up (0.0012s latency).
MAC Address: 00:0C:29:39:19:E4 (VMware)
Nmap scan report for 192.168.186.74
Host is up (0.0019s latency).
MAC Address: 00:0C:29:01:40:16 (VMware)
Nmap scan report for 192.168.186.93
Host is up.
Nmap done: 256 IP addresses (6 hosts up) scanned in 4.86 seconds
In this example I chose ICMP echo requests to scan the network. As expected, this scan took a little longer than the ARP scan in the last example. Remember to consider network context when performing scans: since I’m on a local network, an ARP scan gets results faster and more reliably.
That being said, it’s important to try different protocols as systems are configured to respond differently:
$ sudo nmap -sn -PP --disable-arp-ping 192.168.186.0/24
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-20 14:42 PDT
Nmap scan report for 192.168.186.71
Host is up (0.0038s latency).
MAC Address: 00:0C:29:91:6A:82 (VMware)
Nmap scan report for 192.168.186.72
Host is up (0.0015s latency).
MAC Address: 00:0C:29:19:6C:BF (VMware)
Nmap scan report for 192.168.186.73
Host is up (0.0012s latency).
MAC Address: 00:0C:29:39:19:E4 (VMware)
Nmap scan report for 192.168.186.74
Host is up (0.0016s latency).
MAC Address: 00:0C:29:01:40:16 (VMware)
Nmap scan report for 192.168.186.93
Host is up.
Nmap done: 256 IP addresses (5 hosts up) scanned in 9.63 seconds
In this example I told nmap to do a timestamp query scan and it took considerably longer. My VMware gateway was also configured not to respond to timestamp queries, so it did not show up in the results.
Always mix and match protocol types. Some systems might be online but are configured not to respond to echo requests. So, you might find something that was missed in a previous scan by running nmap with different options.
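That advice is easy to act on by diffing the host lists from two probe types; comm does the job if both lists are sorted. The two lists here are the hosts found by the -PE and -PP runs above (in a real workflow you’d extract them from saved scan output):

```shell
# Hosts found by the echo-request (-PE) run vs. the timestamp (-PP) run above.
printf '%s\n' 192.168.186.2 192.168.186.71 192.168.186.72 \
    192.168.186.73 192.168.186.74 192.168.186.93 | sort > pe-hosts.txt
printf '%s\n' 192.168.186.71 192.168.186.72 192.168.186.73 \
    192.168.186.74 192.168.186.93 | sort > pp-hosts.txt

# Hosts that answered echo requests but not timestamp queries
comm -23 pe-hosts.txt pp-hosts.txt
```

Sure enough, 192.168.186.2 (the gateway that ignores timestamp queries) is the one host only the echo-request scan caught.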
TCP/UDP/SCTP Scanning
We’re taking another step up the protocol stack with TCP/UDP/SCTP scanning. In this type of scan, we use these protocols to probe a single port. While this scan has the highest overhead (and will be slower), it has the advantage of being able to reach hosts across the internet.
As with the previous scans, we’re going to again use the -P option:
-PY: SCTP INIT
-PS: TCP SYN
-PA: TCP ACK
-PU: UDP
Let’s try a SYN ping scan:
$ sudo nmap -sn --disable-arp-ping -PS 192.168.186.0/24
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-20 15:04 PDT
Nmap scan report for 192.168.186.2
Host is up (0.0020s latency).
MAC Address: 00:50:56:FC:7E:8B (VMware)
Nmap scan report for 192.168.186.71
Host is up (0.0012s latency).
MAC Address: 00:0C:29:91:6A:82 (VMware)
Nmap scan report for 192.168.186.72
Host is up (0.0060s latency).
MAC Address: 00:0C:29:19:6C:BF (VMware)
Nmap scan report for 192.168.186.74
Host is up (0.0018s latency).
MAC Address: 00:0C:29:01:40:16 (VMware)
Nmap scan report for 192.168.186.93
Host is up.
Nmap done: 256 IP addresses (5 hosts up) scanned in 4.88 seconds
nmap chooses ports to probe by default, but you can specify them by appending a port number or range directly to the scan type (for example, -PS80,443):
$ sudo nmap -sn --disable-arp-ping -PU53 192.168.186.0/24
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-20 15:06 PDT
Nmap scan report for 192.168.186.72
Host is up (0.0018s latency).
MAC Address: 00:0C:29:19:6C:BF (VMware)
Nmap scan report for 192.168.186.74
Host is up (0.0017s latency).
MAC Address: 00:0C:29:01:40:16 (VMware)
Nmap scan report for 192.168.186.93
Host is up.
Nmap done: 256 IP addresses (3 hosts up) scanned in 9.61 seconds
I chose port 53 (DNS) to perform my UDP scan because it’s super common. As you can see, I got fewer results with this and it took a lot longer. Again, it’s important to try different ports and protocols with these scans. For example, if I knew an organization had websites up and running, I might try ports 80 and 443 to see what servers they have online. If I’m scanning a business network with a lot of users, they may be using AD to manage their systems, so I might try port 389 or 636 for LDAP.
Context matters!
Port Scanning
Now that we know what hosts are up and running, we can check what services are running on these hosts. Open ports present an opportunity for access, so it’s important to know what can be potentially accessed or exploited.
We saw that nmap scans ports by default using full TCP connections, or just by using TCP SYN packets when running as root.
To specify a scan type, we can use the -s argument. nmap supports quite a few scan types; we’ll only go over a handful, but here are some common ones:
-sS: SYN Scan (default)
-sT: TCP Scan
-sU: UDP Scan
-sY: SCTP Scan
This sort of syntax should be familiar to you as it is very similar to the various ping scan options with -P. I used a SYN scan to scan my metasploitable machine:
$ sudo nmap -sS ms2
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-22 21:16 PDT
Nmap scan report for ms2 (192.168.186.131)
Host is up (0.0029s latency).
Not shown: 977 closed tcp ports (reset)
PORT     STATE SERVICE
21/tcp   open  ftp
22/tcp   open  ssh
23/tcp   open  telnet
25/tcp   open  smtp
53/tcp   open  domain
80/tcp   open  http
111/tcp  open  rpcbind
139/tcp  open  netbios-ssn
445/tcp  open  microsoft-ds
512/tcp  open  exec
513/tcp  open  login
514/tcp  open  shell
1099/tcp open  rmiregistry
1524/tcp open  ingreslock
2049/tcp open  nfs
2121/tcp open  ccproxy-ftp
3306/tcp open  mysql
5432/tcp open  postgresql
5900/tcp open  vnc
6000/tcp open  X11
6667/tcp open  irc
8009/tcp open  ajp13
8180/tcp open  unknown
MAC Address: 00:0C:29:7B:A1:58 (VMware)
Nmap done: 1 IP address (1 host up) scanned in 0.38 seconds
nmap was able to find quite a few open ports. I like the SYN scan because it offers the reliability of a TCP scan without being as slow (this scan only took 0.38 seconds), since it never establishes a full connection.
That being said, it’s important to try other scan types:
$ sudo nmap -sU -p 53,1000-1010 ms2
Starting Nmap 7.93 ( https://nmap.org ) at 2023-03-22 21:20 PDT
Nmap scan report for ms2 (192.168.186.131)
Host is up (0.0018s latency).
PORT     STATE         SERVICE
53/udp   open          domain
1000/udp closed        ock
1001/udp open|filtered unknown
1002/udp closed        unknown
1003/udp closed        unknown
1004/udp closed        unknown
1005/udp open|filtered unknown
1006/udp closed        unknown
1007/udp closed        unknown
1008/udp open|filtered ufsd
1009/udp closed        unknown
1010/udp closed        surf
MAC Address: 00:0C:29:7B:A1:58 (VMware)
Nmap done: 1 IP address (1 host up) scanned in 4.87 seconds
This time, I told nmap to do a UDP scan. You’ll also notice that I specified what ports to scan using the -p option. You can tell nmap to scan a range, comma-separated values, ports specified by name, or every port using -p-.
Even though I scanned only a dozen ports, it took a little longer than the SYN scan. Besides finding ports that were open and closed, nmap reported some ports as open|filtered. This just means that nmap couldn’t tell whether the port was truly open or whether the probe was being dropped by some sort of firewall.
Like I said when doing host discovery, it’s important to experiment with different scan methods. There are more than just the few types I listed, so feel free to check those out at nmap.org.
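One habit that helps when reviewing results is tallying the port states, so open|filtered entries don’t get lost in the noise. A sketch against a saved, trimmed copy of the UDP scan output above:

```shell
# A trimmed copy of the UDP scan's port table (port / state / service)
cat > udp-scan.txt <<'EOF'
53/udp   open          domain
1000/udp closed        ock
1001/udp open|filtered unknown
1008/udp open|filtered ufsd
1010/udp closed        surf
EOF

# Count how many ports landed in each state
awk '{count[$2]++} END {for (s in count) print s, count[s]}' udp-scan.txt | sort
```

For this sample that prints two closed, one open, and two open|filtered ports, which are the ones worth a second look with a different scan type.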
Output
As a penetration tester, it’s important to keep track of your scan results. Not only for you to go back over (instead of repeating scans), but for reporting purposes as well.
nmap makes this a snap with the -o option. There are a few variants of -o:
-oN: normal output
-oX: XML output
-oG: grepable output
-oA: output in all formats
I usually stick with outputting my scan results in every format because they’re all useful in their own ways. I like normal output for reading over myself. Grepable output is great because it puts each host on a single line which makes it easier to grep by host (hence the name). Finally, XML output is extremely useful for web-based visualization tools or storing the results in a database. For example, I love feeding my XML scan results to metasploit, which makes it really easy to organize and search through my results.
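To see why grepable output earns its name, here’s a sketch that pulls each host’s open ports out of a .gnmap file. The sample line mirrors nmap’s grepable format (I’ve shown the field separator as spaces rather than a tab):

```shell
# A sample .gnmap file: one host per line, with a comma-separated Ports field
cat > scan.gnmap <<'EOF'
Host: 192.168.186.131 (ms2)  Status: Up
Host: 192.168.186.131 (ms2)  Ports: 21/open/tcp//ftp///, 22/open/tcp//ssh///, 80/open/tcp//http///
EOF

# One line per host: the IP followed by its open ports
grep -F 'Ports:' scan.gnmap | awk -F'Ports: ' '{
    split($1, h, " ")            # h[2] = host IP
    printf "%s:", h[2]
    n = split($2, p, ", ")       # p[i] = one port entry
    for (i = 1; i <= n; i++) {
        split(p[i], f, "/")      # f[1]=port, f[2]=state, f[3]=protocol
        if (f[2] == "open") printf " %s/%s", f[1], f[3]
    }
    print ""
}'
```

Because the whole port list lives on one line per host, a single grep or awk pass gets you exactly the slice you need.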
Conclusion
If you made it this far, you can now add an invaluable tool to your hacking arsenal. This was a lengthy post, so give yourself a huge pat on the back for reading it all. Your learning isn’t over yet, though. To really make this stick, download nmap and play around on your own network. It’s fun and a great way to make sure you retain what you learned; otherwise (in my own experience) the knowledge won’t stick!
Now, the fun begins. If you followed my last post about the basics of DNS, you should be armed and ready to tackle the subject of this post: DNS enumeration.
DNS enumeration is the process of obtaining as much information about a target as possible by pulling from publicly available DNS records. This can expand the attack surface of a target by revealing their internet facing servers, email addresses, and more depending on how extensively you enumerate.
The keyword in this definition is “publicly“. That means that all of the methods I’m about to show you in the next section you can try on your own (and I encourage you to do so).
However, it’s important to note that some automated scripts brute-force subdomains by default, which may overload older servers. That’s unlikely, but make sure you know what a tool is going to do before you fire it off.
With that being said, let’s get into enumerating DNS!
Manual Techniques
There are several scripts out there that allow you to automatically scrape tons of DNS information in a point and click fashion. However, it’s important to know how to manually enumerate DNS in order to modularize enumeration and adapt it to more specific situations.
Don’t let the term “manual” scare you though. I’m going to go over a few tools that will make this process a snap.
Whois
Using the Whois service is a great starting point in your quest for information. With just the domain name of a target’s website, you can pull its registrar information, the geographic location of its servers, the name servers associated with it, and contact information for system admins and technical support.
The whois command is built into most UNIX-based operating systems. So, if you’re running one of its variants, like a Linux distro or macOS, you can run:
whois <domain name>
You’ll quickly be overwhelmed with a bunch of information about that domain name. For this reason, I recommend using an online tool like https://www.whois.com/whois. They organize data a bit more graphically than the plain terminal-based whois command does.
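If you’d rather stay in the terminal, a little grep tames the flood. The whois.txt below is made-up sample output (real records vary by registry); the idea is just to pull out the fields you usually care about:

```shell
# Sample whois output; in practice: whois example.com > whois.txt
cat > whois.txt <<'EOF'
Domain Name: EXAMPLE.COM
Registrar: Example Registrar, Inc.
Name Server: A.IANA-SERVERS.NET
Name Server: B.IANA-SERVERS.NET
Creation Date: 1995-08-14T04:00:00Z
EOF

# Keep just the registrar, nameservers, and creation date
grep -Ei '^(Registrar|Name Server|Creation Date):' whois.txt
```

Four lines instead of pages, and the nameserver lines feed directly into the dig and zone-transfer techniques below.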
DiG
If there’s one manual tool you need to know, it’s the dig command. Like whois, it comes prepackaged with many UNIX variants. Unlike whois, it’s much more powerful.
To use dig, follow this format:
dig @<nameserver> <domain name> <record type>
The name server and record type positional arguments are optional. You can use dig in the same way as you would whois:
By default, dig pulls A records and uses whatever nameserver it finds in my /etc/resolv.conf file. In my case, it decided to use Google’s nameserver at 8.8.8.8.
Of course, dig can be used for so much more. Let’s try pulling the nameservers of github.com:
In this example, I told dig to grab me the NS records associated with github.com and keep the output short and sweet with the +short query modifier. I like to use +short with dig because the output can be pretty cluttered.
Another useful query type is the ANY query, which pulls all available records associated with a domain:
As you can see, dig was able to pull a variety of records, including SOA and MX. I also used the +noall combined with +answer query modifiers to single out the responses in the output. This is useful if you just want to see the results of a query (no annoying header messages) but want a little more info than what’s provided with +short.
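Those clean +noall +answer lines are also easy to post-process. As a sketch, here’s one way to order mail servers by MX priority; the answer lines below are sample data in dig’s answer format, not live results:

```shell
# Sample dig answer lines; in practice: dig +noall +answer MX github.com > answers.txt
# Fields: name, TTL, class, type, priority, mail host
cat > answers.txt <<'EOF'
github.com.  3600  IN  MX  1 aspmx.l.google.com.
github.com.  3600  IN  MX  10 alt3.aspmx.l.google.com.
github.com.  3600  IN  MX  5 alt1.aspmx.l.google.com.
EOF

# Mail servers sorted by MX priority (lowest number = highest preference)
sort -k5 -n answers.txt | awk '{print $6}'
```

The lowest-priority host comes out first, which is the server mail is delivered to under normal conditions, and usually the most interesting one to probe further.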
Additionally, dig can initiate a zone transfer. A zone transfer is the process DNS servers use to transfer copies of the records of a particular zone to another server. This is primarily used for redundancy: if one server goes down, another server with copies of all the records can resume operation in its place.
Misconfigured servers allow zone transfers to occur between anyone that requests it (including you). This can potentially leak private records.
I used the AXFR query type to initiate a zone transfer with one of github.com’s nameservers. It didn’t let me. Good job, GitHub.
This is usually the case, as open zone transfers are a well-known vulnerability. However, never rule them out. Companies usually run multiple nameservers for redundancy, and in some cases the backups are overlooked when it comes to hardening. So, try zone transfers against all of a target’s nameservers. You might get lucky.
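Trying every nameserver is easy to script: ask dig for the NS records, then attempt an AXFR against each one. Here’s a minimal sketch (axfr_all is just my own name for the helper; nothing special about it):

```shell
# Try a zone transfer against every nameserver a domain advertises
axfr_all() {
    domain=$1
    for ns in $(dig +short NS "$domain"); do
        echo "== AXFR against $ns =="
        dig axfr "@$ns" "$domain"
    done
}

# Usage: axfr_all github.com
```

Most servers will refuse with "Transfer failed.", but it only takes one forgotten backup nameserver for this loop to dump an entire zone.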
Host
Another tool worth mentioning, very similar to dig and bundled with most UNIX-based OSes, is host. Keep in mind it isn’t available on Windows by default.
Just typing host and supplying it a domain name can reveal some useful information:
As you can see, an enormous amount of information came back. That’s because -a tells host to pull all of the records it can from its target.
And it seems that the default nameserver (123.456.78.9) didn’t allow us to do this.
Nslookup
This tool functions fairly similarly to host and dig but has an optional interactive mode. This is useful when you want to frequently change options such as query types and nameservers without retyping the whole command over and over.
To access interactive mode, I ran nslookup without any arguments. From there I set the query type to NS to look up microsoft.com‘s nameservers, then used one of their nameservers to look up their mail servers.
As you can see, most of the commands I typed were only one or two words long. For this reason, I like nslookup for poking around if I’m not necessarily sure what I’m looking for and need to frequently update my commands.
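If you’d rather not retype the session each time, nslookup will happily read the same commands from a file on stdin. Here’s a sketch of the session described above as a replayable script (ns1.example-dns.net is a placeholder; substitute one of the nameservers the first query actually returns):

```shell
# The interactive session as a script: look up NS records, point nslookup
# at one of the returned nameservers, then ask it for MX records.
cat > nslookup-session.txt <<'EOF'
set type=NS
microsoft.com
server ns1.example-dns.net
set type=MX
microsoft.com
exit
EOF

# Replay it with: nslookup < nslookup-session.txt
```

This is a nice middle ground: you keep nslookup’s terse interactive commands but get a repeatable artifact you can rerun and diff later.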
It’s important to note that these three tools (dig, host, nslookup) function very similarly. So, I encourage you to try all of them, use them interchangeably, and read the man pages for each one. They each have their strengths and weaknesses. The one you choose depends on your situation and preferences.
Subdomain Enumeration
Subdomain enumeration is the process of finding as many subdomains associated with a domain as possible. This is an important part of DNS enumeration, as it expands our attack surface even further. You can also apply the techniques described previously to the subdomains you find to make your enumeration even more effective.
Most modern search engines can be a useful tool for subdomain enumeration. If you read one of my earlier posts about Google Dorking, you may already be familiar with the technique I’m about to describe.
Google makes subdomain enumeration a snap with the inurl parameter:
iRobot’s subdomains provided by Google
Using inurl and the domain name, irobot.com, I was able to find numerous subdomains. I also used -www to tell Google to exclude search results containing www.
Another powerful tool for subdomain enumeration is crt.sh, the Certificate Search tool:
crt.sh results from the query %.irobot.com
The query I used to pull all of these results was %.irobot.com, where the % symbol is a wildcard. This means any name ending in “.irobot.com” will appear in the search results.
Combining these techniques is a powerful way of discovering subdomains that give you an edge by providing you with more information.
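Whatever mix of sources you use, the lists will overlap and differ in letter case, so merging them is worth doing before you move on. It’s a one-liner; the file contents here are made-up examples standing in for Google and crt.sh results:

```shell
# Made-up subdomain lists from two sources
cat > google.txt <<'EOF'
store.irobot.com
About.irobot.com
EOF
cat > crtsh.txt <<'EOF'
store.irobot.com
homesupport.irobot.com
EOF

# Lowercase everything, merge, and drop duplicates
cat google.txt crtsh.txt | tr '[:upper:]' '[:lower:]' | sort -u
```

The deduplicated list is what I feed into the earlier techniques (dig, zone transfers, and so on), one subdomain at a time.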
Automated Techniques
Now that you know some basic (but powerful) techniques for enumerating subdomains, it’s time to automate the process.
There are tons of tools out there that do all of these steps (and more) to provide you with as much information about a domain name as possible. One popular option is dnsenum. As described in its man page, dnsenum can perform:
nslookup, zonetransfer, google scraping, domain brute force (support also recursion), whois ip and reverse lookups.
Here’s what I was able to pull by pointing this tool at burgerking.com:
I cut the scan short but you can see it was able to pull some useful servers (as well as their IP addresses), find some domains, and attempted to do a zone transfer and figure out the DNS server’s bind version.
Other great tools include fierce, dnsrecon, and sublist3r (for enumerating subdomains). I encourage you to try them all and see what works for you. There are also tons of open-source tools out there that I haven’t mentioned; give those a try as well!
Now that you’ve seen a fantastic tool that formats everything you might need beautifully with color-coding, you may be asking yourself: “Why even bother with manual testing in the first place?”
While these tools are extremely efficient, they may not be a one line solution for all of your needs. It’s important when enumerating to supplement your automated information gathering with manual gathering.
Manual enumeration is extremely flexible, allowing you to poke around from every angle you can think of and dig deeper than some automated scripts. It’s also worth noting that automated scripts generate a ton of traffic, so in a scenario where stealth is required, it may be more appropriate to test manually and fly under the radar.
Conclusion
DNS enumeration is a critical step in the process of analyzing your target. When executed correctly, it can be used to expose a wealth of information. The more information you have on a target, the more effective your penetration test will be.
I only covered the basics to give you a starting point. I highly encourage you to look at the manual pages (with the man command, arguably the best UNIX command) to learn the ins and outs of these tools so you can make the most out of them. At the end of the day, these tools are simply lines of code–it’s up to you to make them useful.
Remember, this information is all public, so feel free to practice on your own (you definitely should) and see what you can dig up. Just be careful when using the brute-force tools against servers you don’t own (again read the man pages so you know what you’re doing) and don’t use any of the information you gather without permission.
That’s all and happy hunting! In the next post, I’m going to be talking about scanning with nmap so stay tuned for that!