
AzSubEnum - Azure Service Subdomain Enumeration

By: Zion3R


AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.


How it works

AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.

With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
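
To make the DNS-resolution-and-permutation idea concrete, here is a minimal Python sketch of the same approach. It is illustrative only: the suffix list, permutation handling and helper names are assumptions, not AzSubEnum's actual code.

import socket
from concurrent.futures import ThreadPoolExecutor

# A few of the Azure service suffixes that get probed (non-exhaustive).
AZURE_SUFFIXES = [
    ".azurewebsites.net",        # App Services
    ".blob.core.windows.net",    # Storage Accounts
    ".database.windows.net",     # Azure SQL
    ".documents.azure.com",      # Cosmos DB
    ".vault.azure.net",          # Key Vault
]

def resolves(hostname):
    # A hostname that resolves indicates the service name is taken.
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def enumerate_candidates(base, permutations, threads=10):
    # Combine the base name with each permutation and service suffix.
    candidates = [f"{base}{p}{suffix}"
                  for p in [""] + permutations
                  for suffix in AZURE_SUFFIXES]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = list(pool.map(resolves, candidates))
    return [host for host, ok in zip(candidates, results) if ok]

if __name__ == "__main__":
    print(enumerate_candidates("retailcorp", ["-dev", "-prod", "01"]))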


Why did I create this?

During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool Invoke-EnumerateAzureSubDomains from NetSPI would not run under PowerShell on my Debian machine. Consequently, I created a crude implementation of that tool in Python.


Usage
➜  AzSubEnum git:(main) βœ— python3 azsubenum.py --help
usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]

Azure Subdomain Enumeration

options:
  -h, --help            show this help message and exit
  -b BASE, --base BASE  Base name to use
  -v, --verbose         Show verbose output
  -t THREADS, --threads THREADS
                        Number of threads for concurrent execution
  -p PERMUTATIONS, --permutations PERMUTATIONS
                        File containing permutations

Basic enumeration:

python3 azsubenum.py -b retailcorp --threads 10

Using permutation wordlists:

python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt

With verbose output:

python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose




Argus - A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions

By: Zion3R

This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.

Visit our website - secureci.org for more information.


Features

  • Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks.

  • Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.

Usage

This Python script provides a command line interface for interacting with GitHub repositories and GitHub actions.

python argus.py --mode [mode] --url [url] [--output-folder path_to_output] [--config path_to_config] [--verbose] [--branch branch_name] [--commit commit_hash] [--tag tag_name] [--action-path path_to_action] [--workflow-path path_to_workflow]

Parameters:

  • --mode: The mode of operation. Choose either 'repo' or 'action'. This parameter is required.
  • --url: The GitHub URL. Use USERNAME:TOKEN@URL for private repos. This parameter is required.
  • --output-folder: The output folder. The default value is '/tmp'. This parameter is optional.
  • --config: The config file. This parameter is optional.
  • --verbose: Verbose mode. If this option is provided, the logging level is set to DEBUG. Otherwise, it is set to INFO. This parameter is optional.
  • --branch: The branch name. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --commit: The commit hash. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --tag: The tag. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --action-path: The (relative) path to the action. You cannot provide --action-path in repo mode. This parameter is optional.
  • --workflow-path: The (relative) path to the workflow. You cannot provide --workflow-path in action mode. This parameter is optional.

Example:

To use this script to interact with a GitHub repo, you might run a command like the following:

python argus.py --mode repo --url https://github.com/username/repo.git --branch master

This would run the script in repo mode on the master branch of the specified repository.

How to use

Argus can be run inside a docker container. To do so, follow the steps:

  • Install docker and docker-compose
    • apt-get -y install docker.io docker-compose
  • Clone the release branch of this repo
    • git clone <>
  • Build the docker container
    • docker-compose build
  • Now you can run argus. Example run:
    • docker-compose run argus --mode {mode} --url {url to target repo}
  • Results will be available inside the results folder

Viewing SARIF Results

You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.

  1. Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.

  2. VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.

Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.
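
If you just need a quick command-line triage of the findings, a few lines of Python can summarize the report. This is a hedged sketch that assumes the file follows the standard SARIF 2.1.0 layout (runs -> results -> ruleId / level / message / locations); it is not part of Argus itself.

import json

# Load the report produced by the scan (filename taken from the text above).
with open("argus_report.sarif") as fh:
    sarif = json.load(fh)

for run in sarif.get("runs", []):
    for result in run.get("results", []):
        rule = result.get("ruleId", "unknown-rule")
        level = result.get("level", "warning")
        message = result.get("message", {}).get("text", "")
        files = [
            loc.get("physicalLocation", {})
               .get("artifactLocation", {})
               .get("uri", "?")
            for loc in result.get("locations", [])
        ]
        print(f"[{level}] {rule}: {message} ({', '.join(files) or 'no location'})")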

Troubleshooting

If GitHub authorization is needed for a run, you can provide username:TOKEN in the GITHUB_CREDS environment variable. This will be used for all requests made to GitHub. Note: we do not store this information anywhere, nor do we create anything in the GitHub account; we only use it for cloning the repositories.

Contributions

Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!

Cite Argus

If you use Argus in your research, please cite our paper:

  @inproceedings{muralee2023Argus,
    title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
    author={S. Muralee, I. Koishybayev, A. Nahapetyan, G. Tystahl, B. Reaves, A. Bianchi, W. Enck, A. Kapravelos, A. Machiry},
    booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
    year={2023},
  }


BucketLoot - An Automated S3-compatible Bucket Inspector

By: Zion3R


BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / access keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries that the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.
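
For context, guest mode maps onto how S3-compatible listings work: an unauthenticated GET on a public bucket returns a ListBucketResult XML document with at most 1000 keys per page. The sketch below shows only that mechanic; it is not BucketLoot's implementation, and the bucket URL is a placeholder.

import urllib.request
import xml.etree.ElementTree as ET

def list_public_bucket(bucket_url):
    # Anonymous listing request; public buckets answer with ListBucketResult XML.
    with urllib.request.urlopen(bucket_url) as resp:
        tree = ET.fromstring(resp.read())
    # S3-compatible listings namespace their elements, so match on the tag suffix.
    return [el.text for el in tree.iter() if el.tag.endswith("Key")]

if __name__ == "__main__":
    keys = list_public_bucket("https://example-bucket.s3.amazonaws.com/")
    print(f"{len(keys)} objects listed (a single anonymous page caps at 1000 keys)")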

Features

Secret Scanning

Scans for 80+ unique regex signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!
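
A stripped-down version of this kind of signature sweep looks roughly like the following. The exact shape of regexes.json (name, regex and severity fields) is an assumption made for illustration and may not match BucketLoot's actual schema.

import json
import re

def scan_text(text, signature_file="regexes.json"):
    # Load signature definitions; each entry is assumed to carry a regex and a severity.
    with open(signature_file) as fh:
        signatures = json.load(fh)
    findings = []
    for name, rule in signatures.items():
        for match in re.finditer(rule["regex"], text):
            findings.append({
                "signature": name,
                "severity": rule.get("severity", "unknown"),
                "match": match.group(0),
            })
    return findings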

Sensitive File Checks

Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with an 80+ unique regex signature list in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.

Dig Mode

Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning.

Asset Extraction

Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, enabling you to have a chance of discovering hidden endpoints, thus giving you an edge over the other traditional recon tools.

Searching

The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

To know more about our Attack Surface Management platform, check out NVADR.



Airgorah - A WiFi Auditing Software That Can Perform Deauth Attacks And Passwords Cracking

By: Zion3R


Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.

It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on the aircrack-ng tool suite.

⭐ Don't forget to put a star if you like the project!

Legal

Airgorah is designed to be used for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you do not own is illegal in almost all countries. I am not responsible for whatever damage you may cause by using this software.

Requirements

This software only works on Linux and requires root privileges to run.

You will also need a wireless network card that supports monitor mode and packet injection.

Installation

The installation instructions are available here.

Usage

The documentation about the usage of the application is available here.

License

This project is released under MIT license.

Contributing

If you have any questions about the usage of the application, do not hesitate to open a discussion.

If you want to report a bug or contribute a feature, do not hesitate to open an issue or submit a pull request.



Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
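
As a rough illustration of the regex-driven extraction described above (not Uscrapper's actual implementation; the patterns and helper names are simplified assumptions):

import re
import urllib.request

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def extract_contacts(url):
    # Fetch the page and run the extraction patterns over the raw HTML.
    html = urllib.request.urlopen(url).read().decode(errors="ignore")
    return {
        "emails": sorted(set(EMAIL_RE.findall(html))),
        "phones": sorted(set(PHONE_RE.findall(html))),
    }

if __name__ == "__main__":
    print(extract_contacts("https://example.com"))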


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.

What's New?:

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introducing Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
  • Implemented multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


WebCopilot - An Automation Tool That Enumerates Subdomains Then Filters Out XSS, SQLi, Open Redirect, LFI, SSRF And RCE Parameters And Then Scans For Vulnerabilities

By: Zion3R


WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.

The script first enumerates all the subdomains of the given target domain using assetfinder, sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt, then does active subdomain enumeration using gobuster with a SecLists wordlist. It then filters out the live subdomains using dnsx, extracts titles of the subdomains using httpx, and scans for subdomain takeover using subjack. Next, it uses gauplus & waybackurls to crawl all the endpoints of the given subdomains, applies gf patterns to filter out xss, lfi, ssrf, sqli, open redirect & rce parameters from those subdomains, and then scans for vulnerabilities on the subdomains using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc). Finally, it prints out the result of the scan and saves all the output in a specified directory.
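
To picture the enumerate -> resolve -> probe flow, here is a rough subprocess sketch of the first few stages. It is not WebCopilot's code, and the subfinder/dnsx/httpx flags are assumptions based on those tools' common usage, so check each tool's help before relying on them.

import subprocess

def passive_pipeline(domain):
    # subfinder: passive subdomain enumeration (flags assumed).
    subs = subprocess.run(["subfinder", "-d", domain, "-silent"],
                          capture_output=True, text=True, check=True).stdout
    # dnsx: keep only the names that actually resolve.
    alive = subprocess.run(["dnsx", "-silent"], input=subs,
                           capture_output=True, text=True, check=True).stdout
    # httpx: probe for live web services and grab page titles.
    probed = subprocess.run(["httpx", "-silent", "-title"], input=alive,
                            capture_output=True, text=True, check=True).stdout
    return probed.splitlines()

if __name__ == "__main__":
    for line in passive_pipeline("example.com"):
        print(line)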


Features

Usage

g!2m0:~ webcopilot -h
             
──────▄▀▄─────▄▀▄
β”€β”€β”€β”€β”€β–„β–ˆβ–‘β–‘β–€β–€β–€β–€β–€β–‘β–‘β–ˆβ–„
β”€β–„β–„β”€β”€β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ”€β”€β–„β–„
β–ˆβ–„β–„β–ˆβ”€β–ˆβ–‘β–‘β–€β–‘β–‘β”¬β–‘β–‘β–€β–‘β–‘β–ˆβ”€β–ˆβ–„β–„β–ˆ
β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β•šβ•β•β–ˆβ–ˆβ•”β•β•β•
β–‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β–ˆβ–ˆβ•‘β–‘β–‘β•šβ•β•β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ•‘β–‘β–ˆβ–ˆβ•”β•β•β•β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘β•šβ•β•β–‘β–‘β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β• β–‘β•šβ•β•β•β•β•β–‘β•šβ•β•β–‘β–‘β–‘β–‘β–‘β•šβ•β•β•šβ•β•β•β•β•β•β•β–‘β•šβ•β•β•β•β•β–‘β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘
[●] @h4r5h1t.hrs | G!2m0

Usage:
webcopilot -d <target>
webcopilot -d <target> -s
webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]

Flags:
-d Add your target [Required]
-o To save outputs in folder [Default: domain.com]
-t Number of threads [Default: 100]
-b Add your server for BXSS [Default: False]
-x Exclude out of scope domains [Default: False]
-s Run only Subdomain Enumeration [Default: False]
-h Show this help message

Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server

Installing WebCopilot

WebCopilot requires git to install successfully. Run the following command as root to install webcopilot:

git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh

Tools Used:

SubFinder β€’ Sublist3r β€’ Findomain β€’ gf β€’ OpenRedireX β€’ dnsx β€’ sqlmap β€’ gobuster β€’ assetfinder β€’ httpx β€’ kxss β€’ qsreplace β€’ Nuclei β€’ dalfox β€’ anew β€’ jq β€’ aquatone β€’ urldedupe β€’ Amass β€’ gauplus β€’ waybackurls β€’ crlfuzz

Running WebCopilot

To run the tool on a target, just use the following command.

g!2m0:~ webcopilot -d bugcrowd.com

The -o flag can be used to specify an output directory.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd

The -s flag can be used to run only subdomain enumeration (active + passive, including titles & screenshots).

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s 

The -t flag can be used to add threads to your scan for faster results.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 

The -b flag can be used for blind XSS (OOB); you can get your server from xsshunter or interact.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss

The -x flag can be used to exclude out-of-scope domains.

g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss

Example

Default options look like this:

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
                                ──────▄▀▄─────▄▀▄
β”€β”€β”€β”€β”€β–„β–ˆβ–‘β–‘β–€β–€β–€β–€β–€β–‘β–‘β–ˆβ–„
β”€β–„β–„β”€β”€β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ”€β”€β–„β–„
β–ˆβ–„β–„β–ˆβ”€β–ˆβ–‘β–‘β–€β–‘β–‘β”¬β–‘β–‘β–€β–‘β–‘β–ˆβ”€β–ˆβ–„β–„β–ˆ
β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β•šβ•β•β–ˆβ–ˆβ•”β•β•β•
β–‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆ β–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β–ˆβ–ˆβ•‘β–‘β–‘β•šβ•β•β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ•‘β–‘β–ˆβ–ˆβ•”β•β•β•β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘ β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘β•šβ•β•β–‘β–‘β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β•β–‘β•šβ•β•β–‘β–‘β–‘ β–‘β•šβ•β•β•šβ•β•β•β•β•β•β•β–‘β•šβ•β•β•β•β•β–‘β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘
[●] @h4r5h1t.hrs | G!2m0


[❌] Warning: Use with caution. You are responsible for your own actions.
[❌] Developers assume no liability and are not responsible for any misuse or damage cause by this tool.


Target: bugcrowd.com
Output: /home/gizmo/targets/bugcrowd
Threads: 100
Server: False
Exclude: False
Mode: Running all Enumeration
Time: 30-08-2021 15:10:00

[!] Please wait while scanning...

[●] Subdoamin Scanning is in progress: Scanning subdomains of bugcrowd.com
[●] Subdoamin Scanned - [assetfinderβœ”] Subdomain Found: 34
[●] Subdoamin Scanned - [sublist3rβœ”] Subdomain Found: 29
[●] Subdoamin Scanned - [subfinderβœ”] Subdomain Found: 54
[●] Subdoamin Scanned - [amassβœ”] Subdomain Found: 43
[●] Subdoamin Scanned - [findomainβœ”] Subdomain Found: 27

[●] Active Subdoamin Scanning is in progress:
[!] Please be patient. This may take a while...
[●] Active Subdoamin Scanned - [gobusterβœ”] Subdomain Found: 11
[●] Active Subdoamin Scanned - [amassβœ”] Subdomain Found: 0

[●] Subdomain Scanning: Filtering out of scope subdomains
[●] Subdomain Scanning: Filtering Alive subdomains
[●] Subdomain Scanning: Getting titles of valid subdomains
[●] Visual inspection of Subdoamins is completed. Check: /subdomains/aquatone/

[●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30

[●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
[●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
[●] Vulnerabilities Scanned - [XSSβœ”] Found: 0
[●] Vulnerabilities Scanned - [SQLiβœ”] Found: 0
[●] Vulnerabilities Scanned - [LFIβœ”] Found: 0
[●] Vulnerabilities Scanned - [CRLFβœ”] Found: 0
[●] Vulnerabilities Scanned - [SSRFβœ”] Found: 0
[●] Vulnerabilities Scanned - [Sensitive Dataβœ”] Found: 0
[●] Vulnerabilities Scanned - [Open redirectβœ”] Found: 0
[●] Vulnerabilities Scanned - [Subdomain Takeoverβœ”] Found: 0
[●] Vulnerabilities Scanned - [Nuclieβœ”] Found: 0
[●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/


β–’β–ˆβ–€β–€β–ˆ β–ˆβ–€β–€ β–ˆβ–€β–€ β–ˆβ–‘β–‘β–ˆ β–ˆβ–‘β–‘ β–€β–€β–ˆβ–€β–€
β–’β–ˆβ–„β–„β–€ β–ˆβ–€β–€ β–€β–€β–ˆ β–ˆβ–‘β–‘β–ˆ β–ˆβ–‘β–‘ β–‘β–‘β–ˆβ–‘β–‘
β–’β–ˆβ–‘β–’β–ˆ β–€β–€β–€ β–€β–€β–€ β–‘β–€β–€β–€ β–€β–€β–€ β–‘β–‘β–€β–‘β–‘

[+] Subdomains of bugcrowd.com
[+] Subdomains Found: 0
[+] Subdomains Alive: 0
[+] Endpoints: 11032
[+] XSS: 0
[+] SQLi: 0
[+] Open Redirect: 0
[+] SSRF: 0
[+] CRLF: 0
[+] LFI: 0
[+] Sensitive Data: 0
[+] Subdomain Takeover: 0
[+] Nuclei: 0

Acknowledgement

WebCopilot is inspired by Garud & Pinaak by ROX4R.

Thanks to the authors of the tools & wordlists used in this script.

@aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R

Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. Please use it with caution, because you are responsible for your own actions.


Legba - A Multiprotocol Credentials Bruteforcer / Password Sprayer And Enumerator

By: Zion3R


Legba is a multiprotocol credentials bruteforcer / password sprayer and enumerator built with Rust and the Tokio asynchronous runtime in order to achieve better performance and stability while consuming fewer resources than similar tools (see the benchmark below).

For the building instructions, usage and the complete list of options check the project Wiki.


Supported Protocols/Features:

AMQP (ActiveMQ, RabbitMQ, Qpid, JORAM and Solace), Cassandra/ScyllaDB, DNS subdomain enumeration, FTP, HTTP (basic authentication, NTLMv1, NTLMv2, multipart form, custom requests with CSRF support, files/folders enumeration, virtual host enumeration), IMAP, Kerberos pre-authentication and user enumeration, LDAP, MongoDB, MQTT, Microsoft SQL, MySQL, Oracle, PostgreSQL, POP3, RDP, Redis, SSH / SFTP, SMTP, STOMP (ActiveMQ, RabbitMQ, HornetQ and OpenMQ), TCP port scanning, Telnet, VNC.

Benchmark

Here's a benchmark of legba versus thc-hydra running some common plugins, both targeting the same test servers on localhost. The benchmark has been executed on a macOS laptop with an M1 Max CPU, using a wordlist of 1000 passwords with the correct one being on the last line. Legba was compiled in release mode, Hydra compiled and installed via brew formula.

Far from being an exhaustive benchmark (some legba features are simply not supported by hydra, such as CSRF token grabbing), this table still gives a clear idea of how using an asynchronous runtime can drastically improve performances.

Test Name                   | Hydra Tasks | Hydra Time | Legba Tasks | Legba Time
HTTP basic auth             | 16          | 7.100s     | 10          | 1.560s (4.5x faster)
HTTP POST login (wordpress) | 16          | 14.854s    | 10          | 5.045s (2.9x faster)
SSH                         | 16          | 7m29.85s * | 10          | 8.150s (55.1x faster)
MySQL                       | 4 **        | 9.819s     | 4 **        | 2.542s (3.8x faster)
Microsoft SQL               | 16          | 7.609s     | 10          | 4.789s (1.5x faster)

* This result would suggest a default delay between connection attempts in Hydra. I've tried to study the source code to find such a delay, but to my knowledge there is none; for some reason it's simply very slow.
** For MySQL, Hydra automatically reduces the number of tasks to 4, therefore Legba's concurrency level has been adjusted to 4 as well.

License

Legba is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo license with cargo install cargo-license and then run cargo license.



APIDetector - Efficiently Scan For Exposed Swagger Endpoints Across Web Domains And Subdomains

By: Zion3R


APIDetector is a powerful and efficient tool designed for testing exposed Swagger endpoints in various subdomains with unique smart capabilities to detect false-positives. It's particularly useful for security professionals and developers who are engaged in API testing and vulnerability scanning.


Features

  • Flexible Input: Accepts a single domain or a list of subdomains from a file.
  • Multiple Protocols: Option to test endpoints over both HTTP and HTTPS.
  • Concurrency: Utilizes multi-threading for faster scanning.
  • Customizable Output: Save results to a file or print to stdout.
  • Verbose and Quiet Modes: Default verbose mode for detailed logs, with an option for quiet mode.
  • Custom User-Agent: Ability to specify a custom User-Agent for requests.
  • Smart Detection of False-Positives: Ability to detect most false-positives.
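
To illustrate the false-positive detection idea from the feature list above: a naive check can confirm that a candidate URL really serves Swagger/OpenAPI content rather than a generic 200 page. This is an assumed sketch of the concept, not APIDetector's actual logic.

import requests

MARKERS = ("swagger", "openapi", "swagger-ui")

def looks_like_swagger(url, timeout=5):
    # A real endpoint should answer 200 and mention Swagger/OpenAPI in the body.
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False
    if resp.status_code != 200:
        return False
    body = resp.text[:20000].lower()
    return any(marker in body for marker in MARKERS)

if __name__ == "__main__":
    print(looks_like_swagger("https://example.com/swagger-ui.html"))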

Getting Started

Prerequisites

Before running APIDetector, ensure you have Python 3.x and pip installed on your system. You can download Python here.

Installation

Clone the APIDetector repository to your local machine using:

git clone https://github.com/brinhosa/apidetector.git
cd apidetector
pip install requests

Usage

Run APIDetector using the command line. Here are some usage examples:

  • Common usage: scan a list of subdomains with 30 threads, using a Chrome user-agent, and save the results to a file:

    python apidetector.py -i list_of_company_subdomains.txt -o results_file.txt -t 30 -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
  • To scan a single domain:

    python apidetector.py -d example.com
  • To scan multiple domains from a file:

    python apidetector.py -i input_file.txt
  • To specify an output file:

    python apidetector.py -i input_file.txt -o output_file.txt
  • To use a specific number of threads:

    python apidetector.py -i input_file.txt -t 20
  • To scan with both HTTP and HTTPS protocols:

    python apidetector.py -m -d example.com
  • To run the script in quiet mode (suppress verbose output):

    python apidetector.py -q -d example.com
  • To run the script with a custom user-agent:

    python apidetector.py -d example.com -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"

Options

  • -d, --domain: Single domain to test.
  • -i, --input: Input file containing subdomains to test.
  • -o, --output: Output file to write valid URLs to.
  • -t, --threads: Number of threads to use for scanning (default is 10).
  • -m, --mixed-mode: Test both HTTP and HTTPS protocols.
  • -q, --quiet: Disable verbose output (default mode is verbose).
  • -ua, --user-agent: Custom User-Agent string for requests.

RISK DETAILS OF EACH ENDPOINT APIDETECTOR FINDS

Exposing Swagger or OpenAPI documentation endpoints can present various risks, primarily related to information disclosure. Here's an ordered list of the endpoints APIDetector scans, with similar endpoints grouped together and ranked by potential risk level:

1. High-Risk Endpoints (Direct API Documentation):

  • Endpoints:
    • '/swagger-ui.html', '/swagger-ui/', '/swagger-ui/index.html', '/api/swagger-ui.html', '/documentation/swagger-ui.html', '/swagger/index.html', '/api/docs', '/docs', '/api/swagger-ui', '/documentation/swagger-ui'
  • Risk:
    • These endpoints typically serve the Swagger UI interface, which provides a complete overview of all API endpoints, including request formats, query parameters, and sometimes even example requests and responses.
    • Risk Level: High. Exposing these gives potential attackers detailed insights into your API structure and potential attack vectors.

2. Medium-High Risk Endpoints (API Schema/Specification):

  • Endpoints:
    • '/openapi.json', '/swagger.json', '/api/swagger.json', '/swagger.yaml', '/swagger.yml', '/api/swagger.yaml', '/api/swagger.yml', '/api.json', '/api.yaml', '/api.yml', '/documentation/swagger.json', '/documentation/swagger.yaml', '/documentation/swagger.yml'
  • Risk:
    • These endpoints provide raw Swagger/OpenAPI specification files. They contain detailed information about the API endpoints, including paths, parameters, and sometimes authentication methods.
    • Risk Level: Medium-High. While they require more interpretation than the UI interfaces, they still reveal extensive information about the API.

3. Medium Risk Endpoints (API Documentation Versions):

  • Endpoints:
    • '/v2/api-docs', '/v3/api-docs', '/api/v2/swagger.json', '/api/v3/swagger.json', '/api/v1/documentation', '/api/v2/documentation', '/api/v3/documentation', '/api/v1/api-docs', '/api/v2/api-docs', '/api/v3/api-docs', '/swagger/v2/api-docs', '/swagger/v3/api-docs', '/swagger-ui.html/v2/api-docs', '/swagger-ui.html/v3/api-docs', '/api/swagger/v2/api-docs', '/api/swagger/v3/api-docs'
  • Risk:
    • These endpoints often refer to version-specific documentation or API descriptions. They reveal information about the API's structure and capabilities, which could aid an attacker in understanding the API's functionality and potential weaknesses.
    • Risk Level: Medium. These might not be as detailed as the complete documentation or schema files, but they still provide useful information for attackers.

4. Lower Risk Endpoints (Configuration and Resources):

  • Endpoints:
    • '/swagger-resources', '/swagger-resources/configuration/ui', '/swagger-resources/configuration/security', '/api/swagger-resources', '/api.html'
  • Risk:
    • These endpoints often provide auxiliary information, configuration details, or resources related to the API documentation setup.
    • Risk Level: Lower. They may not directly reveal API endpoint details but can give insights into the configuration and setup of the API documentation.

Summary:

  • Highest Risk: Directly exposing interactive API documentation interfaces.
  • Medium-High Risk: Exposing raw API schema/specification files.
  • Medium Risk: Version-specific API documentation.
  • Lower Risk: Configuration and resource files for API documentation.

Recommendations:

  • Access Control: Ensure that these endpoints are not publicly accessible or are at least protected by authentication mechanisms.
  • Environment-Specific Exposure: Consider exposing detailed API documentation only in development or staging environments, not in production.
  • Monitoring and Logging: Monitor access to these endpoints and set up alerts for unusual access patterns.

Contributing

Contributions to APIDetector are welcome! Feel free to fork the repository, make changes, and submit pull requests.

Legal Disclaimer

The use of APIDetector should be limited to testing and educational purposes only. The developers of APIDetector assume no liability and are not responsible for any misuse or damage caused by this tool. It is the end user's responsibility to obey all applicable local, state, and federal laws. Developers assume no responsibility for unauthorized or illegal use of this tool. Before using APIDetector, ensure you have permission to test the network or systems you intend to scan.

License

This project is licensed under the MIT License.

Acknowledgments



CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

By: Zion3R


CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


Key Features:

  • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

  • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

  • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

  • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
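
The core of the real-IP hunt can be pictured as: resolve candidate subdomains, then keep the records whose addresses fall outside Cloudflare's published ranges. The sketch below is illustrative only; the range list is a partial hard-coded sample (see https://www.cloudflare.com/ips/ for the authoritative list) and the helper names are not CloakQuest3r's code.

import ipaddress
import socket

# Partial sample of Cloudflare's published IPv4 ranges.
CLOUDFLARE_V4 = [ipaddress.ip_network(n) for n in (
    "104.16.0.0/13", "104.24.0.0/14", "172.64.0.0/13",
    "162.158.0.0/15", "198.41.128.0/17", "141.101.64.0/18",
)]

def behind_cloudflare(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_V4)

def candidate_origins(domain, subdomains):
    # Subdomains resolving outside Cloudflare ranges may expose the origin server.
    origins = {}
    for sub in subdomains:
        host = f"{sub}.{domain}"
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            continue
        if not behind_cloudflare(ip):
            origins[host] = ip
    return origins

if __name__ == "__main__":
    print(candidate_origins("example.com", ["mail", "dev", "staging", "ftp"]))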

Limitation

- Still in the development phase; sometimes it can't detect the real IP.

- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

1. False Negatives: CloakReveal3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

This tool is a Proof of Concept and is for Educational Purposes Only.

How to Use:

  1. Run CloudScan with a single command-line argument: the target domain you want to analyze.

     git clone https://github.com/spyboy-productions/CloakQuest3r.git
    cd CloakQuest3r
    pip3 install -r requirements.txt
    python cloakquest3r.py example.com
  2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

  3. If Cloudflare is detected, CloudScan will scan for subdomains and identify their real IP addresses.

  4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

  5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

CloudScan simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

Run It Online:

Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r



Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proof of concepts, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration to recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false-positives that still provided offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

from porchpirate import porchpirate  # assuming the class is exported at the package top level

p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

from porchpirate import porchpirate  # assuming the class is exported at the package top level

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Other Examples

Other library usage examples can be located in the examples directory, which contains the following examples:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


OSINT-Framework - OSINT Framework

By: Zion3R


OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.

I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!

Please visit the framework at the link below and good hunting!


https://osintframework.com

Legend

(T) - Indicates a link to a tool that must be installed and run locally
(D) - Google Dork, for more information: Google Hacking
(R) - Requires registration
(M) - Indicates a URL that contains the search term and the URL itself must be edited manually

For Update Notifications

Follow me on Twitter: @jnordine - https://twitter.com/jnordine
Watch or star the project on Github: https://github.com/lockfale/osint-framework

Suggestions, Comments, Feedback

Feedback or new tool suggestions are extremely welcome! Please feel free to submit a pull request or open an issue on github or reach out on Twitter.

Contribute with a GitHub Pull Request

For new resources, please ensure that the site is available for public and free use.

  1. Update the arf.json file in the format shown below. If this isn't the first entry for a folder, add a comma to the last closing brace of the previous entry.
  2. Submit pull request!
  3. Thank you!

    OSINT Framework Website

    https://osintframework.com

    Happy Hunting!



    Goblob - A Fast Enumeration Tool For Publicly Exposed Azure Storage Blobs

    By: Zion3R


    Goblob is a lightweight and fast enumeration tool designed to aid in the discovery of sensitive information exposed publicly in Azure blobs, which can be useful for various research purposes such as vulnerability assessments, penetration testing, and reconnaissance.

    Warning. Goblob will issue individual goroutines for each container name to check in each storage account, only limited by the maximum number of concurrent goroutines specified in the -goroutines flag. This implementation can exhaust bandwidth pretty quickly in most cases with the default wordlist, or potentially cost you a lot of money if you're using the tool in a cloud environment. Make sure you understand what you are doing before running the tool.


    Installation

    go install github.com/Macmod/goblob@latest

    Usage

    To use goblob simply run the following command:

    $ ./goblob <storageaccountname>

    Where <storageaccountname> is the target storage account to enumerate public Azure blob storage URLs on.
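
    Under the hood, each check is essentially an anonymous container-listing request against https://<storageaccountname>.blob.core.windows.net/<container>?restype=container&comp=list, where a 200 response means the container is publicly listable. The Python sketch below shows that single check only; Goblob itself performs it concurrently via goroutines, and the helper names here are illustrative.

    import urllib.error
    import urllib.request

    def container_is_public(account, container):
        # Anonymous List Blobs request; only publicly listable containers answer 200.
        url = (f"https://{account}.blob.core.windows.net/"
               f"{container}?restype=container&comp=list")
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except (urllib.error.HTTPError, urllib.error.URLError):
            return False

    if __name__ == "__main__":
        print(container_is_public("somestorageaccount", "backups"))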

    You can also specify a list of storage account names to check:

    $ ./goblob -accounts accounts.txt

    By default, the tool will use a list of common Azure Blob Storage container names to construct potential URLs. However, you can also specify a custom list of container names using the -containers option. For example:

    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt

    The tool also supports outputting the results to a file using the -output option:

    $ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -output results.txt

    If you want to provide accounts to test via stdin you can also omit -accounts (or the account name) entirely:

    $ cat accounts.txt | ./goblob

    Wordlists

    Goblob comes bundled with basic wordlists that can be used with the -containers option:

    Optional Flags

    Goblob provides several flags that can be tuned in order to improve the enumeration process:

    • -goroutines=N - Maximum number of concurrent goroutines to allow (default: 5000).
    • -blobs=true - Report the URL of each blob instead of the URL of the containers (default: false).
    • -verbose=N - Set verbosity level (default: 1, min: 0, max: 3).
    • -maxpages=N - Maximum of container pages to traverse looking for blobs (default: 20, set to -1 to disable limit or to 0 to avoid listing blobs at all and just check if the container is public)
    • -timeout=N - Timeout for HTTP requests (seconds, default: 90)
    • -maxidleconns=N - MaxIdleConns transport parameter for HTTP client (default: 100)
    • -maxidleconnsperhost=N - MaxIdleConnsPerHost transport parameter for HTTP client (default: 10)
    • -maxconnsperhost=N - MaxConnsPerHost transport parameter for HTTP client (default: 0)
    • -skipssl=true - Skip SSL verification (default: false)
    • -invertsearch=true - Enumerate accounts for each container instead of containers for each account (default: false)

    For instance, if you just want to find publicly exposed containers using large lists of storage accounts and container names, you should use -maxpages=0 to prevent the goroutines from paginating the results. Then run it again on the set of results you found with -blobs=true and -maxpages=-1 to actually get the URLs of the blobs.

    If, on the other hand, you want to test a small list of very popular container names against a large set of storage accounts, you might want to try -invertsearch=true with -maxpages=0, in order to see the public accounts for each container name instead of the container names for each storage account.

    You may also want to try changing -goroutines, -timeout and -maxidleconns, -maxidleconnsperhost and -maxconnsperhost and -skipssl in order to best use your bandwidth and find results faster.

    Experiment with the flags to find what works best for you ;-)

    Example


    Contributing

    Contributions are welcome by opening an issue or by submitting a pull request.

    TODO

    • Check blob domain for NXDOMAIN before trying wordlist to save bandwidth (maybe)
    • Improve default parameters for better performance

    Wordcloud

    An interesting visualization of popular container names found in my experiments with the tool:


    If you want to know more about my experiments and the subject in general, take a look at my article:



    CloudPulse - AWS Cloud Landscape Search Engine

    By: Zion3R


    During the reconnaissance phase, an attacker searches for any information about their target to build a profile that will later help them identify possible ways to get into an organization.
    CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from the AWS EC2 machines available at Trickest Cloud. With CloudPulse, security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.


    It simplifies security assessments with a user-friendly interface and allows you to effortlessly find a company's assets on the AWS cloud:

    • IPs
    • subdomains
    • domains associated with a target
    • organization name
    • discover origin ips

    1- Download CloudPulse :

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Run docker compose :

    docker-compose up -d

    3- Run script.py script

    docker-compose exec web python script.py

    4 - Now go to http://:8000/search and enjoy the search engine

    1- download CloudPulse :

    git clone https://github.com/yousseflahouifi/CloudPulse
    cd CloudPulse/

    2- Setup virtual environment :

    python3 -m venv myenv
    source myenv/bin/activate

    3- Install requirements.txt file :

    pip install -r requirements.txt

    4- run an instance of elasticsearch using docker :

    docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1

    5- Update script.py and the settings file to use the host 'localhost':

    #script.py
    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

    #se/settings.py
    ELASTICSEARCH_DSL = {
        'default': {
            'hosts': 'localhost:9200'
        },
    }

    6- Run script.py to index data in elasticsearch:

    python script.py

    7- Run the app:

    python manage.py runserver 0:8000

    Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data, and update the data.csv file (it contains close to 9 million records).

    As an example, searching for .mil data gives:

    Searching for tesla as an example gives:

    CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
    Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.



    Mailchecker - Cross-language Temporary (Disposable/Throwaway) Email Detection Library. Covers 55 734+ Fake Email Providers

    By: Zion3R


    Cross-language email validation. Backed by a database of over 55 000 throwaway email domains.

    This will be very helpful when you have to contact your users and want to avoid errors causing a lack of communication, or when you want to block "spamboxes".


    Need to provide Webhooks inside your SaaS?

    Need to embed charts into an email?

    It's over with Image-Charts, no more server-side rendering pain, 1 url = 1 chart.

    https://image-charts.com/chart?
    cht=lc // chart type
    &chd=s:cEAELFJHHHKUju9uuXUc // chart data
    &chxt=x,y // axis
    &chxl=0:|0|1|2|3|4|5| // axis labels
    &chs=873x200 // size

    Use Image-Charts for free


    Upgrade from 1.x to 3.x

    Mailchecker public API has been normalized, here are the changes:

    • NodeJS/JavaScript: MailChecker(email) -> MailChecker.isValid(email)
    • PHP: MailChecker($email) -> MailChecker::isValid($email)
    • Python
    import MailChecker
    m = MailChecker.MailChecker()
    if not m.is_valid('bla@example.com'):
        # ...

    became:

    import MailChecker
    if not MailChecker.is_valid('bla@example.com'):
        # ...

    MailChecker currently supports:


    Usage

    NodeJS

    var MailChecker = require('mailchecker');

    if(!MailChecker.isValid('myemail@yopmail.com')){
      console.error('O RLY !');
      process.exit(1);
    }

    if(!MailChecker.isValid('myemail.com')){
      console.error('O RLY !');
      process.exit(1);
    }

    JavaScript

    <script type="text/javascript" src="MailChecker/platform/javascript/MailChecker.js"></script>
    <script type="text/javascript">
      if(!MailChecker.isValid('myemail@yopmail.com')){
        console.error('O RLY !');
      }

      if(!MailChecker.isValid('myemail.com')){
        console.error('O RLY !');
      }
    </script>

    PHP

    include __DIR__."/MailChecker/platform/php/MailChecker.php";

    if(!MailChecker::isValid('myemail@yopmail.com')){
      die('O RLY !');
    }

    if(!MailChecker::isValid('myemail.com')){
      die('O RLY !');
    }

    Python

    pip install mailchecker

    from MailChecker import MailChecker

    if not MailChecker.is_valid('bla@example.com'):
        print("O RLY !")

    Django validator: https://github.com/jonashaag/django-indisposable

    Ruby

    require 'mail_checker'

    unless MailChecker.valid?('myemail@yopmail.com')
      fail('O RLY!')
    end

    Rust

     extern crate mailchecker;

    assert_eq!(true, mailchecker::is_valid("plop@plop.com"));
    assert_eq!(false, mailchecker::is_valid("\nok@gmail.com\n"));
    assert_eq!(false, mailchecker::is_valid("ok@guerrillamailblock.com"));

    Elixir

    Code.require_file("mail_checker.ex", "mailchecker/platform/elixir/")

    unless MailChecker.valid?("myemail@yopmail.com") do
      raise "O RLY !"
    end

    unless MailChecker.valid?("myemail.com") do
      raise "O RLY !"
    end

    Clojure

    ; no package yet; just drop in mailchecker.clj where you want to use it.
    (load-file "platform/clojure/mailchecker.clj")

    (if (not (mailchecker/valid? "myemail@yopmail.com"))
      (throw (Throwable. "O RLY!")))

    (if (not (mailchecker/valid? "myemail.com"))
      (throw (Throwable. "O RLY!")))

    Go

    package main

    import (
        "log"

        // alias assumed so the import matches the mail_checker calls below
        mail_checker "github.com/FGRibreau/mailchecker/platform/go"
    )

    func main() {
        if !mail_checker.IsValid("myemail@yopmail.com") {
            log.Fatal("O RLY !")
        }

        if !mail_checker.IsValid("myemail.com") {
            log.Fatal("O RLY !")
        }
    }

    Installation

    Go

go get github.com/FGRibreau/mailchecker

    NodeJS/JavaScript

    npm install mailchecker

    Ruby

    gem install ruby-mailchecker

    PHP

    composer require fgribreau/mailchecker

We accept pull requests for other package managers.

    Data sources

    TorVPN

$('td', 'table:last').map(function(){
  return this.innerText;
}).toArray();

    BloggingWV

      Array.prototype.slice.call(document.querySelectorAll('.entry > ul > li a')).map(function(el){return el.innerText});

    ... please add your own dataset to list.txt.

    Regenerate libraries from list.txt

    Just run (requires NodeJS):

    npm run build

    Development

The development environment requires Docker.

    # install and setup every language dependencies in parallel through docker
    npm install

    # run every language setup in parallel through docker
    npm run setup

    # run every language tests in parallel through docker
    npm test

    Backers

    Maintainers

    These amazing people are maintaining this project:

    Contributors

    These amazing people have contributed code to this project:

    Discover how you can contribute by heading on over to the CONTRIBUTING.md file.

    Changelog



    PathFinder - Tool That Provides Information About A Website

    By: Zion3R


Web Path Finder is a Python program that provides information about a website. It retrieves various details such as page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more.


    • Retrieve important information about a website
    • Gain insights into the technologies used by a website
    • Identify subdomains and DNS information
    • Check firewall names and certificate details
    • Perform bypass operations for captcha and JavaScript content

    1. Clone the repository:

      git clone https://github.com/HalilDeniz/PathFinder.git
    2. Install the required packages:

      pip install -r requirements.txt

    This will install all the required modules and their respective versions.

    Run the program using the following command:

β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
└─# python3 web-info-explorer.py --help
    usage: wpathFinder.py [-h] url

    Web Information Program

    positional arguments:
    url Enter the site URL

    options:
    -h, --help show this help message and exit

    Replace <url> with the URL of the website you want to explore.

    Here is an example output of running the program:

β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py https://www.facebook.com/
    Site Information:
    Title: Facebook - Login or Register
    Last Updated Date: None
    First Creation Date: 1997-03-29 05:00:00
    Dns Information: []
    Sub Branches: ['157']
    Firewall Names: []
    Technologies Used: javascript, php, css, html, react
    Certificate Information:
    Certificate Issuer: US
    Certificate Start Date: 2023-02-07 00:00:00
    Certificate Expiration Date: 2023-05-08 23:59:59
    Certificate Validity Period (Days): 90
    Bypassed JavaScript content:
</div>

    Contributions are welcome! To contribute to PathFinder, follow these steps:

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes and commit them.
    4. Push your changes to your forked repository.
    5. Open a pull request in the main repository.

• Thanks to my friend Varol

    This project is licensed under the MIT License - see the LICENSE file for details.

    For any inquiries or further information, you can reach me through the following channels:



    Spoofy - Program That Checks If A List Of Domains Can Be Spoofed Based On SPF And DMARC Records

    By: Zion3R



    Spoofy is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"

    Well, Spoofy is different and here is why:

1. Authoritative DNS lookups for every query, with a known fallback (Cloudflare DNS)
    2. Accurate bulk lookups
    3. Custom, manually tested spoof logic (No guessing or speculating, real world test results)
    4. SPF lookup counter


    HOW TO USE

    Spoofy requires Python 3+. Python 2 is not supported. Usage is shown below:

    Usage:
    ./spoofy.py -d [DOMAIN] -o [stdout or xls]
    OR
    ./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]

    Install Dependencies:
    pip3 install -r requirements.txt
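For example, assuming a newline-separated file named domains.txt (a placeholder name), a bulk check written to an Excel workbook would follow the usage pattern shown above:

./spoofy.py -iL domains.txt -o xls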

HOW DO YOU KNOW IT'S SPOOFABLE

    (The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers.) Download Here

    METHODOLOGY

    The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing if an SPF and DMARC combination was spoofable or not was done using the email security pentesting suite at emailspooftest using Microsoft 365. However, the initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.

    After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.

    DISCLAIMER

    This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.

    CREDIT

    Lead / Only programmer & spoofability logic comprehension upgrades & lookup resiliency system / fix (main issue with other tools) & multithreading & feature additions: Matt Keeley

    DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf

    Logo: cobracode

    Tool was inspired by Bishop Fox's project called spoofcheck.



    DakshSCRA - Source Code Review Assist

    By: Zion3R


    Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.

    Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.

    What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.


    Debut

    Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.

    While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.

    Features and Functionalities

Distinctive Features (Multiple World Firsts)

    • Identifies Areas of Interest in Source Code: Encourage focused investigation and confirmation rather than indiscriminately labeling everything as a bug.

    • Identifies Areas of Interest in File Paths (World’s First): Recognises patterns in file paths to pinpoint relevant sections for review.

    • Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.

• Automated Scientific Effort Estimation for Code Review (World's First): Provides a measurable approach for estimating the effort required for a code review.

This tool has progressed beyond its early stages and has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and multiple new features and improvements are expected in the upcoming months.

    Additionally, the tool offers the following functionalities:

• Options to use platform-specific rules for finding areas of interest
• Options to extend or add new rules for any new or existing languages
• Generates reports in text, HTML and PDF formats for inspection

    Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki

    Feel free to contribute towards updating or adding new rules and future development.

    If you find any bugs, report them to d3basis.m0hanty@gmail.com.

    Tool Setup

    Pre-requisites

    Python3 and all the libraries listed in requirements.txt

    Setting up environment to run this tool

    1. Setup a virtual environment

    $ pip install virtualenv

    $ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
    Example: virtualenv -p python3 venv

    $ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
    Example: source venv/bin/activate

    After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $

    2. Ensure all required libraries are installed within the virtual environment

    You must run the below command after activating the virtual environment as mentioned in the previous steps.

    pip install -r requirements.txt

    Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.

    Tool Usage

$ python3 dakshscra.py -h // To view available options and arguments

    usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]

    options:
    -h, --help show this help message and exit
    -r RULE_FILE Specify platform specific rule name
    -f FILE_TYPES Specify file types to scan
    -v Specify verbosity level {'-v', '-vv', '-vvv'}
    -t TARGET_DIR Specify target directory path
    -l {R,RF}, --list {R,RF}
    List rules [R] OR rules and filetypes [RF]
    -recon Detects platform, framework and programming language used
    -estimate Estimate efforts required for code review

    Example Usage

    $ python3 dakshscra.py // To view tool usage along with examples

    Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path

# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir

# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir

# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir

# Verbosity: '-v' is default, '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir


Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles

    Reports

    The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.

    Scanning (Areas of Security Concerns) Report

    HTML Report:
    • DakshSCRA/reports/html/report.html
    PDF Report:
    • DakshSCRA/reports/html/report.pdf
    RAW TEXT Based Reports:
    • Areas of Interest - Identified Patterns : DakshSCRA/reports/text/areas_of_interest.txt
    • Areas of Interest - Project Files: DakshSCRA/reports/text/filepaths_aoi.txt
    • Identified Project Files: DakshSCRA/runtime/filepaths.txt

    Reconnaissance (Recon) Report

    • Reconnaissance Summary: /reports/text/recon.txt

    Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.

    Code Review Effort Estimation Report

    • Effort estimation report: /reports/html/estimation.html

    Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.

    Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.



    Nodesub - Command-Line Tool For Finding Subdomains In Bug Bounty Programs

    By: Zion3R


    Nodesub is a command-line tool for finding subdomains in bug bounty programs. It supports various subdomain enumeration techniques and provides flexible options for customization.


    Features

    • Perform subdomain enumeration using CIDR notation (Support input list).
    • Perform subdomain enumeration using ASN (Support input list).
    • Perform subdomain enumeration using a list of domains.

    Installation

    To install Nodesub, use the following command:

    npm install -g nodesub

    NOTE:

    • Edit File ~/.config/nodesub/config.ini

    Usage

    nodesub -h

    This will display help for the tool. Here are all the switches it supports.

    Examples
    • Enumerate subdomains for a single domain:

       nodesub -u example.com
    • Enumerate subdomains for a list of domains from a file:

       nodesub -l domains.txt
    • Perform subdomain enumeration using CIDR:

      node nodesub.js -c 192.168.0.0/24 -o subdomains.txt

      node nodesub.js -c CIDR.txt -o subdomains.txt

    • Perform subdomain enumeration using ASN:

      node nodesub.js -a AS12345 -o subdomains.txt
      node nodesub.js -a ASN.txt -o subdomains.txt
    • Enable recursive subdomain enumeration and output the results to a JSON file:

       nodesub -u example.com -r -o output.json -f json

    Output

    The tool provides various output formats for the results, including:

    • Text (txt)
    • JSON (json)
    • CSV (csv)
    • PDF (pdf)

The output file contains the resolved subdomains, the subdomains that failed to resolve, or all subdomains, depending on the options chosen.
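For instance, to write recursive enumeration results as CSV instead of JSON (assuming the -f flag accepts csv the same way it accepts json in the example above; domains.txt is a placeholder):

nodesub -l domains.txt -r -o results.csv -f csv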



    AtlasReaper - A Command-Line Tool For Reconnaissance And Targeted Write Operations On Confluence And Jira Instances

    By: Zion3R



    AtlasReaper is a command-line tool developed for offensive security purposes, primarily focused on reconnaissance of Confluence and Jira. It also provides various features that can be helpful for tasks such as credential farming and social engineering. The tool is written in C#.


    Blog post: Sowing Chaos and Reaping Rewards in Confluence and Jira

                                                       .@@@@
    @@@@@
    @@@@@ @@@@@@@
    @@@@@ @@@@@@@@@@@
    @@@@@ @@@@@@@@@@@@@@@
    @@@@, @@@@ *@@@@
    @@@@ @@@ @@ @@@ .@@@
    _ _ _ ___ @@@@@@@ @@@@@@
    /_\| |_| |__ _ __| _ \___ __ _ _ __ ___ _ _ @@ @@@@@@@@
    / _ \ _| / _` (_-< / -_) _` | '_ \/ -_) '_| @@ @@@@@@@@
    /_/ \_\__|_\__,_/__/_|_\___\__,_| .__/\___|_| @@@@@@@@ &@
    |_| @@@@@@@@@@ @@&
    @@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@. @@
    @werdhaihai

    Usage

    AtlasReaper uses commands, subcommands, and options. The format for executing commands is as follows:

    .\AtlasReaper.exe [command] [subcommand] [options]

    Replace [command], [subcommand], and [options] with the appropriate values based on the action you want to perform. For more information about each command or subcommand, use the -h or --help option.

    Below is a list of available commands and subcommands:

    Commands

Each command has subcommands for interacting with the specific product.

    • confluence
    • jira

    Subcommands

    Confluence

    • confluence attach - Attach a file to a page.
    • confluence download - Download an attachment.
    • confluence embed - Embed a 1x1 pixel image to perform farming attacks.
    • confluence link - Add a link to a page.
    • confluence listattachments - List attachments.
    • confluence listpages - List pages in Confluence.
    • confluence listspaces - List spaces in Confluence.
    • confluence search - Search Confluence.

    Jira

    • jira addcomment - Add a comment to an issue.
    • jira attach - Attach a file to an issue.
    • jira createissue - Create a new issue.
    • jira download - Download attachment(s) from an issue.
    • jira listattachments - List attachments on an issue.
    • jira listissues - List issues in Jira.
    • jira listprojects - List projects in Jira.
    • jira listusers - List Atlassian users.
    • jira searchissues - Search issues in Jira.

    Common Commands

    • help - Display more information on a specific command.

    Examples

    Here are a few examples of how to use AtlasReaper:

    • Search for a keyword in Confluence with wildcard search:

      .\AtlasReaper.exe confluence search --query "http*example.com*" --url $url --cookie $cookie

    • Attach a file to a page in Confluence:

      .\AtlasReaper.exe confluence attach --page-id "12345" --file "C:\path\to\file.exe" --url $url --cookie $cookie

    • Create a new issue in Jira:

      .\AtlasReaper.exe jira createissue --project "PROJ" --issue-type Task --message "I can't access this link from my host" --url $url --cookie $cookie

    Authentication

Confluence and Jira can be configured to allow anonymous access. You can check this by omitting the -c/--cookie option from the commands.

    In the event authentication is required, you can dump cookies from a user's browser with SharpChrome or another similar tool.

    1. .\SharpChrome.exe cookies /showall

2. Look for any cookies scoped to *.atlassian.net named cloud.session.token or tenant.session.token
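With a session cookie in hand, it can be passed to any of the commands above via the --url/--cookie options shown earlier. A sketch (the Atlassian URL and cookie value are placeholders, and the exact cookie value format is an assumption):

.\AtlasReaper.exe confluence listspaces --url https://example.atlassian.net --cookie "cloud.session.token=<token-value>"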

    Limitations

    Please note the following limitations of AtlasReaper:

    • The tool has not been thoroughly tested in all environments, so it's possible to encounter crashes or unexpected behavior. Efforts have been made to minimize these issues, but caution is advised.
    • AtlasReaper uses the cloud.session.token or tenant.session.token which can be obtained from a user's browser. Alternatively, it can use anonymous access if permitted. (API tokens or other auth is not currently supported)
    • For write operations, the username associated with the user session token (or "anonymous") will be listed.

    Contributing

    If you encounter any issues or have suggestions for improvements, please feel free to contribute by submitting a pull request or opening an issue in the AtlasReaper repo.



    Surf - Escalate Your SSRF Vulnerabilities On Modern Cloud Environments

    By: Zion3R


surf allows you to filter a list of hosts, returning a list of viable SSRF candidates. It does this by sending an HTTP request from your machine to each host, collecting all the hosts that did not respond, and then filtering them into a list of externally facing and internally facing hosts.

    You can then attempt these hosts wherever an SSRF vulnerability may be present. Due to most SSRF filters only focusing on internal or restricted IP ranges, you'll be pleasantly surprised when you get SSRF on an external IP that is not accessible via HTTP(s) from your machine.

    Often you will find that large companies with cloud environments will have external IPs for internal web apps. Traditional SSRF filters will not capture this unless these hosts are specifically added to a blacklist (which they usually never are). This is why this technique can be so powerful.


    Installation

    This tool requires go 1.19 or above as we rely on httpx to do the HTTP probing.

    It can be installed with the following command:

    go install github.com/assetnote/surf/cmd/surf@latest

    Usage

    Consider that you have subdomains for bigcorp.com inside a file named bigcorp.txt, and you want to find all the SSRF candidates for these subdomains. Here are some examples:

    # find all ssrf candidates (including external IP addresses via HTTP probing)
    surf -l bigcorp.txt
    # find all ssrf candidates (including external IP addresses via HTTP probing) with timeout and concurrency settings
    surf -l bigcorp.txt -t 10 -c 200
    # find all ssrf candidates (including external IP addresses via HTTP probing), and just print all hosts
    surf -l bigcorp.txt -d
    # find all hosts that point to an internal/private IP address (no HTTP probing)
    surf -l bigcorp.txt -x

    The full list of settings can be found below:

    ❯ surf -h

    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•— β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
    β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•” β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘
    β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•šβ•β•

    by shubs @ assetnote

    Usage: surf [--hosts FILE] [--concurrency CONCURRENCY] [--timeout SECONDS] [--retries RETRIES] [--disablehttpx] [--disableanalysis]

    Options:
    --hosts FILE, -l FILE
    List of assets (hosts or subdomains)
    --concurrency CONCURRENCY, -c CONCURRENCY
    Threads (passed down to httpx) - default 100 [default: 100]
    --timeout SECONDS, -t SECONDS
    Timeout in seconds (passed down to httpx) - default 3 [default: 3]
    --retries RETRIES, -r RETRIES
    Retries on failure (passed down to httpx) - default 2 [default: 2]
    --disablehttpx, -x Disable httpx and only output list of hosts that resolve to an internal IP address - default false [default: false]
    --disableanalysis, -d
    Disable analysis and only output list of hosts - default false [default: false]
    --help, -h display this help and exit

    Output

When running surf, it will print out the SSRF candidates to stdout, but it will also save two files inside the folder it is run from:

    • external-{timestamp}.txt - Externally resolving, but unable to send HTTP requests to from your machine
    • internal-{timestamp}.txt - Internally resolving, and obviously unable to send HTTP requests from your machine

    These two files will contain the list of hosts that are ideal SSRF candidates to try on your target. The external target list has higher chances of being viable than the internal list.

    Acknowledgements

    Under the hood, this tool leverages httpx to do the HTTP probing. It captures errors returned from httpx, and then performs some basic analysis to determine the most viable candidates for SSRF.

    This tool was created as a result of a live hacking event for HackerOne (H1-4420 2023).



    Xsubfind3R - A CLI Utility To Find Domain'S Known Subdomains From Curated Passive Online Sources

    By: Zion3R


xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.


    Features

    • Fetches domains from curated passive sources to maximize results.

    • Supports stdin and stdout for easy integration into workflows.

    • Cross-Platform (Windows, Linux & macOS).

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xsubfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

    ...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xsubfind3r.git 
    • Build the utility

       cd xsubfind3r/cmd/xsubfind3r && \
      go build .
    • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xsubfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, GitHub, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file, created upon first run, which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

    version: 0.3.0
    sources:
    - alienvault
    - anubis
    - bevigil
    - chaos
    - commoncrawl
    - crtsh
    - fullhunt
    - github
    - hackertarget
    - intelx
    - shodan
    - urlscan
    - wayback
    keys:
    bevigil:
    - awA5nvpKU3N8ygkZ
    chaos:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
    fullhunt:
    - 0d9652ce-516c-4315-b589-9b241ee6dc24
    github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
    shodan:
    - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
    urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xsubfind3r use the -h flag:

    xsubfind3r -h

    help message:


    _ __ _ _ _____
    __ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
    \ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
    > <\__ \ |_| | |_) | _| | | | | (_| |___) | |
    /_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

    USAGE:
    xsubfind3r [OPTIONS]

    INPUT:
    -d, --domain string[] target domains
    -l, --list string target domains' list file path

    SOURCES:
    --sources bool list supported sources
-u, --sources-to-use string[] comma(,) separated sources to use
-e, --sources-to-exclude string[] comma(,) separated sources to exclude

    OPTIMIZATION:
    -t, --threads int number of threads (default: 50)

    OUTPUT:
    --no-color bool disable colored output
    -o, --output string output subdomains' file path
    -O, --output-directory string output subdomains' directory path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
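As a minimal example based on the options above (example.com and the output file name are placeholders):

xsubfind3r -d example.com -o example.com.subs.txt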

    Contribution

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    Columbus-Server - API first subdomain discovery service, blazingly fast subdomain enumeration service with advanced features

    By: Zion3R


Columbus Project is an API-first, blazingly fast subdomain discovery service with advanced features.

    Columbus returned 638 subdomains of tesla.com in 0.231 sec.


    Usage

    By default Columbus returns only the subdomains in a JSON string array:

    curl 'https://columbus.elmasy.com/lookup/github.com'

But we also thought of the bash lovers: if you don't want to mess with JSON and prefer a newline-separated list, simply include the Accept: text/plain header.

    DOMAIN="github.com"

    curl -s -H "Accept: text/plain" "https://columbus.elmasy.com/lookup/$DOMAIN" | \
    while read SUB
    do
    if [[ "$SUB" == "" ]]
    then
    HOST="$DOMAIN"
    else
    HOST="${SUB}.${DOMAIN}"
    fi
    echo "$HOST"
    done

    For more, check the features or the API documentation.

    Entries

Currently, entries are obtained from Certificate Transparency.

    Command Line

    Usage of columbus-server:
    -check
    Check for updates.
    -config string
    Path to the config file.
    -version
    Print version informations.

-check: Checks the latest version on GitHub. Prints up-to-date and returns 0 if no update is required. Prints the latest tag (e.g. v0.9.1) and returns 1 if a new release is available. In case of error, prints the error message and returns 2.
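For example, a typical invocation using the -config flag above, assuming the config file location used in the Install section below:

columbus-server -config /etc/columbus/server.conf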

    Build

    git clone https://github.com/elmasy-com/columbus-server
    make build

    Install

    Create a new user:

    adduser --system --no-create-home --disabled-login columbus-server

    Create a new group:

    addgroup --system columbus

    Add the new user to the new group:

    usermod -aG columbus columbus-server

    Copy the binary to /usr/bin/columbus-server.

    Make it executable:

    chmod +x /usr/bin/columbus-server

    Create a directory:

    mkdir /etc/columbus

    Copy the config file to /etc/columbus/server.conf.

    Set the permission to 0600.

    chmod -R 0600 /etc/columbus

    Set the owner of the config file:

    chown -R columbus-server:columbus /etc/columbus

    Install the service file (eg.: /etc/systemd/system/columbus-server.service).

    cp columbus-server.service /etc/systemd/system/

    Reload systemd:

    systemctl daemon-reload

    Start columbus:

    systemctl start columbus-server

If you want Columbus to start automatically:

    systemctl enable columbus-server


    Chaos - Origin IP Scanning Utility Developed With ChatGPT

    By: Zion3R


    chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.

    An origin-IP is a term-of-art expression describing the final public IP destination for websites that are publicly served via 3rd parties. If you'd like to understand more about why anyone might be interested in Origin-IPs, please check out our blog post.

    chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.

    usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x] 
    _..._
    .-'` `'-.
    __|___________|__
    \ /
    `._ CHAOS _.'
    `-------`
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    / \\
    /_____________________\\
    CHAtgpt Origin-ip Scanner
    _______ _______ _______ _______ _______
    |\\ /|\\ /|\\ /|\\ /|\\/|
    | +---+ | +---+ | +---+ | +---+ | +---+ |
    | |H | | |U | | |M | | |A | | |N | |
    | |U | | |S | | |A | | |N | | |C | |
    | |M | | |E | | |N | | |D | | |O | |
    | |A | | |R | | |C | | | | | |L | |
    | +---+ | +---+ | +---+ | +---+ | +---+ |
    |/_____|\\_____|\\_____|\\_____|\\_____\\

    Origin IP Scanner developed with ChatGPT
    cha*os (n): complete disorder and confusion
    (ver: 0.9.4)


    Features

    • Threaded for performance gains
    • Real-time status updates and progress bars, nice for large scans ;)
    • Flexible user options for various scenarios & constraints
    • Dataset reduction for improved scan times
    • Easy to use CSV output

    Installation

    1. Download / clone / unzip / whatever
    2. cd path/to/chaos
    3. pip3 install -U pip setuptools virtualenv
    4. virtualenv env
    5. source env/bin/activate
    6. (env) pip3 install -U -r ./requirements.txt
    7. (env) ./chaos.py -h

    Options

    -h, --help            show this help message and exit
    -f FQDN, --fqdn FQDN Path to FQDN file (one FQDN per line)
    -i IP, --ip IP IP address(es) for HTTP requests (Comma-separated IPs, IP networks, and/or files with IP/network per line)
    -a AGENT, --agent AGENT
    User-Agent header value for requests
    -C, --csv Append CSV output to OUTPUT_FILE.csv
    -D, --dns Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue
    -j JITTER, --jitter JITTER
    Add a 0-N second randomized delay to the sleep value
    -o OUTPUT, --output OUTPUT
    Append console output to FILE
    -p PORTS, --ports PORTS
    Comma-separated list of TCP ports to use (default: "80,443")
-P, --no-prep Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts
    -r, --randomize Randomize(ish) the order IPs/ports are tested
    -s SLEEP, --sleep SLEEP
    Add N seconds before thread completes
    -t TIMEOUT, --timeout TIMEOUT
    Wait N seconds for an unresponsive host
    -T, --test Test-mode; don't send requests
    -v, --verbose Enable verbose output
    -x, --singlethread Single threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2

    Examples

    Localhost Testing

    Launch python HTTP server

    % python3 -u -m http.server 8001
    Serving HTTP on :: port 8001 (http://[::]:8001/) ...

    Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang

    % while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
    Ncat: Version 7.94 ( https://nmap.org/ncat )
    Ncat: Listening on [::]:8443
    Ncat: Listening on 0.0.0.0:8443

    Also launch ncat as SSL on a port that will default to HTTP detection

    % while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done    
    Ncat: Version 7.94 ( https://nmap.org/ncat )
    Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
    Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
    Ncat: Listening on [::]:8444
    Ncat: Listening on 0.0.0.0:8444

    Prepare an FQDN file:

    % cat ../test_localhost_fqdn.txt 
    www.example.com
    localhost.example.com
    localhost.local
    localhost
    notreally.arealdomain

    Prepare an IP file / list:

    % cat ../test_localhost_ips.txt 
    127.0.0.1
    127.0.0.0/29
    not_an_ip_addr
    -6.a
    =4.2
    ::1

    Run the scan

    • Note an IPv6 network added to IPs on the CLI
    • -p to specify the ports we are listening on
    • -x for single threaded run to give our ncat servers time to restart
    • -s0.2 short sleep for our ncat servers to restart
    • -t1 to timeout after 1 second
    % ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1   
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
    2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
    2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
    2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
    2023-06-21 12:48:33 [INFO] * Version: 0.9.4
    2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
    2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
    2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
    2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
    2023-06-21 12:48:33 [INFO] * Thread(s): 1
    2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
    2023-06-21 12:48:33 [INFO] * Timeout: 1.0
    2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
    2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
    2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
Prep Tests: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 36/36 [00:29<00:00, 1.20it/s]
    2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
    2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
    2023-06-21 12:49:03 [INFO] Queuing 18 threads
    ++RCVD++ (200 OK) www.example.com @ :::8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
    ++RCVD++ (202 OK) www.example.com @ :::8444
    ++RCVD++ (200 OK) www.example.com @ ::1:8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
    ++RCVD++ (202 OK) www.example.com @ ::1:8444
    ++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
    ++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
    ++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
    ++RCVD++ (200 OK) localhost.example.com @ :::8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
++RCVD++ (202 OK) localhost.example.com @ :::8444
    ++RCVD++ (200 OK) localhost.example.com @ ::1:8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
    ++RCVD++ (202 OK) localhost.example.com @ ::1:8444
    ++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
    ++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
    ++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
Origin Scan: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 18/18 [00:06<00:00, 2.76it/s]
    2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
    ::1
    ::1:8444 => (202 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8001 => (200 / OK)

    127.0.0.1
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)

    ::
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)

    www.example.com
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)
    ::1:8001 => (200 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8444 => (202 / OK)
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)

    localhost.example.com
    :::8001 => (200 / OK)
    :::8443 => (204 / Plaintext OK)
    :::8444 => (202 / OK)
    ::1:8001 => (200 / OK)
    ::1:8443 => (204 / Plaintext OK)
    ::1:8444 => (202 / OK)
    127.0.0.1:8001 => (200 / OK)
    127.0.0.1:8443 => (204 / Plaintext OK)
    127.0.0.1:8444 => (202 / OK)


    rst@r57 chaos %

    Test & Verbose localhost

    -T runs in test mode (do everything except send requests)

    -v verbose option provides additional output


    Known Defects

    • HTTP/HTTPS detection is not ideal
    • Need option to adjust CSV newline delimiter
    • Need options to adjust where long strings / many lines are truncated
    • Try to figure out why we marked requests v2.x as required ;)
    • Options for very-verbose / quiet
    • Stagger thread launch when we're using sleep / jitter
    • Search for meta-refresh in 200 responses
    • Content-Location header for 201s ?
    • Improve thread name generation so we have the right number of unique names
    • Sanity check on IPv6 netmasks to prevent scans that outlive the sun?
    • TBD?

    Related Links

    Disclaimers

    • Copyright (C) 2023 RST
    • This software is distributed on an "AS IS" basis, without express or implied warranties of any kind
    • This software is intended for research and/or authorized testing; it is your responsibility to ensure you are authorized to use this software in any way
    • By using this software you acknowledge that you are responsible for your actions and assume all liability for any direct, indirect, or other damages


    Xurlfind3R - A CLI Utility To Find Domain'S Known URLs From Curated Passive Online Sources

    By: Zion3R


xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.


    Features

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xurlfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

    ...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xurlfind3r.git 
    • Build the utility

       cd xurlfind3r/cmd/xurlfind3r && \
      go build .
    • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xurlfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

xurlfind3r will work right after installation. However, BeVigil, GitHub and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file, created upon first run, which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

    version: 0.2.0
    sources:
    - bevigil
    - commoncrawl
    - github
    - intelx
    - otx
    - urlscan
    - wayback
    keys:
    bevigil:
    - awA5nvpKU3N8ygkZ
    github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
    urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xurlfind3r use the -h flag:

    xurlfind3r -h

    help message:

                     _  __ _           _ _____      
    __ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
    \ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
    > <| |_| | | | | _| | | | | (_| |___) | |
    /_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

    USAGE:
    xurlfind3r [OPTIONS]

    TARGET:
    -d, --domain string (sub)domain to match URLs

    SCOPE:
    --include-subdomains bool match subdomain's URLs

    SOURCES:
    -s, --sources bool list sources
    -u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
    --skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
--skip-wayback-source bool with wayback, skip parsing source code snapshots

    FILTER & MATCH:
    -f, --filter string regex to filter URLs
    -m, --match string regex to match URLs

    OUTPUT:
    --no-color bool no color mode
    -o, --output string output URLs file path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

    Examples

    Basic

    xurlfind3r -d hackerone.com --include-subdomains

    Filter Regex

    # filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

    Match Regex

    # match js URLs
    xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'

    Contributing

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    AiCEF - An AI-assisted cyber exercise content generation framework using named entity recognition

    By: Zion3R


    AiCEF is a tool implementing the accompanying framework [1] in order to harness the intelligence that is available from online resources, as well as threat groups' activities, arsenal (eg. MITRE), to create relevant and timely cybersecurity exercise content. This way, we abstract the events from the reports in a machine-readable form. The produced graphs can be infused with additional intelligence, e.g. the threat actor profile from MITRE, also mapped in our ontology. While this may fill gaps that would be missing from a report, one can also manipulate the graph to create custom and unique models. Finally, we exploit transformer-based language models like GPT to convert the graph into text that can serve as the scenario of a cybersecurity exercise. We have tested and validated AiCEF with a group of experts in cybersecurity exercises, and the results clearly show that AiCEF significantly augments the capabilities in creating timely and relevant cybersecurity exercises in terms of both quality and time.

    We used Python to create a machine-learning-powered Exercise Generation Framework and developed a set of tools to perform a set of individual tasks which would help an exercise planner (EP) to create a timely and targeted Cybersecurity Exercise Scenario, regardless of her experience.


    Problems an Exercise Planner faces:

    • Constant table-top research to have fresh content
    • Realistic CSE scenario creation can be difficult and time-consuming
    • Meeting objectives but also keeping it appealing for the target audience
    • Is the relevance and timeliness aspects considered?
    • Can all the above be automated?

    Our Main Objective: Build an AI powered tool that can generate relevant and up-to-date Cyber Exercise Content in a few steps with little technical expertise from the user.

    Release Roadmap

The updated project, AiCEF v.2.0, is planned to be publicly released by the end of 2023, pending heavy code review and functionality updates. Submodules with reduced functionality will start being released by early June 2023. Thank you for your patience.

    Installation

The most convenient way to install AiCEF is by using the docker-compose command. For production deployment, we advise you to deploy MySQL manually in a dedicated environment and then start the other components using Docker.

    First, make sure you have docker-compose installed in your environment:

    
    Linux:

    $ sudo apt-get install docker-compose

    Then, clone the repository:

    $ git clone https://github.com/grazvan/AiCEF/docker.git /<choose-a-path>/AiCEF-docker
    $ cd /<choose-a-path>/AiCEF-docker

    Configure the environment settings

Import the MySQL dump into your MySQL server:

$ mysql -u <your_username> --password=<your_password> AiCEF_db < AiCEF_db.sql

Before running the docker-compose command, the settings must be configured. Copy the sample settings file and change it according to your needs.

    $ cp .env.sample .env

    Run AiCEF

Note: Make sure you have an OpenAI API key available. Load the environment settings (including your MySQL connection details):

    set -a ; source .env

    Finally, run docker-compose in detached (-d) mode:

    $ sudo docker-compose up -d
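To confirm the stack came up, you can list the running containers (a generic docker-compose check; the actual service names depend on the compose file):

$ sudo docker-compose ps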

    Usage

    A common usage flow consists of generating a Trend Report to analyze patterns over time, parsing relevant articles and converting them into Incident Breadcrumbs using MLTP module and storing them in a knowledge database called KDb. Incidents are then generated using IncGen component and can be enhanced using the Graph Enhancer module to simulate known APT activity. The incidents come with injects that can be edited on the fly. The CSE scenario is then created using CEGen, which defines various attributes like CSE name, number of Events, and Incidents. MLCESO is a crucial step in the methodology where dedicated ML models are trained to extract information from the collected articles with over 80% accuracy. The Incident Generation & Enhancer (IncGen) workflow can be automated, generating a variety of incidents based on filtering parameters and the existing database. The knowledge database (KDB) consists of almost 3000 articles classified into six categories that can be augmented using APT Enhancer by using the activity of known APT groups from MITRE or manually.

    Find below some sample usage screenshots:

    Features

    • An AI-powered Cyber Exercise Generation Framework
    • Developed in Python & EEL
    • Open source library Stixview
    • Stores data in MYSQL
    • API to Text Synthesis Models (ex. GPT-3.5)
    • Can create incidents based on TTPs of 125 known APT actors
    • Models Cyber Exercise Content in machine readable STIX2.1 [2] (.json) and human readable format (.pdf)

    Authors

    AiCEF is a product designed and developed by Alex Zacharis, Razvan Gavrila and Constantinos Patsakis.

    References

    [1] https://link.springer.com/article/10.1007/s10207-023-00693-z

    [2] https://oasis-open.github.io/cti-documentation/stix/intro.html

    Contributing

    Contributions are welcome! If you'd like to contribute to AiCEF v2.0, please follow these steps:

    1. Fork this repository
    2. Create a new branch (git checkout -b feature/your-branch-name)
    3. Make your changes and commit them (git commit -m 'Add some feature')
    4. Push to the branch (git push origin feature/your-branch-name)
    5. Open a new pull request

    License

    AiCEF is licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. See for more information.

    Under the following terms:

• Attribution β€” You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
• NonCommercial β€” You may not use the material for commercial purposes.
• No additional restrictions β€” You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.



    Polaris - Validation Of Best Practices In Your Kubernetes Clusters

    By: Zion3R

Polaris is an open source policy engine for Kubernetes that validates and remediates resource configuration. It includes 30+ built-in configuration policies, as well as the ability to build custom policies with JSON Schema. When run on the command line or as a mutating webhook, Polaris can automatically remediate issues based on policy criteria.

    Polaris can be run in three different modes:

    • As a dashboard - Validate Kubernetes resources against policy-as-code.
    • As an admission controller - Automatically reject or modify workloads that don't adhere to your organization's policies.
    • As a command-line tool - Incorporate policy-as-code into the CI/CD process to test local YAML files.
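For example, in command-line mode a directory of local manifests can be audited like this (a minimal sketch; polaris audit and --audit-path are standard Polaris CLI usage, while ./deploy/ is a placeholder path):

polaris audit --audit-path ./deploy/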


    Documentation

    Check out the documentation at docs.fairwinds.com

    Join the Fairwinds Open Source Community

    The goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap, and network with fellow Kubernetes users. Chat with us on Slack or join the user group to get involved!

    Other Projects from Fairwinds

    Enjoying Polaris? Check out some of our other projects:

• Goldilocks - Right-size your Kubernetes Deployments by comparing your memory and CPU settings against actual usage
    • Pluto - Detect Kubernetes resources that have been deprecated or removed in future versions
    • Nova - Check to see if any of your Helm charts have updates available
    • rbac-manager - Simplify the management of RBAC in your Kubernetes clusters

    Or check out the full list

    Fairwinds Insights

    If you're interested in running Polaris in multiple clusters, tracking the results over time, integrating with Slack, Datadog, and Jira, or unlocking other functionality, check out Fairwinds Insights, a platform for auditing and enforcing policy in Kubernetes clusters.



    Artemis - A Modular Web Reconnaissance Tool And Vulnerability Scanner

    By: Zion3R


    A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).

    The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.

    Artemis is experimental software, under active development - use at your own risk.

    Features

    For an up-to-date list of features, please refer to the documentation.

    Development

    Tests

    To run the tests, use:

    ./scripts/test

    Code formatting

    Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.

    To run it locally, use:

    pre-commit run --all-files

    To setup pre-commit so that it runs before each commit, use:

    pre-commit install

    Building the docs

    To build the documentation, use:

    cd docs
    python3 -m venv venv
    . venv/bin/activate
    pip install -r requirements.txt
    make html

    How do I write my own module?

    Please refer to the documentation.
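
    Since Artemis is built on Karton, its modules follow the Karton consumer pattern. The sketch below only illustrates that underlying pattern and is not Artemis's actual module API; the real base class, filter keys, and payload layout are described in the Artemis documentation:

      from karton.core import Karton, Task  # pip install karton-core

      class ExampleScanner(Karton):
          # Queue identity and routing filters below are illustrative values
          identity = "example-scanner"
          filters = [{"type": "service", "service": "http"}]

          def process(self, task: Task) -> None:
              # The payload key "host" is an assumption made for this sketch
              host = task.get_payload("host")
              self.log.info("Scanning %s", host)
              # ... perform the actual check and emit findings here ...

      if __name__ == "__main__":
          ExampleScanner().loop()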

    Contributing

    Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.

    However obvious it may seem, we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.



    ReconAIzer - A Burp Suite Extension To Add OpenAI (GPT) On Burp And Help You With Your Bug Bounty Recon To Discover Endpoints, Params, URLs, Subdomains And More!

    By: Zion3R


    ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.

    Once installed, ReconAIzer adds a contextual menu and a dedicated tab where you can see the results.


    Prerequisites

    • Burp Suite
    • Jython Standalone Jar

    Installation

    Follow these steps to install the ReconAIzer extension on Burp Suite:

    Step 1: Download Jython

    1. Download the latest Jython Standalone Jar from the official website: https://www.jython.org/download
    2. Save the Jython Standalone Jar file in a convenient location on your computer.

    Step 2: Configure Jython in Burp Suite

    1. Open Burp Suite.
    2. Go to the "Extensions" tab.
    3. Click on the "Extensions settings" sub-tab.
    4. Under "Python Environment," click on the "Select file..." button next to "Location of the Jython standalone JAR file."
    5. Browse to the location where you saved the Jython Standalone Jar file in Step 1 and select it.
    6. Wait for the "Python Environment" status to change to "Jython (version x.x.x) successfully loaded," where x.x.x represents the Jython version.

    Step 3: Download and Install ReconAIzer

    1. Download the latest release of ReconAIzer
    2. Open Burp Suite
    3. Go back to the "Extensions" tab in Burp Suite.
    4. Click the "Add" button.
    5. In the "Add extension" dialog, select "Python" as the "Extension type."
    6. Click on the "Select file..." button next to "Extension file" and browse to the location where you saved the ReconAIzer.py file in Step 3.1. Select the file and click "Open."
    7. Make sure the "Load" checkbox is selected and click the "Next" button.
    8. Wait for the extension to be loaded. You should see a message in the "Output" section stating that the ReconAIzer extension has been successfully loaded.

    Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.

    Once that is done, you must configure your OpenAI API key in the "Config" tab under the "ReconAIzer" tab.

    Feel free to suggest prompts improvements or anything you would like to see on ReconAIzer!

    Happy bug hunting!



    Burpgpt - A Burp Suite Extension That Integrates OpenAI's GPT To Perform An Additional Passive Scan For Discovering Highly Bespoke Vulnerabilities, And Enables Running Traffic-Based Analysis Of Any Type

    By: Zion3R


    burpgpt leverages the power of AI to detect security vulnerabilities that traditional scanners might miss. It sends web traffic to an OpenAI model specified by the user, enabling sophisticated analysis within the passive scanner. This extension offers customisable prompts that enable tailored web traffic analysis to meet the specific needs of each user. Check out the Example Use Cases section for inspiration.

    The extension generates an automated security report that summarises potential security issues based on the user's prompt and real-time data from Burp-issued requests. By leveraging AI and natural language processing, the extension streamlines the security assessment process and provides security professionals with a higher-level overview of the scanned application or endpoint. This enables them to more easily identify potential security issues and prioritise their analysis, while also covering a larger potential attack surface.

    [!WARNING] Data traffic is sent to OpenAI for analysis. If you have concerns about this, or are using the extension for security-critical applications, carefully review OpenAI's Privacy Policy before use.

    [!WARNING] While the report is automated, it still requires triaging and post-processing by security professionals, as it may contain false positives.

    [!WARNING] The effectiveness of this extension is heavily reliant on the quality and precision of the prompts created by the user for the selected GPT model. This targeted approach will help ensure the GPT model generates accurate and valuable results for your security analysis.


    Features

    • Adds a passive scan check, allowing users to submit HTTP data to an OpenAI-controlled GPT model for analysis through a placeholder system.
    • Leverages the power of OpenAI's GPT models to conduct comprehensive traffic analysis, enabling detection of various issues beyond just security vulnerabilities in scanned applications.
    • Enables granular control over the number of GPT tokens used in the analysis by allowing for precise adjustments of the maximum prompt length.
    • Offers users multiple OpenAI models to choose from, allowing them to select the one that best suits their needs.
    • Empowers users to customise prompts and unleash limitless possibilities for interacting with OpenAI models. Browse through the Example Use Cases for inspiration.
    • Integrates with Burp Suite, providing all native features for pre- and post-processing, including displaying analysis results directly within the Burp UI for efficient analysis.
    • Provides troubleshooting functionality via the native Burp Event Log, enabling users to quickly resolve communication issues with the OpenAI API.

    Requirements

    1. System requirements:
    • Operating System: Compatible with Linux, macOS, and Windows operating systems.

    • Java Development Kit (JDK): Version 11 or later.

    • Burp Suite Professional or Community Edition: Version 2023.3.2 or later.

      [!IMPORTANT] Please note that using any version lower than 2023.3.2 may result in a java.lang.NoSuchMethodError. It is crucial to use the specified version or a more recent one to avoid this issue.

    2. Build tool:
    • Gradle: Version 6.9 or later (recommended). The build.gradle file is provided in the project repository.
    3. Environment variables:
    • Set up the JAVA_HOME environment variable to point to the JDK installation directory (see the example below).
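
    For example, on a Unix-like system (the JDK path below is purely illustrative; point it at your actual JDK 11+ installation):

      export JAVA_HOME=/usr/lib/jvm/jdk-17
      export PATH="$JAVA_HOME/bin:$PATH"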

    Please ensure that all system requirements, including a compatible version of Burp Suite, are met before building and running the project. Note that the project's external dependencies will be automatically managed and installed by Gradle during the build process. Adhering to the requirements will help avoid potential issues and reduce the need for opening new issues in the project repository.

    Installation

    1. Compilation

    1. Ensure you have Gradle installed and configured.

    2. Download the burpgpt repository:

      git clone https://github.com/aress31/burpgpt
      cd .\burpgpt\
    3. Build the standalone jar:

      ./gradlew shadowJar

    2. Loading the Extension Into Burp Suite

    To install burpgpt in Burp Suite, first go to the Extensions tab and click on the Add button. Then, select the burpgpt-all jar file located in the .\lib\build\libs folder to load the extension.

    Usage

    To start using burpgpt, users need to complete the following steps in the Settings panel, which can be accessed from the Burp Suite menu bar:

    1. Enter a valid OpenAI API key.
    2. Select a model.
    3. Define the max prompt size. This field controls the maximum prompt length sent to OpenAI to avoid exceeding the maxTokens of GPT models (typically around 2048 for GPT-3).
    4. Adjust or create custom prompts according to your requirements.

    Once configured as outlined above, the Burp passive scanner sends each request to the chosen OpenAI model via the OpenAI API for analysis, producing Informational-level severity findings based on the results.

    Prompt Configuration

    burpgpt enables users to tailor the prompt for traffic analysis using a placeholder system. To include relevant information, we recommend using these placeholders, which the extension handles directly, allowing dynamic insertion of specific values into the prompt:

    • {REQUEST} - The scanned request.
    • {URL} - The URL of the scanned request.
    • {METHOD} - The HTTP request method used in the scanned request.
    • {REQUEST_HEADERS} - The headers of the scanned request.
    • {REQUEST_BODY} - The body of the scanned request.
    • {RESPONSE} - The scanned response.
    • {RESPONSE_HEADERS} - The headers of the scanned response.
    • {RESPONSE_BODY} - The body of the scanned response.
    • {IS_TRUNCATED_PROMPT} - A boolean value that is programmatically set to true or false to indicate whether the prompt was truncated to the Maximum Prompt Size defined in the Settings.

    These placeholders can be used in the custom prompt to dynamically generate a request/response analysis prompt that is specific to the scanned request.
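
    For instance, a minimal custom prompt built from these placeholders could look like the following (purely illustrative; the Example Use Cases below show more targeted prompts):

      Analyse the following HTTP exchange for potential security issues and report any findings:

      Request: {REQUEST}
      Response: {RESPONSE}
      Prompt truncated: {IS_TRUNCATED_PROMPT}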

    [!NOTE] Burp Suite provides the capability to support arbitrary placeholders through the use of Session handling rules or extensions such as Custom Parameter Handler, allowing for even greater customisation of the prompts.

    Example Use Cases

    The following list of example use cases showcases the bespoke and highly customisable nature of burpgpt, which enables users to tailor their web traffic analysis to meet their specific needs.

    • Identifying potential vulnerabilities in web applications that use a crypto library affected by a specific CVE:

      Analyse the request and response data for potential security vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER}:

      Web Application URL: {URL}
      Crypto Library Name: {CRYPTO_LIBRARY_NAME}
      CVE Number: CVE-{CVE_NUMBER}
      Request Headers: {REQUEST_HEADERS}
      Response Headers: {RESPONSE_HEADERS}
      Request Body: {REQUEST_BODY}
      Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER} in the request and response data and report them.
    • Scanning for vulnerabilities in web applications that use biometric authentication by analysing request and response data related to the authentication process:

      Analyse the request and response data for potential security vulnerabilities related to the biometric authentication process:

      Web Application URL: {URL}
      Biometric Authentication Request Headers: {REQUEST_HEADERS}
      Biometric Authentication Response Headers: {RESPONSE_HEADERS}
      Biometric Authentication Request Body: {REQUEST_BODY}
      Biometric Authentication Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the biometric authentication process in the request and response data and report them.
    • Analysing the request and response data exchanged between serverless functions for potential security vulnerabilities:

      Analyse the request and response data exchanged between serverless functions for potential security vulnerabilities:

      Serverless Function A URL: {URL}
      Serverless Function B URL: {URL}
      Serverless Function A Request Headers: {REQUEST_HEADERS}
      Serverless Function B Response Headers: {RESPONSE_HEADERS}
      Serverless Function A Request Body: {REQUEST_BODY}
      Serverless Function B Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities in the data exchanged between the two serverless functions and report them.
    • Analysing the request and response data for potential security vulnerabilities specific to a Single-Page Application (SPA) framework:

      Analyse the request and response data for potential security vulnerabilities specific to the {SPA_FRAMEWORK_NAME} SPA framework:

      Web Application URL: {URL}
      SPA Framework Name: {SPA_FRAMEWORK_NAME}
      Request Headers: {REQUEST_HEADERS}
      Response Headers: {RESPONSE_HEADERS}
      Request Body: {REQUEST_BODY}
      Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the {SPA_FRAMEWORK_NAME} SPA framework in the request and response data and report them.

    Roadmap

    • Add a new field to the Settings panel that allows users to set the maxTokens limit for requests, thereby limiting the request size.
    • Add support for connecting to a local instance of the AI model, allowing users to run and interact with the model on their local machines, potentially improving response times and data privacy.
    • Retrieve the precise maxTokens value for each model to transmit the maximum allowable data and obtain the most extensive GPT response possible.
    • Implement persistent configuration storage to preserve settings across Burp Suite restarts.
    • Enhance the code for accurate parsing of GPT responses into the Vulnerability model for improved reporting.

    Project Information

    The extension is currently under development and we welcome feedback, comments, and contributions to make it even better.

    Sponsor

    If this extension has saved you time and hassle during a security assessment, consider showing some love by sponsoring a cup of coffee for the developer. It's the fuel that powers development, after all. Just hit that shiny Sponsor button at the top of the page or click here to contribute and keep the caffeine flowing.

    Reporting Issues

    Did you find a bug? Well, don't just let it crawl around! Let's squash it together like a couple of bug whisperers!

    Please report any issues on the GitHub issues tracker. Together, we'll make this extension as reliable as a cockroach surviving a nuclear apocalypse!

    Contributing

    Looking to make a splash with your mad coding skills?

    Awesome! Contributions are welcome and greatly appreciated. Please submit all PRs on the GitHub pull requests tracker. Together we can make this extension even more amazing!

    License

    See LICENSE.


