secator is a task and workflow runner for security assessments. It supports dozens of well-known security tools and is designed to improve productivity for pentesters and security researchers.
- Curated list of commands
- Unified input options
- Unified output schema
- CLI and library usage
- Distributed options with Celery
- Scales from simple tasks to complex workflows
secator integrates the following tools:
Name | Description | Category |
---|---|---|
httpx | Fast HTTP prober. | http |
cariddi | Fast crawler and endpoint secrets / api keys / tokens matcher. | http/crawler |
gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler |
gospider | Fast web spider written in Go. | http/crawler |
katana | Next-generation crawling and spidering framework. | http/crawler |
dirsearch | Web path discovery. | http/fuzzer |
feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer |
ffuf | Fast web fuzzer written in Go. | http/fuzzer |
h8mail | Email OSINT and breach hunting tool. | osint |
dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns |
dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns |
subfinder | Fast subdomain finder. | recon/dns |
fping | Find alive hosts on local networks. | recon/ip |
mapcidr | Expand CIDR ranges into IPs. | recon/ip |
naabu | Fast port discovery tool. | recon/port |
maigret | Hunt for user accounts across many websites. | recon/user |
gf | A wrapper around grep to avoid typing common patterns. | tagger |
grype | A vulnerability scanner for container images and filesystems. | vuln/code |
dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http |
msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http |
wpscan | WordPress Security Scanner | vuln/multi |
nmap | Vulnerability scanner using NSE scripts. | vuln/multi |
nuclei | Fast and customisable vulnerability scanner based on simple YAML based DSL. | vuln/multi |
searchsploit | Exploit searcher. | exploit/search |
Feel free to request new tools by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).
pipx install secator
pip install secator
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run: alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal: secator --help
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help
Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.
secator uses external tools, so you might need to install the languages those tools are written in if they are not already present on your system.
We provide utilities to install the required languages if you don't manage them externally:
secator install langs go
secator install langs ruby
secator does not install any of the external tools it supports by default.
We provide utilities to install or update each supported tool, which should work on all systems supporting apt:
secator install tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use: secator install tools httpx
Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.
secator comes installed with the minimum amount of dependencies.
There are several addons available for secator:
secator install addons worker
secator install addons google
secator install addons mongodb
secator install addons redis
secator install addons dev
secator install addons trace
secator install addons build
secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:
secator install cves
To figure out which languages or tools are installed on your system (along with their version):
secator health
secator --help
Run a fuzzing task (ffuf):
secator x ffuf http://testphp.vulnweb.com/FUZZ
Run a url crawl workflow:
secator w url_crawl http://testphp.vulnweb.com
Run a host scan:
secator s host mydomain.com
And more... To list all the tasks / workflows / scans you can use:
secator x --help
secator w --help
secator s --help
To go deeper with secator, check out:
- Our complete documentation
- Our getting started tutorial video
- Our Medium post
- Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube
Reconnaissance is the first phase of penetration testing: gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. Ashok-v1.1 includes an advanced Google dorker and a Wayback crawling machine.
- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers
~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt
A detailed usage guide, including an index of options, is available in the Usage section of the Wiki.
Ashok can be launched using a lightweight Python3.8-Alpine Docker image.
$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help
Subdomain Discovery:
Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.
Site-wide Link Discovery:
Collects all links throughout the website based on the provided whitelist and the specified max_depth.
Form and Input Extraction:
Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool (a rough sketch of this step follows the notes below).
XSS Scanning:
Note: The scanning functionality is currently inactive on SPA (Single Page Application) web applications, and we have only tested it on websites developed with PHP, yielding remarkable results. In the future, we plan to incorporate these features into the tool.
Note: This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list includes common file types such as images, stylesheets, and scripts (".css", ".js", ".mp4", ".zip", "png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"). You can customize this list to better suit your needs by editing the setting.json file.
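The form/input extraction step described above boils down to parsing each crawled page for form elements. As a rough illustration only (not X-Recon's actual code or output schema), a minimal sketch using requests and BeautifulSoup might look like this:
import json
import requests
from bs4 import BeautifulSoup

def extract_forms(url):
    """Fetch a page and return its forms and inputs as plain dicts."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    forms = []
    for form in soup.find_all("form"):
        forms.append({
            "action": form.get("action", ""),
            "method": form.get("method", "get").lower(),
            "inputs": [
                {"name": field.get("name"), "type": field.get("type", "text")}
                for field in form.find_all(["input", "textarea", "select"])
            ],
        })
    return forms

if __name__ == "__main__":
    # Example target taken from the usage section below.
    print(json.dumps(extract_forms("http://testphp.vulnweb.com"), indent=2))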
$ git clone https://github.com/joshkar/X-Recon
$ cd X-Recon
$ python3 -m pip install -r requirements.txt
$ python3 xr.py
You can use this address in the Get URL section: http://testphp.vulnweb.com
SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.
SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploits. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.
git clone https://github.com/justakazh/sicat.git && cd sicat
pip install -r requirements.txt
~$ python sicat.py --help
Command | Description |
---|---|
-h | Show help message and exit |
-k KEYWORD | |
-kv KEYWORD_VERSION | |
-nm | Identify via nmap output |
--nvd | Use NVD as info source |
--packetstorm | Use PacketStorm as info source |
--exploitdb | Use ExploitDB as info source |
--exploitalert | Use ExploitAlert as info source |
--msfmodule | Use Metasploit as info source |
-o OUTPUT | Path to save output to |
-ot OUTPUT_TYPE | Output file type: json or html |
From keyword
python sicat.py -k telerik --exploitdb --msfmodule
From nmap output
nmap --open -sV localhost -oX nmap_out.xml
python sicat.py -nm nmap_out.xml --packetstorm
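To give a feel for the -nm workflow, the snippet below shows how service names and versions can be pulled out of an nmap -oX report and turned into search keywords. This is only a standard-library sketch of the idea, not SiCat's implementation:
import sys
import xml.etree.ElementTree as ET

def keywords_from_nmap(xml_path):
    """Yield 'product version' strings from an nmap -oX report."""
    root = ET.parse(xml_path).getroot()
    for service in root.iter("service"):
        product = service.get("product")
        version = service.get("version", "")
        if product:
            yield f"{product} {version}".strip()

if __name__ == "__main__":
    for keyword in sorted(set(keywords_from_nmap(sys.argv[1]))):
        # Each keyword could then be fed to an exploit search, e.g. sicat -k "<keyword>"
        print(keyword)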
I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.
During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and thus the idea of Dorkish was born.
Dorkish is a Chrome extension that facilitates custom dork creation for Google and Shodan using its builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.
1- Clone the repository
git clone https://github.com/yousseflahouifi/dorkish.git
2- Go to chrome://extensions/ and enable the Developer mode in the top right corner.
3- Click on the Load unpacked extension button and select the dorkish folder.
Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/
Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.
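The extension does all of this in the browser, but the underlying idea is simply composing a query string and handing it to the chosen search engine. A small Python sketch of that idea (the dork fragments below are only examples):
from urllib.parse import quote_plus

def google_dork_url(*parts):
    """Join dork fragments and return a Google search URL."""
    return "https://www.google.com/search?q=" + quote_plus(" ".join(parts))

def shodan_dork_url(*parts):
    """Join dork fragments and return a Shodan search URL."""
    return "https://www.shodan.io/search?query=" + quote_plus(" ".join(parts))

print(google_dork_url("site:example.com", "inurl:admin"))
print(shodan_dork_url('ssl:"example.com"', "port:8443"))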
I have built some dorks and used some public resources to gather others. Here are a few:
- https://github.com/lothos612/shodan
- https://github.com/TakSec/google-dorks-bug-bounty
pip3 install swaggerhole
or clone this repository and install from source:
git clone https://github.com/Liodeus/swaggerHole.git
pip3 install .
_____ _ __ ____ _ ____ _ ____ _ ___ _____
/ ___/| | /| / // __ `// __ `// __ `// _ \ / ___/
(__ ) | |/ |/ // /_/ // /_/ // /_/ // __// /
/____/ |__/|__/ \__,_/ \__, / \__, / \___//_/
__ __ __ /____/ /____/
/ / / /____ / /___
/ /_/ // __ \ / // _ \
/ __ // /_/ // // __/
/_/ /_/ \____//_/ \___/
usage: swaggerhole [-h] [-s SEARCH] [-o OUT] [-t THREADS] [-j] [-q] [-du] [-de]
optional arguments:
-h, --help show this help message and exit
-s SEARCH, --search SEARCH
Term to search
-o OUT, --out OUT Output directory
-t THREADS, --threads THREADS
Threads number (Default 25)
-j, --json Json output
-q, --quiet Remove banner
-du, --deactivate_url
Deactivate the URL filtering
-de, --deactivate_email
Deactivate the email filtering
swaggerHole -s test.com
echo test.com | swaggerHole
swaggerHole -s test.com --json
echo test.com | swaggerHole --json
swaggerHole -s test.com -t 100
echo test.com | swaggerHole -t 100
To know more about our Attack Surface Management platform, check out NVADR.
Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them in parallel when the commands do not depend on each other.
To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:
go install github.com/devanshbatham/rayder@v0.0.4
Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:
rayder -w path/to/workflow.yaml
A workflow is defined in a YAML file with the following structure:
vars:
  VAR_NAME: value
  # Add more variables...

parallel: true|false

modules:
  - name: task-name
    cmds:
      - command-1
      - command-2
      # Add more commands...
    silent: true|false
  # Add more modules...
Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).
To define variables, add them to the vars section of your workflow YAML file:
vars:
  VAR_NAME: value
  ANOTHER_VAR: another_value
  # Add more variables...
You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:
modules:
  - name: example-task
    cmds:
      - echo "Output directory {{OUTPUT_DIR}}"
You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:
rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value
If you don't provide values for variables via the command line, Rayder will automatically apply the default values defined in the vars section of your workflow YAML file.
Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
Here's an example of how you can define, reference, and supply variables in your workflow configuration:
vars:
  ORG: "example.org"
  OUTPUT_DIR: "results"

modules:
  - name: example-task
    cmds:
      - echo "Organization {{ORG}}"
      - echo "Output directory {{OUTPUT_DIR}}"
When executing the workflow, you can provide values for ORG and OUTPUT_DIR via the command line like this:
rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir
This will override the default values and use the provided values for these variables.
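For intuition, the precedence behaviour described above (CLI overrides beat YAML defaults, and {{VAR}} placeholders are expanded inside commands) can be sketched in a few lines of Python. This is an illustration under those assumptions, not Rayder's Go source:
import re
import sys
import yaml  # PyYAML

def resolve_vars(workflow_path, cli_args):
    """Merge YAML 'vars' defaults with KEY=value overrides from the CLI."""
    with open(workflow_path) as f:
        workflow = yaml.safe_load(f)
    variables = dict(workflow.get("vars", {}))
    for arg in cli_args:                      # e.g. ["ORG=custom_org", "OUTPUT_DIR=custom_results_dir"]
        key, _, value = arg.partition("=")
        variables[key] = value                # CLI value overrides the YAML default
    return workflow, variables

def render(command, variables):
    """Replace {{VAR}} placeholders inside a command string."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables.get(m.group(1), m.group(0))), command)

if __name__ == "__main__":
    workflow, variables = resolve_vars(sys.argv[1], sys.argv[2:])
    for module in workflow.get("modules", []):
        for cmd in module.get("cmds", []):
            print(render(cmd, variables))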
Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:
vars:
  ORG: "Acme, Inc"
  OUTPUT_DIR: "results-dir"

parallel: false

modules:
  - name: reverse-whois
    silent: false
    cmds:
      - mkdir -p {{OUTPUT_DIR}}
      - revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt
  - name: finding-subdomains
    cmds:
      - xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
    silent: false
  - name: cleaning-subdomains
    cmds:
      - cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
      - rm *.out
    silent: true
  - name: resolving-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
    silent: false
  - name: checking-alive-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
    silent: false
To execute the above workflow, run the following command:
rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results
The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.
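The semantics of the parallel flag can be illustrated with a small Python sketch (Rayder itself is written in Go, so this is only an analogy): run every module concurrently when parallel is true, otherwise strictly in order.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_module(module):
    """Run a module's commands one after another, like a single Rayder module."""
    for cmd in module["cmds"]:
        subprocess.run(cmd, shell=True, check=False)

def run_workflow(modules, parallel):
    if parallel:
        # No inter-module dependencies: launch every module at once.
        with ThreadPoolExecutor(max_workers=len(modules) or 1) as pool:
            list(pool.map(run_module, modules))
    else:
        # Dependent modules: execute one after another.
        for module in modules:
            run_module(module)

run_workflow([{"cmds": ["echo first"]}, {"cmds": ["echo second"]}], parallel=False)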
Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!
Inspiration of this project comes from Awesome taskfile project.
Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypassing modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
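Most of the extraction described above comes down to fetching pages and applying regular expressions. As a rough, single-page sketch of that idea (not Uscrapper's code, and the patterns are deliberately simple):
import re
import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrape_contacts(url):
    """Pull email addresses and phone-like strings from a single page."""
    html = requests.get(url, timeout=10).text
    return {
        "emails": sorted(set(EMAIL_RE.findall(html))),
        "phones": sorted(set(PHONE_RE.findall(html))),
    }

print(scrape_contacts("https://example.com"))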
Uscrapper extracts the following details from the provided website:
Uscrapper 2.0:
git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems
To run Uscrapper, use the following command-line syntax:
python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]
Arguments:
Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.
The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.
To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.
Finding assets from certificates! Scan the web! Tool presented @DEFCON 31
**You must have CGO enabled, and may have to install gcc to run CloudRecon.**
sudo apt install gcc
go install github.com/g0ldencybersec/CloudRecon@latest
CloudRecon
CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.
Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.
CloudRecon is a suite of tools to scan IP addresses or CIDRs (ex: cloud providers IPs) and find these hidden gems for testers, by inspecting those SSL certificates.
The tool suite consists of three parts, written in Go:
Scrape - a live-running tool to inspect the ranges for a keyword in SSL certs' CN and SAN fields in real time.
Store - a tool to retrieve IPs' certs and download all their Orgs, CNs, and SANs, so you can have your OWN cert.sh database.
Retr - a tool to parse and search through the downloaded certs for keywords.
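CloudRecon itself is written in Go, but the core trick - connect to an IP on a TLS port and read the certificate's CN and SANs - is easy to picture. A hedged Python sketch of that idea (it assumes the cryptography package is installed; the target IP is a placeholder):
import ssl
from cryptography import x509
from cryptography.x509.oid import NameOID

def cert_names(ip, port=443):
    """Grab the TLS certificate from ip:port and return its CN and SAN entries."""
    pem = ssl.get_server_certificate((ip, port))            # no chain validation required
    cert = x509.load_pem_x509_certificate(pem.encode())
    cn = [attr.value for attr in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName).value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        san = []
    return cn, list(san)

print(cert_names("192.0.2.1"))  # replace with a target IP, or loop over a CIDR range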
MAIN
Usage: CloudRecon scrape|store|retr [options]
-h Show the program usage message
Subcommands:
cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file
cloudrecon retr - Query local DB file for results
SCRAPE
scrape [options] -i <IPs/CIDRs or File>
-a Add this flag if you want to see all output including failures
-c int
How many goroutines running concurrently (default 100)
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE" )
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)
STORE
store [options] -i <IPs/CIDRs or File>
-c int
How many goroutines running concurrently (default 100)
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)
RETR
retr [options]
-all
Return all the rows in the DB
-cn string
String to search for in common name column, returns like-results (default "NONE")
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-ip string
String to search for in IP column, returns like-results (default "NONE")
-num
Return the Number of rows (results) in the DB (By IP)
-org string
String to search for in Organization column, returns like-results (default "NONE")
-san string
String to search for in common name column, returns like-results (default "NONE")
WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.
The script first enumerates all the subdomains of the given target domain using assetfinder, sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt.sh, then does active subdomain enumeration using gobuster with a SecLists wordlist. It then filters out the live subdomains using dnsx, extracts their titles using httpx, and scans for subdomain takeover using subjack. Next it uses gauplus & waybackurls to crawl all the endpoints of the discovered subdomains, uses gf patterns to filter out xss, lfi, ssrf, sqli, open redirect & rce parameters, and then scans for vulnerabilities on the subdomains using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc.). Finally it prints out the results of the scan and saves all the output in a specified directory.
g!2m0:~ webcopilot -h
[webcopilot ASCII banner]
[+] @h4r5h1t.hrs | G!2m0
Usage:
webcopilot -d <target>
webcopilot -d <target> -s
webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]
Flags:
-d Add your target [Required]
-o To save outputs in folder [Default: domain.com]
-t Number of threads [Default: 100]
-b Add your server for BXSS [Default: False]
-x Exclude out of scope domains [Default: False]
-s Run only Subdomain Enumeration [Default: False]
-h Show this help message
Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server
WebCopilot requires git to install successfully. Run the following command as root to install webcopilot:
git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh
SubFinder • Sublist3r • Findomain • gf • OpenRedireX • dnsx • sqlmap • gobuster • assetfinder • httpx • kxss • qsreplace • Nuclei • dalfox • anew • jq • aquatone • urldedupe • Amass • gauplus • waybackurls • crlfuzz
To run the tool on a target, just use the following command.
g!2m0:~ webcopilot -d bugcrowd.com
The -o command can be used to specify an output dir.
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
The -s command can be used for subdomain enumeration only (active + passive, including titles & screenshots).
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s
The -t command can be used to add threads to your scan for faster results.
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333
The -b command can be used for blind XSS (OOB); you can get your server from xsshunter or interact.
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss
The -x command can be used to exclude out-of-scope domains.
g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss
Default options look like this:
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
[webcopilot ASCII banner]
[+] @h4r5h1t.hrs | G!2m0
[!] Warning: Use with caution. You are responsible for your own actions.
[!] Developers assume no liability and are not responsible for any misuse or damage caused by this tool.
Target: bugcrowd.com
Output: /home/gizmo/targets/bugcrowd
Threads: 100
Server: False
Exclude: False
Mode: Running all Enumeration
Time: 30-08-2021 15:10:00
[!] Please wait while scanning...
[+] Subdomain Scanning is in progress: Scanning subdomains of bugcrowd.com
[+] Subdomain Scanned - [assetfinder] Subdomains Found: 34
[+] Subdomain Scanned - [sublist3r] Subdomains Found: 29
[+] Subdomain Scanned - [subfinder] Subdomains Found: 54
[+] Subdomain Scanned - [amass] Subdomains Found: 43
[+] Subdomain Scanned - [findomain] Subdomains Found: 27
[+] Active Subdomain Scanning is in progress:
[!] Please be patient. This may take a while...
[+] Active Subdomain Scanned - [gobuster] Subdomains Found: 11
[+] Active Subdomain Scanned - [amass] Subdomains Found: 0
[+] Subdomain Scanning: Filtering out of scope subdomains
[+] Subdomain Scanning: Filtering Alive subdomains
[+] Subdomain Scanning: Getting titles of valid subdomains
[+] Visual inspection of Subdomains is completed. Check: /subdomains/aquatone/
[+] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30
[+] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
[+] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
[+] Vulnerabilities Scanned - [XSS] Found: 0
[+] Vulnerabilities Scanned - [SQLi] Found: 0
[+] Vulnerabilities Scanned - [LFI] Found: 0
[+] Vulnerabilities Scanned - [CRLF] Found: 0
[+] Vulnerabilities Scanned - [SSRF] Found: 0
[+] Vulnerabilities Scanned - [Sensitive Data] Found: 0
[+] Vulnerabilities Scanned - [Open redirect] Found: 0
[+] Vulnerabilities Scanned - [Subdomain Takeover] Found: 0
[+] Vulnerabilities Scanned - [Nuclei] Found: 0
[+] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/
[+] Subdomains of bugcrowd.com
[+] Subdomains Found: 0
[+] Subdomains Alive: 0
[+] Endpoints: 11032
[+] XSS: 0
[+] SQLi: 0
[+] Open Redirect: 0
[+] SSRF: 0
[+] CRLF: 0
[+] LFI: 0
[+] Sensitive Data: 0
[+] Subdomain Takeover: 0
[+] Nuclei: 0
WebCopilot is inspired by Garud & Pinaak by ROX4R.
@aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R
Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. So, please use with caution because you are responsible for your own actions.
PySQLRecon is a Python port of the awesome SQLRecon project by @sanjivkawa. See the commands section for a list of capabilities.
PySQLRecon can be installed with pip3 install pysqlrecon
or by cloning this repository and running pip3 install .
All of the main modules from SQLRecon have equivalent commands. Commands noted with [PRIV] require elevated privileges or sysadmin rights to run. Alternatively, commands marked with [NORM] can likely be run by normal users and do not require elevated privileges.
Support for impersonation ([I]) or execution on linked servers ([L]) is denoted at the end of the command description.
adsi [PRIV] Obtain ADSI creds from ADSI linked server [I,L]
agentcmd [PRIV] Execute a system command using agent jobs [I,L]
agentstatus [PRIV] Enumerate SQL agent status and jobs [I,L]
checkrpc [NORM] Enumerate RPC status of linked servers [I,L]
clr [PRIV] Load and execute .NET assembly in a stored procedure [I,L]
columns [NORM] Enumerate columns within a table [I,L]
databases [NORM] Enumerate databases on a server [I,L]
disableclr [PRIV] Disable CLR integration [I,L]
disableole [PRIV] Disable OLE automation procedures [I,L]
disablerpc [PRIV] Disable RPC and RPC Out on linked server [I]
disablexp [PRIV] Disable xp_cmdshell [I,L]
enableclr [PRIV] Enable CLR integration [I,L]
enableole [PRIV] Enable OLE automation procedures [I,L]
enablerpc [PRIV] Enable RPC and RPC Out on linked server [I]
enablexp [PRIV] Enable xp_cmdshell [I,L]
impersonate [NORM] Enumerate users that can be impersonated
info [NORM] Gather information about the SQL server
links [NORM] Enumerate linked servers [I,L]
olecmd [PRIV] Execute a system command using OLE automation procedures [I,L]
query [NORM] Execute a custom SQL query [I,L]
rows [NORM] Get the count of rows in a table [I,L]
search [NORM] Search a table for a column name [I,L]
smb [NORM] Coerce NetNTLM auth via xp_dirtree [I,L]
tables [NORM] Enumerate tables within a database [I,L]
users [NORM] Enumerate users with database access [I,L]
whoami [NORM] Gather logged in user, mapped user and roles [I,L]
xpcmd [PRIV] Execute a system command using xp_cmdshell [I,L]
PySQLRecon has global options (available to any command), with some commands introducing additional flags. All global options must be specified before the command name:
pysqlrecon [GLOBAL_OPTS] COMMAND [COMMAND_OPTS]
View global options:
pysqlrecon --help
View command specific options:
pysqlrecon [GLOBAL_OPTS] COMMAND --help
Change the database authenticated to, or used in certain PySQLRecon commands (query, tables, columns, rows), with the --database flag.
Target execution of a PySQLRecon command on a linked server (instead of the SQL server being authenticated to) using the --link flag.
Impersonate a user account while running a PySQLRecon command with the --impersonate flag.
--link and --impersonate are incompatible.
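PySQLRecon wraps all of this itself; purely to illustrate what impersonation ([I]) looks like at the T-SQL level, here is a hedged sketch using pymssql (an assumed, unrelated client library - not a PySQLRecon dependency; server and credentials are placeholders):
import pymssql  # illustration only; not what PySQLRecon uses internally

conn = pymssql.connect(server="10.0.0.5", user="lowpriv", password="Passw0rd!", database="master")
cur = conn.cursor()

# Roughly what the 'impersonate' command enumerates: logins we hold IMPERSONATE on.
cur.execute(
    "SELECT DISTINCT pr.name FROM sys.server_permissions p "
    "JOIN sys.server_principals pr ON p.grantor_principal_id = pr.principal_id "
    "WHERE p.permission_name = 'IMPERSONATE'"
)
print([row[0] for row in cur.fetchall()])

# Run a query as another login, then revert -- the behaviour behind --impersonate.
cur.execute("EXECUTE AS LOGIN = 'sa'; SELECT SYSTEM_USER; REVERT;")
print(cur.fetchone())
conn.close()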
pysqlrecon uses Poetry to manage dependencies. Install from source and setup for development with:
git clone https://github.com/tw1sm/pysqlrecon
cd pysqlrecon
poetry install
poetry run pysqlrecon --help
PySQLRecon is easily extensible - see the template and instructions in resources
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proof of concepts, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration to recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false-positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
python3 -m pip install porch-pirate
The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
porch-pirate -s "coca-cola.com"
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump
Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.
porch-pirate -s "coca-cola.com" --urls
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw
porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME
Porch Pirate can build curl requests when provided with a request ID for easier testing.
porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl
porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080
from porchpirate import porchpirate  # import path assumed from the package name

p = porchpirate()
print(p.search('coca-cola.com'))
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
import json

from porchpirate import porchpirate  # import path assumed from the package name

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other library usage examples can be located in the examples directory, which contains the following examples:
dump_workspace.py
format_search_results.py
format_workspace_collections.py
format_workspace_globals.py
get_collection.py
get_collections.py
get_profile.py
get_request.py
get_statistics.py
get_team.py
get_user.py
get_workspace.py
recursive_globals_from_search.py
request_to_curl.py
search.py
search_by_page.py
workspace_collections.py
OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.
I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!
Please visit the framework at the link below and good hunting!
(T) - Indicates a link to a tool that must be installed and run locally
(D) - Google Dork, for more information: Google Hacking
(R) - Requires registration
(M) - Indicates a URL that contains the search term and the URL itself must be edited manually
Follow me on Twitter: @jnordine - https://twitter.com/jnordine
Watch or star the project on Github: https://github.com/lockfale/osint-framework
Feedback or new tool suggestions are extremely welcome! Please feel free to submit a pull request or open an issue on github or reach out on Twitter.
For new resources, please ensure that the site is available for public and free use.
Thank you!
Happy Hunting!
Goblob is a lightweight and fast enumeration tool designed to aid in the discovery of sensitive information exposed publicly in Azure blobs, which can be useful for various research purposes such as vulnerability assessments, penetration testing, and reconnaissance.
Warning: Goblob will issue individual goroutines for each container name to check in each storage account, limited only by the maximum number of concurrent goroutines specified in the -goroutines flag. This implementation can exhaust bandwidth pretty quickly in most cases with the default wordlist, or potentially cost you a lot of money if you're using the tool in a cloud environment. Make sure you understand what you are doing before running the tool.
go install github.com/Macmod/goblob@latest
To use goblob simply run the following command:
$ ./goblob <storageaccountname>
Where <storageaccountname> is the target storage account to enumerate public Azure blob storage URLs on.
You can also specify a list of storage account names to check:
$ ./goblob -accounts accounts.txt
By default, the tool will use a list of common Azure Blob Storage container names to construct potential URLs. However, you can also specify a custom list of container names using the -containers option. For example:
$ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt
The tool also supports outputting the results to a file using the -output option:
$ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -output results.txt
If you want to provide accounts to test via stdin, you can also omit -accounts (or the account name) entirely:
$ cat accounts.txt | ./goblob
Goblob comes bundled with basic wordlists that can be used with the -containers option:
Goblob provides several flags that can be tuned in order to improve the enumeration process:
- -goroutines=N - Maximum number of concurrent goroutines to allow (default: 5000).
- -blobs=true - Report the URL of each blob instead of the URL of the containers (default: false).
- -verbose=N - Set verbosity level (default: 1, min: 0, max: 3).
- -maxpages=N - Maximum number of container pages to traverse looking for blobs (default: 20; set to -1 to disable the limit, or to 0 to avoid listing blobs at all and just check if the container is public).
- -timeout=N - Timeout for HTTP requests in seconds (default: 90).
- -maxidleconns=N - MaxIdleConns transport parameter for the HTTP client (default: 100).
- -maxidleconnsperhost=N - MaxIdleConnsPerHost transport parameter for the HTTP client (default: 10).
- -maxconnsperhost=N - MaxConnsPerHost transport parameter for the HTTP client (default: 0).
- -skipssl=true - Skip SSL verification (default: false).
- -invertsearch=true - Enumerate accounts for each container instead of containers for each account (default: false).
For instance, if you just want to find publicly exposed containers using large lists of storage accounts and container names, you should use -maxpages=0 to prevent the goroutines from paginating the results. Then run it again on the set of results you found with -blobs=true and -maxpages=-1 to actually get the URLs of the blobs.
If, on the other hand, you want to test a small list of very popular container names against a large set of storage accounts, you might want to try -invertsearch=true with -maxpages=0, in order to see the public accounts for each container name instead of the container names for each storage account.
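Conceptually, with -maxpages=0 each check reduces to building the public listing URL for an account/container pair and seeing whether it answers. A Python sketch of that idea (Goblob itself is written in Go, and the account/container names below are placeholders):
import itertools
import requests

def is_public(account, container):
    """Return True if the container's public listing endpoint answers with HTTP 200."""
    url = f"https://{account}.blob.core.windows.net/{container}?restype=container&comp=list"
    try:
        return requests.get(url, timeout=10).status_code == 200
    except requests.RequestException:
        return False

accounts = ["contosodata", "contosobackup"]   # candidate storage account names (placeholders)
containers = ["backup", "public", "logs"]     # candidate container names (placeholders)

for account, container in itertools.product(accounts, containers):
    if is_public(account, container):
        print(f"https://{account}.blob.core.windows.net/{container}")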
You may also want to try changing -goroutines, -timeout, -maxidleconns, -maxidleconnsperhost, -maxconnsperhost and -skipssl in order to best use your bandwidth and find results faster.
Experiment with the flags to find what works best for you ;-)
Contributions are welcome by opening an issue or by submitting a pull request.
An interesting visualization of popular container names found in my experiments with the tool:
If you want to know more about my experiments and the subject in general, take a look at my article:
During the reconnaissance phase, an attacker searches for any information about their target to create a profile that will later help them identify possible ways to get into an organization.
CloudPulse is a powerful tool that simplifies and enhances the analysis of SSL certificate data. It leverages the extensive repository of SSL certificates obtained from the AWS EC2 machines available at Trickest Cloud. With CloudPulse, security researchers can efficiently explore SSL certificate details, uncover potential vulnerabilities, and gather valuable insights for a variety of security-related tasks.
It simplifies security assessments with a user-friendly interface and allows you to effortlessly find a company's assets on the AWS cloud:
1- Download CloudPulse :
git clone https://github.com/yousseflahouifi/CloudPulse
cd CloudPulse/
2- Run docker compose :
docker-compose up -d
3- Run script.py script
docker-compose exec web python script.py
4- Now go to http://<host>:8000/search and enjoy the search engine
1- Download CloudPulse:
git clone https://github.com/yousseflahouifi/CloudPulse
cd CloudPulse/
2- Set up a virtual environment:
python3 -m venv myenv
source myenv/bin/activate
3- Install the requirements.txt file:
pip install -r requirements.txt
4- Run an instance of Elasticsearch using Docker:
docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.1
5- Update script.py and the settings file to use the host 'localhost':
#script.py
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
#se/settings.py
ELASTICSEARCH_DSL = {
'default': {
'hosts': 'localhost:9200'
},
}
6- Run script.py to index data in elasticsearch:
python script.py
7- Run the app:
python manage.py runserver 0:8000
Included in the CloudPulse repository is a sample data.csv file containing close to 4,000 records, which provides a glimpse of the tool's capabilities. For the full dataset, visit the Trickest Cloud repository, clone the data, and update the data.csv file (the full dataset contains close to 9 million records).
As an example, searching for .mil data or for tesla returns the matching certificate records.
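Once the data is indexed, searches like these are just Elasticsearch queries under the hood. A sketch with the Python client (the index name here is an assumption, not necessarily the one script.py creates):
from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

# Full-text query over the indexed certificate records.
results = es.search(index="cloudpulse", body={
    "query": {"query_string": {"query": "tesla"}},
    "size": 20,
})
for hit in results["hits"]["hits"]:
    print(hit["_source"])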
CloudPulse heavily depends on the data.csv file, which is a sample dataset extracted from the larger collection maintained by Trickest. While the sample dataset provides valuable insights, the tool's full potential is realized when used in conjunction with the complete dataset, which is accessible in the Trickest repository here.
Users are encouraged to refer to the Trickest dataset for a more comprehensive and up-to-date analysis.
Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.
Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis, urging the investigation and confirmation of potential problems. This approach mitigates the scramble to tag every potential concern as a bug, cutting back on the confusion and wasted time spent on false positives.
What sets Daksh SCRA apart is its emphasis on avoiding unnecessary bug tagging. Unlike conventional methods, it advocates for thorough investigation and confirmation of potential issues before tagging them as bugs. This approach helps mitigate the issue of false positives, which often consume valuable time and resources, thereby fostering a more productive and efficient code review process.
Daksh SCRA was initially introduced during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9), where it was subtly presented to a specific audience. However, this introduction was carried out with a low-profile approach, avoiding any major announcements.
While this tool was quietly published on GitHub after the 2022 training, its official public debut took place at Black Hat USA 2023 in Las Vegas.
Identifies Areas of Interest in Source Code: Encourage focused investigation and confirmation rather than indiscriminately labeling everything as a bug.
Identifies Areas of Interest in File Paths (Worldโs First): Recognises patterns in file paths to pinpoint relevant sections for review.
Software-Level Reconnaissance to Identify Technologies Utilised: Identifies project technologies, enabling code reviewers to conduct precise scans with appropriate rules.
Automated Scientific Effort Estimation for Code Review (Worldโs First): Providing a measurable approach for estimating efforts required for a code review process.
Although this tool has progressed beyond its early stages, it has reached a functional state that is quite usable and delivers on its promised capabilities. Nevertheless, active enhancements are currently underway, and there are multiple new features and improvements expected to be added in the upcoming months.
Additionally, the tool offers the following functionalities:
Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki
Feel free to contribute towards updating or adding new rules and future development.
If you find any bugs, report them to d3basis.m0hanty@gmail.com.
Python3 and all the libraries listed in requirements.txt
$ pip install virtualenv
$ virtualenv -p python3 {name-of-virtual-env} // Create a virtualenv
Example: virtualenv -p python3 venv
$ source {name-of-virtual-env}/bin/activate // To activate virtual environment you just created
Example: source venv/bin/activate
After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $
You must run the below command after activating the virtual environment as mentioned in the previous steps.
pip install -r requirements.txt
Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.
$ python3 dakshscra.py -h // To view available options and arguments
usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]
options:
-h, --help show this help message and exit
-r RULE_FILE Specify platform specific rule name
-f FILE_TYPES Specify file types to scan
-v Specify verbosity level {'-v', '-vv', '-vvv'}
-t TARGET_DIR Specify target directory path
-l {R,RF}, --list {R,RF}
List rules [R] OR rules and filetypes [RF]
-recon Detects platform, framework and programming language used
-estimate Estimate efforts required for code review
$ python3 dakshscra.py // To view tool usage along with examples
Examples:
# '-f' is optional. If not specified, it will default to the corresponding filetypes of the selected rule.
dakshscra.py -r php -t /source_dir_path
# To override default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir
# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir
# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir
# Verbosity: '-v' is the default; '-vvv' will display all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir
Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles
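To make the "areas of interest" idea concrete: a generic rule scan is little more than walking the target directory and applying per-category regexes, with every match treated as a lead to confirm rather than a bug. The rules below are illustrative placeholders, not Daksh SCRA's rule files:
import re
from pathlib import Path

RULES = {
    "Possible SQL query sink": re.compile(r"mysql_query\s*\(|->query\s*\(", re.I),
    "Command execution": re.compile(r"\b(exec|shell_exec|system|passthru)\s*\(", re.I),
    "Hardcoded credential": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def scan(target_dir, extensions=(".php",)):
    """Flag lines worth reviewing; matches are leads to investigate, not confirmed bugs."""
    for path in Path(target_dir).rglob("*"):
        if not path.is_file() or path.suffix not in extensions:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in RULES.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: {label}: {line.strip()}")

scan("/path_to_source_dir")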
The tool generates reports in three formats: HTML, PDF, and TEXT. Although the HTML and PDF reports are still being improved, they are currently in a reasonably good state. With each subsequent iteration, these reports will continue to be refined and improved even further.
Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.
Note: At present, the effort estimation for the source code review is in its early stages. It is considered experimental and will be developed and refined through several iterations. Improvements will be made over multiple releases, as the formula and the concept are new and require time to be honed to achieve accuracy or reasonable estimation.
Currently, the report is generated in HTML format. However, in future releases, there are plans to also provide it in PDF format.
Nodesub is a command-line tool for finding subdomains in bug bounty programs. It supports various subdomain enumeration techniques and provides flexible options for customization.
To install Nodesub, use the following command:
npm install -g nodesub
NOTE: The configuration file is located at ~/.config/nodesub/config.ini.
nodesub -h
This will display help for the tool. Here are all the switches it supports.
Enumerate subdomains for a single domain:
nodesub -u example.com
Enumerate subdomains for a list of domains from a file:
nodesub -l domains.txt
Perform subdomain enumeration using CIDR:
node nodesub.js -c 192.168.0.0/24 -o subdomains.txt
node nodesub.js -c CIDR.txt -o subdomains.txt
Perform subdomain enumeration using ASN:
node nodesub.js -a AS12345 -o subdomains.txt
node nodesub.js -a ASN.txt -o subdomains.txt
Enable recursive subdomain enumeration and output the results to a JSON file:
nodesub -u example.com -r -o output.json -f json
The tool provides various output formats for the results, including:
The output file contains the resolved subdomains, failed resolved subdomains, or all subdomains based on the options chosen.
AtlasReaper is a command-line tool developed for offensive security purposes, primarily focused on reconnaissance of Confluence and Jira. It also provides various features that can be helpful for tasks such as credential farming and social engineering. The tool is written in C#.
Blog post: Sowing Chaos and Reaping Rewards in Confluence and Jira
.@@@@
@@@@@
@@@@@ @@@@@@@
@@@@@ @@@@@@@@@@@
@@@@@ @@@@@@@@@@@@@@@
@@@@, @@@@ *@@@@
@@@@ @@@ @@ @@@ .@@@
_ _ _ ___ @@@@@@@ @@@@@@
/_\| |_| |__ _ __| _ \___ __ _ _ __ ___ _ _ @@ @@@@@@@@
/ _ \ _| / _` (_-< / -_) _` | '_ \/ -_) '_| @@ @@@@@@@@
/_/ \_\__|_\__,_/__/_|_\___\__,_| .__/\___|_| @@@@@@@@ &@
|_| @@@@@@@@@@ @@&
@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@. @@
@werdhaihai
AtlasReaper uses commands, subcommands, and options. The format for executing commands is as follows:
.\AtlasReaper.exe [command] [subcommand] [options]
Replace [command], [subcommand], and [options] with the appropriate values based on the action you want to perform. For more information about each command or subcommand, use the -h or --help option.
Below is a list of available commands and subcommands:
Each command has subcommands for interacting with the specific product:
- confluence
- jira
Confluence subcommands:
- confluence attach - Attach a file to a page.
- confluence download - Download an attachment.
- confluence embed - Embed a 1x1 pixel image to perform farming attacks.
- confluence link - Add a link to a page.
- confluence listattachments - List attachments.
- confluence listpages - List pages in Confluence.
- confluence listspaces - List spaces in Confluence.
- confluence search - Search Confluence.
Jira subcommands:
- jira addcomment - Add a comment to an issue.
- jira attach - Attach a file to an issue.
- jira createissue - Create a new issue.
- jira download - Download attachment(s) from an issue.
- jira listattachments - List attachments on an issue.
- jira listissues - List issues in Jira.
- jira listprojects - List projects in Jira.
- jira listusers - List Atlassian users.
- jira searchissues - Search issues in Jira.
- help - Display more information on a specific command.
Here are a few examples of how to use AtlasReaper:
Search for a keyword in Confluence with wildcard search:
.\AtlasReaper.exe confluence search --query "http*example.com*" --url $url --cookie $cookie
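Under the hood, a search like this is presumably a call to Confluence Cloud's REST search endpoint with a CQL query and the session cookie attached. A hedged Python sketch of such a request (the endpoint, response fields, base URL and token are assumptions based on the public Confluence Cloud API, not AtlasReaper internals):
import requests

BASE = "https://example.atlassian.net/wiki"                      # placeholder target instance
COOKIES = {"cloud.session.token": "REPLACE_WITH_SESSION_TOKEN"}  # placeholder cookie value

resp = requests.get(
    f"{BASE}/rest/api/search",
    params={"cql": 'siteSearch ~ "http*example.com*"', "limit": 25},
    cookies=COOKIES,
    timeout=15,
)
for result in resp.json().get("results", []):
    print(result.get("title"), result.get("url"))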
Attach a file to a page in Confluence:
.\AtlasReaper.exe confluence attach --page-id "12345" --file "C:\path\to\file.exe" --url $url --cookie $cookie
Create a new issue in Jira:
.\AtlasReaper.exe jira createissue --project "PROJ" --issue-type Task --message "I can't access this link from my host" --url $url --cookie $cookie
Confluence and Jira can be configured to allow anonymous access. You can check this by omitting the -c/--cookie flag from the commands.
In the event authentication is required, you can dump cookies from a user's browser with SharpChrome or another similar tool.
.\SharpChrome.exe cookies /showall
Look for any cookies scoped to *.atlassian.net named cloud.session.token or tenant.session.token.
Please note the following limitations of AtlasReaper:
- Authentication relies on a cloud.session.token or tenant.session.token, which can be obtained from a user's browser. Alternatively, it can use anonymous access if permitted. (API tokens or other auth methods are not currently supported.)
Poastal is an email OSINT tool that provides valuable information on any email address. With Poastal, you can easily input an email address and it will quickly answer several questions, providing you with crucial information.
Make sure you have the requirements installed.
pip install -r requirements.txt
Navigate to the backend folder and run poastal.py to start the Flask app, which listens on port 8080.
python poastal.py
Open index.html in the root directory to use the UI.
Enter an email address and see the results.
Test with example@gmail.com.
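As a taste of the kind of passive check such a tool performs, here is one example done by hand: asking Gravatar whether an avatar exists for the address. This is an illustration only, not a claim about Poastal's modules:
import hashlib
import requests

def has_gravatar(email):
    """Passive email check: does the address have a Gravatar?"""
    digest = hashlib.md5(email.strip().lower().encode()).hexdigest()
    # d=404 makes Gravatar answer 404 when no avatar exists for the hash.
    url = f"https://www.gravatar.com/avatar/{digest}?d=404"
    return requests.get(url, timeout=10).status_code == 200

print(has_gravatar("example@gmail.com"))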
There's a new GitHub module.
If you open up github.py you'll see a section that asks you to replace it with your own API key.
I hope you find Poastal to be a valuable tool for your OSINT investigations. If you have any feedback or suggestions on how we can improve Poastal, please let me know. I'm always looking for ways to improve this tool to better serve the OSINT community.
xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources (see the sketch below for what a single passive-source lookup looks like).
- Fetches domains from curated passive sources to maximize results.
- Supports stdin and stdout for easy integration into workflows.
- Cross-platform (Windows, Linux & macOS).
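For a sense of what one passive-source lookup involves, here is a small Python sketch against crt.sh, one of the sources listed in the configuration further below. xsubfind3r itself is written in Go; this is only an illustration of the idea:
import requests

def crtsh_subdomains(domain):
    """Query crt.sh's JSON endpoint and return the unique names it knows for a domain."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    names = set()
    for entry in resp.json():
        for name in entry.get("name_value", "").splitlines():
            names.add(name.lstrip("*.").lower())
    return sorted(names)

for subdomain in crtsh_subdomains("example.com"):
    print(subdomain)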
Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:
...with wget:
wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
...or, with curl:
curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
...then, extract the binary:
tar xf xsubfind3r-<version>-linux-amd64.tar.gz
TIP: The above steps, download and extract, can be combined into a single step with this one-liner:
curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv
NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.
...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:
sudo mv xsubfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.
Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.
go install ...
go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest
go build ... the development version
Clone the repository:
git clone https://github.com/hueristiq/xsubfind3r.git
Build the utility
cd xsubfind3r/cmd/xsubfind3r && \
go build .
Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:
sudo mv xsubfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.
NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.
xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, GitHub, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, and one of them will be used.
Example config.yaml:
version: 0.3.0
sources:
  - alienvault
  - anubis
  - bevigil
  - chaos
  - commoncrawl
  - crtsh
  - fullhunt
  - github
  - hackertarget
  - intelx
  - shodan
  - urlscan
  - wayback
keys:
  bevigil:
    - awA5nvpKU3N8ygkZ
  chaos:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
  fullhunt:
    - 0d9652ce-516c-4315-b589-9b241ee6dc24
  github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
  intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
  shodan:
    - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
  urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8
To display the help message for xsubfind3r, use the -h flag:
xsubfind3r -h
help message:
_ __ _ _ _____
__ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
\ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
> <\__ \ |_| | |_) | _| | | | | (_| |___) | |
/_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0
USAGE:
xsubfind3r [OPTIONS]
INPUT:
-d, --domain string[] target domains
-l, --list string target domains' list file path
SOURCES:
--sources bool list supported sources
-u, --sources-to-use string[] comma(,) separated sources to use
-e, --sources-to-exclude string[] comma(,) separated sources to exclude
OPTIMIZATION:
-t, --threads int number of threads (default: 50)
OUTPUT:
--no-color bool disable colored output
-o, --output string output subdomains' file path
-O, --output-directory string output subdomains' directory path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)
CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
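For example, based on the options above, a typical run might look like the following sketch (the domain, list file, and output file names are placeholders):
# Enumerate subdomains of a single domain and save them to a file
xsubfind3r -d example.com -o example.com-subdomains.txt
# Enumerate subdomains for every domain in a list file and de-duplicate the combined output
xsubfind3r -l domains.txt | sort -u > all-subdomains.txt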
Issues and Pull Requests are welcome! Check out the contribution guidelines.
This utility is distributed under the MIT license.
During the reconnaissance phase, an attacker searches for any information about their target to create a profile that will later help them identify possible ways to get into an organization. InfoHound performs passive analysis techniques (which do not interact directly with the target) using OSINT to extract a large amount of data given a web domain name. This tool will retrieve emails, people, files, subdomains, usernames and URLs that will be later analyzed to extract even more valuable information.
git clone https://github.com/xampla/InfoHound.git
cd InfoHound/infohound
mv infohound_config.sample.py infohound_config.py
cd ..
docker-compose up -d
You must add your API keys inside the infohound_config.py file.
InfoHound has two different types of modules: those which retrieve data and those which analyze it to extract more relevant information.
Name | Description |
---|---|
Get Whois Info | Get relevant information from the Whois register. |
Get DNS Records | This task queries the DNS. |
Get Subdomains | This task uses Alienvault OTX API, CRT.sh, and HackerTarget as data sources to discover cached subdomains. |
Get Subdomains From URLs | Once some tasks have been performed, the URLs table will have a lot of entries. This task will check all the URLs to find new subdomains. |
Get URLs | It searches all URLs cached by Wayback Machine and saves them into the database. This will later help to discover other data entities like files or subdomains. |
Get Files from URLs | It loops through the URLs database table to find files and store them in the Files database table for later analysis. The files that will be retrieved are: doc, docx, ppt, pptx, pps, ppsx, xls, xlsx, odt, ods, odg, odp, sxw, sxc, sxi, pdf, wpd, svg, indd, rdp, ica, zip, rar |
Find Email | It looks for emails using queries to Google and Bing. |
Find People from Emails | Once some emails have been found, it can be useful to discover the person behind them. Also, it finds usernames from those people. |
Find Emails From URLs | Sometimes, the discovered URLs can contain sensitive information. This task retrieves all the emails from URL paths. |
Execute Dorks | It will execute the dorks defined in the dorks folder. Remember to group the dorks by categories (filename) to understand their objectives. |
Find Emails From Dorks | By default, InfoHound has some dorks defined to discover emails. This task will look for them in the results obtained from dork execution. |
Name | Description |
---|---|
Check Subdomains Take-Over | It performs some checks to determine if a subdomain can be taken over. |
Check If Domain Can Be Spoofed | It checks if a domain, from the emails InfoHound has discovered, can be spoofed. This could be used by attackers to impersonate a person and send emails as him/her. |
Get Profiles From Usernames | This task uses the discovered usernames from each person to find profiles from services or social networks where that username exists. This is performed using the Maigret tool. It is worth noting that although a profile with the same username is found, it does not necessarily mean it belongs to the person being analyzed. |
Download All Files | Once files have been stored in the Files database table, this task will download them in the "download_files" folder. |
Get Metadata | Using exiftool, this task will extract all the metadata from the downloaded files and save it to the database. |
Get Emails From Metadata | As some metadata can contain emails, this task will retrieve all of them and save them to the database. |
Get Emails From Files Content | Usually, emails can be included in corporate files, so this task will retrieve all the emails from the downloaded files' content. |
Find Registered Services using Emails | It is possible to find services or social networks where an email has been used to create an account. This task will check if an email InfoHound has discovered has an account in Twitter, Adobe, Facebook, Imgur, Mewe, Parler, Rumble, Snapchat, Wordpress, and/or Duolingo. |
Check Breach | This task checks Firefox Monitor service to see if an email has been found in a data breach. Although it is a free service, it has a limitation of 10 queries per day. If Leak-Lookup API key is set, it also checks it. |
InfoHound lets you create custom modules; you just need to add your script inside infohound/tool/custom_modules. One custom module has been added as an example, which uses the Holehe tool to check whether previously discovered emails are attached to accounts on sites like Twitter, Instagram, Imgur and more than 120 others.
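As a rough sketch (my_module.py is a hypothetical file name, and restarting the containers is an assumption about how changes get picked up), adding a custom module boils down to:
# Hypothetical custom module: copy your script into InfoHound's custom_modules folder
cp my_module.py InfoHound/infohound/tool/custom_modules/
# Assumption: restart the containers so the new module is loaded
docker-compose restart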
chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.
An origin-IP is a term-of-art expression describing the final public IP destination for websites that are publicly served via 3rd parties. If you'd like to understand more about why anyone might be interested in Origin-IPs, please check out our blog post.
chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.
usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x]
_..._
.-'` `'-.
__|___________|__
\ /
`._ CHAOS _.'
`-------`
/ \\
/ \\
/ \\
/ \\
/ \\
/ \\
/ \\
/ \\
/ \\
/_____________________\\
CHAtgpt Origin-ip Scanner
_______ _______ _______ _______ _______
|\\ /|\\ /|\\ /|\\ /|\\/|
| +---+ | +---+ | +---+ | +---+ | +---+ |
| |H | | |U | | |M | | |A | | |N | |
| |U | | |S | | |A | | |N | | |C | |
| |M | | |E | | |N | | |D | | |O | |
| |A | | |R | | |C | | | | | |L | |
| +---+ | +---+ | +---+ | +---+ | +---+ |
|/_____|\\_____|\\_____|\\_____|\\_____\\
Origin IP Scanner developed with ChatGPT
cha*os (n): complete disorder and confusion
(ver: 0.9.4)
cd path/to/chaos
pip3 install -U pip setuptools virtualenv
virtualenv env
source env/bin/activate
(env) pip3 install -U -r ./requirements.txt
(env) ./chaos.py -h
-h, --help show this help message and exit
-f FQDN, --fqdn FQDN Path to FQDN file (one FQDN per line)
-i IP, --ip IP IP address(es) for HTTP requests (Comma-separated IPs, IP networks, and/or files with IP/network per line)
-a AGENT, --agent AGENT
User-Agent header value for requests
-C, --csv Append CSV output to OUTPUT_FILE.csv
-D, --dns Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue
-j JITTER, --jitter JITTER
Add a 0-N second randomized delay to the sleep value
-o OUTPUT, --output OUTPUT
Append console output to FILE
-p PORTS, --ports PORTS
Comma-separated list of TCP ports to use (default: "80,443")
-P, --no-prep Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts
-r, --randomize Randomize(ish) the order IPs/ports are tested
-s SLEEP, --sleep SLEEP
Add N seconds before thread completes
-t TIMEOUT, --timeout TIMEOUT
Wait N seconds for an unresponsive host
-T, --test Test-mode; don't send requests
-v, --verbose Enable verbose output
-x, --singlethread Single threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2
Launch a Python HTTP server
% python3 -u -m http.server 8001
Serving HTTP on :: port 8001 (http://[::]:8001/) ...
Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang
% while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Listening on [::]:8443
Ncat: Listening on 0.0.0.0:8443
Also launch ncat as SSL on a port that will default to HTTP detection
% while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
Ncat: Listening on [::]:8444
Ncat: Listening on 0.0.0.0:8444
Prepare an FQDN file:
% cat ../test_localhost_fqdn.txt
www.example.com
localhost.example.com
localhost.local
localhost
notreally.arealdomain
Prepare an IP file / list:
% cat ../test_localhost_ips.txt
127.0.0.1
127.0.0.0/29
not_an_ip_addr
-6.a
=4.2
::1
Run the scan
% ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
2023-06-21 12:48:33 [INFO] * Version: 0.9.4
2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
2023-06-21 12:48:33 [INFO] * Thread(s): 1
2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
2023-06-21 12:48:33 [INFO] * Timeout: 1.0
2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
Prep Tests: 100%|████████████████████████████████████████████████████████████| 36/36 [00:29<00:00, 1.20it/s]
2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
2023-06-21 12:49:03 [INFO] Queuing 18 threads
++RCVD++ (200 OK) www.example.com @ :::8001
++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
++RCVD++ (202 OK) www.example.com @ :::8444
++RCVD++ (200 OK) www.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
++RCVD++ (202 OK) www.example.com @ ::1:8444
++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
++RCVD++ (200 OK) localhost.example.com @ :::8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
++RCVD++ (202 OK) localhost.example.com @ :::8444
++RCVD++ (200 OK) localhost.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
++RCVD++ (202 OK) localhost.example.com @ ::1:8444
++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
Origin Scan: 100%|████████████████████████████████████████████████████████████| 18/18 [00:06<00:00, 2.76it/s]
2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
::1
::1:8444 => (202 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8001 => (200 / OK)
127.0.0.1
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)
::
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
www.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)
localhost.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)
rst@r57 chaos %
-T runs in test mode (do everything except send requests)
-v verbose option provides additional output
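For example, a dry run over the same sample inputs, using only the flags documented above, might look like this:
# Test mode: parse the FQDN/IP inputs and build the queue, but send no requests; -v adds detail
./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt -p 8001,8443,8444 -T -v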
xurlfind3r
is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.
Parses URLs from Wayback Machine robots.txt snapshots.
Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:
...with wget:
wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
...or, with curl:
curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
...then, extract the binary:
tar xf xurlfind3r-<version>-linux-amd64.tar.gz
TIP: The above steps, download and extract, can be combined into a single step with this one-liner:
curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv
NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.
...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:
sudo mv xurlfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.
Before you install from source, make sure Go is installed on your system; you can install it by following the official instructions for your operating system. The steps below assume Go is already installed.
go install ...
go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest
go build ... the development version
Clone the repository
git clone https://github.com/hueristiq/xurlfind3r.git
Build the utility
cd xurlfind3r/cmd/xurlfind3r && \
go build .
Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:
sudo mv xurlfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.
NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.
xurlfind3r will work right after installation. However, BeVigil, GitHub and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file, created upon first run, which uses the YAML format. Multiple API keys can be specified for each of these sources, and one of them will be used.
Example config.yaml:
version: 0.2.0
sources:
  - bevigil
  - commoncrawl
  - github
  - intelx
  - otx
  - urlscan
  - wayback
keys:
  bevigil:
    - awA5nvpKU3N8ygkZ
  github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
  intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
  urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8
To display the help message for xurlfind3r, use the -h flag:
xurlfind3r -h
help message:
_ __ _ _ _____
__ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
\ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
> <| |_| | | | | _| | | | | (_| |___) | |
/_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0
USAGE:
xurlfind3r [OPTIONS]
TARGET:
-d, --domain string (sub)domain to match URLs
SCOPE:
--include-subdomains bool match subdomain's URLs
SOURCES:
-s, --sources bool list sources
-u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
--skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
--skip-wayback-source bool with wayback, skip parsing source code snapshots
FILTER & MATCH:
-f, --filter string regex to filter URLs
-m, --match string regex to match URLs
OUTPUT:
--no-color bool no color mode
-o, --output string output URLs file path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)
CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)
xurlfind3r -d hackerone.com --include-subdomains
# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'
# match js URLs
xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
Issues and Pull Requests are welcome! Check out the contribution guidelines.
This utility is distributed under the MIT license.
Researchers this month uncovered a two-year-old Linux-based remote access trojan dubbed AVrecon that enslaves Internet routers into botnet that bilks online advertisers and performs password-spraying attacks. Now new findings reveal that AVrecon is the malware engine behind a 12-year-old service called SocksEscort, which rents hacked residential and small business devices to cybercriminals looking to hide their true location online.
Image: Lumen's Black Lotus Labs.
In a report released July 12, researchers at Lumen's Black Lotus Labs called the AVrecon botnet "one of the largest botnets targeting small-office/home-office (SOHO) routers seen in recent history," and a crime machine that has largely evaded public attention since first being spotted in mid-2021.
"The malware has been used to create residential proxy services to shroud malicious activity such as password spraying, web-traffic proxying and ad fraud," the Lumen researchers wrote.
Malware-based anonymity networks are a major source of unwanted and malicious web traffic directed at online retailers, Internet service providers (ISPs), social networks, email providers and financial institutions. And a great many of these "proxy" networks are marketed primarily to cybercriminals seeking to anonymize their traffic by routing it through an infected PC, router or mobile device.
Proxy services can be used in a legitimate manner for several business purposes, such as price comparisons or sales intelligence, but they are massively abused for hiding cybercrime activity because they make it difficult to trace malicious traffic to its original source. Proxy services also let users appear to be getting online from nearly anywhere in the world, which is useful if you're a cybercriminal who is trying to impersonate someone from a specific place.
Spur.us, a startup that tracks proxy services, told KrebsOnSecurity that the Internet addresses Lumen tagged as the AVrecon botnet's "Command and Control" (C2) servers all tie back to a long-running proxy service called SocksEscort.
SocksEscort[.]com is what's known as a "SOCKS Proxy" service. The SOCKS (or SOCKS5) protocol allows Internet users to channel their Web traffic through a proxy server, which then passes the information on to the intended destination. From a website's perspective, the traffic of the proxy network customer appears to originate from a rented/malware-infected PC tied to a residential ISP customer, not from the proxy service customer.
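(As a minimal illustration of that mechanic, and assuming a hypothetical SOCKS5 proxy listening at 127.0.0.1:1080, the destination site sees the proxy's exit address rather than the client's:)
# Route a request through a SOCKS5 proxy; ifconfig.me reports the apparent source IP
curl --socks5-hostname 127.0.0.1:1080 https://ifconfig.me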
The SocksEscort home page says its services are perfect for people involved in automated online activity that often results in IP addresses getting blocked or banned, such as Craigslist and dating scams, search engine results manipulation, and online surveys.
Spur tracks SocksEscort as a malware-based proxy offering, which means the machines doing the proxying of traffic for SocksEscort customers have been infected with malicious software that turns them into a traffic relay. Usually, these users have no idea their systems are compromised.
Spur says the SocksEscort proxy service requires customers to install a Windows based application in order to access a pool of more than 10,000 hacked devices worldwide.
"We created a fingerprint to identify the call-back infrastructure for SocksEscort proxies," Spur co-founder Riley Kilmer said. "Looking at network telemetry, we were able to confirm that we saw victims talking back to it on various ports."
According to Kilmer, AVrecon is the malware that gives SocksEscort its proxies.
"When Lumen released their report and IOCs [indicators of compromise], we queried our system for which proxy service call-back infrastructure overlapped with their IOCs," Kilmer continued. "The second stage C2s they identified were the same as the IPs we labeled for SocksEscort."
Lumen's research team said the purpose of AVrecon appears to be stealing bandwidth – without impacting end-users – in order to create a residential proxy service to help launder malicious activity and avoid attracting the same level of attention from Tor-hidden services or commercially available VPN services.
"This class of cybercrime activity threat may evade detection because it is less likely than a crypto-miner to be noticed by the owner, and it is unlikely to warrant the volume of abuse complaints that internet-wide brute-forcing and DDoS-based botnets typically draw," Lumen's Black Lotus researchers wrote.
Preserving bandwidth for both customers and victims was a primary concern for SocksEscort in July 2022, when 911S5 – at the time the world's largest known malware proxy network – got hacked and imploded just days after being exposed in a story here. Kilmer said after 911's demise, SocksEscort closed its registration for several months to prevent an influx of new users from swamping the service.
Danny Adamitis, principal information security researcher at Lumen and co-author of the report on AVrecon, confirmed Kilmer's findings, saying the C2 data matched up with what Spur was seeing for SocksEscort dating back to September 2022.
Adamitis said that on July 13 – the day after Lumen published research on AVrecon and started blocking any traffic to the malware's control servers – the people responsible for maintaining the botnet reacted quickly to transition infected systems over to a new command and control infrastructure.
"They were clearly reacting and trying to maintain control over components of the botnet," Adamitis said. "Probably, they wanted to keep that revenue stream going."
Frustratingly, Lumen was not able to determine how the SOHO devices were being infected with AVrecon. Some possible avenues of infection include exploiting weak or default administrative credentials on routers, and outdated, insecure firmware that has known, exploitable security vulnerabilities.
KrebsOnSecurity briefly visited SocksEscort last year and promised a follow-up on the history and possible identity of its proprietors. A review of the earliest posts about this service on Russian cybercrime forums suggests the 12-year-old malware proxy network is tied to a Moldovan company that also offers VPN software on the Apple Store and elsewhere.
SocksEscort began in 2009 as "super-socks[.]com," a Russian-language service that sold access to thousands of compromised PCs that could be used to proxy traffic. Someone who picked the nicknames "SSC" and "super-socks" and the email address "michvatt@gmail.com" registered on multiple cybercrime forums and began promoting the proxy service.
According to DomainTools.com, the apparently related email address "michdomain@gmail.com" was used to register SocksEscort[.]com, super-socks[.]com, and a few other proxy-related domains, including ip-score[.]com, segate[.]org, seproxysoft[.]com, and vipssc[.]us. Cached versions of both super-socks[.]com and vipssc[.]us show these sites sold the same proxy service, and both displayed the letters "SSC" prominently at the top of their homepages.
Image: Archive.org. Page translation from Russian via Google Translate.
According to cyber intelligence firm Intel 471, the very first "SSC" identity registered on the cybercrime forums happened in 2009 at the Russian language hacker community Antichat, where SSC asked fellow forum members for help in testing the security of a website they claimed was theirs: myiptest[.]com, which promised to tell visitors whether their proxy address was included on any security or anti-spam block lists.
Myiptest[.]com is no longer responding, but a cached copy of it from Archive.org shows that for about four years it included in its HTML source a Google Analytics code of UA-2665744, which was also present on more than a dozen other websites.
Most of the sites that once bore that Google tracking code are no longer online, but nearly all of them centered around services that were similar to myiptest[.]com, such as abuseipdb[.]com, bestiptest[.]com, checkdnslbl[.]com, dnsbltools[.]com and dnsblmonitor[.]com.
Each of these services were designed to help visitors quickly determine whether the Internet address they were visiting the site from was listed by any security firms as spammy, malicious or phishous. In other words, these services were designed so that proxy service users could easily tell if their rented Internet address was still safe to use for online fraud.
Another domain with the Google Analytics code UA-2665744 was sscompany[.]net. An archived copy of the site says SSC stands for "Server Support Company," which advertised outsourced solutions for technical support and server administration.
Leaked copies of the hacked Antichat forum indicate the SSC identity registered on the forum using the IP address 71.229.207.214. That same IP was used to register the nickname "Deem3n®," a prolific poster on Antichat between 2005 and 2009 who served as a moderator on the forum.
There was a Deem3n® user on the webmaster forum Searchengines.guru whose signature in their posts says they run a popular community catering to programmers in Moldova called sysadmin[.]md, and that they were a systems administrator for sscompany[.]net.
That same Google Analytics code is also now present on the homepages of wiremo[.]co and a VPN provider called HideIPVPN[.]com.
Wiremo sells software and services to help website owners better manage their customer reviews. Wiremo's Contact Us page lists a "Server Management LLC" in Wilmington, DE as the parent company. Server Management LLC is currently listed in Apple's App Store as the owner of a "free" VPN app called HideIPVPN.
"The best way to secure the transmissions of your mobile device is VPN," reads HideIPVPN's description on the Apple Store. "Now, we provide you with an even easier way to connect to our VPN servers. We will hide your IP address, encrypt all your traffic, secure all your sensitive information (passwords, mail credit card details, etc.) form [sic] hackers on public networks."
When asked about the company's apparent connection to SocksEscort, Wiremo responded, "We do not control this domain and no one from our team is connected to this domain." Wiremo did not respond when presented with the findings in this report.
A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).
The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.
Artemis is experimental software, under active development - use at your own risk.
For an up-to-date list of features, please refer to the documentation.
To run the tests, use:
./scripts/test
Artemis uses pre-commit to run linters and format the code. pre-commit is executed on CI to verify that the code is formatted properly.
To run it locally, use:
pre-commit run --all-files
To set up pre-commit so that it runs before each commit, use:
pre-commit install
To build the documentation, use:
cd docs
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
make html
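The rendered pages can then be previewed locally; this assumes the conventional Sphinx output directory (_build/html):
# Serve the built documentation at http://localhost:8000 (assumes Sphinx's default _build/html)
python3 -m http.server 8000 --directory _build/html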
Please refer to the documentation.
Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.
However obvious it may seem, we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.
ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.
Once installed, ReconAIzer adds a contextual menu and a dedicated tab to see the results:
Follow these steps to install the ReconAIzer extension on Burp Suite:
Select the ReconAIzer.py file from Step 3.1 and click "Open." Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.
Once that's done, you must configure your OpenAI API key on the "Config" tab under the "ReconAIzer" tab.
Feel free to suggest prompt improvements or anything you would like to see in ReconAIzer!
Happy bug hunting!
NTLMRecon is a Golang version of the original NTLMRecon utility written by Sachin Kamath (AKA pwnfoo). NTLMRecon can be leveraged to perform brute forcing against a targeted webserver to identify common application endpoints supporting NTLM authentication. This includes endpoints such as the Exchange Web Services endpoint which can often be leveraged to bypass multi-factor authentication.
The tool supports collecting metadata from the exposed NTLM authentication endpoints including information on the computer name, Active Directory domain name, and Active Directory forest name. This information can be obtained without prior authentication by sending an NTLM NEGOTIATE_MESSAGE packet to the server and examining the NTLM CHALLENGE_MESSAGE returned by the targeted server. We have also published a blog post alongside this tool discussing some of the motivations behind its development and how we are approaching more advanced metadata collection within Chariot.
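To make that exchange concrete, here is a rough manual equivalent using curl rather than NTLMRecon itself; the hostname is a placeholder and the base64 blob is a generic, minimal NTLMSSP NEGOTIATE_MESSAGE (Type 1) token:
# Send a bare NTLM NEGOTIATE_MESSAGE; the server answers 401 with a CHALLENGE_MESSAGE
# (Type 2) in the WWW-Authenticate header, which encodes the NetBIOS/DNS/forest names.
curl -sk -o /dev/null -D - \
  -H 'Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=' \
  https://autodiscover.contoso.com/EWS/ | grep -i 'WWW-Authenticate'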
We wanted to perform brute-forcing and automated identification of exposed NTLM authentication endpoints within Chariot, our external attack surface management and continuous automated red teaming platform. Our primary backend scanning infrastructure is written in Golang and we didn't want to have to download and shell out to the NTLMRecon utility in Python to collect this information. We also wanted more control over the level of detail of the information we collected, etc.
The following command can be leveraged to install the NTLMRecon utility. Alternatively, you may download a precompiled version of the binary from the releases tab in GitHub.
go install github.com/praetorian-inc/NTLMRecon/cmd/NTLMRecon@latest
The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM authentication endpoints:
NTLMRecon -t https://autodiscover.contoso.com
The following command can be leveraged to invoke the NTLM recon utility and discover exposed NTLM endpoints while outputting collected metadata in a JSON format:
NTLMRecon -t https://autodiscover.contoso.com -o json
Below is an example JSON output with the data we collect from the NTLM CHALLENGE_MESSAGE returned by the server:
{
"url": "https://autodiscover.contoso.com/EWS/",
"ntlm": {
"netbiosComputerName": "MSEXCH1",
"netbiosDomainName": "CONTOSO",
"dnsDomainName": "na.contoso.local",
"dnsComputerName": "msexch1.na.contoso.local",
"forestName": "contoso.local"
}
}
$ NTLMRecon -t https://adfs.contoso.com -o json | jq
{
"url": "https://adfs.contoso.com/adfs/services/trust/2005/windowstransport",
"ntlm": {
"netbiosComputerName": "MSFED1",
"netbiosDomainName": "CONTOSO",
"dnsDomainName": "corp.contoso.com",
"dnsComputerName": "MSEXCH1.corp.contoso.com",
"forestName": "contoso.com"
}
}
$ NTLMRecon -t https://autodiscover.contoso.com
https://autodiscover.contoso.com/Autodiscover
https://autodiscover.contoso.com/Autodiscover/AutodiscoverService.svc/root
https://autodiscover.contoso.com/Autodiscover/Autodiscover.xml
https://autodiscover.contoso.com/EWS/
https://autodiscover.contoso.com/OAB/
https://autodiscover.contoso.com/Rpc/
$
Our methodology when developing this tool has targeted the most barebones version of the desired capability for the initial release. The goal for this project was to create an initial tool we could integrate into Chariot and then allow community contributions and feedback to drive additional tooling improvements or functionality. Below are some ideas for additional functionality which could be added to NTLMRecon:
[1] https://www.praetorian.com/blog/automating-the-discovery-of-ntlm-authentication-endpoints/
How it works • Installation • Usage • MODES • For Developers • Credits
Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.
SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.
In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.
SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3
[Thanks ChatGPT for the Description]
This tool mainly performs 3 tasks
SCRIPTKIDDI3 requires different tools to run successfully. Run the following command to install the latest version with all requirements:
git clone https://github.com/thecyberneh/scriptkiddi3.git
cd scriptkiddi3
bash installer.sh
scriptkiddi3 -h
This will display help for the tool. Here are all the switches it supports.
[ABOUT:]
Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
A recon and initial vulnerability detection tool built using shell script and open source tools.
[Usage:]
scriptkiddi3 [MODE] [FLAGS]
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
[MODES:]
['-m'/'--mode']
Available Options for MODE:
SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
URL | url Run scriptkiddi3 in URL ENUMERATION mode
EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode
Features of EXPLOIT mode: Subdomain Enumeration, URL Enumeration,
Vulnerability Detection with Nuclei,
and Scan for SUBDOMAIN TAKEOVER
[FLAGS:]
[TARGET:] -d, --domain target domain to scan
[CONFIG:] -c, --config path of your configuration file for subfinder
[HELP:] -h, --help to get help menu
[UPDATE:] -u, --update to update tool
[Examples:]
Run scriptkiddi3 in full Exploitation mode
scriptkiddi3 -m EXP -d target.com
Use your own CONFIG file for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
scriptkiddi3 -m SUB -d target.com
Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com
Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE
scriptkiddi3 -m EXP -d target.com
FULL EXPLOITATION MODE contains the following functions
Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE
scriptkiddi3 -m SUB -d target.com
SUBDOMAIN ENUMERATION MODE contains the following functions
Run scriptkiddi3 in URL ENUMERATION MODE
scriptkiddi3 -m URL -d target.com
URL ENUMERATION MODE contains the following functions
Using your own CONFIG File for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.
Updating the tool to the latest version: you can run the following command to update the tool
scriptkiddi3 -u
An example of config.yaml:
binaryedge:
- 0bf8919b-aab9-42e4-9574-d3b639324597
- ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
- ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
- sample-email@user.com:sample_password
securitytrails: []
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
- ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
- ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
- zoomeye_username:zoomeye_password
If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.
If you have any other queries, you can always contact me on Twitter (thecyberneh).
I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.
DataSurgeon (ds) is a versatile tool designed for incident response, penetration testing, and CTF challenges. It allows for the extraction of various types of sensitive information including emails, phone numbers, hashes, credit cards, URLs, IP addresses, MAC addresses, SRV DNS records and a lot more!
Please read the contributing guidelines here
wget -O - https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | bash
Enter the line below in an elevated PowerShell window.
IEX (New-Object Net.WebClient).DownloadString("https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.ps1")
Relaunch your terminal and you will be able to use ds from the command line.
curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/Drew-Alleman/DataSurgeon/main/install/install.sh | sh
Here I use wget to make a request to Stack Overflow, then forward the body text to ds. The -F option will list all files found. --clean is used to remove any extra text that might have been returned (such as extra HTML). Then the result is piped to uniq, which removes any duplicate files found.
wget -qO - https://www.stackoverflow.com | ds -F --clean | uniq
Here I am pulling all MAC addresses found in autodeauth's log file using the -m query. The --hide option will hide the identifier string in front of the results; in this case, 'mac_address: ' is hidden from the output. The -T option is used to check the same line multiple times for matches; normally, when a match is found, the tool moves on to the next line rather than checking it again.
$ ./ds -m -T --hide -f /var/log/autodeauth/log
2023-02-26 00:28:19 - Sending 500 deauth frames to network: BC:2E:48:E5:DE:FF -- PrivateNetwork
2023-02-26 00:35:22 - Sending 500 deauth frames to network: 90:58:51:1C:C9:E1 -- TestNet
The line below will read all files in the current directory recursively. The -D option is used to display the filename (-f is required for the filename to display) and -e is used to search for emails.
$ find . -type f -exec ds -f {} -CDe \;
When no specific query is provided, ds will search through all possible types of data, which is SIGNIFICANTLY slower than using individual queries. The slowest query is --files. It's also slightly faster to use cat to pipe the data to ds.
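As a quick illustration (access.log is a placeholder file name), a single targeted query beats letting ds search for everything:
# Targeted query: extract only email addresses (fast)
cat access.log | ds -e
# No query given: ds searches for every supported data type (much slower)
cat access.log | ds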
Below is the elapsed time when processing a 5GB test file generated by ds-test. Each test was run 3 times and the average time was recorded.
Processor: Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz, 2904 MHz, 6 Core(s), 12 Logical Processor(s)
RAM: 12.0 GB (11.9 GB usable)
Command | Speed |
---|---|
cat test.txt | ds -t | 00h:02m:04s |
ds -t -f test.txt | 00h:02m:05s |
cat test.txt | ds -t -o output.txt | 00h:02m:06s |
Command | Speed | Query Count |
---|---|---|
cat test.txt | ds -t -6 | 00h:00m:12s | 1 |
cat test.txt | ds -t -i -m | 00h:00m:22s | 2 |
cat test.txt | ds -tF6c | 00h:00m:32s | 3 |