PoCs for kernel-mode rootkit techniques, for research and education. Currently focusing on Windows OS. All modules support 64-bit OSes only.
NOTE
Some modules use the ExAllocatePool2 API to allocate kernel pool memory. The ExAllocatePool2 API is not supported on OSes older than Windows 10 Version 2004. If you want to test the modules on older OSes, replace the ExAllocatePool2 API with the ExAllocatePoolWithTag API.
All modules are tested on Windows 11 x64. To test the drivers, the following can be used on the testing machine:
Setting Up Kernel-Mode Debugging
Each option requires Secure Boot to be disabled.
Detailed information is given in the README.md in each project's directory.
Module Name | Description |
---|---|
BlockImageLoad | PoCs to block driver loading with the Load Image Notify Callback method. |
BlockNewProc | PoCs to block new processes with the Process Notify Callback method. |
CreateToken | PoCs to get a fully privileged SYSTEM token with the ZwCreateToken() API. |
DropProcAccess | PoCs to drop process handle access with the Object Notify Callback. |
GetFullPrivs | PoCs to get full privileges with the DKOM method. |
GetProcHandle | PoCs to get a full-access process handle from kernel mode. |
InjectLibrary | PoCs to perform DLL injection with the Kernel APC Injection method. |
ModHide | PoCs to hide loaded kernel drivers with the DKOM method. |
ProcHide | PoCs to hide processes with the DKOM method. |
ProcProtect | PoCs to manipulate Protected Processes. |
QueryModule | PoCs to retrieve loaded kernel driver address information. |
StealToken | PoCs to steal tokens from kernel mode. |
More PoCs will be added later.

References:
Pavel Yosifovich, Windows Kernel Programming, 2nd Edition (Independently published, 2023)
Bruce Dang, Alexandre Gazet, Elias Bachaalany, and Sébastien Josse, Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation (Wiley Publishing, 2014)
Bill Blunden, The Rootkit Arsenal: Escape and Evasion in the Dark Corners of the System, 2nd Edition (Jones & Bartlett Learning, 2012)
A JSON Web Token (JWT, pronounced "jot") is a compact and URL-safe way of passing a JSON message between two parties. It's a standard, defined in RFC 7519. The token is a long string, divided into parts separated by dots. Each part is base64 URL-encoded.
What parts the token has depends on the type of the JWT: whether it's a JWS (a signed token) or a JWE (an encrypted token). If the token is signed it will have three sections: the header, the payload, and the signature. If the token is encrypted it will consist of five parts: the header, the encrypted key, the initialization vector, the ciphertext (payload), and the authentication tag. Probably the most common use case for JWTs is to utilize them as access tokens and ID tokens in OAuth and OpenID Connect flows, but they can serve different purposes as well.
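Since the header and payload of a signed token are just base64url-encoded JSON, a JWS can be pulled apart with nothing more than the Python standard library. A minimal sketch (the sample payload and placeholder signature are illustrative):

```python
import base64
import json

def b64url_decode(part: str) -> bytes:
    # base64url decoding requires padding the input to a multiple of 4
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

# A JWS consists of three dot-separated sections; the payload here is
# an illustrative {"sub": "1234567890"}, the signature a placeholder.
token = (
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
    "eyJzdWIiOiIxMjM0NTY3ODkwIn0."
    "placeholder-signature"
)

header_b64, payload_b64, signature_b64 = token.split(".")
print(json.loads(b64url_decode(header_b64)))   # {'alg': 'HS256', 'typ': 'JWT'}
print(json.loads(b64url_decode(payload_b64)))  # {'sub': '1234567890'}
```

Note that no key is needed: anyone holding a JWS can read its payload, which is exactly the property the tweak below tries to mitigate.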
This code snippet offers a tweak aiming to enhance the security of the payload section when decoding JWT tokens, where the stored values are otherwise visible in plaintext once the token is base64-decoded. The main objective is to lightly encrypt or obfuscate the payload values, making it difficult to discern their meaning, so that even if someone decodes the payload, they cannot easily interpret it.
The idea behind obscuring the value of the key named "userid" is as follows: in the example, the key is shown as { 'userid': 'random_value' }, making it apparent that it represents a user ID. However, this is merely for illustrative purposes. In practice, a predetermined and undisclosed name is typically used, for example { 'a': 'changing_random_value' }.
Attempting to tamper with JWT tokens generated using this method requires access to both the JWT secret key and the XOR symmetric key used to create the UserID.
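A minimal sketch of the scheme, assuming repeating-key XOR for the obfuscation, an MD5 hash of the issue timestamp, and the PyJWT library; the key material matches the sample run below, but the function names are illustrative rather than the repository's actual code:

```python
import base64
import hashlib
import time
from itertools import cycle

import jwt  # PyJWT 2.x (pip install PyJWT)

# Sample values from the run below; in production these live in your DBMS/config.
XOR_KEY = b"generally_user_salt_or_hash_or_random_uuid_this_value_must_be_in_dbms"
JWT_SECRET = "yes_your_service_jwt_secret_key"

def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: applying the same operation twice restores the input.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def make_token(userid: str) -> str:
    now = int(time.time())
    human = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(now))
    # Appending a hash of the timestamp makes the obfuscated value change
    # on every issue, even for the same userid.
    plain = f"{userid}|{hashlib.md5(str(now).encode()).hexdigest()}"
    obfuscated = base64.b64encode(xor_obfuscate(plain.encode(), XOR_KEY)).decode()
    return jwt.encode({"timestamp": human, "userid": obfuscated},
                      JWT_SECRET, algorithm="HS256")

def read_userid(token: str) -> str:
    claims = jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
    plain = xor_obfuscate(base64.b64decode(claims["userid"]), XOR_KEY).decode()
    return plain.split("|")[0]  # strip the timestamp hash

token = make_token("23243232")
assert read_userid(token) == "23243232"
```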
# python3 main.py
- Current Unix Timestamp: 1709160368
- Current Unix Timestamp to Human Readable: 2024-02-29 07:46:08
- userid: 23243232
- XOR Symmetric key: b'generally_user_salt_or_hash_or_random_uuid_this_value_must_be_in_dbms'
- JWT Secret key: yes_your_service_jwt_secret_key
- Encoded UserID and Timestamp: VVZcUUFTX14FOkdEUUFpEVZfTWwKEGkLUxUKawtHOkAAW1RXDGYWQAo=
- Decoded UserID and Hashed Timestamp: 23243232|e27436b7393eb6c2fb4d5e2a508a9c5c
- JWT Token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0aW1lc3RhbXAiOiIyMDI0LTAyLTI5IDA3OjQ2OjA4IiwidXNlcmlkIjoiVlZaY1VVRlRYMTRGT2tkRVVVRnBFVlpmVFd3S0VHa0xVeFVLYXd0SE9rQUFXMVJYREdZV1FBbz0ifQ.bM_6cBZHdXhMZjyefr6YO5n5X51SzXjyBUEzFiBaZ7Q
- Decoded JWT: {'timestamp': '2024-02-29 07:46:08', 'userid': 'VVZcUUFTX14FOkdEUUFpEVZfTWwKEGkLUxUKawtHOkAAW1RXDGYWQAo='}
# run again
- Decoded JWT: {'timestamp': '2024-02-29 08:16:36', 'userid': 'VVZcUUFTX14FaRNAVBRpRQcORmtWRGleVUtRZlYXaBZZCgYOWGlDR10='}
- Decoded JWT: {'timestamp': '2024-02-29 08:16:51', 'userid': 'VVZcUUFTX14FZxMRVUdnEgJZEmxfRztRVUBabAsRZkdVVlJWWztGQVA='}
- Decoded JWT: {'timestamp': '2024-02-29 08:17:01', 'userid': 'VVZcUUFTX14FbxYQUkM8RVRZEmkLRWsNUBYNb1sQPREFDFYKDmYRQV4='}
- Decoded JWT: {'timestamp': '2024-02-29 08:17:09', 'userid': 'VVZcUUFTX14FbUNEVEVqEFlaTGoKQjxZBRULOlpGPUtSClALWD5GRAs='}
In the dynamic realm of cybersecurity, vigilance and proactive defense are key. Malicious actors often leverage Microsoft Office files and Zip archives, embedding covert URLs or macros to initiate harmful actions. This Python script is crafted to detect potential threats by scrutinizing the contents of Microsoft Office documents, Acrobat Reader PDF documents and Zip files, reducing the risk of inadvertently triggering malicious code.
The script smartly identifies Microsoft Office documents (.docx, .xlsx, .pptx), Acrobat Reader PDF documents (.pdf) and Zip files. Office documents are themselves zip archives, so their contents can be examined programmatically.
For both Office and Zip files, the script decompresses the contents into a temporary directory. It then scans these contents for URLs using regular expressions, searching for potential signs of compromise.
To minimize false positives, the script includes a list of domains to ignore, filtering out common URLs typically found in Office documents. This ensures focused analysis on unusual or potentially harmful URLs.
Files with URLs not on the ignored list are marked as suspicious. This heuristic method allows for adaptability based on your specific security context and threat landscape.
Post-scanning, the script cleans up by erasing temporary decompressed files, leaving no traces.
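A condensed sketch of that flow for the zip-based formats might look like the following (PDFs are handled separately; the ignore list here is trimmed and all names are illustrative, not the script's actual code):

```python
import os
import re
import shutil
import sys
import tempfile
import zipfile

URL_RE = re.compile(rb"https?://[^\s\"'<>)]+")
# Trimmed ignore list: domains commonly embedded in Office documents.
IGNORED_DOMAINS = ("schemas.openxmlformats.org", "schemas.microsoft.com",
                   "purl.org", "w3.org")

def suspicious_urls(path: str) -> list:
    """Decompress a zip-based file and return URLs not on the ignore list."""
    findings = []
    tmpdir = tempfile.mkdtemp()
    try:
        with zipfile.ZipFile(path) as zf:
            zf.extractall(tmpdir)
        for root, _, files in os.walk(tmpdir):
            for name in files:
                with open(os.path.join(root, name), "rb") as fh:
                    for raw in URL_RE.findall(fh.read()):
                        url = raw.decode(errors="replace")
                        if not any(d in url for d in IGNORED_DOMAINS):
                            findings.append(url)
    finally:
        shutil.rmtree(tmpdir)  # clean up: leave no decompressed traces behind
    return findings

if __name__ == "__main__":
    for url in suspicious_urls(sys.argv[1]):
        print(f"[!] suspicious URL: {url}")
```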
To effectively utilize the script:
Execute the script with the command: python CanaryTokenScanner.py FILE_OR_DIRECTORY_PATH (replace FILE_OR_DIRECTORY_PATH with the actual file or directory path).
Interpretation
An example of the Canary Token Scanner script in action, demonstrating its capability to detect suspicious URLs.
This script is intended for educational and security testing purposes only. Utilize it responsibly and in compliance with applicable laws and regulations.
Legba is a multiprotocol credentials bruteforcer / password sprayer and enumerator built with Rust and the Tokio asynchronous runtime in order to achieve better performance and stability while consuming fewer resources than similar tools (see the benchmark below).
For the building instructions, usage and the complete list of options check the project Wiki.
AMQP (ActiveMQ, RabbitMQ, Qpid, JORAM and Solace), Cassandra/ScyllaDB, DNS subdomain enumeration, FTP, HTTP (basic authentication, NTLMv1, NTLMv2, multipart form, custom requests with CSRF support, files/folders enumeration, virtual host enumeration), IMAP, Kerberos pre-authentication and user enumeration, LDAP, MongoDB, MQTT, Microsoft SQL, MySQL, Oracle, PostgreSQL, POP3, RDP, Redis, SSH / SFTP, SMTP, STOMP (ActiveMQ, RabbitMQ, HornetQ and OpenMQ), TCP port scanning, Telnet, VNC.
Here's a benchmark of legba versus thc-hydra running some common plugins, both targeting the same test servers on localhost. The benchmark was executed on a macOS laptop with an M1 Max CPU, using a wordlist of 1000 passwords with the correct one on the last line. Legba was compiled in release mode; Hydra was compiled and installed via its brew formula.
Far from being an exhaustive benchmark (some legba features are simply not supported by hydra, such as CSRF token grabbing), this table still gives a clear idea of how using an asynchronous runtime can drastically improve performance.
Test Name | Hydra Tasks | Hydra Time | Legba Tasks | Legba Time |
---|---|---|---|---|
HTTP basic auth | 16 | 7.100s | 10 | 1.560s (4.5x faster) |
HTTP POST login (wordpress) | 16 | 14.854s | 10 | 5.045s (2.9x faster) |
SSH | 16 | 7m29.85s * | 10 | 8.150s (55.1x faster) |
MySQL | 4 ** | 9.819s | 4 ** | 2.542s (3.8x faster) |
Microsoft SQL | 16 | 7.609s | 10 | 4.789s (1.5x faster) |
* This result would suggest a default delay between connection attempts in Hydra; however, I've studied the source code and to my knowledge there is no such delay. For some reason it's simply very slow.
** For MySQL, Hydra automatically reduces the number of tasks to 4, therefore legba's concurrency level was adjusted to 4 as well.
Legba is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo-license with cargo install cargo-license and then run cargo license.
Existing tools don't really "understand" code. Instead, they mostly parse text.
DeepSecrets expands classic regex-search approaches with semantic analysis, dangerous variable detection, and more efficient usage of entropy analysis. Code understanding supports 500+ languages and formats and is achieved by lexing and parsing - techniques commonly used in SAST tools.
DeepSecrets also introduces a new way to find secrets: just use hashed values of your known secrets and get them found plain in your code.
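Conceptually, the hashed-secrets approach can be illustrated with the sketch below (not DeepSecrets' actual implementation or API): the ruleset stores only hashes, yet hits are reported in plaintext because the plaintext comes from the scanned code itself.

```python
import hashlib

# The ruleset stores only hashes - the plaintext never leaves your vault.
KNOWN_SECRET_HASHES = {
    hashlib.sha256(b"hunter2").hexdigest(),
}

def find_known_secrets(code: str) -> list:
    """Hash every whitespace-separated token and report coinciding hashes."""
    hits = []
    for word in code.split():
        word = word.strip("\"'(),;")
        if hashlib.sha256(word.encode()).hexdigest() in KNOWN_SECRET_HASHES:
            hits.append(word)
    return hits

print(find_known_secrets('password = "hunter2"'))  # ['hunter2']
```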
The under-the-hood story is covered in the article series starting here: https://hackernoon.com/modernizing-secrets-scanning-part-1-the-problem
Pff, is it still regex-based?
Yes and no. Of course, it uses regexes and finds typed secrets like any other tool. But language understanding (the lexing stage) and variable detection also use regexes under the hood. So regexes are an instrument, not a problem.
Why don't you build true abstract syntax trees? It's academically more correct!
DeepSecrets tries to keep a balance between complexity and effectiveness. Building a true AST is a pretty complex thing and simply overkill for our specific task. So the tool still follows the generic SAST way of code analysis but optimizes the AST part using a different approach.
I'd like to build my own semantic rules. How do I do that?
Only through the code at the moment. Formalizing the rules and moving them into a flexible, user-controlled ruleset is in the plans.
I still have a question
Feel free to communicate with the maintainer
From Github via pip
$ pip install git+https://github.com/avito-tech/deepsecrets.git
From PyPi
$ pip install deepsecrets
The easiest way:
$ deepsecrets --target-dir /path/to/your/code --outfile report.json
This will run a scan against /path/to/your/code using the default configuration. The report will be saved to report.json.
Run deepsecrets --help for details.
Basically, you can use your own ruleset by specifying --regex-rules. Paths to be excluded from scanning can be set via --excluded-paths.
The built-in ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.
There are several core concepts:
File
Tokenizer
Token
Engine
Finding
ScanMode
File: just a pythonic representation of a file, with all the methods needed for management.
Tokenizer: a component able to break the content of a file into pieces - Tokens - by its own logic. Several types of tokenizers are available:

FullContentTokenizer: treats all content as a single token. Useful for regex-based search.
PerWordTokenizer: breaks given content by words and line breaks.
LexerTokenizer: uses language-specific smarts to break code into semantically correct pieces with additional context for each token.

Token: a string with additional information about its semantic role, corresponding file, and location inside it.
Engine: a component performing secrets search for a single token by its own logic. Returns a set of Findings. There are three engines available:

RegexEngine: checks tokens' values against a special ruleset.
SemanticEngine: checks tokens produced by the LexerTokenizer using additional context - variable names and values.
HashedSecretEngine: checks tokens' values by hashing them and trying to find coinciding hashes inside a special ruleset.

Finding: a data structure representing a problem detected inside code. It carries information about the precise location inside a file and the rule that found it.
ScanMode: the component responsible for the scan process. It exposes a PerFileAnalyzer - the method called against each file, returning a list of findings; its primary job is to initialize the necessary engines, tokenizers, and rulesets. The current implementation is a CliScanMode, built from the user-provided config through the CLI args.
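To illustrate how lexing combined with variable-name and entropy checks goes beyond a plain regex search, here is a rough self-contained sketch built on the pygments lexer (this is not DeepSecrets' internal code; the threshold and name list are assumptions):

```python
import math
from pygments.lexers import PythonLexer
from pygments.token import Name, String

# Assumed name list and entropy threshold - tune for your codebase.
SUSPICIOUS_NAMES = ("password", "secret", "token", "api_key")

def shannon_entropy(s: str) -> float:
    # Bits per character; random keys score higher than natural language.
    freqs = (s.count(c) / len(s) for c in set(s))
    return -sum(p * math.log2(p) for p in freqs)

def scan(code: str) -> list:
    findings, last_name = [], ""
    for ttype, value in PythonLexer().get_tokens(code):
        if ttype in Name:
            last_name = value  # remember which variable a literal belongs to
        elif ttype in String and len(value) > 8:
            if any(n in last_name.lower() for n in SUSPICIOUS_NAMES) \
                    or shannon_entropy(value) > 4.0:
                findings.append((last_name, value))
    return findings

print(scan('db_password = "xK9#mQ2$vL7pZ4wN"'))
# [('db_password', 'xK9#mQ2$vL7pZ4wN')]
```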
The project is supposed to be developed using VSCode and the 'Remote Containers' feature.
Steps:
Designed to validate potential usernames by querying OneDrive and/or Microsoft Teams, which are passive methods.
Additionally, it can output/create a list of legacy Skype users identified through Microsoft Teams enumeration.
Finally, it also creates a nice clean list for future usage, all conducted from a single tool.
$ python3 .\KnockKnock.py -h
_ __ _ _ __ _
| |/ /_ __ ___ ___| | _| |/ /_ __ ___ ___| | __
| ' /| '_ \ / _ \ / __| |/ / ' /| '_ \ / _ \ / __| |/ /
| . \| | | | (_) | (__| <| . \| | | | (_) | (__| <
|_|\_\_| |_|\___/ \___|_|\_\_|\_\_| |_|\___/ \___|_|\_\
v0.9 Author: @waffl3ss
usage: KnockKnock.py [-h] [-teams] [-onedrive] [-l] -i INPUTLIST [-o OUTPUTFILE] -d TARGETDOMAIN [-t TEAMSTOKEN] [-threads MAXTHREADS] [-v]
options:
-h, --help show this help message and exit
-teams Run the Teams User Enumeration Module
-onedrive Run the One Drive Enumeration Module
-l Write legacy Skype users to a separate file
-i INPUTLIST Input file with newline-separated users to check
-o OUTPUTFILE Write output to file
-d TARGETDOMAIN Domain to target
-t TEAMSTOKEN Teams Token (file containing token or a string)
-threads MAXTHREADS Number of threads to use in the Teams User Enumeration (default = 10)
-v Show verbose errors
./KnockKnock.py -teams -i UsersList.txt -d Example.com -o OutFile.txt -t BearerToken.txt
./KnockKnock.py -onedrive -i UsersList.txt -d Example.com -o OutFile.txt
./KnockKnock.py -onedrive -teams -i UsersList.txt -d Example.com -t BearerToken.txt -l
To get your bearer token, you will need a cookie manager plugin in your browser; log in to your own Microsoft Teams through the browser.
Next, view the cookies related to the current webpage (teams.microsoft.com).
The cookie you are looking for is for the domain .teams.microsoft.com and is titled "authtoken".
You can copy the whole token as the script will split out the required part for you.
@nyxgeek - onedrive_user_enum
@immunIT - TeamsUserEnum
Gato, or GitHub Attack Toolkit, is an enumeration and attack tool that allows both blue teamers and offensive security practitioners to evaluate the blast radius of a compromised personal access token within a GitHub organization.
The tool also allows searching for and thoroughly enumerating public repositories that utilize self-hosted runners. GitHub recommends that self-hosted runners only be utilized for private repositories; however, there are thousands of organizations that utilize self-hosted runners.
Gato supports OS X and Linux with at least Python 3.7.
In order to install the tool, simply clone the repository and use pip install. We recommend performing this within a virtual environment.
git clone https://github.com/praetorian-inc/gato
cd gato
python3 -m venv venv
source venv/bin/activate
pip install .
Gato also requires that git version 2.27 or above is installed and on the system's PATH. In order to run the fork PR attack module, sed must also be installed and present on the system's PATH.
After installing the tool, it can be launched by running gato or praetorian-gato.

We recommend viewing the parameters for the base tool using gato -h, and the parameters for each of the tool's modules by running the following:
gato search -h
gato enum -h
gato attack -h
The tool requires a GitHub classic PAT in order to function. To create one, log in to GitHub, go to GitHub Developer Settings, and select Generate New Token and then Generate new token (classic).

After creating this token, set the GH_TOKEN environment variable within your shell by running export GH_TOKEN=<YOUR_CREATED_TOKEN>. Alternatively, store the token within a secure password manager and enter it when the application prompts you.
For troubleshooting and additional details, such as installing in developer mode or running unit tests, please see the wiki.
Please see the wiki for detailed documentation, as well as OpSec considerations for the tool's various modules!
If you believe you have identified a bug within the software, please open an issue containing the tool's output, along with the actions you were trying to conduct.
If you are unsure if the behavior is a bug, use the discussions section instead!
Contributions are welcome! Please review our design methodology and coding standards before working on a new feature!
Additionally, if you are proposing significant changes to the tool, please open an issue to start a conversation about the motivation for the changes.
The plugin is created to help automated scanning using Burp in the following scenarios:
Key advantages:
The inspiration for the plugin is from ExtendedMacro plugin: https://github.com/FrUh/ExtendedMacro
For usage with a test application, install the Tiredful application from https://github.com/payatu/Tiredful-API.
In total, there are four different ways you can specify the error condition.
Idea: record the Tiredful application request in Burp, configure the ATOR extender, and check whether the token is replaced by ATOR.
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
v1.0
Authors from Synopsys - Ashwath Reddy (@ka3hk) and Manikandan Rajappan (@rmanikdn)
This software is released by Synopsys under the MIT license.
The UI panel was split into four different configurations. Check out the code from v2 or use the executable from v2/bin.