PoCs of kernel-mode rootkit techniques, for research and education. Currently focused on Windows. All modules support 64-bit OSes only.
NOTE
Some modules use the ExAllocatePool2 API to allocate kernel pool memory. ExAllocatePool2 is not supported on OSes older than Windows 10 Version 2004. If you want to test the modules on older OSes, replace ExAllocatePool2 with the ExAllocatePoolWithTag API.
All modules are tested on Windows 11 x64. To test the drivers, the following option can be used for the testing machine:
Setting Up Kernel-Mode Debugging
This option requires Secure Boot to be disabled.
Detailed information is given in the README.md in each project's directory.
Module Name | Description |
---|---|
BlockImageLoad | PoCs to block driver loading with Load Image Notify Callback method. |
BlockNewProc | PoCs to block new process with Process Notify Callback method. |
CreateToken | PoCs to get full privileged SYSTEM token with ZwCreateToken() API. |
DropProcAccess | PoCs to drop process handle access with Object Notify Callback. |
GetFullPrivs | PoCs to get full privileges with DKOM method. |
GetProcHandle | PoCs to get full access process handle from kernelmode. |
InjectLibrary | PoCs to perform DLL injection with Kernel APC Injection method. |
ModHide | PoCs to hide loaded kernel drivers with DKOM method. |
ProcHide | PoCs to hide process with DKOM method. |
ProcProtect | PoCs to manipulate Protected Process. |
QueryModule | PoCs to perform retrieving kernel driver loaded address information. |
StealToken | PoCs to perform token stealing from kernelmode. |
More PoCs, especially on the following topics, will be added later:
Pavel Yosifovich, Windows Kernel Programming, 2nd Edition (Independently published, 2023)
Bruce Dang, Alexandre Gazet, Elias Bachaalany, and Sébastien Josse, Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation (Wiley Publishing, 2014)
Bill Blunden, The Rootkit Arsenal: Escape and Evasion in the Dark Corners of the System, 2nd Edition (Jones & Bartlett Learning, 2012)
A JSON Web Token (JWT, pronounced "jot") is a compact and URL-safe way of passing a JSON message between two parties. It's a standard, defined in RFC 7519. The token is a long string, divided into parts separated by dots. Each part is base64 URL-encoded.
What parts the token has depends on the type of the JWT: whether it's a JWS (a signed token) or a JWE (an encrypted token). If the token is signed it will have three sections: the header, the payload, and the signature. If the token is encrypted it will consist of five parts: the header, the encrypted key, the initialization vector, the ciphertext (payload), and the authentication tag. Probably the most common use case for JWTs is to utilize them as access tokens and ID tokens in OAuth and OpenID Connect flows, but they can serve different purposes as well.
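To make the three-part structure concrete, here is a minimal sketch (our own illustration, not any particular JWT library) that splits a signed token and decodes its base64url-encoded header and payload:

```python
# Sketch: split a JWS-style token into its three dot-separated parts and
# base64url-decode the JSON header and payload. The signature part is left
# alone; verifying it requires the signing key and algorithm.
import base64
import json

def b64url_decode(part: str) -> bytes:
    # base64url data in JWTs is unpadded; restore padding before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def inspect_jwt(token: str):
    header_b64, payload_b64, _signature_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    return header, payload
```

Note that decoding requires no key at all, which is exactly why a plaintext payload is readable by anyone holding the token.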
This code snippet offers a tweak aimed at enhancing the security of the payload section of JWT tokens, whose stored keys are otherwise visible in plaintext. Typically, the payload appears in plaintext once the (base64-encoded) JWT is decoded. The main objective is to lightly encrypt or obfuscate the payload values, making it difficult to discern their meaning, so that even if someone decodes the payload, they cannot easily interpret the values.
The idea behind obscuring the value of the key named "userid" is as follows:
In the example, the key is shown as { 'userid': 'random_value' }, making it apparent that it represents a user ID. However, this is merely for illustrative purposes. In practice, a predetermined and undisclosed name is typically used, for example 'a': 'changing_random_value'.
Attempting to tamper with JWT tokens generated using this method requires access to both the JWT secret key and the XOR symmetric key used to create the UserID.
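The scheme can be sketched as follows (our own illustration of the idea described above; names like xor_key are ours): XOR the "userid|hashed_timestamp" string with a symmetric key kept server-side, base64-encode the result, and store that opaque value in the payload.

```python
# Sketch of the obfuscation scheme: XOR with a server-side symmetric key,
# mixing in a hashed timestamp so the encoded value changes on every issue.
import base64
import hashlib
import time

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encode_userid(userid: str, xor_key: bytes) -> str:
    ts_hash = hashlib.md5(str(int(time.time())).encode()).hexdigest()
    plain = f"{userid}|{ts_hash}".encode()
    return base64.b64encode(xor_bytes(plain, xor_key)).decode()

def decode_userid(value: str, xor_key: bytes) -> str:
    plain = xor_bytes(base64.b64decode(value), xor_key).decode()
    return plain.split("|", 1)[0]
```

Because the timestamp hash is mixed in before the XOR, the same user ID produces a different opaque string each time a token is issued, as the repeated runs in the sample output show.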
# python3 main.py
- Current Unix Timestamp: 1709160368
- Current Unix Timestamp to Human Readable: 2024-02-29 07:46:08
- userid: 23243232
- XOR Symmetric key: b'generally_user_salt_or_hash_or_random_uuid_this_value_must_be_in_dbms'
- JWT Secret key: yes_your_service_jwt_secret_key
- Encoded UserID and Timestamp: VVZcUUFTX14FOkdEUUFpEVZfTWwKEGkLUxUKawtHOkAAW1RXDGYWQAo=
- Decoded UserID and Hashed Timestamp: 23243232|e27436b7393eb6c2fb4d5e2a508a9c5c
- JWT Token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0aW1lc3RhbXAiOiIyMDI0LTAyLTI5IDA3OjQ2OjA4IiwidXNlcmlkIjoiVlZaY1VVRlRYMTRGT2tkRVVVRnBFVlpmVFd3S0VHa0xVeFVLYXd0SE9rQUFXMVJYREdZV1FBbz0ifQ.bM_6cBZHdXhMZjyefr6YO5n5X51SzXjyBUEzFiBaZ7Q
- Decoded JWT: {'timestamp': '2024-02-29 07:46:08', 'userid': 'VVZcUUFTX14FOkdEUUFpEVZfTWwKEGkLUxUKawtHOkAAW1RXDGYWQAo='}
# run again
- Decoded JWT: {'timestamp': '2024-02-29 08:16:36', 'userid': 'VVZcUUFTX14FaRNAVBRpRQcORmtWRGleVUtRZlYXaBZZCgYOWGlDR10='}
- Decoded JWT: {'timestamp': '2024-02-29 08:16:51', 'userid': 'VVZcUUFTX14FZxMRVUdnEgJZEmxfRztRVUBabAsRZkdVVlJWWztGQVA='}
- Decoded JWT: {'timestamp': '2024-02-29 08:17:01', 'userid': 'VVZcUUFTX14FbxYQUkM8RVRZEmkLRWsNUBYNb1sQPREFDFYKDmYRQV4='}
- Decoded JWT: {'timestamp': '2024-02-29 08:17:09', 'userid': 'VVZcUUFTX14FbUNEVEVqEFlaTGoKQjxZBRULOlpGPUtSClALWD5GRAs='}
In the dynamic realm of cybersecurity, vigilance and proactive defense are key. Malicious actors often leverage Microsoft Office files and Zip archives, embedding covert URLs or macros to initiate harmful actions. This Python script is crafted to detect potential threats by scrutinizing the contents of Microsoft Office documents, Acrobat Reader PDF documents and Zip files, reducing the risk of inadvertently triggering malicious code.
The script smartly identifies Microsoft Office documents (.docx, .xlsx, .pptx), Acrobat Reader PDF documents (.pdf) and Zip files. Office documents are themselves zip archives, so they can be examined programmatically.
For both Office and Zip files, the script decompresses the contents into a temporary directory. It then scans these contents for URLs using regular expressions, searching for potential signs of compromise.
To minimize false positives, the script includes a list of domains to ignore, filtering out common URLs typically found in Office documents. This ensures focused analysis on unusual or potentially harmful URLs.
Files with URLs not on the ignored list are marked as suspicious. This heuristic method allows for adaptability based on your specific security context and threat landscape.
Post-scanning, the script cleans up by erasing temporary decompressed files, leaving no traces.
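The steps above can be sketched in a few lines (a simplified illustration, not the actual CanaryTokenScanner source; the ignore-list entries are examples):

```python
import re
import shutil
import tempfile
import zipfile
from pathlib import Path

URL_RE = re.compile(rb'https?://[^\s"\'<>)]+')
# Illustrative ignore list; a real deployment tunes this to its environment.
IGNORED_DOMAINS = {"schemas.openxmlformats.org", "www.w3.org", "purl.org"}

def suspicious_urls(archive_path: str) -> list[str]:
    """Unzip the archive, regex-scan every file for URLs, and return those
    whose domain is not on the ignore list."""
    hits = []
    tmp = tempfile.mkdtemp()
    try:
        with zipfile.ZipFile(archive_path) as zf:
            zf.extractall(tmp)
        for f in Path(tmp).rglob("*"):
            if not f.is_file():
                continue
            for match in URL_RE.findall(f.read_bytes()):
                url = match.decode(errors="ignore")
                domain = url.split("/")[2]
                if domain not in IGNORED_DOMAINS:
                    hits.append(url)
    finally:
        shutil.rmtree(tmp)  # erase temporary decompressed files
    return hits
```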
To effectively utilize the script:
Execute the script with the command: python CanaryTokenScanner.py FILE_OR_DIRECTORY_PATH
(Replace FILE_OR_DIRECTORY_PATH with the actual file or directory path.)
Interpretation
An example of the Canary Token Scanner script in action, demonstrating its capability to detect suspicious URLs.
This script is intended for educational and security testing purposes only. Utilize it responsibly and in compliance with applicable laws and regulations.
Legba is a multiprotocol credentials bruteforcer / password sprayer and enumerator built with Rust and the Tokio asynchronous runtime in order to achieve better performance and stability while consuming fewer resources than similar tools (see the benchmark below).
For the building instructions, usage and the complete list of options check the project Wiki.
AMQP (ActiveMQ, RabbitMQ, Qpid, JORAM and Solace), Cassandra/ScyllaDB, DNS subdomain enumeration, FTP, HTTP (basic authentication, NTLMv1, NTLMv2, multipart form, custom requests with CSRF support, files/folders enumeration, virtual host enumeration), IMAP, Kerberos pre-authentication and user enumeration, LDAP, MongoDB, MQTT, Microsoft SQL, MySQL, Oracle, PostgreSQL, POP3, RDP, Redis, SSH / SFTP, SMTP, STOMP (ActiveMQ, RabbitMQ, HornetQ and OpenMQ), TCP port scanning, Telnet, VNC.
Here's a benchmark of legba versus thc-hydra running some common plugins, both targeting the same test servers on localhost. The benchmark has been executed on a macOS laptop with an M1 Max CPU, using a wordlist of 1000 passwords with the correct one being on the last line. Legba was compiled in release mode, Hydra compiled and installed via brew formula.
Far from being an exhaustive benchmark (some legba features are simply not supported by hydra, such as CSRF token grabbing), this table still gives a clear idea of how using an asynchronous runtime can drastically improve performances.
Test Name | Hydra Tasks | Hydra Time | Legba Tasks | Legba Time |
---|---|---|---|---|
HTTP basic auth | 16 | 7.100s | 10 | 1.560s (4.5x faster) |
HTTP POST login (wordpress) | 16 | 14.854s | 10 | 5.045s (2.9x faster) |
SSH | 16 | 7m29.85s * | 10 | 8.150s (55.1x faster) |
MySQL | 4 ** | 9.819s | 4 ** | 2.542s (3.8x faster) |
Microsoft SQL | 16 | 7.609s | 10 | 4.789s (1.5x faster) |
* While this result would suggest a default delay between connection attempts used by Hydra, I've tried to study the source code to find such a delay and to my knowledge there's none. For some reason it's simply very slow.
** For MySQL hydra automatically reduces the amount of tasks to 4, therefore legba's concurrency level has been adjusted to 4 as well.
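For intuition on why an asynchronous runtime wins here, the pattern can be sketched in a few lines (Python asyncio stands in for Rust/Tokio; the credential check and the correct password are simulated):

```python
import asyncio

async def try_password(host: str, password: str, sem: asyncio.Semaphore) -> bool:
    # Bound concurrency with a semaphore; the sleep stands in for a network
    # round-trip, and the comparison stands in for the real auth check.
    async with sem:
        await asyncio.sleep(0.01)
        return password == "hunter2"   # hypothetical correct password

async def spray(host: str, wordlist: list[str], concurrency: int = 10):
    sem = asyncio.Semaphore(concurrency)
    results = await asyncio.gather(*(try_password(host, p, sem) for p in wordlist))
    for password, ok in zip(wordlist, results):
        if ok:
            return password
    return None
```

The point is that thousands of slow network waits overlap on a handful of lightweight tasks instead of one OS thread per connection, which is where the throughput difference in the table comes from.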
Legba is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo-license with cargo install cargo-license and then run cargo license.
Existing tools don't really "understand" code. Instead, they mostly parse texts.
DeepSecrets expands classic regex-search approaches with semantic analysis, dangerous variable detection, and more efficient usage of entropy analysis. Code understanding supports 500+ languages and formats and is achieved by lexing and parsing - techniques commonly used in SAST tools.
DeepSecrets also introduces a new way to find secrets: supply hashed values of your known secrets, and the tool will find their plaintext occurrences in your code.
The under-the-hood story is told in this article: https://hackernoon.com/modernizing-secrets-scanning-part-1-the-problem
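That hashed-secrets idea can be illustrated with a small sketch (our own illustration, not DeepSecrets internals):

```python
import hashlib

# Only hashes of known secrets ship with the ruleset, never the plaintext.
KNOWN_SECRET_HASHES = {
    hashlib.sha256(b"s3cr3t-api-key").hexdigest(),   # illustrative entry
}

def find_hashed_secrets(tokens):
    """Hash each candidate token and report the ones matching a known hash."""
    return [t for t in tokens
            if hashlib.sha256(t.encode()).hexdigest() in KNOWN_SECRET_HASHES]
```

The scan never needs to know the secret itself, so a leaked ruleset reveals nothing, yet any plaintext occurrence in the codebase still matches.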
Pff, is it still regex-based?
Yes and no. Of course it uses regexes and finds typed secrets like any other tool. But language understanding (the lexing stage) and variable detection also use regexes under the hood. So regexes are an instrument, not a problem.
Why don't you build true abstract syntax trees? It's academically more correct!
DeepSecrets tries to keep a balance between complexity and effectiveness. Building a true AST is a pretty complex thing and simply an overkill for our specific task. So the tool still follows the generic SAST-way of code analysis but optimizes the AST part using a different approach.
I'd like to build my own semantic rules. How do I do that?
Only through the code for the moment. Formalizing the rules and moving them into a flexible, user-controlled ruleset is in the plans.
I still have a question
Feel free to communicate with the maintainer
From Github via pip
$ pip install git+https://github.com/avito-tech/deepsecrets.git
From PyPi
$ pip install deepsecrets
The easiest way:
$ deepsecrets --target-dir /path/to/your/code --outfile report.json
This will run a scan against /path/to/your/code using the default configuration:
Report will be saved to report.json
Run deepsecrets --help for details.
Basically, you can use your own ruleset by specifying --regex-rules. Paths to be excluded from scanning can be set via --excluded-paths.
The built-in ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.
There are several core concepts:
File
Tokenizer
Token
Engine
Finding
ScanMode
Just a pythonic representation of a file with all needed methods for management.
A component able to break the content of a file into pieces - Tokens - by its own logic. There are several tokenizers available:
FullContentTokenizer: treats all content as a single token. Useful for regex-based search.
PerWordTokenizer: breaks given content by words and line breaks.
LexerTokenizer: uses language-specific smarts to break code into semantically correct pieces with additional context for each token.
A Token is a string with additional information about its semantic role, corresponding file, and location inside it.
A component performing secrets search for a single token by its own logic. Returns a set of Findings. There are three engines available:
RegexEngine: checks tokens' values against a special ruleset.
SemanticEngine: checks tokens produced by the LexerTokenizer using additional context - variable names and values.
HashedSecretEngine: checks tokens' values by hashing them and trying to find coinciding hashes inside a special ruleset.
A Finding is a data structure representing a problem detected inside code. It features information about the precise location inside a file and the rule that found it.
This component is responsible for the scan process.
PerFileAnalyzer - the method called against each file, returning a list of findings. The primary usage is to initialize necessary engines, tokenizers, and rulesets.
The current implementation has a CliScanMode built from the user-provided config through the CLI args.
The project is supposed to be developed using VSCode and 'Remote containers' feature.
Steps:
KnockKnock is designed to validate potential usernames by querying OneDrive and/or Microsoft Teams, both of which are passive methods.
Additionally, it can output a list of legacy Skype users identified through Microsoft Teams enumeration.
Finally, it also creates a clean list for future use, all from a single tool.
$ python3 .\KnockKnock.py -h
_ __ _ _ __ _
| |/ /_ __ ___ ___| | _| |/ /_ __ ___ ___| | __
| ' /| '_ \ / _ \ / __| |/ / ' /| '_ \ / _ \ / __| |/ /
| . \| | | | (_) | (__| <| . \| | | | (_) | (__| <
|_|\_\_| |_|\___/ \___|_|\_\_|\_\_| |_|\___/ \___|_|\_\
v0.9 Author: @waffl3ss
usage: KnockKnock.py [-h] [-teams] [-onedrive] [-l] -i INPUTLIST [-o OUTPUTFILE] -d TARGETDOMAIN [-t TEAMSTOKEN] [-threads MAXTHREADS] [-v]
options:
-h, --help show this help message and exit
-teams Run the Teams User Enumeration Module
-onedrive Run the One Drive Enumeration Module
-l Write legacy skype users to a separate file
-i INPUTLIST Input file with newline-separated users to check
-o OUTPUTFILE Write output to file
-d TARGETDOMAIN Domain to target
-t TEAMSTOKEN Teams Token (file containing token or a string)
-threads MAXTHREADS Number of threads to use in the Teams User Enumeration (default = 10)
-v Show verbose errors
./KnockKnock.py -teams -i UsersList.txt -d Example.com -o OutFile.txt -t BearerToken.txt
./KnockKnock.py -onedrive -i UsersList.txt -d Example.com -o OutFile.txt
./KnockKnock.py -onedrive -teams -i UsersList.txt -d Example.com -t BearerToken.txt -l
To get your bearer token, you will need a cookie manager plugin in your browser; log in to your own Microsoft Teams account through the browser.
Next, view the cookies related to the current webpage (teams.microsoft.com).
The cookie you are looking for is for the domain .teams.microsoft.com and is titled "authtoken".
You can copy the whole token as the script will split out the required part for you.
@nyxgeek - onedrive_user_enum
@immunIT - TeamsUserEnum
In large metropolitan areas, tourists are often easy to spot because they're far more inclined than locals to gaze upward at the surrounding skyscrapers. Security experts say this same tourist dynamic is a dead giveaway in virtually all computer intrusions that lead to devastating attacks like data theft and ransomware, and that more organizations should set simple virtual tripwires that sound the alarm when authorized users and devices are spotted exhibiting this behavior.
In a blog post published last month, Cisco Talos said it was seeing a worrisome "increase in the rate of high-sophistication attacks on network infrastructure." Cisco's warning comes amid a flurry of successful data ransom and state-sponsored cyber espionage attacks targeting some of the most well-defended networks on the planet.
But despite their increasing complexity, a great many initial intrusions that lead to data theft could be nipped in the bud if more organizations started looking for the telltale signs of newly-arrived cybercriminals behaving like network tourists, Cisco says.
"One of the most important things to talk about here is that in each of the cases we've seen, the threat actors are taking the type of 'first steps' that someone who wants to understand (and control) your environment would take," Cisco's Hazel Burton wrote. "Examples we have observed include threat actors performing a 'show config,' 'show interface,' 'show route,' 'show arp table' and a 'show CDP neighbor.' All these actions give the attackers a picture of a router's perspective of the network, and an understanding of what foothold they have."
Cisco's alert concerned espionage attacks from China and Russia that abused vulnerabilities in aging, end-of-life network routers. But at a very important level, it doesn't matter how or why the attackers got that initial foothold on your network.
It might be zero-day vulnerabilities in your network firewall or file-transfer appliance. Your more immediate and primary concern has to be: How quickly can you detect and detach that initial foothold?
The same tourist behavior that Cisco described attackers exhibiting vis-a-vis older routers is also incredibly common early on in ransomware and data ransom attacks, which often unfurl in secret over days or weeks as attackers methodically identify and compromise a victim's key network assets.
These virtual hostage situations usually begin with the intruders purchasing access to the target's network from dark web brokers who resell access to stolen credentials and compromised computers. As a result, when those stolen resources first get used by would-be data thieves, almost invariably the attackers will run a series of basic commands asking the local system to confirm exactly who and where they are on the victim's network.
This fundamental reality about modern cyberattacks, namely that cybercriminals almost always orient themselves by "looking up" who and where they are upon entering a foreign network for the first time, forms the business model of an innovative security company called Thinkst, which gives away easy-to-use tripwires or "canaries" that can fire off an alert whenever all sorts of suspicious activity is witnessed.
"Many people have pointed out that there are a handful of commands that are overwhelmingly run by attackers on compromised hosts (and seldom ever by regular users/usage)," the Thinkst website explains. "Reliably alerting when a user on your code-sign server runs whoami.exe can mean the difference between catching a compromise in week-1 (before the attackers dig in) and learning about the attack on CNN."
These canaries, or "canary tokens," are meant to be embedded inside regular files, acting much like a web beacon or web bug that tracks when someone opens an email.
The Canary Tokens website from Thinkst Canary lists nearly two-dozen free customizable canaries.
"Imagine doing that, but for file reads, database queries, process executions or patterns in log files," the Canary Tokens documentation explains. "Canarytokens does all this and more, letting you implant traps in your production systems rather than setting up separate honeypots."
Thinkst operates alongside a burgeoning industry offering so-called "deception" or "honeypot" services, those designed to confuse, disrupt and entangle network intruders. But in an interview with KrebsOnSecurity, Thinkst founder and CEO Haroon Meer said most deception techniques involve some degree of hubris.
"Meaning, you'll have deception teams in your network playing spy versus spy with people trying to break in, and it becomes this whole counterintelligence thing," Meer said. "Nobody really has time for that. Instead, we are saying literally the opposite: That you've probably got all these [security improvement] projects that are going to take forever. But while you're doing all that, just drop these 10 canaries, because everything else is going to take a long time to do."
The idea here is to lay traps in sensitive areas of your network or web applications where few authorized users should ever trod. Importantly, the canary tokens themselves are useless to an attacker. For example, that AWS canary token sure looks like the digital keys to your cloud, but the token itself offers no access. It's just a lure for the bad guys, and you get an alert when and if it is ever touched.
One nice thing about canary tokens is that Thinkst gives them away for free. Head over to canarytokens.org, and choose from a drop-down menu of available tokens, including:
-a web bug / URL token, designed to alert when a particular URL is visited;
-a DNS token, which alerts when a hostname is requested;
-an AWS token, which alerts when a specific Amazon Web Services key is used;
-a "custom exe" token, to alert when a specific Windows executable file or DLL is run;
-a "sensitive command" token, to alert when a suspicious Windows command is run.
-a Microsoft Excel/Word token, which alerts when a specific Excel or Word file is accessed.
Much like a "wet paint" sign often encourages people to touch a freshly painted surface anyway, attackers often can't help themselves when they enter a foreign network and stumble upon what appear to be key digital assets, Meer says.
"If an attacker lands on your server and finds a key to your cloud environment, it's really hard for them not to try it once," Meer said. "Also, when these sorts of actors do land in a network, they have to orient themselves, and while doing that they are going to trip canaries."
Meer says canary tokens are as likely to trip up attackers as they are "red teams," security experts hired or employed by companies seeking to continuously probe their own computer systems and networks for security weaknesses.
"The concept and use of canary tokens has made me very hesitant to use credentials gained during an engagement, versus finding alternative means to an end goal," wrote Shubham Shah, a penetration tester and co-founder of the security firm Assetnote. "If the aim is to increase the time taken for attackers, canary tokens work well."
Thinkst makes money by selling Canary Tools, which are honeypots that emulate full blown systems like Windows servers or IBM mainframes. They deploy in minutes and include a personalized, private Canarytoken server.
"If you've got a sophisticated defense team, you can start putting these things in really interesting places," Meer said. "Everyone says their stuff is simple, but we obsess over it. It's really got to be so simple that people can't mess it up. And if it works, it's the best bang for your security buck you're going to get."
Further reading:
Dark Reading: Credential Canaries Create Minefield for Attackers
NCC Group: Extending a Thinkst Canary to Become an Interactive Honeypot
Cruise Automation's experience deploying canary tokens
Gato, or GitHub Attack Toolkit, is an enumeration and attack tool that allows both blue teamers and offensive security practitioners to evaluate the blast radius of a compromised personal access token within a GitHub organization.
The tool also allows searching for and thoroughly enumerating public repositories that utilize self-hosted runners. GitHub recommends that self-hosted runners only be utilized for private repositories; however, there are thousands of organizations that utilize self-hosted runners.
Gato supports OS X and Linux with at least Python 3.7.
In order to install the tool, simply clone the repository and use pip install. We recommend performing this within a virtual environment.
git clone https://github.com/praetorian-inc/gato
cd gato
python3 -m venv venv
source venv/bin/activate
pip install .
Gato also requires that git version 2.27 or above is installed and on the system's PATH. In order to run the fork PR attack module, sed must also be installed and present on the system's PATH.
After installing the tool, it can be launched by running gato or praetorian-gato.
We recommend viewing the parameters for the base tool using gato -h, and the parameters for each of the tool's modules by running the following:
gato search -h
gato enum -h
gato attack -h
The tool requires a GitHub classic PAT in order to function. To create one, log in to GitHub, go to GitHub Developer Settings, select Generate New Token, and then Generate new token (classic).
After creating this token, set the GH_TOKEN environment variable within your shell by running export GH_TOKEN=<YOUR_CREATED_TOKEN>. Alternatively, store the token within a secure password manager and enter it when the application prompts you.
For troubleshooting and additional details, such as installing in developer mode or running unit tests, please see the wiki.
Please see the wiki for detailed documentation, as well as OpSec considerations for the tool's various modules!
If you believe you have identified a bug within the software, please open an issue containing the tool's output, along with the actions you were trying to conduct.
If you are unsure if the behavior is a bug, use the discussions section instead!
Contributions are welcome! Please review our design methodology and coding standards before working on a new feature!
Additionally, if you are proposing significant changes to the tool, please open an issue to start a conversation about the motivation for the changes.
The plugin is created to help automated scanning using Burp in the following scenarios:
Key advantages:
The inspiration for the plugin is from ExtendedMacro plugin: https://github.com/FrUh/ExtendedMacro
For usage with test application (Install this testing application (Tiredful application) from https://github.com/payatu/Tiredful-API)
In total, there are four different ways you can specify the error condition.
Idea: record the Tiredful application request in Burp, configure the ATOR extender, and check whether the token is replaced by ATOR.
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
v1.0
Authors from Synopsys - Ashwath Reddy (@ka3hk) and Manikandan Rajappan (@rmanikdn)
This software is released by Synopsys under the MIT license.
The UI panel was split into 4 different configurations. Check out the code from v2 or use the executable from v2/bin.
Cobalt Strike Beacon Object File (BOF) that uses WinStationConnect API to perform local/remote RDP session hijacking. With a valid access token / kerberos ticket (e.g., golden ticket) of the session owner, you will be able to hijack the session remotely without dropping any beacon/tool on the target server.
To enumerate sessions locally/remotely, you could use Quser-BOF.
Usage: bof-rdphijack [your console session id] [target session id to hijack] [password|server] [argument]
Command Description
-------- -----------
password Specifies the password of the user who owns the session to which you want to connect.
server Specifies the remote server on which you want to perform RDP hijacking.
Sample usage
--------
Redirect session 2 to session 1 (require SYSTEM privilege):
bof-rdphijack 1 2
Redirect session 2 to session 1 with password of the user who owns the session 2 (require high integrity beacon):
bof-rdphijack 1 2 password P@ssw0rd123
Redirect session 2 to session 1 for a remote server (require token/ticket of the user who owns the session 2):
bof-rdphijack 1 2 server SQL01.lab.internal
make
tscon.exe
Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)
Install Node.js
Install Visual Studio with C++ compilers and everything enabled (it is a few gigabytes, but you won't get errors)
Run install.bat file to install all necessary files
Replace WEBHOOK with your webhook in config.js
Run build.bat and wait for doenerium-win.exe to be built.
- Exodus wallet injection (get the password whenever the user logs in to the wallet)
- More grabbers (VPNs, Gaming, Messengers)
- Keylogger
- Growtopia stealer
- Discord bot to build within discord ($build <webhook_url>)
- Dynamic encryption
By downloading this, you agree to the Commons Clause license and that you're not allowed to sell this repository or any code from this repository. For more info see commonsclause
There is no official Telegram server for this project. I don't own t.me/doenerium
I am not responsible for any damages this software may cause. This was made for personal education.
Credits to Pandoric / PandoricGalaxy for creating this beautiful README file
kubeaudit is a command line tool and a Go package to audit Kubernetes clusters for various security concerns, such as:
tldr. kubeaudit makes sure you deploy secure containers!
To use kubeaudit as a Go package, see the package docs.
The rest of this README will focus on how to use kubeaudit as a command line tool.
brew install kubeaudit
Kubeaudit has official releases that are blessed and stable: Official releases
Master may have newer features than the stable releases. If you need a newer feature not yet included in a release, make sure you're using Go 1.17+ and run the following:
go get -v github.com/Shopify/kubeaudit
Start using kubeaudit with the Quick Start or view all the supported commands.
Prerequisite: kubectl v1.12.0 or later
With kubectl v1.12.0 introducing easy pluggability of external functions, kubeaudit can be invoked as kubectl audit by either:
running make plugin and having $GOPATH/bin available in your path, or
renaming the binary to kubectl-audit and having it available in your path.
We also release a Docker image: shopify/kubeaudit. To run kubeaudit as a job in your cluster see Running kubeaudit in a cluster.
kubeaudit has three modes:
If a Kubernetes manifest file is provided using the -f/--manifest flag, kubeaudit will audit the manifest file.
Example command:
kubeaudit all -f "/path/to/manifest.yml"
Example output:
$ kubeaudit all -f "internal/test/fixtures/all_resources/deployment-apps-v1.yml"
---------------- Results for ---------------
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
namespace: deployment-apps-v1
--------------------------------------------
-- [error] AppArmorAnnotationMissing
Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/container' should be added.
Metadata:
Container: container
MissingAnnotation: container.apparmor.security.beta.kubernetes.io/container
-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.
-- [error] CapabilityShouldDropAll
Message: Capability not set to ALL. Ideally, you should drop ALL capabilities and add the specific ones you need to the add list.
Metadata:
Container: container
Capability: AUDIT_WRITE
...
If no errors with a given minimum severity are found, the following is returned:
All checks completed. 0 high-risk vulnerabilities found
Manifest mode also supports autofixing all security issues using the autofix command:
kubeaudit autofix -f "/path/to/manifest.yml"
To write the fixed manifest to a new file instead of modifying the source file, use the -o/--output flag.
kubeaudit autofix -f "/path/to/manifest.yml" -o "/path/to/fixed"
To fix a manifest based on custom rules specified in a kubeaudit config file, use the -k/--kconfig flag.
kubeaudit autofix -k "/path/to/kubeaudit-config.yml" -f "/path/to/manifest.yml" -o "/path/to/fixed"
Kubeaudit can detect if it is running within a container in a cluster. If so, it will try to audit all Kubernetes resources in that cluster:
```
kubeaudit all
```
Kubeaudit will try to connect to a cluster using the local kubeconfig file (`$HOME/.kube/config`). A different kubeconfig location can be specified using the `--kubeconfig` flag. To specify a context of the kubeconfig, use the `-c/--context` flag:
```
kubeaudit all --kubeconfig "/path/to/config" --context my_cluster
```
For more information on Kubernetes config files, see https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
Kubeaudit produces results with three levels of severity:
- `Error`: A security issue or invalid Kubernetes configuration
- `Warning`: A best practice recommendation
- `Info`: Informational, no action required. This includes results that are overridden
The minimum severity level can be set using the `-m/--minseverity` flag.
By default kubeaudit will output results in a human-readable way. If the output is intended to be further processed, it can be set to output JSON using the `--format json` flag. To output results as logs (the previous default), use `--format logrus`. Some output formats include colors to make results easier to read in a terminal. To disable colors (for example, if you are sending output to a text file), you can use the `--no-color` flag.
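The JSON output (one object per line) is convenient to post-process in a script. Below is a minimal sketch of filtering results by severity; the sample records and their field names (`AuditResultName`, `level`, `msg`) are assumptions for illustration and may differ between kubeaudit versions.

```python
import json

# Hypothetical sample of `kubeaudit ... --format json` output: one JSON object
# per line. Field names are assumed for illustration.
sample_output = """\
{"AuditResultName": "AppArmorAnnotationMissing", "level": "error", "msg": "AppArmor annotation missing."}
{"AuditResultName": "ImageTagMissing", "level": "warning", "msg": "Image tag is missing."}
"""

def filter_results(raw, min_level="error"):
    """Keep only results at or above the given severity level."""
    rank = {"info": 0, "warning": 1, "error": 2}
    results = [json.loads(line) for line in raw.splitlines() if line.strip()]
    return [r for r in results if rank[r["level"]] >= rank[min_level]]

errors = filter_results(sample_output, "error")
print([r["AuditResultName"] for r in errors])
```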
If there are results of severity level `error`, kubeaudit will exit with exit code 2. This can be changed using the `-e/--exitcode` flag.
For all the ways kubeaudit can be customized, see Global Flags.
Command | Description | Documentation |
---|---|---|
all | Runs all available auditors, or those specified using a kubeaudit config. | docs |
autofix | Automatically fixes security issues. | docs |
version | Prints the current kubeaudit version. | |
Auditors can also be run individually.
Command | Description | Documentation |
---|---|---|
apparmor | Finds containers running without AppArmor. | docs |
asat | Finds pods using an automatically mounted default service account. | docs |
capabilities | Finds containers that do not drop the recommended capabilities or add new ones. | docs |
deprecatedapis | Finds any resource defined with a deprecated API version. | docs |
hostns | Finds containers that have HostPID, HostIPC or HostNetwork enabled. | docs |
image | Finds containers which do not use the desired version of an image (via the tag) or use an image without a tag. | docs |
limits | Finds containers which exceed the specified CPU and memory limits or do not specify any. | docs |
mounts | Finds containers that have sensitive host paths mounted. | docs |
netpols | Finds namespaces that do not have a default-deny network policy. | docs |
nonroot | Finds containers running as root. | docs |
privesc | Finds containers that allow privilege escalation. | docs |
privileged | Finds containers running as privileged. | docs |
rootfs | Finds containers which do not have a read-only filesystem. | docs |
seccomp | Finds containers running without Seccomp. | docs |
Short | Long | Description |
---|---|---|
 | --format | The output format to use (one of "pretty", "logrus", "json") (default is "pretty") |
 | --kubeconfig | Path to local Kubernetes config file. Only used in local mode (default is `$HOME/.kube/config`) |
-c | --context | The name of the kubeconfig context to use |
-f | --manifest | Path to the yaml configuration to audit. Only used in manifest mode. You may use - to read from stdin. |
-n | --namespace | Only audit resources in the specified namespace. Not currently supported in manifest mode. |
-g | --includegenerated | Include generated resources in scan (such as Pods generated by deployments). If you would like kubeaudit to produce results for generated resources (for example if you have custom resources or want to catch orphaned resources where the owner resource no longer exists) you can use this flag. |
-m | --minseverity | Set the lowest severity level to report (one of "error", "warning", "info") (default is "info") |
-e | --exitcode | Exit code to use if there are results with severity of "error". Conventionally, 0 is used for success and all non-zero codes for an error. (default is 2) |
 | --no-color | Don't use colors in the output (default is false) |
The kubeaudit config can be used for two things:
1. Enabling and disabling auditors
2. Configuring individual auditors
Any configuration that can be specified using flags for the individual auditors can be represented using the config.
The config has the following format:
```yaml
enabledAuditors:
  # Auditors are enabled by default if they are not explicitly set to "false"
  apparmor: false
  asat: false
  capabilities: true
  deprecatedapis: true
  hostns: true
  image: true
  limits: true
  mounts: true
  netpols: true
  nonroot: true
  privesc: true
  privileged: true
  rootfs: true
  seccomp: true
auditors:
  capabilities:
    # add capabilities needed to the add list, so kubeaudit won't report errors
    allowAddList: ['AUDIT_WRITE', 'CHOWN']
  deprecatedapis:
    # If no versions are specified and the 'deprecatedapis' auditor is enabled, WARN
    # results will be generated for the resources defined with a deprecated API.
    currentVersion: '1.22'
    targetedVersion: '1.25'
  image:
    # If no image is specified and the 'image' auditor is enabled, WARN results
    # will be generated for containers which use an image without a tag
    image: 'myimage:mytag'
  limits:
    # If no limits are specified and the 'limits' auditor is enabled, WARN results
    # will be generated for containers which have no cpu or memory limits specified
    cpu: '750m'
    memory: '500m'
```
For more details about each auditor, including a description of the auditor-specific configuration in the config, see the Auditor Docs.
Note: The kubeaudit config is not the same as the kubeconfig file specified with the `--kubeconfig` flag, which refers to the Kubernetes config file (see Local Mode). Also note that only the `all` and `autofix` commands support using a kubeaudit config. It will not work with other commands.
Note: If flags are used in combination with the config file, flags will take precedence.
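The precedence rule above can be sketched as a simple merge: start from the config values and let any explicitly provided flag win. This is an illustrative sketch only (not kubeaudit's actual implementation), and the setting names are hypothetical.

```python
def effective_settings(config_values, flag_values):
    """Merge settings: any flag explicitly provided overrides the config value.

    Flags the user did not pass are represented as None and are ignored.
    """
    merged = dict(config_values)
    merged.update({k: v for k, v in flag_values.items() if v is not None})
    return merged

# Hypothetical example: config sets minseverity=info, but the user passed -m error.
config_values = {"minseverity": "info", "format": "pretty"}
flag_values = {"minseverity": "error", "format": None}  # only -m was passed

print(effective_settings(config_values, flag_values))
# {'minseverity': 'error', 'format': 'pretty'}
```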
Security issues can be ignored for specific containers or pods by adding override labels. This means the auditor will produce `info` results instead of `error` results, and the audit result name will have `Allowed` appended to it. The labels are documented in each auditor's documentation, but the general format for auditors that support overrides is as follows:
An override label consists of a `key` and a `value`.
The `key` is a combination of the override type (container or pod) and an override identifier which is unique to each auditor (see the docs for the specific auditor). The `key` can take one of two forms depending on the override type:
```
container.audit.kubernetes.io/[container name].[override identifier]
audit.kubernetes.io/pod.[override identifier]
```
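The two key forms above can be sketched as a small helper. This is a hypothetical illustration (not part of kubeaudit); the override identifier used in the example is taken from the privesc auditor, but you should check the specific auditor's docs for the correct identifier.

```python
def override_label_key(override_identifier, container_name=None):
    """Build an override label key for a pod, or for a specific container.

    Matches the two documented forms:
      container.audit.kubernetes.io/[container name].[override identifier]
      audit.kubernetes.io/pod.[override identifier]
    """
    if container_name is not None:
        return f"container.audit.kubernetes.io/{container_name}.{override_identifier}"
    return f"audit.kubernetes.io/pod.{override_identifier}"

# Container-scoped and pod-scoped keys for the same (assumed) identifier:
print(override_label_key("allow-privilege-escalation", container_name="myContainer"))
print(override_label_key("allow-privilege-escalation"))
```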
If the `value` is set to a non-empty string, it will be displayed in the `info` result as the `OverrideReason`:
```
$ kubeaudit asat -f "auditors/asat/fixtures/service-account-token-true-allowed.yml"

---------------- Results for ---------------

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: replicationcontroller
    namespace: service-account-token-true-allowed

--------------------------------------------

-- [info] AutomountServiceAccountTokenTrueAndDefaultSAAllowed
   Message: Audit result overridden: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.
   Metadata:
      OverrideReason: SomeReason
```
As per the Kubernetes spec, `value` must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`), with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
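The value rule above can be expressed as a regular expression. The following is a sketch of a validator based solely on the constraints stated here (it is not kubeaudit code):

```python
import re

# Empty string, or an alphanumeric at both ends with dashes, underscores and
# dots allowed in between; length is checked separately (63 chars max).
VALUE_RE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")

def is_valid_label_value(value):
    """Check a label value against the Kubernetes label-value rules above."""
    return len(value) <= 63 and VALUE_RE.match(value) is not None

print(is_valid_label_value("SomeReason"))         # True
print(is_valid_label_value("-starts-with-dash"))  # False
print(is_valid_label_value(""))                   # True (empty is allowed)
```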
Multiple override labels (for multiple auditors) can be added to the same resource.
See the specific auditor docs for the auditor you wish to override for examples.
To learn more about labels, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
If you'd like to fix a bug, contribute a feature or just correct a typo, please feel free to do so as long as you follow our Code of Conduct.
1. `go get github.com/Shopify/kubeaudit`
2. `cd $GOPATH/src/github.com/Shopify/kubeaudit`
3. `git remote add fork https://github.com/you-are-awesome/kubeaudit`
4. `git checkout -b awesome-new-feature`
5. `make test` (to run tests without Kind: `USE_KIND=false make test`)
6. `git commit -am 'Adds awesome feature'`
7. `git push fork`
Note that if you didn't sign the CLA before opening your PR, you can re-run the check by adding a comment to the PR that says "I've signed the CLA!".