
AzSubEnum - Azure Service Subdomain Enumeration

By: Zion3R


AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.


How does it work?

AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.

With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
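
For illustration, here is a minimal Python sketch of the permutation-plus-DNS-resolution idea described above. The service suffixes and helper names are examples for this sketch, not AzSubEnum's actual internals:

import socket

# A small illustrative subset of Azure service DNS suffixes
AZURE_SUFFIXES = [
    "azurewebsites.net",       # App Services
    "blob.core.windows.net",   # Storage Accounts
    "database.windows.net",    # Azure SQL
    "vault.azure.net",         # Key Vaults
]

def enumerate_subdomains(base, permutations):
    """Resolve base and base-permutation names against each service suffix."""
    found = []
    for name in [base] + [f"{base}-{p}" for p in permutations]:
        for suffix in AZURE_SUFFIXES:
            fqdn = f"{name}.{suffix}"
            try:
                socket.gethostbyname(fqdn)  # a DNS record exists -> service found
                found.append(fqdn)
            except socket.gaierror:
                pass  # no record -> candidate discarded
    return found

print(enumerate_subdomains("retailcorp", ["dev", "prod", "backup"]))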


Why did I create this?

During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool Invoke-EnumerateAzureSubDomains from NetSPI was unable to run under PowerShell on my Debian system. Consequently, I created a crude implementation of that tool in Python.


Usage
➜  AzSubEnum git:(main) βœ— python3 azsubenum.py --help
usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]

Azure Subdomain Enumeration

options:
  -h, --help            show this help message and exit
  -b BASE, --base BASE  Base name to use
  -v, --verbose         Show verbose output
  -t THREADS, --threads THREADS
                        Number of threads for concurrent execution
  -p PERMUTATIONS, --permutations PERMUTATIONS
                        File containing permutations

Basic enumeration:

python3 azsubenum.py -b retailcorp --threads 10

Using permutation wordlists:

python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt

With verbose output:

python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose




Argus - A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions

By: Zion3R

This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.

Visit our website - secureci.org for more information.


Features

  • Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks. (A toy illustration of this source-to-sink pattern follows this list.)

  • Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.
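
As a concrete, hypothetical illustration of the kind of flow a taint tracker flags, the toy Python scanner below marks lines where untrusted github.event expressions reach a shell run: sink - the classic workflow code-injection pattern. It is a sketch of the idea only, not Argus's engine:

import re

# Untrusted source: any ${{ github.event.* }} expression
UNTRUSTED_SOURCE = re.compile(r"\$\{\{\s*github\.event\.[^}]*\}\}")

def find_injection_candidates(workflow_text):
    """Flag lines where event-controlled data reaches a shell 'run:' sink."""
    findings = []
    for lineno, line in enumerate(workflow_text.splitlines(), start=1):
        if "run:" in line and UNTRUSTED_SOURCE.search(line):
            findings.append((lineno, line.strip()))
    return findings

workflow = 'steps:\n  - run: echo "${{ github.event.issue.title }}"'
print(find_injection_candidates(workflow))  # -> finding on line 2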

Usage

This Python script provides a command-line interface for interacting with GitHub repositories and GitHub Actions.

python argus.py --mode [mode] --url [url] [--output-folder path_to_output] [--config path_to_config] [--verbose] [--branch branch_name] [--commit commit_hash] [--tag tag_name] [--action-path path_to_action] [--workflow-path path_to_workflow]

Parameters:

  • --mode: The mode of operation. Choose either 'repo' or 'action'. This parameter is required.
  • --url: The GitHub URL. Use USERNAME:TOKEN@URL for private repos. This parameter is required.
  • --output-folder: The output folder. The default value is '/tmp'. This parameter is optional.
  • --config: The config file. This parameter is optional.
  • --verbose: Verbose mode. If this option is provided, the logging level is set to DEBUG. Otherwise, it is set to INFO. This parameter is optional.
  • --branch: The branch name. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --commit: The commit hash. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --tag: The tag. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --action-path: The (relative) path to the action. You cannot provide --action-path in repo mode. This parameter is optional.
  • --workflow-path: The (relative) path to the workflow. You cannot provide --workflow-path in action mode. This parameter is optional.

Example:

To use this script to interact with a GitHub repo, you might run a command like the following:

python argus.py --mode repo --url https://github.com/username/repo.git --branch master

This would run the script in repo mode on the master branch of the specified repository.

How to use

Argus can be run inside a Docker container. To do so, follow these steps:

  • Install docker and docker-compose
    • apt-get -y install docker.io docker-compose
  • Clone the release branch of this repo
    • git clone <>
  • Build the docker container
    • docker-compose build
  • Now you can run argus. Example run:
    • docker-compose run argus --mode {mode} --url {url to target repo}
  • Results will be available inside the results folder

Viewing SARIF Results

You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.

  1. Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.

  2. VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.

Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.

Troubleshooting

If you run into GitHub authorization issues, you can provide username:TOKEN in the GITHUB_CREDS environment variable. This will be used for all requests made to GitHub. Note that we do not store this information anywhere, nor do we create anything in your GitHub account - we only use the credentials for cloning repositories.

Contributions

Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!

Cite Argus

If you use Argus in your research, please cite our paper:

  @inproceedings{muralee2023Argus,
    title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
    author={S. Muralee and I. Koishybayev and A. Nahapetyan and G. Tystahl and B. Reaves and A. Bianchi and W. Enck and A. Kapravelos and A. Machiry},
    booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
    year={2023},
  }


PurpleKeep - Providing Azure Pipelines To Create An Infrastructure And Run Atomic Tests

By: Zion3R


With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.

To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. It leverages the Atomic Red Team project, which allows simulating attacks that follow MITRE TTPs (Tactics, Techniques, and Procedures), and enhances the simulation of these TTPs to serve as a starting point for evaluating the effectiveness of detection rules.

Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.

Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.

To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.

TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.


Requirements

The project is based on Azure Pipelines and requires the following to be able to run:

  • Azure Service Connection to a resource group as described in the Microsoft Docs
  • Assignment of the "Key Vault Administrator" Role for the previously created Enterprise Application
  • MDE onboarding script, placed as a Secure File in the Library of Azure DevOps and make it accessible to the pipelines

Optional

You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.

Refer to the variables file for your configurable items.

Design

Infrastructure

Deploying the infrastructure uses the Azure Pipeline to perform the following steps:

  • Deploy Azure services:
    • Key Vault
    • Log Analytics Workspace
    • Data Connection Endpoint
    • Data Connection Rule
  • Generate SSH keypair and password for the Windows account and store in the Key Vault
  • Create a Windows 11 VM
  • Install OpenSSH
  • Configure and deploy the SSH public key
  • Install Invoke-AtomicRedTeam
  • Install Microsoft Defender for Endpoint and configure exceptions
  • (Optional) Apply security and/or audit policy files
  • Reboot

Simulation

Currently only the Atomics from the public repository are supported. The pipeline takes a single technique ID or a comma-separated list of techniques as input, for example:

  • T1059.003
  • T1027,T1049,T1003

The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
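
If you want to pull those logs programmatically, a minimal sketch along these lines should work. It assumes the azure-monitor-query and azure-identity Python packages and your own workspace ID; this is an illustration, not part of PurpleKeep:

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<your-workspace-id>",  # placeholder - use your own
    query="AtomicLogs_CL | sort by TimeGenerated desc | take 20",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)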

There are currently two ways to run the simulation:

Rotating simulation

This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytics workspace will retain the logs of each run.

Warning: this will onboard a large number of hosts into your EDR

Single deploy simulation

A fresh infrastructure will be deployed only at the beginning of the pipeline. All TTPs will be simulated on this instance. This is the fastest way to simulate and prevents onboarding a large number of devices; however, running many simulations in the same environment risks contaminating it and making the simulations less stable and predictable.

TODO

Must have

  • Check if prerequisites have been fulfilled before executing the atomic
  • Provide the ability to import your own group policy
  • Clean up Bicep templates and pipelines by using a master template (complete build)
  • Build a pipeline that runs techniques sequentially with reboots in between
  • Add Azure ServiceConnection to variables instead of parameters

Nice to have

  • MDE Off-boarding (?)
  • Automatically join and leave AD domain
  • Make the Atomics repository configurable
  • Deploy VECTR as part of the infrastructure and ingest results during simulation. Also see the VECTR API issue
  • Tune alert API call to Microsoft Defender for Endpoint (Microsoft.Security alertsSuppressionRules)
  • Add C2 infrastructure for manual or C2 based simulations

Issues

  • Atomics do not report whether a simulation succeeded or not
  • Unreliable OpenSSH extension installer failing infrastructure deployment
  • Spamming onboarded devices in the EDR




Antisquat - Leverages AI Techniques Such As NLP, ChatGPT And More To Empower Detection Of Typosquatting And Phishing Domains

By: Zion3R


AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.


How to use

  • Clone the project via git clone https://github.com/redhuntlabs/antisquat.
  • Install all dependencies by typing pip install -r requirements.txt.
  • Get a ChatGPT API key at https://platform.openai.com/account/api-keys.
  • Create a file named .openai-key and paste your ChatGPT API key in there.
  • (Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your GoDaddy API key in there.
  • Create a file named 'domains.txt'. Type in a line-separated list of domains you'd like to scan.
  • (Optional) Create a file named blacklist.txt. Type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
  • Run AntiSquat using python3.8 antisquat.py domains.txt.

Examples:

Let's say you'd like to run AntiSquat on "flipkart.com".

Create a file named "domains.txt", then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.

AntiSquat generates several permutations of the domain, iterates through them one-by-one and tries extracting all contact information from the page.
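
To make the permutation step concrete, here is a small Python sketch of a few classic typosquatting strategies (character omission, repetition, and adjacent swap). This is illustrative only, not AntiSquat's actual generation logic:

def typo_permutations(domain):
    """Generate simple typo variants of a domain name."""
    name, _, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])                # omit a character
        variants.add(name[:i] + name[i] * 2 + name[i + 1:])  # repeat a character
        if i < len(name) - 1:
            swapped = list(name)
            swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
            variants.add("".join(swapped))                   # swap adjacent characters
    variants.discard(name)
    return sorted(f"{v}.{tld}" for v in variants)

print(typo_permutations("flipkart.com")[:10])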

Test case:

A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py.

Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.

If you'd like to know more about the tool, make sure to check out our blog.

Acknowledgements

To know more about our Attack Surface Management platform, check out NVADR.



BestEdrOfTheMarket - Little AV/EDR Bypassing Lab For Training And Learning Purposes

By: Zion3R


Little AV/EDR Evasion Lab for training & learning purposes. (under construction)

[ASCII art banner: "Best EDR Of The Market" - Yazidou - github.com/Xacone]


BestEDROfTheMarket is a naive user-mode EDR (Endpoint Detection and Response) project, designed to serve as a testing ground for understanding and bypassing the user-mode detection methods frequently used by these security solutions. The techniques it implements are mainly based on dynamic analysis of the target process state (memory, API calls, etc.).

Feel free to check out this short article I wrote that describes the interception and analysis methods implemented by the EDR.


Defensive Techniques

In progress:


Usage

Usage: BestEdrOfTheMarket.exe [args]

  /help     Shows this help message and quits
  /v        Verbosity
  /iat      IAT hooking
  /stack    Threads call stack monitoring
  /nt       Inline Nt-level hooking
  /k32      Inline Kernel32/Kernelbase hooking
  /ssn      SSN crushing

BestEdrOfTheMarket.exe /stack /v /k32
BestEdrOfTheMarket.exe /stack /nt
BestEdrOfTheMarket.exe /iat


Deepsecrets - Secrets Scanner That Understands Code

By: Zion3R


Yet another tool - why?

Existing tools don't really "understand" code. Instead, they mostly parse text.

DeepSecrets expands classic regex-search approaches with semantic analysis, dangerous variable detection, and more efficient usage of entropy analysis. Code understanding supports 500+ languages and formats and is achieved by lexing and parsing - techniques commonly used in SAST tools.

DeepSecrets also introduces a new way to find secrets: just use hashed values of your known secrets and get them found plain in your code.
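
Conceptually, the hashed-secret approach works like this (a minimal Python sketch of the idea, not DeepSecrets' internals; the example secret is hypothetical). Only digests ship with the ruleset, so the ruleset itself never exposes the plaintext secrets:

import hashlib

# Ship only SHA-256 digests of known secrets with the ruleset
KNOWN_SECRET_HASHES = {
    hashlib.sha256(b"hunter2").hexdigest(),  # hypothetical known secret
}

def find_known_secrets(tokens):
    """Hash each candidate token and report collisions with known digests."""
    return [
        t for t in tokens
        if hashlib.sha256(t.encode()).hexdigest() in KNOWN_SECRET_HASHES
    ]

print(find_known_secrets(["foo", "hunter2", "bar"]))  # -> ['hunter2']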

The under-the-hood story is covered in an article series starting here: https://hackernoon.com/modernizing-secrets-scanning-part-1-the-problem


Mini-FAQ after release :)

Pff, is it still regex-based?

Yes and no. Of course it uses regexes and finds typed secrets like any other tool. But language understanding (the lexing stage) and variable detection also use regexes under the hood. So regexes are an instrument, not a problem.

Why don't you build true abstract syntax trees? It's academically more correct!

DeepSecrets tries to keep a balance between complexity and effectiveness. Building a true AST is a pretty complex thing and simply overkill for our specific task. So the tool still follows the generic SAST way of code analysis but optimizes the AST part using a different approach.

I'd like to build my own semantic rules. How do I do that?

Only through the code for the moment. Formalizing the rules and moving them into a flexible, user-controlled ruleset is planned.

I still have a question

Feel free to communicate with the maintainer.

Installation

From Github via pip

$ pip install git+https://github.com/avito-tech/deepsecrets.git

From PyPi

$ pip install deepsecrets

Scanning

The easiest way:

$ deepsecrets --target-dir /path/to/your/code --outfile report.json

This will run a scan against /path/to/your/code using the default configuration:

  • Regex checks by the built-in ruleset
  • Semantic checks (variable detection, entropy checks)

The report will be saved to report.json.

Fine-tuning

Run deepsecrets --help for details.

Basically, you can use your own ruleset by specifying --regex-rules. Paths to be excluded from scanning can be set via --excluded-paths.

Building rulesets

Regex

The built-in ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.

HashedSecret

Example ruleset for regex checks is located in /deepsecrets/rules/regexes.json. You're free to follow the format and create a custom ruleset.

Contributing

Under the hood

There are several core concepts:

  • File
  • Tokenizer
  • Token
  • Engine
  • Finding
  • ScanMode

File

Just a pythonic representation of a file with all needed methods for management.

Tokenizer

A component able to break the content of a file into pieces - Tokens - by its own logic. There are three types of tokenizers available:

  • FullContentTokenizer: treats all content as a single token. Useful for regex-based search.
  • PerWordTokenizer: breaks given content by words and line breaks.
  • LexerTokenizer: uses language-specific smarts to break code into semantically correct pieces with additional context for each token.

Token

A string with additional information about its semantic role, corresponding file, and location inside it.

Engine

A component performing secrets search for a single token by its own logic. Returns a set of Findings. There are three engines available:

  • RegexEngine: checks tokens' values through a special ruleset
  • SemanticEngine: checks tokens produced by the LexerTokenizer using additional context - variable names and values
  • HashedSecretEngine: checks tokens' values by hashing them and trying to find coinciding hashes inside a special ruleset

Finding

This is a data structure representing a problem detected inside code. Features information about the precise location inside a file and a rule that found it.

ScanMode

This component is responsible for the scan process.

  • Defines the scope of analysis for a given work directory respecting exceptions
  • Allows declaring a PerFileAnalyzer - the method called against each file, returning a list of findings. The primary usage is to initialize necessary engines, tokenizers, and rulesets.
  • Runs the scan: a multiprocessing pool analyzes every file in parallel.
  • Prepares results for output and outputs them.

The current implementation provides a CliScanMode, built from the user-provided config through the CLI args.
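
A rough sketch of that flow in Python, with hypothetical helper names (analyze_file stands in for the user's PerFileAnalyzer; the engine/tokenizer setup is elided):

from multiprocessing import Pool

def analyze_file(path):
    """Stand-in PerFileAnalyzer: initialize engines/tokenizers, return findings."""
    findings = []
    # ... run tokenizers and engines against the file at `path` ...
    return findings

def run_scan(paths, processes=4):
    # A multiprocessing pool analyzes every file in parallel
    with Pool(processes) as pool:
        per_file = pool.map(analyze_file, paths)
    return [f for file_findings in per_file for f in file_findings]

if __name__ == "__main__":
    print(run_scan(["a.py", "b.py"]))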

Local development

The project is intended to be developed using VSCode and the 'Remote Containers' feature.

Steps:

  1. Clone the repository
  2. Open the cloned folder with VSCode
  3. Agree with 'Reopen in container'
  4. Wait until the container is built and necessary extensions are installed
  5. You're ready


Nodesub - Command-Line Tool For Finding Subdomains In Bug Bounty Programs

By: Zion3R


Nodesub is a command-line tool for finding subdomains in bug bounty programs. It supports various subdomain enumeration techniques and provides flexible options for customization.


Features

  • Perform subdomain enumeration using CIDR notation (supports input lists).
  • Perform subdomain enumeration using ASN (supports input lists).
  • Perform subdomain enumeration using a list of domains.

Installation

To install Nodesub, use the following command:

npm install -g nodesub

NOTE:

  • Edit the file ~/.config/nodesub/config.ini

Usage

nodesub -h

This will display help for the tool. Here are all the switches it supports.

Examples
  • Enumerate subdomains for a single domain:

     nodesub -u example.com
  • Enumerate subdomains for a list of domains from a file:

     nodesub -l domains.txt
  • Perform subdomain enumeration using CIDR:

    node nodesub.js -c 192.168.0.0/24 -o subdomains.txt

    node nodesub.js -c CIDR.txt -o subdomains.txt

  • Perform subdomain enumeration using ASN:

    node nodesub.js -a AS12345 -o subdomains.txt
    node nodesub.js -a ASN.txt -o subdomains.txt
  • Enable recursive subdomain enumeration and output the results to a JSON file:

     nodesub -u example.com -r -o output.json -f json

Output

The tool provides various output formats for the results, including:

  • Text (txt)
  • JSON (json)
  • CSV (csv)
  • PDF (pdf)

The output file contains the resolved subdomains, the subdomains that failed to resolve, or all subdomains, depending on the options chosen.



Trawler - PowerShell Script To Help Incident Responders Discover Adversary Persistence Mechanisms

By: Zion3R


Dredging Windows for Persistence

What is it?

Trawler is a PowerShell script designed to help incident responders discover potential indicators of compromise on Windows hosts, primarily focused on persistence mechanisms including Scheduled Tasks, Services, Registry Modifications, Startup Items, Binary Modifications, and more.

Currently, trawler can detect most of the persistence techniques specifically called out by MITRE and Atomic Red Team, with more detections being added on a regular basis.


Main Features

  • Scanning Windows OS for a variety of persistence techniques (Listed below)
  • CSV Output with MITRE Technique and Investigation Jumpstart Metadata
  • Analysis and Remediation Guidance Documentation (https://github.com/joeavanzato/Trawler/wiki/Analysis-and-Remediation-Guidance)
  • Dynamic Risk Assignment for each detection
  • Built-in Allow Lists for common Windows configurations spanning Windows 10/Server 2012|2016|2019|2022 to reduce noise
  • Capture persistence metadata from a 'golden' enterprise image for use as a dynamic allow-list at runtime
  • Analyze mounted disk images via drive re-targeting

How do I use it?

Just download and run trawler.ps1 from an Administrative PowerShell/cmd prompt - any detections will be displayed in the console as well as written to a CSV ('detections.csv') in the current working directory. The generated CSV will contain Detection Name, Source, Risk, Metadata and the relevant MITRE Technique.

Or use this one-liner from an Administrative PowerShell terminal:

iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/joeavanzato/Trawler/main/trawler.ps1'))

Certain detections have allow-lists built-in to help remove noise from default Windows configurations (10/2016/2019/2022) - expected Scheduled Tasks, Services, etc. Of course, it is always possible for attackers to hijack these directly and masquerade with great detail as a default OS process - take care to use multiple forms of analysis and detection when dealing with skillful adversaries.

If you have examples or ideas for additional detections, please feel free to submit an Issue or PR with relevant technical details/references - the code-base is a little messy right now and will be cleaned up over time.

Additionally, if you identify obvious false positives, please let me know by opening an issue or PR on GitHub! The obvious culprits for this will be non-standard COMs, Services or Tasks.

CLI Parameters

-scanoptions : Tab-through possible detections and select a sub-set using comma-delimited terms (eg. .\trawler.ps1 -scanoptions Services,Processes)
-hide : Suppress Detection output to console
-snapshot : Capture a "persistence snapshot" of the current system, defaulting to "$PSScriptRoot\snapshot.csv"
-snapshotpath : Define a custom file-path for saving snapshot output to.
-outpath : Define a custom file-path for saving detection output to (defaults to "$PSScriptRoot\detections.csv")
-loadsnapshot : Define the path for an existing snapshot file to load as an allow-list reference
-drivetarget : Define the variable for a mounted target drive (eg. .\trawler.ps1 -drivetarget "D:") - using this alone leads to an 'assumed homedrive' variable of C: for analysis purposes

What separates this from PersistenceSniper?

PersistenceSniper is an awesome tool - I've used it heavily in the past - but there are a few key points that differentiate these utilities

  • trawler is (currently) a local utility - it would be pretty straight-forward to wrap it in a loop and use WinRM/PowerShell Sessions to execute it on remote hosts though
  • trawler implements allow-listing for many 'noisy' detections to help remove expected detections from default configurations of Windows (10/2016/2019/2022) and these are constantly being updated
    • PersistenceSniper (for the most part) does not contain any type of allow-listing - therefore, there is more noise generated when considering items such as Services, Scheduled Tasks, general COM DLL scanning, etc.
  • trawler's output is much simpler - Name, Risk, Source, MITRE Technique and Metadata are the only items provided for each detection, to help analysts jump-start their persistence-hunting efforts
  • Regex is used in many checks to help detect 'suspicious' keywords or patterns in various critical areas including scanned file contents, registry values, etc.
  • trawler supports 'snapshotting' a system (for example, an enterprise golden image) then using the generated snapshot as an allow-list to reduce noise.
  • trawler supports 'drive-retargeting' to check dead-boxes mounted to an analysis machine.

Overall, these tools are extremely similar but approach the problem from slightly different angles - PersistenceSniper provides all information back to the analyst for review while Trawler tries to limit what is returned to only results that are likely to be potential adversary persistence mechanisms. As such, there is a possibility for false-negatives with trawler if an adversary completely mimics an allow-listed item.

Tuning to your environment

Trawler supports loading an allow-list from a 'snapshot'; doing this requires two steps.

  1. Run '.\trawler.ps1 -snapshot' on a "Golden Image" representing the servers in your environment - once complete, in addition to the standard 'detections.csv' a file named 'snapshots.csv' will be generated
  2. This file can then be used as input to trawler when running on other hosts and the data will be loaded dynamically as an allow-list for each appropriate detection
    1. '.\trawler.ps1' -loadsnapshot "path\to\snapshot.csv"

That's it - all relevant detections will then draw from the snapshot file as an allow-list to reduce noise and identify any potential changes to the base image that may have occurred.

(Allow-listing is implemented for most of the checks but not all - still being actively implemented)
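
Conceptually, the snapshot works as a simple set-membership filter. Here is a language-agnostic illustration in Python (trawler itself is PowerShell, and the column names below are hypothetical, not trawler's actual schema):

import csv

def load_allowlist(path):
    # Build a set of (detection name, metadata) pairs from the snapshot CSV;
    # "Name"/"Metadata" are illustrative column names.
    with open(path, newline="") as f:
        return {(row["Name"], row["Metadata"]) for row in csv.DictReader(f)}

def filter_detections(detections, allowlist):
    # Suppress any detection already captured on the golden image.
    return [d for d in detections if (d["Name"], d["Metadata"]) not in allowlist]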

Drive ReTargeting

Often during an investigation, analysts may end up mounting a new drive that represents an imaged Windows device - Trawler now partially supports scanning these mounted drives through the use of the '-drivetarget' parameter.

At runtime, Trawler will re-target temporary script-level variables for use in checking file-based artifacts and also will attempt to load relevant Registry Hives (HKLM\SOFTWARE, HKLM\SYSTEM, NTUSER.DATs, USRCLASS.DATs) underneath HKLM/HKU and prefixed by 'ANALYSIS_'. Trawler will also attempt to unload these temporarily loaded hives upon script completion.

As an example, if you have an image mounted at a location such as 'F:\Test' which contains the NTFS file system ('F:\Test\Windows', 'F:\Test\Users', etc.), then you can invoke trawler like below:

.\trawler.ps1 -drivetarget "F:\Test"

Please note that since trawler attempts to load the registry hive files from the drive in question, mapping a UNC path to a live remote device will NOT work, as those files will not be accessible due to system locks. I am working on an approach that will handle live remote devices - stay tuned.

What is not inspected when drive retargeting?

  • Running Processes
  • Network Connections
  • 'Phantom' DLLs
  • WMI Consumers (Being worked on)
  • BITS Jobs (Being worked on)
  • Certificate Parsing (Being worked on)

Most other checks will function fine because they are based entirely on reading registry hives or file-based artifacts (or can be converted to do so, such as directly reading Task XML as opposed to using built-in command-lets.)

Any limitations in checks when doing drive-retargeting will be discussed more fully in the GitHub Wiki.

Example Images

What is inspected?

  • Scheduled Tasks
  • Users
  • Services
  • Running Processes
  • Network Connections
  • WMI Event Consumers (CommandLine/Script)
  • Startup Item Discovery
  • BITS Jobs Discovery
  • Windows Accessibility Feature Modifications
  • PowerShell Profile Existence
  • Office Addins from Trusted Locations
  • SilentProcessExit Monitoring
  • Winlogon Helper DLL Hijacking
  • Image File Execution Option Hijacking
  • RDP Shadowing
  • UAC Setting for Remote Sessions
  • Print Monitor DLLs
  • LSA Security and Authentication Package Hijacking
  • Time Provider DLLs
  • Print Processor DLLs
  • Boot/Logon Active Setup
  • User Initialization Logon Script Hijacking
  • ScreenSaver Executable Hijacking
  • Netsh DLLs
  • AppCert DLLs
  • AppInit DLLs
  • Application Shimming
  • COM Object Hijacking
  • LSA Notification Hijacking
  • 'Office test' Usage
  • Office GlobalDotName Usage
  • Terminal Services DLL Hijacking
  • Autodial DLL Hijacking
  • Command AutoRun Processor Abuse
  • Outlook OTM Hijacking
  • Trust Provider Hijacking
  • LNK Target Scanning (Suspicious Terms, Multiple Extensions, Multiple EXEs)
  • 'Phantom' Windows DLL Names loaded into running process (eg. un-signed WptsExtensions.dll)
  • Scanning Critical OS Directories for Unsigned EXEs/DLLs
  • Un-Quoted Service Path Hijacking
  • PATH Binary Hijacking
  • Common File Association Hijacks and Suspicious Keywords
  • Suspicious Certificate Hunting
  • GPO Script Discovery/Scanning
  • NLP Development Platform DLL Overrides
  • AeDebug/.NET/Script/Process/WER Debug Replacements
  • Explorer 'Load'
  • Windows Terminal startOnUserLogin Hijacks
  • App Path Mismatches
  • Service DLL/ImagePath Mismatches
  • GPO Extension DLLs
  • Potential COM Hijacks
  • Non-Standard LSA Extensions
  • DNSServerLevelPluginDll Presence
  • Explorer\MyComputer Utility Hijack
  • Terminal Services InitialProgram Check
  • RDP Startup Programs
  • Microsoft Telemetry Commands
  • Non-Standard AMSI Providers
  • Internet Settings LUI Error DLL
  • PeerDist\Extension DLL
  • ErrorHandler.CMD Checks
  • Built-In Diagnostics DLL
  • MiniDumpAuxiliary DLLs
  • KnownManagedDebugger DLLs
  • WOW64 Compatibility Layer DLLs
  • EventViewer MSC Hijack
  • Uninstall Strings Scan
  • PolicyManager DLLs
  • SEMgr Wallet DLL
  • WER Runtime Exception Handlers
  • HTML Help (.CHM)
  • Remote Access Tool Artifacts (Files, Directories, Registry Keys)
  • ContextMenuHandler DLL Checks
  • Office AI.exe Presence
  • Notepad++ Plugins
  • MSDTC Registry Hijacks
  • Narrator DLL Hijack (MSTTSLocEnUS.DLL)
  • Suspicious File Location Checks

TODO

MITRE Techniques Evaluated

Please be aware that some of these are (of course) detected more thoroughly than others - for example, we are not detecting all possible registry modifications, but rather inspecting certain keys for obvious changes and using the generic MITRE technique "Modify Registry" where no other technique is applicable. For other items, such as COM hijacking, we inspect all entries in the relevant registry section, check against 'known-good' patterns, and bubble up unknown or mismatched values, resulting in a much more complete detection surface for that particular technique.

  • T1037: Boot or Logon Initialization Scripts
  • T1037.001: Boot or Logon Initialization Scripts: Logon Script (Windows)
  • T1037.005: Boot or Logon Initialization Scripts: Startup Items
  • T1055.001: Process Injection: Dynamic-link Library Injection
  • T1059: Command and Scripting Interpreter
  • T1071: Application Layer Protocol
  • T1098: Account Manipulation
  • T1053: Scheduled Task/Job
  • T1112: Modify Registry
  • T1136: Create Account
  • T1137.001: Office Application Startup: Office Template Macros
  • T1137.002: Office Application Startup: Office Test
  • T1137.006: Office Application Startup: Add-ins
  • T1197: BITS Jobs
  • T1505.005: Server Software Component: Terminal Services DLL
  • T1543.003: Create or Modify System Process: Windows Service
  • T1546: Event Triggered Execution
  • T1546.001: Event Triggered Execution: Change Default File Association
  • T1546.002: Event Triggered Execution: Screensaver
  • T1546.003: Event Triggered Execution: Windows Management Instrumentation Event Subscription
  • T1546.007: Event Triggered Execution: Netsh Helper DLL
  • T1546.008: Event Triggered Execution: Accessibility Features
  • T1546.009: Event Triggered Execution: AppCert DLLs
  • T1546.010: Event Triggered Execution: AppInit DLLs
  • T1546.011: Event Triggered Execution: Application Shimming
  • T1546.012: Event Triggered Execution: Image File Execution Options Injection
  • T1546.013: Event Triggered Execution: PowerShell Profile
  • T1546.015: Event Triggered Execution: Component Object Model Hijacking
  • T1547.002: Boot or Logon Autostart Execution: Authentication Packages
  • T1547.003: Boot or Logon Autostart Execution: Time Providers
  • T1547.004: Boot or Logon Autostart Execution: Winlogon Helper DLL
  • T1547.005: Boot or Logon Autostart Execution: Security Support Provider
  • T1547.009: Boot or Logon Autostart Execution: Shortcut Modification
  • T1547.012: Boot or Logon Autostart Execution: Print Processors
  • T1547.014: Boot or Logon Autostart Execution: Active Setup
  • T1553: Subvert Trust Controls
  • T1553.004: Subvert Trust Controls: Install Root Certificate
  • T1556.002: Modify Authentication Process: Password Filter DLL
  • T1574: Hijack Execution Flow
  • T1574.007: Hijack Execution Flow: Path Interception by PATH Environment Variable
  • T1574.009: Hijack Execution Flow: Path Interception by Unquoted Path

References

This tool would not exist without the amazing InfoSec community - the most notable references I used are provided below.

More References



Upload_Bypass - File Upload Restrictions Bypass, By Using Different Bug Bounty Techniques Covered In Hacktricks

By: Zion3R


Upload_Bypass is a powerful tool designed to assist Pentesters and Bug Hunters in testing file upload mechanisms. It leverages various bug bounty techniques to simplify the process of identifying and exploiting vulnerabilities, ensuring thorough assessments of web applications.

  • Simplifies the identification and exploitation of vulnerabilities in file upload mechanisms.
  • Leverages bug bounty techniques to maximize testing effectiveness.
  • Enables thorough assessments of web applications.
  • Provides an intuitive and user-friendly interface.
  • Enhances security assessments and helps protect critical systems.

New PoC Video:

Disclaimer

Please note that the use of Upload_Bypass and any actions taken with it are solely at your own risk. The tool is provided for educational and testing purposes only. The developer of Upload_Bypass is not responsible for any misuse, damage, or illegal activities caused by its usage.

While Upload_Bypass aims to assist Pentesters and Bug Hunters in testing file upload mechanisms, it is essential to obtain proper authorization and adhere to applicable laws and regulations before performing any security assessments. Always ensure that you have the necessary permissions from the relevant stakeholders before conducting any testing activities.

The results and findings obtained from using Upload_Bypass should be communicated responsibly and in accordance with established disclosure processes. It is crucial to respect the privacy and integrity of the tested systems and refrain from causing harm or disruption.

By using Upload_Bypass, you acknowledge that the developer cannot be held liable for any consequences resulting from its use. Use the tool responsibly and ethically to promote the security and integrity of web applications.

Features

  1. Webshell mode: The tool will try to upload a webshell with a random name, and if the user specifies the location of the uploaded file, the tool enters an "interactive shell".
  2. Eicar mode: The tool will try to upload an EICAR file (anti-malware test file) instead of a webshell, and if the user specifies the location of the uploaded file, the tool will check whether the file was uploaded successfully and exists on the system, in order to determine whether anti-malware software is present.
  3. A directory with the name of the tested host will be created in the tool's directory upon success, with the results saved in Excel and text files.

Download:

Download the latest version from the Releases page.

Installation:

pip install -r requirements.txt

Limitations:

The tool will not function properly if the file upload mechanism includes a CAPTCHA implementation.

Perhaps in the future the tool will include OCR support.

Usage:

Attention

The tool is compatible exclusively with request files saved from Burp Suite.

Before saving the Burp file, replace the file content with the string *content*, replace filename.ext with the string *filename*, and replace the Content-Type header value with *mimetype* (only if the tool is not able to recognize it automatically).

How a request should look before the changes:


How it should look after the changes:


If the tool fails to recognize the mime type automatically, you can add *mimetype* in the parameter's value of the Content-Type header.
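
To make the placeholder mechanics concrete, here is a hypothetical Python sketch of the substitution step - the template fragment and helper are illustrative, not the tool's own code:

# A fragment of a saved Burp request with the three markers in place:
template = (
    'Content-Disposition: form-data; name="file"; filename="*filename*"\r\n'
    "Content-Type: *mimetype*\r\n"
    "\r\n"
    "*content*\r\n"
)

def build_upload_body(filename, content, mimetype="image/png"):
    """Fill the markers in for one upload attempt."""
    return (template
            .replace("*filename*", filename)
            .replace("*mimetype*", mimetype)
            .replace("*content*", content))

print(build_upload_body("shell.php.png", "<?php echo 'ok'; ?>"))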

Options: -h, --help

 show this help message and exit

-b BURP_FILE, --burp-file BURP_FILE

 Required - Read from a Burp Suite file
Usage: -b / --burp-file ~/Desktop/output

-s SUCCESS_MESSAGE, --success SUCCESS_MESSAGE

 Required if -f is not set - Provide the success message when a file is uploaded
Usage: -s / --success 'File uploaded successfully.'

-f FAILURE_MESSAGE, --failure FAILURE_MESSAGE

 Required if -s is not set - Provide the failure message shown when a file upload is rejected
Usage: -f / --failure 'File is not allowed!'

-e FILE_EXTENSION, --extension FILE_EXTENSION

 Required - Provide server backend extension
Usage: -e / --extension php (Supported extensions: php,asp,jsp,perl,coldfusion)

-a ALLOWED_EXTENSIONS, --allowed ALLOWED_EXTENSIONS

 Required - Provide allowed extensions to be uploaded
Usage: -a / --allowed jpeg,png,zip

-l WEBSHELL_LOCATION, --location WEBSHELL_LOCATION

  Provide a remote path where the WebShell will be uploaded (won't work if the file will be uploaded with a UUID).
Usage: -l / --location /uploads/

-rl NUMBER, --rate-limit NUMBER

  Set rate-limiting with milliseconds between each request.
Usage: -rl / --rate-limit 700

-p PROXY_NUM, --proxy PROXY_NUM

  Channel the HTTP requests via a proxy client (e.g. Burp Suite).
Usage: -p / --proxy http://127.0.0.1:8080

-S, --ssl

  If set, the tool will not validate TLS/SSL certificates.
Usage: -S / --ssl

-c, --continue

  If set, the brute force will continue even if one of the methods gets a hit!
Usage: -c / --continue

-E, --eicar

  If set, only an EICAR file (anti-malware test file) will be uploaded; webshells will not be uploaded (suitable for real environments).
Usage: -E / --eicar

-v, --verbose

  If set, details about the test will be printed on the screen
Usage: -v / --verbose

-r, --response

  If set, HTTP response will be printed on the screen
Usage: -r / --response

--version

  Print the current version of the tool.     

--update

  Checks for new updates. If there is a new update, it will be downloaded and updated automatically.     

Examples

Running the tool with EICAR and brute-force mode, along with verbose output

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e php -a jpeg --response -v --eicar --continue

Running the tool with Webshell mode along with a verbose output

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e asp -a zip -v

Running the tool with a Proxy client

 python upload_bypass.py -b ~/Desktop/burp_output -s 'file upload successfully!' -e jsp -a png -v --proxy http://127.0.0.1:8080


BackupOperatorToolkit - The BackupOperatorToolkit Contains Different Techniques Allowing You To Escalate From Backup Operator To Domain Admin

By: Zion3R


The BackupOperatorToolkit contains different techniques allowing you to escalate from Backup Operator to Domain Admin.

Usage

The BackupOperatorToolkit (BOT) has four different modes that allow you to escalate from Backup Operator to Domain Admin.
Use "runas.exe /netonly /user:domain.dk\backupoperator powershell.exe" before running the tool.


Service Mode

The SERVICE mode creates a service on the remote host that will be executed when the host is rebooted.
The service is created by modifying the remote registry. This is possible by passing the "REG_OPTION_BACKUP_RESTORE" value to RegOpenKeyExA and RegSetValueExA.
It is not possible to have the service executed immediately, as the service control manager database "SERVICES_ACTIVE_DATABASE" is loaded into memory at boot and can only be modified with local administrator privileges, which the Backup Operator does not have.

.\BackupOperatorToolkit.exe SERVICE \\PATH\To\Service.exe \\TARGET.DOMAIN.DK SERVICENAME DISPLAYNAME DESCRIPTION

DSRM Mode

The DSRM mode will set the DsrmAdminLogonBehavior registry key found in "HKLM\SYSTEM\CURRENTCONTROLSET\CONTROL\LSA" to either 0, 1, or 2.
Setting the value to 0 will only allow the DSRM account to be used when in recovery mode.
Setting the value to 1 will allow the DSRM account to be used when the Directory Services service is stopped and the NTDS is unlocked.
Setting the value to 2 will allow the DSRM account to be used with network authentication such as WinRM.
If the DUMP mode has been used and the DSRM account has been cracked offline, set the value to 2 and log into the Domain Controller with the DSRM account, which will be a local administrator.

.\BackupOperatorToolkit.exe DSRM \\TARGET.DOMAIN.DK 0||1||2

DUMP Mode

The DUMP mode will dump the SAM, SYSTEM, and SECURITY hives to a local path on the remote host or upload the files to a network share.
Once the hives have been dumped, you could PtH with the Domain Controller hash, crack the DSRM account and enable network auth, or possibly authenticate with another account found in the dumps. Accounts from other forests may be stored in these files; I'm not sure why, but this has been observed on engagements with management forests. This mode is inspired by the BackupOperatorToDA project.

.\BackupOperatorToolkit.exe DUMP \\PATH\To\Dump \\TARGET.DOMAIN.DK

IFEO Mode

The IFEO (Image File Execution Options) mode enables you to run an application when a specific process is terminated.
This could grant a shell before the SERVICE mode does, in case the target host is heavily utilized and rarely rebooted.
The executable will be running as a child to the WerFault.exe process.

.\BackupOperatorToolkit.exe IFEO notepad.exe \\Path\To\pwn.exe \\TARGET.DOMAIN.DK






AtomLdr - A DLL Loader With Advanced Evasive Features

By: Zion3R


A DLL Loader With Advanced Evasive Features

Features:

  • CRT library independent.
  • The final DLL file can run the payload by loading the DLL (executing its entry point), or by executing the exported "Atom" function via the command line.
  • DLL unhooking from the \KnownDlls\ directory, with no RWX sections.
  • The encrypted payload is saved in the resource section and retrieved via custom code.
  • AES256-CBC payload encryption using ctaes, a custom implementation with no tables or data-dependent branches; this is one of the best custom AES implementations I've encountered (a sketch of the scheme follows this list).
  • AES key & IV encryption.
  • Indirect syscalls, utilizing HellHall with ROP gadgets (for the unhooking part).
  • Payload injection using APC calls - alertable thread.
  • Payload execution using APC calls - alertable thread.
  • API hashing using two different implementations of the CRC32 string hashing algorithm.
  • The total size is 17 KB plus the payload size (padded to a multiple of 16).
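
For a sense of what the payload preparation involves, here is a minimal Python sketch of AES256-CBC encryption with padding to a multiple of 16 (using the pycryptodome package; PayloadBuilder itself is native code built on ctaes, so this is an illustration of the scheme, not its implementation):

import os
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

key, iv = os.urandom(32), os.urandom(16)  # AES-256 key and CBC IV
payload = b"\x90" * 32                    # placeholder shellcode bytes
ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(pad(payload, AES.block_size))
print(len(ciphertext))                    # padded to a multiple of 16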

How Does The Unhooking Part Work

AtomLdr's unhooking method works as follows.

Unhooking from the \KnownDlls\ directory is not a new method to bypass user-land hooks. However, this loader tries to avoid allocating RWX memory when doing so. That was obligatory in KnownDllUnhook, for example, where RWX permissions were needed to replace the text sections of the hooked modules while still allowing execution of functions within those sections.

This loader changes that: it suspends the running threads in an attempt to block any function from being called from within the targeted text sections, thus eliminating the need to mark them as RWX before unhooking and making RW permissions a possible choice.

This approach, however, created another problem: when unhooking, NtProtectVirtualMemory and other syscalls were using the syscall instruction inside the ntdll.dll module, as an indirect-syscall approach. But, as mentioned above, the unhooked modules are now marked as RW sections, making it impossible to perform indirect syscalls - the syscall instruction we were jumping to can no longer be executed - so we had to jump to another executable location. This is where win32u.dll is used.

win32u.dll contains some GUI-related syscalls, making it suitable to jump to instead of ntdll.dll. win32u.dll is loaded (statically) but not included in the unhooking routine, which ensures that win32u.dll can still execute the syscall instruction we are jumping to.

The suspended threads are then resumed.

It is worth mentioning that this approach may not be that efficient and can be unstable due to the thread-suspension trick used. However, it has been tested against multiple processes with positive results; in the meantime, if you encounter any problems, feel free to open an issue.


Usage

  • PayloadBuilder is compiled and executed with the specified payload; it will output a PayloadConfig.pc file containing the encrypted payload along with its encrypted key and IV.
  • The generated PayloadConfig.pc file will then replace this in the AtomLdr project.
  • Compile the AtomLdr project as x64 Release.
  • To enable debug mode, uncomment this here.

Demo (1)

  • Executing AtomLdr.dll using rundll32.exe, running Havoc payload, and capturing a screenshot

  • AtomLdr.dll's Import Address Table


Demo - Debug Mode(2)

  • Running PayloadBuilder.exe, to encrypt demon[111].bin - a Havoc payload file


  • Running AtomLdr.dll using rundll32.exe


  • Havoc capturing a screenshot, after payload execution


Based on


